For archived HoloGenomics News articles before 2010 click here.
Articles from before Hologenomics (before mid-2008) are listed in the Archives at the bottom of this page.

(Mar 02, 2012) The Creative Destruction of Medicine
(Feb 29) A scalable global business model and a (sub)continent offer support for Pellionisz' Fractal Approach
(Feb 06) Roche’s Illumina Bid May Spur Buying in DNA Test Land Grab
(Jan 31) Genome Informatics Globalized - USA - China - Korea - India
(Jan 14) UNESCO’s Memorial Year Honours János Szentágothai
(Jan 02) Power in Numbers [Eric Lander: math must grab genomics, beneath neuroscience – AJP]
(Jan 01) Paired Ends: Lee Hood, Andras Pellionisz - 2012 a Year of Turing and the Year of Turning
2012
(Dec 27, 2011) The genetic code, 8-dimensional hypercomplex numbers and dyadic shifts
(Dec 10) Biophysicists Discover Four New [Fractal - AJP] Rules of DNA 'Grammar'
(Dec 07) Francis Collins [Head of NIH of the USA in Bangalore, India -AJP]
(Dec 06) Paired Ends: Lee Hood, Andras Pellionisz
(Dec 03) ELOGIC Technologies (Bangalore) to launch Genome Analytics Service (See Binet - Genome)
(Dec 02) DNA Sequencing Caught in Deluge of Data
(Dec 01) The FDA’s confusing stance on companies that interpret genetic tests is terrible for consumers
(Nov 30) ELOGIC Technologies Private Limited Bangalore, India [Pellionisz unites the Silicon Valleys of USA/India - AJP]
(Nov 29) The Search for RNAs
(Nov 28) High-order chromatin architecture shapes the landscape of chromosomal alterations in cancer ["Fractal Defects" as root causes of cancer - AJP]
(Nov 14) Recursive Genome Function of the Cerebellum. Geometrical Unification of Neuroscience and Genomics
(Nov 12) Altered branching patterns of Purkinje cells in mouse model for cortical development disorder
(Oct 31) Geometric Unification of Neuroscience and Genomics
(Oct 27) ...Initially Jobs sought alternatives to surgery
(Sep-Oct) Bio-IT World: Savoring an NGS Software Smorgasbord
(Sep 23) A DNA Tower of Babel
(Sep 10) Sam Waksal, Pfizer Venture Investments, and More: Moderator Looks Forward to All-Star Chat at New York Life Sciences 2031
(Sep 05) Lost In Translation? Andy Grove blasts "Change the System!" in his Anti-Medical School Course at UC Berkeley
(Sep 03) W.M. Keck Foundation awards Jefferson scientists with $1M medical research grant [Rigoutsos - AJP]
(Sep 02) Samsung Launches Genome Analysis Service, Offers Free Genome
(Aug 20) Comment by Andras J. Pellionisz to New York Times "Cancer's Secrets coming into Sharper Focus"
(Aug 17) Everything Scientists Thought They Knew About Cancer Might Be Totally Wrong
(Aug 15) Spit and know your future [This time, for India... AJP]
(Jul 31) Researchers uncover a new method of checking for skin cancer
(Jul 17) A surge of top-quality papers pointing into "methylation-defects" as predicted by FractoGene as culprits for cancer
(Jul 21) How accurate is the new Ion Torrent genome, really? [Gordon Moore sequenced twice - AJP]
(Jul 21) [Former Intel President] Grove Backs an Engineer’s Approach to Medicine
(Jul 20) Loophole found in genetic traffic laws [Experimentally Proven Death Certificate of Crick's "Central Dogma" - AJP]
(Jul 16) Editing the genome - Scientists unveil new tools for rewriting the code of life
(Jul 15) Clue to kids' early aging disease found [The Colossal Paradigm-Shift - AJP]
(Jul 14) Researchers Use Genome Editing Methods to Swap Stop Codons in Living Bacteria
(Jul 09) The Mathematics of DNA [is Fractal - says Dr. Perez]
(Jul 08) Cell Surface as a Fractal: Normal and Cancerous Cervical Cells Demonstrate: Different Fractal Behavior of Surface Adhesion Maps at the Nanoscale
(Jul 08) China genomics institute outpaces the world
(Jul 06) Searching For Fractals May Help Cancer Cell Testing
(Jul 01) A quest for better genetics [from Moscow...]
(Jun 30) Study Suggests Widespread Loss of Epigenetic Regulation in Cancer Genomes
(Jun 27) "So What?" - if you separate Fractal Defects from Structural Variants of Human Diversity? - In Vivo Genome Editing!
(Jun 24) 23andMe-Led Team Reports on Findings from Web-Based Parkinson's GWAS
(Jun 23) Researchers Develop Methylation-Based Model for Predicting Age from Spit DNA
(Jun 20) Goodbye, Genetic Blueprint
(May 30) Cells may stray from 'central dogma'
(May 23) The fractal globule as a model of chromatin architecture in the cell
(May 22) The Principle of Recursive Genome Function: Quantum Biophysical Semeiotics clinical and experimental evidences
(May 22) The Myth of Junk DNA - an issue fallen from science in 2006 to a rejected ideology for the masses to chew on as an Amazon bestseller
(May 16) Eric Schadt Joins Mount Sinai Medical School [Dir. of Institute for Genomics AND Multiscale Biology]
(May 12) Battelle Study: The $796 Bn Economic Impact of the Human Genome Project
(May 08) In an improbable corner of China, young scientists are rewriting the book on genome research [Newsweek]
(May 04) Systems Biology 'Makes Sense of Life' [Once the System is Identified - AJP]
(May 01) Virginia Tech partners with NVIDIA to “Compute the Cure” for Cancer
(Apr 30) Breast cancer prognosis goes high tech [Fractal - AJP]
(Apr 16) FractoGene (2002) and Fractal Frenzy set off by The Principle of Recursive Genome Function, YouTube (2008) [AJP]
(Apr 12) The Structural Struggle - [vs. Fractal Algorithmic Elegance - AJP]
(Apr 11) Cancer center builds Texas-sized cloud [private cloud!]
(Apr 11) Cancer as Defective Fractal Recursive Genome Function (Pellionisz and Lander et al. trigger escalation of fractal approach)
(Apr 07) The Trouble with Genes [Article Gets Prize - Mattick joins leaders to admit that basic premises were all wrong - AJP]
(Mar 27) Eric Schadt Extreme Science [and other kooks - AJP]
(Mar 25) Global Scaling Institute of Germany explores roots of fractals with Euler
(Mar 24) Avesthagen launches Whole Genome Scanning [India blows away FDA -AJP]
(Mar 15) Complementing Private Domain Genome Sequencing Industry - the Birth of Genome Analytics Industry
(Mar 15) Scientists need new metaphor for human genome [or better yet, science for industrialization of the new paradigm - AJP]
(Mar 15) RNA regulation of human development, cognition and disease (Mattick in Dubai)
(Mar 14) Hamdan Bin Rashid to inaugurate HGM 2011 Monday
(Mar 01) DRC Computer Invites Dr. Andras Pellionisz to Advisory Board
(Feb 20) NHGRI Celebrates Tenth Anniversary of Human Genome Sequence [what went wrong - Green]
(Feb 19) Initial impact of the sequencing of the human genome [what went wrong according to Sci. Advisor of the President]
(Feb 14) Primates' Unique Gene Regulation Mechanism: Little-Understood DNA Elements Serve Important Purpose
(Feb 13) Genomes: Know Your Genes ... Fast
(Jan 31) Computing in the Age of the $1,000 Genome
(Jan 26) The State of Science [What was Obama's Sputnik? What should be his Apollo? - AJP]
With 3 Genomics meetings (San Francisco, Mountain View, San Diego) and NOVA Special on Fractal Geometry of Nature the server crashed ... Now migrating ...
(Jan 17) Like Life Itself, Sustainable Development is Fractal
(Jan 13) EU Funds Development of Gene Regulation Software Suite
(Jan 03) Biotech's Biggest Winner [according to Forbes, it is Illumina - AJP]
(Jan 01) The Next $100 Billion Technology Business
2011
(Dec 28, 2010) 23andMe lowers price from $499 to $199 permanently
(Dec 19) Genetic Tests Debate: Is Too Much Info Bad for Your Health?
(Dec 18) Key information about breast cancer risk and development is found in 'junk' DNA
(Dec 09) DIY DNA on a Chip: Introducing the Personal Genome Machine
(Dec 09) Break Out: Pacific Biosciences Team Identifies Asian Origin for Haitian Cholera Bug
(Dec 03) Which Is a Better Buy: Complete Genomics or Pacific Biosciences?
(Nov 27) A Geneticist's Cancer Crusade
(Nov 25) News from 23andMe: a bigger chip, a new subscription model and another discount drive [Grab it NOW - $99 Holiday Sale]
(Nov 17) This Recent IPO Could Soar [as money for "Analytics" makes "Sequencing" sustainable - AJP]
(Nov 16) BGI – China’s Genomics Center Has A Hand in Everything
(Nov 15) Most doctors are behind the learning curve on genetic tests - the $1,000 sequence and $1 M interpretation
(Nov 11) Forget About a Second Genomics Bubble: Complete Genomics Tumbles on IPO First Day
(Nov 11) The Daily Start-Up: Gene Tests Attracting Money & Scrutiny [23andMe C round with J&J]
(Nov 09) NIH Chief Readies the Chopping Block [ASHG in Washington ending on a sour note - AJP]
(Nov 09) Next Generation Sequencing
(Nov 08) Experts Discuss Consumer Views of DTC Genetic Testing at ASHG, Washington
(Nov 07) Complete Genomics plans Tuesday [November 9] initial public offering of stock
(Nov 02) Today, we know all that was completely wrong - Lander's Keynote at ASHG, Washington
(Nov 01) 1,000 Genomes Project Maps 16 Million DNA Variants: Why?
(Oct 29) Parody of Public’s Attitude Toward DTC Genetics
(Oct 27) UPDATE: Pacific Biosciences IPO Rises While First Wind Cuts Price
(Oct 26) UPDATE 1-Pacific Biosciences IPO prices at midpoint-underwriter
(Oct 26) Complete Genomics Sets IPO Price Range [Analytics is the key - AJP]
(Oct 24) IPO Preview: Pacific Biosciences [this week; a huge surge for Fractal Analytics - AJP]
(Oct 19) Benoît B Mandelbrot: the man who made geometry an art [censored - reinstated AJP]
(Oct 18) 'Fractalist' Benoît Mandelbrot dies [Long Live FractoGene ... AJP]
(Oct 16) Benoît Mandelbrot, Novel Mathematician, Dies at 85
(Oct 14) Going 'Beyond the Genome'
(Oct 09) Cold Spring Harbor Lab Says Benefits of ARRA Funding Will Outlast Stimulus Program
(Oct 08) New Research Buildings Open at Cold Spring Harbor Laboratory
(Oct 07) What to Do with All That Data?
(Oct 06) Pacific Biosciences Targeting $15-$17 Share Price for IPO
(Oct 06) The Road to the $1,000 Genome
(Oct 18) Revolution [was] Postponed [for too long, over Half a Century - AJP]
(Oct 01) The $1,000,000 Genome Interpretation
(Sep 27) Mastering Information for Personal Medicine
(Sep 20) Cacao Genome Database Promises Long Term Sustainability
(Sep 18) US clinics quietly embrace whole-genome sequencing
(Sep 17) The Broad's Approach to Genome Sequencing (Part II)
(Sep 10) Pellionisz Principle; "Recursive Genome Function" gets well over a quarter of a Million hits (261,000)
(Sep 08) Victory Day of Recursion over Junk DNA and Central Dogma COMBINED
(Sep 07) Complete Genomics to Sequence 100 Genomes for National Cancer Institute Pediatric Cancer Study
(Sep 01) Junk DNA can rise from the dead and haunt you [Comments]
(Sep 01) Pacific Biosciences Denies Helicos' Infringement Claims
(Aug 24) Will Fractals Revolutionize Physics, Biology and Other Sciences?
(Aug 19) Reanimated ‘Junk’ DNA Is Found to Cause Disease
(Aug 18) Life Technologies inks $725M deal for Ion Torrent
(Aug 16) PacBio files for $200 million IPO
(Aug 15) How Can the US Lead Industrialization of Global Genomics? [AJP]
(Aug 15) Francis Collins: One year at the helm [US gov. over the cliff in Genomics - AJP]
(Aug 15) BGI Americas [and BGI Europe] Offers Sequencing the Chinese Way
(Aug 15) Junk DNA: Does it Hold More than what Appears?
(Aug 12) Biotech is back [in Korea - AJP]
(Aug 12) CLC bio [of Denmark] and PSSC Labs [California] Deliver Turnkey Solution for Full-Genome Data Analysis
(Aug 10) GenomeQuest and SGI Announce Whole-Genome Analysis Architecture
(Aug 10) Pacific Biosciences Expands into European Union
(Aug 09) Pacific Biosciences launches PacBio DevNet at ISMB 2010 - [Partners]
(Aug 08) Illumina Inc. et al. v. Complete Genomics Inc.
(Aug 05) I was wrong ...
(Aug 04) "Recursive Genome Function" - winner takes it all
(Aug 03) Pellionisz' "Recursive Genome Function" supersedes both obsolete axioms of "Central Dogma" AND "Junk DNA"
(Jul 31) Mountain View's Complete Genomics to make Wall Street debut
(Jul 30) SPIEGEL Interview with Craig Venter: 'We Have Learned Nothing from the Genome'
(Jul 28) GenePlanet in Europe makes Genome Testing Global
(Jul 27) Pfizer to Study Liver Cancer in Korean Patients with Samsung Medical Center
(Jul 27) Lee Min-joo donates 3 billion won to genome project
(Jul 27) Working with regulators-the road ahead
(Jul 23) GAO Studies Science Non-Scientifically
(Jul 23) FDA's 'Out-of-The-Box' Plans
(Jul 22) DTC Genome Testing of SNPs “Ready for Prime Time”?
(Jul 17) Message arrived ... "the scientific community had to re-think long held beliefs"
(Jul 15) A Proving Ground for P4
(Jul 15) Ion Torrent, Stealthy Company Tied to Harvard’s George Church, Nabs $23M Venture Deal
(Jul 15) PacBio Nabs $109M to Make Cheaper, Faster Gene Sequencing Tools
(Jul 14) Recursive Genome Function at the crossroads - Charlie Rose Panel on Human Genome Anniversary
(Jul 08) The Sudden Death of Longevity
(Jul 07) 23andMe Letter to Heads of FDA and NIH
(Jul 07) Amazon Sees the Future of Biology in the Cloud
(Jul 06) Calling GWAS Longevity Calls into Question [Gene(s)]
(Jul 04) IBM setting up cloud for genome research
(Jul 02) Scientists Discover the Fountain of Youth! Or Not.
(Jul 01) IBM DNA Decoding Meets Roche for Personalized Medicine
(Jun 30) How to Build a Better DNA Search Engine
(Jun 30) 'Jumping genes' make up roughly half of the human genome
(Jun 27) A coding-independent function of gene and pseudogene mRNAs regulates tumour biology
The Second Decade: Recursive Genome Function

(Jun 26) Business Models for the Coming Decade of Genome-Based Economy - the past and transition
(Jun 26) Business Models for the Coming Decade of Genome-Based Economy - the transition and future
(Jun 25) The Genome and the Economy
(Jun 24) 23andMe Publishes Web-Based GWAS Using Self-Reported Trait Data
(Jun 24) Francis Collins: the extended genome anniversary interview
(Jun 24) The Big Surprise of the First Decade - The Genome Affects You to Prevent Diseases, Before it Cures Diseases
(Jun 23) Sergey Brin’s Search for a Parkinson’s Cure
(Jun 23) Data-Driven Discovery Research at 23andMe
(Jun 22) ACI Personalized Medicine Congress in Silicon Valley postponed from June 23-25 to December 9-10, 2010
(Jun 20) The Genome, 10 Years Later
(Jun 16) The Path to Personalized Medicine
(Jun 15) FDA Cracks Down on DTC Genetic Testing
(Jun 15) FDA Did Not Crack Down on DTC Genetic Testing [AJP]
(Jun 11) Why the FDA Is Cracking Down on Do-It-Yourself Genetic Tests: An Exclusive Q&A
(Jun 11) Breaking: FDA Likely to Require Pre-Market Clearance for DTC Personal Genomics Tests
(Jun 11) The Gutierrez Letters from FDA to DTC Genome Testing Companies
(Jun 11) What Five FDA Letters Mean for the Future of DTC Genetic Testing
(Jun 10) Silicon Valley's Genome-Based Personalized Medicine Meeting Postponed to Dec 9-10
(Jun 09) Would Regulation Kill Genetic Testing?
(Jun 04) Stanford School of Medicine Launches Center for Genomics and Personalized Medicine
(Jun 04) Your Genome Is Coming [to where? - AJP]
(Jun 03) Illumina Drops Personal Genome Sequencing Price to Below $20,000
(Jun 02) The Journal Science Interviews J. Craig Venter About the first "Synthetic Cell"
(Jun 02) Scientist: 'We didn't create life from scratch'
(Jun 01) The Genome Project is 10 Years Old - Where is the Health Care Revolution?
(May 27) Get Your Genotype Tests Now Before Congress Makes Them Illegal
(May 26) Who Should Control Knowledge of your Genome
(May 25) 'Junk' DNA behind cancer growth
(May 24) Transparency First: A Proposal for DTC Genetic Testing Regulation
(May 24) Convey Computer Hails Genomics Search Record
(May 18) CVS Follows Walgreens Down Pathway of Least Resistance
(May 11) Company plans to sell genetic testing kit at drugstores
(May 22) Why The Debate Over Personal Genomics Is a False One
(May 21) Existence Genetics is Pioneering the Field of Predictive Medicine - Nexus Technologies Critical in Understanding and Preventing Deadly Disease
(May 21) Where to next for personal genomics?
(May 20) How Bad Can a House Investigation be for DTC Genomics?
(May 20) Joining The Genomics Revolution Early
(May 20) DTC Genomics Targeted by Congressional Investigation
(May 19) BGI Expands Into Denmark with Plans for $10M Headquarters, Staff of 150
(May 17) Potential of genomic medicine could be lost, say science think-tanks
(May 16) Effects of Alu elements on global nucleosome positioning in the human genome
(May 15) Rapid Rise of Russia
(May 12) Genomics goes beyond DNA sequence
(May 12) Walgreens To Sell Genetic Test Kits For Predisposition To Diseases, Drug Response
(May 11) Bio-informatics Springs Up to Place Genome in Neverland
(May 09) Hood Wins $100k Kistler Prize
(May 06) Crisis in the National Cancer Institute
(May 03) Stanford bioengineer [Quake et al.] explores own genome
(Apr 28) Joint research begins on individual-level mechanisms of gene expression [RIKEN and Complete Genomics]
(Apr 28) James Watson Just Can't Stop Talking at GET
(Apr 27) New Algorithmic Method Helps Elucidate Molecular Causes of Inherited Genetic Diseases
(Apr 26) Affymetrix Launches Axiom Genome-Wide ASI Array For Maximized Coverage of East Asian Populations
(Apr 25) Digitization Slashing Health IT Vendor Dominance
(Apr 24) When Reading DNA Becomes Cheaper Than Storing the Data [Not "Disposable Genome" - AJP]
(Apr 23) 23andMe Special Sale on DNA Day (Apr 23 only) - full service for $99
(Apr 22) Predictive, Participatory, Personalized Prevention (P4) Health Care [Chaired by International HoloGenomics Society Founder, Dr. Pellionisz]
(Apr 21) BioMerieux, Knome Team on Sequencing-Based MDx
(Apr 20) Eric Lander's Secrets of the Genome ["Mr. President, the Genome is Fractal!" - AJP]
(Apr 18) Malaysian Genomics Resource Centre Berhad Launches US$4000 Human Genome Bioinformatics Service
(Apr 15) Barcode app tracks allergies [to be tested with Nestle products]
(Apr 14) Human Genome Mapping’s Payoff Disappoints Scientists
(Apr 13) Big science: The cancer genome challenge
(Apr 12) Francis Collins: DNA May Be A Doctor's Best Friend
(Apr 05) Korean Scientists Discover Asian-Specific CNV Genome Catalog
(Apr 03) Middle East Healthcare News [Asia & Middle East Alliance - AJP]
(Apr 02) Genome Sequencing to Predict, Prevent, Treat Diseases [Samsung in Korea - AJP]
(Mar 31) Life is complicated [but complexity is in the eye of the bewildered; think FractoGene - AJP]
(Mar 31) Human Genome Mapping’s Payoff Disappoints Scientists
(Mar 27) Genome Maps of 10 Koreans Completed
(Mar 19) 'Junk' DNA gets credit for making us who we are
(Mar 18) Can a gene test change your life? [Yes, says Francis Collins, with the example of his life...]
(Mar 15) Why the State of Personal Genomics is Not as Dire as You Think
(Mar 11) "Personal" study shows gene maps can spot disease
(Mar 09) A Vision for Personalized Medicine
(Mar 03) Genome Service Available for Predicting Illness [in Korea - and Asia]
(Mar 01) It will not be a DNA Data-Deluge. Get ready for a Tsunami while the data-level is at a low-ebb
(Feb 28) Doctors ‘lack training in genetics to cope with medical revolution
(Feb 27) Genetic testing may yield personalized health treatments
(Feb 26) Splash Down: Pacific Biosciences Unveils Third-Generation Sequencing Machine
(Feb 25) The Future Has Already Happened - How it might unfold by Complete Genomics and Pacific Biosciences?
(Feb 24) Pacific Biosciences Names First Ten Early Access Sequencer Customers
(Feb 24) Oral Cancer Study Shows Full Tumor Genome; Novel Method Speeds Analysis for Individualized Medicine
(Feb 23) Junk DNA could provide vital clues to heart disease
(Feb 22) Three YouTubes later: Is IT ready for the Dreaded DNA Data Deluge?
(Feb 18) Complete Genomics To Sequence A Million Genomes - CEO
(Feb 15) The end of the deCODEme personal genomics service? [with comment -AJP]
(Feb 08) Art Communicates Better than Science ...
(Feb 06) The Principle of Recursive Genome Function Blogged by a Software Developer
(Feb 03) Procter and Gamble Invests in Navigenics
(Feb 02) Genomic Advances of the 2000s Will Demand an Informatics Revolution in the 2010s
(Jan 31) Fractals and DNA - The Old, the Young and the Ugly
(Jan 28) The Potential Of Personalized Medicine
(Jan 24) Knome Challenged to Keep in Step with Falling Genetic Sequencing Prices
(Jan 23) Google, Microsoft May Help Usher in Personalized Medicine Wave, Says George Church
(Jan 21) Navigenics names Vance Vanier, MD, to serve as President and Chief Executive Officer
(Jan 21) Why Your DNA Isn't Your Destiny
(Jan 20) At Personalized Medicine World Conference 2010 HolGenTech contributes the only proprietary Genome Computing Architecture
(Jan 19) A Preview of A Personal Genome Assistant
(Jan 16) HolGenTech YouTube for Funding Round at PMWC2010
(Jan 07) The Language of Life - Book on Personalized Medicine by Francis Collins
(Jan 06) What Recession in Genomics ??? Triple-Digit Stock Price Increases !!!
(Jan 04) Personalized Medicine World Conference, Silicon Valley, January 19-20

Latest News

The Creative Destruction of Medicine

New York Times; BOOKS

Genomics as a Final Frontier, or Just a Way Station

By ABIGAIL ZUGER, M.D.

Published: February 27, 2012

The medical world is holding its breath, waiting for the revolution. It will be here any minute. Definitely by the end of the decade. Or perhaps it will take a little longer than that, but seriously, it’s right around the corner. More or less.

That’s the genomics revolution, with its promise of treatment focused on the individual rather than the group. At last, patients will be more than the product of their age, sex, ethnicity, illnesses and bad habits; treatments will be aimed like a laser at their personal genetic particulars, and if those genes are not quite what they should be, then those genes will be fixed.

Over the last few years, various breathless visions of this therapeutic future have been written out for public admiration. A particularly readable and comprehensive version can be found in Dr. Eric J. Topol’s new book, “The Creative Destruction of Medicine.”

Dr. Topol, a cardiologist and researcher at the Scripps Research Institute with the energy of 10 (if his prose style and his honor-laden biography are any indication), dispenses in short order with our current population-based medical strategies. They are wasteful and inexact, he points out, often marginally beneficial to the group and downright harmful to the individual.

He presents an array of far better ideas, a few now actually being practiced in rudimentary form. These include pharmacogenomics, in which specific genes that govern responses to medications are routinely assayed, and cancer treatments that probe tumors for specific genetic targets rather than relying on standard chemotherapy.

But that’s not all: Dr. Topol also points out that soon a person’s precise genetic data will be augmented by an extraordinary wealth of other digital data (provided by, say, the continuous monitoring of blood pressure, pulse and mood, and a variety of ultra-precise scans). The outcome will be nothing short of a new “science of individuality,” one that defines individuals “at a more granular and molecular level than ever imaginable.”

Praise:

The Creative Destruction of Medicine by Eric Topol

“A must-read that lays out a road map for how new technologies in genomics, information technology, and mobile medicine may completely change the way we treat and prevent illness. It’s highly recommended, because Topol has a unique vantage point: he’s one of the few researchers to have played an important role in the old, mass-market medicine world and the newer, genetically focused one.”

—Forbes

“Topol demonstrates how the digital revolution can be used to change individual care and prevention, and even the economics of American healthcare. From cell phones that automatically collect medical data, to biosensors, advanced imaging, individualized prescriptions and gene-specific drugs, Topol’s book leads readers through science-fiction-sounding scenarios that may soon be a reality.”

—Salon

“The Creative Destruction of Medicine – an allusion to economist Joseph Schumpeter’s description of ‘creative destruction’ as an engine of business innovation – is a venture capitalist’s delight, describing dozens of medical technologies that show great promise. The book also provides colorful anecdotes about Dr. Topol’s own sampling of these products, as both a doctor and stand-in patient…. [The book’s] most important contributions are in portraying how medical innovation will coalesce to change clinical practice and what the coming changes mean for today’s policy debates…. In Dr. Topol’s vision, innovation that enables real-time diagnosis and personalized treatments is a certainty, though not because reluctant or ‘sclerotic’ doctors accept it or because Washington wills it into being. A seductive technology that works like a dream and improves lives will set off a consumer clamor, whether the new tool is an iPhone 4S or an implantable blood-sugar meter.”

—Wall Street Journal

“Topol does an excellent job of explaining all, and his enthusiasm for the possibilities of what the future holds is infectious. It can only be hoped, as the convergence he so convincingly predicts materializes, that the barriers erected by the gatekeepers of yesterday’s paradigms will be easily dismantled so as not to impede the benefits it promises.”

—Boston Globe

“An eye opening account of why conventional medicine is doomed…. [C]ompelling stuff…. [T]he book provides an excellent summary of the current state of medical genetics and how fast it is progressing, with examples that may surprise even those working in medicine.”

—New Scientist

“Deriving inspiration from the economist Joseph Schumpeter, Topol proposes nothing less than the ‘creative destruction’ of medicine as it is currently practiced, replacing it with a brave new world in which interconnected technologies dramatically improve patient outcomes…. [T]he book is an enjoyable, high-level review of the current state of the field, intended for a general audience but referenced for those inclined to read more deeply. With its rich discussion of science and technology and companies specific to the last couple of years, this book certainly has contemporary relevance.”

—Nature Genetics

“Topol makes the case that the masses of macro-data at our fingertips (literally) will unleash micro-level diagnostic and curative solutions never before imagined or hypothesized. It’s a remarkably bold vision that many experienced physicians will call naïve since it defies conventional wisdom – which is precisely why I think he’s on to something big.”

—Longwoods eLetter


“[A] prescient view of the near future of medicine…. Every patient should read this book in order to understand the rapidly evolving role they play in their own care…. The Creative Destruction of Medicine is a call to action for doctors and patients alike. We must see our world and our job as doctor and patient very differently. In a profession so uncertain of its future, we need precisely the vision and critical dialog offered here…. I suspect that 150 years from now when historians are looking back at the most dramatic flexion point in medicine’s history they’ll reference this book as one of the first to identify the start of medicine’s creative destruction.”

—Bryan Vartabedian, MD, 33 Charts.com


“The digital age opens up the possibility of a new type of medicine in which an individual’s health data are digitized using wearable sensors, smartphone apps and genome information, writes geneticist and cardiologist Eric Topol. With this wealth of data, medical interventions could be tailored to our uniqueness. Topol covers failures in patient information; what might happen if genomics, imaging, sensors and better health information were to converge; and the potential pitfalls of this brave new medical world.”

—Nature

“Modern medicine needs a makeover. Topol, of the Scripps Research Institute, believes the process begins with embracing the digital world. His plan involves genomics, wireless biosensors, advanced imaging of the body, and highly developed health information technology. Smartphones will tie these elements together to make health care individualized, efficient, and accessible. Topol foresees a future medical landscape characterized by virtual house calls, remote monitoring, and a lessened need for hospitals.”

—Booklist


“[E]minently readable.”

—Sharon Begley, Kirkus Reviews


“Topol weaves useful knowledge about how to evaluate the choices open to patients into this exciting account of the revolutionary changes we can expect.”

—Kirkus


“The book makes a compelling case for the role of digital technology in bringing nimbleness to ossified healthcare systems worldwide.”


—Calestous Juma, Harvard Kennedy School


“Our sequencing of the human genome eleven years ago was the beginning of the individualized medicine revolution, a revolution that cannot happen without digitized personal phenotype information. Eric Topol provides a path forward using your digitized genome, remote sensing devices and social networking to place the educated at the center of medicine.”

—J. Craig Venter, Chairman and President, J. Craig Venter Institute

“Eric Topol gives us an eye-opening look at what’s possible in healthcare if people can mobilize to change the status quo. The Creative Destruction of Medicine is simply remarkable.”

—Clayton M. Christensen, Robert and Jane Cizik Professor of Business Administration, Harvard Business School, and author of The Innovator’s Dilemma

“Eric Topol has been a longtime innovator in healthcare. In The Creative Destruction of Medicine, he cites the big waves of innovation that will save healthcare for the future. Real healthcare reform has not yet begun, but it will. The Creative Destruction of Medicine lays out the path.”

—Jeffrey Immelt, Chairman and CEO of General Electric

“This is the one book to read for a complete and clear view of our medical future, as enabled by the convergence of digital, mobile, genomic, and life science breakthroughs. Dr. Topol explains how iPhones, cloud computing, gene sequencing, wireless sensors, modernized clinical trials, internet connectivity, advanced diagnostics, targeted therapies and other science will enable the individualization of medicine—and force overdue radical change in how medicine is delivered, regulated, and reimbursed. This book should be read by patients, doctors, scientists, entrepreneurs, insurers, regulators, digital engineers—anyone who wants better health, lower costs, and participation in this revolution.”

—Brook Byers, Partner, Kleiner Perkins Caufield & Byers

“Eric Topol is that rare physician willing to challenge the orthodoxies of his guild. He recognizes that in the U.S., health care business-as-usual is unsustainable. But he does not despair. He bears witness to the rise of Homodigitus and the promise it holds to upend the inefficiencies and dysfunction so entrenched in clinical medicine. The Creative Destruction of Medicine is a timely tour de force. It is a necessary heresy.”

—Misha Angrist, Assistant Professor, Duke Institute for Genome Sciences & Policy, and author of Here is a Human Being


“Eric Topol provides an excellent and pragmatic view of the U.S. healthcare system from a patient’s perspective. He then offers, through numerous examples, an exciting vision for the future ... when technology can be used to dramatically improve the quality of care and reduce cost at the same time. The Creative Destruction of Medicine is a highly informative and enjoyable book, which truly triggers the reader’s imagination as to what is possible.”

—Omar Ishrak, Chairman and CEO of Medtronic

“Eric Topol has written an extraordinarily important book at just the right moment. Drawing upon a unique and impressive array of convergent expertise in medical research, clinical medicine, consumer and health technological advancements, and health policy, Dr. Topol opens the door for an essential discussion of old challenges viewed through an innovative lens. In the context of increasingly unaffordable health care costs, suboptimal quality of care delivery, a tsunami of preventable chronic illness, and new accountabilities for consumer’s health choices and behaviors, this book helps all of us to think about solutions in new and exciting ways!”

—Reed Tuckson, MD, Executive Vice President and Chief of Medical Affairs, UnitedHealth Group

“It may sound like hyperbole, but it’s true: Medicine is undergoing its biggest revolution since the invention of the germ theory. As Eric Topol writes, thirty years ago, ‘digital medicine’ referred to rectal examinations. Dr. Topol is both a leader of and perfect guide to this brave new health world. His book should be prescribed for doctors and patients alike.”

—A. J. Jacobs, author of My Life as an Experiment and The Year of Living Biblically

“Much of the wealth created over the last decades arose out of a brutal transition from ABCs to digital code. While creating some of the world’s most valuable companies, this process also upended whole industries and even countries. Now medicine, health care, and life sciences are undergoing the same transition. And, again, enormous wealth will be created and destroyed. This book is a road map of what is about to happen.”

—Juan Enriquez, Managing Director, Excel Venture Management, and author of As the Future Catches You

“Eric Topol outlines the creative destruction of medicine that must be led by informed consumers. Smart patients will push the many stakeholders in health to accelerate change as medicine adapts to a new world of information and technology.”

—Mehmet Oz, MD, Professor and Vice-Chair of Surgery, NY Presbyterian/Columbia University

“Health care is poised to be revolutionized by two forces—technology and consumerism—and Dr. Eric Topol explains why. One-size-fits-all medicine will soon be overtaken by highly personalized, customized solutions that are enabled by breakthroughs in genomics and mobile devices and propelled by empowered consumers looking to live longer, healthier lives. Fasten your seat belts and get ready for the ride—and learn what steps you can take to begin to take control of your health.”

—Steve Case, co-founder, AOL, and founder of Revolution LLC

“If we keep practicing medicine as we know it today,healthcare will become an unbearable burden. We are in a real race between healthcare innovation and the resistance to change of the medical system. In a comprehensive and well researched tour de force, Eric Topol, always a clear and uncompromising thought leader of his generation, challenges us to imagine the revolutionary potential of a world where medical information no longer belongs to a few and can be automatically collected from the many to greatly improve healthcare for all. This is a must read!”

—Elias Zerhouni, MD, President, Global R&D, Sanofi, and former director, National Institutes of Health

“The Creative Destruction of Medicine is an engaging look into how the discoveries in genetics and biology will change the landscape of medicine. Along the way, Dr. Topol provides a fascinating compendium of stories about the shortcomings of medicine as it is currently practiced and how the revolutionary discoveries coming since the first sequencing of the human genome a decade ago will shape the delivery of healthcare in the 21st century.”

—William R. Brody, MD, PhD, President, The Salk Institute for Biological Studies

“Dr. Topol believes that medicine, catalyzed by extraordinary innovation that exploits digital information, is about to go through its biggest shakeup in history. His newest book calls for a ‘jailbreak’ from the ideas of the past. In the next phase of medicine, powerful digital tools including mobile sensors and advanced processors will transform our understanding of the individual, enabling creative ‘mash-ups’ of data that will spark entirely new discoveries and spawn ultra-personalized health and fitness solutions. And with over 5.7 billion mobile connections worldwide, the mobile technology platform will have a major impact on that vision—leading to what Dr. Topol describes as nothing less than a ‘reboot’ of the health care system. Qualcomm, and its partners all around the world, are working to bring wireless innovations to market that will contribute to the solution. And we share Dr. Topol’s view that individual consumers have the opportunity, and the power, to increase the pace of the titanic change that’s coming.”

—Paul E. Jacobs, PhD, Chairman and CEO, Qualcomm Incorporated

“What happens when you combine cellular phone technology with the cellular aberrations in disease? Or create a bridge between the digital revolution and the medical revolution? How will minute biological sensors alter the way we treat lethal illnesses, such as heart attacks or cancer? This marvelous book by Eric Topol, a leading cardiologist, gene hunter and medical thinker, answers not just these questions, but many, many more. Topol’s analysis draws us to the very front lines of medicine, and leaves us with a view of a landscape that is both foreign and daunting. He manages to recount this story in simple, lucid language—resulting in an enthralling and important book.”

—Siddhartha Mukherjee, author of The Emperor of All Maladies: A Biography of Cancer

“Eric Topol offers a new and intriguing perspective on how the intersections of medicine and technology could further transform the delivery of healthcare and the role of a patient. He advocates for a future world of medicine where informed consumers are in the driver seat and control their own healthcare based on genomic information and real-time data obtained through nanosensors and wireless technology.”

—John Martin, Chairman and CEO, Gilead Sciences

“What happens when the super-convergence of smart phones further combines with million-fold lower-cost genomics and diverse wearable sensors? The riveting answer leads to a compelling call to activism—not only for medical care providers, but all patients and everyone looking for the next ‘disruptive’ economic revolution. This future is closer than most of us would have imagined before seeing it laid out so clearly. A must-read.”

—George Church, Professor of Genetics, Harvard Medical School

“In an upbeat, comprehensive volume, Dr. Topol has woven the prevailing technological undercurrents of the post-PC world—its power of many; its Gucci of gadgets; its cloud ecosystem; its ‘Arab Spring’ of apps; and its ubiquitous, calm computing—with the disruptive innovations of biomedicine, to create a compelling account of how this bio-digital transformation will hasten personalization of the highest quality of medical care.”

—Eric Silfen, MD, Senior Vice-President and Chief Medical Officer, Philips Healthcare


“Dr. Topol is the top thought leader in medicine today, with exceptional vision for how its future can be rebooted. This book will create and catalyze a movement for the individualization and democratization of medicine—and undoubtedly promote better health care.”

—Greg Lucier, CEO, Life Technologies


“Eric Topol is the perfect author for this book. He has a unique understanding of both genomics and wireless medicine and has a remarkable track record as a charismatic pioneer, visionary, and change agent in medicine. I’m sure this book will reach a very large number of people with information that can both empower and help transform their lives for the better.”

—Dean Ornish, M.D., Founder and President, Preventive Medicine Research Institute, and author of The Spectrum


“Dr. Eric Topol is an extraordinary doctor. He’s started a leading medical school, identified the first genes to underlie development of heart disease, led major medical centers, and been a pioneer of wireless medicine. But he is also a remarkable communicator—one of the few top-flight scientists in medicine to be able to genuinely connect with the public. He was, for example, the first physician researcher to question the safety of Vioxx—and unlike most who raise safety questions, actually succeeded in bringing the concerns to public attention. I have known and admired Dr. Topol for a long time. I recommend him highly.”

—Atul Gawande, M.D., author of The Checklist Manifesto


“Eric Topol is uniquely positioned to write such a timely and important book. He leads two institutions—one in genomics and one in wireless health—that will each play a huge role in transforming medicine in the twenty-first century. From this vantage point, he can see unifying themes that will underlie the coming revolution in population and personal health, and he communicates his vision with vibrant energy. Everyone will want to read this book.”

—James Fowler, Professor of Medical Genetics and Political Science, UC San Diego, and author of Connected


A scalable global business model and a (sub)continent offer support for Pellionisz' Fractal Approach

Ten slides winning over a (sub)continent for funding Pellionisz' Fractal Approach


Roche’s Illumina Bid May Spur Buying in DNA Test Land Grab

January 31, 2012, 12:32 PM EST

Bloomberg

By Robert Langreth, Meg Tirrell and Ryan Flinn

Jan. 26 (Bloomberg) -- Less than 10 years after the first human genome was decoded, Roche Holding AG’s hostile $5.7 billion bid for Illumina Inc. may spark additional deals as companies race to bring DNA scanning into routine medical use.

Illumina competes with Life Technologies Corp., Affymetrix Inc. and other companies to sell gene-decoding machines that are just starting to be used to tailor therapies for patients with cancer and inherited diseases. While scientific excitement around genome sequencing is high, the companies’ shares have plummeted over the last year because their target customers are mostly scientists dependent on grants in a tough economy.

Getting the technology out of the lab and into doctors’ offices and hospitals could vastly expand the existing $1.5 billion market for gene sequencing machines, industry officials and analysts said.

“This is going to be an enormous opportunity, and now you see it unfolding,” said Greg Lucier, chief executive officer of Life Technologies, based in Carlsbad, California, in a telephone interview. The bid by Roche is an acknowledgment that DNA mapping is key to the future of diagnostics, particularly involving its use in cancer treatment, he said.

Illumina today adopted a so-called poison-pill takeover defense in which shareholders will receive one preferred stock purchase right as a dividend for each common share held as of the close of business on Feb. 6.

The San Diego-based company fell 4.5 percent to $52.65 in New York trading. Roche, based in Basel, Switzerland, rose less than 1 percent to 160.20 Swiss francs in Zurich.

‘Unwilling to Participate’

Roche made its hostile offer directly to Illumina shareholders after saying the testing company was “unwilling to participate in substantive discussions,” according to a statement yesterday.

The rights agreement adopted today by Illumina can block a hostile bid by making it prohibitively expensive. Should Roche or another bidder own 15 percent or more of Illumina’s stock, other shareholders will be able to exercise the rights to buy new common stock, diluting the stake of the prospective bidder.
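
[Editor's illustration of the dilution arithmetic, using hypothetical numbers not taken from the article. Assuming every holder except the bidder exercises one right for one new discounted common share, a minimal Python sketch of why the bidder's stake collapses:]

# Hypothetical numbers, for illustration only (not from the article).
shares_outstanding = 100_000_000   # assumed total shares in the float
bidder_shares = 15_000_000         # bidder has just crossed the 15% trigger

# Every holder EXCEPT the bidder exercises one right, buying one new
# (discounted) common share per share already held.
other_shares = shares_outstanding - bidder_shares
total_after = shares_outstanding + other_shares

print(f"bidder stake before: {bidder_shares / shares_outstanding:.1%}")  # 15.0%
print(f"bidder stake after:  {bidder_shares / total_after:.1%}")         # ~8.1%

[The exact dilution depends on the plan's actual exercise ratio and discount, which the article does not spell out.]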

Before initial talk of a takeover attempt surfaced in December, the shares of Illumina -- which draws a third of its revenue from researchers funded by the National Institutes of Health -- had dropped 58 percent over 12 months. Life Technologies and Affymetrix also tumbled, leaving them as vulnerable as Illumina to buyout bids, said Bill Bonello, an analyst with RBC Capital Markets in Minneapolis.

‘Intensify M&A Angle’

“This will intensify the M&A angle people will look at with these stocks,” Ross Muken, an analyst with Deutsche Bank Securities Inc. in New York, said in a telephone interview.

The human genome was first sequenced in 2003. The market for machines that map DNA has been “fast-growing” over the last five years, said Daniel O’Day, chief operating officer of Roche’s diagnostics division, in a conference call yesterday.

“We expect that to continue into the future,” O’Day said. “Today, it is over a $1 billion marketplace, and we expect that to be over a $2 billion marketplace in 2015.”

The devices sold by Illumina, Life Technologies and Affymetrix search through DNA coding that contains the instructions for making all human cells. Scientists use the technology to build an understanding of how variations or mutations found by the machines contribute to disease.

Cancer Variations

This is particularly true in cancer, where variations can contribute to uncontrolled cell growth. Doctors want to use genetic data to aim cancer treatments precisely at these variations, and stop only diseased cells from growing. Genome sequencing is also helping doctors understand, diagnose and, in some cases, treat mysterious childhood diseases that had previously taken years to identify.

The National Human Genome Research Institute has allocated funds to determine how to integrate individuals’ genetic information into day-to-day clinical care issues, such as the appropriate dosing of drugs.

Already, U.S. regulators are working with drugmakers in approving cancer drugs tied to companion genetic tests. Pfizer Inc.’s crizotinib, a treatment for a form of lung cancer caused by a genetic defect, was approved in August along with a companion diagnostic made by a unit of Abbott Laboratories that determines whether a patient has the abnormal gene.

Roche, the world’s biggest developer of cancer medicines, has particular experience with gene-targeted therapies.

Herceptin, Zelboraf

The company sells the breast-cancer drug Herceptin, one of the first cancer medicines aimed only at a subset of patients whose tumors have a particular genetic abnormality. In August, it garnered U.S. regulatory approval for Zelboraf, a melanoma drug that works on patients whose tumors have a certain gene mutation. Roche also sells a companion test with Zelboraf.

While Roche may be a pioneer in its bid for Illumina, it isn’t clear that other drugmakers will seek to acquire similar companies unless they already have a toe in the business, said Les Funtleyder, a portfolio manager with Miller Tabak & Co. in New York, whose fund owns Illumina shares.

“It seems like a bit of a leap for a pharmaceutical company to get into a whole new line of business,” Funtleyder said in a telephone interview. “You don’t need a sequencer to develop a companion diagnostic; you just need the sequence. And you can outsource that.”

Funtleyder cited General Electric Co. and Abbott as companies with existing businesses that may consider a similar acquisition.

‘Best in Class’

With Illumina, Roche is “buying best in class,” said Peter Lawson, an analyst with Mizuho Securities in New York, by telephone. “Illumina’s one of the most interesting companies in this space. They’ve been serial innovators, they’ve been great acquirers of technology and great executors.”

Illumina has been in a race to develop the first machine to be able to parse the building blocks of life in a day, rather than weeks or months. It announced Jan. 10 that it would market such a machine in the second half of this year. Life Technologies said on the same day it had reached the same goal. The current Illumina machines can sequence five human genomes in 10 days, according to the company.

Erik Gordon, a professor at the Ross School of Business at the University of Michigan in Ann Arbor, sees Roche and Illumina as a perfect fit.

“On its own, Illumina will have trouble reaching the broader clinical markets for its devices and will remain dependent on the shaky government-funded markets,” Gordon said in an e-mail. “As part of Roche, it quickly gets through the door at clinics worldwide.”

At the same time, he said, “Roche gets another product to run through its sales channel.”

Harvard Geneticist

George Church, a Harvard Medical School geneticist who has founded and advised numerous companies in the industry, said Roche “probably had their eye on Illumina for a long time and were waiting for the price to come down. They knew it was a valuable company; why not buy it at its lowest point?”

The sequencing technology is moving so fast the Illumina technology may become quickly outmoded, said Craig Venter, who led a private team that sequenced one of the first two human genomes a decade ago and runs the J. Craig Venter Institute in Rockville, Maryland.

“I don’t understand why Roche would do this deal when the technology is changing so rapidly,” he said. “I am puzzled.”

When Venter was racing a government team to scan the first human genome, he needed 300 expensive sequencing machines in 100,000 square feet of lab space, he said. Now researchers can build a world-class facility with just 10 smaller desktop machines, he said.

Four Years Difference

“One of these new machines replaces 100 of our old machines from four years ago,” he said.

Michael Pellini, chief executive officer of Foundation Medicine in Cambridge, Massachusetts, a company that sells a test looking at 200 cancer genes, said the research market for the machines is saturated while the far bigger market of potential routine medical use is just emerging.

“This technology has not crossed over into the clinical world in earnest,” Pellini said in a telephone interview. “That is the big disconnect.” Roche can help bridge the divide with its expertise in diagnostic tests, he said.

Roche’s pursuit of Illumina reflects the growing focus of health-care companies on personalized medicine, said Susan Clark, professor of medicine at the University of New South Wales, whose lab at the Garvan Institute in Sydney uses Illumina equipment to study cancer gene expression.

The challenge is to better match their cancer therapies with the specific patient populations who will benefit most, she said. Faster DNA scanning technologies could help, she said.

“A lot of money has been spent by pharmaceutical companies to try to find designer drugs,” Clark said in an interview. “But with designer drugs, you need to know the population that they will target because they are so expensive.”

--With assistance from Jason Gale in Singapore, Naomi Kresge in Berlin and John Lauerman in Boston. Editors: Reg Gale, Andrew Pollack


[I predicted a "Dreaded DNA Data Deluge" in my Google Tech Talk YouTube in 2008, since Sequencing, with billions of dollars of investment, became one half of the "Industrialization of Genomics" - while the other half, Analytics, went unattended, and the resulting oversupply of sequences proved unsustainable. As a result, all four major "Sequencing Companies" lost much of their valuation (illustrated by the stock-market graphs added to the New York Times article "DNA Sequencing Caught in Deluge of Data", November 30th below). There should be no question that a major wave of mergers/acquisitions of the Roche/Illumina type will occur - but that in itself will not solve the problem. Without the two types of IT (Information Technology and Information Theory), which require the SAMSUNG-type of genome analytics service, the Industrialization of Genomics will remain unbalanced. If/when "Roche" acquires "Illumina", look for their next step: to globalize their solution with IT, preferably in Asia]


Genome Informatics Globalized - USA - China - Korea - India

Francis Collins (in Bangalore)

How much does the US spend on medical research?

Typically, the NIH invests $30 billion every year. But I should say the budget's been about there for almost eight years. We're having difficulties with the fiscal deficit, so medical research may not grow much in the near future. Thoughtful decisions have to be taken on what we want to do.

What strengths do you see in Indian institutions? How do they compare to China?

India's great strength now is its IT and computational capacity. Biology is now more a computational science. To understand diseases like diabetes and cancer, we need computational strategies to sift through vast datasets. India can provide that asset.

India is on right track for biomedical research, says NIH director

Date: 2011-12-05

Bangalore, Dec 05, 2011: Currently on a tour of India to improve collaborations between research institutes in India and the NIH, Dr Francis Collins, director, NIH, USA, spoke about the impact NIH funding has had on research in India and how it intends to expand its presence in India.

National Centre for Biological Sciences (Bangalore)

Francis Collins in Bangalore

--

Andras Pellionisz, Board of Advisors to ELOGIC Technologies in Bangalore, tours Bangalore, Hyderabad, Trivandrum

China excels in Genome Informatics by running at BGI (Shenzhen) the World's largest capacity of Genome Sequencing Machines (all USA-made), and by employing up to 4,000 computer scientists, with an average age of 27, who have access to the World's fastest supercomputers (a hybrid of Intel serial chips and NVIDIA graphics parallel chips, all USA-made). Korea announced that by Sept. 1, 2011 SAMSUNG had started to provide a Global Genome Analytics Service.

India's strength, as Francis Collins pointed out in Bangalore, is to deploy her massive IT and mathematics expertise towards personalized therapy against cancer, combined with clinical trials.

Andras Pellionisz, whose science presentation in Hyderabad is featured above, builds collaborations of the Silicon Valley of California with the "Silicon Valley of India", Bangalore.


UNESCO’s Memorial Year Honours János Szentágothai

[János Szentágothai - AJP]

As decided by the General Assembly of UNESCO, the year 2012 is dedicated to honouring the 100th birthday of János Szentágothai, a former president of the Hungarian Academy of Sciences and a groundbreaker in his field. According to Ambassador Katalin Bogyay, the memorial year provides a unique opportunity to showcase the achievements of Hungarian science and Hungarian culture in general to a wider audience through the legacy of the great Hungarian scientist.

UNESCO has long been taking part in commemorating the anniversaries of historical events and outstanding personalities. The national committees of the organisation may make such proposals each year. Headed by HAS member József Hámori, the Hungarian National Committee initiated a Szentágothai Memorial Year, a proposal supported by the Jury of Experts and the Executive Council and finally accepted by UNESCO's General Assembly.

"According to my plans, an exhibition and conference are going to be organised commemorating the greatness of János Szentágothai in the centre of UNESCO in Paris. Through introducing his life to an international audience, we are to bring the achievements of Hungarian science into the forefront, while also drawing attention to the responsibility of science and to the connection between science and art", Katalin Bogyay said. "As a Christian thinker and music teacher Franz Liszt had been an ambassador of Hungarian culture in his age, so was János Szentágothai committed to both science and art, a true renaissance man of Hungarian intellectual life."

"János Szentágothai was a real founder of a school of thought. We, today's Hungarian brain researchers, are all standing on his shoulders", former student József Hámori said. Member of HAS and President of the Hungarian UNESCO Committee, professor Hámori started his career at the Department of Anatomy of Pécs University in 1955 where János Szentágothai had established a research community whose members not only improved in expertise but also had the chance to enrich the spiritual-cultural aspects of their personalities. "We worked from 9 am. until late evening every day, but were not doing science exclusively", József Hámori recalls. "The art of painting, baroque music was just as often the topic of our discussions as was poetry." However it was his findings in brain research that had earned him world fame. Among these were his results on the functions of the spinal cord, the cerebellum, and the structure of the neocortex, making him a Nobel-prize nominee several times.

Besides being an avid researcher, János Szentágothai had great affection for teaching and always encouraged his students to pass on their knowledge. He considered education as a crucial aspect of science, not only for the sake of the next generation, but also because he believed it's the perfect way for teachers to keep their knowledge up-to-date. Professor Szentágothai authored Functional Anatomy and the Atlas of Human Anatomy with Miklós Réthelyi and Ferenc Kiss respectively.

János Szentágothai was the President of the Hungarian Academy of Sciences between 1977 and 1985. Besides holding several national awards, he was also a member of many international organisations, such as the Royal Society, and the Papal Academy of the Vatican. He was an honorary doctor at several universities, among them the University of Oxford. Professor Szentágothai also played a significant role in the public life of Hungary. He was a representative of science in the Hungarian Parliament until his death in 1994.

[Why was Prof. Szentágothai passed over for Nobel Prize? - AJP]

The simple answer is that the Nobel Committee did not vote him in. By what margin, and why not, is only guesswork, since the minutes of the Committee are classified for 50 years.

As a pupil of the “Szentágothai school” since 1967, my belief is that the reason might be that the “Grand Old Man” thought and acted in terms of “Schools of Science” - not limited to his own. For the Nobel Committee, however, it is much easier to award the Prize to focused efforts, especially since the science Prizes are to be awarded to living persons and divided among at most three people.

Prof. Szentágothai built his school of science to a large degree on the schools of “vestibulo-ocular cerebellar systems” of Drs. Bárány, Hőgyes and Lenhossék - of whom Dr. Bárány had already been awarded the Nobel Prize some time before. In itself, this could not have been a major problem, as both Dr. Kornberg and his son were awarded Prizes. In Szentágothai’s school, as in all schools of excellence, there was more than a single eminent line of research.

There seems to be no doubt that the legacy of the Hungarian schools of science, and of the international schools of science with which Prof. Szentágothai built a solid alliance, made the cerebellar sensory-motor system a pre-eminent focus of his oeuvre - the cerebellum making up as much as a third of the brain in birds, to make them capable of coordinated flight.

By 1967 János Szentágothai was nearly ready with the proofs of the Springer book he co-authored with the British-Australian-American Nobel Laureate Sir John Eccles and the Japanese Masao Ito. The title was “The Cerebellum as a Neuronal Machine”. This splendid accomplishment could have been the very reason for János Szentágothai NOT having been awarded a Nobel. Why? The bestseller book, according to Google Scholar under “Eccles”, was cited 2,117 times - far more than any of Szentágothai’s own publications (the highest is cited 472 times, often in different fields). Thus, it could be difficult to award a Nobel for the authors of the book, divided among the maximum of three allotted recipients who might share the Prize. For one, this would have been the second Nobel for Sir John Eccles - not impossible, given the precedents of double Laureates. Second, the electrophysiology of the cerebellum comprised in the book was the result of a cardinal collaborative effort with Rodolfo Llinás and other colleagues, and similarly the electron microscopy laid out in the book was performed largely by József Hámori. Third, the contribution to the book by the Japanese Masao Ito was his flabbergasting discovery that the sole output of the cerebellum, from the Purkinje neurons, is inhibitory (as opposed to the expected excitatory action). The book on the “Neuronal Machine” simply could not interpret this experimental result. In conclusion, on the very last page of the book (p. 177), the three main authors - altogether, counting key contributors, at least five individuals, over the limit of three for any Nobel - confessed in effect that “we know everything about the neural networks of the cerebellum, except how it works as a neuronal machine”. The Nobel Committee could have wondered about the best stance towards a book whose last sentence predicted, verbatim: “It is essential to be guided by the insights that can be achieved by communication theorists and cyberneticists who have devoted themselves to a detailed study of cerebellar structure and function. We are confident that the enlightened discourse between such theorists on the one hand and neurobiologists on the other will lead to the development of revolutionary hypotheses of the way in which the cerebellum functions as a neuronal machine and it can be predicted that these hypotheses will lead to revolutionary developments in experimental investigation”. Pondering over such a mesmerising conclusion, perhaps they thought that “there are too many authors now, and the conclusion still awaits its time”, and decided to reconsider any award for the function of the cerebellum later.

It is noteworthy that Szentágothai was ahead of his time, further than he could ever have envisioned. Springer is just now proofing its handbook “The Cerebellum and Cerebellar Diseases”, in which a Chapter, “Recursive Genome Function of the Cerebellum: Geometric Unification of Neuroscience and Genomics”, reveals that the mathematical “trick” of the cerebellum (a topic of neuroscience) is also found, in an even more profound manner, in genomics. The function of the cerebellum as a neuronal machine is to turn independent, sensory-type motor intentions (which are mathematically covariant) into physically properly assembled motor-execution components (which are mathematically contravariant). This “invention” by mother Nature is fairly recent in evolution - it goes back only about 400 million years, to when sharks, equipped with the emerging cerebellum, could outswim their less coordinated competitors.
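
In Tensor Network Theory terms, the step from covariant intention to contravariant execution amounts to multiplying by the inverse of the metric tensor of a (generally non-orthogonal) coordinate system. Below is a minimal numerical sketch of that transformation; the two-dimensional basis and the intention vector are invented for illustration and are not taken from any published model.

```python
import numpy as np

# Two non-orthogonal basis vectors (hypothetical "muscle pull" directions).
e1 = np.array([1.0, 0.0])
e2 = np.array([np.cos(np.radians(60)), np.sin(np.radians(60))])
E = np.column_stack([e1, e2])

# An intended movement vector.
v = np.array([0.5, 1.0])

# Covariant components: independent projections of the intention onto each axis.
cov = E.T @ v

# Metric tensor of the coordinate system; its inverse yields the
# contravariant (physically assemblable, motor-execution) components.
g = E.T @ E
contra = np.linalg.solve(g, cov)

# The contravariant components reassemble exactly the intended vector.
assert np.allclose(E @ contra, v)
print("covariant:", cov, "contravariant:", contra)
```

The point of the sketch: in a skewed frame the raw projections (covariant) do not sum back to the intended vector; only after the metric-inverse step do the components execute the intention correctly.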

Szentágothai, who passed away in 1994, could not have known about “RNA interference” (discovered by Fire and Mello four years later, in 1998, and awarded the Nobel Prize in 2006). “The RNA system acting as a genomic cerebellum” (with both contravariant cRNA and covariant ncRNA, their interference comprising the metric of coordinated genome function) is a far more ancient invention by mother Nature, going back about 530 million years; it resulted in the so-called “Cambrian explosion” of evolution by enabling the coordinated genome function of multi-cellular organisms.

Our beloved “Grand Old Man” (Prof. Szentágothai) would be truly excited, since he was the epitome of scientific curiosity and its ultimate reward (understanding) - regardless of the destination of Prizes of any kind, if at all.


János Szentágothai, prof. of Semmelweis University Medical School, Budapest [at right], and András Pellionisz, prof. of New York University Medical School, New York [at left]


Power in Numbers [math must grab genomics, beneath neuroscience – AJP]

[Eric Lander is right, again. In the video above he essentially says: for genomics, the homework is to explain cerebellar brain function by its intrinsic math. This is exactly the strategy that Tensor Network Theory (applied to the Recursive Function of Cerebellar Neural Nets) and its Unification with the Fractal Approach to Genome Function followed. Lander skipped, for a number of reasons, the few decades I invested in the “geometrization of neuroscience” - and zoomed right into the mathematization of the genome (as it is “informatics”). While he serves as a Science Adviser to the President and the US might follow his advice (“Mr. President, the Genome is Fractal!”), because of the inertia of the medical establishment, countries with a smaller legacy but faster GDP growth - mobilizing the mathematics-physics-engineering talent formerly honed to create their own nuclear industries - might even overtake the US in certain segments of the “Industrialization of Genomics”. Fortunately, Dr. Lander is also schooled in economics - thus, along with fellow Harvard Professor, economist Dr. Juan Enriquez (see his decade-old bestseller “As the Future Catches You”), the US can maintain leadership in the “Genome Based Economy”. - AJP]

“I took random classes at Harvard about the brain… To know about the brain, you have to learn about cell biology… to know about cell biology you have to learn about molecular biology… I have to know about genetics!”

“I began to appreciate that the career of mathematics is rather monastic,” Dr. Lander said. “Even though mathematics was beautiful and I loved it, I wasn’t a very good monk.” He craved a more social environment, more interactions.

“I found an old professor of mine and said, ‘What can I do that makes some use of my talents?’ ” He ended up at Harvard Business School, teaching managerial economics.

He had never studied the subject, he confesses, but taught himself as he went along. “I learned it faster than the students did,” Dr. Lander said.

Yet at 23, he was growing restless, craving something more challenging. Managerial economics, he recalled, “wasn’t deep enough.”

He spoke to his brother, Arthur, a neurobiologist, who sent him mathematical models of how the cerebellum worked. ...


Paired Ends, Lee Hood, Andras Pellionisz - 2012 a Year of Turing and Year of Turning

January 1st, 2012

[Genomeweb - AJP]

Genomeweb paired us, prompted by our respective press releases, signaling the general trend of new global alliances that bridge the USA to Asia. However, as 2012 begins - the Centenary Year of Alan Turing (born in 1912) - some deeper analysis may be warranted. This Centenary year closes a decade of fierce struggle: it started with the passing of Ohno (of the “Junk DNA” misnomer, 1972) in 2000 and of Crick (of the “Central Dogma”, 1956) in 2004, continued with ENCODE's belated admission in 2007 that “Junk DNA is anything but”, and culminated in the Principle of Recursive Genome Function (paper and YouTube in 2008; Unification of Neuroscience and Genomics in print, 2012).

This year may become another turning point in the mathematical theory of genome function, with great advances in deciphering cancer; see the potential of Full DNA Sequencing (and Analytics) in cancer diagnosis and personalized therapy, e.g. in this YouTube by Matthew Ellis (Washington University in St. Louis).

A decade ago (2002), both Dr. Hood and I went on record with the notion that “Biology is an Informational Science”; see Lee Hood and Andras Pellionisz.

Leroy Hood (2002), in his Commemorative Lecture accepting the Kyoto Prize in Advanced Technology, announced that he was writing a book on “The Living Code: Biology As An Informational Science”, taking Bertalanffy's “General Systems Theory” (1956, Dover, New York) as his intellectual foundation.

Andras Pellionisz (2002), for better or worse, specifically pointed out the mathematics intrinsic to genomes with his FractoGene (2002). The “Fractal Approach to DNA” - after the magical 7 years of silencing - is now boosted by the group of Eric Lander (a Science Advisor to the President), which featured the fractal folding of DNA on the cover of Science magazine (Oct. 2009), based on the decades-old pioneering of Alexander Grosberg. Further, a Boston group of four (MIT/Broad/Harvard/Dana-Farber), with Leonid Mirny on both the 2009 paper and the new one, found in 2011 that a “fractal defect” (in their case a Copy Number Variation, CNV) blocked the see-through transparency of the 3D Hilbert curve, becoming a root cause of cancer.

My personal FractoGene (interlaced with Tensor Network Theory) is a lineage from relativistic tensor geometry, brought into synthesis with the quantum-theory-inspired Schroedinger school of thought (Schroedinger, “What is Life?”, 1944). I was intellectually imprinted for life by the Hungarian edition of John von Neumann's “The Computer and the Brain” (1958), where on the last page von Neumann posthumously made a visionary statement. (Von Neumann was tragically short-lived, 1903-1957; quite possibly exposed to nuclear radiation while participating in the Manhattan Project, he died of bone cancer.) His statement was that whatever intrinsic mathematics Nature uses in biological systems (notably in brain function), it is certainly not the mathematics we know - the mathematics he used to create computers.

Perhaps the most revealing article is by Alan Turing, born a century ago, who published a seminal paper well over half a century ago (1952, in Phil. Trans. R. Soc. Lond. B), before the Double Helix was even discovered:

“The Chemical Basis of Morphogenesis”. Its Chapter 13 was entitled “Non-linear theory. Use of digital computers”. Aptly, he wrote: “Most of an organism, most of the time, is developing from one pattern into another, rather than from homogeneity into a pattern. One would like to be able to follow this more general process mathematically also. The difficulties are, however, that one cannot hope to have any very embracing theory of such process, beyond the statement of the equations. It might be possible, however, to treat a few particular cases in detail with the aid of a digital computer”.

Neither Ludwig von Bertalanffy (1956) nor John von Neumann (1958) had time enough to specify the math intrinsic to coordinated genome regulation - in spite of the early pioneering of Schroedinger (1944), who predicted in his essay “What is Life?” that heredity is carried, through hydrogen bonds, over an aperiodic crystal (later found to be the DNA Double Helix, 1953); of Norbert Wiener's Cybernetics (1948); and of Alan Turing (1952), who, having deciphered war-time cryptography, departed for biology.

In view of the above, it would be grossly unfair to credit a single decade (2002-2012) - despite all the turf-protecting, below-the-belt strikes by the few mathematically unseasoned - for the now prevailing notion that the sine qua non of progress is software enabling the intrinsic algorithms of genome informatics.

I was lucky enough, in his late years, to know Edward Teller as a personal friend. Teller was Heisenberg's Ph.D. student, with relativistic and quantum physics in his weaponry - and I was also inspired by Benoit Mandelbrot's fractals; Mandelbrot was a postdoctoral fellow under John von Neumann.

Is it an accident that the lineage of concepts of genome informatics runs clearly parallel to the war-time efforts built on Relativity (published in final form in 1916) and quantum theory - efforts that made both the peaceful (Leo Szilard) and the strategic applications of the nuclear industry possible (Heisenberg on one side; Turing, Teller, von Neumann and Wiener on the other)?

Those war-time efforts drained the best minds to lay down an exact (“bullet-proof”) intellectual infrastructure, as if developing an armor capable of deflecting all flak, friendly or not. Mathematicians, physicists and nuclear engineers, having served their respective countries, then moved in droves from physics to biology.

In 1971, Nixon declared his “War on Cancer”. Now, those whose expertise leaves no question that the war against this disease of the genome (with Genomics = Informatics) can only be won with mathematicians, information theorists and, most of all, computers deployed to bolster the infantry of traditional medicine, will make sure that the “New War on Cancer”, forty years after the first, has the needed weaponry developed and ready to use.


The genetic code, 8-dimensional hypercomplex numbers and dyadic shifts

[Excerpts]

Sergey V. Petoukhov

Head of the Laboratory of Biomechanical Systems, Mechanical Engineering Research Institute of the Russian Academy of Sciences, Moscow

spetoukhov@gmail.com, petoukhov@imash.ru,

http://symmetry.hu/isabm/petoukhov.html, http://petoukhov.com/

[Quaternion fractal from the literature. Complex, hypercomplex - or just mathematically lucid? http://www.fractalforums.com/3d-fractal-generation/true-3d-mandlebrot-type-fractal/315/ - AJP]

Abstract: Matrix forms of the representation of the multi-level system of molecular-genetic alphabets have revealed algebraic properties of this system. Families of genetic (4×4)- and (8×8)-matrices show unexpected connections of the genetic system with Rademacher and Walsh functions and with Hadamard matrices, which are well known in the theory of noise-immunity coding and digital communication. Dyadic-shift decompositions of such genetic matrices lead to sets of sparse matrices. Each of these sets is closed under multiplication and defines a relevant algebra of hypercomplex numbers. It is shown that genetic Hadamard matrices are identical to matrix representations of Hamilton quaternions and their complexification in the case of unit coordinates. The diversity of known dialects of the genetic code is analyzed from the viewpoint of the genetic algebras. An algebraic analogy with Punnett squares for inherited traits is shown. Our results are discussed taking into account the important role of dyadic shifts, Hadamard matrices, fourth roots of unity, Hamilton quaternions and other hypercomplex numbers in mathematics, informatics, physics, etc. These results testify that living matter possesses a profound algebraic essence. They show new promising ways to develop algebraic biology.

1. Introduction

Science has led to a new understanding of life itself: «Life is a partnership between genes and mathematics» [Stewart, 1999]. But what kind of mathematics is a partner with the genetic code? Trying to find such mathematics, we have turned to studying the multi-level system of interrelated molecular-genetic alphabets. On this way we were surprised to find connections of this genetic system with well-known formalisms of the engineering theory of noise-immunity coding: Kronecker products of matrices; orthogonal systems of Rademacher and Walsh functions; Hadamard matrices; a group of dyadic shifts; hypercomplex number systems, etc. This article is devoted to some of our results from this study of the phenomenological system of interrelated genetic alphabets….

Genetic information is transferred by means of discrete elements. The general theory of signal processing utilizes the encoding of discrete signals by means of special mathematical matrices and spectral representations of signals to increase the reliability and efficiency of information transfer... A typical example of such matrices is the family of Hadamard matrices. The rows of Hadamard matrices form an orthogonal system of Walsh functions, which is used for the spectral representation and transfer of discrete signals… The investigation of structural analogies between digital informatics and genetic informatics is one of the important tasks of modern science in connection with the development of DNA computers and bioinformatics. The author investigates the molecular structures of the system of genetic alphabets by means of the matrix methods of discrete signal processing [Petoukhov, 2001, 2005a,b, 2008a-c; Petoukhov, He, 2010, etc.].
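
For readers unfamiliar with these tools, here is a minimal sketch (ours, not Petoukhov's own construction) of Sylvester's Kronecker-product build-up of Hadamard matrices, whose rows are unnormalized Walsh functions:

```python
import numpy as np

def hadamard(n):
    """Sylvester construction: H(2m) = H(2) ⊗ H(m), entries +1/-1."""
    H = np.array([[1]])
    H2 = np.array([[1, 1],
                   [1, -1]])
    while H.shape[0] < n:
        H = np.kron(H2, H)   # each Kronecker product doubles the order
    return H

H8 = hadamard(8)

# Rows are (unnormalized) Walsh functions: mutually orthogonal,
# so H8 @ H8.T equals 8 times the identity matrix.
assert np.array_equal(H8 @ H8.T, 8 * np.eye(8, dtype=int))
print(H8)
```

The same Kronecker-product mechanism is how the article's genetic (4×4)- and (8×8)-matrices inherit their structure from a small alphabet matrix.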

The article describes the author's results on the relations of matrix forms of representation of the system of genetic alphabets with special systems of 8-dimensional hypercomplex numbers (which differ from Cayley's octonions). The discovery of these relationships is significant from several viewpoints. For example, it is interesting because systems of 8-dimensional hypercomplex numbers (first of all, Cayley's octonions and split-octonions) are among the key objects of the mathematical natural sciences today. They relate to a number of exceptional structures in mathematics, among them the exceptional Lie groups; they have applications in many fields such as string theory, special relativity, supersymmetric quantum mechanics, quantum logic, etc. … The term “octet” is also used frequently in the phenomenological laws of science: the Eightfold Way of M. Gell-Mann and Y. Ne'eman (1964) in physics; the octet rule in chemistry… In view of these facts one can think that genetic systems of 8-dimensional numbers will become one of the interesting parts of the mathematical natural sciences.

In addition, hypercomplex numbers are widely used in digital signal processing… Formalisms of multi-dimensional vector spaces are among the basic formalisms in digital communication technologies, systems of artificial intelligence, pattern recognition, the training of robots, the detection of errors in the transmission of information, etc.

The revealed genetic types of hypercomplex numbers can be useful for answering many questions of bioinformatics and for developing new kinds of genetic algorithms. Hadamard matrices and orthogonal systems of Walsh functions are among the most used tools for the error-correcting coding of information, and for many other applications in digital signal processing…. As noted in the article [Seberry, et al., 2005], many tens of thousands of works are devoted to diverse applications of Hadamard matrices in signal processing. Our discovery of the relations of the system of genetic alphabets with special systems of 4-dimensional and 8-dimensional hypercomplex numbers and with special Hadamard matrices helps to establish the kind of mathematics which is a partner of the molecular-genetic system. Hypercomplex numbers - first of all, Hamilton quaternions and their complexification (biquaternions) - are widely applied in theoretical physics. The author shows that matrix genetics reveals a special connection of the system of genetic alphabets with Hamilton quaternions and their complexification. These results give new promising ways to build a bridge between theoretical physics and mathematical biology. They can be considered a new step toward knowledge of the mathematical unity of nature….

As Hamilton quaternions describe the properties of three-dimensional physical space, the discovery of the connection of the genetic system with Hamilton quaternions draws our attention to some fundamental questions. One is the question of innate spatial representations in humans and animals [Russell, 1956; Petoukhov, 1981]. Another is the question of developing physical theories in which the concept of space is not primary, but derived from more fundamental concepts of special mathematical systems [Kulakov, 2004; Vladimirov, 1998]. The described effectiveness of the algorithm of hidden parameters allows one to think of systems of hidden parameters as a possible basis for the further development of these theories.

Molecular biology and bioinformatics have their own problems where Hamilton quaternions and their complexification can be used. For example, approaches are known for the algorithmic construction of fractal patterns in biological structures, including fractals in genetic molecules (see [Pellionisz et al., 2011; web]). The development of geometric algorithms for such approaches needs those geometrical operations inside physical 3D space which are connected with the molecular-genetic system and which can be used as the basis of these geometric algorithms. Hamilton quaternions and their complexifications, which are connected with the system of genetic alphabets and which correspond to the geometric properties of our physical space, seem to be promising candidates for this purpose.

Our article shows that Hamilton quaternions and their complexifications are now connected not only with theoretical physics but also with molecular genetics and bioinformatics. The discovery of the relation between the system of molecular-genetic alphabets and Hamilton quaternions, together with their complexification, provides a bridge between theoretical physics and biology for their mutual enrichment. It can be considered a next step toward discovering the mathematical unity of nature.

[Physicists pick up where Schroedinger left Genome Informatics in 1944, in his essay "What is Life?" - AJP]


Biophysicists Discover Four New [Fractal - AJP] Rules of DNA 'Grammar'

MIT Technology Review, December 10, 2011

... Today, Michel Yamagishi at the Applied Bioinformatics Laboratory in Brazil and Roberto Herai at Unicamp in São Paulo say they've discovered several new patterns that significantly broaden the grammar of DNA.

Their approach is straightforward. These guys use set theory to show that Chargaff's existing rules imply the existence of other, higher-order patterns.

Here's how. One way to think about the patterns in DNA is to divide up a DNA sequence into words of specific length, k. Chargaff's rules apply to words where k=1, in other words, to single nucleotides.

But what of words with k=2 (eg AA, AC, AG, AT and so on) or k=3 (AAA, AAG, AAC, AAT and so on)? Biochemists call these words oligonucleotides. Set theory implies that the entire sets of these k-words must also obey certain fractal-like patterns.

Yamagishi and Herai distil them into four equations.
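
To make the bookkeeping behind these k-word rules concrete, here is a toy sketch (ours, using an invented sequence; the four equations themselves are in the paper). It counts k-word frequencies and compares each word with its reverse complement, the generalization of Chargaff's intra-strand parity rule:

```python
from collections import Counter

COMP = str.maketrans("ACGT", "TGCA")

def kmer_counts(seq, k):
    """Frequencies of all overlapping k-words in one strand."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def parity_check(seq, k):
    """Pair each k-word's count with its reverse complement's count.

    Chargaff's second parity rule (k = 1) generalizes: within one strand,
    the counts of a word and of its reverse complement are nearly equal."""
    counts = kmer_counts(seq, k)
    return {w: (n, counts.get(w[::-1].translate(COMP), 0))
            for w, n in counts.items()}

# Toy input; real tests need whole genomes, where the rule holds closely.
print(parity_check("ACGTTGCAACGTGGCCAA", 2))
```

On whole genomes the paired counts tend to agree closely for small k; a strong violation in newly sequenced data would be the checksum-style warning sign described just below.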

Of course, it's only possible to see these patterns in huge DNA datasets. Sure enough, Yamagishi and Herai have number-crunched the DNA sequences of 32 species looking for these new fractal patterns. And they've found them.

They say the patterns show up with great precision in 30 of these species, including humans, E. coli and the plant Arabidopsis. Only the human immunodeficiency virus (HIV) and Xylella fastidiosa 9a5c, a bug that attacks peaches, do not conform.

"These new rules show for the first time that oligonucleotide frequencies do have invariant properties across a large set of genomes," they say.

That could turn out to be extremely useful for assessing the performance of new technologies for sequencing entire genomes at high speed.

One problem with these techniques is knowing how accurately they work. Yamagishi and Herai suggest that a simple test would be to check whether the newly sequenced genomes contain these invariant patterns. If not, then that's a sign the technology may be introducing some kind of bias.

This is a bit like a checksum test for spotting accidental errors in blocks of data and a neat piece of science to boot.

Ref: arxiv.org/abs/1112.1528: Chargaff's "Grammar of Biology": New Fractal-like Rules

Chargaff's "Grammar of Biology": New Fractal-like Rules

Michel Eduardo Beleza Yamagishi, Roberto H. Herai

(Submitted on 7 Dec 2011)

Chargaff once said that "I saw before me in dark contours the beginning of a grammar of Biology". In linguistics, "grammar" is the set of natural language rules, but we do not know for sure what Chargaff meant by a "grammar" of Biology. Nevertheless, assuming the metaphor, Chargaff himself started a "grammar of Biology" by discovering the so-called Chargaff's rules. In this work, we further develop his grammar. Using new concepts, we were able to discover new genomic rules that seem to be invariant across a large set of organisms and show a fractal-like property: no matter the scale, the same pattern is observed (self-similarity). We hope that these new invariant genomic rules may be used in different contexts, from short-read data bias detection to genome assembly quality assessment.

http://arxiv.org/ftp/arxiv/papers/1112/1112.1528.pdf


[We are fast approaching the phase when everyone will say "Of course, the Genome is Fractal! Why should it be an exception to everything else in Nature?" The invited Springer Handbook Chapter "Recursive Genome Function of the Cerebellum: Unification of Neuroscience and Genomics" (In Press) reviews that looking at the genome as a kind of "language" (with a grammar) goes back a couple of decades - but could not break through, in part because full DNA sequences were not available, and also because the double barrier of the "Central Dogma" and the "Junk DNA" misnomer blocked progress for over half a Century. With The Principle of Recursive Genome Function, recursive fractal iteration was established as the fundamental algorithm of genome function; in the "Unification" publication, not only was the Genomic-Epigenomic (HoloGenomic) system put into a coherent mathematical framework, but the "RNA System Serves as the Genomic Cerebellum" concept was also put forward. The genome is not only fractal: "Coordinated Genome Function" uses the dual (covariant and contravariant) representations of protein production. This entry can be discussed on the FaceBook page of Andras Pellionisz]


Francis Collins [Head of NIH of the USA in Bangalore, India - AJP]

This US scientist is the director of the National Institutes of Health, the country's leading health research establishment. He's in Bangalore to meet top Indian scientists on Saturday and Sunday, and shares his outlook on life in this exclusive interview.

What strengths do you see in Indian institutions? How are they compared to China?

India's great strength now is its IT and computational capacity. Biology is now more a computational science. To understand diseases like diabetes and cancer, we need computational strategies to sift through vast datasets. India can provide that asset.

China has strengths too - a long history of genomics research, and it was part of the human genome project earlier than India. India is catching up. But we need the collaboration of all countries on health, because everyone everywhere in the world has health problems. As Louis Pasteur said, science belongs to no one country.

Tell us more about your Bangalore visit...

It's been very exciting. I've been to the National Centre for Biological Sciences, St John's Medical Research College and IISc. At St John's, I visited the hospital and clinical facilities to get a grasp of the research there.

At NCBS, I spoke to students and faculty. The experience was wonderful. The students are bright and inquisitive and we discussed issues ranging from neuroscience to genomics. Clearly, Bangalore is a city that has a thoughtful generation of youngsters out to prove themselves in science. I see a spark in the young people here.

What collaborations are you looking at with institutions here?

We have a collaboration on cancer with India. India is concentrating on mouth cancer as that occurs at a higher frequency here. Then, there's diabetes - a worldwide concern. At St John's, we talked about diabetes and cardiovascular research and the implications for heart disease arising out of diabetes. We're also looking at vaccines, HIV, vision and brain research and technologies we need to develop to tackle these and other chronic diseases.


Paired Ends: Lee Hood, Andras Pellionisz

Genomeweb, In Sequence

December 06, 2011

a "leader in the field of genome informatics." Pellionisz holds PhD degrees in computer engineering...

People In The News

GenomeWeb Daily News - Dec 2, 2011

Elogic Technologies has named Andras Pellionisz to serve on its advisory board. An expert in genome informatics who established the International ...

People In The News

GenomeWeb Daily News - 1 day ago

PrimeraDx this week announced that it has appointed Leroy Hood to its scientific advisory board. Hood has helped found several research and commercial ...

[Genomeweb elected to disclose full contents only to subscribers or registered readers. This entry can be discussed on the FaceBook page of Andras Pellionisz]


ELOGIC Technologies (Bangalore) to launch Genome Analytics Service (See Binet - Genome)

ELOGIC Technologies is proud to present, for the very first time in India, IT Infrastructure Services for Next Generation Sequencing (NGS) Analysis and Management in a Cloud Environment, through its BINET (Biological Internet) Division.

ET partners with leading IT MNCs as a preferred alliance partner in India, to cater to the Genome Informatics needs of a burgeoning global clientele. ET's partnership with these MNCs goes back a decade of providing a range of services and products.

ET is foraying into the Life Sciences sector as a whole and will at present pursue Genome Informatics as its focus area.

A forerunner in the arena of Genome Informatics and Data Analysis for NGS on the Cloud, ET is building a team comprising a Board of Advisors, experienced Genomics experts, biology scientists, mathematicians and IT professionals, to provide an array of services in this field as a one-stop shop for its customers. ET aims to provide services across the entire Life Sciences sector, which implies developing software products and algorithms for NGS Analysis.

TD Prakash, Managing Director, ET, says: “ET understands the Genome Informatics landscape in its entirety, and the interdisciplinary nature of the Biological Sciences, which have now occupied the foreground with the emergence of new areas of bio-research such as Genomics, Proteomics and Systems Biology.

We envision value in building Life Science services with cutting-edge technologies and will forge collaborations with global players in the NGS area, with the objective of serving the global Life Science industry and related Academia.

In order to ensure that we provide services with complete knowledge and understanding of the field, extra proficiency and authority, ET has engaged Dr. Andras Pellionisz, a global leader in Genome Informatics from the USA, to be a part of our Board of Advisors, and we welcome him with much delight on board our advisory team”.

Dr. Pellionisz is the President of HolGenTech, Inc. in Silicon Valley, California, USA, a thought leader in Fractal Genomics and Hologenomics, and will be undertaking the role of guiding force for ET in this field. He satisfied himself of ET’s domain knowledge, capabilities, business model and growth plans prior to joining our Board of Advisors. For more information on our collaboration, please read here the Press Release issued in California, USA on 30th November, 2011.

Equipped with the very best of high-end IT know-how and technical expertise, Bangalore is emerging as the epicentre of Biotech - a hub for Genomics and Genome Informatics - and is therefore in a unique position to deliver quality services and products in the field of Genome Informatics to a world-wide audience.

ET is poised to deliver its services to the market by the first quarter of 2012 and will, as a first step, execute pilot projects with Academia. ET will offer IT infrastructure services to begin with - Compute Power and Storage - followed by NGS Data Analysis Services.

Headquartered in Bangalore, ET has an office in Mumbai and operations in Dubai, United Arab Emirates, and the UK as well. Its immediate expansion plans include establishing offices in Chennai and Delhi in India, followed by an office in the Silicon Valley of the USA.

With its knowledge of Genomics and its strategic location, ET has at its disposal the means and wherewithal to achieve Data Analysis of Next Generation Sequencing on the Cloud.


DNA Sequencing Caught in Deluge of Data

New York Times, By ANDREW POLLACK

Published: November 30, 2011

[Added by Pellionisz: In 2008 I warned against this problem in my YouTube. Today, 3 years and 12,744 views later, the NYT describes precisely what was predicted and warned against. The lessons are not analyzed by the general media - see my proof of the presently unsustainable path of the Genome Sequencing Industry without the rapid matching of a Genome Analytics Industry (stock charts appended). Genome Analytics will emerge (in the USA or elsewhere) once it is able to tap the vast market of Global Consumers and Global Health Care - this entry can be discussed on the FaceBook page of Andras Pellionisz]

BGI, based in China, is the world’s largest genomics research institute, with 167 DNA sequencers producing the equivalent of 2,000 human genomes a day.

BGI churns out so much data that it often cannot transmit its results to clients or collaborators over the Internet or other communications lines because that would take weeks. Instead, it sends computer disks containing the data, via FedEx....
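
Some back-of-the-envelope arithmetic shows why disks beat the wire at this scale (a sketch with assumed figures - roughly 100 GB of raw reads per human genome and a 100 Mbit/s institutional link; neither number is from the article):

```python
# All figures below are assumptions for illustration, not from the NYT article.
genomes_per_day = 2000
bytes_per_genome = 100e9                           # ~100 GB of raw reads per genome
daily_output = genomes_per_day * bytes_per_genome  # 200 TB per day

link_bps = 100e6                              # a 100 Mbit/s line
seconds = daily_output * 8 / link_bps         # bytes -> bits, divide by line rate
print(f"{seconds / 86400:.0f} days to push one day of output")  # ~185 days
```

At that ratio the backlog grows faster than it could ever be transmitted, which is exactly the weeks-by-wire, overnight-by-FedEx situation the article describes.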

The field of genomics is caught in a data deluge. DNA sequencing is becoming faster and cheaper at a pace far outstripping Moore’s law, which describes the rate at which computing gets faster and cheaper.

The result is that the ability to determine DNA sequences is starting to outrun the ability of researchers to store, transmit and especially to analyze the data.

“Data handling is now the bottleneck,” said David Haussler, director of the center for biomolecular science and engineering at the University of California, Santa Cruz. “It costs more to analyze a genome than to sequence a genome.”
...

That could delay the day when DNA sequencing is routinely used in medicine. In only a year or two, the cost of determining a person’s complete DNA blueprint is expected to fall below $1,000. But that long-awaited threshold excludes the cost of making sense of that data, which is becoming a bigger part of the total cost as sequencing costs themselves decline.
...

But the data challenges are also creating opportunities. There is demand for people trained in bioinformatics, the convergence of biology and computing. Numerous bioinformatics companies, like SoftGenetics, DNAStar, DNAnexus and NextBio, have sprung up to offer software and services to help analyze the data. EMC, a maker of data storage equipment, has found life sciences a fertile market for products that handle large amounts of information. BGI is starting a journal, GigaScience, to publish data-heavy life science papers.

“We believe the field of bioinformatics for genetic analysis will be one of the biggest areas of disruptive innovation in life science tools over the next few years,” Isaac Ro, an analyst at Goldman Sachs, wrote in a recent report.
...

There will probably be 30,000 human genomes sequenced by the end of this year, up from a handful a few years ago, according to the journal Nature. And that number will rise to millions in a few years.

In a few cases, human genomes are being sequenced to help diagnose mysterious rare diseases and treat patients. But most are being sequenced as part of studies. The federally financed Cancer Genome Atlas, for instance, is sequencing the genomes of thousands of tumors and of healthy tissue from the same people, looking for genetic causes of cancer.

One near victim of the data explosion has been a federal online archive of raw sequencing data. The amount stored has more than tripled just since the beginning of the year, reaching 300 trillion DNA bases and taking up nearly 700 trillion bytes of computer memory.
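
The article's two figures imply a telling ratio (simple arithmetic, sketched below): the archive stores roughly 2.3 bytes per base, whereas the bases alone would fit in 2 bits each - the overhead being quality scores, read metadata and redundancy.

```python
bases = 300e12          # 300 trillion DNA bases (from the article)
stored = 700e12         # ~700 trillion bytes stored (from the article)

print(stored / bases)           # ~2.33 bytes stored per base
print(bases * 2 / 8 / 1e12)     # 2-bit-packed bases alone: ~75 TB, roughly 9x less
```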

Straining under the load and facing budget constraints, federal officials talked earlier this year about shutting the archive, to the dismay of researchers. It will remain open, but certain big sequencing projects will now have to pay to store their data there.....

“In the life sciences, anyone can produce so much data, and it’s happening in thousands of different labs throughout the world,” he said.

Moreover, DNA is just part of the story. To truly understand biology, researchers are gathering data on the RNA, proteins and chemicals in cells. That data can be even more voluminous than data on genes. And those different types of data have to be integrated. [See an entirely new approach to the "RNA System Acting as a Genomic Cerebellum" In Press - AJP]

...

Researchers are increasingly turning to cloud computing so they do not have to buy so many of their own computers and disk drives.

Google might help as well.

“Google has enough capacity to do all of genomics in a day,” said Dr. Schatz of Cold Spring Harbor, who is trying to apply Google’s techniques to genomics data. Prodded by Senator Charles E. Schumer, Democrat of New York, Google is exploring cooperation with Cold Spring Harbor.

Google’s venture capital arm recently invested in DNAnexus, a bioinformatics company. DNAnexus and Google plan to host their own copy of the federal sequence archive that had once looked as if it might be closed.

...

[Stock charts of the dramatic drop in valuation over the last six months of the four leading DNA Sequencing companies. (Illumina and Life dropped less, since they are NOT "pure play sequencers".) This evidence of the presently unsustainable path of Genome Sequencing - without the rapid ramp-up in the USA, or by SAMSUNG (announced Sept. 1, 2011) and/or other global players - has long been directly communicated, and non-standard solutions are emerging. These additions to the NYT description of the mesmerizing symptoms can be discussed on the FaceBook page of Andras Pellionisz]


The FDA’s confusing stance on companies that interpret genetic tests is terrible for consumers.

Tell Me What’s in My Genome!

SLATE - By Christopher Mims|Posted Friday, Nov. 25, 2011, at 12:20 AM ET

Right now, for about the same price as a conventional medical test that reveals just a handful of genes, you could learn the entire contents of your genome. Sure, it’s a "research" scan, which means it will contain mistakes, and your insurance won’t cover the $4,000-$5,000 bill. But it won't be more than a few years before a complete and virtually error-free version of your genome will be within financial reach. Wouldn’t you like to unlock your complete instruction set, with all the medical and ancestry data it contains?

Enticing as that may be, it won’t be easy to get those keys if the FDA has its way. Last summer, the agency indicated that it wants to classify the work of any company that helps you decipher your genome as a medical test that must be regulated accordingly. But over the last year, the agency’s lack of continued communication has left companies that would interpret genetic information—which are simply offering information—confused as to where they stand. This lack of clarity and direction could ultimately mean ceding leadership in this field to overseas competitors who are not similarly constrained.

The FDA has indicated through its public statements that it will put regulatory barriers in the path of companies that want to help us interpret genomes. In June 2010 the agency sent a series of letters to providers like 23andMe, warning them that they were selling what amounted to medical tests that were not vetted by the FDA, and so were in violation of the law. The FDA’s letter to consumer genetics testing company 23andMe is a good example of the tack the agency is now taking. “23andMe has never submitted information on the analytical or clinical validity of its tests to FDA for clearance or approval. ... Consumers may make medical decisions in reliance on this [genetic] information [provided by 23andMe].”

Since then, the FDA has continued to send out letters of a similar tone—23 in total, all to different companies—but has offered no other guidance to providers of direct-to-consumer genetic tests, leaving these companies, and their investors, in the dark about the ultimate direction of regulation in this area. Frustrated by the delay, in recent months many of these companies have made their responses to the FDA public on their websites, in part to protest the climate of ongoing regulatory uncertainty that the agency’s actions have created. Others have pre-emptively eliminated medically significant interpretations from their tests, even if the genes they return still contain that information.

Rather than protect consumers, the FDA’s move has left the genetic information industry in limbo—and it seems a matter of time before it moves overseas. Can’t get your full genome scan interpreted by software hosted on servers in the United States, owned by a U.S. company? Within a decade, a company in a country not subject to our laws will almost certainly be happy to accommodate you. That’s if you don’t take the do-it-yourself route first, plumbing your genome with free and open-source software linked to Wikipedia-style databases maintained by volunteers (which, because they aren’t sold, aren’t subject to FDA regulation).

It’s difficult, if not impossible, to find legal or medical scholars in the United States who are against patient access to full genome sequences. So where does the FDA’s reticence come from? In part, it’s the long shadow of “genetic exceptionalism”—the idea that “genetic information is inherently unique and should be treated differently in law than other forms of personal or medical information,” as Alan Dow, vice president and legal counsel at Complete Genomics, put it.

Other Western governments, too, have fallen into the genetic exceptionalism trap. In 1997, the European Union’s member states even signed a treaty, the Convention on Human Rights and Biomedicine, which mandates that all signatories apply the precautionary principle when handling biomedical advances like genetic sequencing. This means it’s incumbent upon the advocates of these technologies to prove they won’t do any harm. So, for example, Germany has instituted a law so broad that it basically prevents anyone from getting her own genes sequenced without a doctor’s permission. If the genome-interpreting industry is forced by regulatory limbo to seek shelter outside the United States, we may see developing countries like India compete to fill the market gap.

Based on how we've (mis)used other medical technologies, it’s understandable that governmental bodies are at least a little concerned about the advent of whole-genome sequencing. For example, full-body MRI scans have fed into hypochondria-type fears by flagging benign abnormalities that then have to be further examined. Wouldn't a full-genome scan, with the many disease-contributing genes it turns up, do the same? And won't patients who discover, for example, an elevated chance of an incurable disease have their quality of life adversely affected? We'll get to the details later, but the short answer is no.

Genetic data have to be interpreted in a way that the public might not be accustomed to. But it is elitism of the highest order to imagine that most of us are simpletons who can’t grasp the concept that a gene might contribute to a disease condition, but in no way guarantees it. The fear is that every new study associating a gene with a particular disorder will send patients running to their doctors to ask whether they should be worried. But that seems to be a short-term concern: Most patients will understand the reality after their first (or maybe second) panicked trip to the doctor. The physician will tell them that these studies are always preliminary and that even if they’re borne out by subsequent research, the vast majority of these genes have only a marginal effect on our health.

Studies suggest that even patients who find out they have an elevated risk for a disease with a strong genetic component but no cure—like Alzheimer’s—handle the news quite well. In light of this, it seems like the worst-case scenario for a full genome scan is that a patient might be inspired to actually talk to their doctor about their health. If having genes that suggest an elevated chance of heart disease inspires someone to at least be conscientious about their other risk factors for the disease, great! Preliminary research suggests that results of genetic tests change consumers’ intention to do something about their health, if not their actual behavior. (Consumers’ options about what to do with this information often come down to common lifestyle changes like diet and exercise, which are difficult to get patients to implement under any circumstances.)

The only thing worse than the paternalism keeping genetic data and its implications from consumers is the failure of imagination this represents, in terms of the potential upside of the coming genomic revolution. The more full-genome scans we have, and the cheaper they become, the more useful information patients will have. Widespread genotyping will help us understand our own ancestry, but perhaps more importantly will lead to a new kind of engagement with our health and biology. For this new technology to transform American health—and to cultivate a new, high-tech, high-promise industry within the United States—the FDA needs to provide clarity and guidance. The alternative is that the FDA becomes something like the recording industry at the dawn of the MP3 age: a body trying to lock down immaterial assets that consumers are going to get their hands on, one way or another.

[No Comment - Andras Pellionisz]


ELOGIC Technologies Private Limited Bangalore, India [Pellionisz unites Silicon Valley-s of USA/India - AJP]

PRWeb, November 30, 2011

Bangalore, India - Genome informatics leader Dr. Andras Pellionisz joins the Board of Advisors of the global IT Company to break into the Industrialization of Genomics

ELOGIC Technologies Private Limited is a Company Certified for ISO 9001:2008 by BVQI and ISO 27001:2005 for Security by BSI for years now and is in the process of obtaining an ISO: 50001 Certification for Energy Management System. In the business of Secure Information Management and IT Infrastructure Services Delivery, the Company aims at maintaining an uncompromising level of integrity and character to serve its customers, partners, employees and community, building a network of trust along the way. The company is now foraying into Genome Informatics and Next Gen Seq Data Analysis on the public and private clouds.

ELOGIC Technologies announces that Dr. Andras Pellionisz has accepted an invitation to join the Advisory Board of ELOGIC Technologies. Dr. Pellionisz is an internationally renowned leader in the field of genome informatics, specializing in the geometric unification of neuroscience and genomics. Founder and President of Silicon Valley-based HolGenTech, Inc. in California, he exemplifies the model of Andy Grove, senior adviser to Intel, by putting innovation within a goal-oriented corporate structure. In his major paper in a Springer Handbook (In Press, accepted Nov. 1st, 2011) he pairs breakthrough algorithmic development with a blueprint for industrial deployment, at a crucial time when the Human Genome Project has already impacted the economy by $796 Bn.

“We are delighted to welcome Dr. Andras Pellionisz into our Advisory Board. He brings a wealth of leading edge understanding and global contacts in genome informatics that will be invaluable to ELOGIC Technologies as we edge into this major emerging market,” says TD Prakash, Managing Director and Chairman of the Board of ELOGIC Technologies. "This is the formative decision in the long-term co-operation between USA-Silicon valley HolGenTech, Inc. and ELOGIC Technologies from Bangalore, Silicon valley of India, in the emerging field of genome analytics”, both Dr Pellionisz and Mr. TD Prakash concurred.

“Dr. Pellionisz will be our guiding force for foraying into Data Analysis of Next Gen Sequencing and Management, which we are offering to global clientele by early 2012”, added MV Ramanujam, VP of BINET Division, ELOGIC Technologies.

As a domain expert in Genome Informatics, Dr. Pellionisz is a cross-disciplinary scientist and technologist. With Ph.D.s in Computer Engineering, Biology and Physics, he has 45 years of experience in the informatics of neural and genomic systems, spanning Academia, Government and Silicon Valley industry. He played a leading role in the shift from artificial intelligence to neural nets, including the establishment of the International Neural Network Society. In 2005, he combined the interdisciplinary communities of Genomics and Information Technology when he established the International HoloGenomics Society (IHGS).

Based on sound genome informatics, his work sets forth new mathematical principles for proceeding with full exploration of the whole genome. Dr. Pellionisz’ fractal approach to genome analysis is corroborated by recently published findings about fractal folding of DNA structure by Presidential Science Adviser Eric Lander.

“I am very pleased to join the ELOGIC Technologies Advisory Board. I am convinced that they have the foundation essential for edging into genome informatics. As one who served the “Internet Boom” as Chief Software Architect of several Silicon Valley companies, I saw - and publicly expressed already in 2008 - two major differences in the coming, much larger boom of the Industrialization of Genomics. First, while Internet companies could charge ahead without scientific innovation, as packet-switching technology is both man-made and utterly simple, the industrialization of genomics (like the nuclear industry) would be not only naive but utterly dangerous without science leadership. Second, while the Internet Boom was essentially centered on the Silicon Valley of California, the Genome Boom is already global. I not only realize the cardinal importance of an alliance of the Silicon Valley of California with Bangalore, the Silicon Valley of India, but will enjoy building a spectacular success based on a global alliance,” says Dr. Pellionisz.

In 1973 Dr. Pellionisz was awarded a Stanford Postdoctoral Fellowship; subsequently he served as Research Professor of Biophysics at New York University Medical Center, and later at NASA Ames Research Center as a Senior Research Associate of the National Academy. From 1994, he served as Chief Software Architect to several Silicon Valley companies.

About ELOGIC Technologies

ELOGIC Technologies is an IT Enabled Services Company serving a clientele of major multi- national and Government organisations seeking collaboration in the areas of Genome Informatics & Life Sciences, E-Security Services, Productivity Enhancement Solutions, Banking Solutions, Engineering Services and Professional Services Consultancy.

Established in 2002, the company aims to provide high-quality products and services, to achieve customer satisfaction, and to be an innovative company in leading-edge technologies.

Contact

MV RAMANUJAM
ELOGIC Technologies Pvt Ltd
BANGALORE, INDIA
Phone : 0091 80 41210 892
Email : mvramanujam(at)elogic(dot)co.in
URL : http://www.elogic.co.in

[Dr. Pellionisz, as Founder and President of HolGenTech, Inc. in Silicon Valley, California, with his new Board of Advisers role at ELOGIC Technologies of India's Silicon Valley in Bangalore, builds a global alliance. Dr. Pellionisz also serves on the Board of Advisers of DRCcomputer - a hybrid serial/parallel processing hardware integrator of California's Silicon Valley. While the terms of the emerging global alliance are not disclosed, in view of the Boston team just having provided "proof of concept" (see the full preview and Nature article two items below in this column, and popular coverage here) for the long-held thesis of Dr. Pellionisz that "fractal defects are root causes of a slew of hereditary syndromes, most notably of cancer" (see at min. 30 of his Google Tech Talk YouTube, 2008), Dr. Pellionisz' major peer-reviewed paper In Press provides hints of a global agenda. This entry can be discussed on the FaceBook page of Andras Pellionisz]


The Search for RNAs

Genomeweb
November 2011

By Christie Rizk

The creation of new drugs is a vital part of the health-care system. Researchers in academia and in industry are always searching for new ways to combat disease — more efficiently, with fewer toxicities, and less chance of rejection. Until recently, most medicines have been limited to the classic formulation of a small molecule targeting a protein to disrupt its function. But there are many targets and formulations that have yet to be fully exploited, particularly those involving RNAs.

Small interfering RNAs and microRNAs can be used both as targets for drugs and as compounds in drug formulations to disrupt the function of certain genes. By taking advantage of RNA interference, miRNAs and siRNAs can bind to specific messenger RNAs and either increase or decrease their expression to affect how much or how little a given target gene functions. "There are really quite a lot of different methods and cellular pathways that are exploited. In some of them the field is quite old, so you might consider antisense technology a form of RNA therapeutics," says the University of Massachusetts Medical School's Phillip Zamore, who co-directs the school's RNA Therapeutics Institute. "There are people who are engineering different kinds of cellular RNAs to alter splicing or to degrade messages. Most of those involve re-engineering longer RNAs, and probably can't be delivered as drugs."

But the real promise shown by RNA in the therapeutics field comes from small RNAs, he adds. "The new small RNA therapeutics are really the first time that RNA has shown some promise as a drug," Zamore says. "The secret is that they're small, they're generally double-stranded, and they need relatively little — although they need some — chemical modification to make them stable. The real advance has been the discovery of chemical formulations that allow them to be retained in the body instead of filtered out by the kidneys and delivered to cells. And those drugs, which generally take the form of siRNA, are now being tested in early-stage clinical trials." [Sirna Therapeutics, a tiny start-up formed to deploy small interfering RNAs, was acquired by Merck 5 years ago for $1.1 Billion - just to be shut down this summer. Why? See comment... AJP]

Most miRNA-based drugs use RNAi pathways to bind argonaute proteins — the protein complexes responsible for RNAi — and, together, the combination of siRNA or miRNA and argonaute protein bind the target complementary mRNA and destroy it. "My lab has for the last 12 years studied the basic mechanisms of siRNAs and microRNAs," Zamore says. "We work in a variety of organisms including flies and mice, and out of our basic research efforts, we've been able to understand new ways in which one can use siRNAs to target genes that differ by as little as a single nucleotide. And in that case it would be a question of targeting a mutant versus a normal gene."

Treating disease

Zamore's primary focus is Huntington's disease, which has been shown to be caused by a mutated gene with an extended CAG repeat. "There has not been much success by any lab targeting the extended repeat," Zamore says. "But as we showed a couple of years ago, there's a polymorphism — a neutral base change — that is commonly associated with the disease gene. And because it creates a single nucleotide difference between the disease gene and the wild-type gene, one can target that and not the normal gene." Zamore's work on Huntington's disease is still in the preclinical stages, but he's hopeful that it will eventually make it to clinical trials.
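
[To make the single-nucleotide discrimination concrete: a guide strand designed to be perfectly complementary to the disease allele carries one mismatch against the wild-type transcript. The minimal Python sketch below, with invented 21-nt sequences and a hypothetical polymorphism position, illustrates the principle only - it is not Zamore's actual design procedure:]

COMPLEMENT = str.maketrans("ACGU", "UGCA")

def reverse_complement(rna):
    # Reverse complement of an RNA sequence (here, deriving the siRNA guide).
    return rna.translate(COMPLEMENT)[::-1]

def mismatches(guide, target_mrna):
    # Count positions where the guide fails to base-pair with the target mRNA.
    return sum(a != b for a, b in zip(reverse_complement(guide), target_mrna))

# Hypothetical 21-nt mRNA windows differing at one neutral polymorphism.
disease_mrna  = "AUGGCAGCAGCAGCAGCAGUA"   # disease-linked allele (...U..)
wildtype_mrna = "AUGGCAGCAGCAGCAGCAGCA"   # wild-type allele      (...C..)

guide = reverse_complement(disease_mrna)  # fully complementary to the disease allele

print(mismatches(guide, disease_mrna))    # 0 -> disease transcript is silenced
print(mismatches(guide, wildtype_mrna))   # 1 -> wild-type transcript is (ideally) spared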

The list of diseases that are potentially treatable with RNA drugs is long. "There are certainly clinical trials using siRNAs for cancer, and the kinds of targets that people are interested in are molecules, proteins, that normally would be considered non-druggable," Zamore says. "The pharmaceutical industry has a very short list of the types of proteins that they suspect will be amenable to inhibition by classical small-molecule drugs. Any gene that doesn't fall on that list — whose expression or over-expression contributes to a disease — would be a good candidate for RNA interference. So basically, if you can reduce the expression of the protein and do some good, it's a good target for RNAi."

Merck is one of the companies moving into RNA therapeutics. The company acquired San Francisco-based biotechnology company Sirna Therapeutics — which specialized in the development of siRNA drugs — in a 2006 deal worth $1.1 billion, because it believed RNA could "open up a whole new class of medications to treat patients with unmet medical needs," says Jeremy Caldwell, head of Merck's RNA therapeutics division. Caldwell says Merck is working to apply RNA drugs to the treatment of cancer as well as cardiovascular and respiratory diseases. He adds that he's "cautiously optimistic" that the company will be starting clinical trials on some therapeutics in the near future. "RNA is really going to be the modality that takes advantage of all the high-throughput sequencing and genomic information that identify potential drug targets, many of which are not druggable with classic approaches like small molecules," Caldwell says. "Classic small molecule targets are receptors and enzymes because they have a hydrophobic pocket that the small molecule can insert itself into and block the activity of the target. Biologics are similar, but only work on cell surface. What siRNA will be able to do is address both small molecule targets and biologic targets, but also targets such as adaptor proteins that regulate a catalytic intracellular event, for example." He says there are practically no diseases that cannot be targeted by RNA therapeutics.

There are some problems that must be resolved, however. For one thing, most RNA therapeutics are currently limited by their routes of administration — such as intravenous or subcutaneous approaches, Caldwell says. And since there are already plenty of drugs that can be administered orally, it's unlikely they would be replaced by comparable RNA therapeutics unless those, too, could be administered by mouth or were a significant improvement upon the standard treatments.

In addition, researchers are struggling with how to deliver these drugs to their intended targets once they're administered. Many companies developing RNA drugs, including Merck, are taking a route through the liver, which is generally quite efficient, but limits the number of diseases that can be treated with these drugs.

Special delivery

Silence Therapeutics, which specializes in the delivery of targeted RNAi therapeutics, aims to solve this problem with its new delivery system called AtuPlex — a lipid delivery technology that targets the vascular endothelium of different organs. "For the whole field, the biggest hurdle is the delivery. There are different ways to use RNAis or even antisense molecules, but I think in the last three or four years, most companies shifted to formulations — they realized you need delivery technologies for the nucleic acids," says Jörg Kaufmann, Silence's vice president of research. "Our delivery technology, AtuPlex, is a liposomal formulation complexed with nucleic formulations, and the difference is that our formulation targets the vascular endothelium of different organs, including the tumor vasculature." While some companies target the tumor cells themselves, Silence's method allows the therapeutic to directly enter the tumor, and to alter the vasculature of target organs in order to prevent secondary metastasis, Kaufmann adds.

The company's most advanced RNAi therapeutic, a compound called Atu027, has been shown to prevent metastasis to the lungs by modulating the organs' blood vessels, Kaufmann says. "Basically, if the cancer cells are in a solid tumor, at one point they will go into the bloodstream to start growing in different organs. Our system delivers the nucleic acids to the endothelium or the vasculature," he says. Then, the system works to change it enough to prevent cancer cells from taking hold and metastasizing. This approach differs from that of Silence's competitors — companies like Alnylam, which UMass's Zamore co-founded, or Tekmira — which generally target the liver, liver metastases or cancer in the liver, whereas Silence is attempting to target metastases throughout the body, Kaufmann says.

Currently, the company is at the end of Phase I testing on Atu027 and will start Phase II trials by the end of this year or in early 2012. Phase III trials are likely six to 10 years away, Kaufmann says, but he is optimistic about the drug's chances of making it to market. So far, the company is still escalating the dose, and is aiming to show Atu027 can be used as a monotherapy in Phase II testing. "After that, we will combine it, and I personally envision that it will be used in combination with chemotherapy or maybe as an adjuvant therapy," Kaufmann says. "After the chemotherapy has taken care of the primary tumor, this would prevent metastasis."

RNA in the crosshairs

At the University of Michigan, chemistry and biophysics professor Hashim Al-Hashimi is taking a different approach: instead of using RNAi to create drugs, he's creating ways to target the RNAs themselves. "Antibiotics that bind RNA in the ribosome are really the only example of a bona fide drug that we have on the market that we know functions by binding RNA," says Al-Hashimi, who co-founded a company that specializes in RNA targeting, called Nymirum. "There may be more drugs out there that function by binding RNA that we don't know about, but the drugs that are currently known to bind RNA and have an effect are very few, and the ones that are known are the antibiotics, and those compounds tend to be positively charged. That presents a problem in general, for various reasons — they can be toxic, they can be difficult to take up by cells — but certainly these are compounds that demonstrate the proof of principle that one should be able to target RNA."

However, part of the challenge in finding compounds that bind RNAs is developing assays or technologies that will allow researchers to measure the exact effect of a compound on its target. "Most small molecule drugs target proteins and take advantage of the fact that many proteins are enzymatic," Al-Hashimi says. "When a molecule is enzymatic, you can have an enzymatic assay to read the effect of a drug, so you can screen to assess the inhibitory activity of small molecules by simply asking, 'How well does this enzyme do what it's supposed to do in the absence or presence of a small molecule?'" The challenge with RNA, however, is that the majority of RNA targets are not enzymatic, so there isn't an easy way to create a high-throughput assay to measure a compound's effect on the target RNA.

There are methods that involve tagging RNA with modified compounds like fluorescent tags, Al-Hashimi says. "But because RNA is very fickle and flexible — a very delicate structure — having these large tags attached to it can cause problems in terms of perturbing the RNA," he adds. "Also, with these techniques that rely on tagging the RNA, often what happens is that the molecules that you would like to test have features or optical properties that make them ill-suited to these types of experiments because they happen to absorb light at the same wavelengths as the tag does. So there's quite a bit of limitation as to the types of molecules you can test with these types of approaches."

RNA, camera, action

To address this challenge, Al-Hashimi and his group have developed a new technology to test molecules and see if they will bind RNA. In a perfect world, this would be done with a computer program, Al-Hashimi says, but he notes that most computer programs assume that RNA is a rigid molecule when it is a mobile, flexible structure that's "wriggling and dancing around, and assumes many different shapes." Instead of taking static images of RNA and then asking a computer program to predict which compounds will bind to it, Al-Hashimi records videos of RNA to capture the structure's fluctuations.

"What we do is we take different frames from our movie, highlighting the lock in different shapes, and then ... we test keys," Al-Hashimi says. "We test them not just against one frame, but against all of the different frames we have, and that gives us more shots on target. So we will not, for example, miss the key that can specifically bind to an unusual shape of the lock. With this new technology we can find these keys, and screen them more effectively."

Using NMR spectroscopy coupled with computational techniques that can predict which agents will bind RNA, Al-Hashimi can visualize RNA in motion and screen for existing compounds that could target the RNA to treat disease. He's already successfully identified one compound — netilmicin — which inhibits HIV replication, and continues to screen existing compound libraries to see whether any of the molecules there could be used to target RNAs. "We know a lot about proteins — we know a lot about what molecules they like to bind — so we have a history that we've accumulated over many, many years," Al-Hashimi says. "With RNA, it's just an open field — we really don't know the kind of keys that RNA is going to like, and it might be keys that we have never synthesized. ... The advantage of this computational approach is that we can test molecules that don't even exist, and see if there's a class of molecules that we ought to spend some effort making, because they could be the next generation of small molecules targeting RNA."

Too soon to tell?

Of course, as with the development of any drug, there are questions as to how the body will react, whether there will be toxic side effects, and whether there is potential for the disease to become resistant to the treatment. Al-Hashimi says that it's still too early to tell whether drugs targeting RNA will create adverse reactions or treatment resistance, but says there is evidence indicating that RNA-targeting compounds may escape resistance more effectively than traditional drugs. "Because of the sheer amount of RNA that one has, there are more goals to shoot at," he says. "So the chance that you have one RNA that has more favorable properties might be quite large, simply because of the potential different ways you could attack a disease through targeting RNA."

And, he adds, the potential for beating resistance with combination therapies is high with an RNA targeting approach. "I think the sheer number of targets that are out there and the different strategies one can go about inhibiting a given disease, you can imagine cocktail strategies where you have drugs that bind not one but multiple elements and that could probably really help with the resistance issue, because you're hitting the disease from so many different ends. It's hard for a mutant to occur that can defeat all of them simultaneously," Al-Hashimi says.

Suppressing toxicities would be a matter of making sure the compound has "exquisite selectivity" for its target, so that it affects one specific target RNA and not similar RNAs as well, he adds.

When it comes to RNAi, Merck's Caldwell says that, although it's likely some diseases will evolve resistance to certain RNA therapeutics, the advantage of RNAi drugs is that it's easier to identify potential resistance mutations in the pre-clinical testing stages and be ready with a backup against the mutated version of the disease.

Overall, researchers say that there is a great deal of potential in RNA therapeutics. "The connection of RNA to diseases is literally unfolding as we speak," Al-Hashimi says. "We really have many decades to go to figure out RNAs and develop the technology needed, but now that the interest is there, we'll definitely be able to learn more and figure out how things work."

[Everyone agrees that the RNA system is most likely the clue to "Coordinated Genome Function" and thus has an enormous potential. Presently, as documented by Merck having shut down its $1.1 Bn Sirna wing, the field experiences several difficulties: 1) RNA drugs will be difficult to get approved by the FDA. 2) RNA drugs are difficult to deliver to the genome. 3) It is not at all well understood how the RNA system works to yield "Coordinated Genome Function". Regarding the theoretical foundation of the RNA system (how contravariant cRNA functors, by means of interference with covariant ncRNA functors, comprise the functional geometry by their eigendyads), see "The Recursive Genome Function of the Cerebellum: Geometric Unification of Neuroscience and Genomics" - announcement below, pre-publication here, and Abstract, References and Essential Geometric Concepts here - AJP]


High order chromatin architecture shapes the landscape of chromosomal alterations in cancer ["Fractal Defects" as root causes of cancer -AJP]

Geoff Fudenberg, Gad Getz, Matthew Meyerson & Leonid A Mirny

Nature Biotechnology (2011) doi:10.1038/nbt.2049

Received 09 September 2011; Accepted 21 October 2011; Published online 20 November 2011

ABSTRACT - The accumulation of data on structural variation in cancer genomes provides an opportunity to better understand the mechanisms of genomic alterations and the forces of selection that act upon these alterations in cancer. Here we test evidence supporting the influence of two major forces, spatial chromosome structure and purifying (or negative) selection, on the landscape of somatic copy-number alterations (SCNAs) in cancer [1]. Using a maximum likelihood approach, we compare SCNA maps and three-dimensional genome architecture as determined by genome-wide chromosome conformation capture (HiC) and described by the proposed fractal-globule model [2,3]. This analysis suggests that the distribution of chromosomal alterations in cancer is spatially related to three-dimensional genomic architecture and that purifying selection, as well as positive selection, influences SCNAs during somatic evolution of cancer cells.

[Consistent with the copyright principle that authors retain their intellectual property, the above Nature publication (other than the Abstract) is "for fee"; a pre-publication .pdf with full text and full-size Figures can be found - see Fig. above and text excerpts below - AJP]

...Here, we ask whether the “landscape” of SCNAs across cancers can be understood with respect to spatial contacts in a 3D chromatin architecture as determined by the recently developed HiC method for high-throughput chromosome conformation capture or described theoretically via the fractal globule (FG) model ...Specifically, we investigate the model presented in Figure 1A, and test whether distant genomic loci that are brought spatially close by 3D chromatin architecture during interphase are more likely to undergo structural alterations and become end-points for amplifications or deletions observed in cancer...
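
[The quantitative hook of the fractal-globule (FG) model is its contact-probability scaling: loci separated by genomic distance s are in spatial contact with probability roughly proportional to 1/s, versus s^(-3/2) for an equilibrium globule. A hedged Python sketch of the kind of comparison the paper makes, with simulated SCNA lengths standing in for the real cancer catalog and a simple histogram standing in for the authors' maximum-likelihood fit:]

import numpy as np

def contact_probability(s, model="fractal_globule"):
    # Theoretical contact probability at genomic distance s (arbitrary units).
    if model == "fractal_globule":
        return s ** -1.0    # P(s) ~ 1/s for the fractal globule
    return s ** -1.5        # P(s) ~ s^(-3/2) for an equilibrium globule

rng = np.random.default_rng(0)
scna_lengths = rng.pareto(1.0, size=10000) + 1.0   # simulated SCNA lengths (Mb)

bins = np.logspace(0, 2, 20)
density, _ = np.histogram(scna_lengths, bins=bins, density=True)
for s, d in zip(bins[:-1], density):
    print("s=%7.2f  empirical=%9.5f  FG-model=%7.4f" % (s, d, contact_probability(s)))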

[Some may have been wondering how aberrations of fractal properties of the genome (e.g. its "fractal folding", shown by the rotating 3D Hilbert curve in the header of this site) could lead to genomic pathology. The presented overall "fractal defect" (altering the "optimal functional closeness") is a major finding, leading researchers directly to root causes of cancer. This article is to be compared to the pre-publication copy of the collective work of authors Pellionisz et al. 2011 (In Press), below - AJP]


Recursive Genome Function of the Cerebellum: Geometric Unification of Neuroscience and Genomics

Andras J. Pellionisz, Roy Graham, Peter A. Pellionisz, Jean-Claude Perez
Chapter in Press: The Cerebellum, Handbook by Springer (Ed. M. Manto). Submitted 20th of October, Accepted 1st of November, 2011

Contact: holgentech_at_gmail_dot_com

Abstract

Recursive Fractal Genome Function in the geometric mind frame of Tensor Network Theory (TNT) leads through FractoGene to a mathematical unification of physiological and pathological development of neural structure and function as governed by the genome. The cerebellum serves as the best platform for unification of neuroscience and genomics. The matrix of massively parallel neural nets of fractal Purkinje brain cells explains the sensorimotor, multidimensional non-Euclidean coordination by the cerebellum acting as a space-time metric tensor. In TNT, the recursion of covariant sensory vectors into contravariant motor executions converges into Eigenstates composing the cerebellar metric as a Moore-Penrose Pseudo-Inverse.

The Principle of Recursion is generalized to genomic systems with the realization that the assembly of proteins from nucleic acids as governed by regulation of coding RNA (cRNA) is a contravariant multi-component functor, where in turn the quantum states of resulting protein structures both in intergenic and intronic sequences are measured in a covariant manner by non-coding RNA (ncRNA) arising as a result of proteins binding with ncDNA modulated by transcription factors. Thus, cRNA and ncRNA vectors by their interference constitute a genomic metric. Recursion through massively parallel neural network and genomic systems raises the question if it obeys the Weyl law of Fractal Quantum Eigenstates, or when derailed, pathologically results in aberrant methylation or chromatin modulation; the root cause of cancerous growth. The growth of fractal Purkinje neurons of the cerebellum is governed by the aperiodical discrete quantum system of sequences of DNA bases, codons and motifs. The full genome is fractal; the discrete quantum system of pyknon-like elements follows the Zipf-Mandelbrot Parabolic Fractal Distribution curve.

The Fractal Approach to Recursive Iteration has been used to identify fractal defects causing a cerebellar disease, the Friedreich Spinocerebellar Ataxia – in this case as runs disrupting a fractal regulatory sequence. Massive deployment starts by an open domain collaborative definition of a standard for fractal genome dimension in the embedding spaces of the genome-epigenome-methylome to optimally diagnose cancerous hologenome in the nucleotide, codon or motif-hyperspaces. Recursion is parallelized both by open domain algorithms, and also by proprietary FractoGene algorithms on high performance computing platforms, for genome analytics on accelerated private hybrid clouds with PDA personal interfaces, becoming the mainstay of clinical genomic measures prior and post cancer intervention in hospitals and serve consumers at large as Personal Genome Assistants.
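
[For orientation, two of the mathematical objects named in the Abstract, stated in their standard textbook form (these are the generic definitions, not excerpts from the chapter) - the Moore-Penrose pseudoinverse, and the Zipf-Mandelbrot law with its parabolic fractal generalization:]

% Moore-Penrose pseudoinverse of a (possibly singular) matrix g:
g^{+} = \lim_{\epsilon \to 0^{+}} \left( g^{\mathsf{T}} g + \epsilon I \right)^{-1} g^{\mathsf{T}}

% Zipf-Mandelbrot law: frequency of the element of rank r
f(r) = \frac{C}{(r + q)^{s}}

% Parabolic fractal distribution: a second-order correction in log-log coordinates
\log f(r) = \log f(1) - a \log r - b \, (\log r)^{2}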

[This preview of the Abstract (References and further material to be provided elsewhere) outlines the Non-Euclidean Geometrical Unification of Neuroscience with Genomics; the authors state at the outset that "Our understanding of both the genome and the brain will remain partial and disjointed until we reach a unification of the intrinsic mathematics of structuro-functional geometry of both – as the first is without question a foundation of the second". The paper features the RNA System as a "Genomic Cerebellum", based on the dual valences (covariant and contravariant) of biological entities that represent invariants: movements in sensorimotor coordination in the case of cerebellar neural networks, and, in the genome, physically additive protein synthesis using RNA from "coding DNA" together with measures of the built proteins by RNA emanating from the binding of proteins to "non-coding DNA", where the interference of the two acts as the metric of the functional geometry of coordinated genome function. Based on this conceptual breakthrough, the paper lays down an Agenda for the unification of Neuroscience and Genomics both in R&D and in the Industrialization of Genomics. As discussed on the FaceBook page of Andras Pellionisz, this paper is cited even before its appearance and, in view of an accelerating surge of both demand and technology development (see the Nature publication by a Korean school of scientists and technologists below), will lead to rapid advances: both a conceptual breakthrough from the "frighteningly unsophisticated" notions of coordinated genome function (called "genome regulation"), and massive deployment in health-care R&D and industry, such as fractal diagnosis of cancer - AJP]


Altered branching patterns of Purkinje cells in mouse model for cortical development disorder

Nature, Scientific Reports 1, Article number: 122 doi:10.1038/srep00122

[Purkinje neuron Fractal Dimension is a measure of disease - AJP]

Disrupted cortical cytoarchitecture in the cerebellum is a typical pathology in reeler. Particularly interesting are structural problems at the cellular level: dendritic morphology has important functional implications in signal processing. Here we describe a combinatorial imaging method of synchrotron X-ray microtomography with Golgi staining, which can deliver 3-dimensional (3-D) micro-architectures of Purkinje cell (PC) dendrites, and give access to quantitative information in 3-D geometry. In reeler, we visualized in 3-D geometry the shape alterations of planar PC dendrites (i.e., abnormal 3-D arborization). Despite these alterations, the 3-D quantitative analysis of the branching patterns showed no significant change in the 77 ± 8° branch angle, whereas the branch segment length strongly increased, with large fluctuations, compared to control. The 3-D fractal dimension of the PCs decreased from 1.723 to 1.254, indicating a significant reduction of dendritic complexity. This study provides insights into etiologies and further potential treatment options for lissencephaly and various neurodevelopmental disorders.

The formation of cellular layers and dendritic architectures is essential in the development of cortical structures in the mammalian brain [1]. Alterations in cortical structures are related to epilepsy, mental retardation, deficits in learning and memory, autism, and schizophrenia [2-4]. The alteration patterns of cortical structures are often studied using the neurological mutation reeler [5], which is characterized by ataxia, tremors, imbalance, and a reeling gait [6-9]. In the reeler cerebellum, the cytoarchitecture of neural networks and neurons becomes gradually defective during the developmental process [10,11]. The Purkinje cells (PCs) are not arranged in a regular plane but clustered in subcortical areas at early stages of corticogenesis. As a consequence of the ectopic location of such cells, an aberrant laminar organization occurs [12,13].

...3-D Fractal Dimension

This parameter reflects the degree of geometric complexity [31] of the PC branching systems. Previous estimates from 2-D data varied among different authors [31-34]. We extracted our values using instead 3-D data with the box counting method. The results for normal and reeler PCs were 1.71 ± 0.03 and 1.25 ± 0.02 (Table 2). Our fractal dimension for normal cells is consistent with previous results [31,32,34], whereas for reeler cells it is significantly smaller, indicating a reduced geometric complexity. The lower geometric complexity is also consistent with the data of Figure 5 and could reflect reduced synaptic connections with other neurons.

...Fractal dimension

To estimate the fractal dimension of the PCs, we used the box-counting method. First, we embedded the data points of the PCs in 3-D space. This space was divided into a grid of boxes of size r, and we counted the number of boxes N(r) that contain at least one data point. A log-log plot of r versus N(r) could be fitted by a straight line with slope -D, where D is the fractal dimension (supplemental Fig. 1). A linear least-squares regression was performed to accurately evaluate this slope. To determine the scaling region and the slope, the two end-points of the size range giving the best linear fit were selected.
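
[To make the box-counting recipe above concrete, here is a minimal Python sketch, assuming the dendritic skeleton is given as an (N, 3) array of 3-D points; the test cloud and box sizes are invented for illustration, and this is not the paper's microtomography pipeline:]

import numpy as np

def box_counting_dimension(points, box_sizes):
    # Estimate D from the slope of log N(r) versus log r.
    points = np.asarray(points, dtype=float)
    points = points - points.min(axis=0)       # shift into the positive octant
    counts = []
    for r in box_sizes:
        # Assign each point to a box of side r; N(r) = number of occupied boxes.
        occupied = np.unique(np.floor(points / r), axis=0)
        counts.append(len(occupied))
    # Linear least-squares fit: log N(r) = -D log r + c, so D = -slope.
    slope, _ = np.polyfit(np.log(box_sizes), np.log(counts), 1)
    return -slope

rng = np.random.default_rng(0)
cube = rng.random((20000, 3))                  # space-filling test cloud
print(box_counting_dimension(cube, [0.5, 0.25, 0.125, 0.0625]))   # close to 3
# By this method a normal Purkinje arbor gave D of about 1.71; reeler, about 1.25.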

[It took less than a quarter of a Century from the Fractal Model of the Purkinje Neuron for Korea to deploy 21st Century high-tech that actually measures the Fractal Dimension in 3D, to be used as a measure of disease. While this technology does not show how the growth of the fractal physical geometry is governed by the fractal genome, FractoGene (2002) does. Now that Samsung has already announced, by September 1, 2011, a genome analytics service for the World, the Geometric Unification of Neuroscience and Genomics by Pellionisz (see announcement below and accelerated exposure above) is expected to result in a breakthrough. This comment can be discussed on the FaceBook page of Andras Pellionisz]


Geometric Unification of Neuroscience and Genomics

October 31, 2011

[Invited Chapter accepted to Springer, with the Editor's comment "A Decade Ahead" by Pellionisz et al. (submitted Oct. 20, 2011). Section "Future Directions" lays down a specific Agenda for Industrialization of Genomics. Inquiries to holgentech_at_gmail_dot_com]


...Initially Jobs sought alternatives to surgery

By Joseph Menn in San Francisco

Financial Times
October 21, 2011 2:44 am

[Section on Apple/Google omitted] ...

[Jobs' cancerous DNA analyzed too early, too late - AJP]

Jobs’ core beliefs are at a more emotional level, involving ideas, where the legal outcomes are less predictable, according to the book.

The 630-page book by Walter Isaacson, simply titled Steve Jobs, gives new details on many other areas of the secretive man’s professional and personal life, including his health and his romances.

It is based on dozens of interviews with Jobs that continued until just weeks before his death from cancer this month, as well as talks with family members and friends.

Some of the biggest revelations involve Jobs’ decisions on his medical treatment, where it appears that a man widely hailed as a genius made the poorest decisions possible.

It had been reported by Fortune magazine in 2008 that Jobs had delayed surgery for what he knew was a highly treatable form of cancer in his pancreas while he pursued alternatives. It emerges that Jobs resisted entreaties by his wife, a cancer survivor, former Intel chief Andy Grove and others close to him to have the small tumour removed because he did not want his body to be “violated”, Mr Isaacson told the CBS television show 60 Minutes.

After Jobs finally gave in, it may have been too late. Doctors discovered that the disease had spread to neighbouring tissue, Mr Isaacson said, and Jobs “regretted” his initial reluctance. The news programme posted an excerpt of its interview with Mr Isaacson on Thursday ahead of its full broadcast on Sunday night.

After Jobs accepted a traditional medical approach to his illness, he mastered it in detail and made the final decisions on all treatment, according to an account of the book in the New York Times. That included approving the sequencing of his own genes, which allowed for hand-tailored treatments, a pioneering approach that Jobs believed was key to the future of medicine. But not all that Jobs wrought, at Apple or in his personal life, was a success.

[Steve's cancerous DNA (compared to the DNA of his healthy tissues) was analyzed when his condition was too advanced, and the science was too early. Thus, we lost not only a giant, but a dear friend. Bill Gates had been asked if he wanted his DNA fully sequenced. After some hesitation, he said no... "but perhaps if I had cancer, I would." Now both Intel founders Andy Grove and Gordon Moore should know that genomics hinges on... Informatics. (Gordon Moore had been sequenced twice, not for disease, but to compare Life Technologies' two very different sequencing platforms.) The time is ripe for Informatics Giants to help programs such as the one laid out in the invited Chapter submitted October 20th, "Geometric Unification of Neuroscience and Genomics" (by Pellionisz et al.). This entry can be discussed on the FaceBook page of Andras Pellionisz]


Savoring an NGS Software Smorgasbord

In the latest crop of analysis tools for NGS data, functionality and ease-of-use are twin priorities.

By Allison Proffitt

September-October issue of Bio-IT World, 2011 | “Scaling to bigger and better hardware doesn’t help if your data is [sic] growing in size faster than your hardware,” says Titus Brown at Michigan State University. He and others in the NGS community are calling for software solutions to their NGS data woes instead of massive storage options. In an August post on his blog, “Daily Life in an Ivory Basement,” Brown wrote: “The bottom line is this: when your data cost is decreasing faster than your hardware cost, the long-term solution cannot be to buy, rent, borrow, beg, or steal more hardware. The solution must lie in software and algorithms.”

Thankfully, the options for both are expanding. Familiar names such as CLC bio, Geospiza, DNAnexus, GenomeQuest (see p. 24), Omicia (see p. 48) and others (see “Next-Gen Sequencing Software: The Present and the Future,” Bio•IT World, Sept 2010) are being joined by a new batch of friendly competitors. For the most part, these offerings—from aligners to niche analytics—support the Illumina, 454, and SOLiD platforms, with some including Ion Torrent, Pacific Biosciences, and Complete Genomics data as well.

The software landscape for NGS analysis is broad and varied partly because “analysis” isn’t a cut and dried term, says Knome’s Nathaniel Pearson, director of research. “We’ve managed, as a community, to make people understand that analysis is as important as sequencing in the end… But now we have to tease out upstream and downstream analysis.”

Pearson defines “upstream analysis” as that closest to the sequencing machines, where the first work was done: base calling, variant calling, variant assessment, etc. “Now we’re seeing a focus moving toward downstream analysis, toward understanding many genomes at a time. As the stream of sequencing data from one machine comes together in a river with the streams coming from other machines, we need to make sense of that tide of data.”

Swimming Upstream

Knome’s area of interest can be summarized as “service with software,” says Pearson. kGAP—Knome’s Genome Analysis Platform—is the analysis software Knome uses to “richly annotate genomes and compare them to each other thoroughly,” says Pearson.

Knome’s sequencing and genome analysis service was launched in 2007. “Knome cut its teeth analyzing whole genomes for consumers. Given how costly whole genome sequencing remains, most of those consumers are still either healthy and wealthy aficionados of science and technology, or physician-aided families with urgent health problems—fairly small markets,” says Pearson.

“We do foresee that the consumer market will eventually democratize, as sequencing gets cheaper and insights for small numbers of relatively healthy genomes—especially in family settings—become more precise and useful,” he says.

Until then, Knome plans to keep refining its analysis pipeline and end-user software. Today, more than 95% of the firm’s customer base is researchers, about half from academia and half from industry, users that Pearson says can best understand diseases of widespread public interest.

When these customers receive Knome’s analysis they also receive software tools like KDK Site Finder, a simple query interface that lets clients find interesting sites in one or a set of genomes by “sensibly chosen criteria: allele frequency, call quality, novelty, zygosity—the usual suspects—as well as a rich archive of gene- and site-associated phenotype data from the literature.”

The current version of kGAP runs in the Cloud, which has greatly increased its throughput. But Pearson doesn’t expect analysis costs to fall at the rate of sequencing costs. “They’re going to drop slower than sequencing costs overall because we’re more tied to computational costs—which is more of a Moore’s Law scale,” he says. “Some software will fall quickly; it’ll get commoditized. But the very best software will always be costing a bit more because it will entail evermore complex underlying calculations to make the bottom line look much simpler to use.”

He believes future analysis options will do for sequencing what Photoshop did for photography. “I think we’ll see software for the end user for understanding genomes [in which] a lot of the underlying calculations will be done very swiftly and very cleverly under the hood. And the user’s experience will be very easy and very fast, but that’s going to cost a bit.”

The team at Real Time Genomics might disagree. The company’s “single and only intent is to provide the world’s best genomic analysis software,” says CEO Phillip Whalen. And they’re giving it away for free.

The venture-funded company based in San Francisco unveiled its website only a few months ago, but the technical team has been working on this problem for seven or eight years.

“The decision we made when we basically took the wrappers off,” says Whalen, “was that for organizations we wanted to charge a license fee, but if [researchers are] working on a project and they decide, ‘I’d like a really tight, easy to use pipeline,’ absolutely the use of our software by an individual investigator is unrestricted.”

RTG Investigator is made up of two such pipelines: one geared for variant detection and one for metagenomics. The software runs from a command line interface and is geared toward research teams that include both bioinformaticians and biological investigators. “Our customers wring the last bit of information out of their datasets, and the tension of discovery demands a collaborative effort,” says Stewart Noyce, RTG’s director of product marketing.

“Right at the core is this extremely fast and sensitive searching technology,” says Graham Gaylard, RTG’s founder. “When I say sensitive, we actually can search with mismatches in the search pattern right at the very start.”

“The variant detection pipeline does all of the alignment—it’s a fully gapped aligner—so it does full read matching assembly and also processing right through to variant calls such as SNPs, complex calls, indels, CNVs [copy number variations] and structural variations,” says Gaylard. “It handles paired ends natively, not as an add-on. That gives us far superior efficiency. We’re as accurate as all of them, but we’re faster than the BWA/GATK pipeline by 10x.”

And the numbers are even better for metagenomics. “One of the functions our search technology replaces is BLASTX, a translated nucleotide search of protein databases. We’re 1,000x faster than that.” The Genome Institute at Washington University acquired some of the early licenses for the product a couple of years ago and RTG has worked closely with them on the Human Microbiome Project. Gaylard says RTG has turned a 10-year compute task on their cluster into a three-month problem. “That has a big impact on how you do things,” he says.

The software is designed to make maximum use of the computing resources allocated to it, and will run on a laptop or cluster or can be pushed to the Cloud. Everything is proprietary—new algorithms, a new approach, and patent protected (or pending). “We have not gone out and taken something open source and tweaked it,” says Whalen. “We have attacked the problems from a computer science point of view with new ways of doing things. We’ve done that from scratch and come up with some results that our customers say are pretty compelling.”

Betting on Biologists

Though some users are happy at a command line, Enlis Genomics and others are betting that many biologists would like to dig into their data without also learning bioinformatics. Enlis’ “point-and-click genomics” software was designed by biologists, says founder Devon Jensen.

The software caught Illumina’s eye in July, winning the commercial category in the iDEA Challenge (see “Illumina Showcases New Visions in Genomic Interpretation,” Bio•IT World, July 2011; part of the prize was a one-year co-promotion marketing agreement with Illumina. Jensen says the details of that agreement are still being finalized).

This isn’t variant calling though. The software addresses the biologist’s question: After you have sequenced, assembled, and called variants, what do you do next? Tools like the Variation Filter and the Genome Difference tool let users query the genome and compare up to 100 genomes. “The focus of our software is making it easy to find what is biologically relevant in the sequence data of a patient, individual or research animal,” says Jensen.

The Enlis software comes with an import/annotation tool that creates a .genome file format encapsulating all the different types of genomic data into a single file to improve the process of handling and storing the data for the researcher. The focus is on speed and ease. “The software contains very fast algorithms for filtering variations and finding differences between whole genomes,” says Jensen. “We have organized all of the information in a way that allows a researcher to quickly assess whether a particular feature of a genome is important.”

SoftGenetics’ NextGENe product is also aimed at the individual biologist or clinician, says John Fosnacht, the company’s co-founder and vice president. “It’s a Windows-based program that’s easy to use. It has a lot of tools in it that [users] can use on multiple applications. It doesn’t require any kind of bioinformatics support.”

Fosnacht says the company has several groups of customers, including core labs that don’t have huge bioinformatics resources. The Mayo Clinic, for example, is using a networked version of the software. The software will process a whole human genome in ten hours, Fosnacht says.

In a partnership with the rare disease group at NIH, SoftGenetics developed a variant comparison tool as a module in NextGENe to identify which of thousands of variants are most likely to be causative mutations in rare genetic disorders. The software takes the total number of variants (more than 275,000 variants in a family of 6 in one example) and filters out silent variants, known mutations in dbSNP, and other parameters. The NIH researchers were left with a very manageable six candidate mutations.

“The filtering and prediction part takes less than half a day. That allows the molecular geneticist and researcher, instead of trying to do the impossible and look at 280,000 variants, to focus on relatively few,” says Fosnacht.
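
[The filtering "funnel" described above is simple to state in code. A schematic Python sketch with invented field names and thresholds - not SoftGenetics' actual NextGENe module - showing how each criterion discards a large fraction of the raw calls:]

def filter_candidates(variants, dbsnp_ids, min_coverage=10):
    # Successively discard variants that are unlikely to be causative mutations.
    candidates = []
    for v in variants:
        if v["effect"] == "synonymous":          # silent: protein is unchanged
            continue
        if v["id"] in dbsnp_ids:                 # already a known polymorphism
            continue
        if v["coverage"] < min_coverage:         # poorly supported call
            continue
        if not v["segregates_with_disease"]:     # must track the family pedigree
            continue
        candidates.append(v)
    return candidates

Because the filters compound, a funnel of this kind can plausibly take a six-person family from hundreds of thousands of variants down to single digits, which is what makes the half-day turnaround credible.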

The software uses a modified Burrows-Wheeler transform method, and excels in indel discovery and somatic mutations. NextGENe was able to find a 55-basepair deletion in a 50-bp read. “This is a patented functionality in the software, that can elongate short reads,” says Fosnacht. “In reality it is a localized assembly. Once the reads are elongated the software can detect an indel up to 33% of the elongated length. The same process can be used to actually merge paired reads into one long read. When employed this process can produce Sanger quality reads from short reads.”

These types of projects make the most of what Fosnacht calls tertiary analysis tools. “We want to provide the third level of tools to the actual users to speed up the whole process. Unlike many “freeware” or other programs that just give you a list—an Excel spreadsheet basically—of all the variants that were found, you can actually see them in our browser… A lot of people like to touch and feel, you might say, their data.”

DNASTAR agrees. “There just aren’t enough bioinformaticians out there to handle the data deluge,” says Tom Schwei, DNASTAR’s VP and general manager. “And they don’t want to wait in line for a week or two weeks for that bioinformatics core group. We believe that the end user, the person who is sponsoring the experiment, knows best their research objectives and their data and is in the best position to do the analysis… You shouldn’t have to be a bioinformatician to parse through the data and understand what you see.”

The company views the NGS market as simply an extension of what their customers are already doing. As such, DNASTAR recently moved their next-gen data products under the Lasergene umbrella, a 15-year-old brand name that also includes primer design software and cloning resources. SeqMan NGen is the GUI-based assembler; SeqMan Pro is the data analysis module. They are designed to work together, although they can be purchased separately.

Schwei says that the new Lasergene offerings are designed to be intuitive, fast, and easy to use. Users can easily compare their variants to dbSNP and the reference genome.

“SeqMan Pro’s strength is really the analysis of any number of samples. It can handle individual assemblies quite well, and it can handle multiple assemblies.” The software can manage 100 samples of a certain region, says Schwei. “We will do separation of the tags if people are running multiple samples in one lane on the assembly side. We’ll then report on those samples on the analysis side.”

The software is also affordable. “For less than $10,000, scientists can get all the software they need—and the computer to run it on—to do any next-gen assembly and analysis project they need to do,” he says, thanks to proprietary assembly algorithms. “Basically, it no longer relies on the amount of memory you have on your computer,” Schwei says. “There’s no correlation between the amount of RAM and the size of the genome you have to assemble.”

Avadis NGS by Strand Scientific Intelligence enables “NGS analysis for the rest of us,” says Thon de Boer, director of product management, software. With a strong focus on visualization, Avadis NGS has three major workflows: DNA-seq, ChIP-seq, and RNA-seq. De Boer says Strand is focusing on “the individual researcher with their individual [sequencer] and their individual piece of software.” The desktop software manages analysis after alignment, the “backend” analysis, de Boer calls it, and he says that Strand has been able to “sell to places that already have the Genomics Workbench from CLC bio, for instance, because people really like our visualization.”

“We have special never-seen-before visualizations around splicing—very informative alternative splicing analysis visualization. And the same goes for SNP analysis, what we call the variant supporter view, which is just a better way to look at all of the supporting reads for a particular SNP without being overwhelmed with the amount of data you have to look through.”

Strand has also had success partnering in the field. Ion Torrent is a reseller of Avadis NGS, and Strand recently announced a partnership with German-based BIOBASE to give all Avadis NGS users one-click access to BIOBASE’s Genome Trax curated biological database. “We bundle a lot of our software with publicly available data,” says de Boer, and the partnership with BIOBASE will expand the available data pool and “make it easy for customers to get all the information that they need right from our servers.”

“Partek’s Genomics Suite is a complete start-to-end solution,” says Hsiufen Chua, Partek’s regional manager in Singapore. “Just one package off the shelf and you can do all the genomics data analysis you need in the lab.”

Partek’s product integrates sequencing data with microarray data or real time PCR because, as Chua points out, most labs have several types of data. “[Customers] would like to bring together two sets of data because they would have samples that have been run on different platforms.” Genomics Suite allows users to compare the results in the same platform.

“From the point that [researchers] obtain the reads from the next-gen sequencer, we take care of them. We have solutions to help them align the reads down to the point where they can do quality control to see if the data they have is good enough to proceed for further analysis. If so, then we have the tools for them to do the statistical analysis—all the statistics. Following that, we also have the same tools to do the biological interpretation.”

Service Segment

But if a do-it-all platform is not what a researcher wants, the analysis-as-a-service segment of the market is expanding. While BGI (see p. 31) and Complete Genomics will do sequencing and analysis, Samsung just launched beta testing of an analysis-only service.

Samsung’s Genome Analysis Service will provide analysis for whole-genome sequencing and RNASeq for Life Technologies and Illumina data, says SungKwon Kim, director of the bioinformatics lab at Samsung SDS, with support for the Ion Torrent sequencer ready by the end of 2011.

The algorithms that Samsung SDS is using are a combination of open-source and vendor-provided software with Samsung’s own proprietary “tweaks,” says Kim. Samsung has built its own genome browser, but all of the data are available for download if the customer prefers another option.

Samsung is offering analysis on its own Cloud infrastructure in Korea, which Kim expects to be extremely efficient, safe, and fast. “I think our analysis job is much faster than other competitors,” he says. “Our whole genome analysis will take five days; our RNA analysis will take 3 days.”

He also cites Samsung’s reputation for enterprise-level IT. “We’ve been working with system innovation with banks, high-profile Fortune 500 companies, so when it comes to data security—I don’t think any other vendor companies should be able to match our capabilities in security and recovery handling.”

Kim says Samsung has been eyeing the NGS space for three years. “This [industry] is mainly driven by academics and research institutions who have some of the IT infrastructure and who have their own sequencers… but when the read price drops below $1,000, then I don’t think any research institute or academia will be able to handle [both] their own sequencing jobs and their own analysis jobs.”

With so many options, they shouldn’t have to. •

This article also appeared in the 2011 September-October issue of Bio-IT World magazine.


Foundations of XXI Century Industrialization of Genomics

We're at a frighteningly unsophisticated level of genome interpretation

http://www.signonsandiego.com/news/2011/sep/20/venter-institute-breaking-ground-35-million-center/

Venter Institute breaking ground on $35 million center

While the institute has borrowed money to launch the work, its leaders are hoping to pay for some of the project's cost with donations from local philanthropists.

"It's now easy with the new technology to generate a lot of different data, but there are very few groups or scientists generating knowledge out of this data. We're at a frighteningly unsophisticated level of genome interpretation."

Read more: http://www.smh.com.au/world/genome-research-little-bang-for-buck-scientists-20100401-ri2t.html

http://www.sisbq.org/uploads/5/6/8/7/5687930/qbtherapy.pdf

Stagnaro, S. and Caramel, S. (2011) Italy

"...human bodies are a continuum of biological systems whose dynamics follow the laws of deterministic chaos (Lorenz 1963, Ruelle 1991, Cramer 1994, Stagnaro et al. 1996), which can be measured by means of nonlinear statistical invariants. Furthermore, there is the recent discovery that energy information and communication between DNA and bio-systems are strictly linked with quantum behavior."


A DNA Tower of Babel

As more and more people's genomes are decoded, we need better ways to share and understand the data.

Friday, September 23, 2011 | By David Ewing Duncan

If the Internet cloud were actually airborne, it would be crashing down right now under the sheer weight of a quintillion bytes of biological data. This year, the world's DNA-sequencing machines are expected to churn out 30,000 entire human genomes, according to estimates in Nature magazine. That is up from 2,700 last year and a few dozen in 2009. Recall that merely a decade ago, before the completion of the Human Genome Project, the number was zero. At this exponential pace, by 2020 it may be feasible—mathematically, at least—to decode the DNA of every member of humanity in a single 12-month stretch.

The vast increase in DNA data is occurring because of dazzling advances in sequencing technology. What cost hundreds of millions of dollars a decade ago now costs a mere $10,000. In a few years, decoding a person's DNA might cost $100 or even less.

But what's missing, say a growing chorus of researchers, is a way to make sense of what these endless strings of As, Gs, Cs, and Ts mean to individuals and their health. "We are really good at sequencing people, but our ability to interpret all of this data is lagging behind," says Eric Schadt, director of the Mount Sinai Institute for Genomics and Multiscale Biology and chief scientific officer at California-based Pacific Biosciences, which sells sequencing machines.

Scientists don't yet know what all our DNA does—how each difference in genetic code might influence disease or the color of your hair. Nor have studies confirmed that all the genetic markers linked to, say, heart disease and most cancers actually increase a person's risk for these illnesses. Just as significant, the thousands of genomes being cranked out right now can't easily be compared. There is no standard format for storing DNA data and no consistent way to analyze or present it. Even nomenclature varies from lab to lab.

The industry is working to address these problems. ["Industry" will never solve a problem that is SCIENTIFIC - AJP]. Earlier this summer, at a meeting of geneticists and other experts that I attended in San Francisco, Clifford Reid, the CEO of Bay Area-based Complete Genomics, called for a consortium of gene companies to develop sorely needed standards for everything from consent procedures for DNA donors to methods of collecting, storing, and analyzing DNA specimens. Reid says the ultimate purpose is to "aggregate multiple data sets, providing broad access to data sets that are today in silos and largely unavailable to the broader scientific community."

The payoff from "interoperable" genomes will be faster research on the links between DNA and disease, scientists say. Researchers will be able to validate suspected links between genetic makeup and drug reactions or overall health by conducting much larger studies in which many people's genomes are compared. And physicians and individuals will be able to use standardized methods of reporting a person's genetic risks and advantages. That will matter as more and more ordinary people have their DNA decoded.

Another major initiative comes from Sage Bionetworks, a Seattle-based nonprofit cofounded by Schadt and Sage director Stephen Friend, formerly the leader of Merck's advanced technologies and oncology groups. Sage has raised $20 million to support a movement among biologists, computer scientists, patient advocacy groups, and businesses to standardize DNA databases that have sprung up over the years. "This won't happen overnight," says Schadt. "But it will be huge, like the Internet."

At some companies, efforts are under way to build an IT infrastructure capable of pooling and interpreting whole genomes on a larger scale. Jorge Conde, the CEO of Knome, a company in Cambridge, Massachusetts, that sells whole-genome sequencing as a service and uses a team of PhDs in India to analyze the results, says more drug companies now want to use full genomes to understand why drugs work or have side effects in some people and not others. "As the price has dropped, we are getting more interest from pharma and biotech companies," says Conde. Knome's price for its sequencing and analytical service has dropped from $350,000 in 2007 to under $10,000 today.

One of Knome's more recent ideas, still at an early stage, is to get drug companies to share genomes they have had decoded. The company has launched a cloud-based service called kGAP that would let customers process several hundred genomes at one time, studying them for the presence of 200,000 known links between DNA markers and genotypes for disease and other traits. The technology is still oriented toward facilitating big research projects, but eventually such engines might be used to compare an individual's genome with thousands of others and spit out personalized health tips and diagnoses. "The big play is when this information is available to be used by health-care providers and patients," says Conde. "But that's still several years away."
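
[Mechanically, such an engine is a large annotation join: every variant call in every genome is looked up in a catalog of known marker-trait associations. A toy Python sketch with an invented schema - kGAP's real data model is not public:]

def annotate_genomes(genomes, marker_catalog):
    # genomes: {sample_id: list of (chrom, pos, allele) variant calls}
    # marker_catalog: {(chrom, pos, allele): list of associated traits}
    report = {}
    for sample_id, calls in genomes.items():
        report[sample_id] = [(call, trait)
                             for call in calls
                             for trait in marker_catalog.get(call, [])]
    return report

With the ~200,000 catalog entries held in a hash map, the lookup itself is trivial; the engineering burden is moving and storing hundreds of whole genomes, which is why a cloud batch service is the natural fit.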

[The colossal threat to the sustainability of the Industrialization of Genomics is that a hyper-escalating number of available full DNA sequences is glutting the supply side, while the "demand side" almost totally lacks an overall theoretical understanding of how Recursive Genome Function arises (in the form of iterative fractal recursion) from the fractal DNA. I disseminated this as early as it was possible, once the US government admitted in 2007, in its ENCODE results, that the underlying assumptions had been false for over half a Century. The peer-reviewed science publication was The Principle of Recursive Genome Function, and the now outrageously evident facts were popularized in a Google Tech Talk YouTube (both in 2008), making the trivial point that Information TECHNOLOGY is more than ready, but Information THEORY of genome function is not. With the software-enabling recursive algorithms of FractoGene, HolGenTech, Inc. in Silicon Valley, working with global strategic partners, is ready to come to the rescue. This entry can be discussed at the FaceBook page of Andras Pellionisz]


Sam Waksal, Pfizer Venture Investments, and More: Moderator Looks Forward to All-Star Chat at New York Life Sciences 2031

Arlene Weintraub 9/6/11

Les Funtleyder, manager of the Miller Tabak Health Care Transformation Fund (MTHFX), recently told Xconomy that in a few years, investors are going to look back and wish they had invested more in healthcare today. That forward-thinking attitude prompted Xconomy to invite Funtleyder to moderate our first public New York event, Life Sciences 2031, a panel discussion that will take place October 13 at the Alexandria Center for Life Science.

Funtleyder, who is also author of the book Health Care Investing (McGraw Hill 2009), spends his days contemplating what the future will hold for pharmaceuticals, biotechnology, and health care—and searching for the companies that are best poised to capitalize on those trends. So he’s excited by the prospect of polling the event’s four panelists on current trends in those industries and what they portend for the next 20 years. “The level of expertise on this panel will give the current era some context,” Funtleyder says.

The panelists bring a wide range of experience to bear on what’s sure to be a lively discussion. Sam Waksal was the founder and CEO of ImClone Systems and now serves as CEO of Kadmon, a New York-based biotech startup working on drugs to treat cancer, autoimmune diseases, and infectious diseases. Barbara Dalton is a scientist-turned-investor—a Ph.D. trained in virology and immunology who is now VP of venture capital for Pfizer. Sam Isaly, founder of OrbiMed Advisors, manages the popular Eaton Vance Worldwide Health Sciences Fund. And Eric Schadt is a genomics expert who serves as the chief scientific officer of Pacific Biosciences, as well as the director of the New York-based Mount Sinai Institute for Genomics and Multiscale Biology.

Funtleyder expects all the panelists to weigh in on one of the top questions on everyone’s mind: What are the next areas of growth in research? “The fact that they’ve all been around a while will allow them to bring to the table some ideas about how we can increase R&D productivity in the industry,” Funtleyder says.

For his part, Funtleyder believes R&D has reached an inflection point. “We had Genomics Part 1—the sequencing of the human genome,” he says. “Now there’s Genomics Part 2. The cost of sequencing a genome has come down from $3 million to $1,000 and the speed has gone up. Is this going to lead us into better drug discovery in the next decade? Or are we having another fad and nothing will ever come from it? That will be an interesting question to discuss.”

Personalized medicine is one of the goals of genomics research, Funtleyder points out, and will no doubt be part of the discussion at New York Life Sciences 2031. “Personalized medicine is the holy grail. But are we there yet? I don’t know. It seems like it’s inevitable. But what’s the timing? It could be a while,” he says.

Pfizer’s Barbara Dalton will bring valuable perspective as someone from Big Pharma’s ranks, Funtleyder predicts. “What’s the business model of the future?” he wonders. “Pfizer is setting up campuses all over, in Massachusetts, New York, California. Is that the new model—hiring universities, basically, to do research for you?” Funtleyder is also interested in how Pfizer’s acquisition choices might impact the rest of the industry. “The old saying is ‘If Pfizer sneezes, the industry catches a cold,’” he says. “What Pfizer does will guide what other companies do, and it will also affect the decision making by small and mid-cap biotechs. They want to make themselves attractive to Big Pharma.”

Funtleyder also wonders what the future holds for small biotechs. Some promising startups are getting snapped up by Pfizer and other large companies, he notes. But what about the legions of small biotechs that need years of research to bring their ideas to fruition—and the capital to support those projects? He’s particularly interested in what “Sam and Sam” will have to say about that, seeing as Waksal is running a startup, and Isaly’s firm has a venture arm. “It seems like private equity and venture capitalists are taking less risky bets,” Funtleyder says. “Is this setting us up to no longer have a small biotech industry in the middle of the decade? The good ones will get acquired, the bad ones will go out of business, and we’re not seeing too many public offerings.”

The biggest macro-risk for the future of the industry, Funtleyder believes, is government spending. “We’re having cost problems in the U.S.—that’s not going to go away,” he says. “We’re going to have health care reform. How are we going to grapple with it over the next decade?” Funtleyder hopes to gather perspectives and advice on that issue from all four panelists.

This being a New York event, there will, no doubt, be plenty of discussion about what needs to happen for the city to become a biotech hotbed—long the goal of Mayor Michael Bloomberg and other politicians. “This has been a knock on Manhattan forever,” Funtleyder says. “We have eight research institutions and more people than Boston or San Francisco. It’s always been a mystery to me why we don’t have a thriving biotech industry.”

The four panelists should have some creative ideas for fostering growth in the city’s life sciences sector, Funtleyder believes. “This is a great industry from an economic-development point of view,” he says. “Having life sciences as a growth engine in this town would create other growth engines.”

Funtleyder won’t be the only one firing questions at our panelists, as we will leave plenty of time for audience Q&A. So please join us at Xconomy Forum: New York Life Sciences 2031 by registering here.

[Having spent 14 years as Professor at New York University Medical School, originating a tensor- and now fractal-geometrical approach to identify the intrinsic mathematics of living systems (both in Neuroscience and Genomics, unifying the two), I have plenty of observations and suggestions to share. This comment can be discussed on the FaceBook page of Andras Pellionisz]


Lost In Translation? Andy Grove blasts "Change the System!" in his Anti-Medical School Course at UC Berkeley

California Institute for Quantitative Biosciences
published by Adam Mann on Thu, 09/01/2011 - 16:57

In QB3’s Anti-Medical School course, UC Berkeley and UCSF bioengineering students try to solve real-world problems that doctors face in the clinic.

Wednesday night, QB3 hosted a talk at Byers Auditorium in Genentech Hall entitled “Translational Medicine: Key to Progress or Bridge to Nowhere.” Speaking at the lecture was Andy Grove, co-founder and former CEO of the semiconductor giant Intel.

Grove spoke about the problems currently facing drug development. In terms of time and investment, he said, the closest equivalent process in history to the creation of a single drug is the construction of a single pyramid in ancient Egypt. According to his estimates, a pyramid cost the equivalent of US $1.5 billion and took 20 years to construct - similar to the average drug development cost and time in the US since the early 90s.

Things are only set to get worse. Grove pointed out that modern science is entering a new age of genetic- and cellular-based personalized medicine. The complexity of delivering such individualized drugs repeatedly and effectively will require a new way of thinking about medicine. Researchers will need to rely heavily on integrative technology and collaboration. Grove cited the implantable artificial kidney under development in the lab of Shuvo Roy at UCSF as one approach to overcoming such problems.

But other difficulties loom. Because of the extraordinary complexity of personalized medicine, researchers have a hard time telling the truly relevant effects from everything else, said Grove. [The algorithm of Fractal Recursive Iteration has "parameters" - keeping fractality intact results in harmless human diversity, whereas "syntax fractal defects" breach the fractal integrity - a revolutionary way to tell the needles from the haystack - AJP]. This unsettles investors, who consider biotech a "risky business at risk." Currently, potential biotechnology seed investments are being diverted to Silicon Valley social networking sites, he said.

And then he delivered the bad news.

According to Grove, the industry is not primed for the overwhelming change it will face in the coming decades. Transformation is needed, he said repeatedly. This revolution will only come if people can lower the resistance to the flow of knowledge from labs and research organizations to pharmaceutical companies.

Ideally, Grove said, knowledge flows from a medical center to industry and dollars flow back. Biotech startups are the perfect transitional step between such institutions. Unfortunately, due to difficulties in integration and regulation, nothing flows either way, said Grove.

The solution for him is to do science and early research inside the industry. Grove cited his work at Intel, where he got rid of the R&D department and placed it entirely inside manufacturing, streamlining the process. Perhaps such a model could be a boon to clinical medicine, he suggested.

Regulation has also become a barrier to companies trying to develop new medicines, said Grove. Because it is so easy to file, the US patent office is overwhelmed with trivial and obvious inventions. This is threatening and kills innovation, he said. Furthermore, the agency has been slow to upgrade from paper-based technology to computers, which would greatly reduce waste and increase speed, he said. [A good illustration: in the latest twist of an arduous struggle, the USPTO requires - more than 9 years after the August 2002 priority date of submission - a re-writing of the FractoGene core patent, incorporating attachments already submitted. The expense of this (and of claiming an extension for the 9 years lost) forces a "shotgun marriage" with a giant that can handle this imposed burden - while benefiting a lucky US IT global leader, determined to edge into Genome Analytics 50/50 from the now incredibly precious early priority date - AJP]

In addition, the FDA, while trying to help, has also slowed innovation. Every year, 800,000 scientific papers related to new drugs appear in the literature, said Grove. From these, 6,000 new drugs make it to phase 3 FDA trials. But because of stringent standards, only 20 new drugs come to market each year.

Still, Grove’s talk was not all about gloom and doom. He urged scientists and doctors to get involved in trying to change the system. Ultimately, researchers need to look into doing science that’s not just for itself, Grove said. They need to produce products that help people.

Comment on the Article by Pellionisz:

Happy Birthday, dear Andy! It speaks volumes that a truly great man celebrates his landmark birthday-eve by looking ahead to swim across the rough waters of the future of humankind. The absolute need for "new ways to think about medicine" [and biology at large] recalls perhaps just two outstanding precedents. John von Neumann turned to an informatics-based biology after having accomplished so much for the first serial computer architecture; Neumann looked ahead to times when "the von Neumann bottleneck" would halt what later became "Moore's Law". Gordon Moore's genome, just recently fully sequenced twice, most likely also makes Andy Grove think in overdrive about how to put genome-analytics code on a chip, as the future device capable of matching the data-flow parallel processing performed by the genome itself. Prior to von Neumann, Nobelist Schrödinger addressed in "What is Life?" the quantal hereditary code-script as information carried by aperiodical covalent bondings of hydrogen. Without turning the visions of such giants - thinker-architects of strategies for a better future - into reality, humankind will do poorly. [This comment can be discussed at the Lost in Translation original website, as well as on the FaceBook page of Andras Pellionisz]

[A particularly crass example of "scientists" who never produced anything to help people suffering from "Junk DNA diseases", but who on the contrary excelled as detractors only in badmouthing pioneers of genome informatics, is a Canadian academic in semi-retirement. His blog "Revisiting the Central Dogma in the 21st Century" - while 11 years late even by his own calendar, and unable to produce mathematically properly rounded percentage numbers - has fetched to date close to 500 comments, the vast majority of them totally off-topic cyber-sewage and cyber-libel. One commenter trying to focus on the issue asked "what if Crick was dead wrong with his Dogma?", and an answer pointed out that even Crick considered that a "game changer". Nonetheless, an avalanche of superficial comments from people who admitted their incompetence in science matters produced an utterly worthless "papering over" of the issue itself, digressing into a flabbergasting array of meaningless tangents. Perhaps out of embarrassment (over becoming infamous for his libelous badmouthing), the blog owner (while still upholding the legally faulty policy that makes him liable for allowing "anonymous" postings) now openly censors out attempts to steer attention to the paradigm-shift necessitated by the historical failures of the two mistaken dogmas of Old School Genomics - AJP]


W.M. Keck Foundation awards Jefferson scientists with $1M medical research grant [Rigoutsos - AJP]

Published on August 4, 2011 at 2:23 AM

Scientists at Thomas Jefferson University have been awarded a $1 million medical research grant from the W.M. Keck Foundation for an ambitious project looking at the little-explored 98 percent of the human genome and what role it may play in the onset and progression of diseases.

The multidisciplinary team, led by Computational Medicine Center at Jefferson director Isidore Rigoutsos, Ph.D., will soon begin studying a particular group of DNA motifs - genomic combinations of letters that repeat more frequently than expected by chance - called pyknons. Researchers and physicians will be looking at what function they serve in the context of several types of cancers, platelet aggregation properties, two autoimmune disorders, and type-1 diabetes.

Dr. Rigoutsos, a world-renowned computational biologist, originally discovered pyknons in 2005 using computational analyses. In the time since their discovery, evidence has been slowly accumulating that these pyknon motifs mark transcribed, non-coding RNA sequences with potential functional relevance in human disease.
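To give readers a feel for the kind of computation involved, below is a toy Python sketch - emphatically not Rigoutsos's actual discovery method, which relied on far more sophisticated unsupervised pattern discovery - that flags k-mers occurring far more often than a simple base-composition model predicts. The sequence and thresholds are hypothetical.

    from collections import Counter

    def overrepresented_kmers(seq, k=8, fold=4.0):
        """Toy pyknon-style scan: flag k-mers whose counts exceed `fold` times
        the expectation under an i.i.d. base-composition model."""
        n = len(seq)
        base_count = Counter(seq)                  # single-base counts
        windows = n - k + 1
        counts = Counter(seq[i:i + k] for i in range(windows))
        hits = {}
        for kmer, obs in counts.items():
            p = 1.0
            for b in kmer:                         # probability under the i.i.d. model
                p *= base_count[b] / n
            expected = p * windows
            if obs >= fold * expected:
                hits[kmer] = (obs, expected)
        return hits

    # demo on a synthetic sequence with planted repeats
    seq = "ACGT" * 200 + "GATTACAG" * 25 + "TTGCA" * 100
    top = sorted(overrepresented_kmers(seq).items(), key=lambda kv: -kv[1][0])[:5]
    for kmer, (obs, exp) in top:
        print(f"{kmer}: observed {obs}, expected {exp:.2f}")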

"This is very exciting. The grant comes on the heels of six years of research," said Dr. Rigoutsos. "It will help us get to the bottom of this story: an unexplored territory that we strongly suspect has something important to reveal about human disease. There is disconnected evidence, and we want to assemble all the pieces."

For many years, Rigoutsos, who came to Jefferson in 2010 following a nearly 18-year tenure at IBM's Research Division, focused on generating conspicuous tidbits of evidence computationally, the result of his not having access to experimental facilities. All of this has, of course, changed at his new home: the Computational Medicine Center, which he founded at Jefferson last year.

Now, he said, he can cast a wider and deeper net by studying pyknons using samples from a diverse collection of human conditions: prostate, colon and pancreatic cancer, chronic lymphocytic leukemia, type-1 diabetes, hyper- and hypo-reactivity in platelets, multiple sclerosis, and systemic sclerosis. Dr. Rigoutsos is also a member of the Kimmel Cancer Center at Jefferson.

The goal is to investigate the presence of pyknon-marked non-coding RNAs in these conditions and determine the rules governing the biogenesis, processing, and mechanisms of regulatory action of these transcripts. The planned research activity will involve a combination of computational analyses and modern experimental techniques.

The winning team comprises researchers and physicians from the Computational Medicine Center and several departments of Thomas Jefferson University and Hospital, the Children's Hospital of Philadelphia, the University of North Carolina at Chapel Hill, and the University of Texas MD Anderson Cancer Center.

"It is a great honor to be recognized by the W.M. Keck Foundation, which has a long history of supporting innovative and pioneering medical research," said Mark L. Tykocinski, M.D., Dean of Jefferson Medical College and Senior Vice President of Thomas Jefferson University. "This is a unique award for a unique area of human genome research that, with our multidisciplinary approach, will undoubtedly pave the way for breakthrough discoveries to help better treat and prevent diverse diseases."

[Nobel Prizes in Genomics will be awarded in Physics, for biophysicists picking up the trail left by Erwin Schroedinger (What is Life? - 1944), who predicted that non-periodical covalent bondings of hydrogen encode life by a mathematics unknown to him - and to the rest of the World. After the "Big Genome Letdown" decade since full human DNA sequencing (which revealed the sequence of bondings, but absolutely did not "crack the code" of how life is encrypted in a sequence), a new era has started with biophysicists (often with degrees in pure math, bioinformatics, computer science, etc.) who had traditionally been sidetracked as "rebels". Now, rebels are becoming leaders. Isidore Rigoutsos is in the same rank as Eric Schadt or Eric Lander (soon, Francis Collins will also be widely known to have had quantum mechanics before his full M.D./Ph.D....). There will be no Nobel Prize for "the gene" of cancer, schizophrenia, autism, diabetes, etc., since the failed "gene discovery" approach is extremely unlikely to come up with the (almost certainly non-existent) "genes for complex hereditary syndromes". (This is not to deny that about 2,800 so-called "Mendelian diseases" are already on record, since the mutation of even a single letter of A,C,T,G can turn an amino-acid-producing codon into, e.g., a "premature termination codon" - resulting in a truncated, thus dysfunctional or even toxic, protein and thus a dreadful disease.) However, "Genome Regulation Diseases" will not be so "easy" to map - an entirely new science is needed to mathematically understand Recursive Genome Function - though as yet only a minority of leading scientists are fully aware of the game-changer of discarding an obsolete "open loop" axiom and carting in "recursive, e.g. fractal iteration" algorithms. Eric Lander has already joined the fray of the "fractal nature of DNA" - grabbing Grosberg's 20+ year old seminal concept of the fractal folding of the 2 m long "noodle" of DNA into the 6 micron radius nucleus of a cell. Lander commands the $600 M Broad Institute (by grace of Eli & Edythe Broad). Eric Schadt, a "rebel" who told off one of the biggest of Big Pharma that they would have to become a "genome informatics company", was dismissed outright - but now Eric Schadt, in addition to being CSO of PacBio, has assumed the Directorship of the $100 M Mount Sinai Institute of Genomics and Multiscale Biology ("multi-scale" rhymes with "scale-free"...). Another mathematician, Isidore Rigoutsos, while enjoying the comfort of the IBM Watson Center for almost two decades, graced a somewhat unwilling World with his "pyknons" (short repetitive DNA snippets that appear throughout genes and "Junk" with frequencies far exceeding chance). It took a year to get his work published - yet another "rebel" trampling on the no man's land of "Junk" considered undesirable - but now he has his own Institute to lead progress. While Isidore did not focus his immediate attention on the distribution curve of pyknon-like elements of a full DNA, in my Cold Spring Harbor Laboratory presentation (2009), upon invitation by George Church, I provided evidence that they follow the Zipf-Mandelbrot Fractal Parabolic Distribution Curve (see Figs. 11-14).
It may be that Academic Institutes like the above are needed before the Industrialization of Genomics makes the Sequencing Industry sustainable by matching it with an Analytics Industry - or history will show that the next quantum leap is a better solution, turning paradigm-shift science into an IT-led "New Pharma" based on a "Genome Computing Business" (the latter already a fact accomplished by SAMSUNG). - This entry can be discussed in the FaceBook page of Andras Pellionisz]
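For readers who want to test such a distributional claim on their own motif counts, a minimal sketch of fitting the Zipf-Mandelbrot rank-frequency curve f(r) = C/(r+q)^s follows. It assumes scipy is available, and the counts below are hypothetical placeholders, not the Cold Spring Harbor data.

    import numpy as np
    from scipy.optimize import curve_fit

    def zipf_mandelbrot(r, C, q, s):
        # rank-frequency law: f(r) = C / (r + q)**s
        return C / (r + q) ** s

    # hypothetical rank-ordered motif counts (substitute real pyknon counts)
    counts = np.array([900.0, 510, 350, 270, 220, 185, 160, 140,
                       125, 112, 102, 94, 87, 81, 76])
    ranks = np.arange(1, len(counts) + 1, dtype=float)

    # fit in log space for numerical stability; bounds keep parameters positive
    params, _ = curve_fit(
        lambda r, C, q, s: np.log(zipf_mandelbrot(r, C, q, s)),
        ranks, np.log(counts),
        p0=[counts[0], 1.0, 1.0],
        bounds=([1e-3, 0.0, 0.1], [1e6, 100.0, 10.0]))
    C, q, s = params
    print(f"fitted C={C:.1f}, q={q:.2f}, s={s:.2f}")

A straight line on a log-log plot of counts against (rank + q) would support the claimed power-law distribution; systematic curvature would argue against it.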


Samsung Launches Genome Analysis Service, Offers Free Genome

Bio-IT World
By Allison Proffitt

August 23, 2011

Samsung SDS is launching beta testing of its new next generation sequencing data analysis service beginning September 1. The Samsung SDS Genome Data Analysis Service will provide analysis services for whole genome sequencing and RNASeq for both Life Technologies and Illumina sequencers. The service will be in beta testing mode from September to November and will be offering free genome analysis (one genome per researcher) during the testing phase.

The service will be Cloud-based, with all analysis being done on the Samsung cloud in Korea. Users can either upload their data, or send a hard drive. “The customer will be actually going to our website and filling out the order form… [to get] their whole genome analyzed and their RNA analyzed,” SungKwon Kim, director of the Bioinformatics Lab, told Bio-IT World. Results are returned to the user online via Samsung’s own genome browser. “User should be able to easily navigate their genomic analysis with our web browser once we finish the genomic analysis.”

The algorithms that Samsung SDS is using for analysis are a combination of open-source and vendor-provided software with Samsung’s own proprietary “tweaks”.

Bioinformatics is a newer area for Samsung, but Kim expects it to be a growth area for the company in the years to come. “We are known for the enterprise-level IT business. When I say enterprise level, we’ve been working with system innovation with banks, high profile, Fortune 500 companies, so when it comes to data security… I don’t think any other vendor companies should be able to match with our capabilities in security, recovery handling, those kinds of things.”

Kim also believes that Samsung will offer faster genome analysis than its competitors or in-house options. He expects the final service to take 5 days for whole-genome analysis and 3 days for RNA analysis.

Interested parties can sign up for the beta test beginning September 1 on the Samsung Genome website http://www.samsunggenome.com.

[I predicted in 2004 that the formative event, as far as technology and business are concerned, would come when the "tectonic plates" of IT and Genomics pile up, resulting in the "earthquake" of the Industrialization of Genomics. I would never have guessed that it would be Korea (Samsung) beating the World to the punch - including IBM (Rigoutsos just left...), Intel (a pioneering investor in sequencing, though Schadt has just moved most of his activities from Silicon Valley to New York City), Microsoft, Google, Oracle, and even Dell or HP, none really engaged in Genome Analytics in earnest; even Sony, Fujitsu and Japan only explored the option, and are now beaten in the all-important, indeed crucial "rush to market". One wonders whether the world will rush to ship its full genomes to Korea (with the dubious claim that their security beats everyone's...). As pointed out in my YouTube (2008), Genome IT means not one but two things. Genome Information Technology is the relatively "easy" part (with Samsung now in the lead), but Genome Information Theory (intrinsic algorithms of recursive genome function) is much harder. HolGenTech, Inc. in Silicon Valley has focused its IP since 2002 on Genome Information Theory - and is now ready to beat the "combination of open-source and vendor-provided software with Samsung's own proprietary tweaks" - This entry can be discussed on the Facebook page of Andras Pellionisz]


Comment by Andras J. Pellionisz to New York Times "Cancer's Secrets coming into Sharper Focus"

The following (unpublished) comment is better suited to this more specialized audience:

It is a pleasure to see in the superb coverage of the NYT that two mistaken, yet very seriously stated, axioms are finally being discarded: Crick's over-half-a-Century "Central Dogma", held from 1956 to his passing in 2004, and Ohno's erroneous "So much 'Junk' DNA in our genome" theory, published in 1972 and held until his death in 2000. The establishment first acknowledged this in 2007, when the NIH-organized ENCODE project revealed that the human DNA is "pervasively transcribed"; now an entirely new conceptual basis is available to mathematically formulate genome regulation in terms of advanced (software-enabling) algorithms of fractal iterative recursion, as well as to explain how aberrant methylation and chromatin modulation of regulatory sequences leads to uncontrolled (cancerous) growth. The peer-reviewed science paper is The Principle of Recursive Genome Function (2008); the misregulation by aberrant methylation and chromatin modulation leading to cancer is specifically demonstrated at min. 30:00 of the Google Tech Talk YouTube (2008) and in the subsequent Cold Spring Harbor presentation invited by George Church (2009). Indeed, the present NIH Head Dr. Collins called, upon the conclusion of ENCODE in 2007, for "the community of scientists to re-think long-held beliefs". A lucky few did not have to "re-think" old dogmas, since they never believed in them in the first place - thanks to cross-disciplinary domain expertise in the informatics essentials of biophysics - but publication of the formerly "lucid heresy" (on both counts, since iterative fractal recursion "violated" both the Central Dogma and the Junk DNA rules of the establishment) was only possible when floated in the self-edited Cambridge University Press book chapter Neural Geometry: Towards a Fractal Model of Neurons (1989). The NIH Grant Proposal - duly acknowledged in the book chapter - was rejected, and an ongoing NIH Grant was discontinued. Nearly a quarter-Century later, since "our concepts of genome regulation are frighteningly unsophisticated", the immediate utility of advanced algorithmic (software-enabling) approaches is now palpable. This post can be discussed in the FaceBook page of Andras Pellionisz.
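To make "iterative fractal recursion" concrete, here is a generic sketch - an illustration of fractal recursion in general, not the genome's actual algorithm nor FractoGene's code. A couple of parameters driven through a recursive rule generate an elaborate branching structure, and perturbing one parameter (standing in, loosely, for a "fractal defect") produces runaway growth:

    import math

    def branch(x, y, angle, length, depth, ratio=0.7, spread=0.5, segments=None):
        """Recursively grow a binary branching tree; ratio and spread are the
        few 'parameters' that govern the entire structure."""
        if segments is None:
            segments = []
        if depth == 0:
            return segments
        x2 = x + length * math.cos(angle)
        y2 = y + length * math.sin(angle)
        segments.append(((x, y), (x2, y2)))
        branch(x2, y2, angle - spread, length * ratio, depth - 1, ratio, spread, segments)
        branch(x2, y2, angle + spread, length * ratio, depth - 1, ratio, spread, segments)
        return segments

    def total_length(segments):
        return sum(math.dist(a, b) for a, b in segments)

    intact = branch(0.0, 0.0, math.pi / 2, 1.0, depth=10)               # self-similar shrinking
    defect = branch(0.0, 0.0, math.pi / 2, 1.0, depth=10, ratio=1.02)   # scaling rule breached
    print(f"intact tree length {total_length(intact):.1f} vs defective {total_length(defect):.1f}")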


Everything Scientists Thought They Knew About Cancer Might Be Totally Wrong

A 2000 study called The Hallmarks of Cancer is the most-referenced paper in the journal Cell, one of the most influential journals in the world. Turns out that paper might be wrong.

And that might partly explain why cancer death rates are falling slowly.

The theory, which is illustrated in the video above, basically says that sometimes cells lose their ability to regulate growth and go crazy, creating cancer tumors. The New York Times reports that at the recent annual meeting of the American Association for Cancer Research in Orlando, Florida, scientists had a whole lot of other theories up their sleeves.

One states that microbes, which include tiny creatures like bacteria and make up 90 percent of the cells in our body, sometimes turn against us to cause cancer.

Another theory is that what scientists used to call "junk DNA", which makes up 98 percent of our DNA (only two percent actually codes for proteins), is not junk at all but a mechanism for causing cancer, among other things.

Lastly, microRNAs might be the culprit. Until recently scientists thought they didn't do much, but now they're starting to think that microRNAs might intercept or block the messages that DNA sends via messenger RNA.

If all of that seems crazy-complicated, check out this pretty awesome video the New York Times created for you.

Of course, cancer ain't talkin' so these theories might be totally wrong too. But one thing's for certain: smoking is still bad. Damn.

---

Cancer’s secrets are coming into sharper focus

By George Johnson / New York Times News Service

Published: August 16. 2011 4:00AM PST

For the last decade cancer research has been guided by a common vision of how a single cell, outcompeting its neighbors, evolves into a malignant tumor. Now recent discoveries are providing new details. Cancer appears to be even more willful and calculating than previously imagined.

Through a series of random mutations, genes that encourage cellular division are pushed into overdrive, while genes that normally send growth-restraining signals are taken offline.

With the accelerator floored and the brake lines cut, the cell and its progeny are free to rapidly multiply.

More mutations accumulate, allowing the cancer cells to elude other safeguards and to invade neighboring tissue and metastasize.

These basic principles — laid out 11 years ago in a landmark paper, “The Hallmarks of Cancer,” by Douglas Hanahan and Robert Weinberg, and revisited in a follow-up article this year — still serve as the reigning paradigm, a kind of Big Bang theory for the field.

But recent discoveries have been complicating the picture with tangles of new detail.

Most DNA, for example, was long considered junk — a netherworld of detritus that had no important role in cancer or anything else. Only about 2 percent of the human genome carries the code for making enzymes and other proteins, the cogs and scaffolding of the machinery that a cancer cell turns to its own devices.

These days “junk” DNA is referred to more respectfully as “noncoding” DNA, and researchers are finding clues that “pseudogenes” lurking within this dark region may play a role in cancer.

“We’ve been obsessively focusing our attention on 2 percent of the genome,” said Dr. Pier Paolo Pandolfi, a professor of medicine and pathology at Harvard Medical School. This spring, at the annual meeting of the American Association for Cancer Research in Orlando, Fla., he described a new “biological dimension” in which signals coming from both regions of the genome participate in the delicate balance between normal cellular behavior and malignancy.

As they look beyond the genome, cancer researchers are also awakening to the fact that some 90 percent of the protein-encoding cells in our body are microbes. We evolved with them in a symbiotic relationship, which raises the question of just who is occupying whom.

“We are massively outnumbered,” said Jeremy Nicholson, chairman of biological chemistry and head of the department of surgery and cancer at Imperial College London. Altogether, he said, 99 percent of the functional genes in the body are microbial.

In Orlando, he and other researchers described how genes in this microbiome — exchanging messages with genes inside human cells — may be involved with cancers of the colon, stomach, esophagus and other organs.

These shifts in perspective, occurring throughout cellular biology, can seem as dizzying as what happened in cosmology with the discovery that dark matter and dark energy make up most of the universe: Background suddenly becomes foreground and issues once thought settled are up in the air. In cosmology the Big Bang theory emerged from the confusion in a stronger but more convoluted form. The same may be happening with the science of cancer.

Exotic players

According to the central dogma of molecular biology, information encoded in the DNA of the genome is copied by messenger RNA and then carried to subcellular structures called ribosomes, where the instructions are used to assemble proteins. Lurking behind the scenes, snippets called microRNAs once seemed like little more than molecular noise. But they have been appearing with increasing prominence in theories about cancer.

By binding to a gene’s messenger RNA, microRNA can prevent the instructions from reaching their target — essentially silencing the gene — and may also modulate the signal in other ways. One presentation after another at the Orlando meeting explored how microRNAs are involved in the fine-tuning that distinguishes a healthy cell from a malignant one.

Ratcheting the complexity a notch higher, Pandolfi, the Harvard Medical School researcher, laid out an elaborate theory involving microRNAs and pseudogenes. For every pseudogene there is a regular, protein-encoding gene. (Both are believed to be derived from a common ancestral gene, the pseudogene shunted aside in the evolutionary past when it became dysfunctional.) While normal genes express their will by sending signals of messenger RNA, the damaged pseudogenes either are mute or speak in gibberish.

Or so it was generally believed. Little is wasted by evolution, and Pandolfi hypothesizes that RNA signals from both genes and pseudogenes interact through a language involving microRNAs. (These signals are called ceRNAs, pronounced “sernas,” meaning “competing endogenous RNAs.”)

His lab at Beth Israel Deaconess Medical Center in Boston is studying how this arcane back channel is used by genes called PTEN and KRAS, commonly implicated in cancer, to confer with their pseudotwins. The hypothesis is laid out in more detail this month in an essay in the journal Cell.

In their original “hallmarks” paper — the most cited in the history of Cell — Hanahan and Weinberg gathered a bonanza of emerging research and synthesized it into six characteristics. All of them, they proposed, are shared by most and maybe all human cancers. They went on to predict that in 20 years the circuitry of a cancer cell would be mapped and understood as thoroughly as the transistors on a computer chip, making cancer biology more like chemistry or physics — sciences governed by precise, predictable rules.

Now there appear to be transistors inside the transistors. “I still think that the wiring diagram, or at least its outlines, may be laid out within a decade,” Weinberg said in an email. “MicroRNAs may be more like minitransistors or amplifiers, but however one depicts them, they still must be soldered into the circuit in one way or another.”

In their follow-up paper, “Hallmarks of Cancer: The Next Generation,” he and Hanahan cited two “emerging hallmarks” that future research may show to be crucial to malignancy — the ability of an aberrant cell to reprogram its metabolism to feed its wildfire growth and to evade destruction by the immune system.

Unwitting allies

Even if all the lines and boxes for the schematic of the cancer cell can be sketched in, huge complications will remain. Research is increasingly focused on the fact that a tumor is not a homogeneous mass of cancer cells. It also contains healthy cells that have been conscripted into the cause.

Cells called fibroblasts collaborate by secreting proteins the tumor needs to build its supportive scaffolding and expand into surrounding tissues. Immune system cells, maneuvered into behaving as if they were healing a wound, emit growth factors that embolden the tumor and stimulate angiogenesis, the generation of new blood vessels. Endothelial cells, which form the lining of the circulatory system, are also enlisted in the construction of the tumor’s own blood supply.

All these processes are so tightly intertwined that it is difficult to tell where one leaves off and another begins. With so much internal machinery, malignant tumors are now being compared to renegade organs sprouting inside the body.

As the various cells are colluding, they may also be trading information with cells in another realm — the micro-organisms in the mouth, skin, respiratory system, urogenital tract, stomach and digestive system. Each microbe has its own set of genes, which can interact with those in the human body by exchanging molecular signals.

“The signaling these microbes do is dramatically complex,” Nicholson said in an interview at Imperial College. “They send metabolic signals to each other — and they are sending chemicals out constantly that are stimulating our biological processes.

“It’s astonishing, really. There they are, sitting around and doing stuff, and most of it we don’t really know or understand.”

People in different geographical locales can harbor different microbial ecosystems. Last year scientists reported evidence that the Japanese microbiome has acquired a gene for a seaweed-digesting enzyme from a marine bacterium. The gene, not found in the guts of North Americans, may aid in the digestion of sushi wrappers. The idea that people in different regions of the world have co-evolved with different microbial ecosystems may be a factor - along with diet, lifestyle and other environmental agents - in explaining why they are often subject to different cancers.

The composition of the microbiome changes not only geographically but also over time. With improved hygiene, dietary changes and the rising use of antibiotics, levels of the microbe Helicobacter pylori in the human gut have been decreasing in developing countries, and so has stomach cancer. At the same time, however, esophageal cancer has been increasing, leading to speculation that H. pylori provides some kind of protective effect.

At the Orlando meeting, Dr. Zhiheng Pei of New York University suggested that the situation is more complex. Two different types of microbial ecosystems have been identified in the human esophagus. Pei’s lab has found that people with an inflamed esophagus or with a precancerous condition called Barrett’s esophagus are more likely to harbor what he called the Type II microbiome.

“At present, it is unclear whether the Type II microbiome causes esophageal diseases or gastro-esophageal reflux changes the microbiome from Type I to II,” Pei wrote in an email. “Either way, chronic exposure of the esophagus to an abnormal microbiome could be an essential step in esophageal damage and, ultimately, cancer.”

Unseen enemies

At a session in Orlando on the future of cancer research, Dr. Harold Varmus, the director of the National Cancer Institute, described the Provocative Questions initiative, a new effort to seek out mysteries and paradoxes that may be vulnerable to solution.

“In our rush to do the things that are really obvious to do, we’re forgetting to pay attention to many unexplained phenomena,” he said.

Why, for example, does the Epstein-Barr virus cause different cancers in different populations? Why do patients with certain neurological diseases like Parkinson’s, Huntington’s, Alzheimer’s and Fragile X seem to be at a lower risk for most cancers? Why are some tissues more prone than others to developing tumors? Why do some mutations evoke cancerous effects in one type of cell but not in others?

With so many phenomena in search of a biological explanation, “Hallmarks of Cancer: The Next Generation” may conceivably be followed by a second sequel — with twists as unexpected as those in the old “Star Trek” shows. The enemy inside us is every bit as formidable as imagined invaders from beyond. Learning to outwit it is leading science deep into the universe of the living cell.

[The Cancer Establishment has "thrown in the towel", giving up on the old axiom of looking for "the cancer gene". Now they bank on more sophisticated approaches, in which the two mistaken axioms ("Junk DNA" and the "Central Dogma") are given up. A good time for the Jim Clarks of Postmodern Genomics to jump on the fractal bandwagon of HolGenTech, Inc.! - This post can be discussed on the FaceBook page of Andras Pellionisz]


Spit and know your future

Author(s): Dinsa Sachan

Issue: Aug 15, 2011
Down to Earth (India)

Personal genomics makes individualised healthcare possible; but experts remain wary.

SOUMYA Swaminathan's 16-year-old son, Sudarshan, loves sports and is training to be a footballer. Two years ago, he was confused about which sport to train in. "He would come home and say he wants to play cricket. The next week it would be football," says Swaminathan. Then she heard about the sports DNA test offered by Super Religare Laboratories (SRL). The Mumbai-based firm had just launched the test, which examines DNA from a saliva sample for variants of a gene linked with sporting prowess. It turned out that Sudarshan had a genetic predilection for power sports. "So we decided to focus on football and basketball," she adds.

Gene testing is no longer restricted to paternity testing and DNA fingerprinting for criminal cases. Its avenues now range from detailing which diseases you are likely to contract to which sports suit you. Experts say personal genomics is the future of science. All you have to do is give a saliva or blood sample, and within days comprehensive feedback on your health - including which diseases you are more or less likely to develop over your lifetime - is handed over to you.

“Personalised medicine means different things to different people. Some see it as targeted genomics where changes in specific genes predict responses to specific therapies,” says John Tomaszewski, president, American Society for Clinical Pathology. Others might see it as cellular engineering where one’s own cells are removed, re-engineered to treat a specific disease, and then re-infused into the patient, he adds. Both of these strategies are in limited use today, but hold hope for individualised healthcare.

Beginning and future

The seeds of personalised genomics were first sown in 1990 when the Human Genome Project (HGP) was conceived by a team of scientists in the US. The project, which lasted 13 years, identified and mapped the entire human genome—approximately 30,000 genes.

The Personal Genome Project (PGP), pioneered by George Church, one of HGP's founders, is the next step. HGP mapped the genome of an anonymous person, while the PGP, when completed, will hold the genetic information of many individuals as well as their phenotypic information. Through this, researchers can study the connections between gene functions and physical traits. The project is in the process of gathering genotypic information from 100,000 participants.

Rush to capitalise on science

Besides research-based projects like the PGP, the more visible side of the personal genomics industry is commercial enterprises like 23andme, Navigenics and Knome, which offer various genetic tests to the public. Companies like the California-based 23andme, which was floated in 2006, take a saliva sample from the customer and predict susceptibility to 199 genomic conditions, including cancer and diabetes. It also has a research division, 23andWe, which published its first paper in June 2011, pinpointing genetic origins of Parkinson's disease.

23andme, along with ventures like Navigenics, belongs to the direct-to-consumer bracket that relies on SNP (single nucleotide polymorphism) genotyping. SNPs are single-letter positions in human DNA that vary between individuals. Some SNPs influence traits like physical appearance and susceptibility to diseases.

23andme reads only a portion of one's genome. Other players like Knome and Illumina, whose technology is used by a number of projects, offer whole-genome sequencing services, which list every minute detail of your genetic fingerprint. Knome's DNA analytics service is priced at US $4,998. When it was launched, 23andme charged US $999 for its service. The cost was slashed to US $399 two years later. Now the company offers a package for US $99, which includes a year's mandatory subscription at US $9 per month, under which participants are informed of any new findings from their single saliva sample.

“The price to sequence bases has fallen a million times from when the HGP started with three billion,” says Richard Resnick, CEO of GenomeQuest, a genome sequence management company. He adds that the cost will drop to as low as US $1,000 by next year.

Where India stands

Although companies like 23andme have not yet arrived in India, the country has been making strides in personal genomics. The Institute of Genomics and Integrative Biology (IGIB) in New Delhi mapped India's first human genome in 2009. "Though India started slow in the field, we are catching up," says Rajesh Gokhale, Director, IGIB.

On the commercial side, only SRL [Super Religare Laboratories] has set up shop in the country. It takes a cheek swab and tests it for variations within a gene (ACTN3) associated with athletic performance in numerous studies. SRL tests children for two variants of the gene. According to B R Das, research and development chief at SRL, individuals with the R variant of the gene have a possibility of excelling in sports that require short, powerful bursts of energy. The other variant, X, may be more useful in endurance sports like cricket. The test has been taken by 3,000 people across India. It costs Rs 2,000.

Binay Panda, head of Ganit Labs, Institute of Bioinformatics and Biotechnology in Bengaluru, believes India should develop its genomic potential in a direction that can benefit the population at large. He sees potential in medicine. His lab is trying to pin down the genetic causes of cancer. "This can help us devise specific diagnostic and prognostic techniques that would detect cancer at an early stage," he adds.

In doubt

But experts remain critical about the disease assessments made by companies. “As far as sequencing some parts of genome and making predictions is concerned, we do know a fair amount about specific diseases and mutations associated with them,” says Kevin Rosenblatt, director, Center for Clinical Proteomics at The University of Texas Medical School at Houston. But when we consider the whole genome and the vast number of mutations that each of us carries in our DNA, science is not that well developed, he adds.

Gil Atzmon, assistant professor of genetics and medicine at the Albert Einstein College of Medicine in the US, says, "Risk is not necessarily illness, and the probability of getting a disease is dependent on many factors, especially when epigenetics - the response of the genome to stress, diet, toxins and much more - is introduced to the equation." He adds that two people can have the same genetic blueprint but be vulnerable to different diseases based on the environment they interact with. Andras Pellionisz, head of HolGenTech Inc, which makes genome-analysing software, believes analysing genomes presents various challenges which current technology is not adequately prepared to meet. "The biggest challenge is interpretation. Genome is a changing repository of information because mutations and actual sequence alterations happen throughout life."

Indian researchers are also wary of such endeavours. "The test by SRL measures one factor of energy metabolism. Whether you become a sportsperson or not depends on other factors, like environment, as well," says Gokhale. Panda dubs the tests "recreational genomics".

Ethical and legal issues

When genomics companies were sprouting, the debate revolved around ethical issues, and these continue to haunt the field. Critics question the repercussions of telling people which diseases they are vulnerable to. There are also concerns that health insurance companies might exploit the information and refuse to insure someone based on the probability of a specific disease. Though the US has a law - the Genetic Information Nondiscrimination Act - that protects US citizens from "genetic discrimination", no other country has such legislation.

[I made a technical comment on "biological information" on the Article (click on the headline-link to view). Additionally, I attribute the biggest significance of this news to its being yet another entry in the "DTC becomes global" trend, now with Mumbai-based Super Religare Laboratories. This comment can be discussed at the FaceBook page of Andras Pellionisz]


Researchers uncover a new method of checking for skin cancer

01 July 2011 @ 11:41 am EDT

Medical Daily

[Skin Cancer? There is an app for it ... AJP]

Skin Scan, a Romanian startup, claims it has found a way to measure the risk a mole represents: a proprietary algorithm, combined with an iPhone image of the mole, estimates the likelihood that it represents skin cancer.

The app raises awareness of a particularly aggressive disease which kills when people aren't aware of the serious nature of a mole that might have developed from over-exposure to harmful sunlight.

"It is accepted that human tissues have a fractal-like structure." says P.hD Mircea Olteanu, brains behind the app, "Consequently, during the last decades scientists tried to classify different types of tumors by computing their fractal dimension and numerical characteristics... Skin Scan is a skin cancer prevention tool which tells users when to look for a professional medical investigation."

Commonly found in Caucasians (about 1 in 80), the disease is largely preventable by precautionary measures, and Skin Scan hopes to take that a step forward by getting people to check their moles more regularly - with an iPhone.

The app measures the mole's size and tracks various characteristics of its fractal nature - smoothness, colour and other irregularities - to keep an eye on the more serious ones.

If the app detects a change in the mole, it sends the user to the doctor.

"Skin Scan is a skin cancer prevention tool which tells users when to look for a professional medical investigation...Our team encourages the use of this modern technology that alerts users to seek medical help in time." continued Mircea Olteanu.

The team is run by Mihai Mafteianu of Cronian Labs in Romania. It’s headed up by CEO Victor Anastasiu, co-founder of Romanian seed fund SeedMoney.

The team secured $50,000 in startup funding; the app costs $6.50 on the iPhone App Store.

Download the Skin Scan app: http://itunes.apple.com/us/app/skin-scan/id434196122?mt=8&ls=1

Access the company's website: http://www.skinscanapp.com/index_3
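The "fractal dimension" the Skin Scan team invokes is conventionally estimated by box counting. Below is a minimal numpy sketch; it assumes the mole has already been segmented into a binary mask (in practice the hard part) and illustrates the general technique, not Skin Scan's proprietary algorithm.

    import numpy as np

    def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32)):
        """Estimate the box-counting (fractal) dimension of a 2-D boolean mask."""
        counts = []
        for s in sizes:
            # trim so the image tiles evenly, then count occupied s-by-s boxes
            h, w = (mask.shape[0] // s) * s, (mask.shape[1] // s) * s
            tiles = mask[:h, :w].reshape(h // s, s, w // s, s)
            counts.append(max(int(tiles.any(axis=(1, 3)).sum()), 1))
        # slope of log(count) against log(1/size) estimates the dimension
        slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
        return slope

    # demo: a filled disc should come out close to dimension 2
    yy, xx = np.mgrid[:256, :256]
    disc = (xx - 128) ** 2 + (yy - 128) ** 2 < 100 ** 2
    print(round(box_counting_dimension(disc), 2))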

[Both the utility of mobile devices and the fractality of tissues (as well as the fractality of the genome) have been pioneered since 2002; see FractoGene and the Barcode Shopping App YouTube. Fractals are eminently recursive algorithms, however, and thus FractoGene used to be a "lucid heresy", violating Crick's "Central Dogma" (while Crick was still alive, forbidding protein-to-DNA recursion) as well as Ohno's "Junk DNA" dogma (which Ohno actually meant seriously in 1972, such that any recursion into the "Junk" would be pointless; his 4.5-page meeting abstract deteriorated into a ludicrous misnomer, now "seriously" upheld only by an occasional lonely detractor unable to produce a properly rounded percentage number, let alone understand the advanced mathematics of postmodern genome informatics). Obsolete dogmas had to be demolished once Crick was gone and the ENCODE results were published (2007), clearing the path towards The Principle of Recursive Genome Function (see the peer-reviewed science paper and its popularization in the Google Tech Talk YouTube, both in 2008). The YouTube shows at min. 31 how aberrant methylation of perused auxiliary information (from the "Junk") results in uncontrolled fractal growth - the genome misregulation diseases a.k.a. cancer. Today there is a veritable groundswell of observations, papers and, as "the tip of the iceberg", an already deployed $6.50 app to help prevent cancer - based on the realization that the "double lucid heresy" was crazy, and actually crazy enough to be true! (Paraphrasing the Copenhagen Group's verdict on Niels Bohr's thesis: "We all agree that your theory is crazy - the only question is whether it is crazy enough to be true.") HolGenTech, Inc. now focuses on cancer, where the fractal genome, because of its "fractal defects", violates the genome's own mathematical rules, and the resulting genome misregulation produces clearly fractal, uncontrolled cancerous growth. In addition to several papers covered in this column, below is a list of recent observations and findings substantiating the case:

http://www.ncbi.nlm.nih.gov/pubmed/21319994

http://www.ncbi.nlm.nih.gov/pubmed/21514387

http://lambda.qsensei.com/content/1pmhrw

http://www.biomedcentral.com/1471-2407/10/260

http://www.cancertherapyblog.com/cancer-news/fractal-dimension-analysis-helps-breast-cancer-prognosis/

This entry can be discussed on the FaceBook page of Andras Pellionisz]


How accurate is the new Ion Torrent genome, really?

Genetic Future
By Daniel MacArthur July 21, 2011

New sequencing technology Ion Torrent has made a splash with a paper in today’s issue of Nature. There’s no question the high-impact publication is a massive boost for the young platform, now nestled within the embrace of the giant Life Technologies (who acquired the startup for a surprisingly large price last August) and bracing for the impending launch of its most serious competitor, Illumina’s MiSeq.

The paper jumps the new platform through the standard hoops: some basic kicking-the-wheels, a test run on three bacterial genomes (Vibrio fischeri, Escherichia coli, and Rhodopseudomonas palustris), and then the traditional main event: the sequencing of a complete human genome. The genome in question is that of Intel co-founder Gordon Moore, the eponymous originator of Moore’s Law. There’s some pleasing symmetry here: Moore’s Law is frequently cited in the context of the massive decline in the costs of DNA sequencing; in addition, the Ion Torrent technology is based on the same kind of semiconductor technology pioneered by Moore. Refreshingly, the paper refers to Moore by name, which is a pleasant change from the rather affected pseudo-anonymity of other published genomes (e.g. Patient Zero).

Anyway I’m not going to comment at all here on the technical and bacterial work, which I have no doubt will be covered in detail by my esteemed colleagues Keith Robison and Nick Loman. My main interest in this paper is what it tells us about the ability of Ion Torrent as a potential platform for large-scale sequencing of human genomes, and a rival to current sequencing market leader Illumina. I also want to spend some time berating the authors of the paper for a thoroughly misleading piece of statistical sleight-of-hand that makes their accuracy numbers sound far better than they actually are.

What did they do?

The company sequenced Moore’s genome using their technology to an average coverage of 10.6x. This just means that on average each base in the genome was covered by 10.6 separate Ion Torrent reads, albeit with substantial variation: some bases had lots more reads, and some had fewer. You can see the distribution of read counts per base (in red), compared with the ideal distribution (a Poisson distribution, in green) in Figure 4b of the paper – I’ve copied a thumbnail to the right. It’s clear that there are plenty of positions in the genome with substantially less than 10 reads.
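The gap between the idealized and the observed coverage is easy to quantify: under a pure Poisson model at the reported 10.6x mean, essentially every base would be covered at least once, so the observed 99.21% is itself a sign of the overdispersion visible in the figure. A quick check with only the standard library:

    import math

    def poisson_at_least(k, lam):
        """P(X >= k) for X ~ Poisson(lam)."""
        return 1.0 - sum(math.exp(-lam) * lam ** i / math.factorial(i)
                         for i in range(k))

    lam = 10.6  # mean coverage reported in the paper
    print(f"P(depth >= 1)  = {poisson_at_least(1, lam):.5f}")   # ~0.99998 vs the observed 99.21%
    print(f"P(depth >= 10) = {poisson_at_least(10, lam):.3f}")  # many bases fall below the 10-read mark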

Let’s be very clear about this up front: by modern standards, this is a poor-quality genome. An average coverage of 10x means that most positions in the genome will be covered by at least one read – 99.21%, in this case – but in many of those locations, the number of reads will be too low to have any chance of accurately calling a heterozygous SNP (a base change where both different versions are present, one on the maternal and one on the paternal chromosome). This isn’t a function of the raw data quality – it’s simply a statistical consequence of sampling error at small sample sizes, that can only be overcome by additional sequencing.
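That sampling argument can be made precise. Assume, purely for this sketch, that a heterozygous call requires seeing each allele at least twice and that reads sample the two chromosomes 50/50 (real callers are more sophisticated, but the shape of the problem is the same):

    import math

    def p_het_detectable(depth, min_reads=2):
        """P(both alleles observed >= min_reads times) among `depth` reads,
        each read drawn from either chromosome with probability 1/2."""
        return sum(math.comb(depth, a) * 0.5 ** depth
                   for a in range(min_reads, depth - min_reads + 1))

    for d in (4, 6, 8, 10, 15, 30):
        print(f"depth {d:2d}: P(het detectable) = {p_het_detectable(d):.3f}")

    # average over Poisson-distributed per-base depth at 10.6x mean coverage
    lam = 10.6
    avg = sum(math.exp(-lam) * lam ** d / math.factorial(d) * p_het_detectable(d)
              for d in range(60))
    print(f"averaged over Poisson({lam}) depths: {avg:.3f}")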

It’s also an extremely expensive genome: even at this low coverage the sequencing burned through around 1,000 Ion Torrent chips, and in an NY Times piece yesterday sequencing guru George Church estimated the total cost of this project at around $2 million. That would be substantially lower at today’s prices, but still north of $200,000 for a poor-quality genome compared to less than $5,000 for a high-quality sequence from Complete Genomics. The yield of the Ion platform (in terms of bases per dollar) is of course going up rapidly, but I think it’s important to emphasise that Ion Torrent is not yet a remotely competitive technology for affordable whole human genome sequencing.

So how accurate is the genome sequence, really?

The authors attempted to explicitly estimate their error rate by sequencing Moore’s genome a second time using an independent technology: in this case, Life Technologies’ SOLiD platform, to a total coverage of around 15x. (The higher depth of the SOLiD sequencing understates the far higher yield from that platform compared to Ion Torrent; for this paper the authors ran over 1,000 chips on the Ion Torrent, whereas the SOLiD coverage was presumably achieved in a single run.) 15x coverage isn’t much better than 10x, so the SOLiD sequence would be expected to be missing plenty of heterozygous sites as well.

So, the authors have two separate low-coverage genomes, both of which would be expected to be missing plenty of SNPs – that means we would expect to see plenty of sites that differ between the two sequences (reflecting changes that by chance were detected by one platform but missed by the other). Yet the paper appears to cite a “validation rate” for the SNPs called by the Ion Torrent that is implausibly high:

To confirm the accuracy of our analysis, we also sequenced the G. Moore genome using ABI SOLiD Sequencing43 to 15-fold coverage and validated 99.95% of the heterozygous and 99.97% of the homozygous genotypes (Supplementary Tables 1 and 2). [my emphasis]

There’s absolutely no conceivable way that a comparison between a 10x genome sequence and a 15x genome sequence could possibly result in a “validation rate” of 99.95% for heterozygous sites, at least not for any reasonable definition of the term “validation rate”. It takes some digging in the supplementary data to figure out what’s going on here. This is the definition of the term in the legend of Table S2, where the metric is referred to as the “percent same genotype”:

In cases where both datasets call the same type of SNP (heterozygote or homozygous variant) the proportion for which the genotype call is the same

The only way I can parse that sensibly is as follows: for sites that are called as heterozygous in both the Ion Torrent and SOLiD data, the “validation rate” is the proportion where the same two alleles are present. In other words, non-validated sites would only be sites where both platforms called a heterozygous SNP, but one platform said it was an A/G SNP while the other said it was an A/C SNP.

This is a near-useless metric, and does not correspond to any meaningful definition of the term “validation rate”. It gives us no information about what we actually want to know about, the proportion of sites where a SNP is called by one platform but not by the other – those are simply excluded from the comparison entirely. This is simply a measure of the platform’s ability to call the correct non-reference base at sites that are genuinely polymorphic, something that would be extremely high for virtually any half-decent sequencing technology. The only useful thing this metric does is provide a percentage with lots of convincing nines in it, which I’m sure the investors love, but I’m seriously perplexed that it managed to sneak past the manuscript reviewers.

Let’s take a more sensible definition of the term “validated”: for instance, let’s say it’s the proportion of sites called as heterozygous by Ion Torrent that also show some evidence of variation in SOLiD (we’ll generously say that the variant can be either homozygous or heterozygous in the SOLiD calls). Using this more plausible definition, the validation rate for Ion Torrent SNPs is just 88.0% at homozygous sites and 84.4% at heterozygous sites.
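The contrast between the paper's metric and this more sensible one is easy to express in code. The call sets below are hypothetical toy dictionaries mapping position to genotype, not data from the paper:

    def percent_same_genotype(calls_a, calls_b):
        """The paper's metric: among sites BOTH platforms call as variant,
        the fraction where the genotypes agree; presence/absence
        discordance is silently excluded."""
        shared = [pos for pos in calls_a if pos in calls_b]
        if not shared:
            return float('nan')
        return sum(calls_a[pos] == calls_b[pos] for pos in shared) / len(shared)

    def support_rate(calls_a, calls_b):
        """A more meaningful 'validation rate': the fraction of platform A's
        calls at which platform B shows any variant at all."""
        if not calls_a:
            return float('nan')
        return sum(pos in calls_b for pos in calls_a) / len(calls_a)

    ion   = {100: 'A/G', 250: 'A/C', 400: 'C/T', 900: 'G/T'}  # hypothetical calls
    solid = {100: 'A/G', 400: 'C/T'}                          # misses two of the sites
    print(percent_same_genotype(ion, solid))  # 1.0 - lots of reassuring nines
    print(support_rate(ion, solid))           # 0.5 - the number that actually matters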

Ion Torrent could no doubt argue that this calculation is unfair to them: in many (probably most) cases, a discrepancy between Ion Torrent and SOLiD will be due to SNPs that were missed by the SOLiD technology, and thus aren’t really errors made by Ion Torrent. This is absolutely true, and in response I say: so do a proper job of validating your variants. Being a part of Life Technologies, one might imagine, should give the chaps at Ion Torrent a decent amount of access to SOLiD machines, and one more run of Moore’s genome on a SOLiD 4 would have given a far cleaner genome sequence for comparison. LIFE might even have one or two of those old capillary sequencing machines around that they used to sell: just 100-200 targeted capillary reactions around sites discrepant between the Ion Torrent sequence and a high-quality SOLiD sequence would have given plenty of data for an accurate estimation of the platform’s real false positive and false negative rates.

Lack of proper validation is even more of an issue for larger structural variants. Here the authors steer clear of attempting to discover new variants, focusing instead on figuring out whether Moore carries any of the known structural variants called by the 1000 Genomes pilot project (PDF). Of 7,565 large deletions and inversions found by 1000 Genomes, the authors find evidence for 3,413 of them in Moore’s genome. That seems like a surprisingly large proportion to me, and it’s unclear how many of these calls are real: the authors report the results of a simulation using random genomic regions to estimate that 99.94% of their called events are real, but this number is not particularly meaningful as true deletion breakpoints are not well-represented by random chunks of the genome. And here there is absolutely no experimental evidence brought to bear – for instance, as far as I can tell no attempt was made to see how many of these apparent deletions also showed support in the SOLiD data, and certainly no attempt to independently validate the variants using a simple PCR assay.
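
Even a cheap computational cross-check would have been something. As a minimal sketch of what I mean (illustrative only - serious structural-variant validation would use split reads or a PCR assay, and the threshold here is an arbitrary assumption): flag a known deletion as supported in a second dataset only if read depth across the interval drops well below the genome-wide average.

    import numpy as np

    def deletions_supported(deletions, depth, factor=0.5):
        # deletions: (start, end) intervals from an external call set.
        # depth: per-base read depth from the second platform.
        # A heterozygous deletion should roughly halve the local depth,
        # so factor=0.5 is a crude cut-off.
        mean_depth = depth.mean()
        return [(s, e) for s, e in deletions
                if depth[s:e].mean() < factor * mean_depth]

Anything failing even this lenient screen would be a prime candidate for targeted PCR follow-up.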

All in all, a disappointing showing. This clearly isn’t a great genome sequence – it simply can’t be at 10x coverage, no matter how good the raw accuracy is – but the authors haven’t done enough experimental work to get a good sense of how accurate it really is. That means there’s very little we can say about the utility of Ion Torrent for whole human genome sequencing, apart from the fact that it’s currently too expensive to be practical.

What does Moore’s genome tell us about him?

Not much. The authors make a fairly cursory attempt at genome interpretation, pulling annotations from 23andMe’s database and OMIM, but their results aren’t particularly useful. That’s not a criticism, by the way: the point of this paper was demonstrating a sequencing technology, not a functional annotation pipeline. (Incidentally, 23andMe’s database was apparently used without any formal collaboration with the company, suggesting the researchers simply scraped the information from the company’s website: it’s intriguing to see one of the companies attacked by the FDA and Congress as “snake oil” being used as the go-to source for functional annotation.)

However, I note that the indefatigable Mike Cariaso has already run Moore’s genome through his interpretation pipeline Promethease – you can get the results here. It appears Moore has an increased risk of baldness (check), altered responses to various drugs, and a potentially highly elevated risk of age-related macular degeneration. However, nothing that he couldn’t have learnt from a 23andMe test, at less than 0.1% of the cost.

Where to next for Ion Torrent genomes?

This has been a pretty negative post, because I’ve focused solely on a section of the paper that – I’ll be frank – was done pretty badly. It’s not intended to be a critique of the Ion Torrent technology as a whole, and I’ll leave an evaluation of the technical merits of the platform to others who know it far better than I.

Still, I can’t help but wonder if Torrent made a mistake in including a human genome in this paper at all. I mean, I know it’s traditional, and sequencing Moore makes for some easy headlines, but the Torrent platform simply isn’t currently suited to whole-genome sequencing and won’t be until its yield improves substantially (there are clear signs in the paper that this is happening, albeit perhaps a little slower than we were promised). In sequencing a human genome with this early-stage, low-yield technology, Ion Torrent was forced into a dilemma of its own making: either spend an obscene amount of money to generate a high-quality sequence, or spend a simply lewd amount of cash to generate a crappy sequence. In the end they opted for the second approach, and I suspect they would have been better off simply leaving Moore’s genome out of the paper entirely.

In any case, I should emphasise that given the slow pace of publishing, this is a genome that was put together using the technology of maybe 12 months ago. There’s no question that Torrent technology has been improving over that time, and while it’s still not at the stage of competing with Illumina on cost right now, it’s certainly possible that this will be more viable in 12 months’ time. Hopefully the next genome sequence published using this technology comes complete with sufficient validation data to get a real impression of its quality.

[With TWO former Intel Presidents (the legendary Andy Grove and "Moore's Law" Gordon Moore) personally involved, with their own genomes, in "Genome Informatics", one may assume there is much talk at Intel about the future. As for the "present perfect", the Battelle Study (available on the web in free full text) already assessed the economic impact of the Human Genome Project at $796 Bn in the USA alone - with Intel taking a vital part in the investments into quick and affordable full DNA sequencing by Pacific Biosciences since July 14, 2008 (the first $100 M - a formative event in the history of "Genomics becoming Informatics"). It is unclear from the coverage in this article and the one below why Gordon Moore got fully sequenced twice (once by Ion Torrent's new platform, another time by the well-established SOLiD platform of the same company, Life Technologies) - and why Andy Grove is not on public record as having been fully sequenced by any platform (thus missing from the approximately 2,700 individual humans whose DNA has been fully sequenced to date). From the viewpoint of Intel as a company and lead investor, it certainly makes a lot of sense to cross-compare the existing and emerging "full DNA sequencer technologies" for e.g. accuracy, cost and speed - and for clinical applications (for instance, PacBio's capability of calling not only the nucleotides but also their methylation status). It follows that we might expect IT celebrities with genomic conditions (cancer, Parkinson's, Alzheimer's - already a list of about 30 serious hereditary conditions that remained unpublished by Newsweek) to lead the way. If I were Steve Jobs or Andy Grove, I could probably afford - and would be extremely keen on - including a "Full DNA Sequencing" in my "Annual Physical", presently costing about $5k and needing only a blood sample. I would also kindly but firmly demand a "before and after" full DNA sequencing for any therapy (say, chemo) with a propensity or requirement to do something about my diseased genome (at the very least monitoring its methylation pattern - by Illumina's inexpensive methylation-interrogation microarrays, if full DNA sequencing were deemed too costly). And if I were a former leader of a major IT company, with my opinion worth its weight in gold, I would probably not advise letting some other country grab, beyond the point of no return, the software development of modern genome informatics (ask me why). This entry can be discussed in the FaceBook page of Andras Pellionisz]


[Former Intel President] Grove Backs an Engineer’s Approach to Medicine

(see also similar coverage by Kristen Bole)

May 17, 2010, 4:52 PM

New York Times

By ANDREW POLLACK

Andrew S. Grove, the former chief executive of Intel, is taking the next step in his quest to infuse the engineering discipline of Silicon Valley into the development of new medical treatments.

Mr. Grove has pledged $1.5 million so that the University of California campuses in San Francisco and Berkeley can start a joint master’s degree program aimed at so-called translational medicine — the process of turning biological discoveries into drugs and medical devices that can help patients.

The idea is to expose students to both the engineering prowess of Berkeley and the medical research of San Francisco to train a new breed of medical innovator.

“What we have learned from decades of rapid development of information technology is that the key is relentless focus on ‘better, faster, cheaper’ — in everything,’’ Mr. Grove said in a statement. “The best results are achieved through the cooperative efforts of different disciplines, all aimed at the same objective.”

Mr. Grove first broached the idea of the joint-campus program in November.

Mr. Grove’s views reflect in part his personal frustrations with waiting for better treatments for prostate cancer, which he had, and Parkinson’s disease, which he has now.

“I have my own decade-plus experience with a number of diseases that have dozens of ways of curing mice if mice have them and don’t progress toward clinical implementation,’’ Mr. Grove said in an interview.

Translational medicine is indeed a big buzzword these days. Everyone seems to recognize that there is a gap in getting from “bench to bedside.’’ Various programs are being set up to try to speed the process. Interdisciplinary work is another buzzword and trend among medical researchers.

Clearly, medical innovation can stand to be improved. Spending by big pharmaceutical companies on research and development has roughly doubled in the last decade, without any increase in the number of new drugs getting to market.

Yet whether Mr. Grove has the right prescription is open to debate. Some medical researchers have ridiculed his criticisms of their work, saying it is simply not possible to apply the techniques of the electronics business to the far greater complexity of human biology. ["Is Andy Grove a Kook?" - some old-school biochemists who can't even produce properly rounded percentages, let alone come near to understanding a (now old) Itanium CPU consisting of 2 billion transistors, would probably put Andy Grove into the same category of ridicule as they did Barbara McClintock - whose Nobel arrived four decades delayed - AJP]

“Mr. Grove, you can print out the technical specs for your chips,’’ Derek Lowe, a pharmaceutical industry chemist, wrote in 2007. “We don’t have them for cells.’’

--

Comment (Pellionisz)

Harvard Medical School medicine was successfully united with MIT engineering to create the Broad Institute. With a $600 M charitable contribution from Eli and Edythe Broad – made after a hereditary and “incurable” (!) syndrome appeared in the family – a single frustrated venture-philanthropist family provided the wherewithal to put Informatics and Genomics together. Broad is under the directorship of Dr. Lander, whose first degree is in mathematics and who is now a Science Advisor to the President. The Lander et al. 2009 Science cover article amounted to the outcry “Mr. President, the Genome is Fractal!”. Andy Grove is certainly one of the world’s best experts on how to wrestle the 500-pound gorilla of Informatics – but seems to be frustrated by old-school medicine’s handling of his serious hereditary syndromes (prostate cancer and Parkinson’s). Mr. Grove is forcefully vocal about his conditions, and even more so about the need and extreme timeliness of seeking entirely new avenues. Now the Battelle Study (assessing that the Human Genome Project had a $796 Bn impact on the US economy alone) rests on the pillar that “Genomics turned into Informatics”. It would take just minutes for Andy Grove to verify that the genome is a code of Informatics for how to (mis)regulate the function of a massively parallel 2-bit dataflow “machine” – work totally impossible without massive deployment of the best defense computers. Assembling the full human DNA proved this thesis – BTW, a trivial note for Silicon Valley movers and shakers – over a decade ago. More in my Google Tech Talk YouTube under “Pellionisz”.


Loophole found in genetic traffic laws [Death Certificate of Crick Central Dogma - AJP]

Altered molecule causes protein-making machinery to run stop signs
By Tina Hesman Saey July 16th, 2011; Vol.180 #2 (p. 8)

Science News

Biology’s rules may be full of exceptions, but a new discovery has uncovered a violation in a rule so fundamental that geneticists call it the central dogma.

The molecular equivalent of writing one RNA letter in a different font can change the way a cell’s protein-building machinery interprets the genetic code, Yitao Yu and John Karijolich of the University of Rochester in New York report in the June 16 Nature. They found that occasional conversions of a genetic letter found in RNA into a slightly different form can cause a cell’s protein-building machinery to roll right through a stop sign.

That might seem like a run-of-the-mill molecular traffic violation, but it results in an entirely different protein than the one encoded by DNA — a clear violation of the central dogma.

The central dogma holds that DNA is the repository for all genetic instructions in a cell. The tenet declares that those instructions are carefully transcribed into multiple messenger RNA, or mRNA, copies, which are then read in three-letter chunks called codons by cellular machinery called ribosomes. Ribosomes then convert the mRNA instructions into proteins.

Yu and Karijolich studied pseudouridine, a slightly different version of the RNA component uridine. The enzymes that copy DNA to RNA and vice versa can’t tell the difference between the two components, but the subtle chemical tweak — akin to writing a letter in a hard-to-read, byzantine font — gives an entirely different meaning for the ribosome, the researchers suggest.

The result is “groundbreaking,” says Nina Papavasiliou, a molecular biologist at Rockefeller University in New York City. “It says that we don’t fully understand how ribosomes decode RNAs.”

That discovery could also mean that genes contain more information than scientists have realized, Papavasiliou says.

Pseudouridine is already known to be important for the function of many types of RNA in cells. Yu and Karijolich engineered a system to discover whether mRNAs containing the modified letter might also have a slightly different function than those with plain old uridine. The researchers created a flawed copper-detoxifying gene called CUP1 that contained an early signal to stop making protein. The team also created a system that would cause yeast cells to edit the mRNA, replacing the uridine in the stop codon with pseudouridine. If pseudouridine behaved just like uridine, then cells would prematurely halt production of the detoxifying protein and wouldn’t be able to grow in the presence of copper.

Yeast cells that replaced uridine in the stop sign with pseudouridine could grow on copper, the researchers found. Looking more closely, the team found that instead of reading the stop sign as stop, ribosomes interpreted the pseudouridine-containing codon as an instruction to insert the amino acids serine, threonine, phenylalanine or tyrosine into the protein.
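
In computational terms, the reported behavior is a conditional override of the translation stop rule. A toy sketch (purely illustrative; "X" stands in for the serine, threonine, phenylalanine or tyrosine actually observed, and the codon table is truncated):

    def translate(mrna, codon_table, pseudouridylated=frozenset()):
        # Toy ribosome: terminate at a stop codon UNLESS that codon's
        # position is flagged as carrying pseudouridine, in which case an
        # amino acid is inserted and translation continues (readthrough).
        protein = []
        for i in range(0, len(mrna) - 2, 3):
            codon = mrna[i:i + 3]
            if codon in ("UAA", "UAG", "UGA"):
                if i in pseudouridylated:
                    protein.append("X")  # readthrough instead of stopping
                    continue
                break
            protein.append(codon_table.get(codon, "?"))
        return "".join(protein)

    table = {"AUG": "M", "UGU": "C", "CCU": "P"}
    mrna = "AUGUGUUAGCCU"  # premature UAG stop after two codons
    print(translate(mrna, table))                        # MC - truncated
    print(translate(mrna, table, pseudouridylated={6}))  # MCXP - full length

This is, in effect, the behavior the engineered CUP1 system reads out as copper-dependent growth.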

That choice of amino acids by the ribosome has biologists reeling, because those aren’t even the amino acids usually chosen when the protein factories do occasionally run stop signs.

“When you know the literature, you would expect other [amino acids],” says Henri Grosjean, a biochemist and geneticist at the University of Paris-South.

Apparently ribosomes haven’t read those papers.

Whether pseudouridine plays a part in changing the genetic code in nature remains to be seen, but researchers are betting that it does. The implications for health and disease could be great, says Juan Alfonzo, a molecular biologist at the Ohio State University. Pseudouridines may be required to make some proteins correctly, but “misplacing a pseudouridine could make things a physiological mess,” he says, causing some proteins to have flaws, even fatal ones.

And Yu and Karijolich’s technique might be used to fix genetic errors, too. Introducing stop sign–busting pseudouridine into an RNA may one day help people with rare genetic diseases in which one of their genes contains an early stop codon, Alfonzo says.

[Here you go, again. Crick's infamous "Central Dogma of Molecular Biology", totally falsely pontificating that sequence information NEVER recurses from proteins and RNA to DNA, was immediately laughed at by Nobelists Jacob and Monod as far back as half a century ago (1961). "Facts don't kill theories", however - some lonely morons (who can not even produce properly rounded numbers!) could still dismiss this experimental discovery as "not fitting into their ideology". Obsolete theories are killed by superior theories - in this case the peer-reviewed paper The Principle of Recursive Genome Function and its popularization in the Google Tech Talk YouTube. - This entry can be discussed in the FaceBook page of Andras Pellionisz]


Editing the genome - Scientists unveil new tools for rewriting the code of life

BOSTON, MA (July 14, 2011) — The power to edit genes is as revolutionary, immediately useful and unlimited in its potential as was Johannes Gutenberg's printing press. And like Gutenberg's invention, most DNA editing tools are slow, expensive, and hard to use—a brilliant technology in its infancy. Now, Harvard researchers developing genome-scale editing tools as fast and easy as word processing have rewritten the genome of living cells using the genetic equivalent of search and replace—and combined those rewrites in novel cell strains, strikingly different from their forebears.

"The payoff doesn't really come from making a copy of something that already exists," said George Church, a professor of genetics at Harvard Medical School who led the research effort in collaboration with Joe Jacobson, an associate professor at the Media Lab at the Massachusetts Institute of Technology. "You have to change it—functionally and radically."

Such change, Church said, serves three goals. The first is to add functionality to a cell by encoding for useful new amino acids. The second is to introduce safeguards that prevent cross-contamination between modified organisms and the wild. A third, related aim, is to establish multi-viral resistance by rewriting code hijacked by viruses. In industries that cultivate bacteria, including pharmaceuticals and energy, such viruses affect up to 20 percent of cultures. A notable example afflicted the biotech company Genzyme, where estimates of losses due to viral contamination range from a few hundred million dollars to more than $1 billion.

In a paper scheduled for publication July 15 in Science, the researchers describe how they replaced instances of a codon — a DNA "word" of three nucleotide letters — in 32 strains of E. coli, and then coaxed those partially-edited strains along an evolutionary path toward a single cell line in which all 314 instances of the codon had been replaced. That many edits surpasses current methods by two orders of magnitude, said Harris Wang, a research fellow in Church's lab at the Wyss Institute for Biologically Inspired Engineering who shares lead-author credit on the paper with Farren Isaacs, an assistant professor of molecular, cellular and developmental biology at Yale University and former Harvard research fellow, and Peter Carr, a research scientist at the MIT Media Lab.

In the genetic code, most codons specify an amino acid, a protein building block. But a few codons tell the cell when to stop adding amino acids to a protein chain, and it was one of these "stop" codons that the Harvard researchers targeted. With just 314 occurrences, the TAG stop codon is the rarest word in the E. coli genome, making it a prime target for replacement. Using a platform called multiplex automated genome engineering, or MAGE, the team replaced instances of the TAG codon with another stop codon, TAA, in living E. coli cells. (Unveiled by the team in 2009, the MAGE process has been called an evolution machine for its ability to accelerate targeted genetic change in living cells.)
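
In silico, the TAG-to-TAA swap itself is simple bookkeeping - the hard part, which MAGE solves, is making the edits in living cells. A minimal sketch of the bookkeeping (my own illustration; it assumes forward-strand gene coordinates with the stop codon occupying the last three bases of each coding sequence):

    def recode_stop_codons(genome, cds_intervals, old="TAG", new="TAA"):
        # cds_intervals: (start, end) 0-based, half-open coordinates of
        # coding sequences; the final triplet of each is its stop codon.
        seq = list(genome)
        replaced = 0
        for start, end in cds_intervals:
            if "".join(seq[end - 3:end]) == old:
                seq[end - 3:end] = list(new)
                replaced += 1
        return "".join(seq), replaced

For E. coli, the replaced counter would come out at 314.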

While MAGE, a small-scale engineering process, yielded cells in which TAA codons replaced some but not all TAG codons, the team constructed 32 strains that, taken together, included every possible TAA replacement. Then, using bacteria's innate ability to trade genes through a process called conjugation, the researchers induced the cells to transfer genes containing TAA codons at increasingly larger scales. The new method, called conjugative assembly genome engineering, or CAGE, resembles a playoff bracket—a hierarchy that winnows 16 pairs to eight to four to two to one—with each round's winner possessing more TAA codons and fewer TAG, explains Isaacs, who invokes "March Madness."
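
The bracket arithmetic of CAGE is just as easy to mimic. In this sketch (an illustration of the merging logic only - real conjugation transfers chromosomal segments between living cells, not Python sets), each strain is represented by the set of codon positions it has already recoded:

    def cage_round(strains):
        # One playoff round: merge neighbouring pairs of strains,
        # pooling the recoded positions each one carries.
        return [strains[i] | strains[i + 1]
                for i in range(0, len(strains), 2)]

    # 32 partial strains covering disjoint subsets of 314 TAG positions
    strains = [set(range(i, 314, 32)) for i in range(32)]
    while len(strains) > 1:  # 32 -> 16 -> 8 -> 4 -> 2 -> 1
        strains = cage_round(strains)
    assert strains[0] == set(range(314))  # one fully recoded strain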

"We're testing decades-old theories on the conservation of the genetic code," Isaacs said. "And we're showing on a genome-wide scale that we're able to make these changes."

Eager to share their enabling technology, the team published their results as CAGE reached the semifinal round. Results suggested that the final four strains were healthy, even as the team assembled four groups of 80 engineered alterations into stretches of the chromosome surpassing 1 million DNA base pairs. "We encountered a great deal of skepticism early on that we could make so many changes and preserve the health of these cells," Carr said. "But that's what we've seen."

The researchers are confident that they will create a single strain in which TAG codons are completely eliminated. The next step, they say, is to delete the cell's machinery that reads the TAG gene — freeing up the codon for a completely new purpose, such as encoding a novel amino acid.

"We're trying to challenge people," Wang said, "to think about the genome as something that's highly malleable, highly editable."

[Starting from the conclusion of the authors' Press Release, the philosophical message is: "The Genome is NOT your Destiny". This drastically altered world-view, introduced by Genome Informatics, is similar in scope to the Heisenberg "Principle of Uncertainty" - which forever changed the formerly deterministic physics into probabilistic physics. As emphasized in the 2008 Google Tech Talk YouTube, the radically different new philosophy instills scientific reason for hope ("The Principle of Recursive Genome Function" presented as "The Circle of Hope"), replacing the gloomy former attitude that "Your Genome was Your Destiny" - both a reason for gloom and doom and an excuse for avoiding any genome testing, in the (mistaken) belief that "there is nothing one can do about it". Beyond the philosophical world-view and the attitude of the masses, the practical application of the paradigm-shift is also evaluated (by some interesting metaphors) in various leading journals (New York Times, Los Angeles Times, Forbes, etc., etc.). Let's just start with the metaphors used in the authors' Press Release. Is the new breakthrough like "Gutenberg's Press", or like "The Word Processor (software)"? In terms of "Synthetic Biology", entirely new "texts" can be composed of the A, C, T, G letters. Most exciting! In terms of "The Principle of Recursive Genome Function", however (see one follower below), an even more immediate - and patently useful - "word processing" could in time be performed in vivo for humans: by "replacing" e.g. the "fractal defects" of the genome with snippets that do obey the genome's own fractal mathematics - so that the fractal iterative recursion of genome regulation no longer "hiccups" on the glitches! Those familiar with (man-made) software know that the instructions are first run through a "Syntax Checker", to see if all instructions fully conform with the formal requirements. Suppose an instruction is found to contain a "structural variant" (gibberish); the coder uses "global replace" throughout the code to straighten out all such glitches. (Those familiar with code also know that impeccable syntax is a necessary but not sufficient condition for any software to run - but actual compilation could never take place before all instructions obey the rules.) The articles below, on "replacing" codons (or even just a single nucleotide, in the case of Progeria...), show the enormous potential power of these breakthroughs for an IT-led "New Pharma". The issue can be debated in Andras Pellionisz' FaceBook page. - AJP]


Sequence Variability and Sequence Evolution:

An Explanation of Molecular Polymorphisms and Why Many Molecular Structures Can Be Preserved Although They Are Not Predominant

DNA AND CELL BIOLOGY
Volume 29, Number 10, 2010
Mary Ann Liebert, Inc.
Pp. 571-576
DOI: 10.1089/dna.2009.094

Borros M. Arneth

Institute of Clinical Chemistry and Laboratory Medicine, Johannes Gutenberg University, Mainz, Germany

The existence of many processes that regulate RNA expression poses a challenge to the idea that the cell is the culmination of a highly efficient interplay of individual proteins, each with specific, highly specialized functions.

It will be demonstrated here the extent to which the cell may undergo evolutionary processes that also occur in the macrocosmos, specifically with reference to the rules of mutation and preservation. These molecular evolutionary processes could facilitate a better understanding of the development of molecular structures and the functioning of the cell and could give an explanation of the molecular polymorphisms and also explain why many molecular structures can be preserved although they are not predominant...

Pellionisz (2008) describes the principle of recursive genome function and how DNA is affected by this recursion through proteins.

[See the 2008 paper in full here - AJP]


Clue to kids' early aging disease found [The Colossal Paradigm-Shift - AJP]

By Madison Park, CNN

July 1, 2011 12:08 p.m. EDT

[Dr. Francis Collins wrote the book on the colossal paradigm shift; Personalized (Genomic) Medicine - AJP]

(CNN) -- Her name was Meg, 23, featherweight and feisty.

Standing 3 feet tall, Meg didn't look like her peers. Bald and skinny, her body was aging rapidly because she had a rare genetic disease called Hutchinson-Gilford progeria syndrome.

People with progeria wrinkle and develop the same circulation and joint ailments as the elderly -- except most of them die by age 13.

Progeria affects 200-250 children worldwide, but research into the disease could offer clues on cellular function and how it affects human aging and other age-related diseases.

This week, a study about a possible treatment was published in Science Translational Medicine. Dr. Francis Collins, director of the National Institutes of Health, is one of the authors.

About 30 years ago, Collins, then a young Yale University doctor, met Meg. He realized there was little he could do for his patient, but he couldn't look away.

"It was compelling to try to understand why someone's body is melting away in the ravages of age," he said. "You couldn't be involved without marveling at it and wanting to do something to understand the situation."

Collins offered his concern and compassion, but there was no treatment for her disease.

Despite her grave prospects and appearance, Meg did not shy away from the public eye. Instead, she became an outspoken advocate for disabled people in Milford, Connecticut.

Long before it became customary to do so, "She got that town to become friendly to the disabled," Collins said. "She made it happen."

Just because she was diminutive, it didn't mean people could step all over her. Meg could also "curse like a sailor" in her birdlike voice, he said.

Meg Casey died in 1985, but she never faded from the doctor's memory.

Collins' role as a geneticist is to decode the most complex puzzles of human life. He is best known as a leader of the Human Genome Project that mapped and sequenced the human DNA.

The mystery of progeria remained one of his interests. Collins and seven others are authors of a study that found an immune suppressing drug, called rapamycin, could possibly treat progeria.

There have been no approved drugs or treatment to slow the course of the disease.

Children with this rare genetic condition lose their hair as infants, while they're learning to talk. Their minds develop normally, but their bodies age rapidly.

As toddlers, their skin begins to wrinkle and sag. Most of them die of age-related causes, like heart disease, heart attack or stroke, before they start high school.

Clues found in mysterious childhood aging disease

The cause: A single letter in a progeria patient's genome is out of place. This genetic defect causes the child to accumulate too much of a toxic protein called progerin and the cells can't get rid of it. [The single-letter clue was actually found in 2003: Vastag B (2003) Cause of progeria's premature aging found; expected to provide insight into normal aging process. JAMA 289:2481-2482 - AJP]

"Cells have a normal way of removing byproducts," said Dr. Dimitri Krainc, an author in the study. "You accumulate trash and you take it out. That's what happens in cells. As they work, they start accumulating byproducts. There has to be a system to remove those byproducts."

Progerin is seen in small amounts in healthy people's cells as they begin to age. The difference is that healthy cells can get rid of the damaged molecules and unneeded proteins.

The researchers chose to test rapamycin on cells from progeria patients because research suggested its effectiveness in extending the lives of mice. Rapamycin is an immune-suppressing drug given to transplant recipients to prevent organ rejection.

Krainc, an associate professor of medicine at Harvard University, said cells from progeria patients look "very sick."

"When you treat them with rapamycin, it looks normal," he said. The drug appeared to activate a cellular system that removes the waste.

"Rapamycin or similar drugs of that same class are capable of revving up that cleanup system," Collins said.

But the drug comes with major risks because it raises cholesterol levels and suppresses the immune system, making patients more vulnerable to infections. Rapamycin was derived from bacteria found in the soil of Easter Island in the 1960s.

The results of the progeria study triggered discussions about a potential human clinical trial.

"I can't say what the drug will do before clinical trial," Krainc said. "We're very hopeful, because of the dramatic effect (in cells) was replicated."

The protein buildup in cells is seen in other diseases, such as Alzheimer's and Parkinson's disease. Alzheimer's patients have tau protein tangles and another protein called the beta amyloid plaques in their brains. Parkinson's patients have a buildup of a protein called alpha-synuclein.

Some scientists hypothesize that the cell's inability to dispose of unnecessary protein as humans age is what could lead to severe illnesses.

"This is a fundamentally important pathway by which cells maintain their own health," Collins said. "Yet as we age, we don't do it quite as well and the buildup starts to happen."

The Progeria Research Foundation, a nonprofit that supports affected families and promotes scientific research on the disease, supplied tissue and cell samples for the study.

Dr. Leslie Gordon, the foundation's medical director and a mother of a boy with progeria, said scientists are considering the use of RAD001, a modified version of rapamycin that has fewer side effects in a potential human trial.

"Nothing is in place," she said. "These things take instructional and federal scrutiny. We're considering this based on this very study."

For years, progeria research languished as another orphan disease. Rare diseases struggle to attract attention and research dollars because they affect very few people.

Drug development favors common diseases that could lead to blockbuster medications. This leaves patients and families of orphan diseases with little support or medical options.

"The first thing that we discovered after my son was diagnosed was there was nothing out there to help researchers to do research -- no cells or tissue, no clinical information bank, no outreach for families," said Gordon. "There was nothing to work with."

The foundation established a cell and tissue bank and started efforts to identify more progeria patients. It is involved in a different clinical trial involving a three-drug cocktail to treat progeria.

It hasn't been easy trying to bring attention and resources to a rare disease, Gordon said.

"For all the times people could not help, there are people like Francis (Collins) who say yes," Gordon said. "When we approached him, it was the fact that he cared and the fact that it bothered him that we didn't have the answer."

"He's not only interested in science, he's interested in people."

Collins has met her young son, Sam.

Patients with progeria are enthusiastic, precocious and embrace life, Collins said.

"They're taking what most of us look at as a really discouraging situation and saying, 'No, darn it. I'm going to make the most of it. If I have a shortened time being here, I'm not going to waste it feeling sorry for myself.

"I'm going to make the most of it.'"

[A single-letter glitch in the DNA (one that, in the wrong place, could make a codon produce an inappropriate and/or toxic protein) can, as elaborated in e.g. Dr. Collins' book, be dealt with theoretically in two entirely different manners. One is to CURE the DNA by patching the erroneous code (see early animal research results from e.g. Dr. George Church's lab in the news below). The traditional ways and means of Big Pharma aim at THERAPY by compounds that, as Dr. Collins elaborates in his book, may or may not work in individual cases - plus could have serious side-effects in SOME, causing the FDA to withhold or pull a compound such that it might never become an approved drug (as the FDA is mandated by generation-old legislation to apply a "one drug fits all" - scientifically clearly obsolete - criterion). The paradigm-shift towards "New Pharma" (led by genome informatics, to understand how the genome malfunctions and/or is misregulated e.g. in the case of cancer, how to measure efficacy in genomic results - not in often-too-late protein results - and how to modify the genome such that "The Genome is NOT your Destiny") is, indeed, colossal. The issue can be debated in Andras Pellionisz' FaceBook page. - AJP]


Researchers Use Genome Editing Methods to Swap Stop Codons in Living Bacteria

July 14, 2011
Genomeweb

By Andrea Anderson

NEW YORK (GenomeWeb News) – In Science today, an international research group led by investigators at Harvard University and the Massachusetts Institute of Technology reported that they have advanced their genome editing technology, using these tools to develop Escherichia coli strains in which one stop codon has been replaced by another.

"We're able to, at a genome-wide scale, make codon replacements for an entire codon across the whole genome," co-first author Farren Isaacs, who performed the research as a post-doctoral researcher at Harvard University and is currently a researcher at Yale University, told GenomeWeb Daily News. "Basically we use living cells as a template and we make changes within the living cells."

To do this, the team split the E. coli genome into dozens of pieces and then used a method called multiplex automated genome engineering, or MAGE, to introduce codon changes to each region in different E. coli cultures, trading the guanine nucleotide in the TAG stop codon for an adenine to make the synonymous stop codon TAA. From there, they came up with a hierarchical "conjugative assembly genome engineering" (CAGE) strategy to amalgamate these changes so that increasingly large stretches of the genome were recoded in each intermediate strain.

At the moment, the team has four E. coli strains that they plan to use to generate a single strain in which every TAG has been converted to TAA.

"These four strains, which contain up to 80 modifications per genome, can be combined to complete the assembly of a fully recoded strain containing all 314 TAG-to-TAA codon conversions," the study authors wrote.

The long-term objectives of such experiments are to develop the technology to make large-scale changes to genomes and introduce new functions into organisms, Isaacs explained, and, eventually, to create organisms with new genetic codes, including those capable of producing proteins from amino acids not currently found in nature.

"That could lead to entirely new classes of drugs, industrial enzymes, biopolymers that could be used to make new types of materials, and so on," he said, noting that similar strategies could also be used to genetically isolate organisms, thwarting potential viral pathogens, and to contain genetically engineered organisms within restricted environments.

Members of the team previously used MAGE to reprogram a biosynthesis pathway in E. coli cells leading to enhanced production of the compound lycopene — work that they reported in Nature in 2009.

Now, researchers have shown that they can build on this method, putting together pieces of the genome containing MAGE-induced changes to produce bacterial strains with specific codon changes across larger and larger swaths of the genome.

The 20 amino acids and "stop" signal are encoded by 64 triplet nucleotide sequences, meaning there are more than three times as many codons as there are functions for which they code.

The researchers were able to exploit this redundancy in the genetic code, Isaacs explained, trading the stop codon TAG, which usually appears 314 times in the E. coli genome, for another stop codon, TAA, that's recognized by a different release factor during translation.

The team first introduced targeted alterations into 32 E. coli cultures by MAGE using oligonucleotides containing the desired changes, Isaacs said, gauging the functional consequences, if any, along the way.

"We decided to pursue a strategy whereby we would divide up the strains and quickly introduce all of these changes to small pools to, one, verify that they're viable and, two, be able to detect and quantify phenotypes," he explained. "It was important to be able to obtain enough resolution on the changes that we're making to see if any of them led to any sort of strange phenotype."

They then merged the MAGE-produced alterations into their final four intermediate strains using CAGE. Coupled with selection, this hierarchical genome engineering method let the researchers transplant well-defined stretches of DNA from one genome to another without introducing unintended changes to the recipient genome.

The researchers are now continuing to use CAGE to take the E. coli strains with the most extensive recoding to the next stage — assembling a strain in which every TAG in the genome has been converted to TAA.

In the future, they also plan to try trading out other codons, including those coding for amino acids. The team will likely attempt to use their codon replacement strategy in other bacterial species, Isaacs noted, and, perhaps, in eukaryotic cells as well.

"Our methods treat the chromosome as both an editable and an evolvable template," the researchers wrote, "permitting the exploration of vast genetic landscapes."

[Look for a piece of news that appeared a few minutes after this one. (Should be shown here ASAP). Putting 1+1 together, you'll see crystal clear why and how we are facing the biggest paradigm-shift of science & technology of all times. The issue will be up for debate in Andras Pellionisz' FaceBook page. - AJP]


The Mathematics of DNA [is Fractal - says Dr. Perez]

The number of triplets that begin with a T is precisely the same as the number of triplets that begin with A (to within 0.1%).

The number of triplets that begin with a C is precisely the same as the number of triplets that begin with G.

The genetic code table is fractal - the same pattern repeats itself at every level. The micro scale controls conversion of triplets to amino acids, and it’s in every biology book. The macro scale, newly discovered by Dr. Perez, checks the integrity of the entire organism.

Perez is also discovering additional patterns within the pattern.

I am only giving you the tip of the iceberg. There are other rules and layers of detail that I’m omitting for simplicity. Perez presses forward with his research; more papers are in the works, and if you’re able to read French, I recommend his book “Codex Biogenesis” and his French website. Here is his English paper.

(By the way, he found some of his most interesting data in what used to be called “Junk DNA.” It turns out to not be junk at all.)

OK, so what does all this mean?

Copying errors cannot be the source of evolutionary progress, because if that were true, eventually all the letters would be equally probable.

This proves that useful evolutionary mutations are not random. Instead, they are controlled by a precise Evolutionary Matrix to within 0.1%.

When organisms exchange DNA with each other through Horizontal Gene Transfer, the end result still obeys specific mathematical patterns.

DNA is able to re-create destroyed data by computing checksums in reverse - like calculating the missing contents of a page ripped out of a novel.

No man-made language has this kind of precise mathematical structure. DNA is a tightly woven, highly efficient language that follows extremely specific rules. Its alphabet, grammar and overall structure are ordered by a beautiful set of mathematical functions.

More interesting factoids:

The most common pair of letters (TTT and AAA) appears exactly 1/13 as often as all the letters combined – consistently, in the genomes of both humans and chimpanzees.

If you put the 32 most common triplets in Group 1 and the 32 least common triplets in Group 2, the ratio of letters in Group1:Group2 is exactly 2:1. And since triplet counts occur in symmetrical pairs (TTT-AAA, TAT-ATA, etc), you can group them into four groups of 16.

When you put those four triplet populations on a graph, you get the peace symbol (the figure is not reproduced here).

Does this precise set of rules and symmetries appear random or accidental to you?

My friend, this is how it is possible for DNA to be a code that is self-repairing, self-correcting, self-re-writing and self-evolving. It reveals a level of engineering and sophistication that human engineers could only dream of. Most of all, it’s elegant.

Cancer has sometimes been described as “evolution run amok.” Dr. Perez has noted interesting distortions of this matrix in cancer cells. I strongly suspect that new breakthroughs in cancer research are hidden in this matrix.

I submit to you that the most productive research that can possibly be conducted in medicine and computer science is intensive study of the DNA Evolution Matrix. Like I said, this is just the tip of the iceberg.

There is so much more here to discover!

When we develop computer languages based on DNA language, they will be capable of extreme data compression, error correction, and yes, self-evolution. Imagine: Computer programs that add features and improve with time. All by themselves.

What would that be like?

Perry Marshall

P.S.: Dr. Perez and I are friends. Perez worked on HIV research with the man who originally discovered HIV, Luc Montagnier. Perez also worked in biomathematics and Artificial Intelligence at IBM. I’m familiar with this work because last spring I had the privilege of helping him translate his groundbreaking research paper about this into English. [See link above - AJP]

Imagine that someone gives you a mystery novel with an entire page ripped out.

And let’s suppose someone else comes up with a computer program that reconstructs the missing page, by assembling sentences and paragraphs lifted from other places in the book.

Imagine that this computer program does such a beautiful job that most people can’t tell the page was ever missing.

DNA does that.

In the 1940′s, the eminent scientist Barbara McClintock damaged parts of the DNA in corn maize. To her amazement, the plants could reconstruct the damaged section. They did so by copying other parts of the DNA strand, then pasting them into the damaged area.

This discovery was so radical at the time, hardly anyone believed her reports. (40 years later she won the Nobel Prize for this work.) [To the contrary, her pioneering was opposed at every step of the way; ignorant detractors called Barbara McClintock a "Kook" - she got her Nobel several decades after her "lucid heresy" - AJP]

And we still wonder: How does a tiny cell possibly know how to do…. that???

A French HIV researcher and computer scientist has now found part of the answer. Hint: The instructions in DNA are not only linguistic, they’re beautifully mathematical. There is an Evolutionary Matrix that governs the structure of DNA.

Computers use something called a “checksum” to detect data errors. It turns out DNA uses checksums too. But DNA’s checksum is not only able to detect missing data; sometimes it can even calculate what’s missing. Here’s how it works.

In English, the letter E appears 12.7% of the time. The letter Z appears 0.07% of the time. The other letters fall somewhere in between. So it’s possible to detect data errors in English just by counting letters.

In DNA, some letters also appear a lot more often (like E in English) and some much less often. But… unlike English, how often each letters appears in DNA is controlled by an exact mathematical formula that is hidden within the genetic code table.

When cells replicate, they count the total number of letters in the DNA strand of the daughter cell. If the letter counts don’t match certain exact ratios, the cell knows that an error has been made. So it abandons the operation and kills the new cell.

Failure of this checksum mechanism causes birth defects and cancer.

Dr. Jean-Claude Perez started counting letters in DNA. He discovered that these ratios are highly mathematical and based on “Phi”, the Golden Ratio 1.618. This is a very special number, sort of like Pi. Perez’ discovery was published in the scientific journal Interdisciplinary Sciences: Computational Life Sciences in September 2010.

Before I tell you about it, allow me to explain just a little bit about the genetic code.

DNA has four symbols, T, C, A and G. These symbols are grouped into letters made from combinations of 3 symbols, called triplets. There are 4x4x4=64 possible combinations.

So the genetic alphabet has 64 letters. The 64 letters are used to write the instructions that make amino acids and proteins.

Perez somehow figured out that if he arranged the letters in DNA according to a T-C-A-G table, an interesting pattern appeared when he counted the letters.

He divided the table in half (the table figure is not reproduced here). He took single-stranded DNA of the human genome, which has 1 billion triplets, counted the population of each triplet in the DNA, and put the total in each slot.

When he added up the letters, the ratio of total white letters to black letters was 1:1. And this turned out to not just be roughly true. It was exactly true, to better than one part in one thousand, i.e. 1.000:1.000.

Then Perez divided the table a second way (again, the figure is not reproduced).

Perez discovered that the ratio of white letters to black letters is exactly 0.690983, which is (3-Phi)/2. Phi is the number 1.618, the “Golden Ratio.”

He also discovered the exact same ratio, 0.690983, when he divided the table in two further alternative ways (figures not reproduced).

Again, the total number of white letters divided by the total number of black letters is 0.6909, to a precision of better than one part in 1,000.
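
Tallies of this kind are straightforward for anyone to replicate on a published genome sequence. A minimal sketch (my own illustration, not Perez's code; the actual white/black splits come from his table figures, so the function below accepts any two groups of triplets):

    from collections import Counter
    from itertools import product

    def triplet_populations(dna):
        # Tally non-overlapping triplets along a single DNA strand.
        counts = Counter(dna[i:i + 3] for i in range(0, len(dna) - 2, 3))
        return {"".join(t): counts["".join(t)]
                for t in product("TCAG", repeat=3)}

    def group_ratio(populations, white, black):
        # Ratio of the letter totals of two halves of the 64-triplet table
        # (each triplet contributes three letters, so the factor cancels).
        return (sum(populations[t] for t in white) /
                sum(populations[t] for t in black))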

[We will never know the fractal computer languages of DNA without appropriate (and potentially eminently lucrative) funding. Dr. Perez is an independent scientist in France, who has cited as motivation for his papers the publications of Pellionisz (2008) The Principle of Recursive Genome Function; Pellionisz, A. (2009) From the Principle of Recursive Genome Function to Interpretation of HoloGenome Regulation by Personal Genome Computers, Cold Spring Harbor Laboratory, Personal Genomes, Sept. 14-17, 2009; and Lander et al. (2009, October 9th Science cover article). While Dr. Lander is Science Advisor to the US President and thus could help ensure that the USA prevails in her race with China, Japan, Korea, India - and even Russia - the US government is out of funds; fractal clues to e.g. cancer (as voiced in the Google Tech Talk YouTube by Dr. Pellionisz, 2008) may thus have to come from "venture philanthropists" like those behind Dr. Lander's own Broad Institute (Eli and Edythe Broad) - AJP].


Cell Surface as a Fractal: Normal and Cancerous Cervical Cells Demonstrate Different Fractal Behavior of Surface Adhesion Maps at the Nanoscale

M. E. Dokukin (1), N. V. Guz (1), R. M. Gaikwad (1), C. D. Woodworth (2), and I. Sokolov (1,3)

(1) Department of Physics, Clarkson University, Potsdam, New York 13699-5820, USA
(2) Department of Biology, Clarkson University, Potsdam, New York 13699-5820, USA
(3) Nanoengineering and Biotechnology Laboratories Center (NABLAB), Clarkson University, Potsdam, New York 13699-5820, USA

(published 8 July 2011)

Here we show that the surface of human cervical epithelial cells demonstrates substantially different fractal behavior when the cell becomes cancerous. Analyzing the adhesion maps of individual cervical cells, which were obtained using the atomic force microscopy operating in the HarmoniX mode, we found that cancerous cells demonstrate simple fractal behavior, whereas normal cells can only be approximated at best as multifractal. Tested on 300 cells collected from 12 humans, the fractal dimensionality of cancerous cells is found to be unambiguously higher than that for normal cells.

DOI: 10.1103/PhysRevLett.107.028101


China genomics institute outpaces the world

(Xinhua)

Updated: 2011-06-14 17:19

SHENZHEN - Many people were surprised when BGI (formerly Beijing Genomics Institute), a Chinese genomics institute, sequenced a strain of E. coli bacterium responsible for the outbreak in Germany that killed at least 18 people earlier this month.

But it was no surprise for Qin Junjie, deputy head of BGI's microorganism genomics research center, whose team sequenced the bacterium in three days. "We have the greatest output of genomics data and the best team to analyze it," Qin said.

BGI is more like a factory than a lab, according to Qin. The BGI facility, a converted shoe factory in Shenzhen city, now houses 137 top-of-the-line genome-sequencing machines and high-speed computers.

BGI pumped out 500 Tb of genomics data in 2010 - ten times the amount of data the US National Center for Biotechnology Information (NCBI) produced in the past twenty years. BGI expects to produce 100 Pb of data in 2011, Qin said [That will be two thousand times, in a single year, the NCBI output of the past twenty years - meanwhile, NCBI is closing some of its data services due to lack of continued government support - AJP].

In addition, BGI used Ion Torrent, a newly-developed sequencing machine that is much quicker. "Even half an hour counts in the fight against epidemics," said Yang Bicheng, spokeswoman for BGI.

To cope with the vast amount of data, BGI needs a robust, young staff. The institute has 3,000 scientists who are younger than 25 on average. At 29, Qin is one of the oldest.

Li Yingrui was just a college student and an intern at BGI when he published his first paper in the journal Nature in 2007. Now, Li, 24, directs the bioinformatics department and its 1,500 computer scientists. He has become one of BGI's leading scientists, with five papers published in Science and Nature.

At BGI, college or even high school students lead cutting-edge projects and publish papers in top science journals. Yet despite the impressive work of these young scientists, their pay isn't so world-class. A recent college graduate gets about 3,000 yuan ($462) per month. The average monthly salary in Shenzhen is more than 4,000 yuan.

Having an army of scientists at a comparatively low cost contributes to BGI's competitiveness, Yang said.

"At BGI, young people can work with the world's leading scientists and participate in international projects," Yang said. "They also have the opportunity to lead research in new areas, and such motivation is more powerful than anything else."

The growing fame of BGI in the world shows that China's efforts in promoting scientific advancement are starting to pay off, said Wang Jian, BGI's director.

"The scientific outlook on development is a key policy of China, and it requires the government to focus on supporting research facilities like BGI," Wang said.

In addition, China has been striving for progress in medical reform, agriculture and environmental protection, which in turn boosts bioscience research, he added.

In a visit to BGI on June 4, two days after it completed the sequencing of the bacterium, Xu Qin, mayor of Shenzhen, said the city will give all-out support to boost the leap-frog development of BGI.

Shenzhen, a boomtown near Hong Kong, is the base of some of China's most innovative companies such as ZTE and Huawei.

[It is not that nothing is being done in the USA. There is a lot of competition, for instance. The E. coli German strain has also been sequenced by Pacific Biosciences' SMRT molecular sequencer - while China favored Life Technologies' Ion Torrent sequencer (which, as a machine, is way over ten times cheaper). Illumina is also in the race - and of course for full human DNA sequencing there is Complete Genomics. All US companies develop their separate (and largely incompatible) software for the "sequencing" part (assembly), as well as for the crucial "analytics" part (interpretation). This is exactly where China differs, and appears to be developing a 2,000x lead over the US government (in a single year, beating two decades' worth of US government-supported output). In "assembly", China (BGI) has already announced a GPU-assisted software cloud tool kit. Unless US entrepreneurship wakes up (it will take either a miracle, or some movers & shakers getting sick with cancer, or just getting sick of the feeling of a mortal dependence...), the US will find Genome Informatics going the same way as the entire consumer electronics industry did ("Invented and designed in the USA - made in China" - with the "tiny" difference that consumer electronics is mostly for fun, but Genome Informatics is a life or death issue). - This entry can be discussed on the FaceBook of Andras Pellionisz]


Searching For Fractals May Help Cancer Cell Testing

Researchers use a new tool to determine that cancerous cells have different geometries than healthy cells.

Jul 5, 2011
By Phillip F. Schewe

Inside Science News Service

[The above figure shows a cell imaged by SEM (scanning electron microscope) and AFM (atomic force microscope).- excerpt from the Sokolov paper - AJP]

(ISNS) -- Scientists have long known that healthy cells looked and behaved differently from cancer cells. For instance, the nuclei of healthy cells -- the inner part of the cells where the chromosomes are stored -- tend to have a rounder surface than the nuclei in cancerous cells.

A new experiment looks at the shapes of healthy and cancerous cells taken from the human cervix and has attempted to quantify the geometrical differences between them. The research, carried out at Clarkson University in Potsdam, N.Y. finds that the cancerous cells show more fractal behavior than healthy cells.

Fractal is the name used for heavily indented curves or shapes that look very similar over a variety of size scales. For example, the edge of a snowflake, when observed with a microscope, has a lacelike structure that looks the same whether at the level of a millimeter, or a tenth of a millimeter, or even a thousandth of a millimeter. The position of galaxy clusters in the sky seems to be fractal. So does the snaking geometry of streams in a river valley, or the foliage of leaves on a tree. The shape of coastlines and clouds reveals a fractal, "self-similar" geometry. Even the "drip" paintings of Jackson Pollock are fractal.

Fractal geometry apparently also appears in the human body. The pattern of heartbeats over long intervals looks fractal. How about the geometry of cells? And could the observation of fractal geometry be used to identify cancer cells?

Igor Sokolov and his Clarkson colleagues used an atomic force microscope to view cells down to the level of one nanometer, or a billionth of a meter (one-millionth of a millimeter). Just as the needle on a record player rides over the groove of a rotating vinyl record to read out the music stored on the record's surface, so the sharp needle forming the heart of an atomic force microscope rides above a sample reading out the contours of matter just below at nearly atomic resolution.

Previous studies of cells at the microscopic level produced two-dimensional maps of the cells' surface. The new study produces three-dimensional surface maps of the cells' geometry; moreover, with their atomic force microscope the Clarkson scientists can also map properties such as the rigidity of a cell at various points on its surface, or its adhesion -- its ability to cling to a nearby object, such as the needle probe of the atomic force microscope itself.

The Clarkson measurements show that cancerous cells feature a consistent fractal geometry, while healthy cells show some fractal properties but in an ambiguous way. The fact that the adhesive map is fractal for cancerous cells but not for healthy cells was not known before.
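[To make the notion of a "fractal criterion" concrete for the computationally inclined reader: below is a minimal Python sketch of box-counting, a standard way to estimate a fractal dimension from a 2-D map such as an AFM adhesion map. The median threshold, the function names and the toy data are illustrative assumptions, not Prof. Sokolov's published estimator.]

```python
import numpy as np

def box_count(mask, size):
    """Count boxes of side `size` containing at least one set pixel."""
    h, w = mask.shape
    hh, ww = h - h % size, w - w % size  # trim to a multiple of `size`
    pooled = mask[:hh, :ww].reshape(hh // size, size, ww // size, size)
    return np.count_nonzero(pooled.any(axis=(1, 3)))

def fractal_dimension(surface_map):
    """Box-counting dimension of the level set above the map's median."""
    mask = surface_map > np.median(surface_map)
    sizes = [2 ** k for k in range(1, int(np.log2(min(mask.shape))))]
    counts = [box_count(mask, s) for s in sizes]
    # The dimension estimate is the slope of log N(s) versus log(1/s).
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# Toy usage: pure noise fills the plane, so the estimate comes out near 2;
# a genuinely fractal set would yield a non-integer dimension below 2.
rng = np.random.default_rng(0)
print(fractal_dimension(rng.random((512, 512))))
```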

Being able to differentiate clearly between healthy and cancerous cells would be an important step toward a definitive diagnosis of cancer. Can a fractal measurement of cells serve as such a test for malignancy?

Sokolov believes it can.

"The existing cytological screening tests for cervical cancer, like Pap smear, and liquid-based cytology, are effective and non-invasive, but are insufficiently accurate," said Sokolov.

These tests determine the presence of suspicious abnormal cells with sensitivity levels ranging from 80 percent all the way down to 30 percent, for an average of 47 percent.

The fractal criterion used in the Clarkson work was 100 percent accurate in identifying the cancerous nature of 300 cells derived from 12 human subjects, Sokolov said. He intends now to undertake a much wider test.

"We expect that the methodology based on our finding will substantially increase the accuracy of early non-invasive detection of cervical cancer using cytological tests," Sokolov said.

Sokolov asserts that physics-based methods, such as his atomic force microscope maps of cells, will complement or even exceed in detection ability the more traditional biochemical analysis carried out at the single cell level.

"We also plan to study how fractal behavior changes during cancerous transformation, when a normal cell turns into a fully developed malignant cell, one with a high degree of invasiveness and the ability to reproduce itself uncontrollably," Sokolov added.

Robert Austin, an expert on biological physics at Princeton University in N.J., believes it is important to learn more about the properties that make cancer cells lethal, such as their ability to metastasize, to invade new parts of the body. About the Clarkson paper, which is appearing in the journal Physical Review Letters, [This column will excerpt the Letter, to appear on July 8th by courtesy of Prof. Sokolov - AJP] Austin said, "Perhaps this is a step in the direction of connecting physical aspects of cancer cells with the biological reality that their proliferation and invasiveness is what makes them deadly."

[FractoGene (2002, web) and The Principle of Recursive Genome Function (2008, YouTube from 30:00 minutes) are rapidly closing in on cancer. The fractal recursive iteration that grows fractal brain cells was shown as early as 1989; this "lucid heresy", running head-to-head against both of the (totally wrong) axioms of Old School Genomics ("Junk DNA" and "Central Dogma"), met with violent opposition at every step of the long way (including a detractor whose inaptitude in math left him not only - admittedly - unable to grasp advanced tensor and fractal geometry, but unable even to produce a properly rounded number!). Today, massive evidence, shown below, substantiates The Principle of Recursive Genome Function, in which methylation of perused auxiliary (intergenic and intronic) genomic information is of the essence to keep fractal growth bounded (properly regulated). While Prof. Sokolov does not seem to connect the enhanced fractality of cancerous cells with genomic shifts (e.g. in methylation), it is clear that fractal geometry has an immediate utility, e.g. for diagnosis, by measuring the significant difference in fractal dimension of already developed cancerous cells. Imagine how important it is to conduct a targeted search for "fractal defects" of the genome BEFORE uncontrolled protein production is spotted, from the very early signs of genomic shifts, e.g. in methylation! History will not be kind to those whose tardiness contributes to hundreds of millions of people dying of "Junk DNA diseases" (such as cancers), while those smart enough to advance (also in mathematics, in algorithmic software enabling) to PostModern analytics of genome function will be richly rewarded - This entry can be discussed on the FaceBook of Andras Pellionisz]


A quest for better genetics [from Moscow...]

The Moscow News
By Oleg Nikishenkov at 27/06/2011 21:56

Genetics, advanced mathematics and new online solutions can help humans quantify their behavior with the ultimate goal of living longer.

That’s the message from Esther Dyson, a top-50 global IT investor and Skolkovo advisor who is promoting a new project in genome analysis designed to fulfill the goal.

“Biology becomes more and more information processing, and you have some of the best mathematicians and algorithm people here in Russia,” Dyson told The Moscow News on the sidelines of RIA Novosti’s Future Media Forum in Moscow last week.

The process of getting a personal genetic analysis is fairly simple, despite its scientific complexity. All you have to do is deposit saliva into a test tube and send your sample to the lab. But it has geographical and even political restrictions. If you are in Russia, you need special permission to send a sample of your saliva to the US, where the labs of 23andMe (the 23 refers to the number of human chromosome pairs) are. And some US states prohibit the purchase of genetic data by private individuals, on the assumption that reactions to this knowledge could be unpredictable. “We keep the information very secure, but if your fear is more than fear of death, we won’t get your [genomic] test done,” Dyson said.

Nikolai Mityushin, head of investments of the ABRT venture fund, said such projects are quite promising, as they deal with real needs like healthcare.

“On the other hand, the specifics of the project attribute it to the biotech industry, which has its own regulation,” he said.

Dyson said some Russian businesspeople she knows have already sent samples of their saliva to the project.

Higher math and strong computer systems are required to analyze SNPs (Single Nucleotide Polymorphisms), the single-letter variations found throughout our genes.

“What we look for is specific little correlations of data, which come in different varieties,” Dyson said. SNPs occur once in every 300 nucleotides on average; since the human genome is roughly three billion nucleotides long, that means there are roughly 10 million SNPs in the human genome.

Currently, genetic labs can analyze 1 million SNPs, but “in two years it will probably be 3 million and in a few years the entire genome will be collected,” Dyson said. That might help predict an individual’s reaction to certain drugs and the risk of some diseases, including heart failure, diabetes and cancer.

But so far the genetic testing only gives “hints” on how to change your behavior.

“The remainder of the proof is left as an exercise for the reader,” said Dyson, quoting a saying popular among math professors. She explained that the next trend is user-generated data for research, as opposed to user-generated content. “And this research is about ourselves,” she said.

Pavel Gitelman, who coordinates the Red Quest project, said that in Russia such an innovation could be well received by wealthy clients. “We still have quite a huge category of people who mistrust genetics, but fully believe in all kinds of fortune tellers,” he said.

[Some of the World's best algorithm-creators, like Sergey Brin and Larry Page, originate from Russia (Sergey was born in Moscow) - and now run the World's most powerful IT company (Google, Inc.). Shura Grosberg published - while in Moscow, in the early 90s - the seminal concept of the fractal folding structure of the DNA strand (to become the foundation of the Lander et al. 2009 Oct 9 Science cover article, "Mr. President, the Genome is Fractal!"). As I mention towards the end of my 2008 Google Tech Talk YouTube, for those who know that the Internet itself is fractal (and depends on "recursion", moreover putting "cookies" on perused pages...), it should be obvious that the genome functions based on The Principle of Recursive Genome Function (Pellionisz, 2008; see the seminal fractal model of a brain cell, 1989). Interestingly, as shown below (a Figure from Pellionisz' 2009 presentation at Cold Spring Harbor Laboratory, upon invitation by Harvard Professor of Genetics George Church), even Esther Dyson's father (physicist Freeman Dyson) suspected that the origin of life may be fractal, by showing the Mandelbrot fractal set on the cover of his book - though neither does the index mention "fractal", nor is any citation made to "Mandelbrot". Some such "near misses" may be even more likely where the gap between "biologist" Lysenko and outstanding mathematicians led to some mistrust of genetics. To the advantage of the European schools, the two mistaken dogmas of Genomics (the "Junk DNA" misnomer, rendered obsolete by Pellionisz' Symposium in Budapest, 2006, and Nobelist Crick's huge mistake, the "Central Dogma") have never been entrenched to such a degree in Europe as in America. In the USA, the establishment until the end of ENCODE 2007, and a certain detractor in Toronto - lacking even the elementary mathematical competence to tell whether a number was rounded correctly or not (!) - inflicted severe damage on the development of PostModern Genomics. Huge downloads (many more from Ukraine than from Russia) from this website show a torrent of interest now in PostModern Genome Informatics. This entry can be discussed on the FaceBook of Andras Pellionisz]


Study Suggests Widespread Loss of Epigenetic Regulation in Cancer Genomes

June 27, 2011

By Andrea Anderson
Genomeweb

[Figure from Nature Genetics - by courtesy of Dr. Feinberg ]

NEW YORK (GenomeWeb News) - Cancer coincides with a widespread loss of epigenetic regulation affecting large chunks of DNA in the genome, according to a study in the early, online version of Nature Genetics yesterday. ["We suggest a model for cancer involving loss of epigenetic stability of well-defined genomic domains that underlies increased methylation variability in cancer that may contribute to tumor heterogeneity."].

Using custom microarrays, American researchers assessed methylation patterns in five types of cancer, focusing on regions of the genome that were shown to be differentially methylated in cancer in the past. The team found that cancer genomes show dramatically different methylation patterns compared to corresponding normal tissues, including a lack of defined methylation boundaries around so-called CpG islands where cytosine and guanine nucleotides frequently neighbor one another.
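[A side note on terminology for the reader: "CpG island" has a conventional quantitative definition (the Gardiner-Garden & Frommer criteria): a stretch of at least 200 bp with more than 50% G+C and an observed/expected CpG ratio above 0.6. A minimal Python sketch of the two statistics - the function name and the toy sequence are merely illustrative:]

```python
def cpg_island_stats(seq):
    """GC content and observed/expected CpG ratio for a DNA window.
    Conventional island criteria: length >= 200 bp, GC > 50%,
    observed/expected CpG > 0.6 (Gardiner-Garden & Frommer)."""
    seq = seq.upper()
    c, g, cpg = seq.count("C"), seq.count("G"), seq.count("CG")
    gc_content = (c + g) / len(seq)
    # Expected CpG count in a random sequence is (#C * #G) / length.
    obs_exp = (cpg * len(seq)) / (c * g) if c and g else 0.0
    return gc_content, obs_exp

# Toy usage on a short CpG-rich snippet (real islands are >= 200 bp):
print(cpg_island_stats("CGCGGCGCATCGCGTACGCG"))
```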

In addition, they reported, the cancers showed dramatic variability in their methylation levels, along with changes in some parts of the genome that are known to be differentially methylated in other tissue types or in undifferentiated cells.

Moreover, their bisulfite sequencing studies of colon cancer, pre-cancerous polyps, and normal colon tissue uncovered large stretches of DNA that were differentially methylated in the cancer samples, leading to altered expression of some cell cycle and cell matrix-related genes in these regions.

"Epigenetics, specifically DNA methylation, is losing its regulation in cancer and we think that that's helping cancer thrive," co-first author Winston Timp, a post-doctoral fellow in Andrew Feinberg's lab at Johns Hopkins University's Center for Epigenetics, told GenomeWeb Daily News.

"It may be a very early event which acts in conjunction with mutations to cause cancer," he added. "And it seems to happen in all types of cancer."

In 2009, Feinberg, Johns Hopkins biostatistics and epigenetics researcher Rafael Irizarry, and colleagues published a Nature Genetics study describing methylation differences in colon cancers compared to normal colon tissue. In particular, they found cancer-specific differences not necessarily within CpG islands, but more often on the "shores" of these islands.

In addition, many of the same regions that were differentially methylated in the colon cancers correspond to those that tend to be differentially methylated in various tissue types or in undifferentiated cells.

Consequently, the team decided to look at several cancer types in the current study, Timp explained. "If these differences seem to control differentiation state, so to speak, maybe they'll show differences in other cancers too."

Using Illumina custom bead arrays, the team first tested 122 colon, lung, breast, and thyroid cancers and Wilms' tumors, a childhood kidney cancer. They also tested 30 pre-malignant samples, along with 141 matched control samples, focusing on 384 sites in 151 cancer-specific differentially DNA-methylated regions detected in colon cancer in the past.

Indeed, researchers reported, cancer and normal samples from each tissue type had very distinct methylation patterns. While both normal tissues and cancers tended to fall in distinct clusters based on their methylation patterns, methylation in the cancer samples was far more variable.

"This seems to show a loss of control - a loss of regulation - of methylation in cancer compared to [matched normal tissue]," Timp said. "The five different normal tissues also cluster out very well from each other … but all the cancers are much more variable."

To explore the methylation patterns across the genome in cancer cells, meanwhile, the researchers did shotgun, whole-genome bisulfite sequencing of three colorectal tumor samples — along with matched normal colon tissue and two pre-cancerous adenomatous polyp samples — using the ABI SOLiD platform.

From this sequence data, the team found altered methylation in large chunks of the cancer genomes. These blocks frequently had lower methylation levels than normal colon tissue, though, again, methylation patterns were far more variable in the cancer samples.
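[In principle, the computation behind such block calls is simple: estimate a per-CpG methylation fraction from bisulfite read counts, then scan for long runs where the tumor falls below the matched normal. A toy Python sketch under that simplification - the `delta` and `min_cpgs` thresholds are invented for illustration and stand in for the Hopkins team's far more careful smoothing and statistics:]

```python
import numpy as np

def methylation_level(meth_reads, total_reads):
    """Per-CpG methylation estimate from bisulfite counts: the fraction
    of reads in which the cytosine stayed unconverted (methylated)."""
    return meth_reads / np.maximum(total_reads, 1)

def hypomethylated_blocks(normal, cancer, positions, delta=0.1, min_cpgs=50):
    """Toy block caller: maximal runs of consecutive CpGs at which the
    cancer sample is at least `delta` less methylated than the normal."""
    low = (cancer - normal) < -delta
    blocks, start = [], None
    for i, flag in enumerate(low):
        if flag and start is None:
            start = i                      # a run of hypomethylation opens
        elif not flag and start is not None:
            if i - start >= min_cpgs:      # long enough to call a block
                blocks.append((positions[start], positions[i - 1]))
            start = None
    if start is not None and len(low) - start >= min_cpgs:
        blocks.append((positions[start], positions[-1]))
    return blocks
```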

Consistent with these methylation changes, genes within these hypo-methylated blocks of the cancer genomes typically showed higher but more variable expression.

"We think that what's happening is a loss of [methylation] control rather than a concerted shift," Timp explained.

In addition, he noted, some of these differentially methylated regions in cancer appear to coincide with regions previously reported to be partially methylated in stem cells or other tissue types, while others overlapped with regions known to have chromatin alteration or epigenetic marks related to lamina function.

"We can say with some certainty that these areas are important for both differentiation and cancer," Timp said. "We would propose that maybe there's a link here and maybe cancer is losing regulation of these areas and reverting to a more primitive or less controlled state."

Although more research is needed to determine whether that is the case or whether there is some other explanation for the importance of the areas in both cancer and tissue development or differentiation, those involved in the study argue that their findings may ultimately lead to improved tests or treatments for cancer.

"Maybe the big lesson learned from our observation of this universal [epigenetic] chaos is that we may need to think not so much about just killing cancer cells, but also about ways of helping cancer cells figure out how to be what they're supposed to be, and re-educate them so they can stay truer to their normal identities," Feinberg said in a statement.

[This landmark experimentation substantiates the prediction of The Principle of Recursive Genome Function (also popularized in the YouTube lecture, both in 2008) by Pellionisz. Fractal iterative recursion, to grow organelles, organs and organisms (FractoGene), requires not only recursion to intergenic and intronic regions of the DNA to pick up auxiliary information to maintain growth, but the auxiliary information must also be cancelled (e.g. by methylation) to prevent uncontrolled use, resulting in cancerous protein production. The two snapshots from the YouTube lecture show normal methylation (at 31:04), while at 31:50 the epigenomic control by methylation is shown to be defective, leading to unduly repeated perusal of "non-coding" information. This entry can be discussed on the FaceBook page of Andras Pellionisz]



A surge of top-quality papers pointing into "methylation-defects" as predicted by FractoGene as culprits for cancer

Scientists Report that Methylation Chaos in Cancer Contributes to Cell Adaptability

http://www.genengnews.com/gen-news-highlights/scientists-claim-methylation-chaos-in-cancer-contributes-to-cell-adaptability/81245352/

Scientists claim cancer cells have lost the ordered patterns of methylation demonstrated by normal tissues and display chaotic methylation across the genome that could help them adapt to changing environments in growing tumors and facilitate metastasis. Research by a Johns Hopkins University-led team has found that, in contrast to normal tissues that demonstrate specific patterns of methylation at CpG islands or in large blocks of DNA, cells from a range of different tumor types showed vastly differing methylation patterns at the same sites, which the researchers termed cancer-specific differentially methylated regions (cDMRs).

In colon cancer this was evidenced by loss of methylation stability both at the boundaries of CpG islands and the presence of blocks of hypomethylation affecting over half the genome, which together led to significant variability in gene expression. Andrew P. Feinberg, M.D., professor of molecular medicine and director of the Center for Epigenetics at the Johns Hopkins University School of Medicine’s Institute for Basic Biomedical Sciences, and colleagues, report their findings in Nature Genetics. In their paper, titled “Increased methylation variation in epigenetic domains across cancer types,” the authors point out that their findings could indicate that the use of nonspecific DNA methylation inhibitors as cancer therapy could have unwanted effects by activating tumor-promoting genes in hypomethylated blocks.

Work by Dr. Feinberg’s team has previously demonstrated that colon cancers exhibit changes in the degree of methylation at regions of lower CpG density near CpG islands. These cDMRs corresponded in the main to the same regions that show DNA methylation variation in normal spleen, liver, and brain tissues, or tissue-specific DMRs (tDMRs), suggesting their involvement in cell differentiation in normal tissues.

To investigate cancer-related changes in global methylation further, the researchers designed an array to analyze over 151 cDMRs that were consistently altered across colon cancer, and compared methylation within these regions in another 290 samples including matched normal and cancer samples from colon, breast, lung, thyroid, and Wilms’ tumor. They found that almost all of the cDMRs were altered across all cancers tested. “Specifically, the cDMRs showed increased stochastic variation in methylation level within each tumor type, suggesting a generalized disruption of the integrity of the cancer epigenome,” the team suggests.

To look further at this phenomenon, the team carried out genome-scale bisulfite sequencing of three colorectal cancers, matched normal colonic mucosa and two adenomatous polyps. The study was designed to obtain methylation estimates with enough precision to detect differences of 20% methylation. The results demonstrated a significant loss of methylation stability in colon cancer, which involved CpG islands and their boundary regions (shores), and also identified large blocks (5 kb to 10 Mb) of hypomethylation. “Remarkably, these hypomethylated blocks in cancer corresponded to more than half of the genome, even after accounting for the number of CpG sites within the blocks,” the authors note.

Over 5,800 hypermethylated and 4,300 hypomethylated small DMRs (less than 5 kb) were also identified. Interestingly, the research confirmed previous findings that hypermethylated cDMRs are enriched in CpG islands, whereas hypomethylated cDMRs are enriched at CpG island shores. The thousands of small DMRs frequently involved loss of boundaries of DNA methylation at the edge of CpG islands, shifting of DNA methylation boundaries, or the creation of novel hypomethylated regions in CG-dense regions that are not normally recognized as islands. The knock-on effect of this variability in cancer cell methylation was both an increase in gene silencing and a substantial enrichment of genes with increased expression variability in the methylated blocks, Dr. Feinberg and colleagues state.

These data underscore the importance of hypomethylated CpG island shores in cancer, the authors note: “Shores associated with hypomethylation and gene overexpression in cancer are enriched for cell cycle related genes, suggesting a role in the unregulated growth that characterizes cancer.” Regions of altered methylation in cancer were also found to match those in normal tissues associated with controlling cell differentiation into specific cell types. “Targeting those regions might help the cells become more normal,” suggests Rafael Irizarry, Ph.D., professor of biostatistics at the Johns Hopkins University Bloomberg School of Public Health and lead author of the Nature Genetics paper.

From the cancer perspective, methylation chaos is helpful because it means tumors can turn genes on and off in an uncontrolled way, increasing their adaptability, Dr. Feinberg adds. This indicates that increased epigenetic heterogeneity in cancer at cDMRs may play a role in the ability of cancer cells to adapt rapidly to changing tissue environments, such as increased oxygen in regions of neovascularisation, then decreased oxygen with necrosis, or metastasis to a new intercellular milieu.

Current efforts to exploit DNA methylation for cancer screening have focused on identifying narrowly defined cancer-specific profiles. However, the Johns Hopkins research suggests broader evaluation of the cancer epigenome may be more relevant. “Given the importance of boundary regions for both small DMRs and large blocks identified in this study, it will be important to focus future epigenetic investigations on the boundaries of blocks and CpG islands (shores) and on genetic or epigenetic changes in genes encoding factors that interact with them,” the authors conclude.

Measuring DNA Methylation

http://asweetlife.org/a-sweet-life-staff/featured/dna-methylation-the-coolest-new-technology-at-the-ada-scientific-sessions/17772/

What is this promising new technology? A means of “Using Differentially Methylated Circulating DNA for In Vivo Detection of Beta-Cell Death in Diabetes,” as presented by Dr. Eitan Akirav.


Come again? Using differentially methy-what?


Okay, I admit the technology is not very consumer-friendly, but it’s really cool, so bear with me, and we’ll start from the beginning.


Beta-cell death is a problem in both type 1 and type 2 diabetes; in the former, cell death is induced as cells react to their increasingly toxic, auto-reactive environment, and in the latter, cell death is induced as cells become over-stressed. Currently, beta-cell death is very hard to measure. The best way is to look directly at tissue in the pancreas—but that’s not possible if you want to keep your human patient or mouse models alive. Without tissue samples, imaging of a living pancreas would be useful, but that proves difficult and inaccurate because of the location and nature of the pancreas. So, researchers will usually quantify beta-cell death using proxies like C-peptide (as an indicator of how much insulin is being secreted).

These proxies, though, are not reliable or accurate; ideally, if we want to understand exactly what causes beta-cell death and what we can do to manipulate the process, we should be able to measure the degree of beta-cell death while the subject is still living, and without too much indirection.


This sort of problem is not unique to diabetes; researchers face a similar issue with, for example, diagnosing cancers. Many cancers come with tangible symptoms or lumps, but how can you test for internal cancers in asymptomatic patients before it’s too late? Much research has been done to identify unique biological signatures, called biomarkers, for tumors. People started by looking for specific proteins or RNA sequences, but these have not proved reliable indicators of many tumors. More recently, though, a promising new biomarker is being tested—the methylation of DNA fragments circulated freely in the blood [1].
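[The logic of such a methylation biomarker is easy to sketch. The insulin gene (INS) is unmethylated in insulin-producing beta cells but methylated in most other tissues, so the fraction of demethylated INS fragments among the cell-free DNA in blood tracks beta-cell death. A hypothetical Python illustration - the counts below are invented, and this is a sketch of the principle, not Dr. Akirav's published assay:]

```python
def beta_cell_death_index(demethylated_ins, methylated_ins):
    """Fraction of circulating insulin-gene (INS) fragments that are
    demethylated -- a proxy for beta-cell-derived DNA, because INS is
    unmethylated in beta cells and methylated in most other tissues."""
    total = demethylated_ins + methylated_ins
    return demethylated_ins / total if total else 0.0

# Hypothetical fragment counts from methylation-specific PCR of serum DNA:
print(beta_cell_death_index(demethylated_ins=42, methylated_ins=958))
```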

Relationship exists between mutations in tumors and methylation patterns found in their genomes

http://triplehelixblog.com/2011/07/methylation-the-cause-of-brain-tumor/

Methylation: The Cause of Brain Tumor?

When one thinks of the word “cancer”, breast cancer, lung cancer, and skin cancer are among the various types that first come to mind. One type of cancer that is often neglected is the brain tumor. According to the National Brain Tumor Society, more than 500 people per day are diagnosed with a primary or metastatic brain tumor and, what is worse, the mortality rates for those diagnosed with brain and nervous system tumors haven’t improved over the past decade. The desperate need for new treatments and therapies for brain tumors is evident, and it is the hope of many people whose loved ones have suffered the wrath of this incurable disease that 2011 will bring new treatments and bring the path to the cure closer than ever before [1].

Neuroscience is an area of science that has faced its fair share of failures, yet that doesn’t mean scientists should give up on the field itself. Recently, researchers at the Alpert Medical School made an important discovery that may change the face of brain tumor treatments and diagnosis forever.

This new discovery developed from the hypothesis that a relationship exists between mutations in tumors and the methylation patterns found in their genomes [2]. Chemically speaking, methylation is the addition of a methyl group to a substrate, or the substitution of an atom or group by a methyl group. When DNA is methylated, gene silencing often occurs, which could be the cause of the tumor. Gene silencing is a process of gene regulation which “switches off” a gene. Researchers and neuroscientists speculate that the methylated regions mark the genes involved in metabolic processes, which explains the abnormal behavior of tumor cells [2].


New Method Used by Cells to Reverse Silenced Genes Discovered

http://www.medindia.net/news/New-Method-Used-by-Cells-to-Reverse-Silenced-Genes-Discovered-87312-1.htm

A novel mechanism used by body cells to turn on silenced genes has been identified by researchers at Fox Chase Cancer Center. This process is critical in preventing the development of cancer, suggesting the possibility of new therapies that might target the specific changes underlying the disease. The findings will be published online in the journal Cell.


The process investigated by Alfonso Bellacosa, M.D., Ph.D., Associate Professor at Fox Chase, and his colleagues, is called methylation, in which the cell chemically tags genes to turn them off. More specifically, the cell silences a gene by adding a chemical compound known as a methyl group; without that methyl group, the gene remains active.



"So What?" - if you separate Fractal Defects from Structural Variants of Human Diversity? - In Vivo Genome Editing!

[Zinc Finger Nucleases inserted into intron ("Junk"...) fix Hemophilia in Mouse - AJP]

Genome editing, a next step in genetic therapy, corrects hemophilia in animals.

Using an innovative gene therapy technique called genome editing that homes in on the precise location of mutated DNA, scientists have treated the blood clotting disorder hemophilia in mice. This is the first time that genome editing, which precisely targets and repairs a genetic defect, has been done in a living animal and achieved clinically meaningful results. As such, it represents an important step forward in the decades-long scientific progression of gene therapy -- developing treatments by correcting a disease-causing DNA sequence.

In this new study, researchers used two versions of a genetically engineered virus (adeno-associated virus, or AAV) -- one carrying enzymes that cut DNA in an exact spot and one carrying a replacement gene to be copied into the DNA sequence. All of this occurred in the liver cells of living mice. "Our research raises the possibility that genome editing can correct a genetic defect at a clinically meaningful level after in vivo delivery of the zinc finger nucleases," said the study leader, Katherine A. High, M.D., a hematologist and gene therapy expert at The Children's Hospital of Philadelphia. High, a Howard Hughes Medical Institute Investigator, directs the Center for Cellular and Molecular Therapeutics at Children's Hospital, and has investigated gene therapy for hemophilia for more than a decade.

The study appeared online today in Nature.

High's research, a collaboration with scientists at Sangamo BioSciences, Inc., makes use of genetically engineered enzymes called zinc finger nucleases (ZFNs) that act as molecular word processors, editing mutated sequences of DNA. Scientists have learned how to design ZFNs custom-matched to a specific gene location. ZFNs specific for the factor 9 gene (F9) were designed and used in conjunction with a DNA sequence that restored normal gene function lost in hemophilia. By precisely targeting a specific site along a chromosome, ZFNs have an advantage over conventional gene therapy techniques that may randomly deliver a replacement gene into an unfavorable location, bypassing normal biological regulatory components controlling the gene. This imprecise targeting carries a risk of "insertional mutagenesis," in which the corrective gene causes an unexpected alteration, such as triggering leukemia.
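[To illustrate the targeting arithmetic in code: a ZFN works as a pair, with one nuclease binding a half-site on the forward strand and its partner binding the opposite strand, the two half-sites separated by a short spacer where the DNA is cut. A minimal Python scan for such paired sites - the half-site sequences below are invented for the example and are not the actual F9 ZFN sites:]

```python
COMP = str.maketrans("ACGT", "TGCA")

def revcomp(seq):
    """Reverse complement of a DNA string."""
    return seq.translate(COMP)[::-1]

def find_zfn_sites(dna, left_half, right_half, spacer=(5, 7)):
    """Positions where `left_half` (forward strand) and `right_half`
    (reverse strand) flank a spacer of allowed length, as ZFN pairs require."""
    hits = set()
    for gap in range(spacer[0], spacer[1] + 1):
        site_len = len(left_half) + gap + len(right_half)
        for i in range(len(dna) - site_len + 1):
            window = dna[i:i + site_len]
            if window.startswith(left_half) and window.endswith(revcomp(right_half)):
                hits.add(i)
    return sorted(hits)

# Toy usage with invented half-sites; prints [2] for this toy sequence.
print(find_zfn_sites("TTGAAGCTTCCCGTGGGATCAAAA", "GAAGCTT", "TTGATCC"))
```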

In hemophilia, an inherited single-gene mutation impairs a patient's ability to produce a blood-clotting protein, leading to spontaneous, sometimes life-threatening bleeding episodes. The two major forms of the disease, which occurs almost solely in males, are hemophilia A and hemophilia B, caused respectively by a lack of clotting factor VIII and clotting factor IX. Patients are treated with frequent infusions of clotting proteins, which are expensive and sometimes stimulate the body to produce antibodies that negate the benefits of treatment.

In the current study, the researchers used genetic engineering to produce mice with hemophilia B, modeling the disease in people. Before treatment, the mice had no detectable levels of clotting factor IX. Previous studies by other researchers had shown that ZFNs could accomplish genome editing in cultured stem cells that were then injected into mice to treat sickle cell disease. However, this ex vivo approach is not feasible for many human genetic diseases, which affect whole organ systems. Therefore the current study tested whether genome editing was effective when directly performed in vivo (in a living animal).

High and colleagues designed two versions of a vector, or gene delivery vehicle, using adeno-associated virus (AAV). One AAV vector carried ZFNs to perform the editing; the other delivered a correctly functioning version of the F9 gene. Because different mutations in the same gene may cause hemophilia, the process replaced seven different coding sequences, covering 95 percent of the disease-carrying mutations in hemophilia B. [Source: "Genome editing, a next step in genetic therapy, corrects hemophilia in animals." PHYSorg.com, 26 Jun 2011 - http://www.physorg.com/news/2011-06-genome-genetic-therapy-hemophilia-animals.html]

The researchers injected mice with the gene therapy vector, which was designed to travel to the liver—where clotting factors are produced. The mice that received the ZFN/gene combination then produced enough clotting factor to reduce blood clotting times to nearly normal levels. Control mice receiving vectors lacking the ZFNs or the F9 minigene had no significant improvements in circulating factor or in clotting times. The improvements persisted over the eight months of the study, and showed no toxic effects on growth, weight gain or liver function, clues that the treatment was well-tolerated. "We established a proof of concept that we can perform genome editing in vivo, to produce stable and clinically meaningful results," said High. "We need to perform further studies to translate this finding into safe, effective treatments for hemophilia and other single-gene diseases in humans, but this is a promising strategy for gene therapy." She continued, "The clinical translation of genetic therapies from mouse models to humans has been a lengthy process, nearly two decades, but we are now seeing positive results in a range of diseases from inherited retinal disorders to hemophilia. In vivo genome editing will require time to mature as a therapeutic, but it represents the next goal in the development of genetic therapies."

[The Nature paper concludes:

Studies showing that ZFNs can mediate gene correction efficiently through the introduction of site-specific DSBs, and can induce HDR in cultured cells, have provided important proof-of-concept results for the clinical application of engineered nucleases for diseases affecting cells that can be removed and returned to the patient. However, the necessity to isolate and manipulate cells ex vivo limits the application of this technology to a subset of genetic diseases. Our results show that AAV-mediated delivery of a donor template and ZFNs in vivo induces gene targeting, resulting in measurable circulating levels of factor IX.

This therapeutic strategy is sufficient to restore haemostasis in a mouse model of haemophilia B, thus demonstrating genome editing in an animal model of a disease. Clinical translation of these results will require optimization of correction efficiency and a thorough analysis of off-target effects in the human genome, an issue that we have begun to monitor. Together, these data show that AAV-mediated delivery of ZFNs and a donor template gives rise to persistent and clinically meaningful levels of genome editing in vivo, and thus can be an effective strategy for targeted gene disruption or in situ correction of genetic disease in vivo.]

[When presenting paradigm-shifts, the most American question is "so what?". To be able to separate Fractal Defects (wherein the genome violates its own mathematics) from the at least 4 million Structural Variants that only reflect "Human Diversity" (with the fractality intact) is nice. However, in the not-so-distant future, when Big Pharma delivers drops of liquid containing a harmless virus that carries a "patch" to fix genomic glitches - not unlike your computer patching lethal Windows defects - medicine will take a quantum leap similar to, but even greater than, when Polio was rendered harmless - this entry can be discussed in FaceBook of Andras Pellionisz]


23andMe-Led Team Reports on Findings from Web-Based Parkinson's GWAS

June 24, 2011

By a GenomeWeb staff reporter

NEW YORK (GenomeWeb News) - In a paper appearing online last night in PLoS Genetics [See Excerpts and link below - AJP], researchers from the personal genetics firm 23andMe and the Parkinson's Institute in California reported that they have identified two new loci contributing to Parkinson's disease risk.

The team's web-based genome-wide association study approach involved 3,426 individuals with Parkinson's disease - enrolled over the span of a year-and-a-half through a collaboration between 23andMe, the Michael J. Fox Foundation, the National Parkinson Foundation, and the Parkinson's Institute and Clinical Center - and 29,624 unaffected control individuals who enrolled as customers of 23andMe's personal genome services.

Using custom Illumina HumanHap 550+ arrays to genotype these individuals, researchers not only verified 20 Parkinson's associations identified previously, but also detected two new risk loci - one SNP falling near the lysosomal integral membrane protein type 2 coding gene SCARB2 and another in the vicinity of the SREBF1 and RAI1 genes. Two other variants - in and around the RIT2 gene and the USP25 gene - fell just shy of significantly associating with the disease.

"Not only are these new genetic findings significant, but we've also shown that the data collected by 23andMe support discovery of new associations as well as replication of previously known associations," lead author Chuong Do, a research scientist at 23andMe, said in a statement.

"This study is a rigorous 'proof of principle,'" he added, "and clearly demonstrates that web-based phenotyping works for a disease of real public health significance."

Based on their findings so far, the team estimated that at least a quarter of Parkinson's disease risk can be attributed to genetic factors. Consequently, they say, additional studies are needed to further tease apart the genetic and environmental contributors to the disease.

23andMe's Parkinson's disease cohort now consists of more than 5,000 individuals with the condition, the company said. Meanwhile, some 76,000 individuals who use the 23andMe database have reportedly consented to participate in the Parkinson's disease or similar 23andMe research projects.

For instance, the company is currently using a similar web-based approach to study sarcoma, using a cohort that includes more than 500 individuals with that disease.

"We believe this paper proves the potential of our approach of combining genetic information with web-based data about specific conditions to make novel research discoveries," Anne Wojcicki, president and CEO of 23andMe, said in a statement. "This approach has the potential to be used in many other conditions."

[Excerpts and Link of the PLOS paper - AJP]

Web-Based Genome-Wide Association Study Identifies Two Novel Loci and a Substantial Genetic Component for Parkinson's Disease

Chuong B. Do1*, Joyce Y. Tung1, Elizabeth Dorfman1, Amy K. Kiefer1, Emily M. Drabant1, Uta Francke1, Joanna L. Mountain1, Samuel M. Goldman2, Caroline M. Tanner2, J. William Langston2, Anne Wojcicki1, Nicholas Eriksson1*

We conducted a large genome-wide association study (GWAS) of Parkinson's disease (PD) with over 3,400 cases and 29,000 controls (the largest single PD GWAS cohort to date). We report two novel genetic associations and replicate a total of twenty previously described associations, showing that there are now many solid genetic factors underlying PD. We also estimate that genetic factors explain at least one-fourth of the variation in PD liability, of which currently discovered factors only explain a small fraction (6%-7%). Together, these results expand the set of genetic factors discovered to date and imply that many more associations remain to be found. Unlike traditional studies, participation in this study took place completely online, using a collection of cases recruited primarily via PD mailing lists and controls derived from the customer base of the personal genetics company 23andMe. Our study thus illustrates the ability of web-based methods for enrollment and data collection to yield new scientific insights into the etiology of disease, and it demonstrates the power and reliability of self-reported data for studying the genetics of Parkinson's disease.

We found two novel associations at a genome-wide level of significance near SCARB2 (rs6812193) and SREBF1/RAI1 (rs11868035), both of which were replicated in data from [23]. We also report two novel associations (near RIT2 and USP25) just under the level of significance, one of which (RIT2) was also replicated. While it is difficult to pinpoint any causal genes from a GWAS, there are a few biologically plausible candidates worthy of discussion.

The PD-associated SNP rs6812193 lies in an intron of the FAM47E gene, which gives rise to multiple alternatively spliced transcripts, many of which are protein-coding; the functions of these hypothetical proteins are unknown. A more attractive candidate, located kb centromeric to the SNP, is SCARB2 (scavenger receptor class B, member 2), which encodes the lysosomal integral membrane protein type 2 (LIMP-2). LIMP-2 deficiency causes the autosomal-recessive disorder Action Myoclonus-Renal Failure syndrome (AMRF), which combines renal glomerulosclerosis with progressive myoclonus epilepsy associated with storage material in the brain [34]. LIMP-2 is involved in directing β-glucocerebrosidase to the lysosome where it hydrolyzes the β-glucosyl linkage of glucosylceramide [35]. Deficiency of this enzyme due to mutations in its gene (GBA) causes the most common lysosomal storage disorder, Gaucher's disease. Recently, mutations in GBA have also been identified in PD [36], pointing to a possible functional link between the newly identified candidate gene SCARB2 and PD.

rs11868035 appears in an intron of the alternatively spliced gene, SREBF1 (sterol regulatory element-binding transcription factor 1), within the Smith-Magenis syndrome (SMS) deletion region on 17p11.2. SREBF1 encodes SREBP-1 (sterol regulatory element-binding protein 1), a transcriptional activator required for lipid homeostasis, which regulates cholesterol synthesis and its cellular uptake from plasma LDL [37]. Studies of neuronal cell cultures have implicated SREBP-1 as a mediator of NMDA-induced excitotoxicity [38]. rs11868035 is directly adjacent to the acceptor splice site for the C-terminal exon of the SREBP-1c isoform of the protein [39], suggesting that the effect of the polymorphism may be specifically related to the splicing machinery for this protein. The mutation is also in strong LD with rs11649804, a nonsynonymous variant in the nearby gene RAI1 (retinoic acid-induced protein 1), which regulates transcription by remodeling chromatin and interacting with the basic transcriptional machinery. Heterozygous mutations in RAI1 reproduce the major symptoms of SMS, such as developmental and growth delay, self-injurious behaviors, sleep disturbance, and distinct craniofacial and skeletal anomalies [40]. Future work is needed to identify the functionally important variant(s) responsible for this association.

The SNP rs4130047, slightly below the genome-wide significance threshold, lies in an intron of the RIT2 (Ras-like without CAAX 2) gene that encodes Rit2, a member of the Ras superfamily of small GTPases. Though we do not claim this SNP as a confirmed replication, there are a number of reasons to suspect that this association may also be real. Rit2 binds calmodulin in a calcium-dependent manner, and is thought to regulate signaling pathways and cellular processes distinct from those controlled by Ras [41]. It localizes to both the nucleus and the cytoplasm. Independent of our study, RIT2 was previously proposed as a candidate gene for PD, based on the possibility that dopaminergic neurons may be especially vulnerable to high intracellular calcium levels, perhaps through an interaction with α-synuclein [42]. The PD-associated region contains another biologically plausible candidate gene, SYT4 (synaptotagmin IV), which encodes synaptotagmin-4, an integral membrane protein of synaptic vesicles thought to serve as a sensor in the process of vesicular trafficking and exocytosis. It is expressed widely in the brain but not in extraneural tissues [43]. Homozygous Syt4−/− mouse mutants have impaired motor coordination [44]. SYT4 is particularly interesting as a SNP near SYT11 (synaptotagmin XI) has been associated with PD in [22], and the encoded protein, synaptotagmin-11, is known to interact with parkin [45].

The suggestively associated SNP rs28233572 lies in a gene-poor region with only one candidate gene downstream, USP25, encoding ubiquitin specific peptidase 25, which regulates intracellular protein breakdown by disassembly of the polyubiquitin chains. Other ubiquitin-specific proteases (USP24, USP40) have been proposed as candidate genes for PD [46] (although USP24 fails to replicate here, see Table 3).

Our heritability estimates, which suggest that genetic factors account for at least one-fourth of the total variation in liability to PD, represent the tightest confidence bounds determined for the heritability of PD to date. These estimates, which rely on observed genetic sharing rather than predicted relationship coefficients, avoid confounding from shared environmental covariance by restricting attention to very distantly related individuals. Furthermore, they complement estimates of heritability from twin studies by considering large numbers of individuals with low amounts of genetic sharing, rather than small numbers of twin pairs with large amounts of genetic sharing.

These estimates should only be interpreted as lower bounds on the actual heritability of liability of PD for two reasons. First, they only reflect phenotypic variation due to causal variants in LD with SNPs on the genotyping platform. Second, they only capture the contribution to additive variance that arises from a polygenic model of many SNPs of small effect, but do not include the variance arising from known specific associations. This limitation is most apparent in our estimate of heritability based on only early-onset cases, which is considerably lower than reported in prior twin studies (e.g., in [10]). In early-onset PD, mutations in six specific genes (SNCA, PRKN, PINK1, DJ1, LRRK2, and GBA) have been reported to account for 16% of cases [47]; these specific mutations are not directly accounted for in our estimate, which is based on a polygenic model. We note that a similar effect may explain the low heritability estimate for early-onset PD in [48]. Thus, the actual heritability of PD, and the corresponding true upper bound on discriminative accuracy achievable through genetic factors, may be even higher than the estimates we provide.

Our estimates also indicate a substantial genetic component for late-onset PD, for which previous estimates of heritability have been inconclusive due to the lack of statistical power (e.g., 0.068 in [10] and 0.453 in [48]). One might ask, if late-onset PD is indeed so heritable, why do cases frequently appear sporadically in the general population? Following the analysis of [49], if one were to assume a given heritability and an average of three children per family, then the proportion of sporadic cases (i.e., no parent, child, sibling, grandparent, aunt or uncle, or first cousin with PD) among all PD cases would be 64% for a given prevalence; in the 23andMe cohort, 69% of PD cases would be considered sporadic by this definition based on self-reported family history. Similarly, the expected proportion of PD cases with no affected parent or sibling would be 88% under the same assumptions, compared with 84% as reported in [50], or 89% based on the cohort in [51]. These examples illustrate the fact that the presence or absence of a familial pattern cannot always be used to determine pathogenesis, especially for diseases that are rare and have a complex etiology.

Overall, our risk prediction results are consistent with a measured AUC of roughly 0.6. The cross-validated AUCs presented here should be distinguished from more usual measurements of AUC in genome-wide association studies, which are typically only estimated on the development set, and which rely on weighted combinations of SNPs with independently estimated odds ratios. In some cases, the bias resulting from lack of proper external validation can be quite large. For example, a simple genetic profile score based on multiplying together odds ratios for the SNPs in Table 2 appears to achieve a considerably higher AUC in the 23andMe data (higher still if no covariate adjustment is performed), making it appear competitive with some of the best models described in Table 5. However, when the same model is evaluated in the NINDS data, the AUC drops, exhibiting a decline in performance characteristic of models that have been overfit to their training data. In contrast, the consistency between the internal and external validation results for the models shown in Table 5 demonstrates not only the predictiveness of our models within the 23andMe cohort but also their ability to generalize to other populations.
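[The overfitting caveat in the paragraph above is easy to reproduce on synthetic data. Below is a sketch, assuming simulated genotype dosages and an L1-penalized ("sparse") logistic model rather than the paper's actual pipeline: the AUC measured on the data used for fitting comes out higher than the AUC on a held-out 20%, which is exactly why the authors insist on external validation. All sizes and effect values are invented.]

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Simulated genotype dosages (0/1/2 risk-allele copies): 2,000 people,
# 500 SNPs, of which only the first 10 truly affect disease liability.
rng = np.random.default_rng(0)
X = rng.integers(0, 3, size=(2000, 500)).astype(float)
beta = np.zeros(500)
beta[:10] = 0.3
logit = X @ beta - (X @ beta).mean()
y = rng.random(2000) < 1.0 / (1.0 + np.exp(-logit))

# Development-set AUC flatters the model; the held-out AUC approximates
# external validation on an independent cohort.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.05).fit(X_tr, y_tr)
print("training AUC:", roc_auc_score(y_tr, model.decision_function(X_tr)))
print("held-out AUC:", roc_auc_score(y_te, model.decision_function(X_te)))
```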

Our empirical demonstration that including SNPs beyond the genome-wide significant level provides improved discriminative power mirrors the recent results of [32], which also studied the performance of sparse regression methods in a risk prediction setting. In an applied setting where the goal is to achieve the best predictive accuracy rather than to isolate the contribution of individual genetic factors, however, even higher discriminative accuracies may be possible if one were to incorporate these covariates as part of the predictive models. Even without these, however, significant improvements in risk prediction are likely still possible, with our heritability analyses indicating asymptotic target AUCs above 0.8.

Our AUCs are generally conservative for a number of reasons. In the internal experiments, they were obtained by training on only 80% of the data. In the external experiments, the models included only the SNPs in common between the 23andMe and NINDS datasets and thus excluded several SNPs with large effects in LRRK2 and GBA that may add a percent or more to the AUC if included. Furthermore, our analyses adjusted for confounding from population structure and other covariates so as to ensure that the discriminative accuracies we reported were specifically due to genetic effects.

Finally, we note that data for the 23andMe cohort used in this study were acquired in a novel manner, using genotype and survey data acquired through a commercial online personal genetic testing service. The use of self-reported phenotype data raised some unique challenges. For example, our cohort was not a true population sample for a number of reasons, such as the general bias toward higher socioeconomic status, as typical of 23andMe customers. In general, however, we would not expect these ascertainment biases to substantially affect our conclusions unless their effects varied differentially between the case and control sets.

As another example, in compiling the cohort, we used participants with varying levels of completeness in their self-reported data (see Materials and Methods). Out of the 3,426 cases in the 23andMe cohort, though most cases reported having PD in a questionnaire, 482 affirmatively stated they had PD upon entry to the research study but did not fill out any PD-related questionnaire during the study. However, we did not see a large difference between those answering questions and not. Among the 11 associations presented in Table 2, only the association with MAPT showed a significant difference between the cohort who answered a questionnaire and those who did not (see Table S7). Also, approximately 84% of the cases filled out a questionnaire, and of them, over 96% reported a PD diagnosis. Even if a larger fraction (say 10–15%) of those who did not take a questionnaire did not have PD, the gain in power from the additional cases would more than offset the loss of power from having some 50 more false positive cases.

Despite the challenges associated with using self-reported data collected through online surveys, ultimately, our results lend credibility to the accuracy of this novel research design. For example, the agreement between our study and previous studies in terms of the ORs estimated for the 19 associations replicated in Table 3 strongly suggests that our cohort is similar to those used in other PD studies. Similarly, the consistency of AUCs and heritability estimates across our cohort and the NINDS cohort both suggest a limited role of bias in our study.

Importantly, our mode of data collection also provided a number of clear benefits. The use of internet-based techniques enabled rapid recruitment of a large patient community. The 3,426 cases in this study were enrolled in about 18 months, with over half joining in the first month of the study. Also adding significantly to the power and robustness of this study was the availability of a large cohort of controls derived from the 23andMe customer base. By using a non-traditional recruitment approach, we thus were able to attain good power for our study through large sample sizes. To our knowledge, this study represents the largest genome-wide association study of Parkinson's disease conducted on a single cohort to date, with only a recent meta-analysis achieving a larger number of cases [22]. We suggest that this methodology for study design may prove advantageous for other conditions where the advantage of having a large cohort is paramount for detecting subtle genetic effects.
...

In summary, we have for the first time used a rapid, web-based enrollment method to assemble a large population for a genome-wide association study of PD. We have replicated results from numerous previous studies, providing support for the utility of our study design. We have also identified two new associations, both in genes related to pathways that have been previously implicated in the pathogenesis of PD. Using cross-validation, we have provided evidence that many suggestive associations in our data may also play an important role. Using analytic approaches recently developed for GWAS that take into account the ascertainment bias inherent in a case-control population, we have estimated the genetic contribution to PD in this sample. These findings confirm the hypothesis that PD is a complex disorder, with both genetic and environmental determinants. Future investigations, expanded to include environmental as well as genetic factors, will likely further refine our understanding of the pathogenesis of PD and, ultimately, lead to new approaches to treatment.


Researchers Develop Methylation-Based Model for Predicting Age from Spit DNA

June 23, 2011

By a GenomeWeb staff reporter

NEW YORK (GenomeWeb News) – In a study appearing online last night in PLoS ONE, a University of California at Los Angeles research team demonstrated that it's possible to predict a person's age from saliva samples using DNA methylation patterns.

Those involved in the study say the findings could have both forensic applications and medical implications, particularly for finding individuals whose biological age differs from their chronological age.

The researchers used microarrays to look at DNA methylation patterns in spit samples from nearly three-dozen male twin pairs between the ages of 21 and 55 years old, tracking down nearly 90 sites where methylation coincides with age. From follow-up experiments involving another group of men and women between the ages of 18 and 70 years old, the team was able to come up with a model that predicts an individual's age to within an average of five years based on methylation status at two sites in the genome.

"Our approach supplies one answer to the enduring quest for reliable markers of aging," senior author Eric Vilain, a professor of human genetics at the University of California at Los Angeles who is also affiliated with the school's Center for Society and Genetics, said in a statement. "With just a saliva sample, we can accurately predict a person's age without knowing anything else about them."

DNA modifications change with tissue development, differentiation, and age, Vilain and his co-authors explained. For the current study, they looked at whether it was possible to exploit these shifts to find age markers, focusing on methylation at cytosine residues — first in identical twins and then in unrelated individuals from the general population.

"While certain methylation changes are genetically controlled, environmental exposure and stochastic processes can lead to a change in methylation patterns," they explained. "In this context, identical twins can be considered replicates of the same developmental and aging experiment."

Using Illumina HumanMethylation27 arrays, the researchers assessed methylation patterns at nearly 16,100 CpG sites in the genomes of 34 pairs of identical male twins between the ages of 21 and 55.

When they sifted through the data for the twins using an analytical approach known as weighted correlation network analysis, the team found five methylation modules containing loci with comparable methylation patterns.

Within the modules, researchers narrowed in on 88 loci at which cytosine methylation status depended on an individual's age, including 69 showing positive correlation and 19 showing negative correlation. The sites fell in and around 80 different genes, they noted, including several genes that have been implicated in cardiovascular, neurological, and other conditions.
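
As a rough illustration of this kind of screen, the sketch below correlates each CpG site's methylation level with age and keeps the strongly correlated sites. All values are synthetic stand-ins (the planted effect sizes, the 0.5 correlation cutoff, and the 1,000 sites are invented for the example); the actual study worked from roughly 16,100 array probes and used weighted correlation network analysis, which this sketch does not reproduce.

```python
import numpy as np

# Illustrative age-correlation screen on synthetic methylation data.
rng = np.random.default_rng(2)
n_subjects, n_sites = 68, 1000           # 68 subjects stands in for 34 twin pairs
age = rng.uniform(21, 55, n_subjects)

meth = rng.normal(0.5, 0.1, (n_subjects, n_sites))
meth[:, :30] += 0.008 * age[:, None]     # plant 30 positively age-correlated sites
meth[:, 30:40] -= 0.008 * age[:, None]   # and 10 negatively correlated ones

# Pearson correlation of every site with age, via standardized columns.
a = (age - age.mean()) / age.std()
m = (meth - meth.mean(axis=0)) / meth.std(axis=0)
r = (m * a[:, None]).mean(axis=0)

hits = np.flatnonzero(np.abs(r) > 0.5)
print(f"{hits.size} age-associated sites: "
      f"{(r[hits] > 0).sum()} positive, {(r[hits] < 0).sum()} negative")
```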

Because methylation at three sites in the promoter regions of the EDARADD, TOM1L1, and NPTX2 genes was particularly well correlated with age, the team used either targeted bisulfite sequencing or Sequenom MassArrays to evaluate CpG methylation profiles for these three genes in DNA from saliva samples from 22 of the twins and from another 31 men and 29 women who were between 18 and 70 years old.

Although methylation status for all three genes corresponded to age in the DNA from the male saliva samples, the researchers reported, methylation profiles for just two of these — EDARADD and TOM1L1 — coincided with age in the females.

Meanwhile, the team's predictive model, based on methylation profiles for cytosine residues near the EDARADD and NPTX2 genes, explained some 73 percent of the age variance and could predict age to within 5.2 years on average.
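
A two-site predictor of this kind can be pictured as ordinary least squares of age on two methylation beta values. The sketch below is a minimal stand-in on made-up data, with sites loosely labeled after EDARADD and NPTX2; it is not the published model or its coefficients.

```python
import numpy as np

# Synthetic two-CpG age predictor: age ~ b0 + b1*site1 + b2*site2.
rng = np.random.default_rng(1)
n = 60
age = rng.uniform(18, 70, n)

# Two hypothetical CpG sites: one loses, one gains methylation with age.
cpg_edaradd = 0.9 - 0.006 * age + rng.normal(0, 0.02, n)  # negative correlation
cpg_nptx2 = 0.2 + 0.005 * age + rng.normal(0, 0.02, n)    # positive correlation

X = np.column_stack([np.ones(n), cpg_edaradd, cpg_nptx2])
beta, *_ = np.linalg.lstsq(X, age, rcond=None)
pred = X @ beta

r2 = 1 - np.sum((age - pred) ** 2) / np.sum((age - age.mean()) ** 2)
mae = np.mean(np.abs(age - pred))
print(f"R^2 = {r2:.2f}, mean absolute error = {mae:.1f} years")
```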

"[O]ur ability to predict an individual's age to an average accuracy of 5.2 years could be used by forensic scientists to estimate a person's age based on a biological sample alone, once the model has been tested in various biological tissues," the study authors wrote.

And, they say, such predictive models may find favor for diagnosing and treating some age-related diseases and for finding situations in which an individual's biological age differs from their age in years.

"Doctors could predict your medical risk for a particular disease and customize treatment based on your DNA's true biological age, as opposed to how old you are," Vilain said in a statement.

Epigenetic Predictor of Age [Full free text of PLOS]

Bocklandt S, Lin W, Sehl ME, Sánchez FJ, Sinsheimer JS, et al. (2011) Epigenetic Predictor of Age. PLoS ONE 6(6): e14821. doi:10.1371/journal.pone.0014821

Published: June 22, 2011

Abstract

From the moment of conception, we begin to age. A decay of cellular structures, gene regulation, and DNA sequence ages cells and organisms. DNA methylation patterns change with increasing age and contribute to age related disease. Here we identify 88 sites in or near 80 genes for which the degree of cytosine methylation is significantly correlated with age in saliva of 34 male identical twin pairs between 21 and 55 years of age. Furthermore, we validated sites in the promoters of three genes and replicated our results in a general population sample of 31 males and 29 females between 18 and 70 years of age. The methylation of three sites—in the promoters of the EDARADD, TOM1L1, and NPTX2 genes—is linear with age over a range of five decades. Using just two cytosines from these loci, we built a regression model that explained 73% of the variance in age, and is able to predict the age of an individual with an average accuracy of 5.2 years. In forensic science, such a model could estimate the age of a person, based on a biological sample alone. Furthermore, a measurement of relevant sites in the genome could be a tool in routine medical screening to predict the risk of age-related diseases and to tailor interventions based on the epigenetic bio-age instead of the chronological age.

Introduction

Throughout development, cells and tissues differentiate and change as the organism ages. This includes alterations to telomeres, accumulation of DNA mutations, decay of cellular and organ structures, and changes in gene expression [1]. Both differentiation of tissues and ageing effects are at least partially caused by chemical modifications of the genome, such as DNA methylation.

Monozygotic (MZ) twins form an attractive model to study methylation changes with age. At the time of separation both embryos have nearly identical methylation patterns. While certain methylation changes are genetically controlled, environmental exposure and stochastic processes can lead to a change in methylation patterns. In this context, identical twins can be considered replicates of the same developmental and ageing experiment.

Several studies have investigated the epigenetic state of a small number of selected genes or CpG islands in subjects of varying age [2] or have measured the global changes in DNA methylation with increasing age [3]. Recently, unbiased genome-wide studies have documented age effects on DNA methylation in cultured cells [4], mice [5], and humans [6], [7], [8]. Most of these reports' subjects were of a limited age range, and the continuity of the age-related changes has been unclear. Therefore, estimating the age of a biological sample based on methylation values has not been possible.

[This paper, reporting how the authors bumped into something quite different from what they were looking for, stands out in significance far beyond what the authors claim in their PLOS paper. Their claim is focused on the predictability of "biological age". This is very noteworthy, but pales in comparison with the modern and now clinched fact that the genome is NOT "static", but changes (as its function is regulated in a recursive manner, see book below and references to two reversed axioms of genome informatics). The "aging process" - that is, the rendering of auxiliary information unreadable upon perusal in recursion - is a physiological process, now proven by this experimental finding. However, the pattern of change of methylation, as e.g. shown in the Google Tech Talk YouTube, can also be pathological (e.g. cancerous) - thus Genome (mis)Regulation by Fractal Iterative Recursion, a technology IP of HolGenTech, Inc., now stands on even firmer, experimentally verified ground. This entry can be discussed on the FaceBook page of Andras Pellionisz]


Goodbye, Genetic Blueprint

What the new field of epigenetics reveals about how DNA really works.

By Christine Kenneally

Posted Monday, June 20, 2011, at 9:52 AM ET

There are almost as many metaphors for genes as there are genes. One of the most familiar, and the hardest to let go of, is the tidy blueprint, at once reassuringly clear and oppressively deterministic: Our genome is the architectural plan for who we are. It tells our body how to build itself, setting our height, our health, and even our moods since before we are born. Small wonder that we imagine if we can read our genome, we will discover not just the truth of ourselves but perhaps our future, too. Remember the high hopes that spurred on the Human Genome Project in the 1990s? Though the genetic catalog is now largely complete, we still await many of the anticipated insights, and in Epigenetics: The Ultimate Mystery of Inheritance, Richard Francis, a writer with a biology Ph.D., traces the emergence of a different genetic paradigm. Our DNA shapes who we are, Francis reports from the research forefront, but it is far from a static plan or an inflexible oracle; DNA gets shaped, too. For good or ill, the forces that determine our fate can't be captured by anything so neat as a blueprint.

Francis's primer introduces a new field, whose roots predate the rise of pure genetic determinism. How is DNA itself shaped? The search for answers begins in the late-19th-century work of scientists such as Hans Driesch, whose study of sea urchin embryos revealed that the cell plays a key administrative role in an organism's development. He discovered that if you take cells from one location in the embryo—the area that will become, say, the spines—and plant them in another—the mouth area—their function changes: You don't get spines growing out of the mouth, you get a normal mouth. A cell's identity doesn't arise from a preordained genetic recipe inside it. Crucially, it is the cues that a cell gets from neighboring cells that affect how the genes inside it behave.

Epigenetics has taken its cue from this process, and sets out to explore not just how cells control the genes inside them but also how altered genes are passed on when cells reproduce—both within an organism's lifetime and, more fantastically, across generations. If you detect another historical antecedent, you're right. Looming over this new field is the once-derided Lamarck, who proposed in the early 19th century that if a giraffe, for example, consistently stretches its neck to reach leaves, its children will be born with longer necks. Lamarck's ideas about how traits are acquired and passed down were mostly wrong. But the basic notion that an event in a parent's life can sculpt fundamental traits in a child, once consigned to the dustbin of biology, has been revived. The epigenetic quest is to discover how chemical attachments to genes shape the fate of an animal by altering the genes' long-term expression.

If cellular regulation of genetic expression sounds complicated, it is, which is one reason—aside from our allegiance to the idea of some foreordained pattern to our lives—the epigenetic field has been slow to develop. The research that has been accumulating for decades upends the conception of "controller" genes that are either "on" or "off." Francis is a thorough guide to the many ways in which personality and health can play out through our genes but not be coded for in DNA. He proceeds step-by-step. After all, this is unsettling terrain: The notion of environmental forces that can be genetically determining does not fit our deeply etched nature vs. nurture categories. Francis begins by explaining what he calls "garden variety," or short term—rather than epigenetic—gene regulation, by way of androgens, like testosterone. This happens in normal development, but also in abnormal situations, such as when athletes abuse steroids. Where normal testosterone changes gene expression, extra testosterone causes a frantically altered gene expression, which leads to strong muscles, shrunken testes, and out-of-control aggression. The changes are direct. You take the steroids, they affect some of the cells in your body, the gene activity inside those cells changes, and then your body changes. The changes in garden-variety gene regulation end with the affected cells, and with you. When cells divide they do not pass along the abnormal genetic activation. The children of a steroid abuser inherit their parent's genes, but they do not inherit the synthetic steroid-induced changes to gene expression.

But gene expression doesn't change merely when you put chemicals in your body. The connections between people may shape it, too. In the 1990s, scientists began to explore how social status can influence biology. In one kind of African cichlid fish, for example, the males are either "territory-owning" tough guys who have vividly bright colors, huge testes, larger neurons, and lots of testosterone. Or they are nonterritorial and much less striking. The low-testosterone males do not get to breed. Scientists discovered they could manipulate the social status of fish, their testosterone level and all the hoopla that accompanies it, by changing only the fish's "friends." If they put big, territorial fish in a tank with much bigger, territorial males, the former-breeders lost color, their testes and neurons shrank, and they literally transformed into nondominant fish. When they put nonterritorial wallflowers in a tank with females and smaller males, they too were transformed, but in the other direction. As Francis points out, we obviously can't run this kind of experiment with humans. It nevertheless shows how context can change the way genes work.

Changes that arise from normal gene regulation happen in the short term, but epigenetic changes alter the way that genes react to the world for a very long time—even when the original cause has vanished. It is this rather shocking long-term influence that makes epigenetics one of the most alluring—and terrifying—shifts in how we think about our genes. Epigenetic changes can occur in adulthood, in childhood, even in utero (a phenomenon explained in Origins by Annie Murphy Paul), with the consequence that an event you experienced as a child could dictate the ways your genes behave in a different situation as an adult. It may have been simple-minded to assume that we are programmed by our genes, but there was a weird egalitarianism in that: Even if we get different genes to begin with, we are under their sway in the same way. Epigenetic change means that not only do we start out as unwitting participants in a genetic lottery, but environmental forces we cannot see or control can mess with our genetic hardware and change our destiny. At the level of DNA, epigenetic change occurs when particular chemicals become attached to the gene, and stay there, altering how the gene behaves. The first of these attachments to be discovered, and still the best known, is from the methyl group. In 1980, it was shown that different degrees of methylation can alter gene expression in different ways. Demethylation can cause problems, too. Depending on the genes involved, one consequence can be unconstrained cell division, otherwise known as cancer.

The causes of epigenetic attachments are various, and the evidence so far indicates they range from pollution to stressful social interactions. Studies on the long-term effects of a pregnant woman's poor nutrition suggest that the food our mothers eat while we are in the womb can shape our gene expression. So, too, the food they don't eat. The best data on long-term genetic change come from the terrible Dutch famine of 1944, when the Nazis blockaded food supplies, disrupted transport, and flooded farmlands in western Holland. It has emerged as the classic case study in the field, thanks to the exemplary record-keeping of the Dutch, which gives researchers solid longitudinal data on the famine's many far-reaching effects. For children who were in utero at the time of the famine, the consequences include a higher risk of schizophrenia, antisocial personality disorder and other psychological disturbances, and even 50 years down the road, a greater likelihood of becoming obese. At first glance it may seem that the legacy is poor health in general. But that's not how it works. The impact depends on exactly when the fetus was exposed to the famine, Francis reports. Women whose mothers suffered the famine in the first trimester have a higher risk of breast cancer. Those whose mothers suffered in the second trimester have problems with lungs and kidneys.

The first person to realize what a data trove the Dutch records were was Clement Smith, a U.S. doctor who was flown to the Netherlands in 1945 to help. He found that children born during the famine were much smaller than those born before. Numerous teams have revisited the data, which have been updated during the decades since, and they have discovered many ways the famine is still playing out in the lives of Dutch people, even those who weren't born at the time. The studies became epigenetic 20 years ago, when scientists began to look for altered genes in famine survivors to see whether changed DNA explains the ways in which the survivors differ.

In 2009, one team unearthed a tantalizing result: Examining the blood cells of adults who were in the womb at the time of the famine, researchers discovered unusual epigenetic attachments on the gene that codes for a hormone called insulin-like growth factor 2. The hormone is crucial for growth, particularly in fetuses. It turns out the IGF2 gene of the famine group is methylated to a different degree than the same gene in a non-famine group. Even though scientists haven't yet traced the specific causal chain between the epigenetic attachments, the genes, and people's lives, those attachments are a smoking gun for epigenetic change in the womb, and health issues many decades later.

Even more fascinating, and unnerving, it appears that the consequences of epigenetic change may stretch over several lifetimes. In one Swedish village, which also has records of crop harvests that go back hundreds of years, the paternal grandsons of men who experienced famine were less likely to have cardiovascular disease than their peers whose paternal grandfathers did not experience famine. But, wait, conventional wisdom says only genes are supposed to be passed on to the next generation. Most epigenetic attachments are stripped away from genes in the creation of sperm and egg cells. Yet it seems that a record of some epigenetic attachments is passed on and then recreated in the genome of the embryo, too. That means that an event in your parent's life that occurred before you were conceived could affect how your genes work today. In other words, the sins of the fathers may be visited on the deoxyribose nucleic acids of the sons. How malleable are our sons and daughters? The mechanisms involved are extraordinarily subtle. Researchers are now only beginning to understand how and why this happens.

It's almost enough to make one nostalgic for the simplicity of old-style genetic determinism, which at least offered the sense that the genetic hand you were dealt at birth was the same one you would play your whole life—except that epigeneticists hold out the promise that the blessings of a single life, too, can be passed on. Disease researchers, Francis reports, have hopes that the effects of abnormal epigenesis may be reversed. For example, it's possible that the damage caused by many cancers is epigenetic. If those epigenetic attachments can be altered, then it's possible the cancer can be stopped. Still, even if we are discovering that an extraordinary range of conditions may be epigenetic, not all of them are. There are still specific diseases that follow a deterministic path. If you are unlucky enough to draw the Huntington's mutation in the genetic shuffle, you will develop the disease. Francis rightly emphasizes the wonder of epigenetics and the molecular rigor it brings to the idea that life is a creative process not preordained by our genome any more than it is preordained by God. Yet even as epigenetic research invites dreams of mastery—self-creation through environmental manipulation—it also underscores our malleability. There is no easy metaphor for this combination. But if we must have one, we should at least start with the cell, not the gene. The genome is no blueprint, but maybe the cell is a construction site, dynamic, changeable, and complicated. Genes are building materials that are shaped by the cell, and they in turn create materials used in the cell. Because the action at the site is ongoing, a small aberration can have a small effect, or it can cascade through the system, which may get stuck. Recall that your body is a moving collection of these building sites, piled in a relatively orderly way on top of one another. Malleability? It's an ongoing dance with chaos, but, incredibly, it works.

[Epigenomics working together with Genomics [hence called HoloGenomics] was expounded in the peer-reviewed science paper The Principle of Recursive Genome Function, invoking "fractal iterative recursion" - what Francis calls "cascading through the system" and a "dance with chaos" - nonlinear dynamics (chaos and fractals being two sides of the same coin) that demonstrably operate in complex biological systems. The concepts, reversing two mistaken axioms that blocked progress for over half a Century, were also popularized in the Google Tech Talk YouTube, both in 2008. - This entry may be discussed on the FaceBook page of Andras Pellionisz]
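
For readers unfamiliar with the term, "iterative recursion" of the kind invoked above can be illustrated with a textbook L-system, in which each rewriting step operates on the output of the previous step and a fractal branching pattern emerges. The toy below is a generic construction offered only as an illustration of the idea, not the FractoGene method itself.

```python
# Toy L-system: each iteration rewrites the output of the previous one,
# so every state is, by construction, a function of the preceding state.
RULES = {"F": "F[+F]F[-F]F"}  # each segment sprouts two side branches

def iterate(axiom: str, steps: int) -> str:
    s = axiom
    for _ in range(steps):
        # rewrite every symbol of the current string in parallel
        s = "".join(RULES.get(ch, ch) for ch in s)
    return s

for n in range(4):
    print(f"step {n}: string length {len(iterate('F', n))}")
```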


Cells may stray from 'central dogma'

Published online 19 May 2011 | Nature | doi:10.1038/news.2011.304

Erika Check Hayden

The ability to edit RNA to produce 'new' protein-coding sequences could be widespread in human cells.

[Crick's own handwritten "Dogma" ruled 1956-2008 - see website, peer-reviewed science paper, Google Tech Talk YouTube - AJP]

All science students learn the 'central dogma' of molecular biology: that the sequence of bases encoded in DNA determines the sequence of amino acids that makes up the corresponding proteins. But now researchers suggest that human cells may complicate this tidy picture by making many proteins that do not match their underlying DNA sequences.

In work published today in Science [1], Vivian Cheung at the University of Pennsylvania in Philadelphia and her team report that they have found more than 10,000 places where the base (A, C, G or U) in a cell's RNA messages is not the one expected from the DNA sequences used to make the RNA read-out. When some of these 'mismatched' RNAs were subsequently translated into proteins, the latter reflected the 'incorrect' RNA sequences rather than the underlying DNA sequence.
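
Conceptually, calling such RNA-DNA differences reduces to comparing the consensus base observed in RNA reads with the genomic base at the same position. The sketch below illustrates only that comparison, with made-up positions and bases; the study's actual pipeline worked from deep sequencing alignments with many quality filters.

```python
# Sketch of flagging candidate RNA-DNA differences (RDDs).
# Positions and bases are hypothetical, for illustration only.
dna = {1045: "A", 2310: "G", 5122: "C"}        # genomic base per position
rna_calls = {1045: "A", 2310: "A", 5122: "U"}  # consensus base in RNA reads

def to_dna_alphabet(base: str) -> str:
    # RNA uses U where DNA uses T, so normalize before comparing
    return "T" if base == "U" else base

for pos in sorted(dna):
    g, r = dna[pos], to_dna_alphabet(rna_calls[pos])
    if g != r:
        print(f"position {pos}: DNA {g} -> RNA {r} (candidate RNA-DNA difference)")
```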

It was already known that some cells 'edit' RNA after it has been produced to give a new coding sequence, but the new work suggests that such editing occurs much more often in human cells than anyone had realized, and that hitherto unknown editing mechanisms must be involved to produce some of the changes observed. If the finding is confirmed by other investigators — and some scientists already say they see the same phenomenon in their own data — it could change biologists' understanding of the cell and alter the way researchers study genetic contribution to disease.

Editing the central dogma

"The central dogma says that there is faithful transcription of DNA into RNA. This challenges that idea on a much larger scale than was known," says Chris Gunter, director of research affairs at the HudsonAlpha Institute for Biotechnology in Huntsville, Alabama.

The work suggests that RNA editing is providing a previously unappreciated source of human genetic diversity that could affect, for instance, how vulnerable different people are to disease.

Cheung does not know whether there are heritable changes, passed down from parent to child, that affect how much RNA editing occurs in different people. But scientists already know of a handful of RNA editing proteins that play a role in human health, such as the APOBEC enzymes, some of which have antiviral activity. Researchers investigating the connection between genetics and disease have been stymied by their inability to find strong connections between genetic variation and risk for most common diseases, leading researchers to wonder where the 'missing heritability' is hiding. The new study at least provides one place to look.

"These events could explain some of the 'missing heritability' because they are not present in everyone and therefore introduce a source of genetic variation which was previously unaccounted for," says Gunter.

Living with error

But because they do not know what mechanism might be responsible, most scientists contacted by Nature remained cautious about the significance of the finding and its possible impact on biology. Some say it is possible that technical errors could have caused the results. For instance, high-throughput sequencing machines can make systematic errors in DNA and RNA sequencing experiments.

And even if the findings hold up, it is still too early to know whether 'mismatching' plays an important role in human biology or not.

"The devil is in the details — to determine if the results are caused by some unintended technical or computational flaw or are correctly describing a biological phenomenon," says Thomas Gingeras at the Cold Spring Harbor Laboratory in New York. "Assuming the latter, I would be encouraged to look at our own large data sets to see if we see similar phenomenona."

Other researchers, such as Manolis Dermitzakis at the University of Geneva in Switzerland, say they are seeing the phenomenon in their data. Indeed, Cheung's team drew in part on data generated by the 1000 Genomes project, of which Dermitzakis is a member. However, Dermitzakis says it is still unclear how important the phenomenon is for disease susceptibility.

Cheung's group attempts to address many of these concerns, some of which were raised when the preliminary work was presented last November (see 'DNA sequence may be lost in translation') at the annual meeting of the American Society for Human Genetics, in Washington DC. Since then, the team has been looking for possible errors that could have caused the results.

For example, the researchers first observed DNA–RNA 'mismatches' in data generated by next-generation sequencing technologies in the International HapMap Project and the 1000 Genomes project. They have now confirmed some of the putative DNA-to-RNA changes using traditional Sanger sequencing, and have found the same changes in different people, across different cell types, and reflected in proteins.

Cheung says that at first "we truly did not believe it". But after performing the additional experiments "we cannot explain this by any obvious technical errors, so we are pretty convinced that this is real," she says.

Researchers who study RNA editing, which up to now was known mostly from plants and some unicellular human parasites, are intrigued by the new finding.

Kazuko Nishikura of the Wistar Institute in Philadelphia says she was sceptical at first, because some of the base changes could not be explained by previously identified mechanisms. But she was convinced once she saw Cheung's data.

"It's really exciting, because this study reports a different variety of RNA editing that is much more widespread than existing mechanisms," Nishikura says.

Comment: (Andras Pellionisz)

It is a pitiful fact that “All science students learn the ‘central dogma’ of molecular biology” as if science were based on “Dogma”. To the contrary, science is based on facts explained not by negative pontification but by predictive theories that can be experimentally verified or refuted, and updated as required by new facts. Certain particularly crass pseudo-scientists (hiding their heads in the sandbox of “Dogmatic Ideologies”), when confronted with a new theory, like “The Principle of Recursive Genome Function” (peer-reviewed science paper, popularized by the “Google Tech Talk YouTube”, both in 2008), claim ignorance of the cardinal issues, behind their shield of incompetence in information theory. It is too bad (for them), since the “Battelle Study” identified how old-school, biochemistry-based Genomics turned into new-school, information-theory-based Genomics. Indeed, a $796 Bn economic impact in the USA alone is unsustainable without software-enabling algorithmic approaches to genome function, particularly of genome (mis)regulation. Regarding “Junk DNA”, the historical challenge is not the endurance-game of some provincial ideologies but the vital need to stop ignoring the informatics of “Junk DNA diseases”, most notably cancers. Regarding the “Central Dogma”, it must be superseded by the science of e.g. fractal iterative recursion, where every event is, by definition, a function of preceding events. As Ms. Hayden so aptly pointed out elsewhere, the result appears complex, but its underlying mathematics (when understood) is lucid; “There is nothing simpler than a problem solved” – Faraday. The Battelle Study also identifies our times as witnessing the most important scientific-technological paradigm-shift ever. Therefore, those whom the Study refers to as “retards” note, in any “lucid heresy”, only the latter part – new understanding does not illuminate them; rather, it irritates their die-hard habits.


The fractal globule as a model of chromatin architecture in the cell

Leonid A. Mirny

Harvard-MIT Division of Health Sciences and Technology, and Department of Physics, Massachusetts Institute of Technology, Cambridge, MA USA

Chromosome Res. 2011 January; 19(1): 37-51.

Published online 2011 January 28. doi: 10.1007/s10577-010-9177-0.

The fractal globule is a compact polymer state that emerges during polymer condensation as a result of topological constraints which prevent one region of the chain from passing across another one. This long-lived intermediate state was introduced in 1988 (Grosberg et al. 1988) and was not observed in experiments or simulations until recently (Lieberman-Aiden et al. 2009). Recent characterization of human chromatin using a novel chromosome conformational capture technique brought the fractal globule into the spotlight as a structural model of the human chromosome on scales of up to 10 Mb (Lieberman-Aiden et al. 2009). Here, we present the concept of the fractal globule, comparing it to other states of a polymer and focusing on its properties relevant for the biophysics of chromatin. We then discuss properties of the fractal globule that make it an attractive model for chromatin organization inside a cell. Next, we connect the fractal globule to recent studies that emphasize topological constraints as a primary factor driving formation of chromosomal territories. We discuss how theoretical predictions, made on the basis of the fractal globule model, can be tested experimentally. Finally, we discuss whether fractal globule architecture can be relevant for chromatin packing in other organisms such as yeast and bacteria.
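
One experimentally testable signature discussed in this literature is the scaling of contact probability with genomic distance: roughly P(s) ~ s^-1 for a fractal globule, versus about s^-3/2 for an equilibrium globule. The sketch below shows how such an exponent can be estimated from a Hi-C-style contact matrix; the matrix here is synthetic, generated with a built-in s^-1 decay rather than taken from real data.

```python
import numpy as np

# Estimate the contact-probability scaling exponent P(s) ~ s^alpha
# from a (synthetic) Hi-C-style contact matrix.
rng = np.random.default_rng(0)
n = 500  # hypothetical number of genomic bins

# Synthesize counts whose expectation decays as 1/s (fractal-globule-like).
i, j = np.indices((n, n))
s = np.abs(i - j)
counts = rng.poisson(100.0 / np.maximum(s, 1))

# Average contacts at each genomic separation, then fit a log-log slope.
seps = np.arange(1, n // 2)
p_s = np.array([counts[s == d].mean() for d in seps])
alpha = np.polyfit(np.log(seps), np.log(p_s), 1)[0]
print(f"estimated exponent: {alpha:.2f} (fractal globule ~ -1, equilibrium ~ -1.5)")
```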


The Principle of Recursive Genome Function: Quantum Biophysical Semeiotics clinical and experimental evidences

Sergio Stagnaro and Simone Caramel

May 8th, 2011

Conclusions: In this paper we have shown that Quantum Biophysical Semeiotics clinical and experimental evidence is consistent with and fully confirms the Principle of Recursive Genome Function. We can argue that the genetic alteration of the mit-DNA is reversible, generally not due to a lack or impairment of genes, but due to qualitative information imperfections in gene networking which lead to the activation of inappropriate genes or to inefficient configurations, defective or missing in some cases. Similarly, in microvessels there are communication obstructions which slow down the communication itself (blood flow) from a structural and functional point of view. In parallel, it may be assumed that the alteration of the mit-DNA is reversible during a lifetime, and not just over overlapping generations, not because we create new genes from scratch, or because we are able to repair single genes in some way in a patient (as in genetic determinism), but because we intervene holistically on the whole, so that a proper and customized release of 'information' gives resonance to a virtuous feedback mechanism between DNA, RNA and downstream structures (tissues, cells, proteins, mitochondria, ...) and vice versa, restoring physiological DNA dynamics...


The Myth of Junk DNA - an issue that fell from science in 2006 to a rejected ideology for the masses to chew on as an Amazon bestseller

Product Details
Paperback: 150 pages
ISBN-10: 1936599007
Amazon Bestsellers Rank #8 in Books > Professional & Technical > Medical > Basic Sciences > Genetics

This review is from: The Myth of Junk DNA (Paperback)

In recent years, a number of leading boosters of Neo-Darwinism have claimed that much of our genome is filled with functionless "junk DNA"… But what if so-called "junk DNA" is not junk at all? In this meticulously documented book, molecular biologist Jonathan Wells … presents a growing flood of research that is showing the widespread functionality of DNA previously dismissed by leading Darwinists as little more than genetic garbage. For non-scientists, some of the detail provided here may be tough-sledding. But Wells does his best to make things clear even for those who may not have a detailed understanding of modern genetics, and his book includes a helpful glossary of key terms as well as 17 black and white illustrations. Chapter 9 provides a useful (and understandable!) summary of the case for functionality in "junk DNA." For scientifically-inclined readers, Wells' careful distillation in Chapters 3-8 of hundreds of recent journal articles documenting various functions for so-called "junk DNA" will be extremely valuable.

This review is from: The Myth of Junk DNA (Paperback)

… this book is a gold mine for anyone interested in the junk DNA issue and, indeed, the whole genetics field. The references (page 114 to page 159) alone are worth the price of the 173-page book (I paid $9.95 from Amazon). This readable, well-documented tome includes a glossary for those not in the field, to ensure that the book is accessible to every educated adult. In the last decade more and more research has supported the conclusion that much, if not most, of the so-called junk DNA has an important, if not a critical, function, and I predict that this trend will continue for some time. Darwinism turned out to be a science stopper in this case. Instead of concluding that DNA with no known function is junk, as was common, researchers should have been asking what it does - the same question that would have saved us a lot of grief when Darwinists labeled organs whose function was unknown as vestigial (meaning, at the time, that they were evolutionary leftovers from the past without a function).

This review is from: The Myth of Junk DNA (Paperback)

As exploration began to reveal the full extent of the human genome in the 1970s, it was discovered, much to the shock of early researchers, that only a small portion of our DNA served explicitly to encode proteins. What did the rest of it do? More than a few researchers proposed that most of our genome was "junk," relics of an evolutionary past that served no current function. Despite the caution of some geneticists, who wisely suggested that junk DNA was merely a category representing the limits of our understanding, several self-proclaimed spokesmen for science pounced on the idea that junk DNA was in fact proof that popular alternatives to Darwinian evolution were simply wrong… Much of what was originally thought to be "junk" in fact serves a purpose.

The bulk of Wells's latest book is a summation of hundreds of articles dealing with the non-protein-coding portions of DNA. Indeed, the small-print endnotes to the book make up more than a quarter of its total pages. But in general, Wells finds two lines of evidence that these DNA sequences are indeed functional. The first is that these sequences are ultra-conserved, both in humans and other animals. If they were truly non-functional, we would expect them to gradually deteriorate and be subject to a greater number of mutations than the DNA which we know to be functional. But in fact, this is not the case. This line of evidence, of course, is indirect. But a more direct line of evidence is found in the number of positive functions that have been uncovered. Apparently repetitive sequences of DNA, for example, serve to deactivate one of the two X chromosomes in female mammals to promote healthy development. Other functions, including RNA coding, are common.

Taken as a whole, this book is a useful summary of the current literature…. But there is another sense in which the story of "Junk DNA" (or rather, the story of how it was widely accepted and then gradually rejected) is in fact devastating to the Darwinian paradigm. Junk DNA was an important "proof" for many defenders of "science" that evolution was "true." And yet, one of the main arguments against the whole concept of junk DNA is that it is not compatible with what we know of natural selection. In other words, Darwinian evolution is at once proved both by the presence and absence of junk DNA. And given that this is the case, falsifying Darwinian evolution is nearly inconceivable. In the final analysis, real science can be falsified, but ideologies cannot.

This review is from: The Myth of Junk DNA (Paperback)

This is a scholarly and impeccable book that makes a cogent case against the contemporary Darwinian story of intergenic sequences and conveys it at a level accessible to the general reader.

["So much 'Junk DNA' in our Genome" (see full facsimile of the 4.5 page science talk by Dr. Ohno, 1972) intended not as a metphor but a fully serious science issue was branded as "a suspect way to arrive at 1% figures of structural utility to 99% junk" (see first person Boyer to rise immediately after the utterance of the nonsense - ibid). The misnomer was discarded as a science issue by the first international organization of scientists, International PostGenetics Society in 2006, October 16, eight months before the US government released the devastating ENCODE results in 2007). Simply put, those who know anything about informatics realize that 1.3% of "about half the information of Windows 7" (indeed, a fraction of it, as amino-acid-coding exons are a small part of what used to be "genes") is simply an insufficient amount of information to build e.g. a human (a stamp-sized gif picture contains about that tiny amount of information). View Google Tech Talk YouTube of the triple Ph.D. Pellionisz or read his peer-reviewed science paper The Principle of Recursive Genome Function (both in 2008, laying down fractal iterative recursion as truly cracking the compressed code of full DNA, once proponents of "Junk DNA" and "Central Dogma" passed away, and even the US Government released the public admission ENCODE results in 2007). Contrary to earlier quotes, as of today all leading scientists, (Let us just list 7) Drs. Watson, Venter, Hood, Lander, Collins, Church, Schadt all agree that the classic notions are not only "frighteningly unsophisticated" (Venter), but have been outright "wrong" (see Lander telling NIH with all leaders present at the 10th Anniversary of the Human Genome Project, the Science Advisor to the President endorsing fractal approach). Others, notably name-caller ideologues, of course, can carry on with their pseudo-scientific propaganda forever, one full of junk just having been congratulated on his birthday of retirement-age as an epitome of a generation of ideologues "who led science nowhere". - This entry can be discussed on the FaceBook page of Andras Pellionisz]


Eric Schadt Joins Mount Sinai Medical School [Dir. of Inst. of Genomics AND Multiscale Biology]

May 16, 2011, 10:03 AM

Schadt Joins Mount Sinai Medical School

By ANDREW POLLACK

[Schadt' "The New Biology" is fractal-like "Multiscale" - AJP]

With his boundless energy and unvarying outfit of shorts, sandals and a white polo shirt, Eric E. Schadt, the chief scientific officer of DNA sequencer manufacturer Pacific Biosciences, is considered a brilliant rebel in the field of genomics.

Now Dr. Schadt is about to take on a new role – as chairman of genetics at the Mount Sinai School of Medicine in New York as well as chief of an institute there designed to study complex biology and apply it to medical treatments.

Dr. Schadt will retain his position at Pacific Biosciences, which is based in Menlo Park, Calif., but will clearly be spending less time there now.

Pacific Biosciences said it would collaborate with Mt. Sinai on the institute that Dr. Schadt will run, known as the Institute for Genomics and Multiscale Biology.

Dr. Dennis S. Charney, dean of the school of medicine, said in an interview that Dr. Schadt could make Mt. Sinai “among the leaders in making genetics a key part of the way medicine is practiced.”

The collaboration with Pacific Biosciences is “a value added,” Dr. Charney said. He said the genomics institute would receive about $100 million over five to six years.

Under the collaboration, Pacific Biosciences will supply the institute with two prototype machines that can be used for DNA sequencing as well as for analysis of other biological molecules, such as RNA. Mt. Sinai is also expected to buy at least one of the standard $695,000 DNA sequencers that Pacific Biosciences has just started shipping.

Dr. Schadt said the new effort will allow him and Pacific Biosciences to be “embedded” in a medical institution to apply sequencing findings to medicine. “They have clinical data that can be provided instantly” from patient medical records, he said.

Hugh Martin, chief executive of Pacific Biosciences, said the company had been looking for a big academic collaboration.

Dr. Schadt is a leading proponent of the view that focusing on individual genes might not be the way to treat diseases or discover drugs. Rather, researchers should focus on complex networks of interactions between genes and other parts of an organism. Such complex networks can be understood best by simulating them on computers.

He spent much of his career at Rosetta Inpharmatics, a genomics company acquired by Merck. He is also co-founder of Sage Bionetworks, a nonprofit organization that seeks to apply the complex network philosophy to understanding diseases. He has been profiled, among other places, in The New York Times and in Esquire.

A Cross-Country Venture

May 16, 2011
Genomeweb

As our sister publication GenomeWeb Daily News reports, Pacific Biosciences' Chief Scientific Officer Eric Schadt will lead the Mount Sinai Institute of Genomics and Multiscale Biology. As part of the collaboration between PacBio and Mount Sinai, GWDN continues, "a Single Molecule Real Time Biology User Facility will be established within the institute, which is the hub of genomics research at Mount Sinai and collaborates with 13 other disease-oriented and core technology-based institutes at Mount Sinai." Luke Timmerman at Xconomy San Francisco adds that the "union of PacBio and [Mount] Sinai is a high-profile effort to bridge the traditional divide between lab research and clinical treatment of patients," saying that both parties are likely to benefit:

For Mt. Sinai, hiring a star like Schadt means it will likely attract more donations, and be able to recruit many more bright young physicians and scientists interested in genomic-based personalized medicine. For PacBio, it hopes to learn how to best position its instrument with customers around the world, after hearing from Schadt how it works in the trenches.

Timmerman also says that, by joining Mount Sinai, "Schadt and PacBio are walking away from a potential partnership with UC-San Francisco, which had been wooing Schadt for months," and that this move raises questions as to the future of the New York Genome Center, "a fledgling effort to bring together a number of New York's top biomedical research centers to create a shared world-class genomics research facility," which, he adds, is still in its planning phases.

COMMENTS:

Submitted by andras on Tue, 05/17/2011 - 01:24.

Kudos to Eric, who is now also the Chairman of the Mount Sinai Institute of Genomics AND Multiscale Biology. The CSO of sequencer-maker PacBio, he is a brilliant rebel, a proponent of the view that focusing on individual genes is not the way to treat diseases or discover drugs. In the paradigm shift of the "genome revolution" - documented by the Battelle Study as the most disruptive economic singularity of science and technology ever, already larger than the GDP of Brazil, about the same as the GDP of Russia and just slightly smaller than the GDP of India - rebels are becoming acknowledged leaders. "Multiscale" is a quantum leap towards "scale-free" fractals. It is particularly nice to see that "New York's gain is NOT Silicon Valley's loss!" - Andras Pellionisz

[It is remarkable that Eric's "The New Biology" video, from which the illustration is clipped, was uploaded by PacBio as "Systems Biology" - a term that is clearly not new, as Ludwig von Bertalanffy first used it in 1928. In order to better define the unquestionable mathematical system lurking behind the "complexity" ("complexity is in the eye of the bewildered"), Eric Schadt's new $100 M institute is called "Multiscale Biology" - a quantum leap towards a Fractal Approach such as FractoGene (2002), identifying the intrinsic mathematics of the genome-epigenome (hologenome) system, and thus emerging as algorithmic genome interpretation, where e.g. the "before and after" cancer-therapy structural variants can be neatly separated into (harmless) "human diversity" parametric differences and (harmful) "fractal defects" - syntax errors in which the genome violates its own mathematical rules, leading to dysregulated (cancerous) growth; see the Principle of Recursive Genome Function science paper and its popularization, the Google Tech Talk YouTube, 2008. This entry can be commented on in the FaceBook page of Andras Pellionisz]


Battelle Study: The $796 Bn Economic Impact of the Human Genome Project

[.pdf of full 58-page Battelle Study - AJP]

[First an idea what $796 Bn is. It is larger than the 2010 GDP - Gross Domestic Product - of the Continent-size Brazil. It is about the same as the GDP of Russia, and just a little bit smaller than the full GDP of India - AJP]

Excerpts [AJP]

Introduction

The sequencing of the human genome represented the largest single undertaking in the history of biological science and stands as a signature scientific achievement. Human DNA took just 13 years to sequence under the Human Genome Project (HGP), an international public project led by the United States, and a complementary private program. [Myth #1 may be, that "the government did it all" - as we know, it was an official "tie" between the government and the private sector; by now, at both ends of Sequencing Companies and Genome Computers, the USA private sector rules - AJP]. Sequencing the human genome—determining the complete sequence of the 3 billion DNA base pairs and identifying each human gene—required advanced technology development and the assembly of an interdisciplinary team of biologists, physicists, chemists, computer scientists, mathematicians and engineers... the knowledge of genome structure, and the data resulting from the HGP, serve as the foundation for fundamental advancements in medicine and science with the goals of preventing, diagnosing, and treating human disease. Also, while foundational to the understanding of human biological systems, the knowledge and advancements embodied in the human genome sequencing, and the sequencing of model organisms, are useful beyond human biomedical sciences.

The resulting “genomic revolution” is influencing renewable energy development, industrial biotechnology, agricultural biosciences, veterinary sciences, environmental science, forensic science and homeland security, and advanced studies in zoology, ecology, anthropology and other disciplines.  

In the ten years since the first sequences were published, much has been written about the scientific consequences of mapping the genome but little analysis has been done of the economic significance of the achievement. [Myth #2 may be, that the Battelle Study is about the Science of PostGenetics. It is NOT. It fills the void of a comprehensive ECONOMIC study of its impact, already a historical turning-point, and "just the beginning" - AJP]

The technologies that have empowered genome sequencing range from the gene sequencers themselves, to sample preparation technologies, sample amplification technologies, and a range of analytical tools and technologies. An industry has grown up to supply the scientific research community in the private sector, government and academia with the equipment, supplies and services required to conduct genomics research and development (R&D) and associated product development. This industry, of course, generates additional economic impacts.

To evaluate these genomics-enabled industry impacts in the U.S., Battelle constructed a “from the ground-up” database of individual companies engaged within the sector. The employment of this industry base was used as the foundation for an input/output analysis to quantify the total impacts of these firms (in terms of direct and indirect output and employment and their multiplier effect). The results of the analysis show that—spurred by the original investment and technological development impetus of the human genome sequencing projects—a substantial economic sector has developed, thereby benefiting the U.S. economy in terms of business volume, jobs, and personal income supporting American families. [Myth #3 may be, that the Battelle Study introduced "Genome-Based Economy". As the website of Genome Based Economy and the 2009 Churchill Club YouTube indicate, the First Chapter was opened by the Nobel Prize in 1970 to Norman Borlaug for his "Green Revolution" - AJP]...
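
For readers unfamiliar with input/output analysis, the mechanics behind such multiplier estimates are the Leontief relation x = (I - A)^-1 d, where A holds inter-industry technical coefficients and d the direct (final) demand. The sketch below uses invented three-sector numbers, not Battelle's actual tables or sector definitions.

```python
import numpy as np

# Toy Leontief input/output calculation: turn direct activity into
# total (direct + indirect) output via the (I - A)^-1 multiplier matrix.

# A[i, j]: dollars of sector i's output needed per dollar of sector j's output.
A = np.array([
    [0.10, 0.05, 0.02],   # instruments & reagents (hypothetical)
    [0.04, 0.12, 0.06],   # sequencing & analysis services (hypothetical)
    [0.03, 0.07, 0.08],   # downstream R&D (hypothetical)
])

direct = np.array([10.0, 25.0, 15.0])  # hypothetical direct output, $billions

# Total output x satisfies x = A @ x + direct, hence x = (I - A)^-1 @ direct.
total = np.linalg.solve(np.eye(3) - A, direct)

print("total output by sector ($B):", np.round(total, 1))
print(f"overall multiplier: {total.sum() / direct.sum():.2f}")
```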

Non-coding DNA, previously termed “junk DNA” because it was thought to be a relic of evolution with little biological function, was instead confirmed to have specific functionality in the transcriptional and translational regulation of protein-coding genes—i.e., most of it is not junk at all, it is central to life functions. This finding alone supports the vision of undertaking whole genome sequencing, since prior to the HGP some detractors argued that the budget would be better spent simply studying known protein-coding genes and ignoring the rest. Eric Lander points out that it “has reshaped our view of genome physiology, including the role of protein-coding genes, non-coding RNAs and regulatory sequences.” [Myth #4 may be, that the Battelle Study proved that "Junk DNA is anything but Junk". No, the "Junk DNA" misnomer and "Central Dogma" are obsolete axioms that long ago died of a thousand wounds, and only lingered because "facts don't kill theories, only a better theory kills an obsolete one"; the Gene/Junk "frighteningly unsophisticated" notion was superseded by The Principle of Recursive Genome Function, 2008. What the Battelle Study did prove, however, is that "detractors" not only inflicted half a Century of delay on the progress of Genomics/Epigenomics (HoloGenomics) science (and documentable harm to scientists) - but were also directly responsible for huge economic losses; e.g. pursuing for decades "gene discovery" (of non-existent genes) at horrendous expense. The end result is a continuous decrease of the "predicted" 4 million, 140,000, 40,000, 30,000, 19,000 "genes", concluding that the very old-fashioned definition of the "gene" became extinct - yielding room for FractoGene (2002) - AJP]

A. Basic Science and Knowledge Expansion Impacts

The sequencing of the human genome has resulted in a distinct paradigm shift in our understanding of the biology of humans, and indeed all organisms. As such, ipso facto the decoding of the human genome stands among the preeminent findings in the history of science.

[Myth #5 may be, that the Battelle Study is impeccable from a science viewpoint. It is NOT. Few leading scientists would disagree with the rock-solid statement that by revealing - sequencing - the genome, it was NOT "DECODED". Indeed, the Battelle Study is a rock-solid basis for a horrific scenario: that without an actual (mathematical) decoding of the meaning of the revealed sequences, a Russia-sized economic Titanic may hit an iceberg. The Battelle Study does provide a very strong hint, as follows: - AJP]

Sequencing the human genome made clear that information sciences, mathematics and biological investigation are now inexorably intertwined. The sequencing of the genome was as much a mathematical and computational achievement as it was a biological one and has helped to give rise to new fields of biological science in “computational biology” and “systems biology”. It has been noted that “The revolutions that have been generated by the first draft of the Human Genome Project have barely been felt, but there is one profound change that has already occurred, and that is the realization that biology is fundamentally an information science.”

[Myth #6 may be, that the Battelle Study equates the new axiom that "genomics (not biology in general) is fundamentally an information science" with the improper statement that sequencing of the genome gave rise to "systems biology". The term Systems Biology was first used by Ludwig von Bertalanffy in 1928, and even his General System Theory book appeared decades before the HGP. While there is no question in anyone's mind that the Genome/Epigenome (HoloGenome) complex interactions constitute a "system", the absolutely critical challenge is the mathematical identification of what "system" we face; how its complexity is reduced to the size of about half of Microsoft Windows 7. The Principle of Recursive Genome Function peer-reviewed paper and its popularization as a Google Tech Talk YouTube (2008) define the "system" mathematically as fractal iterative recursion; the name of the game is to come up with a more befitting algorithmic (software-enabling) system-definition - AJP]

... Most common diseases, such as cancers, heart disease and psychiatric disorders are not homogeneous diseases, but differ dramatically across individual genomes from patient to patient...

[The profound implications of this simple sentence (on cancers, to be analyzed in detail separately), can be best documented by the Battelle Study directly quoting Thomas Kuhn, see below - AJP]

Thomas Kuhn first coined the term “paradigm shift” which is, by definition, a rather radical upheaval—a major movement of scientific knowledge and understanding to a new platform. Such movement occurs rarely and research leading to a paradigm shift should, therefore, be viewed as a momentous scientific achievement.  

Thomas S. Kuhn. 1962. “The Structure of Scientific Revolutions.”

[Since this is my third paradigm-shift, I can make some comparisons of the Battelle Study with the watershed publications of two earlier Studies. I took part in pioneering the transition from ill-defined "Artificial Intelligence" (mistakenly claiming that we can create machine intelligence without the benefit of understanding the real, biological one). In that breakthrough the US defense agency DARPA made the crucial difference with their DARPA Study assessing the potential of Neural Nets - the science and technology of "reverse engineering", e.g. how the flying of birds by means of their cerebellum can be used to automatically land an F-15 fighter on one wing. In my second paradigm-shift (when the tiny defense R&D project of connecting computers by the email of system administrators became a Private Industry in 1994 with Jim Clark's Netscape, and the ensuing Internet-boom exploded bigger and quicker than anyone imagined), the US Government's "Blue Book on Internet future" (the multi-agency report "High Performance Computing & Communications: Toward a National Information Infrastructure", completed in 1994) served as the key Study for the breakthrough. Now the Battelle Study on the $796 Bn Genomics of the USA (developed from the seed of a $3.8 Bn government investment with competing private investments) is the "eye opener" that literally overnight generates a ripple-effect through all media. Genome-based Economy is the Next Big Thing already. This entry can be commented on in the FaceBook page of Andras Pellionisz]


In an improbable corner of China, young scientists are rewriting the book on genome research [Newsweek]

Newsweek 2011/04/24

The world’s largest genome-mapping facility is in an unlikely corner of China. Hidden away in a gritty neighborhood in Shenzhen’s Yantian district, surrounded by truck-repair shops and scrap yards prowled by chickens, Beijing’s most ambitious biomedical project is housed in a former shoe factory.

But the modest gray exterior belies the state-of-the-art research inside. In immaculate, glass-walled and neon-lit rooms resembling intensive care units, rows of identical machines emit a busy hum. The Illumina HiSeq 2000 is a top-of-the-line genome-sequencing machine that carries a price tag of $500,000. There are 128 of them here, flanked by rows of similar high-tech equipment, making it possible for the Beijing Genomics Institute (BGI) to churn out more high quality DNA-sequence data than all U.S. academic facilities put together.

“Genes build the future,” announces a poster on the wall, and there is no doubt that China has set its eye on that future. This year, Forbes magazine estimated that the genomics market will reach $100 billion over the next decade, with scientists analyzing vast quantities of data to offer new ways to fight disease, feed the world, and harness microbes for industrial purposes. “The situation in genomics resembles the early days of the Internet,” says Harvard geneticist George Church, who advises BGI and a number of American genomics companies. “No one knows what will turn out to be the killer apps.” Companies such as Microsoft, Google, IBM, and Intel have already invested in genomics, seeing the field as an extension of their own businesses—data handling and management. “The big realization is that biology has become an information science,” says Dr. Yang Huanming, cofounder and president of BGI. “If we accept that [genomics] builds on the digitalization of life, then all kinds of genetic information potentially holds value.”

BGI didn’t always seem destined for success—or even survival. “The crazy guys” was how Chinese colleagues initially referred to the two founders, Yang Huanming and director Wang Jian. Refused government support, they muscled their way into the international Human Genome Project, mapping out 1 percent of that celebrated first full sequence before tackling the rice-plant genome on their own, beating a well-funded international consortium, and suddenly finding political leverage. Yang and Wang used it to set up the research center, which is nominally nonprofit but carries out commercial activities in support of the research. With an annual grant of $3 million from the local government in exchange for moving to the shoe factory in 2007, BGI first grew modestly, generating income from fee-for-service sequencing and conducting molecular diagnostic tests for hospitals. A $1.5 billion loan from the Chinese Development Bank in 2009 allowed the company to catapult into a different league, and its combination of sequencing power and advanced DNA data-management solutions for the pharma industry is now drawing international attention. Last year, pharmaceutical giant Merck announced plans for a research collaboration with BGI, as the Chinese company’s revenue hit $150 million—revenue projected to triple this year. “I admire their passion and the willingness to take risks,” says Steven Hsu, a physicist at the University of Oregon, adding that “it permeates the organization.”

Others would like to see deeper scientific reflection tempering the monumental ambition. “A more philosophical and conceptual rather than just a technical approach to the genome is needed to foster great discovery,” says long-time collaborator Oluf Borbye Pedersen of the University of Copenhagen.

While other well-known genomics centers such as Boston’s Broad Institute concentrate more narrowly on human health, the Shenzhen scientists cover a broad biological spectrum. In one shiny lab, thousands of microbes are being scanned for genes that might serve useful industrial purposes, while in another human stem cells are being developed for clinical applications. Scientists have mapped the genomes of everything from cucumbers and 40 different strains of silk worms to the giant panda. They have also cataloged tens of thousands of genes of bacteria living in the human gut, and pieced together the genomic puzzle of an ancient human—an extinct paleo-Eskimo who lived in Greenland 4,500 years ago. While such academic prestige projects are geared toward publication in scientific journals, real-world experimentation is going on at a nearby farm where pigs are cloned to serve as disease models. And in Laos, scientists are testing genetically enhanced plants to feed China’s growing population. The institute has already amassed almost 250 potentially lucrative patents covering agricultural, industrial, and medical applications.

Satellite research centers have been set up or are underway in the U.S., Europe, Hong Kong, and four other locations in China, and the number of researchers at the main headquarters in Shenzhen has more than doubled during the past year and a half. The institute now employs almost 4,000 scientists and technicians—and is still expanding.

“I’ve seen it happen but sometimes even I can’t believe how fast we are moving,” says Luo Ruibang, a bioinformaticist, who at 23, fits perfectly within the company’s core demographic. The average age of the research staff is 26.

Li Yingrui, 24, directs the bioinformatics department and its 1,500 computer scientists. Having dropped out of college because it didn’t present enough of an intellectual challenge, he firmly believes in motivating young employees with wide-ranging freedom and responsibility. “They grow with the task and develop faster,” he says. One of his researchers is 18-year-old Zhao Bowen. While still in high school, Zhao joined the bioinformatics team for a summer project and blew everyone away with his problem-solving skills. After consulting with his parents, he took a full-time job as a researcher and finished school during his downtime. Fittingly, he now manages a project on the genetic basis of high IQ. His team is sampling 1,000 Chinese adults with an IQ higher than 145, comparing their genomes with those of an equal number of randomly picked control subjects. Zhao acknowledges that such projects linking intelligence with genes may be controversial but “more so elsewhere than in China,” he says, adding that several U.S. research groups have contacted him for collaboration. “Everybody is interested in intelligence,” he says.

A shoe factory becoming a genomics center, scientists replacing blue-collar workers—the Shenzhen research facility embodies the country’s economic and social ambitions. According to a 2010 report from Monitor Group, a management consulting firm based in Boston, China is “poised to become the global leader in life-science discovery and innovation within the next decade.”

The Chinese government will, by next year, have spent $124 billion since 2009 building hospitals and health-care centers. Such strategic investments have lured Chinese scientists back to China. So far, at least 80,000 Western-trained Ph.D.s have returned, the vast majority in the past five years. With the country on track to become the second-largest pharmaceuticals market next year, and the U.S. falling behind, afflicted by weak—and declining—government funding of basic science as well as anemic collaboration between private and public sectors, China could take the lead. As George Baeder, vice president of Monitor Group Asia, says, China “has the potential to create a more efficient model for discovering and developing new drugs,” a prediction echoed by Caroline Wagner, a science-policy specialist and professor at Pennsylvania State University, who argues in a forthcoming paper that the days of American leadership will soon be gone. “After more than half a century at the top spot, the U.S. will become one big player among several,” she says.

But, Wagner adds, “science is not a zero-sum game,” and as the pie gets bigger, so will the opportunities for collaboration. Yang, for his part, puts it simply: “Genomics is international,” he says. “We must collaborate to survive and develop.” Certainly, the scientists at his Shenzhen headquarters have their view of the world. The latest shipment of high-tech toys sits, still unpacked, on the floor; the stamp on the sides of the crates proclaims: Made in the USA.

[This posting can be debated at the FaceBook site of Dr. Pellionisz. Suffice it to note here that the same Newsweek that in 2007 elected to publish in all its editions - except the American edition - a brilliant article on the "Genome Revolution" ensuing from the publication of the US-led ENCODE, now laments the fact that 80,000 Western-trained Ph.D.s have returned to China with the best ideas and equipment, and that the US, 4 years later, is about to slip into the traditional role of "followership" that China, not us, formerly practiced. I posted the article, and took steps to secure precious intellectual property. It may well be that the manpower will be provided by China, India, Korea and Japan. It is hard to resist meetings like Bio-IT in Shenzhen, China, timed for the former US "Independence Day", 4th of July, 2011]


Systems Biology 'Makes Sense of Life'

Genomeweb,
April 27, 2011

Systems Biology needs System Identification [e.g. Fractal System - AJP]

Researchers have made great strides in understanding biology by taking a reductionist approach and studying little bits of the "complex tapestry of life" at a time, says Scientific American's Christine Gorman. And until now, that's worked just fine, she says. But researchers increasingly find themselves butting up against the limits of this traditional approach, eventually realizing that they must embrace the complexity of life in order to really understand it, Gorman adds. Institute for Systems Biology co-founder Alan Aderem says a systems biology approach could allow researchers to produce vaccines that have been otherwise unachievable to date. In this video from Virginia Commonwealth University, researchers explain the systems biology approach and how it can be used to study disease.

Submitted by lermanmi on Wed, 04/27/2011 - 15:09.

Systems Biology was invented following the "omics revolution" and is a tool in the war for monies. No more. Like the omics it's totally devoid of ideas and is illusory and mystical. It is so "new" that I have not seen a textbook describing its principles and fundamental findings for students. However, there are a lot of new print and digital magazines titled with the word Systems... (e.g. Systems Urology, etc.) - Michael Lerman

Submitted by andras on Thu, 04/28/2011 - 03:27.

Nature can only be understood as interactive systems. Trajectories of planets, from a reductionist view focusing on the Earth, look needlessly overcomplicated - while a Galilean view of the planetary system (planets elliptically orbiting around the Sun) instantly reduces a seemingly maddening "complexity". We know by now that Genomics just would not work if viewed from a "frighteningly unsophisticated" perspective of "Genes/Junk". Rather, a holistic approach uniting Genomics with Epigenomics, expressed in Informatics - HoloGenomics - is needed to mathematically identify the (fractal) "system" that we face. FractoGene and The Principle of Recursive Genome Function, identifying the system as fractal iteration, is not a vague "systems biology" approach, but an algorithmic (and thus software-enabling) specification of fractal genome function, arising from the demonstrably fractal nature of the genomic code. All this was laid out in a peer-reviewed science paper and popularized by a Google Tech Talk YouTube in 2008 - and within the very short window of a mere three years it has become the algorithmic inroad to beat - Pellionisz

["Systems theory is the transdisciplinary study of systems in general, with the goal of elucidating principles that can be applied to all types of systems in all fields of research. The term does not yet have a well-established, precise meaning, but systems theory can reasonably be considered a specialization of systems thinking and a generalization of systems science. The term originates from Bertalanffy's General System Theory (GST), 1968." (Wikipedia). As I commented to Genomeweb (above) the fashionable "New Frontier; Systems Biology" is highly welcome to the club of holistic approaches (see HoloGenomics, once the mistaken half a Century of old reductionalist Dogmas of JunkDNA/Genes/Central Dogma is finally written off). Dr. LeRoy Hood would probably not consider Systems Biology all that new, however, as he has pursued it at least since 2002 - about the same time when FractoGene arose in 2002, and we might add that "The term systems biology is thought to have been created by (my fellow Hungarian nobleman) Ludwig von Bertalanffy in 1928" (Wiki). With Systems Theory, the axiomatic challenge has always been the mathematical identification of what system do we face. From the viewpoint of Academics, one can argue back and forth, forever, but the crucial need to identify the system will not vanish. From the viewpoint of those suffering from genomic diseases (e.g. The Genome Disease, a.k.a. Cancer) this is question of extreme urgency - just as it was for another Hungarian, John von Neuman, to architect computers for the computing needs of World War II. For the present, War on Cancer, II. (the first waged by Nixon, $30 Bn - without the modern weaponry), now we have both practically any number of fully sequenced genomes (in repeat customer mode...) - and can leverage defense-validated High-Performance-Computers - assuming that e.g. a Fractal System Identification (or competing approaches) are algorithmic, i.e. software-enabling. - Pellionisz]


Virginia Tech partners with NVIDIA to “Compute the Cure” for Cancer

Virginia Tech professors have received research backing from the NVIDIA Foundation in Silicon Valley to develop an analysis program that will help researchers identify cancer mutations. Here is a press release from Tech:

Virginia Tech researchers Wu Feng and David Mittelman have won the first worldwide research award from the NVIDIA Foundation, as part of its “Compute the Cure” program.

The award will enable them to develop a faster genome analysis platform that will make it easier for genomics researchers to identify mutations that are relevant to cancer.

This program is a pilot effort started by the NVIDIA Foundation, the philanthropic arm of the Silicon Valley-based technology firm NVIDIA. The company specializes in programmable graphics processing units (GPUs), which are used in everything from super phones to supercomputers. The Compute the Cure program has a strategic mission to leverage GPUs to support cancer researchers in the search for a cure, as well as promote cancer awareness and prevention initiatives for the greater good.

“Compute the Cure seeks to revolutionize the way that cancer biologists conduct their science by delivering a framework and toolkit of personal desktop supercomputing solutions for the analysis of genome changes from next-generation sequencing data, as a first step toward seeking a cure for cancer,” said Feng, associate professor in the Department of Computer Science and Bradley Department of Electrical and Computer Engineering in the College of Engineering at Virginia Tech. He is principal investigator on the project.

Co-investigator David Mittelman is an associate professor with the Virginia Bioinformatics Institute and the Department of Biological Sciences, part of the College of Science at Virginia Tech. Mittelman previously worked at the Human Genome Sequencing Center at Baylor College of Medicine in Houston. As the domain expert, Mittelman’s research program has explored the molecular basis for genome instability in mammalian systems and its role in diseases such as cancer. His lab also has developed sensitive methods for characterizing genome instability using next-generation whole-genome sequencing.

Using GPU-accelerated alignment and mapping software in combination with sensitive mutation detection methods, Feng and Mittelman will use the $100,000 Compute the Cure award to deliver an optimized and powerful solution for genome analysis to the research community that other investigators can build upon, to collaboratively advance the field of cancer genomics.

Feng first worked with NVIDIA in 2009, when he was one of 38 recipients worldwide to receive an NVIDIA Professor Partnership Award, designed to accelerate exploration at the frontiers of visual, parallel, and mobile computing. Feng’s Professor Partnership Award spurred his research in the parallelization and optimization of different algorithms in pairwise sequence alignment and short-read mapping onto the GPU. These are critical tasks towards understanding how genomes change during cancer.
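
For readers unfamiliar with the kernel being accelerated: pairwise local alignment is a dynamic-programming recurrence whose anti-diagonal cells are independent and can be filled in parallel, which is why it maps well onto GPUs. A minimal pure-Python sketch of Smith-Waterman scoring - an illustration of the algorithm class, not Feng's GPU code:

def smith_waterman_score(a: str, b: str, match=2, mismatch=-1, gap=-2) -> int:
    """Best local alignment score between sequences a and b (score only, no traceback)."""
    rows = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            # extend a diagonal match/mismatch, open/extend a gap, or restart at zero
            diag = rows[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            rows[i][j] = max(0, diag, rows[i - 1][j] + gap, rows[i][j - 1] + gap)
            best = max(best, rows[i][j])
    return best

print(smith_waterman_score("ACACACTA", "AGCACACA"))  # toy read-vs-reference case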

Tonie Hansen, the director of the NVIDIA Foundation, said the tech firm’s interest in medical research always has been strong.

“NVIDIA’s Compute the Cure program combines our company’s technical focus and our people’s personal priorities,” she said. “Medical researchers the world over use GPU technology to accelerate the pace of their research. And NVIDIA employees donate generously each year to cancer research organizations, in support of a friend or family member who is fighting the disease. Compute the Cure combines these objectives into one comprehensive program.”


Breast cancer prognosis goes high tech [Fractal - AJP]

[Fractal lily - even laypeople are aware of the fractality of cancer tumors - AJP]

Published: Monday, April 18, 2011 - 09:04 in Health & Medicine

Cancer researchers at the University of Calgary are investigating a new tool to use for the prognosis of breast cancer in patients. This new digital tool will help give patients a more accurate assessment of how abnormal and aggressive their cancer is and help doctors recommend the best treatment options. Currently, a useful factor for deciding the best treatment strategy for early-stage breast cancer is tumour grade, a score assigned by a pathologist based on how abnormal cancer cells from a patient tissue sample look under the microscope. However, tumour grade is somewhat subjective and can vary between pathologists. Hence, there is a need for more objective methods to assess cancer tissue, which could improve risk assessment and therapeutic decisions.

Using a mathematical computer program developed at the U of C, Mauro Tambasco, PhD, and his team used fractal dimension analysis to quantitatively assess the degree of abnormality and aggressiveness of breast cancer tumours obtained through biopsy. Fractal analysis of images of breast tissue specimens provides a numeric description of tumour growth patterns as a continuous number between 1 and 2. This number, the fractal dimension, is an objective and reproducible measure of the complexity of the tissue architecture of the biopsy specimen. The higher the number, the more abnormal the tissue is.
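
The standard way to obtain such a number is box counting: cover the image with boxes of shrinking size and fit the slope of log(occupied boxes) against log(1/box size). A minimal sketch of the method (an illustration only, not the published U of C program):

import numpy as np

def box_counting_dimension(mask: np.ndarray) -> float:
    """Estimate the box-counting (fractal) dimension of a 2-D binary mask;
    ragged, space-filling tissue outlines score between 1 and 2."""
    sizes = [s for s in (2, 4, 8, 16, 32, 64) if s <= min(mask.shape) // 2]
    counts = []
    for s in sizes:
        h = (mask.shape[0] // s) * s
        w = (mask.shape[1] // s) * s
        blocks = mask[:h, :w].reshape(h // s, s, w // s, s)
        counts.append((blocks.sum(axis=(1, 3)) > 0).sum())  # boxes touching foreground
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

demo = np.ones((128, 128), dtype=bool)  # a completely filled image
print(round(box_counting_dimension(demo), 2))  # -> 2.0, the upper bound for a plane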

According to the team's published study, this novel method of analysis is more accurate and objective than pathological grade. "This new technology is not meant to replace pathologists, but is just a new digital tool for them to use" says Tambasco, a medical physicist at the University of Calgary Faculty of Medicine and the Tom Baker Cancer Center.

Researchers say they will continue to study this new digital method and hope in the next few years that it could become another tool used in the clinical setting.

The retrospective study analysed tissue specimens from 379 breast cancer patients and the findings were published in the January 2011 edition of the Journal of Translational Medicine.


FractoGene (2002) and Fractal Frenzy set off by The Principle of Recursive Genome Function, YouTube (2008) [AJP]

[FractoGene 2002 (also USPTO, 2002 Aug. 1); Pellionisz at CSHL, 2009 September; Lander et al., 2009 October; Stein, 2011]

“There is a growing gap between the generation of massively parallel sequencing output and the ability to process and analyze the resulting data” McPherson [Ontario Institute for Cancer Research]

“Chanock [See quotation from Article below, National Cancer Institute], also a medical doctor, cautions that the community should be realistic with regard to ... all this structural variation data to facilitate improvements in the clinic… "The plausibility and the meaning of this discovery is complex: each one of these regions requires its own study and it's still a work in progress to reach the level of confidence and validity that's needed to incorporate that into our clinical workflow. We have to be careful with all the ballyhooing about 'The genomic age is going to turn everything into Star Trek medicine,' because I find this dangerously naïve”

As seen above, the 2008 YouTube "Is IT Ready for the Dreaded DNA Data Deluge?" clearly predicted both that "Information Technology will be all right" and that the Information Theory of Genome Function needed not cosmetic but profound reform.

Now, three years later, there is a crisis in Cancer Research without an algorithmic breakthrough.
Conventional medicine has lost ground without an algorithmic, software-enabling theory.

It is also increasingly recognized by leaders that Brute Force must be augmented by Genome Theory to resolve the crisis:

Francis Collins: Scientists have to re-think long-held beliefs

Craig Venter: Our level of understanding the genome is frighteningly unsophisticated

George Church: Zero dollar sequencing and one million dollar interpretation

Eric Lander: Fundamental assumptions were all wrong

John Mattick: Dogmas are obsolete

Eric Schadt: Considers the fractal approach of AP [HolGenTech] “truly revolutionary”

From the Figs. quoted above, a "Fractal Frenzy" is evident since 2008, especially after the Fractal Approach was also presented at Cold Spring Harbor (2009 September). Lander et al. published their Science cover article in 2009 October - and now a CSHL leader (though not on record with fractal papers) presents fractals as the ZeitGeist. The acceleration is also evident from the escalation of viewership of the 2008 YouTube, see above.



The Structural Struggle - [vs. Fractal Algorithmic Elegance - AJP]

Genomeweb
April 2011

By Matthew Dublin

The diverse world of structural genomic variation research — which includes investigations into copy number variation and mapping myriad inserted, deleted, inverted, and translocated genes — is undoubtedly providing investigators with an exciting and promising source of data on human diversity and disease susceptibility. [But the Ten Million Dollar question is the algorithm that sets the "human diversity" and "disease susceptibility" structural variations apart - AJP]. But if a Nature paper published by the 1,000 Genomes Project's Structural Variant group in February is any indication, eureka moments in this field may be a bit further off than researchers originally hoped. The report — which represents the culmination of roughly two years' work involving more than 50 investigators from across the world — describes the group's construction of a CNV map based on whole-genome sequencing data from 185 human genomes. It encompasses roughly 22,000 deletions and 6,000 insertions and tandem duplications. Using a genotyping approach that examined several partial- and whole-gene deletions, the researchers reported a depletion of gene disruptions among high-frequency deletions as well as differences in the size spectra of structural variants.

While the team produced a robust resource for future sequencing-based association studies, Charles Lee, the group's co-chair and director of Harvard Medical School's Molecular Genetic Research Unit, says the take-home message is that considerable barriers must still be overcome before the field can move forward. "We found that we needed new algorithms to identify structural variants and we ended up creating 19 different computer programs. No one program was sufficient — we had to combine multiple programs to maximize the amount of structural variation we are picking up," Lee says. "But at the end of the day ... even at high coverage, we are picking up probably about 82 percent of known deletions, about 15 to 18 percent of known duplications, and essentially no inversions or translocations that we can verify at this stage — so we have a long way to go. If that's where we're at with over 50 investigators, 19 algorithms, and two years of work, we have a long ways to go."
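
Lee's point that "no one program was sufficient" is typically addressed by consensus merging of call sets. A minimal sketch - hypothetical data layout, not any of the 19 actual programs - keeping only deletion calls corroborated by at least one other caller at 50% reciprocal overlap:

def reciprocal_overlap(a, b):
    """Reciprocal overlap of two (chrom, start, end) deletion calls, as a fraction."""
    if a[0] != b[0]:
        return 0.0
    inter = min(a[2], b[2]) - max(a[1], b[1])
    if inter <= 0:
        return 0.0
    return min(inter / (a[2] - a[1]), inter / (b[2] - b[1]))

def consensus(callsets, min_callers=2, min_ro=0.5):
    """Keep calls supported by >= min_callers callers; skip duplicates of kept calls."""
    kept = []
    for i, calls in enumerate(callsets):
        for call in calls:
            support = 1 + sum(
                any(reciprocal_overlap(call, other) >= min_ro for other in callsets[j])
                for j in range(len(callsets)) if j != i
            )
            if support >= min_callers and all(reciprocal_overlap(call, k) < min_ro for k in kept):
                kept.append(call)
    return kept

caller_a = [("chr1", 1000, 5000), ("chr2", 200, 900)]
caller_b = [("chr1", 1200, 5100)]
print(consensus([caller_a, caller_b]))  # only the corroborated chr1 deletion survives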

Stephen Chanock, chief of translational genomics at the National Cancer Institute, says that while the generation of resources like the 1,000 Genomes Project are important to better explore genomic structural variation, the need for analytical accuracy will be the pinch that wakes the dreamers up to face reality. "The excitement of having more and more tools always bring us back to the very important question of having to validate or replicate, and I worry that that's getting lost as everyone gets so excited about the next really cool tool. Those are all in silico observations; you still have to go back and make sure that variant is stable and matches what you think you've seen when you actually sequence a genotype," Chanock says. "CNVs, I think, are very interesting for rare or less-common diseases, although the common disease, common variant hypothesis for CNVs has been not quite as exciting as everyone had hoped. It didn't have the drama that everyone thought was there, unlike [in] the common SNP world. ... Ultimately, the technologies are making it easier and we may be going after uncommon and rare variants if whole-genome sequencing kicks in, or at least much denser chips become available."

Better tools

While there are many tools available to identify structural variants, the question of determining which reported variants are actually valid remains a large challenge that bioinformatics tools alone cannot deal with. "I'm not saying any one study is bad, but there is an under-appreciation for the amount of false-positives in the structural variation data that we're generating as a scientific community from next-generation sequencing data," Harvard's Lee says. "My advice to people who are analyzing next-generation sequence data in structural variants — especially for whole genome analyses — is to use as many technologies to complement their analysis as possible. For example, if you're whole-genome sequencing a given individual, maybe use different insert-sized libraries complemented with arrayCGH data. And, by all means, perform a significant amount of validation so you can minimize the amount of false-positive data."

The limitations to productivity Lee and his colleagues face when using multi-color probes to look at the structure of repeated genes using the fiber FISH technique is just one area in need of improvement. "It's just not high-throughput enough, so if someone could come up with a high-throughput method, that would be an excellent way to genotype some of the more copy number variable regions," he says. "I think also the arrays themselves are continually being improved in terms of what probes are being placed on there to genotype specific CNVs, but there needs to be more effort put into the technology for accurate genotyping of CNVs." For now, Lee says, the only work-around is putting in hours of labor to get the job done. [This is called "the brute force approach" in industry - AJP]

The most significant development with genotyping and CNVs over the last few years is the development of high-resolution array comparative genomic hybridization. This technique enabled the very first studies that mapped structural variation genome-wide in 2003 and 2004. Since then, advancements in high-throughput paired-end mapping, read depth of coverage analysis, split read analysis, and assembly have all seriously ramped up research efforts. "We consider massive paired-end mapping a key technique to identify structural variation and genomic rearrangements," says Jan Korbel, group leader at the European Molecular Biology Laboratory. Korbel and his colleagues at Yale University and 454 Life Sciences developed an approach for massively parallel paired-end sequencing that is helping the team to identify germ-line structural rearrangements in connection with the 1,000 Genomes Project and the International Cancer Genome Consortium. "The key advantage of paired-end mapping [is that] it allows a fairly deep and quick and cheap sequencing [of] structural aberrations in the genome by recognizing ends of long fragments and mapping them," Korbel says.
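
The paired-end logic Korbel describes reduces to a simple test: read pairs whose mapped span deviates grossly from the library insert size signal a structural variant (a span far larger than expected suggests a deletion between the two ends). A minimal sketch with hypothetical numbers - using robust median/MAD statistics for illustration, not the actual model of any published pipeline:

from statistics import median

def discordant_pairs(spans, n_mads=10):
    """Flag read pairs whose mapped span deviates from the median span
    by more than n_mads median absolute deviations (MADs)."""
    med = median(spans)
    mad = median(abs(s - med) for s in spans) or 1.0  # guard against MAD == 0
    return [i for i, s in enumerate(spans) if abs(s - med) > n_mads * mad]

# 500 bp library; one pair spans 8 kb on the reference, hinting at a ~7.5 kb deletion
spans = [480, 510, 495, 505, 8000, 500, 490]
print(discordant_pairs(spans))  # -> [4]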

Some newer genotyping tools show particular promise, he adds. These include the SUN genotyping method, developed by Evan Eichler's group at the University of Washington, which identifies "singly unique nucleotide" positions to genotype the copy and content of specific paralogs within gene families that are highly duplicated, and the analytical software framework Genome-STRiP, developed by Harvard University's Steve McCarroll for characterizing genome structural polymorphisms using multiple types of next-generation sequencing data including read depth, read pairs, and split reads.

Korbel's own group has designed a novel computational method to analyze the depth of coverage of high-throughput DNA sequencing reads, called CopySeq. This tool can infer locus copy number genotypes by integrating paired-end and break point junction analyses based on CNV-analysis approaches such as arrayCGH and FISH. In November, Korbel demonstrated CopySeq in a PLoS Computational Biology paper in which the team used it to genotype 500 chromosome 1 CNV regions in 150 genomes sequenced at low coverage and to analyze gene regions enriched for segmental duplications by comprehensively inferring copy number genotypes in the CNV-enriched human olfactory receptor gene and pseudogene loci. Using CopySeq, they found that for several olfactory receptor loci, the reference genome appears to represent a minor-frequency variant — a finding that could inform future functional studies.
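
The read-depth principle behind such genotyping is simply that sequencing coverage scales with copy number, so a window's depth relative to the diploid genome average estimates its genotype. A minimal sketch with hypothetical numbers (not the CopySeq implementation):

def copy_number(window_depth: float, diploid_mean_depth: float) -> int:
    """Estimate integer copy number: depth is proportional to copies, and the
    genome-wide average corresponds to the normal two copies."""
    return round(2.0 * window_depth / diploid_mean_depth)

diploid_mean = 30.0  # average depth over normal two-copy regions
for depth in (31.2, 14.8, 46.1, 0.4):
    print(depth, "->", copy_number(depth, diploid_mean))
# 31.2 -> 2 (normal), 14.8 -> 1 (heterozygous deletion),
# 46.1 -> 3 (duplication), 0.4 -> 0 (homozygous deletion)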

As far as discovery methods are concerned, Korbel says he is waiting for a technique that can identify unique CNVs, irrespective of their sizes, as well as those in segmental duplications. "There are still regions in the genome that are very poorly understood and are hard to compare between individuals and with current technologies. We are unable to correctly resolve for these regions. ... Some of them are relevant for medicine, so that's a huge challenge," he says. "The data is good and so much is being generated by newer techniques, but we're still not fully exploring all the benefits of this data yet because we're still developing suitable methodologies that combine all types of signature signals in the data. We're obviously trying to improve this, but there's still a challenge there."

Recently, a team of researchers from Yale and Stanford University developed a method for genotyping and CNV discovery from read-depth analysis of personal genome sequencing. In February, they published a paper in Genome Research describing a method called CNVnator, which is based on a combination of the established mean-shifting approach with multiple-bandwidth partitioning and GC correction. The team used 1,000 Genomes Project validation data sets to calibrate CNVnator so it could be applied to CNV discovery, population-based genotyping, and the characterization of de novo and multi-allelic events. The team also reported its identification of six de novo CNVs in two family trios.
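
The GC correction mentioned above addresses a known bias: sequencing depth varies systematically with the local GC content of each window. The standard remedy is to rescale every window by the mean depth of its GC class. A minimal sketch with hypothetical data (not CNVnator's code):

from collections import defaultdict

def gc_correct(depths, gc_fracs, bin_width=0.05):
    """Rescale window depths so that every GC class shares the global mean depth."""
    by_gc = defaultdict(list)
    for d, gc in zip(depths, gc_fracs):
        by_gc[round(gc / bin_width)].append(d)
    global_mean = sum(depths) / len(depths)
    gc_mean = {k: sum(v) / len(v) for k, v in by_gc.items()}
    return [d * global_mean / gc_mean[round(gc / bin_width)]
            for d, gc in zip(depths, gc_fracs)]

depths = [30.0, 15.0, 33.0, 14.0]
gcs = [0.40, 0.70, 0.41, 0.69]  # high-GC windows are systematically under-covered here
print([round(d, 1) for d in gc_correct(depths, gcs)])  # -> [21.9, 23.8, 24.1, 22.2]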

"The technology has sort of changed incrementally over the last decade, but the large data sets that we accumulated really made all the difference and allowed groups to start definitively identifying genetic factors that contribute to autism and schizophrenia," says Jonathan Sebat, an assistant professor at the University of California, San Diego.

"In 2011, the biggest game-changer is the short read sequence data, and shortly on its heels, the long-read, third-generation sequence data. The methods for detection of variants and the spectrum of potential disease alleles that you can find now is enormous, so that's a complete game-changer there."

'Game-changer'

In February, Sebat published a paper in Nature describing a large, two-stage genome-wide scan of rare CNVs that associated copy number gains at chromosome 7q36.3 with schizophrenia. Their findings implicate altered vasoactive intestinal peptide signaling receptor gene VIPR2 in the pathogenesis of schizophrenia and indicate the VPAC2 receptor as a potential target for future antipsychotic drug development. "What's new and interesting about that is that the structural variants that we're finding contrast [with] the large microdeletion syndromes that we knew about from the early CNV studies. We're now honing in on the smaller CNVs, not the big, non-allelic homologous recombination-mediated deletions that we used to see," Sebat says. "We're now seeing structural variants that are mediated by other types of mutational mechanisms. The break points are not the same in different patients — they're overlapping, but very different risk alleles. When we get our disease association, we end up finding many different rare mutations in the same gene, often with the same functional impact." He adds that down the road, new CNV findings will not only be used to pinpoint specific genes but identify neurobiological processes in diseases as well.

The University of Washington's Joshua Akey and his colleagues are refining approaches to explore patterns of genomic variation using exome sequencing, as it allows them to use data from thousands of individuals rather than from the mere handful they'd afford using whole-genome sequencing. "It's really striking to be able to look at a data set of 2,000 individuals because you have such deep insight into patterns of variation and you get a real appreciation for the structure of rare variation that you can't get when you only have 20 or 40 individuals," Akey says. "One of the most interesting things that we'll be able to do with thousands of individuals is make very detailed inferences into recent human history. You can't do that unless you have thousands of individuals. For the first time, we can see these dramatic expansions in human population sizes that have occurred in the [past] couple thousand years."

Akey is involved in several structural variation research projects, including one study that looks at the genetic basis of adverse drug responses across dog breeds. He is working, in collaboration with his colleague Evan Eichler and Washington State University's Katrina Mealy, to characterize the distribution of segmental duplications and CNVs across 20 dog breeds with arrayCGH, as it functions at a higher resolution than chromosome-based comparative genomic hybridization.

Coming up

Later this year, Akey and his colleagues plan to publish what he describes as one of the largest and most comprehensive studies into patterns of human genetic variations using high-quality data from roughly 2,000 exomes. Although the rise of exome sequencing has undoubtedly caused excitement and heightened expectations within the structural variation research community, he cautions that the real insights are only going to come from taking a step back and determining how to interpret and compare those sequences from that many individuals. "There's a critical need for further methodological development to be able to fully extract all of the information in these complex data sets and the challenge is that there are so many challenges," he says. "Let's assume that the genotypes we have are accurate: what do we do with that data in terms of making inferences about human history and about disease susceptibility? What's the best way to test for association between rare variants and disease? What's the best way to look for natural selection? There are challenges from the very beginning of the process to the very end of the process. A lot of theoretical work needs to be developed to fully exploit the information that CNVs have." [Lest anyone think that "theoretical work" is free, let us correct the perception. Superior theory is the most expensive part of research - except for all those funds wasted on "garbage in - garbage out", done with a "frighteningly unsophisticated" (most often disarmingly naive) "theoretical background" - AJP]

While the literature contains a growing number of studies that demonstrate associations of common simple CNVs with specific disease susceptibilities, forming a substantial collection of common CNVs, the issue of resolution still hinders researchers who aim to study rare CNVs. "I think we have a very nice catalog of common copy number variants and we have methodologies to pick up the rare CNVs, although not as high resolution as I'd like to see, but it's the cost effective way of doing it," Harvard's Lee says. "We have 18 to 20 of these very clear associations — these are deletions that increase your susceptibility with more common disease — and I think there are more to come. But the issue we have right now is that we don't have a catalog for the rare variants and the smaller ones. Once we start to develop those catalogs, we can start to improve on our arrays, or whatever method we use to detect CNVs in the disease association studies, to see if any of those rare, smaller CNVs are associated with other diseases."

NCI's Chanock, who is also a physician, cautions that the community should be realistic with regard to the potential for all this structural variation data to facilitate improvements in the clinic. "We've started to make very important steps, and when we look at the age of CNVs and a good part of the sequencing that's going on, the discovery element is spectacular — -almost unprecedented," he says. "The plausibility and the meaning of this discovery is complex: each one of these regions requires its own study and it's still a work in progress to reach the level of confidence and validity that's needed to incorporate that into our clinical workflow. We have to be careful with all the ballyhooing about 'The genomic age is going to turn everything into Star Trek medicine,' because I find this dangerously naïve."

[Would anybody in their sane mind fund a nuclear accelerator to smash an atom into myriads of pieces - if nuclear physics were not developed enough to see where the predicted trajectories differ from the actually measured data? More or less the same is happening now, when the "frighteningly naive" (and totally discredited) "Gene/Junk" notion (see Dr. Mattick's article in this column) is far too often the only "theory" that experimentalists can use as an alibi. Let us face it - the non-targeted hunt for "structural variants" could cost cancer patients (literally) an arm and a leg (let alone other, even more precious body parts lost to surgery) - yet an amassed "Library" of "structural variants" will yield only an extraordinary knowledge, at an exorbitant price, of what "structural variants" there are; as Thomas Kuhn predicted, the knowledge would never automatically translate into "understanding". The better way is to go for a Fractal Recursive Genome Function Algorithmic Approach that is software-enabling. The algorithmic fractal approach would tell you immediately that changing the "c" constant of a Mandelbrot Set retains the fractality (such structural variants only affecting human diversity), while the ways the Genome disobeys its own fractal rules would pinpoint fractal defects - thereby associated with pathology of phenotypes. This entry can be commented on the FaceBook page of Andras Pellionisz]
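
The Mandelbrot analogy above rests on a single line of iteration, z -> z*z + c. A minimal sketch showing that the generating rule - the fractal "grammar" - stays fixed while varying the constant c merely changes which points remain bounded:

def escape_time(c: complex, max_iter: int = 100) -> int:
    """Iterate z -> z*z + c from zero; return the step at which |z| exceeds 2,
    or max_iter if the orbit stays bounded (i.e., c behaves as 'in the set')."""
    z = 0j
    for i in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:
            return i
    return max_iter

for c in (-0.75 + 0.10j, -0.74 + 0.10j, 0.40 + 0.30j):
    print(c, escape_time(c))  # same one-line rule, different fates for nearby constants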


Cancer center builds Texas-sized cloud [Private cloud!]

Computerworld
Beth Schultz

04.04.2011 at 04:32 | Network World (US)

As researchers at The University of Texas MD Anderson Cancer Center work at "making cancer history," they're doing so with the help of compute power and storage capacity from a private cloud.

But this is no ordinary cloud.

After all, when you're researching something as complex as the human genome you tend to think big, and MD Anderson's cloud reflects that type of ambition and scale. We're talking 8,000 processors and a half-dozen shared "large memory machines" with hundreds of terabytes of data storage attached, says Lynn Vogel, vice president and CIO of MD Anderson, in Houston.

A different path

And while MD Anderson's general server infrastructure uses virtualization, the typical foundational technology for cloud, this specialized research environment doesn't. Rather, the organization uses an AMD-based HP high-performance computing (HPC) cluster to underpin the research cloud.

"We're currently implementing the largest high-performance computing environment in the world devoted exclusively to cancer," says Vogel, who was recently named Premier 100 IT Leader honoree by our sister publication, Computerworld.

The data and processing capacity are available to the MD Anderson cancer researchers as needed, whether they're sequencing human genomes or investigating radiation physics, epidemiology, dosing calculations for radiation therapy or running simulations for clinical trial activities. About three dozen principal investigators, who each have anywhere from two to 10 assistants, regularly tap into the research cloud, Vogel says.

To access the cloud, they use a service oriented architecture-based Web portal called ResearchStation.

"When you look at the classic definition of cloud computing as enabling convenient, on-demand network access to a shared pool of configurable computing resources that can be rapidly provisioned and released, that's in fact how we're approaching our environment," Vogel says.


However, he notes, the MD Anderson cloud doesn't currently have a chargeback mechanism - an oft-cited but, at this point, little used cloud attribute. "We don't require a chargeback mechanism because we manage demand largely by a peer review process. The actual determination of priority for using resources is driven by clinicians and researchers themselves, not by IT people," Vogel says.

What this means, he adds, is that he never needs to plead a case for, say, more storage. "They're the ones going to executive management, saying, 'You really have to increase the capacity of this capability or that capability for us to continue to do our work and maintain our rating as one of the top cancer centers in the world,'" Vogel explains.

More, more, more

In addition, MD Anderson doesn't experience the typical up and down spikes in usage that other enterprises might encounter.

"We find that both the clinicians and researchers in the field of medicine have what I would label 'an insatiable demand' for computing resources, and the demand curve just keeps going up," Vogel says.

He notes that the 8,000-processor HPC sitting at the heart of the private cloud already operates at 80% to 90% capacity, as did its predecessor, a mere 1,100-processor machine. Memory-intensive applications rely on six 32-CPU servers with 512 GB of memory each.

The cloud build-out at MD Anderson dovetails with the organization's expansion into a third data center, due to open this summer.

This will be the second new data center the organization has opened in a four-year period - and these are good-sized operations, with 12,000 to 15,000 square feet of raised floor in each, Vogel says. "We thought our second data center would last us four to five years, but it was full within 18 to 20 months. We had to turn our disaster recovery site into a production data center as we built another one," he adds.

The MD Anderson data centers house roughly 3 petabytes of data, a "somewhat surprising amount," Vogel says, since the cancer center is primarily a 500-bed hospital. But the volume of research data, at about 1.4 PB, now exceeds the amount of clinical data at MD Anderson.

"Anybody who looks at genomic medicine and the sequencing of human genomes begins to realize that there's a tsunami of data coming out of those processes," he notes. "So, ironically, today at MD Anderson we have more data storage capacity devoted to research than we do for clinical care, and that includes all of our images. We're being hit by extraordinary amounts of data that needs to be managed and stored." [See YouTube below, that predicted today's conditions 3 years ago - AJP]

To handle the cloud's storage requirements, MD Anderson uses an HP-Ibrix system that supports extreme scale-out. It chose the Ibrix system because of its reliability and its ability to present storage seamlessly over Ethernet or InfiniBand, using CIFS, FTP, HTTP, the Linux client, NFS and other technologies, Vogel says. "This capability also enables us to do data tiering through the cluster," he adds.

Manageability also has been a boon. "Having HP as the end-to-end vendor ensures that all parts will fit together and fit into our monitoring system without any clashes," Vogel says.

While MD Anderson uses HP Storage Essentials and CIM to manage each storage unit, it relies on the Ibrix management server, Fusion Manager, for a top-level view. Each server also reports into Fusion Manager, Vogel says.

"As an added bonus, and very much a consideration in a constrained healthcare personnel environment, is the ability to operate our entire cloud configuration with minimal personnel involvement - just two people," Vogel says.

Public cloud: Not on your life

Vogel says he's talked to some public cloud providers who would love to host those MRIs, CT Scans and other clinical images - more than 1 billion of them - within their infrastructures. But no can do, he says.

"We've looked into this, but quite honestly, we've found on performance, access and in the management of that data, going to a public cloud is more risky than we're willing to entertain," Vogel says. "This goes directly to the point that this is identifiable patient data ... and we're just not comfortable with the cloud given the actionable capability of a patient should there be a breach."

What's more, public cloud providers simply can't provide the level of business knowledge that MD Anderson's IT staffers can, some of whom are PhD scientists themselves, Vogel says.

"When you're in the business of biology, which we are, it's a different ballgame in terms of understanding the structures of data, the kinds of access and models used, and the applications that need to be available," Vogel says. "As much as public cloud providers would like us all to believe, this is not just about dumping data into a big bucket and letting somebody else manage it."



Cancer as Defective Fractal Recursive Genome Function (Pellionisz and Lander et al. trigger escalation of fractal approach)

[Google Tech Talk YouTube, The Principle of Recursive Genome Function, 2008]

[The "double lucid heresy" of brain cell growth programmed by an iterative fractal recursive genome function from proteins to DNA - surpassing both mistaken axioms ruling for half a Century; the "Central Dogma" AND the "Junk DNA" misnomer - are traced back to the fractal model of a Purkinje brain cell (Pellionisz, 1989, see page 461 Fig. and point 3.1.3. "repeated access to genetic code", see also acknowledgement on pp. 462 reference to the NIHM grant application - denied because of double heresy, and renewal of ongoing NIH grant to AJP also denied). Based on the core-concept, FractoGene (fractal DNA governing growth of fractal organelles, organs and organisms, Pellionisz, 2002) could only be established in a stealth-mode (USPTO, 2002), till the results of NIH "ENCODE" concluded (2007) that "the community of scientists would have to re-think long-held beliefs (Collins, 2007). Upon this clearance, both the peer-reviewed The Principle of Recursive Genome Function and the Google Tech Talk YouTube could be swiftly disseminated (click above, 2008).

The "fractal approach to DNA" ("The Principle") received a major impetus (mark B) when, within 6 months of its publications was explicitely cited by about 15 authors (see e.g. Shapshak et al, 2008 and Chiappelli et al, 2008, and the fractal approach to recursive genome function was also presented (invited by Prof. George Church) in Cold Spring Harbor, Personal Genomes, Pellionisz, 2009, September (see escalation from Mark A).

The manuscript was handed over to Dr. Lander prior to its publication; a Science cover article by Lander with a dozen co-workers appeared (Lieberman-Aiden et al., 2009 October), showing a fractal folding structure of the DNA. The article, with Eric Lander as Science Advisor, amounted to conveying the message "Mr. President, the DNA is fractal!" Although the Lander et al. paper (2009) reached back by direct citation to the seminal idea of fractal folding by Grosberg et al. (1993) - similarly as Pellionisz' The Principle of Recursive Genome Function (2008) reached back by direct citation to the seminal idea of fractal neural growth by Pellionisz (1989) - the mutually reinforcing papers by Pellionisz (2008) and Lieberman-Aiden et al. (2009) created the flood of viewership of Pellionisz' YouTube (2008).

Thus, with Pellionisz (2002), Collins (2007), Church (2009), Lander (2009), Schadt (2010), and Mattick (see article below, prized in 2011), there is a slew of leaders calling for new axioms, some implicitly or explicitly referring to the fractal nature of DNA, setting off a new school in Genomics (HoloGenomics). Some further representatives of the followership include:

Jean-Claude Perez (2010) Codon Populations in Single-stranded Whole Human Genome DNA Are Fractal and Fine-tuned by the Golden Ratio 1.618 Interdiscip Sci Comput Life Sci 2: 228–240. DOI: 10.1007/s12539-010-0022-0

“Since the PHYSICAL structure was found fractal (providing enormous amount of untangled compression), it is reasonable that the LOGICAL sequence and function of the genome are also fractal.” (Pellionisz, A., 2009, personal communication: From the Principle of Recursive Genome Function to Interpretation of HoloGenome Regulation by Personal Genome Computers. Cold Spring Harbor Laboratory, Personal Genomes Conference, Sept. 14–17, 2009).

For several years, ...researchers like A. Pellionisz advocated ways to analyze and detect fractal defects within whole genomes. This is based on recursive fractal exploration methods and artificial neural network technologies (Pellionisz, 2008).

Simone Caramel and Sergio Stagnaro (2011), in their "Quantum Biophysical Semeiotics and mit-Genome's fractal dimension", attempt to make generalizations from fractal dimension to chaotic, thermodynamical and quantum states of the genome.

Alexei Kurakin (2011) "The self-organizing fractal theory as a universal discovery method: the phenomenon of life" attempts to make generalizations from fractal theory to non-equilibrium thermodynamics in genomics, expanding the horizon to life sciences.

The PostModern era of Genomics - HoloGenomics, uniting Genomics and Epigenomics in terms of Informatics - is upon us. It is a new science, and since it is at the same time a new industry to save lives as urgently as possible, science results can be immediately verified (or falsified) - and if they are algorithmic (software-enabling), they can be put into use at once. - AJP]



The Trouble with Genes [Article gets prize - Mattick joins leaders in admitting that basic premises were all wrong - AJP]

Cosmos
by Elizabeth Finkel

'Junk DNA' research inspires Higher Education Journalist of the Year

3 March 2011

COSMOS journalist Elizabeth Finkel has won the Universities Australia Higher Education Journalist of the Year Award for 2011 for an article on research conducted by IMB's Professor John Mattick.

The article, "The Trouble with Genes", was published in Issue 31 of Cosmos in February 2010 and details Professor Mattick's theory that so-called 'junk DNA' actually helps regulate our development. [Why does a year-old article get a Prize? Because the message looms greater and greater as the "Big Genome Letdown" sours - AJP]

This research runs counter to the traditional scientific dogma that the only useful genetic material is genes, which contain the instructions to make proteins, and everything else in the genome is 'junk'. Dr Finkel, who herself has training as a geneticist, said Professor Mattick's view "puts him way ahead of the curve".

"We know he is leading the scientific world with his ideas and we really ought to be writing about them but it is difficult to understand what he has discovered - one seems to require degrees in both computing and genetics," Dr Finkel said. [This is a reason why AJP, originator of FractoGene, can excel - with Ph.D. in computer technology, Ph.D. in biology and Ph.D. in physics - AJP]

The Higher Education Awards, conducted in conjunction with the National Press Club, are judged in two categories, each with awards for print and broadcast. The Journalist of the Year is selected from the category winners and is awarded a $10,000 study tour funded by Universities Australia. Dr Finkel's category win was for excellence in communicating research and innovation, teaching and learning, equity and access, social inclusion or Indigenous education issues in print.

National Press Club President and Chair of the judging panel, Laurie Wilson, said, "Elizabeth's story displayed all the hallmarks of outstanding journalism - she tackled an extremely complex topic, translated it into layman's language and turned it into a great story."

The other judges were: Dr Matthew Ricketson, author and former journalist who is now Professor of Journalism at the University of Canberra; Mischa Schubert, NPC Vice-President and political reporter for The Age; and Malcolm Colless, journalist, media consultant and former Director of News Ltd.

---

Junk DNA was once thought to be little more than gibberish. But it may actually be the software that controls a complex organism.

"WHAT'S A GENE, DAD?" I'd like to be there when the nine-year-old son of iconoclastic geneticist John Mattick pops the question. It used to be simple - a gene coded for a protein.

But when I put that question to Mattick, based at the University of Queensland, his response was as disturbing as it was confusing.

"Genetic information is multilayered and a gene can convey lots of different information into the system. It's almost like we've moved into hyperspace in terms of information coding and transfer."

Mattick's cutting-edge theories about gene regulation have been published in the British journal Nature and even appeared in the New York Times. Yet, even though I was once a geneticist, I couldn't fathom his answer.

It seemed my fears had been realised and I'd been left behind by the genetics revolution. In a desperate ploy to catch up, I asked how he would explain a gene to his young son.

"I would just tell him, 'it's an old-fashioned concept', and then explain about information networks. He's a child of the digital generation - he won't have any trouble with it." [This is a reminder that presently "gene" has ZERO universally accepted definition. You get as many fragmented definitions, as may experts you ask - the algorithmic (software enabling) definition is FractoGene - AJP]

IT'S NOT JUST ME who's confused. I checked the 2008 edition of my favourite text book, Molecular Biology of the Cell.

The traditional definition is still there in the opening chapter. But as you read on, you sense the textbook struggling, trying to wrestle the gene back into the box of a definition.

Mattick prefers not to try. And a lot of other geneticists are starting to think this way too. As Ed Weiss at the University of Pennsylvania told me, "the concept of a gene is shredding". [It is not really shredding - it is becoming fractal - AJP].

The genomics revolution is largely to blame. Scientists were shocked when they found out how few 'old-fashioned' genes we actually have - about the same number as the humble nematode worm (Caenorhabditis elegans).

In fact, almost all multicellular creatures with the complexity of a worm or greater have about 20,000 genes. But for Mattick, the death knell of the traditional concept of the gene was triggered by another revolution altogether - that of the digital information age.

Scientists have always understood biology in terms of the technology of the day. The brain, for instance, was considered by the Ancient Greeks and Romans to be an aqueduct for pumping blood; inhabitants of the 19th century likened it to a telephone exchange; those of the 20th century likened it to a personal computer. Now scientists compare the brain to chaos and distributed functions of the Internet. [Indeed, those who actually know nonlinear dynamics - with chaos and fractals the two sides of the same coin - embraced FractoGene as soon as ENCODE invalidated the premises of the old school; see the peer-reviewed science paper (about 10,000 downloads) and the Google Tech Talk YouTube (now at some 10,300 views) in 2008 - AJP].

WHEN IT COMES to the gene, Mattick likes to point out that scientists cracked its code in the 1950s, when the world was purely analogue [This is perhaps the only major science mistake of the brilliant science writer. The code was never "cracked" - only "intercepted". Those in WWII intercepting coded messages of the Japanese and German forces know exactly the difference between "intercepting" the exact transmission and "cracking" the code to reveal its meaning - AJP].

We had vinyl records, slide rules and mechanical cars. We were primed to recognise the gene as a recipe for an analogue device - such as a protein, for instance.

Proteins are the analogue devices that operate the chemistry of life: the enzymes that metabolise food; the mortar and bricks of tissues; the motors of muscles; the hormones that transmit signals; and the ferries that carry oxygen through blood. We recognised a gene as being the recipe for a protein.

Today, iPods store the equivalent of many thousands of vinyl records. Microprocessors in cars can control everything from the engine to the stereo.

The digital revolution has succeeded in taking vast amounts of information and compressing it. Mattick believes something very similar happened to the gene. In the course of evolution, it went digital. [Dr. Mattick may not be on firm ground when it comes to information theory - or the science writer may have misunderstood his statement. The DNA has always been digital throughout evolution - only human interpretation failed to recognize this fact for far too long - AJP]

In 1953 we got our first inkling of how genes work. Scientists knew that genetic information was carried by the threadlike molecule DNA - a polymer of four repeating molecules: adenine, thymine, cytosine and guanine, or A, T, C and G. But how did this thread carry genetic information? Perhaps a picture would reveal its secret.

BRITISH CRYSTALLOGRAPHERS Rosalind Franklin and Maurice Wilkins bombarded crystals of DNA with X-rays and observed an enigmatic regular structure.

The University of Cambridge's James Watson and Francis Crick figured out what it was. Like the elegant spiral staircase of the Louvre in Paris, it was a double helix. And the moment they figured out the structure, the secret of life was revealed. [No, it was not. Reproduction (the copy mechanism) of the DNA was, indeed, discovered, but how the fractal DNA governs the growth of fractal organelles, organs and organisms was not at all revealed. It was certainly never understood by Dr. Crick (because of his adherence to his "Central Dogma" till the end of his life), and the jury is still out on Dr. Watson - AJP]

Life copied itself by splitting the helical ladder down the middle. Each half then became a template for generating a new copy because each DNA letter on the split rung specified what its partner must be: A only paired with T; C would link only with G.
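The pairing rule is strict enough that templated copying can be sketched in a few lines of code. A toy illustration in Python (our own sketch, ignoring strand orientation and all enzymatic machinery):

    # Watson-Crick pairing: each letter on the split rung specifies its partner.
    PAIR = {'A': 'T', 'T': 'A', 'C': 'G', 'G': 'C'}

    def copy_from_template(template):
        """Rebuild the missing half of the ladder from one split strand."""
        return ''.join(PAIR[base] for base in template)

    print(copy_from_template('ATCGGA'))  # prints TAGCCT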

Watson and Crick had figured out how the code of life copies itself. Some five years later, Marshall Nirenberg, Har Gobind Khorana and Robert Holley in the U.S. figured out what the code means.

The letters of DNA spelt words that coded for amino acids - the building blocks of proteins. Until then scientists had been kids pulling Tinkertoys apart, but now they had the instructions for assembling them. Mankind had discovered the awe-inspiring logic of life.

Genes were made up of a string of DNA, and DNA coded for proteins. DNA happened to have a go-between, a disposable working copy called 'messenger RNA'.

RNA was chemically similar to DNA, but flimsier. Just as an architect will run off copies of a blueprint, so messenger RNA was the working copy used on the protein construction site.

HAVING CRACKED THE SECRET of life, these scientists now started calling themselves molecular biologists (biologists who studied living molecules). And they became rather sanguine, so sanguine they started talking about dogmas.

"We had two central dogmas that were regarded as universal truths in the '60s," said geneticist Bob Williamson, now an emeritus professor at the University of Melbourne.

"The first was 'DNA made RNA made protein'. The second was that the genetic code was universal: what was true for E. coli would be true for an elephant".

The shock to the system came in 1977. Researchers by now were quite au fait with the genetic code. Thanks to its universality, they could insert the predicted DNA code for a human gene into a bacterium and out would pop the correct protein.

Yet no-one had ever glimpsed the 'mother code' of a human gene. It was packaged in a chromosome within the dark nucleus of the cell, like a hallowed tome in the crypt of the Vatican library.

IN 1977, RESEARCHERS decided to fish out the mother code for the gene that makes globin (a component of haemoglobin). But no-one was prepared for its size - the globin gene was way larger than it ought to have been.

Williamson, whose group at St Mary's Hospital in London were the first to put the human globin gene into bacteria, remarked in a Nature editorial: "Once again we are surprised".

The explanation was bizarre. The mother gene did indeed carry the predicted code for globin, but it was strangely interspersed with gibberish.

Imagine that the predicted DNA code for globin was written with the English letters: G-L-O-B-I-N.

The mother code appeared as:

G-L-z-z-z-q-q-O-B-s-r-m-b-I-N.

Researchers panicked. What was this gibberish? Was the genetic code not universal after all?

But the panic soon subsided. Whatever gibberish had infiltrated the mother code, it disappeared from the working copy - the messenger RNA - by the time it got to the factory floor.

Like an edited home video, the internal junk had been clipped out and the good bits spliced back together again. Indeed the process was dubbed 'splicing'. The bits that were spliced together were named 'exons'; the internal junk, 'introns'.
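In the article's toy notation - upper-case letters for exons, lower-case letters for introns - splicing amounts to a one-line filter. A hypothetical Python sketch:

    # Splicing: clip out the lower-case 'introns' and join the
    # upper-case 'exons' back together.
    def splice(mother_code):
        return ''.join(ch for ch in mother_code if ch.isupper())

    print(splice('GLzzzqqOBsrmbIN'))  # prints GLOBIN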

With everything neatly named and explained, "the world collectively breathed a sigh of relief," says Mattick. The hallowed central dogma had been saved.

THERE WERE LOTS OF justifications for dismissing junk DNA as 'junk'. Not only did it lack code words for amino acids; it turned out 50% of the junk consisted of inane repetition.

These repetitious tracts seemed meaningless. But researchers had a good notion of what many of them were. Most of the repeats were 'transposons' or 'jumping genes'.

Jumping genes, which may have originated from invading viruses, have the ability to copy themselves independently of the rest of the genome and then become inserted randomly throughout the genome.

Then there was another reason to suspect that much of the DNA of a species was junk. The total amount of DNA seemed to bear very little relationship to the complexity of the organism.

An amoeba, for instance, had a thousand times more DNA than a human. Sometimes it seemed cells multiplied, but forgot to divide, ending up with vast amounts of DNA. It seemed as though DNA just liked to go along for the ride.

NOT EVERYONE DISMISSED junk DNA. Physicists such as Eugene Stanley at Boston University looked for patterns in junk DNA and found long-range interactions more typical of language than gibberish.

Malcolm Simons, a Melbourne immunologist, stumbled upon junk DNA in the course of testing people's tissue types. Tissue compatibility depends on MHC genes, as do some aspects of immunity.

Yet he found the pattern of junk DNA surrounding the genes was a better predictor of the tissue type. For him, junk turned to treasure.

Mattick's departure from the dogma seems to have been driven more by instinct than evidence. Blame it on his genes: "I've got a natural tendency to challenge everything because of my Irish background," he says.

Mattick recalls sitting in a pub in 1977 during his postdoctoral stint at the Baylor College of Medicine in Houston, Texas, and thinking "Maybe this is telling us something?" But for 16 years, while he built his career as a bacterial geneticist, the problem of junk remained an "intellectual hobby".

In 1993, Mattick felt he deserved a break. He'd completed the Herculean task of setting up an entire new institute from scratch - the Institute of Molecular Biosciences at the University of Queensland in Brisbane. What better reward than to spend a sabbatical at the University of Cambridge scratching his intellectual itch?

He had slowly been building a theory in which RNA was central. The current dogma said that most of the RNA made by the genome, the RNA from introns, was bound for the scrap heap. But Mattick thought otherwise.

Simple organisms such as bacteria do not carry introns, but complex creatures do. Mattick wondered if the scrap RNA was part and parcel of that complexity. After all, RNA has amazing versatility: it is a code-carrying molecule that can recognise matching codes on both DNA and other bits of RNA. And it can also form extraordinary three-dimensional structures to mesh with proteins.

IN MATTICK'S THEORY, the scrap RNA or 'non-coding' RNA as it became called, was not flotsam and jetsam floating off a sea of junk DNA. Rather this scrap was more akin to the optical fibres of a modern high-rise building.

An 18th century time traveller, spying these cables, might pass them off as scrap compared to the recognisable analogue components of the building like bathrooms, kitchens and bedrooms. Yet, just as the cables are crucial for the building's communications and controls, so scrap RNA was crucial to the communications and control of a multicellular organism.

The major problem with his theory was that there was no experiment to prove it right or wrong. So Mattick decided to spend his sabbatical in the library looking for "circumstantial evidence".

What he searched for with the most alacrity was evidence to prove him wrong. "The critical observations were the ones that would show it was bunk. Then I could just return to my lab and forget about all this stuff".

TWO BITS OF EVIDENCE threatened to abruptly end his quest. One was fugu - the pufferfish (Takifugu). Fugu is famous for its tetrodotoxin, which kills dozens of Japanese diners each year, and for its tiny genome - about an eighth the size of our own.

Nobel Prize winner Sydney Brenner, then at the British Medical Research Council's Laboratory of Molecular Biology in Cambridge, was in the process of reading fugu's DNA sequence. Rumour was the fish had barely any introns, and if a complex vertebrate such as fugu had no introns, then Mattick's theory about regulatory RNA must be wrong.

He paid a visit to Brenner to discover the terrible truth. It turned out that while most of fugu's introns were very small, some were really big. Mattick's theory survived.

The next mortal threat was a publication reporting that introns, once clipped out of the messenger RNA, were destroyed within seconds. If introns were as ephemeral as a puff of smoke, how could they perform any function?

Mattick scrutinised the report closely. It showed that introns were edited out of the main message within seconds. But as to how long they persisted before being shredded, no-one knew. Perhaps, he speculated, it was long enough to do something.

Mattick, of course, was also on the lookout for evidence that would support his theory. He found some. The fruit fly possessed a set of genes that were responsible for its body plan, known as the bithorax complex. It turned out that a crucial stretch of this DNA produced RNA that did not code for protein. What other function might this RNA have?

MATTICK RETURNED to the University of Queensland with his theory intact. He started writing papers articulating his theory that non-coding RNA (shorthand for non-protein coding RNA) was the high-level coding language of complex organisms.

His approach remained one of gathering circumstantial evidence. Together with co-workers in mathematics and computer science, he amassed some compelling observations.

For instance, as more and more species became the darlings of DNA sequencing projects, Mattick noticed a delectable relationship: there was no link between the complexity of the critter and its total amount of DNA.

But there was a clear relationship between the proportions of junk and protein-coding DNA: as the complexity of the organism increased, so did the relative amount of junk.

And then genome sequencing delivered the pièce de résistance: making a human being required no more old-fashioned genes than making a worm or fly.

Clearly complexity was encoded elsewhere, and according to Mattick and a growing number of converts, it was in non-coding RNA.

Mattick's genetic programming theory, outlined in a recent edition of the Annals of the New York Academy of Sciences, started to assume its current form. In simple terms it goes like this: bacteria could make do with using analogue devices - proteins.

But even these single-celled critters devoted a large portion of their genetic information to the task of control. If organisms were going to get more complex and coordinate decisions between trillions of cells, they needed to develop a more compact regulatory language.

Just as engineers turned to digital coding to move from LPs to iPods, biological systems turned to RNA to evolve from bacteria to people. According to Mattick, RNA, like DNA, carries coded digital information in four letters that can rapidly interact with other parts of the code, much like the self-modifying or feed-forward routines of some computer programs.

As Mattick was building the framework of his model, the rest of the world started providing the bricks and mortar. Big time.

SINCE 1993, THERE has been an avalanche of evidence on the surprising roles of non-coding RNA.

Most of our DNA may well have originated as 'junk' but that junk has been put to work. One of its most common jobs is to produce tiny bits of RNA known as 'microRNA' that targets other RNA for destruction. MicroRNA has been shown to shut down the activity of protein-coding RNA in everything from petunias to people.
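Mechanically, that shutdown begins as string matching: a microRNA marks for destruction any message that carries the complement of its 'seed' sequence. A toy Python sketch (the sequences and the all-or-nothing matching rule are our simplifications; real targeting is looser and partly structural):

    # RNA pairs A-U and C-G; a message is a candidate target if it
    # contains the reverse complement of the microRNA's seed region.
    PAIR = {'A': 'U', 'U': 'A', 'C': 'G', 'G': 'C'}

    def reverse_complement(rna):
        return ''.join(PAIR[base] for base in reversed(rna))

    def is_target(seed, messenger):
        return reverse_complement(seed) in messenger

    print(is_target('GAGGUAG', 'AAGCUACCUCGGA'))  # prints True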

Junk DNA also plays another crucial function: it guards the DNA code from invasion by retroviruses or so-called jumping genes, which can hop about in the genome causing dangerous mutations.

Junk DNA is itself largely composed of former interlopers but, like a patriotic immigrant, it does its best to prevent any further invasions.

The RNA transcripts that run off junk DNA are still a close match to live viruses or active jumping genes, and if these junk transcripts meet up with their relatives, they inactivate them.

JUNK DNA MAY play an even more profound role in the workings of multicellular animals. A crucial part of being multicellular is that different cells do different things - they are not all reading from the same page of the genome hymn book.

The first step toward specialisation is folding down those pages that are not to be read, and it seems junk DNA guides the folding process. For instance, females carry two X chromosomes, but only read the contents of one.

During embryonic development, one of the X chromosomes is folded away, a process initiated by a large string of non-coding RNA called 'Xist'.

While most tissues of the body want to keep jumping genes from jumping, the brain might have other ideas.

Fred Gage's lab at the Salk Institute for Biological Studies in La Jolla, California, found evidence that jumping genes known as 'LINE-1' or 'L1', which are permanently deactivated in other cells, become active during development of the human brain.

The L1 genes replicate and insert randomly, sometimes creating as many as 100 extra copies per cell. This variation among neurons in our brains could be the basis for individual differences in neural circuitry and may open up a new way of looking at neurological disorders.

Junk may also have played a crucial role in our evolution. At the DNA level, one of the things that distinguishes primates from other mammals is the invasion of a million copies of a jumping gene that goes by the name of 'Alu'. It now occupies 10.5% of the human genome.

JUNK RNA MAY also account for some of the difference between humans and chimps. Our DNA is 99% similar, but one of the regions that differs is the so-called 'HAR1', or human accelerated region 1.

It turns out HAR1 produces a 118-letter non-coding RNA, which is highly active in the brain.

In 2005, Mattick resigned as director of his institute and went back to work in the lab. Tools to explore the function of non-coding RNA had arrived in the form of heavy-duty sequencing machines.

In just one week a 'next generation' sequencing machine can read three billion letters - the equivalent of an entire human genome. Not long ago, that task took the combined forces of the Human Genome Project 14 years to complete.

Mattick and University of Queensland colleague Sean Grimmond have been in collaboration with like-minds at Japan's RIKEN institute. They have been scouring the output of mouse and human genomes, trying to put together a comprehensive catalogue of their RNA output. The database called 'Fantom' (functional annotation of mammalian genomes) now contains millions of transcripts.

THE LATEST DATA is mind boggling. As Grimmond tells me: "Each gene is capable of seven different transcripts, some of these code for proteins and some don't."

Trying to make sense of this deluge is the challenge. "[But] we're getting good at asking questions about ludicrous amounts of data," he says.

For Mattick, the human genome is an RNA machine. But is his theory well and truly vindicated? Not yet. [Why not? One may ask. A possible explanation is that Dr. Mattick's theory is not algorithmic; thus not "software enabling" - AJP] Though it would be hard to find anyone today who blithely dismisses junk DNA, few are willing to go as far as he is and say that the RNA read from junk code is the software that controls a complex organism.

For example, Claude Desplan at New York University has studied fruit fly development for 25 years and argues that complex genomes, in flies or people, are still fundamentally controlled by proteins. While acknowledging that some junk has a role he says, "most of junk DNA is still junk".

Mattick, though, is convinced that our genome is way ahead of anything that IT designers have yet imagined. "The genome is so sophisticated, that there are lessons in information storage and transmission that will be really useful," once we figure it out, he tells me. "The human genome is a similar size to Microsoft Word, but it makes a human that walks and talks."
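That last comparison survives a back-of-the-envelope check (our arithmetic, not the article's): at two bits per DNA letter, three billion letters compress to roughly 750 megabytes - the scale of a large desktop application.

    bases = 3_000_000_000      # letters in a human genome
    bits_per_base = 2          # four letters (A, C, G, T) need 2 bits each
    megabytes = bases * bits_per_base / 8 / 1_000_000
    print(round(megabytes))    # prints 750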

Notwithstanding the deluge of papers he has authored in top journals, Mattick still seems to be on the fringe. And you get the impression that's just where he likes it.



Adventures in Extreme Science [Eric Schadt and other kooks - AJP]
Esquire

March 22, 2011, 12:24 PM

From Crick and Watson through J. Craig Venter, we had all our eggs in one basket — molecular biology, gene mapping, whatever you want to call it. It failed. And now we're counting on this guy.

By Tom Junod


There may be another scientist in the world as smart as Eric Schadt. After all, scientists are a pretty smart lot, even though you'd be surprised at how few want to change the world, and how many of them have the trudging souls of brilliant, dutiful clerks. There may even be another scientist in the world as popular, as in demand as Eric Schadt, even though Eric works hard at everything he does, including his popularity, and is engaged, at any given time, in at least ten collaborations with other top scientists, not to mention the production — just last year — of a profligate thirty-five scientific papers, not to mention the delivery, year in and year out, of about forty talks and presentations after receiving invitations to deliver two or three hundred. (You'd also be surprised by how social a lot of scientists are, and how many parties they go to.) But if you're looking for a scientist whose great popularity rests in tirelessly writing papers and delivering speeches whose implicit and sometimes explicit message to the most eminent minds in his field is that they're wrong, that they've failed, and that the best way for them to stop wasting their lives is to follow him in a scientific revolution that he admits might not even work: Well, then you'd probably have to narrow your search a little bit. It takes a pretty smart guy to tell the smartest people in the world that all their success, all their hard-won knowledge has led them to a dead end ... that the approach they've taken has been a little, um, simplistic. It takes Eric Schadt to say that — and then to make the damned sale.

What's he selling? Well, the first way to answer that is to say what he's not selling. He's not selling molecular biology. He's not selling the last big revolution in biology, the revolution that made biology the dominant science of our time and was supposed to save us. The Human Genome Project, which at a cost of about $3 billion mapped the twenty-three thousand or so genes that are said to encode all of human existence? That's molecular biology, man — its signal triumph, its apotheosis, the culmination of an effort that began with the elucidation of the structure of the DNA molecule, picked up speed and funding with the War on Cancer, and then, well, figured everything out for us, from the causes of cancer to the roots of belief. (Hint: They're both in the genes, which govern our biology by the proteins they express.) And so we believed. We believed that in our genes was the code for not just our proteins but our fates; we believed what we read in the newspapers and heard on television, that a gene for this had been found, or a gene for that; and we believed above all that if a cause for a certain disease had been discovered, then a cure had to be on the way. Indeed, without quite knowing it, we believed in the logic of molecular biology, its inexorable momentum, which we equated with scientific progress itself. The logic was this: one gene at a time. One gene at a time, we'd triumph over disease, and if we figured out the right gene, the right protein, and the right pathway between genes and proteins, maybe we'd even triumph over death itself. How triumphant was molecular biology? It was so triumphant that we believed in it (and still believe in it) even when it has gone a long way toward bankrupting the pharmaceutical industry with drugs like the painkiller Vioxx and the diabetes medication Avandia — drugs that hit their molecular targets but also cause catastrophic side effects by hitting other unforeseen targets as well — or drugs that never come close to making it to market at all. We still believe in it even when nearly ten years after the mapping of the genome, it has radically increased the cost of drug development while delivering next to nothing in return.

That's right: nothing. Oh, sure, knowledge, yes. Humans know more about the workings of individual genes, proteins, pathways, and kinds of cells than we ever have. We know so much that surely all we need is time. Because one gene at a time takes time. And drug discovery takes time. And FDA approval takes time, gobs of time, epochal engines of time ... and now here comes Eric Schadt saying, Don't hold your breath. Here comes Eric Schadt saying that time isn't the problem with molecular biology — molecular biology is. Reductionism is. Willful oversimplification is. The very idea that humanity can enlist the aid of grunting lab-coated Sherpas and march toward pharmaceutical nirvana one gene at a time is. Here comes Eric Schadt saying, "All right, so the idea was that understanding individual proteins and their missions could open up our understanding of the complexity of living systems. That's failed. That's turned out not to be true. And that was the dream, right? So it's a crisis. We understand simple processes, but we have no idea how simple processes fit into larger processes." You get that? Molecular biology — the great scientific god of our age, not just the answer to but the explanation for our prayers — in crisis! Not true! A failure! Dead wrong! No wonder that a few years ago, Schadt gave one of his talks at Columbia and five minutes into the speech, a gray biological eminence stood up and said (in Eric's telling), "How dare you dismiss all the biology that has made us so successful today? My recommendation to everyone in this audience is not to listen to what this man has to say."

The gentleman then turned and walked out, in front of a few hundred people. But here's the deal: Everybody else stayed. And listened. Because you see, at the time, Eric Schadt was working for Merck and was already getting a reputation as the guy who was remaking Merck from the inside out. And because Schadt's not just (or even) a critic, not some apocalyptic scold. He's funny. He's a real character. He's the life of the party, with a line of bullshit he likes to call bullshit, a mad motormouthed charisma that he combines with a mad cackling awareness of the absurdity of all intellectual endeavor, especially his own. He has a shtick, a pitch, but he also has a vision, and that's what he's selling, with evangelical fervor. And the vision, basically, is this:

Okay, so focusing on one gene at a time doesn't work, doesn't explain what causes disease, indeed falsifies the causes of disease and makes it nearly impossible to develop the drugs we need to cure it. So how about focusing on thousands of genes at a time? How about focusing on thousands of genes and thousands of proteins with some enzymes and environmental factors thrown in for good measure? How about getting bigger instead of getting smaller? How about going for complexity instead of simplicity? How about implicating not single genes and single pathways of proteins in disease but whole vast networks of genes and proteins — networks that have been invisible to us until now? How about taking advantage of the technology and the data that's become available over the past ten years and using it to create models of the living world that are nearly as complex as the living world itself and by God nearly as large? Oh, sure, it sounds impossible. Maybe it is impossible. But that's why Eric Schadt wants not just to remake the underpinnings of biological science but rather to remake science itself — the way it's done. Okay, so the complexity of living systems — and the amount of data they generate — turns out to be too much for even the most heroic of individual scientists to master. All right then: Biologists have to form networks that mimic the biological networks they're studying. The networks between genes and proteins turn out to be organized socially, like human networks, and so human social networks will be required to understand them ... with Eric Schadt at their center.

Basically, most anecdotes about Eric Schadt involve the two things that have enabled him to be both highly connected and a revolutionary — smarts and salesmanship. For example, here's how he met his wife. He was a graduate student at UC Davis, going for his Ph.D. in pure mathematics. Pure math is the hardest, most abstract and conceptually demanding discipline you can find, which is why Eric was studying it, and why a young woman named Jennifer Harkness one night gave him a call. She was a freshman with nothing to do, and she and her friends were making prank phone calls. The phone rang, Eric picked it up, and he heard a voice say, "Hi, this is Jenny." He said instantly, "Hi, Jenny — is this a prank phone call?" Jenny and her friends started screaming; they couldn't figure out how he'd known what they were doing. He explained that he was an inveterate prank caller himself — he liked calling seismologists and telling them that he was feeling tremors — and when Jennifer stayed on the line, he found out that she was, like him, from Michigan, and they had some things in common ...

He made the sale, in other words, and now he and Jennifer live in Palo Alto, California, with their five blond kids, the big sprawling brood that inevitably causes other scientists to remark, "Oh, you're an optimist!" when he tells them about it and also prompts him to articulate an elemental personal philosophy when he's sitting at his big dining-room table one morning, trying to finish another of his groundbreaking papers while bouncing his five-year-old daughter in his lap and at the same time checking his oldest son's math homework — "I can never do too much of anything. Bring it on, baby." He's forty-six years old, and he has the moonfaced swagger of a former child star, albeit one who grew up to be a football blocking back. He's stocky and strong, with a knobby nose and an imposingly lumpy brow and a disheveled head of brown hair spit-curled to his forehead by the sweat induced by his Herculean labors. Think Jack Black in a white lab coat and you get the picture ... except that he doesn't wear a lab coat. He's developed his own standardized scientific uniform — a white tennis shirt with a dark-blue Polo insignia and a pair of hiking shorts — and he wears it as faithfully as Steve Jobs wears blue jeans and a black mock turtleneck. He wears it when you see him in the morning and he wears it when you see him at night, so that you don't really know if he's ever gone to sleep or ever changed, and he even wears it when he takes his motorcycle to work, though the motorcycle is to motorcycles exactly what Eric Schadt is to biology — a baroque exaggeration of normal capabilities that either promises deliverance or threatens obliteration. But let Eric, who's something of a gearhead in both the civilian and scientific aspects of his life, describe the specifications of the BMW S 1000 RR:

"Four hundred pounds, 200 horsepower, the fastest thing out there, zero to 60 in 2.9 seconds, the first superbike." Well, at least he wears a helmet — and not just a helmet but a big black-visored one with a video camera rigged on top so that he can record the sublime experience of riding his superbike or the inevitably annihilating experience of being run off the road and crashing it. "People don't like being passed," he worries, but of course he passes them anyway on the way to work, hitting 100 miles per hour on the street and 120 miles per hour at the office park and popping the occasional celebratory wheelie in nothing but the white shirt, the short pants, and the mitered black helmet that makes him look like some kind of postmodern grenadier, sporting technological plumage.

And then, when he gets to work, three miles from his house, he gets to ride something that goes even faster.

He's one of those guys with a coveted brain, so he's one of those guys with a lot of gigs. He's cofounder of a nonprofit organization called Sage Bionetworks, which is dedicated to facilitating biological research through an open-source sharing of data. He's been trying to start his own institute — the catchily titled Institute for Multi Scale Biology — at the University of California, San Francisco, though he's constantly getting wooed, and it seems inevitable that he'll wind up with a big academic appointment somewhere, along with what's known as "massive institutional support" — i.e., a lot of money. There's a lot of money in biology, and Schadt, like a lot of other brilliant minds for hire, spends a lot of his time chasing it, making his pitch to the venture capitalists in and around the Bay Area or else going up to Seattle and making his pitch to the Gates Foundation and to Microsoft cofounder Paul Allen. "I think I must amuse Paul," Schadt says. "He keeps on inviting me up there, and he never gives me any money."

He does, however, have a regular job, with somewhat regular hours, and that's his job as chief scientific officer of a seven-year-old biotech company called Pacific Biosciences. It's a pretty interesting job, because basically Pacific Biosciences hired him the way a big-time Nascar team hires a driver — that is, because it has this miraculous machine, and it wants someone to drive it really, really fast. Schadt had just left Merck and could have gone almost anywhere he wanted — Yale wanted him to start a systems-biology department, and Genentech wanted him as its head of genetic research, a story that harkens once again to the constants of smarts and salesmanship: "When I was interviewing for that job, the head of the company's research department said, 'You're either completely full of shit or the smartest person on earth. We're not smart enough to know. But we're willing to bet that you're the smartest person on earth.' "

He wound up going to PacBio instead, even though it was essentially a start-up whose fortunes were and are tied to a machine called the RS, which stands for "real-time sequencer" but which is really an homage to "Rally Sport" and a nod to the fact that the people who run the company are really into California car culture. Schadt had heard of the RS when he was at Merck — he had heard that a scientist at Cornell named Stephen Turner had created a technology that could look at individual molecules of DNA in real time and was trying to take it into production. Schadt never thought he'd pull it off but realized that if he did ... well, the technology would be to some enterprising biologist what the telescope was to Galileo — a chance to corroborate what was a mathematical inference, a chance to see "changes in DNA causing changes in cellular networks causing changes in tissue networks going up to the whole organism." And then, in 2008, PacBio gave him a call. Turner's technology was now the RS, a $260 million triumph of engineering, design, and the kind of precautionary prophylaxis that's usually implemented around Level 4 biohazards. Would Schadt like to take it for a spin? Oh, hell yes — it was a job offer that satisfied the gearhead in him, the daredevil, the biologist who thinks like an astronomer, and, not incidentally, the salesman. Indeed, in his job as PacBio's top scientist, Schadt is a cross between Galileo, a paid thinker at someplace like the Santa Fe Institute, and a guy hawking ultra-high-end copiers. Yes, he's already glimpsed some things with the PacBio RS — he's looking to prove that instead of four bases making up the DNA molecule, there are actually so many modifications of the four that the real number could be more than twenty. (It's a perfect Eric Schadt breakthrough, because not only would it be a "game changer," it would also complicate the practice of biology beyond human capability.) But Schadt's also on the road a lot and on the phone a lot, because PacBio hired him not only to figure out the best experimental applications for the RS but also to "work your collaborations like you've been doing" — because it was his collaborative nature, his connectivity, that was at the heart of his attempt to remake the scientific culture at Merck. And that's where he's happiest. He's not the solitary scientist heroically thinking big thoughts. What he does at PacBio is his "vision of the perfect life" precisely because he's hardly ever alone, precisely because after "thinking of the things I want to think about," he gets to "travel around and talk them over with the most interesting people in the world."

He's all about the network, you see. He's helped identify it as the fundamental organizing principle of biological systems, and he sees very little difference between biological networks and social ones. When you look at biological networks comprised of thousands of genes, you'll see that they are just like social ones, with a few "highly connected" genes showing up again and again as "hub nodes" and others acting as spokes and outliers. Well, Schadt's ambition is to be a hub node. And PacBio allows him to realize his ambition because now not only is he Eric Fricking Schadt, but he's also got the machine that nobody else in the world has — the telescope, the souped-up gene sequencer, the RS. People call him out of the blue. He picks up the phone. And if their interest sounds interesting enough, he says yes, even when — hell, especially when — he already has too many projects to handle. Bring it on, baby. So now he's got collaborations going on with people at Harvard, Stanford, Columbia, Northwestern, UCLA, UC San Diego, UC San Francisco, the University of Washington, the University of Colorado, and the University of God Knows Where Else. He's got collaborations going that are intended to target the relevant biological networks behind cancer, heart disease, aging, diabetes, and sleeping problems. He's even collaborating with a scientist who's trying to extract energy from bacteria. And back in November, he got a call from a team at Harvard that was trying to figure out the strain of cholera that was ripping the guts out of Haiti almost a year after the earthquake. They'd heard about him. Could he help? Moreover, could the PacBio RS help? So here's what happened: They sent him the cholera strain from the cell culture, and he ran it through the RS. And four weeks after he received the samples, The New England Journal of Medicine published his paper with the results. A month, man. That's instantaneous in the world of biology. That's unprecedented. Turns out the strain of cholera originated in South Asia, and that information now makes a mass-vaccination plan feasible.

But this is also part of Schadt's vision — part of his pitch, part of what he's selling. He's worked at two big drug companies, Roche and Merck, and he knows what they're good at and knows what they're bad at. What they're good at: making drugs. What they're bad at: sharing. Unfortunately, what they're bad at sharing is the information that would help them make better drugs. But Eric Schadt is the strange rebel who happens to play well with others. He's the strange outlier who wants nothing more than to be a hub node. If anything, he overshares, and so what he wants to do is convince biologists to share for those who can't. In all his collaborations, he says, "we don't have as clear a path in getting drugs developed as in a pharmaceutical setting," so what he and his collaborators are doing is "publishing papers so that anyone can pursue them for whatever reason" — i.e., so that drug companies can use the ideas in them to make better drugs. And this is the idea that really gets Schadt going. Eric Schadt's the biggest thinker in biology, but meeting him sometimes feels like meeting Einstein and finding out that what he really liked about physics was the parties, like meeting Niels Bohr and having to look at his autograph collection. But that's why this is Schadt's moment: because he is out to erase the distinctions between intellectual force, technological force, and social force. Because as the Age of Information inexorably morphs into the Age of Information Overload, he's figured out that social force is the key to science's survival. And because when you ask him his grandest aim, his most cherished ambition, what he really wants to be, he answers, without hesitation, is a "master of information."

He never mentions the word science at all.

He wasn't supposed to be a scientist, anyway. Literally. It was, like, forbidden. It was ungodly. He grew up in Stevensville, Michigan, a town of a thousand people one mile square. His family was hardcore evangelical. His stepfather was a hard man, a believer, and a beautician, in roughly that order. Was Eric Schadt a believer? "Of course I was. I had no choice." Education was suspect — "I had no education to speak of." And so although he went to high school, what he calls the "greatest compliment I ever received" came when a teacher pronounced him "untamable," and as soon as he graduated, he was gone. He joined the Air Force, only to realize that instead of escaping the social and intellectual poverty of his background, he had planted himself "on the lowest rung of an organization that people in society already regarded as the lowest rung." His answer: step one, "I became profoundly depressed." Step two: He did the hardest thing he could think of doing, which was joining Special Operations. Parachute rescue. But he blew out his shoulder rappelling down a cliff, so he washed out. The Air Force looked to salvage its investment by giving him a battery of aptitude tests. When the results came back, he was asked if math had come easily to him in high school.

"Yeah, I guess so," he said. "Well, look at your scores," the Air Force said, and sent him to Cal Poly on a military scholarship.

He studied computer science and applied math at Cal Poly, and it was like taking a drug. Education itself blew a mind hungry for expansion. It wasn't just math. It was ... enlightenment. He came home, started talking about logic and philosophy, started posing "thought experiments" to his brothers and sisters. His stepfather kicked him out of the house. What he said, in Eric's recollection: "You are of the devil. Leave and never come back." Eric's answer, of course, was to do the hardest thing he could think of doing, even after his mother called a year and a half later and invited him back into the fold. He went to UC Davis to get a doctorate in pure math. Pure math: a purely conceptual exercise that takes place in purely abstract space. That's why they call it pure. But he kept learning real-world things from it. The first was how to sell something intellectually ambitious, even impossible: "People don't understand, but if you can make them think you understand, your story wins." The second was what he wanted to do with his life. He was still a Christian in orientation, if not in practice, and pure math, after a while, started feeling, well, "a little empty," even ungodly. It wasn't going to help anyone. So he passed his Ph.D. candidacy tests but never wrote his dissertation and instead enrolled in UCLA's biomathematics Ph.D. program. He had the math for it, certainly, but since he hadn't taken a biology course since high school, he had to learn Ph.D.-level biology "from scratch." To catch up, he began reading through the basic textbooks in genetics and molecular biology on his own. Sounds hard. It wasn't. Pure math was hard. Biology? "It was so easy, it was like a vacation. After pure math it was so refreshing and conceptually simple that my mind just locked onto it."

Indeed, biology was so simple that he began to suspect that what was in the textbooks was simplified, even simplistic. He began to suspect there was something wrong with it, molecular biology in particular. As a former creationist, he immediately saw the insufficiency of a biology of broken pieces, and as a man of broken faith he wondered whether he could put it back together again.

There's a famous book, written by Thomas S. Kuhn and published in 1962, called The Structure of Scientific Revolutions. Well, maybe it's not so famous — but it's hugely influential. It introduced the term "paradigm shift" into the language. A promiscuous term, as it turns out, used to describe everything from the emergence of smartphones to the omnipresence of the spread offense in college football. But in Kuhn's book it describes something specific to science. According to Kuhn, scientific progress is not a peaceful process, characterized by the gradual accumulation of knowledge. Rather, it's a nearly political one, characterized by acts of intellectual violence. A paradigm is like a king — it's the body of knowledge and practice that coheres around a theory or a discovery, and in periods of stability everybody serves it by practicing what Kuhn calls "normal science." Eventually, though, it becomes insufficient to its own ends and enters a period of crisis, during which it comes under attack by those practicing "extraordinary science." At last, the king is overthrown, and that's a paradigm shift.

Has Schadt read Kuhn's book? "I remember the exact month, almost the exact day I started reading that. It was when I first started graduate school in 1993." And of course, he knew what kind of science he wanted to practice even before he knew what king he wanted to kill. A paradigm shift requires not only scientists practicing extraordinary science; it requires "attackers" and "persuaders" willing to declaim the end of the old order and announce the dawn of the new. Schadt has turned out to be both. He's very aware that biology is in the middle of a paradigm shift and very aware of his role in both the murder of molecular biology — the king is dead! — and the establishment of its successor. He's even produced a documentary film entitled The New Biology, which heralds the arrival of a biology that's "more like physics" and "more quantitative in nature" than biology has ever been. Not incidentally, it's also a whole lot harder.

He was doing New Biology even before he got his Ph.D. from UCLA in 1999. Big Pharma, in the form of Roche, had recruited him. He was thirty-three years old, just another brilliant nobody, but he started improving the algorithms on the "gene chips" Roche used for gene detection and sharing them publicly. That gave him a name; it also got him investigated by the U. S. Attorney's office under suspicion of stealing trade secrets, because nobody could believe that he was picking the lock on proprietary algorithms without resorting to illegal means. He was cleared when investigators found out that, well, as a matter of fact, he could. Still, it was the start of his career, and he'd already seen "the amount of energy devoted to keeping you from breaking out of accepted paradigms. It was an extraordinary amount of energy. But the cool thing about the human spirit is its ability to push and persist if it thinks it's on the right path. And the path I was on was the right path from which to change biology."

Yes, that's right: The guy who wants to change biology now wanted to change biology even then, and eventually his ambition brought him to the attention of Stephen Friend and Leland Hartwell of Rosetta Inpharmatics. They were molecular biologists. To be more precise, they had made history as molecular biologists, Friend becoming the first scientist to clone a human gene associated with inherited tumors and Hartwell winning the damned Nobel Prize. That's all. But they'd thought deeply about molecular biology and had started Rosetta in part to address its inherent limitations, in particular its failure to deliver drugs to the marketplace. They were looking for the future. And so when Stephen Friend met Eric Schadt, he saw a scientist with "almost Mozart-like qualities — insights that are not always logical, but they're correct. You talked to Eric and you said to yourself, Oh, my God, I can see what's going to come."

Schadt went to Rosetta, and then, when Merck bought Rosetta, he went with Friend and a team of fifty nascent New Biologists to Merck. At the time, Merck was a molecular-biology company. It was using the basic techniques of molecular biology to figure out which proteins to target and which drugs to develop. The main technique is called a "knockout study." A scientist interested in the function of a specific gene "knocks out" the gene in mice bred for the purpose, to see what happens. Schadt and Friend thought the strategy was hopeless. Not only are there twenty-three thousand genes to be knocked out one gene at a time, there are also catastrophic side effects when the drug you develop to hit a single protein encoded by a single gene instead hits a network of genes and proteins all working together in mysterious and invisible concert.

The idea of networks was not original to Schadt. What was original to Schadt, however, was a method for finding them and proving their existence. How could he find something that was not only invisible but indescribably vast, involving thousands of genes and thousands of proteins? Well, the sky is vast, and astronomers can't see the planets orbiting distant stars, either. But they prove their existence by measuring the changes in starlight and subjecting the data to statistical analysis. They never see the new planets — the whole new solar systems — they're exploring. They just know they're out there, in the numberless flickerings of the stars.

And that's how Schadt proved the existence of biological networks. He developed algorithms to mine Merck's massive troves of biological data, and he began finding genetic networks through statistical correlations. Were the networks merely theoretical? To the contrary: They were "highly predictive" experimentally — that is, they could predict the success or failure of therapeutic interventions. And so in 2003, he started publishing the papers that, in the words of a Merck spokesman, "changed the way people looked at disease," and at the same time became the foundations of the New Biology. What's more, he and his team began using the networks they were finding to figure out which genes Merck should target, until they were responsible for "half the drugs" in Merck's pipeline. What's more, long before GlaxoSmithKline ran into problems with Avandia, Schadt predicted that a similar drug Merck was developing would fail for the same reasons — because it would lower the risk of diabetes but increase the risk for cardiovascular problems — and therefore proved that the New Biology could save pharmaceutical companies billions of dollars. And then, of course, he and Friend tried to turn Merck into a New Biology company, by which they meant a company that would share its data with networks of outside scientists and that would develop drugs that targeted networks instead of single genes. The problem with that: Merck was still an old biology company. The drugs in its pipeline — including the drugs informed by Schadt's networks — targeted single genes. And so when Schadt and Friend made their presentation, this was Merck's response: "We're not an information company." And when, in 2009, Schadt published a paper in Nature entitled "A Network View of Disease and Compound Screening" — a paper that implied that drugs targeting single genes were doomed to failure — "well, that was the paper that got me kicked out of Merck."
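The statistical heart of that method can be caricatured in a few lines: measure many genes across many samples, correlate every pair, keep the strong correlations as network edges, and look for the hubs. A minimal Python sketch on invented random data (Schadt's actual algorithms are vastly more elaborate, folding in genotypes and causal inference):

    import numpy as np

    # Rows are samples (patients, mice); columns are genes. Random
    # numbers stand in here for real expression measurements.
    rng = np.random.default_rng(0)
    expression = rng.normal(size=(200, 50))

    # Pearson correlation between every pair of genes.
    corr = np.corrcoef(expression, rowvar=False)

    # Strongly correlated pairs become edges of the inferred network.
    edges = [(i, j) for i in range(50) for j in range(i + 1, 50)
             if abs(corr[i, j]) > 0.3]

    # Hub nodes are the genes with the most connections.
    degree = [sum(1 for e in edges if g in e) for g in range(50)]
    hub = max(range(50), key=lambda g: degree[g])
    print("hub gene:", hub, "with", degree[hub], "links")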

He likes to do his supercomputing on planes now, because that's the one place where he's alone. He had access to a supercomputer at Merck, but he and Friend left Merck in 2009 after negotiating an agreement to take the New Biology component with them — including the millions of dollars' worth of data necessary to continue their work — and turn it into Sage Bionetworks. He still needs the capacity of a supercomputer, however, because the amount of data generated by the networks he's exploring is inordinate, overwhelming. There's terabytes of data, petabytes of data. Fortunately, he has the same access to supercomputers that every other American with an Internet connection and a credit card has. He waits till the plane climbs to a cruising altitude, waits for the pilot to allow electronic devices, and then uses the plane's WiFi to get on Amazon. Amazon sells a lot of stuff — books, washing machines, whatever the hell you want. What it sells Schadt is super-computing on the cheap. You see, companies like Amazon have a lot of computing power available, and now it's gotten in the business of selling some of that to guys like Schadt and whoever else might want it. A guy like Schadt doesn't have to work for a company like Merck anymore, because he has as much computing power available to him on an airplane as a scientist at Merck does on the company's multimillion-dollar supercomputer. More even. On cross-country flights he tells Amazon what data to crunch after takeoff, and for a few hundred bucks the job's done by the time he lands.

He likes to talk about this kind of stuff, because it's one of the ways he makes his sale. A lot of people are afraid of the Age of Information. They think things are getting too big and too complicated, and going too fast. You think scientists are immune? You think biologists are immune? No, they're especially anxious, because biology has turned out to be even more complex than they thought, indeed precisely as complex as the world in general. And so what Schadt has done is not only give biologists the tools to deal with the problem of increasing complexity; he's also sold complexity and has gotten biologists to relax and embrace it, in the words of Stephen Friend. "And it's a good thing, because the complexity is just going to get worse. But Eric gets you to understand that it's out of complexity that a pattern derives. That complexity is not the enemy but the vehicle of understanding, and embracing it is how you get there. You talk to him and he makes you think, Oh, this might turn out all right after all."

Schadt has sold the New Biology by making biologists feel that if they change biology, they can change the world. But he also makes it clear that as the world changes, it will change biology, whether biologists like it or not — whether we like it or not. For instance, he has this idea for what he calls a "disease weather map" that will inform people what kind of pathogens are on the handrails of the escalators, say, at the San Francisco Airport, or for that matter in the bathrooms. The idea would have been laughable just a few years ago, but Schadt is not only thinking about it — he's doing it, with the PacBio RS. He's sending out technicians, getting samples, and sequencing them more or less instantly. This is an extension of Schadt's vision to expand the network model of human disease into tracking the forces of infection in the population at large; the network is not just genes, it's also germs. He's able to do the same with sewage outflows, which has led him to a vision of monitoring the pathogens that pour out of individual households — a vision of helpful technicians knowing what's coming out of your toilets, and calling you if they think you need to eat more yogurt.

Does anybody want a world of pathogen surveillance and transparent effluvia? Well, DARPA does, Schadt says — they're very interested. And he's not overly concerned about everybody else. He's a revolutionary, and what he knows about revolutions — scientific and otherwise — is that "it's best to be one of the drivers of the revolution, and then it will work itself out." What he knows about revolutions is that "there's always this outcry, but the revolution marches on. And I would rather be part of the revolution than on the outside figuring out what it all means."

And that's what Eric Schadt's really all about — why he wants to be a "master of information" instead of simply a scientist. The New Biology is the New World, and he wants to be part of both. He wants to be one of the people who help other people figure out that information overload is not the enemy, if you know how to read it (and have supercomputer access). He wants to be part of what he calls "a revolution in human intelligence." He wants to make the sale, even if what he's selling is what so many fear. The world is getting too big? Make it bigger. The world is getting too fast? Make it faster. The world is getting complex to the point of impossibility? Bring it on, baby.

Bring it on.
Read more: http://www.esquire.com/features/eric-schadt-profile-0411-3#ixzz1HpNA1mta

[One left - but all others stayed. Genes/Junk failed Genomics; by now Mattick, Simons, Pellionisz, Collins, Lander and Schadt are in unison. See my 2008 Google Tech Talk YouTube, currently with 10,138 views - some mediocre minds dismissing anybody with a paradigm-shift as a "kook". (Giordano Bruno was torched, and his ashes thrown into the Tiberis - with the Vatican correcting the course some 300 years later. In modern times, Barbara McClintock had to reach the age of 83 to get her Nobel, decades after her discovery). This entry can be commented on the FaceBook page of Andras Pellionisz]


Global Scaling Institute of Germany Explores Roots of Fractals with Euler

Leonhard Euler was one of the greatest mathematicians of all time. He developed the basics of the modern theory of numbers and algebra, of topology, probability calculus and combinatorics, of integral calculus, the theory of differential equations and differential geometry, and the calculus of variations, and he discovered the connection between trigonometrical functions and exponential functions. Euler developed hydrodynamics and fluidics, and he laid the foundations for the theory of the gyroscope. He was a brilliant natural scientist, an excellent teacher and mentor. It was on April 24th, 1727 that, at the invitation of the Russian czarina Catherine I, the 19-year-old master Leonhard Euler left his home town of Basel and set off on a brilliant scientific career at the Academy of Sciences of Russia. The Bernoulli brothers (Nikolai and Daniel), Christian Goldbach and other excellent European scientists already worked there.

Shortly before his death, Peter I had engaged the philosopher and mathematician Christian Wolff of Marburg to unite the best heads of Europe under the seal of the Academy of Sciences of St. Petersburg.

In May 1771 an enormous blaze raged through St. Petersburg. Hundreds of buildings burned down, among them the house of Leonhard Euler. But the Basel craftsman Peter Grimm succeeded in saving the 64-year-old blind mathematician from death in the flames. Thanks to this courageous intervention almost all the manuscripts of the greatest mathematician of all time were preserved for posterity - among them the works "About continued fractions" (1737) and "About the vibrations of a string" (1748). In these papers Euler formulated theses whose solution would keep mathematics busy for 200 years to come. Euler's work made it possible, 250 years later, to unveil one of the most fundamental secrets of nature - the free vibrations of the universe.

Euler had already examined the free vibrations of a massless elastic thread loaded with beads. In connection with this problem, d'Alembert developed his integration method for a system of linear differential equations. Starting from there, Daniel Bernoulli put forward his famous theorem that the solution of the problem of a freely vibrating string can be represented as a trigonometric series - which led to a debate between Euler, d'Alembert and D. Bernoulli that stretched over several decades. Later, Lagrange showed more rigorously how one can pass from the solution of the problem of the vibrating string of beads to the solution of the problem of the vibrating homogeneous string by taking the limit. Only in 1822 did J. B. Fourier solve this formulation completely for the first time.

Meanwhile, nearly insurmountable problems still arose for beads of varying mass and irregular spacing. This problem leads to functions with discontinuities. Notwithstanding a letter of Charles Hermite of May 20th, 1893, which called on mathematicians to "turn away in fright and fear from this lamentable plague of functions without derivatives", T. Stieltjes examined functions with discontinuities and found an integration method for such functions, which led to continued fractions.

Euler had in fact already recognized that complicated vibrating systems can also have solutions (integrals) that are not everywhere differentiable, and he left to mathematically talented posterity an analytic monster: the so-called non-analytic functions (a term he coined himself). Non-analytic functions generated a great deal of work well into the 20th century, even after the identity crisis of mathematics seemed to have been overcome.

The crisis began when E. du Bois-Reymond reported in 1875, for the first time, on a continuous but nowhere differentiable function constructed by Weierstrass, and it lasted until approximately 1925. Its dominant players were Cantor, Peano, Lebesgue and Hausdorff. As a result, a new branch of mathematics was born: fractal geometry.
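
For concreteness, the classical example is Weierstrass' function, continuous everywhere yet nowhere differentiable (the standard textbook form, added here as an illustration; it is not part of the original article):

```latex
% Weierstrass' continuous, nowhere differentiable function
\[
  W(x) \;=\; \sum_{n=0}^{\infty} a^{n}\cos\!\left(b^{n}\pi x\right),
  \qquad 0 < a < 1,\quad b \text{ a positive odd integer},\quad ab > 1 + \tfrac{3\pi}{2}.
\]
```

Its graph is a self-similar curve; in today's terminology it has a fractal (box-counting) dimension strictly between 1 and 2.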

Fractal comes from the Latin fractus and means roughly "broken in pieces" and "irregular". [We have already known since 2002 that Gene/Junk is scientifically incorrect; thus FractoGene, with its self-similar repetitions - AJP]. Fractals, then, are fragmented, unruly mathematical objects. The mathematics of the 19th century treated these objects as exceptions and therefore concentrated on regular, continuous and smooth structures, or tried to reduce fractal phenomena to such structures.

The theory of fractal sets made it possible to examine rigorously the strictly "non-analytic" crumpled, granular or fragmented forms. It soon turned out that fractal structures are not rare at all. In nature, more fractal objects were discovered than had ever been suspected. Indeed, it suddenly seemed as if the universe itself were fractal by nature.

It was above all the works of Mandelbrot that finally put geometry in a position to describe fractal mathematical objects correctly: imperfect crystal lattices, the Brownian motion of gas molecules, coiled giant polymer molecules, irregular star clusters, cirrus clouds, Saturn's rings, the distribution of lunar craters, turbulence in liquids, bizarre shorelines, winding river courses, folded mountain ranges, branched growth forms of the most diverse plant species, the areas of islands and seas, rock formations, geological deposits, the spatial distribution of raw-material occurrences, and so on.
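
To make the notion of a fractal dimension concrete, here is a minimal box-counting sketch in Python (an illustration added for this archive, assuming numpy; it is not from the article and makes no claim about any particular data set):

```python
import numpy as np

def box_counting_dimension(points, scales):
    """Estimate the box-counting dimension of a 2-D point set.

    points : (N, 2) array, assumed normalized into [0, 1)^2
    scales : box sizes eps (e.g. 1/4, 1/8, ...); returns the slope of
             log N(eps) versus log(1/eps), i.e. the dimension estimate.
    """
    counts = []
    for eps in scales:
        # Assign each point to a grid cell of side eps, count occupied cells.
        cells = np.unique(np.floor(points / eps).astype(int), axis=0)
        counts.append(len(cells))
    # N(eps) ~ eps^(-D)  =>  log N = D * log(1/eps) + const
    D, _ = np.polyfit(np.log(1.0 / np.asarray(scales)), np.log(counts), 1)
    return D

# Sanity check: points on a straight line should give D close to 1;
# a genuinely fractal set (e.g. a Koch-curve sample) falls between 1 and 2.
pts = np.column_stack([np.linspace(0.0, 0.999, 10_000)] * 2)
print(box_counting_dimension(pts, [1/4, 1/8, 1/16, 1/32, 1/64]))
```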

In 1950 the Leningrad mathematicians F. R. Gantmacher and M. G. Krein treated the deflection curve of a vibrating string of beads as a broken (piecewise-linear) line. Precisely this approach allowed them to view the problem in a fractal way without being conscious of it (Mandelbrot's classic "Les Objets Fractals" appeared in 1975; his first works, from the 1950s, belonged to linguistics). Only this fractal view put them in a position to solve completely (even in the most general case) Euler's 200-year-old problem of the vibrating string of beads, for beads of varying mass and irregular spacing. In their work "Oscillation Matrices and Kernels and Small Vibrations of Mechanical Systems" they proved that the free vibrations of a finite string of beads correspond to a finite, and those of a continuous string to an infinite, Stieltjes continued fraction. The masses of the beads and the separations between them are identical with the partial denominators of the continued fraction.
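
For the reader, a sketch of the object involved (standard notation, not reproduced from the article): a Stieltjes continued fraction has the form below, and in the Gantmacher-Krein mechanical reading the coefficients a_k alternate between the bead masses and the separations between them.

```latex
% Stieltjes continued fraction (S-fraction)
\[
  F(z) \;=\;
  \cfrac{1}{a_{1}z +
    \cfrac{1}{a_{2} +
      \cfrac{1}{a_{3}z +
        \cfrac{1}{a_{4} + \dotsb}}}}
\]
```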

In 1955 V. P. Terskich generalized the (in substance, fractal) continued fraction method to the vibrations of complicated branched chain systems. The classic works of T. N. Thiele, A. A. Markov, A. J. Chintchin, O. Perron, J. A. Murphy, M. R. O'Donohoe, A. N. Chovansky, H. S. Wall, D. I. Bodnar, C. I. Kucminskaja, V. J. Skorobogat'ko and others helped the continued fraction method achieve its definitive breakthrough and, by 1981, made possible the development of efficient algorithms for the addition and multiplication of continued fractions.
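
Evaluating a finite fraction of this kind is straightforward; the following toy Python evaluator (an added illustration assuming positive coefficients - it is not one of the historical algorithms cited above) works from the innermost term outward with exact rational arithmetic:

```python
from fractions import Fraction

def eval_stieltjes(coeffs, z):
    """Evaluate the finite continued fraction
    1/(a1*z + 1/(a2 + 1/(a3*z + ...))) bottom-up.

    coeffs : [a1, a2, a3, ...]; odd-indexed terms multiply z.
    Assumes positive coefficients, so no denominator can vanish.
    """
    value = Fraction(0)
    for k in range(len(coeffs), 0, -1):
        a = Fraction(coeffs[k - 1])
        term = a * z if k % 2 == 1 else a
        value = 1 / (term + value)
    return value

print(eval_stieltjes([1, 2, 3, 4], Fraction(1)))  # -> 30/43, exactly
```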

Mathematical models of vibrating fractal chain systems are used with great success in the most diverse scientific fields today. Their popularity first peaked in the 1960s in electrical engineering. The fast development of the computer industry during the last decades owes much to the efficiency of such models in solid-state physics. Vibrating fractal chain systems have also been discovered in neural networks, in our genome, and in ecosystems.

The entire universe is a vibrating fractal chain system, which can be compared mathematically with the free vibrations of an Euler string of beads of gigantic proportions. The natural oscillations of matter not only influence the temporal course of all physical, chemical, biological and social processes; they are also a global morphogenetic factor and the cause of a global selection process. Their frequency spectrum is logarithmically fractal.

Leonhard Euler left about 900 scientific works, among them:

Mechanica (1736)

Über Kettenbrüche (1737)

Tentamen novae musicae (1739)

Theorie der Planetenbewegung (1744)

Neue Grundsätze der Artillerie (1745)

Nova theoria lucis et colorum (1746)

Über die Schwingungen einer Saite (1748)

Introductio in analysin infinitorum (1748)

Theorie des Schiffbaues (1749)

Institutiones calculi differentialis (1755)

Institutiones calculi integralis (1770)

Vollständige Anleitung zur Algebra (1770)

Lettres à une princesse d'Allemagne sur quelques sujets de Physique et de Philosophie (1772)

Dioptrica (1771)

[Genomics will yield to the FractoGene Mathematical Theory and its immediate applications, showing how the fractal genome violates its own mathematical rules - triggering hereditary syndromes. Understanding the scaling (identification of the System as a seemingly complex fractal hierarchy) enables a targeted search for "fractal defects" - rather than searching a haystack for something that we cannot even define as a needle. Soon, Institutes will spring up to devote to this paradigm-shift the type of attention it has the power of rewarding: software-enabling algorithms, deployed by already validated defense-, financial-, graphics-, seismic-, meteorological-, cryptographic- etc. HPC applications brought to Genomics (the business of HolGenTech, Inc.). This entry can be commented on the FaceBook page of Andras Pellionisz]


Avesthagen launches Whole Genome Scanning

IBNlive
PTI | 04:03 PM, Mar 22, 2011

Bangalore, Mar 22 (PTI) Bangalore-headquartered Avesthagen Limited launched its Whole Genome Scanning (WGS) service in India today, which, for a payment, will provide an individual with a list of the diseases to which he or she is predisposed, such as cancer, diabetes and Alzheimer's.

A whole genome scan of an individual would provide information to understand his/her own genetic make-up that would lead to an increased awareness about the predisposition to the disease(s), the company said in a statement.

The diseases that would be covered by the scan include major types of cancer, cardiovascular diseases, diabetes, schizophrenia, Alzheimer's, asthma and arthritis, it said.

The total cost of the WGS would be Rs 45,000 [about $1,000 - AJP] which is subject to revision based on volume, the company said. The scan would be carried out on DNA extracted from saliva/buccal swab provided by the individual, the company said.

The report would consist of a list of the various diseases to which the person is predisposed, the risk ratio of disease occurrence, and the association of the disease with the mutation, it added. "The technology platform at Avesthagen is able to interrogate the genetic markers (SNPs and CNVs) across an individual's genome to decipher the association of the markers to the diseases", it said. "By employing a streamlined sample collection and delivery system, Indian customers will now have access to personalised genomics services, which until now were only available in the developed markets", the company said.

Avesthagen said it would leverage its expertise in genomics technology and state-of-the-art high-throughput facilities for carrying out genomic analysis. The facility can process 200 samples per month, the statement said.

Explaining DNA, the company said every organism, including humans, has a genome encoded in deoxyribonucleic acid (DNA) that contains the biological information needed to build and maintain a living example of that organism. DNA is essentially made of four kinds of molecules, called bases, it said. The bases are arranged in sequential order to form units known as genes. Triplets of bases in a gene encode the amino acids that are the building blocks of the proteins that carry out most cellular activities, it said. A change/mutation in the base-pair sequence of a gene could mean that certain proteins are either not formed or are processed differently, which may result in disease, it said. Measuring mutations in the genes can tell us the risk (predisposition) of getting a particular disease in one's lifetime, it said, adding that disease risk is one way of describing the likelihood of a person developing a particular disease.
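
As a back-of-the-envelope illustration of how per-marker associations might be folded into a single predisposition figure, here is a toy Python sketch under a strong independence assumption (the release does not disclose Avesthagen's actual method; all numbers below are hypothetical):

```python
# Toy sketch only: multiply a population baseline risk by the relative
# risk of each marker the individual carries, assuming independent markers.
def naive_disease_risk(baseline_risk, genotype_relative_risks):
    """baseline_risk: population lifetime risk of the disease (0..1).
    genotype_relative_risks: per-marker relative risks (1.0 = neutral)."""
    risk = baseline_risk
    for rr in genotype_relative_risks:
        risk *= rr
    return min(risk, 1.0)  # a probability cannot exceed 1

# Hypothetical example: 10% baseline risk, three risk markers carried
print(naive_disease_risk(0.10, [1.3, 1.1, 1.2]))  # ~0.17
```

A real report must also handle linkage between markers, gene-environment interaction and confidence intervals, which this sketch deliberately ignores.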

OUR COMPANY & HISTORY

Renaissance Herbs was founded in 1990 by Alex Moffett. In 1994, the Company formed a wholly owned subsidiary, Dhanvantari Botanicals Pvt. Ltd., and built a 50,000 square foot state-of-the-art manufacturing and research and development facility in Bangalore, India. This fully certified cGMP facility has won numerous Nutrition Business Journal awards and is one of the very first factories of its type in the world to achieve ISO 22000-2005 and "Certified Organic" processing certification.

Avesthagen Ltd, Bangalore, India purchased all outstanding shares of Renaissance Herbs company stock in 2007. The merger and acquisition of Renaissance Herbs supports the fulfillment of Avesthagen’s bionutritional business strategy through vertical integration and significant scientific competencies.

With the acquisition of California-based Renaissance Herbs and their wholly owned Bangalore subsidiary, Dhanvantari Botanicals, Avesthagen has gained a state-of-the-art, 50,000 square foot, nutriceutical manufacturing complex.

In Renaissance, Avesthagen found an industry colleague whose products were based on a blending of the bountiful traditions of Ayurveda with superior modern production and QC methods. They also discovered that they shared a similar corporate spirit that understood sustainability from all perspectives, from the customer, the company employee, and the planet.

[While the description is rather incomplete (e.g. it is unknown whether the analysis is based on true full DNA sequencing or on partial interrogation of the full genome by microarray technology), "the genie is out of the bottle" with this news - since the "Silicon Valley" of Bangalore (India) is entirely outside the jurisdiction of the USA. Thus, hindrances imposed by the FDA and other US regulatory fervor will not set back DTC the way it was entirely banned for some weeks in California, or severely set back by FDA/Congressional obstruction. Another "Sputnik moment" for the US - we either change the ways and means of our investment and operation, or we will no longer stay on top of the use of postmodern genomics. This entry can be commented on the FaceBook page of Andras Pellionisz]



Complementing Private Domain Genome Sequencing Industry - the Birth of Genome Analytics Industry

[The three news items on the venerable Partek, Inc. (1993) and two Silicon Valley new companies of HolGenTech, Inc. (linked from the title) and DNAnexus, Inc. signal the birth of the Private Domain Genome Analytics Industry - not totally unlike the venerable IBM and the new start-ups of Microsoft and Apple of the "home computer era". This entry can be commented on the FaceBook page of Andras Pellionisz]

--

Partek Expands Global Operations
March 14, 2011 08:30 AM Eastern Daylight Time

Software Manufacturer Increases Staff in Europe and Asia

Sees Continued Strong Renewal Rate of its Flagship Product, Partek® Genomics Suite™

ST. LOUIS--(BUSINESS WIRE)--Partek Incorporated, a global leader in bioinformatics software, today announced the expansion of its business development and support staff to keep pace with continued demand for its core product offering, Partek Genomics Suite. The company now has personnel throughout Europe — United Kingdom, France, Germany, Spain and Croatia — and a wholly owned subsidiary in Singapore servicing Asia, Australia and New Zealand.

Complementing a strong US-based support team, Partek’s regional managers have extensive expertise in biology and statistics, and are strategically positioned to assist scientists around the globe with analyzing their microarray and next-generation sequencing (NGS) studies. In addition to its direct operations, Partek also maintains partnerships with local distributors in more than a dozen countries, and worldwide reseller agreements with genomics device manufacturers Life Technologies Corporation and Affymetrix, Inc.

“It’s an exciting time at Partek, as rapid advances in genomics technology are fueling worldwide demand for software that simplifies and accelerates the data analysis process,” said Tom Downey, President of Partek Incorporated. “We are continuing to see a 95+ percent renewal rate among current Partek Genomics Suite licensees, and significant growth in new markets where NGS adoption is increasing. Our dedication to training and individual user-level support helps further differentiate our offering, and it is opening new doors to researchers seeking both the best-quality software and a knowledgeable team that stands behind it.”

Partek Genomics Suite is a comprehensive solution for the analysis, comparison and integration of massive amounts of genomic data. Used by thousands of scientists worldwide, Partek Genomics Suite was cited in more than 440 peer-reviewed publications in 2010. It is unique in its ability to support all microarray and NGS [Next-Generation Sequencing - AJP] technologies for RNA-, DNA- and gene-regulation applications in a single software package, also allowing for the integration of multiple applications in a user-friendly way. Embedded tools for alignment, quality control analysis, robust statistics, clustering, biological interpretation and pathways help scientists identify biomarkers and patterns in the data with confidence.

About Partek

Partek Incorporated (www.partek.com) develops and globally markets quality software for life sciences research. Its flagship product, Partek® Genomics Suite™, provides innovative solutions for integrated genomics. Partek Genomics Suite is unique in supporting all major microarray and next-generation sequencing platforms. Workflows offer streamlined analysis for: Gene Expression, miRNA Expression, Exon, Copy Number, Allele-Specific Copy Number, LOH, Association, Trio analysis, Tiling, ChIP-Seq, RNA-Seq, DNA-Seq, DNA Methylation and qPCR. Since 1993, Partek, headquartered in St. Louis, Missouri USA, has been turning data into discovery®.

---

Complete Genomics Customers Can Now Use DNAnexus' Cloud-Based Informatics Solution to Study Their Human Genome Sequencing Data

Complete Genomics and DNAnexus Work Together to Provide a Scalable Solution for Complete Genomics Data Storage, Visualization and Query

MOUNTAIN VIEW, Calif. and PALO ALTO, Calif., March 14, 2011 (GLOBE NEWSWIRE) -- Complete Genomics Inc. (Nasdaq:GNOM) announced today that its customers can now choose to store and visualize their human genome sequencing data within the cloud-based DNAnexus platform. Furthermore, DNAnexus Inc. will host the publicly available Complete Genomics human genome public datasets and make them available as reference data to all its customers.

Through the DNAnexus platform, Complete Genomics' customers will have access to a suite of powerful and easy-to-use informatics tools, which will allow them to visualize and query multiple complete human genomes and focus on relevant findings. For example, structural variations, copy number variations and small variants detected by Complete Genomics can be simultaneously visualized in the DNAnexus Genome Browser allowing researchers to gain further insights from these important genomic elements. This new workflow has been extremely well-received by Complete Genomics customers who gained early access to the cloud-based solution as part of a beta-testing program.

"As we continue to refine our genome sequencing service, we want to make sure that our customers have access to the best tools available to interpret the rich datasets that we provide," said Mark Sutherland, senior vice president of business development at Complete Genomics. "We have worked closely with DNAnexus to ensure that its informatics platform will support our detailed sequence data and thus allow our customers to capitalize on its visualization capabilities."

Built on the robust Amazon Web Services, DNAnexus is a universal platform for analyzing, managing and storing DNA sequencing data. DNAnexus acquires computational and storage resources on demand from Amazon, on behalf of each user, to provide scalability and pricing that mirrors usage, whether a user is investigating one or thousands of genomes.

"Like Complete Genomics, DNAnexus was founded to relieve scientists of the operational, computational and capital purchase burdens associated with advanced sequencing technologies," said Andreas Sundquist, Ph.D., CEO of DNAnexus. "By enabling its customers to access their data via our easy-to-use, Web-based research environment, Complete Genomics is allowing them to expand their compute and storage infrastructure on demand and streamline access to the sophisticated informatics necessary to quickly access and interpret their sequencing data."

Moving forward, DNAnexus will integrate Complete Genomics Analysis Tools (CGA™ Tools) functionality into DNAnexus, further enabling customers to interrogate their data. The integration of the CGA Tools, an open source software project that provides solutions for downstream analysis of data produced by Complete Genomics, will allow researchers to take on complex multigenome comparisons.

Additional product information and details on how to upload, store and visualize Complete Genomics data within DNAnexus are available at http://www.completegenomics.com/customer-support/partners/DNAnexus.

About Complete Genomics

Complete Genomics is a complete human genome sequencing company that has developed and commercialized an innovative DNA sequencing platform. The Complete Genomics Analysis Platform (CGA™ Platform) combines Complete Genomics' proprietary human genome sequencing technology with our advanced informatics and data management software. We offer this solution as an innovative, end-to-end, outsourced service, CGA™ Service, and provide customers with data that is immediately ready to be used for genome-based research. Additional information can be found at http://www.completegenomics.com.


About DNAnexus Inc.

DNAnexus combines a cloud computing infrastructure with scalable systems design and advanced bioinformatics to address the data storage, management, analysis and visualization challenges inherent in next-generation DNA sequencing. By leveraging the cloud, the company has created a flexible platform that evolves with emerging sequencing technologies and research applications and scales to support the computational infrastructure needs of researchers and sequencing service providers alike. For more information please visit https://dnanexus.com.

[The emerging Industrialization of Genomics by the private domain has already generated a slew of Genome Sequencing companies (Illumina, 454/Roche, Life Technologies (now with Ion Torrent) - all in the USA, plus e.g. Oxford Nanopore in the UK - altogether several billions of dollars of private investment, e.g. by Intel Capital, etc.). Indeed, the Industrialization of Genomics would become unsustainable if Genome Analytics, based on our entirely novel understanding of (Holo)Genome function (cf. The Principle of Recursive Genome Function), could not match the "demand side" to the already existing "supply side" (of DNA sequences). Not unlike in the "home computer era", existing venerable companies (IBM) opened a new branch for the "PC" - but found stiff competition from Silicon Valley start-ups with paradigm-shifts put into action. Now, e.g., Partek (1993) extends its venerable statistical analysis in Life Sciences toward HoloGenome Analysis - while the DNAnexus start-up is built on (unsecurable) Cloud Computing (by Amazon Web Services) for Genome Analytics, for those who forgo privacy in the interest of research. It is noteworthy that proprietary and open-source software systems are likely to come into some conflict. HolGenTech, Inc. leverages defense-validated High-Performance Computing (e.g. DRCcomputer) to ensure the convergence of three fundamental requirements: (1) all Genome Analytics software must be based on axioms that make a clean break from the half-a-Century-old misunderstanding of the obsolete Gene/Junk views; (2) just as in defense and financial computing, safeguarding fiercely proprietary algorithms and software is essential for geopolitical, Big Pharma and private genome research reasons; (3) the "Holy Grail" of postmodern genomics is deployment in hospitals, with HIPAA-compliant and patient-required privacy, of genome-based diagnosis, therapy and even cure, combined with the vitally important "time-critical" speed. These requirements run highly parallel with proven solutions in both defense and financial computing. This entry can be commented on the FaceBook page of Andras Pellionisz]



Scientists need new metaphor for human genome [or better yet, industrialization of the new paradigm - AJP]

The National
Robert Matthews

Last Updated: Mar 13, 2011

The science of genetics has particular importance for Arab populations, [a reference to the ongoing World Conference in Dubai - AJP] among whom inherited conditions are relatively common.

Historically, one major reason for this has been the prevalence of marriages between cousins. Less well-known is the fact that some genetic conditions were once beneficial for those living in this part of the world - notably the blood disorder sickle cell anaemia, which confers protection against malaria.

Once upon a time, those attending meetings like this four-day event would tell anyone who would listen that victory against such lethal conditions was now in sight. Such optimism flowed from one of the most trumpeted scientific breakthroughs of the past half-century: the unveiling in 2000 of the first rough draft of the human genome.

The genome is often described as the "genetic book of human life". Decoding it was predicted to lead to a host of advances, from "personalised" medical treatment to the discovery of genetic risk factors for diseases like cancer - and cures for some genetic diseases.

A decade on, however, the mood of geneticists is far more downbeat, although the quest remains as inspiring as ever. There's no doubt that medicine would be transformed if doctors could tailor drugs and dosage to each individual.

For example, many hundreds of people die each year in the UAE because of adverse drug reactions.

The assumption has been that the root cause of such reactions lies in each person's genes, and the dream has been that one day doctors would simply check a patient's genome and reach for precisely the right drug, confident there would be no side effects.

Yet with very few exceptions, the dream seems more like a mirage. To date, only a small minority of women with invasive breast cancer have benefited from anything close to personalised medicine.

These women carry an overactive gene called Her2/neu, and research has shown that about half of them can benefit from a drug called Herceptin. But this drug has itself been found to increase the risk of heart dysfunction, for reasons unknown.

Genetic researchers have fared little better trying to link genes to common ailments such as heart disease and cancer.

Grand "genome-wide association" (GWA) projects have been set up, with the aim of trawling the genomes of thousands of people. But in return, researchers have been rewarded with a baffling melee of faint trails and dead ends.

The GWA technique first made headlines in 2007, when several different teams unveiled the existence of 12 genetic quirks apparently linked to coronary artery disease, the clogging up of arteries.

The disease has long been linked to factors such as high cholesterol. Yet only a handful of the genetic quirks were linked to these factors; the role of all the others remains a total mystery.

More baffling still, the 12 are all but useless at predicting who will suffer the condition - suggesting that even when their role is tracked down, the information will be of little help.

Just last week the journal Nature Genetics reported the discovery by three independent teams of 18 more genetic quirks linked to coronary artery disease. Even when combined with those previously known, however, experts estimate these genes will explain only about 10 per cent of the risk.

Nowhere is the schism between genome hype and reality starker than in the treatment of inherited diseases.

The quest for a cure began long before the human genome project, with scientists focusing on the simplest disorders, caused by faults in single genes. In 1989, a team at the Hospital for Sick Children in Toronto identified the gene for cystic fibrosis, one of the most common of all genetic diseases.

For years afterwards, geneticists talked of curing this fatal disease simply by replacing the faulty gene, like car mechanics fixing an engine by replacing the spark plugs.

Yet to this day, not a single patient has been cured of cystic fibrosis, or indeed any other common genetic disorder. Their life expectancy has certainly improved - in the case of cystic fibrosis patients, by 10 years or more - but credit for that goes to conventional medicine, not genomic research.

The common thread in this litany of disappointment is the simplistic idea of genes being "for" specific traits.

Researchers who should have known better - including a few Nobel Prize winners - cheerfully foisted this view of genes on the public, the media and worst of all, themselves. Ironically, the best evidence against it has come from the genome project itself.

One of the first discoveries from the project was the amazingly low number of genes in the human genome. Before the project began, most geneticists expected to find at least 100,000 genes.

We now know the true figure is about 23,000 - far fewer than for many parasites. The implications are clear: there is no simple relationship between genes and the living organism they supposedly define. Genes aren't at all like the "words" in a "genetic book of human life".

Metaphor plays a key role in science, defining how entire generations of researchers view their subjects. Physicists once thought of gravity as being some kind of invisible "elastic" between masses, light as made of waves, and electrons as subatomic bullets.

All these metaphors have their uses, but they also have their limits - and the success of physicists owes much to their ability to choose the right metaphor for the job.

For decades, geneticists have tried to emulate the success of physicists, and have taken a reductionist, [Newtonian - AJP] mechanistic view of genes. Their lack of progress in understanding the genome suggests they should spend some time at this week's meeting finding a better metaphor for genes. Lives depend on it.

Robert Matthews is Visiting Reader in Science at Aston University, Birmingham, England.

[Metaphors are fine - but they are no substitute for new scientific axioms. "FractoGene" (Pellionisz, 2002) could be understood through a "metaphor": the directly protein-coding sequences are the basic building materials, while the design of what architecture you create from essentially the same set of materials resides in the vast majority (in humans, 98.7%) of "non-directly coding", formerly "Junk" DNA. The fractured (non-contiguous) DNA structure of directly protein-coding sequences has replaced "genes" (which have no universally accepted definition today, other than in terms of FractoGene). Moreover, the algorithmic (software-enabling) fractal approach to the regulation that builds fractal hierarchies of organelles (such as brain cells), organs (such as lungs and kidneys) and organisms (e.g. the cauliflower Romanesco) can be handled by the mathematical grip of fractal theory - leaving "metaphors" to high-school audiences. The huge problem is, of course, that almost a decade after FractoGene there is no major World Conference where a giant of postmodern Genomics does not raise the cardinal point of obsolete axioms. I did it in Cold Spring Harbor in 2009; Lander did it in 2011, telling the entire top leadership of NIH that "all basic assumptions were wrong" and that the DNA structure is fractal; and now in Dubai Dr. Mattick told the World Organization of HUGO the same. Yet the Director of HUGO, who has been well aware of FractoGene since the 50th Anniversary of the Double Helix (Monterey, 2003), could only confront Dr. Mattick's new outlook with Dr. Brenner's view of still sticking with "Junk DNA" (for convenience) - it takes an organizer like George Church to invite the innovator(s) directly. Thus, real progress (much beyond "metaphors") is left to China (the BGI commanding 3,000+ informatics and programming specialists with an average age of 27, thus not affected by Old Obsolete Axioms) and India (where engineering dovetails much more easily with biology, owing to the tens of thousands of software specialists e.g. in the "Silicon Valley" of Bangalore). The geopolitical race is well reflected by the 10,010+ views of my 2008 Google Tech Talk YouTube. This entry can be commented on the FaceBook page of Andras Pellionisz]



RNA regulation of human development, cognition and disease (Mattick in Dubai)

It appears that the genetic basis of the programming of human development and cognitive capacity has been misunderstood for the past 50 years, because of the assumption that most genetic information is transacted by proteins. Contrary to the derived assumption that most of the human genome is comprised of evolutionary debris (introns, intergenic 'deserts', retrotransposon-derived sequences), it is now evident that essentially the entire genome is dynamically transcribed to produce enormous numbers of regulatory RNAs that control gene expression at various levels, including the site-specificity of the chromatin-modifying complexes that underpin developmental trajectories and cognitive function. Increasing numbers of these regulatory RNAs are being shown to be dysregulated in cancer and other complex diseases. It is also becoming evident that the system has evolved extraordinary plasticity via RNA editing, and that this is the molecular basis of environment-epigenome interactions underpinning brain function and influencing the progression of diseases such as type 2 diabetes. Retrotransposons also appear to contribute to genomic, epigenomic and transcriptomic plasticity, and somatic mosaicism, especially in the brain. Thus, apart from the fact that some genes encode proteins, it appears most orthodox assumptions about human genetic information have been incorrect, and that what was dismissed as 'junk' because it was not understood will hold the key to understanding human evolution, development, variation, cognition and disease.

[Dr. Mattick has long been on record saying that "Junk DNA" was "anything but". The significance of his talk is not that he put forward any mathematical theory of Recursive Genome Function (as others did, e.g. in the 2008 paper and the YouTube now over 10,010 views) - but that, following Eric Lander (Science Advisor to the US President), he told the World Conference to their face that "most orthodox assumptions ... have been incorrect". This entry can be commented on the FaceBook page of Andras Pellionisz]



Hamdan Bin Rashid to inaugurate HGM 2011 Monday

Mar 13, 2011 - 04:09 -

WAM DUBAI: H.H. Sheikh Hamdan Bin Rashid Al Maktoum, Deputy Ruler of Dubai, UAE Minister of Finance and the Patron of the Sheikh Hamdan Bin Rashid Al Maktoum Award for Medical Sciences (SHAMS), will open the 15th Human Genome Meeting, in conjunction with the 4th Pan Arab Human Genetics Conference, at Maktoum Hall in the Dubai World Trade Centre.

The conference is organized by SHAMS in collaboration with the Human Genome Organisation (HUGO).

Dr. Mahmoud Taleb Al Ali, the Director of the Award's Centre for Arab Genomic Studies and a member of the HGM's Scientific and Organizing Committees praised the unlimited support for the largest gathering of geneticists worldwide by HH Sheikh Hamdan Bin Rashid Al Maktoum.

He said that the eyes of all the countries of the world will be on the UAE tomorrow, the first Arab country to host the HGM in the more than two decades of HUGO's existence; HUGO's international headquarters is in Singapore.

"The prestigious status of UAE among world countries has allowed this major scientific event to be held here in Dubai", he added.

He said that 1200 geneticists are expected to participate in the 4-day HGM 2011, during which 480 research papers by 1700 researchers from 66 countries around the world will be discussed.

He also noted that preparations for this big event had taken considerable time and effort, so that it could be held at the appropriate level.

He added that he is proud of the participation of distinguished top geneticists in HGM 2011, led by Prof. Edison T. Liu, the president of HUGO and the Executive Director of the Genome Institute of Singapore, a biomedical research institute of the Agency for Science, Technology and Research.

Also participating in HGM 2011 is Professor Sydney Brenner, Chairman of the Okinawa Institute of Science and Technology (Okinawa, Japan) and Nobel laureate in Medicine in 2002.

Dr. Mahmoud Taleb Al Ali said that HGM 2011 will discuss the next generation of genome technologies and their impact on heritable disorders, stem cell therapy, and genetic diseases such as cancers, metabolic diseases, deafness and neurological disorders in terms of diagnosis and treatment.

The conference will also discuss the global economic effects of genome applications and their related ethical and legislative challenges. Among the discussed subjects is the use of developed algorithms to read human genome maps and allow the discovery of disease-related loci.

During the conference, the HUGO Council will conduct several closed meetings to discuss its future strategic plans; the details of the forthcoming HGMs; its publications, in particular, The HUGO Journal; and the goals and directions for its various sub-committees.

The Executive Board and Arab Council of CAGS will also hold a number of important meetings to evaluate their achievements during the past years and discuss future projects.

[The "Arab genome" is one of the largest, fairly homogeneous genome of the Global population (the Chinese Hun tribe, over 1Bn people aside). For strategic planning and hereditary diseases, e.g. due to frequent in-breeding in leading Arab circles, there are very considerable funds mobilized for this next challenge. This meeting in Dubai, under some international umbrella, well serves the Arab interests. In view of Dr. Mattick of Sidney, telling yet another World Conference in their face that most of their basic assumptions were wrong for 50 years, the Fractal Recursive Iteration approach by Dr. Pellionisz, which was duly submitted to his home country for "first refusal" in 2006, becomes of interest to the USA, China and the Arab World. This entry can be commented on the FaceBook page of Andras Pellionisz]



DRC Computer Invites Dr. Andras Pellionisz to Advisory Board

Distinguished expert in genomics to advise DRC on technology and market plan

SUNNYVALE, CA (MMD Newswire) March 2, 2011 -- DRC Computer Corporation (DRC), the leading innovator of dynamically reconfigurable processors, announces that Dr. Andras Pellionisz is joining DRC's Advisory Board. Dr. Pellionisz is a recognized expert in the field of genome informatics specializing in the geometrization of biology, which he applied first in neuroscience to produce industrial neural net applications and later in genomics, manifesting today in applications for personal genomes.

"I am delighted to welcome Andras to our Advisory Board. He brings a wealth of knowledge and contacts in genomics that will be valuable to DRC as we continue to build presence in this major market segment," said Larry Laurich, DRC President.

As a domain expert in Genome Informatics, Andras Pellionisz is an interdisciplinary scientist and technologist. With Ph.D.'s in Computer Engineering, Biology and Physics, he has 45 years of experience in Informatics of Neural and Genomic Systems spanning Academia, Government and Silicon Valley Industry. Dr. Pellionisz played a leading role in the shift from artificial intelligence to neural nets, including the establishment of the International Neural Network Society. In 2005, he combined interdisciplinary communities of Genomics and Information Technology when he established the International HoloGenomics Society (IHGS).

Based on sound genome informatics, his work sets forth new mathematical principles for proceeding with full exploration of the whole genome. Dr. Pellionisz' fractal approach to genome analysis is corroborated by recently published findings about fractal folding of DNA structure by Presidential Science Adviser Eric Lander.

"I am very pleased to be joining the DRC Advisory Board. I am convinced that DRC has the foundation for the genome computer with their leading edge accelerator technology, and I will enjoy assisting them in developing the market for this," said Dr. Andras Pellionisz.

In 2008 his breakthrough research, "The Principle of Recursive Genome Function", superseded the misnomer "Junk DNA" - a term widely used for 30+ years for intergenic material that was just as widely misunderstood and dismissed until HoloGenomics. It is now acknowledged as critical to understanding DNA.

In 1973 Dr. Pellionisz was awarded a Stanford Post-Doctoral Fellowship; subsequently he served as Research Professor of Biophysics at New York University Medical Center, and later at NASA Ames Research Center as a Senior Research Associate of the National Academy. From 1994 he served as Chief Software Architect to several Silicon Valley companies.

About DRC Computer Corporation

DRC Computer Corporation is the leading innovator of dynamically reconfigurable processors, addressing the needs of time-critical, data-intense applications in the defense and finance industries, security environments, web companies, and biomedical markets. Recently DRC announced a world performance record in bioinformatics, achieving 9.4 trillion cell updates per second using the Smith-Waterman technique - at a price/performance ratio 5 times better than previous records. DRC's Accelium™ processors deliver ultra-high performance with very low energy usage (typically less than 25 watts) and minimal space requirements, producing actionable intelligence much faster (100x and more) and at significantly lower cost (90% lower) than traditional computer technologies. DRC is a wholly owned subsidiary of Security First Corp., an emerging industry leader in information assurance, data security, privacy, integrity, and high availability.
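
For context, the "cell updates" in that benchmark are evaluations of the Smith-Waterman dynamic-programming matrix. Below is a minimal pure-Python reference of the recurrence (an illustration only; DRC's accelerated implementation is proprietary and vastly faster):

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    """Best local-alignment score between strings a and b.
    Each evaluation of H[i][j] is one 'cell update' in the sense of
    the cell-updates-per-second benchmark figure."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            H[i][j] = max(0,                # local alignment may restart
                          diag,             # match / mismatch
                          H[i-1][j] + gap,  # gap in b
                          H[i][j-1] + gap)  # gap in a
            best = max(best, H[i][j])
    return best

print(smith_waterman("TGTTACGG", "GGTTGACTA"))  # small demo
```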

[Ever since LeRoy Hood (2002) identified the genome as a system for information processing, computer companies have swirled around Genome Informatics - but they typically lack the multiple domain expertise the subject requires. IBM was the "grand old lady" of Big IT, with several decades of interest in Life Science under the inspiration and leadership of Caroline Kovac (whose group has since migrated to Dell Computer, led by Jamie Coffin). As I outline in my 2008 Google Tech Talk YouTube, now approaching 10,000 views, Intel's $100 M investment into Pacific Biosciences' nanotechnology-based DNA sequencing was an important turning point by July 2008. Since then, serial CPU manufacturers (Intel, AMD), parallel chip manufacturers (Xilinx, Altera), integrators of FPGA- and/or GPU-based high-performance computing (DRCcomputer in Silicon Valley, holding the speed/price world record; NVIDIA, XtremeData, Convey, Pico, Nallatech), along with the major league of Big IT (Microsoft, Google, Oracle, HP, Sony, Hitachi, Fujitsu, Samsung), have all joined the fray of approaching (some already deploying in) the genome-based Personalized Medicine and Commerce market - forecast by BusinessWire on February 17, 2011 to reach a $148 Bn market of genome-driven "Personalized Medicine" by 2015, with the emerging market perhaps reaching the several-hundred-billion range. The challenge for trans-disciplinary experts is the fusion of the almost diametrically opposite cultures of old-school Genomics of medical doctors with Computer Science and Mathematics-based Genome Informatics specialists. This is the conclusion of Eric Lander, the President's Science Advisor - see his Feb. 11, 2011 talk and his answer to the single question. This entry can be discussed on the FaceBook of Andras Pellionisz]



NHGRI Celebrates Tenth Anniversary of Human Genome Sequence [what went wrong - Green]

Jacquelyn K. Beals, PhD
Medscape Today
February 18, 2011

Twenty years ago, the Human Genome Project was just getting underway — 10 years later, investigators had a draft sequence of the human genome.

To celebrate these landmark events, and to publicize its vision for the future, on February 11 the National Human Genome Research Institute (NHGRI) held a symposium on "A Decade with the Human Genome Sequence: Charting a Course for Genomic Medicine," on the campus of the National Institutes of Health (NIH). The same week, NHGRI unveiled its vision in Nature....

....Vision for the Future

...NHGRI also marked the celebration by presenting its vision for the future in the February 10 issue of Nature. Dr. Green was lead author of a perspectives piece highlighting 5 research domains for NHGRI "from base pairs to bedside": understanding the structure of genomes, understanding the biology of genomes, understanding the biology of disease, advancing the science of medicine, and improving the effectiveness of healthcare.

A schematic in the article shows that most genomics research accomplishments from 2004 to 2010 dealt with the structure or biology of genomes. The period from 2011 to 2020 is projected as a time of intensifying research on the biology of disease. Beyond 2020, the research focus is expected to advance into the science of medicine and improving the effectiveness of healthcare.

The article repeatedly refers to the "importance of non-coding variants in human disease" or "illuminating the fundamental biology of non-coding sequence variation and its phenotypic implications."

A Lot More to Learn

Medscape Medical News asked Dr. Green to elaborate on this topic, which goes far beyond the familiar formula of "DNA to RNA to protein."

"Study of the human genome, since the end of the human genome project, has revealed that the minority of the functional parts of the human genome are protein-coding regions, which really are the regions we understand the best," said Dr. Green.

"Once upon a time we thought that most of the action, where the functional parts of our genome are going to be, was in the genes, the stuff that made information for coding proteins.

"What has been revealed, since the end of the genome project, is that's really important stuff — something like 21,000 genes or so — but in fact, that only represents about a third of the functional sequences in our genome," he added.

The remaining two thirds of the functional sequences do not code for protein, but have other functions.

"We have a lot to learn. We barely have scratched the surface in studying that part of the genome," said Dr. Green.

Some elements in this noncoding DNA regulate where and when and how much genes are turned on or off; Dr. Green refers to them as "dimmer switches" that can switch on, or switch off, or modulate how much a gene is expressed. That constitutes a whole circuitry of noncoding DNA that is very important.

"The more we study it, the more we realize how complicated it is. We are learning that these genetically complex diseases, which are diseases responsible for filling hospitals and clinics around the world — diabetes, cardiovascular disease, mental illness, some forms of cancer — that these disorders are complicated because they involve multiple different genetic changes that confer risk for disease. The majority of times, the noncoding DNA elements are the ones that contain variants conferring risk for these complex diseases," explained Dr. Green.

"So the reason [noncoding DNA elements] are so medically important is: number 1, it's probably where a lot of the variation is that confers risk for human disease. And number 2, we don't understand that very well. And yet we have to, because it's medically important," he added.

[Open "confession" that the establishment of Genetics missed the boat by not recognizing the misnomer "JunkDNA" early enough, and even a hint that the DNA-RNA-PROTEIN "arrow model" (based on Crick's Central Dogma) might be wrong. These are great news from the establishment. However, do we want to waste yet another decade (with billions of dollars) by waiting to deploy applications, STARTING TO PLAN FOR HEALTH-CARE IMPACT BY 2020 (!), that is possible today (see YouTube "Shop for your Life"? With the USA a superpower NOT having a "National Genome Program" (NASA style) we are facing a colossal dilemma of slipping in the global race - letting applications, possible today, to drift off-shore. Maybe it is time to bring some captains with proven telescopes aboard a streamlined ship, instead of loading up already sinking ships, or planning decades for a turn-around of huge old-built freighter ships drifting sideways. This entry can be debated at the FaceBook of Andras Pellionisz]



Initial impact of the sequencing of the human genome [What went wrong according to Eric Lander]

Eric S. Lander
Nature
doi:10.1038/nature09792

...The view from 2000

Our knowledge of the contents of the human genome in 2000 was surprisingly limited. The estimated count of protein-coding genes fluctuated wildly. Protein-coding information was thought to far outweigh regulatory information, with the latter consisting largely of a few promoters and enhancers per gene. The role of non-coding RNAs was largely confined to a few classical cellular processes. And the transposable elements were largely regarded as genomic parasites. [This is a very diplomatic but factual rejection of the "Junk DNA" misnomer - AJP]

"Mr. President, the basic assumptions were all wrong!" (Lander video at 9:43, Exhibit A)

A decade later, we know that all of these statements are false. [It follows that a new set of fundamental assumptions has to be made - totally consistent with Francis Collins' statement at the publication of the ENCODE results, showing that "the genome is pervasively transcribed", that "now the scientific community will have to re-think long-held beliefs" - AJP] The genome is far more complex than imagined, but ultimately more comprehensible, because the new insights help us to imagine how the genome could evolve and function...

...The road ahead

The ultimate goal is to understand all of the functional elements encoded in the human genome. Over the next decade, there are two key challenges.

"Mr. President, 'Junk DNA' is dead as a doornail (Lander video at 11:29; blue (genic) area is shrinking, red (formerly Junk DNA) area is already five times bigger and is exploding. The "Junk" is evolutionarily conserved five times more than "genes" (1.2% - but most of them are non-coding "introns", anyway, Exhibit B)

The first will be to create comprehensive catalogues across a wide range of cell types and conditions of (1) all protein-coding and non-coding transcripts; (2) all long-range genomic interactions; (3) all epigenomic modifications; and (4) all interactions among proteins, RNA and DNA. [Yes, "all interactions" does include the PROTEIN-to-DNA interaction; this is a very diplomatic rejection of Crick's "Central Dogma" (that protein information "never" goes back to DNA sequence information) - today even high-schoolers know that proteins methylating DNA modulate the accessibility of DNA sequence information - AJP]

Some efforts, such as the ENCODE and Epigenomics Roadmap projects, are already underway.

Among other things, these catalogues should help researchers to infer the biological functions of elements; for example, by correlating the chromatin states of enhancers with the transcriptional activity of nearby genes across cell types and conditions.

... The second and harder challenge is to learn the underlying grammar of regulatory interactions; that is, how genomic elements such as promoters and enhancers act as ‘processors’ that integrate diverse signals.

"Mr. President, the DNA is Fractal! Lander video shows fractal structure at 16:38 and utters the word "fractal" at 41:20 (exhibit C)

Large-scale observational data will not be enough. We will need to engage in large-scale design, using synthetic biology to create, test and iteratively refine regulatory elements. Only when we can write regulatory elements de novo will we truly understand how they work. ...

[This somewhat belated but top-level rejection of both mistaken dogmas (Junk DNA and the Central Dogma) should lead a scientist like Eric Lander, a mathematician by training, to learn NOT the "grammar" but the "algorithm" of genome regulation, as laid out e.g. in the peer-reviewed paper The Principle of Recursive Genome Function (2008) and its popular rendering in the Google Tech Talk YouTube (2008). The road-map of when, how and who took the wrong turn into the more-than-half-a-Century dead-end of (Holo)Genomics - blocking the path to understanding fractal iterative recursion as the core of genome regulation, and to hunting, in a targeted fashion, for fractal defects causing genome-regulatory diseases - will be summed up shortly. This entry can be debated at the FaceBook of Andras Pellionisz]

Dr. Lander's extremely revealing video (look up the very end of his YouTube) makes a further dramatic turn AFTER he wraps up his speech. Below is a transcript of a single question, answered only by Dr. Lander, with no comment by Dr. Green at all:

41:55 Eric Lander finishes “and thank you for the invitation”. [Applause by NIH audience in the NIH Auditorium]

42:06 NHGRI Chief, Dr. Eric Green, Session Chair, takes over: "We have time for one quick question, if anybody would be bold enough" [calls on a short-haired lady, about 40 years of age, clad in green, from the audience]

42:14 “That was a great talk! Uhm.. but you, Eric [Lander] and Francis [Collins] all got up and talked about the diminishing cost of sequencing, which I think was great. But what I would really like you to comment on what the increasing cost of analysis has done because the two are really tied in hand in hand, and now you are talking about a million genome and we don’t really want to update the reference, because people really don’t want to analyze a million genome. So could you please comment on where to go get the analysis faster, better and more reproducible?”

42:44 [Dr. Lander answers, standing on the podium together with Dr. Green, with a truly baffled face] "This is a great question. There are really three parts of the costs of what we have to be going ahead. There is … actually, four parts … collecting samples, preparing samples, sequencing samples and analyzing data. The sequencing part continuing to drop, that's very good. With every other part we're gonna have to put a tremendous amount of attention to sample prep, which will, in the next year or so, begin to match the cost of sequencing of all exomes. We got to nail that down. Collecting the samples [pats Dr. Green's shoulder] is also a non-trivial expense, [smiles] and yes, with analyzing the data, right now data storage alone (it used to be a trivial sliver of the pie) is now a visible part of the pie, and if you calculate another five-fold decrease of [sequencing] costs, storage alone will become a significant part of the pie - a big problem of running faster than Moore's Law is you no longer have Moore's Law keep decreasing the storage at the rate you need. And so one is gonna need to store only parts of it, one will have to use reduced representation by compression techniques, one is gonna say I've seen this genome a million times before; the salient features I need to store are the following. [AJP: Thus, the three easier parts are to be covered, but...] We are going to need tremendous input from informatics folks, from computer scientists to think about these kinds of reduced representations, about efficient computing, but this is in the spirit of the Genome Project, which has always been reaching out to different fields, and Lord knows at this point we are going to need help from a lot of fields to bring all those costs down in parallel to really deliver on his [Lander points at Dr. Green, who is leading the Genome Institute of the Dr. Collins-led NIH] "Million Genomes" project." [One wonders about the algorithmic-understanding project, which can already be found outlined in The Principle of Recursive Genome Function - AJP]
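
As a toy illustration of the "reduced representation" Lander alludes to - storing a genome as its differences from a shared reference rather than in full - consider this hypothetical Python sketch (substitutions only; real pipelines must also handle indels, structural variants and quality scores):

```python
# Toy sketch: store only the positions where an individual genome
# differs from a shared reference (assumes equal length, no indels).
def diff_against_reference(reference, genome):
    """Return {position: base} for substitutions relative to reference."""
    return {i: b for i, (r, b) in enumerate(zip(reference, genome)) if r != b}

def reconstruct(reference, variants):
    """Rebuild the individual genome from reference + stored variants."""
    seq = list(reference)
    for pos, base in variants.items():
        seq[pos] = base
    return "".join(seq)

ref = "ACGTACGTACGT"
ind = "ACGTACCTACGA"
v = diff_against_reference(ref, ind)
print(v)                           # {6: 'C', 11: 'A'}
print(reconstruct(ref, v) == ind)  # True: lossless at a fraction of the size
```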

44:18 [Dr. Green, Chairman of the Celebration, makes no comment at all, but moves on] "Right. I know that there is a lot, but we have to move on…"

[video abruptly ends]



Primates' Unique Gene Regulation Mechanism: Little-Understood DNA Elements Serve Important Purpose

ScienceDaily (Feb. 9, 2011)

[Realizing the importance of Genome Regulation requires an IQ not much higher than that found in most primates; see the 150,000+ views of the rap "Regulatin' Genes" and the elaborate Google Tech Talk YouTube of 2008, approaching 10,000 views - AJP]

Scientists have discovered a new way genes are regulated that is unique to primates, including humans and monkeys.

Though the human genome -- all the genes that an individual possesses -- was sequenced 10 years ago, greater understanding of how genes function and are regulated is needed to make advances in medicine, including changing the way we diagnose, treat and prevent a wide range of diseases.

"It's extremely valuable that we've sequenced a large bulk of the human genome, but sequence without function doesn't get us very far, which is why our finding is so important," said Lynne E. Maquat, Ph.D., lead author of the new study published February 9 in the journal Nature.

When our genes go awry, many diseases, such as cancer, Alzheimer's and cystic fibrosis can result. The study introduces a unique regulatory mechanism that could prove to be a valuable treatment target as researchers seek to manipulate gene expression -- the conversion of genetic information into proteins that make up the body and perform most life functions -- to improve human health.

The newly identified mechanism involves Alu elements, repetitive DNA elements that spread throughout the genome as primates evolved. While scientists have known about the existence of Alu elements for many years, their function, if any, was largely unknown.

Maquat discovered that Alu elements team up with molecules called long noncoding RNAs (lncRNAs) to regulate protein production. They do this by ensuring messenger RNAs (mRNAs), which take genetic instructions from DNA and use them to create proteins, stay on track and create the right number of proteins. If left unchecked, protein production can spiral out of control, leading to the proliferation or multiplication of cells, which is characteristic of diseases such as cancer.

"Previously, no one knew what Alu elements and long noncoding RNAs did, whether they were junk or if they had any purpose. Now, we've shown that they actually have important roles in regulating protein production," said Maquat, the J. Lowell Orbison Chair, professor of Biochemistry and Biophysics and director of the Center for RNA Biology at the University of Rochester Medical Center.

The expression of genes that call for the development of proteins involves numerous steps, all of which are required to occur in a precise order to achieve the appropriate timing and amount of protein production. Each of these steps is regulated, and the pathway discovered is one of only a few pathways known to regulate mRNAs directly in the midst of the protein production process.

Regulating mRNAs is one of several ways cells control gene expression, and researchers from institutions and companies around the world are homing in on this regulatory landscape in search of new ways to manage and treat disease.

According to Maquat, "This new mechanism is really a surprise. We continue to be amazed by all the different ways mRNAs can be regulated."

Maquat and the study's first author, Chenguang Gong, a graduate student in the Department of Biochemistry and Biophysics at the Medical Center, found that long noncoding RNAs and Alu elements work together to trigger a process known as SMD (Staufen 1-mediated mRNA decay). SMD conditionally destroys mRNAs after they orchestrate the production of a certain amount of proteins, preventing the creation of excessive, unwanted proteins in the body that can disrupt normal processes and initiate disease.

Specifically, long noncoding RNAs and Alu elements recruit the protein Staufen-1 to bind to numerous mRNAs. Once an mRNA finishes directing a round of protein production, Staufen-1 works with another regulatory protein previously identified by Maquat, UPF1, to initiate the degradation or decay of the mRNA so that it cannot create any more proteins.
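
[The duplex logic above can be illustrated computationally. The sketch below is a toy, not the authors' method: it merely flags an mRNA whose 3' UTR contains a stretch complementary to an Alu element carried by an lncRNA. The sequences and the 12-nt cutoff are invented; real Alu elements are roughly 300 nt long and pair imperfectly.]

    # Toy SMD-target check: can a window of the lncRNA's Alu element base-pair
    # with the mRNA 3' UTR? (RNA alphabet; perfect complementarity assumed.)

    COMPLEMENT = str.maketrans("ACGU", "UGCA")

    def reverse_complement(rna: str) -> str:
        return rna.translate(COMPLEMENT)[::-1]

    def could_form_smd_duplex(lnc_alu: str, utr3: str, min_len: int = 12) -> bool:
        """True if any min_len stretch of the lncRNA Alu is complementary to the UTR."""
        target = reverse_complement(lnc_alu)
        return any(target[i:i + min_len] in utr3
                   for i in range(len(target) - min_len + 1))

    lnc_alu = "GGAUUACAGGCGUGAG"                  # invented lncRNA Alu fragment
    utr3    = "AAACUCACGCCUGUAAUCCCAAA"           # invented 3' UTR
    print(could_form_smd_duplex(lnc_alu, utr3))   # True -> candidate STAU1 site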

While the research fills in a piece of the puzzle as to how our genes operate, it also accentuates the overwhelming complexity of how our DNA shapes us and the many known and unknown players involved. Maquat and Gong plan on exploring the newly identified pathway in future research.

This research was supported by a grant from the General Medical Sciences Division of the National Institutes of Health and an Elon Huntington Hooker Graduate Student Fellowship.

[Full paper is at Chenguang Gong, Lynne E. Maquat. lncRNAs transactivate STAU1-mediated mRNA decay by duplexing with 3′ UTRs via Alu elements. Nature, 2011; 470 (7333): 284 DOI: 10.1038/nature09701 ]

[With the evidence mounting in the ~1 million Alus of the human genome, how is it that some Lonely Central Moron can continue claiming ignorance? See below - AJP]

[Seed paper 1989, FractoGene early general coverage 2002, and The Principle of Recursive Genome Function 2008 - AJP]

[Oh, yes. How about "the Fractal Defect in an ALU" disrupting a FractoSet, causing Friedreich Spinocerebellar Ataxia; Figure and Cold Spring Harbor presentation, 2009 - Entry can be discussed at FaceBook of Andras Pellionisz]

^ back to top


Genomes: Know Your Genes ... Fast

Personal computing transported the technology out of large machines and into the comfort of a room, in the form of desktop, laptop and palmtop computers. A similar revolution is now underway with human genome analysers, where the once-expensive task of DNA decoding is gradually moving out of large medical facilities and onto researchers’ tables and, ultimately, to common people.

One such example is the Personal Genome Machine (PGM), which is presently used for research purposes only and is the size of a desktop printer. The idea is to input a sample of DNA and receive the results within hours. Since it is a research-based device, the genetic sequence revealed cannot be used for medical diagnostic or therapeutic purposes as yet, but it has still achieved what other machines could not: it uses computer-like technology, namely massively parallel semiconductor-based sensors that measure the hydrogen ions generated during DNA replication.

This real-time processing is the key to its speed: chemical information is transformed directly into digital information for the end-user’s consumption. The machine, which weighs around 30 kg, comes with an iPod/iPhone dock and sports a touchscreen interface.
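
[The signal-to-sequence step can be sketched in a few lines. This is a schematic of ion-semiconductor base-calling in general, not the PGM's actual parameters: reagent flows cycle through the four nucleotides, and the voltage spike from released hydrogen ions is roughly proportional to how many bases are incorporated in that flow. The flow order and signal values below are invented.]

    # Toy flow-space base-caller: one nucleotide reagent is washed over the chip
    # per flow; the signal is rounded to the number of bases incorporated.

    FLOW_ORDER = "TACG"  # hypothetical cyclic flow order

    def call_bases(signals: list) -> str:
        """Convert per-flow signals into a base sequence."""
        seq = []
        for i, signal in enumerate(signals):
            count = round(signal)           # e.g. 1.93 -> a 2-base homopolymer
            seq.append(FLOW_ORDER[i % 4] * count)
        return "".join(seq)

    # Four flows: T incorporated once, A not at all, C twice, G once.
    print(call_bases([1.05, 0.02, 1.93, 0.97]))   # -> "TCCG"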

One may wonder whether there is a real need to invest time and money in creating smaller and faster DNA and genome sequencing machines and then making them public. The answer is simple: research in DNA sequencing is the key to understanding the basic physical features of human beings and is instrumental in preventing, and providing cures for, diseases linked to genetic composition.

Additionally, a personal gene sequence can help determine mutations that predict which drugs will work best in a given condition. Earlier, complete human genome sequencing had been a daunting task, but the Human Genome Project undertook it successfully and helped determine the exact order of the three billion chemical building blocks making up the DNA of the 24 distinct human chromosomes.

Based on these rapid advancements, companies like HolGenTech.com have taken genome applications to the next level by introducing a smartphone Personal Genome Assistant (PGA). This is a unique handset application that reads the bar code off a product’s wrapping to identify its ingredients and matches that data with data generated by service providers like 23andMe and Navigenics.

The former is a retail DNA testing service (DTS) that allows interested users to purchase a kit from its online store, provide a saliva sample in the test tube included in the kit and ship it to the lab. After a few weeks, the results are available online. According to the service, it provides users with details on personal traits from baldness to muscle performance, risk factors for 94 diseases, predicted response to drugs and even ancestral origins. Navigenics also works in a similar way, using a saliva sample.

The smartphone application also takes into account users’ personal health data stored in database services like Google Health and Microsoft HealthVault. This integrated, matched view provides a recommendation score that allows consumers to make a choice when purchasing a particular product. The idea is to make consumers aware of any risks a product poses to someone with a known genetic or medical condition.
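
[The matching logic just described can be sketched as follows. Everything in the sketch - the marker-derived ingredient weights and the scoring rule - is a hypothetical placeholder for illustration, not HolGenTech's actual algorithm or data.]

    # Toy Personal Genome Assistant scoring: sum per-ingredient weights derived
    # (hypothetically) from the user's genetic test results and health records.

    USER_WEIGHTS = {          # ingredient -> risk/benefit weight for this user
        "gluten":  -0.9,      # e.g. celiac-associated markers present
        "folate":  +0.6,      # e.g. a variant suggesting extra folate helps
        "lactose": -0.4,
    }

    def recommendation_score(ingredients: list) -> float:
        """Higher is better for this user; unknown ingredients are neutral."""
        return sum(USER_WEIGHTS.get(item, 0.0) for item in ingredients)

    scanned = ["water", "folate", "gluten"]          # looked up from the bar code
    print(round(recommendation_score(scanned), 2))   # -0.3 -> leans toward "avoid"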

Upcoming devices

Scientists at Imperial College London are researching workable and quick ways of sequencing an entire genetic makeup. Since each human genome consists of around three billion bases, sequencing one at present takes about 3.5 days.

Compared to this, the new device is intended to sequence an entire human genome in about five minutes. The methods available at present can sequence genes at a rate of approximately 10 bases/second. However, it is expected that with the success of this research, genome devices will be able to read around 10 million bases/second.

The latest genome device relies on the fact that each base has its own distinct electrical signal. Each strand of DNA is passed through a 50-nanometre nanopore opening in a silicon chip. As it goes through the tiny gap, a tunnelling electrode junction on the other side reads each base’s distinct electrical signal. And even though using electrical signatures to read DNA is not a new idea, the problem was that no one was able to create a small enough electrode junction until now.

However, the team at Imperial College has successfully constructed a prototype with a gap small enough to read DNA bases. Next, they will work on calibrating the device to identify the individual bases. The scientists working on this project are optimistic that their method will be in wide use within the next 10 years.
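
[What "calibrating the device to identify the individual bases" might look like computationally: a nearest-centroid classifier that maps a measured tunnelling current to the base with the closest calibrated level. The current values below are invented placeholders, not Imperial College's measurements.]

    # Toy calibration table and classifier for nanopore tunnelling-current reads.

    CURRENT_LEVELS = {"A": 1.0, "C": 1.4, "G": 2.1, "T": 0.6}   # hypothetical, in nA

    def classify_base(measured: float) -> str:
        """Pick the base whose calibrated current is closest to the reading."""
        return min(CURRENT_LEVELS, key=lambda b: abs(CURRENT_LEVELS[b] - measured))

    readings = [0.95, 1.45, 2.00, 0.65, 1.10]
    print("".join(classify_base(r) for r in readings))   # -> "ACGTA"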

[Read carefully what the article mentions about HolGenTech’s Personal Genome Assistant "smart phone". Neither 23andMe nor Navigenics establishes a diagnosis - and HolGenTech does not even touch anything "medical". Instead, it focuses on Genome-based Product Recommendations, where a diagnosis may come from Google Health and/or Microsoft HealthVault databases, made interoperable with genomic markers showing how one product (supplement, vitamin, cosmetics, etc.) could be better for your genome than another. This entry is open to comments at FaceBook of Andras Pellionisz]

^ back to top


Computing in the Age of the $1,000 Genome

Xconomy, Seattle
Luke Timmerman 12/27/10

Speakers from Isilon, Arch Join Stellar Lineup

The quest to sequence entire human genomes for $1,000 or less is one of the stories that many predict will change healthcare in the 21st century. It’s an enormously complex puzzle that requires some of the brightest minds in both IT and life sciences to put their heads together. And quite a few of them are working to make this happen right here in Seattle.

So that’s why I’m pleased to announce we’re adding a couple more great speakers to our next event, “Computing in the Age of the $1,000 Genome” on February 7th in Seattle. The first is Paul Rutherford, the chief technology officer of Seattle-based Isilon Systems, which has now officially been acquired by EMC for $2.2 billion. The second is Bob Nelsen, the managing director of Arch Venture Partners, an early investor in Illumina, the leading maker of DNA sequencing instruments that are creating this massive data pile-up.

Rutherford is a natural fit for this event because he has spearheaded Isilon’s work in providing the immense data storage capability that biologists need when they run DNA sequencers that spit out billions of data points. Isilon has gotten some early traction in this market, having signed up A-list customers like The Broad Institute of MIT and Harvard, Johns Hopkins University, Merck, & Genentech.

Rutherford will offer his perspective during a panel at this event alongside Deepak Singh of Amazon Web Services, and Jim Karkanias of Microsoft Health Solutions. Each comes at the genomic data challenge from a slightly different angle—Amazon is focused on flexible cloud computing approaches for storage and analysis, Microsoft has an open-source software platform it is pushing along with bioinformatics software to crunch the data, while Isilon offers hard-core centralized servers to store and access the data at places that pump out vast amounts of DNA sequences every day.

I’ve asked Tim Hunkapiller, one of the founding fathers of bioinformatics from the early 1980s at Caltech, to moderate this panel. Hunkapiller has a long history as an academic scientist, and these days has his finger on the pulse of what’s new in DNA sequencing instruments, partly through his work as a consultant to one of the industry leaders, Carlsbad, CA-based Life Technologies (NASDAQ: LIFE).

Nelsen, who’s never afraid to stir the pot, will join this event for a closing fireside chat with biotech pioneer Leroy Hood.

So, here’s the updated list of speakers:

—Leroy Hood, the co-founder and president of the Institute for Systems Biology, Seattle

—Cliff Reid, co-founder and CEO, Mountain View, CA-based Complete Genomics

—Eric Schadt, chief scientific officer, Menlo Park, CA-based Pacific Biosciences

—Jim Karkanias, senior director, applied research and technology, Microsoft Health Solutions, Redmond, WA

—Deepak Singh, senior business development manager, Amazon Web Services, Seattle

—Rowan Chapman, partner, Menlo Park, CA-based Mohr Davidow Ventures

—Andreas Sundquist, co-founder and CEO, Palo Alto, CA-based DNANexus

—Ilya Kupershmidt, co-founder and VP of products, Cupertino, CA-based NextBio

—Rob Arnold, president, Seattle-based Geospiza

—Tim Hunkapiller, Seattle-based consultant, Life Technologies

—Paul Rutherford, chief technology officer, Isilon Systems, Seattle

—Bob Nelsen, managing director, Arch Venture Partners, Seattle

Tickets have been going fast for this event, and at the current pace I wouldn’t be surprised if this event sells out a couple weeks in advance. So check your calendars for the afternoon of February 7, and join us for a thoughtful conversation about how computer scientists can work with biologists in a way that will ultimately shake up medicine as we know it.

Luke Timmerman is the National Biotech Editor of Xconomy, and the Editor of Xconomy Seattle. You can e-mail him at ltimmerman@xconomy.com, or follow him at twitter.com/ldtimmerman.

[The Dreaded DNA Data Deluge (see YouTube, 2008) has long been expected. Now, not only does "Big IT" (Microsoft, Amazon Cloud Services etc.) take it deadly seriously, but biological systems (lung, kidney, brain cell - and even the genome and the organelles, organs, and organisms it grows) are also targeted in their intrinsic mathematical language - fractal geometry. FractoGene (Pellionisz, 2002) had no chance of being published in 2002, as it ran dead-against the two prevailing (obsolete) axioms: the Central Dogma (with Crick still alive) and the JunkDNA misnomer that was officially put to rest only by ENCODE in 2007 (though Dr. Pellionisz put together an International PostGenetics Society with a European Inaugural International Conference to speed up publication of ENCODE by 8 months). Thus, FractoGene was entered at the USPTO not in the interest of seeking undue monetary gains, but to secure the intellectual priority-date of the concept, which reaches back to his Cambridge University Press publication of "The Fractal Geometry of Purkinje Neurons", 1989. With the ENCODE results published, Dr. Pellionisz could publish The Principle of Recursive Genome Function in a peer-reviewed research journal in 2008, and present his Google Tech Talk YouTube, 2008, with genome computing architectures (YouTube, 2009, 2010).

[See here]

[See here]

-[FractoGene, 2002]

This entry can be debated on the FaceBook of Andras Pellionisz]

^ back to top


The State of Science [What was Obama's Sputnik? What should be his Apollo? - AJP]

January 26, 2011
Genomeweb

The theme of US President Barack Obama's State of the Union address yesterday evening was how Americans could "win the future" despite increased competition from countries including China and India. Obama noted that China has the largest private solar research facility as well as the world's fastest computer — which Top500 says is at the National Supercomputer Center in Tianjin. "We need to out-innovate, out-educate, and out-build the rest of the world," Obama said.

Obama then called for an increased federal investment in science as well as for training 100,000 more science, technology, engineering, and math teachers. "This is our generation's Sputnik moment," he said, echoing remarks he made at Forsyth Tech in Winston-Salem, NC, in December....

William Talman, the president of the Federation of American Societies for Experimental Biology, said he was pleased to hear Obama say that he wants to make biomedical research investment a 2012 budget priority. "This will promote innovation, create new technologies, improve health, and revitalize the economy," Talman said in a statement. "It is also gratifying to the thousands of young Americans who have dedicated themselves to pursuit of careers in science and engineering, and this will inspire others to follow their lead."

[The President correctly identified his "Sputnik moment" - when China turned on the world's fastest supercomputer (move over, USA; October 28, 2010). However - just as with Sputnik - the proper answer is NOT to launch a "bigger, better Sputnik", but to escalate the competition into a new dimension (in that case, producing the Apollo Moon Shot). In this column, referring by name to the "Sputnik analogy", I argued on August 15, 2010 that the truly vital challenge is solving the science problem of the mathematical understanding of genome regulation, such that the "Industrialization of Genomics" could be complete with not only sequencing the DNA but also interpreting its Recursive Genome Function (Fractal Recursive Iteration). When China first sequenced the Han (Chinese) genome, they stressed in the brief announcement that their knowledge of the genome of the largest homogeneous population of the World (Han) is not only vital for their health care, but is also a strategic priority (the Russians likewise cited "national strategy" when they sequenced the first Russian genome). - This entry can be debated on the FaceBook of Andras Pellionisz]

^ back to top


Like Life Itself, Sustainable Development is Fractal

By Sustainable Land Development Initiative | January 7th, 2011

By Terry Mock and Tony Wernke, SLDI Co-founders

Follow Terry and Tony on Twitter: Terry @SustainLandDev; Tony @Sustainable4U

Watch the full episode. See more NOVA.

Watch the full movie here

A new understanding of the world is revolutionizing how scientists and other professionals of all disciplines are solving important problems today, and this understanding also has the potential to significantly impact how we think about and work to achieve a sustainable world.

In just the last couple decades, we have learned that fractal geometry – and its related field of chaos theory – forms the very basis of science. “Chaos,” as its name implies, is the study of processes that appear so random that they do not seem to be governed by any known laws or principles, but which actually have an underlying order. We now know that the physical, biological, social and even the economic universe is not random, and we’re beginning to determine just what that underlying “code” is.

Scientists are learning that everything natural is created by the immutable laws of fractal geometry. This includes static elements as well as energy flows, living things, and their behavior patterns. They are all built on self-similar patterns that replicate each other on increasing and decreasing scales, sort of like Russian nesting dolls. The various levels of scale are not all exactly alike, but they are all self-similar and build one on top of the other based upon a fundamental “code” that reproduces itself on different scales. In both the metaphysical and practical sense, the entire universe is built by fractal geometry.

While it’s a relatively simple concept to understand and the general idea isn’t new, we have barely begun to realize the full importance and usefulness of fractal geometry. We are now beginning to learn how to dissect these “codes” and fractal geometry is now being used in many different ways. It is being used to better understand and impact economic trends, computerized image/data compression, human behavior, natural systems, molecular biology, cosmology, and much, much more.

Brief History

Throughout history, scientists have formed scientific principles in one discipline based on those from another. In ancient China, irrigation science formed the basis for the healer’s model of Chi, the life-force energy that flows through meridians like aqueducts. In psychology, Freud’s conceptualization of emotion used a scientific model of a hydraulic steam engine. More recently, laser technology and holography have prompted the neurological investigation and understanding of memory. The interrelationships go on and on, and now our recent understanding of fractal geometry goes to the heart of it all.

Benoit Mandelbrot, a mathematician who worked at IBM, discovered the computer’s ability to visually display what had previously been abstract mathematics, such as the mathematics of recursive loops. He coined the term “fractal geometry” to describe his efforts to portray the complexity of nonlinear systems in visual form, admitting his fascination with the beauty of many fractal forms. In such a system the formula is typically simple: a single equation whose output is fed back into it as its next input, forming an infinite loop. Although the idea of recursive loop equations has been around a long time, it is only with the advent of computers that enough iterations to approximate infinity – in both inward (high number of decimal places) and outward (high number of digits) directions – can be produced.
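
[Mandelbrot's recursive loop is easy to make concrete. In the minimal sketch below, the output of z*z + c is fed back in as the next input, and a point c is marked by whether the loop stays bounded; the grid resolution and iteration cap are arbitrary choices.]

    # Minimal Mandelbrot-style recursive loop: iterate z = z*z + c and record
    # how long it takes |z| to escape beyond 2 (if it ever does).

    def escape_time(c: complex, max_iter: int = 50) -> int:
        z = 0 + 0j
        for n in range(max_iter):
            z = z * z + c            # the output is fed back in as the next input
            if abs(z) > 2:
                return n
        return max_iter

    # Crude text rendering: '#' marks points whose loop never escapes.
    for im in range(12, -13, -2):
        print("".join("#" if escape_time(complex(re / 20, im / 20)) == 50 else "."
                      for re in range(-40, 21, 2)))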

The following video introduces the concepts of fractal geometry and the capacity it has to enhance our understanding of our world. It is a fascinating view.

The Fractal Nature of Life Itself

Fractal geometry has dramatically expanded our scientific understanding in the field of biology. If you understand how a cell works, you understand how a human works biologically. A human is nothing more than a collection of about 50 trillion cells, each with the same structure and basic functionality, replicated and then adapted to work together to achieve the success of the community – the human being. All the functions of the human body are already present in every living cell that comprises it. Each cell has its own intelligence and all the functions of the whole human being. They know how to grow, reproduce, digest food, respire, defecate, communicate, etc. Everything humans can do, each cell in our body already does individually. After all, humans were programmed by and patterned after the cells themselves. [See FractoGene (Pellionisz, 2002), Fractal DNA governs fractal growth of organelles, organs and organisms, peer-reviewed science paper The Principle of Recursive Genome Function (2008) and Google Tech Talk YouTube (2008)]

Similarly, humanity is comprised of a population of self-similar humans, each made up of trillions of cells. As cells evolve, as people evolve, so does humanity. Biologically speaking, the entire population of the world is akin to a cell. It has the same functions and needs. It’s a self-similar pattern of increasing scale. The old axiom, “As it is above, so it is below,” becomes the geometry of life and the laws of humanity. The similarities of function and need are scientifically inescapable.

As is true with all things that are fractal, understanding the inner-workings of one level of the structure offers insight into all other levels of the structure. Humans tend to think of ourselves as a singularity, but by scientific definition each human is a community of cells, all working together harmoniously toward one end, the sustainability of the whole. If you understand the dynamics of how 50 trillion cells can live in the smallest environment in harmony, all the rules are there for a few billion people to live on the planet – in harmony. But this knowledge must be applied, as it is in the human body.

The following video introduces the laws of fractal geometry to our understanding of biology.

The Environment

One of the largest fractal relationships in real life is the self-similarity of objects in nature. Clouds, trees, ferns, snowflakes, crystals, mountain ranges, lightning, river networks, coastlines, and much, much more can be produced remarkably accurately within a computer using relatively simple fractal geometric equations. It is the way nature produces itself.

The Human Body

The brain, nervous system, respiratory system, circulatory system, and everything else in the human body, is a product of fractal geometry at work. Significant advances in medicine are currently being developed using fractal geometry that are directly attributable to our new understanding of the body.

Psychology

Scientists are learning that not only is human physiology fractal, but so is human behavior. In fact, fractal geometry has immense consequences for our mental and physical quality of life. Individual behavior, while often seemingly random, actually follows predictable patterns based on the “code” of instructions that are built into an individual’s psyche. Behavior, over time, reflects self-similarity and predictability. Indeed, psychologists are beginning to define individual identity based on the patterns of self-similarity embodied in behavior. With the human psyche, just as with fractals, the closer you look, the more there is to see. Psychologists can start with any detail about a person, no matter how trivial it may seem, and the more it is explored, the more richness and complexity is revealed, but it also can be traced back to the “code” of instructions that are embedded within the psyche of that individual person. In true fractal-like fashion, any part of behavior one examines reflects and is intimately connected to the whole. Through the exploration and understanding of this code, psychologists can begin to heal mentally ill patients.

Psychologists are learning that human creativity takes on fractal characteristics. Human creativity has an underlying process that selectively amplifies small fluctuations of mental stimulus and molds them into coherent mental states experienced as thought.

Fractal patterns of mental activity in sleep and wakefulness have been evaluated from EEG recordings. This has important implications for the proposal that dreams result from the brain’s attempt to bring meaning to the images evoked by a stimulation of the brain’s visual and motor centers during rapid eye movement sleep.

Other researchers are beginning to suggest that fractal geometry and chaos theory might help bring some order to the potpourri of provocative findings in parapsychology. For over a century, parapsychologists have investigated such purported phenomena as extrasensory perception and psychokinesis using old linear models from natural science. There is now some optimism that fractal models may begin to provide some breakthroughs.

Our understanding of fractal geometry has enabled scientists to make breakthroughs in social psychology and welfare. We’re learning that our ethical perspectives and the functioning of community are based on fractal geometric relationships. The forms of these community fractal relationships extend to national and even global physiology. For example, to this day George Washington is considered to be the “Father of Our Country” by setting the example for American democracy which now is playing out internationally, albeit with some unintended consequences due to some alterations in his original philosophy.

Economics

Fractal geometry has significant ramifications in economics and finance. Countless economic and financial “behaviors” are fractal in nature. In a nutshell, understanding fractal geometry enables organizations to improve their profitability and opens up entire new economic opportunities that simply did not exist previously.

In the 1930s, Ralph Elliott proposed that market prices unfold in specific patterns, which we now call fractal patterns.

The Elliott Wave Principle says that just as naturally occurring fractals often expand and grow more complex over time, so does collective human psychology. This is manifested in buying and selling decisions reflected in market prices. The principle has been popularized by Robert Prechter, a noted stock market analyst who realized through reading many of Elliott’s lost works that mass psychology is what the markets are all about, and mass psychology is governed by fractal geometry. Elliott Wave practitioner John Casti said in New Scientist in 2002,

It’s as though we are somehow programmed by mathematics. Seashell, galaxy, snowflake or human: we’re all bound by the same order.

In the 1970s, Mandelbrot found other economic variables that followed similar patterns. His research suggested that markets of many different kinds exhibited a complex behavior that could be broken into smaller and smaller self-same bits. Price fluctuations could be described by fractal functions. He eventually wrote a book on the subject, The (Mis)behavior of Markets.

Today, economics and finance remains one of the hottest topics in the application of fractal geometry. Fractal geometry is being used to accurately model financial market risk. This understanding has led to the evolution of a new economic discipline called “econophysics,” where the study of fractals is chief among the many topics of study. Another aspect of the real world tackled by fractal finance is that markets keep the “memory” of past moves, particularly of volatile days, and act according to such memory. Volatility breeds volatility; it comes in clusters and lumps according to the laws of fractal geometry.

Sustainable Development and Fractal Geometry

We know that human health and behavior (people) are driven by the laws of fractal geometry. We also know that our natural environment (planet) functions according to the laws of fractal geometry. And we know that the world of finance and economics (profit) follows the laws of fractal geometry. So if the individual people, planet and profit components of triple-bottom-line sustainability are bound by the laws of fractal geometry, it follows that their combination will be self-similar as well. A closer examination reveals that to be the case.

By following a set of instructions, or “code,” designed to define our sustainable condition on earth, all human endeavors, regardless of their size, scope and scale, can produce sustainable results. It is this universal code which we must model in order to produce replicable and scalable results. Whether we’re interested in land development, food production, or healthy living, the same basic set of instructions applies if sustainable results are desired.

Functioning like life itself, the SLDI Code™ is the world’s first and only model that graphically and conceptually identifies the instructions to achieve sustainable results, regardless of application.

It is represented as a three-sided fractal geometric figure, the Sierpinski (equilateral) triangle. It begins with the whole and penetrates deeper and deeper into project decision-making, replicating itself throughout all of the various areas, aspects and phases of the project development process, from planning through finance, design, construction and maintenance, and back to planning again.
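
[For readers unfamiliar with the figure invoked above, the Sierpinski triangle itself can be generated by exactly the kind of self-replication the article describes. A toy text rendering, using the parity of Pascal's triangle (one of several standard constructions, chosen here only for brevity):]

    # Text-mode Sierpinski triangle: cell (y, x) is filled when the binomial
    # coefficient C(y, x) is odd, which happens exactly when (x & y) == x.

    def sierpinski(rows: int = 16) -> None:
        for y in range(rows):
            cells = "".join("# " if (x & y) == x else "  " for x in range(y + 1))
            print(" " * (rows - y) + cells)

    sierpinski()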

It was not developed as a prescriptive checklist to specify a narrowly defined set of products or practices to achieve an outcome. Such systems almost always prove inappropriate on some level because of the unlimited variability of specific project types and circumstances at any point in time. The SLDI Code provides the basic instructions upon which any sustainable project may be achieved.

Any useful system requires the ability to adapt decisions to specific circumstances. As such, the SLDI Code is equally applicable regardless of project type, scale, terrain, climatic zone, etc. It allows for the consideration of all the unique characteristics and constraints of any project, all of which ultimately produce completely unique outcomes. In other words, rather than prescribing how to design a project, the SLDI Code provides the “code” or programming that provides the user with the tools to enhance the quality of their work by combining and balancing virtually unlimited possibilities into a sustainable end result.

In fact, the SLDI Code, like many computer programs, is applicable on projects of any type, regardless of industry. The principles embedded in its top few orders of magnitude are so universal that they apply to all applications of its use, much like a word processing, spreadsheet or graphic design program may be applied broadly across industries and disciplinary endeavors.

The code enhances the quality of outcomes, however diverse they may be, toward greater sustainability from a holistic people, planet and profit perspective. Using the constructs of fractal geometry, the SLDI Code uses the practical methods of how our world, and in fact the universe itself, has been constructed, to perpetuate life in the universe in the most effective manner possible.

In the pass-it-forward spirit, SLDI is now offering this fractal geometric model, including the instructions, to all those willing to collaborate for the collective benefit of people, planet and profit – today and in the future. It’s high time for us to apply the scientific laws of nature to our hope for sustainable civilization.

[See and debate the above in the FaceBook page of Andras Pellionisz -

The outcry in the first minutes of the NOVA movie is "Oh My God! Of course! It is obvious!" - in retrospect...

Not entirely surprisingly, we are approaching the long-awaited singularity. Just days ago I was told that "there is a universal agreement that the genome-epigenome system is fractal" - as if they had not opposed me so vehemently at every step of my way! With such pieces of evidence for scale invariance (an intrinsic property of fractals) found in neuronal activity, what was yesterday's "lucid heresy" becomes "of course, we knew it all along".

The hard part for "me too copycats" will be the next phase of inevitable hypocrisy, "actually I knew it before you did", since one would have to produce a peer-reviewed paper dated prior to my Cambridge University Press piece published in 1989 on the fractal development of Purkinje neurons, as governed by a recursive genome function. It will be hard for the San Francisco Chronicle to explain why they pulled an article already out in 2002 http://www.junkdna.com/plotkin.htm and for the Hungarian "Origo" to explain why they pulled the article already out in 2006 http://junkdna.com/origo/ playing up the Budapest host [Dr. F.A. Semmelweis University] at the expense of my science, such that he became a "regular" Member of the Hungarian Academy of Science, while my work already done went unpaid through their unilateral breach of contract; see both in Hungarian and in English at http://junkdna.com/contract_pellionisz.htm

Today, the "name of the game" is to profit on the software that delineates structural variants of the genome that reflect "parametric human diversity" from those that are "syntax-glitches where the genome deviates from its own intrinsic structural and functional geometry" - and thus we can provide diagnosis, therapy and up a to a cure for hereditary conditions.

It is one thing to talk about it - and a totally different ball game to turn major investment into Industrialization of Genome Informatics.

Now, approaching the genome at monstrous expense with no utilization of the understanding already gathered and at hand is becoming a major scientific/economic embarrassment.]

^ back to top


EU Funds Development of Gene Regulation Software Suite

January 13, 2011

Genomeweb Staff Writer

NEW YORK (GenomeWeb News) – CLC Bio said today that it is leading a consortium funded by the European Union to develop a software suite that will enable gene regulation analysis of a large number of genomes.

CLC Bio, Decode Genetics, Biobase, the University of Oxford, Hungary’s Alfred Renyi Institute of Mathematics, and Russia’s BioRainbow will use €1.6 million ($2.1 million) in EU funding to support the Comparative Genomics and Next Generation Sequencing, or COGANGS, project.

The COGANGS partners plan to develop a software suite that will enable the analysis of up to one thousand genomes at the same time.

“We will apply both the initial prototype and the final software package for the analysis of regions that have been identified to have strong disease associations in the human genome,” Gisli Masson, Decode’s director of bioinformatics, said in a statement.

Masson explained that the software suite could enable scientists to “unlock a lot of information in the vast collection of human DNA samples we already have, once this project enables us to do large-scale comparative genomics analyses.”

[Comment in FaceBook of Andras Pellionisz - The sum is laughable ($2.1 M) for participants from 6 European countries. In addition, the construction is highly debatable. The real news is that healthy funding, orders of magnitude bigger, has become a "must".

The EU has never been famous for sound and fair science & technology policies. For this petty sum, two industrial players (one is not even in the EU as yet!) will be permitted to rake in, for free, the largely unprotected intellectual power of an Institute of Mathematics of the Hungarian Academy of Sciences, plus the vast potential of Russia (unlikely to become an EU member anytime soon). Still, the news speaks highly of the EU initiative, as the needed USA equivalent program, at least two to three orders of magnitude larger (and one that should be headed by NSF or DARPA), is not even on the horizon yet.

Unless US "Big IT" moves to target with God' speed, China is likely to win this one - the rest will be history.]

^ back to top


Biotech’s Biggest Winner [according to Forbes, it is Illumina - AJP]

Forbes
Matthew Herper
Jan. 3 2011

To most people, including biotechnology investors, the human genome project was a bust that led to no new drugs, no medical breakthroughs, and no real profits. So as I start a series of posts that I’m calling Gene Week, let’s take a step back and look at the best-performing stocks in all of medicine, not just in biotech, but in medical devices and drugs, too. Five companies delivered total returns – a measure of how much you’d make if you’d held the stock and reinvested any dividends — of more than 300%.

The winner, hands down, is Illumina, the leading maker of DNA sequencing equipment, which has delivered a total return to investors of nearly 800% since late 2005. That annualizes to an amazing 50% return every year, buoyed by sales that have grown twelve-fold from $73 million in 2005 to a projected $879 million. Goldman Sachs projects that earnings of $72 million will triple by 2012.
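
[A back-of-envelope check on the annualization, assuming "nearly 800% total return" means an ending value of roughly 8 to 9 times the late-2005 investment over about five years:]

    (1+r)^5 \approx 8\ \text{to}\ 9 \;\Longrightarrow\; r \approx 8^{1/5}-1 \approx 0.52 \quad\text{or}\quad 9^{1/5}-1 \approx 0.55

[That is, an annualized return in the low-to-mid 50 percents, consistent with the article's rounded "amazing 50%".]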

How did a company that makes machines for decoding DNA wind up at the top of biotech’s heap, beating every drug maker? Illumina has led a revolution with these machines. Five years ago reading out all six billion letters of a person’s genetic code cost $1 million. Now, it costs something like $10,000, and the accuracy has improved. That’s a drop in price and increase in speed that rivals that seen with microprocessors. In the current issue of Forbes, I argue that DNA sequencing could become the foundation for a $100 billion business.

Illumina’s success is largely attributable to the strategic acumen of its chief executive, Jay Flatley, who may well be the best CEO in biotech. Illumina started out making DNA chips, an early technology for detecting mutations. At the time, it trailed Affymetrix, which had created the market; Illumina overtook Affy, but, more impressively, Flatley convinced his board to make a big bet on DNA sequencing by purchasing a company called Solexa for $600 million in 2006. Machines like these, still sold almost entirely to researchers, have become a $1 billion market in which Illumina has 70% market share. “The data really is beautiful stuff,” says blogger and genetic researcher Daniel MacArthur. “It’s just stunning it’s so clean.”

Plenty of companies are gunning for Illumina’s crown in sequencing – I focus on one entrant, Life Technologies’ Ion Torrent division, in my Forbes cover story. Illumina shares currently trade for a whopping 84 times trailing earnings. Goldman still rates the company a buy, saying it stands a good chance of holding on to its leadership in the sequencing space.

The rest of this list – this elite top five of health care companies that sport market capitalizations of more than $4 billion – actually bolsters my feeling that the time is ripe for DNA sequencing to push its way out of the research market and into medicine. Look at what other companies have rivaled Illumina’s five-year return: Alexion, which makes a rare disease drug that costs $500,000 per patient per year; Dendreon, a high-risk drug developer that hit pay dirt after years of desperate work on prostate cancer treatment; Perrigo, the Michigan maker of generic over-the-counter drugs; and Novo Nordisk, the diabetes-focused drug giant.

What do these other companies have to do with DNA sequencing? The argument for this technology has been that it will be cheap enough to become ubiquitous – that’s the idea behind the genoscenti’s favorite catchphrase, “The $1,000 genome.” But it doesn’t need to become cheap – it just needs to become useful. Alexion can charge half a million dollars for a rare disease drug, and Dendreon’s Provenge costs $93,000 a year. The cost of a genome doesn’t need to come down for doctors to start adopting DNA sequencing, the usefulness needs to go up and regulatory and insurance barriers need to come down. The kinds of successes that will drive that – like a case in which sequencing may have allowed doctors to identify the right treatment for a very sick five-year-old – are already happening.

Medicine may be ready for the next technological leap. It’s just going to be a matter of making it happen.

[Well ... almost. Matthew Herper's analysis is very compelling, but some deeper background and essentials for the future may be warranted. In the case of Illumina, while Jay Flatley is without question an outstanding CEO, much credit is also due to the revolutionary microarray technology that enabled Illumina to grow on the turf of Affymetrix. Also, regarding microarrays, it is noteworthy that so-called DTC (Direct-to-Consumer) Genome Testing is also grabbing a lead role with the Illumina arrays (e.g. by 23andMe, which has just catapulted its marketing model with the help of a more potent array by Illumina). As for the statement "Medicine may be ready for the next technological leap. It's going to be a matter of making it happen" - one could happily agree if the prescription for "making it happen" were just a little bit clearer in scientific terms. The analysis may leave the reader with the impression that success in (personalized) genomic medicine is just a matter of time/money - as it used to be believed, mistakenly as we now know, with the "War on Cancer" some forty years ago. "Just Sequencing" may be the equivalent of "Just Surgery" in the case of cancer, whereas success in both Genomic Medicine and Cancer Treatment and Cure needs a breakthrough in understanding the software-enabling (algorithmic) regulation of recursive genome function. This entry can be debated on the FaceBook-page of Andras Pellionisz]

^ back to top


The Next $100 Billion Technology Business

Forbes
Matthew Herper
Dec. 30 2010

That headline is the cover language from the current issue of Forbes magazine – for a story I wrote about DNA sequencing and, particularly, about Jonathan Rothberg and his new Personal Genome Machine.

What we are declaring in this story is that DNA sequencing, the technology by which individual letters of genetic code can be read out, could be the basis for a $100 billion market that encompasses not only medicine, where sequencing is already being evaluated to help cancer patients, but also other fields like materials science, biofuels that replace petroleum, and better-bred crops and farm animals. There are even synthetic biologists who are talking about using biology to make buildings and furniture based on the idea that this will be better for the environment than current plastic and concrete.

Rothberg’s machine is important because it is the first attempt to lower the cost of DNA sequencing machines to bring them to a far wider audience. The cost of sequencing DNA has been dropping at a rate that rivals – and may surpass – the increases in speed seen with the microchip, but the machines used to do it cost $500,000 or more. The PGM is far less powerful, but it costs only $50,000, although you need other equipment to get it running. It is being made and sold by Life Technologies, the laboratory equipment firm that bought Rothberg’s company earlier this year.

We’re likening the PGM to the Apple Computer, which changed the world, but it could be more like the Altair, which fizzled. Right now, Illumina of San Diego holds the lead in the newer, faster segment of the DNA sequencing market, and it could very well keep it. There are also other new players, such as Pacific Biosciences, which can sequence a single molecule of DNA, and Complete Genomics, which is taking a factory-like approach to bring down cost. That adds to the excitement. Of course, as with all things biotech, this could all fall apart.

Starting on Monday – earlier if I can’t help myself – I’m going to be posting as much material as I can about the new science of genetics, looking at the companies, the science, the potential and the pitfalls. I’ll show you that there has already been a business revolution driven by genomics, even though you might have missed it. I’ll tell you what I think this means for privacy, for drug development, and for medicine, and tell you what books and blogs are good sources of information about the coming DNA wave. It will be Gene Week on The Medicine Show – sort of like Shark Week, but with alleles and sequencing by synthesis instead of sharp teeth and small brains.

And as I do that, I’d like to hear from you. After you’ve read my story, please tell me what you think and share your questions and criticisms in the comments or via email. I’ll publish the best commentaries I receive – and I promise not to hold back if they are critical of my work. I’ll try to answer questions, or to find sources who can. I think we’re on the cusp of a really big technological change. What about you?

[Pellionisz, called-out comment] Forbes has been pioneering coverage of the first Decade of the “Genome Revolution”, and apparently of the second Decade, the “Industrialization of Genomics”. It is imperative to point out what went wrong in the first Decade (since 2001) and what is the sound basis for ramping up a “$100 Billion Technology” from 2011 – especially in view of the admission that “Of course, as with all things biotech, this could all fall apart”. In my opinion, an interesting historical parallel is the blow-up of Newtonian Physics: when its old axioms (that the atom would not split, that elements cannot be changed – even the philosophical foundation of determinism) were challenged, quantum mechanics had to be created first, before plunging into the peaceful and not-so-peaceful applications of the nuclear industry. The Decade of the “Genome Revolution” revealed (officially, with ENCODE, 2007) that the axioms of the Central Dogma, JunkDNA and genomic determinism were false – yet instead of introspection, Genomics turned towards “Industrialization”, starting with the necessary but not sufficient step of making full genome sequencing affordable. We may be in for the brutal scene of “Sequencing-based industrialization” falling apart if the “Dreaded DNA Data Deluge” (which I featured e.g. in YouTube, 2008) is not matched by our ability to interpret it with software-enabling theory (based on the sound informatics of The Principle of Recursive Genome Function). We must bear in mind that the very sustainability of the Industrialization of Genomics is at stake if it is (wrongly) assumed to be just “Technology”, without the software-enabling (algorithmic) understanding of the genome-epigenome (hologenome). Sequences will be worth nothing, and their huge glut will destroy the ecosystem of investment in and industrialization of genomics without the emergence of a mathematical understanding of the complex system of the hologenome. As always, the analytics of complex systems must start with identifying what the system is. The Principle of Recursive Genome Function holds that the system is fractal. Should there be a better idea, let’s hear about it.

[I entered my 2 cents on Matthew's blog - and will insert a pointer on my FaceBook page. The entry could thus be debated in Matthew's blog as well as on the FaceBook of Andras Pellionisz]


23andMe lowers price from $499 to $199 permanently

[With the Holiday sale of $99 gone, 23andMe made its change of business model permanent. Under the new model they provide a low-cost entry ($199 plus S&H) but charge a monthly fee of $5, and the buyer of the kit must enroll for the monthly updates for at least 12 months. Alternatively, with a one-time payment ($499) there is no monthly fee for the updates (apparently on a permanent basis). This new marketing will certainly generate a sizable enlargement of their pool of 50,000 or so customers to date (the official count is not public). This entry can be debated in FaceBook of Andras Pellionisz]
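
[The arithmetic behind the two plans, for comparison (S&H ignored):]

    199 + 12 \times 5 = 259 \qquad\text{(minimum outlay under the subscription plan)}
    199 + 5m = 499 \;\Longrightarrow\; m = 60\ \text{months}

[So the one-time $499 plan pays for itself only if one would otherwise keep the $5 updates for more than 60 months.]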


Genetic Tests Debate: Is Too Much Info Bad for Your Health?

Dec 19, 2010 | 8:31 AM ET |

By Samantha Murphy
My Health News

Hoping to find any disease susceptibilities lurking in her genes, 21-year-old Lee — who goes by the nickname "Zlyoga" on YouTube — spit into a container and posted a video of her salivary sampling on the popular site in December 2008.

"I think this is the coolest thing in the entire universe," she giddily said to the camera. "This is one of the best gifts I ever received — so much better than the bike I was going ask for."

Lee's parents had given her a mail-in genetic test. She completed the forms in the kit — then priced at a few hundred dollars — from California genetic-testing company 23andMe, enclosed the sample of her saliva, and sent it on its way.

Direct-to-consumer genetic tests have become increasingly popular since they first hit the market several years ago — in fact, 23andMe alone boasts of having more than 50,000 customers. Although the company does not release statistics about its actual growth, a spokeswoman told MyHealthNewsDaily that "our database has grown steadily."

But there's been much debate over whether knowing the results is beneficial or harmful — and if they even give an accurate picture of a person's risk for certain diseases.

Not the whole picture

When Lee received her results, she took to YouTube once again: "In some ways, reading the results felt like a horoscope," she said, sounding half-satisfied.

The test revealed she indeed has blue eyes and, like both of her parents, has a low tolerance for statin drugs, which are used to treat people with high cholesterol levels. However, she was surprised to learn that a few diseases that run in her family posed little to no risk for her.

"I know [genetic testing] is still in its infancy, but how do I know any of this [is] legitimate?" she said.

Others who have completed the 23andMe test have expressed similar concerns on YouTube: "It's all so vague; what does [higher risk] even mean?" asked "MelissaMich," after receiving her 23andMe results.

Not only may the results seem vague, they are just a snapshot of someone's genetic makeup, based on single-nucleotide polymorphisms (or SNPs).

"Imagine that the genome is a huge jigsaw puzzle, with many more pieces than we have ever seen in a puzzle before," said Dr. Andras Pellionisz, founder of HolGenTech, a genome interpretation software company. "Now, suppose someone gives you only 10 percent of the pieces."

By knowing your SNPs, you may "get lucky" and precisely learn your risk of some diseases, said Pellionisz, who thinks the tests are a good idea. But, he said, the genetic picture is incomplete.

For example, the results don't factor in lifestyle choices. A person might be told he has a low risk of developing lung cancer, but if he smokes two packs a day, his chances of getting the disease increase.

Indeed, 23andMe is upfront about this on its site, stating that it provides genetic information and "not the sequence of your entire genome," nor does it perform predictive or diagnostic tests. The company acknowledges SNP information is difficult to interpret.

The uncertainty factor

UCLA sociology professor Stefan Timmermans, who studies the genetic testing of newborns, said that knowing too much puts stress on those who've taken the tests. In a recent study, Timmermans revealed how the newly mandated genetic screening of newborns for rare diseases is creating unexpected upheaval for families whose infants test positive for risk factors but show no immediate signs of any diseases.

"Although newborn screening undoubtedly saves lives, some families are thrown on a journey of great uncertainty," Timmermans said. "Rather than providing clear-cut diagnoses, screening of an entire population has created ambiguity about whether infants truly have a disease — and even what the disease is."

"Basically you're telling families of a newborn, 'Congratulations, but your child may have a rare genetic condition. We just don't know, and we don't know when we'll know,'" Timmermans said.

His study paints a picture of families caught in limbo as they wait months for conclusive evidence that their children are out of the woods for various conditions. In many cases, however, the test results never come, the study found. Instead, the children slowly outgrow their known risk factors for dozens of metabolic, endocrine or blood conditions. But the effects linger.

"Years after, everything appears to be fine, parents are still very worried," Timmermans said.

Some families are so traumatized that they follow unwarranted and complicated treatment regimens, including waking their children up in the middle of the night, enforcing restrictive diets and limiting their contact with other people for years.

And the same lasting worries come with direct-to-consumer testing, Timmermans told MyHealthNewsDaily.

"Those types of tests are planting seeds in people's minds for something there isn't a lot of firm data about. The genetic information provided by direct-to-consumer tests by itself isn't enough; they also have to look at family history and what has actually developed."

Understanding the results

Mike Spear, a 56-year-old communications director from Calgary, Alberta, didn't know exactly what to make of his results. He found he had a high risk of age-related macular degeneration, which causes vision loss in old age, and became very concerned.

"When I saw my results on paper, it seemed impossible to distance myself from the fact that it was just an experiment," said Spear, who works for Genome Alberta, a not-for-profit genetics research funding organization.

Spear contacted a genetic counselor to gain more insight into the results' meaning.

"The counselor explained that just because I'm at risk for certain conditions, it doesn't mean it's going to turn into anything," Spear told MyHealthNewsDaily. "I also started to be proactive and go to the eye doctor more."

The company 23andMe offers counseling services and gives customers access to its online community, where people can chat about their results, for an additional fee.

But not everyone who participates in the tests reaches out to genetic counselors who can help explain the results, said Dr. Christopher Tsai, director of clinical informatics at Generation Health, a genetic-testing benefits firm. And this is another place where problems arise.

Further interpretation

The interpretation of the results is an increasingly central challenge to genetic testing, Tsai said, and the power to analyze the genome has outpaced the ability to interpret the results.

"Even trained geneticists disagree on how results should be interpreted and used to guide care," Tsai said. "The results can certainly be empowering if they lead to concrete actions that the patient can take. Even in terms of lifestyle, there is some evidence that genetics influence people's response to diet and exercise, and this information can guide their lifestyle changes."

According to a 2009 survey conducted by the National Institutes of Health, about 78 percent of respondents said they would change their diet and exercise habits if their results showed a higher risk of cardiovascular disease.

However, some diagnoses seem to only bring bad news, Tsai warned.

For example, a test can predict if someone will develop Huntington's disease — a devastating neurological condition with no cure. It's common for people to develop depression when given such results. [That is why 23andMe does not even check for this condition - such that they have no idea - even if you'd click "I *DO* want to know" -AJP].

"Understanding the value of genetic tests, and when [and] where to use them in the health care system, is becoming an increasing focus of the health care industry," Tsai said.

FDA crackdown

It is for this reason the Food and Drug Administration is increasingly scrutinizing direct-to-consumer genetic-testing companies, Tsai said.

Critics of the tests worry about the safety of consumers who base important lifestyle or medical decisions on inaccurate or misunderstood test results.

"The risk is what people may do in response to the tests — some may suffer psychological harm or feel dread about their future health risks," said Barbara J. Evans, co-director of the Health Law & Policy Institute at the University of Houston Law Center.

"People sometimes pursue ill-advised medical interventions that may actually cause them harm. They may be unaware that these tests can produce false positives and false negatives, and even when people do have a 'bad' gene that does not necessarily imply that the gene will ultimately make them sick. People's futures depend on many, many factors other than their genes," Evans said.

Evans said the solution will require studies to be done before tests enter the market, and ongoing evaluations to see how well they perform once they are in use.

About 90 percent of genetic tests available in the United States have never been through a regulatory review of how safe they are or how much they improve health, she said. Most experts agree such review is needed, but solutions have been mired in the controversies surrounding the tests.

There are practical barriers to forcing all genetic tests to undergo the same sort of review the FDA requires for other medical products, such as drugs. The obstacles include lack of data, the short commercial lives of test products and the difficulty of assessing products that make long-term predictions.

Evans said making genetic tests as safe and effective as they can be will require close coordination among the FDA, state agencies, professional groups and other private-sector overseers.

"Resolving the lingering unknowns about genetic tests will require more data, and getting more data will require a commitment of resources," Evans said. "And make no mistake, having better data about genetic tests will only improve the public's health if the data are communicated in a timely and understandable way to the public."

[Bottom line: GO FOR IT WHILE IT LASTS (sale ends December 25) - the best gift you can ever give for the Holidays! It checks for 179 conditions - at about 50 cents per condition you can save the lives of loved ones (up to ten kits per order). This entry can be debated in FaceBook of Andras Pellionisz]


Key information about breast cancer risk and development is found in 'junk' DNA

EurekaAlert
December 16, 2010

A new genetic biomarker that indicates an increased risk for developing breast cancer can be found in an individual's "junk" (non-coding) DNA, according to a new study featuring work from researchers at the Virginia Bioinformatics Institute (VBI) at Virginia Tech and their colleagues.

The multidisciplinary team found that longer DNA sequences of a repetitive microsatellite were much more likely to be present in breast cancer patients than in healthy volunteers. The particular repeated DNA sequence in the control (promoter) region of the estrogen-related receptor gamma (ERR-γ) gene – AAAG – occurs in between five and 21 copies, and the team found that patients who have more than 13 copies of this repeat have a cancer susceptibility rate three times higher than that of patients who do not. They also discovered that this repeat doesn't change the actual protein being produced, but likely changes the amount.
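[To make the repeat-length idea concrete, here is a minimal sketch - Python, purely illustrative; the function name and the toy promoter string are mine, not from the paper - counting contiguous AAAG copies in a sequence and flagging the >13-copy genotype the study associates with roughly threefold risk - AJP]

```python
import re

def max_repeat_copies(sequence, unit="AAAG"):
    """Return the largest number of contiguous copies of `unit`
    found anywhere in `sequence` (case-insensitive)."""
    runs = re.findall(f"(?:{unit})+", sequence.upper())
    return max((len(run) // len(unit) for run in runs), default=0)

# Hypothetical promoter fragment carrying a 14-copy AAAG run
promoter = "GCTT" + "AAAG" * 14 + "TTCA"
copies = max_repeat_copies(promoter)
print(copies, copies > 13)  # 14 True -> above the study's 13-copy threshold
```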

The researchers from VBI's Medical Informatics and Systems Division (https://www.vbi.vt.edu/vbi_faculty/vbi_research_group/personal_research_group_page?groupId=6), the University of Texas Southwestern Medical Center and the University of Liverpool, United Kingdom, report their findings in an upcoming edition of the journal Breast Cancer Research and Treatment. The study is currently available online. The group sequenced a specific region of the ERR-γ gene in approximately 500 patient and volunteer samples. While the gene has previously been shown to play a role in breast cancer susceptibility, its mechanism was unknown.

"Creating robust biomarkers to detect disease in their early stages requires access to a large number of clinical samples for analysis. The success of this work hinged on collaborations with clinicians with available samples, as well as researchers with expertise in a variety of areas and access to the latest technology," explained Harold "Skip" Garner, VBI executive director who leads the institute's Medical Informatics and Systems Division. "We are now working to translate this biomarker into the clinical setting as a way to inform doctors and patients about breast cancer susceptibility, development, and progression. Akin to the major breast cancer biomarkers BRCA1 and BRCA2, this will be of particular benefit to those high-risk patients with a history of cancer in their family."

The majority of DNA is non-coding, meaning it is not translated into protein. A sizeable portion of this type of DNA consists of microsatellites – specific repeated sequences of one to six nucleotides within the genome. There are over two million microsatellites in the human genome, yet only a small number of these repetitive sequences have previously been linked to disease, particularly neurological disorders and cancer.

"We've become increasingly aware that non-coded DNA has an important function related to human disease," said Michael Skinner, M.D., professor of pediatric surgery at the University of Texas Southwestern Medical Center and collaborator on the project. "Replication of this study in another set of patients is needed, but the results indicate that that this particular gene is an important one in breast cancer and they reveal more details about the expression of the gene. This kind of work could eventually result in the creation of a drug that would specifically interact with this gene to return expression levels to a normal range."

"Ninety percent of all the breast cancer patients we see aren't considered high risk patients, which means there wasn't any indication that they would be susceptible to breast cancer," said Dr. James Mullet, a radiologist at Carilion Clinic's Breast Care Center. "This compels us to screen everyone in some way. If we had a better test – one that is more robust and sensitive, but also specific – we could make sure the women with most risk are getting properly screened for breast cancer."

"One practical clinical application of this research is to have a test available that would allow us to tailor our screening better," Mullet said. "For example, we could lessen patients' time, expense, and worry if we could better determine which patients would need only a mammogram, as opposed to additional tests like ultrasound or screening breast MRI. This work may also give us genetic insight into the cause of the breast cancer that may develop in those 90 percent of patients who are not currently identified as high risk."

According to Garner, "There is a big gap between what is suspected and what is known about the genetics of cancer. While more work is needed to better understand how these changes play a role in cancer, these results can be used now as a new test for breast cancer susceptibility and, as our data suggests, for colon cancer susceptibility and possibly other types of cancer. We think this is just the beginning of what there is to be found in our junk DNA."

--

[Excerpts from the "Discussion" section of the paper in the journal Breast Cancer Research and Treatment - AJP]

There are at least five possible explanations for our results: (1) direct transcriptional influence of ERR-γ based on the length of the repeat, (2) linkage of an ancestral "lengthening" mutation with a cancer-causing mutation in/around ERR-γ, (3) the repeat resides in an uncharacterized biologically active RNA which is affected by the length of the repeat, (4) misregulation of splicing due to overexpansion of the polymorphic repeat, or (5) a spurious association due to various sampling errors or population issues (albeit unlikely).

["Runs" in intronic and intergenic regions (in the "Junk") have been associated with heredetitary syndromes - but his is one of the papers that pins the main "genome regulation disease" (cancer) on them. Thus, Crick's fear that collapse of his "Central Dogma" (in 1972 rescued by Ohno's nonsense "Junk DNA" theory) will necessitate "putting genomics on an entirely new intellectual foundation" (see The Principle of Recursive Genome Function), is now becoming an active field of identification of glitches in the recursion, leading to a collapse of genome regulation. It has been presented in Cold Spring Harbor, that e.g. the GAA repeat-run in Friedreich' Spinocerebellar Ataxia is a "fractal defect" that disrupts a FractoSet in the middle of an intronic fractal structure. This entry can be debated in FaceBook of Andras Pellionisz]

^ back to top


DIY DNA on a Chip: Introducing the Personal Genome Machine

Fast Company
BY ARIEL SCHWARTZ
Wed Dec 8, 2010

[Life/Ion Torrent Desktop Sequencer, $49k - AJP]

DNA sequencing technology isn't exactly accessible; a typical sequencing machine can easily cost $500,000. A startup called Ion Torrent aims to change that with a desktop sequencing machine for just $50,000--cheap enough for well-funded research projects to afford.

The key to Ion Torrent's Personal Genome Machine is a semiconductor chip that holds 1.5 million sensors, each of which can hold a single-stranded DNA fragment. The chip detects the DNA sequence electronically, unlike other sequencing machines that detect DNA optically with pricey lasers, microscopes, and cameras. It can sequence a DNA sample in a few hours, while other machines can take at least a week. And it can scale up fast. The company explains:

Because Ion Torrent produces its proprietary semiconductor chips in standard CMOS factories, we leverage the $1 trillion investment that has been made in the semiconductor industry over the past 40 years. This industry's huge manufacturing infrastructure enables Ion Torrent to meet any demand for our chips.

There are some caveats. Each $250 chip can only be used once. The chip also reads only a small amount of DNA: 10 to 20 million bases per run, against the 3 billion base pairs in the human genome. But that's enough for genetic diagnostic tests, according to Technology Review.
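[To see why 10-20 million bases per run suits targeted diagnostics but not whole genomes, a quick back-of-the-envelope calculation - my arithmetic, and the 250 kb panel is a made-up example, not an Ion Torrent spec - AJP]

```python
# Rough throughput arithmetic for the figures quoted above (illustrative only)
bases_per_run = 15e6        # mid-range of the cited 10-20 million bases
human_genome = 3e9          # base pairs in the (haploid) human genome

print(f"whole-genome coverage: {bases_per_run / human_genome:.4f}x")  # 0.0050x

# The same run is ample for a targeted diagnostic test, e.g. a
# hypothetical 250 kb gene panel sequenced to deep coverage:
panel_size = 250e3
print(f"panel coverage: {bases_per_run / panel_size:.0f}x")           # 60x
```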

Ion Torrent's machine goes on sale this month. Soon enough, these semiconductor sequencing chips may start popping up in cash-endowed hospitals around the world. Could consumer DNA sequencing machines be far behind?

[Life Technologies bought Jonathan Rothberg's Ion Torrent for $725 M just a few months ago - for an extremely strong reason: "Democratization of Genome Sequencing" (beyond the Industrialization of Genomics - making it available to the masses, like the Ford Model T). Without question, "Leveraging the Semiconductor Industry" for Genomics is a formidable economic driver. However, since the one-time-use $250 chip (run on the $49k machine) is a "bridge" from present microarray technology (which is NOT a sequencer, but is suitable only for interrogating up to about 1.6 M single-letter "structural variants") to "full sequencing of all 6.2 Bn A,C,T,G letters of a human genome" (like Roche's 454, Illumina's Genome Analyzer and Life's own SOLiD, with Complete Genomics in production and Pacific Biosciences in beta production), the "Personal Genome Machine" by Life/Ion Torrent caters to a very precious segment of the market. The "fractal defects" (peek at the Cold Spring Harbor presentation by Pellionisz) are much, much larger "structural variants" than single-letter SNPs (typically they are 150-350-letter oligos) - precisely in the range of the capabilities of the PGM. Moreover, the IP-owner of "The Fractal Approach", HolGenTech, Inc., is built on the dovetailing economic driver, "Leveraging the High Performance Computer Industry" for Genomics - by focusing on pure-play Genome Analytics software. Note that the Personal Genome Machine is a "Sequencer" - one that sorely needs another box (as a washer needs a dryer) to take the raw sequence data and provide Analytics and Interpretation: either "diagnosis" in hospital settings (FDA permitting), or "Genome-based Product Advertisements" (see the YouTube "Shop for your Life") that require no clearance from the medical establishment - yet accomplish the kind of "democratization" that turns the PGM into a real "Consumer Sequencing Machine" by enabling consumers to use genomic information in their daily lives. This entry can be debated in FaceBook of Andras Pellionisz]

^ back to top


Break Out: Pacific Biosciences Team Identifies Asian Origin for Haitian Cholera Bug

By Kevin Davies
December 9, 2010

In a dramatic piece of ultra-quick genetic detective work, next-generation sequencing company Pacific Biosciences has decoded the sequence of the strain of bacteria responsible for the deadly cholera outbreak in Haiti. The findings, which confirm the putative Asian origin for the devastating disease, are published online in the New England Journal of Medicine today.

The project was led by physician scientists at Harvard Medical School and Massachusetts General Hospital (MGH), including Matthew Waldor, John Mekalanos, Stephen Calderwood and Morton Swartz. “This understanding has important public health policy implications for preventing cholera outbreaks in the future,” says Mekalanos.

Cholera was first detected in Haiti in mid October, spreading across the country and into the Dominican Republic. Nearly 2,000 people have died from the outbreak, with no end in sight. Shortly after the outbreak, Waldor contacted the Centers for Disease Control and Prevention (CDC) and offered to sequence the bacterial strain using Illumina’s technology. Waldor says the CDC initially said he could have the strain, but five days later, changed their mind (citing political reasons) and said they were going to do it themselves. “At that point, I thought we were out of the game,” says Waldor.

CDC subsequently announced that using pulse-field gel electrophoresis fingerprinting technology, the strain was consistent with a south Asian origin. “But from a pure scientific point-of-view, that’s hearsay,” Waldor, who is professor of medicine at Harvard Medical School and an investigator with the Howard Hughes Medical Institute, told Bio-IT World. “What are their controls? Pulse-field gel electrophoresis has nothing like the depth of a full genome sequence.”

But by then, Waldor and colleagues were already putting the finishing touches to their manuscript. Two weeks earlier, two MGH physicians -- pediatrician Jason Harris and Richelle Charles -- had returned from Haiti with samples they’d collected from a hospital. But who would do the sequencing?

That Was the Week

Two days earlier, on Saturday November 6, Waldor emailed a speculative inquiry to the PacBio website. “I knew they had some exciting technology, my understanding was it was very useful for resequencing bacterial genomes.” While he was fishing around on the PacBio Web site, Waldor noticed that one of his colleagues at the Brigham & Women’s Hospital – Joseph Bonventre – was on the PacBio advisory board.

“So I called him up,” Waldor continues. “He was in his office that Saturday, just like me, I told him the story, and he said, ‘let me make a phone call.’ Literally five minutes later, the CEO of PacBio, Hugh Martin, called me up, and said, ‘that sounds very interesting. Let me talk to Eric Schadt and my team.’ We got the strains on Monday November 8. Eric and the CTO called me that day and said they’d be interested in collaborating.”

“We’re going all in!” Waldor recalls Schadt telling him. “They went all in, I must say.”

Waldor’s team grew up the Vibrio cholerae strains on November 8, and the DNA samples arrived at PacBio in California on Wednesday, November 10. “We got a good idea of the [identity of the] two Haitian strains on the evening of November 12. We sent three other strains for comparison, including a true resequencing of the canonical strain.”

Each of the five strains took about one day to sequence to about 60X coverage. “They did an outstanding job in the analysis,” says Waldor. “Most of the credit for this project goes to Eric and his team.”

“The rapidity and depth of the sequence using this 3rd-generation sequencing technology has enormous potential to transform how we can analyze outbreaks of infectious disease and even the prediction of future outbreaks because of the power of their technology.”

According to PacBio, the five cholera genomes were sequenced on November 12 to 12-15X coverage in less than two hours. [This is the kind of speed I predicted in my 2008 Google Tech Talk YouTube as vital for deploying sequencing in real-life emergencies - AJP]. Further runs bumped up the coverage to 60X over the course of the day. Over the next three days, the sequence data were subjected to in-depth analysis, including genome assembly, annotation, and sequence comparisons, including comparisons to nearly two dozen published cholera genomes.

Subsequent bioinformatic analysis confirmed earlier hints matching the Haitian cholera outbreak to the “El Tor O1” variant from South Asia. This strain has never been documented in the Caribbean or Latin America, suggesting that a recent visitor to the island, possibly a volunteer or a United Nations peacekeeper helping relief efforts after the earthquake, could have inadvertently carried the bacteria to Haiti from outside Latin America.
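[The logic of such a comparison can be illustrated with a toy sketch - Python; the variant sets below are invented stand-ins, whereas the actual study compared full assemblies against nearly two dozen published cholera genomes - in which candidate strains are ranked by the fraction of variant calls they share with the outbreak strain - AJP]

```python
# Toy strain comparison by shared variants (Jaccard similarity).
haitian = {"var1", "var4", "var7", "var9"}

references = {
    "South Asian El Tor O1": {"var1", "var4", "var7", "var8"},
    "Latin American isolate": {"var2", "var3", "var9"},
}

def jaccard(a, b):
    """Fraction of variants shared between two strains."""
    return len(a & b) / len(a | b)

for name, variants in references.items():
    print(f"{name}: {jaccard(haitian, variants):.2f}")
# South Asian El Tor O1: 0.60   <- closest match
# Latin American isolate: 0.17
```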

“Our data strongly suggest that the Haitian epidemic began with the introduction into Haiti of a cholera strain from a distant geographic source by human activity,” is how Waldor puts it. The results disprove another possibility, namely that the strain arose from the local aquatic environment.

The identification of the Haitian strain has important implications regarding vaccination, says Waldor. “By showing this strain is closely related to a south Asian strain, and not close to Latin American isolates, it shows that human activities – food or water brought from South Asia – led to this epidemic, and not transfer from Latin America. That’s a conclusion that allows us to alter our policies in the future to prevent such a thing. For instance, relief workers or security forces should be deployed where there is no domestic or endemic cholera. Otherwise, workers should be screened and/or vaccinated, so they can’t bring it in.”

Speed of analysis is crucial in such situations, says Jason Harris, requiring a technology “that could immediately provide comprehensive genomic information about this virulent strain and quickly get it into the hands of the global health and research community. In the initial stages of a major epidemic, real time is the speed we need to be working in order to have the greatest impact on saving lives.”

From PacBio’s perspective, Schadt says that “real-time monitoring” of pathogens opens the door to using his firm’s technology as “a routine surveillance method, for public health protection in addition to pandemic prevention and response.”

Warning Sign

Just last month, Waldor and colleagues published a perspective in the New England Journal advocating the establishment of a cholera vaccine stockpile in the United States to be used to counter outbreaks such as the one in Haiti. There are an estimated 3-5 million cases of cholera each year, resulting in about 100,000 deaths.

“The resistance to vaccination is truly baffling,” Waldor said at the time. The Harvard/PacBio results raise another troubling possibility: expansion of the epidemic, with the replacement of the currently endemic strains by much more threatening variants. “That would be a deeply troubling outcome,” says Mekalanos. “A cholera vaccination campaign might not only control the disease but also minimize the dissemination beyond the shores of Hispaniola.”

The scientific manuscript was drafted over a few days and submitted to NEJM on November 19. The paper was formally accepted on December 1 and published December 9. “That’s like my record,” says Waldor.

Further Reading: Chin, C-S. et al. “The origin of the Haitian cholera outbreak strain.” New England Journal of Medicine, December 9, 2010.

[PacBio [PACB] stock shot up by about 10% upon the news of this historical landmark in applying rapid genome sequencing to the control of pandemics. This is the first major example of how "time-critical" sequencing could make a global difference. This entry can be debated in FaceBook of Andras Pellionisz]

^ back to top


Which Is a Better Buy: Complete Genomics or Pacific Biosciences?

By Brian Orelli | More Articles

November 29, 2010

Today's battle pits recent DNA sequencing IPOs Complete Genomics (Nasdaq: GNOM) against Pacific Biosciences of California (Nasdaq: PACB). Which one is the better buy? Let's have a closer look.

What they do

Technically, Complete Genomics and PacBio aren't direct competitors, but they're close enough. Complete Genomics specializes in sequencing DNA for researchers; PacBio sells DNA sequencing equipment to researchers. It comes down to whether researchers will outsource their sequencing or do it themselves, so the two end up competing directly for the funds researchers have to carry out experiments.

And the two have plenty of additional competition. Illumina (Nasdaq: ILMN), Life Technologies (Nasdaq: LIFE), and Roche have been selling DNA sequencers for a while. Illumina also offers a sequencing service direct to patients.

Both Complete Genomics and PacBio are still in the ramping-up stage. Neither is turning a profit nor should you expect one soon. Fortunately, they have an influx of cash from their IPOs to keep things going for a while.

As of the end of September, Complete Genomics had sequenced 400 genomes to date, 300 of which were completed in the third quarter. At the time, the company had a backlog of more than 800 genomes.

PacBio had sold seven of its limited production models as of the middle of September and had orders for four more that were scheduled to ship by the end of the year. We should get an update on the commercial launch of its machine when PacBio has its earnings conference call tomorrow after the stock market closes.

What investors think

One of the things David Gardner looks for when picking stocks for the Motley Fool Rule Breakers newsletter is strong past price appreciation. This isn't technical analysis mumbo jumbo, but there's something to be said for a company that other investors have confidence in.

Complete Genomics and PacBio don't have a very long history, but so far their ability to catch the fancy of investors has been fairly limited.

Company              Expected IPO range    Actual IPO price    Price close Nov. 26
Complete Genomics    $12-$14               $9                  $7.76
PacBio               $15-$17               $16                 $11.53

What this Fool thinks

Investors are rightfully timid about the DNA sequencing hype. Remember how the Human Genome Project was going to save the world? Human Genome Sciences (Nasdaq: HGSI) was worth more than $100 on a split-adjusted basis in early 2000. Ten years later, with the company on the verge of getting its first drug approved, the stock is trading at only $25.

Famous last words or not, I think this time it's different. We know a lot more about what genes do now than we did 10 years ago, and the price of sequencing has come down considerably. At some point, getting a DNA sequence will be a routine part of a newborn's first checkup, and everyone who is already alive is going to have to catch up. There's a lot of DNA to be sequenced and therefore a lot of money to be made.

But investors do need to be careful. The market may be huge, but it shrinks as more people get their genomes sequenced, since your genome doesn't really change.

That's different from, say, the software market, where the pool of potential customers remains constant, since Microsoft can convince current customers to upgrade to newer software.

The best long-term hope for Complete Genomics and PacBio is probably to expand into other markets, just as Intuitive Surgical (Nasdaq: ISRG) has expanded the use of its robotic surgery machines into additional surgical procedures. Tumors often have genetic mutations, so they'll likely get sequenced to determine the best drugs to treat the cancer. And you could use DNA sequencing to identify viruses and bacteria.

Still, I think this is ultimately a boom-bust industry, albeit with the bust still many years away.

Which one?

If you're interested in trying to catch the boom and get out before the bust, both Complete Genomics and PacBio look like a good choice to benefit from an exponential increase in DNA sequencing.

It's too early to make a definitive call, but of the two, I like PacBio better because I'm not fond of the low-cost, high-volume business model. Sure, it's worked for Costco and Wal-Mart, but I like PacBio's razor and blade model -- sell the machine once and then continue to supply reagents year after year -- a little better.

Which one is your pick? Take the poll and let us know your reason in the comment box below.

[Actually, both in practice and in theory the answer is rather easy to tell. In practice, if one is limited to the price of a stock (an astonishingly narrow-minded approach), the cheaper stock is that of Complete Genomics. In theory, it is also easy to pick the winner from among all entrants of the "Sequencing" technology companies. Since sequences alone are absolutely worthless without interpretation, the winner will be the one that secures the key to the algorithmic (thus software-enabling) theoretical high ground. Which will it be out of the five listed (and further runner-up) companies? It may be one of them, some sharing key IP - or perhaps another company that is not even listed above. This entry can be debated in FaceBook of Andras Pellionisz]

^ back to top


A Geneticist's Cancer Crusade

The discoverer of the double-helix says the disease can be cured in his lifetime. He's 82.

By ALLYSIA FINLEY
The Wall Street Journal
November 27, 2010

'We should cure cancer," James Watson declares in a huff, and "we should have the courage to say that we can really do it." He adds a warning: "If we say we can't do it, we will create an atmosphere where we just let the FDA keep testing going so pitifully."

The man who discovered the double helix and gave birth to the field of modern genetics is now 82 years old. But he's not close to done with his life's work. He wants to win "the war on cancer," and thinks it can be won a whole lot faster than most cancer researchers or bureaucrats believe is possible.

Call it the last crusade of one of the nation's most indefatigable and productive scientists. In a long career, Dr. Watson was awarded the Nobel Prize in Physiology or Medicine (1962), garnered 36 honorary degrees and wrote 11 books, including the bestseller "The Double Helix" (1968), which recounts his dramatic quest with Francis Crick to determine the structure of DNA. He spent the early 1990s helping spearhead and direct the Human Genome Project to identify all human genes. And there's the 40 years he's devoted to transforming the Cold Spring Harbor Laboratory in Long Island, N.Y., from a ramshackle ruin into the elite cancer research institute it is today.

To hear Dr. Watson tell it, this determination began—at least formally—in Hyde Park at the age of 15. "The University of Chicago always used to be ranked in the U.S. News and World Report as the third most unpleasant college to go to in the United States," he chuckles. "It was a place that was knocking you down and expecting you to get up by yourself. Nobody was picking you up."

He says he's the better for it because it taught him how to be a leader, something he thinks there are too few of nowadays. "The United States is suffering from a massive lack of leadership. There are some very exceptional, good leaders. I'm not saying they don't exist, but to be a good leader you generally have to ruffle feathers," which Dr. Watson believes most people aren't willing to do.

He certainly is. Throughout his career, Dr. Watson has been a lightning rod for controversy, beginning with his unflattering portrayal of some fellow scientists as awkward and hostile in "The Double Helix." He later butted heads with fellow genetic researcher and founder of Celera Genomics, Craig Venter, over the commercialization of the human genome. Dr. Venter wanted to turn a buck for his firm by selling access to the human genome sequence. Dr. Watson thought the human genome database should be free to the public.

In 2003, Dr. Watson stirred up another academic kerfuffle when he joked that genetic engineering could be used to make all women beautiful and, more seriously, that gene therapy could one day cure stupidity. His 2007 book "Avoid Boring People: Lessons from a Life in Science" used the following words to describe former Harvard colleagues: "dinosaurs," "vapid," "mediocre" and "deadbeats."

But these days, Dr. Watson is sparring with the bureaucratic behemoth known as the FDA.

"The FDA has so many regulations," Dr. Watson says. "They don't want you to try a new thing if there's an old thing that might work. . . . So you take the old thing, but we know cancer changes over time and we would really like to get it whacked early, and not late. But the regulations are saying you can't do these things until we give you a lot of s— drugs," he snorts. "Shouldn't this be the patient's choice to say I would rather beat the odds with a total cure rather than just to know that I am going to have all my hair fall out and then after a year I'm dead? . . . Why should [FDA commissioner] Margaret Hamburg hold things up? There's the cynical answer it gives employment to lawyers."

Ah, the lawyers. "Right now America is being destroyed by its lawyers! Most of the people in Congress just want work for lawyers." He quickly adds: "I was born an Irish Democrat, so I wasn't born into a family which instinctively says these things. But my desire is to cure cancer. That's my only desire."

Dr. Watson may have been born an Irish Democrat, but he's more of a libertarian when it comes to scientific regulation. In his view, freer research enables greater innovation. "I do think one success of Northern Europe, which the United States came from, was its willingness to accept innovation in business practices like Adam Smith and the whole Enlightenment. It essentially made the merchant class free instead of controlled by the king and aristocracy. That was essential."

Another impediment to innovation today is funding. Dr. Watson thinks money is being spread around too much and not enough is going to the best brains. "Great wealth could make an enormous difference over the next decade if they sensibly support the scientific elite. Just the elite. Because the elite makes most of the progress," he says. "You should worry about people who produce really novel inventions, not pedantic hacks."

He also complains that too often government and private money help support scientists rather than cutting-edge science. "That's not the aim of our money—job research, job security. It should be job insecurity. Or hospital insecurity. Empty the breast cancer ward."

Dr. Watson's commitment to innovation is why most scientists at Cold Spring Harbor don't have tenure. Instead, they have security for five years. "We can't decide at the age of 40 that you're going to have a job for 30 years even though you're not producing much science."

Although Dr. Watson says leaders should think in the long term, he is critical of those who say we might find a cure for cancer in another 10 to 20 years. "If you say we can get somewhere in 10 to 20 years, there's no reason you shouldn't be saying 20 to 40, except then people would just give up hope. So 10 to 20 still maintains hope, but why not five to 10?" He adds that there's no reason we shouldn't know all of the genetic causes of major cancers in another few years.

"I want to see cancer cured in my lifetime. It might be. I would define cancer cured as instead of only 100,000 being saved by what we do today, only 100,000 people die. We shift the balance." Alas, modern research has merely reduced cancer mortality in the United States from about 700,000 per year to about 600,000. "We've still got 600,000, which is what the problem is."

The challenge now—at least by Dr. Watson's lights—is killing the mesenchymal cells that cause terminal cancer and figuring out why those cells have become chemotherapy-resistant. He says scientists and doctors are reluctant to tackle terminal cancer because there's so much that remains uncertain about its causes.

The treatment of early-stage cancer, however, is more certain since scientists have already pinpointed many of the genes that are associated with specific cancers. But they still don't know exactly which gene or gene mutations lead to terminal cancer [one reason they still don't know may be that NO GENE OR GENE MUTATION may be the cause of cancer - but HoloGenome Regulation derailed - AJP].

Dr. Watson points out that scientists are correctly looking at DNA before they treat early-stage cancer, since different drugs work on different genes. "If I had cancer I'd certainly want them to look at the DNA to see if there's a Ras gene or change in the Ras gene," which signals cell growth and proliferation.

He points to lung cancer as a case in point. Right now Dr. Watson says there are two types of treatments. The first is a new drug that treats cancers linked with the specific gene ALK, which has proven effective in trials. "I have no idea if it works beyond the first six months, but most drugs don't work in the first six months, so that's very good."

Then there's Tarceva and Iressa, two drugs that inhibit the epidermal growth factor receptor that causes cancer cells to divide. But "they only work on about 10% of people," who have specific mutations in their tumors. "And they work for about a year, and then you become resistant. And we don't have anything to treat the resistant cancer with."

So this is where we now stand in the war against cancer: at our own 20-yard line with a playbook full of untested, complicated plays. But Dr. Watson is optimistic that there could be a Hail Mary: a single drug that will work on all of the deadly mesenchymal cells. All of these cells, he notes, secrete a protein—interleukin-6—and in lab experiments, adding interleukin-6 to lung cancer cells that had been controlled by anti-cancer drugs made them resistant to the treatments.

Thus the key to curing cancer may be finding a drug that blocks interleukin-6. "While this would be wonderful if it turns out to be true," he says, he doesn't know if it is and he concedes, "it's not conventional wisdom."

Despite his crusade, it's not cancer that personally scares Dr. Watson. It's Alzheimer's disease. When he had his genome sequenced and published in 2007, he specifically asked that the doctors not reveal whether he had a gene that would make it virtually certain he would develop Alzheimer's. The mentally debilitating disease would make it impossible for him to continue his research—not to mention that it would estrange him from his family.

I ask Dr. Watson about the double-edged sword of DNA testing and its proliferation. As prices fall due to improved technology, the market for testing grows. Now companies like 23andMe are selling personal DNA tests for roughly $500. Simply spit in a tube, send it in, and in a few weeks you'll get back everything you've ever wanted to know about your genetic inheritance—and some stuff you'd probably rather not.

While such information might encourage some people to adopt healthier lifestyles or get more frequent check-ups, it could also cause undue anxiety. For example, what do you do when you learn at the age of 20 that you have a gene that makes you susceptible to Parkinson's disease—something that you can't do anything about?

To this question, Dr. Watson says that DNA testing "has to involve a lot of acquired common sense." But he doesn't think that common sense should come from government agencies. "I don't see how regulations can do it." Banning it because of potential negative repercussions would be futile.

Futile—now that's a word you won't often hear Dr. Watson use. "I'm going to look optimistically and of course sometimes it doesn't work," he says. But "you move forward through knowledge. You prevail through knowledge. I love the word prevail. Prevail!"

Ms. Finley is assistant editor of OpinionJournal.com.

[I am not going to mince words here - time for blunt talk, in order for Dr. Watson to renounce Crick's "Central Dogma" (which Dr. Watson actually never subscribed to). Jim Watson is a friend and a hero (my FaceBook icon shows me standing next to him and his Double Helix at Cold Spring Harbor, where I was invited to present a "breakthrough idea" - ditching both the "Junk DNA" and "Central Dogma" obsolete axioms, to be superseded by The Principle of Recursive Genome Function). Jim Watson is far too clever, and he never subscribed to Crick's "Central Dogma" - Jim just stated the truth (DNA>RNA>PROTEIN), never claiming that "the information never recurses back to DNA". However, now we need his leadership - not only to do away with the FDA's obsolete legal mandate of 1976, but more importantly for science to make a clean break and say with full force that cancers (the explosion of genome regulation) will never be "cured" unless we target the disease as what it is: a derailment of Recursive Genome Function. Dear Jim, renounce Crick's "Central Dogma" - "break down that wall"! Just look at cancerous, uncontrolled growth; some of it can be seen by the naked eye to be the result of Fractal Iterative Recursion gone out of control. This entry can be debated in FaceBook of Andras Pellionisz]

^ back to top


News from 23andMe: a bigger chip, a new subscription model and another discount drive

[GRAB IT NOW - $99 Offer Extended till December 25, or when supply runs out - AJP]

[$99 plus a monthly $5 covers 175 conditions - click on 23andMe website to see the video testimonials how a $99 gift may save the life of a loved one - AJP]

Category: 23andme • personal genomics

Posted on: November 24, 2010 8:45 AM, by Daniel MacArthur

Personal genomics company 23andMe has made some fairly major announcements this week: a brand new chip, a new product strategy (including a monthly subscription fee), and yet another discount push. What do these changes mean for existing and new customers?

The new chip

23andMe's new v3 chip is a substantial improvement over the v2 chip that most current customers were run on (the v2 was introduced back in September 2008). Firstly, the v3 chip includes nearly double the number of markers across the genome, meaning that it is able to "tag" a larger fraction of common genetic variants ("tagging" means that a marker on the chip is sufficiently highly correlated with other markers that it can be used to make a reasonable guess about someone's sequence at those other markers). Secondly, the chip now includes additional custom markers targeting specific variants that the company thinks will be of interest to its customers.

The technical details: the v3 chip is based on Illumina's HumanOmniExpress platform, which includes 733,202 genome-wide markers. The company has also added around 200,000 custom markers to the chip (vs ~30,000 on the v2 chip). We don't yet have full details on what those custom markers are, but there's a summary of the improvements over the v2 chip in the press release:

Increased coverage of drug metabolizing enzymes and transporters (DMET) as well as other genes associated with response to various drugs.

Increased coverage of gene markers associated with Cystic Fibrosis and other Mendelian diseases such as Tay-Sachs.

Denser coverage of the Human Leukocyte Antigen region, which contains genes related to many autoimmune conditions.

Deeper coverage of the HLA is particularly welcome - variants in this region are very strongly associated with many different complex human diseases (including virtually every auto-immune disease), and the v2 chip was missing several crucial markers.

The addition of more rare variants associated with Mendelian diseases like cystic fibrosis is entirely unsurprising, but the devil will be in the details: in the arena of carrier testing 23andMe is up against the extremely thorough and experimentally validated platform offered by pre-conception screening company Counsyl. It will be very interesting to see the degree to which 23andMe focuses on the carrier testing angle in their marketing of the v3.

More power for imputation

From the perspective of those of us simply interested in squeezing as much information as possible out of our genetic data, the v3 chip is a welcome arrival. The additional markers present on the chip will substantially improve the power of genotype imputation - that is, making a "best guess" of our sequence at markers not present on the chip using information from tagging variants.

The HumanOmniExpress platform has some decent power here: in European and East Asian populations, 60-70% of all of the SNPs with a frequency above 5% found in the 1000 Genomes pilot project are tagged by a marker on the chip (in this context, "tagged" means "has a correlation of 80% or greater"). In effect, that means that being analysed at the one million markers on this chip allows you to make a decent inference of your sequence at around another 4.5 million other positions in your genome.
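[A toy illustration of that "best guess" - emphatically not 23andMe's actual pipeline, which relies on haplotype-based methods, and every probability below is invented: an untyped SNP is assigned the genotype that is most probable given the genotype observed at a correlated tag SNP - AJP]

```python
# Toy single-marker imputation: best-guess an untyped SNP from one tag SNP.
# Real pipelines use haplotype reference panels (e.g. from 1000 Genomes);
# the conditional probabilities below are invented for illustration only.
cond_prob = {
    # P(untyped genotype | tag genotype), reflecting strong LD (r^2 ~ 0.8)
    "AA": {"CC": 0.90, "CT": 0.09, "TT": 0.01},
    "AG": {"CC": 0.10, "CT": 0.85, "TT": 0.05},
    "GG": {"CC": 0.01, "CT": 0.09, "TT": 0.90},
}

def impute(tag_genotype):
    """Return the most probable untyped genotype and its probability."""
    dist = cond_prob[tag_genotype]
    best = max(dist, key=dist.get)
    return best, dist[best]

print(impute("AG"))  # ('CT', 0.85) - a statistical guess, not a measurement
```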

At the recent American Society of Human Genetics meeting, 23andMe presenter David Hinds suggested that the medium-term future for 23andMe rested not in moving to sequencing, but rather on expanding the role of genotype imputation. The new chip will certainly help with that. However, it's worth emphasising that imputation is not a replacement for sequencing: it is only accurate for markers that are reasonably common in the population, meaning that it will miss most of the rare genetic variants present in your genome.

However, improved imputation with the extra markers on the v3 chip will mean that 23andMe should be able to do a decent job of predicting customer genotypes at the positions we currently know the most about - those arising from genome-wide association studies of common, complex diseases. I expect that many customers will see changes to their disease risk profiles as a result of the move to the new chip.

Over at Genomes Unzipped, we've already been looking at various approaches to imputation from our 23andMe v2 data, and we'll put a post together soon looking at how this will improve with content from the v3 chip.

The new product strategy

There are two interesting things that 23andMe has done with the new product line: firstly, it has reversed the transient division of its products into separate Health and Ancestry components; and it has introduced a subscription model in which customers pay $5/month for updates to their account as new research findings become available (previously, customers paid a flat purchase fee and were then entitled to free updates).

The recombining of the Health and Ancestry products into a single Complete package is an extremely interesting move. As Dan Vorhaus notes, the previous separation of the two product lines was plausibly interpreted as a way for the company to pre-empt the possibility of a regulatory crackdown by the FDA: if regulators hammered the company's ability to offer health-relevant tests directly to consumers, 23andMe could easily switch to its Ancestry product to maintain a revenue stream.

In the currently uncertain regulatory environment, the decision to reverse this division is an unexpected one. It certainly appears that 23andMe - flush with cash following a successful $22M funding round - is somewhat more confident than I am about the regulatory future for health-relevant genetic tests; I hope that confidence turns out to be warranted. [I am much more of an optimist, since 23andMe could easily outsource their services to Asia or even Central Europe - AJP]

Subscription fees: good for customers

The decision to add a subscription fee may prove unpopular with customers (and has already received a qualified thumbs down from blogger Dienekes, albeit for perfectly sensible reasons). However, a business model based on providing continuous product updates that customers don't pay for has never really looked like a viable long-term business model.

I personally see a subscription model as a positive move: it provides a steadier revenue stream for personal genomics companies, which means less focus on splashy discount drives. It also provides more of a financial incentive for the company to improve the ongoing experience of customers: under the current deal customers are locked in for the first 12 months, but after that 23andMe will need to convince them that it's worth continuing to pay for additional content and features.

Other personal genomics companies (e.g. Navigenics) have long relied on some form of a subscription model, but typically at a higher cost. I think 23andMe is hitting a pretty reasonable price point here: I suspect $60/year would be seen by most customers as a fair price.

OMG discount!

That doesn't mean that 23andMe has abandoned the discount drive approach just yet, of course: they're currently offering v3 kits for just $99 (vs the retail price of $499), which must be purchased along with the previously mentioned 12-month subscription fee of $60. Non-US customers can also expect a ~$70 postage fee, based on comments on Twitter.

Anyone who missed out on the DNA Day sale and is keen to take advantage of the v3 content would be well-advised to get in quickly. The discount code is B84YAG.

[Terrific Holiday Gift - grab it TODAY (offer is still good till November 29, or when the supplies run out)! This entry can be debated in FaceBook of Andras Pellionisz]

^ back to top


This Recent IPO Could Soar [as money for "Analytics" makes "Sequencing" sustainable - AJP]

[This double IPO could soar? - Yes, when funds start pouring into "Analytics", to make "Sequencing" business sustainable - AJP]

Tuesday, November 16, 2010
Street Authority

The initial public offering (IPO) market continues to heat up with deals coming this week for GM (NYSE: GM), Booz Allen (NYSE: BAH), Caesars Entertainment (NYSE: CZR) and a half dozen other firms. The flurry of deals puts us on track for the most robust quarter for IPOs in more than two years. And looking at the pipeline of new deal registrations, the first quarter of 2011 may be even hotter.

I recently looked at a strategy that uses analyst research to find stocks about to pop. [See: "The Secret Way to Play IPOs"]

Yet that's not the only way to look for upside among recent new deals. You can also scan lists for "broken IPOs," which are firms that have been public for a short while and are drifting lower while investors focus on more established companies.

Last month, I took a look at top-performing IPOs. As I wrote back then, "many new IPOs take time to find their sea legs and only take off well after their debuts. In fact, every single stock [mentioned in that piece] came out of the gate with a whimper and only started rising many weeks or months after their debut."

The stocks in the table below are all broken IPOs, each trading at least 15% below its IPO offering price. I've pored through the list and found the best rebound candidate.

Complete Genomics (Nasdaq: GNOM)

Any company that struggles to fetch a desired IPO price is a conundrum for investors. On the one hand, a lower-than-expected price is a sign that investor demand just isn't there. On the other hand, you've got a chance to buy a stock at a cheaper price than investment bankers have recently assessed. Case in point, Complete Genomics, which hoped to sell shares for $12 to $14, had to settle for a $9 offering price last Friday, and the stock is now down to $7. That's just half the high end of the expected range of pricing. The weak demand may be due to the fact that rival Pacific Biosciences (Nasdaq: PACB) had just pulled off an IPO weeks earlier, snatching the attention of any fund managers that buy these kinds of companies.

Complete Genomics is involved in DNA sequencing. While other firms like Illumina (Nasdaq: ILMN) and Pacific Biosciences sell equipment to scientists, Complete Genomics acts as a service bureau, performing third-party DNA sequencing services.

Why the tepid IPO reception? Complete Genomics is just starting to generate sales and investors fear that quarterly losses will continue for the next year or two, setting the stage for another round of capital-raising. Ideally, the company would have waited until sales started building and losses started shrinking, but its backers likely balked at putting any more money into the company.

Yet this stock has all the makings of an IPO rebounder, as the firm's underwriters, led by Jefferies, get set to publish initial reports on the company in early December. You can expect to see bullish forecasts of projected sales growth rates, and if you look out far enough, fast-rising profits.

Analysts are likely to note that Complete Genomics' DNA sequencing approach may prove to be very cost-effective and capable of high market share. Industry leader Illumina can analyze an entire sequence of DNA strands for around $10,000 in materials. Complete Genomics thinks it can do it for just $4,500. And over time, prices could drop well below that level, making DNA sequencing for the masses more feasible.

Action to Take --> Keep an eye on new IPOs. They often stumble out of the gate, giving the false impression that they are unworthy investment candidates. Of the recent crop of IPO laggards, Complete Genomics appears to have the greatest potential upside.

With a broken IPO and scant revenues, investors will need to focus on the company's technology value. Complete Genomics is valued at less than $150 million, roughly $20 million less than the money spent developing its technology platform. The revenue profile tells you that this is as risky as biotech stocks get. But if the company can make headway in the space, investors may start to make comparisons to Illumina, which is valued at $7.2 billion -- 50 times more than Complete Genomics.

-- David Sterman

[While a mere $10 M Round A or M&A (with IP) investment in a "pure-play DNA Analytics Company" like HolGenTech, Inc. (which leverages HPC for Genomics) could yield a decisive advantage to a "Sequencer" company (if such a deal were exclusive), the compelling argument above - that the two fresh "pure-play" Sequencer companies are grossly undervalued (by a factor of as much as 50x) and thereby present a historic investment opportunity - assumes that long-public genome companies (Roche/454, Illumina, Life Technologies/Ion Torrent) will not try to take the high ground of "Analytics" themselves, and thus force Complete Genomics (as its own auditor warned investors) out of business before it could take off. That assumption may be mistaken - as may be the belief that a key "pure-play DNA Analytics Company" would do an "exclusive" deal, rather than go for the easily $10 B opportunity of its Genome Computers catering to all Sequencers on a non-exclusive basis (see below the explosive global market of Sequencers, humming with an eye on $1,000 sequences - but in dire need of the $1 M interpretation). This entry can be debated in FaceBook of Andras Pellionisz]

^ back to top


BGI – China’s Genomics Center Has A Hand in Everything

Singularity Hub
November 11th, 2010 by Aaron Saenz

[See interactive (zooming) World Map of Sequencers here - AJP]

When it comes to genomics, China seems a little like the proverbial kid in the candy store – she wants a taste of everything. Of course, unlike the child, China might be making a bid to own the candy store outright as well. The Beijing Genomics Institute (BGI), now located in Shenzhen, is the leading genomics facility in China, and in all of Asia. BGI has striven to make a name for itself in every major international genome sequencing project of the last decade: the International Human Genome Project, the International Human HapMap Project, sequencing SARS, the Sino-British Chicken Genome Project, etc. It was also responsible for completely sequencing the rice genome, the silkworm genome, the giant panda genome... the list goes on and on. By the end of this year BGI will have 128 of Illumina's HiSeq 2000 platforms, 27 of AB's SOLiD 4 systems, and many other sequencing devices. At full capacity this means they will be capable of the equivalent of 10,000+ human genomes per year. And they are still growing. BGI may not be the largest genomics facility in the world, but it has phenomenal support from its government, ambition to expand quickly, and a hand in dozens of major sequencing projects. You can't talk about the future of genetics without talking about China.

In late 1999 the Beijing Genomics Institute started to build China into a world leader of genetic research. In the decade that’s elapsed since, they’ve put their name on some major developments...

Since its inception, BGI has had a very ambitious attitude when it came to participating in world genomics. Every time they were presented with a new project, they basically said, sure, we’ll be a part of that. They contributed 1% to the Human Genome Project’s reference genome, and 10% to the Human HapMap Project. It was like they never met a sequencing project they didn’t like.

That attitude hasn’t seemed to wane at all. BGI is spearheading efforts that will sequence a wide variety of organisms. There’s the 1000Genomes Project aimed at producing a wide database of human genomes from people all over the world. They are also working to sequence 1000 plants and animals, and have already completed 14+ of the former and around 50 of the latter. In 2009, BGI launched its effort to map the genomes of 10,000 microbes – they’ve managed 800 bacteria, 100 fungi, and 100 viruses so far, with more finished every day. They are looking for collaborators to sequence 1000 Mendelian Disorders in humans. Completion of large genetic databases like these will be part of what could empower genetic research to finally make the discoveries the public has been waiting for since the first human genome was sequenced a decade ago.

Even while BGI is a testament to Chinese ambitions in genomics, it also speaks to the prominence of the US in that field. BGI relies heavily, almost exclusively, on sequencing technology rooted in California. Illumina's HiSeq 2000 and Applied Biosystems' SOLiD 4 form the bulk of BGI's machine workforce. To be fair, most of the world has focused on using these systems as well, and BGI is working to expand its hardware horizons, collaborating with OpGen on new optical sequencing methods. Still, when one sees BGI's successes in genomics one also has to acknowledge that such capabilities weren't developed in a vacuum. China's sequencing projects, like every nation's sequencing projects, have worked as part of a larger global effort.

The only real question, then, is how much will China simply be a part of that worldwide phenomenon, and how much will it lead? Even if the hardware is largely developed by California companies, those companies themselves are international entities [suffice it to point out that Life Technologies' just-announced 5500 SOLiD sequencers are co-produced with Hitachi - AJP]. BGI is officially part of the sequencing club, recognized by Illumina as one of its associated world-class facilities. BGI isn't some second-tier group working its way to the top; it's already at the top, sharing space with the other leading genomics institutions around the world. If BGI and China continue to dedicate money, labor, and insight to genomics, they'll be able to set the agenda for many sequencing projects around the globe. Actually, they're already doing this with their various sequencing projects for microorganisms, plants, animals, and humans.

I know that many of us will view BGI’s growing importance through the lens of competitive national spirit. Yet no matter your feelings about China, you have to view BGI’s accomplishments as wonderful gifts to the global scientific community. Every genomics center around the world is going to have different specialties (Complete Genomics is dedicated to bringing down the costs of human whole genome sequencing, for instance) and it’s only through combining these disparate efforts that we’ll create the general understanding we need to move the field of genetics forward. It’s a team effort. Yay China, Yay us.

[If one had to point out an outstanding difference between BGI and the much older schools of the UK and USA, it would be Genome Informatics. While it is true that both Illumina's and Life Technologies' Sequencers are based on biochemistry technology from the USA, at BGI in Shenzhen (in the backyard of Hong Kong) 3,000 Genome Informatics specialists are working busily, with an average age of 27 (thus by definition none of them can be "old schoolers"). Hong Kong and Seoul have some of the very best Neural Network specialists in the World. Once they devote full attention to "The Principle of Recursive Genome Function", China will set PostModern Genomics onto an entirely new trajectory of hypergrowth. - This entry can be debated in FaceBook of Andras Pellionisz - AJP.]

^ back to top


Most doctors are behind the learning curve on genetic tests - the $1,000 sequence and the $1M interpretation

Updated 10/24/2010 8:37 PM

USA Today

By Rita Rubin, USA TODAY

GREENWICH, Conn. — It's ironic that Steven Murphy's medical practice is located in this town's Putnam Hill historic district.

His Maple Avenue building, the Dr. Hyde House, is a cozy hodgepodge of architectural styles, with stone and stucco walls, a double bay corner window and orange clay roof tiles. It has housed doctors' offices for a century.

Although Murphy's surroundings may be old-fashioned, his practice is not. Murphy, a board-certified internist who writes a blog called The Gene Sherpa, is one of a small minority of doctors who use genetic tests to help manage their patients' care.

"The majority of people we see have a very strong family history of X, Y or Z disease," says Murphy, who'll be 34 this week. He doesn't bring up genetic testing until after taking a detailed personal and family medical history and assessing such risk factors as cholesterol and blood pressure. "I tell them there are lots of ways to dig deeper. Then I also tell them the limitations."

Other patients show up with the results of personal genome tests, costing upward of $1,000, they had ordered online from companies such as 23andMe and Navigenics. They want to know what it all means. "We like to call it the thousand-dollar genome with the million-dollar interpretation," Murphy says.

Having trained in genetics as well as in internal medicine, he's much further on the learning curve than most doctors.

Since the Human Genome Project was completed in 2003, the introduction of new genetic tests has far outpaced the ability of doctors — who typically have little training in genetics — to figure out what to do with them. Some tests are marketed to help predict disease risk, others to determine how patients might respond to certain medications.

"This is going to become a very big part of mainstream medicine, and we really aren't ready for it," says human geneticist Michael Christman, president and CEO of the Coriell Institute for Medical Research, a non-profit research center in Camden, N.J.

A deluge of data

Eric Topol, director of the Scripps Translational Science Institute in La Jolla, Calif., cites what he calls "a really great paradox."

"Ask patients 'whom do you trust with your genomic data?' and 90% say their physicians," Topol, a cardiologist, says. Yet, when Medco Health Solutions, the pharmacy benefit manager, and the American Medical Association surveyed more than 10,000 doctors, only 10% said they felt adequately informed and trained to use genetic testing in making choices about medications.

That physician survey was conducted two years ago, but Topol, Christman and others in the field doubt much has changed.

Take the blockbuster drug Plavix. In March, the Food and Drug Administration added the strongest type of warning, a black box, to the label of Plavix, which is taken by millions of Americans who have had stents inserted to keep their coronary arteries open. Plavix is supposed to reduce the risk of blood clots in those stents, but, as the boxed warning notes, some patients might not effectively convert the drug to its active form in the body.

The warning points out that a genetic test can identify those patients, who might need a higher dose of Plavix or a different drug. Yet, Christman says, "even in tertiary academic medical centers, you don't have routine testing for Plavix efficacy."

On the other hand, Topol says, doctors have ordered 250,000 of the aggressively marketed $100 tests for a gene called KIF6. One KIF6 variation was thought to raise heart disease risk by up to 55%, but, Topol says, a study this month in the Journal of the American College of Cardiology shot that down.

Considering that there are thousands of genetic tests, doctors might be forgiven for feeling overwhelmed, especially because so many questions remain.

"We have way more data than we have knowledge," says Clay Marsh, a lung and critical-care doctor who directs the Center for Personalized Health Care at The Ohio State University College of Medicine in Columbus. "The biology is struggling to keep up with the technology."

Though some diseases, such as sickle cell and cystic fibrosis, are caused by mutations in a single gene, many common conditions arise from the interplay of a variety of genes and lifestyle and environmental factors, not all of which have been identified.

"Having a family history of heart disease increases your risk of heart disease more than some of these (genetic) markers they test for," Murphy says. "Then, just because you have that marker doesn't mean that's what caused the heart disease in your family. That's one thing I teach residents: No gene is an island."

Right test for one patient

Murphy's first patient on a sunny fall afternoon sat in her home nearly 1,000 miles away, in the village of Niantic, Ill., smack dab in the middle of that state.

Wanda Conner, 72, never met a medication that agreed with her. She heard about the Genelex test for five genes involved in drug sensitivity from her dental hygienist and figured it might provide some answers. So she swabbed some cells from her cheek and mailed them to the company's lab in Seattle.

Besides seeing his own patients, Murphy reviews test results by phone for Genelex customers. He scrolled through Conner's on his computer. Turns out that she carried variations in three of the genes for which she was tested that could affect her response to certain medications.

Murphy touched on the types of drugs that Conner wouldn't process normally if taken. He advised her to stay away from SSRIs, or selective serotonin reuptake inhibitors, a class of antidepressants, and cautioned her that she might experience side effects if she took beta-blockers, a class of heart medications. He promised to fax his report to her doctors.

"They're fascinated with this," Conner says of her doctors in Illinois, "but they don't know much about it. In fact, I probably know more than they do."

Christman's and Topol's organizations hope to change that. "The purpose of our study ... is to determine the best practices, from soup to nuts, in using personal genome information in clinical care," Christman says. "What are the best information technology systems to deliver this?"

For example, he says, when it comes to genetic factors affecting drug response, it probably makes more sense for pharmacists, not genetics counselors, to advise doctors or patients.

The Coriell Personalized Medicine Collaborative is halfway toward its goal of enrolling 10,000 people. Many are doctors. "We're measuring a lot of genetic information about them," Christman says. Genetics counselors explain the results, usually by e-mail or phone, which participants seem to prefer over a face-to-face visit.

The next 5,000 participants will have already been diagnosed with breast or prostate cancer or heart disease. The cancer patients will be recruited through Fox Chase Cancer Center in Philadelphia, the heart disease patients through Ohio State.

Coriell is sharing only results that patients or their doctors can do something about. Expert committees meet twice a year to review the latest findings about different genetic markers. "If somebody came out with an effective cure for Alzheimer's," Christman explains, "then we would report Alzheimer's risk."

In a related study, Coriell is investigating how best to educate doctors about genetic testing and how that affects what they do with results.

Meanwhile, Scripps plans to launch the College of Genomic Medicine, a free online physician training and accreditation program, early next year, Topol says. To become accredited, he says, doctors will spend five to eight hours reviewing materials developed by an international group of leaders in the field and then take a "highly interactive" test.

The genomic medicine college was born at last year's TEDMED, an annual medical technology and health care conference. There, Topol says, both he and Gregory Lucier, CEO of the San Diego-based Life Technologies, a leading supplier of gene-sequencing equipment to academic laboratories, delivered talks about the need to get the medical community up to speed.

As a result, the Life Sciences Foundation, the company's philanthropic arm, awarded Scripps a $600,000 grant to develop the genomic medicine college.

Topol expects interest in the program will be high.

"Consumers are coming into their physician with their genomic data," he says. "Physicians don't want to be trumped in their knowledge by the patient they're looking after. Instead of playing catch-up, they need to be in the leading front of knowledge."

[Whom are we kidding? Will doctors who spend "five to eight hours" be equipped to catch up with the output of, for example, the 3,000 full-time Genome Informatics specialists in just one place on the globe, the Beijing Genomics Institute? The task is simply nonsense without doctors' use of results prepared by High Performance Genome Computers. At your annual check-up, does your doctor conduct your actual blood test? Nonsense! The sample is sent to the lab, where super-sophisticated machines conduct the tests and deliver only the results. The doctor does not even have to look at those "within range". This entry can be debated in FaceBook of Andras Pellionisz - AJP.]

^ back to top


Forget About a Second Genomics Bubble: Complete Genomics Tumbles on IPO First Day

Xconomy
Luke Timmerman 11/11/10

Super-fast, super-cheap DNA sequencing technologies have made big news in biotech the last couple years. But the early returns have made it clear that investors haven’t gone ga-ga for genomics like they did a decade ago.

The latest data point arrived today in the form of Complete Genomics (NASDAQ: GNOM). This Mountain View, CA-based company, which aspires to lead the way in the quest to sequence entire human genomes for as little as $1,000, got a ho-hum reception from IPO investors this week. The company pared back its forecasted price from a range of $12 to $14, ultimately settling for an initial price of $9 per share. And in its first day of trading as a public company—on an overall down day for the markets—Complete Genomics stock fell 11 percent, closing at $8.03.

The deal still provides Complete Genomics with a much-needed shot of $54 million in fresh capital, which it plans to use to pitch its commercial sequencing service to researchers. But the company was originally hoping to raise as much as $86 million, so Complete Genomics is going to have to pursue this market on a leaner budget.

One of Complete Genomics’ archrivals, Menlo Park, CA-based Pacific Biosciences (NASDAQ: PACB), has also seen some wind come out of its sails. PacBio broke out with a $200 million IPO late last month, commanding a hefty $800 million market valuation at an initial price of $16 a share. Investors initially bid up PacBio’s shares to as high as $17.47, but the stock has since been on a downward slide, closing today at $12.51.

It’s nothing like the hype-driven period of 2000, in which first-generation genomics companies like Human Genome Sciences, Celera Genomics, Incyte, and Millennium saw their stocks enter triple-digit territory on notions that the genome would lead to new cures and personalized medicine, right around the corner. It didn’t happen, and for a nice little retrospective, check out this piece from Nature last March.

This will be a fascinating story to watch over the coming months and years, to see whether PacBio and Complete Genomics, as well as established players like San Diego-based Illumina (NASDAQ: ILMN) and Carlsbad, CA-based Life Technologies (NASDAQ: LIFE), will truly make gene sequencing so cheap and convenient that it’s really accessible to the average biologist and changes the way they think about running experiments. But it now looks clear that the two intriguing new entrants into the market will have to tap into this emerging arena without the luxury of being able to raise more cash at the snap of a finger.

[Some of us have warned against the unsustainability of the "Industrialization of Genomics" without proper "supply chain management". First, the Genome Based Economy is NOT about "running experiments" - "Sequencing" must be matched by "Analytics by Genome Computers", such that results can be useful for the ultimate markets of the ecosystem. Consumers should be directly involved in their own P4 Health Care, and Hospitals must be provided with both Sequencers and matching Genome Computers in order to be useful in diagnosis and personalized therapy, up to personalized cure. Biodefense, Agriculture and Synthetic Genomics are all branches of the "Genome Industry" - but (similar to the Nuclear Industry) it cannot be built on demonstrably false axioms of the underlying science. (Nuclear reactors and nuclear weapons were simply unthinkable under the obsolete axiom that "the atom neither splits nor fuses".) The predicted crash happened much earlier than projected, with the President's Science Advisor (Eric Lander) admitting to the hordes of workers at the ASHG convention in Washington, D.C. that the underlying axioms have been false (both he and I went on record with an understanding of genome structure and function in the mathematical, and thus software-enabling, terms of fractals). Just as outlined in the "Pellionisz" YouTube videos (2008, 2009, 2010) and in the peer-reviewed science paper "The Principle of Recursive Genome Function" (2008), industries are ready to take off - the Sequencing industry and Big IT (now involving Intel, Google, Microsoft, IBM, HP, Xilinx, Altera, Hitachi, Samsung, etc.), major Consumer Companies (Procter and Gamble, Johnson and Johnson, Nestle, etc.), and Big Pharma and Big Agriculture firms (such as Roche, Merck, Pfizer or Monsanto). However, without sound mathematical (thus software-enabling) genome informatics on one hand - to first understand how the hologenome functions before plunging into handling its malfunctions - and without industrial-strength supply chain management on the other - so that the supply of sequences does not glut our ability to process them - the Industrialization of Genomics will remain inherently wasteful and unable to bring out its tremendous potential. This entry can be debated in FaceBook of Andras Pellionisz - AJP.]

^ back to top


The Daily Start-Up: Gene Tests Attracting Money & Scrutiny [23andMe C round with J&J]

NOVEMBER 10, 2010, 10:56 AM ET
The Wall Street Journal
By Scott Austin

This morning’s roundup of the latest venture capital news and analysis across the Web:

The threat by the Food and Drug Administration to regulate direct-to-consumer genetic tests [a threat hollowed by the change of politics in Congress, making it extremely unlikely that the FDA might get an updated mandate to its 1976 legislation anytime soon - AJP] didn’t stop Johnson & Johnson and two venture firms from investing more than $22 million in 23andMe, whose services are designed to help consumers better understand what their genetic information says about their ancestry and disease risk. In July a Government Accountability Office report [a deeply flawed, admittedly non-scientific analysis of science - AJP] raised questions about the accuracy of these services’ results, and the FDA is moving to regulate them. Besides 23andMe, whose investors include Google Ventures and New Enterprise Associates, other venture-backed companies developing the genetic tests include deCODE Genetics, Navigenics and Pathway Genomics.

GroupMe is only a few months old, but the start-up that lets users send group text messages on their cellphones is already worth about $35 million, according to All Things Digital. That high valuation, set by GroupMe’s $9 million Series B round led by Khosla Ventures, resulted from a bidding war among prominent venture firms and reported acquisition interest from Twitter, Business Insider previously reported.

Two Groupon co-founders are sinking $1 million in a start-up that helps small businesses manage social-media tools like Twitter and Facebook, The Wall Street Journal reports. The money for Sprout Social comes from Lightbank, a seed fund managed by Eric Lefkofsky and Brad Keywell. Lightbank previously invested in Betterfly.com, which helps users find professional services, and Where I’ve Been, a Facebook application that lets people share travel information.

Rhode Island’s general treasurer-elect, Gina Raimondo, is officially cutting ties with Point Judith Capital, the firm she co-founded. The Providence Journal reports she will no longer make investments, and will set up a blind trust to manage her investment in the firm. Raimondo, who invested in health-care companies, resigned from her boards, which included GetWellNetwork, NABsys, Novare Surgical and Spirus Medical.

“The Deal Professor” explains why entrepreneurs should be sure to maintain control of their companies after raising venture capital. “The key for entrepreneurs in negotiations is to make sure that when they do raise V.C. money, they have options,” writes Steven M. Davidoff, a former corporate attorney who is now a professor at the University of Connecticut School of Law. “If they can get multiple term sheet offers, then they can negotiate to sell the smallest part of their company on the most lenient terms. If you only have one term sheet, you are not going to fare well.”

[This entry can be debated in FaceBook of Andras Pellionisz - AJP.]

^ back to top


NIH Chief Readies the Chopping Block
MedPage Today

Published: November 07, 2010

WASHINGTON -- The National Institutes of Health is considering ways to cut research grant funding in anticipation of possible budget restrictions, NIH Director Francis Collins, MD, PhD, said here Saturday.

"One area we have to look at more is whether our workforce is properly planned for," Collins said in a keynote address at the American Society of Human Genetics meeting.

"I don't think it's reasonable to assume that NIH is going to have another doubling [of its budget] anytime soon, and yet, we never tried to model what the workforce should look like in such [an economically difficult] environment."

One idea that has been raised is whether university faculty members receiving NIH grants should continue to have their salaries largely supported by those grants.

"One might make the case that it would be better for those funds perhaps to be available for other investigators," he said. "We at NIH are committed to looking at this in a careful way, and we're going to have a discussion about this at the NIH Institute Directors Leadership Forum coming up in three weeks."

He elaborated at a press conference after his address. "NIH is supporting an awful lot of salaries, and that seems fair to the degree that they are spending that amount of time on the research. Universities have also discovered that's a great way to build programs... but it may be in the long run that this may not be the best way... for research to be supported."

Despite his worries about the agency's future budget, Collins said he was excited by some of the new research opportunities at NIH. One example is the NIH's new Therapeutics for Rare and Neglected Diseases program, a collaboration among various NIH laboratories to take research findings and develop them to a point where private companies would be interested in getting the products to market.

The program currently has a budget of $24 million, but it is expected to soon grow to $50 million, Collins said. Diseases currently being worked on under the program include schistosomiasis/hookworm, Niemann-Pick Type C disease, hereditary inclusion body myopathy, sickle cell disease, and chronic lymphocytic leukemia.

One interesting compound being studied in the sickle cell project is 5-hydroxymethyl-2-furfural, which binds to the sickled hemoglobin and increases its oxygen affinity, thus allowing blood cells to hold onto oxygen. The work on this compound is now in the late pre-clinical stage, Collins said. "It's something fairly bold for a disease that hasn't attracted much private sector investment."

In general, the NIH has been sheltered from the winds of shifting political opinion, Collins said during the press conference. "For the most part people, regardless of their political party, are concerned about human health -- about themselves, their families, and their constituents. If one can pull medical research aside from hot-button issues like stem cells, most people regardless of political persuasion say 'Yeah, it's a good thing.'"

He acknowledged that the controversy surrounding human embryonic stem cell (hESC) research has been a problem for NIH. The agency is fighting litigation by medical researchers who allege that NIH's recent support of such research violates federal law and hurts adult stem cell research.

At the press conference, Collins noted that the controversy has put a damper on the NIH's efforts to hire a director for its new Center for Regenerative Medicine.

"We were hoping to recruit a world-renowned expert in stem cells to come and direct it [but] with the cloud over the whole field, it is difficult," Collins said. "It has been a factor in slowing down the process of trying to search for that director."

He added that his institution spends nearly three times as much money on adult stem cell research as it does on studies with hESCs -- in fiscal year 2009, the ratio was $397 million to $143 million.

Overall, Collins said, the NIH needs to do a better job of selling itself.

"Whether there is a general understanding that the government has been the main driver of medical research in the last five decades, I'm not sure," he said. "We have not done a good job of getting our brand name appreciated for what it does."

[It is certainly a fact that medical research was driven by the government for half a Century - just as the Internet was ramped up from nothing by initial support coming entirely from the government (defense). Now the time has come for Genomics to be integrated with Epigenomics in terms of Informatics (HoloGenomics) - and that integration will take off driven by private industries worldwide. Once Intel, AMD, Xilinx, Altera, Hitachi, IBM, Samsung, Illumina, Life Technologies, Roche and Merck (plus the fresh IPOs of Pacific Biosciences two weeks ago and Complete Genomics tomorrow) are all engaged, the landscape will change forever. - AJP.]

^ back to top


Next Generation Sequencing

BioCompare
Monday November 01, 2010

by Jeffrey M. Perkel

If you want to get a sense of the current state of the high-throughput sequencing market, look no further than this month's news.

First, the US Department of Energy's Joint Genome Institute mothballed the last of its fleet of Sanger chemistry-based sequencers, completing the transition to newer, faster, next-gen sequencers that has been in the works for several years.

"With these new sequencers incorporated into the production line over the last two years, our productivity has risen to 1 terabase in FY09; 5 Tb in FY10 and to a projected over 25 Tb in FY11," GenomeWeb quotes JGI Spokesman David Gilbert as saying. [1] "To put this in perspective, our total commitment to DOE in FY98 was 20 megabases, which we do now in a few minutes."

The second item was the initial data release from the 1000 Genomes Project Consortium, an effort to sequence the genomes of 1,000 humans and thereby get a handle on human sequence variation. In a report in the journal Nature, the Consortium detailed the sequencing and analysis of nearly 900 individual genomes (179 full genomes and 697 partial exomes), as part of the project's "pilot phase," using a blend of next-gen sequencers from Illumina, 454 (a Roche company), and Life Technologies. [2]

Remarkable as that achievement is, it represents just a fraction of next-gen sequencing output to date. According to an infographic accompanying the article, "at least 2,700 human genomes will have been completed by the end of this month [October 2010], and [the] total will rise to more than 30,000 by the end of 2011." [3]

The final news item: One of those 2,700 genomes belongs to none other than rocker Ozzy Osbourne, of MTV's The Osbournes and biting-the-head-off-a-live-bat fame, who wrote of the experience in the October 24 Sunday Times of London. According to Scientific American, [4]:

"I was curious," he wrote in his column. "Given the swimming pools of booze I've guzzled over the years—not to mention all of the cocaine, morphine, sleeping pills, cough syrup, LSD, Rohypnol…you name it—there's really no plausible medical reason why I should still be alive. Maybe my DNA could say why."

If the JGI announcement and the 1000 Genomes Project data release speak to the fact that sequencing whole genomes is, as Jay Therrien, vice president of commercial operations for next-gen sequencing at Life Technologies, puts it, "basically routine," the Osbourne sequencing project attests to how far there still is to go.

"We're in this era at the moment of celebrity genomics," says Daniel MacArthur, a UK-based postdoctoral fellow who blogs [5] and tweets [6] extensively about the next-gen sequencing industry. "That will persist for a while until the cost goes down enough that ordinary people can actually afford to do it. And I guess that's when it will get really interesting."

Of course, from a technology point-of-view, the next-gen sequencing arena has been interesting for years—Harvard geneticist George Church estimates the industry cost has plummeted about 10-fold per year for each of the past five years—and even if that pace is slowing a bit (Church estimates this year's improvement at between three and five fold) it continues to be so.
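[Illustrative arithmetic, sketched in Python: compounding Church's estimated annual declines shows how roughly 10-fold drops for five straight years amount to a five-orders-of-magnitude change. The per-year figures are the article's; the compounding is ours.]

    # Compounding Church's rough per-year estimates of sequencing cost declines.
    fold_per_year = 10
    years = 5
    total_drop = fold_per_year ** years
    print(total_drop)  # 100000: a ~100,000-fold drop over five years

    # If this year's pace slows to Church's 3- to 5-fold estimate,
    # the cumulative drop still grows, to 300,000x-500,000x:
    print(total_drop * 3, total_drop * 5)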

To wit: the rise of "personal" next-gen platforms. All three of the major sequencing companies, Life Technologies, Roche/454, and Illumina, have announced such devices, which provide a lower-cost, lower-throughput alternative for those researchers who would like to take advantage of next-gen sequencing, but have neither the resources nor the need for the industrial-scale equipment that previously was their only option.

"To keep the latest generation of Illumina fully loaded, you need to have a 400-gigabase-pair project. Most people don't have a 400-Gbp project," says Church, who is a scientific advisor for some 18 next-gen firms, including all six with commercial products (Dover Systems, Roche/454, Life Technologies, Illumina, Complete Genomics, and Helicos).

First out of the gate was Roche/454, which announced its GS Junior system late in 2009. Priced at around $100,000 (as compared to $500,000 for the company's top-of-the-line GS FLX), the GS Junior runs the same pyrosequencing chemistry as the GS FLX, but at a lower throughput: 100,000 parallel reactions, compared to one million on the GS FLX.

"It's a scaled down version of our big system," says Katie Montgomery, marketing communications manager at 454 Life Sciences.

At about 400 bases apiece, those reads currently lead the industry in terms of length. But in 2011, the company plans an upgrade to about a kilobase, says Montgomery, adding that this will be available to existing users as "a small hardware upgrade to accommodate the increased reagent volumes."

On Oct. 26, Illumina announced a new member of its sequencer line, as well. The HiSeq 1000 "is designed for researchers who want the ease of use, industry-leading cost per gigabase (Gb) and data rate of the HiSeq 2000 but do not currently require its throughput," the company said in a press release (Illumina was unavailable to comment for this article). [7]

This "single flow cell version" of the HiSeq 2000 "will deliver in excess of 100 Gb of data per run using paired 100 base pair reads, easily enabling the sequencing of a complete human genome in a single run," according to the release.

Finally, at the American Society of Human Genetics annual meeting this week, Life Technologies announced two new additions to its line of SOLiD sequencers. On the high end, the company is launching the SOLiD 5500xl. Built in collaboration with Hitachi, the 5500xl will generate twice the data of SOLiD 4 (200 Gbp per run) at half the cost ($6,000 per run) and in half the time (5-6 days vs 10-12), Therrien says.

"You can sequence an entire human genome at 30x coverage at a price of $3,000, which was unheard of just a year ago," says Therrien.

At the same time, the company also announced a personal option. Priced at $299,000 (vs $595,000 for the 5500xl), the SOLiD 5500 base system is essentially a single flow-cell version of the 5500xl.

Life Technologies is also gearing up to commercialize an entirely new sequencing technology this November. Based on its recent acquisition of Ion Torrent Systems for some $725 million, the Personal Genome Machine (PGM) will provide up to 10 megabases' worth of 100-base reads in just two hours for about $500, according to Therrien. (An upgrade to 100 Mbp per run is planned for release "early next year," he adds.)

That, Church notes, is 1,600 times more expensive per-base than the SOLiD 4. But the company, says Therrien, is positioning it as mid-way between a Sanger capillary electrophoresis-based instrument and the SOLiD, for applications such as bacterial and viral genomics and targeted amplicon sequencing. "What that gets you is a radical reduction in turnaround time for really what is a very large amount of sequence data," he says.
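[Church's ratio can be roughly reconstructed from the per-run figures in this article, as sketched below; his own inputs are not given, and the numbers here happen to be the 5500xl's rather than SOLiD 4's, so treat this as an illustration of the per-base arithmetic rather than his calculation.]

    # Rough per-base cost comparison (our arithmetic, per-run figures from the article).
    pgm_cost_usd, pgm_yield_bp = 500, 10e6        # Ion Torrent PGM, per run
    solid_cost_usd, solid_yield_bp = 6000, 200e9  # SOLiD 5500xl, per run

    pgm_per_base = pgm_cost_usd / pgm_yield_bp        # $5.0e-5 per base
    solid_per_base = solid_cost_usd / solid_yield_bp  # $3.0e-8 per base
    print(round(pgm_per_base / solid_per_base))       # ~1667: on the order of 1,600x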

The Ion Torrent consumable "is a computer chip that's been modified so we can flow biologics into it," Therrien says. Amplified DNA templates on silicon beads are flowed into that flow cell, where they sit in tiny wells. At the bottom of those wells is basically "the smallest pH meter in the world." As nucleotides are flowed into the reaction chamber one by one and added to the growing DNA chain by DNA polymerase, they release protons, causing a pH drop that registers as a change in voltage.

It's a design that requires no optics, no fluorescence, and no imaging; "We call it 'post-light sequencing,’" Therrien says. It is also, for that reason, considerably less expensive than other next-gen platforms, costing just $52,000 for the hardware.
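[The flow-based readout lends itself to a toy simulation. The Python sketch below is our simplification, not Ion Torrent's base-calling algorithm: nucleotides are flowed in a fixed order, a matching flow extends the strand through any homopolymer run, and the signal is proportional to the number of bases incorporated. Strand complementarity and signal noise are glossed over.]

    # Toy model of "post-light" flow sequencing and its signal readout.
    def simulate_flows(template, flow_order="TACG", n_flows=12):
        """Return (nucleotide, signal) for each flow over an idealized template."""
        signals, pos = [], 0
        for i in range(n_flows):
            nuc = flow_order[i % len(flow_order)]
            count = 0
            # A matching flow incorporates through the whole homopolymer run,
            # releasing one proton (one signal unit) per base added.
            while pos < len(template) and template[pos] == nuc:
                count += 1
                pos += 1
            signals.append((nuc, count))
        return signals

    def call_bases(signals):
        """Invert the flow signals back into the base sequence."""
        return "".join(nuc * count for nuc, count in signals)

    sig = simulate_flows("TTACGGA")
    print(call_bases(sig))  # -> TTACGGA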

"It's a clever system," says MacArthur. "I think it's really quite elegant, but how well it actually works in the field will be the big test."

(In related news, Roche/454 announced Monday a partnership with DNA Electronics "for the development of a low-cost, high-throughput DNA sequencing system," according to a press release. Details are sketchy, but the described system bears certain similarities to Ion Torrent's technology. According to the release, the system will use "inexpensive, highly scalable electrochemical detection"—as opposed to optical detection—and "leverages 454 Life Sciences' long read sequencing chemistry with DNA Electronics' unique knowledge of semiconductor design and expertise in pH-mediated detection of nucleotide insertions, to produce a long read, high density sequencing platform.")

Of course, not all sequencing will be done on these platforms. Sanger sequencing remains a powerful force in the industry. "If you just want to check one fact in 24 hours, as biologists often do, it's $2 for a 700 bp run," Church says. "And that's a no-brainer."

At the same time, sequencing firms are also pursuing the "next-next" generation of instruments.

One such technology is Project Starlight, which Life Technologies discussed at the Advances in Genome Biology and Technology meeting in February. Project Starlight is a sequencing approach based on single-molecule fluorescence resonance energy transfer (FRET) between a FRET donor-bearing DNA polymerase and FRET acceptor-bearing nucleotide triphosphates.

Unlike most commercial sequencing systems, which amplify a template prior to sequencing it, Starlight sequences individual molecules directly. (Helicos BioSciences' HeliScope commercial sequencer is also a single-molecule technology, as is Pacific Biosciences' in-development SMRT technology.) According to Therrien, the company is "currently targeting a commercial release in mid-2011," with 1-kb read lengths—about twice as long as any current next-gen technology—at launch.

Such long reads are sorely needed, MacArthur says. Short reads (such as the products of the SOLiD and Illumina chemistries) make it difficult to assemble a genome without a reference scaffold (that is, de novo), especially if the genome contains repetitive sequences. A few long reads could go a long way towards overcoming that problem, he says.

"Once you start pushing beyond about a kilobase or so, that starts giving you some real power," MacArthur says. "If you can sprinkle even a few of these kind of longer reads into a sequencing project that's already generating lots and lots of short reads, then that potentially can make a big difference to how well you can put the genome together."

Another next-next-gen technology in development is nanopore-based sequencing, in which DNA is "read" as it passes through nanometer-scale holes. Oxford Nanopore has been pursuing that approach for several years now. More recently, Roche/454, in partnership with IBM, entered the fray.

According to a press release [8] announcing the latter collaboration, IBM's in-development approach is based on the company's “DNA Transistor” technology. "The novel technology, developed by IBM Research, offers true single molecule sequencing by decoding molecules of DNA as they are threaded through a nanometer-sized pore in a silicon chip," the release explains.

"It's still an early-stage research project at this point, but Roche is very interested in the future of sequencing," says Montgomery. "I think there's still a lot yet on the horizon to come."

For the next-generation sequencing market overall, that surely is the case.

["Sequencing" became industrialized. Now is the turn towards industrialization of the "Genome Computing" - HolGenTech does this by leveraging defense computing for genomics. - AJP.]

^ back to top


Experts Discuss Consumer Views of DTC Genetic Testing at ASHG, Washington

GenomeWeb
November 08, 2010

By Andrea Anderson

WASHINGTON (GenomeWeb News) – Researchers are sifting through survey and other data that may eventually help discern consumers' attitudes about direct-to-consumer genetic tests and inform future oversight of such tests, experts explained at the American Society of Human Genetics meeting here today.

The ASHG's existing recommendations for DTC genetic testing call for transparency and evaluation of DTC tests by health and/or consumer organizations such as the US Food and Drug Administration or the Federal Trade Commission, along with education about the tests for consumers and healthcare providers and studies of consumers' views on and use of such tests, ASHG President-elect Lynn Jorde, chair of human genetics at the University of Utah, told reporters during a press briefing.

Jorde moderated a panel of experts who weighed in on DTC tests and outlined findings from their own studies of consumer attitudes toward DTC tests.

For instance, David Kaufman, director of research and statistics at Johns Hopkins University's Genetics and Public Policy Center, described results from a survey of more than 1,000 individuals that was designed to get at individuals' motivation for using DTC genetic testing services offered by 23andMe, Decode Genetics, and Navigenics — and their experiences and level of satisfaction with these tests.

Kaufman and his colleagues surveyed 1,048 DTC genetic test customers who had been tested by 23andMe, Decode Genetics, or Navigenics and received their test results between June 2009 and the following March.

In general, they found that the earliest DTC genetic test adopters tended to be well educated and had significantly higher incomes than the average American. Most participants said they were motivated by factors such as general curiosity and an interest in assessing their ancestry and/or disease risk, Kaufman noted, though many also cited an interest in improving their health as a motivator for testing.

Although some 70 percent of consumers surveyed supported oversight by a consumer agency that would hold testing companies to their scientific claims, Kaufman and his team found that roughly two-thirds of those surveyed believe DTC tests should be available to the public without government oversight.

The researchers also gained insights into everything from participants' understanding of test results to their overall satisfaction with the tests.

Nevertheless, Kaufman cautioned, though the survey provided information on how DTC test results are interpreted by customers, it did not address the scientific rigor of the tests themselves or the clinical validity or utility of the test results.

Meanwhile, Barbara Bernhardt, co-director of the University of Pennsylvania's Center for the Integration of Genetic Health Care Technologies, and her co-workers surveyed 60 individuals who had been tested for risk variants associated with eight conditions through the Coriell Personalized Medicine Collaborative.

Again, the team found that participants tended to be well educated and motivated by factors such as curiosity and interest in improving their health.

While most of the individuals surveyed understood their general results — often interpreting them within the context of their own family history — they did not necessarily have a deep understanding of the relative risk information provided to them, Bernhardt explained. Though some were told they were at a heightened risk of certain diseases, the researchers found that none of the participants reported being very concerned about this increased risk.

Even so, Bernhardt said, nearly a third had at least somewhat changed their behavior or lifestyle based on their test results. And half had discussed their test results with their doctor.

Finally, Andrew Faucett, director of genomics and public health at Emory University School of Medicine, outlined some questions that consumers and clinicians should keep in mind when selecting, evaluating, and interpreting DTC genetic tests and results.

For example, Faucett said, consumers need to consider what they hope to learn by taking the test. For clinicians, meanwhile, issues such as the treatment implications of genetic findings are important, Faucett noted, as is an understanding of the population(s) in which a particular test has been evaluated.

Faucett also drew a distinction between DTC genetic tests that are regularly used in the clinic and those that aren't, explaining that while testing labs in general are doing a good job with test analyses, much less is known about the clinical validity and utility of some tests.

[During the week of ASHG in Washington, the American people have spoken - and the lame-duck remainder of the legislative session after the mid-terms, together with the ensuing division of the Congressional Legislature, makes it an impossibility that the 1976 mandate of the FDA will be updated anytime soon. There is enough consumerism around, most people think, to give advice to consumers and let them make their free choices. The more choice, the better. - AJP.]

^ back to top


Complete Genomics plans Tuesday initial public offering of stock
Associated Press
11/05/10 5:11 PM EDT

INDIANAPOLIS — Complete Genomics Inc. expects to raise about $69.3 million in an initial public offering of 6 million shares Tuesday to help fund improvements and expansion of its DNA sequencing strategy.

The Mountain View, Calif., company said proceeds could rise to about $80.2 million if underwriters exercise an overallotment option for 900,000 shares. The company expects the stock price to range between $12 and $14 per share.

It plans to use the money to expand the sequencing and computing capacity at its Mountain View and Santa Clara locations, to fund more development of its technology and for sales and marketing and working capital, according to a registration statement filed with the Securities and Exchange Commission.

The company said it has developed and commercialized an innovative DNA sequencing platform and aims to "become the preferred solution for complete human genome sequencing and analysis," according to the statement. Complete Genomics believes its products will offer academic and biopharmaceutical researchers complete analysis without requiring them to invest in equipment like in-house sequencing instruments and high-performance computing resources.

"By removing these constraints and broadly enabling researchers to conduct large-scale complete human genome studies, we believe that our solution has the potential to revolutionize medical research and expand understanding of the basis, treatment and prevention of complex diseases," the company said.

Complete Genomics started operations in March 2006 and spent its first three years focused on research and developing its sequencing technology. It has piled up a $108.1 million deficit during its development stage.

The company plans to list its stock on the NASDAQ Global Market under the ticker symbol "GNOM."

[In the five-way horse race for Genome Sequencing, with 3 public companies (Roche, Life Technologies and Illumina) and 2 fresh IPOs (Pacific Biosciences and, two weeks later, Complete Genomics), the crucial question, surprising as it is, will not be "Sequencing" but "Analytics" - based on entirely new paradigms, since the decade since the finish of the "Human Genome Project" "was all wrong" (as the President's Science Advisor, Eric Lander, confessed publicly on November 2nd, 2010). In the horse race, "leveraging" is expected to play a major role. For instance, while PacBio leverages Big IT (Intel Capital), Life Technologies leverages Big IT (Hitachi) in their new high-end SOLiD 5500 sequencer (announced at ASHG for next Spring). At the same time, for their "low end" ($49k) Ion Torrent sequencer, Jonathan Rothberg, now part of Life Technologies, leverages the entire $1 Trillion semiconductor industry for the sequencer chip. As for "Analytics", HolGenTech leverages "Defense Computing" (with their High Performance Computing Hybrid Platforms), combined with Pellionisz' Fractal Approach to Recursive Genome Function - AJP.]

^ back to top


Today, we know all that was completely wrong - Lander's Keynote at ASHG, Washington

Lander’s Lessons Ten Years after the Human Genome Project

Bio-IT World
November 3, 2010

By Kevin Davies

November 3, 2010 | WASHINGTON, DC – If anyone was capable of distilling the lessons learned in the ten years since the first draft of the Human Genome Project (HGP) in 2000, it was Broad Institute director Eric Lander. [Also, Science Advisor to the President - AJP]

Opening the annual American Society of Human Genetics (ASHG) convention in Washington, D.C., Tuesday evening, Lander tried to meet the organizers’ challenge to sum up “what’s come of it?”

From a technical perspective, the HGP produced “a scaffold onto which information can be put,” said Lander, including cancer genes, epigenomics, evolutionary selection, disease association, 3-D folding maps, and much more. As for intellectual advances, Lander made a series of startling comparisons of geneticists’ knowledge around the time of the HGP in 2000 and today.

In 2000, for example, only four eukaryotic genomes (yeast, fly, worm, and Arabidopsis) had been sequenced, as well as a few dozen bacteria. Today, those numbers stand at 250 eukaryotic genomes, 4,000 bacteria and viruses, metagenomic projects and many hundreds of human genomes. By the end of this year, Lander expects the Broad Institute to have generated 100,000 Gigabases (Gb) of sequence.

“The cost [of sequencing] has fallen 100,000 fold in past decade, vastly faster than Moore’s Law,” said Lander. But the question remained: “How will this get used in clinical medicine? The costs need to drop to $1,000 and then $100,” said Lander.

“I no longer think these things are crazy.”
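[For scale, our arithmetic in one short Python sketch: a Moore's Law pace of doubling roughly every two years yields about a 32-fold improvement per decade, against Lander's 100,000-fold figure for sequencing.]

    # Sequencing cost decline vs. a Moore's Law pace over one decade.
    years = 10
    moore_fold = 2 ** (years / 2)  # doubling every ~2 years -> ~32x per decade
    sequencing_fold = 100_000      # Lander's figure for sequencing
    print(f"~{sequencing_fold / moore_fold:,.0f}x faster than Moore's Law")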

In 2000, Lander and his HGP consortium colleagues estimated there were about 35,000 protein-coding genes, with a few classical non-coding RNAs. Repetitive DNA elements called transposons were just parasites and junk.

“Today, we know all that was completely wrong,” said Lander.

Based on patterns of evolutionary conservation in some 40 sequenced vertebrates, the human gene count is “21,000, give or take 1,000,” said Lander. “There are many fewer genes than we thought. Much more information is non-coding than we thought . . . 75% of the information that evolution cares about is non-coding information.”

The study of 29 mammalian genomes reveals some 3 million conserved non-coding elements, covering about 4.7% of the genome. Some of these have regulatory functions, he said. Another exciting area was the generation of genome-wide 3-D maps, which has revealed that the genome resides in ‘open’ and ‘closed’ compartments. There was much more work to be done in the coming decade, but with new next-generation sequencing tools, “it will happen.”
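[Dividing Lander's round numbers gives a feel for the size of these elements; the genome length below is our approximation, not a figure from the talk.]

    # Implied average size of a conserved non-coding element.
    genome_bp = 3.2e9                # approximate human genome size (our assumption)
    covered_bp = 0.047 * genome_bp   # 4.7% of the genome, per Lander
    print(covered_bp / 3e6)          # ~50 bp per element, on average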

Mendel Redux

In 2000, the genes for about 1,300 Mendelian genetic disorders had been identified. Today, that number is about 2,900, leaving “another 1,800 Mendelian disorders to go,” said Lander. He noted the success of some whole-genome sequencing projects in identifying rare Mendelian disease genes, although the approach was not trivial. “We all have about 150 rare coding variants,” he said, in other words glitches in about 1% of a person’s genes. Those have to be carefully vetted and filtered, but in the case of recessive genes or a small number of patients, the whole-genome approach was very powerful.
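[The vetting and filtering Lander alludes to reduces, in its simplest form, to set logic across patients. The Python below is our simplified illustration with hypothetical gene names, not any group's actual pipeline: for a recessive Mendelian model, keep genes in which every unrelated patient carries at least two qualifying rare coding variants.]

    # Toy filter for a recessive Mendelian gene hunt across unrelated patients.
    # Each patient is {gene: count of rare, protein-altering variants}.
    patients = [
        {"GENE1": 2, "GENE7": 1, "GENE9": 1},
        {"GENE1": 2, "GENE3": 1},
        {"GENE1": 2, "GENE9": 2},
    ]

    def candidate_genes(patients, min_hits=2):
        """Genes hit in every patient, with at least min_hits variants each."""
        shared = set(patients[0])
        for p in patients[1:]:
            shared &= set(p)
        return {g for g in shared if all(p[g] >= min_hits for p in patients)}

    print(candidate_genes(patients))  # -> {'GENE1'} under the recessive model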

Lander also broached the progress in genome-wide association studies (GWAS) for common inherited disease, where Lander says “an entire village came together” to develop the array tools, haplotype maps, and a catalogue of more than 20 million single nucleotide polymorphisms (SNPs). “The vast majority of common variation is known,” said Lander. The numbers are 1,100 loci associated with 165 common diseases/traits. For diseases such as inflammatory bowel disease and Crohn’s disease, 70-100 loci have been mapped, a pattern that Lander showed exists for lipid disorders, type 2 diabetes, height, and many other conditions.

Lander addressed the oft-publicized disappointment and criticisms expressed by some prominent geneticists, including ASHG president-elect Mary-Claire King, in the “missing heritability” and the net value extracted from GWAS papers. One widely voiced concern is that the effect size of individual GWAS “hits” is small. “I think that’s nonsense,” said Lander. “Effect size has nothing to do with biological or medical utility.” He pointed out that a drug acting on a target can have a much bigger effect than the effect of the common allele.

Some geneticists believe that the “missing heritability” so far untapped by GWAS must be explained by rare DNA variants. Not so fast, said Lander. For one thing, the proportion of heritability explained in disorders such as Crohn’s and diabetes is increasing. Population genetics theory suggests that for many common diseases, rare variants will explain less than common variants.

Lander also said that geneticists must take into account epistasis, the effects of modifier genes. Such effects cannot be found statistically in GWAS, he argued. Rather than moving from mapped loci to explaining heritability to understanding biology, Lander said we must understand biology first, and then explain the models of heritability.

Cancer Conclusion

In 2000, Lander said some 80 cancer-related genes were known. The tally is now 240 genes, with genome sequencing studies revealing mutational hotspots in colon, lung, and skin cancers with therapeutic implications. As an example, Lander said his Broad Institute colleague Todd Golub, studying multiple myeloma tumors, had discovered mutations in four well known cancer genes, but more excitingly, implicated a handful of new biological pathways, including protein synthesis and an extrinsic coagulation pathway.

The battle against cancer needed more sequencing. “We’ll need the equivalent of the 1 million genomes project. We better start thinking how to engage patients,” said Lander, suggesting social networking and other ideas had to be leveraged to get patients involved.

Lander concluded by presenting what he called “the path to the promise.” If the HGP provided the raw tools, scientists were still translating basic genome discoveries into more medically directed research. That’s how far we’ve progressed in ten years. But that still leaves the daunting tasks of clinical interventions, clinical testing, regulatory approval and widespread adoption.

[This public confession by the Science Advisor to the President - that Genomics (derailed by Crick in 1956) stayed on a completely wrong track even through the decade since the HGP - may still be stunning for part of the huge crowd, though the message was heralded in a peer-reviewed paper and widely disseminated YouTube videos in 2008. Still, there were even companies at the ASHG convention attempting to sell analytics with validity only for "genes" - when nobody can even know the number of genes (protein-coding exons amounting to far less than 1% of human DNA), since at this point there is not even a scientifically valid, commonly accepted definition of what a "gene" is. Thus, the ASHG conference in Washington in November 2010 became the "Eye of the Cyclone". Along with his notion of turning the "upside down" approach "right side up" ("we must understand biology first, and then explain the models of heritability"), Dr. Lander published in Science a year ago that the structure of DNA is fractal. Meanwhile, Pellionisz gained a decade with his "Fractal Approach to Recursive Genome Function", on record since 2002 and reaching back to his seminal 1989 publication. Those who see only the "tip of the iceberg" of Pellionisz' Fractal Approach in publications are in for huge surprises, now that all (formerly fierce) opposition has been wiped out from the top. Look for more coming here and also on Pellionisz' FaceBook and pellionisz_at_junkdna.com - AJP.]

^ back to top


1,000 Genomes Project Maps 16 Million DNA Variants: Why?

CBS News
October 28, 2010 11:47 AM

Remember the race to map the human genome? Science crossed that finish line in 2003. But it turns out that was just the beginning.

Scientists are now focusing on the small differences in our genomes, hoping to find fresh clues to the origins of many diseases.

The effort is called the 1000 Genomes Project and already it claims to have found 16 million previously unknown variations in human DNA, about 95 percent of all variations in our species. That's just from the 800 people who are part of the pilot study. The group hopes to catalog DNA from 2,500 people before they are done. [How do we know that beyond these 16 million only 5% remains to be found? And if 95% has already been found from just 800 people, why does the "1,000 Genomes" project plan for 2,500 genomes? - AJP]

Why does it matter?

"What really excites me about this project is the focus on identifying variants in the protein-coding genes that have functional consequences," said Dr. Richard Gibbs, director of the Human Genome Sequencing Center at the Baylor College of Medicine, in a statement. "These will be extremely useful for studies of disease and evolution."

That's geek speak for finding cures to diseases that have genetic components, such as Alzheimer's, mental illness, cystic fibrosis and Huntington's Disease. The work may eventually also help with genetically linked cancers, such as breast and prostate cancer.

The research is being done at government, academic and corporate facilities around the world including the National Institutes of Health in America and is made possible by new high speed techniques for mapping genetic material.

The pilot results were published in Nature and are being shared freely to speed research.

[The Nature (hard core science) paper concludes that "The 1000 Genomes Project represents a step towards a complete description of human DNA polymorphism". With all due regard to the very distinguished line-up of the Consortium, IMHO a "complete description of human DNA polymorphism" (if completion of such a "brute force approach" is possible at all) may not be the best and most scientific approach. At the conception of the 1,000 Genomes Project, e.g. Francis Collins and George Church criticized the project's planning in a Nature article in 2008. Will there be a reconsideration of the design in view of the initial results? (It looks like 1,000 has already been changed to 2,500.) The crucial question seems to be that no two genomes may be identical - yet chances are that the differences responsible for "human diversity" hugely outnumber the structural variants that are the root causes of hereditary diseases. Some algorithmic approaches can tell the "parametric" and "syntactic" variants apart. This might be a burning question at the 60th meeting of the American Society of Human Genetics this week in Washington, D.C. - AJP.]

^ back to top


Parody of Public’s Attitude Toward DTC Genetics

October 27, 2010
Genomeweb

Daily Scan’s inbox has been teeming with announcements for various talks and workshops to be held at the upcoming American Society of Human Genetics annual meeting in Washington, D.C., though none have read quite like Blaine Bettinger’s at the Genetic Genealogist blog. Bettinger has posted a parody of a press release for a talk in which “a group of the nation’s top geneticists and ethicists” showed data that analyzed the public’s awareness of direct-to-consumer genetic testing services and their regulation. “The researchers, funded in large part by federal grants, interviewed over 10 people randomly chosen at the entrance to the nearest grocery store and asked them whether they were familiar with one or more of the five DTC genetic testing companies included in the study,” whether they had – or had ever considered – taking a DTC gene test, and whether they felt the public should be allowed access to their genetic information, the blogger writes in his satire. “Finally, to gauge the participant’s understanding of the basic principles of genetics, each was asked to briefly describe in 100 words or less the role of the replication fork in DNA replication,” Bettinger adds.


Submitted by sarahemily on Wed, 10/27/2010 - 13:24.

I'm getting a 404 error when I click the link to the parody and can't find it anywhere else on the site. I would love to read it - can you supply a new link?


Submitted by Jeff.Rosner on Wed, 10/27/2010 - 14:01.

Here is the text:

New Study Analyzing DTC Genetic Testing Released Today

October 26th, 2010 in Genealogy |

I received this news release yesterday via email. I’m probably breaking the embargo by publishing this, but I think it’s too important not to get it out there. Please be sure to read ALL the way to the bottom.

Nation’s Top Geneticists and Ethicists Release New Study of Consumer Perceptions of Direct-to-Consumer Genetic Testing and Announce New DTC Testing Guidelines

Leading up to the American Society of Human Genetics 60th Annual Meeting, which will be held November 2-6, 2010 in Washington, D.C., a group of the nation’s top geneticists and ethicists today released the results of a new study analyzing the public’s awareness and use of so-called “direct-to-consumer” genetic testing by companies such as 23andMe, deCODEme, and Pathway Genomics.

The researchers, funded in large part by federal grants, interviewed over 10 people randomly chosen at the entrance to the nearest grocery store and asked them whether they were familiar with one or more of the five DTC genetic testing companies included in the study. The participants were then asked if they had participated in DTC genetic testing, and whether they might be interested in doing so in the future. The participants were also asked whether they believed that members of the general public should be allowed to access their own genetic data without the assistance of a physician or genetic counselor. Finally, to gauge the participant’s understanding of the basic principles of genetics, each was asked to briefly describe in 100 words or less the role of the replication fork in DNA replication.

The results of the study indicate that 100% of the study participants were completely unfamiliar with these DTC testing companies, and none had any experience with DTC testing. The study also showed that while none were currently interested in performing testing on their own DNA, 90% believed that Americans should be allowed to access their genetic data without the assistance of a physician or genetic counselor. The results also showed that none of the participants in the study were able to competently explain even the basics of the DNA replication fork.

“Our study shows for the first time that the vast majority of the American public is completely unaware of even the most popular DTC testing companies,” reported Dr. David N. Anderssen, lead geneticist in the study. “Additionally, the inability of every single one of the study participants to explain one of the most basic aspects of genetics was, quite frankly, very disappointing, again suggesting that people are not equipped to handle genetic information.”

"While 90% of the participants stated that they should be able to access their own genetic information without a physician or geneticist’s assistance, we completely disagree with their opinions and took this opportunity to explain to each one of them just how dangerous their genetic information can be. We also explained to them that their erroneous opinions and beliefs don’t really matter anyway, since it is the role of certified geneticists and ethicists to determine for America exactly who should access genetic information."

In light of the findings, Dr. Anderssen noted the group’s newly-issued guidelines on DTC testing: “We’re recommending that all DTC genetic testing companies immediately close up shop, or, alternatively, hire a staff of 25 or more genetic counselors. We also recommend that Congress immediately make it illegal to even look at an ‘A,’ ‘T,’ ‘G,’ or ‘C’ without a physician or genetic counselor within at least 5 feet; the danger of privacy violations and/or the misunderstanding DTC genetic testing results is just too great to ignore.”

“Indeed, the majority of the group [of ten people - AJP] believes that there is no role for genetics in health care, disease risk, genealogy, or anthropology, among other endeavors; the old-fashioned – but always informative – family history is really the only way to go here,” reported the geneticist. “However, since most of us need these jobs, we decided to approve the use of genetics for disease assessment in the new guidelines.”

Dr. Anderssen noted that the group is continuing to study this emerging area of genetics, and plans to expand the study to 25 more participants from the nearby gas station in the near future.

____________________________________

(This post is a parody only, meant as criticism of some of the glaring deficiencies in recent studies analyzing DTC claims. A reasonable person would not interpret this post to contain factual claims, and it is within my First Amendment rights (isn’t it sad that I have to write this?)).


Submitted by tvence on Wed, 10/27/2010 - 14:51.

Hi Sarah, the post appears to have been removed from the site. We'll provide an updated link when -- and if -- it becomes available.

Jeff: Thanks for pasting the full text in the meantime.

[This is really a parody of an infamous, and explicitly "non-scientific," study of the science of DTC by some genomically illiterate lame-duck politician who had already announced his retirement - AJP.]

^ back to top


UPDATE: Pacific Biosciences IPO Rises While First Wind Cuts Price

Wall Street Journal
By Lynn Cowan, Tess Stynes And Christopher Zinsli
Of DOW JONES NEWSWIRES
OCTOBER 27, 2010, 4:30 P.M. ET

A genetics technology company on Wednesday became the first U.S. life sciences initial public offering this year to both price well and trade higher, while a wind farm operator cut its asking price ahead of its offering.

Pacific Biosciences of California Inc. (PACB), which has created an instrument platform to help scientists observe nucleotides being added to DNA in real time, offered up a strong data point for the initial public offering market, while First Wind Holdings Inc. showed that green energy companies continue to be a hard sell in America.

Pacific Biosciences closed at $16.44 a share on the Nasdaq, up 2.8% from its initial public offering price of $16. It sold 12.5 million shares at the midpoint of its $15 to $17 price range.

...

Even though it hasn't commercially released its DNA sequencing platform or generated any revenues from it, Pacific Biosciences has more going on than the typical early-stage life science IPO hopeful. It plans to begin commercial delivery in the beginning of next year, has an order backlog of $15 million, and could see recurring revenue from the consumable components that need to be re-ordered.

The platform is a new generation of DNA sequencing technology, one that allows longer nucleotide chains to be read in less time than existing systems, according to the company's prospectus.

"It's a disruptive technology," said Steve Brozak, a biotech and medical-devices analyst who is president of WBB Securities LLC. "It could be a building block for future innovation."

Not every deal this week seems destined for easy pricing and trading. Wind-farm developer First Wind Holdings Inc. on Wednesday cut the estimated price range of its 12-million-share IPO to between $18 and $20 each, $6 below the $24 to $26 range it had originally planned. The company, which is expected to begin trading on the Nasdaq Thursday under the symbol WIND, operates 504 megawatts of wind farms in the Northeastern and Western U.S.

...

[While market conditions are still shaky, "Genome Informatics" is "IN", while "Green Tech" may be fading OUT, according to investors - AJP.]

^ back to top


UPDATE 1-Pacific Biosciences IPO prices at midpoint-underwriter

Tue Oct 26, 2010 7:22pm EDT

* Prices at $16 vs $15-$17 range-underwriter

* Sells 12.5 mln shares, raises about $200 mln-underwriter

* To trade on Nasdaq under symbol "PACB"

NEW YORK, Oct 26 (Reuters) - Pacific Biosciences of California Inc (PACB.O), which designs machines to speed up DNA sequencing in labs, priced shares in its initial public offering at the midpoint of the expected range on Tuesday, according to an underwriter.

The company sold 12.5 million shares for $16 each, raising about $200 million. It had planned to sell shares for $15 to $17 each.

Menlo Park, California-based Pacific Biosciences sells equipment that can be used for clinical, agricultural and drug research, food safety, biofuels and biosecurity applications.

The company has never been profitable and all of its revenue to date has come from government grants. Pacific Biosciences posted a net loss of $63.04 million on revenue of $1.17 million in the six months ended June 30.

Pacific Biosciences said it had a backlog of orders worth $15 million as of June 30. The U.S. Department of Energy Joint Genome Institute and Monsanto Co (MON.N) are among those that have ordered Pacific Biosciences equipment.

Underwriters were led by JPMorgan, Morgan Stanley, Deutsche Bank Securities and Piper Jaffray. The shares are expected to begin trading on the Nasdaq on Wednesday under the symbol "PACB."

[The Industrialization of PostModern Genomics has begun - AJP.]

^ back to top


Complete Genomics Sets IPO Price Range

XConomy
Luke Timmerman 10/26/10

Complete Genomics, the low-cost gene sequencing company in Mountain View, CA, has set a goal of pricing 6 million shares in its initial public offering at a price of $12 to $14, according to a filing today with the Securities and Exchange Commission. If the company can find demand from investors at the top of its range, and its underwriters buy an extra 900,000 shares, then the deal could bring in as much as $96.6 million. The company is scheduled to set the actual IPO price the week of Nov. 8, according to Renaissance Capital. Complete Genomics’ existing roster of investors includes OrbiMed Advisors, Essex Woodland Health Ventures, San Diego-based Enterprise Partners Venture Capital, Kirkland, WA-based OVP Venture Partners, and Palo Alto, CA-based Prospect Venture Partners. The company plans to trade on the Nasdaq under the symbol GNOM.

[As heralded since the 2008 YouTube (based on the peer-reviewed science paper The Principle of Recursive Genome Function), the key is Analytics - not only for the public investors, but for the sustainability of the Industrialization of the Genome Revolution. The paradigm-shift has been available since 2002 - a year before the START of ENCODE - AJP.]

^ back to top


IPO Preview: Pacific Biosciences [this week; a huge surge for Fractal Analytics - AJP]

Bloomberg BusinessWeek
October 22, 2010

Pacific Biosciences expects to offer up to $212.5 million in common stock in an IPO next week.

The company said it expects to price 12.5 million shares between $15 and $17 apiece. It is also offering underwriters 1,875,000 shares to cover overallotments. If all options are exercised, the company could have gross proceeds of just under $244.4 million.

The company, based in Menlo Park, Calif., makes genetic analysis technology focused on helping researchers investigating biochemical processes. Its initial focus is in the DNA sequencing market, with customers including research institutions and commercial companies focusing on agricultural research, drug discovery and development, biosecurity and bio-fuels.

The company said there are a significant number of competitors in the market, including Illumina Inc., Life Technologies Corp. and Roche Applied Science. Many of its competitors already have established manufacturing and marketing capabilities.

"We expect the competition to intensify within this market as there are also several companies in the process of developing new technologies, products and services," Pacific Biosciences said in its prospectus.

Those emerging competitors could include Complete Genomics Inc., Oxford Nanopore Technologies Ltd. and Ion Torrent Systems Inc., which is in the process of being acquired by Life Technologies [for $735 M in cash and stock - AJP].

Pacific Biosciences reported about $1.2 million in revenue in the first half of 2010, all of it from government grants. The fledgling company has yet to turn a profit.

The company said it expects net proceeds of about $210.4 million, after costs, depending on how the stock prices within the range and overallotment options. It said it would invest between $60 million and $70 million in current and future applications of its SMRT technologies [The $60-70 M looks most like the investment in Analytics - AJP], use $40 million to $60 million to fund anticipated future working capital needs, and use $20 million to $30 million to fund planned capital expenses. It would use between $40 million and $60 million for other general corporate purposes.

Underwriters in the offering include J.P. Morgan, Morgan Stanley, Deutsche Bank Securities, and Piper Jaffray.

The company plans to trade under the "PACB" symbol on the Nasdaq Global Market.
--

Among “Investment Risks” disclosed by PacBio in their S-1 filing:

Adoption of our products by customers may depend on the availability of informatics tools, some of which may be developed by third parties.

Our commercial success may depend in part upon the development of software and informatics tools by third parties for use with our products. We cannot guarantee that third parties will develop tools that will be useful with our products or be viewed as useful by our customers or potential customers. A lack of additional available complementary informatics tools may impede the adoption of our products and may adversely impact our business.

[As the PacBio IPO will happen in the coming week, there will be a huge surge for the Fractal Approach to Analytics:

a) If the IPO is successful, there will be money to make good on what PacBio claims to be: a DNA "analysis" company, with sequencing only "the first step". Smart public investors are keenly aware that analytics is missing - and that it would be a huge mistake (of the Silicon Genetics type, 2000) to invest in Analytics based on the totally wrong ancient axioms of Genome Informatics (Central Dogma and Junk DNA), at a time when the superseding replacement paradigm, The Principle of Recursive Genome Function, is available. The "proof of concept" that the genome (structure) is fractal was provided over a year ago by Eric Lander (Science Advisor to the President) et al.

b) If the PacBio IPO is less than successful (e.g. should the $15-17 list price drop to lower levels upon IPO), it will be a bitter lesson to all Sequencing Companies that smart public investors are keenly aware of the "Dreaded DNA Data Deluge" - as I presented it in my 2008 Google Tech Talk YouTube (which, by the way, keeps rising relentlessly, now at over 9,000 views from all Continents) - and would be reluctant to invest in Genomics while a potential glut of sequences that cannot be adequately analyzed may threaten sustainability. Many investors are aware of the present DTC "unsustainability" - though that is caused by the US government, rather than by initial supply-chain-management difficulties in the Industrialization of Genomics by the Private Sector banking on public investment. Sequencing companies would have to move rather quickly to embrace Fractal Analytics. AJP.]

^ back to top


Benoît B Mandelbrot: the man who made geometry an art [censored - reinstated - AJP]

Guardian.co.uk
October 19, 2010

[Excerpts - see full article linked to title - AJP]

Few recent thinkers have woven such a beautiful braid of art and science as Benoît B Mandelbrot, who has died aged 85 in Cambridge, Massachusetts. (The B apparently doesn't stand for anything. He just felt like adding it.) Mandelbrot was a provocative mathematician, a subversive geometer. He left a beautiful legacy in visual art, for Mandelbrot was the man who named and explained fractals – those complex, apparently chaotic yet geometrically ordered shapes that delight the eye and fascinate the mind. They are icons of modern understanding of the universe's complexity.

The Mandelbrot set, one of the most famous fractal designs, is named after him. With its fizzing fringe of crystal-like microforms blossoming out of a conjunction of black circles, this fractal pattern looks crazy but is the outcome of geometrical calculations.

.... Mandelbrot was not the first, but with his startling fractals concept he created a visual manifesto for a non-Euclidean universe.

Fractals – and I'd be delighted if mathematicians can give a better explanation below – are shapes that are irregular but repeat themselves at every scale: they contain themselves in themselves. Mandelbrot used the example of a cauliflower which, like a fern, is a fractal found in nature; if you look at the smallest sections of these vegetable forms, you see them mirroring the whole....

Artists have been fascinated by geometry for as long as mathematicians have. The studies of Euclid are reflected in the regularities of classical and Renaissance architecture, from the Pantheon in Rome to the duomo in Florence. But artists and architects were also thinking centuries ago about non-regular, curving geometries. You could argue that fractals give us the mathematics of the Baroque – they were anticipated by Borromini and Bach. I have a facsimile, given away by an Italian newspaper, of part of Leonardo da Vinci's Atlantic Codex, which contains page after page of his attempts to analyse the geometry of twisted, curving shapes.

Mandelbrot was a modern Leonardo, a man who showed the beauty in nature...

---

Comments [partial list - AJP]

singo111

19 October 2010 2:22PM

As others have already pointed out...

The beauty is not the 'pretty pattern' fractal. The beauty is in the fact that a dry and simple mathematical function can give rise to a fractal output of such complexity. That's what blows my mind anyway.

And as for:

"Mandelbrot was not the first, but with his startling fractals concept he created a visual manifesto for a non-Euclidean universe" – there isn't anything necessarily non-Euclidean (as a mathematician would understand the term) about it at all.

Why isn't a science correspondent writing this?

RIP Benoit - you deserved better than this article.

---

Pellionisz

19 October 2010 6:36PM

This comment has been removed by a moderator. Replies may also be deleted.

Pellionisz

21 October 2010 4:33PM

Comment reinstated (see contents below).

--- [ end of excerpts from the Guardian - AJP] ---

[The comment by Pellionisz that was censored out - AJP]

19 October 2010 6:36PM

Mandelbrot defined himself by his book "Fractal Geometry of Nature" (B. B. Mandelbrot, W. H. Freeman, 1982), as his creative job title was "mathematical scientist". Though a highly artistic soul, Benoit left "the art part" to colleagues, e.g. The Beauty of Fractals (H. O. Peitgen and P. H. Richter, Springer 1986). His oeuvre is profoundly seminal, with a significance reaching well beyond the visual arts; suffice it to mention his well-known fractal understanding of stock market prices.

Further, to illustrate how "seminal" his geometrical understanding of Nature became, I cite FractoGene (Pellionisz, 2002) and The Principle of Recursive Genome Function, in which the fractal DNA governs growth, via fractal recursive iteration, of fractal organelles (e.g. brain cells, cardiac coronaries), organs (e.g. lung, kidney) and organisms (e.g. cauliflower Romanesco). For those who prefer video over peer-reviewed science papers, a Google Tech Talk YouTube is available.

One might argue that the (fractal) universe, lunar surfaces, mountain ranges etc. had been around for mega-millions of years - Mandelbrot did not invent the fractality of lifeless and living Nature (just as Newton did not invent gravity), he discovered their mathematical principles. While he realized (2004) that the genome is fractal, when I tested his uncanny ability to tell with high precision the fractal dimension of roughness, asking in public at Stanford "what do you think the fractal dimension of the genome might be?", his honest answer was "I do not know" (implying that understanding the fractal nature of genome structure and function will dominate the genomics of the 21st Century). Just as John von Neumann could have arrived at the intrinsic mathematics of brain function (which, as he stated in his posthumous book "The Computer and the Brain", is certainly different from logical calculus), had beloved Benoit lived a decade more, his expressed interest in the most seminal biology (genomics) could have contributed breakthroughs beyond his realization of the challenge.

Pellionisz_at_JunkDNA.com

---

[Mandelbrot and Pellionisz at Stanford, 2004]

[For an admitted non-mathematician to label Mandelbrot an artist is like celebrating Picasso as a mathematician: a flat journalistic mistake that censorship will not hide (the comment was reinstated within 48 hours) - AJP, FaceBook and Pellionisz_at_JunkDNA.com]

^ back to top


'Fractalist' Benoît Mandelbrot dies

New Scientist
Valerie Jamieson, chief features editor
21:08 18 October 2010

[View his last major presentation at TED, 2010 - AJP]

Benoît Mandelbrot, who died a month shy of his 86th birthday on Thursday, wanted to be remembered as the founding father of fractal geometry – the branch of mathematics that perceives the hidden order in nature.

He became a household name, thanks to the psychedelic swirls and spikes of the most famous fractal equation, the Mandelbrot set. (Recently, a 3D version of the set was discovered, called the Mandelbulb.)

Fractals are everywhere, from cauliflowers to our blood vessels. No matter how you divide a fractal nor how closely or distantly you zoom in, its shape stays the same. They have helped model the weather, measure online traffic, compress computer files, analyse seismic tremors and the distribution of galaxies. And they became an essential tool in the 1980s for studying the hidden order in the seemingly disordered world of chaotic systems.

By his own admission, Mandelbrot spent his career trawling the litter cans of science for fractal patterns and found them in the most unusual places. His job title at Yale University in New Haven, Connecticut, was deliberately chosen with this diversity in mind. "I'm a mathematical scientist," he told me. "It's a very ambiguous term."

I met Mandelbrot in 2004 when he was promoting The (Mis)behaviour of Markets, the book he'd written with financial journalist Richard L Hudson. After a long detour through other fields of science, Mandelbrot turned the tools of fractal geometry to financial data and had a stark warning for economists. "We have been mismeasuring risk," he said. "Brokers who ask why we should even think about 'wild events' where one bad event in the stockmarket can wipe out everything are misleading themselves."

Mandelbrot's hope was that by thinking about markets as scientific systems, we might eventually build a stronger financial industry and a better system of regulation. He also challenged Alan Greenspan, chairman of the Federal Reserve, and other financiers to set aside $20 million for fundamental research into market dynamics.

He called himself a maverick because he spent his life doing only what he felt was right and never belonging to a particular scientific community. And he enjoyed the reputation of someone who was happy to disturb ideas.

Back in 2004, Mandelbrot showed few signs of slowing down. He was writing his memoir – The Fractalist: Memoir of a Geometer – which was set to be published in 2012.

He worked every day except Sunday and enjoyed going to conferences (watch his talk at the 2010 TED conference ...). This year, he even co-authored two papers in the Annals of Applied Probability. "What motivates me is the feeling that these ideas may be lost if I don't push them any further," he told me of his desire to continue his research.

Mandelbrot may be gone. But the beauty of his fractals lives on. You only have to look around you to be reminded of his insights. In his own words: "Clouds are not spheres, mountains are not cones, coastlines are not circles, bark is not smooth, nor does lightning travel in a straight line." [and the Genome is not contiguous snippets of Gene-sequences in a vast sea of Junk DNA - but FractoGene, AJP]

^ back to top


Benoît Mandelbrot, Novel Mathematician, Dies at 85

By JASCHA HOFFMAN
New York Times
Published: October 16, 2010

[All this visual complexity is compressed into the Z=Z^2+C equation - AJP]
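
For readers who want to see how so much visual complexity can be "compressed" into that one equation, here is a minimal escape-time sketch in Python of the z → z² + c iteration; the 100-iteration cap and the escape radius of 2 are the standard textbook choices, not figures taken from the obituary.

```python
# Minimal escape-time test for the Mandelbrot iteration z -> z^2 + c.
# A point c belongs to the Mandelbrot set if the orbit of z = 0 stays
# bounded; once |z| exceeds 2, the orbit provably diverges.
def mandelbrot_iterations(c: complex, max_iter: int = 100) -> int:
    """Return the iteration count at which |z| first exceeds 2,
    or max_iter if c appears to stay bounded (i.e. is in the set)."""
    z = 0j
    for n in range(max_iter):
        if abs(z) > 2.0:
            return n
        z = z * z + c
    return max_iter

print(mandelbrot_iterations(0))    # 100 -> bounded: 0 is in the set
print(mandelbrot_iterations(1))    # 3   -> escapes quickly: 1 is outside
print(mandelbrot_iterations(-1))   # 100 -> the bounded cycle 0, -1, 0, -1, ...
```

Coloring each point of the complex plane by this iteration count is exactly what produces the "fizzing fringe" imagery described above.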

Benoît B. Mandelbrot, a maverick mathematician who developed the field of fractal geometry and applied it to physics, biology, finance and many other fields, died on Thursday [Oct. 14, 2010 – five weeks before turning 86 - AJP] in Cambridge, Mass. He was 85.

The cause was pancreatic cancer, his wife, Aliette, said. He had lived in Cambridge.

Dr. Mandelbrot coined the term “fractal” to refer to a new class of mathematical shapes whose uneven contours could mimic the irregularities found in nature.

“Applied mathematics had been concentrating for a century on phenomena which were smooth, but many things were not like that: the more you blew them up with a microscope the more complexity you found,” said David Mumford, a professor of mathematics at Brown University. “He was one of the primary people who realized these were legitimate objects of study.”

In a seminal book, “The Fractal Geometry of Nature,” published in 1982, Dr. Mandelbrot defended mathematical objects that he said others had dismissed as “monstrous” and “pathological.” Using fractal geometry, he argued, the complex outlines of clouds and coastlines, once considered unmeasurable, could now “be approached in rigorous and vigorous quantitative fashion.”

For most of his career, Dr. Mandelbrot had a reputation as an outsider to the mathematical establishment [Received tenure from Yale University in 1999, at the age of 75 - AJP]. From his perch as a researcher for I.B.M. in New York, where he worked for decades before accepting a position at Yale University, he noticed patterns that other researchers may have overlooked in their own data, then often swooped in to collaborate.

“He knew everybody, with interests going off in every possible direction,” Professor Mumford said. “Every time he gave a talk, it was about something different.”

Dr. Mandelbrot traced his work on fractals to a question he first encountered as a young researcher: how long is the coast of Britain? The answer, he was surprised to discover, depends on how closely one looks [How many "genes" does the human DNA have? It depends on how closely one looks - AJP]. On a map an island may appear smooth, but zooming in will reveal jagged edges that add up to a longer coast. Zooming in further will reveal even more coastline.

“Here is a question, a staple of grade-school geometry that, if you think about it, is impossible,” Dr. Mandelbrot told The New York Times earlier this year in an interview. “The length of the coastline, in a sense, is infinite.”

In the 1950s, Dr. Mandelbrot proposed a simple but radical way to quantify the crookedness of such an object by assigning it a “fractal dimension,” an insight that has proved useful well beyond the field of cartography.
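
For exactly self-similar shapes, this "fractal dimension" reduces to the one-line similarity formula D = log N / log(1/r), where the shape is built from N copies of itself, each scaled down by the factor r. A short Python sketch, using the Koch curve as a stand-in for the coastline (Mandelbrot's 1967 paper estimated the west coast of Britain at roughly D = 1.25):

```python
import math

# Similarity dimension D = log(N) / log(1/r) for a shape built from
# N self-similar copies, each scaled down by the factor r.
def similarity_dimension(copies: int, scale: float) -> float:
    return math.log(copies) / math.log(1.0 / scale)

# Koch curve: 4 copies at 1/3 scale -> D ~ 1.262, "rougher" than a line (D = 1)
print(similarity_dimension(4, 1/3))

# Sanity check - a straight line: 3 copies at 1/3 scale -> D = 1.0
print(similarity_dimension(3, 1/3))
```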

Over nearly seven decades, working with dozens of scientists, Dr. Mandelbrot contributed to the fields of geology, medicine, cosmology and engineering. He used the geometry of fractals to explain how galaxies cluster, how wheat prices change over time and how mammalian brains fold as they grow, among other phenomena.

His influence has also been felt within the field of geometry, where he was one of the first to use computer graphics to study mathematical objects like the Mandelbrot set, which was named in his honor.

“I decided to go into fields where mathematicians would never go because the problems were badly stated,” Dr. Mandelbrot said. “I have played a strange role that none of my students dare to take.”

Benoît B. Mandelbrot (he added the middle initial himself, though it does not stand for a middle name) was born on Nov. 20, 1924, to a Lithuanian Jewish family in Warsaw. In 1936 his family fled the Nazis, first to Paris and then to the south of France, where he tended horses and fixed tools.

After the war he enrolled in the École Polytechnique in Paris, where his sharp eye compensated for a lack of conventional education. His career soon spanned the Atlantic. He earned a master’s degree in aeronautics at the California Institute of Technology, returned to Paris for his doctorate in mathematics in 1952, then went on to the Institute for Advanced Study in Princeton, N.J., for a postdoctoral degree under the mathematician John von Neumann.

After several years spent largely at the Centre National de la Recherche Scientifique in Paris, Dr. Mandelbrot was hired by I.B.M. in 1958 to work at the Thomas J. Watson Research Center in Yorktown Heights, N.Y. Although he worked frequently with academic researchers and served as a visiting professor at Harvard and the Massachusetts Institute of Technology, it was not until 1987 that he began to teach at Yale, where he earned tenure in 1999.

Dr. Mandelbrot received more than 15 honorary doctorates and served on the board of many scientific journals, as well as the Mandelbrot Foundation for Fractals. Instead of rigorously proving his insights in each field, he said he preferred to “stimulate the field by making bold and crazy conjectures” — and then move on before his claims had been verified. This habit earned him some skepticism in mathematical circles.

“He doesn’t spend months or years proving what he has observed,” said Heinz-Otto Peitgen, a professor of mathematics and biomedical sciences at the University of Bremen. And for that, he said, Dr. Mandelbrot “has received quite a bit of criticism.”

“But if we talk about impact inside mathematics, and applications in the sciences,” Professor Peitgen said, “he is one of the most important figures of the last 50 years.”

Besides his wife, Dr. Mandelbrot is survived by two sons, Laurent, of Paris, and Didier, of Newton, Mass., and three grandchildren.

When asked to look back on his career, Dr. Mandelbrot compared his own trajectory to the rough outlines of clouds and coastlines that drew him into the study of fractals in the 1950s.

“If you take the beginning and the end, I have had a conventional career,” he said, referring to his prestigious appointments in Paris and at Yale. “But it was not a straight line between the beginning and the end. It was a very crooked line.”

[How mammalian brains fold in a fractal way was followed up (after the initial ideas of Grosberg 20 years ago) almost exactly a year ago by Dr. Lander et al., for the fractal folding of the DNA; "Mr. President, the Genome is Fractal!". The primary concept of FractoGene (2002, by Pellionisz) reaches back to the Fractal Geometry of Cerebellar Purkinje Cells (1989), based on a musing of Mandelbrot in his classic book "Fractal Geometry of Nature" - AJP]

^ back to top


Going 'Beyond the Genome'

Genomeweb
October 14, 2010

BioMed Central's Beyond the Genome conference in Boston this week — which was held in conjunction with Genome Biology's 10th anniversary — showcased the work of several researchers whose ideas go beyond just sequencing.

The University of Maryland's Steven Salzberg kicked off the conference with a keynote speech about the work he and others are doing to try to estimate accurately how many genes a person has. In 1964, F. Vogel wrote a letter to Nature estimating that humans have 6.7 million genes. He was way off, Salzberg said, but it hasn't gotten any easier over the years to make the estimate more accurate. In the mid-1990s, three different papers estimated the count to be 50,000 to 100,000; 64,000; and 80,000. Even after the draft genome was published, the estimates varied widely. The public consortium estimated the count to be between 30,000 and 40,000, while Celera and its private partners estimated 26,588, with 12,000 additional "likely" genes. So far, the most accurate estimate is 22,333 human genes, Salzberg said, but much of the human genome remains poorly understood, and RNA-seq is still revealing new genes that may previously have been overlooked. In the end, Salzberg said, it's not as important to know how many genes there are as to know what they are and what they do.

George Church emphasized how important it is to continue to read the genome. About 2,000 genes are highly predictive and medically actionable, he said, and as the price of sequencing continues to drop, researchers will be able to find more genes they can work with to the benefit of human health. Church also stressed the importance of open-access data, and said there is a need for an open database that researchers can use to analyze each others' data.

Elaine Mardis spoke about her work with cancer genomics, and said that, in researching the way tumors work, validating tumor variants is important especially for dissemination of the information to the wider scientific community for further analysis. The speed of data generation is both challenging and enabling, she added.

The University of Washington's Jay Shendure talked about his lab's work with exome sequencing in autism studies. At least some percentage of autism is caused by coding mutations, and exome sequencing is useful in studies of the disorder because the technique can be used to focus in on a single gene instead of an entire region of the genome, Shendure said. He described a trio-based exome study done in his lab, where 60 exomes — from 20 autistic children and both of their parents — were sequenced, and then analyzed to identify Mendelian errors. The researchers found 16 de novo SNPs validated by Sanger from the 20 autism trios, and found two genes — GRIN2B and FOXP1 — which they think could be causative in autism.

The University of Colorado's Rob Knight and BGI's Jun Wang discussed their respective labs' work with microbes. Knight talked about the research he has done with obese and lean mice, and trying to elucidate the relationship between an organism's weight and its gut microbes. Wang talked about some of the studies BGI has done with diabetic patients, and said one study of Chinese type II diabetes patients discovered more than 500,000 novel bacterial genes and found 1,306 bacterial genes associated with diabetic patients, though whether the genes were the cause or the effect of diabetes is not yet known.

Comments:

Submitted by S. Pelech - Kinexus on Thu, 10/14/2010 - 14:08.

It is intriguing that even though the human genome was completely sequenced years ago, it is still unresolved exactly how many human genes actually exist. Mass spectrometry studies have revealed several protein sequences that were not originally described in gene databases. In my own experience, with the assignment of over 90,000 phospho-sites in predicted human proteins for PhosphoNET (www.phosphonet.ca), I have noticed several hundred proteins that were originally documented in UniProt (www.uniprot.org) but have since had their entries deleted without any replacements. Since these phosphoproteins were identified from cell lysates by mass spectrometry, the encoding genes obviously exist. Since UniProt currently lists just over 21,000 distinct human proteins, perhaps 4 to 5 percent of human proteins are still not tracked in the best repository of protein information we have. How well the 22,333 figure for the total number of human genes accounts for these anomalies identified by mass spectrometry analysis of proteins is also unclear.


Submitted by andras on Thu, 10/14/2010 - 19:47.

A better title would be: "FractoGene Recurses to the Genome". Both "going beyond the genome" and "counting the exact number of genes in human DNA" are exercises in futility - unless going beyond the genome is tracked through its full recursion, from intrinsic and extrinsic proteins back to the DNA>RNA>PROTEINS> cycle and onward, as the contiguous sequences formerly defined as "genes" yield to the facts of "alternative splicing" (one gene acting as many different genes when spliced in various different ways) and to the newly found facts that given contiguous sequences constitute functional units together with sequences very far downstream or upstream, totally outside the boundaries of the (now obsolete) "gene" definition. The Principle of Recursive Genome Function peer-reviewed science paper and the Google Tech Talk YouTube of our Genome Revolution define not just a one-way trip "beyond the genome" (akin to the Russians in the early days of the Space Age, blasting a dog into space and leaving it there to perish), but something more like sending a man to the Moon - and taking him safely back to Earth, repeatedly, again and again. It is within the context of recursive algorithms, such as fractal iterative recursion, that FractoGene governs the growth of fractal organelles (such as brain cells), organs (such as the lung) and organisms (such as the cauliflower Romanesco) from the seemingly scattered elements of genes, as guided by the demonstrably fractal genome, recursing through epigenomic channels back to the DNA. Pellionisz_at_JunkDNA.com (reprint requests to holgentech_at_gmail.com).
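
As a generic illustration of the kind of fractal recursive iteration invoked in the comment above - and explicitly not the FractoGene algorithm itself, which is not spelled out in this post - a toy Lindenmayer system in Python shows how a compact recursive rewriting rule generates a rapidly branching, organelle-like structure:

```python
# Toy Lindenmayer (L-) system: a one-line recursive rewriting rule expands
# into a branching structure, illustrating generically how little "code"
# fractal growth requires. (Illustrative only - not FractoGene.)
RULES = {"F": "F[+F]F[-F]F"}   # classic plant-like branching rule

def lsystem(axiom: str, steps: int) -> str:
    s = axiom
    for _ in range(steps):
        s = "".join(RULES.get(ch, ch) for ch in s)
    return s

# 'F' = grow a segment; '[' / ']' = fork a branch and return to the fork
for i in range(3):
    print(i, len(lsystem("F", i)))   # lengths 1, 11, 61: explosive branching
```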

^ back to top


Cold Spring Harbor Lab Says Benefits of ARRA Funding Will Outlast Stimulus Program

October 08, 2010

By Alex Philippidis

NEW YORK (GenomeWeb News) - Cold Spring Harbor Laboratory says it expects to benefit from the stimulus funding it received through the American Recovery and Reinvestment Act of 2009 well past the program's end next year.

"For CSHL, the injection of ARRA funds has been very positive and will have an impact past the two years of the funding in that it is generating new data that will lead to new projects and new opportunities to pursue grant funding from public and private sources," a laboratory spokeswoman, Dagnia Zeidlickis, told GenomeWeb Daily News this week.

CSHL secured $23.4 million of stimulus funding in 19 awards. The largest award, at more than $4.7 million over two years from the National Cancer Institute, funded the creation of a Molecular Target Discovery and Development Center, with the goal of determining which of the hundreds of genes that are altered in cancer actually play a role in causing the disease.

The center - part of a network of five such centers established nationwide - is evaluating the torrent of data from recent human cancer genome projects, as well as validating candidate genes in mouse models. The center hopes that information can help in discovering and validating new cancer drugs targeting molecular changes in the disease seen in patients. Scott Powers, an associate professor, is the project's principal investigator.

Next largest, at just over $2.5 million over two years from the National Heart, Lung, and Blood Institute, is a study of the epigenetic dynamics of developing germ cells and early mouse embryos. Gregory Hannon, professor and Howard Hughes Medical Institute investigator, served as PI for the study, part of which compares epigenetic profiles in early embryos derived from normal mice to those of early embryos in hormone-treated, super-ovulated mice, since hormone treatments are believed to alter the epigenetic state of some genes.

The research is designed to help understand hormone-assisted attempts at conception undergone by up to 1 million women each year.

CSHL also used almost $1.3 million of ARRA funds over two years from the National Institute of Mental Health to hire a developmental neurobiologist with expertise in neural circuit development and plasticity. Zeidlickis said the laboratory won't disclose the faculty member's identity until the appointment is finalized.

That person will join CSHL's current 50-member faculty, which includes professors, associate professors, assistant professors, and fellows.

A less costly project, using $497,423 in ARRA funds from the National Science Foundation, consisted of renovations of the greenhouse at CSHL's Uplands Farm Research Field Station, which supports research into Arabidopsis and crop plants as well as the plant genetics teaching programs of CSHL's Dolan DNA Learning Center.

In an abstract of its grant application, CSHL concluded: "These facilities are inadequate to meet the demands of current genome driven plant biology research. The infrastructural improvements will provide appropriate growing conditions for a greater diversity of plant species and will increase the energy efficiency of the facilities."

"With new research project funding, upgraded infrastructure, and a new faculty position, CSHL will be able to continue to pursue the kind of innovative research that we are best known for," Zeidlickis said. "This research should lead to new opportunities for funding from Federal programs that are increasingly recognizing innovation and transformative research - like the TRO1 and Challenge Grants that we have been successful in securing - in addition to ARRA."

ARRA is the $814 billion measure signed into law by President Obama last year with the intent of stimulating the nation's economy. The law required NIH to spend, or commit to spend, all $10 billion available to the agency under the legislation by Sept. 30, 2010 - though ARRA money doesn't have to be in the hands of grant winners, generally, until Sept. 29, 2011.

^ back to top


New Research Buildings Open at Cold Spring Harbor Laboratory

Research at the $100 million Hillside Laboratories will address “grand challenges” facing science and society

Cold Spring Harbor, NY – Cold Spring Harbor Laboratory (CSHL) cut the ribbon on six new research buildings, collectively called the Hillside Laboratories, at a dedication ceremony on June 12. The $100 million complex represents the largest expansion in CSHL’s 119-year history and increases active research space by 40%. When fully occupied the buildings will house approximately 200 new research-related personnel, which will mark a 20% increase in employment at CSHL.

At the dedication ceremony, CSHL President Bruce Stillman said, “This expansion will allow Cold Spring Harbor Laboratory to do more of what it has always done best: perform pioneering research at the leading edge of biological science, particularly in the areas of cancer and neuroscience, but also in the emerging field of quantitative biology.” Dr. Stillman spoke before a distinguished audience of CSHL staff and supporters from the research, business, philanthropic and government communities, including Nobel laureate Philip Sharp.

In his dedication remarks, Dr. Sharp, perhaps best known as the co-discoverer of gene splicing, suggested how research to be performed in the Hillside buildings “will help humanity surmount some of the great challenges of our time.” He recalled that the first public announcement calling for a national effort to sequence the human genome was made at the dedication of a new research building at CSHL in 1985. He then issued his own implicit challenge to the scientists who will occupy the gleaming new Hillside buildings. Sharp envisioned a future in which data collected in millions of patient electronic medical records will be merged with genome scans of the same individuals. This would serve as the basis for profound insights into cancer and mental illness, two of the foci of work in the new Hillside Laboratories. Such an effort, he said, might also help usher in an era of personalized medicine.

CSHL’s President Stillman in his remarks thanked gathered guests for their support of the expansion project, saying, “Such a significant addition to our research space was made possible by the generous contributions of private donors, philanthropic foundations and the New York State ‘Gen*NY*sis’ initiative, which provided a grant of $20 million. They had the foresight to understand the significance of this expansion to the Laboratory’s long-term mission to advance our ability to diagnose and develop more effective ways of treating cancers, neurological diseases and other major causes of human suffering.”

A capital campaign raised over $200 million to support the construction of the new research buildings, recruitment of new investigators, equipment for new research projects and endowment for research and graduate education. The project was also supported by a bond issued with the Nassau County Industrial Development Authority.

The Hillside Laboratories

Called the “Hillside Laboratories,” the six new research buildings total 100,000 square feet and include:

The Donald Everett Axinn Laboratory, for research on the neurobiological roots of mental illness;

the Nancy and Frederick DeMatteis Laboratory, for research on the genetic basis of human diseases, including autism, cancer, and schizophrenia;

the David H. Koch Laboratory, home to a newly established Center for Quantitative Biology, where an interdisciplinary team of top mathematicians, physicists, and computer scientists will develop mathematical approaches to interpret and understand complex biological data sets;

the William L. and Marjorie A. Matheson Laboratory, for research on the tumor microenvironment and metastasis;

the Leslie and Jean Quick Laboratory, for research on new therapeutic strategies for treating cancer; and

the Wendt Family Laboratory, for research on neurodevelopment and the wiring of complex circuits in the brain.

Designed to foster the progress of scientific discovery

Speaking at the opening ceremony, CSHL Board Chairman Eduardo Mestre said, “An important goal for the design of the Hillside Laboratories was to encourage collaboration among scientists and foster the progress of scientific discovery, while preserving the historic appeal of CSHL’s picturesque campus. Looking at this beautiful complex I believe we have succeeded brilliantly.”

The six new buildings are actually outcroppings of a single interconnected structure with an infrastructure that is integrated beneath ground level. Each of the laboratories rises from the ground in a different place, giving the appearance of six discrete buildings. Nestled into the hillside, the buildings are connected at various elevations and share a common utility grid that will make them 30% more energy-efficient than prevailing standards for laboratory facilities.

In order to preserve the idyllic nature and existing environment of the 115-acre campus, the Hillside Laboratories have been designed to complement rather than overpower CSHL’s smaller, historic buildings along the western shoreline of Cold Spring Harbor. In addition to the six research buildings, the new complex features the Laurie and Leo Guthart Discovery Tower, the tallest structure in the group, which serves to vent heat from the six buildings while providing an aesthetic “cap” for the ensemble.

Other unique features of the complex include a water element that threads like a mountain stream through its center; a 200-foot-long bridge; an award-winning storm water management system; meticulous landscape design; and spectacular new vantage points for viewing Cold Spring Harbor.

The Hillside Laboratories were designed by Centerbrook Architects and Planners LLP, which was selected for this project based on its history of award-winning designs of earlier CSHL buildings and its commitment to creating unique and uplifting designs that fulfill program and budget objectives while enriching the natural surroundings.

In the construction phase of the project CSHL emphasized the hiring of local Long Island craftsmen and -women. It is estimated that the project provided as many as 250 construction industry jobs on Long Island during the course of its nine-year planning and construction phases.

Hillside Laboratory Facts

The construction project is the largest ever undertaken by CSHL and will increase research space by 40%.

The expansion at CSHL will create 200 new high-paying, high-tech jobs on Long Island.

More than 250 project contractors, consultants, and craftsmen who worked on the project were from local Long Island companies.

Construction costs on the new 100,000-square-foot building complex totaled $100 million.

The laboratory buildings are designed to be 30% more energy-efficient than standards set for laboratories by ASHRAE (American Society of Heating, Refrigerating, and Air-Conditioning Engineers).

An innovative environmental design for storm-water management uses newly-created wetlands, rain gardens, and bio-swales to filter storm water runoff from the hillside before it makes its way into the harbor. This system has a capacity of 254,000 gallons, and was awarded the 2007 Project of the Year by the Nassau County Society of Professional Engineers.

Nearly 700 trees have been planted to reforest the approximately 11 acres of forest that were cleared to make way for construction.

All organic material from the site was retained for reuse. Trees were chipped and mulched for site restoration, and topsoil was scraped away from the building site, retained onsite, and reapplied during site restoration.

Approximately 200,000 cubic yards of excess earth were removed from the site. A sand-mining operation was set up on-site, screening out rock, gravel, fine sand and other high-quality construction material before it was removed from the site. Sale of the construction material reduced the cost of excavation from $4 million to $2 million.

William H. Grover, FAIA, and James C. Childress, FAIA, are Centerbrook Architects’ Partners-in-Charge of the Hillside complex. Todd E. Andrews, AIA, is the Project Manager. Visit www.centerbrook.com for more information.

Art Brings is the Vice President, Chief Facilities Officer in charge of the project.

Cold Spring Harbor Laboratory is a private, nonprofit research and education institution dedicated to exploring molecular biology and genetics in order to advance the understanding and ability to diagnose and treat cancers, neurological diseases and other causes of human suffering.

^ back to top


What to Do with All That Data?

October 07, 2010

By Alex Philippidis

Recent technological advances in genomics have caused something both "terrifying" and "exciting," Mike the Mad Biologist says - "a massive amount of data." Mike says that genome sequencing is already fast and cheap, but it will become faster and cheaper; the problem is evolving from how to sequence genomes to get informative data to how best to use the information we already have. "We are entering an era where the time and money costs won't be focused on raw sequence generation, but on the informatics needed to build high-quality genomes with those data," Mike says. While it's great to be able to contemplate a $100 genome, the costs of storing and using the data could be upwards of $2,500. Researchers must find ways to store the data and analyze everything that's already been sequenced. "You have eleventy gajillion genomes. Now what? Many of the analytical methods use 'N-squared' algorithms: that is, a 10-fold increase in data requires a 100-fold increase in computation. And that's optimistic," he says.
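
Mike's "N-squared" point is easy to make concrete: any all-vs-all analysis of N genomes touches N(N-1)/2 pairs, so a ten-fold increase in genomes means roughly a hundred-fold increase in computation. A minimal sketch (the genome counts below are arbitrary examples):

```python
# All-vs-all comparison of n genomes costs n*(n-1)/2 pairwise analyses -
# the "N-squared" scaling behind the data-deluge worry quoted above.
def pairwise_comparisons(n: int) -> int:
    return n * (n - 1) // 2

for n in (1_000, 10_000):
    print(f"{n:>6} genomes -> {pairwise_comparisons(n):>12,} pairs")
# 10x more genomes (1,000 -> 10,000) means ~100x more pairs
# (499,500 -> 49,995,000).
```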

Submitted by Stephen.Craig.J... on Thu, 10/07/2010 - 16:47.

I am thinking there needs to be an independent organization, comprised of experts, which continually analyzes and interprets the new information and translates it into a usable form for clinicians and other end users.


Submitted by S. Pelech - Kinexus on Thu, 10/07/2010 - 18:41.

Unless there is a well-funded parallel program of biomedical research that can make sense of the genomics data from a proteomics perspective, the genome sequencing efforts will yield primarily correlative data that will offer limited risk assessment at best. In view of the complexities of cellular regulation and metabolism, it will not provide conclusive data about the actual cause and progression of an individual's disease and how best to treat it. Unfortunately, much of the current efforts to understand the roles and regulation of proteins are undertaken in simple animal models that are attractive primarily because of their ease of genetic manipulation. However, such studies have little relevance to the human condition. Without a better understanding of how mutations in genes affect protein function and protein interactions in a human context, genome-based diagnostics will in most situations probably not be much more beneficial than phrenology.

Phrenology is an ancient practice that was extremely popular about 200 years ago. It was based on the idea that formation of an individual's skull and bumps on their head could reveal information about their conduct and intellectual capacities. Phrenological thinking was influential in 19th-century psychiatry and modern neuroscience. While this practice is pretty much completely ridiculed now, it is amazing how many people still use astrology, I Ching, Tarot cards, biorhythms and other questionable practices to guide their lives, including medical decisions. I fear that an even wider portion of the general population will put their faith into whole genome-based analyses, especially with the strong encouragement of companies that could realize huge profits from offering such services. The most likely consequences, apart from yet another way for the sick to be parted from their money, is a lot more anxiety in the healthy population as well.

While I am sure that many of my colleagues may view my comparison of gene sequencing with obvious pseudo-sciences as inappropriate, the pace at which such genomics services are becoming offered to the general population warrants such consideration. We know much too little about the consequences of some 15 million mutations and other polymorphisms in the human genome to make sensible predictions about health risks. For only a few dozen human genes, primarily affected in cancer, do we have sufficient data to make reasonable pronouncements about the cause of a disease and the means to do something effective about it in the way of targeted therapy.

While it is easy to become exuberant about the power and potential of genomic analyses, the limitations of this type of technology alone to improve human health will soon become painfully obvious. Ultimately, economics will be the main driver of whether it is truly worthwhile to pursue whole genomic sequencing on mass. This will not be dictated simply by the cost of whole genome sequencing, but as pointed out by others, the costs of storing and analyzing the data, and whether significant improvement outcomes in health care delivery actually materialize.

I am much less optimistic about the prospects of this. When I grew up in the 1960's, there was excitement about human colonies on the moon and manned missions to Mars before the end of the 20th century. Nuclear power, including fusion, was going to solve our energy problems by this time. I believe in 30 years, when we look back at current plans to sequence tens to hundreds of thousands of human genomes, we will be amazed at the naivety of proponents for this undertaking.

^ back to top


Pacific Biosciences Targeting $15-$17 Share Price for IPO

October 05, 2010

By a GenomeWeb staff reporter

NEW YORK (GenomeWeb News) – Pacific Biosciences will make 12.5 million shares available at a price between $15 and $17 per share for its initial public offering, the company said in an amended preliminary prospectus filed with the US Securities and Exchange Commission today.

Today's amended S-1 document follows a similar filing last week with the SEC in which the company disclosed it had done a 1-for-2 reverse stock split in September and increased the amount it expects to raise from its IPO to $230 million from the $200 million originally targeted when the company announced its proposed IPO in August.

The Menlo Park, Calif.-based single-molecule sequencing firm also said in today's filing that it is making about 1.9 million shares available to its underwriters to purchase to cover over-allotments.

At the midpoint of its share price range, $16, PacBio's net proceeds from the offering would be about $182.5 million or $210.4 million if the underwriters fully exercise their option to purchase additional shares, the company said.

The underwriters on the offering are JP Morgan, Morgan Stanley, Deutsche Bank Securities, and Piper Jaffray.

PacBio previously had stated that through the first half of 2010 it had recorded almost $1.2 million in revenues, all from government grants, and a net loss of $63 million, or $99.58 per share. In H1 2009, it had no revenues and a net loss of $35.1 million, or $75.39 per share.

The company had cash and cash equivalents of $90.1 million as of June 30, it said.

[There is hardly any question that with the stock market easing back to 11,000, three successful IPOs could clinch the return of a definite recovery. It is widely rumored that in the social Internet media sector FaceBook is poised for one. The other two can be guessed at in Affordable DNA Sequencing: Complete Genomics (Mountain View) and, filed almost simultaneously, Pacific Biosciences (Menlo Park). Thus the remaining crucial questions are "when", "which one first", and "which will be more successful than the other"? Answers to these heretofore open questions might harbor some very good news for Silicon Valley - and for the US Economy, as heralded in the YouTube "Personal Genome Computing" panel by Churchill Club in early 2009. (FaceBook) / Pellionisz_at_JunkDNA.com]

^ back to top


The Road to the $1,000 Genome

Bio-IT World, September 28, 2010

SPECIAL REPORT

The term next-generation sequencing (NGS) has been around for so long it has become almost meaningless. We use “NGS” to describe platforms that are so well established they are almost institutions, and future (3rd-, 4th-, or whatever) generations promising to do for terrestrial triage what Mr Spock’s Tricorder did for intergalactic health care. But as the costs of consumables keep falling, turning the data-generation aspect of NGS increasingly into a commodity, the all-important problems of data analysis, storage, and medical interpretation loom ever larger.

“There is a growing gap between the generation of massively parallel sequencing output and the ability to process and analyze the resulting data,” says Canadian cancer researcher John McPherson, feeling the pain of NGS neophytes left to negotiate “a bewildering maze of base calling, alignment, assembly, and analysis tools with often incomplete documentation and no idea how to compare and validate their outputs.” Bridging this gap is essential, or the coveted $1,000 genome will come with a $20,000 analysis price tag.

“The cost of DNA sequencing might not matter in a few years,” says the Broad Institute’s Chad Nusbaum. “People are saying they’ll be able to sequence the human genome for $100 or less. That’s lovely, but it still could cost you $2,500 to store the data, so the cost of storage ultimately becomes the limiting factor, not the cost of sequencing. We can quibble about the dollars and cents, but you can’t argue about the trends at all.”

But these issues look relatively trivial compared to the challenge of mining a personal genome sequence for medically actionable benefit. Stanford’s chair of bioengineering, Russ Altman, points out that not only is the cost of sequencing “essentially free,” but the computational cost of dealing with the data is also trivial. “I mean, we might need a big computer, but big computers exist, they can be amortized, and it’s not a big deal. But the interpretation of the data will be keeping us busy for the next 50 years.”

Or as Bruce Korf, the president of the American College of Medical Genetics, puts it: “We are close to having a $1,000 genome sequence, but this may be accompanied by a $1,000,000 interpretation.”

Arbimagical Goal

The “$1,000 genome” is, in the view of Infinity Pharmaceuticals’ Keith Robison, an “arbimagical goal”—an arbitrary target that has nevertheless obtained a magical notoriety through repetition. The catchphrase was first coined in 2001, although by whom isn’t entirely clear. The University of Wisconsin’s David Schwartz insists he proposed the term during a National Human Genome Research Institute (NHGRI) retreat in 2001. During a breakout session, he said that NHGRI needed a new technology to complete a human genome sequence in a day. Asked to price that, Schwartz paused: “I thought for a moment and responded, ‘$1,000.’” However, NHGRI officials say they had already coined the term.

The $1,000 genome caught on a year later, when Craig Venter and Gerry Rubin hosted a major symposium in Boston (see, “Wanted: The $1000 Genome,” Bio•IT World, Nov 2002). Venter invited George Church and five other hopefuls to present new sequencing technologies, none more riveting than U.S. Genomics founder Eugene Chan, who described an ingenious technology to unfurl DNA molecules that would soon sequence a human genome in an hour. (The company abandoned its sequencing program a year later.)

Another of those hopefuls was 454 Life Sciences, which in 2007 produced the first personal genome sequenced using NGS, that of Jim Watson, at a cost of about $1 million. Since then, the cost of sequencing has plummeted to less than $10,000 in 2010. Much of that has been fueled by the competition between Illumina and Applied Biosystems (ABI). When Illumina said its HiSeq 2000 could sequence a human genome for $10,000, ABI countered with a $6,000 genome, dropping to $3,000 at 99.99% accuracy.

Earlier this year, Complete Genomics reported its first full human genomes in Science. One of those belonged to George Church, whose genome was sequenced for about $1,500. CEO Cliff Reid told us earlier this year that Complete Genomics now routinely sequenced human genomes at 30x coverage for less than $1,000 in reagent costs.

The ever-quotable Clive Brown, formerly a central figure at Solexa and now VP development and informatics for Oxford Nanopore, a 3rd-generation sequencing company, says: “I like to think of the Gen 2 systems as giant fighting dinosaurs, ‘[gigabases] per run—grr—arggh’ etc., a volcano of data spewing behind them in a Jurassic landscape—Sequanosaurus Rex. Meanwhile, in the undergrowth, the Gen 3 ‘mammals’ are quietly getting on with evolving and adapting to the imminent climate change... smaller, faster, more agile, and more intelligent.”

Nearly all the 2nd-generation platforms have placed bets on 3rd-gen technologies. Illumina has partnered with Oxford Nanopore; Life Technologies has countered by acquiring Ion Torrent Systems; and Roche is teaming up with IBM. PacBio has talked about a “15-minute” genome by 2014, Halcyon Molecular promises a “$100 genome,” and a Harvard start-up called GnuBio has placed a bet on a mere $30 genome.

David Dooling of The Genome Center at Washington University points out that the widely debated cost of the Human Genome Project included everything—the instruments, personnel, overhead, consumables, and IT. But the $1,000 genome—or in 2010 numbers, the $10,000 genome—refers only to flow cells and reagents. Clearly, the true cost of a genome sequence is much higher (see, “The Grand Illusion”). In fact, Dooling estimates the true cost of a “$10,000 genome” as closer to $30,000, by the time one has considered instrument depreciation and sample prep, personnel and IT, informatics and validation, management and overheads.
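As a back-of-envelope illustration of Dooling's accounting, the sticker price versus true cost might break down as follows. This is a minimal sketch; every line item below is our assumption for illustration, not a breakdown Dooling has published.

```python
# Hypothetical ledger behind a "$10,000 genome" whose true cost is ~$30,000.
# All line items are illustrative assumptions, not Dooling's published data.
cost_usd = {
    "flow cells and reagents (the advertised price)": 10_000,
    "instrument depreciation":                          6_000,
    "sample prep and personnel":                        5_000,
    "informatics, IT and validation":                   6_000,
    "management and overheads":                         3_000,
}
print(f"True cost: ${sum(cost_usd.values()):,}")  # -> True cost: $30,000
```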

“If you are just costing reagents, most of the vendors could claim a $1,000 genome right now,” says Brown. “A more interesting question is: ‘$1,000 genome—so what?’ It’s an odd goal because the closer you get to it the less relevant it becomes.”

Special Interests

This special issue of Bio•IT World contains a series of stories and essays that provide some useful perspectives on the march to the $1,000 genome, which some regard as a medical imperative and others a grand illusion.

We get an up-close look at sequencing operations at the Broad Institute, which has been the U.S. flagship genome center for a decade (see page 30). We also meet the leaders of BGI Americas, which aims to provide sequencing capacity and analysis for labs big and small, while managing editor Allison Proffitt gleefully visits BGI’s prized new sequencing center under construction in Hong Kong (page 42).

We look at the genesis of Solexa, the British company that provided the raw technology for Illumina, the best-selling NGS platform to date (page 52). We meet Kevin Ulmer, a man who has spent more than three decades trying to develop the killer app for the $1,000 genome (page 64). And we meet NABsys, a 3rd-generation technology taking aim at the myriad clinical applications of NGS (page 61).

Given that the costs of data analysis and storage will increasingly dominate the NGS equation, Alissa Poh reviews some of the latest software solutions on offer (page 58), while Allison Proffitt appraises some of the latest data storage technologies (page 38).

Finally, we meet some of the people and organizations—from bioinformaticians and medical geneticists to pathologists and software engineers—who are developing new ideas and resources for clinical genomic interpretation (page 48). And we profile Hugh Rienhoff, physician and founder of My Daughter’s DNA.org, and follow his inspirational quest to solve his daughter’s mystery condition (page 34).

Also in this report are invited commentaries from genomics experts at two big pharma companies—Amgen’s Sasha Kamb and Novartis’ Keith Johnson and colleagues—discussing the potential applications of, and adoption hurdles for, NGS in pharma. We also have our regular columns, including BioTeam’s Michele Clamp, our colleague Eric Glazer on social media, and a preview of an exciting online community called NGS Leaders.

We hope you enjoy this special report on the road to the $1,000 genome as much as we have enjoyed reporting and preparing it.

—Kevin Davies, Mark Gabrenya and Allison Proffitt

[George Church has been talking about "the zero dollar DNA sequence" for years now. The "Great Inflection" from emphasis on Sequencing to emphasis on Analytics of Data has been looming at least since the YouTube video "Is IT ready for the Dreaded DNA Data Deluge?" 2 years ago. We almost agree with Stanford's Russ Altman that The Principle of Recursive Genome Function will keep us preoccupied for 50 years (I think 500 is more likely...). (FaceBook) / Pellionisz_at_JunkDNA.com]

^ back to top


Revolution [was] Postponed [for too long, over Half a Century - AJP]

Scientific American
By Stephen S. Hall
October 18, 2010

[The full 8-page article is to be purchased for $5.99 at Scientific American]

The Human Genome Project has failed so far to produce the medical miracles that scientists promised. Biologists are now divided over what, if anything, went wrong—and what needs to happen next

In Brief:

In the year 2000 leaders of the Human Genome Project announced completion of the first rough draft of the human genome. They predicted that follow-up research could pave the way to personalized medicine within as few as 10 years.

So far the work has yielded few medical applications, although the insights have revolutionized biology research.

Some leading geneticists argue that a key strategy for seeking medical insights into complex common diseases— known as the “common variant” hypothesis— is fundamentally flawed. Others say the strategy is valid, but more time is needed to achieve the expected payoffs.

Next-generation methods for studying the genome should soon help resolve the controversy and advance research into the genetic roots of major diseases.

---

Comment #3

Yehuda Elyada

06:26 PM 10/1/10

The complexity of the concept of a gene requires analytic tools far more sophisticated than the naive assumption that there exists a one-to-one correspondence rule between gene variation and phenotype traits. The DNA is not a "blueprint" in the simple metaphor borrowed from engineering drawings. A more fitting metaphor is a musical score, defining the timing and amplitudes of the series of notes expressed by the various organs in the assemblage. Each musical instrument produces different waveforms due to its unique note expression mechanism, but music is made when all are controlled by a single set of notes and playing instructions and synchronized by the conductor. The waveforms combine to generate something "higher" than just more complex waveforms - just as life is more than metabolism. The musical metaphor suggests how to analyze the relationship between DNA and phenotype.

The "holistic" approach to appreciation of music is based on subjective, human-centric psychological response to various harmonics, note sequences, tempo, emphasis, etc. No "reductionist" approach can grasp the essence of what make music a different experience from noise. However, we do not possess a similar mental capacity to analyze DNA expression, so we have to develop a reductionist approach to enable analysis based on mathematical rigor. This is where physics can point the way.

From the point of view of physics, music is a time-varying complex waveform that can be broken into its simple components. By doing so, you move from the complex world of waveforms into the linear, "orthogonal" world of frequencies. You pay for this transformation by losing the ability to grasp the wholeness of the musical experience - a heavenly symphony becomes just flickering bars on your spectrum analyzer - but the technique of the Fourier transform is essential when you want to zero in on an acoustic trait of a musical instrument.

It is somewhat naive to assume that the same transformation that proved so useful and central in physics (not just in acoustics: where would quantum mechanics be without the Fourier transform?) will unlock the genotype-phenotype conundrum. But it is a promising first step in injecting some more sophisticated mathematics into genomics. To gain insight into the rules of the game you have to start with an overarching paradigm: our aim is to uncover a many-to-many transformation (perhaps expressed as a matrix) between two complementary world-views, the genome (the vector of DNA bases) and the phenotype (the vector whose components are traits).
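To make the commenter's closing proposal concrete, here is one minimal formalization of the idea; the notation is ours, not Elyada's:

```latex
% Illustrative only: a linear, many-to-many genotype-to-phenotype map.
% g = genome vector (DNA bases), p = phenotype vector (traits), M = the map.
\[
  \mathbf{p} = M\,\mathbf{g}, \qquad p_i = \sum_{j} M_{ij}\, g_j ,
\]
% each trait p_i draws on many loci g_j, and each locus feeds many traits.
% The Fourier analogy: a periodic waveform f(t) is re-expressed in an
% orthogonal basis,
\[
  f(t) = \sum_{k} c_k\, e^{ik\omega t}, \qquad
  c_k = \frac{\omega}{2\pi} \int_{0}^{2\pi/\omega} f(t)\, e^{-ik\omega t}\, dt ,
\]
% and the hope is that an analogous change of basis would make the
% genotype-phenotype map M sparse enough to analyze.
```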

[The Scientific American article (though for copyright reasons we cannot reveal its contents in full) blows open for the public the core problem of "PostModern Genomics" - that scientists are split down the middle; some would not even admit that anything was deadly wrong with the "frighteningly unsophisticated" (and mathematically void) "theory" of (holo)genome function, oversimplified to "genes" (1.3%) and "Junk DNA" (98.7%), with intronic and intergenic, as well as epigenomic, pathways not only neglected but their research discouraged by withdrawing ongoing government grants. As commenter #3 points out, the article is fundamentally flawed: without considering overarching new paradigms (such as, e.g., "Recursive Genome Function" - AJP) and sophisticated mathematics (such as, e.g., the "Fractal Approach" - AJP), in principle one cannot tell whether anything based on obsolete axioms is valid or not.

For a preview of this message, see the YouTube video "Is IT ready for the Dreaded DNA Data Deluge?" from 2 years ago. Also, see The Principle of Recursive Genome Function (FaceBook) / Pellionisz_at_JunkDNA.com]

^ back to top


The $1,000,000 Genome Interpretation

Bio-IT World
By Kevin Davies
October 1, 2010

Groups of clinicians, academics, and some savvy software companies are crafting the tools and ecosystem to make medical sense of the sequence.

It is doubtful that the scientists and physicians who first started talking about the $1,000 genome in 2001 could have imagined that we would be on the verge of that achievement within the decade. As the cost of sequencing continues to freefall, the challenge of solving the data analysis and storage problems becomes more pressing. But those issues are nothing compared to the challenge facing the clinical community seeking to mine the genome for clinically actionable information—what one respected clinical geneticist calls “the $1 million interpretation.” Judging from the first handful of published human genome sequences, the size of that task is immense.

Although these are early days, a number of groups are making progress in creating new pipelines and educational programs to prepare a medical ecosystem that is ill-equipped to cope with the imminent flood of personal genome sequencing.

Pathologists’ Clearing House

The pathology department at one of Boston’s most storied hospitals isn’t necessarily the place where one might expect to find the stirrings of a medical genomics revolution, but that’s what’s happening at Beth Israel Deaconess Medical Center (BIDMC) under the auspices of department chairman Jeffrey Saffitz.

“I see this as ground-breaking change in pathology and in medicine,” he says.

Together with Mark Boguski and colleagues, Saffitz has introduced a genomic medicine module for his residents (see “Training Day”). And under the stewardship of applied mathematician Peter Tonellato, he is building an open-source genome annotation pipeline that might pave the way for routine medical inspection once whole-genome sequencing crosses the $1,000 genome threshold.

All well and good: but why pathology? [In my 25 years in University Medical Schools I find it quite unheard of to teach Pathology first, followed by Physiology. For Pathology, taking Physiology is a prerequisite - AJP] “We are the stewards of tissue and we perform all the clinical laboratory testing. This has been our function historically for many years. But we have a sense that the landscape is changing,” says Saffitz. Genetic testing, he argues, must be conducted under the same type of quality assessment, regulatory oversight, and CLIA certification as provided by the College of American Pathologists (CAP), “and should be done by physicians who are specifically trained to do this. That’s us!”

“The brilliance of that,” says Boguski, a pathologist by training, “is that it removes a lot of the mysticism surrounding genomics and makes it just another laboratory test.” There’s really nothing magical or different about DNA, insists Saffitz. “We regard a file of sequence data as a specimen that you send to the lab, just like a urine specimen!”

BIDMC is a medium-sized hospital that conducts 7 million tests a year. Arriving in Boston five years ago, Saffitz began recruiting visionaries to shape “the future of molecular diagnostics” and help the discipline of pathology become a clearinghouse for genomic medicine in a way that is “going to revolutionize the way we do medicine.”

Boguski is best known as a bioinformatician who spent a decade at the National Center for Biotechnology Information (NCBI). He sums up the genomic medicine informatics challenge thus: “You have 3 billion pieces of information that have to be reduced to six bytes of clinically actionable information. That’s what pathologists do! They take in samples—body fluids and tissues—and we give either a yes/no answer or a very small range of values that allow those clinicians to make a decision.”

Increasingly, he says, pathology will become a discipline that depends on high-performance computing to extract clinically actionable information from genome data. That frightens many physicians, but Boguski cites a precedent. “Modern imaging technology would not be possible were it not for high-performance computing, but it’s built into the machine!” he says. “Most practicing radiologists don’t think about the algorithms for reconstructing images from the X-rays. Most pathologists in the future won’t think about that stuff either—it will just be part and parcel of their trade. Nevertheless, we have to invent those technologies.”

Math Lab

Mathematician Peter Tonellato has a deep interest in software systems for the clinic, and formulated the idea of a whole-genome clinical clearinghouse within pathology. “We have to start thinking about genetics as just another component of data information and knowledge that has to be integrated into the electronic health record. Stop labeling genetics as something different and new and completely outside the mainstream medical establishment and move it back into the fundamental foundational effort of medical activity.”

Come the $1,000 genome, it will simply make sense to sequence everyone’s tumor, he says. Just as pathologists study tissue biopsies under a microscope, “we’re going to be sequencing it in parallel and figuring out which pathways and targets are pertinent to that person’s condition.” Simply doing more specialized tests isn’t the solution. “How many tens of millions of dollars and how many years has it taken to validate [the warfarin] test?” asks Boguski. “Multiply that by 10,000 other genes and it simply doesn’t scale. We’re going to have to look at this in a whole new way.”

Tonellato has been funded by Siemens and Partners HealthCare to construct an open-source, whole-genome analysis pipeline. Although not commercially released, the pipeline is built and being used for some pilot projects. He is also partnering with companies—including GenomeQuest—who want to do the sequencing analysis in a best-of-breed competition to establish the most refined NGS mapping utilities and annotation tools. The goal is to annotate those variants in a clinically actionable way down to Boguski’s six bytes of information and the drug response recommendation. “We think we’re as far forward in terms of doing that in an innovative and pragmatic way as anyone,” says Tonellato.

Using the cloud (Amazon Web Services), his team has lowered the cost of whole-genome annotation to less than $2,000. “Everybody talks about the $1,000 genome, but they don’t talk about the $2,000 mapping problem behind the $1,000 genome,” he says. It takes Tonellato’s group about one week using five nodes for the resequencing, mapping and variant calling, while the medical annotation takes three people about a month. High-quality computer scientists have to be paid too, he says. “You can’t just talk about the sequencing costs.”
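Those cloud numbers pass a rough sanity check. The node-hour price below is a hypothetical on-demand rate, not a figure from the article, which quotes only the totals:

```python
# Back-of-envelope check of "five nodes for about one week" vs. the <$2,000 figure.
nodes = 5
hours = 7 * 24                 # "about one week" of wall-clock time
usd_per_node_hour = 2.0        # assumed cloud on-demand rate (hypothetical)

compute_cost = nodes * hours * usd_per_node_hour
print(f"Estimated compute cost: ${compute_cost:,.0f}")  # ~$1,680, under $2,000
```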

Of course, it is most unlikely that hospitals will start running massive NGS and compute centers. “We envision a day where every clinical laboratory in every hospital in this country can do this testing,” says Saffitz. “They’re not going to do the sequencing, but there’ll be a machine where they can basically acquire the data, analyze it, and send a report to the doctor saying, ‘This is what we found, this is what it means, this is what you do.’” Where the sequencing is done isn’t of great concern. “We actually treat sequencing as a black box,” says Boguski. What’s important is that the hospital’s cost requirements and quality standards (and those of the FDA) are met. But Tonellato reckons it would be “very odd to have U.S. samples sent abroad for sequencing to Hong Kong or India... and then sit around and wait for the CLIA-certified, clinically accurate results to come back to us. That may happen in the future, but we have to get our own house in order first.”

Another problem is the current state of the gene variant databases, which Boguski calls “completely inadequate” in terms of clinical grade annotation. Where such a resource belongs is open to debate but Boguski is certain it does not belong with the government. “The government is not a health care delivery organization. Whatever that database is, it needs to operate under the same CLIA standards as the actual tests.”

Pathologists have traditionally interacted with patients when they are sick. “But more and more,” says Saffitz, “we’re going to be analyzing the genomes of people who are well, and I hope assuming a very prominent role in the preservation of health and preempting disease.”

Quake Aftershocks

The most comprehensive clinical genome analysis to date was reported in May 2010 in the Lancet. Stanford cardiologist Euan Ashley and colleagues, including Atul Butte and Russ Altman, Stanford’s chair of bioengineering, appraised the genome of Stephen Quake (see, “A Single Man,” Bio•IT World, Sept 2009). “This really needs to be done for a clinical audience to show them what the future is going to be like,” says Altman, who is also director of the biomedical informatics program and chief architect of the PharmGKB pharmacogenomics knowledgebase. The task of interpreting Quake’s genome involved more than 20 collaborators, including bioethicist Hank Greely and genetic counselor Kelly Ormond. When discussions turned to the risk of sudden cardiac arrest (Quake’s family has a history of heart disease), Ormond would invite Quake to leave the room until a consensus was reached.

Altman’s own group was able to predict Quake’s response to about 100 drugs. Some of it was imprecise, but he realized that, “especially for the pharmacogenomics, we are much closer [to clinical relevance] than I realized.” He said he would “bet the house” on the results dealing with statin myopathy, warfarin and clopidogrel dosing. The Stanford team also tried linking environmental and disease risk, but Altman admits that is farther from clinical practice. The Lancet study drew high praise from the BIDMC team. “As good as it gets,” is Tonellato’s verdict. “But go down to some town in the middle of America and say, ‘What are you going to do with this genome dataset for your patient?’... Is medicine ready for genetics yet or not? There is a long way to go.”

Since the publication, Altman has received inquiries from companies interested in doing similar “genomic markups” and licensing his group’s annotations. Altman intends to hire an M.D. curator to complement his Ph.D. curators, someone who can highlight the clinical significance of research data. Altman says he would be happy to have PharmGKB data included “in any and all pipelines.” Meanwhile, Ashley is leading a Stanford program to build a computer pipeline that reproduces the Quake analysis on a larger scale.

In a rational world, Altman says, it seems logical to sequence human genomes at birth and put the data in a secure database, querying it only when you know what you’re going to do with the results. That’s in an ideal world. In the United States, he notes dryly, some people do not trust governmental databases. “I could imagine if it’s cheap enough, that people will actually resequence the genome on a need-to-know basis, simply so they don’t have to store it. I think that’s a little bit silly, but in order to get genomic medicine effected, I’m not going to lose the fight over the database.”

Whoever ends up doing clinical genomic sequencing in the future, Altman says they will have to document high-quality data with a rapid turnaround. “We will then put [the data] through the pipeline—hopefully the Stanford pipeline or whatever pipeline seems to be winning—and then we will query it as needed and as requested by the physicians on a need-to-know basis.”

1,500 Mutations

Genome Commons was established by Berkeley computational biologist Steve Brenner to foster the creation of public tools and resources for personal genome interpretation. He wants to build an open access Genome Commons Database and the Genome Commons Navigator. He is also launching a community experiment called CAGI (The Critical Assessment of Genome Interpretation) to evaluate computational methods for predicting phenotypes from genome variation data (http://genomeinterpretation.org).

One notable private effort in clinical genome annotation is that of Omicia, a San Francisco-based software company founded by Martin Reese in 2002.

Omicia is taking genome data and extracting clinical meaning, focusing on DNA variation, rather than gene expression or pathways. “We have one of the best systems for interpreting the genome clinically,” claims Reese. He started with Victor McKusick’s classic Mendelian Inheritance in Man catalogue, which now lives online as OMIM, mapping a “golden set” of disease mutations to the reference genome. Omicia is also developing algorithms to predict the effect of protein-coding variants to better understand which mutations are medically relevant.

Reese sums up the goal: “You have 21,000 protein coding mutations compared to the reference genome. 10,000 of them are non-synonymous. We have 3,500 in disease genes. That’s roughly 15%. So 15% of 10,000 is 1,500 protein coding mutations. The goal is to interpret 1,500 mutations.”
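One plausible reading of that arithmetic takes the 3,500 figure as the count of known disease genes among roughly 21,000 protein-coding genes (our interpretation; the quote itself slides between variants and genes):

```python
# Reese's numbers: ~15% of genes have known disease links, so ~15% of the
# non-synonymous variants in a personal genome should land in disease genes.
nonsynonymous_variants = 10_000
disease_genes, total_genes = 3_500, 21_000

disease_fraction = disease_genes / total_genes        # ~0.17, "roughly 15%"
to_interpret = round(0.15 * nonsynonymous_variants)   # -> 1,500 mutations
print(f"{disease_fraction:.0%} of genes; {to_interpret:,} mutations to interpret")
```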

For the time being, Omicia is offering its services through collaborations. Reese has a three-year collaboration with Applied Biosystems, and was a co-author on the first NGS human genome paper using the SOLiD platform in 2009. Then there is the Life Alliance, a cancer genome alliance, featuring various medical centers and Life Technologies. “We’re doing their interpretation of these cancer genomes for 100 untreatable cancers,” says Reese.

Presenting the data to a physician is a challenge, says Kiruluta, but not as big a challenge as the scant amount of time a physician has to see a patient. “The reporting is to help a physician make a decision quickly—green light, red light. Then there’s a much more detailed interface behind the scenes,” where other medical professionals can study the patient’s data in more detail.

Reese sees advantages to the commercial approach for genome software compared to academic solutions. “This will be a big play in the next few years as people make clinical decisions. So the quality of the software, the QC of the assembly, how transparent you are, the annotation, is critical. It will be a big problem for academia to do that—you know how it is when a postdoc writes something!” [Yes, I do. Government Software does not measure up, because it is not in touch with any real market. University Software does not measure up either: when a postdoc writes code - and graduates - the University Software either takes on a new life in Industry (if that is where the postdoc goes) or quite simply dies without maintenance. For Genome Informatics software the only way to go is through "Industrialization of Genomics" - AJP]

Reese has also been spearheading the effort to develop a new Genome Variation Format with Mark Yandell (University of Utah) and others, which was recently published in Genome Biology.

DNA Partners

The challenge facing the affable Samuel (Sandy) Aronson, executive director for IT at the Partners HealthCare Center for Personalized Genetic Medicine (PCPGM), and PCPGM’s clinical laboratory director, Heidi Rehm, is to deliver clinically actionable information to physicians in the Partners HealthCare network. “This challenge cannot be entirely solved by a single institution,” Aronson notes. “It takes a network of institutions working together.”

Rehm maintains a knowledge base of 95 genes that are routinely curated by the PCPGM’s Laboratory of Molecular Medicine and supplies information to physicians on the status of those genes in their patients in real time. The PCPGM’s GeneInsight suite, developed by Aronson’s team, has been in use for about seven years. There are two components—one for the laboratory, the other for the clinician. The lab section consists of a knowledgebase—the tests, genes, variants, drug dosing, etc.—as well as an infrastructure to generate reports via the Genome Variant Interpretation Engine (GVIE).

On the clinical side is a new entity, the Patient Genome Explorer (PGE), which allows clinicians to receive test results from an affiliated lab and query patient records. “The PGE, without a doubt, is one of a kind,” says Rehm. “There’s no other system out there. There’s a lot of excitement about it. Labs are choosing us for testing because we offer that service.” When an update is made to the PCPGM knowledgebase on a variant that is clinically significant, the PGE proactively notifies the clinicians caring for patients with that variant. If there are 100 clinics with 10 patients each, and Rehm updates the knowledgebase, then 1,000 patient updates are dispatched automatically.

For inherited disease testing, the alert changes the variant from one of five categories to another: 1) pathogenic, 2) likely pathogenic, 3) unknown, 4) likely benign, or 5) benign. The PGE made its debut last summer in the Brigham and Women’s Hospital Department of Cardiology. When the system launched, a dozen “high alerts” (meaning a variant has shifted from one major category to another) were immediately dispatched. The physicians’ response has been really positive, says Aronson. “There’s a significant disconnect between the level of quality of data being used for clinical purposes and the quality of data in the research environment,” says Rehm. “Our hope with the distribution of this infrastructure is to get more data validated for clinical use.”
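A minimal sketch of that alerting logic, with hypothetical names throughout; GeneInsight's actual implementation is not described in the article:

```python
from enum import IntEnum

class Category(IntEnum):           # the five categories, as listed above
    PATHOGENIC = 1
    LIKELY_PATHOGENIC = 2
    UNKNOWN = 3
    LIKELY_BENIGN = 4
    BENIGN = 5

# Hypothetical registry: variant -> clinicians caring for patients who carry it.
subscribers = {"GENE_X:c.123A>G": ["clinic1/dr_a", "clinic2/dr_b"]}

def reclassify(variant, old, new, notify):
    """Fan out one update per subscribed clinician, flagging a 'high alert'
    when the variant crosses between the pathogenic and benign sides."""
    pathogenic_side = lambda c: c <= Category.LIKELY_PATHOGENIC
    high = pathogenic_side(old) != pathogenic_side(new)
    for clinician in subscribers.get(variant, []):
        notify(clinician, variant, old.name, new.name, "HIGH" if high else "routine")

# One knowledgebase update dispatches an alert to every affected clinician.
reclassify("GENE_X:c.123A>G", Category.UNKNOWN, Category.LIKELY_PATHOGENIC,
           lambda *fields: print(*fields))
```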

Core Challenges

The Partners effort is a worthy start, but the larger goal is to build a network where labs with expertise in other genetic disorders such as cystic fibrosis contribute their data, perhaps by offering attribution or a nominal transaction fee. “We can’t maintain data on every gene, but we’re willing to establish nodes of expertise,” says Rehm. As for the IT infrastructure, Aronson hopes to enable organizations to create a node on the network, link to the PGEs, and then operate under their own business models—whatever it takes to make the data accessible. The first external partner that linked to GeneInsight was Intermountain Healthcare (IHC) in Utah. “We believe this is the first transfer of fully structured genetic results between institutions so that they got into IHC’s electronic health record and are now available for decision support,” says Aronson.

Aronson anticipates a day when whole-genome sequencing for patients will be a clinical reality. “It’s very much on our radar,” he says, but doesn’t appear unduly concerned. After all, he says, the PGE is designed to store highly validated clinical information, and he doesn’t expect the millions of variants in a whole genome to contain enough clinically actionable variants to overwhelm the database. The challenge will come in understanding complex/low-penetrance diseases, “where we’re more algorithmically dependent. That will require new infrastructure.”

A bigger problem is facilitating the business models that will solve personalized medicine challenges. “Our goal is to expand networking, adding labs, PGEs and going after a network effect,” says Aronson. “We have a structure that could present an answer to how you—in a true patient-specific, clinically actionable way that clinicians can use in their workflow—help interpret the data.”

[Comment (1)

Kevin has pointed to a most important development if genomics is to deliver on long-awaited promises. An available, affordable personal genome has little value without analysis and interpretation delivered in a consumable application.

Our "Genome Revolution" often invites analogy with the Space Age, e.g. comparing the Genome Project to the Moon Project.

The comparison is unfair to the Moon Project unless genomics delivers the full ride.

The Moon Project promised “to put a man on the Moon, and bring him safely back”. If we sequence the entire human DNA and fail to deliver the harder half of interpreting the sequence, the comparison is more akin to the Russians blasting a dog into space and leaving him there.

The ambitious approaches noted in this article all seem to be charting new territory and struggling to break from long-held erroneous beliefs. Trying to fix hereditary diseases without solid principles of recursive genome function as explained by (holo)genome physiology and biophysics is like trying to fix a broken television set without understanding how it works in the first place.

Thank you, Kevin, for addressing the key topic for our critical time.

---

For a preview of this message, see the YouTube video "Is IT ready for the Dreaded DNA Data Deluge?" from 2 years ago. Also, see The Principle of Recursive Genome Function (FaceBook) / Pellionisz_at_JunkDNA.com]

^ back to top


Mastering Information for Personal Medicine

Pathway
Sept. 27, 2010
Eric E. Schadt, Chief Scientific Officer of Pacific Biosciences

To make the most of new technologies, medicine must come to grips with the mountains of data it will produce.

Sometimes, mastery over the raging influx of information all around us has life and death consequences. Consider national security agencies charged with ensuring our safety by detecting the next big terrorist threat. Presented with worldwide email traffic, phone conversations, credit card purchase histories, video images from pervasive surveillance cameras and intelligence reports, the challenge for these agencies is to integrate all this abundant, disparate information and to present it to analysts in ways that help to identify significant threats more quickly.

Soon we will all be faced with a similar challenge that could have a dramatic impact on our well-being. In the not-too-distant future, average Americans will have access to detailed information about their genetic makeup, the molecular states of cells and tissues in their bodies, longitudinal collections of readings on their weight, blood pressure, glucose and insulin measurements, and myriad other clinical traits informative about disease, disease risk and drug response. Whereas classic molecular biology and clinical medicine offered only simple links between molecular entities and disease (for example, relating insulin levels and glucose levels to risks of diabetes), new technologies will provide comprehensive snapshots of living systems at a hierarchy of levels, enabling a more holistic view of human systems and the molecular states underlying disease physiologies. All this data—once appropriately integrated and presented—will allow us and our doctors to make the best possible informed decisions about our risks for disease; it will also help us to tailor treatment strategies to our particular disease subtypes and to our individual genetic and environmental backgrounds.

Powerful examples of how this new era of personalized medicine will change diagnosis and treatment are already available. A now-routine genetic test can indicate whether breast cancer patients will respond to treatment with the drug Herceptin, and testing for certain changes in DNA that affect blood-clotting can help doctors decide what dose of the anticoagulant warfarin would be safest for certain patients.

However, unlike the doctor of today, armed with a stethoscope and thermometer, tomorrow’s doctors will have access to a multitude of biosensor chips and imaging technologies capable of monitoring variations in our DNA and in the activities of genes and proteins that drive all cellular functions. They will be able to order scans with single-cell resolution for any organ in our bodies. How will such data be managed? How will it be analyzed and contrasted with similar types of data collected from populations so that the totality of these data help us to better understand our specific condition? How will the complex models derived from such data be interpreted and then applied to us as individuals by our doctors? Is the medical community prepared—and are individuals ready—for this revolution?

Managing Mountains of Data

The biomedical and life sciences are not the first to encounter this type of big data deluge. Google, which is among the most sophisticated handlers of big data on the planet, aims for no less than “organizing the world’s information” by employing high-performance, large-scale computing to manage the petabytes of data available on the Internet. (A petabyte is one million gigabytes. By one estimate, 50 petabytes could store every written work in every language on earth since the beginning of recorded history.)

Companies such as Microsoft, Google, Amazon and Facebook have become proficient at distributing petabytes of data over massively parallel computer architectures (think hundreds of thousands of sophisticated, highly interconnected computers all working in concert to solve common sets of problems). Their technologies grab bits of those data on the fly, link them together and present them to users on request in fractions of a second.

However, the problems those companies have solved thus far are much simpler than understanding how the millions of variations in DNA distinguish us as individuals, the activity levels of genes and proteins in all the cell types and tissues in our bodies, and the physiological states associated with disease. The data revolution in the biomedical and life sciences is powered by technologies that provide insights into the operation of living systems, the most complex machines on the planet. But achieving such understanding will require that we tame the burgeoning information those technologies generate.

Within the next five to 10 years, for example, companies like Pacific Biosciences will deliver new single-molecule, real-time sequencing technologies that will enable scans of a person’s entire genome (DNA), transcriptome (RNA) and chemical “epigenetic” modifications of the genome in a matter of minutes and for less than $100. For a single individual, hundreds of gigabytes of this information could be gathered from many tissue and cell types, at multiple time points and under varying environmental stresses. Layer on top of it more data from imaging technologies, other sensing technologies and personal medical records, and one might possibly produce terabytes (trillions of bytes) of data per individual, and well into the petabyte range and beyond for populations of individuals. Hidden in those collective data sets will be answers relating to the causes of disease and the best treatments for disease at the individual level.
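A toy version of that scaling argument, with every multiplier assumed for illustration rather than taken from the essay:

```python
# Rough per-person and per-population data volumes (all inputs are assumptions).
gb_per_sample = 100      # "hundreds of gigabytes", taken at the low end
tissue_types = 10        # tissues and cell types sampled (assumed)
time_points = 5          # longitudinal samples per tissue (assumed)

tb_per_person = gb_per_sample * tissue_types * time_points / 1_000
pb_per_million = tb_per_person * 1_000_000 / 1_000

print(f"~{tb_per_person:.0f} TB per person; ~{pb_per_million:,.0f} PB per million people")
```

Even with these conservative inputs the total lands well beyond the petabyte range for a modest cohort, which is the essay's point.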

Integrating such data and constructing predictive models will require approaches more akin to those now employed by physicists, climatologists and workers in other strongly quantitative disciplines. Biomedical investigators and information specialists will need to develop tools and software platforms that can integrate the large-scale, diverse data into complex models; experimental researchers will need to be able to use these models and refine them iteratively to improve their ability to assess the risk and progression of disease and the best treatment strategies. In the end, these models will need to be able to move into clinical settings where doctors can employ them effectively to improve patients’ conditions without necessarily having to understand all of the underlying complexities that produced the models.

Only by marrying information technology to the life sciences and biotechnology can we realize the astonishing potential of the vast amounts of biological data that new generations of devices can gather and share. Such data, if properly integrated and analyzed, will enable personalized medicine strategies that could lead to everyone making better choices, not only on treating disease, but preventing it altogether.

Eric E. Schadt is chief scientific officer at Pacific Biosciences in Menlo Park, Calif. and co-founder and a director of Sage Bionetworks in Seattle, Wash.

[For a preview of this message, see YouTube "Is IT ready for the Dreaded DNA Data Deluge?" 2 years ago - (FaceBook) / Pellionisz_at_JunkDNA.com]

^ back to top


Cacao Genome Database Promises Long Term Sustainability

Chocolate-based Economy? (Potato, too... ) - AJP

Triple Pundit
September 20, 2010
by Leon Kaye

It is one of the oldest foods and is the subject of ancient texts and myths. One of the top ten most traded commodities in the world, this plant is a huge part of the economies of countries ranging from Cote d’Ivoire to Ecuador to Papua New Guinea. The finished product props up some of the world’s best known brands, and it exudes luxury while also contributing to a brutal way of life for some of the world’s poorest people—including hundreds of thousands of children. The cacao tree is also the subject of science, from botanists, agronomists, and now, geneticists.

Now scientists have decoded 92% of the cacao tree’s genome. Funded by MARS with the support of the US Department of Agriculture, IBM, and several universities, the Cacao Genome Database project is three years ahead of schedule. With the sequenced genotype, Matina 1-6, the project promises to solve such problems as pests and diseases that often plague cacao farmers, and in the long run, could improve both the production and sustainability of the cacao industry.

With this genome sequencing, cacao will join other commodities including rice, corn, and wheat, all of which have already gone through the process. Promises abound: improved crop yields, heartier cacao beans, and improved production within the entire supply chain from farmers to chocolatiers. The project also tackles the long term sustainability of what some would call big chocolate: the demands of giants including Hershey, MARS, Cadbury, Nestle, Kraft, and Lindt. Over the past quarter-century, the growing global appetite for cacao has caused its global production to double, but the increase has come through more land use, not improved yields.

So will we all nosh on Matina Bars or Matina Kisses in the near future? It definitely could boost demand for organic and fair trade chocolate brands, as plenty of customers will add chocolate to the “non-GMO” shopping list. Large cacao farming operations stand to benefit as well. The environment possibly could be a winner, with a reduction in the use of pesticides and other chemicals. Plenty of long term questions, however, remain. Will less common varietals of cacao trees survive? What about smaller farmers, who could find themselves squeezed by the price of coveted pods and seedlings?

Or is this just the reality farmers, producers, and consumers must face in order for an industry to survive? The global cacao industry has lost US$700 million over the past 15 years from a trio of fungal diseases alone. Such losses do not affect only the bottom line of companies like Nestle & Hershey: the livelihoods of many people who have limited economic opportunities may hang in the balance as well.

[As detailed in "Genome Based Economy", the present "Genome Revolution" is not at all the first chapter in entirely changing the global economy. Norman Borlaug (Nobel Prize, 1970) started the "Green Revolution" that, with the help of the Genomics of its time, saved billions of people from starvation (and, by removing the reality of dying of hunger, turned India and China into the global powers they are now). With the global population exploding beyond 7 billion, there is universal agreement that a "Second Green Revolution" is needed. By means of full DNA sequencing, PostModern Genomics delivers; no wonder that product and food companies such as NESTLE and KRAFT pitch in with very substantial funds, and that agricultural giants like Monsanto are among the first ten on Pacific Biosciences' "short list" of customers who have already bought the PacBio SMRT sequencer in cash. Another aspect of this news is that the already existing "Personalized Chocolate, best fitting to your Genome" (with technology shown here) will reach masses of consumers. (Without, and way before, full DNA sequencing, diabetics already have sugar-free chocolate...). Some might say "chocolate is insignificant" (which it isn't). Potato certainly is one of the main sources of global nutrients (and look what the US did with rice for India in the First Green Revolution...). Now a high-protein potato is claimed to have been developed by India, for themselves and perhaps for the World. - (FaceBook) / Pellionisz_at_JunkDNA.com]

^ back to top


US clinics quietly embrace whole-genome sequencing

Published online 14 September 2010 | Nature | doi:10.1038/news.2010.465

It may be small-scale and without fanfare, but genomic medicine has clearly arrived in the United States. A handful of physicians have quietly begun using whole-genome sequencing in attempts to diagnose patients whose conditions defy other available t