Newsletter of HoloGenomics

Genomics, Epigenomics integrated into Informatics:

HoloGenomics

SKIP TO TABLE OF CONTENTS

SKIP TO NEWS

andras_at_pellionisz_dot_com

holgentech_at_gmail_dot_com

Four Zero Eight 891-718_Seven

The Next Big Thing in Silicon Valley:

Google-Stanford joint Venture for Precision Medicine

Dawning on Apple that the Personal Genome will end up on Smartphones

Grail combines Microsoft, Amazon Founders, Illumina Board Dir. (Gates, Bezos, Flatley)

Intel going for Precision Medicine

GE Health sponsors Venter's Human Longevity (by Venter, in Silicon Valley with Dr. Och)

All within a 15 mile radius of Silicon Valley.

Technology is ready, Biology is catching up, fast.

Key is the secured Intellectual Property

"Fractal Genome Grows Fractal Organisms"

You See Above The Future: "Double Technology Disruption" Of New School Genomics:

Apple Smartphone Genome Analytics by SmidgION of Oxford Nanopore!

Science, 2016, July 22 (Folding DNA full PDF)

Science, 2016, July 22 (COVER figure of Self-similar [fractal] proteins)

Science, 2009, October (COVER figure of fractal folding of DNA)

(Below is the Abstract from Pellionisz, Cold Spring Harbor presentation, September 2009)

A Compilation by Andras J. Pellionisz, see Contact, Bio and References here

SCHOLARLY SUMMARY OF ACCOMPLISHMENTS BY ANDRAS PELLIONISZ

Any/all IP-related Contact is solely through

NEW ADDRESS

Attorney Kevin Roe, Esq. FractoGene Legal Department

12280 Saratoga-Sunnyvale Road, Suite 203

Saratoga, CA 95070

Secured contact to Dr. Pellionisz

regarding Academic, Board, Non-Profit activities:

andras_at_pellionisz_dot_com or cell Four-Zero-Eight - 891- Seven - One - Eight - Seven

Skip to HoloGenomics Table of Contents


Who

Genome is Fractal? - "Yeah, for sure!"

(Eric Schadt, Double-degree mathematician, Ph.D. in Biomathematics, Sept 15, 2014)

listen here

Mendelspod interview with Eric Schadt, Director of the $600 M Institute of Genomics and Multiscale Biology, NYC (Sept. 15, 2014)

Q [Theral Timpson, Mendelspod]: I have read that you are a Ph.D. in bio-mathematics?

A [Eric Schadt, Ph.D. in bio-mathematics]: Yes, bio-mathematics.

Q: I have recently met a Hungarian-American scientist, András Pellionisz, and he says that we need to bring math into biology and genetics and he says that

THE GENOME IS A FRACTAL.

Do you buy any of that?

A [Eric Schadt]: YEAH, FOR SURE!

[What is the significance of Eric Schadt's confirmation of FractoGene (the utility derived from a fractal genome growing fractal organisms)? Eric Schadt's credentials - a double-degree mathematician with a Ph.D. in bio-mathematics, a sterling record at Merck and Pacific Biosciences, and now Director of the $600 M "Mount Sinai Institute for Genomics and Multi-scale Biology" - combined with his unbiased, straight-as-an-arrow academic and personal integrity, would be extremely difficult to beat globally for forming an independent professional judgement based on a top command of both bio-mathematics and information theory & technology. Two times seven years have passed since the Human Genome Project: 2000-2007, when ENCODE-I first concluded that "Junk DNA is anything but" and the Central Dogma was proven to be one of the most harmful mistakes "of the history of molecular biology", followed by the wilderness of 2007-2014. From 2000 to 2014, genomics essentially existed as "a new science without valid or even universally agreed upon definitions of theoretical axioms". Characteristically, even Eric Lander heralded globally that "nothing is true of the most important fundamental assumptions" (yet in 2009 put the Hilbert-fractal of the genome on the cover of Science, just two weeks after George Church invited Dr. Pellionisz to his Cold Spring Harbor Meeting). Detractors had to swallow their (sometimes ugly) words - the only "alternatives" to the mathematically solid and software-enabling FractoGene being a random sample of metaphors, such as "genome regulation is turning genes on and off" or "the genome is a language" (found not to be true twenty years ago, see Flam 1994).

Eric Schadt's academic endorsement of FractoGene goes back consistently to 2010 (if the fractal genotype were found to be experimentally linked to the fractal phenotype, "it would be truly revolutionary"). Well, since 2011 the compromised fractal globule has been linked, by scores of top-notch independent experimental studies worldwide, to cancer, autism, schizophrenia and a slew of auto-immune diseases.

What will the "academic endorsement" result in? First, as in the case of Prof. Schadt, leading academic centers are likely to gain intellectual leadership through schools of advanced study, where non-profit applications (see below; already over a thousand) are streamlined by a thought leader of non-linear dynamics as the intrinsic mathematics of living systems. Second, the IP (augmented by trade secrets since the last CIP in 2007) is likely to result in a for-profit application monopoly (in force over the US market till mid-March of 2026)]


[Dr. Pellionisz is legally permitted to practice Compensated Professional Services (Analysis, Advisorship, Consultantship, Board Membership, etc) as long as there is no "Conflict of Interest", through Secured Contact (see above).

Communication regarding Intellectual Property of any kind, including but not limited to patents, trade secrets, and know-how associated with Dr. Pellionisz, must be strictly gated by "Attorney Kevin Roe, Esq., FractoGene Legal Department" (see above)]

Skip to Most Recent News (2014-2012)


Archives

2014

2011-2013

2010

2009

2008

2007 Post-Encode

2007 Pre-Encode

2006

2005

1972-2004


The Decade of Genomic Uncertainty is Over

The FractoGene Decade (2002-2012)

Pellionisz' FractoGene, 2002 (early media coverage)


Pellionisz' "FractoGene" patent: priority date 2002, issued in 2012 (see the 2002 priority date and the 2007 CIP filing in Google Patents 8,280,641). The recursive fractal iteration utility was also disseminated in a peer-reviewed paper and in the Google Tech Talk YouTube "Is IT Ready for the Dreaded DNA Data Deluge?", both in 2008, and presented in September 2009 at Cold Spring Harbor. The issued patent is in force till late March, 2026. The invention drew utility from RELATING genomic and organismic fractal properties. "Methods" were as described in the body of the application, plus ~750 pages of "Incorporation by Reference" ("should be treated as part of the text of the application as filed", see US Law 2163.07(b)). State-of-the-art methods beyond the CIP of Oct. 18, 2007 are handled as "Trade Secrets", as customary in the strongest combinations of Intellectual Property Portfolios.

"Evidence for" and/or "Consistent with"??

As evident from the title of the paper above, its authors clearly refer to "evidence". Other authors of independent experimental investigations - an escalating number after the initial decade - consider their results merely "consistent with" the fractal organization found in the genome and/or in physiological/pathological (e.g. cancerous) organisms.

With the significance of such claims rapidly diverging in value ("evidence for" becoming extremely precious, while "consistent with" is generally regarded as almost meaningless), authors are respectfully requested to clarify their (sometimes unclear or ambiguous) claims: do they place themselves in the valuable category of "providing evidence for", or in the almost meaningless general class of "consistent with"? Clarification sent to HolGenTech_at_gmail_dot_com will help proper citation, if any. - Dr. Pellionisz

By 2012, independent researchers had arrived at the breakthrough consensus, overdue since 2002. First ENCODE 2007, then ENCODE 2012, replaced the mistaken axioms of "Junk DNA" and "Central Dogma" with the "nolo contendere assumption" of "The Principle of Recursive Genome Function" (2008), which requires the experimentally found "nearest neighbor organization" of the Hilbert-fractal of the genome (demonstrated at the later date of 2009). The independent illustration above of both the genome and organisms exhibiting fractal properties puts the challenge plainly in their RELATION. Methods, e.g. relating genomic fractal defects to the fractality of tumors in the genome disease of cancer, constitute secured intellectual property:


Eric Lander (Science Adviser to the President and Director of the Broad Institute) et al. delivered the message
on the cover of Science Magazine (Oct. 9, 2009) to the effect:

"Mr. President; The Genome is Fractal !"


"Something like this (disruptions in the fractal structures leading to phenotypic change)" were shown to be true (starting in 2011 November, see top-ranking independent experimentalist's publications, cited below).

"Yeah, of course" - it is now "truly revolutionary".

There are only two questions for everyone:

(a) "What is in it for me?"

(b) "What is the deal?"


Proof of Concept (Clogged Fractal Structure Linked to Cancer) was already available
at the Hyderabad Conference (February 15, 2012)
Dozens of additional Independent Experimental Proof of Concept Papers were cited in
Hyderabad Proceedings


The genome is replete with repeats. If the fractal structure is compromised
(see the laser beam pointing at where the "proximity" is clogged),
such defects are already linked to cancer(s), autism, schizophrenia, auto-immune diseases, etc.


Table of Contents

(Sep 24) Microsoft initiatives treat cancer as a computing problem
(Sep 19) 'Junk DNA' tells mice - and snakes how to grow a backbone
(Sep 10) The Oncoming Double Disruption of Technology of New School Genomics; SmidgION by Oxford Nanopore
(Sep 05) Dr. Pellionisz comments on the 7 R&D articles posted on Labor Day, 2016. Andras_at_Pellionisz_dot_com
(Sep 05) Multiscale modeling of cellular epigenetic states: stochasticity in molecular networks, chromatin folding in cell nuclei, and tissue pattern formation of cells
(Sep 05) Fractal Dimension of Tc-99m DTPA GSA Estimates Pathologic Liver Injury due to Chemotherapy in Liver Cancer Patients
(Sep 05) Unique fractal evaluation and therapeutic implications of mitochondrial morphology in malignant mesothelioma
(Sep 05) Systems Medicine of Cancer: Bringing Together Clinical Data and Nonlinear Dynamics of Genetic Networks
(Sep 05) ASXL2 promotes proliferation of breast cancer cells by linking ERα to histone methylation
(Sep 05) Researchers identify two proteins important for the demethylation of DNA
(Sep 05) Mathematical Modelling and Prediction of the Effect of Chemotherapy on Cancer Cells
(Sep 01) Breast cancer researchers look beyond genes to identify more drivers of disease development
(Aug 30) HPE Synergy Shows Promise And Progress With Early Adopters
(Aug 22) Illumina - Can GRAIL Deal A Death Blow In The War Against Cancer?
(Aug 15) Precision medicine: Analytics, data science and EHRs in the new age
(Aug 10) Stanford Medicine, Google team up to harness power of data science for health care
(Aug 09) Leroy Hood 2002; "The Genome is a System" (needs some system theory)
(Aug 08) What if Dawkin's "Selfish Gene" would have been "Selfish FractoGene"?
(Jul 25) DNA pioneer James Watson: The cancer moonshot is ‘crap’ but there is still hope
(Jul 23) Science Cover Issue 2016 July 22 with Fractal folding of DNA and of Proteins
(Jul 22) CRISPR Immunotherapy ahead
(Jul 21) Qatari genomes provide a reference for the Middle East
(Jul 17) CIA Chief Claims that Genome Editing May Be Used For Biological Warfare
(Jul 14) Are Early Harbingers of Alzheimer’s Scattered Across the Genome?
(Jul 04) [Independence Day] A 27-year-old who worked for Apple as a teenager wants to make a yearly blood test to diagnose cancer — and he just got $5.5 million from Silicon Valley VCs to pull it off
(Jun 25) President Obama Hints He May Head to Silicon Valley for His Next Job
(Jun 12) Is This the Biggest Threat Yet to Illumina?
(May 24) Big talk about big data, but little collaboration
(Apr 25) Can Silicon Valley cure cancer?
(Apr 14) Life Code (Bioinformatics): The Biggest Disruption Ever? (Juan Enriquez)
(Apr 02) Craig Venter: We Are Not Ready to Edit Human Embryos Yet
(Mar 29) Big Data Meets Big Biology in San Diego on March 31: The Agenda
(Mar 08) Illumina Forms New Company to Enable Early Cancer Detection via Blood-Based Screening
(Mar 08) Illumina CEO Jay Flatley Built The DNA Sequencing Market. Now He's Stepping Down
(Mar 07) CRISPR: gene editing is just the beginning
(Mar 01) Geneticists debate whether focus should shift from sequencing genomes to analysing function.
(Feb 10) Top U.S. Intelligence Official Calls Gene Editing a WMD Threat
(Feb 06) Craig Venter: We Are Not Ready to Edit Human Embryos Yet
(Feb 01) UK scientists gain licence to edit genes in human embryos
(Jan 30) Why Eric Lander morphed from science god to punching bag
(Jan 24) Easy DNA Editing Will Remake the World. Buckle Up.
(Jan 23) Genome Editing and the Future of Human Species
(Jan 20) Chinese-scientists-create-designer-dogs-by-genetic-engineering
(Jan 16) Gene edited pigs may soon become organ donors
(Jan 13) New life for pig-to-human transplants
(Jan 10) Genome Editing [What is the code that we are editing?]
(Jan 03) CRISPR helps heal mice with muscular dystrophy
(Jan 01) Credit for CRISPR: A Conversation with George Church
^2016^
(Dec 23) Genome misfolding unearthed as new path to cancer [Defects of Hilbert-Fractal Clog "Proximity", see Figure above - Andras_at_Pellionisz_dot_com]
(Dec 22) The Fractal Brain and Fractal Genome [by bright layperson Wai h tsang]
(Dec 20) 2016 - The Genome Appliance; Taking the Genome Further in Healthcare
(Dec 15) Whole-Genome Analysis of the Simons Simplex Collection (SSC)
(Nov 25) The role of big data in medicine - Bringing together the right talent
(Oct 06) Researchers ID Copy Number Changes Associated With Cancer in Normal Cells
(Oct 05) Genome Pioneer: We Have The Dangerous Power To Control Evolution
(Sep 24) Genetic Analysis Supports Prediction That Spontaneous Rare Gene Mutations Cause Half Of All Autism Cases
(Sep 22) Sorry, Obama: Venter has no plans to share genomic data
(Sep 21) Google (NASDAQ: GOOG) Dips into Healthcare Business
(Sep 15) Head of Mental Health Institute Leaving for Google Life Sciences [Exodus from Government to Private Sphere]
(Sep 01) Bill Gates and Google back genome editing firm Editas
(Sep 01) Zephyr Health grabs $17.5M with infusion from Google Ventures
(Sep 01) Evolution 2.0 by Perry Marshall
(Aug 10) Genome researchers raise alarm over big data
(July 25) The case for copy number variations in autism
(July 25) Intricate DNA flips, swaps found in people with autism
(July 25) The mystery of the instant noodle chromosomes
(July 22) Can ‘jumping genes’ cause cancer chaos?
(July 21) Why you should share your genetic profile [the Noble Academic Dream and the Harsh Business Climate]
(July 20) Why James Watson says the ‘war on cancer’ is fighting the wrong enemy
(July 19) National Cancer Institute: Fractal Geometry at Critical Juncture of Cancer Research
(July 15) Apple may soon collect your DNA as part of a new ResearchKit program
(July 10) Sequencing the genome creates so much data we don’t know what to do with it
(July 07) The living realm depicted by the fractal geometry, (endorsement of FractoGene by Gabriele A. Losa)
(July 03) Google and Broad Institute Team Up to Bring Genomic Analysis to the Cloud
(June 19) GlaxoSmithKline, Searching For Hit Drugs, Pours $95M Into DNA 'Dark Matter'
(June 09) Recurrent somatic mutations in regulatory regions of human cancer genomes (Nature Genetics, dominant author Michael Snyder)
(May 22) Big Data (Stanford): 2015 Nobelist Michael Levitt (multi-scale biology) endorses the Fractal Approach to new school of genomics
(Apr 15) Eric Schadt - Big Data is revealing about the world’s trickiest diseases
(Apr 15) IBM Announces Deals With Apple, Johnson And Johnson, And Medtronic In Bid To Transform Health Care
(Apr 09) An 'evolutionary relic' of the genome causes cancer
(Mar 31) Time Magazine Cover Issue - Closing the Cancer Gap
(Mar 31) We have run out of money - time to start thinking!
(Mar 27) The Genome (both DNA and RNA) is replete with repeats. The question is the mathematics (fractals)
(Mar 21) On the Fractal Design in Human Brain and Nervous Tissue - Losa recognizes FractoGene
(Mar 16) Cracking the code of human life: The Birth of BioInformatics & Computational Genomics
(Feb 26) Future of Genomic Medicine Depends on Sharing Information - Eric Lander to Bangalore
(Feb 25) Genetic Geometry Takes Shape (and it is fractal, see FractoGene by Pellionisz, 2002)
(Feb 19) The $2 Trillion Trilemma of Global Precision Medicine
(Feb 11) BGI Pushing for Analytics
(Feb 10) Who was next to President Obama at the perhaps critical get-together (2011)?
(Feb 03) Round II of "Government vs Private Sector" - or "Is Our Understanding of Genome Regulation Ready for the Dreaded DNA Data Tsunami?"
(Jan 31) Houston, We've Got a Problem!
(Jan 27) Small snippets of genes may have big effect in autism
(Jan 27) Autism genomes add to disorder's mystery
(Jan 27) Hundreds of Millions Sought for Personalized Medicine Initiative
(Jan 22) SAP Teams with ASCO to Fight Cancer
(Jan 15) Human longevity-genentech ink deal sequence thousands genomes
(Jan 13) UCSC Receives $1M Grant from Simons Foundation to Create Human Genetic Variation Map
(Jan 12) Silencing long noncoding RNAs with genome-editing tools with full .pdf
(Jan 08) Who Owns the Biggest Biotech Discovery of the Century?
(Jan 07) NIH grants aim to decipher the language of gene regulation
(Jan 07) End of cancer-genome project prompts rethink: Geneticists debate whether focus should shift from sequencing genomes to analysing function
(Jan 07) Variation in cancer risk among tissues can be explained by the number of stem cell divisions

For archived HoloGenomics News articles see Archives above


NEWS


Microsoft initiatives treat cancer as a computing problem

By Rick Massimo

September 21, 2016

WASHINGTON — Medical research has traditionally treated cancer as a disease to be cured, but Microsoft’s latest efforts to aid medical professionals treat it as a puzzle to be solved.

The company recently announced a range of initiatives in which computer scientists are working out the complexities of cancer and the best options for treatments. The efforts range from a way to sort through the mountain of research data on cancer to a “moonshot” effort to program cells to fight cancer and other diseases.

Much of the work happens at the genetic level, Microsoft and their associated scientists say.

“We’re in a revolution with respect to cancer treatment,” said David Heckerman, a scientist and senior director of the genomics group at Microsoft.

“Even 10 years ago people thought that you treat the tissue: You have brain cancer, you get brain cancer treatment. You have lung cancer, you get lung cancer treatment. Now, we know it’s just as, if not more, important to treat the genomics of the cancer, e.g. which genes have gone bad in the genome.”

That generates a mountain of information, both in terms of genetic mutations and combinations, and in the research on the genomes.

Bloomberg Technology reports that there are more than 800 cancer medicines and vaccines in clinical trials. The reports on all these drugs are far more than any oncologist can sift through — and that’s where “machine learning” comes in.

Microsoft gives as an example of machine learning a program’s ability to recognize photos of cats based on previous photos of cats a system has seen. The key is to translate that to the task of sifting through research.

Microsoft’s Hoifung Poon says his Hanover project is designed to help scientists sift through all the data. He showed Bloomberg Technology pictures of a patient whose cancer had been knocked back but had reappeared.

“There are already hundreds of these kinds of specifically targeted drugs, so even if you think let’s pair two drugs, there are tens of thousands of options,” Poon said. “It’s very hard to wrestle with. You might need several drugs to lock down all of the tumor’s pathways.”

Meanwhile, research is looking at the cancer gene directly.

“The tools that are used to model and reason about computational processes — such as programming languages, compilers and model checkers — are used to model and reason about biological processes,” Microsoft said in the statement.

Jasmin Fisher, a Microsoft researcher and biochemist in Cambridge, England, says that she’s taking a computational approach to the process that turns a cell cancerous. She’s trying to figure out the behavior of a cell the way a computer scientist would decode a computer program he didn’t create. The goal, Microsoft says, is to figure out in a schematic way the behavior of a healthy cell, compare it to a cancerous cell, and work out how it can be fixed.

“If you can figure out how to build these programs, and then you can debug them, [cancer is] a solved problem,” she said.
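To make the "cell as a program" idea concrete, here is a deliberately tiny sketch of the style of "executable biology" described above: a cell modeled as a Boolean network whose healthy and mutated wirings can be compared step by step. The network, node names and rules below are hypothetical illustrations of the approach, not Microsoft's or Dr. Fisher's actual models.

```python
# Hypothetical illustration of "executable biology" (not Microsoft's or Dr. Fisher's actual models):
# a cell as a tiny Boolean network whose healthy and mutated wirings can be compared step by step.

def step(state, rules):
    """Advance the network one synchronous step."""
    return {node: rule(state) for node, rule in rules.items()}

healthy_rules = {
    "growth_signal": lambda s: s["growth_signal"],                     # external input, held constant
    "brake":         lambda s: True,                                   # tumor suppressor stays active
    "divide":        lambda s: s["growth_signal"] and not s["brake"],  # division only when the brake is off
}

# "Buggy" wiring: a loss-of-function mutation permanently removes the brake.
mutant_rules = dict(healthy_rules, brake=lambda s: False)

def run(rules, state, steps=5):
    for _ in range(steps):
        state = step(state, rules)
    return state

start = {"growth_signal": True, "brake": True, "divide": False}
print("healthy:", run(healthy_rules, dict(start)))   # 'divide' stays False
print("mutant: ", run(mutant_rules, dict(start)))    # 'divide' flips to True: the "bug" to be found
```

Debugging, in this picture, means finding the rule whose alteration first makes the healthy and cancerous trajectories diverge.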

[There is a difference between software (lines of instructions) and a mathematical code (e.g. how Z = Z^2 + C encodes a Mandelbrot fractal) - see the sketch below. These days ALL of Big IT is engaged in Genome Informatics, from different angles. My tenet is aimed at identifying the mathematics intrinsic to recursive genome regulation (see my FractoGene: "Fractal Genome Grows Fractal Organisms", with the number of skeletal structures determined by "Junk DNA"). Andras_at_Pellionisz_dot_com]
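For readers who want to see the distinction, a minimal sketch of how the single recursion Z = Z^2 + C "encodes" the Mandelbrot set: the object is defined not by stored data but by whether the iteration stays bounded. This is the textbook construction, included here only as an illustration.

```python
# The one-line recursion z -> z^2 + c "encodes" the Mandelbrot set:
# a point c belongs to the set if the iteration stays bounded.

def in_mandelbrot(c: complex, max_iter: int = 100) -> bool:
    z = 0j
    for _ in range(max_iter):
        z = z * z + c                 # the entire "code" is this recursion
        if abs(z) > 2:                # escape radius: the orbit has diverged
            return False
    return True

print(in_mandelbrot(-1.0))   # True: -1 lies inside the set
print(in_mandelbrot(1.0))    # False: 1 escapes quickly
```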


‘Junk DNA’ tells mice—and snakes—how to grow a backbone

By Diana Crow, Aug. 1, 2016, 11:45 AM

Why does a snake have 25 or more rows of ribs, whereas a mouse has only 13? The answer, according to a new study, may lie in “junk DNA,” large chunks of an animal’s genome that were once thought to be useless. The findings could help explain how dramatic changes in body shape have occurred over evolutionary history.

Scientists began discovering junk DNA sequences in the 1960s. These stretches of the genome—also known as noncoding DNA—contain the same genetic alphabet found in genes, but they don’t code for the proteins that make us who we are. As a result, many researchers long believed this mysterious genetic material was simply DNA debris accumulated over the course of evolution. But over the past couple decades, geneticists have discovered that this so-called junk is anything but. It has important functions, such as switching genes on and off and setting the timing for changes in gene activity.

Recently, scientists have even begun to suspect that noncoding DNA plays an important role in evolution. Body shape is a case in point: “There’s an immense amount of variation in body length across vertebrates, but within species the number of ribs and so forth stays almost exactly the same,” says developmental biologist Valerie Wilson of the University of Edinburgh. “There must be some ways to alter the expression of those [genes] regulating evolution to generate this massive amount of variation that we see across the vertebrates.”

To explore this question further, researchers led by developmental biologist Moises Mallo of the Gulbenkian Institute of Science in Oeiras, Portugal, turned to an unusual mouse. Most mice have 13 pairs of ribs, but a few strains of mutant mice bred by Mallo and colleagues have 24 pairs. Their rib cages extend all the way along their backbone, down to the hind legs, similar to those of snakes.

Snakes, such as this Gaboon viper, can have more than 100 pairs of ribs. (Photo: Stefan3345/Wikimedia Commons)

The research team traced the extra ribs to a mutation deactivating a gene called GDF11, which puts the brakes on another gene that helps stem cells retain their ability to morph into many cell types. Without GDF11 to slow down that second gene—OCT4—the mice grew extra vertebrae and ribs. But GDF11 seemed just fine in snakes. So what was regulating vertebrate growth in snakes? The researchers decided to look at the DNA surrounding OCT4 to see whether something else was going on.

The OCT4 gene itself is similar in snakes, mice, and humans, but the surrounding noncoding DNA—which also plays a role in slowing down OCT4—looks different in snakes. To see whether this junk DNA gives snakes a longer-lasting growth spurt, Mallo and his colleagues spliced noncoding snake DNA into normal mouse embryos near OCT4. The embryos grew large amounts of additional spinal cord, suggesting that this junk DNA does indeed play a role in body shape regulation, the team reports this month in Developmental Cell.

But the researchers will have to do more to definitively confirm their findings, says developmental biologist and snake specialist Michael Richardson of Leiden University in the Netherlands, who was not involved in the study. Snakes would have to be genetically engineered with noncoding DNA that switches off OCT4 early, as it does in most other vertebrates. If this noncoding DNA is in fact the cause of snakes’ extra-long midsections, then snakes with that version governing OCT4 would be much shorter. Unfortunately, genetically engineering snakes is almost impossible because there’s no way to get access to very early embryos. “When the snake lays an egg, it’s already got a little head and about 26 vertebrae, so it’s already well on the way [to becoming a fully formed snake]. That way we miss out on the early genes,” Richardson explains.

Developmental biologists say OCT4 could be another example of evolution using noncoding DNA to change up animal anatomy. “We know that oftentimes it’s not the gene itself that changed—it’s the flanking regions or the regulatory regions,” Richardson says. “What they’ve shown quite clearly here is that the OCT4 gene isn’t different but the timing [of its expression] is prolonged.”

Snakes are an extreme variation. Almost all vertebrates have a head, a neck, a rib cage, and a tail (or tail region), but the lengths of those sections vary wildly among different species. “A flamingo has a very long neck, but snakes have a huge trunk. It’s not only the tail that’s longer,” Mallo explains. “The ingredients are not changing. The amounts and the timing of adding ingredients are.”

[The above is an obvious proof of "The Principle of Recursive Genome Function" (2008). No matter how visibly obvious "Jumping Genes" were, most people missed the key concept for 40 years. Therefore, one can not really complain that after a mere 8 years the Fractal Recursion view of "non-coding DNA" is only now breaking through. An earlier Fractal Paper (co-authored by Pellionisz and the late Malcolm Simons, an early champion of the view that "Junk DNA is anything but", who in his late years, commuting from Melbourne to me in Silicon Valley, became convinced of my FractoGene) made predictions that were supported within about 4 years by independent experimentalists. A core concept of Fractal Recursion is not just what to repeat, BUT WHEN TO STOP recursion; the paper gave a lucid explanation of repetitions coming to a halt when no supporting information is available any more (see the toy sketch below). Andras_at_Pellionisz_dot_com]
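The toy sketch referred to in the comment above: a self-similar branching recursion that keeps repeating only while "supporting information" (represented here by a simple budget counter) remains, and halts otherwise. This is a generic illustration of the stopping principle, not the published FractoGene method.

```python
# Toy illustration (not the published FractoGene method): a self-similar branching
# process that keeps recursing only while "supporting information" remains.

def grow(depth, budget):
    """Return the number of branch tips grown; recursion stops when the budget is spent."""
    if budget <= 0:          # no supporting information left: stop repeating
        return 1
    # each level spawns two self-similar sub-branches with a reduced budget
    return grow(depth + 1, budget - 1) + grow(depth + 1, budget - 1)

for b in range(5):
    print(f"budget={b}: tips={grow(0, b)}")   # 1, 2, 4, 8, 16 ...
```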


The Oncoming Double Disruption of Technology of New School Genomics; SmidgION by Oxford Nanopore

[My intellectual "mentor" (whom I could never meet), John von Neumann, played a dual role in how the "Nuclear Age" depended on two parallel disruptions: disruptive "Nuclear Physics" (suddenly, Newtonian mechanics had to yield to the brand-new science of quantum mechanics) and, to make it all happen, disruptive "Nuclear Technology" (developed at the cost of the Manhattan Project). Likewise, New School Genomics has to build up its very own mathematical foundation (biology is obviously a multi-scale mathematical object, see Eric Schadt or Michael Levitt), while in parallel the technology of both sequencing and interpretation requires disruptive development. Below, we see it coming together on the "user-friendly" smartphone (in a way predicted in my YouTube of 2010, and recently advocated by Eric Schadt and Eric Topol together), culminating in the miniaturized Oxford Nanopore-based USB device (see below). Now that Google is teaming up with Stanford, the fundamental questions that Eric Schadt and Eric Topol discussed arise in full force: should some developments (just as with nuclear technology) be kept "proprietary", or must they be spread widely and openly in Academia (so that a new crop of students can be prepared for the colossal challenge)? Here we have it: Google is already engaged, and Apple is on the brink of becoming perhaps even more crucial, given Steve Jobs' "DNA" of "user friendliness". Technology alone is not enough, however. In the Manhattan Project the technology was marshalled by the generals - but it was the scientists who figured out how it would all work together! For ultimate miniaturization, our technology may want to adopt the same "proprietary compression" that the genome itself uses: a fractal algorithm. It was Barnsley who showed that fractals can provide 30,000-fold compression (see the sketch below)! Big IT is already competing - it may be crucial for any one player to win, and to secure proprietary IP. - Andras_at_Pellionisz_dot_com]
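The sketch referred to above: the classic illustration of Barnsley-style fractal compression is an iterated function system, where a handful of affine maps stand in for an intricate image. The coefficients below are the widely reproduced textbook values for the Barnsley fern; the code is a generic illustration, not HolGenTech software.

```python
import random

# The Barnsley fern: four affine maps (a,b,c,d,e,f) plus selection probabilities
# encode an intricate image -- the essence of fractal (IFS) compression.
MAPS = [
    # (a,     b,     c,    d,    e,   f,    probability)
    (0.00,  0.00,  0.00, 0.16, 0.0, 0.00, 0.01),
    (0.85,  0.04, -0.04, 0.85, 0.0, 1.60, 0.85),
    (0.20, -0.26,  0.23, 0.22, 0.0, 1.60, 0.07),
    (-0.15, 0.28,  0.26, 0.24, 0.0, 0.44, 0.07),
]

def fern_points(n=100_000):
    x, y = 0.0, 0.0
    pts = []
    for _ in range(n):
        r = random.random()
        acc = 0.0
        for a, b, c, d, e, f, p in MAPS:
            acc += p
            if r <= acc:
                x, y = a * x + b * y + e, c * x + d * y + f
                break
        pts.append((x, y))
    return pts

pts = fern_points()
print(len(pts), "points generated from ~28 stored numbers")  # the whole 'image' from a handful of coefficients
```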

SmidgION uses the same core nanopore sensing technology as MinION and PromethION but will be designed for use with smartphones or other mobile, low-power devices. It is designed to cater for a broad range of field-based analyses; potential applications may include remote monitoring of pathogens during an outbreak of infectious disease; on-site analysis of environmental samples such as water/metagenomics samples; real-time species ID for analysis of food, timber, wildlife or even unknown samples; field-based analysis of agricultural environments; and much more.


Dr. Pellionisz comments on the 7 R&D articles posted on Labor Day, 2016. Andras_at_Pellionisz_dot_com

Jim Watson was asked "What happened when you came up with The Double Helix?" His answer was revealing: "Nothing happened! For 7 years nobody even referenced our work." Barbara McClintock, with her astounding paradigm shift of "jumping genes", had to wait 40 years till she "suddenly" received proper recognition.

Below I present SEVEN articles that appeared over the past year, to show the fruits of labor of my original contribution, "FractoGene": Fractal Genome Grows Fractal Organisms. The priority date of this "lucid heresy", a double-disruption concept, was secured in 2002 and heralded immediately; the patent issued in 2012, with about the next decade left in force. With the two cardinal mistakes ("Junk DNA" and "Central Dogma") reversed, "The Principle of Recursive Genome Function" emerged in a peer-reviewed science paper, with the additional requirement satisfied that predictions were made and later confirmed by independent experimentalists; the subject of the patent and additional trade secrets was announced widely in a Google Tech Talk YouTube with many thousands of views, opening a "New School Genomics". FractoGene explains genome regulation in the mathematical terms of a fractal genome governing fractal (physiological as well as pathological, e.g. cancerous) growth. The growth is based on the DNA-fractal acting through an operator that generates fractal organisms. In itself, the correlation yields statistical diagnosis and probabilistic prognosis (and precision therapy). George Church of Harvard invited the FractoGene presentation to his Cold Spring Harbor Conference (September 2009) and, within weeks, substantiation by independent workers was accomplished, a mere 7 years after the "Eureka" concept (the Science cover showing the Fractal Globule as the Hilbert fractal, October 2009). Within 2 more short years, Mirny established that the compromised fractal globule is linked to cancer (2011).

The second 7-year period (2009-2016) produced ample R&D evidence by independent researchers that the FractoGene concept is not only on the right track, but presently is the single most coherent mathematical (software-enabling) handle on genome (mis)regulation. Early proponents, as mentioned, were George Church (Harvard) and Eric Schadt (at that time at Pacific Biosciences, now Head of the $600 M Mount Sinai Institute of Genomics and Multiscale Biology). In addition to the 7 articles of the last year below, it is noteworthy that the approach was endorsed by the pioneer of "fractals in biology" (G. Losa of Switzerland) and by Stanford Nobelist Michael Levitt.

All "Academic" requirements, therefore, were satisfied by 2016, after the the 2 x 7 = 14 years of efforts since 2002.

So what is so special about 2016?

The announcement, a few weeks ago, that under the new leadership of Stanford University (from Sept. 1, 2016) a Stanford-Google joint effort was launched, aimed at "Precision Medicine" (most importantly, a paramount effort to beat cancer).

Why is it so vastly important? Because in "non-profit Academia" (e.g. Stanford University, or the countless academic institutions behind the 7 landmark papers cited) anybody is "free to use the utility inherent in correlating genomic and organismal fractals". However, Google is the colossus of "FOR-PROFIT" Big IT (and it is not anchored to any single Big Pharma, for instance). Therefore, the driver of the New School of Genomics (just as it was with the Internet) has shifted from mostly government-supported non-profit institutions to the ruthlessly "for-profit" business of the biggest of Big Information Technology!

Already there are unmistakable signs of "consolidation" as the ecosystem of New School Genomics changes entirely. Stanford and Google will lead, but because of their sheer size and complexity they will not move the fastest. Smaller entities will mushroom - ultimately to be bought by Google (or perhaps Apple?). Investors will scramble to secure available Intellectual Property (patents & trade secrets).

Some of us, like me, worked through "the Internet Boom", thus the pattern is unmistakable:

"The Next Big Thing in Silicon Valley will be Genome Informatics".

by andras_at_pellionisz_dot_com


Multiscale modeling of cellular epigenetic states: stochasticity in molecular networks, chromatin folding in cell nuclei, and tissue pattern formation of cells

Jie Liang,1,* Youfang Cao,2 Gamze Gürsoy,1 Hammad Naveed,3 Anna Terebus,1 and Jieling Zhao1

Crit Rev Biomed Eng. Author manuscript; available in PMC 2016 Aug 8.

Published in final edited form as:

Crit Rev Biomed Eng. 2015; 43(4): 323–346.

doi: 10.1615/CritRevBiomedEng.2016016559

PMCID: PMC4976639

NIHMSID: NIHMS790913

Abstract

Genome sequences provide the overall genetic blueprint of cells, but cells possessing the same genome can exhibit diverse phenotypes. There is a multitude of mechanisms controlling cellular epigenetic states and that dictate the behavior of cells. Among these, networks of interacting molecules, often under stochastic control, depending on the specific wirings of molecular components and the physiological conditions, can have a different landscape of cellular states. In addition, chromosome folding in three-dimensional space provides another important control mechanism for selective activation and repression of gene expression. Fully differentiated cells with different properties grow, divide, and interact through mechanical forces and communicate through signal transduction, resulting in the formation of complex tissue patterns. Developing quantitative models to study these multi-scale phenomena and to identify opportunities for improving human health requires development of theoretical models, algorithms, and computational tools. Here we review recent progress made in these important directions.

The fractal globule (FG) model[13] was the first model developed to describe the global folding properties of the human genome, as it can explain the scaling relationship between Pc(s) and s. However, it does not account for the leveling-off effects observed in FISH experiments.[10,11] Subsequently, the Strings and Binders Switch (SBS) model was developed, which pointed to a more heterogeneous structural ensemble, in which the scaling properties of the individual structures depend on the concentration of binder molecules such as architectural proteins.[53] However, scaling in the SBS model strongly depends on the choice of model parameters, and not all observed scaling properties can be accounted for with a fixed set of parameters.
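For readers unfamiliar with the notation: Pc(s) is the contact probability between two loci separated by genomic distance s. The scaling that the fractal globule (FG) model explains, as reported in the 2009 Hi-C literature (quoted here from that literature, not from the review above), is approximately

\[
P_c(s) \propto s^{-1} \quad \text{(fractal globule)}, \qquad
P_c(s) \propto s^{-3/2} \quad \text{(equilibrium globule)},
\]

over roughly the 0.5-7 Mb range of genomic separations in the human data.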

[Dr. Pellionisz comments on the 7 R&D articles posted on Labor Day, 2016. Andras_at_Pellionisz_dot_com]


Fractal Dimension of Tc-99m DTPA GSA Estimates Pathologic Liver Injury due to Chemotherapy in Liver Cancer Patients.

Ann Surg Oncol. 2016 Jul 20. [Epub ahead of print]

Hiroshima Y1, Shuto K1, Yamazaki K2, Kawaguchi D1, Yamada M2, Kikuchi Y1, Kasahara K1, Murakami T1, Hirano A1, Mori M1, Kosugi C1, Matsuo K1, Ishida Y2, Koda K1, Tanaka K3.

Author information

BACKGROUND:

Chemotherapy-induced liver injury after potent chemotherapy is a considerable problem in patients undergoing liver resection. The aim of this study was to assess the relationship between the fractal dimension (FD) of Tc-99m diethylenetriaminepentaacetic acid (DTPA) galactosyl human serum albumin (GSA) and pathologic change of liver parenchyma in liver cancer patients who have undergone chemotherapy.

METHODS:

We examined 34 patients (10 female and 24 male; mean age, 68.5 years) who underwent hepatectomy. Hepatic injury was defined as steatosis more than 30 %, grade 2-3 sinusoidal dilation, and/or steatohepatitis Kleiner score ≥4. Fractal analysis was applied to all images of Tc-99m DTPA GSA using a plug-in tool on ImageJ software (NIH, Bethesda, MD). A differential box-counting method was applied, and FD was calculated as a heterogeneity parameter. Correlations between FD and clinicopathological variables were examined.

RESULTS:

FD values of patients with steatosis and steatohepatitis were significantly higher than those without (P < .001 and P < .001, respectively). There was no difference between the FD values of patients with and without sinusoidal dilatation (P = .357). Multivariate logistic regression showed FD as the only significant predictor for steatosis (P = .005; OR 36.5; 95 % CI 3.0-446.3) and steatohepatitis (P = .012; OR, 29.1; 95 % CI 2.1-400.1).

CONCLUSIONS:

FD of Tc-99m DTPA GSA was the significant predictor for fatty liver disease in patients who underwent chemotherapy. This new modality is able to differentiate steatohepatitis from steatosis; therefore, it may be useful for predicting chemotherapy-induced pathologic liver injury.
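The fractal dimension (FD) above was obtained with a differential box-counting plug-in in ImageJ; as an orientation for readers, here is a minimal sketch of plain box counting on a binary image. It is the standard textbook estimator, not the authors' differential variant or their ImageJ pipeline.

```python
import numpy as np

def box_count_dimension(img: np.ndarray) -> float:
    """Estimate fractal dimension of a binary 2-D array by plain box counting.
    (Standard box counting; the cited study used a *differential* variant in ImageJ.)"""
    assert img.ndim == 2
    size = min(img.shape)
    sizes, counts = [], []
    k = 2
    while k <= size // 2:
        # count boxes of side k that contain at least one foreground pixel
        h, w = (img.shape[0] // k) * k, (img.shape[1] // k) * k
        blocks = img[:h, :w].reshape(h // k, k, w // k, k)
        occupied = blocks.any(axis=(1, 3)).sum()
        sizes.append(k)
        counts.append(max(occupied, 1))
        k *= 2
    # slope of log N(k) versus log(1/k) gives the dimension
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# quick check on a filled square (expected dimension ~2)
square = np.ones((256, 256), dtype=bool)
print(round(box_count_dimension(square), 2))
```

The differential variant used in the study works on gray-level images, counting boxes from the intensity range within each grid cell rather than from simple occupancy.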

[Dr. Pellionisz comments on the 7 R&D articles posted on Labor Day, 2016. Andras_at_Pellionisz_dot_com]


Unique fractal evaluation and therapeutic implications of mitochondrial morphology in malignant mesothelioma

Sci Rep. 2016; 6: 24578.

Published online 2016 Apr 15. doi: 10.1038/srep24578

PMCID: PMC4832330

Frances E. Lennon,1 Gianguido C. Cianci,2 Rajani Kanteti,1 Jacob J. Riehm,1 Qudsia Arif,3 Valeriy A. Poroyko,4 Eitan Lupovitch,5 Wickii Vigneswaran,6 Aliya Husain,3 Phetcharat Chen,7 James K. Liao,7 Martin Sattler,8 Hedy L. Kindler,1 and Ravi Salgiaa,1,*


Abstract

Malignant mesothelioma (MM), is an intractable disease with limited therapeutic options and grim survival rates. Altered metabolic and mitochondrial functions are hallmarks of MM and most other cancers. Mitochondria exist as a dynamic network, playing a central role in cellular metabolism. MM cell lines display a spectrum of altered mitochondrial morphologies and function compared to control mesothelial cells. Fractal dimension and lacunarity measurements are a sensitive and objective method to quantify mitochondrial morphology and most importantly are a promising predictor of response to mitochondrial inhibition. Control cells have high fractal dimension and low lacunarity and are relatively insensitive to mitochondrial inhibition. MM cells exhibit a spectrum of sensitivities to mitochondrial inhibitors. Low mitochondrial fractal dimension and high lacunarity correlates with increased sensitivity to the mitochondrial inhibitor metformin. Lacunarity also correlates with sensitivity to Mdivi-1, a mitochondrial fission inhibitor. MM and control cells have similar sensitivities to cisplatin, a chemotherapeutic agent used in the treatment of MM. Neither oxidative phosphorylation nor glycolytic activity, correlated with sensitivity to either metformin or mdivi-1. Our results suggest that mitochondrial inhibition may be an effective and selective therapeutic strategy in mesothelioma, and identifies mitochondrial morphology as a possible predictor of response to targeted mitochondrial inhibition.
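Since the abstract leans on fractal dimension and lacunarity as morphology descriptors, a minimal gliding-box lacunarity sketch may help orient readers. It follows the standard Allain-Cloitre definition, Lambda(r) = E[M^2] / E[M]^2 over the box masses M at box size r, and is not the authors' analysis pipeline.

```python
import numpy as np

def lacunarity(img: np.ndarray, box: int) -> float:
    """Gliding-box lacunarity of a binary 2-D array at box size `box`:
    Lambda(r) = E[M^2] / E[M]^2, where M is the foreground mass in each box.
    (Standard Allain-Cloitre definition; not the cited study's exact pipeline.)"""
    masses = []
    for i in range(img.shape[0] - box + 1):
        for j in range(img.shape[1] - box + 1):
            masses.append(img[i:i + box, j:j + box].sum())
    m = np.asarray(masses, dtype=float)
    return float((m ** 2).mean() / (m.mean() ** 2))

# a uniform pattern has lacunarity 1; sparser, clumpier patterns score higher
uniform = np.ones((64, 64), dtype=int)
clump = np.zeros((64, 64), dtype=int); clump[:8, :8] = 1
print(lacunarity(uniform, 8), round(lacunarity(clump, 8), 2))
```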

[Dr. Pellionisz comments on the 7 R&D articles posted on Labor Day, 2016. Andras_at_Pellionisz_dot_com]


Systems Medicine of Cancer: Bringing Together Clinical Data and Nonlinear Dynamics of Genetic Networks

Comput Math Methods Med. 2016; 2016: 7904693.

Published online 2016 Jan 11. doi: 10.1155/2016/7904693

PMCID: PMC4737019

Systems Medicine of Cancer: Bringing Together Clinical Data and Nonlinear Dynamics of Genetic Networks

Konstantin B. Blyuss, 1 Ranjit Manchanda, 2 Jürgen Kurths, 3 Ahmed Alsaedi, 4 and Alexey Zaikin 5 , 6 , *

1Department of Mathematics, University of Sussex, Falmer, Brighton BN1 9QH, UK

2Barts Cancer Institute, Queen Mary University of London, London EC1M 6BQ, UK

3Potsdam Institute for Climate Impact Research, 14473 Potsdam, Germany

4Department of Mathematics, King AbdulAziz University, Jeddah 21589, Saudi Arabia

5Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod 603950, Russia

6Institute for Women's Health and Department of Mathematics, University College London, London WC1E 6AU, UK

*Alexey Zaikin: Email: alexey.zaikin@ucl.ac.uk


The last few years have witnessed major developments in experimental techniques for analysis of cancer, including full genome sequencing, measurement of multiple oncomarkers, DNA methylation profile, genomic profile, or transcriptome of the pathological tissue. Whilst these advances have provided some additional understanding of cancer onset and development, the problem of cancer is far from solved. Since the amounts of data emerging from these are very substantial, and the data itself can be extremely heterogeneous (qualitative, quantitative, and verbal descriptions), this makes standard data analysis techniques not practically applicable. One promising direction of addressing the problem of analysis of cancer data is to use modern and sophisticated methods from systems biology and cybernetics. Some challenges of this approach are connecting the results of mathematical analysis with real clinical data, and bridging the existing gaps between the communities of clinicians and applied mathematicians.

This special issue showcases some of the most recent developments in the areas of nonlinear dynamics, mathematical analysis and modeling, data analysis, and simulations in the area of cancer. Having received 11 submissions, six best papers were chosen and published to provide an overview of the research field and to motivate further study.

In “Optimal Placement of Irradiation Sources in the Planning of Radiotherapy: Mathematical Models and Methods of Solving,” O. Blyuss et al. analyse optimal choice of placement of irradiation sources during radiotherapy as an optimization problem. Using the techniques of nondifferentiable optimization and an approximate Klepper's algorithm, the authors derive a new approach for solving this problem and illustrate their methodology with actual numerical simulations of different scenarios.

The paper “Time-Delayed Models of Gene Regulatory Networks” by K. Parmar et al. provides an overview of existing mathematical techniques applicable to modeling the dynamics of gene regulatory networks. The authors focus on the effects of transcriptional and translational time delays and demonstrate how the stability of different steady states and the associated behavior change depending on system parameters and the time delays. They contrast the dynamics of the fast mRNA regime, as described by a reduced model, with the dynamics of the full system to illustrate possible differences in behaviour and to highlight the important role played by the time delays.

H. Namazi and M. Kiminezhadmalale in their paper titled “Diagnosis of Lung Cancer by Fractal Analysis of Damaged DNA” discuss how DNA sequences emerging from patient blood plasma can be studied by analysing DNA walks. Comparing the features of DNA profiles for healthy individuals with those for lung cancer patients, the authors derive several predictive criteria for lung cancer based on Hurst exponents and fractal dimension.
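A DNA walk in this context maps a sequence onto a one-dimensional trajectory (for instance +1 for a purine, -1 for a pyrimidine, cumulatively summed), whose roughness is then summarized by a Hurst exponent or fractal dimension. Below is a minimal sketch of the mapping together with a crude rescaled-range (R/S) Hurst estimate; these are the standard textbook constructions, not the specific analysis of the cited paper.

```python
import numpy as np

def dna_walk(seq: str) -> np.ndarray:
    """Purine-pyrimidine DNA walk: +1 for A/G, -1 for C/T, cumulatively summed."""
    steps = np.array([1 if base in "AG" else -1 for base in seq.upper()])
    return np.cumsum(steps)

def hurst_rs(x: np.ndarray) -> float:
    """Crude rescaled-range (R/S) estimate of the Hurst exponent of the increments of x."""
    inc = np.diff(x)
    ns, rs = [], []
    n = 8
    while n <= len(inc) // 2:
        chunks = inc[: (len(inc) // n) * n].reshape(-1, n)
        r_over_s = []
        for c in chunks:
            dev = np.cumsum(c - c.mean())
            s = c.std()
            if s > 0:
                r_over_s.append((dev.max() - dev.min()) / s)
        ns.append(n)
        rs.append(np.mean(r_over_s))
        n *= 2
    slope, _ = np.polyfit(np.log(ns), np.log(rs), 1)
    return slope

rng = np.random.default_rng(0)
seq = "".join(rng.choice(list("ACGT"), size=4096))
walk = dna_walk(seq)
print(round(hurst_rs(walk), 2))   # close to 0.5 for an uncorrelated (random) sequence
```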

The circadian physiology, clock genes, and cell cycle may critically affect results of cancer chronotherapeutics; hence, investigation of gene regulatory networks controlling circadian rhythms is an irreplaceable part of systems medicine of cancer. R. Heussen and D. Whitmore study how the circadian clock is entrained by light, and they experimentally investigate whether there exists a threshold for this synchronization. Moreover, analysis of the constructed numerical model shows that stochastic effects are an essential feature of the circadian clock that provides an explanation of signal decay from the zebrafish cell lines in prolonged darkness.

Some cancer types are especially dangerous because of the high progression rate, and malignant gliomas represent one of the most severe types of tumors. Modern medical approaches offer sophisticated treatment procedures, based on microsurgical tumor removal combined with radio- and chemotherapy. The success of such surgical resections depends on the clarity of intraoperative diagnostics of human gliomas, and the review of O. Tyurikova et al. surveys the wide diversity of modern diagnostic methods used in the course of glial tumor resections.

Recent developments in systems biology and medicine have enabled us to analyse and infer networks of interactions and to search for new network oncomarkers. S.-M. Wang et al., in their paper about the identification of dysregulated genes and pathways in clear cell renal cell carcinoma, provide an example of this research direction. By systematically tracking the dysregulated modules of reweighted protein-protein interaction networks, they successfully identified dysregulated genes and pathways for this type of cancer, thus gaining insights into possible biological markers or targets for drug development.

We hope that the readers will find the papers published in this special issue interesting, and this will encourage and foster further research on developing new and efficient techniques in systems biology and data analysis for predicting the onset and for monitoring the progression of cancer.

Konstantin B. Blyuss

Ranjit Manchanda

Jürgen Kurths

Ahmed Alsaedi

Alexey Zaikin

[Dr. Pellionisz comments on the 7 R&D articles posted on Labor Day, 2016. Andras_at_Pellionisz_dot_com]


ASXL2 promotes proliferation of breast cancer cells by linking ERα to histone methylation.

Oncogene. 2016 Jul 14;35(28):3742-52. doi: 10.1038/onc.2015.443. Epub 2015 Dec 7.

Park UH1, Kang MR2, Kim EJ3, Kwon YS1, Hur W4, Yoon SK4, Song BJ5, Park JH6, Hwang JT6, Jeong JC7, Um SJ1.

Author information

Abstract

Estrogen receptor alpha (ERα) has a pivotal role in breast carcinogenesis by associating with various cellular factors. Selective expression of additional sex comb-like 2 (ASXL2) in ERα-positive breast cancer cells prompted us to investigate its role in chromatin modification required for ERα activation and breast carcinogenesis. Here, we observed that ASXL2 interacts with ligand E2-bound ERα and mediates ERα activation. Chromatin immunoprecipitation-sequencing analysis supports a positive role of ASXL2 at ERα target gene promoters. ASXL2 forms a complex with histone methylation modifiers including LSD1, UTX and MLL2, which all are recruited to the E2-responsive genes via ASXL2 and regulate methylations at histone H3 lysine 4, 9 and 27. The preferential binding of the PHD finger of ASXL2 to the dimethylated H3 lysine 4 may account for its requirement for ERα activation. On ASXL2 depletion, the proliferative potential of MCF7 cells and tumor size of xenograft mice decreased. Together with our finding on the higher ASXL2 expression in ERα-positive patients, we propose that ASXL2 could be a novel prognostic marker in breast cancer.

PMID: 26640146 DOI: 10.1038/onc.2015.443

[Dr. Pellionisz comments on the 7 R&D articles posted on Labor Day, 2016. Andras_at_Pellionisz_dot_com]


Researchers identify two proteins important for the demethylation of DNA

January 12, 2016

Scientists at the Institute of Molecular Biology (IMB) in Mainz have identified a missing piece of the puzzle in understanding how epigenetic marks are removed from DNA. The research on DNA demethylation sheds new light on a fundamental process that is important in development and diseases such as cancer. Epigenetics is defined by heritable changes in gene expression that do not derive from changes in the DNA sequence itself.

Epigenetic processes play a central role in a broad spectrum of diseases, such as cardiovascular disease, neurodegenerative disorders and cancer. One of the most prominent epigenetic processes is DNA methylation, where one of the four bases of animal DNA is marked by a methyl group. DNA methylation typically reduces the activity of surrounding genes.

A lot is known about how methyl marks are put onto the DNA, but how they are removed – a process called DNA demethylation – and, thus, how genes are reactivated is still not well understood. In their recent study, published in Nature Structural and Molecular Biology, IMB scientists have identified two proteins, Neil1 and Neil2 that are important for the demethylation of DNA. "These proteins are a missing link in the chain of events that explain how DNA can be efficiently demethylated," said Lars Schomacher, first author on the paper.

Intriguingly, DNA demethylation has been shown to involve proteins of the DNA repair machinery. Thus epigenetic gene regulation and genome maintenance are linked. Schomacher and his colleagues identified in Neil1 and Neil2 two more repair factors that not only protect the DNA's integrity but are also involved in DNA demethylation. The researchers showed that the role of Neils is to boost the activity of another protein, Tdg, which is known to be of central importance for DNA demethylation.

Both the Neils and Tdg are essential proteins for survival and development. Schomacher et al. carried out experiments where they removed either one of these proteins in very early frog embryos. They found that the embryos had severe problems developing and died before reaching adulthood.

Failure in setting and resetting methyl marks on DNA is involved in developmental abnormalities and cancer, where cells forget what type they are and start to divide uncontrollably. Understanding which proteins are responsible for DNA demethylation will help us to understand more about such disease processes, and may provide new approaches to develop treatments for them.


More information: Lars Schomacher et al. Neil DNA glycosylases promote substrate turnover by Tdg during DNA demethylation, Nature Structural & Molecular Biology (2016). DOI: 10.1038/nsmb.3151

[Dr. Pellionisz comments on the 7 R&D articles posted on Labor Day, 2016. Andras_at_Pellionisz_dot_com]


Mathematical Modelling and Prediction of the Effect of Chemotherapy on Cancer Cells

Hamidreza Namazi, Vladimir V. Kulish & Albert Wong

Scientific Reports 5, Article number: 13583 (2015)

doi:10.1038/srep13583

Published online:

28 August 2015

Abstract

Cancer is a class of diseases characterized by out-of-control cells’ growth which affect DNAs and make them damaged. Many treatment options for cancer exist, with the primary ones including surgery, chemotherapy, radiation therapy, hormonal therapy, targeted therapy and palliative care. Which treatments are used depends on the type, location, and grade of the cancer as well as the person’s health and wishes. Chemotherapy is the use of medication (chemicals) to treat disease. More specifically, chemotherapy typically refers to the destruction of cancer cells. Considering the diffusion of drugs in cancer cells and fractality of DNA walks, in this research we worked on modelling and prediction of the effect of chemotherapy on cancer cells using Fractional Diffusion Equation (FDE). The employed methodology is useful not only for analysis of the effect of special drug and cancer considered in this research but can be expanded in case of different drugs and cancers.

Introduction

Cells production and die are regulated in human body in an orderly way. But in case of cancer, the division and growth of cells is out of control. In this manner, the damaged cells start to occupy more and more space in a part of body and so they expel the useful healthy cells. By this way that part of body is called tumor. So fighting with these cancer cells and changing the way of their production and accumulation is a critical issue in medical science.

Scientists have developed different methods for treatment of cancer. Some of these methods are surgery, chemotherapy, radiation therapy, hormonal therapy, targeted therapy and palliative care. Employing the less-invasive methods have always had critical role in patient treatment. Chemotherapy as a method for cancer treatment deals with application of drugs affecting the cancer cell’s ability to divide and reproduce. The drug makes the cancer cells weak and destroys them by directly applying to cancer site or through the bloodstream.

During years some researchers have worked on mathematical modelling of the effect of chemotherapy on cancer treatment. Some researchers employed different types of differential equations for modelling of the effect of chemotherapy on cancer treatment. For instance, Pillis et al. developed a mathematical model based on a system of ordinary differential equations which analyses the cancer growth on a cell population level after using chemotherapy [1]. Despite the overall success of this mathematical model, it couldn’t explain the effects of IL-2 on a growing tumour. So, in another work Pillis et al. updated their model by introducing new parameters governing their values from empirical data which are specific in case of different patients. The new model allows production of endogenous IL-2, IL-2-stimulated NK cell proliferation and IL-2-dependent CD8+ T-cell self-regulations [2]. In another work, using a system of delayed differential equations, Liu and Freedman proposed a mathematical model of vascular tumor treatment using chemotherapy. This model represents the number of blood vessels within the tumor and changes in mass of healthy cells and competing parenchyma cells. Using the proposed model they considered a continuous treatment for tumor growth [3]. See also [4,5]. In a closer approach some researchers specially focused on mathematical modelling of the diffusion of anti-cancer drugs to cancer tumor [6-11].
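A deliberately minimal sketch of the ODE modelling style described in the paragraph above: logistic tumor growth coupled to a drug compartment with clearance and a periodic infusion schedule. The parameter values and dosing schedule are made up for illustration; this is not the de Pillis or Liu-Freedman model.

```python
import numpy as np

# Generic ODE sketch of chemotherapy modelling (illustrative parameters, not the cited models):
#   dT/dt = r*T*(1 - T/K) - k*D*T     tumor cells: logistic growth minus drug-induced kill
#   dD/dt = -gamma*D + u(t)           drug concentration: clearance plus infusion schedule
r, K, k, gamma = 0.2, 1e9, 0.8, 1.0

def infusion(t):
    return 5.0 if (t % 7.0) < 1.0 else 0.0   # one-day infusion every 7 days (hypothetical schedule)

def rhs(t, y):
    T, D = y
    return np.array([r * T * (1 - T / K) - k * D * T,
                     -gamma * D + infusion(t)])

# simple fixed-step RK4 integrator
def rk4(y, t, dt):
    k1 = rhs(t, y); k2 = rhs(t + dt/2, y + dt/2 * k1)
    k3 = rhs(t + dt/2, y + dt/2 * k2); k4 = rhs(t + dt, y + dt * k3)
    return y + dt / 6 * (k1 + 2*k2 + 2*k3 + k4)

y, dt = np.array([1e8, 0.0]), 0.01
for step in range(int(60 / dt)):          # simulate 60 days
    y = rk4(y, step * dt, dt)
print(f"tumor burden after 60 days: {y[0]:.3e} cells")
```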

Beside all the works done on mathematical modelling of cancer treatment using chemotherapy, no work has been reported which analyse and model this treatment by linking between DNA walk and drug diffusion. In this research we model the response of tumor to anti-cancer drug. For this purpose we consider the diffusion of the drug in solid tumor. This diffusion will cause the damaged cells die and thus healthy cells appear.

In the following first we talk about DNA walk as a random multi fractal walk. After that we discuss about chemotherapy and diffusion of drug in tumor. By considering these two topics we start to develop the Fractional Diffusion Equation (FDE) which maps the effect of drug diffusion on DNA walk. The result and discussion remarks are brought in last sections.
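The Fractional Diffusion Equation referred to here, in its common time-fractional form (a generic textbook form; the paper's exact operators, source terms and boundary conditions may differ), reads

\[
\frac{\partial^{\alpha} C(x,t)}{\partial t^{\alpha}} \;=\; D_{\alpha}\,\frac{\partial^{2} C(x,t)}{\partial x^{2}}, \qquad 0 < \alpha \le 1,
\]

where C is the drug concentration, D_alpha is a generalized diffusion coefficient, and the fractional order alpha captures the anomalous (sub-diffusive) transport associated with a fractal medium; alpha = 1 recovers ordinary Fickian diffusion.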

[Dr. Pellionisz comments on the 7 R&D articles posted on Labor Day, 2016. Andras_at_Pellionisz_dot_com]


Breast cancer researchers look beyond genes to identify more drivers of disease development

August 29, 2016

Science Daily

Dr. Lupien, Toronto (Canada)

Breast cancer researchers have discovered that mutations found outside of genes that accumulate in estrogen receptor positive breast tumours throughout their development act as dominant culprits driving the disease.

The research, published online today in Nature Genetics, focuses on the most common type of breast cancer, estrogen receptor positive, says principal investigator Mathieu Lupien, Senior Scientist, Princess Margaret Cancer Centre, University Health Network and Associate Professor in the Department of Medical Biophysics, University of Toronto.

“By investigating acquired mutations found outside of genes through the power of epigenetics, we have identified that functional regulatory components can be altered to impact the expression of genes to promote breast cancer development,” says Dr. Lupien.

The multi-institutional research team collaborated with the Princess Margaret Genomics Centre and Bioinformatics group to analyze changes in the DNA sequence that accumulate in patients' tumours with respect to the epigenetic identity of estrogen receptor-positive breast cancer cells.

“Thinking of genes as the source of light in the human genome, our research shows that driver mutations will not only hit the light bulbs but also directly alter light switches and dimmers that serve as functional regulatory components,” says Dr. Lupien.

“We now have the opportunity to start mining the genome for driver mutations not only in genes but also in other functional regulatory components to expand our capacity to identify the best biomarkers and to delineate the fundamental biology of each tumour to help advance personalized cancer medicine for patients.”

Dr. Lupien’s research builds on a previous study that identified why 44 known genetic variations increased breast cancer risk (Nature Genetics, Sept. 23, 2012).

The convergence of more knowledge about inherited risk variants and the role of acquired mutations should readily enable translating the science into more precise clinical tests to diagnose and monitor patients, he says.

Story Source:

The above post is reprinted from materials provided by University Health Network (UHN). Note: Content may be edited for style and length.

Journal Reference:

Swneke D Bailey, Kinjal Desai, Ken J Kron, Parisa Mazrooei, Nicholas A Sinnott-Armstrong, Aislinn E Treloar, Mark Dowar, Kelsie L Thu, David W Cescon, Jennifer Silvester, S Y Cindy Yang, Xue Wu, Rossanna C Pezo, Benjamin Haibe-Kains, Tak W Mak, Philippe L Bedard, Trevor J Pugh, Richard C Sallari, Mathieu Lupien. Noncoding somatic and inherited single-nucleotide variants converge to promote ESR1 expression in breast cancer. Nature Genetics, 2016; DOI: 10.1038/ng.3650

[The above research appears to shake the foundations of the old school, that "genes are responsible for everything". There is the "oncogene school", with the core belief that cancerous (mutant) gene(s), maybe just a single gene, is the culprit of cancer. Today there are so many hundreds (or thousands...) of so-called "oncogenes" (genes that are involved in cancer, mutant or not) that many believe that in terminal cases practically all genes (and non-genes...) are affected by zillions of mutations. This school is based on the rationale that "genes pump proteins" (which are the protein materials of tumors). The above article points to the other side of the "which came first, the chicken or the egg" question. The New School maintains that "cancer is the melt-down of genome REGULATION", during which mutations spread along zillions of "pathways". Think of Chernobyl. Was it the uranium rods (emitting enormous energy by their fission) that were the culprit of the "melt-down"? In the case of Chernobyl we know for sure that the complex and delicate REGULATORY SYSTEM went out of control, and the uncontrolled fission energy spread radioactive fallout over half of Europe. Since the "non-coding DNA" (maiden name "Junk DNA") used to be pretty much unknown, as a matter of course all attention was focused on the protein-pumping genes. While a definitive answer to "which came first, the chicken or the egg" eludes "cancer theories" as well, in my personal opinion a mathematical understanding of the genome REGULATION SYSTEM is indispensable for truly effective "Cancer Moon Projects". Andras_at_Pellionisz_dot_com]


HPE Synergy Shows Promise And Progress With Early Adopters

Aug. 29, 2016

FORBES

by Patrick Moorhead

Hewlett Packard Enterprise has placed a big bet with HPE Synergy—the company is a pioneer in the composable infrastructure market, and is the furthest along in customer enablement. In the rapidly changing world of IT, composable infrastructure could be the next big thing in enterprise infrastructure. Designed to treat hardware like software (what is often referred to as "infrastructure as code"), it has the ability to allocate the optimal resources for each application—with the goal of lowering infrastructure costs, providing flexibility as a resource, and accelerating time-to-market for customers. HPE Synergy was launched in December 2015, and touted as the first platform in the market purposefully built for composability (read more here). Hewlett Packard Enterprise is making progress with building out the composable infrastructure ecosystem and, though it is still too early to say definitively, they are seeing some success with Synergy's early beta customers.

HudsonAlpha Institute for Biotechnology, a nonprofit specializing in genomics research, education, and medical treatment, was HPE Synergy's first customer (you can read our full case study here). Genomics is a highly data-intensive field (they generate more than one petabyte of data a month), and in order to handle the intense workload demands, HudsonAlpha had to rethink their infrastructure—HPE Synergy promised the flexibility and compute power they needed to get the job done. The solution is well-aligned with HudsonAlpha's existing strategy—the institute already manages its infrastructure via resource pools. HudsonAlpha says Synergy's Direct Attach Storage (DAS) simplifies storage for maximum efficiency—a must when dealing with such large volumes of data. Hewlett Packard Enterprise's partnership with Docker is also a selling point—HudsonAlpha views containers as being critical for delivery of microservices. In addition, HudsonAlpha says HPE Synergy delivers the agility needed for collaboration between thousands of researchers worldwide—the platform is quick to get users road-ready and running with new applications.

As it currently stands, they are in the beta stage of deployment—but they have started running production-level workloads on HPE Synergy. Testament to the platform's ease of installation, HudsonAlpha was able to set up the hardware and complete the install process in-house before the HPE support team even arrived. They're currently using Docker Swarm, Docker Machine, and DevOps tools like Vagrant on top of Synergy. They've constructed their own templates for the platform to allow better transitions through tenants, and developers have begun to deploy their own workloads to the hardware without requiring the assistance of operations. According to Jim Hudson (Co-Founder and Chairman of the institute), an analysis of the human genome that used to take about 2 days to complete can now be accomplished in 45 minutes with HPE Synergy—an impressive jump. As HudsonAlpha's existing infrastructure is swapped out for Synergy, they say they will continue to measure gains in efficiency and capability through comparison of the two. I think we're going to continue to see good results.

Other early testimonials of HPE's new solutions have also been positive. Rich Lawson, Senior IT Architect at Dish Network (one of the first 100 Synergy customers), praised Synergy's flexibility, and its ability to unlock the full potential of the public cloud. Greg Peterson (VP of HPE Solutions at Avnet, Inc.) lauded HPE Hyper Converged 380's ease-of-deployment and management, saying that "the solution works as advertised." We'll continue to monitor as more early adopters report back on their experiences with HPE Synergy, but so far it's looking pretty good.

The other, very important piece of the puzzle is the work that HPE is doing to expand the composable infrastructure ecosystem. I've said it before, and I'll say it again—I think HPE "gets strategic partnering," even though the company-wide approach is new. They've spent the first half of 2016 integrating HPE OneView with tools from their partners—Docker, Chef, nLyte, Eaton, SaltStack, Ansible, and VMTurbo, just to name a few. The crux of the entire composable movement is to make it easier for customers to drive automation with whatever tools they already have. Expanding the composable ecosystem is going to be an ongoing task for years to come, but HPE appears to be making some good strides through the collaborations with their many partners.

In conclusion, I'm not quite ready to call HPE Synergy a composable slam-dunk yet—signs are looking positive, but it's still too early in the testing period to say. I do feel comfortable saying that these proof points are an indicator that HPE can deliver on their promise of composable infrastructure. It's not just a nice buzz-phrase anymore, it's actually a viable way of doing things—and it's only going to get more viable as HPE continues to build out the composable ecosystem. If HPE Synergy's beta customers continue to report positive results, I think we could be looking at a big shift in enterprise infrastructure.

[It is not only HP - the entire Silicon Valley is in a scramble upon the announcement of the World's most potent Joint Venture (by Google with Stanford, going for genome-based precision medicine). HP just acquired SGI for $275 Million, and as we see above, Jim Hudson is also one of those who have their mind on "human genome analysis"... Those of us who went with the Internet from a tiny US Government pet project (so that system administrators of mainframes could chat by email...), and then switched gears to hand the Internet over from Government to Public Domain Big Business, know and understand the kind of "scramble" that is taking place in Silicon Valley as we speak! Andras_at_Pellionisz_dot_com]


Illumina - Can GRAIL Deal A Death Blow In The War Against Cancer?

Aug. 22, 2016 About: Illumina, Inc. (ILMN)

SeekingAlpha

Summary

The scientific and business communities have joined forces in a war against cancer - will it be a death knell for the dreaded disease?

Illumina's startup venture GRAIL has aspirations of developing an early-detection cancer screening test and unlocking a $20 billion industry.

The new venture faces scientific uncertainties and a growing field of competing researchers.

Despite the significant challenges it faces, management claims that GRAIL possesses key advantages which give it an edge against rivals.

Will investors be able to gain their own edge in predicting the outcome of this scientific endeavor?

Introduction and Significance to Illumina

Earlier this year, Illumina (NASDAQ:ILMN) announced plans to form a separate entity to be among the pioneers in liquid biopsy research with the goal of creating a simple blood test for early detection of all major types of cancer. Illumina is a 52 percent owner of the new venture GRAIL, which has also received well-publicized backing from Microsoft (NASDAQ:MSFT) Founder Bill Gates and Amazon.com (NASDAQ:AMZN) CEO Jeff Bezos, among other notable investors.

The new initiative complements Illumina's core business of DNA sequencing equipment very well, and the company can use this expertise to improve its odds of success in developing its early detection cancer test. Illumina is dominant in DNA sequencing machines with an estimated 75% market share which reaches as high as 90% among premium sequencing devices according to Morningstar estimates. The total addressable market of its core business has been estimated by management to be in excess of $20 billion, which represents an opportunity for significant future growth from the $2.2 billion of revenue Illumina booked in 2015 while holding 75% of the market.

While the future of the business is very promising, expectations are also very high, with the stock's price-earnings multiple near 60. The stock is a high-potential, high-risk investment that appeals strongly to many enterprising, growth-oriented investors. Illumina's dominant core business has been discussed in detail in prior coverage; the analysis today will focus on its new investment in GRAIL and its prospects for achieving the inspiring goals that management has set for it. The company's ambition is to bring a cancer-screening test to market by 2019. With demand for Illumina's DNA sequencing machines already quite robust, any incremental success for GRAIL could contribute significantly to the future growth the company must achieve to fulfill the promising potential its investors envision.

The test GRAIL hopes to develop is a form of liquid biopsy, a new scientific technology that has attracted substantial attention from cancer researchers. Many have been quick to recognize the most optimistic prospects of GRAIL, for which management has estimated a total addressable market size between $20 billion and $200 billion. However, what has been harder to come by is analysis regarding the company's likelihood of capturing this market, its potential future market share, and the challenges that must be overcome before all of this can transpire. The following will set forth an evaluation of why the new field of liquid biopsy is so promising, what challenges face those researching it, and how GRAIL stacks up against the competition within this emerging industry.

Liquid Biopsy Overview & Opportunities

Tissue biopsy has long been the standard in diagnosing and profiling cancer tumors in patients. The method entails extracting a tissue sample from a patient through invasive and sometimes painful surgical procedures. A more patient-friendly method has emerged as an alternative and possibly even a future replacement. The new method, liquid biopsy, serves a similar purpose as tissue biopsy, but uses blood or other body fluids instead of tissue.

Cancer tumors mutate over time and their characteristics change as the disease progresses from early to more advanced stages. During the process, dying cancer cells give off DNA into the blood stream. Advances in DNA sequencing technologies have made it possible to harvest the enormous amount of information contained in our blood by isolating genetic markers of cancer. This makes diagnosis of cancer through a blood test theoretically possible.

An additional benefit of blood-based liquid biopsy is the ease of obtaining a sample; any doctor's office is surely equipped to draw blood. This is an advantage over the invasive and painful method of tissue biopsy, which often is performed only once on a patient. In contrast, liquid biopsies can be performed multiple times throughout an illness, giving care providers a better view of the disease, an understanding of its progression over time, feedback on the success of current treatment, and insight into how treatment should change as the illness evolves.

Liquid biopsy is a rapidly developing new field with five current and possible future applications:

1. Screening for Early Cancer Detection

2. Profiling Cancers

3. Monitoring Treatment

4. Developing Personalized Treatment

5. Assessing Possible Recurrence

The most promising but also most difficult among these is early detection of cancer. An early detection test capable of diagnosing all major types of cancer is referred to by some in the industry as the holy grail of liquid biopsy research. This new tool against cancer has ignited a major technological race among researchers and companies poised to deepen their understanding of this new procedure and bring solutions to the marketplace. The emergence of liquid biopsy presents many opportunities but, along with them, perhaps just as many challenges as well.

Challenges of Liquid Biopsy

While successfully developing liquid biopsy tests could revolutionize cancer diagnosis and treatment, the technology faces steep challenges as well. One such challenge relates to the two methods of isolating cancer DNA. The primary method being researched relates to circulating tumor DNA (ctDNA). While more prominent, this method obtains information from DNA shed by dying cancer cells. However, some researchers will point out that DNA from dying cells doesn't accurately represent the characteristics of the disease inside the body.

The alternative method, which assesses circulating tumor cells and is known as CTC, attempts to isolate whole tumor cells, not just DNA fragments as with ctDNA. Dr. Daniel Haber, one such researcher, offers the following justification for his preference for CTC over ctDNA: "A fragment from a dying tumor cell 'doesn't tell me anything about the biology of the living tumor.'" CTC-based research offers a more thorough understanding of a specific cancer. Another advantage is that CTC makes it possible to assess the effectiveness of a treatment against a tumor cell, a benefit not shared by ctDNA. The drawback is that circulating tumors are extremely rare in the bloodstream, and remarkably difficult to isolate. While Dr. Haber believes that "we're on the cusp of having a standardized, affordable technology" in CTC, the propensity of most researchers to direct their efforts toward ctDNA could suggest otherwise. Still, it's important to keep in mind that there is a competing method with arguably more favorable characteristics.

Another challenge threatening the success of ctDNA is the difficulty of developing tests with sufficient sensitivity (ability to identify the presence of genetic markers) while also ensuring that the test is accurate (not prone to identify false positives). With the extensive amount of health-related data that can be obtained from an individual's blood, separating out the extremely small amount of cancerous DNA from the voluminous amount of healthy DNA can make sensitivity of cancer screening tests a troublesome proposition. Additionally, millions of false positives could result from a test that is only 90 percent accurate.
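[To make the scale of the false-positive problem concrete, here is a back-of-the-envelope sketch with purely illustrative numbers; the population size, prevalence, sensitivity and specificity below are assumptions, not figures from the article.]

```python
# Back-of-the-envelope sketch with purely illustrative numbers: how an imperfect screening
# test produces large absolute numbers of false positives in a mostly healthy population.
population = 100_000_000    # assumed number of asymptomatic people screened
prevalence = 0.005          # assumed fraction who actually harbor early-stage cancer
sensitivity = 0.80          # assumed fraction of true cancers the test detects
specificity = 0.90          # "90 percent accurate" read here as 90% specificity

with_cancer = population * prevalence
without_cancer = population - with_cancer

true_positives = with_cancer * sensitivity            # cancers correctly flagged
false_positives = without_cancer * (1 - specificity)  # healthy people incorrectly flagged

print(f"true positives:  {true_positives:,.0f}")   # ~400,000
print(f"false positives: {false_positives:,.0f}")  # ~9,950,000 - roughly 25 false alarms
                                                   # for every real case at these numbers
```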

If false positives become prevalent in cancer screens they could undermine the hopes of early detection efforts if the risk cannot be sufficiently mitigated. The problem is that positive test results for individuals who have no cancer present could potentially lead to unnecessary additional biopsies and treatments. Similarly, there is a risk of overdiagnosis. Because our bodies are equipped to deal with and dispose of the frequent cell mutations they experience, many early stage cancers may never advance to a more threatening phase. Screening tests without long-term trials could be found to result in unnecessary tests and treatments for patients with tumors that would have never spread inside the body. This fact emphasizes that, even for tests with sufficient sensitivity and accuracy, it can still be a challenge to determine when treatment should be administered to a patient who tests positive.

Despite these major challenges, many remain highly optimistic about the future of liquid biopsy. Victor Velculescu, a lead researcher at Johns Hopkins, expects these issues to resolve favorably and considers an early detection test for cancer to be probable within the next five years, which is consistent with Illumina's 2019 target date. With that prospect in mind, numerous organizations have positioned themselves to capitalize on the opportunity.

Sizing Up the Competition

One player becoming more prominent in the field is Guardant Health. Much like Illumina's moonshot GRAIL, Guardant Health's Project Lunar program has begun making strides towards developing a simple blood test capable of detecting most types of cancer at an early stage. Guardant's early successes include a study where it performed its liquid biopsy blood screening test side by side with the standard tissue biopsy and produced similar results 98 percent of the time. Guardant itself claims to be able to identify circulating tumor DNA with 1,000 times the accuracy of standard sequencing methods, and its Guardant 360 test has attained specificity (accuracy) reaching as much as 99.9999 percent, thereby generally ruling out false positives. Guardant has enlisted the assistance of research institutions including Massachusetts General Hospital, Perelman School of Medicine at the University of Pennsylvania, Robert H. Lurie Comprehensive Cancer Center at Northwestern University, UC San Francisco, and others.

In response to Illumina's announcement regarding its plans for GRAIL, Sequenom (NASDAQ:SQNM) CEO Dirk van den Boom noted that, "Sequenom had already made significant progress on the technology." The company had been seeking a partner to bring its test to market prior to its recent acquisition bid from LabCorp (NYSE:LH). Trovagene uses liquid biopsy to monitor the progression of cancer in patients already diagnosed and keep tabs on changes in the disease over time. Janssen Diagnostics, a subsidiary of Johnson & Johnson (NYSE:JNJ), has an FDA-approved liquid biopsy test. Life sciences giant Roche also has aspirations in the field. And this is just a small sampling of the large number of companies making a bid in the field.

In total, liquid biopsy research has attracted attention from 38 companies in the U.S. alone, and there are approximately 350 clinical trials under way to learn more about the new technology. With the crowded research space, investors will undoubtedly be interested in learning how GRAIL measures up against its competition.

GRAIL's Competitive Positioning

Illumina is interested in early detection screening tests for cancer; its ambitions do not extend to other potential applications of liquid biopsy, such as monitoring, assessing possibility of recurrence, or other areas according to CEO Jay Flatley. In addition, Flatley emphasizes that monitoring for recurrence and analyzing tumors has dominated much of the current research. Instead, GRAIL is aiming to carve out a niche in the most scientifically complex area of the field: early detection of all major types of cancer.

The company itself claims that it is "uniquely positioned" and that it can achieve superior accuracy in its testing due to its technology which enables deep sequencing capabilities. Another part of GRAIL's unique positioning comes from scale: the company's plan is to obtain clinical data from hundreds of thousands of people. One third party, CEO Andre Marziali of Boreal Genomics, noted that GRAIL will be capable of amassing data well beyond the scope possible with any other current competitor, and further stated that, whether it succeeds or fails, "GRAIL will accelerate the arrival of ctDNA screening."

GRAIL describes its advantage as making small amounts of ctDNA detectable by improving signal to noise, thereby providing the test greater sensitivity and reducing the risk that cancer goes unnoticed in early stages. While the company has promoted its expectations to achieve greater sensitivity, it has also been careful to stress that eliminating false positives will be among its primary concerns. Based on these communications, it might be inferred that management would be willing to sacrifice a certain amount of sensitivity to achieve a greater amount of accuracy and fewer false positives, a plan that helps to overcome one of the major liquid biopsy challenges if GRAIL can deliver on this promise in its testing.

Without question, GRAIL still faces major challenges in executing its plan. One author may have put it best with the comment that "GRAIL's plans will require development of biological [I would say informatics of genome regulation, AJP] understanding that is presently unknown, and strategies that contradict much current thinking in public health, making GRAIL's initiatives extremely ambitious." Still, the rush of research activity into this field shows that the scientific community senses a major opportunity. And GRAIL possesses capabilities and ambitions which afford it a reasonable possibility of capitalizing on that promise.

Conclusion

As part of the larger company, GRAIL is a relatively minor investment which represents a small amount of incremental risk for Illumina, a company for which there is much to be optimistic about. However, a careful analysis of the considerations discussed above will reveal to investors what they likely already know: Illumina's side project GRAIL is a startup with infinite promise, but speculative prospects, significant competition and scientific uncertainty. As with many biotech and scientific endeavors, early investors who lack expertise in the field but hope to gain an edge in predicting the outcome of this technological race may find themselves hard pressed to do so. Despite this, any incremental value Illumina can create from its investment in GRAIL will be a welcome additional benefit for shareholders of the promising core business.

Disclosure: I/we have no positions in any stocks mentioned, and no plans to initiate any positions within the next 72 hours.

I wrote this article myself, and it expresses my own opinions. I am not receiving compensation for it (other than from Seeking Alpha). I have no business relationship with any company whose stock is mentioned in this article.

Additional disclosure: All investments involve risks. Investors are encouraged to do their own due diligence prior to making buy or sell decisions.

[The fundamental idea that blood tests can reveal early signs of certain types of cancer is already a well-known practice. In your Annual Physical, your blood test most likely includes a check of your "PSA number", the Prostate-Specific Antigen (PSA) Test. It is very affordable, and if for a man it stays close to 1 in a steady manner for years, one can be reasonably relaxed about the health of the prostate. The Big Idea of Grail catapults such a singular test into an entirely new dimension. Instead of measuring an Antigen (a protein), the Illumina-dominated Grail can look at the genome, the full genome, of suspicious cells circulating very early in the bloodstream. The colossal science question, of course, is how "to find a needle in a haystack". In my terms, it is looking for Fractal Defects, mostly in the Non-Coding (Regulatory, maiden name "Junk") DNA. Cancer leads to a genomic melt-down with millions of mutations and even chromosomes breaking up. In Chernobyl (and other nuclear reactors) there was nothing wrong with the blazing Uranium (or Plutonium) fuels! THE PROBLEM WAS A BREAK-DOWN OF THE REGULATORY SYSTEM! What makes Grail a true Holy Grail is that it has the potential of Full Human Sequencing, using the World's best and most powerful X10 (or X5 for smaller Countries, like my homeland Hungary of only 10 million people, where due to her post-Communist history there is a single Central Health Insurance System, with all the medical records digitized for at least the last 20 years!). Thus, it is very likely that the FractoGene patent 8,280,641 can be used to correlate the higher fractality of certain cell surfaces (with cancer, the cells become more "spiky") with Fractal Defects detected in the Genome; such cells can be fished out and their genome fully sequenced (in a "repeat-customer mode", as the article points out). Correlation of the fractality of the shape with the Fractal Defects detected in the Genome could propel any of the leading 38 competitors in the field toward the "holy grail" of seizing a monopoly in the vast cancer market (estimated anywhere between $20 Billion and $200 Billion). The time to seize it is now. Andras_at_Pellionisz_dot_com]
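[A note on how "fractality of the shape" can be quantified in practice: the box-counting dimension of a cell contour is one standard estimator. The sketch below is a generic illustration of box counting on a binary contour image; it is not the method of the cited patent, and the toy circle test image is an assumption for demonstration.]

```python
# Minimal sketch (generic illustration, not the patented method): box-counting estimate of
# the fractal dimension of a binary cell-contour image; a "spikier" contour scores higher.
import numpy as np

def box_count(mask: np.ndarray, box: int) -> int:
    """Number of box x box tiles that contain at least one contour pixel."""
    h, w = mask.shape
    h_trim, w_trim = h - h % box, w - w % box
    tiles = mask[:h_trim, :w_trim].reshape(h_trim // box, box, w_trim // box, box)
    return int(tiles.any(axis=(1, 3)).sum())

def box_dimension(mask: np.ndarray, boxes=(2, 4, 8, 16, 32)) -> float:
    counts = [box_count(mask, b) for b in boxes]
    # Slope of log N(box) versus log(1/box) estimates the box-counting dimension.
    slope = np.polyfit(np.log(1.0 / np.array(boxes)), np.log(counts), 1)[0]
    return float(slope)

if __name__ == "__main__":
    # Toy contour: a circle rasterized on a 256 x 256 grid (dimension should come out near 1).
    y, x = np.mgrid[0:256, 0:256]
    r = np.hypot(x - 128, y - 128)
    contour = np.abs(r - 80) < 1.0
    print(f"box-counting dimension ~ {box_dimension(contour):.2f}")
```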

--

The leading 8 of the 38 Competitors in Liquid Biopsy Analytics

8 Companies Developing Liquid Biopsy Cancer Tests

NANALYZE

Cancer blood tests, or “liquid biopsies” as they are called, promise to be a huge niche in molecular diagnostics (MDx). Cowen & Co estimates that using DNA blood tests for cancer screening will be a $10 billion a year market in just 4 years. Illumina just announced that they are developing a universal blood test to identify early-stage cancers in people with no symptoms of the disease. The Illumina venture is called “Grail”, and has taken in more than $100 million in Series A financing from investors that include ARCH Venture Partners, Sutter Hill Ventures and Bezos Expeditions.

With Illumina having worked on this project for 18 months already, competing companies in this space should be worried. While over 38 companies are actively targeting this space, the below chart from Piper Jaffray shows the key players that we'll take a closer look at:

MYRIAD GENETICS (NASDAQ:MYGN)

We first wrote about Myriad in November of 2014, and since then the stock price is up +28% giving the Company a market cap of nearly $3 billion. Myriad’s initial product was a 25-gene genetic test called myRisk that identifies an elevated risk for eight important cancers using just a simple blood sample. Since then, the Company has developed an entire suite of “liquid biopsy” tests. Here’s where the company wants to be in 2020. Those are some lofty goals, but Myriad has been making some good progress towards them. Revenues for Q3 2015 were $183 million which was down slightly from the previous quarter. Over 90,000 physicians have ordered Myriad’s tests, with over 2 million tests being performed so far.

ONCOCYTE (NYSEMKT:OCX)

Founded in 2009, Oncocyte is developing non-invasive liquid biopsy diagnostic tests in areas of high unmet need in oncology, specifically the 13-16 million lung and breast cancer patients. With an IPO taking place just several weeks ago, the Company must be wondering if their timing could have been any worse. Investors who bought shares on the first day of that IPO would have lost more than -50% of their investment in just 14 days, resulting in a market cap for OCX of just $104 million. Oncocyte has yet to generate any revenues.

VERMILLION (NASDAQ:VRML)

Founded in 1993, Austin based Vermillion has lost -78% of their share price value in the past 5 years giving the Company a market cap of just $86 million. Vermillion has developed the first FDA-cleared, multi-biomarker blood test that helps assess the risk of ovarian cancer prior to surgery. The Company brought in minuscule revenues in Q3 2015 making investors wonder just when this product offering will take off.

VERACYTE (NASDAQ:VCYT)

Since their market debut in 2013, VCYT is down over -50% giving the Company a market cap of just $171 million. Veracyte’s liquid biopsy tests are targeting lung and thyroid cancer, two diseases that often require invasive procedures for an accurate diagnosis. The Company has shown strong revenue growth along with equally strong operating losses. 2014 showed revenues of $38 million on losses of $29 million.

FOUNDATION MEDICINE (NASDAQ:FMI)

While their first product "FoundationOne" required an actual tissue biopsy, their second product "FoundationOne Heme" is a liquid biopsy which was released after we first wrote about Foundation Medicine back in March of 2014. We also wrote about Foundation more recently in January 2015 when their share price soared +95% on the back of a strategic partnership with Roche. While we hoped that partnership would establish a support level for the stock, it didn't. Since then, FMI has sunk -65%, giving the Company a current market cap of $560 million. Revenues appear to be steadily growing over time, with Q3 2015 revenues coming in at around $25 million.

GENOMIC HEALTH (NASDAQ:GHDX)

Since their market debut in 2005, this $1 billion market cap company has returned +166% compared to a NASDAQ return of +113% over the same time frame. Similar to Foundation Medicine, Genomic Health has made a strong business of detecting cancer in tissues obtained from biopsies using their Oncotype DX test suite which brought in revenues of $275 million in 2014. In January of 2015, the Company announced their intention to release a liquid biopsy test in 2016 which they would price much lower than their Oncotype DX test which is priced at around $4,500.

BIOCEPT (NASDAQ:BIOC)

We first wrote about Biocept in May 2014 and since then the stock is down over -60% giving the Company a market cap of just $30 million. Biocept is developing their OncoCEE platform which claims to offer 10-100X the sensitivity of competing platforms in detecting cancer mutations in the blood stream. While the stock price languishes, Biocept continues to strengthen their patent portfolio and sign commercial agreements. Revenues last quarter continue to be minuscule.

GUARDANT (Private)

While not mentioned in the above chart from Piper Jaffrey, Guardant is a company we wrote about in March 2014 which is developing the GUARDANT360, a test that looks for tumor DNA which is shed into the bloodstream for almost every type of cancer. Just last week, Guardant closed a massive funding round of $100 million, the same amount of money put forward by Illumina to launch Grail. Guardant360 is currently being used by 20,000 patients with cancer per year at a price point of $5,400.


Precision medicine: Analytics, data science and EHRs in the new age

[For those who may not know: EHR stands for "Electronic Health Records", an obsolete standard; the Office of the National Coordinator for Health Information Technology launched the interoperability initiative in 2004 - TWELVE YEARS AGO - AJP]

The promise of genomics and personalized care are closer than many realize. But clinical systems and EHRs are not ready yet. While policymakers and innovators play catch-up, here’s a look at what you need to know.

By John Andrews August 15, 2016

Considering how fast technology advances in the healthcare industry, it seems natural that a once-innovative concept could become obsolete in the span of, say, a dozen years. Knowledge, comprehension and capabilities continue moving forward, and if the instruments of support don't keep pace, a rift can appear. If nothing is done, it can escalate into a seismic event.

Some contend that this situation exists with the rapid advancement of precision medicine continually outstripping the static state of electronic health records. Medical research is forging ahead with genomic discoveries, while EHRs remain essentially the same as when the Office of the National Coordinator for Health Information Technology launched the interoperability initiative in 2004.

Over that time, healthcare provider IT teams have worked tirelessly at implementing systems with EHR capability and towards industry-wide interoperability. If the relationship between science and infrastructure has hit an inexorable bottleneck, what are the reasons for it?

"It depends on how you look at it," noted Nephi Walton, MD, biomedical informaticist and genetics fellow at Washington University School of Medicine in St. Louis. "One of the problems I have seen is when new functionality is created in EHRs, it is not necessarily well integrated into the overall data structure and many EHRs as a result have a database structure underneath them that is unintelligible with repetitive data instances. We often seem to be patching holes and building on top of old architecture instead of tearing down and remodeling the right way."

Walton addressed the disconnect between the growth in precision medicine and the limitations of healthcare IT infrastructure at a presentation during the recent HIMSS Big Data and Healthcare Analytics Forum in San Francisco.

"IT in healthcare tends to lag a bit behind other industries for a number of reasons," he said. "One of them is that healthcare IT is seen as a cost center rather than a revenue-generating center at most institutions, so fewer resources are put into it."

Overall, EHR limitations have resonated negatively among providers since they were introduced, said Dave Bennett, executive vice president of product and strategy at Orion Health in Scottsdale, Ariz.

"The EHR reality has fallen painfully short of the promise of patient-centric quality care, happy practitioners and reduced costs," he said. "In recent surveys, EHRs are identified as the top driver of dissatisfaction among providers. According to the clinical end-users of EHRs, it takes too long to manage menial tasks, it decreases face-to-face time with patients, and it degrades the quality of documentation. In one sentence, it does not bring value to providers and consumers."

Despite the limitations though, EHR designs aren't to blame, Bennett said.

"It is not the technology in itself – it is the technology usability that needs a new approach to successfully deliver data-driven healthcare," he said. "We need to redesign the EHR with the patient in mind and build a technology foundation that allows the EHR full integration into the care system. Today's EHRs are good for billing and documenting but are not really designed to be real-time and actionable. They cannot support an ecosystem of real-time interactions, and they lack the data-driven approaches that retail, financial, and high tech industries have taken to optimize their ecosystems."

Strengthening weak links

Technological disparity doesn't just exist between medical research and EHRs, but in how EHRs are used within health systems, added Jon Elwell, CEO for Boise, Idaho-based Kno2.

"One of the biggest struggles in healthcare IT today is the widely uneven distribution of healthcare providers, facilities and systems along the maturity continuum towards a totally digital healthcare future," he said. "One healthcare system is only as technologically advanced as the least mature provider or facility within its network."

For example, he said an advanced accountable care organization may be using EHRs in every department and using direct messaging to exchange patient charts and other content with others in the network. However, he said, it is still common for some to be using faxes to communicate, "thrusting the advanced system back into the dark ages."

As an industry, providers "have to work harder to develop solutions that prevent early adopters from being dragged down to the lowest common technology denominator," Elwell said. "These new solutions should extend a hand up to less-advanced providers and facilities by providing easy ways for them to adopt digital processes, particularly when it comes to interoperability."

Aligning vision with reality

The Office of the National Coordinator for Health IT, which evolved alongside EHRs over the past 12 years, hasn't sat idly by as the imbalance has gradually appeared. Jon White, MD, deputy national coordinator, is fully aware of the situation and says it is time to take a fresh look at precision medicine and EHRs.

"What we need to do is bring reality in with our vision," he said. "It's not just science, but the IT infrastructure that supports it."

With its roots going back to 2000, precision medicine sprang up from genome sequencing and has continued to map its route forward. White says at its inception the ONC realized that information infrastructure needed improvement and the EHR initiative was designed to get the process moving.

"The view of precision medicine and the vision for precision medicine has broadened considerably beyond the genome, which is still a viable part of the precision medicine field," White said. "But it is really about people's data and understanding how it relates to one another."

Precision medicine is being given a cross-institutional approach, with new types of science and analysis emerging and a new methodology being envisioned, White said. For IT, a solid and dynamic infrastructure has been built "where little existed before and over the past seven years EHR adoption has gone from 30 percent of physicians to 95 percent now."

So the vast majority of provider organizations are now using EHRs and the systems are operating with the clinical utility that was expected, White said. Next steps for interoperability and enhanced functionality, he added, are a logical part of the long-term process.

"EHRs are doing a lot of the things we want them to do," he said. "We're at a place where we have the information structure and need to understand how to best use it as well as continuing to adapt and evolve the systems."

More introspection needed

In order for EHRs to gain more functionality and interoperability to achieve a wider scope of utilization, more has to be done with the inner machinations of the process, Walton said.

"I don't think there has been much of a focus on interoperability between systems, especially now where you have a few major players that have pretty much taken over the market," he said. "I fear that as we have less choices, there will be less innovation and I sense now that EHR vendors are more likely to dictate what you want than to give into what you need. The overarching problem with interoperability is that there is no common data model – not only between vendors, but between instances of a particular vendor. There really needs to be a standard data model for healthcare."

Yet while precision medicine – especially as it relates to genomics – continues to emerge, analysts like Eric Just, vice president of technology at Salt Lake City-based Health Catalyst, aren't sure IT infrastructure is solely to blame for the problem.

"I'm not really convinced that EHR interoperability is the true rate limiter here, save for a few very advanced institutions," he said. "Practical application of genomics in a clinical setting requires robust analytics, the ability to constantly ingest new genomic evidence, and there needs to be a clinical vision for actually incorporating this data into clinical care. But very few organizations have all of these pieces in place to actually push up against the EHR limits."

To be sure, White acknowledged that academic institutions who pushed EHRs for research purposes do want more functionality and capability from electronic records.

"Those large academic institutions have been telling their vendors that when it comes to EHRs, 'this is our business and we need you to meet our needs,'" he said.

When presenting on the topic of precision medicine and EHRs, Just said he senses "a big rift" between academic and non-academic centers on the topic.

“Our poll shows that maybe the issue is not EHRs, but the science that needs to be worked out," he said. "A lot of progress is being made, but analyzing the whole genome and bringing it to the medical record is not an agenda that many organizations are pushing. And those that are pushing it don't have a clear vision of what they're looking for."

Charting new horizons

Because precision medicine's advancement is growing so rapidly, it is understandable that EHRs will be limited, Just said.

"These new analyses have workflows no one has seen before, they need to be developed and current technology won't allow it," he said. "EHRs are good at established workflows, but we need to open workflows so that third parties can develop extensions to the EHR."

As it exists today, the healthcare IT infrastructure is "simply genomic unaware," said Chris Callahan, vice president of Cambridge, Mass.-based GeneInsight, meaning that genetic data has no accommodations within the records.

"Epic and Cerner don't have a data field in their system called 'variant,' for example, the basic unit of analysis in genetics," he said. "It's simply not represented in the system. They are not genomic ready."

Enakshi Singh is a genomic researcher who sees firsthand academia's quest for higher EHR functionality. As a senior product manager for genomics and healthcare for SAP in Palo Alto, Calif., she is at the center of Stanford School of Medicine's efforts to apply genomic data at the point of care. In this role, she works with multidisciplinary teams to develop software solutions for real-time analyses of large-scale biological, wearable and clinical data.

"The interoperability win will be when patients can seamlessly add data to their EHRs," she said. "But at this point, today's EHR systems can't handle genomic data or wearable data streams."

EHRs may not be equipped for ultra-sophisticated data processing and storage, but Singh also understands that they reflect the limitations of the medical establishment when it comes to genomic knowledge. Every individual has approximately three billion base pairs in their genomic code, with some three million variants that are specific to each person.

"General practitioners aren't equipped to understand the three million characteristics that make each individual unique," she said.

One reason for precision medicine's growth is how the cost of sequencing has shrunk, Singh said. The first human genome sequence, completed around 2000, took 13 years and roughly $3 billion from a large consortium to produce. Today a genome can be sequenced for $1,000, which has led to a stampede of consumers wanting to find out their genetic predispositions, she said.

Singh's colleague Carlos Bustamante, professor of biomedical data science and genetics at Stanford, calls the trend "a $1,000 genome for a $1 million interpretation."

The frontier for genomics and precision medicine continues to be vast and wide, Singh said, because of the three million variants, "we only know a fraction of what that means. When we talk about complex diseases, it's an interplay of multiple different characters and mutations and how it's related to environment and lifestyle. Integrating this hasn't evolved yet."

The other challenge is connecting with the clinical data sets that have been shown to play a role in disease, integrating them at the point of care, and creating assertions based on profile information. Singh is involved with building new software that takes in new data streams and provides for quick interpretation. The Stanford hospital clinic is in the process of piloting a genomic service, where anyone at the hospital can refer patients to the service for a swab and sequencing.

"They will work the process and curate it, filter down what's not important and go down the list related to symptoms," Singh said. "This replaces searching through databases. What we have done is to create a prototype app that automates the workflow and create a field where workflow is streamlined for interpreting more patients. Current workflow without the prototype is 50 hours per patient and ours is dramatically cutting the time down. It's not close to being in clinical decision support yet, but it did go through 30 patients with the genomic service."

Workflow and analytics

With a background in EHRs, Jeffrey Wu, director of product development for Health Catalyst, specializes in population health. To adequately utilize EHRs for genomics, Wu is developing an analytics framework capable of bringing data in from different sources, which could include genomics as part of the much broader precision medicine field. Ultimately, he said it's about giving EHRs the capability to handle a more complete patient profile.

"Right now there is minimal differentiation between patients, which makes it harder to distinguish between them," Wu said. "Standardizing the types of genomes and the type of care for those genomes will make EHRs more effective."

Wu explained that his project has two spaces – the EHRs are the workflow space, coinciding with a separate analytics engine for large computations and complex algorithms.

"These two architectures live separately," he said. "Our goal is to get those integration points together to host the capabilities and leverage up-and-coming technologies to get the data in real time."

Stoking the FHIR

A key tool in helping vendors expand the EHR's functionality is FHIR – Fast Healthcare Interoperability Resources, an open healthcare standard from HL7 that has been available for trial use since 2014.

SMART on FHIR is the latest platform offering, providing a complete, open, standards-based technology stack designed so that developers can integrate a vast array of clinical data with ease.

Joshua Mandel, MD, research scientist in biomedical informatics at Harvard and lead architect of the SMART Health IT project, is optimistic that SMART on FHIR and a pilot project called Sync for Science will give vendors the incentive and the platform to move EHR capability in a direction that can accommodate advancing medical science.

"When ONC and the National Institutes of Health were looking for forward-thinking ways to incorporate EHR data into research, using the SMART on FHIR API was a natural fit," he said. "It's a technology that works for research, but also provides a platform for other kinds of data access as well. The technology fits into the national roadmap for providing patient API access, where patients can use whatever apps they choose, and connect those apps to EHR data. In that sense, research is just one use case – if we have a working apps ecosystem, then researchers can leverage that ecosystem just the same as any other app developer."

With Sync for Science, Mandel's team at the Harvard Department of Biomedical Informatics is leading a technical coordination effort that is funded initially for 12 months to work with seven EHR vendors – Allscripts, athenahealth, Cerner, drchrono, eClinicalWorks, Epic, and McKesson/RelayHealth – to ensure that each of these vendors can implement a consistent API that allows patients to share their clinical data with researchers.

Sync for Science – known as S4S – is designed to help any research study ask for (and receive, if the patient approves) patient-level electronic health record data, Mandel said. One important upcoming study is the Precision Medicine Initiative.

"It's important to keep in mind that much of the most interesting work will involve aggregation of data from multiple modalities, including self-reports, mobile device/sensors, 'omics' data, and the EHR, he said. "S4S is focused on this latter piece – making the connection to the EHR. This will help keep results grounded in traditional clinical concepts like historical diagnoses and lab results."

The project is focused on a relatively small "summary" data set, known as the Meaningful Use Common Clinical Data Set. It includes the kind of basic structured clinical data that makes up the core of a health record, including allergies, medications, lab results, immunizations, vital signs, procedure history, and smoking status. The timeline is structured so that the pilot should be completed by the end of December and Mandel expects that the technical coordination work will be finished by that time. The next step, he says, is to test the deployments with actual patients.

"We're still working out the details of how these tests will happen," Mandel said. "One possibility is that the Precision Medicine Initiative Cohort Program will be able to run these tests as part of their early participant app data collection workflow."

Built on the FHIR foundation, S4S is designated as the lynchpin for broadening interoperability across research, clinical data and patient access. FHIR is organized around profiles – the use-case data and the data types that characterize them – and S4S is building a FHIR profile so that data such as demographics, medications and laboratory results can be accessed and donated to precision medicine.
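[To make the S4S data-access idea concrete, here is a minimal sketch of pulling the Common Clinical Data Set categories named above from a FHIR server over its REST API. The base URL, patient id and access token are hypothetical placeholders, and exact resource names and media types vary across FHIR versions and vendor implementations; this is an illustration of the general pattern, not S4S's actual profile.]

```python
# Minimal sketch (hypothetical endpoint, patient id and token; resource names follow public
# HL7 FHIR REST conventions, not the exact S4S profile): fetching summary clinical data.
import requests

FHIR_BASE = "https://ehr.example.org/fhir"   # hypothetical FHIR endpoint
TOKEN = "SMART-ON-FHIR-ACCESS-TOKEN"         # obtained via the SMART OAuth2 launch flow
PATIENT_ID = "example-patient-id"            # hypothetical patient

# Resource types roughly covering the Common Clinical Data Set named in the article.
QUERIES = {
    "allergies":     ("AllergyIntolerance", {"patient": PATIENT_ID}),
    "medications":   ("MedicationRequest",  {"patient": PATIENT_ID}),
    "lab results":   ("Observation",        {"patient": PATIENT_ID, "category": "laboratory"}),
    "vital signs":   ("Observation",        {"patient": PATIENT_ID, "category": "vital-signs"}),
    "immunizations": ("Immunization",       {"patient": PATIENT_ID}),
    "procedures":    ("Procedure",          {"patient": PATIENT_ID}),
}

def fetch(resource: str, params: dict) -> dict:
    """GET a FHIR search bundle for one resource type, authorized with a SMART bearer token."""
    resp = requests.get(
        f"{FHIR_BASE}/{resource}",
        params=params,
        headers={"Authorization": f"Bearer {TOKEN}", "Accept": "application/fhir+json"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    for label, (resource, params) in QUERIES.items():
        bundle = fetch(resource, params)
        print(f"{label}: {bundle.get('total', len(bundle.get('entry', [])))} entries")
```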

As a proponent of S4S, the ONC sees the program as an extension of "the fundamental building blocks for interoperability," White said. The APIs that are integral to the S4S effort have been used in EHRs for a long time, but he said vendors kept them proprietary.

"When we told vendors in 2015 that they would need to open APIs so that there could be appropriate access to data, they agreed, and moreover, they said they would lead the charge," White said.

MU and MACRA influence

When the industry started on the EHR and interoperability initiative in 2004, meaningful use hadn't been conceived of yet. With MU's arrival as part of President Obama's ARRA program, healthcare providers were suddenly diverted from the original development plan with an extra layer of bureaucracy.

Walton talks about its impact on the overall effort: "Meaningful use had some value but largely missed the goals of its intention," he said. "I think a lot of people essentially played the system to achieve the financial benefit of meaningful use without necessarily being concerned about how that translated into benefits for patients. Meaningful use has pushed people to start talking about interoperability, which is good, but it has not gone much further than that. Most of the changes in EHRs around meaningful use were driven by billing and financial reimbursement, but it has opened the door to more possibilities."

The broader problem, says Wayne Oxenham, president of Orion Health's North America operation, is that a business-to-business model did not really exist in healthcare, "so incentives were not aligned, and MU was only focusing on EHR interoperability and quality measures that provide no value versus proactive care models."

In essence, Oxenham said "MU did not deliver much. The program tried to do too much by wanting to digitize healthcare in 10 years and curiously, their approach was only focused on the technology instead of focusing on the patient and creating value. The point was to improve outcomes and stabilize costs, not to exchange documents that did not necessarily need to be shared, and they brought no value when stored in a locker deep in a clinical portal. MU missed the point – it just helped digitize processes that were and are still oriented towards billing, but aren't focused on optimizing care and using the data in meaningful ways."

As with MU, new certification requirements for the Medicare Access and CHIP Reauthorization Act of 2015 (MACRA) could also influence the dynamics of precision medicine and genomics, but Walton contends that it isn't an issue at this point.

"I don't think MU has inhibited the development at all, and people are still trying to wrap their heads around MACRA," Walton said. "A big part of the problem is that there has not really been a financial incentive to pursue this and many healthcare IT innovations are driven by billing and trying to increase revenue. I think that MU has tied up a lot of healthcare IT resources but I don't know that I can say they would have been working on precision medicine if they were not tied up."

Eric Rock, CEO of Plano, Texas-based Vivify, calls MU "a measuring stick used to drive quality through financial penalties or gains, the strongest driver for healthcare industry change." While he considers MU a "good move, it perhaps wasn't strong enough to make a 'meaningful' difference with interoperability or on the cost of care."

Forthcoming CMS bundles, such as the recent Comprehensive Care for Joint Replacement model, could advance the MU incentive component further, he said.

"The impact that CMS bundles and other value-based care models will have is a much stronger demand by providers towards healthcare interoperability in a landmark effort to reduce costs," Rock said. "As a result, winning large contracts may require a commitment to a new level of interoperability."

Altering the trajectory

If the current trajectory of precision medicine-EHR imbalance continues, it won't be for a lack of trying by medical science and the healthcare IT industry to curb it. Programs like Sync for Science need time to develop and produce results. At this point, however, there are a lot of questions about how the "technology gap" issue will proceed and whether it will continue to widen.

From a technology perspective, Walton believes the focus needs to be on scaling horizontally.

"Right now EHRs are primarily based on massive servers rather than distributing tasks across multiple computers," he said. "You can't handle the data from precision medicine this way – it's just not going to work, especially when you have multiple departments trying to access and process it at the same time."

True cloud computing, whether internally or externally hosted, is needed for this type of data, Walton said, because "the database infrastructure behind EHRs and clinical data warehouses is not geared towards precision medicine and cannot handle the data generated. There are clinical data warehouses that can handle the data better but they are not usually updated in real time, which you need for an effective system for precision medicine. This will require investments in very fast hardware and distributed computing and we have a ways to go on this front."

On the current trajectory, precision medicine is "slowly sweeping into standards of care and what we are doing is going little-step by little-step to find places where personalized medicine is applicable and can be used," Callahan said.

The only way the current trajectory will change is if reimbursement patterns change, he said.

"If and when payers latch onto the idea that personalized medicine is actually a key enabler of population health, then that they should pay for it as an investment," he said. "That will be a game changer, a new trajectory. Right now the payer community views precision medicine and genetics as just another costly test and people don't know what it means or what the clinical utility of it is. That is the exact wrong way to think about it. Precision medicine and genetics are key enablers for population management. When you can get your head around that idea, when you can marry the idea, then you really start to see things change." .

[Precision Medicine, very much like computing (which was disrupted from IBM mainframes to DEC minicomputers to Apple personal computers), will be disrupted by entirely new paradigms, secured as intellectual property since 2004. Andras_at_Pellionisz_dot_com]


Stanford Medicine, Google team up to harness power of data science for health care

Stanford Medicine will use the power, security and scale of Google Cloud Platform to support precision health and more efficient patient care.

Stanford Medicine and Google are working together to transform patient care and medical research through data science.

The new collaboration combines Stanford Medicine’s excellence in health-care research and clinical work with Google’s expertise in cloud technology and data science. Stanford’s forthcoming Clinical Genomics Service, which puts genomic sequencing into the hands of clinicians to help diagnose disease, will be built using Google Genomics, a service that applies the same technologies that power Google Search and Maps to securely store, process, explore and share genomic data sets.

Stanford Medicine includes the Stanford School of Medicine, Stanford Health Care and Stanford Children’s Health. Together, Stanford Medicine and Google will build cloud-based applications for exploring massive health-care data sets, a move that could transform patient care and medical research.

“Stanford Medicine and Google are committing to major investments in preventing and curing diseases that afflict ordinary people worldwide. We’re proud to be setting this milestone for the future of patient care and research,” said Lloyd Minor, MD, dean of the School of Medicine.

The agreement — considered key to Stanford Health Care’s development of the Clinical Genomics Service — makes Google Inc. a formal business associate of Stanford Medicine. As such, Google and Stanford will both comply with the Health Insurance Portability and Accountability Act, a federal law that regulates the privacy and security of medical information. HIPAA requires that Stanford Medicine patient data stored on Google Cloud Platform servers stay private. Patient information will be encrypted, both in transit and on servers, and kept on servers in the United States.

Analyzing genetic data

With Google Genomics, Stanford Medicine will build its new Clinical Genomics Service on the Google Cloud Platform, expanding genomics research and establishing new methods of real-time data analysis for efficient patient care. “We are excited to support the creation of the Clinical Genomics Service by connecting our clinical care technologies with Google’s extraordinary capabilities for cloud data storage, analysis and interpretation, enabling Stanford to lead in the field of precision health,” said Pravene Nath, chief information officer for Stanford Health Care.

The Clinical Genomics Service will enable physicians at Stanford Health Care and Stanford Children’s Health to order genome sequencing for patients who have distinctive or unusual symptoms that might be caused by a wayward gene. The genomic data would then go to the Google Cloud Platform to join masses of aggregated and anonymous data from other Stanford patients. “As the new service launches,” said Euan Ashley, MRCP, DPhil, a Stanford associate professor of medicine and of genetics, “we’ll be doing hundreds and then thousands of genome sequences.”

The Clinical Genomics Service aims to make genetic testing a normal part of health care for patients. “Genetic testing is built into the whole system,” said Ashley. A physician who thinks a genome-sequencing test could help a patient can simply request sequencing along with other blood tests, he said. “The DNA gets sequenced and a large amount of data comes back,” he said. At that point, Stanford can use Google Cloud to analyze the data to decide which gene variants might be responsible for the patient’s health condition. Then a data curation team will work with the physician to narrow the possibilities, he said.
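
As a rough illustration of the narrowing-down step described above, the sketch below filters a list of annotated variants down to the rare ones that hit genes on a phenotype-driven panel, leaving a short list for a curation team. The field names, the 0.1% frequency cut-off and the example panel are hypothetical and are not Stanford's or Google's actual pipeline.

# A deliberately simplified sketch of the kind of filtering step described above:
# keep only rare variants that fall in genes relevant to the patient's phenotype,
# so a curation team has a short list to review. Field names, thresholds and the
# gene panel are hypothetical, not the actual Stanford/Google pipeline.

from typing import Dict, List

def shortlist_variants(variants: List[Dict], panel_genes: set,
                       max_pop_freq: float = 0.001) -> List[Dict]:
    """Return variants that are rare in the population and hit a panel gene."""
    return [v for v in variants
            if v["gene"] in panel_genes and v["pop_freq"] <= max_pop_freq]

variants = [
    {"gene": "MYH7", "pop_freq": 0.0001, "change": "p.Arg403Gln"},
    {"gene": "TTN",  "pop_freq": 0.1500, "change": "p.Ile123Val"},
]
print(shortlist_variants(variants, panel_genes={"MYH7", "TTN"}))
# -> only the rare MYH7 variant survives; the common TTN variant is dropped.

In practice such a step runs over millions of variants per genome, which is exactly why the article stresses cloud-scale storage and compute.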

“This collaboration will enable Stanford to discover new ways to advance medicine to the benefit of Stanford patients and families,” said Ed Kopetsky, chief information officer at Lucile Packard Children’s Hospital Stanford and Stanford Children’s Health. “Together, Stanford Medicine and Google are making a major contribution and commitment in curing diseases that afflict children not just in our community, but throughout the world. It’s an extraordinary investment, and we’re proud to play such a large role in transforming patient care and research.”

Ashley noted that medicine mostly deals in small data, such as lab tests. But genomic studies, patient health records, medical images from MRI and CT scans, and wearable devices that monitor activity, gait or blood chemistry involve huge amounts of data that can allow doctors and researchers alike to analyze myriad aspects of patient health in ways that lead to improved medical decisions and products that are tailored to the patient — the essence of a precision health approach.

Focusing on precision health

“In the past few years, the amount of available data about health care has exploded,” said Minor. “While researchers are learning to integrate this big data, putting it to work for individual patients, in real time, is a huge challenge. Our collaboration with Google will help us to meet this challenge.”

Sam Schillace, vice president of engineering for industry solutions at Google Cloud Platform, said, “I’m excited because this agreement brings together expertise in three areas: data science, life science research and clinical care. The next decade of improvements in understanding and advancing health care is going to come from leaders in those three areas working together to build the next generation of platforms, tools and data.”

It’s all consistent with Stanford Medicine’s focus on precision health. “You could imagine that, going forward, potentially every patient could be sequenced,” said Michael Halaas, chief information officer for the School of Medicine. “The technology challenge we need to solve is how to derive useful insights from data and apply it directly to the care of a patient in near real time and also make progress on research.”

Halaas said the Stanford-Google agreement does more than provide Stanford with server space. “It’s not just stacks of servers,” he said. “It includes layers and layers of innovative technology. This agreement allows us to do the analytics in a way that is fast and secure.”

Minor said, “We’ll be working with Google to build innovative technology that will enable Stanford to lead in precision health, the goal of which is to anticipate and prevent disease in the healthy and precisely diagnose and treat disease in the ill.”

Data as the engine that drives research

Large-scale patient data is already helping answer research questions at Stanford. For example, Ami Bhatt, MD, PhD, an assistant professor of medicine and of genetics, is exploring changes in patient microbiomes that can precede symptoms of a disease such as cancer.

Another study is looking at alarm data from patient hospital rooms. The de-identified, or anonymized, data has been accumulating at Stanford’s adult and children’s hospitals for about 15 years, said Ashley, but until now no one has studied it. Hospitalized patients are typically hooked up to monitors that display their heart rate, blood-oxygen levels and other basic data, with alarms that go off if the measurements suggest something is wrong. The problem is that the alarms go off when nothing is wrong — sometimes when the patient just moves. Health-care providers often turn off the alarms so patients can rest and nurses can concentrate on people who need care. An artificial-intelligence approach in the works could use the alarm data to distinguish false alarms from real ones.

The analytics applications and virtual supercomputers available through Google Genomics could pave the way for other kinds of projects, as well. Working with Google’s engineers, Stanford researchers could make advances in visual learning that might, for example, enable computers to distinguish malignant tumors from benign ones in medical images.

The Stanford-Google collaboration is a critical step on the path to precision health, Minor said. “This is the foundational work for bringing patient health information and other big data to the bedside,” he said.

Google Tech Talk YouTube, 2008

[Pellionisz started his US career in the geometrization of neuroscience and genomics at Stanford in 1973]


Leroy Hood 2002: The Genome is a System (needs some system theory)

Book Excerpt: ‘Hood: Trailblazer of the Genomics Age’

Armed with a vision for revolutionizing healthcare, biologist Leroy Hood wasn't going to let anything — or anyone — stand in his way.

08.08.2016 / BY Luke Timmerman

TWO WEEKS BEFORE Christmas 1999, Lee Hood appeared to have it all: A loving family. Money. Fame. Power. He counted Bill Gates, one of the world’s richest men, as a friend and supporter. Eight years earlier, Gates had given the University of Washington $12 million to lure the star biologist from Caltech in what the Wall Street Journal called a “major coup.”

Hood’s assignment on arrival: build a first-of-its-kind research department at the intersection of biology, computer science, and medicine.

Even at 61, the former high school football quarterback could still do 100 pushups in a row. He ran at least three miles a day. He climbed mountains. He traveled the world to give scientific talks to rapt audiences. At a time when many men slow down, Hood maintained a breakneck pace, sleeping just four to five hours a night. He owned a luxurious art-filled mansion on Lake Washington, but otherwise cared little for the finer things in life, sporting a cheap plastic wristwatch and driving an aging Toyota Camry. Those who worked closely with him said he still had the same wonder and enthusiasm for science he had as a student.

Yet here, at the turn of the millennium, Hood was miserable.

His once-controversial vision for “big science” was becoming a reality through the Human Genome Project, yet he didn’t feel like a winner. He felt suffocated. He had a new vision, a more far-sighted and expansive one that he insisted would revolutionize healthcare. But he felt the university bureaucrats were blind to the opportunity. They kept getting in his way. It was time, Hood felt, to have a difficult conversation with his biggest supporter.

On a typically dark and gray December day in Seattle, Hood climbed into his dinged-up Camry and drove across the Highway 520 floating bridge over Lake Washington to meet Gates, the billionaire CEO of Microsoft. Hood shared some startling news: he had resigned his endowed Gates-funded professorship at UW. He wanted to start a new institute free from university red tape. It was the only way to fulfill his dream for biology in the 21st century.

Gates was well aware of Hood’s record of achievements and its catalytic potential. Hood had led a team at Caltech that invented four research instrument prototypes in the 1980s, including the first automated DNA sequencer. The improved machines that followed made the Human Genome Project possible and transformed biology into more of a data-driven, quantitative science. Researchers no longer had to spend several years — an entire graduate student’s career — just to determine the sequence of a single gene. With fast, automated sequencing tools, a new generation of biologists could think broadly about the billions of units of DNA.

The sequences were obviously important, as they held the instructions to make proteins that do the work within cells. Thousands of labs around the world — at research institutions, diagnostic companies, and drugmakers — used the progeny of Hood’s prototype instruments as everyday workhorses. George Rathmann, the founding CEO at Amgen, biotech’s first big success story, once said Hood “accelerated the whole field of biotechnology by a number of years.”

Building on his success at Caltech, Hood had recruited teams at the University of Washington that continued to create new technologies to further medical innovation. One machine analyzed the extent to which genes are expressed in biological samples; another analyzed large numbers of proteins simultaneously. One device quickly sorted through different types of cells.

Hood certainly wasn’t the only biologist looking ahead, imagining what these automated tools could enable. Once scientists had the full parts list of the human genome on their computers, many believed it would lead to greater understanding of disease, paving the way for precise diagnostics and, ultimately, “personalized medicine.” But Hood had an unusually clear and far-reaching view for how biologists could fully exploit the new instruments. His enthusiasm inspired many bright scientists to devote themselves to his vision and to do their best work.

Not that he inspired everyone. Many people throughout his career saw a man who took excessive credit for discoveries made by others, including young scientists who toiled for him around the clock. Critics saw a self-promoting narcissist, someone who could be blind to the ways his actions sometimes hurt people. He had contradictions: Influenced by his teachers, Hood was dedicated to science education his entire career, yet he did little to mentor his own graduate students.

Passionate as he was about his vision, Hood could be strangely detached from the people he asked to carry it out. He had a big ego — an “unshakable confidence,” in his own words, and he thought he was entitled to special treatment, which frustrated university leaders. When people complained Hood was out of control, he usually turned a deaf ear, dismissing them as bureaucrats or whiners. He was quick to point the finger at others when things went wrong, while hardly ever admitting a mistake of his own.

Like many biotech entrepreneurs, Hood made promises he couldn’t keep. He predicted his work would lead to a personalized medicine revolution within a few years. It didn’t. Competitive to the core, he drove himself to stay at the cutting edge. That meant starting multiple projects at once, getting them operational, then leaping ahead to the next thing. He left the slow, painstaking, meticulous work of science to others.

Ever since James Watson and Francis Crick discovered the double helix structure of DNA in 1953, biologists had pursued the promise of molecular biology. Scientists spent the last half of the 20th century drilling ever deeper into understanding one gene, and usually the one protein created by that gene’s instructions, at a time. Each gene was studied in isolation. It was a thrilling time, as biologists saw there was an underlying unity to life: the DNA code was present in animals, plants, bacteria — every living organism on the planet. The code held genetic information that had so much influence over life on Earth. Many great discoveries had been made using a narrow, deep approach that sought to understand the meaning of the code in many contexts — different animals, different disease states, different environments.

Hood himself, as a graduate student, carried out important immunology research in this tradition. Yet at the start of the 21st century, Hood believed biology was ready for more ambitious goals. He believed traditional reductionism, looking at one gene at a time, was outdated “small science.” Biology was maturing from a cottage industry into a modern science with fast automated research tools. The time was right, he argued, for scientists to look at hundreds or thousands of genes and proteins together in the complex symphony that makes up a whole human organism with trillions of cells. Biology, like physics, had an opportunity to turn into “big science” — fueled by big money, big teams, and big goals.

The way to tackle such complexity, Hood said, was through what he called “systems biology.” It was a new twist on an old idea that involved bringing scientists together from various disciplines of biology, chemistry, physics, mathematics, and computer science. He wanted the power to recruit the right people to the University of Washington for this mission. These people didn’t always speak the same language, but Hood saw himself as the leader who could cross the scientific cultural divide.

He would break every rule and custom of academia if necessary. If he needed to offer a computer scientist a salary that was competitive with Microsoft’s wages, he would. He wanted to choose whom to hire and whom to fire. He needed the flexibility to raise his own money from wealthy donors, some of whom had struck riches in the original internet boom of the late 1990s. If he wanted to out-license a technology for further development to a company, or start a new company, he didn’t want to ask permission. Hood demanded a multi-million dollar facility with enough room for all of his scientists, complete with a floor layout that tore down traditional walls between departments to enable more collaboration. As negotiations at the University of Washington dragged on, Hood realized he wasn’t going to get what he wanted. And university officials were growing weary with his entrepreneurial break-all-the-rules attitude. Hood had approached a wealthy donor without permission. He was actively organizing an independent nonprofit while still on the state payroll. When officials suggested he had run afoul of a new state ethics law, Hood felt threatened and embittered. Abruptly, he quit.

For his final act, Hood wanted to be the boss. New ideas need new organizations to support them, he said. Years earlier, while at Caltech, he had learned this lesson the hard way. Nineteen established instrument companies told him they weren’t interested in developing his DNA sequencer. Hood and some venture capitalists started a successful new company, Applied Biosystems. Emboldened by that experience, Hood now decided he was ready for a whole new kind of risk. In his early 60s, he decided to give up his department chairmanship and tenured faculty position. He had to start his own research institute. It was time to put his money and his reputation on the line.

There were sensitivities that needed to be considered. Would all of Hood’s federally funded grants transfer to his new institute in an orderly manner? How many of his bright students and postdocs would leave an accredited university to join a risky venture? Would peers be supportive, or would they dismiss the institute as a flight of fancy unworthy of grant funding? Could he find lab space? How much of his own money would he need to spend? Would he be condemned by the media for turning his back on the University of Washington?

Many of those questions would take months or years to resolve. But on that gloomy day in December 1999, Hood wanted to break the news to Gates in person. He knew it would be bad form for Gates to hear about it on the evening news. Hood had his assistant call Gates’ office and request a face-to-face meeting. The two men had been close, so Hood got on the calendar. Gates had heard the gist of the institute-within-the-university idea and was curious to hear what was so important that it couldn’t wait.

The men sat down in a couple of comfortable chairs in Gates’ office in Building 8 at the Microsoft campus in Redmond. Hood came quickly to the point. He’d resigned his endowed Gates professorship at UW because the bureaucracy of a public institution would never be flexible enough to let him achieve his goals for multi-disciplinary systems biology. Hardly stopping for breath, Hood barreled through his long list of grievances with administrators who didn’t share his vision. In the same breath, he rhapsodized about the opportunity for systems biology.

The billionaire listened for a solid 15 to 20 minutes. When Gates asked whether the dispute could be resolved some other way, Hood said he had tried for three years to set up such an institute within the university.

When Hood had said his piece, Gates cut to the heart of the matter.

“How are you going to fund this institute?” he asked.

“Well, that’s part of the reason I’m here…” Hood replied.

Gates interjected.

“I never fund anything I think is going to fail,” he said.

Hood was stunned.

He hadn’t expected Gates to commit on the spot to bankrolling a new institute. But he didn’t expect to be flatly dismissed. Gates was a logical thinker, not the impulsive type. He was a kindred spirit, an entrepreneur, a fellow impatient optimist. Years earlier, they bonded on a safari in East Africa; Hood listened to Gates talk about the “digital divide” as hippopotamuses grunted in the night. Often, Gates peppered his biologist friend with questions about the human immune system, widely considered to be the most sophisticated adaptive intelligence system in the universe. The recruitment of Hood helped raise the University of Washington to international prominence in genomics and biotechnology during the 1990s. Given that success, Hood thought he could talk his friend into providing as much as $200 million for a new institute.

Hood didn’t realize it at the time, but Gates was starting to think more seriously about how to make an impact by giving away most of his fortune, by tackling diseases that plague the world’s least fortunate people. By contrast, Hood’s brand of systems biology was abstract, and its applications were likely to come first in rich countries.

The harsh truth, for Hood at least, took years to sink in. Gates didn’t give his institute a penny in its first five years.

Their friendship didn’t end, but the two men would never be quite so close again. “I definitely disappointed Lee,” Gates said years later.

Reflecting on Hood’s split with the university nearly 15 years later, Gates took a nuanced view. He was intrigued by Hood’s new vision, but he also saw why he didn’t work well with others in the university. “He’s a wonderful guy, but a very demanding guy. He’s kind of a classic great scientist,” Gates said. “These things are never black and white.”

On the drive home that fateful day in December 1999, Hood wondered whether he had said something wrong, failed to make a case. But it was a fleeting emotion. Moments of self-doubt, to the extent he had them, were brief. He confided in his wife, Valerie Logan, the one person he knew would give him the support he needed, no matter what. He brooded for a while. “It shocked and hurt me,” Hood said. “It was a statement of skepticism from someone I had hoped would support me.”

Others who were close to the situation understood why the meeting had gone badly. “Bill had not, at that time, been schooled in philanthropy,” said Roger Perlmutter, a former student of Hood’s who went on to run R&D at Amgen and Merck. “This gift to the University of Washington to create Molecular Biotechnology was surely the biggest thing he had done in philanthropy. It was all done to bring Lee here. And then in short order, it unravels? It was a kick in the teeth.”

If Hood’s first thought was that he had possibly damaged his relationship with his most important benefactor, his second thought was that his vision was right and he needed to find other support. He had some money already. Much of it was through his shares in Amgen, the biotech company he advised from its early days, and which had become one of the best performing stocks of the 1990s. He’d also made millions from royalties on DNA sequencers sold by another company he helped start — Applied Biosystems. And Hood had other wealthy friends and companies he could call on for help.

There was a lot to think about beyond science. Where to begin on starting a new institute? Even though he was hailed as one of biotech’s great first-generation entrepreneurs, Hood had never played an executive role in running those enterprises. Now, he would have to act as a startup CEO responsible for not just vision and fundraising, but day-to-day operations. He knew he wasn’t a skilled administrator. He was impatient in meetings, lacked empathy, and made clear to all around him that he didn’t want to hear any bad news. He had a bad habit of avoiding sensitive personnel matters, like whether to fire people.

None of that deterred him. Hood’s son, Eran, an environmental scientist, once said of his father: “He’s always sort of had this narrative in his life of him struggling against people who are trying to keep him from doing what he wants to do. I always joke they should take the Tupac Shakur song, ‘Me Against the World’ and rewrite it as ‘Lee Against the World.’ They could take out the district attorneys and the crooked cops and put in university presidents and the medical school deans who just don’t know, don’t understand, his vision.”

The path ahead was clear. Hood had to prove his vision was right. He would push himself around the clock, to the far ends of the Earth, spend his last nickel. Nothing, he was determined, would get in his way.

Luke Timmerman is the founder and editor of Timmerman Report, a subscription publication for biotech professionals. He is also a contributing biotech writer at Forbes, and the co-host of Signal, a biweekly biotech podcast at STAT. In 2015, he was named one of the 100 most influential people in biotech by Scientific American Worldview.

[Lee Hood, an M.D., Ph.D., went public in 2002 with "The Genome is a System" (implying that it needs some system theory, obviously beyond the realm of Old School Genomics). In the same year I did not go public, but filed the FractoGene patent on the claim that "The Genome is Fractal and Grows Fractal Organisms". Seven years later (2009) the Hilbert fractal of the genome was the cover picture of Science. Explaining genome regulation by Fractal System Theory, or by any other system theory, can scare some; Genomeweb encapsulated this in their one-liner "Paired Ends, Lee Hood, Andras Pellionisz". Now, a second seven years later (2016), the Big Information Technology companies (Google, Apple, Samsung, Siemens, Microsoft and all) will have to (gladly) employ mathematical algorithms to meaningfully program their computers. Andras_at_Pellionisz_dot_com ]


What if Dawkins' "The Selfish Gene" had been "Selfish FractoGene"?

[From "very near misses" to an almost complete embrace of my 2002 FractoGene concept (given to the hands of Dawkins in 2003 in Monterey and to Mandelbrot 2004 in Stanford), the 2004 original of "The Ancestor's Tale" by Dawkins & Wong has a new, 2016 edition (see fractal title-page below). Mandelbrot is gone (2012). While in his entire life he deliberately avoided "mathematization of biology" although he was offered extremely substantial funds since "biologists were not ready" (see his Memoirs), among the few of his illustrations he did feature the "Cauliflower Romanesca" (brought into limelight by Pellionisz, 2008).

Now, in 2016, another giant of influence, Richard Dawkins (Charles Simonyi Emeritus Professor at Oxford), has embraced the fractal approach to biology. I make some comments below on "what if Dawkins' Selfish Gene had been Selfish FractoGene?" - Andras_at_Pellionisz_dot_com

Science & Environment published an utterly fascinating recollection after 40 years of "The Selfish Gene" by Richard Dawkins. No, their title was different]:

The gene's still selfish: Dawkins' famous idea turns 40

By Jonathan Webb

Science reporter, BBC News

24 May 2016

From the section Science & Environment

Richard Dawkins, 40 years after "The Selfish Gene".

[Excerpts from the BBC article] Prof Dawkins hunches over his laptop to dig up examples of biomorphs - the computer-generated "creatures" he conceived in the 1980s to illustrate artificial selection:

Apple Macintosh software "Biomorphs", used for his book The Blind Watchmaker.

[I switched from the IBM-type mainframes (that Mandelbrot used) to precursors of home computers: DEC-15 graphic computers with an optical cursor (a predecessor of the "mouse"). I built a computer model of the entire cerebellar neural network (of the frog), containing 1.68 million brain cells. From the arduous process of programming the growth of its 5 types of brain cells, including the most spectacular neuron, the vast array of self-similar Purkinje cells, I know from my "99% perspiration" (from 1965 to 1989) how much information one has to put in to "grow" the entire cerebellar neural network. This is why I never believed that the less than 1% of the human genome, "the genes", was enough information; it just could not be that the rest was "Junk DNA".

The moment the Apple Macintosh appeared, I threw out all obsolete hardware from my New York University Medical Center office and put a Macintosh both at work and in my home (Waterside Plaza, NYC 10010, a five-minute walk away). Thus I could work "around the clock" on my software development toward the mathematization (geometrization) of biology. Among the first concepts to check out was a follow-up on Mandelbrot's musing in his famous "Fractal book" that "clouds are not spheres, mountains are not cones and even the lightning does not travel along a straight line", with the additional allusion that "maybe even brain cells are fractal". Just like Dawkins, I used Lindenmayer L-string replacement (one class of fractal recursive algorithms) to duplicate an existing Purkinje cell (from the cerebellum of the guinea pig). All fractal algorithms are recursive: in the famous Mandelbrot Set, every new Z point is generated from the previous Z point squared, plus a constant C. Mandelbrot's "trick" was to take the recursion studied by Gaston Julia and explore, by computer, how it behaves with Z as a complex number. Unfortunately, few people know how to "square" a complex number, with a real and also an imaginary component; yet it is just a couple of lines of code for a computer. With enough repetitions on an IBM mainframe, a new world opened up: the Mandelbrot Set, an epitome of "complexity generated from simplicity".

In my 1989 Cambridge University Press book chapter on the fractal Purkinje cell I even alluded that, "far into the future", the recursive process necessary to generate a fractal brain cell might lead to The Principle of Recursive Genome Function (2008). With his "Biomorphs", Richard Dawkins had a "very near miss", indeed. Especially after teaming up with Charles Simonyi (Microsoft VP, mastermind of Microsoft Office), they must have been totally familiar with the recursive "programming" of the "Biomorphs" (based on the recursive L-string replacement algorithm). However, neither Dawkins nor Simonyi devoted their focus to fractals or to neuroscience at the level of brain cells. Dawkins certainly developed a keen interest in the DNA as a code, but Dawkins and Simonyi, neither separately nor together, made the "Eureka!" connection between fractal brain cells and the fractal DNA that generates them. Richard Dawkins was interested enough in the DNA as a code to show up from Oxford at the Monterey 2003 50th Anniversary Meeting of "The Double Helix" (Jim Watson was there; Francis Crick was already too ill to attend).
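
Since the "couple of lines of code" are invoked above, here is a minimal, illustrative sketch of the Z = Z^2 + C recursion in Python; the iteration count and escape test are the standard textbook choices, not anything specific to this column.

# Minimal sketch of the recursion Z -> Z^2 + C discussed above.
# Python's built-in complex type handles the "squaring" of a complex number,
# so the whole Mandelbrot-set membership test is only a few lines.

def in_mandelbrot_set(c: complex, max_iter: int = 100) -> bool:
    """Return True if the point c appears to stay bounded under Z = Z**2 + C."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c          # the entire recursive operator
        if abs(z) > 2.0:       # once |Z| exceeds 2 it escapes to infinity
            return False
    return True

# Example: -1+0j stays bounded (inside the set), 1+1j escapes almost immediately.
print(in_mandelbrot_set(-1 + 0j))  # True
print(in_mandelbrot_set(1 + 1j))   # False
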
I made the short drive from Silicon Valley to Monterey for all 3 days of the celebration, rubbing elbows with those enthused by the fact that, just over half a century on, not only had the full human genome been sequenced, but by 2003 we also knew that the full mouse DNA sequence was 98% identical (in its genes) to the human. The "only difference" was that the so-called "non-coding DNA" (maiden name "Junk DNA") was about one-third smaller in the mouse. Throughout the 3 days (with my FractoGene patent already filed in 2002) I was very vocal about my FractoGene interpretation. I had prepared a "mini-CD" with a cover picture of the 4 fractal stages of Purkinje neuron growth, illustrating with 4 lines of self-similar DNA sequence snippets how the obviously repetitious DNA is not only fractal, but that its fractality is in a cause-and-effect relationship with the fractal growth of the Purkinje cell it governs. In the "evening entertainment time" (at the Monterey Aquarium) I had an informal but rather lengthy chat with Francis Collins about why "Junk DNA" cannot be discarded. His single main objection was that some genomes are actually much larger than the human genome (he cited rice). Nonetheless, he (and also Craig Venter) seemed to be thinking hard about Ohno's (1972) notion, later called "the biggest mistake in the history of molecular biology". Upon his return to Washington, D.C., Francis Collins requested funds for ENCODE-I to find out!

At the Monterey meeting I personally met Richard Dawkins (the only time I encountered him face-to-face). When I put my mini-CD with the FractoGene cover into his hands, he looked at it, and I will never forget his face and body language. He did not say so, but to me the answer seemed to be "What did I miss with my Biomorphs! This guy must be right!". I had plenty of copies of the mini-CD to distribute over the 3 days to all who took one. The FractoGene concept, however, was a "double heresy" - it reversed both fundamental dogmas of Old School Genomics. (Any "recursion" was a horrific violation of the still-living Crick's "Central Dogma" - that the DNA > RNA > Protein "arrow" NEVER recurses back to DNA - and Ohno's "Junk DNA" nonsense made any "recursion" pointless, since the non-coding DNA was not supposed to carry any information.) This is why I could only publish my fractal Purkinje cell model in a Cambridge University Press book (of a Copenhagen "Neural Net" meeting, where I was the program chairman...). Given the initial stonewalling I saw no chance to publish my FractoGene theory - thus I filed the utility it implies as a US patent in 2002, granted in 2012 (in force for about the next decade).

It is pure fantasy to ponder, at this point, "what if" Dawkins and Simonyi had made the cause-and-effect link between the repetitive (to me, obviously fractal) DNA and the (to everybody, obviously fractal) Lindenmayer L-string replacement algorithm; a toy illustration of that algorithm is sketched below.
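
For readers who have not met Lindenmayer systems, here is a minimal, illustrative sketch of L-string replacement; the axiom and rewrite rule are a standard textbook branching example, not the rules used for the 1989 Purkinje-cell model or for Dawkins' Biomorphs.

# A toy Lindenmayer (L-string) replacement, the class of recursive algorithm
# mentioned above. The rewrite rules here are illustrative only.

def l_system(axiom: str, rules: dict, generations: int) -> str:
    """Repeatedly replace every symbol by its rule (or keep it if no rule exists)."""
    s = axiom
    for _ in range(generations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# A classic branching rule set: F = grow, [ and ] = push/pop a branch point.
rules = {"F": "F[+F]F[-F]F"}
for g in range(3):
    print(g, l_system("F", rules, g))
# Each generation repeats the same local motif at a smaller scale --
# "complexity generated from simplicity", exactly like the fractal recursion.
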

Perhaps the Apple software efforts (both by Dawkins and by myself) would have resulted in a very early "Big IT interest" of Apple in New School Genomics. Apple, in fact, did make a little attempt, e.g. using their tower computers to speed up BLAST, one of the most often used algorithms of Old School Genomics... Perhaps by the time Steve Jobs got cancer, Apple would have been very deeply into genome informatics... Perhaps Steve's dilemma, recorded in his biography - that he would either be among the first to be cured with the help of computers, or among the last to die without computers helping at full force - could have ended the other way? If Simonyi had picked up the ball, perhaps Microsoft would now be a towering Big IT player in New School Genome Informatics (Bill Gates just forked over $100 M to Editas, one of the genome-editing companies). Charles Simonyi's charitable contributions extend to Hungary (he is a son of Károly Simonyi, the famous Hungarian physics professor, whose lectures I was blessed enough to listen to...). When in 2006 I organized a "PostGenetics" World Symposium in Hungary, to trigger the biophysicists and software developers of Hungary to focus on New School Genome Informatics, perhaps I could have turned to Charles Simonyi to tide us over the problem that ENCODE-I only came out with its "NO JUNK!" results in 2007 (2006 was a bit early...). Bill Gates was asked very recently if his genome had ever been fully sequenced. ("No," he answered, adding, "but it would be the first thing I would do if I learned of any sign of cancer.") Craig Venter had his full DNA sequenced ages ago (and has had episodes of both types of skin cancer), but he is on record multiple times that the data sit on the shelves awaiting breakthroughs in interpretation. What seems to be sorely missing is a focus on the mathematics of genome regulation, since cancers are clearly "a disease of genome mis-regulation".

Richard Dawkins completed his turn toward a "fractal approach" by 2016. He and Yan Wong incorporated into the new 2016 edition of The Ancestor's Tale a "fractal interactive interpretation of evolution", based on the PLOS paper of 2012 October 16 by Rosindell and Harmon (published just a few days after my FractoGene USPTO 8,280,641 was granted), amply referring to the brilliant OneZoom interactive web portal (see below). Fractals made the cover picture of Science in 2009 and in 2016 (see in this column), and now "fractal evolution" appears in Dawkins and Wong.

Andras_at_Pellionisz_dot_com

... he is particularly excited by the way whole-genome sequences can inform our extended family tree. ... As the technology gets cheaper and faster, Prof Dawkins says with excitement, "it will become possible to lay out the complete tree of life".

To emphasise the point, he returns to the laptop and opens OneZoom, a dazzling, all-encompassing representation of the tree of life which uses fractal shapes to allow continued expansion.

OneZoom - an interactive fractal model of evolution, based on the original PLOS paper "OneZoom: A Fractal Explorer for the Tree of Life" by Rosindell and Harmon, 2012, Oct. 16

The Ancestor's Tale: the 2004 first edition did not use "fractal" at all, but the 2016 edition (with co-author Yan Wong) totally adopts a fractal viewpoint:


DNA pioneer James Watson: The cancer moonshot is ‘crap’ but there is still hope

By SHARON BEGLEY @sxbegle

JULY 20, 2016

James Watson, whose 1953 discovery of the structure of DNA with Francis Crick launched the revolution in molecular biology, says recent heart surgery has wreaked havoc on his long-term memory (though not his tennis serve: the 88-year-old can still reach 100 miles per hour). At a celebration of his friend Arthur Pardee’s 95th birthday last weekend at the American Academy of Arts and Sciences in Cambridge, Mass., however, Watson showed no signs of cognitive slowdown, much less of forgetting the world-changing events of 63 years ago.

His acerbic and impolitic wit was also in fine form. Describing one scientist who gave a talk at the meeting, Watson said, he is “so brilliant, he reminds me of Francis,” including being so much smarter than everyone else that “no one wants to work with him.” The public’s embrace of antioxidants may well be fatally misguided, he said, rattling off biochemical data on how reducing antioxidants in cancer cells may be the key to destroying them — while consuming high levels of antioxidants as pills or even in foods may increase the risk of dying of cancer, as he argued in a 2013 paper.

Watson spoke to STAT at the Academy and by phone. Here are excerpts from those conversations and from his remarks at the birthday bash:

On the cancer moonshot announced this year by President Obama:

The depressing thing about the “cancer moonshot” is that it’s the same old people getting together, forming committees, and the same old ideas, and it’s all crap . . .

On the prospects of curing cancer:

Everyone wants to sequence DNA [to treat cancer], but I don’t think that will help you cure late-stage cancer, because the mutations in metastatic cancer are not the same as those that started the cancer. I was pessimistic about curing cancer when gene-targeted drugs began to fail, but now I’m optimistic.

On what he sees as the best hope for treating and even curing advanced (metastatic) cancer: an experimental drug from Boston Biomedical (for which Watson is a paid consultant):

Papers have identified the gene STAT3, a transcription factor [that turns on other genes], as expressed in most kinds of cancer. It causes cancer cells to become filled with antioxidants [which neutralize many common chemotherapies]. In the presence of the experimental drug that targets STAT3, cancers become sensitive to chemotherapies like paclitaxel and docetaxel again. This is the most important advance in the last 40 years. It really looks like late-stage cancer will be partly stopped by a drug.

On his involvement in current cancer research:

I’m not at war with the cancer community, but they ignore me and I ignore them.

On his own anticancer regimen:

I take metformin [a widely used diabetes drug] and aspirin; I try not to eat too much sugar, and I exercise. Put all together, they probably reduce my cancer risk 50 percent. At 88, I give myself five years to see 80 percent of cancers treatable. What we can now say is that lots of untreatable cancers have become treatable. When does “treatable” mean “curable”? I’m not sure, but living five years with pancreatic cancer would be quite something. I don’t want to die until I see that most cancers have become curable.

[The government, and its NIH branch (more precisely the National Cancer Institute of the NIH), is in a crisis; its directors have changed more than once lately. Not only does Watson (without question the greatest living champion of genomics) profoundly disagree with the government's solution; even an intramural scientist of the National Cancer Institute, who is very well versed in mathematics, displays a "schizophrenic" stance. He first posted his essay based on the realization that the underlying mathematics is fractal (citing, from Mandelbrot to Pellionisz, some 50 papers, mostly on fractals). Somebody must have tapped him on the shoulder: "Do you know what this would mean? We would all have to understand fractals!" (Actually it is very easy: just keep repeating a recursive operator; e.g., the "mind-boggling complexity" of the Mandelbrot set is totally determined by Z = Z^2 + C. The "glitch" is that to get a new Z, which is a complex number, some do not know how to square a complex number, while adding a constant is trivial.) Nonetheless, the second version of the same NIH paper was stripped of all fractal mathematics. It is a pity, since even medical doctors can remain totally immune to "squaring complex numbers"; computers do far more formidable computations for us on a daily basis.

I am absolutely positive that "nobody wants to die (of cancer) before cancers have become curable". Neither Steve Jobs, nor Bill Gates, nobody. For that matter, I would bet that Bill Gates or Tim Cook (etc.) would, by a simple stroke of a pen, fork over any amount of computation for free if they, or any loved ones, could be liberated from the curse of cancer. What is the problem, then? (Hardly anybody would be totally satisfied with just metformin, aspirin and exercise...) The problem is, as Jim so brutally clearly explains in his hallmark style, that the Old School of Genomics has presently locked horns with a New School of Genomics (hence the Janus-faced double version of the same NIH Cancer Institute paper). I am totally optimistic, since genome misregulation (a.k.a. cancer) will be solved with the awesome assistance of computers. If it is Apple, the tool will be as "user friendly" to MDs as their iPhone, e.g. recommending the therapy with the best probability of being effective for a given individual genome, easier than Googling an item. I developed an enormous amount of software in my life, and I don't even know how many GB of memory my old iPhone had (I no longer care, since the new model has plenty). Andras_at_Pellionisz_dot_com ]


Science Cover Issue 2016 July 22 with Fractal folding of DNA and of Proteins

[Twice the 7-year "critical time" has elapsed since FractoGene (2002). In September 2009 Pellionisz presented it at George Church's Cold Spring Harbor meeting. Weeks later, the Science cover picture was the Hilbert fractal folding of the genome (October 2009). Now, after another 7 years, the 2016 Science cover picture shows the self-similar sets of protein elements whose "design" is in the fractal DNA. No further comment is needed, but the following brief report on a Chinese trial of "editing out fractal defects" clearly points to the epoch-making applications, e.g. to defeat cancer - Andras_at_Pellionisz_dot_com]


CRISPR Immunotherapy Trial Ahead

Jul 22, 2016

Chinese researchers will be starting a human trial investigating a CRISPR/Cas9-based therapy for metastatic non-small cell lung cancer next month, Nature News reports.

The trial, led by Sichuan University's Lu You, involves isolating T cells from patients who've failed chemotherapy, radiation therapy, and other treatments. The CRISPR/Cas9 approach will then be used to knock out the gene encoding the PD-1 protein that typically prevents the immune system from attacking healthy cells. Through this, the researchers hope to boost the patients' immune response to cancer. The trial, Nature News adds, is starting small, with just one patient and with low doses of altered cells and will gradually increase both the cohort size and dosage.

"Treatment options are very limited," Lu tells Nature News. "This technique is of great promise in bringing benefits to patients, especially the cancer patients whom we treat every day."

Approval for the trial, which the researchers received earlier this month, took about six months, Lu says. A similar trial in the US, led by the University of Pennsylvania's Edward Stadtmauer, has received approval from a National Institutes of Health advisory panel, but it also needs the go-ahead from the Food and Drug Administration and its institutional review board.


Qatari genomes provide a reference for the Middle East

Published online 20 July 2016

Researchers have assembled a reference genome to reflect the variants in Middle Eastern populations.

Written by Sedeer El-Showk

A reference genome specific to Arab populations was recently published by geneticists in Qatar and New York, facilitating future research into genetic diseases and the application of precision medicine in the region [1].

It is hoped that the availability of a Qatari reference genome will enable doctors to treat Arab patients using precision medicine, an approach that incorporates information about a patient's genome into the prediction, diagnosis, and treatment of disease.

“Precise genome interpretation is key to the successful integration of genomics in healthcare decision-making,” says the study’s lead author, Khalid Fakhro of Weill Cornell Medicine in Qatar (WCM-Q) and the Sidra Medical and Research Center.

Genomic information about Arab populations has so far been sparse, limiting benefit from precision medicine and other fruits of modern genomics.

“When we started sequencing the first 100 Qatari genomes, we were surprised by the unusually high numbers of variants being called. Either the chemistry [of the sequencing reaction] was introducing systematic errors or there was something biologically interesting going on,” says Fakhro. The sequenced Qatari genomes differed from the standard reference genome at more than four million locations — roughly 25% more than expected if sequencing a Caucasian or East Asian genome.

Decoding the genomes of Qatari Bedouins revealed that indigenous Arabs have probably been present in the Arabian Peninsula since the out-of-Africa migration, representing one of the oldest populations outside Africa [2]. As a result of this ancient divergence, variants which are quite rare in other groups may be quite common in populations of the region, rendering the standard reference genome inadequate for studying them.

To overcome this, the team embarked on creating a reference genome which could be used as the standard for further work in Gulf Arab populations. They collected samples from more than 1,000 Qataris and analysed them using the genomics and advanced computing resources available at WCM-Q and WCM-NY. By only using data from individuals who were unrelated and whose four grandparents had all been born in Qatar, the team ensured that the reference genome would be representative of Qataris in particular and Arabs in general.

“For now, this reference genome will be a good approximation for most Middle Eastern Arabs, but it’s still not better than a truly local reference for each subpopulation,” says Fakhro, who would like to see Arab scientists work together to build a rich data set of Arab genomes.

The researchers found that roughly one out of every six differences between the Qatari genomes and the standard reference was in fact common among Qataris; these differences were therefore incorporated into the new reference genome. "Surprisingly, some of these were previously reported as Mendelian disease-causing variants," says Fakhro, explaining that they may have been incorrectly labelled as rare because their prevalence in this population was not known.
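
A minimal sketch of the idea just described, assuming (purely for illustration) that an alternate allele is promoted into the population-specific reference whenever it is the majority allele in the cohort; the published pipeline may use different criteria and data structures.

# Sketch: if an "alternate" allele relative to the standard reference is in fact
# the common allele in the local cohort, swap it into the population-specific
# reference. The 0.5 frequency threshold is an assumption for illustration.

def build_population_reference(reference: str, variants, cohort_freqs,
                               threshold: float = 0.5) -> str:
    """variants: list of (position, alt_base); cohort_freqs: alt-allele frequencies."""
    ref = list(reference)
    for (pos, alt), freq in zip(variants, cohort_freqs):
        if freq >= threshold:          # the "alt" is the majority allele locally
            ref[pos] = alt
    return "".join(ref)

print(build_population_reference("ACGTACGT",
                                 variants=[(2, "T"), (5, "A")],
                                 cohort_freqs=[0.82, 0.10]))
# -> "ACTTACGT": only the variant common in the cohort is promoted.
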

"This study is the result of several years of hard work by colleagues in Qatar,” says Fowzan Alkuraya, a human geneticist at King Faisal Specialist Hospital and Research Center in Saudi Arabia. “I hope future studies will explore some of the features of the genome, but this is already a big step forward for genomics research in Qatar."

Alkuraya was not involved in this study. But Saudi Arabia is home to the Saudi Human Genome Project, which aims to discover disease-causing genetic variants in the Saudi population by sequencing 100,000 genomes.

Likewise, the Qatar Genome Programme was launched in 2014 with the goal of sequencing the entire Qatari population to deliver precision medicine. Both projects, along with studies of Arab samples elsewhere in the world, stand to benefit greatly from the new reference genome, which will reduce their computational burden and make false positives much less common.

“Qatar has made a serious commitment to be a leader in precision medicine, and we will continue to do our best to discover as much as possible for the Arab world from the Arab world,” says Fakhro.

doi:10.1038/nmiddleeast.2016.110

Fakhro, K. et al. The Qatar Genome: A population-specific tool for precision medicine in the Middle East. Human Genome Variation http://dx.doi.org/10.1038/hgv.2016.16 (2016)

Rodriguez-Flores, J.L. et al. Indigenous Arabs are descendants of the earliest split from ancient Eurasian populations. Genome Research http://dx.doi.org/10.1101/gr.191478.115 (2016)

[There is a worldwide rush to sequence populations (full DNA) that appear to exhibit much more divergence than previously thought. First, why this unexpected diversity? The reason is that the 19 thousand or so "genes" provide "the basic materials," pumping out proteins (through RNA). They are more or less the same in humans as in the mouse (found the next year, 2002, when the mouse turned out to have 98% of the same "genes"). Even the tiny worm C. elegans has about the same number, and the same kinds, of genes. Not so for the vast amount (98% in humans) of "Junk DNA" - which, after a tumultuous struggle since Ohno's 1972 "so much Junk DNA in our genome", we all agree is anything but "Junk". The non-coding DNA provides the fractal design of how the basic materials are architected together, and the different groups of organisms from worms to mammals are just about as different as the great variety of "fractal flowers, leaves, trees".

Okay, but what is the significance of deep-sequencing the full human genome of different groups of people? The paper emphasizes "precision medicine" (the new term for "personalized medicine"). There is, however, an even more basic utility. Just think of tightly knit ethnic groups in which arranged marriages are frequent. Enormous wealth is scrupulously weighed: how to preserve it while estates merge and divide. Think about this: with all the calculation and pondering over material wealth, should the most important treasure we all have (our hereditary material) be considered perhaps even more carefully? Indeed! There are already scores of structural variants (fractal defects) that a person can harbor without the carrier or the descendants showing any health problems at all - except if both parents harbor identical defects. The closer-knit the community, the more likely it is that the partners in arranged marriages are blood relatives. Thus the tendency for "all to remain in the family" enhances the chance that rather serious malfunctions are inherited from both sides - UNLESS the arrangement of marriages meticulously investigates not just the merged wealth, but also the compatibility of closely similar genomes. Does this mean that certain matches should not happen? Fortunately, there is no such harsh verdict. Already there are methods to select for full development those fertilized eggs that (e.g. at a 50-50 chance) do NOT inherit the defect. Sounds simple? At the level of different societies it is not simple at all. Affordability is only one question; the expertise and the cultural acceptance are much more difficult challenges. Yet such a positive use will absolutely happen! Andras_at_Pellionisz_dot_com ]
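
As a back-of-the-envelope illustration of the counselling arithmetic alluded to above, the sketch below computes the textbook Mendelian ratios for a single autosomal recessive variant; real carrier screening of course weighs many loci and many inheritance modes.

# Textbook Mendelian ratios for one autosomal recessive variant.
# 'Aa' denotes a carrier, 'AA' a non-carrier, 'aa' an affected genotype.

from itertools import product

def offspring_genotype_probs(parent1: str, parent2: str) -> dict:
    """Return genotype -> probability for a child of the two given parents."""
    counts = {}
    for a, b in product(parent1, parent2):   # each parent passes on one allele
        g = "".join(sorted(a + b))           # 'Aa' and 'aA' are the same genotype
        counts[g] = counts.get(g, 0) + 0.25
    return counts

print(offspring_genotype_probs("Aa", "Aa"))
# -> {'AA': 0.25, 'Aa': 0.5, 'aa': 0.25}: a 25% risk of an affected child when
#    both partners carry the same recessive defect.
print(offspring_genotype_probs("Aa", "AA"))
# -> {'AA': 0.5, 'Aa': 0.5}: with only one carrier parent, a 50-50 chance of
#    not inheriting the defect at all, as mentioned in the note above.
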


CIA Chief Claims that Genome Editing May Be Used For Biological Warfare

CIA Director John Brennan claims that advances in genome editing pose a threat to national security and may be used to create biological weapons.

NEW YORK (Sputnik) — Advances in genome editing pose a threat to national security and may be used to create biological weapons, Central Intelligence Agency Director John Brennan said at the Council on Foreign Relations on Wednesday.

"Nowhere are the stakes higher for our national security than in the field of biotechnology," Brennan stated. "Recent advances in genome editing that offer great potential for breakthroughs in public health are also a cause for concern because the same methods can be used to create genetically engineered biological warfare agents."

In April, the Department of Homeland Security’s Office of Health Affairs Acting Director Aaron Firoved testified in Congress that synthetic biology and gene editing offer terrorist organizations the potential to modify organisms for malicious purposes, such as manmade pathogens that could rapidly cause disease outbreaks.

Moreover, subnational terrorist entities such as Daesh "would have few compunctions in wielding such a weapon," Brennan noted.

Brennan called on the international community to create national and global strategies to counter such threats, along with a consensus on the laws, standards and authorities needed to counter them.

[Nuclear science started with the almost trivial observation that radioactive rocks left ugly dark spots on a photographic plate. Soon, basic axioms of physics and chemistry (Dalton's "atom theory") became obsolete, since elements of the Mendeleev table CAN transform into other element(s) by nuclear fission or fusion. The hard part for science was that Quantum Theory and Nuclear Physics had to be created (by the Copenhagen group) to mathematically understand the new science. Still, it remained a brave new but innocent effort of science. Leo Szilárd, however, composed a "heads up" letter and, with Edward Teller, had Einstein sign it for the President's perusal. It pointed out that unheard-of amounts of energy could be unleashed by nuclear fission - and WWII could be won! Incidentally, the same Leo Szilárd filed the patent for nuclear reactors (a peaceful utilization of nuclear energy), though his patent was not released for many years. One may also note that John von Neumann architected a new device (the computer) to help deal with the mathematical monstrosities of the challenge.

Old School Genetics used to be like Dalton's "atom theory". "Genes" counted, but the rest, 98.5%, was "Junk DNA". "Genes" weren't "jumping", weren't "scattered in pieces" (each was supposed to be a contiguous sequence, albeit interrupted by totally inexplicable "introns"), and proteins were "never" recursive to the non-coding DNA (see Crick's now-obsolete Central Dogma). New School HoloGenomics (Genomics plus Epigenomics) overturned all the primitive dogmas. The mathematical laws of genome regulation are in the domain of fractal-chaotic nonlinear dynamics (and cancer, for example, is a systemic breakdown of hologenome regulation - better strengthen your immune system!). Still, HoloGenomics has been a very benevolent, basic-science effort with anti-disease applications (though we pointed out years ago that, with an understanding of hologenome regulation, a totally new type of "antibiotics" could be made by shutting down the genome function of pathogens).

Curiously, a massive download of this column by the Homeland Defense Data Center (Cheyenne, Wyoming) sobered me just a couple of days ago: New School HoloGenomics may have left the station of innocence. Nonetheless, in any event, massive funds will be available to mathematically understand genome regulation (e.g. what fractal defects could be edited out to fight diseases, or ... ). Look for the Homeland Defense facility (at Livermore, California), DARPA, NSF (etc.) suddenly considering the mathematical understanding of hologenome function a super-urgent matter. Andras_at_Pellionisz_dot_com ]


Are Early Harbingers of Alzheimer’s Scattered Across the Genome? [FractoGene!]

13 Jul 2016

Large genome-wide association studies have implicated fewer than two dozen genes in AD, but some scientists believe thousands of other variants may influence the course of the disease, even if only by a smidgen. When considered en masse, these genetic weenies may punch above their weight, according to a study published in the July 6 Neurology. Researchers led by Elizabeth Mormino of Massachusetts General Hospital in Charlestown report that thousands of genetic variants, which by themselves fail to pass significance thresholds imposed by GWAS, nevertheless associate with cognitive and structural brain changes that precede the onset of dementia. They propose using polygenic risk scores derived from these GWAS runners-up to more completely assess genetic risk for AD, rather than focusing on the few strongest loci.

“If we restrict ourselves to the top hits, we may be missing some useful genetic information that is buried beneath the surface of GWAS data,” Mormino told Alzforum. Commenters agreed that the most important implication of the study was that more genetic associations exist below the statistical thresholds used for AD GWAS.

Some studies estimate that genetic variation explains more than half the risk of developing AD (see Gatz et al., 2006). However, the 21 variants thus far found in GWAS together account for only 2 percent of the AD risk attributable to genetics (see Oct 2013 news; Jul 2013 conference coverage). The ApoE4 gene alone takes care of another 6 percent. That leaves a sizable portion of genetic underpinnings of AD unaccounted for.

To unearth the hidden associations, researchers have started lumping together genetic polymorphisms that fall below the statistical GWAS criteria. They generate polygenic risk scores (PGRS) based on how many of these polymorphisms a person has (see Maher 2015). Some studies have revealed that many weakly associated loci cumulate to strengthen the genetic contribution underlying a given disease, while others, such as a recent polygenic study on diabetes, have found that not to be the case (see Fuchsberger et al., 2016). One recent polygenic study analyzed data from the International Genomics of Alzheimer’s Project (IGAP) to find that the collective association of thousands of variants accounted for far more of the AD heritability than did the 21 GWAS hits (see Escott-Price et al., 2015). A more recent study by Sonya Foley and colleagues at Cardiff University, Wales, correlated polygenic risk scores with low hippocampal volume in healthy young adults, suggesting a genetic basis for changes in the brain that precede dementia (see Foley et al., 2016).

Mormino and colleagues explored more broadly whether polygenic risk scores associate with pathological changes that precede the onset of dementia. The researchers started by revisiting the GWAS data from IGAP, which included more than 17,000 people with AD and 37,000 healthy controls. They lowered the significance threshold from a stringent p value of 5x10^-8 to a more liberal cutoff, hoping to uncover variants that together might associate with AD. When they chose a p value of 1x10^-2, more than 16,000 polymorphisms emerged. Using this expanded set of SNPs, the researchers generated polygenic risk scores for individual people in the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset, based on how many of these SNPs each person harbored. Compared with considering the GWAS hits alone, those scores increased the researchers' ability to distinguish 166 patients with AD dementia from 194 older cognitively normal people in ADNI by about fourfold, Mormino et al. report. Lowering the significance threshold further (i.e., to 10^-1 or more), and thus including even more polymorphisms, brought no further improvement in the ability to distinguish between AD patients and controls.
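For readers who want the arithmetic behind such scores: a polygenic risk score is essentially a weighted sum of a person's risk-allele counts over every SNP that passes the relaxed p-value cutoff. Below is a minimal Python sketch with hypothetical SNP names, effect sizes and genotypes; it illustrates the general technique, not the IGAP/ADNI data or Mormino's actual pipeline.

    # Minimal polygenic risk score (PGRS) sketch -- hypothetical inputs only.
    # 'gwas' maps SNP id -> (effect size beta, discovery p value);
    # 'genotypes' maps SNP id -> risk-allele count (0, 1 or 2) for one person.

    def polygenic_risk_score(gwas, genotypes, p_threshold=1e-2):
        """Sum risk-allele counts weighted by GWAS effect sizes,
        over the SNPs whose discovery p value passes the relaxed threshold."""
        score = 0.0
        for snp, (beta, p_value) in gwas.items():
            if p_value <= p_threshold and snp in genotypes:
                score += beta * genotypes[snp]
        return score

    # Toy example: two sub-threshold SNPs contribute; the third (p = 0.4) is excluded.
    gwas = {"rs0001": (0.05, 3e-3), "rs0002": (-0.02, 8e-3), "rs0003": (0.10, 0.4)}
    genotypes = {"rs0001": 2, "rs0002": 1, "rs0003": 1}
    print(polygenic_risk_score(gwas, genotypes))  # 0.05*2 + (-0.02)*1 = 0.08

Real pipelines additionally prune SNPs in linkage disequilibrium and standardize the score across the cohort; the point here is simply that sub-threshold variants enter the score once the cutoff is relaxed.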

Armed with this expanded set of SNPs, the researchers searched for association between PGRS and early biomarkers of AD in 526 ADNI participants who at baseline were cognitively normal or had mild cognitive impairment but no dementia. The researchers found that higher PGRS associated with lower baseline performance on memory tests and with smaller hippocampal volumes. Over a nearly three-year follow-up period, higher PGRS scores associated with faster decline in memory and executive function, but curiously not with decreases in hippocampal volume. The higher scores also associated with progression: People with high PGRS were more likely to progress to MCI, or from MCI to AD.

What about polygenic risk and amyloid burden? In 505 participants without dementia from ADNI2, higher PGRS correlated with having a positive amyloid-PET scan. In 272 ADNI1 participants with available CSF data, higher PGRS trended toward lower CSF Aβ levels though this association missed statistical significance. The researchers attributed this to the smaller number of participants.

The researchers next tried to relate PGRS to changes that occur decades prior to disease onset. To do this, they measured PGRS in more than 1,300 healthy volunteers, aged 18 to 35, from the Brain Genomics Superstruct Project, an open-access neuroimaging dataset run by Randy Buckner at Massachusetts General Hospital in Boston. As with the Welsh study, high polygenic risk scores marginally associated with small hippocampal volume, indicating that polygenic risk influences brain structure at an early age, even five decades prior to the typical onset of AD in people who ultimately get it. Furthermore, given that most people at this age have no amyloid deposition yet, this polygenic influence on brain structure is likely amyloid-independent, Mormino told Alzforum. “The potential implications of this result are that changes associated with AD may begin earlier than we thought, and also have a genetic basis,” she said.

The polygenic risk score accounted for a modest amount of variance in disease factors. For example, the score explained 2.3 percent of the variance in baseline memory, 3.2 percent of the variance in longitudinal memory, 1 percent of the variance in Aβ deposition, and 2 percent of the variance in baseline hippocampal volume in the ADNI cohort, and just 0.2 percent of the variance of hippocampal volume in the young cohort. Mormino said this is likely due to the presence of unknown rare variants with large effect sizes, or to synergistic relationships between variants, for which the polygenic scores at present cannot account. Furthermore, Mormino said she was measuring association with intermediate phenotypes, such as Aβ deposition and hippocampal volume, rather than the final outcome of AD that had been used to select the polymorphisms in the first place. Because having a small hippocampus, for example, does not always lead to AD, the strength of the associations is inherently limited.

Could polygenic risk scores be used to identify candidates for prevention trials? Mormino considers this use a long way off. Gerard Schellenberg of the University of Pennsylvania in Philadelphia agreed, saying that polygenic risk scores are unlikely to predict who will develop a given phenotype because they account for such a small percentage of the variance. Instead he sees value in combined scores of genetic and non-genetic risk, such as cardiovascular health and lifestyle factors. Schellenberg pointed out a fundamental implication of this paper. “The most important aspect of the study was not the potential application of PGRS, but rather the implication that unknown genes associated with AD still exist,” he said. “But just because they exist doesn’t mean we’ll find them.”

Nick Martin of QIMR Berghofer Medical Research Institute in Queensland, Australia, saw the findings as a motivation to do even larger GWAS to uncover these hidden risk variants. “The lesson of this paper is that even larger sample sizes will find new loci, which may lead us to new pathways and new biology,” he wrote. “This is certainly the experience with the recent success of GWAS for schizophrenia, where 108 separate loci have now been identified, leading us to previously unsuspected biology and strong leads for new therapies.”—Jessica Shugart

[The Old School of Genomics rapidly yields to New School HoloGenomics. Gone are the days (decades, rather) when for a single phenotype (e.g. a disease, from schizophrenia to Alzheimer's) "there should be a single gene responsible". Not so simple. The genome-epigenome (hologenome) obviously demands a SYSTEMIC understanding of the nature of its regulatory system. Genes (in the sense of protein-coding DNA fragments) are not necessarily "bunched up" into any solid block. They are scattered over very large LINEAR distances - distances that can become minuscule in space, given the demonstrated "fractal folding" of DNA (see the sketch after this comment). Singling out any one element (coding or not) may be valid in the case of so-called Mendelian diseases (where a single-point SNP mutation may, through a premature termination codon, cause an "incomplete protein" whose effect may range from "not very effective" to severe or even lethal). We are not so lucky with late-onset regulatory diseases, especially with cancers. To appreciate the need to raise our heads and look from the very different SYSTEMIC perspective of the New School of HoloGenomics, please consider the dramatic visualization below, a movie of a well-known collapse of a SYSTEM.

Play YouTube

3.7 million viewers were interested enough to view this 4-minute video. Suppose that the structure was riveted. Would it be an interesting approach to investigate the particular rivets that were sheared? Shearing certainly happened. Would it be interesting to track down the serial number of the rivet that snapped first, and name it e.g. APOE-4? Is it possible that that particular rivet had some weakness compared to others? Not only possible, but rather likely. Still, such a track of "Old School Investigation" may be fundamentally misguided, e.g. by a conclusion that the particular rivet was a "collapse rivet" and actually caused the collapse of the bridge. From a different, SYSTEMIC viewpoint it is obvious that the failed oscillation-dampening SYSTEM was the cause of the collapse; THE CAUSE WAS A DESIGN FAULT - not any large and distributed set of rivets lurking in the structure. From this viewpoint, it is rather futile e.g. to count the number of rivets that were eventually sheared. Just counting and naming sheared rivets would not advance bridge architecture in a meaningful way. Obviously, the resonance properties of the design were ill-understood (and were not investigated in wind-tunnel experiments). Was it possible at all - having learned the SYSTEMIC PROPERTIES - to strengthen the design? Absolutely! Much has been learned by understanding the design defects, and new bridges were built by "editing in" stiffening elements - possibly using the very same kind of rivets as in the bridge that collapsed not because of a "rivet problem" but because of a design problem. I know some people don't like metaphors (others do...), and one might scratch his/her head about what we can do. "Ask What You Can Do For Your Genome!" - says the motto of HolGenTech. Ask me, if interested. Andras_at_Pellionisz.com]
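To make the "large linear distance, small spatial distance" point of the preceding comment concrete, here is a toy illustration - not FractoGene itself, and not real chromatin data: a space-filling Hilbert curve packs a one-dimensional sequence into a compact grid so that neighbors along the sequence stay neighbors in space, while positions far apart along the sequence are compressed into a much smaller spatial separation and can even become direct neighbors. The function below is the standard Hilbert index-to-coordinate conversion.

    import math

    def hilbert_d2xy(order, d):
        """Map a 1D index d (0 .. 4**order - 1) onto (x, y) of a 2**order x 2**order Hilbert curve."""
        x = y = 0
        t = d
        s = 1
        while s < 2 ** order:
            rx = 1 & (t // 2)
            ry = 1 & (t ^ rx)
            if ry == 0:                      # rotate the quadrant so the curve keeps folding back on itself
                if rx == 1:
                    x, y = s - 1 - x, s - 1 - y
                x, y = y, x
            x += s * rx
            y += s * ry
            t //= 4
            s *= 2
        return x, y

    order = 4                                # a 16 x 16 grid of 256 cells
    a, b = hilbert_d2xy(order, 0), hilbert_d2xy(order, 255)
    print(a, b, math.dist(a, b))             # endpoints: 255 steps apart along the sequence, only 15 grid units apart
    # Consecutive indices always land on adjacent cells (locality is preserved):
    print(all(math.dist(hilbert_d2xy(order, i), hilbert_d2xy(order, i + 1)) == 1.0 for i in range(255)))
    # Conversely, some directly adjacent cells are separated by well over a hundred steps along the sequence:
    coords = {hilbert_d2xy(order, i): i for i in range(4 ** order)}
    print(max(abs(coords[(x, y)] - coords[(x + 1, y)]) for x in range(15) for y in range(16)))

The same qualitative behavior - linear neighbors kept together, and linearly distant elements brought into spatial contact - is what the demonstrated fractal folding of DNA provides in three dimensions.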


[Independence Day] A 27-year-old who worked for Apple as a teenager wants to make a yearly blood test to diagnose cancer — and he just got $5.5 million from Silicon Valley VCs to pull it off

Business Insider

Lydia Ramsey

When Gabe Otte went to college, he already had an Apple internship under his belt. In undergrad, his computer-science professors told him to diversify and pick another area to focus on so that he wouldn't get bored in class; he chose biology.

Now, at 27, he just got $5.5 million in a seed-funding round led by Andreessen Horowitz's bio fund to build out a blood test that screens for the earliest signs of cancer. Founders Fund, Data Collective Venture Capital, and Third Kind Venture Capital are also in on the round.

"What we're aiming to do is develop a test that healthy patients would take as part of their annual physical that tells you whether or not somebody's going to have cancer," Otte told Business Insider.

Freenome, a startup Otte started two years ago along with Dr. Charles Roberts and Riley Ennis that's based out of Philadelphia, wants to use supercomputing to crunch the human genome (the entire genetic material that gives our body instructions on how to live and grow) to look for any signs of cancer that are hanging out in the body.

These tests are often referred to as "liquid biopsies," since unlike solid-tumor biopsies, these "liquid" versions just pick up clues from the blood. They rely on something called "circulating tumor DNA," or the bits of DNA that are released from dying tumor cells into the bloodstream. Knowing what abnormalities a specific tumor possesses could help link cancer patients to treatments that specifically target those mutations - a more effective approach to cancer treatment, and the way the tests are being used right now. Better yet, the hope is that you might be able to accurately find these bits of DNA before a person is even diagnosed with cancer.

It's something everyone from Guardant Health, which has been running liquid biopsies on people with cancer to monitor how the disease progresses, to Illumina spin-off Grail is working on. In May, Guardant launched LUNAR, a collaboration with research institutes that will study Guardant's own diagnostic cancer blood test.

Where Freenome wants to be different

Right now, liquid biopsies screen for specific, vetted genetic mutations that can be found in a small percentage of the blood and that give you information you can act on in the course of a patient's treatment. At the moment, that's a bit more limited than solid-tumor biopsies, which is why the solid-tumor biopsy is still considered the gold standard by doctors.

But Otte said he wants to go after the whole genome.

"When I talk to a biologist, they'll be like what's wrong with that? You know what you're looking for you're going and looking for it," he said. That's much different than the reaction he gets when he talks to people in tech, who think the idea of throwing out data is ridiculous. "If you can get more data, if you can get the entire genome, why would you throw that out?"

Crunching all that data takes a lot of computing power and money, but with the cost of genome sequencing falling quickly, Otte seemed convinced that he can do it cheaply, working methodically. Even the latest funding round wasn't necessary to keep the lights on, he said, thanks to a grant the company received in 2015.

"It needs to be done in this step-wise way, with these validations along the way," he said. "Otherwise, chances are you're going to throw a bunch of money down the drain."

Another Theranos?

The story lines may sound similar — revolutionary blood-testing technology, young founders, promises to disrupt the healthcare industry — but Andreessen Horowitz, to validate the test, sent Freenome five blinded blood samples for the company to sequence on its technology to show that it works. Having blinded samples meant that Freenome's team didn't know what was in the blood ahead of time, so they couldn't fudge the results to make the technology look more advanced and accurate than it actually is. It's a key difference from what's being reported about Theranos and its relationship with Walgreens.

Otte said he's committed to publishing all work and working with regulatory agencies to make sure everything is up to standards. So far, Freenome has tested out its technology in hundreds of samples, and the next step is to validate it on a larger, thousands-of-samples level.

Andreessen Horowitz's first bio-fund investment was in November, in twoXAR, which is looking at how to use digital technology in the drug-discovery process. Vijay Pande told Business Insider that the bio fund has multiple other investments in addition to Freenome, but most are working quietly at the moment. Pande seemed confident that this next wave of health-tech entrepreneurs will fare better than others have in the past, in part because founders are getting a better understanding of both computer science and lab sciences.

"There's this new crop of founders that can go deep in biology, and can also go extremely deep in computer science," he said. "They're well versed in both areas, which is much different than founders five to 10 years ago."

---

[Theranos, Guardant, Grail ... and now Freenome. Those who recall their personal experience with "the Internet boom" know this type of explosion very well. Although Google, Apple, Intel (etc.) in Silicon Valley could easily acquire Illumina (its stock value is about $20 Bn, about the same as Motorola Mobility...), the "Internet Revolution" did not happen to huge companies (see Christensen's bestseller "The Innovator's Dilemma" - the bigger and better firms are, the more likely they are to fail to make the tight turns that even "multiple disruptions" can demand). In the Internet Boom, "the king of computers" IBM failed miserably both with its home computer hardware (who remembers the extinct "IBM PC"?) and with its home computer software (who remembers the extinct OS/2?). The winners were totally "no-name" companies - rather, small bunches of young innovators who leaped to freedom from the slavery of huge companies, like Steve Jobs' Apple and Bill Gates' Microsoft (helped by Charles Simonyi from Hungary, the VP who developed Microsoft Office, perhaps the most enduring home computer software). To be fair, even Intel was sparked to become a monster from a tiny company a generation before by young employee number 4, the now late Andy Grove from Hungary. Lydia Ramsey is also right to point out another key factor. While Big Pharma Roche has been jockeying to do a hostile takeover of Illumina (twice), and thus to accomplish the vision of Complete Genomics (2008) of a "sequencing and Google-type data center", few in the Big Pharma domain - as e.g. Eric Schadt (double-degree biomathematician) can testify from his years at Merck - resonate fully with advanced mathematics. In my case, when I advocate "The Principle of Recursive Genome Function" and thus conclude with FractoGene (fractal genome grows fractal organisms), I typically meet a "blank face" - many have no (mathematical) idea of what a "fractal" might be. At best, some eminent Venture Capitalists, whom I would rather not name here, vaguely remember "fractals as pretty pictures" - and direct me to the Eric Schadt type of mathematicians. Thus, while huge entities are now jockeying for the exclusive rights to the FractoGene patent (8,280,641, in force for the next decade), Roche is still stuck at the stage of "searching for known markers"; I have therefore largely given up on "Big Pharma" and vetted "Big IT" - all of them very keen on the "next big thing in Silicon Valley". With all said and done, my small incubator less than a mile from Apple's new Spaceship HQ could pick up those talented developers (and their investors...) who see hyperescalation in a way "independent" of huge entities like Apple, Google, Amazon and the like. Today, as Illumina, PacBio and other manufacturers of sequencers (a commodity) are keen to develop markets around the World for their products to survive, my HolGenTech, Inc. is working with my homeland, Central European Hungary. If the "developing rim" of the EU could produce Skype (in Estonia), then Hungary, five times larger, with its excellent record of mathematicians, computer scientists, etc., can easily produce hyperescalation. Andras_at_Pellionisz_dot_com ]


President Obama Hints He May Head to Silicon Valley for His Next Job

Nextshark

...the soon-to-be former Commander-in-Chief is considering what his next job could be, and it could be one that will shake up Silicon Valley


In a recent interview with Bloomberg, Obama expressed his interest in the tech industry of Silicon Valley, hinting that his next job may be as a venture capitalist.

"Well, you know, it’s hard to say. But what I will say is that—just to bring things full circle about innovation—the conversations I have with Silicon Valley and with venture capital pull together my interests in science and organization in a way I find really satisfying. You know, you think about something like precision medicine: the work we’ve done to try to build off of breakthroughs in the human genome; the fact that now you can have your personal genome mapped for a thousand bucks instead of $100,000; and the potential for us to identify what your tendencies are, and to sculpt medicines that are uniquely effective for you. That’s just an example of something I can sit and listen and talk to folks for hours about."

["Listen and talk to folks for hours about it" is by far the easiest part. "Talking about it" will not make it happen. Investing into the bottleneck genome informatics makes it happen, and ultra-hard as it is, needs all the help it can get. Actually accomplishing multiple paradigm-shifts is a quite a bit harder than "listening to it" - and not everybody gets it right away. Sometimes it takes decades, unfortunately. For instance, a 2008 YouTube Google Tech Talk precisely predicted that with the advent of powerful and affordable genome sequencing there will be such a glut of full human DNA sequences that sequencer companies would head towards bankruptcy taking dollar billions of investments into their no-so-bright-future (remember Silicon Valley's Complete Genomics that had to be sold to China for peanuts of $118 M to avert bankruptcy?). Even today, both Illumina and PacBio are trying hard to build up a market for the "dreaded DNA data deluge". While I wonder that the would-be-venture-capitalist took 58 minutes to view the fairly technical video, many thousands of experts did. Nonetheless, now we have a "deja vu". Al Gore became famous for "inventing the Internet" - and became a Silicon Valley Venture Capitalist (Kleiner Perkins Caufield & Byers - by the way, along with Colin Powell). A key factor probably is, that politicians on the top certainly know "where the next huge pile of money hides". So did Dick Cheney (WellAware), - and many others. If Obama wants big money, he might end up with the now fairly mature Google (where Google Genomics is still under reorg in "Alphabet"). If he is up to "make a new difference", it is not inconceivable that he might team up with Francis Collins and actually make genome-based precision oncology a reality in the private sector in Silicon Valley. Politicians becoming Venture Capitalists almost always are landmarks that a certain formerly "government" sector became "private sector" - and as a result, see a classic example of the Internet - hyperescalated in funds, success and utility. Andras_at_Pellionisz_dot_com.]


Is This the Biggest Threat Yet to Illumina?

The launch of a new gene-sequencing machine made by Pacific Biosciences is underway

Illumina (NASDAQ:ILMN) has held go-to status as the dominant maker of gene-sequencing devices for years, but a new, next-generation gene-sequencing machine made by Pacific Biosciences (NASDAQ:PACB) aims to change that. Can Illumina fend off this competitor?

First, a bit of background

As the cost to sequence a gene falls and researchers increasingly discover the genetic causes of disease, drugmakers are flocking to create genetically inspired, personalized medicine. As a result, there's been a tidal wave of demand for gene-sequencing machines that allow researchers to peer into and analyze DNA.

Currently, Illumina, which markets high-throughput machines that can sequence an entire genome for less than $1,000, is the leading manufacturer of these machines, controlling an estimated 90% market share. Globally, Illumina boasts an installed base of more than 7,500 machines, including 300 top-of-the-line HiSeq X sequencers. As a result, sales of Illumina's sequencers and the consumables used to run them topped $2.2 billion last year, up 19% from 2014.

Mounting a threat

Up until now, Pacific Biosciences has been a bit player in the gene-sequencing industry.

The company's installed base of 160 units and its $93 million in 2015 sales, including $30 million in milestone payments from Roche Holdings, positions it a far cry south of Illumina.

However, Pacific Biosciences' next-generation sequencer -- the Sequel -- may allow it to mount its most significant threat to Illumina yet.

The Sequel's $350,000 price tag is roughly half the price of the company's previous RS II model. Thanks to new technology, the Sequel is also far smaller and lighter than its predecessor. Additionally, the Sequel can deliver longer read data than Illumina's machines, and researchers may find that advantage compelling, especially if they're working in clinical research.

Overall, Sequel's cost and performance advantage could lead to Pacific Biosciences selling a bunch of them -- something that may already be beginning.

In Q4, Pacific Biosciences reported that it received 49 orders for the Sequel, including orders for 10 machines that were installed at customer sites in December. That's arguably a solid start for the Sequel, considering that the company recorded only 40 orders for the RS II in 2014.

Hang on a minute...

Does the Sequel mean that Illumina's best days are behind it?

Not necessarily. The Sequel is an intriguing machine, but Illumina is far from on the ropes. Illumina's product line is broader, it's deeply entrenched with its customers, and it's got deep pockets that can allow it to respond to the Sequel with products of its own. Pacific Biosciences has less than $85 million in cash on the books exiting last quarter, while Illumina has a $1.2 billion cash stockpile. That's a big advantage and it should give Illumina the financial flexibility to navigate around the Sequel.

Perhaps a bigger risk to Illumina will come from Roche Holdings. Roche's deal with Pacific Biosciences allows it to rebrand the Sequel and sell it to clinicians in the field of in vitro diagnostics. Given that Roche is a global powerhouse and that the role of sequencers as a tool for diagnosing illness and determining treatment protocols could be huge, Roche's sequencing business could pose a big threat to Illumina's MiSeqDx machine, which targets the in vitro diagnostics market.

Looking ahead

Genetic sequencing is fueling the development of increasingly complex medicine and, arguably, that's where the drug industry is heading. If Pacific Biosciences can convince researchers that Sequel's cost and long read advantage are worth it, then this stock could become one of the market's most intriguing growth stories over the next few years.

However, the Sequel's launch is in the early days, so there's no guarantee that customers will continue to flock to it, or if they do, that Pacific Biosciences will turn profitable. Because of those risks, Pacific Biosciences may be worth keeping an eye on, but it's still a high-risk investment.

[The determinant of an Illumina/PacBio battle is NOT the set of their sequencing parameters. The winner will be the sequencer commodity company that realizes the original concept of Complete Genomics - one that not only rolls out basic tools but supplies a steady stream of ever-improving versions of genome analytics software. Since the technology of Complete Genomics was sold to China, a likely winner might be BGI. Illumina sales dipped 24% in a week in late April, due to lackluster European sales - while BGI already has 13 established outlets in Europe. Fortunately, not yet in Central (formerly East) Europe. Andras_at_Pellionisz_dot_com]


Big talk about big data, but little collaboration

May 13, 2016, 07:00 am

By Nancy G. Brinker and Eric T. Rosenthal, contributors

There's been much talk lately about big data's potential value in treating cancer, but little effort has been made to make big data bigger — and more effective — by sharing what's being collected.

Big data in clinical cancer care involves collecting vast amounts of data about patients that can be analyzed to identify trends, associations and patterns that would help oncology professionals develop better and more tailored therapies for cancer patients.

Today, most of this information is available for only an estimated 4 percent of cases - those patients involved in clinical trials.

Big data initiatives — in government, academia and industry — are striving to collect genomic and other patient information from the remaining 96 percent of cases through electronic health records (EHR) and other cancer-related registries.

However, not all EHRs and registries are compatible with one another; there is a lack of standardization; there is no consensus about what constitutes "good" data; there are privacy issues involved; not all patients and physicians understand, or are incentivized to contribute to, the effort; much of the data collected is proprietary; and many of the entities involved in big data initiatives are competing rather than collaborating.

There are currently numerous big data initiatives underway. Some examples include:

On the federal level: The president's Precision Medicine Initiative, the vice president's cancer "moonshot" program, and the Department of Veterans Affairs' Million Veteran Program all depend on big data.

On the academic and professional society level: The American Society of Clinical Oncology (ASCO) — the world's largest clinical oncology organization — is developing CancerLinQ, a health-information technology platform "assembling vast amounts of usable, searchable, real-world cancer information into a powerful database"; and the American Association for Cancer Research (AACR) — the world's oldest and largest scientific organization focused on cancer research — is working on Project Genie (Genomics, Evidence, Neoplasia, Information, Exchange) to "provide the 'critical mass' of genomic and clinical data necessary to improve clinical decision making and catalyze new clinical and translational research."

On the academic and for-profit level: Together, IBM's Watson supercomputer and the New York Genome Center are developing a "national tumor registry to match genetic characteristics with available treatments for patients."

On the for-profit level: Flatiron Health is "building a disruptive software platform that connects cancer centers across the country."

In addition, there have been numerous conferences convened over the last few years to discuss the issues related to what big data is, how it can be standardized, and how it can be used more meaningfully for patient care.

But what many of these efforts lack is the oversight and will to make these newly created silos share the big data they are collecting to provide a comprehensive clearinghouse of information benefitting all.

Congress has a vested interest in ensuring that government plays a constructive role in the promise that big data can bring to reducing healthcare costs through disease prevention and treatment. Great leaps in society take place through public-private partnerships.

Perhaps Congress should consider legislation to make data-sharing mandatory for all information gathered through any efforts supported by federal dollars and to encourage collaboration between public and private entities for the common good.

Brinker is the founder of Susan G. Komen, the world's largest breast cancer charity. She was previously a Goodwill Ambassador for Cancer Control to the U.N.'s World Health Organization; U.S. chief of protocol; and U.S. ambassador to Hungary. Rosenthal is an independent journalist who covers issues, controversies and trends in oncology as special correspondent for MedPage Today. He is the founder of the NCI-Designated Cancer Centers Public Affairs Network, and helped organize a number of national medicine-and-the-media conferences. The opinions expressed belong solely to the authors.

[Having worked through the "Internet Boom" that started from a tiny US government program and exploded into a mega-business of the private domain, I am rather skeptical that the genie of genome-based oncology can ever be pushed back into a government bottleneck by legislation from whatever Congress we elect in November. Nonetheless, let's suppose it will happen (that the same US government that forbids, through HIPAA, the sharing of private health data would somehow make data-sharing mandatory; first of all, any such legislation would take YEARS to emerge). How about the rest of the World? Already, the largest number of full human DNA sequences is available NOT IN THE USA - but in China! Would their government obey US legislation? Not very likely... India, though thus far the "sleeping giant of genome-based oncology", can catapult at any time (in part, precisely to counterbalance the lead in genomics, especially in genome editing, of arch-enemy China). Would any US legislation be effective in India? Highly unlikely, since even the legislature of India is not that effective... How about Europe (EU or not)? Having worked with Government, Academia and Industry (not just in the USA, but also in Germany, India and my homeland Hungary), I have my own proposal on the table of interested parties. Andras_at_Pellionisz_dot_com]


Can Silicon Valley Cure Cancer?

Sean Parker, creator of Napster, first President of Facebook

By Roni Selig and Ben Tinker, CNN

Silicon Valley thrives on disrupting the traditional ways we do many things: getting an education, consuming music and other media, communicating with others, even staying healthy. Bill Gates and Dr. Patrick Soon-Shiong know a few things about how to spend a lot of money to disrupt mainstream research while searching for cures in medicine.

Sean Parker hopes to join their ranks. In 1999, he co-founded the file-sharing service Napster, and in 2004, he became the first president of Facebook. Today, Parker announced his latest endeavor: a $250 million bet on eradicating cancer through the Parker Institute for Cancer Immunotherapy. He says it is just a matter of time until his plan works.

What's unique about Parker's Institute is its structure and design.

It brings together six of the country's leading cancer centers to have them share intellectual property, enabling more than 300 researchers at more than 40 labs across the country to have immediate access to each other's findings.

The institute will license the research findings from each of the cancer centers in order to share them.

"That removes a lot of the bureaucratic barriers that would've prevented scientists from immediately sharing or capitalizing upon each others' discoveries," Parker said. "So a breakthrough made by one scientist at one center is immediately available to be used by any scientist within the network, and they improve upon it."

The participating centers are Memorial Sloan Kettering Cancer Center, Stanford Medicine, the University of Texas MD Anderson Cancer Center, the University of Pennsylvania and the University of California campuses in Los Angeles and San Francisco.

"To do the research that really moves the field forward, you need a lot of collaboration, but you (also) need one big, open sandbox for everyone to play in, in order for that collaboration to take place," said Parker. "So a breakthrough made by one scientist at one center is immediately available to be used by any scientist within the network, and they improve upon it. They can move the ball down the field, so to speak, and as a result of that, things can happen much, much faster."

"Sharing enormous amounts of data is not new in the scientific community" said Jean Claude Zenklusen, director of the Cancer Genome Atlas Project at the National Cancer Institute. He cites the Human Genome Project and the Cancer Genome Atlas as examples of successful projects where researchers have access to each others' results.

During his 2016 State of the Union address, President Barack Obama announced the establishment of a new White House Cancer Moonshot Task Force to accelerate cancer research, for which he wants a budget of $1 billion. But the problem with government-funded research, said Parker, is that potentially life-saving projects take too long to get funded.

"In our case, it could be 48 hours before a trial is funded, and (just) several weeks before we have approval to conduct that trial in actual humans," said Parker.

According to the FDA, when a sponsor submits a study as part of the initial application for a new drug, the agency has 30 days to review the application and place the study on "hold" if there are any obvious reasons why it should not be conducted. Barring a hold, the study may begin with Institutional Review Board approval.

Parker wants the researchers to lead the charge, not institutions.

"Our model is completely different from the model of a grant-making organization," said Parker. "We internally develop this road map, working with every single scientist. Everything is exhaustively debated. We tell them to throw out their mediocre ideas that maybe they were waiting to get funded or they were standing in line effectively trying to get funding for one of their ideas from the NCI. We say, 'Throw it all away. Tell us the most ambitious thing you want to work on. We want you working on that.' "

Lessons from Silicon Valley

"There's something very entrepreneurial about the (institute's) way of thinking, because entrepreneurs need to be very focused," said Parker. "Entrepreneurs don't have unlimited time. They don't have unlimited resources -- and if they're going to change the world, they need to place bets. I wouldn't call them gambles because you're placing a bet on something where you have every reason to believe that it works and you're choosing amongst all of the things that you could potentially be doing -- the highest value, the highest impact thing."

Every year, 14 million people are diagnosed with cancer and 8.2 million people die of cancer-related causes. To Sean Parker and his team, those numbers are unacceptable.

Parker points to the hundreds of billions of dollars invested that have yielded only a very small increase in overall life expectancy.

"Chemo, radiation, surgery and some targeted drugs are capable of treating about 50% of all cancers. The other 50% are a death sentence, and there hasn't been a significant paradigm shift in the way we treat cancer in quite a long time, said Parker. "There have been a lot of false starts and promises made about new treatment modalities that never materialized, or they resulted in this incremental three- to six-months average life extension, which is what qualifies as a new drug."

"We're focused on immunotherapy for a reason ... because it's a treatment modality that has the potential to treat all cancers," said Parker, founder and chairman of the new foundation. Immunotherapy, back in the 1970s, was seen as a high-tech breakthrough therapy, using the body's immune system to fight cancer cells.

"Cancer cells are very smart; said Dr. Jeffrey Bluestone, president and CEO of the Parker Institute. They mutate, change and learn how to escape the drugs we use to try to treat them. By training (the immune system) to see unique cancer markers ... when it sees it again, it can attack again."

It's personal

One of Parker's best friends, prominent Hollywood producer Laura Ziskin, founded Stand up to Cancer, and she was instrumental in shaping Parker's thinking about the disease. She died of complications from breast cancer in 2011.

"(Laura) was surrounded by all the best doctors in the world and had access to all the resources in the world, so if anybody should have lived, or anybody could have beaten cancer, it should have been Laura," said Parker.

"Her death at age 61, galvanized me to do more, and now I look back on it with a certain degree of frustration and angst because if only I had done a little bit more a little bit faster, if only we had built this network sooner. The treatments that are coming out of even some of our trials now potentially could have cured her. It's a tough thing."

"Twenty years from now, we should look back on cancer as something that our parents worried about -- and even though we'll probably never live in a world without cancer, the treatments should be relatively easy and extremely effective -- so it's not something we have to worry about," said Parker.

[Silicon Valley finally became the epicenter of New School Genomics, culminating in health care applications. Google Genomics, Inc., ex-Googler Franz Och of Human Longevity, Inc. (of Venter), Grail, Inc. (of Jay Flatley's Illumina), now the Parker Institute (and HolGenTech, Inc. - along with the venerable Genentech, Inc.) all crowd within a 40-mile radius of the center of Silicon Valley. This all reminds us of the Internet - when a meager government effort exploded "brick and mortar" industries into advanced IT companies (Amazon.com, eBay.com and finally Google.com and Facebook.com). Private enterprise will take it from here, in alliance with US and global industrial partners. While the "Big Players" used to be the Boston, San Diego and Houston areas in the USA, plus the UK and China, novel participants are tiny Poland, Lithuania and the Sleeping Giant of India - andras_at_pellionisz_dot_com]


Life Code (Bioinformatics): The Most Disruptive Technology Ever?

YouTube

[One and a half decades after revealing the full human DNA, the "Old School" makes room for the "New School", based on openly admitting that just knowing all the A,C,T,G letters of the Life Code means precious little - without a mathematical understanding of the Code. Juan has no personal stake (he is not a Turing-type "code breaker"). Thus, both schools can take it from him that pure biochemistry (the 6 billion letters along a Double Helix) must make room for "Bioinformatics" - which is to deliver "The Most Disruptive Technology Ever": the mathematical understanding of the Code of Life. My FractoGene approach is one that sums it all up: "Fractal genome grows fractal organisms" - and thus gives us a mathematical handle for the interpretation of the Code. Many (mistakenly) believe that "the fractal approach" at once makes the hologenomic fractal operator fully understood. No breakthrough, ever, is an end. Breakthroughs are always new beginnings. Nonetheless, the "eureka!" of realizing the cause-and-effect between the de facto fractality of DNA and that of the organisms it grows (brain cells, lungs, cancer tumors, etc.) immediately yielded utility (8,280,641, in force for the Next Decade) - andras_at_pellionisz_dot_com]
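For readers wondering what a quantitative "mathematical handle" on fractality can look like in practice, the simplest textbook tool is the box-counting dimension: cover the object with boxes of shrinking size and take the slope of log(occupied boxes) against log(1/box size). The sketch below (NumPy assumed) is a generic illustration on synthetic points; it is not the FractoGene method, only the standard estimator on which fractal analyses commonly build.

    import numpy as np

    def box_counting_dimension(points, epsilons):
        """Estimate the box-counting (fractal) dimension of a 2D point set:
        the slope of log(number of occupied boxes) versus log(1 / box size)."""
        points = np.asarray(points, dtype=float)
        counts = []
        for eps in epsilons:
            occupied = set(map(tuple, np.floor(points / eps).astype(int)))
            counts.append(len(occupied))
        slope, _intercept = np.polyfit(np.log(1.0 / np.asarray(epsilons)), np.log(counts), 1)
        return slope

    # Sanity check on a uniformly filled unit square: the expected dimension is ~2.0.
    rng = np.random.default_rng(0)
    square = rng.random((100_000, 2))
    print(box_counting_dimension(square, epsilons=[0.2, 0.1, 0.05, 0.025]))

A genuinely fractal structure (a branching airway cast, a Purkinje-cell arbor, the contour of a tumor) would typically yield a non-integer slope over a suitable range of scales, which is the kind of quantity that fractal analyses of organisms and genomes work with.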


Craig Venter: We Are Not Ready to Edit Human Embryos Yet

Unless we have sufficient knowledge and wisdom we should not proceed

Discussions on human genome modification to eliminate disease genes and/or for human enhancement are not new and have been commonplace since the first discussions on sequencing the human genome occurred in the mid-1980s. Many a bioethicist has made a career from such discussions, and currently on Amazon there are dozens of books on a wide range of human enhancement topics, including some that predict that editing our genes will lead to the end of humanity. There are also thousands of news stories on the new DNA editing tools called CRISPR.

So why is genome editing so different? If we can use CRISPR techniques to change the letters of the genetic code known to be associated with rare genetic disorders such as Tay-Sachs disease, Huntington's disease, cystic fibrosis, sickle cell anemia or ataxia telangiectasia, why wouldn't we just do so and eliminate these diseases from human existence? The answer is both simple and complex at the same time: just because the techniques have become easier to perform, the ethical issues are not easier. In fact, the reality of the technical ease of CRISPR-based genome editing has changed hypothetical, esoteric arguments limited largely to "bioethicists" into here-and-now discussions and decisions for all of us.

For me there are three fundamental issues of why we should proceed with extreme caution in this brave new world.

1. Insufficient knowledge: Our knowledge of the human genome is only now beginning to emerge as we sequence tens of thousands of genomes. We have little or no detailed knowledge of how (with a few exceptions) changing the genetic code will affect development and the subtlety associated with the tremendous array of human traits. Genes and proteins rarely have a single function in the genome, and we know of many cases in experimental animals where changing a "known function" of a gene results in developmental surprises. Only a small percentage of human genes are well understood; for most we have little or no clue as to their role.

2. The slippery slope argument: If we allow editing of disease genes, it will open the door to all gene editing for human enhancement. This needs no further explanation: it is human nature and inevitable in my view that we will edit our genomes for enhancements.

3. The global ban on human experimentation: From Mary Shelley's Frankenstein to Nazi war crimes to the X-Men, we have pondered human experimentation. Unless we have sufficient knowledge and wisdom we should not proceed.

CRISPRs and other gene-editing tools are wonderful research tools to understand the function of DNA coding and should proceed. The U.K. approval of editing human embryos to understand human development has no impact on actual genome editing for disease prevention or human enhancement. Some of the experiments planned at the Crick Institute are simple experiments akin to gene knockouts in mice or other species where CRISPR will be used to cut out a gene to see what happens. They will yield some interesting results, but most, I predict, will be ambiguous or not informative as we have seen in this field before.

The only reason the announcement is headline-provoking is that it seems to be one more step toward editing our genomes to change life outcomes. We need to proceed with caution and with open public dialogue so we are all clear on where this exciting science is taking us. I do not think we are ready to edit human embryos yet. I think the scientific community needs to focus on obtaining a much more complete understanding of the whole-genome sequence as our software of life before we begin re-writing this code.

[Venter is "Tesla of Genomics"; not only in the greatest and most formative achievements, but also in the sense that he is keenly aware of the edge between the known and the unknown. His statement above is consistent with the view of Juan Enriquez (above) - both making it crystal clear that in between an ability to have a "text" available and the ability to "edit" - an indispensable element is hitherto largely missing. No sane person should edit any "text" (code, rather) that one does not understand. This is most obvious to software coders; it is unthinkable to "edit" a code without understanding the code. Likewise, at the dawn of the nuclear age, it was obvious that certain atoms violate the axiom of "old school" (that the atom is the smallest unit of an element that can not split). Physicists immediately started tinkering with radioactive materials - the best of the best often died of such tinkering. When the enormity of scientific, economical (and even geopolitical) significance of nuclear physics became clear, mankind embarked on one of the greatest endeavor of creating quantum mechanics (a new branch of mathematical physics), and developed nuclear physics - before starting to build awesome instruments to utilize for energy nuclear fission and fusion. I am not sure that in modern "new school" genomics the parallel is equally clear to all - perhaps because of the multidisciplinary nature of bioinformatics. Andras_at_Pellionisz_dot_com]


Big Data Meets Big Biology in San Diego on March 31: The Agenda

Bruce V. Bigelow

March 9th, 2016

Xconomy, San Diego

In less than a month, Xconomy is bringing some of the big guns in life sciences together in San Diego to talk about the opportunities that are emerging for tech and software innovation in fields like genomics, biotechnology, and digital health. We can now give you a preview of what it’s going to look like.

There’s no question that big data and big biology are coming together in a big way. The only question is how it’s going to happen.

We’re laying out at least part of that roadmap on March 31 at “Big Data Meets Big Science” at the Illumina Theater at the Alexandria, which is on the Torrey Pines mesa at 10996 Torreyana Road. In genomics, much of the technology roadmap already has been charted by Illumina (NASDAQ: ILMN), the San Diego-based maker of next-generation genome sequencing technology, consumables, and genetic analysis tools—and Illumina president Francis deSouza is kicking off our forum.

DeSouza, who will be taking over as Illumina CEO in July, also will talk with venture investors about the bets they are placing on IT innovations in the life sciences. If you’re an entrepreneur, innovator, or investor in the IT sector, you’ll want to be there. I’ve asked other speakers to also highlight the big trends in their respective fields, and to provide examples of the innovation needs they see in high-performance computing, predictive analytics, data storage, and software development.

Examples abound:

—Grail, a San Francisco startup founded earlier this year, is developing diagnostic technology sensitive enough to detect early stage cancer. The nature of the technology challenge, though, became apparent when Grail recently named Jeff Huber—who spent 12 years as the top engineer at Google—as its CEO.

—You don’t need to be a data scientist to innovate in healthcare. Amid a public furor over drug pricing, Santa Monica, CA-based GoodRx and New York’s Blink Health have developed online tools that help consumers get their generic drugs at prices that are far lower than the prices pharmacies typically charge customers who are paying out of pocket instead of through insurance.

—In San Diego, Edico Genome has developed a processor that has been optimized specifically for next-generation genome sequencing machines—reducing the time needed to map a patient’s whole genome from 20 hours to 20 minutes.

Edico Genome is on our agenda. Edico’s founding CEO, Pieter Van Rooyen, will take the stage with one of his principal investors, Lucian Iancovici, a senior investment manager at Qualcomm Life, to discuss how Edico got started.

Nicholas Schork, professor and director of human biology at the J. Craig Venter Institute, will provide an overview of the fast-moving trends in genomics and offer his insights on IT needs. We'll also hear from Franz Och, an expert in machine learning and language translation, who will explain why he left a plum job as the head of Google Translate to become the chief data scientist at Human Longevity, a San Diego startup founded by the human genome pioneer J. Craig Venter.

Finally, we have scheduled a series of quick pitches from CureMetrix, Sentrian, Nervana Systems, Applied Proteomics, and ChromaCode to highlight how the local tech community is innovating in life sciences.

We’ve posted the agenda for Big Data Meets Big Bio here. Tickets are available at this link. I’m looking forward to this event, and to seeing you at the Alexandria at Torrey Pines on March 31.

[Dr. Pellionisz has participated in past Xconomy meetings, e.g. in Seattle, when Dr. Eric Schadt publicly affirmed that the CPU-intensive FractoGene approach to the interpretation of full human DNA sequences would be a massive paradigm-shift. Those meetings, however, came only at the halfway point between "Is IT Ready for the Dreaded DNA Data Deluge?" (2008) and the present day, when full human DNA sequencing is affordable (<$1,000) and Big Data is ready with likewise affordable (private) cloud computing. FractoGene (US patent 8,280,641, in force through the next decade) draws statistical diagnosis and probabilistic prognosis from "fractal genome grows fractal organisms". Beyond the double bottleneck of formerly prohibitively expensive full human DNA sequencing and cumbersome, pricey "CPU farms" (before cloud computing), a third massive factor makes the deployment of the FractoGene patent both timely and extremely lucrative. "Fractal defects" were computed by Pellionisz as early as 2007, but at that time there was no way to know when any "genome defects" could be eliminated. Now, with Genome Editing (almost certainly a Nobel for the top triad this year), the motto of Pellionisz' HolGenTech, Inc. - "Ask what you can do for your Genome!" - has also stepped up to a beta stage, now in advanced negotiations. "Fractal defects", when identified, could be edited out! We have arrived at an age when the industry of full human genome sequencing (a commodity, hitherto struggling with a glut of too many sequences) dovetails with the cloud computing industry to make even CPU-intensive (fractal) interpretation of DNA possible - and the fractal defects found could be edited out. Dr. Pellionisz will attend the meeting in San Diego to talk to interested parties. (Four-zero-8) 891-7187. ]

An amusing postscript: Big Data is like Teenage Sex: everyone talks about it, nobody really knows how to do it, and everyone thinks everyone else is doing it, so everyone claims they are doing it... (Dan Ariely, Duke University)


Illumina Forms New Company [Grail, see grailbio.com] to Enable Early Cancer Detection via Blood-Based Screening

Significant Development in the War on Cancer

[Press Release by Illumina]

SAN DIEGO--(BUSINESS WIRE)--Jan. 10, 2016-- Illumina, Inc. (NASDAQ:ILMN) today announced GRAIL, a new company formed to enable cancer screening from a simple blood test. Powered by Illumina sequencing technology, GRAIL will develop a pan-cancer screening test by directly measuring circulating nucleic acids in blood.

Detecting cancer at the earliest stages dramatically increases long-term survival, hence the successful development of a pan-cancer screening test for asymptomatic individuals would make the first major dent in global cancer mortality.

GRAIL’s unique structure enables it to take on this grand challenge. GRAIL has been formed as a separate company, majority owned by Illumina. GRAIL is initially funded by over $100 million in Series A financing from Illumina and ARCH Venture Partners, with participating investments from Bezos Expeditions, Bill Gates and Sutter Hill Ventures. GRAIL’s unique relationship with Illumina provides the ability to economically sequence at the high depths necessary to create a screening test with the required sensitivity and a hoped for level of specificity never before achievable for cancer screening.
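The press release's emphasis on specificity is not a throwaway line: in screening asymptomatic people the prevalence of detectable cancer is low, so even a small false-positive rate can swamp the true positives. A back-of-the-envelope Bayes calculation, with purely hypothetical numbers rather than GRAIL's actual targets, makes the point.

    def positive_predictive_value(sensitivity, specificity, prevalence):
        """Fraction of positive screening results that are true cancers (Bayes' rule)."""
        true_pos = sensitivity * prevalence
        false_pos = (1.0 - specificity) * (1.0 - prevalence)
        return true_pos / (true_pos + false_pos)

    # Illustrative numbers only: 0.5% prevalence in the screened population, 80% sensitivity.
    for spec in (0.95, 0.99, 0.999):
        ppv = positive_predictive_value(sensitivity=0.80, specificity=spec, prevalence=0.005)
        print(f"specificity {spec:.3f} -> about {ppv:.0%} of positive results are real cancers")

Under these assumptions a 95%-specific test would flag far more healthy people than cancers, while pushing specificity toward 99.9% makes most positive calls genuine - which is why ultra-deep sequencing and unprecedented specificity, rather than sensitivity alone, are presented as the technical hurdles.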

“We hope today is a turning point in the war on cancer,” said Jay Flatley, Illumina Chief Executive Officer and Chairman of the Board of GRAIL. “By enabling the early detection of cancer in asymptomatic individuals through a simple blood screen, we aim to massively decrease cancer mortality by detecting the disease at a curable stage.”

“The holy grail in oncology has been the search for biomarkers that could reliably signal the presence of cancer at an early stage,” said Dr. Richard Klausner, formerly Illumina Chief Medical Officer and NCI Director, and a Director of GRAIL. “Illumina’s sequencing technology now allows the detection of circulating nucleic acids originating in the cancer cells themselves, a superior approach that provides a direct rather than surrogate measurement.”

“GRAIL’s rigorous, science-based approach with leading medical and policy advisors worldwide is unprecedented in the fight to defeat cancer,” said Robert Nelsen, Managing Director and Co-Founder of ARCH Venture Partners and a Director of GRAIL.

GRAIL has secured the counsel of a world-class set of industry and cancer experts for the company’s advisory board, including Dr. Richard Klausner; Dr. Jose Baselga, Physician In Chief at Memorial Sloan Kettering and President of the American Association of Cancer Research; Dr. Brian Druker, Director, OHSU Knight Cancer Institute; Mostafa Ronaghi, Chief Technology Officer at Illumina; Don Berry, Professor at MD Anderson Cancer Center; Timothy Church, Professor at the University of Minnesota School of Public Health and Charles Swanton, Group Leader at the Francis Crick Institute. The company will initially have a five-member Board of Directors, including Jay Flatley, William Rastetter (Chairman of Illumina), Dr. Richard Klausner, Robert Nelsen, and the CEO. The company is actively recruiting a CEO.

About Illumina

Illumina is improving human health by unlocking the power of the genome. Our focus on innovation has established us as the global leader in DNA sequencing and array-based technologies, serving customers in the research, clinical and applied markets. Our products are used for applications in the life sciences, oncology, reproductive health, agriculture and other emerging segments. To learn more, visit www.illumina.com and follow @illumina.

About GRAIL – Learn more at Grailbio.com.

Forward-Looking Statement for Illumina

This release contains forward looking statements that involve risks and uncertainties, such as Illumina’s expectations for the performance and utility of products to be developed by GRAIL. Important factors that could cause actual results to differ materially from those in any forward-looking statements include challenges inherent in developing, manufacturing, and launching new products and services and the other factors detailed in our filings with the Securities and Exchange Commission, including our most recent filings on Forms 10-K and 10-Q, or in information disclosed in public conference calls, the date and time of which are released beforehand. We do not intend to update any forward-looking statements after the date of this release.

View source version on businesswire.com: http://www.businesswire.com/news/home/20160110005039/en/

Source: Illumina, Inc.


---

GRAIL

Grail: The Problem

Cancer is a leading cause of death worldwide, with over 14 million new cases per year and over 8 million deaths annually. Cancer incidence is expected to increase more than 70% over the next 20 years. At least half of all cancers in the United States are diagnosed in Stage III and Stage IV, leading to lower survival rates. Detecting cancer at the earliest stages dramatically increases the probability of a cure and long-term survival.


Grail: The Premise

Ultra-deep sequencing to detect circulating tumor DNA has the potential to be the holy grail for early cancer detection in asymptomatic individuals. Most tumors shed nucleic acids into the blood. Circulating tumor DNA is a direct measurement of cancer DNA, rather than an indirect measure of the effects of cancer.

Grail: The Promise

The mission of Grail is to enable the early detection of cancer in asymptomatic individuals through a blood screen – with the goal of massively decreasing global cancer mortality by detection at a curable stage. Grail will leverage the power of "ultra-deep" sequencing technology, the best talent in the field and the passion of its leadership to deliver on that promise.

---

Jeff Huber, Key Google X Exec, Departs to Lead Cancer Diagnosis Startup Grail

Jeff Huber, a veteran Google executive instrumental in the formation of its Google X research lab, is leaving the company to become CEO of Grail, a biotech startup that develops blood tests to detect cancer.

It’s a deeply personal move. Huber’s wife, Laura, passed away from cancer last year. Huber posted about the move on his Google+ page:

My work at Grail is dedicated in remembrance of my wife, Laura. She fought an incredibly brave battle with her cancer, but it was ultimately a losing battle since it was diagnosed so late (at stage 4). If Grail had existed before, and caught her cancer earlier, it’s very possible she’d still be with us today. Things don’t “happen for a reason,” but you can find purpose and meaning in things that do happen. When Grail succeeds, hopefully many, many people and their loved ones can be spared the cancer experience Laura endured.

Grail splashed onto the market last month with a $100 million Series A round that included some marquee investors, like Bill Gates and Jeff Bezos. Beyond the technical challenge, the startup must convince regulators that its approach is a valid early diagnosis tool — at a time when biotech companies are under particular scrutiny, thanks to the public issues with Theranos.

Huber has been at Google since 2003. He was a critical engineering SVP for its ad, apps and maps units before shifting over to Google X in 2013. In 2014, he joined the board of Illumina, a genetics research company that also invested in Grail.

[This is a unique development in which the monopoly on the genome sequencing commodity leads to a paradigm shift, triggered by a human tragedy. From the viewpoint of DNA sequencing, Jay Flatley created a virtual monopoly - with the danger lurking that the oversupply of DNA "Big Data" might crush the sequencing industry if it is not matched in time with a virtually unlimited demand. Enter Jeff Huber, the Google techie, whose young wife, Laura, was devastated by a stage IV cancer. Against metastatic cancer spread throughout the body, modern science is virtually helpless - since "oncogenes", even if some are suppressed, yield to the flare-up of further "oncogenes", and the relentless onslaught sooner or later overwhelms even the most formidable therapies. It is a staggering fact that about half of newly diagnosed cancer patients are already at the third or fourth stage. Obviously, the sooner a cancer is caught, the better the prognosis for effective therapy, up to a cure (defined as at least five years cancer-free). The new company (Grailbio) is not only a brilliant business model in which the sequencing commodity is paired with an unlimited demand (cancer screening), but also a bright promise for much earlier detection of cancer.

There is, however, a question lurking in some minds, with the potential of extending this awesome initiative even further. While virtually everybody believes that cancer is a "disease of the genome", the culprits are most often believed to be some "oncogenes". Since every gene by definition produces RNA and amino acids (which assemble into proteins), Grailbio in its present, initial form aims at intercepting the disease at the early stage when cell-free nucleic acids can be detected in the blood. This is much sooner than detecting (often large) tumors made of proteins.

Others, like me, are convinced that cancer is actually a "genome REGULATION disease" - in which the pathological "acting up" of (often terribly mutant) "oncogenes" is not the primary cause but a secondary consequence. My fractal approach - which presently splits the National Cancer Institute into two camps (see (July 19) National Cancer Institute: Fractal Geometry at Critical Juncture of Cancer Research) - holds that genome regulation is derailed by primary problems in the non-coding (regulatory) DNA. It is, therefore, a distinct possibility to extend the Grail approach by fully sequencing the cellular DNA and looking for "fractal defects" that are characteristic of the true onset of a cancerous process. True, such an approach calls for CPU-heavy computation (which is likely to thrill Jeff Huber), but the beauty is that the Intellectual Property is secured for the Next Decade. Andras_at_Pellionisz_dot_com.]
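To give a concrete flavor of what such CPU-heavy, fractal-based screening of sequence data can look like, here is a minimal Python sketch - emphatically not the proprietary FractoGene algorithm, just a textbook-style illustration assuming a chaos game representation and a box-counting dimension estimate; all function names and grid sizes are illustrative choices.

```python
# Illustrative sketch only -- NOT the proprietary FractoGene method.
# It maps a DNA sequence to a chaos game representation (CGR) and estimates a
# box-counting dimension; deviations of such fractal measures from a reference
# could, in principle, be screened as candidate "fractal defects".
import numpy as np

CORNERS = {'A': (0.0, 0.0), 'C': (0.0, 1.0), 'G': (1.0, 1.0), 'T': (1.0, 0.0)}

def chaos_game(seq):
    """Return CGR points for a DNA string (bases other than ACGT are skipped)."""
    x, y = 0.5, 0.5
    pts = []
    for base in seq.upper():
        if base not in CORNERS:
            continue
        cx, cy = CORNERS[base]
        x, y = (x + cx) / 2.0, (y + cy) / 2.0   # step halfway toward the base's corner
        pts.append((x, y))
    return np.array(pts)

def box_counting_dimension(points, grid_sizes=(2, 4, 8, 16, 32, 64)):
    """Estimate the box-counting dimension of the CGR point cloud."""
    counts = []
    for n in grid_sizes:
        occupied = {(int(px * n), int(py * n)) for px, py in points}
        counts.append(len(occupied))
    # slope of log(count) vs log(grid size) approximates the fractal dimension
    slope, _ = np.polyfit(np.log(grid_sizes), np.log(counts), 1)
    return slope

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    seq = "".join(rng.choice(list("ACGT"), size=100_000))
    dim = box_counting_dimension(chaos_game(seq))
    print(f"estimated box-counting dimension: {dim:.3f}")  # ~2.0 for a random sequence
```

In such a toy setup, a random sequence fills the CGR square (dimension near 2), while real genomic regions show characteristic sub-structure; any clinical use of such measures would of course require far more sophisticated, validated methods.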


Illumina CEO Jay Flatley Built The DNA Sequencing Market. Now He's Stepping Down

Jay Flatley is stepping down as chief executive of Illumina, the largest maker of the DNA sequencing machines that have transformed the study of biology and the invention of new drugs.

He will be replaced as chief executive by Francis deSouza, an executive Illumina hired from Intel in 2013 who was seen as Flatley’s heir apparent. Flatley will remain as executive chairman, focusing on strategy and on expanding the use of DNA sequencing in medicine.

“This is a magnificent team,” Flatley said in an interview. “While I hired a lot of them, it’s not me that made this all happen.”

Flatley, 63, had held the CEO role for 17 years. He grew San Diego-based Illumina’s revenue from $500,000 to $2.2 billion, and its headcount from 30 to 4,800. But the bigger impact was in cutting the cost of sequencing a human genome. In 2001, it was $100 million (or, by some estimates, $3 billion). Now it is less than $1,000. This has allowed researchers to understand genetics in ways that were previously unimaginable. (For more, see this profile I wrote of Flatley in 2014.)

Other companies, notably 454 Life Sciences, got to faster, cheaper DNA sequencing first. But Flatley beat them at being faster and cheaper. Again and again, he confounded competitors who thought their new, flashy ideas could catch up to Illumina’s sequencers. Thanks largely to perfect execution, helped by an aggressive legal strategy, no one has. Illumina has a near stranglehold on most types of DNA sequencing, which is now being used to help pick cancer drugs and to screen for birth defects.

Whether or not a successor can replicate that success is an open question. New DNA sequencers, from companies like Genia and Oxford Nanopore, are far less accurate than Illumina’s machines but as small as thumb drives. DeSouza says he is ready for the challenge.

[Jay Flatley will go down in history as a giant of the first period after the Human Genome Project. He was by far the most successful CEO, with a record of wrestling the price of full human genome sequencing down from the sky to earth - much faster than Moore's Law did for microprocessors. His era is over not because he was unsuccessful - but precisely because he WAS SO SUCCESSFUL. I predicted all this in a 2008 YouTube video ("Is IT ready for the Dreaded DNA Data Deluge?"), also published in the peer-reviewed science paper "The Principle of Recursive Genome Function" (2008). Billions of dollars could have been saved if many - especially government agencies - had listened! The news item directly below shows yet another "Big Science" effort, doing nothing but amassing "Big Data", going down in flames. Gathering more and more data never automatically translates into understanding - as Thomas Kuhn's classic "The Structure of Scientific Revolutions" (published over half a century ago) made clear. In terms of yet another classic, The Innovator's Dilemma, the great firm of Illumina now faces, with its new leadership, a crucial question: either it carries on as the eminent lead "sequencer", or it realizes that the massive disruption of Genome Editing (which requires knowledge of the sequence, as well as "error mining" technologies such as FractoGene) calls for Illumina to embrace the new times. Given the excellence of the past and new leaders, I am optimistic. Andras_at_Pellionisz_dot_com.]
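As a rough, hedged check of the "faster than Moore's Law" remark, one can take the article's own round numbers (about $100 million per genome in 2001, under $1,000 by 2016) and compute the implied price-halving time; the dates and prices are approximations from the article, not precise data.

```python
# Back-of-the-envelope comparison using the article's approximate figures.
import math

cost_2001 = 100e6     # ~USD per genome in 2001 (article's round number)
cost_2016 = 1_000.0   # ~USD per genome by 2016 (article's round number)
years = 2016 - 2001

fold_drop = cost_2001 / cost_2016            # ~100,000-fold cheaper
halvings = math.log2(fold_drop)              # number of successive price halvings
months_per_halving = years * 12 / halvings

print(f"fold drop:          {fold_drop:,.0f}x")
print(f"price halved every: {months_per_halving:.1f} months")
print("Moore's Law pace:   roughly every 18-24 months")
```

With these figures the price halves roughly every 11 months, comfortably outpacing the canonical 18-24 month Moore's Law cadence.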


CRISPR: gene editing is just the beginning

The real power of the biological tool lies in exploring how genomes work.

Nature

Heidi Ledford

07 March 2016

Whenever a paper about CRISPR–Cas9 hits the press, the staff at Addgene quickly find out. The non-profit company is where study authors often deposit the molecular tools that they used in their work, and where other scientists immediately turn to get their hands on these reagents. “We get calls within minutes of a hot paper publishing,” says Joanne Kamens, executive director of the company in Cambridge, Massachusetts.

Addgene's phones have been ringing a lot since early 2013, when researchers first reported1, 2, 3 that they had used the CRISPR–Cas9 system to slice the genome in human cells at sites of their choosing. “It was all hands on deck,” Kamens says. Since then, molecular biologists have rushed to adopt the technique, which can be used to alter the genome of almost any organism with unprecedented ease and finesse. Addgene has sent 60,000 CRISPR-related molecular tools — about 17% of its total shipments — to researchers in 83 countries, and the company's CRISPR-related pages were viewed more than one million times in 2015.

Much of the conversation about CRISPR–Cas9 has revolved around its potential for treating disease or editing the genes of human embryos, but researchers say that the real revolution right now is in the lab. What CRISPR offers, and biologists desire, is specificity: the ability to target and study particular DNA sequences in the vast expanse of a genome. And editing DNA is just one trick that it can be used for. Scientists are hacking the tools so that they can send proteins to precise DNA targets to toggle genes on or off, and even engineer entire biological circuits — with the long-term goal of understanding cellular systems and disease.

“For the humble molecular biologist, it's really an extraordinarily powerful way to understand how the genome works,” says Daniel Bauer, a haematologist at the Boston Children's Hospital in Massachusetts. “It's really opened the number of questions you can address,” adds Peggy Farnham, a molecular biologist at the University of Southern California, Los Angeles. “It's just so fun.”

Here, Nature examines five ways in which CRISPR–Cas9 is changing how biologists can tinker with cells.

Broken scissors

There are two chief ingredients in the CRISPR–Cas9 system: a Cas9 enzyme that snips through DNA like a pair of molecular scissors, and a small RNA molecule that directs the scissors to a specific sequence of DNA to make the cut. The cell's native DNA repair machinery generally mends the cut — but often makes mistakes.

That alone is a boon to scientists who want to disrupt a gene to learn about what it does. The genetic code is merciless: a minor error introduced during repair can completely alter the sequence of the protein it encodes, or halt its production altogether. As a result, scientists can study what happens to cells or organisms when the protein or gene is hobbled.

But there is also a different repair pathway that sometimes mends the cut according to a DNA template. If researchers provide the template, they can edit the genome with nearly any sequence they desire at nearly any site of their choosing.

In 2012, as laboratories were racing to demonstrate how well these gene-editing tools could cut human DNA, one team decided to take a different approach. “The first thing we did: we broke the scissors,” says Jonathan Weissman, a systems biologist at the University of California, San Francisco (UCSF).

Weissman learned about the approach from Stanley Qi, a synthetic biologist now at Stanford University in California, who mutated the Cas9 enzyme so that it still bound DNA at the site that matched its guide RNA, but no longer sliced it. Instead, the enzyme stalled there and blocked other proteins from transcribing that DNA into RNA. The hacked system allowed them to turn a gene off, but without altering the DNA sequence4.

The team then took its 'dead' Cas9 and tried something new: the researchers tethered it to part of another protein, one that activates gene expression. With a few other tweaks, they had built a way to turn genes on and off at will5.

Weissman and his colleagues, including UCSF systems biologist Wendell Lim, further tweaked the method so that it relied on a longer guide RNA, with motifs that bound to different proteins. This allowed them to activate or inhibit genes at three different sites all in one experiment7. Lim thinks that the system can handle up to five operations at once. The limit, he says, may be in how many guide RNAs and proteins can be stuffed into a cell. “Ultimately, it's about payload.”

That combinatorial power has drawn Ron Weiss, a synthetic biologist at the Massachusetts Institute of Technology (MIT) in Cambridge, into the CRISPR–Cas9 frenzy. Weiss and his colleagues have also created multiple gene tweaks in a single experiment8, making it faster and easier to build complicated biological circuits that could, for example, convert a cell's metabolic machinery into a biofuel factory. “The most important goal of synthetic biology is to be able to program complex behaviour via the creation of these sophisticated circuits,” he says.

CRISPR epigenetics

When geneticist Marianne Rots began her career, she wanted to unearth new medical cures. She studied gene therapy, which targets genes mutated in disease. But after a few years, she decided to change tack. “I reasoned that many more diseases are due to disturbed gene-expression profiles, not so much the single genetic mutations I had been focused on,” says Rots, at the University Medical Center Groningen in the Netherlands. The best way to control gene activity, she thought, was to adjust the epigenome, rather than the genome itself.

The epigenome is the constellation of chemical compounds tacked onto DNA and the DNA-packaging proteins called histones. These can govern access to DNA, opening it up or closing it off to the proteins needed for gene expression. The marks change over time: they are added and removed as an organism develops and its environment shifts.

In the past few years, millions of dollars have been poured into cataloguing these epigenetic marks in different human cells, and their patterns have been correlated with everything from brain activity to tumour growth. But without the ability to alter the marks at specific sites, researchers are unable to determine whether they cause biological changes. “The field has met a lot of resistance because we haven't had the kinds of tools that geneticists have had, where they can go in and directly test the function of a gene,” says Jeremy Day, a neuroscientist at the University of Alabama at Birmingham.

CRISPR–Cas9 could turn things around. In April 2015, Charles Gersbach, a bioengineer at Duke University in Durham, North Carolina, and his colleagues published9 a system for adding acetyl groups — one type of epigenetic mark — to histones using the broken scissors to carry enzymes to specific spots in the genome.

The team found that adding acetyl groups to proteins that associate with DNA was enough to send the expression of targeted genes soaring, confirming that the system worked and that, at this location, the epigenetic marks had an effect. When he published the work, Gersbach deposited his enzyme with Addgene so that other research groups could use it — and they quickly did. Gersbach predicts that a wave of upcoming papers will show a synergistic effect when multiple epigenetic markers are manipulated at once.

The tools need to be refined. Dozens of enzymes can create or erase an epigenetic mark on DNA, and not all of them have been amenable to the broken-scissors approach. “It turned out to be harder than a lot of people were expecting,” says Gersbach. “You attach a lot of things to a dead Cas9 and they don't happen to work.” Sometimes it is difficult to work out whether an unexpected result arose because a method did not work well, or because the epigenetic mark simply doesn't matter in that particular cell or environment.

Rots has explored the function of epigenetic marks on cancer-related genes using older editing tools called zinc-finger proteins, and is now adopting CRISPR–Cas9. The new tools have democratized the field, she says, and that has already had a broad impact. People used to say that the correlations were coincidental, Rots says — that if you rewrite the epigenetics it will have no effect on gene expression. “But now that it's not that difficult to test, a lot of people are joining the field.”

CRISPR code cracking

Epigenetic marks on DNA are not the only genomic code that is yet to be broken. More than 98% of the human genome does not code for proteins. But researchers think that a fair chunk of this DNA is doing something important, and they are adopting CRISPR–Cas9 to work out what that is.

Some of it codes for RNA molecules — such as microRNAs and long non-coding RNAs — that are thought to have functions apart from making proteins. Other sequences are 'enhancers' that amplify the expression of the genes under their command. Most of the DNA sequences linked to the risk of common diseases lie in regions of the genome that contain non-coding RNA and enhancers. But before CRISPR–Cas9, it was difficult for researchers to work out what those sequences do. “We didn't have a good way to functionally annotate the non-coding genome,” says Bauer. “Now our experiments are much more sophisticated.”

Farnham and her colleagues are using CRISPR–Cas9 to delete enhancer regions that are found to be mutated in genomic studies of prostate and colon cancer. The results have sometimes surprised her. In one unpublished experiment, her team deleted an enhancer that was thought to be important, yet no gene within one million bases of it changed expression. “How we normally classify the strength of a regulatory element is not corresponding with what happens when you delete that element,” she says.

More surprises may be in store as researchers harness CRISPR–Cas9 to probe large stretches of regulatory DNA. Groups led by geneticists David Gifford at MIT and Richard Sherwood at the Brigham and Women's Hospital in Boston used the technique to create mutations across a 40,000-letter sequence, and then examined whether each change had an effect on the activity of a nearby gene that made a fluorescent protein10. The result was a map of DNA sequences that enhanced gene expression, including several that had not been predicted on the basis of gene regulatory features such as chromatin modifications.

Delving into this dark matter has its challenges, even with CRISPR–Cas9. The Cas9 enzyme will cut where the guide RNA tells it to, but only if a specific but common DNA sequence is present near the cut site. This poses little difficulty for researchers who want to silence a gene, because the key sequences almost always exist somewhere within it. But for those who want to make very specific changes to short, non-coding RNAs, the options can be limited. “We cannot take just any sequence,” says Reuven Agami, a researcher at the Netherlands Cancer Institute in Amsterdam.
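For the standard SpCas9 enzyme, the "specific but common DNA sequence" near the cut site is the NGG PAM motif. The sketch below is a minimal, illustrative way to enumerate candidate 20-nucleotide protospacers adjacent to NGG PAMs on one strand; it is an assumption-laden simplification, since real guide-design pipelines also scan the reverse strand and score genome-wide off-target matches.

```python
# Minimal illustration: list candidate SpCas9 target sites (20-nt protospacer
# followed by an NGG PAM) on the forward strand only. Real guide-design tools
# also scan the reverse strand and score off-target matches across the genome.
import re

def candidate_sites(dna, protospacer_len=20):
    dna = dna.upper()
    sites = []
    # lookahead so that overlapping PAMs are all reported
    for m in re.finditer(r"(?=([ACGT]GG))", dna):
        pam_start = m.start(1)
        if pam_start >= protospacer_len:
            protospacer = dna[pam_start - protospacer_len:pam_start]
            sites.append((pam_start - protospacer_len, protospacer, m.group(1)))
    return sites

if __name__ == "__main__":
    example = "TTACGGATCCGGCTAGCTAGGCTTAACGGTACCGTTAGGCATGCAAGCTTGG"
    for pos, protospacer, pam in candidate_sites(example):
        print(f"position {pos:2d}  protospacer {protospacer}  PAM {pam}")
```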

Researchers are scouring the bacterial kingdom for relatives of the Cas9 enzyme that recognize different sequences. Last year, the lab of Feng Zhang, a bioengineer at the Broad Institute of MIT and Harvard in Cambridge, characterized a family of enzymes called Cpf1 that work similarly to Cas9 and could expand sequence options11. But Agami notes that few alternative enzymes found so far work as well as the most popular Cas9. In the future, he hopes to have a whole collection of enzymes that can be targeted to any site in the genome. “We're not there yet,” he says.

CRISPR sees the light

Gersbach's lab is using gene-editing tools as part of an effort to understand cell fate and how to manipulate it: the team hopes one day to grow tissues in a dish for drug screening and cell therapies. But CRISPR–Cas9's effects are permanent, and Gersbach's team needed to turn genes on and off transiently, and in very specific locations in the tissue. “Patterning a blood vessel demands a high degree of control,” he says.

Gersbach and his colleagues took their broken, modified scissors — the Cas9 that could now activate genes — and added proteins that are activated by blue light. The resulting system triggers gene expression when cells are exposed to the light, and stops it when the light is flicked off12. A group led by chemical biologist Moritoshi Sato of the University of Tokyo rigged a similar system13, and also made an active Cas9 that edited the genome only after it was hit with blue light14.

Others have achieved similar ends by combining CRISPR with a chemical switch. Lukas Dow, a cancer geneticist at Weill Cornell Medical College in New York City, wanted to mutate cancer-related genes in adult mice, to reproduce mutations that have been identified in human colorectal cancers. His team engineered a CRISPR–Cas9 system in which a dose of the compound doxycycline activates Cas9, allowing it to cut its targets15.

The tools are another step towards gaining fine control over genome editing. Gersbach's team has not patterned its blood vessels just yet: for now, the researchers are working on making their light-inducible system more efficient. “It's a first-generation tool,” says Gersbach.

Model CRISPR

Cancer researcher Wen Xue spent the first years of his postdoc career making a transgenic mouse that bore a mutation found in some human liver cancers. He slogged away, making the tools necessary for gene targeting, injecting them into embryonic stem cells and then trying to derive mice with the mutation. The cost: a year and US$20,000. “It was the rate-limiting step in studying disease genes,” he says.

A few years later, just as he was about to embark on another transgenic-mouse experiment, his mentor suggested that he give CRISPR–Cas9 a try. This time, Xue just ordered the tools, injected them into single-celled mouse embryos and, a few weeks later — voilà. “We had the mouse in one month,” says Xue. “I wish I had had this technology sooner. My postdoc would have been a lot shorter.”

Researchers who study everything from cancer to neurodegeneration are embracing CRISPR-Cas9 to create animal models of the diseases (see page 160). It lets them engineer more animals, in more complex ways, and in a wider range of species. Xue, who now runs his own lab at the University of Massachusetts Medical School in Worcester, is systematically sifting through data from tumour genomes, using CRISPR–Cas9 to model the mutations in cells grown in culture and in animals.

Researchers are hoping to mix and match the new CRISPR–Cas9 tools to precisely manipulate the genome and epigenome in animal models. “The real power is going to be the integration of those systems,” says Dow. This may allow scientists to capture and understand some of the complexity of common human diseases.

Take tumours, which can bear dozens of mutations that potentially contribute to cancer development. “They're probably not all important in terms of modelling a tumour,” says Dow. “But it's very clear that you're going to need two or three or four mutations to really model aggressive disease and get closer to modelling human cancer.” Introducing all of those mutations into a mouse the old-fashioned way would have been costly and time-consuming, he adds.

Bioengineer Patrick Hsu started his lab at the Salk Institute for Biological Studies in La Jolla, California, in 2015; he aims to use gene editing to model neurodegenerative conditions such as Alzheimer's disease and Parkinson's disease in cell cultures and marmoset monkeys. That could recapitulate human behaviours and progression of disease more effectively than mouse models, but would have been unthinkably expensive and slow before CRISPR–Cas9.

Even as he designs experiments to genetically engineer his first CRISPR–Cas9 marmosets, Hsu is aware that this approach may be only a stepping stone to the next. “Technologies come and go. You can't get married to one,” he says. “You need to always think about what biological problems need to be solved.”

Nature 531, 156–159 (10 March 2016) doi:10.1038/531156a


Geneticists debate whether focus should shift from sequencing genomes to analysing function.

[If one replaces "whether" with "how to", the article makes even more sense - AJP]

Nature

Heidi Ledford

05 January 2015

A mammoth US effort to genetically profile 10,000 tumours has officially come to an end. Started in 2006 as a US$100-million pilot, The Cancer Genome Atlas (TCGA) is now the biggest component of the International Cancer Genome Consortium, a collaboration of scientists from 16 nations that has discovered nearly 10 million cancer-related mutations.


The question is what to do next. Some researchers want to continue the focus on sequencing; others would rather expand their work to explore how the mutations that have been identified influence the development and progression of cancer.

“TCGA should be completed and declared a victory,” says Bruce Stillman, president of Cold Spring Harbor Laboratory in New York. “There will always be new mutations found that are associated with a particular cancer. The question is: what is the cost–benefit ratio?”

Stillman was an early advocate for the project, even as some researchers feared that it would drain funds away from individual grants. Initially a three-year project, it was extended for five more years. In 2009, it received an additional $100 million from the US National Institutes of Health plus $175 million from stimulus funding that was intended to spur the US economy during the global economic recession.

The project initially struggled. At the time, the sequencing technology worked only on fresh tissue that had been frozen rapidly. Yet most clinical biopsies are fixed in paraffin and stained for examination by pathologists. Finding and paying for fresh tissue samples became the programme’s largest expense, says Louis Staudt, director of the Office for Cancer Genomics at the National Cancer Institute (NCI) in Bethesda, Maryland.

Also a problem was the complexity of the data. Although a few ‘drivers’ stood out as likely contributors to the development of cancer, most of the mutations formed a bewildering hodgepodge of genetic oddities, with little commonality between tumours. Tests of drugs that targeted the drivers soon revealed another problem: cancers are often quick to become resistant, typically by activating different genes to bypass whatever cellular process is blocked by the treatment.

Despite those difficulties, nearly every aspect of cancer research has benefited from TCGA, says Bert Vogelstein, a cancer geneticist at Johns Hopkins University in Baltimore, Maryland. The data have yielded new ways to classify tumours and pointed to previously unrecognized drug targets and carcinogens. But some researchers think that sequencing still has a lot to offer. In January, a statistical analysis of the mutation data for 21 cancers showed that sequencing still has the potential to find clinically useful mutations (M. S. Lawrence et al. Nature 505, 495–501; 2014).

On 2 December, Staudt announced that once TCGA is completed, the NCI will continue to intensively sequence tumours in three cancers: ovarian, colorectal and lung adenocarcinoma. It then plans to evaluate the fruits of this extra effort before deciding whether to add back more cancers.

Expanded scope

But this time around, the studies will be able to incorporate detailed clinical information about the patient’s health, treatment history and response to therapies. Because researchers can now use paraffin-embedded samples, they can tap into data from past clinical trials, and study how mutations affect a patient’s prognosis and response to treatment. Staudt says that the NCI will be announcing a call for proposals to sequence samples taken during clinical trials using the methods and analysis pipelines established by the TCGA.

The rest of the International Cancer Genome Consortium, slated to release early plans for a second wave of projects in February, will probably take a similar tack, says co-founder Tom Hudson, president of the Ontario Institute for Cancer Research in Toronto, Canada. A focus on finding sequences that make a tumour responsive to therapy has already been embraced by government funders in several countries eager to rein in health-care costs, he says. “Cancer therapies are very expensive. It’s a priority for us to address which patients would respond to an expensive drug.”

The NCI is also backing the creation of a repository for data not only from its own projects, but also from international efforts. This is intended to bring data access and analysis tools to a wider swathe of researchers, says Staudt. At present, the cancer genomics data constitute about 20 petabytes (10^15 bytes), and are so large and unwieldy that only institutions with significant computing power can access them. Even then, it can take four months just to download them.
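A quick, assumption-laden arithmetic sketch (treating the quoted four months as continuous transfer of the full 20 petabytes) shows why only well-equipped institutions can realistically work with these data:

```python
# Rough arithmetic behind "four months just to download" ~20 petabytes.
data_bytes = 20e15              # ~20 PB of cancer genomics data
seconds = 4 * 30 * 24 * 3600    # ~4 months of uninterrupted transfer

bytes_per_second = data_bytes / seconds
gigabits_per_second = bytes_per_second * 8 / 1e9

print(f"sustained rate needed: {bytes_per_second / 1e9:.1f} GB/s "
      f"(~{gigabits_per_second:.0f} Gbit/s)")
# ~1.9 GB/s, i.e. roughly 15 Gbit/s held for four straight months --
# far more than most labs can dedicate to a single download.
```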

Stimulus funding cannot be counted on to fuel these plans, acknowledges Staudt. But cheaper sequencing and the ability to use biobanked biopsies should bring down the cost, he says. “Genomics is at the centre of much of what we do in cancer research,” he says. “Now we can ask questions in a more directed way.”

Nature 517, 128–129 (08 January 2015) doi:10.1038/517128a


Top U.S. Intelligence Official Calls Gene Editing a WMD Threat

MIT Technology Review, Feb. 9, 2016

Antonio Regalado

Easy to use, hard to control: the intelligence community now sees CRISPR as a threat to national safety.

Genome editing is a weapon of mass destruction.

That’s according to James Clapper, U.S. director of national intelligence, who on Tuesday, in the annual worldwide threat assessment report of the U.S. intelligence community, added gene editing to a list of threats posed by “weapons of mass destruction and proliferation.”

Gene editing refers to several novel ways to alter the DNA inside living cells. The most popular method, CRISPR, has been revolutionizing scientific research, leading to novel animals and crops, and is likely to power a new generation of gene treatments for serious diseases (see “Everything You Need to Know About CRISPR’s Monster Year”).

It is gene editing’s relative ease of use that worries the U.S. intelligence community, according to the assessment. “Given the broad distribution, low cost, and accelerated pace of development of this dual-use technology, its deliberate or unintentional misuse might lead to far-reaching economic and national security implications,” the report said.

The choice by the U.S. spy chief to call out gene editing as a potential weapon of mass destruction, or WMD, surprised some experts. It was the only biotechnology appearing in a tally of six more conventional threats, like North Korea’s suspected nuclear detonation on January 6, Syria’s undeclared chemical weapons, and new Russian cruise missiles that might violate an international treaty.

The report is an unclassified version of the “collective insights” of the Central Intelligence Agency, the National Security Agency, and half a dozen other U.S. spy and fact-gathering operations.

Although the report doesn’t mention CRISPR by name, Clapper clearly had the newest and the most versatile of the gene-editing systems in mind. The CRISPR technique’s low cost and relative ease of use—the basic ingredients can be bought online for $60—seems to have spooked intelligence agencies.

“Research in genome editing conducted by countries with different regulatory or ethical standards than those of Western countries probably increases the risk of the creation of potentially harmful biological agents or products,” the report said.

The concern is that biotechnology is a “dual use” technology—meaning normal scientific developments could also be harnessed as weapons. The report noted that new discoveries “move easily in the globalized economy, as do personnel with the scientific expertise to design and use them.”

Clapper didn’t lay out any particular bioweapons scenarios, but scientists have previously speculated about whether CRISPR could be used to make “killer mosquitoes,” plagues that wipe out staple crops, or even a virus that snips at people’s DNA.

“Biotechnology, more than any other domain, has great potential for human good, but also has the possibility to be misused,” says Daniel Gerstein, a senior policy analyst at RAND and a former under secretary at the Department of Homeland Security. “We are worried about people developing some sort of pathogen with robust capabilities, but we are also concerned about the chance of misutilization. We could have an accident occur with gene editing that is catastrophic, since the genome is the very essence of life.”

Piers Millet, an expert on bioweapons at the Woodrow Wilson Center in Washington, D.C., says Clapper’s singling out of gene editing on the WMD list was “a surprise,” since making a bioweapon—say, an extra-virulent form of anthrax—still requires mastery of a “wide raft of technologies.”

Development of bioweapons is banned by the Biological and Toxin Weapons Convention, a Cold War–era treaty that outlawed biological warfare programs. The U.S., China, Russia, and 172 other countries have signed it. Millet says that experts who met in Warsaw last September to discuss the treaty felt a threat from terrorist groups was still remote, given the complexity of producing a bioweapon. Millet says the group concluded that “for the foreseeable future, such applications are only within the grasp of states.”

The intelligence assessment drew specific attention to the possibility of using CRISPR to edit the DNA of human embryos to produce genetic changes in the next generation of people—for example, to remove disease risks. It noted that fast advances in genome editing in 2015 compelled “groups of high-profile U.S. and European biologists to question unregulated editing of the human germ line (cells that are relevant for reproduction), which might create inheritable genetic changes.”

So far, the debate over changing the next generation’s genes has been mostly an ethical question, and the report didn’t say how such a development would be considered a WMD, although it’s possible to imagine a virus designed to kill or injure people by altering their genomes.

[No public comment, Andras_at_Pellionisz_dot_com]


Craig Venter: We Are Not Ready to Edit Human Embryos Yet


J. Craig Venter @JCVenter Feb. 2, 2016

J. Craig Venter, a TIME 100 honoree, is a geneticist known for being one of the first to sequence the human genome.

Unless we have sufficient knowledge and wisdom we should not proceed

Discussions on human genome modifications to eliminate disease genes and/or for human enhancement are not new and have been commonplace since the first discussions on sequencing the human genome occurred in the mid-1980s. Many a bioethicist has made a career from such discussions, and currently on Amazon there are dozens of books on a wide range of human enhancement topics, including those that predict that editing our genes will lead to the end of humanity. There are also thousands of news stories on the new DNA editing tools called CRISPR.

So why is genome editing so different? If we can use CRISPR techniques to change the letters of the genetic code known to be associated with rare genetic disorders such as Tay-Sachs disease, Huntington’s disease, cystic fibrosis, sickle cell anemia or ataxia telangiectasia, why wouldn’t we just do so and eliminate the diseases from human existence? The answer is both simple and complex at the same time: just because the techniques have become easier to perform, the ethical issues are not easier. In fact, the reality of the technical ease of CRISPR-based genome editing has changed hypothetical, esoteric arguments limited largely to “bioethicists” to here and now discussions and decisions for all of us.

For me there are three fundamental issues of why we should proceed with extreme caution in this brave new world.

1. Insufficient knowledge: Our knowledge of the human genome is just finally beginning to emerge as we sequence tens of thousands of genomes. We have little or no detailed knowledge of how (with a few exceptions) changing the genetic code will affect development and the subtlety associated with the tremendous array of human traits. Genes and proteins rarely have a single function in the genome, and we know of many cases in experimental animals where changing a “known function” of a gene results in developmental surprises. Only a small percentage of human genes are well understood; for most we have little or no clue as to their role.

2. The slippery slope argument: If we allow editing of disease genes, it will open the door to all gene editing for human enhancement. This needs no further explanation: it is human nature and inevitable in my view that we will edit our genomes for enhancements.

3. The global ban on human experimentation: From Mary Shelley’s Frankenstein to Nazi war crimes to the X-Men, we have pondered human experimentation. Unless we have sufficient knowledge and wisdom we should not proceed.

CRISPRs and other gene-editing tools are wonderful research tools to understand the function of DNA coding and should proceed. The U.K. approval of editing human embryos to understand human development has no impact on actual genome editing for disease prevention or human enhancement. Some of the experiments planned at the Crick Institute are simple experiments akin to gene knockouts in mice or other species where CRISPR will be used to cut out a gene to see what happens. They will yield some interesting results, but most, I predict, will be ambiguous or not informative as we have seen in this field before.

The only reason the announcement is headline-provoking is that it seems to be one more step toward editing our genomes to change life outcomes. We need to proceed with caution and with open public dialogue so we are all clear on where this exciting science is taking us. I do not think we are ready to edit human embryos yet. I think the scientific community needs to focus on obtaining a much more complete understanding of the whole-genome sequence as our software of life before we begin re-writing this code.

[If anybody, Venter (the "Tesla of Genomics") would know best. The Venter Institute "edited out" a rather small number of genes from the genome of Mycoplasma genitalium (the smallest genome of all free-living organisms). The "edited" (synthesized) version would not work for 15 agonizing years. Why? Because even the smallest genome contains some 7% of "non-coding" (regulatory) DNA, and after slightly reducing the number of "genes" nobody knew how to modify those regulatory sequences to kick the protein-pumps ("genes") back to life. It is pure fantasy to "edit any text" without an understanding of what it means. Even spell-checking of single letters is hopeless in some cases (bad - bed), since the letter in the middle can be an error in one context while perfect in another. If 15 years of sophisticated "trials" were needed to finally arrive at a "working version" of the slightly reduced "set of genes", imagine how many centuries would be needed to "get the editing right", e.g. in the case of cancers, without a mathematical understanding of genome regulation. FractoGene ("fractal DNA governs the growth of fractal organisms") is presently the only mathematical (fractal geometry) approach that serves as a basis for such a "cause and effect" understanding of how pristine DNA, or DNA laden with fractal defects, governs the growth of physiological or cancerous organisms. The precious (but fiercely debated) IP of Genome Editing must be combined with the IP of Genome Regulation. Andras_at_Pellionisz_dot_com]
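As a toy illustration of the "bad versus bed" point - with a made-up word list and bigram table, not any real spell-checker - a context-free dictionary check accepts both words, and only a model of context can suggest which was intended:

```python
# Toy illustration of the "bad vs. bed" point: a dictionary-only checker
# accepts both words, so only some model of context can suggest the intended one.
# The word list and bigram table below are invented purely for illustration.
DICTIONARY = {"i", "went", "to", "bed", "bad", "early", "that", "was", "a", "idea"}

def context_free_check(sentence):
    """Flag only words missing from the dictionary -- no context is used."""
    return [w for w in sentence.lower().split() if w not in DICTIONARY]

# Hypothetical bigram preferences standing in for a real language model.
BIGRAM_PREFERENCE = {("to", "bad"): "bed", ("a", "bed"): "bad"}

def context_check(sentence):
    """Suggest corrections based on the immediately preceding word."""
    words = sentence.lower().split()
    return [(word, BIGRAM_PREFERENCE[(prev, word)])
            for prev, word in zip(words, words[1:])
            if (prev, word) in BIGRAM_PREFERENCE]

print(context_free_check("I went to bad early"))  # []               -- nothing flagged
print(context_check("I went to bad early"))       # [('bad', 'bed')] -- context catches it
```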


UK scientists gain licence to edit genes in human embryos

Team at Francis Crick Institute permitted to use CRISPR–Cas9 technology in embryos for early-development research.

Nature

Ewen Callaway

01 February 2016

Scientists in London have been granted permission to edit the genomes of human embryos for research, UK fertility regulators announced. The 1 February approval by the UK Human Fertilisation and Embryology Authority (HFEA) represents the world's first endorsement of such research by a national regulatory authority.

"It’s an important first. The HFEA has been a very thoughtful, deliberative body that has provided rational oversight of sensitive research areas, and this establishes a strong precedent for allowing this type of research to go forward," says George Daley, a stem-cell biologist at Children's Hospital Boston in Massachusetts.

The HFEA has approved an application by developmental biologist Kathy Niakan, at the Francis Crick Institute in London, to use the genome-editing technique CRISPR–Cas9 in healthy human embryos. Niakan’s team is interested in early development, and it plans to alter genes that are active in the first few days after fertilization. The researchers will stop the experiments after seven days, after which the embryos will be destroyed.

The genetic modifications could help researchers to develop treatments for infertility, but will not themselves form the basis of a therapy.

Robin Lovell-Badge, a developmental biologist at the Crick institute, says that the HFEA’s decision will embolden other researchers who hope to edit the genomes of human embryos. He has heard from other UK scientists who are interested in pursuing embryo-editing research, he says, and expects that more applications will follow. In other countries, he says, the decision “will give scientists confidence to either apply to their national regulatory bodies, if they have them, or just to go ahead anyway”.

Development genes

Niakan’s team has already been granted a licence by the HFEA to conduct research using healthy human embryos that are donated by patients who had undergone in vitro fertilization (IVF) at fertility clinics. But in September last year, the team announced that it had applied to conduct genome editing on these embryos — five months after researchers in China reported that they had used CRISPR–Cas9 to edit the genomes of non-viable human embryos, which sparked a debate about how or whether to draw the line on gene editing in human embryos.

At a press briefing last month, Niakan said that her team could begin experiments within “months” of the HFEA approving the application. Its first experiment will involve blocking the activity of a ‘master regulator’ gene called OCT4, also known as POU5F1, which is active in cells that go on to form the fetus. (Other cells in the embryo go on to form the placenta.) Her team plans to end its test-tube experiments within a week of fertilization, when the fertilized egg has reached the blastocyst stage of development and contains up to 256 cells.

“I am delighted that the HFEA has approved Dr Niakan’s application,” said Crick director Paul Nurse in a statement. “Dr Niakan’s proposed research is important for understanding how a healthy human embryo develops and will enhance our understanding of IVF success rates, by looking at the very earliest stage of human development.”

A local research ethics board (which is similar to an institutional review board in the United States) will now need to approve the research that Niakan’s team has planned. When approving Niakan's application, the HFEA said that no experiments could begin until such ethics approval was granted.

International impact

Sarah Chan, a bioethicist at the University of Edinburgh, UK, says that the decision will reverberate well beyond the United Kingdom. “I think this will be a good example to countries who are considering their approach to regulating this technology. We can have a well-regulated system that is able to make that distinction between research and reproduction,” she says.

It remains illegal to alter the genomes of embryos used to conceive a child in the United Kingdom, but researchers say that the decision to allow embryo-editing research could inform the debate over deploying gene-editing in embryos for therapeutic uses in the clinic.

“This step in the UK will stimulate debate on legal regulation of germline gene editing in clinical settings,” says Tetsuya Ishii, a bioethicist at Hokkaido University in Sapporo, Japan, who notes that some countries do not explicitly prohibit reproductive applications.

“This type of research should prove valuable for understanding the many complex issues around germline editing," adds Daley. "Even though this work isn’t explicitly aiming toward the clinic, it may teach us the potential risks of considering clinical application.”

Nature doi:10.1038/nature.2016.19270

[There is a big difference between genome editing in human embryos for purposes of research - and genome editing (e.g. in the case of "single nucleotide polymorphism" diseases) as a cure. For complex genome (mis)regulation diseases, like cancer, the mathematical (fractal) language of the genome first has to be known. In natural languages, even for single-letter spelling mistakes a spell-checker may not be effective despite full knowledge of the language (bad or bed could each be correct, depending on the context). For more substantial editing of natural language, for instance editing the grammar, full command of the language is indispensable. Likewise, it might not even make sense, and could be outright dangerous, to start editing e.g. cancerous genomes before the fractal mathematics of genome regulation is understood - Andras_at_Pellionisz_dot_com]


Why Eric Lander morphed from science god to punching bag

STAT

By SHARON BEGLEY JANUARY 25, 2016

Genome-sequencing pioneer Eric Lander, one of the most powerful men in American science, did not embezzle funds from the institute he leads, sexually harass anyone, plagiarize, or fabricate data. But he became the target of venomous online attacks last week because of an essay he wrote on the history of CRISPR, the revolutionary genome-editing technology pioneered partly by his colleagues at the Broad Institute in Cambridge, Mass.

To be sure, Lander gave his foes some openings. He and the journal Cell, which published his essay last week, failed to disclose Lander’s potential conflict of interest when it comes to CRISPR. The essay, other scientists said, got several key facts wrong, and Lander later added what he called clarifications. Stirring the greatest anger, critics charged that rather than writing an objective history he downplayed the role of two key CRISPR scientists who happen to be women.

Those missteps triggered a bitter online war, including the Twitter hashtag #landergate. Biologist Michael Eisen of the University of California, Berkeley, deemed his essay “science propaganda at its most repellent” and called for its retraction, while anonymous scientists on the post-publication review site PubPeer ripped into Lander’s motives and character. The attacks spread well beyond science, with the feminist website Jezebel.com charging that “one man tried to write women out of CRISPR.”

The episode created cracks in a dam that had long held back public criticism of Lander. The outpouring of rage directed at him arises from what one veteran biomedical researcher calls “pent-up animosity” toward Lander and the Broad Institute, where he serves as director, that has built up over years.

“Science can be a blood sport,” said science historian and policy expert Robert Cook-Deegan of Duke University. “This seems to be one of those times.”

Some of the brickbats hurled at Lander reflect professional jealousy, especially since he took an unconventional path into the top echelons of molecular biology. Some seem to be payback for the egos Lander bruised over the years, dating to his role in the Human Genome Project in the late 1990s. Some of the anger seems to stem from still-simmering animosity over what Lander and his institute represent to many: the triumph of Big Science in biology.

Lander, 58, told STAT that, while he does not peruse social media, the criticism that he’s aware of “does not feel personal in any way. I appreciate that there are a lot of diverse perspectives, and science needs those.”

Current and former colleagues contacted by STAT described Lander as brilliant, prickly, and brash, as having “an ego without end,” as “a visionary” who “doesn’t suffer fools gladly,” and as “an authentic genius” who “sees things the rest of us don’t.” Lander won a MacArthur Foundation “genius” award in 1987 at age 30. Since 2009, he has co-chaired President Obama’s scientific advisory council.

“Anything I want to say, he’s ahead of me,” said one scientist who has worked closely with Lander on issues of science policy. “With normal mortals you can see wheels grinding in their head, but with Eric you can’t.”

The Broad rose from nonexistence in 2003 to the pinnacle of molecular biology. By 2008 three Broad scientists, including Lander, ranked in the top 10 most-cited authors of recent papers in molecular biology and genetics. In 2011, Lander had more “hot papers” (meaning those cited most by other scientists) in any field, not just biology, than anyone else over the previous two years, according to ThomsonReuters’ ScienceWatch. By 2014, 8 out of what ScienceWatch called “the 17 hottest-of-the-hot researchers” in genomics were at the Broad.

The institute was punching well above its weight. It attracted eye-popping donations, including $650 million for psychiatric research from the foundation of philanthropist Ted Stanley in 2014 and, since its 2003 founding, $800 million from Los Angeles developer Eli Broad and his wife Edythe. It won $176.5 million in research grants from the National Institutes of Health in fiscal year 2015, ranking it 34th. Larger institutions got more — $604 million for Johns Hopkins, $563 million for the University of California, San Francisco — but the Broad’s smaller number of core researchers were leaving rivals in the dust in terms of their contributions to and influence in science.

To many biomedical researchers at other institutions, said Cook-Deegan, “it feels that these guys from Boston, with more money than God, are trying to muscle in. . . . People [at the Broad] think they work at the best biomedical research institution in the world, and at meetings they let everyone know that.” Cook-Deegan admires Lander: he nominated him for the prestigious Abelson Award for public service to science, which will be given to Lander next month by the American Association for the Advancement of Science.

Apart from the resentment Lander inspires because of the Broad’s success, there is lingering animus over what Lander represents: Big Science. Physics became Big Science — dominated by huge collaborations rather than lone investigators — decades ago with the advent of atomic accelerators. A key 2015 paper on the Higgs boson (“the God particle”) had 5,154 authors. Biology went that route with the launch of the Human Genome Project, the international effort to determine the sequence of 6 billion molecular “letters” that make up human DNA.

Lander was not present at the creation of the $3 billion project in 1990, but the sequencing center he oversaw at the Whitehead Institute became a powerhouse in the race to complete it. Much of that work was done by robots and involved little creativity (once scientists figured out how to do the sequencing). Some individual investigators felt they couldn’t compete against peers at the sequencing centers in the race for grants.

“He became a symbol of plowing lots of resources into industrialized, mindless science that could be run by machines and technicians and so wasn’t real biology,” said one scholar of that period. “Eric came to embody Big Science in that way.”

More than that, Lander played an outsized role in the project relative to his background and experience. A mathematician by training, after he graduated from Princeton in 1978 and earned a PhD in math in 1981 at Oxford University as a Rhodes Scholar, he taught managerial economics at Harvard Business School from 1981 to 1990. He slowly became bored by the MBA world and enchanted with biology, however, and in 1990 founded the genome center at the Whitehead. It was hardly the pay-your-dues, do your molecular biology PhD and postdoctoral fellowship route to a leading position in the white-hot field of genomics.

“Eric appeared to be an upstart to some people in the science establishment, a mathematician interloper in the tight club of molecular biology,” said Fintan Steele, former director of communications and scientific education at the Broad.

By the late 1990s, confidential National Institutes of Health documents estimated that the genome project was on track to be no more than two-thirds finished by 2005, when it was supposed to be completed, according to histories of the effort. That would have been a disaster: geneticist Craig Venter and his company, Celera, had launched a competing genome-sequencing project and boasted that they would beat the public project to the finish line. Worse, Venter intended to patent DNA sequences, meaning whatever Celera sequenced first would be owned by a for-profit company.

In early 1998, James Watson, codiscoverer of DNA’s double-helix structure and former head of the genome project, asked Lander to persuade NIH to spend more money, faster. Lander thought the problem went beyond funding. The sequencing project was “too bloody complicated, with too many groups,” he told the New Yorker in 2000. Tapping his business acumen, Lander decided the project needed to become more focused, with fewer groups. He also thought that allowing two dozen sequencing labs to each claim part of the genome for their own was “madness,” he told author Victor McElheny for a 2010 book on the genome project. If any lab was slow, the whole project would be late.

Lander, therefore, pushed to reorganize the genome project. Scientists who disagreed with his strategy “bellowed in protest,” according to James Shreeve’s 2004 book “The Genome War,” and Lander’s “constant demands” for his lab to sequence more and more “led to a crescendo of heated conversations.” But Lander’s strong-arming worked: the public effort battled Venter to a tie, with both releasing “drafts” of the human genome in 2001. Lander was first among equals, the lead author of the Nature paper unveiling the “book of life.”

His success left some veteran geneticists bitter at the upstart who helped rescue the highest-profile scientific endeavor of the 1990s. But “competing with Venter excused a lot of behavior,” said New York University bioethicist Arthur Caplan, a member of the Celera advisory board at the time.

Lander attributes the genome project’s “huge success” to, among other things, the fact that “it had the flexibility to bring in people with different perspectives and skills.” On weekly phone calls for five years, he said, “we debated and argued about everything imaginable.”

In 2003 Lander was instrumental in moving the genome center from the Whitehead to the just-created Broad. “It wasn’t just the genome center that he took,” said Steele, the former Broad staffer. “It was also the substantial funding that supported the center.” That move was spurred in part by the fact that the genome center had outgrown the Whitehead; it constituted three-quarters of the Whitehead’s budget.

The departure of Lander and his genome center to the Broad generated hard feelings at the Whitehead. One veteran of that battle recalls it as “very bloody,” especially because the Whitehead wasn’t raising much money and feared that Lander would vacuum up potential donors. For several years after, Whitehead annual reports showed a picture of its facility in Cambridge’s Kendall Square with the next-door Broad conspicuously missing.

In the biotech hotbed that surrounds the Massachusetts Institute of Technology, it seems every biology PhD has founded a company. Lander is a cofounder of Millennium Pharmaceuticals, Infinity Pharmaceuticals, Verastem, and the cancer vaccine startup Neon Therapeutics. He is a founding advisor to cancer genomics company Foundation Medicine and has close ties to venture capital firm Third Rock Ventures, a major investor in the CRISPR company Editas.

Although his involvement in the for-profit world hardly makes him unusual — MIT, like many universities, encourages scientists to translate their research into drugs and other products — it has, nonetheless, added to the resentment. With Foundation, said a former Broad scientist, “there was a belief that the Broad researchers had done all this work on cancer genomics, and Foundation is built on that. People were asking, ‘Are these guys going to get rich on our work?’”

The most serious misstep in Lander’s Cell essay was arguably a failure to disclose a potential conflict of interest: the Broad is engaged in a bitter fight with the University of California system over CRISPR patents. Lander reported this to Cell, but the journal’s policy is not to note such “institutional” conflicts. A review of CRISPR coauthored by Berkeley’s Jennifer Doudna in the same issue has no disclosure either, even though she cofounded the CRISPR company Caribou Biosciences, and the Twitterverse has not attacked her.

Critics say Lander downplayed seminal CRISPR research by Doudna and her key collaborator, Emmanuelle Charpentier, and overstated the contributions of Broad biologist Feng Zhang. That has been portrayed as sexist, an impression supported by the title of the essay: Heroes of CRISPR. With too-frequent cases of sexism and outright sexual harassment by leading scientists, sensitivities on this are high, but his defenders say Lander has long been a strong supporter of women in science.

“He has always been one of my greatest advocates,” said Harvard and Broad biologist Pardis Sabeti, who did key genetics work on the recent Ebola outbreak. “He has hired strong, tough, brilliant women scientists for the Broad, and has made it one of the best places for women scientists to work.”

Lander said that he wanted his Cell essay simply “to turn the spotlight on 20 years of the backstory of CRISPR,” showing that science is an “ensemble” enterprise in which even key discoveries struggle to be recognized — journals rejected early CRISPR papers. “But I guess it’s only natural that some people will want to focus on current conflicts,” he said.

Correction: An earlier version of this story failed to attribute to other scientists the claims of errors in Eric Lander’s essay. It also called his response to those claims corrections when he described them as clarifications.

[Some simply "join the fray", but I do not. My observation is that there are different workers in science. Original contributors can be easily distinguished from integrators - and Eric certainly excels in the latter. In the next segment I am not talking about Eric but the rest of us. Myself, enthused by reading in 1958 John von Neumann's "The Computer and the Brain", I devoted my efforts to a single goal of science (yes, "hypothesis driven", i.e. that there is mathematics even to biology). Von Neumann alluded on the last page of his book that the mathematics of the brain we do not know - but it is certainly different from any known mathematics. My half a Century was spent and the result is astoundingly clear. Geometry is the intrinsic mathematics of the brain, and it is united with the geometrization of genome informatics. Those who are truly interested in the elaboration can look up my homepage. The geometry of the metrical (smooth and derivable) space-time domain uses tensor geometry, Google "Tensor Network Theory". As a result, two basic tenets of Big Science "The Brain Project" are simply not true any more. First, "imaging" of either the structure or function of the brain has been proven to be a "necessary means but an insufficient goal". Second, it is just not true that "we do not understand, in the mathematical sense, any brain function". Tensor Network Theory has established that the cerebellar neural networks act as the metric tensor of spacetime coordination space; transforming (covariant) motor intentions into precise (contravariant) motor execution. TNT has been proven experimentally by independent workers, resulted in Neurophilosophy, and an artificial cerebellum to land on one wing badly injured F15 fighters, based on my blueprint as a Senior National Academy Council adviser to NASA. Germany lured me by the Humboldt Prize for Senior Distinguished American Scientists, and our Neurocomputing-II (MIT Press Book, with 1575 citations) appeared. I was, however, not only half done. Two essentials kept me awake at long nights. First, the principal (Purkinje) neuron of the cerebellar network that coordinates movements in a metrical spacetime domain, appeared to be a fractal mathematical object! (Cambridge Univ. Press book chapter, 1989). Second, the publication clinched that the Purkinje cell can only be grown by a recursive genome function.

Eric, with his training in mathematics, could integrate the cerebellar neural network models (his brother had directed his attention to them). His business-school training, however, resulted in a goal different from understanding: making the Human Genome Project the epitome of Big Science (only to be "tied" by the competitive private-sector approach of the Tesla of Genomics, Craig Venter). The world was, however, frozen by the flabbergasting results of "full DNA sequencing". In 2001 there were far fewer "genes" in the full human genome than anybody expected. Ohno's "Junk DNA" came in handy. The next year (2002) it became clear that the "genes" of human and mouse are not only similar in number - they are 98% the same! Clearly, the very significant difference lay in the amount of "Junk DNA". For me, this yielded the "Eureka moment" on February 14, 2002 (Fig. 3, reproduced once the 2002 provisional filing was followed by regular submission). Looking at some repeats with visible self-similarity, I connected dots that had been known, but separate, before. FractoGene is: "Fractal Genome Grows Fractal Organisms". Heralded instantaneously, the FractoGene discovery (of a "cause and effect" between two fractals) nonetheless met a deafening silence by the 50th Anniversary of the Double Helix. No "Integrator of the Eric kind" was anywhere in sight. In spite of a peer-reviewed science publication with the late Malcolm Simons (among the first to sacrifice themselves fighting "Junk DNA", desperately seeking what it IS, if not junk, and who came to terms with FractoGene), and in spite of ENCODE-I (and later ENCODE-II) eroding even the old definition of the "gene" (since it is, for example, fractured; fractal), it was very difficult for most scientists to handle the "double disruption". FractoGene reversed both principal axioms (the false claims of Junk DNA and the Central Dogma). One needed fellow mathematician-genomists, playing the important (if progressive) role of an Integrator. In September 2007, Eric paid a visit to lecture at the University of San Francisco. I put my manuscript The Principle of Recursive Genome Function (2008) (dedicated to Eric) personally into Eric's hand. He instantly looked into it and said "Wow, it even has (fractal) math in it! I will read it on the plane". The Edison of Genomics, with the most plentiful set of original contributions, George Church (yes, "the other person, at Harvard"), suddenly invited me to his own Cold Spring Harbor Lab meeting for a September 16, 2009 presentation of my Fractal Approach. Little did I know that "the other person, at the Broad" was brewing a massive fractal project! It appeared as the Hilbert fractal on the cover of Science on October 9, 2009 (senior, last author Eric Lander). The actual work on the DNA globule was pioneered by Dekker and done mostly by Erez Lieberman. The Integrator reached back a couple of decades to Grosberg and much deeper (to Hilbert). Thus, some original contributors could be skipped. Overall, this still helped me a lot: "Mr. President, The Genome is Fractal!". The double-degree mathematician-genomist Eric Schadt endorsed my fractal approach, and lately so did Nobel Laureate Michael Levitt of Stanford. Much earlier, in 2006, I had taken my PostGenetics with FractoGene to my native Hungary - the first international symposium in history to recall "Junk DNA". I started to pour out mined Fractal Defects of various diseases caused by genomic glitches. "Interesting - what to do with them?" - so went the overwhelming reply.
Today, with Genome Editing a reality, suddenly the IP (8,280,641) is precious (especially since it remains in force for more than a decade to come). We have a real chance to edit Fractal Defects out for a cure. Fractal Defects occurring in regulatory DNA (maiden name "Junk DNA") are the most likely to cause complex genomic misregulation such as cancers, Alzheimer's, Parkinson's, etc. Editing any code (or text), however, assumes that we know the mathematical "language" before we edit. Herein lies the ultimate merit of FractoGene, developed since 2002 and still to be developed over many decades to come. I am not too likely to be with it for most of those decades - but I am glad I sowed the seeds of the recursive dual representation of proteins by coding and non-coding DNA, and of unifying the sparsely metrical functional spaces of neuroscience and genomics. Integrators, looking the other way, may have missed their chance. Edisons, Teslas, etc. can greatly benefit from mathematical understanding - perhaps even more than from "novel" Big Science projects randomly launched (Moonshots everywhere, resulting in Big Data rather than even a little understanding). As the mathematically savvy know, an integral is useful - but you can only benefit from an integral if the original function is defined. Perhaps most importantly, the value of an integral can be floated by a totally arbitrary constant, "C". With a high "C", the "Secret of the Genome is that it is Fractal" (22:22), though pioneers are skipped; with a low "C", the fractal genome goes unmentioned. The NIH National Cancer Institute is on the track of fractals. (- Andras_at_Pellionisz_dot_com]


Easy DNA Editing Will Remake the World. Buckle Up.

WIRED

Amy Maxmen

SPINY GRASS AND SCRAGGLY PINES creep amid the arts-and-crafts buildings of the Asilomar Conference Grounds, 100 acres of dune where California's Monterey Peninsula hammerheads into the Pacific. It's a rugged landscape, designed to inspire people to contemplate their evolving place on Earth. So it was natural that 140 scientists gathered here in 1975 for an unprecedented conference.

They were worried about what people called “recombinant DNA,” the manipulation of the source code of life. It had been just 22 years since James Watson, Francis Crick, and Rosalind Franklin described what DNA was—deoxyribonucleic acid, four different structures called bases stuck to a backbone of sugar and phosphate, in sequences thousands of bases long. DNA is what genes are made of, and genes are the basis of heredity.

Preeminent genetic researchers like David Baltimore, then at MIT, went to Asilomar to grapple with the implications of being able to decrypt and reorder genes. It was a God-like power—to plug genes from one living thing into another. Used wisely, it had the potential to save millions of lives. But the scientists also knew their creations might slip out of their control. They wanted to consider what ought to be off-limits.

By 1975, other fields of science—like physics—were subject to broad restrictions. Hardly anyone was allowed to work on atomic bombs, say. But biology was different. Biologists still let the winding road of research guide their steps. On occasion, regulatory bodies had acted retrospectively—after Nuremberg, Tuskegee, and the human radiation experiments, external enforcement entities had told biologists they weren't allowed to do that bad thing again. Asilomar, though, was about establishing prospective guidelines, a remarkably open and forward-thinking move.

At the end of the meeting, Baltimore and four other molecular biologists stayed up all night writing a consensus statement. They laid out ways to isolate potentially dangerous experiments and determined that cloning or otherwise messing with dangerous pathogens should be off-limits. A few attendees fretted about the idea of modifications of the human “germ line”—changes that would be passed on from one generation to the next—but most thought that was so far off as to be unrealistic. Engineering microbes was hard enough. The rules the Asilomar scientists hoped biology would follow didn't look much further ahead than ideas and proposals already on their desks.

Earlier this year, Baltimore joined 17 other researchers for another California conference, this one at the Carneros Inn in Napa Valley. “It was a feeling of déjà vu,” Baltimore says. There he was again, gathered with some of the smartest scientists on earth to talk about the implications of genome engineering.

The stakes, however, have changed. Everyone at the Napa meeting had access to a gene-editing technique called Crispr-Cas9. The first term is an acronym for “clustered regularly interspaced short palindromic repeats,” a description of the genetic basis of the method; Cas9 is the name of a protein that makes it work. Technical details aside, Crispr-Cas9 makes it easy, cheap, and fast to move genes around—any genes, in any living thing, from bacteria to people. “These are monumental moments in the history of biomedical research,” Baltimore says. “They don't happen every day.”

Using the three-year-old technique, researchers have already reversed mutations that cause blindness, stopped cancer cells from multiplying, and made cells impervious to the virus that causes AIDS. Agronomists have rendered wheat invulnerable to killer fungi like powdery mildew, hinting at engineered staple crops that can feed a population of 9 billion on an ever-warmer planet. Bioengineers have used Crispr to alter the DNA of yeast so that it consumes plant matter and excretes ethanol, promising an end to reliance on petrochemicals. Startups devoted to Crispr have launched. International pharmaceutical and agricultural companies have spun up Crispr R&D. Two of the most powerful universities in the US are engaged in a vicious war over the basic patent. Depending on what kind of person you are, Crispr makes you see a gleaming world of the future, a Nobel medallion, or dollar signs.

The technique is revolutionary, and like all revolutions, it's perilous. Crispr goes well beyond anything the Asilomar conference discussed. It could at last allow genetics researchers to conjure everything anyone has ever worried they would—designer babies, invasive mutants, species-specific bioweapons, and a dozen other apocalyptic sci-fi tropes. It brings with it all-new rules for the practice of research in the life sciences. But no one knows what the rules are—or who will be the first to break them.

IN A WAY, humans were genetic engineers long before anyone knew what a gene was. They could give living things new traits—sweeter kernels of corn, flatter bulldog faces—through selective breeding. But it took time, and it didn't always pan out. By the 1930s refining nature got faster. Scientists bombarded seeds and insect eggs with x-rays, causing mutations to scatter through genomes like shrapnel. If one of hundreds of irradiated plants or insects grew up with the traits scientists desired, they bred it and tossed the rest. That's where red grapefruits came from, and most barley for modern beer.

Genome modification has become less of a crapshoot. In 2002, molecular biologists learned to delete or replace specific genes using enzymes called zinc-finger nucleases; the next-generation technique used enzymes named TALENs.

Yet the procedures were expensive and complicated. They only worked on organisms whose molecular innards had been thoroughly dissected—like mice or fruit flies. Genome engineers went on the hunt for something better.

As it happened, the people who found it weren't genome engineers at all. They were basic researchers, trying to unravel the origin of life by sequencing the genomes of ancient bacteria and microbes called Archaea (as in archaic), descendants of the first life on Earth. Deep amid the bases, the As, Ts, Gs, and Cs that made up those DNA sequences, microbiologists noticed recurring segments that were the same back to front and front to back—palindromes. The researchers didn't know what these segments did, but they knew they were weird. In a branding exercise only scientists could love, they named these clusters of repeating palindromes Crispr.

Then, in 2005, a microbiologist named Rodolphe Barrangou, working at a Danish food company called Danisco, spotted some of those same palindromic repeats in Streptococcus thermophilus, the bacteria that the company uses to make yogurt and cheese. Barrangou and his colleagues discovered that the unidentified stretches of DNA between Crispr's palindromes matched sequences from viruses that had infected their S. thermophilus colonies. Like most living things, bacteria get attacked by viruses—in this case they're called bacteriophages, or phages for short. Barrangou's team went on to show that the segments served an important role in the bacteria's defense against the phages, a sort of immunological memory. If a phage infected a microbe whose Crispr carried its fingerprint, the bacteria could recognize the phage and fight back. Barrangou and his colleagues realized they could save their company some money by selecting S. thermophilus species with Crispr sequences that resisted common dairy viruses.

As more researchers sequenced more bacteria, they found Crisprs again and again—half of all bacteria had them. Most Archaea did too. And even stranger, some of Crispr's sequences didn't encode the eventual manufacture of a protein, as is typical of a gene, but instead led to RNA—single-stranded genetic material. (DNA, of course, is double-stranded.)

That pointed to a new hypothesis. Most present-day animals and plants defend themselves against viruses with structures made out of RNA. So a few researchers started to wonder if Crispr was a primordial immune system. Among the people working on that idea was Jill Banfield, a geomicrobiologist at UC Berkeley, who had found Crispr sequences in microbes she collected from acidic, 110-degree water from the defunct Iron Mountain Mine in Shasta County, California. But to figure out if she was right, she needed help.

Luckily, one of the country's best-known RNA experts, a biochemist named Jennifer Doudna, worked on the other side of campus in an office with a view of the Bay and San Francisco's skyline. It certainly wasn't what Doudna had imagined for herself as a girl growing up on the Big Island of Hawaii. She simply liked math and chemistry—an affinity that took her to Harvard and then to a postdoc at the University of Colorado. That's where she made her initial important discoveries, revealing the three-dimensional structure of complex RNA molecules that could, like enzymes, catalyze chemical reactions.

The mine bacteria piqued Doudna's curiosity, but when Doudna pried Crispr apart, she didn't see anything to suggest the bacterial immune system was related to the one plants and animals use. Still, she thought the system might be adapted for diagnostic tests.

Banfield wasn't the only person to ask Doudna for help with a Crispr project. In 2011, Doudna was at an American Society for Microbiology meeting in San Juan, Puerto Rico, when an intense, dark-haired French scientist asked her if she wouldn't mind stepping outside the conference hall for a chat. This was Emmanuelle Charpentier, a microbiologist at Umeå University in Sweden.

As they wandered through the alleyways of old San Juan, Charpentier explained that one of Crispr's associated proteins, named Csn1, appeared to be extraordinary. It seemed to search for specific DNA sequences in viruses and cut them apart like a microscopic multitool. Charpentier asked Doudna to help her figure out how it worked. “Somehow the way she said it, I literally—I can almost feel it now—I had this chill down my back,” Doudna says. “When she said ‘the mysterious Csn1’ I just had this feeling, there is going to be something good here.”

Back in Sweden, Charpentier kept a colony of Streptococcus pyogenes in a biohazard chamber. Few people want S. pyogenes anywhere near them. It can cause strep throat and necrotizing fasciitis—flesh-eating disease. But it was the bug Charpentier worked with, and it was in S. pyogenes that she had found that mysterious yet mighty protein, now renamed Cas9. Charpentier swabbed her colony, purified its DNA, and FedExed a sample to Doudna.

Working together, Charpentier’s and Doudna’s teams found that Crispr made two short strands of RNA and that Cas9 latched onto them. The sequence of the RNA strands corresponded to stretches of viral DNA and could home in on those segments like a genetic GPS. And when the Crispr-Cas9 complex arrives at its destination, Cas9 does something almost magical: It changes shape, grasping the DNA and slicing it with a precise molecular scalpel.

Here’s what’s important: Once they’d taken that mechanism apart, Doudna’s postdoc, Martin Jinek, combined the two strands of RNA into one fragment—“guide RNA”—that Jinek could program. He could make guide RNA with whatever genetic letters he wanted; not just from viruses but from, as far as they could tell, anything. In test tubes, the combination of Jinek’s guide RNA and the Cas9 protein proved to be a programmable machine for DNA cutting. Compared to TALENs and zinc-finger nucleases, this was like trading in rusty scissors for a computer-controlled laser cutter. “I remember running into a few of my colleagues at Berkeley and saying we have this fantastic result, and I think it’s going to be really exciting for genome engineering. But I don’t think they quite got it,” Doudna says. “They kind of humored me, saying, ‘Oh, yeah, that’s nice.’”
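A rough sketch of what "programmable" means here (an illustrative toy, not the actual design pipeline; the sequence is invented and real guide design also weighs off-target matches and many other constraints): for the commonly used SpCas9, a 20-letter guide is chosen so that its genomic target sits immediately next to an "NGG" PAM, so candidate targets can be enumerated by a simple scan.

```python
# Minimal sketch: scan a (hypothetical) target sequence for 20-nt
# protospacers that sit immediately 5' of an "NGG" PAM, the site
# requirement usually described for SpCas9. Sequences are made up.

def find_spacer_candidates(dna, spacer_len=20):
    """Return (spacer, position) pairs whose 3' end abuts an NGG PAM."""
    dna = dna.upper()
    hits = []
    for i in range(len(dna) - spacer_len - 2):
        pam = dna[i + spacer_len : i + spacer_len + 3]
        if pam[1:] == "GG":                      # N-G-G
            hits.append((dna[i : i + spacer_len], i))
    return hits

if __name__ == "__main__":
    toy_target = "ATGCCGTTAGCTAGGCTTACCGGATCAGGCTTAAGGCCTTGG"
    for spacer, pos in find_spacer_candidates(toy_target):
        print(f"candidate guide at {pos}: {spacer}")
```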

On June 28, 2012, Doudna’s team published its results in Science. In the paper and in an earlier corresponding patent application, they suggest their technology could be a tool for genome engineering. It was elegant and cheap. A grad student could do it.

The finding got noticed. In the 10 years preceding 2012, 200 papers mentioned Crispr. By 2014 that number had more than tripled. Doudna and Charpentier were each recently awarded the $3 million 2015 Breakthrough Prize. Time magazine listed the duo among the 100 most influential people in the world. Nobody was just humoring Doudna anymore.

MOST WEDNESDAY AFTERNOONS, Feng Zhang, a molecular biologist at the Broad Institute of MIT and Harvard, scans the contents of Science as soon as they are posted online. In 2012, he was working with Crispr-Cas9 too. So when he saw Doudna and Charpentier's paper, did he think he'd been scooped? Not at all. “I didn't feel anything,” Zhang says. “Our goal was to do genome editing, and this paper didn't do it.” Doudna's team had cut DNA floating in a test tube, but to Zhang, if you weren't working with human cells, you were just screwing around.

That kind of seriousness is typical for Zhang. At 11, he moved from China to Des Moines, Iowa, with his parents, who are engineers—one computer, one electrical. When he was 16, he got an internship at the gene therapy research institute at Iowa Methodist hospital. By the time he graduated high school he'd won multiple science awards, including third place in the Intel Science Talent Search.

When Doudna talks about her career, she dwells on her mentors; Zhang lists his personal accomplishments, starting with those high school prizes. Doudna seems intuitive and has a hands-off management style. Zhang … pushes. We scheduled a video chat at 9:15 pm, and he warned me that we'd be talking data for a couple of hours. “Power-nap first,” he said.

Zhang got his job at the Broad in 2011, when he was 29. Soon after starting there, he heard a speaker at a scientific advisory board meeting mention Crispr. “I was bored,” Zhang says, “so as the researcher spoke, I just Googled it.” Then he went to Miami for an epigenetics conference, but he hardly left his hotel room. Instead Zhang spent his time reading papers on Crispr and filling his notebook with sketches on ways to get Crispr and Cas9 into the human genome. “That was an extremely exciting weekend,” he says, smiling.

Just before Doudna's team published its discovery in Science, Zhang applied for a federal grant to study Crispr-Cas9 as a tool for genome editing. Doudna's publication shifted him into hyperspeed. He knew it would prompt others to test Crispr on genomes. And Zhang wanted to be first.

Even Doudna, for all of her equanimity, had rushed to report her finding, though she hadn't shown the system working in human cells. “Frankly, when you have a result that is exciting,” she says, “one does not wait to publish it.”

In January 2013, Zhang's team published a paper in Science showing how Crispr-Cas9 edits genes in human and mouse cells. In the same issue, Harvard geneticist George Church edited human cells with Crispr too. Doudna's team reported success in human cells that month as well, though Zhang is quick to assert that his approach cuts and repairs DNA better.

That detail matters because Zhang had asked the Broad Institute and MIT, where he holds a joint appointment, to file for a patent on his behalf. Doudna had filed her patent application—which was public information—seven months earlier. But the attorney filing for Zhang checked a box on the application marked “accelerate” and paid a fee, usually somewhere between $2,000 and $4,000. A series of emails followed between agents at the US Patent and Trademark Office and the Broad's patent attorneys, who argued that their claim was distinct.

A little more than a year after those human-cell papers came out, Doudna was on her way to work when she got an email telling her that Zhang, the Broad Institute, and MIT had indeed been awarded the patent on Crispr-Cas9 as a method to edit genomes. “I was quite surprised,” she says, “because we had filed our paperwork several months before he had.”

The Broad win started a firefight. The University of California amended Doudna's original claim to overlap Zhang's and sent the patent office a 114-page application for an interference proceeding—a hearing to determine who owns Crispr—this past April. In Europe, several parties are contesting Zhang's patent on the grounds that it lacks novelty. Zhang points to his grant application as proof that he independently came across the idea. He says he could have done what Doudna's team did in 2012, but he wanted to prove that Crispr worked within human cells. The USPTO may make its decision as soon as the end of the year.

The stakes here are high. Any company that wants to work with anything other than microbes will have to license Zhang's patent; royalties could be worth billions of dollars, and the resulting products could be worth billions more. Just by way of example: In 1983 Columbia University scientists patented a method for introducing foreign DNA into cells, called cotransformation. By the time the patents expired in 2000, they had brought in $790 million in revenue.

It's a testament to Crispr's value that despite the uncertainty over ownership, companies based on the technique keep launching. In 2011 Doudna and a student founded a company, Caribou, based on earlier Crispr patents; the University of California offered Caribou an exclusive license on the patent Doudna expected to get. Caribou uses Crispr to create industrial and research materials, potentially enzymes in laundry detergent and laboratory reagents. To focus on disease—where the long-term financial gain of Crispr-Cas9 will undoubtedly lie—Caribou spun off another biotech company called Intellia Therapeutics and sublicensed the Crispr-Cas9 rights. Pharma giant Novartis has invested in both startups. In Switzerland, Charpentier cofounded Crispr Therapeutics. And in Cambridge, Massachusetts, Zhang, George Church, and several others founded Editas Medicine, based on licenses on the patent Zhang eventually received.

Thus far the four companies have raised at least $158 million in venture capital.

ANY GENE TYPICALLY has just a 50–50 chance of getting passed on. Either the offspring gets a copy from Mom or a copy from Dad. But in 1957 biologists found exceptions to that rule, genes that literally manipulated cell division and forced themselves into a larger number of offspring than chance alone would have allowed.

A decade ago, an evolutionary geneticist named Austin Burt proposed a sneaky way to use these “selfish genes.” He suggested tethering one to a separate gene—one that you wanted to propagate through an entire population. If it worked, you'd be able to drive the gene into every individual in a given area. Your gene of interest graduates from public transit to a limousine in a motorcade, speeding through a population in flagrant disregard of heredity's traffic laws. Burt suggested using this “gene drive” to alter mosquitoes that spread malaria, which kills around a million people every year. It's a good idea. In fact, other researchers are already using other methods to modify mosquitoes to resist the Plasmodium parasite that causes malaria and to be less fertile, reducing their numbers in the wild. But engineered mosquitoes are expensive. If researchers don't keep topping up the mutants, the normals soon recapture control of the ecosystem.

Push those modifications through with a gene drive and the normal mosquitoes wouldn't stand a chance. The problem is, inserting the gene drive into the mosquitoes was impossible. Until Crispr-Cas9 came along.
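A back-of-the-envelope sketch of why "wouldn't stand a chance" is the right phrase (illustrative numbers and an idealized drive with perfect conversion, not a model of any real mosquito release): under ordinary Mendelian inheritance an allele introduced at 1% stays near 1%, while an ideal gene drive squares the frequency of the non-drive allele every generation.

```python
# Minimal sketch comparing ordinary Mendelian inheritance (a gene has a
# 50-50 chance of being passed on) with an idealized gene drive that
# converts every heterozygote, so the construct is always transmitted.
# Under random mating this means the non-drive allele frequency q is
# squared each generation (q' = q**2). Parameters are illustrative.

def mendelian(p, generations):
    # no selection, no drive: allele frequency does not change
    return p

def ideal_gene_drive(p, generations):
    q = 1.0 - p
    for _ in range(generations):
        q = q * q                 # heterozygotes converted to drive homozygotes
    return 1.0 - q

if __name__ == "__main__":
    start = 0.01                  # release drive carriers at 1% allele frequency
    for gen in (1, 3, 5, 8):
        print(gen, round(mendelian(start, gen), 4),
              round(ideal_gene_drive(start, gen), 4))
```

With these toy numbers the drive allele climbs from 1% to roughly 92% in eight generations while the ordinary allele stays at 1%, which is the whole point of the scheme.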

Today, behind a set of four locked and sealed doors in a lab at the Harvard School of Public Health, a special set of mosquito larvae of the African species Anopheles gambiae wriggle near the surface of shallow tubs of water. These aren't normal Anopheles, though. The lab is working on using Crispr to insert malaria-resistant gene drives into their genomes. It hasn't worked yet, but if it does … well, consider this from the mosquitoes' point of view. This project isn't about reengineering one of them. It's about reengineering them all.

Kevin Esvelt, the evolutionary engineer who initiated the project, knows how serious this work is. The basic process could wipe out any species. Scientists will have to study the mosquitoes for years to make sure that the gene drives can't be passed on to other species of mosquitoes. And they want to know what happens to bats and other insect-eating predators if the drives make mosquitoes extinct. “I am responsible for opening a can of worms when it comes to gene drives,” Esvelt says, “and that is why I try to ensure that scientists are taking precautions and showing themselves to be worthy of the public's trust—maybe we're not, but I want to do my damnedest to try.”

Esvelt talked all this over with his adviser—Church, who also worked with Zhang. Together they decided to publish their gene-drive idea before it was actually successful. They wanted to lay out their precautionary measures, way beyond five nested doors. Gene drive research, they wrote, should take place in locations where the species of study isn't native, making it less likely that escapees would take root. And they also proposed a way to turn the gene drive off when an engineered individual mated with a wild counterpart—a genetic sunset clause. Esvelt filed for a patent on Crispr gene drives, partly, he says, to block companies that might not take the same precautions.

Within a year, and without seeing Esvelt's papers, biologists at UC San Diego had used Crispr to insert gene drives into fruit flies—they called them “mutagenic chain reactions.” They had done their research in a chamber behind five doors, but the other precautions weren't there.

Church said the San Diego researchers had gone “a step too far”—big talk from a scientist who says he plans to use Crispr to bring back an extinct woolly mammoth by deriving genes from frozen corpses and injecting them into elephant embryos. (Church says tinkering with one woolly mammoth is way less scary than messing with whole populations of rapidly reproducing insects. “I'm afraid of everything,” he says. “I encourage people to be as creative in thinking about the unintended consequences of their work as the intended.”)

Ethan Bier, who worked on the San Diego fly study, agrees that gene drives come with risks. But he points out that Esvelt's mosquitoes don't have the genetic barrier Esvelt himself advocates. (To be fair, that would defeat the purpose of a gene drive.) And the ecological barrier, he says, is nonsense. “In Boston you have hot and humid summers, so sure, tropical mosquitoes may not be native, but they can certainly survive,” Bier says. “If a pregnant female got out, she and her progeny could reproduce in a puddle, fly to ships in the Boston Harbor, and get on a boat to Brazil.”

These problems don't end with mosquitoes. One of Crispr's strengths is that it works on every living thing. That kind of power makes Doudna feel like she opened Pandora's box. Use Crispr to treat, say, Huntington's disease—a debilitating neurological disorder—in the womb, when an embryo is just a ball of cells? Perhaps. But the same method could also possibly alter less medically relevant genes, like the ones that make skin wrinkle. “We haven't had the time, as a community, to discuss the ethics and safety,” Doudna says, “and, frankly, whether there is any real clinical benefit of this versus other ways of dealing with genetic disease.”

That's why she convened the meeting in Napa. All the same problems of recombinant DNA that the Asilomar attendees tried to grapple with are still there—more pressing now than ever. And if the scientists don't figure out how to handle them, some other regulatory body might. Few researchers, Baltimore included, want to see Congress making laws about science. “Legislation is unforgiving,” he says. “Once you pass it, it is very hard to undo.”

In other words, if biologists don't start thinking about ethics, the taxpayers who fund their research might do the thinking for them.

All of that only matters if every scientist is on board. A month after the Napa conference, researchers at Sun Yat-sen University in Guangzhou, China, announced they had used Crispr to edit human embryos. Specifically they were looking to correct mutations in the gene that causes beta thalassemia, a disorder that interferes with a person's ability to make healthy red blood cells.

The work wasn't successful—Crispr, it turns out, didn't target genes as well in embryos as it does in isolated cells. The Chinese researchers tried to skirt the ethical implications of their work by using nonviable embryos, which is to say they could never have been brought to term. But the work attracted attention. A month later, the US National Academy of Sciences announced that it would create a set of recommendations for scientists, policymakers, and regulatory agencies on when, if ever, embryonic engineering might be permissible. Another National Academy report will focus on gene drives. Though those recommendations don't carry the weight of law, federal funding in part determines what science gets done, and agencies that fund research around the world often abide by the academy's guidelines.

THE TRUTH IS, most of what scientists want to do with Crispr is not controversial. For example, researchers once had no way to figure out why spiders have the same gene that determines the pattern of veins in the wings of flies. You could sequence the spider and see that the “wing gene” was in its genome, but all you’d know was that it certainly wasn’t designing wings. Now, with less than $100, an ordinary arachnologist can snip the wing gene out of a spider embryo and see what happens when that spider matures. If it’s obvious—maybe its claws fail to form—you’ve learned that the wing gene must have served a different purpose before insects branched off, evolutionarily, from the ancestor they shared with spiders. Pick your creature, pick your gene, and you can bet someone somewhere is giving it a go.

Academic and pharmaceutical company labs have begun to develop Crispr-based research tools, such as cancerous mice—perfect for testing new chemotherapies. A team at MIT, working with Zhang, used Crispr-Cas9 to create, in just weeks, mice that inevitably get liver cancer. That kind of thing used to take more than a year. Other groups are working on ways to test drugs on cells with single-gene variations to understand why the drugs work in some cases and fail in others. Zhang’s lab used the technique to learn which genetic variations make people resistant to a melanoma drug called Vemurafenib. The genes he identified may provide research targets for drug developers.

The real money is in human therapeutics. For example, labs are working on the genetics of so-called elite controllers, people who can be HIV-positive but never develop AIDS. Using Crispr, researchers can knock out a gene called CCR5, which makes a protein that helps usher HIV into cells. You’d essentially make someone an elite controller. Or you could use Crispr to target HIV directly; that begins to look a lot like a cure.

Or—and this idea is decades away from execution—you could figure out which genes make humans susceptible to HIV overall. Make sure they don’t serve other, more vital purposes, and then “fix” them in an embryo. It’d grow into a person immune to the virus.

But straight-out editing of a human embryo sets off all sorts of alarms, both in terms of ethics and legality. It contravenes the policies of the US National Institutes of Health, and in spirit at least runs counter to the United Nations’ Universal Declaration on the Human Genome and Human Rights. (Of course, when the US government said it wouldn’t fund research on human embryonic stem cells, private entities raised millions of dollars to do it themselves.) Engineered humans are a ways off—but nobody thinks they’re science fiction anymore.

Even if scientists never try to design a baby, the worries those Asilomar attendees had four decades ago now seem even more prescient. The world has changed. “Genome editing started with just a few big labs putting in lots of effort, trying something 1,000 times for one or two successes,” says Hank Greely, a bioethicist at Stanford. “Now it’s something that someone with a BS and a couple thousand dollars’ worth of equipment can do. What was impractical is now almost everyday. That’s a big deal.”

In 1975 no one was asking whether a genetically modified vegetable should be welcome in the produce aisle. No one was able to test the genes of an unborn baby, or sequence them all. Today swarms of investors are racing to bring genetically engineered creations to market. The idea of Crispr slides almost frictionlessly into modern culture.

In an odd reversal, it’s the scientists who are showing more fear than the civilians. When I ask Church for his most nightmarish Crispr scenario, he mutters something about weapons and then stops short. He says he hopes to take the specifics of the idea, whatever it is, to his grave. But thousands of other scientists are working on Crispr. Not all of them will be as cautious. “You can’t stop science from progressing,” Jinek says. “Science is what it is.” He’s right. Science gives people power. And power is unpredictable.

[The ominous last paragraph aside, why should this column take special notice of Genome Editing? A formal reason is that the motto of HolGenTech, Inc. has been for years "Ask what you can do for your genome". Now the answer, in theory, is obvious: "If there are defects in your genome, get them edited out". However, there is the well-known question "what is the difference between theory and practice?" - "In theory, there is no difference. The difference is in practice". Genome Editing may be "easy" (as the title of this summary says) IF YOU KNOW WHAT TO EDIT OUT AND WHAT THE REPLACEMENT SHOULD BE. In simple cases, like well-known single nucleotide polymorphisms (the ethical barrier - outside of China - aside), genome editing is truly a straightforward process. It is like a spell-checker clicking on a red-lined word, and the single character is changed. However, to edit a language with complex glitches, one must understand the meaning - there is no way around it. "Fractal DNA grows fractal organisms" provides the mathematics (fractal geometry) that leads us to such understanding. If you think (and everyone should) that Genome Editing "Will remake the World", size up the value of (mathematical) understanding put together with the mechanism of editing! andras_at_pellionisz_dot_com]


Could You Be any Cuter? Genome Editing and the Future of the Human Species

GEORGE W. SLEDGE, JR., MD, Chief of Oncology at Stanford University

Thursday, May 14, 2015

If you want to see what the future holds for us, let me suggest two recent articles. The first, published in the March 5th issue of the MIT Technology Review by Antonio Regalado, is called “Engineering the Perfect Baby.” The second, published in Nature just a week later by a group of concerned scientists, is called “Don’t Edit the Human Germ Line.” Both discuss recent advances that, for all practical purposes, turn science fiction into science. It’s an interesting story.

The story goes back three years to the development of CRISPR/Cas-9 technology for gene editing by Jennifer Doudna and Emmanuelle Charpentier. CRISPRs (short for Clustered Regularly Interspaced Short Palindromic Repeats) are short DNA segments in which segments of viral DNA are inserted, which are then transcribed to a form of RNA (cr-RNA). This viral-specific cr-RNA then directs the nuclease Cas9 to the invading complementary viral DNA, which is cleaved.
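For readers who want the base-pairing logic of that sentence spelled out, here is a toy sketch (sequences invented; crRNA processing, tracrRNA, and the PAM requirement are all simplified away): "directing" Cas9 amounts to the crRNA being able to base-pair with one strand of the invading DNA, i.e. a reverse-complement match.

```python
# Toy illustration of the complementarity test implied above: a crRNA
# "recognizes" invading DNA when the reverse complement of the DNA stretch
# (with U in place of T) matches the crRNA sequence. Sequences are made up.

COMPLEMENT = {"A": "T", "C": "G", "G": "C", "T": "A"}

def reverse_complement(dna):
    return "".join(COMPLEMENT[b] for b in reversed(dna))

def crrna_matches(crrna, invading_dna):
    """True if the crRNA base-pairs with the given DNA stretch."""
    dna_as_rna = reverse_complement(invading_dna).replace("T", "U")
    return crrna == dna_as_rna

if __name__ == "__main__":
    viral_dna = "ATGGCCTTAG"                    # hypothetical protospacer
    crrna = "CUAAGGCCAU"                        # its pairing partner
    print(crrna_matches(crrna, viral_dna))      # True
```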

We do not think of bacteria as either needing or having an immune system, but CRISPR/Cas9 functions as one in the prokaryote/bacteriophage arms race. It is elegant and simple, a profoundly cool invention far down on the evolutionary tree that somehow failed to make it to mammals.

Doudna and Charpentier had the exceedingly clever, and in retrospect quite obvious, idea that this could be used to edit specific DNA sequences. I say “in retrospect quite obvious,” but it is the sort of retrospective obviousness that turns previously obscure professors working in equally obscure fields into Nobel laureates, as their 2012 Science CRISPR/Cas-9 paper certainly will.

Molecular biologists love this technology, and for good reason. With CRISPR/Cas-9 one can add or subtract genes almost at will. The technology, while not perfect (more on this later), is a straightforward, off-the-shelf tool kit that allows practically anyone to manipulate the genome of practically any cell. It is a game changer for laboratory research. The technology has launched an astonishing number of papers, several new biotech start-ups, and (already) the inevitable ugly patent lawsuits over who got there first.

Because bacterial DNA and human DNA are forged from the same base elements, what one can do in E. coli one can do in H. sapiens. Whether it is wise for H. sapiens to reproduce E. coli technology is the real question.

What Regalado’s article suggests, and what the Nature article confirms, is that we are close to a tipping point in human history. It is easily conceivable that CRISPR tech can be used to edit the genes of human germ-line cells. We will, in the very near future, be able to alter a baby’s genome, with almost unimaginable consequences.

Is this a line we want to cross? Some, unsurprisingly, find this prospect disturbing. The authors of the Nature paper suggested a moratorium on gene editing of human stem cells until we can work out all of the important practical and ethical issues. Let us slow down, they say, take a deep breath, think things through, and then proceed with caution.

A wonderful idea, but a bit too late, as it turns out. March was so last month. A group of Chinese investigators at the Sun Yat-Sen University in Guangzhou took human stem cells (defective leftovers from a fertility clinic) and used CRISPR/Cas-9 to introduce the β-globin gene. β-globin mutations are responsible for beta thalassemia, which afflicts a significant population of patients.

The paper was published in the April 18 issue of Protein & Cell (a journal I had never heard of before), reportedly after having been rejected by Nature and Science on ethical grounds. It is rather like when Gregor Mendel published his article on the genetics of peas in Proceedings of the Natural History Society of Brünn, only now we have PubMed and the world is a very small place. I suspect Protein & Cell’s impact factor just took a quantum leap upwards.

The paper suggests we are not quite there yet: of the 86 embryos where the authors used CRISPR/Cas-9 to introduce the gene, only 4 “took”, and many had off-target mutational events, not a good thing if you are trying to eliminate a genetic defect. In other words, don’t expect this to be available at your local fertility clinic next week.

But if not next week, then maybe next year, or the year after: this field is moving at light speed, and the Chinese doctors were (or so a recent Science article suggests) using last year’s techniques. Lots of very smart people are piling into the field. This will soon be feasible, then eventually trivial, technology.

And as for a moratorium on gene editing of human stem cells? It might stick for a while, but I am not sanguine about its long-term prospects. I think it is a given that any moratorium will eventually fail.

To answer why this is the case, just look at the history of attempts to limit the use of new technologies:

First, the atomic bomb. In 1945, after the first nuclear explosion at Alamogordo, a group of Manhattan Project scientists, led by Leo Szilard (who famously first thought of the nuclear chain reaction that would occur once one split the uranium atom), petitioned the President to halt the use of the bomb. The petition, dated July 17, 1945, stated “the nation which sets the precedent of using these newly liberated forces of nature for purposes of destruction may have to bear the responsibility of opening the door to an era of devastation on an unimaginable scale."

The powers that be were not amused. The US government had spent two billion 1945 dollars developing the A bomb as a war measure, it faced the likelihood of an invasion of Japan with untold potential casualties, and it had little sympathy for Japanese civilians. It also saw the bomb as a long-term source of political and military power. The niggling objections of the atomic scientists (and by no means all objected) were ignored, and literally within weeks Hiroshima and Nagasaki ushered in the Atomic Age, in all its frightful glory.

That decision tells you that technologies rapidly get out of control of those who create them. In the Atomic Age, one at least needed a well-heeled nation-state to back you if you wanted to build a bomb, a partial barrier (though only partial: impoverished Pakistan, two generations later, is capable of immolating its neighbors). And nation-states, since 1945, have thankfully not used these weapons on other nation-states, though nuclear proliferation sadly continues.

But in the Genome Era, just about any college biology graduate soon will be able to insert genes that eliminate defects or increase function. For practical purposes, Liechtenstein and Monaco could be the biologic equivalent of today’s nuclear powers five years from now. Unless the moratorium is worldwide, all you would need to do would be to fly somewhere that didn’t share the biomedical ethical stance of the Nature authors. And if I knew I carried a deadly genetic defect, I would do anything to save my children from the same fate.

By the way, you might say that comparing the atom bomb to CRISPR/Cas9 is a somewhat ridiculous comparison given the relative significance of the two. And you would be right, though perhaps not in the way you might first think: CRISPR/Cas9 is likely to be far more significant in the long run. A technology that allows a species to intentionally evolve new characteristics is far more important for the history of that species. Gills, anyone? Chlorophyll rather than melanin in your skin? All those pesky vitamins we don’t make ourselves? Edit them in.

The somewhat more pertinent analogy, and one commented on by many, is the Asilomar conference. After Cohen and Boyer performed the first recombinant DNA experiments, there was a similar terror of Dr. Frankenstein experiments by mad scientists. The city fathers of Cambridge, Massachusetts, appropriately frightened by the proximity of Harvard and MIT, passed a law banning the use of recombinant DNA technology within its city limits.

The then-small community of molecular biologists met at the Asilomar conference center (on the Monterey Peninsula) in 1975 and voluntarily developed limits on certain types of genetic experiments until their safety could be determined. It was a highly moral stance by the leaders of a new biologic revolution, but also a highly practical one, as it decreased public opposition to recombinant DNA technology.

The moratorium turned out to be a brief one (no one, to my knowledge, has ever been killed by recombinant DNA, at least not yet), and with its lifting the biotech industry was born, and we never gave those early qualms a second thought.

I’ve been to Asilomar several times: my Oncology division at Stanford holds its annual scientific retreat there. It is a lovely state park on the Pacific coast, and a great place to hold a conference: watching the sunset over the ocean at Asilomar is an awe-inspiring experience.

But Asilomar is just not the right model for what is happening today. Molecular biology is ubiquitous, a global enterprise carried on by tens or hundreds of thousands of scientists, not the small handful in the 1970s. A few academic scientists no longer drive it; big pharma and biotech call the shots, and can be expected to remain highly ethical just so long as no obscene profits can be made from a new technologic development.

Jennifer Doudna has suggested that we need an Asilomar equivalent for CRISPR/Cas9 gene editing of embryos, and indeed there has already been a preliminary meeting of scientists, lawyers, and bioethicists in Napa Valley’s Carneros Inn earlier this year. By the way, the Carneros Inn is even nicer than Asilomar: one should always hold scientific retreats at great resorts in wine country. It greatly improves the meeting outputs.

The Asilomar scientists had what were, in essence, short-term concerns: will recombinant DNA, let loose on the world, be the scientific equivalent of the Four Horsemen of the Apocalypse? Well, no, and we knew the answer quickly.

But CRISPR-Cas9 stem cell germ-line editing, once the technical wrinkles are worked out, is a technology whose medical and social implications will take generations to play out. The pressure to use it for medical purposes will be enormous. Edit out or fix a gene that causes some dreadful neurodegenerative disease (a Huntington’s chorea or its equivalent) and no one will notice the difference for forty or fifty years. These diseases will go away, and who will miss them? And who among my great-grandchildren will even care, it having been something they have always lived with?

Perhaps (one already knows the objections) we should not assign God-like powers over creation to ourselves, but how long will that dike hold when a Senator’s or a billionaire’s or a dictator’s misbegotten embryo needs genomic resuscitation?

And edit in something that makes one smarter or faster or—dare I say—cuter? Cosmetic editing will be popular the moment we figure out how to do it. Pretty much the first law of the consumer electronics industry is that every new technical advance (viz: VCR, CD-ROM, streaming video) is used almost immediately for pornography. I can only imagine what will happen with gene editing.

I simply do not trust us not to use CRISPR/Cas-9 germ-line editing. There is a certain technologic imperialism that renders it inevitable. We always want to play with the cool new toys, and this one will be really, really easy to play with. What will my descendants look like? Probably not like me. And there are those who would say that is a good thing.

[This overview of Genome Editing is not the latest - meanwhile Drs. Doudna and Charpentier received the $3M Breakthrough Prize for their pioneering work, and a couple of days ago a third contributor (Dr. Zhang) was prominently featured - according to some, somewhat myopically so - by a "history overview". We do not get into the personality issues of such reviews. Geography, yes; the recent review shows that even tiny Lithuania has edged into postmodern genomics - and the global map of the economy is certain to change:

The Twenty-Year Story [as interpreted by E.L.] of CRISPR Unfolded across Twelve Cities in Nine Countries. For each "chapter" in the CRISPR "story," the map shows the sites where the primary work occurred and the first submission dates of the papers. Green circles refer to the early discovery of the CRISPR system and its function; red to the genetic, molecular biological, and biochemical characterization; and blue to the final step of biological engineering to enable genome editing.

[Back to the Stanford review with the "cute" title yet pondering utterly serious global issues: the historical comparison with the impact of nuclear science and technology is particularly worth considering. When the atom, which axiomatically was not supposed to split, did split, scientists were flabbergasted for some time. Likewise, when the human DNA was fully sequenced, scientists were flabbergasted by the meager number of "genes", followed by the even more staggering realization the next year that the mouse has not only essentially the same number, but practically the same genes as we do. Although the utility of FractoGene ("Fractal DNA grows fractal organisms") was submitted to the US Patent Office in 2002 and "Fractal Defects" were revealed by 2007 (the last CIP date of the patent filing), old-school genomists were staring at Fractal Defects with glazed eyes: "So what? Can we do anything about them?" For nuclear science and technology, the scale of interest catapulted when the very practical benefit was realized (colossal energy released by either nuclear fission or fusion). The Stanford review, with its "cute" title, masks the similarly profound global implications of Genome Editing. Its exploration of the far horizon of what this science and technology may mean for Homo sapiens glosses over the imminent practical opportunities. Let us take another example of an explosion of technology: the Internet. We all know that it started as a small-scale utility for computer system administrators to email along the massively connected net. The technology truly took off when private industry - e.g. Amazon, eBay, etc. - discovered its immense profit-making ability. (Amazon is today the world's largest "store" without a single brick-and-mortar store at all.) Genome Editing will not take off in the distant future by making us "cuter". Rather, small countries (e.g. Denmark, Lithuania, etc.) may invent extremely lucrative ways to turn genome editing (which is definitely not GMO) into enormous profit. (Back to the Internet: Skype, developed in Estonia by a handful of engineers, yielded one of the biggest investment returns ever, while the HQ and the core of the developers are still in Tallinn, Estonia.) It is just guesswork at this moment what twists will catapult which country into the lead, e.g. by a combination of Mining Fractal Defects and the use of Genome Editing to elegantly get rid of them. True, some people do not like to live by metaphors. We cannot resist providing the visual metaphor that "getting rid of inclusions in diamonds" is already a very profitable business. Of course, inclusions in diamonds are visible - while one has to use FractoGene to find Fractal Defects in much murkier DNA - andras_at_pellionisz_dot_com]

[source]


Chinese scientists create 'designer dogs' by genetic engineering

The Telegraph

Jan. 22.

Madhumita Murgia

Chinese scientists create 'designer dogs' by genetic engineering

Two beagles created using the CRISPR technology were customised to be born with double the muscle mass of a typical dog

Dogs with double muscles by deleting a single gene called myostatin.

[Note that Genome Editing is totally different from GMO. Genome Editing does not introduce foreign DNA sequence (like someone changing an existing text with foreign thoughts). Genome Editing can "fix the spelling" (like a word processor's spell-checker does), or, as in this case, take away (not add) a snippet from an existing DNA - AJP]

A Belgian Blue bull naturally lacks the myostatin gene and hence is heavily muscled

[Note that the bull above naturally lacks the myostatin gene, probably as a result of selection by human breeders over many generations of cattle raising. Copying such an "invention of nature" in other livestock could yield massive economic benefits to agriculture and animal husbandry - AJP]

You've heard of designer babies in science fiction, but it's getting closer to reality: scientists in China claim they are the first to use gene editing to create "designer dogs" with special characteristics.

Two beagle puppies, called Tiangou and Hercules, were created to be extra muscular - with double the muscle mass of a typical dog - by deleting a single gene called myostatin.

The team from the Guangzhou Institutes of Biomedicine and Health reported their results last week in the Journal of Molecular Cell Biology, saying the goal was to create dogs with other DNA mutations, including ones that mimic human diseases such as Parkinson’s and muscular dystrophy, so human treatments could be tested on them.

The muscle-enhanced beagles Tiangou and Hercules were created using a gene-editing technology called CRISPR-Cas9 - a sort of cut-and-paste tool for DNA that allows you to design living creatures the way you want on a computer, and then actually create them.

A natural genetic disorder, caused by the myostatin gene being knocked out, leads to exceptionally muscled whippets.

"It’s the one of the most precise and efficient ways we have of editing DNA in any cell, including humans," said Professor George Church of Harvard University, who is a pioneer in the field of genetic engineering.

It works by digitally designing a piece of nucleic acid that recognises a single place in your genome, and then allows cutting and editing at that point.
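
To make the "recognises a single place in your genome" step concrete, here is a minimal, purely didactic sketch (my own toy example with an invented sequence and a hypothetical 20-nt guide, not any published pipeline): Cas9 is steered by its guide to a matching protospacer that must sit next to an NGG "PAM" motif, and it cuts roughly three base pairs upstream of that PAM. The sketch ignores the reverse strand and mismatch tolerance.

def find_cut_sites(genome: str, guide: str) -> list:
    """Return 0-based cut positions: Cas9 cuts ~3 bp upstream (5') of an NGG PAM."""
    sites = []
    for i in range(len(genome) - len(guide) - 2):
        target = genome[i:i + len(guide)]
        pam = genome[i + len(guide):i + len(guide) + 3]
        if target == guide and pam[1:] == "GG":      # PAM = NGG, N is any base
            sites.append(i + len(guide) - 3)          # approximate blunt-cut position
    return sites

genome = "TTACGGATCCGATTGCATGCCGTACGTTAGTGGCGATCGTACGGTTAA"   # invented toy sequence
guide  = "GATTGCATGCCGTACGTTAG"                               # hypothetical 20-nt guide
print(find_cut_sites(genome, guide))                          # -> [27]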

According to the MIT Technology Review and the published paper, Chinese researchers inserted this DNA-modifying tool into more than 60 dog embryos and cut out the myostatin gene which blocks muscle production, so that the beagles’ bodies would produce extra muscle.

Of the 27 puppies born, only two of the dogs, Tiangou and Hercules, had both copies of the gene disrupted which should have led to physical changes.

Tiangou, the female beagle, showed obvious physical changes compared to other puppies, while Hercules was still producing some myostatin and was less muscled.

Only a few weeks previously, the Beijing Genomics Institute said it had created designer 'micropigs' that will be sold for $1600 as pets.

Since the technique is relatively simple, many believe humans could be next. In April another Chinese team reported altering human embryos in the laboratory, to try curing beta-thalassemia through gene editing.

"We have already modified embryos of both pigs and primates," Professor Church told the Telegraph. "It might actually be safer, and developmentally important to make corrections in a sperm or embryo, rather than a young child or an adult."

For instance, he said, gene editing can be used to correct some forms of blindness, but it has to be done on babies, or young children, before their neurons become solidified and more resistant to change in adulthood.

But because the technology is so new, the long-term effects are still unclear. "There has to be extensive testing on animals and human adults first," Professor Church said.


Gene edited pigs may soon become organ donors

Denmark News

January 16, 2016

George Church (Harvard) - a pioneer of Genome Editing

WASHINGTON - US scientists are closing in on their bid to create designer pigs through new gene editing techniques, to source hearts, livers and kidneys suitable for transplant into seriously ill people.

After two key gene changes, the scientists say they have cleared the path to the lifesaving transplants.

In a paper published in Science journal, they describe using the CRISPR editing method in pig cells to destroy DNA sequences at 62 sites in the animal's genome that could be potentially harmful to a human recipient. Previous efforts with the technology have only managed to cut away six areas of the genome at one go.

The latest result is the most extreme example to date of the selective trimming of unwanted parts of the genome possible through CRISPR.

The latest study, led by Dr. George Church, a geneticist from Harvard Medical School, has shown that it is feasible to drastically edit the genome of pigs to remove native porcine endogenous retrovirus (PERV), which has been shown to move from pig to human cells in a dish, and to infect human cells transplanted into mice with weak immune systems.

The report states that the pig DNA is riddled with many copies of a DNA sequence that is the remnant of a virus and can still produce infectious viral particles.

Church, who first presented his study at a workshop at the National Academy of Sciences on October 5, strongly believes the technology will one day make it possible for pig organs to be used as a substitute for human organs for patients in need of a transplant and for whom there are no suitable donor organs.

The wait for suitable donor organs is considerable. In the US alone about 122,500 people are waiting for a life-saving organ transplant, and some have argued that a steady supply of pig organs could make up the shortage, because they are similar in size to those of people.

But so far, no one has been able to get around the violent immune response that pig cells provoke.

Working towards a breakthrough, Church has co-founded a biotechnology company 'eGenesis' to produce pigs for organ transplants.

Pig-to-human transplants are not novel. Currently, pig heart valves that have been scrubbed and depleted of pig cells are commonly used to repair faulty human heart valves.

But whole pig organs, which are functionally similar to human organs, have so far not been used for transplant due to associated risks.

Besides studying the potential risks, Church's team is also looking to address ethical concerns of human genome editing.

The ethical debate has been ignited in the wake of reports that biologists in China are allegedly carrying out the first experiment to alter the DNA of human embryos. British scientists have subsequently asked for permission to edit human embryos.

[Interestingly enough, this article was printed in a journal in Denmark - a small European country with more pigs than people. Next door, Holland has long benefitted from a highly lucrative flower industry by coming up with formerly non-existent beauties (like black tulips, etc.). People pay premium prices for such genomic novelties - in the case of Holland they are non-edible GMO flowers. Denmark prospers from highly advanced agriculture, e.g. its pork food industry. A pound of pork fetches a couple of dollars at the check-out counter. Imagine what a person ANYWHERE would pay for an organ transplant to save his/her life (e.g. how much Steve Jobs gladly paid for a liver transplant; a wild guess is 1,000,000x - a million-fold - per pound). Would a person think twice if the "replacement organ" were porcine? Most likely, yes. However, after having learnt the science background and the alternatives, after careful contemplation he or she may opt for it. This article is closely connected to an earlier Nature publication (see below), mentioning that "Novartis initially planned to spend more than $1 billion on xenotransplantation". One can tell that this issue has enormous potential for health care, the science of genome analytics, as well as the global economy. - andras_at_pellionisz_dot_com]


New life for pig-to-human transplants

Nature 527, 152–154 (12 November 2015) doi:10.1038/527152a

Sara Reardon

10 November 2015

Gene-editing technologies have breathed life into the languishing field of xenotransplantation.

Pale on its bed of crushed ice, the lung looks like offal from a butcher’s counter. Just six hours ago, surgeons at the University of Maryland’s medical school in Baltimore removed it from a hefty adult pig and, with any luck, it will soon be coaxed back to life, turning a rich red and resuming its work in the chest of a six-year-old baboon.

An assistant brings the lung to Lars Burdorf and his fellow surgeons, who currently have their hands in the baboon’s splayed chest. The team then begins the painstaking process of connecting the organ to the baboon’s windpipe and stitching together the appropriate arteries and blood vessels. But this 5-hour, US$50,000 operation is just one data point in a much longer experiment — one that involves dozens of labs and decades of immunological research and genetic engineering to produce a steady and safe source of organs for human transplantation. If the baboon’s immune system tolerates this replacement lung, it will be a sign that the team is on the right track.

Robin Pierson heads the Maryland lab, which has performed about 50 pig-to-primate transplants like this one to test different combinations of genetic modifications in the pig and immune-suppressing drugs in the primate. Even so, the team has not had a primate survive for longer than a few days. The complexities of the immune system and the possibility of infection by pig viruses are formidable and drove large companies out of the field in the early 2000s.

That trend may now be reversing, thanks to improved immunosuppressant drugs and advances in genome-editing technologies such as CRISPR/Cas9. These techniques allow scientists to edit pig genes, which could cause rejection or infection, much more quickly and accurately than has been possible in the past. In October, eGenesis, a life-sciences company in Boston, Massachusetts, announced that it had edited the pig genome in 62 places at once.

Some researchers now expect to see human trials with solid organs such as kidneys from genetically modified pigs within the next few years (see ‘Choice cuts’). United Therapeutics, a biotechnology company in Silver Spring, Maryland, has spent $100 million in the past year to speed up the process of making transgenic pigs for lung transplants — the first major industry investment in more than a decade. It says that it wants pig lungs in clinical trials by 2020. But others think that the timeline is unrealistic, not least because regulators are uneasy about safety and the risk of pig organs transmitting diseases to immunosuppressed humans.

“I think we’re getting closer, in terms of science,” says transplant surgeon Jeremy Chapman of the University of Sydney’s Westmead Hospital in Australia. “But I’m not yet convinced we’ve surpassed all the critical issues that are ahead of us. Xenotransplantation has had a long enduring reality that every time we knock down a barrier, there’s another one just a few steps on.”

Long history

Surgeons have been attempting to put baboon and chimpanzee kidneys into humans since at least the 1960s. They had little success — patients died within a few months, usually because the immune system attacked and rejected the organ. But the idea of xenotransplantation persisted. It could, proponents say, help to save the lives of the tens of thousands of people around the world who die each year while waiting for a suitable human donor. And having a steady supply of farm-grown organs would allow doctors to place recipients on immunosuppressant drugs days ahead of surgery, which should improve survival rates.

When details about why non-human organs are rejected began to emerge in the 1990s, the transplantation field was ready to listen. In 1993, surgeon David Cooper of the University of Pittsburgh in Pennsylvania and his colleagues discovered that most of the human immune reaction was directed at a single pig antigen: a sugar molecule called α-1,3-galactose, or α-gal, on cell surfaces that can cause organ rejection within minutes1. An enzyme called α-1,3-galactosyltransferase is necessary for producing this sugar, and knocking out the gene that produces the enzyme should temper the reaction.

This discovery and other advances in transplantation medicine made the problem seem more tractable to big pharmaceutical companies. In 1996, Novartis in Basel, Switzerland, began to invest heavily in xenotransplantation research, says Geoffrey MacKay, who was the firm’s business director for transplants and immunology at the time and oversaw the xenotransplantation effort. “They wanted to not only put a dent into the organ shortage but really solve it via transgenic pigs.” MacKay is currently interim chief executive at eGenesis.

Novartis initially planned to spend more than $1 billion on xenotransplantation, including both scientific research and planning the infrastructure that would be needed to grow pigs in germ-free facilities around the world. Other companies put some skin in the game, including Boston-based Genzyme and PPL Therapeutics, the British company that collaborated in the creation of Dolly, the first cloned sheep. Regulators such as the US Food and Drug Administration (FDA) began to draw up the guidance and standards that companies would need to meet before the technology could be moved into people.

But the immune system turned out to be much more complex than anticipated, and baboons that received pig organs never survived longer than a few weeks, even when the researchers were able to suppress α-gal production with drugs. A second major concern, especially to regulators, was the risk of infection. Even if pigs could be kept entirely sterile, the pig genome is sprinkled with dozens of dormant porcine endogenous retroviruses (PERVs), and studies conflicted as to whether these could become active in humans.

The challenges proved too daunting, and in the early 2000s Novartis killed its xenotransplantation programme, reshuffling or laying off its researchers. Other companies followed suit. It became, Pierson says, “the third rail of biotech to discuss xenotransplantation as a business plan”.

For the next ten years, the business side of the field went dark, at least as far as solid-organ transplants were concerned. Meanwhile, a few research teams and start-up companies began pursuing pig tissue transplants: a much simpler goal than using solid organs because the immune response is not as severe. In April, Chinese regulators approved the use of pig corneas from which all the cells have been removed2. Also on the near horizon are pig insulin-producing islet cells that might be transplanted into people with diabetes.

The first commercially available islets are likely to come from technology designed by Living Cell Technologies (LCT), a biotech company based in Auckland, New Zealand, that has developed a process to encapsulate pig islet cells in a gelatinous ‘dewdrop’ that protects them from the human immune system. The product, called DIABECELL, is currently in late-stage clinical trials in several countries. Patients implanted with the cells have survived more than nine years without evidence of immune rejection or infection3.

“I think people are coming around to look at xenotransplantation in a more-favourable light knowing that we have strong safety data,” says LCT research lead Jackie Lee. Diatranz Otsuka Limited, in Auckland, is now running the programme.

Solid organs still pose a challenge. The handful of researchers who have continued to work with them have solved some of the problems that vexed Novartis, such as identifying other key pig antigens and the correct combinations of immunosuppressant drugs. But different organs have different problems: kidneys may be safer than hearts, for instance. Lungs, as Pierson’s team has discovered, are extremely difficult to transplant, because they have extensive networks of blood vessels, which provides more opportunities for primate blood to meet pig proteins and to coagulate. Pierson’s current trials use lungs from an α-gal-knockout pig that includes five human genes. The baboon is treated with a combination of four immunosuppressant drugs.

Most US researchers, including Pierson and Cooper, have relied on pigs made by a regenerative-medicine company called Revivicor in Blacksburg, Virginia, that spun-out from PPL Therapeutics. In 2003, Revivicor co-founder David Ayares and his colleagues created the first cloned pig genetically modified to delete α-gal4. The company has since been experimenting with altering other protein antigens that trigger the immune system or cause human blood to coagulate.

These modifications have greatly lengthened the time that an organ can survive in a baboon. In one trial, surgeon Muhammad Mohiuddin at the National Heart, Lung, and Blood Institute in Bethesda, Maryland, and his colleagues took the heart from an α-gal-free pig that had two human genes that protect from coagulation and sewed it into the abdomen of a baboon5. The organ did not replace the baboon’s heart, but the animal lived with the implant for two and a half years.

Mohiuddin says that the group is now attempting a ‘life-supporting’ transplant by replacing the baboon’s heart with a pig heart. The longest life-supporting transplant was published in June6, when Cooper’s group announced that a kidney transplant from a Revivicor pig with six modified genes supported a baboon for 136 days.

Generation game

But the process is slow, Cooper says. It generally takes several generations of breeding to knock out both copies of just one given gene in a pig. Deleting multiple genes or swapping them for their human counterparts takes many more generations, because every litter contains pigs with different combinations of the modified genes.
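
The reasoning in the paragraph above is combinatorial: in a cross between two carriers, only about one offspring in four is homozygous for a single knocked-out gene, and the odds shrink geometrically with each additional independent gene. A minimal sketch of that arithmetic (my own illustration with round numbers; it ignores linkage and real breeding strategies):

def homozygous_fraction(num_genes: int) -> float:
    """Expected fraction of offspring homozygous for all edited genes (het x het cross)."""
    return 0.25 ** num_genes

for k in (1, 2, 6):                       # 6 echoes the six-gene Revivicor pig mentioned above
    frac = homozygous_fraction(k)
    print(f"{k} gene(s): roughly 1 in {round(1 / frac):,} offspring")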

That is why so many are excited about precise genome-editing tools such as CRISPR/Cas9, which can precisely cut both copies of a gene — or genes — straight from a pig embryo in one go. “Our first [α-]gal-knockout pig took three full years,” says Joseph Tector, a transplant surgeon at Indiana University in Indianapolis. “Now we can make a new pig from scratch in 150 days.” His group recently used CRISPR to knock out two pig genes simultaneously7. The researchers are now beginning to transplant CRISPR-modified pig organs into macaques, one of which has survived for more than three months.

Eventually, gene editing might even eliminate the need for immunosuppression, says Bernhard Hering, a transplant surgeon at the University of Minnesota in Minneapolis. His group is using CRISPR to create pig islets that could be transplanted without the need for drugs. Partly because of LCT’s success with encapsulated islets, many are hopeful that islet cells will be the first genetically modified tissue to make it into clinical trials, paving the regulatory pathway for the more-difficult organs. A non-profit organization has built a germ-free facility in which to raise Hering’s pigs.

Technology revival

The gene-editing advances have brought new investment into the field. In 2011, United Therapeutics acquired Revivicor for about $8 million and announced an ambitious plan to start clinical trials of gene-edited pig lungs by the end of the decade. The company’s co-chief executive, Martine Rothblatt, secured land in North Carolina for a farm that could produce 1,000 pig organs per year and says she expects to break ground by 2017. The facility’s elaborate plans include solar panels and helicopter landing pads to help speed fresh organs to those in need.

In 2014, United Therapeutics formed a $50-million partnership with the biotech firm Synthetic Genomics (SGI) in La Jolla, California, founded by genome-sequencing pioneer Craig Venter. Rather than simply knocking out antigens, SGI is also engineering tissues that sidestep rejection in a different way — such as pig cells that produce surface receptors that act as ‘molecular sponges’ and sop up human immune signalling factors that would otherwise attack the organ. CRISPR and other methods also allow the researchers to make tweaks such as lowering a gene’s expression rather than deleting it completely, says Sean Stevens, head of SGI’s mammalian synthetic-biology group. In September, United Therapeutics committed another $50 million.

Peter Cowan, an immunologist at St Vincent’s Hospital in Melbourne, Australia, is taking a different approach. His group has made pigs that generate antibodies against human immune cells. In their design, the antibodies would be made only by transplanted liver cells, ensuring that the immune system is suppressed just around the organ.

eGenesis was founded in April by bioengineer Luhan Yang and geneticist George Church of the Wyss Institute and Harvard University in Cambridge, Massachusetts. MacKay says that the firm plans to begin transplanting organs into primates next year. To that end, Church says that the company has made embryos that have more than 20 genetic alterations to cell-surface antigens and other factors and is ready to implant the embryos into female pigs. One of its first publications used CRISPR to inactivate 62 occurrences of PERV genes in pig kidney cells8. The researchers have since transferred the cells’ nuclei into pig embryos.

Incidentally, few researchers in the field see the PERV problem as a major safety concern. The virus replicates poorly in human tissues and the risk of spreading it is virtually non-existent, says Jay Fishman, an infectious-disease specialist at Massachusetts General Hospital in Boston. He says that researchers have tracked dozens of people who received unregulated porcine skin grafts, and none seems to have developed disease.

But dealing with PERVs may be a regulatory necessity. The FDA said in an e-mail to Nature that it is still concerned about the possibility of disease caused by PERVs. There are other pathogens to worry about, too. Most major epidemics start with an animal pathogen that jumps to humans, warns Peter Collignon, an infectious-disease scientist at the Australian National University in Canberra. “If you want to do the perfect experiment for finding new novel viruses and letting them multiply, this is it.”

Unless xenotransplants are proved to be extremely safe, the FDA suggests that they be limited to people with life-threatening conditions who have no other options. It will be even harder to get organs from genetically modified pigs to market, the agency says, because regulators must approve both the genetic construct used to make the animal and the organ itself.

Even if safety can be assured, questions remain about whether pig organs would work correctly in their new home, Chapman says. It is unclear whether a pig kidney would, for instance, respond to the human hormones that regulate urination, or whether proteins produced by a pig liver would interact correctly with human systems. And because pigs live for only about ten years, their organs might not survive a human lifetime. Even using a xenotransplant as a ‘bridge’ until a suitable human donor is found will be difficult. After a heart transplant, for instance, fibrous tissue forms around the new organ, making second transplants very difficult, Chapman says.

Given the long list of known hurdles, the surprise setbacks that researchers encounter along the way can be particularly disheartening. About half an hour after its surgery at the University of Maryland, the baboon with a pig’s lung woke up in a cage wearing a small vest that monitored its vital signs. The lung functioned well overnight and was even able to provide enough oxygen to the animal when blood flow to its other lung was temporarily blocked. But the next day, the animal became ill and had to be killed. That was unexpected, Pierson says, because the pig’s multiple genetic modifications seem to have worked well with the baboon’s immune system. A post-mortem examination revealed that fluid had accumulated in the lung and the organ had developed blood clots. Like so many other aspects of xenotransplantation, Pierson says, “this is a problem that we are still learning about”.

Conceptual illustration of a pig farm capable of producing 1,000 organs for transplant per year. Centrally located operating theatres would have helipads for shipping fresh organs for transplant.

[For comment, see the connected article above - andras_at_pellionisz_dot_com]


Genome Editing [What is the code that we are editing?]

MIT Technology Review

The Experiment

By Christina Larson

Until recently, Kunming, capital of China’s southwestern Yunnan province, was known mostly for its palm trees, its blue skies, its laid-back vibe, and a steady stream of foreign backpackers bound for nearby mountains and scenic gorges. But Kunming’s reputation as a provincial backwater is rapidly changing. On a plot of land on the outskirts of the city—wilderness 10 years ago, and today home to a genomic research facility—scientists have performed a provocative experiment. They have created a pair of macaque monkeys with precise genetic mutations.

Last November, the female monkey twins, Mingming and Lingling, were born here on the sprawling research campus of Kunming Biomedical International and its affiliated Yunnan Key Laboratory of Primate Biomedical Research. The macaques had been conceived via in vitro fertilization. Then scientists used a new method of DNA engineering known as CRISPR to modify the fertilized eggs by editing three different genes, and the embryos were implanted into a surrogate macaque mother. The twins' healthy birth marked the first time that CRISPR has been used to make targeted genetic modifications in primates—potentially heralding a new era of biomedicine in which complex diseases can be modeled and studied in monkeys.

CRISPR, which was developed by researchers at the University of California, Berkeley, Harvard, MIT, and elsewhere over the last several years, is already transforming how scientists think about genetic engineering, because it allows them to make changes to the genome precisely and relatively easily (see “Genome Surgery,” March/April). The goal of the experiment at Kunming is to confirm that the technology can create primates with multiple mutations, explains Weizhi Ji, one of the architects of the experiment.

Ji began his career at the government-affiliated Kunming Institute of Zoology in 1982, focusing on primate reproduction. China was “a very poor country” back then, he recalls. “We did not have enough funding for research. We just did very simple work, such as studying how to improve primate nutrition.” China’s science ambitions have since changed dramatically. The campus in Kunming boasts extensive housing for monkeys: 75 covered homes, sheltering more than 4,000 primates—many of them energetically swinging on hanging ladders and scampering up and down wire mesh walls. Sixty trained animal keepers in blue scrubs tend to them full time.

The lab where the experiment was performed includes microinjection systems, which are microscopes pointed at a petri dish and two precision needles, controlled by levers and dials. These are used both for injecting sperm into eggs and for the gene editing, which uses “guide” RNAs that direct a DNA-cutting enzyme to genes. When I visited, a young lab technician was intently focused on twisting dials to line up sperm with an egg. Injecting each sperm takes only a few seconds. About nine hours later, when an embryo is still in the one-cell stage, a technician will use the same machine to inject it with the CRISPR molecular components; again, the procedure takes just a few seconds.

During my visit in late February, the twin macaques were still only a few months old and lived in incubators, monitored closely by lab staff. Indeed, Ji and his coworkers plan to continue to closely watch the monkeys to detect any consequences of the pioneering genetic modifications.

The Impact

By Amanda Schaffer

The new genome-editing tool called CRISPR, which researchers in China used to genetically modify monkeys, is a precise and relatively easy way to alter DNA at specific locations on chromosomes. In early 2013, U.S. scientists showed it could be used to genetically engineer any type of animal cells, including human ones, in a petri dish. But the Chinese researchers were the first to demonstrate that this approach can be used in primates to create offspring with specific genetic alterations.

“The idea that we can modify primates easily with this technology is powerful,” says Jennifer Doudna, a professor of molecular and cell biology at the University of California, Berkeley, and a developer of CRISPR. The creation of primates with intentional gene alterations could lead to powerful new ways to study complex human diseases. It also poses new ethical dilemmas. From a technical perspective, the Chinese primate research suggests that scientists could probably alter fertilized human eggs with CRISPR; if monkeys are any guide, such eggs could grow to be genetically modified babies. But “whether that would be a good idea is a much harder question,” says Doudna.

The prospect of designer babies remains remote and far from the minds of most researchers developing CRISPR. Far more imminent are the potential opportunities to create animals with mutations linked to human disorders. Experimenting with primates is expensive and can raise concerns about animal welfare, says Doudna. But the demonstration that CRISPR works in monkeys has gotten “a lot of people thinking about cases where primate models may be important.”

At the top of that list is the study of brain disorders. Robert Desimone, director of MIT’s McGovern Institute for Brain Research, says that there is “quite a bit of interest” in using CRISPR to generate monkey models of diseases like autism, schizophrenia, Alzheimer’s disease, and bipolar disorder. These disorders are difficult to study in mice and other rodents; not only do the affected behaviors differ substantially between these animals and humans, but the neural circuits involved in the disorders can be different. Many experimental psychiatric drugs that appeared to work well in mice have not proved successful in human trials. As a result of such failures, many pharmaceutical companies have scaled back or abandoned their efforts to develop treatments.

Primate models could be especially helpful to researchers trying to make sense of the growing number of mutations that genetic studies have linked to brain disorders. The significance of a specific genetic variant is often unclear; it could be a cause of a disorder, or it could just be indirectly associated with the disease. CRISPR could help researchers tease out the mutations that actually cause the disorders: they would be able to systematically introduce the suspected genetic variants into monkeys and observe the results. CRISPR is also useful because it allows scientists to create animals with different combinations of mutations, in order to assess which ones—or which combinations of them—matter most in causing disease. This complex level of manipulation is nearly impossible with other methods.

Guoping Feng, a professor of neuroscience at MIT, and Feng Zhang, a colleague at the Broad Institute and McGovern Brain Institute who showed that CRISPR could be used to modify the genomes of human cells, are working with Chinese researchers to create macaques with a version of autism. They plan to mutate a gene called SHANK3 in fertilized eggs, producing monkeys that can be used to study the basic science of the disorder and test possible drug treatments. (Only a small percentage of people with autism have the SHANK3 mutation, but it is one of the few genetic variants that lead to a high probability of the disorder.)

The Chinese researchers responsible for the birth of the genetically engineered monkeys are still focusing on developing the technology, says Weizhi Ji, who helped lead the effort at the Yunnan Key Laboratory of Primate Biomedical Research in Kunming. However, his group hopes to create monkeys with Parkinson’s, among other brain disorders. The aim would be to look for early signs of the disease and study the mechanisms that allow it to progress.

The most dramatic possibility raised by the primate work, of course, would be using CRISPR to change the genetic makeup of human embryos during in vitro fertilization. But while such manipulation should be technically possible, most scientists do not seem eager to pursue it.

Indeed, the safety concerns would be daunting. When you think about “messing with a single cell that is potentially going to become a living baby,” even small errors or side effects could turn out to have enormous consequences, says Hank Greely, director of the Center for Law and the Biosciences at Stanford. And why even bother? For most diseases with simple genetic causes, it wouldn’t be worthwhile to use CRISPR; it would make more sense for couples to “choose a different embryo that doesn’t have the disease,” he says. This is already possible as part of in vitro fertilization, using a procedure called preimplantation genetic diagnosis.

It’s possible to speculate that parents might wish to alter multiple genes in order to reduce children’s risk, say, of heart disease or diabetes, which have complex genetic components. But for at least the next five to 10 years, that, says Greely, “just strikes me as borderline crazy, borderline implausible.” Many, if not most, of the traits that future parents might hope to alter in their kids may also be too complex or poorly understood to make reasonable targets for intervention. Scientists don’t understand the genetic basis, for instance, of intelligence or other higher-order brain functions—and that is unlikely to change for a long time.

Ji says creating humans with CRISPR-edited genomes is “very possible,” but he concurs that “considering the safety issue, there would still be a long way to go.” In the meantime, his team hopes to use genetically modified monkeys to “establish very efficient animal models for human diseases, to improve human health in the future.”

[2016 hit with full force; the potential of genome editing is both real and colossal. Perhaps the only comparable thing in the history of science and technology is when atoms - the smallest units of elements, which were axiomatically not supposed to split - did split. The turmoil yielded the staggering realization that unbelievable amounts of energy are released by fission of large atoms, and an even larger amount of energy can be gained from fusion of small atoms. Suddenly, a scientific embarrassment changed into a horse-race of superpowers a) to develop the underlying mathematics of nuclear physics (quantum mechanics) and b) to spend Manhattan-project-sized funds to hone technology that can actually deliver on the promise. With Genome Editing, we are at the first stage (a) at the moment (realization of staggering potential). The question, however, is inevitable: "What code are we editing?" Simply put, with very few exceptions aside, those highly skilled in the art of genome editing do not really know the mathematics of the code they edit. To illustrate this point, we invoke the generally held notion that "genes" are like keys of a piano - each key creates a tone of a certain frequency. An improvement of such a "theory of genome function" advanced lately is that "genes are turned on and off". Thus, piano music is brutally reduced to "turning keys on and off". Chopin probably would not like that crass oversimplification very much. True, half a year ago the metaphor advanced to "The human genome: a complex orchestra". This is better. One still lacks a true understanding of the art of the music director: how to create magnificent music from the individual instruments. A colossal amount of funds is spent on generating Big Data - and now Genome Editing is virtually unstoppable in throwing out parts of the genome (particularly of its regulatory system) and replacing pieces with something else that is supposed to be better. Imagine a nuclear industry (either peaceful or otherwise) rushing ahead without proper mathematical understanding. It could destroy the World as we know it, some could say. Instead of a trickle at best, we need a massive effort towards laying down the mathematical underpinning of genome regulation, ASAP. - andras_at_pellionisz_dot_com]


CRISPR helps heal mice with muscular dystrophy

Science News

By Jocelyn Kaiser 31 December 2015

The red-hot genome editing tool known as CRISPR has scored another achievement: Researchers have used it to treat a severe form of muscular dystrophy in mice. Three groups report today in Science that they wielded CRISPR to snip out part of a defective gene in mice with Duchenne muscular dystrophy (DMD), allowing the animals to make an essential muscle protein. The approach is the first time CRISPR has been successfully delivered throughout the body to treat grown animals with a genetic disease.

DMD, which mainly affects boys, stems from defects in the gene coding for dystrophin, a protein that helps strengthen and protect muscle fibers. Without dystrophin, skeletal and heart muscles degenerate; people with DMD typically end up in a wheelchair, then on a respirator, and die around age 25. The rare disease usually results from missing DNA or other defects in the 79 exons, or stretches of protein-coding DNA, that make up the long dystrophin gene.

Researchers haven’t yet found an effective treatment for the disorder. It has proven difficult to deliver enough muscle-building stem cells into the right tissues to stop the disease. Conventional gene therapy, which uses a virus to carry a good version of a broken gene into cells, can’t replace the full dystrophin gene because it is too large. Some gene therapists are hoping to give people with DMD a “micro” dystrophin gene that would result in a short but working version of the protein and reduce the severity of the disease. Companies have also developed compounds that cause the cell’s DNA-reading machinery to bypass a defective exon in the dystrophin gene and produce a short but functional form of the crucial protein. But these so-called exon-skipping drugs haven’t yet won over regulators because they have side effects and only modestly improved muscle performance in clinical trials.

Now, CRISPR has entered the picture. The technology, which Science dubbed 2015’s Breakthrough of the Year, relies on a strand of RNA to guide an enzyme called Cas9 to a precise spot in the genome, where the enzyme snips the DNA. Cells then repair the gap either by rejoining the broken strands or by using a provided DNA template to create a new sequence. Scientists have already used CRISPR to correct certain genetic disorders in cells taken from animals or people and to treat a liver disease in adult mice. And last year, researchers showed CRISPR could repair flawed dystrophin genes in mouse embryos.

But using CRISPR to treat people who already have DMD seemed impractical, because mature muscle cells in adults don’t typically divide and therefore don’t have the necessary DNA repair machinery turned on for adding or correcting genes. CRISPR could, however, be used to snip out a faulty exon so that the cell’s gene reading machinery would make a shortened version of dystrophin—similar to the exon-skipping and microgene approaches.
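
A back-of-the-envelope sketch of the exon-removal logic just described (my own toy example with invented mini-exons, not the Olson, Gersbach or Wagers constructs): excising an exon whose length is a multiple of three keeps the downstream exons in reading frame, which is why a shortened but still readable dystrophin-like coding sequence can result.

def delete_exon(exons, faulty_index):
    """Splice the coding sequence without the faulty exon and report whether
    the downstream reading frame is preserved (deleted length % 3 == 0)."""
    removed = exons[faulty_index]
    spliced = "".join(e for i, e in enumerate(exons) if i != faulty_index)
    return spliced, len(removed) % 3 == 0

exons = ["ATGGCT", "GATTACAGA", "TTTGGCTAA"]      # invented mini-exons, not real dystrophin
cds, in_frame = delete_exon(exons, faulty_index=1)
print(cds, "frame preserved:", in_frame)           # the deleted exon is 9 nt, so the frame holds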

Now, three teams have done just this in young mice with DMD. Graduate student Chengzu Long and others in Eric Olson’s group at University of Texas Southwestern Medical Center in Dallas used a harmless adeno-associated virus to carry DNA encoding CRISPR’s guide RNA and Cas9 into the mice’s muscle cells and cut out the faulty exon. In the treated mice, which had CRISPR-ferrying viruses injected directly into muscles or into their bloodstream, heart and skeletal muscle cells made a truncated form of dystrophin, and the rodents performed better on tests of muscle strength than untreated DMD mice. Teams led by biomedical engineer Charles Gersbach of Duke University in Durham, North Carolina, and Harvard stem cell researcher Amy Wagers, both collaborating with CRISPR pioneer Feng Zhang of Harvard and the Broad Institute in Cambridge, Massachusetts, report similar results. CRISPR’s accuracy was also reassuring. None of the teams found much evidence of off-target effects—unintended and potentially harmful cuts in other parts of the genome.

The Wagers team also showed that the dystrophin gene was repaired in muscle stem cells, which replenish mature muscle tissue. That is “very important,” Wagers says, because the therapeutic effects of CRISPR may otherwise fade, as mature muscle cells degrade over time.

The treatment wasn’t a cure: The mice receiving CRISPR didn’t do as well on muscle tests as normal mice. However, “there’s a ton of room for optimization of these approaches,” Gersbach says. And as many as 80% of people with DMD could benefit from having a faulty exon removed, Olson notes. However, he adds, researchers are years away from clinical trials. His group now plans to show CRISPR performs equally well in mice with other dystrophin gene mutations found in people, then establish that the strategy is safe and effective in larger animals.

Other muscular dystrophy researchers are encouraged. “Collectively the approach looks very promising for clinical translation,” says Jerry Mendell of Nationwide Children’s Hospital in Columbus. Adds Ronald Cohn of the Hospital for Sick Children in Toronto, Canada: “The question we all had is whether CRISPR gene editing can occur in vivo in skeletal muscle.” The new studies, he says, are “an incredibly exciting step forward.”

[Genome Editing is likely to become the most promising revolutionary methodology to really cure diseases caused by a single nucleotide polymorphism (one letter of A, C, T, G) that changes a normally protein-coding codon into a stop codon, thereby producing a "truncated" protein. There are thousands of such diseases. With DMD the problem is that, in addition to single point mutations of the DNA, non-coding RNAs have also been implicated. (Thus, it is listed as a "Junk DNA disease"). Genome Editing is presently in its infancy, focusing on animal models (in this case, mice). Further, non-coding DNA and non-coding RNA, along with other "fractal defects", have not yet been replaced by "spell-checked" sequence snippets, to the knowledge of the FractoGene inventor. One must be careful in assessing the integrity of "protein-coding gene(s)", as it is becoming evident (see the publication on "microexons") that "genes" are found fractured in old school terms (fractal, in new school terms) - Andras_at_Pellionisz_dot_com]
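
To make the "one letter turns a coding codon into a stop codon" point concrete, here is a toy translation sketch (my own illustrative sequences and a deliberately tiny codon table, not real disease data): a single C-to-A change converts TAC (tyrosine) into the stop codon TAA, truncating the protein.

CODONS = {"ATG": "M", "GAT": "D", "TAC": "Y", "AGA": "R", "TGC": "C",
          "TAA": "*", "TAG": "*", "TGA": "*"}          # minimal table for this toy example

def translate(cds: str) -> str:
    """Translate codon by codon, stopping at the first stop codon."""
    protein = []
    for i in range(0, len(cds) - 2, 3):
        aa = CODONS.get(cds[i:i + 3], "X")
        if aa == "*":                                   # premature stop truncates the protein
            break
        protein.append(aa)
    return "".join(protein)

healthy = "ATGGATTACAGATGCTAA"                          # M D Y R C, then the normal stop
mutant  = "ATGGATTAAAGATGCTAA"                          # one C->A change makes TAC -> TAA (stop)
print(translate(healthy), translate(mutant))            # 'MDYRC' versus truncated 'MD'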


Credit for CRISPR: A Conversation with George Church

The Scientist, Dec 29, 2015

George Church ["The Edison of Genomics - AJP"]

The media frenzy over the gene-editing technique highlights shortcomings in how journalists and award committees portray contributions to scientific discoveries.

Jennifer Doudna, Emmanuelle Charpentier, and Feng Zhang are widely cited as the primary developers of CRISPR/Cas9 technology. These researchers were undoubtedly key to the development of the bacterial immune defense system into a powerful and accessible gene-editing tool, but by assigning credit to just three individuals, most news reports overlook the contributions of countless other scientists, including George Church, who alerted The Scientist to this issue after reading an article on December’s Human Gene Editing Summit.

In the article, my colleague Jef Akst highlighted Doudna, Charpentier, and Zhang as the three seminal figures in the development of CRISPR/Cas9 technology: “The attendees are a veritable who’s who of genome editing: Jennifer Doudna of the University of California, Berkeley, Emmanuelle Charpentier of Max Planck Institute for Infection Biology, and Feng Zhang of the Broad Institute of MIT and Harvard—the three discoverers of the CRISPR-Cas9 system’s utility in gene editing—plus dozens of other big names in genome science,” Akst wrote. In assigning the lion’s share of credit for CRISPR/Cas9 gene editing to Doudna, Charpentier, and Zhang, Akst echoed countless articles on the technology’s origin story.

“I’m trying not to complain,” Church told me when we chatted a few days later. “I’m just making what I thought was a little technical correction, which was the particular way she phrased it.” His point? He and many other scientists also contributed to developing the “CRISPR-Cas9 system’s utility in gene editing.”

If you’ve read anything about CRISPR, you’re likely familiar with the following: in a 2012 Science paper, Doudna, Charpentier, and their colleagues published the first account of programming the CRISPR/Cas9 system to precisely cut naked plasmid and double-stranded DNA. Zhang and his colleagues applied this precision-cutting approach to mouse and human cells in vitro, publishing their results in a February 2013 issue of Science.

But, as is the case whenever intensive scientific inquiry is involved, the story was not nearly so simple. Although it’s not often included with the aforementioned studies, Church’s team published a similar study—using CRISPR/Cas9 to edit genes in human stem cells—in the same issue of Science as Zhang and his colleagues.

Church emphasized that Doudna and Charpentier were major players in elevating CRISPR/Cas9, a naturally occurring form of immune defense employed by bacteria to fight off invading viruses, from a biological curiosity to a potentially transformative gene-editing tool. “They were definitely pioneers in studying this particular enzyme system,” he said. But he contends that their specific contributions don’t constitute the whole story of the technology’s development. “The spark that [Doudna] had was that CRISPR would be a programmable cutting device,” Church said. “But getting it to do precise editing, via homologous recombination, was a whole other thing.”

The CRISPR/Cas system is a naturally occurring form of immune defense employed by bacteria to fight off invading viruses. A small constellation of researchers aided in describing, isolating, and studying CRISPR decades before it was ever imagined as a gene-editing tool.

In 1987, Yoshizumi Ishino and his colleagues at Osaka University in Japan published the sequence of a peculiar short repeat, called iap, in the DNA of E. coli. Eight years later, Francisco Mojica from the University of Alicante in Spain and his colleagues characterized what would become known as a CRISPR locus; the researchers later realized that what they and others had considered disparate repeat sequences actually shared common features.

Mojica and his colleague Ruud Jansen coined the term CRISPR (for clustered regularly-interspaced short palindromic repeats) in correspondence with each other in the late 90s and early 2000s, and Jansen used it in print for the first time in 2002. A steady trickle of research on the prokaryotic immune module followed, with industry scientists such as Philippe Horvath and Rodolphe Barrangou from dairy manufacturer Danisco joining academic researchers—among them, Luciano Marraffini at Rockefeller University, John Van der Oost at Wageningen University in the Netherlands, Sylvain Moineau of Canada’s Laval University, Virginijus Siksnys at Vilnius University in Lithuania, and Eugene Koonin of the University of Haifa in Israel—pursuing a more robust understanding of how CRISPR worked in nature. This early work on CRISPR was “kind of a community effort,” said Church.

Zhang agreed. “This is a remarkable scientific story in its own right, and the work on genome editing . . . was only possible because of a strong, global foundation of basic research into the biology of CRISPR,” he wrote in an email to The Scientist. “Many researchers contributed to the discovery and understanding of CRISPR,” he added. “Any discussion of the development of CRISPR into the genome-editing tool it is today would be incomplete without recognizing the critical contributions of each of these individuals and their teams.”

Now that the technology is being applied, its origin story has been oversimplified in both published accounts and by award organizations. “It’s a litany now,” Church said. “It’s like a hymn.”

And of all the researchers who might deserve more credit for developing CRISPR, Church contends that he’s at the top of the list. “There were definitely at least two teams [Doudna’s and Charpentier’s] involved in getting cutting to work,” Church continued, “and then there were two teams [Zhang’s and mine] that got it to work in humans with homologous recombination. So you could say two and two. But to oversimplify that back down to three, is like consciously omitting one.”

Why that happened isn’t readily apparent, said Doudna. “Looking at peer-reviewed publications, George Church published a paper at the same time in the same issue of Science magazine as Feng Zhang on using CRISPR technology in human cells,” she told The Scientist. “It’s very clear what’s in the scientific record.”

That CRISPR/Cas9 gene-editing was a larger collaborative effort that extends beyond Doudna, Charpentier, and Zhang is an issue that others have spoken and written about. An economic manifestation of the debate, in the form of a patent dispute, has even sprung up within the oft-cited CRISPR trinity. Then there are the prizes. In 2014, Doudna and Charpentier were awarded a $3 million Breakthrough Prize. And last year Thomson Reuters predicted a Nobel Prize in Chemistry for the duo. (The 2015 honors went to a trio of DNA repair researchers instead.)

Meanwhile, the media continues to perpetuate the condensed CRISPR origin story when mentioning the technology’s evolution in the space of a sentence or two. Part of that oversimplification is rooted in the fact that most modern life-science researchers aren’t working to uncover broad biological truths. These days the major discoveries lie waiting in the details, meaning that any one lab is unlikely to shed all the necessary light on a complex phenomenon—much less on how to adopt that phenomenon for human purposes—in isolation. That reality does little to allay what is probably a fundamental human urge to pin a few names and faces on major breakthroughs.

But how do we fix a problem of public perception that stems from the very nature of scientific discovery in the modern age? Doudna had a suggestion. “I think it’s great that journalists look into this and explain the process of science,” she said. “Things don’t happen overnight; they happen through a process of investigation. And very typically there are multiple laboratories that are working in an area, and it’s almost universally true.”

[Comment by Andras_at_Pellionisz_dot_com below]

Pellionisz, Fig 16 of 2009

[George Church invited me to his Cold Spring Harbor meeting in 2009. I was already searching at that time for "Fractal Defects", see above. At that time, there was already an established industry to sequence full genomes. However, there was not yet an established industry for Synthetic Genomics (to cheaply manufacture sequences of any design). Nor was George Church fully geared at that time for Genome Editing (to insert the edited, correct version to replace "Fractal Defects"). Today, we have the full triad! Full sequencing is a commodity. In the spirit of the conclusion of the talk with Prof. George Church - the accomplishments of multiple laboratories converging on broad biological truths - a truly enterprising revolutionary move has become possible. A triad can be put together even for non-coding DNA segments: a) the protected intellectual property of FractoGene to compute Fractal Defects (in force for more than the next decade), b) Synthetic Genomics to cheaply manufacture an edited replacement sequence, and c) Genome Editing patents (and, I assume, tons of pre-existing trade secrets, causing a feeding frenzy in genome editing) - though editors must first know what the mathematical (fractal) language of non-coding regulatory DNA is. Already in 2009, "glitches could be found". The famed seven years later, by 2016, "glitches might become edited out by a synthetic correct sequence". "Presenilin", linked to Alzheimer's, is present also in mice, and even in the tiny genome of C. elegans. Fractal Defects, found since 2007, were shown also for Parkinson's-linked sequences (and other genomic syndromes). Presented to the Parkinson's Institute, these findings were not ready for funding before there were means to do something definite about them. A lucid cartoon of Genome Editing is here. - andras_at_pellionisz_dot_com ]


^2016^


Genome misfolding unearthed as new path to cancer

IDH mutations disrupt how the genome folds, bringing together disparate genes and regulatory controls to spur cancer growth

[Compare to Defects of Hilbert-Fractal Folding Clog "Proximity", see Figure above Table of Contents here, from 2012 Proceedings - Andras_at_Pellionisz_dot_com]

BROAD INSTITUTE OF MIT AND HARVARD

Nature, December 23, 2015

In a landmark study, researchers from the Broad Institute and Massachusetts General Hospital reveal a completely new biological mechanism that underlies cancer. By studying brain tumors that carry mutations in the isocitrate dehydrogenase (IDH) genes, the team uncovered some unusual changes in the instructions for how the genome folds up on itself. Those changes target key parts of the genome, called insulators, which physically prevent genes in one region from interacting with the control switches and genes that lie in neighboring regions. When these insulators run amok in IDH-mutant tumors, they allow a potent growth factor gene to fall under the control of an always-on gene switch, forming a powerful, cancer-promoting combination. The findings, which point to a general process that likely also drives other forms of cancer, appear in the December 23rd advance online issue of the journal Nature.

"This is a totally new mechanism for causing cancer, and we think it will hold true not just in brain tumors, but in other forms of cancer," said senior author Bradley Bernstein, an institute member at the Broad Institute and a professor of pathology at Massachusetts General Hospital. "It is well established that cancer-causing genes can be abnormally activated by changes in their DNA sequence. But in this case, we find that a cancer-causing gene is switched on by a change in how the genome folds." [Yes, this paper seeds in the 2009 "Mr. President, the Genome is Fractal" Science Cover Article, featuring the Hilbert-curve for the fractal globule of DNA folding. Dr. Bernstein was among the co-authors, with Erez Lieberman as the first-author and Dr. Eric Lander as lead-author. Eric Lander is acknowledged in the reviewed Bernstein et al. Nature-paper [full pdf]- AJP]

When extended from end to end, the human genome measures some six and a half feet. Although it is composed of smaller, distinct pieces (the chromosomes), it is now recognized that the pieces of the genome fold intricately together in three dimensions, allowing them to fit compactly within the microscopic confines of the cell. More than mere packaging, these genome folds consist of a series of physical loops, like those of a tied shoelace, that bring distant genes and gene control switches into close proximity.

By creating these loops -- roughly 10,000 of them in total -- the genome harnesses form to regulate function. "It has become increasingly clear that the functional unit of the genome is not a chromosome or even a gene, but rather these loop domains, which are physically separated -- and thereby insulated -- from neighboring loop domains," said Bernstein.
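
For readers who want the "form regulates function through compact folding" idea in the simplest possible terms, here is a toy sketch (my own illustration using a 2-D Hilbert curve as a stand-in for the 3-D fractal globule, not the Hi-C analysis itself): positions along a 1-D string are placed on a space-filling curve, so sequence neighbours stay adjacent while the whole string packs into a square whose side grows only as the square root of its length, bringing even sequence-distant loci into spatial proximity.

from math import dist

def d2xy(order: int, d: int):
    """Map a 1-D position d to (x, y) on a Hilbert curve filling a 2**order square."""
    x = y = 0
    t, s = d, 1
    while s < 2 ** order:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                        # rotate the quadrant to keep the curve continuous
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x, y = x + s * rx, y + s * ry
        t //= 4
        s *= 2
    return x, y

order, N = 5, 4 ** 5                       # 1024 "loci" folded into a 32 x 32 square
first, last = d2xy(order, 0), d2xy(order, N - 1)
print(first, last, dist(first, last))      # the two ends of the 1-D string end up only ~sqrt(N) apart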

But Bernstein's group did not set out to study this higher-order packing of the genome. Instead, they sought a deeper molecular understanding of glioma, a form of brain cancer, including the highly aggressive form, glioblastoma. Relatively little progress has been made in the last two decades in treating these often incurable malignancies. In order to unlock these tumors' biology, Bernstein and his colleagues combed through vast amounts of data from recent cancer genome projects, including the Cancer Genome Atlas (TCGA). They detected an unusual trend in IDH-mutant tumors: When a growth factor gene, called PDGFRA, was switched on, so was a faraway gene, called FIP1L1. When PDGFRA was turned off, so, too, was FIP1L1.

"It was really curious, because we didn't see this gene expression signature in other contexts -- we didn't see it in gliomas without IDH mutations," said Bernstein.

What made this signature stand out is that the two genes in question sit in different genomic loops, which are separated by an insulator. Just as the loops of a tied shoelace come together at a central knot, two insulators in the genome bind to one another, forming a loop. These insulators join together through the action of multiple proteins, which bind to specific regions of the genome, called CTCF sites.

Bernstein and his team were surprised to find that this strange phenomenon could be seen across the genome, involving many other CTCF sites and gene pairs, suggesting that IDH-mutant tumors have a global disruption in genome insulation. But how does this happen, and what role does IDH play?

IDH gene mutations signify one of the early success stories to flow from the large-scale sequencing of tumor genomes. Historically, IDH genes were thought to be run-of-the-mill "housekeeping" genes, not likely drivers of cancer -- exactly the kinds of unexpected finds scientists hoped to uncover through systematic searches of the cancer genome.

Fast forward a few years, and the biology of IDH-mutant tumors remains poorly understood. IDH encodes an enzyme that, when mutated, produces a toxic metabolite that interferes with a variety of different proteins. Exactly which ones are relevant in cancer is unknown, but what is known is that the DNA of IDH-mutant tumors is modified in an important way -- it carries an unusually large number of chemical tags, called methyl groups. The significance of this hypermethylation is not yet clear. "Based on the genome-wide defect in insulation that we observed in IDH-mutant gliomas, we looked for a way to put all these pieces of the IDH puzzle together," said Bernstein.

Using a combination of genome-scale approaches, he and his colleagues found that the hypermethylation in IDH-mutant gliomas localizes to CTCF sites across the genome, where it disrupts their insulator functions.

Taken together with their earlier results, their work shows that PDGFRA and FIP1L1, which are normally confined to separate loop domains and rarely interact, become closely associated in IDH-mutant tumors -- like untying a shoelace and then retying it in a new configuration. This unusual relationship emerges as a result of the hypermethylation at the intervening CTCF site.

"A variety of other tumors carry IDH mutations, including forms of leukemia, colon cancer, bladder cancer, and many others," said Bernstein. "It will be very interesting to see how generally this applies beyond glioma."

Although these early findings need to be extended through additional studies of IDH-mutant gliomas as well as other forms of IDH-mutant cancers, they offer some intriguing insights into potential therapeutic approaches. These include IDH inhibitors, which are now in clinical development, as well as agents that reduce the associated DNA methylation or target the downstream cancer genes.

[This landmark paper, clinching experimental support for the fractal approach pursued by Pellionisz since 2002, will be commented on in appropriate detail - andras_at_pellionisz_dot_com.

A most interesting case in point is the crisis of how the entire NIH (National Cancer Institute) struggles to come to terms with my "Fractal approach", already endorsed by major, highly mathematically minded leaders (Nobelist Michael Levitt of Stanford, double-degree biomathematician Eric Schadt of New York, Eric Lander of Broad/MIT, pioneer of fractals in biology and medicine Gabriele Losa [et al.] of Switzerland, Govindarajan Ethirajan of India, etc.); see http://junkdnacom.fatcow.com/Critical-junction-Nonlinear-dynamics-swarm-intelligence-and-cancer-research.php.html.

A significant sector of the Old School is, however, still hesitant to embrace advanced mathematics. This is becoming an embarrassment, since (as illustrated by the May 2015 YouTube by the bright layperson Wai h tsang) the unification of neuroscience and genomics is almost taken for granted by nearly every bright (lay)person, simply by looking around and spotting "fractals everywhere, sprouting from fractal seeds". Even the behavior of the "old schools" shows the typical fractal "self-similarity": repeating the same mistake over and over again. It has happened many times in the history of science that major disruptions were recognized only after undue delays of several decades. For FractoGene (2002), the first "critical seven years" resulted in recognition of the genome as a Hilbert-fractal (2009). Now, after a second critical seven years, in 2016 the overwhelming evidence may become too embarrassing for true scholars to hide.]


The Fractal Brain and Fractal Genome [by layperson Wai h tsang]

YouTube of Wai h tsang

[Googling "Pellionisz" will reveal a good number of peer-reviewed publications (references available through the "Professional Page"), as well as a 2008 Google Tech Talk YouTube on nonlinear dynamics (fractals and chaos) as the common intrinsic mathematics of both the genome and the brain. Particularly important was, after ENCODE-1, to clinch the Principle of Recursive Genome Function. Starting from a fractal model of the Purkinje cell (Pellionisz, 1989, also shown in the above video) first the FractoGene approach explained how fractal genome grows fractal organ(ism)s, and the 2012 paper on "geometrical unification of genomics and neuroscience" textbook-chapter belabors the topic. Happily, the geometrical approach to biology of Pellionisz (since 1965) has apparently found its way to bright minds of a younger generation everywhere. Somewhat sadly, "Old School Neuroscience and Genomics" has had a rather hard time in coming to terms of advanced mathematics (see in www.hologenomics.com "news" column, e.g. the NIH Cancer Institute published a paper in two versions; one based on fractals, while the other version completely devoid of even the word, let alone citing pioneers).Breakthrough, however, is inevitable - though having already wasted over a quarter of a Century (and counting). Meanwhile, hundreds of millions died e.g. of cancer. Time to wake up - the tardiness of old school is becoming an embarrassment for professionals. - AJP]


2016 - The Genome Appliance; Taking the Genome Further in Healthcare

TechCrunch - December 2015

Brendan Frey is the CEO and president of Deep Genomics.

Collecting genome data is reliable, fast and cheap. Yet, interpreting that data is unreliable, slow, and expensive — when it’s even possible.

Today, genome interpretation is a burgeoning science, but it’s not yet a technology. A stricken patient has their genes sequenced and their mutations identified. But then, it can take a highly trained, and highly paid, expert many hours to make a judgment call on a single unfamiliar mutation.

All too often, the result is no diagnosis, no therapy and gut wrenching uncertainty. The problem is made worse because there are not enough knowledgeable experts to handle the rising tide of genome data, and there never will be — exponential growth in the number of human experts is not a viable option.

Genome interpretation is already a pain point for doctors, hospitals, diagnostic labs, pharmaceutical companies and insurance providers. That means it’s also a pain point for everyday patients and their families, whether they know it or not.

The capability gap between the collection of genome data and the interpretation of it is widening faster than ever. If that gap is allowed to continue growing unabated, it represents a shameful lost opportunity to avoid heartache and struggle for millions of people.

How will computer-aided genome interpretation be used to improve the lives of patients? Dozens of ventures are attempting to answer this question and, when the dust settles, healthcare will look dramatically different than it does now.

There are exciting entrepreneurial opportunities in genome-driven personalized medicine, arising from huge potential value and extreme uncertainties in the five-year perspective. We can think of them as rungs on the ladder of information value.

First Rung: Genetic Data Generation And Secure Data Storage

These entrepreneurial opportunities provide the raw material for genomic medicine: whole genome sequences, exome sequences, gene panels and rich phenotype information such as an individual’s predisposition to disease.

This data can be used to determine the set of mutations that a patient has, compared to a reference genome, or it can be used to determine the mutations that tumor cells have, compared to healthy cells. Large databases form crucial resources that support higher rungs on the ladder.

Examples include the sequencers developed and in development at Illumina, PacBio and Oxford Nanopore, the data storage systems in development at Google Genomics and DNAnexus, and the genotype-phenotype data being generated at 23andMe and Human Longevity.

The uncertainties here mainly involve rapidly dropping costs of genome sequencing and phenotyping technologies on the one hand, and increasing concerns about patient confidentiality on the other.
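As an aside, the "set of mutations compared to a reference genome" mentioned above can be pictured with a deliberately naive sketch. Real pipelines align billions of short reads (e.g. with BWA) and call variants statistically (e.g. with GATK); the toy function below, with made-up 20-base fragments, only illustrates the idea of listing positions where a sample differs from the reference.

def naive_snvs(reference: str, sample: str):
    """Toy single-nucleotide variant finder for two pre-aligned sequences.
    Real pipelines align reads and call variants statistically; this merely
    illustrates 'mutations relative to a reference'."""
    if len(reference) != len(sample):
        raise ValueError("sequences must be pre-aligned to equal length")
    return [(pos + 1, ref, alt)            # 1-based position, ref base, alt base
            for pos, (ref, alt) in enumerate(zip(reference, sample))
            if ref != alt and "-" not in (ref, alt)]

# Example with two hypothetical 20-base fragments
print(naive_snvs("ACGTACGTACGTACGTACGT",
                 "ACGTACGAACGTACGTACGA"))   # -> [(8, 'T', 'A'), (20, 'T', 'A')]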

Second Rung: Data Organization, Brokering And Visualization

The value added here is in sharing and comparing the data of individual patients, as well as integrating diverse kinds of large-scale datasets. Pertinent datasets may be public or private, and may have conditions attached, such as those involving confidentiality, non-competition and complex licensing.

Brokering “data trades” in a technologically streamlined manner is crucial. These opportunities do not produce actionable information, but they provide important support for higher rungs on the ladder.

Examples include NextBio, SolveBio and DNAstack. Here, there is uncertainty in the gain in value that can be achieved by combining and sharing genomic data, since without proper interpretation and without addressing patient confidentiality the data may not be actionable.

Third Rung: Software To Bridge The Genotype-Phenotype Gap

This is the most challenging, yet potentially highest-value, entrepreneurial opportunity. Currently, there is a lack of technologies that can reliably link genotype to phenotype and address the crucial question of how genetic modifications, whether natural or therapeutic, impact molecular and biological processes involved in disease. Bridging this gap would be highly disruptive in several verticals, including genetic testing, drug targeting, patient stratification, precision medicine and insurance.

In a recent study, it was shown that the success rate of drugs at phase three in clinical trials could be doubled when even the most simplistic genome interpretation data is taken into account. Imagine what could be achieved if accurate systems for genome interpretation were broadly available.

Bridging the genotype-phenotype gap is the most difficult challenge on the ladder, because it addresses a very complex, multi-faceted task.

The genome is a digital recipe book for building cells, written in a language that no human will ever fully understand. [Define "fully", or replace "fully understand" with "understand without the intrinsic mathematics of nature" - AJP]. Our only window into this tiny, complex world is by high-throughput experiments such as DNA and RNA sequencing, proteomics assays, single-cell experiments and gene editing with CRISPR-Cas9 screens.

Identifying valuable experiments is one way entrepreneurs on this rung can create value, but only if they have the computational know-how to make sense of the data. Machine learning is by far the best technology at our disposal for using such data to discover how the underlying biology works. [This is debatable. "Machine Learning" (maiden name: "Artificial Intelligence") was declared "Brain Dead" by the originator & champion of AI (Marvin Minsky) when we developed the entire new field of "Neural Net Algorithms" [1571 citations] - AJP]. It will play a crucial role in bridging the genotype-phenotype gap.

For this rung, there is no uncertainty about the transformative nature of the technologies and their value. The uncertainty lies in how successful we can be, from a technological standpoint, in bridging the gap. Do we have enough data? The right type of data? The right machine learning algorithms? [The "uncertainty" is not in our technology savvy - the deep question is if "reverse engineering methods" are suitable to "reverse engineer" a natural system that was never ever "engineered" in the first place - AJP]
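To make the third rung concrete, here is a deliberately small sketch of "learning a genotype-to-phenotype map": a synthetic cohort, binary variant features and a plain logistic regression. It is not Deep Genomics' method; it only shows the shape of the problem - a matrix of genotypes, a vector of phenotypes, and a model fit between them.

# Toy genotype -> phenotype model on synthetic data (illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_patients, n_variants = 500, 40

# 0/1 matrix: does patient i carry variant j?
genotypes = rng.integers(0, 2, size=(n_patients, n_variants))

# Hypothetical ground truth: three of the variants raise disease risk.
causal = np.zeros(n_variants); causal[[3, 17, 29]] = 1.5
risk = genotypes @ causal + rng.normal(scale=1.0, size=n_patients)
phenotype = (risk > np.median(risk)).astype(int)   # 1 = affected

X_train, X_test, y_train, y_test = train_test_split(
    genotypes, phenotype, test_size=0.25, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

print("held-out accuracy:", round(model.score(X_test, y_test), 2))
print("top-weighted variants:", np.argsort(-np.abs(model.coef_[0]))[:3])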

Fourth Rung: Diagnostics, Therapies, Precision Medicine And Insurance

These opportunities derive their value from directly addressing the needs of patients. Going forward, this rung will increasingly benefit from the lower rungs on the ladder, and companies that fail to leverage the full stack of the ladder will be left behind. Currently, companies on the fourth rung struggle to make full use of genomic data because good systems for genome interpretation are not yet available.

For instance, the reliability of the current generation of computational tools for genome interpretation is unclear, according to the American College of Medical Genetics and Genomics, the widely accepted oversight body. This will inevitably change as systems for genome interpretation improve and are proven.

Examples of diagnostic companies include Counsyl, Invitae, Myriad and Human Longevity’s Health Nucleus; examples of pharmaceutical companies that are increasingly using these systems include the big pharmas, plus data-driven companies such as 23andMe and Capella Biosciences. Risks here include the uncertainties involved in obtaining regulatory approval and sidestepping the dreaded 10-year drug development cycle.

A Way Forward

Bridging the genotype-phenotype gap is one of the most important outstanding challenges for which machine learning is truly needed. Facebook, Google and DeepMind have made amazing progress in helping computers catch up to humans in understanding images, speech and language, but humans already do these tasks every day and we excel at them. Genome interpretation is different; not a part of our daily lives, yet, in a sense, more urgent.

The gap between our ability to merely collect genetic information and our ability to interpret it at scale is widening faster than ever. [Might wish to re-visit the Google Tech Talk forecast in 2008 "Is IT ready for the Dreaded DNA Data Deluge" - AJP]. Closing that gap will change the lives of hundreds of millions of people.

Our objective in this industry should be to 10X multiply the scale, speed and, most of all, accuracy of genome interpretation. I believe we can do this in three to five years by accelerating the pace of development in computational methods for genome interpretation, and especially machine learning.

Genome interpretation is a software problem that will require the concerted efforts of genome biologists, machine learning experts and software engineers. ["Software" means putting mathematical algorithm(s) into executable lines of code. If the algorithms are unsuitable, the cost of software development, often very pricey, may be wasted. Further, in the changing climate of IP protection, securing software is not the best approach. 2016 will emerge as the year of "the genome appliance" - AJP]


Whole-Genome Analysis of the Simons Simplex Collection (SSC)

SFARI is pleased to announce that it has awarded five grants in response to the Whole-Genome Analysis for Autism Risk Variants request for applications.

We are also announcing plans for the release of whole-genome sequencing data from the Simons Simplex Collection (SSC) for analysis by the entire research community. There are currently 560 genomes available, and we expect that all 2,160 genomes (from 540 SSC quad families) will be available by the end of February 2016.

[An entry from 3 months ago ("Head of Mental Health Institute Leaving for Google Life Sciences", New York Times, Sept. 15, 2015 - see further down in this column) may be very relevant here. (Dr. Thomas R. Insel (63), head of NIH-NIMH, resigned effective November 2015 to join Google Life Sciences.) Readers may wish to correlate the Sept. 15 and Dec. 15 news: two landmarks just a few months apart that signal the shift of focus towards privately funded modern genome informatics combatting autism - Andras_at_Pellionisz_dot_com]


The role of big data in medicine

McKinsey

An interview with Eric Schadt

November, 2015

The role of big data in medicine is one where we can build better health profiles and better predictive models around individual patients so that we can better diagnose and treat disease.

One of the main limitations with medicine today and in the pharmaceutical industry is our understanding of the biology of disease. Big data comes into play around aggregating more and more information around multiple scales for what constitutes a disease—from the DNA, proteins, and metabolites to cells, tissues, organs, organisms, and ecosystems. Those are the scales of the biology that we need to be modeling by integrating big data. If we do that, the models will evolve, the models will build, and they will be more predictive for given individuals.

It’s not going to be a discrete event—that all of a sudden we go from not using big data in medicine to using big data in medicine. I view it as more of a continuum, more of an evolution. As we begin building these models, aggregating big data, we’re going to be testing and applying the models on individuals, assessing the outcomes, refining the models, and so on. Questions will become easier to answer. The modeling becomes more informed as we start pulling in all of this information. We are at the very beginning stages of this revolution, but I think it’s going to go very fast, because there’s great maturity in the information sciences beyond medicine.

The life sciences are not the first to encounter big data. We have information-power companies like Google and Amazon and Facebook, and a lot of the algorithms that are applied there—to predict what kind of movie you like to watch or what kind of foods you like to buy—use the same machine-learning techniques. Those same types of methods, the infrastructure for managing the data, can all be applied in medicine.

In the past three or four years, we’ve hired more than 300 people, spanning from the hardware side and big data computing to the sequence informatics and bioinformatics to the CLIA-certified genomics core—to generate the information—to the machine-learning and predictive-modeling guys and the quantitative guys, to build the models. And then we’ve linked that up to all the different disease-oriented institutes at Mount Sinai, and to some of the clinics directly, to start pushing this information-driven decision making into the clinical arena.

Not all the physicians were on board and, of course, there are lots of people who will try to cause all sorts of fear about what kind of world we’re going to transform into if we are basing medical decisions on sophisticated models where nobody really understands what’s happening. So it was all about partnering with individuals such as key physicians who were viewed as thought leaders—leading their area within the system—and carrying out the right kinds of studies with those individuals.

In all of these different areas, we’re recruiting experts, and we view what we build as sort of a hub node that we want linked to all the different disease-oriented institutes to enable them to take advantage of this great engine. But you need people to help translate it, and that’s what these key hires have done. They have a strong foot within the Icahn Institute, but they also care about disease. And so they form their whole lab around the idea of how to more efficiently translate the information from the big information hub out to the different disease areas. That’s still done mainly by training individuals within those labs to be able to operate at a lower level. I think what needs to happen beyond that is better engagement through software engineering: user-interface designers, user-experience designers who can develop the right kinds of interfaces to engage the human mind in that information.

One of the biggest problems around big data, and the predictive models that could build on that data, really centers on how you engage others to benefit from that information. Beyond the tools that we need to engage noncomputational individuals in this type of information and decision making, training is another element. They’ve grown up in a system that is very counter to this information revolution. So we’ve started placing much more emphasis on the generation of coming physicians and on how we can transform the curriculum of the medical schools. I think it’s a fundamental transformation of the medical-school curriculum, and even the basic life sciences, where it becomes more quantitative, more computational, and where everybody’s taking statistics and combinatorics and machine learning and computing.

Those are just the tools you need to survive. And it has to start at that earlier stage, because it’s very, very difficult to take somebody already trained in biology or a physician and teach them the mathematics and computer science that you need to play that game.

Bringing together the right talent (YouTube video)


Researchers ID Copy Number Changes Associated With Cancer in Normal Cells

Genomeweb

NEW YORK (GenomeWeb) – Researchers from Uppsala University in Sweden have identified copy number alterations typically associated with cancer in normal cells of breast cancer patients, suggesting that the mutations could be early indicators of disease. [The same is true for autism - high time to measure such alterations for early and exact diagnosis and precision therapy - AJP]

Reporting their work recently in Genome Research, the researchers aimed to look for markers that predict a risk for breast cancer in individuals without a hereditary risk. Approximately 10 percent of women in developed countries get non-familial breast cancer, also called sporadic breast cancer. The disease is heterogeneous and individuals differ in clinical manifestation, radiologic appearance, prognosis, and outcome. Yet, there are no good markers to predict a woman's risk for developing the disease.

Mammography is used to screen older women, yet it has a limited sensitivity and often only identifies disease once a tumor poses a significant mortality risk, the authors wrote.

In order to look for potential markers that could predict risk of disease at an earlier stage, the researchers studied 282 female sporadic breast cancer patients who underwent mastectomy. From each patient, they evaluated primary tumor tissue, several normal-looking tissue samples at various distances from the tumor, and normal blood or skin samples.

The team characterized all the samples via microarrays, and three with low-coverage whole-genome sequencing. Of 1,162 non-tumor breast tissue samples, 183 samples from 108 patients had at least one aberration. The researchers noted that the more sites they sampled from a patient, the more likely they were to find one containing an aberration, suggesting that the identified aberrations may represent only a part of all aberrations that might exist in the studied individuals.

Twenty-seven samples had highly aberrant genomes, with alterations affecting over 39 percent of the genome. Alterations spanned large regions, even whole chromosomes, and there were differences between individual cells, suggesting heterogeneity.

Next, they stratified the remaining 157 tissue samples by mutation load. Because the goal was to identify the earliest markers of breast cancer, they first looked at the samples with a low mutation load.

Copy number gains were the most frequent alteration observed, suggesting that "oncogenic activation (up-regulation) of genes via increased copy number might be a pre-dominant mechanism for initiation of the SBC disease process," the authors wrote.
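For readers curious how a "copy number gain" is actually detected from low-coverage whole-genome sequencing, a minimal sketch follows. It is not the Uppsala group's pipeline; it only illustrates the common idea of binning read depth, normalizing to the sample median, and flagging bins whose log2 ratio suggests an extra copy (an expected ratio of 3/2, about 0.58 in log2, for a single-copy gain in a diploid genome).

# Minimal read-depth copy-number-gain caller (illustrative only).
import math

def call_gains(read_counts_per_bin, log2_threshold=0.4):
    """Return (bin index, log2 ratio) for bins whose depth suggests a gain."""
    counts = sorted(read_counts_per_bin)
    median = counts[len(counts) // 2]
    gains = []
    for i, c in enumerate(read_counts_per_bin):
        log2_ratio = math.log2(c / median) if c > 0 and median > 0 else 0.0
        if log2_ratio >= log2_threshold:
            gains.append((i, round(log2_ratio, 2)))
    return gains

# Hypothetical 100-kb bins; bins 5-7 carry an extra copy (e.g. a simulated ERBB2 gain)
depth = [100, 98, 103, 99, 101, 151, 148, 153, 100, 97]
print(call_gains(depth))   # -> flags bins 5, 6 and 7 (the simulated gain)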

The authors confirmed that the genomic alterations identified in the normal breast tissue were also found in the primary tumor, with two exceptions. In one case, the team identified a deletion to a tumor suppressor gene that was not present in the tumor, and in another case, the researchers found eight alterations in the normal tissue, only four of which were in the primary tumor.

The most common event in samples with low mutational loads was a copy number gain of ERBB2, which was also the third most common event among all samples. The researchers also found this event in patients' epithelial and mesenchymal cells, demonstrating that "early predisposing genetic signatures are present in normal breast parenchyma as an expression of field cancerization and are not likely to be derived from migrating tumor cells," the authors wrote.

Recurrent gains to receptor genes were also found in EGFR, FGFR1, IGF1R, NGFR, and LIFR.

"Our analysis represents a snapshot picture of a progressive process that is likely going on for many years, if not decades," the authors wrote.

The findings raise important questions about tumor resection and point to a new method of early detection, although further validation studies are needed to determine their clinical significance.

For instance, tumor resection in breast cancer patients is a well-established standard of care; however, there is debate about how much tissue should be removed to ensure all cancer cells are taken. The authors reported that their study provides some evidence for altered cells "sometimes located at unexpected distances" from primary tumors. If those cells are left behind, they "may represent the source of local recurrence," the authors wrote.

In addition, if the findings are confirmed, they could point the way toward better and earlier diagnostics. For instance, in the future, researchers could potentially design imaging tests to detect the proteins located on the cell surface of breast cells that are encoded by cancer genes that have copy number gains.

"Such tests could detect an ongoing disease process much earlier (years, possibly even decades) compared to mammography," the authors wrote.

["FractoGene is Fractal Genome Grows Fractal Organisms". This "Heureka!" cause-and-effect realization translates into immediate use for early and mathematically exact diagnosis, and precision therapy. Mr./Mrs. Billionaire, want to wait until pathological fractal organisms (cancerous tumors) show up? By exact measurements of fractal defects and their correlation one can make statistical diagnosis and probabilistic prognosis for precision therapy. A slew of conditions (cancer, autism, schizophrenia, auto-immune diseases, etc) are already closely linked to fractal defects of the genome (that is replete of repeats in all, healthy or not cases). Andras_at_Holgentech_dot_com]


Genome Pioneer: We Have The Dangerous Power To Control Evolution (Interview with Craig Venter)

The World Post

Oct. 5, 2015

[Before reading further, see Dr. Barnsley's Classic - Craig, you'll need FractoGene!]

[The "genes" failed us - your "complexion" is fractal, so is your genome (FractoGene). Franz Och might provide yet another proof of FractoGene and Craig's Longevity, Inc. might get a license... andras_at_pellionisz_dot_com]

You are the pioneering cartographer of the human genome. How much do we know? What percentage of the functions of genes do we know today?

The cell that we’ve designed in the computer has the smallest genome of any self-replicating organism. In this case, 10 percent of the genes, or on the order of about 50 genes in that organism, are of unknown function. All we know is that if certain genes are not present, you can’t get a living cell.

The human genome is almost the flip side. I would say that we only know well the functions of, maybe, 10 percent of our genome. We know a lot about a little bit; we know far less about a lot more. We don’t know most of the real functions of most of the genes. A big percentage of that can potentially come in the next decade as we scale up to get huge numbers and use novel computing to gain a deeper understanding.

...

How are you discovering the genes that determine a person’s facial features?

The way it works in reality is that your genes determine your face, so it’s not a wild stretch of the imagination that it might be doable, right? We all look a little bit different based on the small differences in our genetic code.

We have a series of cameras that snap a 3-D photograph of faces and take about 30,000 unique measurements -- the distance between your eyes, for example, and other physical parameters. We then look into the genome for those 30,000 measurements to see if we can find parts of the genetic code that clearly determine that factor.

Obviously, there’s a lot of variation across the human species, so it’s not a simple algorithm. I'm less confident we will be able to take your genome sequence to predict your voice, though, but we’ll get approximations of it. Perfect pitch is genetic. Cadence is genetic. But there are a lot of other things that go into how we sound.

[A deeper, genome-industry-wide analysis of Dr. Craig Venter's landmark release is available upon request - Andras_at_Pellionisz_dot_com]


Genetic Analysis Supports Prediction That Spontaneous Rare Gene Mutations Cause Half Of All Autism Cases

A new study appears to show devastating “ultra-rare” gene mutations play a causal role in roughly half of all cases of Autism Spectrum Disorder.

Quantitative study identifies 239 genes whose “vulnerability” to devastating de novo mutation makes them priority research targets

Peter Tarr • Cold Spring Harbor Laboratory

Cold Spring Harbor, NY – A team led by researchers at Cold Spring Harbor Laboratory (CSHL) this week publishes in PNAS a new analysis of data on the genetics of autism spectrum disorder (ASD). One commonly held theory is that autism results from the chance combinations of commonly occurring gene mutations, which are otherwise harmless. But the authors’ work provides support for a different theory.

They find, instead, further evidence to suggest that devastating “ultra-rare” mutations of genes that they classify as “vulnerable” play a causal role in roughly half of all ASD cases. The vulnerable genes to which they refer harbor what they call an LGD, or likely gene-disruption. These LGD mutations can occur “spontaneously” between generations, and when that happens they are found in the affected child but not found in either parent.

Although LGDs can impair the function of key genes, and in this way have a deleterious impact on health, this is not always the case. The study, whose first author is the quantitative biologist Ivan Iossifov, a CSHL assistant professor and on faculty at the New York Genome Center, finds that “autism genes” – i.e., those that, when mutated, may contribute to an ASD diagnosis – tend to have fewer mutations than most genes in the human gene pool.

This seems paradoxical, but only on the surface. Iossifov explains that genes with devastating de novo LGD mutations, when they occur in a child and give rise to autism, usually don’t remain in the gene pool for more than one generation before they are, in evolutionary terms, purged. This is because those born with severe autism rarely reproduce.

The team’s data helps the research community prioritize which genes with LGDs are most likely to play a causal role in ASD. The team pares down a list of about 500 likely causal genes to slightly more than 200 best “candidate” autism genes.

The current study also sheds new light on the transmission to children of LGDs that are carried by parents who harbor them but whose health is nevertheless not severely affected. Such transmission events were observed and documented in the families used in the study, part of the Simons Simplex Collection (SSC). When parents carry potentially devastating LGD mutations, these are more frequently found in the ASD-affected child than in their unaffected children, and most often come from the mother.

This result supports a theory first published in 2007 by senior author Michael Wigler, a CSHL professor, and Kenny Ye, a statistician at Albert Einstein College of Medicine. They predicted that unaffected mothers are “carriers” of devastating mutations that are preferentially transmitted to children affected with severe ASD. Females have an as yet unexplained factor that protects them from mutations which, when they occur in males, will be significantly more likely to cause ASD. It is well known that at least four times as many males as females have ASD.

Wigler’s 2007 “unified theory” of sporadic autism causation predicted precisely this effect. “Devastating de novo mutations in autism genes should be under strong negative selection,” he explains. “And that is among the findings of the paper we’re publishing today. Our analysis also revealed that a surprising proportion of rare devastating mutations transmitted by parents occurs in genes expressed in the embryonic brain.” This finding tends to support theories suggesting that at least some of the gene mutations with the power to cause ASD occur in genes that are indispensable for normal brain development.

The work described here was supported by the Simons Foundation Autism Research Initiative.

“Low load for disruptive mutations in autism genes and their biased transmission” appears in the Early Edition of Proceedings of the National Academy of Sciences the week of September 21, 2015. The authors are: Ivan Iossifov, Dan Levy, Jeremy Allen, Kenny Ye, Michael Ronemus, Yoon-ha Lee, Boris Yamrom and Michael Wigler. The paper can be obtained [in full .pdf] at: http://www.pnas.org/content/early/recent

About Cold Spring Harbor Laboratory

Celebrating its 125th anniversary in 2015, Cold Spring Harbor Laboratory has shaped contemporary biomedical research and education with programs in cancer, neuroscience, plant biology and quantitative biology. Home to eight Nobel Prize winners, the private, not-for-profit Laboratory is more than 600 researchers and technicians strong. The Meetings & Courses Program hosts more than 12,000 scientists from around the world each year on its campuses in Long Island and in Suzhou, China. The Laboratory’s education arm also includes an academic publishing house, a graduate school and programs for middle and high school students and teachers. For more information, visit www.cshl.edu .

[Autism is a "genome disease" that is leading with the time-proven best approach to science. First, based on preliminary knowledge, theories emerge. Experiments, then follow predictions of a theory, which can be either be supportive of the theory, or contradict to it - leading to improvements of any given theory. (In this particular case, and potential improvement is to cover not only the "autism genes", but including te 98% of the DNA that is not genic. Structural variants of the so-called "non-coding, non-genic" sequences are also very well known to to be among the causes of genomic diseases). is certainly not a shere accident that SFARI (led by a world-class mathematician, James Simons) supports all sorts of approaches to understanding a genome disease, but as a mathematician clearly prefers those that do not stop at "big data gathering", but are based on (at the moment still competing) scientific theories. Based on my mathematization of neuroscience and genomics, by now in pursuit this theory/experimentation approach for nearly half a Century (1966-2015), the sheer cost of genome analysis will force a return to this "mathematical theory-based approach". Andras_at_Pellionisz_dot_com]


Sorry, Obama: Venter has no plans to share genomic data

J. Craig Venter plans to keep the genomic data gathered at Human Longevity tight to the chest.

Much like the White House’s Precision Medicine Initiative, the genomics luminary has announced plans to sequence one million genomes by 2020. So, in keeping with the current vogue of open-sourcing data, does Venter have any interest in commingling his genomic database with the government’s?

“Unlikely,” Venter said, during a keynote speech at the Mayo Clinic’s Individualizing Medicine Conference in Rochester, Minnesota this week.

“I think this notion that you can have genome sequences from public databases is extremely naive,” Venter said. “We’re worried there will be future lawsuits from people who were guaranteed anonymity who will clearly not have it.”

This stance will likely inform Venter’s policy on Human Longevity’s new consumer-facing genomics business, which was just announced today. In collaboration with a South African health insurer, Human Longevity will soon offer whole exome sequencing that tells individuals about their most medically relevant genetic information – for just $250.

This public offering could dramatically increase Human Longevity’s access to larger swaths of diverse DNA – helping make that goal of one million sequenced genomes by 2020 a reality.

Venter said that he’ll keep the Human Longevity data private because it’s challenging to deal with the accuracy and quality of data when it comes from multiple sources. While the genomes studied at Human Longevity are all sequenced with Illumina’s HiSeq X Ten, Venter has his doubts about the machines and methods used to sequence genomes from other organizations.

“We get the highest quality of data of any center off the Illumina X10 sequencers, and don’t want to comingle it with data from other sources that don’t necessarily have the same degrees of accuracy.”

The Human Longevity database will be built on self-generated data, he said, though it’ll likely share information about allele frequencies.

It was interesting to have Venter come straight out and say why Human Longevity is keeping its data proprietary. Venter has skirted the issue in the past, despite participating in White House precision medicine events. Last year, he wrote:

It is encouraging that the US government is discussing taking a role in a genomic-enabled future, especially funding the Food and Drug Administration (FDA) to develop high-quality, curated databases and develop additional genomic expertise. We agree, though, that there are still significant issues that must be addressed in any government-funded and led precision medicine program. Issues surrounding who will have access to the data, privacy and patient medical/genomic records are some of the most pressing.

We look forward to continuing the dialogue with the Administration, FDA and other stakeholders as this is an important initiative in which government must work hand in hand with the commercial sector and academia.

The Mayo Clinic discussion was a much more definite stance on his concerns about privacy in data sharing – and the consistency of data quality. Different scientists and different machines will interpret data from next-generation sequencing in a different manner.

But we’re not looking at a Sony vs. Betamax situation here – it’s unlikely that Human Longevity is competing with the government. This looks more like a matter of efficiency and pushing forward at a pace that’s easier in the private sector than in a bureaucracy.

[Just in the middle of Silicon Valley (Mountain View - Cupertino) we now have Google, Apple and Human Longevity competing in the Genome Informatics business. I never thought I would live to see this day! The old wisdom said "war is too important to be left to the generals". The up-to-date version is "Genome Informatics is too important to be left to government bureaucrats". Not that they are not good, but they do not create wealth; their greatness is to redistribute it as they please. The very same transition happened to the Internet. It started as a government (defense) information network (of computer system managers) - but it became such a hugely important business tool that President Clinton handed the Internet over to Silicon Valley private industry. In experts' hands, it took off in unprecedented ways. The Internet industry, of course, is both very capital-intensive and extremely competitive. Those who ever invested the kind of money needed to cart in "the Next Big Thing" (or even just a major part of their life-effort) are "unlikely" to throw all their efforts to the wind. Surprising? Venter says "naive". He may be right. Andras_at_Pellionisz_dot_com.]

J. Craig Venter to Offer DNA Data to Consumers

A genomic entrepreneur plans to sell genetic workups for as little as $250. But $25,000 gets you “a physical on steroids.”

MIT Technology Review

By Antonio Regalado on September 22, 2015

Fifteen years ago, scientific instigator J. Craig Venter spent $100 million to race the government and sequence a human genome, which turned out to be his own. Now, with a South African health insurer, the entrepreneur says he will sequence the medically important genes of its clients for just $250.

Human Longevity Inc. (HLI), the startup Venter launched in La Jolla, California, 18 months ago, now operates what’s touted as the world’s largest DNA-sequencing lab. It aims to tackle one million genomes inside of four years, in order to create a giant private database of DNA and medical records.

In a step toward building the data trove, Venter’s company says it has formed an agreement with the South African insurer Discovery to partially decode the genomes of its customers, returning the information as part of detailed health reports.

The deal is a salvo in the widening battle to try to bring DNA data to consumers through novel avenues and by subsidizing the cost of sequencing. It appears to be the first major deal with an insurer to offer wide access to genetic information on a commercial basis.

Jonathan Broomberg, chief executive of Discovery Health, which insures four million people in South Africa and the United Kingdom, says the genome service will be made available as part of a wellness program and that Discovery will pay half the $250, with individual clients covering the rest. Gene data would be returned to doctors or genetic counselors, not directly to individuals. The data collected, called an “exome,” is about 2 percent of the genome, but includes nearly all genes, including major cancer risk factors like the BRCA genes, as well as susceptibility factors for conditions such as colon cancer and heart disease. Typically, the BRCA test on its own costs anywhere from $400 to $4,000.

“I hope that we get a real breakthrough in the field of personalized wellness,” Broomberg says. “My fear would be that people are afraid of this and don’t want the information—or that even at this price point, it’s still too expensive. But we’re optimistic.” He says he expects as many as 100,000 people to join over several years.

Venter founded Human Longevity with Rob Hariri and Peter Diamandis (see “Microbes and Metabolites Fuel an Ambitious Aging Project”), primarily to amass the world’s largest database of human genetic and medical information. The hope is to use it to tease out the roles of genes in all diseases, allow accurate predictions about people’s health risks, and suggest ways to avoid those problems. “My view is that we know less than 1 percent of the useful information in the human genome,” says Venter.

The company this year began accumulating genomes by offering to sequence them for partners including Genentech and the Cleveland Clinic, which need the data for research. Venter said HLI keeps a “de-identified” copy along with information about patients’ health. HLI will also retain copies of the South Africans’ DNA information and have access to their insurance records.

“It will bring quite a lot of African genetic material into the global research base, which has been lacking,” says Broomberg.

Deals with other insurers could follow. Venter says that only with huge numbers will the exact relationship between genes and traits become clear. For instance, height—largely determined by how tall a person’s parents are—is probably influenced by at least hundreds of genes, each with a small effect.

Citing similar objectives, the U.S. government this year said it would assemble a study of one million people under Obama’s precision-medicine initiative (see “U.S. to Develop DNA Study of One Million People”), but it may not move as fast as Venter’s effort.

HLI has assembled a team of machine-learning experts in Silicon Valley, led by the creator of Google Translate, to build models that can predict health risks and traits from a person’s genes (see “Three Questions for J. Craig Venter”). In an initial project, Venter says, volunteers have had their facial features mapped in great detail and the company is trying to show it can predict from genes exactly what people look like. He says the project is unfinished but that just from the genetic code, HLI “can already describe the color of your eyes better than you can.”

Venter also said that this October the company will open a “health nucleus” at its La Jolla headquarters, with expanded genetic and health services aimed at self-insured executives and athletes. The center, the first of several he hopes to open, will carry out a full analysis of patients’ genomes, sequence their gut bacteria or microbiome, analyze more than two thousand other body chemicals, and put them through a full-body MRI scan. “Like an executive physical on steroids,” he says.

The health nucleus service will be priced at $25,000. These individuals would also become part of the database, Venter said, and would receive constant updates as discoveries are made.

While the quality of Venter’s science is not in much doubt, this is the first time since he was a medic in Vietnam that he’s doled out medicine directly. “I think it’s a good concept,” says Charis Eng, chair of the Cleveland Clinic’s Genomic Medicine Institute, which collaborates with Venter’s company. “But we who practice genomic medicine—we say HLI has absolutely no experience with patient care. I want to inject caution: it needs to be medically sound as well as scientifically sound.”


Venter has a history of selling big concepts to investors and then using their money to carry out exciting, but not necessarily profitable, science. In 1998 he formed Celera Genomics to privately sequence the human genome, but he was later booted as its president when its business direction changed. The economics of his current plan are also uncertain. Venter’s pitch is that with tens of thousands and ultimately a million genomes, he will uncover the true meaning of each person’s DNA code. But all those discoveries lie in the future.

And at a cost of around $1,000 to $1,500 each, a million completely sequenced genomes add up to an expense of more than a billion dollars. HLI has so far raised $80 million, but Venter says he is now meeting with investors in order to raise far larger sums.

Venter says he intends to offer several other common kinds of testing, including pre-conception screening for parents (to learn if they carry any heritable genetic risks), sequencing of tumors from cancer clinics, and screening of newborns. Those plans could bring HLI into competition with numerous other startups and labs that offer similar services.

“It would be just one more off-the-shelf genetic testing company, if the entire motivation weren’t to build this large database,” he says. “The future game is 100 percent in data interpretation. If we are having this conversation five to 10 years from now, it’s going to be very different. It will be, ‘Look how little we knew in 2015.’”

[Those who know Craig will have little doubt that he will very rapidly become "the next-generation 23andMe". True, the trailblazing 23andMe started 9.5 years ago, when affordable technology just wasn't there to interrogate more than SNPs (Single Nucleotide Polymorphisms, at most 1.6 million bases out of the full genome of 6.2 billion bases). Now the technology of full genome sequencing is affordable. Yet there are two main issues to seriously contemplate. First, it paints a sad picture of the US that 23andMe was seriously set back by the FDA and thus cannot provide health advice - likewise, Craig had to go through South Africa and the United Kingdom to avoid shooting himself in the foot in his homeland. Second, while "exomics" (checking the integrity of the amino-acid-coding sections, the "exons") is certainly a big step ahead (there are over 5,000 known "Mendelian diseases" that can be traced back to structural variants of exons), focusing on only "less than 2%" of the genome is unlikely to yield clues for e.g. cancer, autism, auto-immune diseases etc. When Craig says "The future game is 100 percent in data interpretation", not only do I absolutely agree, but I would sharpen his focus: within the 100 percent, 99 percent of the game is understanding genome regulation - which, according to ENCODE-2, involves "more than 80% of the full human genome" that is functional. While Craig, for personal reasons, was absent when I presented my FractoGene in his Institute (2007), based on the Principle of Recursive Genome Function already in manuscript, his Institute was preoccupied (for 15 years...) with kicking into action a marginally reduced gene-set (of the smallest DNA of free-living organisms). The assumption was that "there is not much regulation, if any at all, of the ~300 genes - don't bother with it". Why did it take 15 years to kick the reduced set to life? Why is Craig's full DNA sitting on the shelves without an understanding of how it works? The solution may lie NOT in comparing gazillions of genomes - but in better understanding a single one. NOTE [to those with domain expertise in physics]: Generating "Big Data" by a super-expensive "super-collider" is unthinkable in physics without an underlying quantum-physics model. Computers never "compare" gazillions of trajectories of particles - they compare how the experimentally observed trajectories DIFFER from those predicted by models worked out over many decades. Genomics could waste any number of dollar billions, or even trillions, by trying to skip even the basics of solid mathematical theory. Yes, there are some. Look up just the peer-reviewed papers, argue if you can. Provide yours if that looks better. Andras_at_Pellionisz_dot_com.]


Google (NASDAQ: GOOG) Dips into Healthcare Business

Alphabet Inc., the newly formed holding company tied to Google (NASDAQ: GOOG), is also taking on big investment projects, one of them being advanced medical research. The list of medical companies under the Google umbrella includes Google X, the research laboratory, and Calico, the biotechnology company. The company's life sciences team is also a part of it. Yet to be given a formal name, the team is slated to work on new technologies, pushing them from the R&D stage to final clinical testing.

Alphabet has offered minimal disclosure on its healthcare initiatives. Investment banks, however, hold the belief that the company is on its way to making a new multimillion-dollar business. Investors believe that Google initiatives will unlock substantial value. The initiative will become clearer when the company divides its finances into two parts, Alphabet and Google Inc., in the fourth quarter. Industry reports say that substantial efforts by Google reveal that it is targeting huge markets with wide-ranging projects. The advantage of such an approach is that the company can recoup its investments even with modest success.

Google's strengths lie in three major technological trends: genome sequencing, health data digitization, and the shift towards paying for healthcare based on its value. The company's expertise in cloud computing helps with data digitization, and the other two are taken care of by the Life Sciences and Calico companies. The former inked a partnership with the medical device company Dexcom (NASDAQ: DXCM) to manufacture high-technology diabetes products. The estimated market for such goods is worth a minimum of $20 billion.

High Technology Investments

The Life Sciences team has worked on a number of projects in the past like the nanopill (for detecting cancer), a special sensor for monitoring patients with multiple sclerosis, and a baseline study to make the most comprehensive portrait of human genome and body. The company also worked on a kind of contact lens complete with embedded chip so that blood sugar levels can be monitored in individuals suffering from diabetes.

Investment analysts, however, are not putting any headline number on the total basket as of yet. The reason is that development in the medical field proceeds very slowly, hindered by research and regulatory unpredictability. It is estimated that during Q4, when Google demarcates its core business results from Alphabet for the first time, the company will show R&D costs in the $3 billion to $5 billion region outside Google Inc. A considerable chunk of this money will probably be spent on healthcare.

[This is a very promising preview - especially in light of the news item below. By listing Google's strength as "genome sequencing", the journalist probably meant the more appropriate "genome analytics by Big Data" (which started some 2 years ago as "Google Genomics"). All in all (along with the news that Microsoft also threw its hat into the ring, see a couple of entries below), we are at the point predicted in 2009 (see the YouTube remark by pharma guru Dr. Nikolich at 104:45, "what if Microsoft would acquire a pharma company, or Google would buy and build a pharma company because it makes sense"). In a mere six years, both (and more; see Intel, GE Health, Amazon Web Services, Apple etc.) are happening to the tune of many $Billions. Dr. Insel, for instance, even at NIMH gathered genome sequence data on autism (and NCI did so for cancer). Would any NIH Institute (NIMH, NCI, or any of the 27 Institutes and Centers) ever become a world leader in informatics? Personally, my experience does not suggest any strong likelihood. On the other hand, there is hardly any doubt that we are already in a formative period of the IT business. Sure, it may take anything from half a year to 2 years before a full-blown IT-driven Health Care pie is sliced up. Based on what? Mostly on cross-domain expertise and, since it is a competitive business, on entrenched IP, I would say. andras_at_pellionisz_dot_com]


Head of Mental Health Institute Leaving for Google Life Sciences

New York Times

By BENEDICT CAREY

SEPT. 15, 2015

Thomas R. Insel (63), head of NIH-NIMH, resigns in November 2015 to join Google Life Sciences

Dr. Thomas R. Insel, the director of the National Institute of Mental Health, announced on Tuesday that he planned to step down in November, ending his 13-year tenure at the helm of the world’s leading funder of behavioral-health research to join Google Life Sciences, which seeks to develop technologies for early detection and treatment of health problems.

The announcement is no small personnel matter for the behavioral sciences.

Losing Dr. Insel leaves the agency — which is growing in importance and visibility in the wake of the Obama administration’s brain initiative — with a large hole, mental health experts in and out of government said. Dr. Insel has been an agreeable, determined, politically shrewd presence at an agency that has often taken fire from advocacy groups, politicians and scientists.

In hiring him, Google, which is in the process of reorganizing into a new company called Alphabet, lands a first-rate research scientist and administrator with an exhaustive knowledge of brain and behavioral sciences. He has also recruited some of the top researchers into the brain sciences from other fields.

“Tom’s leaving is a great loss for all of us,” said Dr. E. Fuller Torrey, the executive director of the Stanley Medical Research Institute, a nonprofit that supports research into severe mental illnesses. “He refocused N.I.M.H. on its primary mission — research on the causes and better treatments for individuals with serious mental illness.”

Dr. Francis S. Collins, the director of the National Institutes of Health, appointed Dr. Bruce Cuthbert as acting director of the agency while he looks for a replacement. Dr. Cuthbert, who has held leadership positions within the N.I.M.H., has made it clear he would prefer to continue work on initiatives within the agency, rather than run it, the agency said.

In an interview, Dr. Collins said he planned to fill the position as quickly as he could, “but realistically that means six months at minimum, and maybe not until next summer.” He said he would appoint a search committee, made up of institute directors and outside experts, and would consult with Dr. Insel closely. He said that he and Dr. Insel agreed in broad terms about the direction of the agency, but that the search “would not be about zeroing in on a clone of Tom; there are others out there who will have a slightly different view and that’s fine.”

Dr. Insel’s jump to the private sector represents a clear shift in his own thinking, if not necessarily behavioral sciences as a whole.

A brain scientist who made his name studying the biology of attraction and pair bonding, Dr. Insel took over the N.I.M.H. in 2002 and steered funding toward the most severe mental disorders, like schizophrenia, and into basic biological studies, at the expense of psychosocial research, like new talk therapies. His critics — and there were plenty — often noted that biological psychiatry had contributed nothing useful yet to diagnosis or treatment, and that Dr. Insel’s commitment to basic science was a costly bet, with uncertain payoffs.

“The basic science findings are fascinating, but have failed so far to provide clinically meaningful help to a single patient,” said Dr. Allen James Frances, an emeritus professor of psychiatry at Duke University. “Meanwhile, we neglect 600,000 of the severely ill, allowing them to languish in prison or on the street because treatment and housing are shamefully underfunded.”

In his new job, Dr. Insel will do an about-face of sorts, turning back to the psychosocial realm, only this time with a new set of tools. One project he has thought about is detecting psychosis early, using language analytics — algorithms that show promise in picking up the semantic signature of the disorganized thinking characteristic of psychosis.

“The average duration of untreated psychosis in the U.S. is 74 weeks, which is outrageous, completely unacceptable,” he said in an interview. “I think it’s not unreasonable, with data analytics — Google’s sweet spot — to get that down to seven weeks, by 2020.”

Moment-to-moment mental tracking has also become a commercial reality, he said, and that technology could help identify, and more precisely address, the sources of depression and anxiety, including social interactions or sleep disruption. “The idea is to use the power of data analytics to make behavioral studies much more objective than they have been before,” he said.

[Wikipedia insert - AJP

At NIMH he quickly focused on serious mental illnesses, such as schizophrenia, bipolar illness, and major depressive disorder with a defining theme of these illnesses as disorders of brain circuits. Building on the genomics revolution, he created large repositories of DNA and funded many of the first large genotyping and sequencing efforts to identify risk genes. He established autism as a major area of focus for NIMH and led a large increase of NIH funding for autism research. Under his leadership, autism, as a developmental brain disorder, became a prototype for mental disorders, most of which also emerge during development].

["Budget cuts hit autism research" insert - AJP

Federal support for autism research quadrupled between 2003 and 2010, but those boom days are over, National Institute of Mental Health director Thomas Insel told attendees at the International Meeting for Autism Research in San Diego yesterday. The base budget for the National Institutes of Health (NIH) was slashed by $1.6 billion this year, forcing one percent cuts across the board. Meanwhile, $122 million earmarked for autism research from the American Recovery and Reinvestment Act — the stimulus bill passed early in the Obama administration — ran out in 2010. “We’re concerned and we hope that you are concerned as well,” Insel told the audience. “We are at a turning point.” In 2009, the last year for which numbers are available, the NIH funded two-thirds of the $314 million spent on autism research. This year’s cuts will affect both investigators who already have grants — which will receive one percent less than in 2010 — and those applying for funding. “We also won’t have as much as we like for new and competing grants this year,” Insel said. “We will be reducing the number of new awards very significantly.” The current Congress also appears unlikely to reauthorize the Combating Autism Act of 2006, which created the Interagency Autism Coordinating Committee, which sets priorities for government-funded research, said Insel. “There’s a bit of a taboo in Congress these days to do disease-specific authorizations,” Insel said. Public-private partnerships are one strategy to help meet the federal funding shortfall, Insel suggested.”]


Google Life Sciences is already developing a contact lens that tracks glucose levels for diabetes management, and a heart activity monitor worn on the wrist. Dr. Insel’s ideas for mood and language tracking are, for now, just that — ideas. The company has not yet decided on where first to invest in mental health. [While cancer is the low-hanging fruit for genomic precision diagnosis/therapy, autism and schizophrenia are also eminent candidates. With these "genomic diseases" massive re-arrangements of genomic sequences are already proven - AJP]

When he steps down in November, Dr. Insel, 63, will have been the longest-serving director of N.I.M.H since Dr. Robert H. Felix, the agency’s founder, who left in 1964. Dr. Insel’s tenure spanned four presidential terms, during which he honed an easygoing political persona and an independent vision of the agency’s direction. He was, especially in recent years, outspoken in defense of his methodologies, at one point publicly criticizing establishment psychiatry for its system of diagnosis, which relies on observing behaviors and not any biological markers.

In taking the Life Sciences job, he and his wife, Deborah, a writer, will be moving to the Bay Area, a place they once knew well, when he spent time studying at the University of California, San Francisco. Both of their children were born in that city. But that was more than 20 years ago, and some things have changed, he said.

“We were just out there, looking for a tiny cottage,” he said. “We’re still recovering from sticker shock.”

[See comment after the 6-month-old news below - AJP]

---

Outgoing U.S. cancer chief reflects on his record, what’s next

Science

By Jocelyn Kaiser 5 March 2015

Nobelist Harold Varmus (75), head of NIH-NCI, resigns in March 2015 to join the New York Genome Center (among others)

Late yesterday afternoon, as Washington, D.C., was readying to shut down for a snowstorm, National Cancer Institute (NCI) chief and Nobel Prize–winning cancer biologist Harold Varmus announced that he is stepping down at the end of this month. Although few even on his own staff were expecting the news, it was not a big surprise coming less than 2 years before the end of the Obama administration, when many presidential appointees leave for their next job.

In a resignation letter to the research community, Varmus decried the harsh budget climate he has faced and pointed to a list of accomplishments, from creating an NCI center for global health to launching a project to find drugs targeting RAS, an important cell signaling pathway in cancer. “I think he’s done a wonderful job under difficult circumstances,” says cancer biologist Tyler Jacks of the Massachusetts Institute of Technology in Cambridge and chair of NCI’s main advisory board. “He brought tremendous scientific credibility to the position. And he managed to do some new and creative things.” NCI Deputy Director Douglas Lowy will serve as acting director.

In a phone interview this morning as the first snowflakes began to fall, Varmus reflected on his time at NCI and what he will do when he returns full time to New York City. (He has been commuting from his home there to NCI in Bethesda, Maryland.) He will run a “modestly sized” lab at Weill Cornell Medical College in New York City, Varmus wrote in his letter, as well as serve as an adviser to its dean, and work with the New York Genome Center.

[Nobelist Dr. Varmus, at the age of 75, did not overly surprise, six months ago, those who carefully monitor the government-to-private-sphere exodus. With Dr. Varmus' outstanding achievements, staying on as a government bureaucrat seemed less attractive than saving his commute from NYC to DC and contributing greatly to an elite mix of local (NYC) private institutions. An entry in this column already shows that the "Critical Juncture in the fight against cancer" following the departure of Dr. Varmus has already produced some remarkable symptoms of a profound crisis at NCI. Nobody seems to deny that "cancer is the disease of the genome" - yet some are clueless (bordering on professional/ethical vulnerability) as to whether one or another theory of informatics is the way to go. Not an easy job to sort out for the head of NIH (with a Ph.D. that started from quantum physics).

The case of Dr. Varmus seems "routine" compared to the totally stunning switch by Dr. Insel (63) from NIMH to Google Life. I never expected that in my lifetime I would witness such a dramatic switch by another outstanding scientist, Dr. Insel. It is not just a huge move in the exodus from government bureaucracy to the for-profit private sphere. When I published in 1989 (Cambridge University Press) a Fractal Model of a Brain Cell (along with the explicit pointer that genome-driven growth of fractal structures calls for "recursive genome function", ibid, pp. 461-462), my existing NIH grant was discontinued, and my application for the new NIMH Program "Mathematical/Computational/Theoretical Neuroscience" (cited ibid, p. 462) was declined. The "reason" was that with my principle of recursive genome function I overturned BOTH cardinal axioms of Old School Genomics ("Junk DNA" AND "Central Dogma"). In all fairness to Dr. Insel it is cardinal to point out that all this happened BEFORE Dr. Insel became director of NIMH.

Now we see that the Director of NIMH, who stepped in shortly after the above double fiasco and turned increasingly from neuroscience to genomics (e.g. in autism), is on his way to one of the biggest of Big IT (Google Life - Apple is even bigger). With the help of Big IT, there is no limit to gainfully engaging the world's most sophisticated algorithmic/computing power, guided by top domain expertise of New School Genomics/Neuroscience. In part, the IP already exists; in part, a beautiful mathematics is already emerging to unify neuroscience and genomics. andras_at_pellionisz_dot_com]


Bill Gates and Google back genome editing firm Editas

Wired UK (Aug. 15)

Bill Gates and Google are among some of the high-profile backers of a genome editing company that's raised $120 million (£77 million) to help develop DNA-editing technology.

According to Bloomberg, Editas Medicine Inc. has received funding from Boris Nikolic, former chief adviser for science and technology to Microsoft founder Bill Gates, who has also backed the donations.

In a statement released by the Cambridge, Massachusetts-based biotech company, it was also revealed that Nikolic has joined its board. Other notable investors in Editas include Silicon Valley's Google Ventures and venture-capital firm Deerfield Management Co.

The funding is designed to support development of Crispr-Cas 9, a technology that can be used to treat potentially deadly diseases by "fixing" faulty genes. Editas is currently testing the technology to help correct eye disorders, and is collaborating with Juno Therapeutics Inc., a firm which genetically engineers immune-system cells to help fight cancer.

The pioneering technology enables scientists to "correct" the human genome by removing the malfunctioning sections of DNA -- almost like using highly precise scissors -- and putting healthy, "working" ones in their place. Unlike many other genome editing methods currently used, Crispr is relatively cheap and easy to use, attracting interest from a broad range of scientists looking to modify everything from human cells to plants.

However, the technology has also generated controversy, with some scientists calling for Crispr to be banned from modifying the "human germline": human sperm, eggs and embryos. Although Editas CEO Katrine Bosley said that the company is yet to begin human trials on its treatments, the company has assured that it doesn't work on the human germline.

Jim Flynn, managing partner at Deerfield, which has invested in the project to the tune of $20 million (£13.8 million), said Crispr has "broad applicability". Acknowledging the long-term goals of the company, which is joined by other genome editing firms in the field such as Intellia Therapeutics Inc. and Precision BioSciences Inc., Bosley commented: "This is a marathon that we are in here, and all of these investors understand that."

[A totally new dimension has opened for the HolGenTech, Inc. logo "Ask what you can do for your genome"! With FractoGene ("Fractal DNA grows fractal organisms") there is already a potential to find "fractal defects" in the full human DNA (defects that can change, e.g., a fractal lung into one with a cancerous tumor - by the way, cancerous growth is also fractal, but it is defective). Many may ask (it is worth writing a book on the subject) "what can I do for my genome?" Genome Editing, while already a reality in rudimentary form, should not be misinterpreted as an immediate quick solution, but it certainly turns the presently rather lethargic attitude into a hopeful stance. It is a matter of the will of the medical community, the amount of resources devoted, and the time required to carry "Genome Editing" into regular medical practice. It may take years or even decades. Just think, however: had Steve Jobs had more years, and had Apple devoted at least a few percent of its resources, we could all be better off already. Bill Gates apparently fully understands the issue! - andras_at_pellionisz_dot_com]


Zephyr Health grabs $17.5M with infusion from Google Ventures

This health data company expands Google Ventures' portfolio as healthtech becomes a top focus of the VC's investments

Zephyr Health, an up-and-coming health data company, has just completed a new funding round of $17.5 million with the lead investment coming from Google Ventures. The company to date has raised $33.5 million, including participation from investors Kleiner Perkins Caufield & Byers and Icon Ventures.

The company collects data via its Illuminate solution from multiple sources (epidemiology data, sales and profile data for healthcare providers (HCPs), and hospitals according to Zephyr’s website) in order to better inform health professionals on appropriate treatment regimens for patients. The data can also be used to measure which drugs and products are more popular by region and to adjust their sales strategies accordingly with predictive analytics. Their data sync can also integrate with other office organizing software like Salesforce and Oracle according to the company.

Google Ventures increases focus on Health Startups

Google Ventures touts having invested in over 300 companies, comprising a very diverse portfolio up to this point. According to the VC’s website they have “a unique focus on machine learning and life science investing.” The health section of GV’s portfolio jumped from the smallest to largest recipient of its funds between 2012 and 2014. That shift might be reflected in the growth of Google’s Life Sciences division in 2013, which could be poised for more growth following the corporate shakeup that gave birth to Alphabet Inc. two weeks ago.

The health section of that investment strategy is hearty. The VC lists Bill Maris, Krishna Yeshwant, Blake Byers, Scott Davis, Anthony Philippakis and Ben Robbins among its top investing partners. GV has invested in genetics startup 23andme, oncology data company Flatiron, genomic treatment firm Editas Medicine and several more. Flatiron itself has been the recipient of $100 million in Google Ventures investments.

Zephyr counts Genentech, Gilead, Medtronic, Onyx and Amgen among its corporate clients.

The company was founded in 2011 by CEO William King. While the company has its headquarters in San Francisco, they maintain offices in London and Pune, India.

--

http://www.theverge.com/2015/8/21/9187131/google-life-sciences-becomes-first-alphabet-company

Google co-founder and Alphabet president Sergey Brin published a blog post this morning announcing Life Sciences as the first new company created under the Alphabet umbrella. The move was expected, as Alphabet CEO Larry Page wrote during the announcement of the new holding company that this area of focus was the perfect example of why Google needed to restructure itself. It's a bold bet with an enormous potential reward, but one that is far removed from Google's core business, and not likely to be financially self-sustaining anytime soon.

There are a number of already public projects that will be rolled up into life sciences:

Smart contact lenses that can monitor the blood sugar levels of diabetics

Nanoparticles that can be used to detect and fight cancer

A baseline study that will create the richest portrait yet of the human body and genome

The Life Sciences company will be headed up by Andy Conrad, who was previously the head of....Google Life Sciences. Not much will change under Alphabet, in other words, besides a shuffling of titles and corporate structure. Still, there is no denying that the company's goals are an exciting use of Google's ample profits for humanity, if perhaps not as appealing to investors in Google's advertising business.

"They’ll continue to work with other life sciences companies to move new technologies from early stage R&D to clinical testing — and, hopefully — transform the way we detect, prevent, and manage disease," wrote Brin. "The team is relatively new but very diverse, including software engineers, oncologists, and optics experts. This is the type of company we hope will thrive as part of Alphabet and I can’t wait to see what they do next."

Update: This post originally stated that Calico would be part of the new Life Sciences company. Calico was already an independent company from Google and will remain that way under Alphabet. It will not be rolled up into Life Sciences.

What About the Moon?


https://www.genomeweb.com/scan/what-about-moon

Aug 13, 2015

Google's reorganization as Alphabet has left many people wondering just what the move means for the company's various ventures, including its biotech aspirations.

As FierceBiotech notes, this restructuring could open up the company's 'moonshot' ventures, including Calico, Google's project aimed at exploring human longevity, to the scrutiny of investors.

Currently, Calico benefits from an undisclosed budget and a "long leash," FierceBiotech says. At Forbes, Matthew Herper notes that the company, headed by Arthur Levinson, has "been incredibly quiet, and deliberate, and I have no idea what they're doing."

He adds that Calico is "stocked with world-class scientists, people like David Botstein, who helped invent the science of genomics, and Cynthia Kenyon, one of the world's top aging researchers."

But as re/code reports, part of this rearrangement at Google is to increase transparency. And Forbes' Herper, among others, wonders how the glare of investors might affect the prospect of moonshot projects like Calico.

"[G]iving investors a view of how the base business is working through separate financial reports will help calm their nerves," he says, "But do the pitchforks ever come out from the myopic crowds? Could Calico ever be stuck in the terrible, deceitful purgatory of the biotechnology industry, where companies try to break up years-long scientific endeavors into quarterly bites?"

In an email, Levinson tells Herper not to worry, as he doesn't expect "Calico's mission, directions or goals (either near or long-term)" to change because of the restructuring.

In the end, Herper says that the restructuring itself may not matter. "It's a dramatic way for [Google's Sergey] Brin and [Larry] Page to say that they will remake their company to protect their bets on alpha, their moonshots, that that stuff isn't changing," he says.

[The massive reorganization of traditional Google, creating Alphabet, Inc. with subsidiaries such as the (new) "G" Google and "L" Life, further distancing the spin-off Calico (etc.), might take a while. It seems presently unknown, for instance, how Google Genomics will emerge from the reorg. It is unlikely that it will remain part of the "core business" ("G", Google). "Getting a little bit pregnant with genomics" has happened to quite a number of companies that I rather closely witnessed over decades. Thus, to me as a presently neutral genome informatics domain expert, the options appear to be the following: (a) Since "G" in the Alphabet is already taken (by Google search/advertisement), HoloGenomics "H" might become one of the subsidiaries deemed only marginally profitable at present, but "the next big thing with extremely lucrative profits". (b) A lesser alternative is to put GG under "Life". (c) An even weaker option is to have GG tucked underneath "Cloud". (d) The default is to abort Genome Informatics altogether. In the "Cloud space", Amazon Genomics is already doing better, and Apple is already claiming the most lucrative software/cloud/smartphone "combination slice" of the Genome pie (see their announcement on July 15 in this column, along with the new information that Apple, in addition to the "Spaceship" 2nd HQ ready next year, bought yet another "campus" of 47 acres, with the office capacity of the "Spaceship"). Which of the four Big IT companies in the USA alone (Google, Apple, Amazon or Microsoft) will become the winner in the genome space largely depends on an issue that has long been neglected by companies that "got a little bit pregnant with genomics". A proper "mother" company needs to ensure not only a massive amount of resources, but must also be blessed with the "domain expertise" to carry such a baby to term. There is a noteworthy historical precedent. "Nuclear Technology" (peaceful or not) could have happened either in Germany or in the USA, and "grey matter" made for the fortunate outcome. Success depended on which power could successfully recruit the best of the "Copenhagen group" of quantum mechanics. Without a breakthrough theory it would have been not only foolish, but utterly dangerous, to start "nuclear technology" lacking the underpinning of "nuclear physics" - andras_at_pellionisz_dot_com]


Evolution 2.0 by Perry Marshall

[The Book and the Website]

"Armed with computer science and electrical engineering, Perry fights an uphill battle to unite the space between those who believe evolution is random and those who believe species are designed by God, who in some cases deny evolution itself. Some will never yield their 'God-given right to be atheists'. For them, Perry's fluid reasoning, his vivid, readable explanations, easy style and enjoyable storytelling may be deemed 'unreasonable' or 'argued to death'.

Unless, of course, someone wins the technology prize (capped at $10 Million)! Should that happen, nobody will argue with success. Until then, people will be debating this book for years.

Judge the book by the science within its pages - and enjoy the story"

This is how I (with further degrees, one also a Ph.D. in biology) endorsed Perry's book. Both Perry Marshall and I agree on the key tenet that "Evolution is a fact". Nonetheless, nobody, not even "Evolution experts in biology", is satisfied with "Evolution as a theory" - in fact, "Evolution expert biologists" fiercely fight one another; see the respective blog in-fights. Darwin's simplistic concept of "random mutation & natural selection" hardly satisfies anyone in our times. Thus, the real and admirably daring question is depicted by the diagram of Evolution "version two" - where the O-shaped figure contains two "designs". The inner core is the man-made design of a mechanical clock - while the outer shell is a Nature-generated fractal design. Most readers will clearly know that my FractoGene ("fractal DNA grows fractal organisms") opts for the latter.

In a few eminently readable "family conversations" Perry destroys the misbelief of anyone who conveniently labels an apparently complex pattern as "random". Read on his page 281:

"My own musical sweet spot is an odd place where hard rock overlaps with jazz. One day I had the music cranked up, playing a rock/jazz piece that's right in my zone. My wife walks in the room. "Will you please turn that down?" "Oh, you don't like the distorted guitars?" "I don't mind the guitar all that much actually. But I can hear the entire bass line in the other room and I can't stand the randomness."

"Randomness?! That's not random. It's fractal!"

Perry Marshall links at the bottom of the same page (281) to my 2002 website "The Evolution Revolution", and cites for the fractal mathematics a co-authored textbook chapter (Pellionisz et al., 2013), e.g. with Jean-Claude Perez (France), containing a direct new line of evidence for the fractality of the genome.


Genome researchers raise alarm over big data

Storing and processing genome data will exceed the computing challenges of running YouTube and Twitter, biologists warn.

Erika Check Hayden

07 July 2015

The computing resources needed to handle genome data will soon exceed those of Twitter and YouTube, says a team of biologists and computer scientists who are worried that their discipline is not geared up to cope with the coming genomics flood.

Other computing experts say that such a comparison with other ‘big data’ areas is not convincing and a little glib. But they agree that the computing needs of genomics will be enormous as sequencing costs drop and ever more genomes are analysed.

By 2025, between 100 million and 2 billion human genomes could have been sequenced, according to the report, which is published in the journal PLoS Biology. The data-storage demands for this alone could run to as much as 2–40 exabytes (1 exabyte is 10^18 bytes), because the amount of data that must be stored for a single genome is 30 times larger than the size of the genome itself, to make up for errors incurred during sequencing and preliminary analysis.

The team says that this outstrips YouTube’s projected annual storage needs of 1–2 exabytes of video by 2025 and Twitter’s projected 1–17 petabytes per year (1 petabyte is 10^15 bytes). It even exceeds the 1 exabyte per year projected for what will be the world’s largest astronomy project, the Square Kilometre Array, to be sited in South Africa and Australia. But storage is only a small part of the problem: the paper argues that computing requirements for acquiring, distributing and analysing genomics data may be even more demanding.
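
[To make the storage arithmetic above concrete, here is a minimal back-of-the-envelope sketch in Python - not the PLoS Biology model itself. The per-genome parameters (a ~3.2 billion-base genome kept at roughly 30-fold data volume, ~1 byte per stored base) and the compression ratios are illustrative assumptions; the published 2–40 exabyte range reflects different choices about coverage, compression and which intermediate files are retained.]

# Back-of-the-envelope projection: number of genomes x bytes stored per genome.
GENOME_BASES = 3.2e9      # approximate human genome length, in bases
DATA_FACTOR = 30          # "30 times larger than the genome itself" (see article)
BYTES_PER_BASE = 1.0      # crude uncompressed assumption

def projected_storage_exabytes(n_genomes, compression_ratio=1.0):
    """Projected storage in exabytes under the stated assumptions."""
    total_bytes = n_genomes * GENOME_BASES * DATA_FACTOR * BYTES_PER_BASE / compression_ratio
    return total_bytes / 1e18  # 1 exabyte = 10^18 bytes

for n_genomes in (1e8, 2e9):               # 100 million and 2 billion genomes by 2025
    for compression in (1.0, 10.0, 50.0):  # no compression vs. aggressive compression
        eb = projected_storage_exabytes(n_genomes, compression)
        print("%.0e genomes, %4.0fx compression: %10.2f EB" % (n_genomes, compression, eb))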

Major change

“This serves as a clarion call that genomics is going to pose some severe challenges,” says biologist Gene Robinson from the University of Illinois at Urbana-Champaign (UIUC), a co-author of the paper. “Some major change is going to need to happen to handle the volume of data and speed of analysis that will be required.”

Narayan Desai, a computer scientist at communications giant Ericsson in San Jose, California, is not impressed by the way the study compares the demands of other disciplines. “This isn’t a particularly credible analysis,” he says. Desai points out that the paper gives short shrift to the way in which other disciplines handle the data they collect — for instance, the paper underestimates the processing and analysis aspects of the video and text data collected and distributed by Twitter and YouTube, such as advertisement targeting and serving videos to diverse formats.

Nevertheless, Desai says, genomics will have to address the fundamental question of how much data it should generate. “The world has a limited capacity for data collection and analysis, and it should be used well. Because of the accessibility of sequencing, the explosive growth of the community has occurred in a largely decentralized fashion, which can't easily address questions like this," he says. Other resource-intensive disciplines, such as high-energy physics, are more centralized; they “require coordination and consensus for instrument design, data collection and sampling strategies”, he adds. But genomics data sets are more balkanized, despite the recent interest of cloud-computing companies in centrally storing large amounts of genomics data.

Coordinated approach

Astronomers and high-energy physicists process much of their raw data soon after collection and then discard them, which simplifies later steps such as distribution and analysis. But genomics does not yet have standards for converting raw sequence data into processed data.

The variety of analyses that biologists want to perform in genomics is also uniquely large, the authors write, and current methods for performing these analyses will not necessarily translate well as the volume of such data rises. For instance, comparing two genomes requires comparing two sets of genetic variants. “If you have a million genomes, you’re talking about a million-squared pairwise comparisons,” says Saurabh Sinha, a computer scientist at the UIUC and a co-author of the paper. “The algorithms for doing that are going to scale badly.”
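
[A short illustration of the quadratic scaling Sinha warns about: the number of distinct pairwise comparisons among n genomes grows as n*(n-1)/2, so a million genomes implies on the order of half a trillion comparisons. The numbers below are simple arithmetic, not results from the paper.]

def pairwise_comparisons(n_genomes):
    """Number of distinct genome pairs (n choose 2)."""
    return n_genomes * (n_genomes - 1) // 2

for n in (1_000, 100_000, 1_000_000):
    print("%9d genomes -> %18d pairwise comparisons" % (n, pairwise_comparisons(n)))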

Observational cosmologist Robert Brunner, also at the UIUC, says that, rather than comparing disciplines, he would have liked to have seen a call to arms for big-data problems that span disciplines and that could benefit from a coordinated approach — such as the relative dearth of career paths for computational specialists in science, and the need for specialized types of storage and analysis capacity that will not necessarily be met by industrial providers.

“Genomics poses some of the same challenges as astronomy, atmospheric science, crop science, particle physics and whatever big-data domain you want to think about,” Brunner says. “The real thing to do here is to say what are things in common that we can work together to solve.”

Nature doi:10.1038/nature.2015.17912

[During the summer of 2015 practically all Big IT companies of the world signed up for "Genomics turned Informatics" - originally heralded by LeRoy Hood in 2002. The line-up is marked by Microsoft also joining the fray, alongside Intel, Apple and a reorganized Google Genomics in Silicon Valley, all claiming a slice of the silicon pie. The analysis will show how the present challenge differs from previous disruptive science/technology endeavors; it needs much more cohesion than at any time in the history of basic-science breakthroughs translated into immediate applications - Andras_at_Pellionisz_dot_com]


Intricate DNA flips, swaps found in people with autism

By Jessica Wright

A surprisingly large proportion of people with autism have complex rearrangements of their chromosomes that were missed by conventional genetic screening, researchers reported 2 July in the American Journal of Human Genetics [1].

The study does not reveal whether these aberrations are more common in people with autism than in unaffected individuals. But similar chromosomal rearrangements that either duplicate or delete stretches of DNA, called copy number variations, are important contributors to autism as well as to other neuropsychiatric disorders. These more complex variations are likely to be no different, says lead researcher Michael Talkowski, assistant professor of neurology at Harvard University.

Talkowski’s team found intricate cases of molecular origami in which two duplications flank another type of structural variation, such as an inversion or deletion.

“This is going to become an important class of variation to study in autism, long term,” Talkowski says.

The finding is particularly important because current methods of genetic analysis are not equipped to detect this type of chromosomal scrambling. The go-to method for clinical testing — which compares chopped-up fragments of an individual’s DNA with a reference genome on a chip — can spot duplications or deletions. But this method cannot tell when a DNA sequence has been flipped or moved from one chromosomal location to another, for example.

Variations like this even confound genome-sequencing technologies. Last year, for example, researchers published the results of two massive projects that sequenced every gene in thousands of people with autism. But because these genetic jumbles often fall outside gene-coding regions, they remained unnoticed.

“The complexity of genomic variation is far beyond what current genomic sequencing can see,” says James Lupski, professor of molecular and human genetics at the Baylor College of Medicine in Houston, Texas, who was not involved in the study. “We don't have the analysis tools to see it, even though it's right there before our very eyes.”

Complex chromosomes:

Researchers have long had hints that complex variations exist, but they had no idea how prevalent they are. In 2012, using a method that provides a rough picture of the shape of chromosomes, Talkowski and his team found pieces of DNA swapped between chromosomes in 38 children who have either autism or another neurodevelopmental disorder [2].

Lupski’s team also found examples in which two duplications bracket a region that appears in triplicate [3]. Then last year, Talkowski and his colleagues reported one example of a chromosomal duplication that flanks a flipped, or inverted, section of DNA [4].

In the new study, the researchers looked at 259 individuals with autism and found that as many as 21, or 8 percent, harbor this type of duplication-inversion-duplication pattern. And a nearly equal number of individuals have other forms of rearrangement, such as deleted segments sandwiched between duplications.

The researchers were able to reveal these complex variants by sequencing each genome in its entirety. The traditional method chops up the genome into fragments that are about 100 bases long. When mapped back to a reference genome, however, these short fragments may miss small duplications or rearrangements.

The new method instead generates larger fragments, containing roughly 3,700 nucleotides apiece. Scientists then sequence the 100 nucleotides at the ends of each fragment. When mapped back to a reference genome, the large fragments reveal structural changes. For example, when a pair of sequenced ends brackets more DNA than is found in the reference sequence, that fragment may contain a duplication.

Because the approach generates multiple overlapping fragments, researchers also end up with about 100 pieces of sequence that include the junctions, or borders, of the rearranged fragments. The abundance of overlapping sequences provides significantly more detail than the standard method, which covers each nucleotide only a few times.
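
[The paragraphs above describe a paired-end strategy; a minimal sketch of that general logic follows - this is not the authors' actual pipeline, and the fragment size tolerance and example coordinates are illustrative assumptions. Both ends of each ~3,700-nucleotide fragment are mapped to the reference; the span they bracket and their relative orientation then hint at duplications, deletions or inversions.]

from dataclasses import dataclass

EXPECTED_FRAGMENT = 3700   # approximate fragment length reported in the article
TOLERANCE = 1000           # assumed allowance for normal fragment-size variation

@dataclass
class MappedEndPair:
    fragment_id: str
    ref_start: int              # reference position of the left sequenced end
    ref_end: int                # reference position of the right sequenced end
    expected_orientation: bool  # False if the ends map in a flipped orientation

def classify(pair):
    span = pair.ref_end - pair.ref_start
    if not pair.expected_orientation:
        return "possible inversion breakpoint"
    if span < EXPECTED_FRAGMENT - TOLERANCE:
        # the sample fragment brackets more DNA than the reference span suggests
        return "possible duplication/insertion"
    if span > EXPECTED_FRAGMENT + TOLERANCE:
        return "possible deletion"
    return "consistent with the reference"

examples = [
    MappedEndPair("frag1", 10_000, 13_650, True),     # near-normal span
    MappedEndPair("frag2", 50_000, 51_200, True),     # span too short
    MappedEndPair("frag3", 90_000, 99_000, True),     # span too long
    MappedEndPair("frag4", 120_000, 123_700, False),  # flipped orientation
]
for pair in examples:
    print(pair.fragment_id, "->", classify(pair))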

“The researchers have found a more novel way to sequence and dug in to an insane degree — it’s work that almost no one else would want to try to attempt, because it’s so difficult,” says Michael Ronemus, research assistant professor at Cold Spring Harbor Laboratory in New York, who was not involved in the study. “The findings give us a sense of how common these things might be in human genomes in general.”

Whether these rearrangements are important contributors to autism and neurodevelopmental disorders is still an open question — one that Talkowski and his colleagues are gearing up to address. The genomes they sequenced came from the Simons Simplex Collection, a database that includes the DNA of children with autism and their unaffected parents and siblings. (The collection is funded by the Simons Foundation, SFARI.org’s parent organization.)

The researchers are using their methods to sequence the genomes of the children’s relatives. This experiment will reveal whether complex variants are more common in people with autism than in unaffected family members.

Already, there are hints that the rearrangements contribute to autism risk in some individuals. Overall, the variants in the study duplicate 27 genes, introduce 3 mutations and in one case fuse two genes together. (The particular genes involved depend on where the mix-up occurs in the genome.) Sequencing studies have tied one of the duplicated genes, AMBP, to autism. And a regulatory gene that is disrupted by the rearrangement, AUTS2, also has strong links to the disorder.

News and Opinion articles on SFARI.org are editorially independent of the Simons Foundation.

References:

1: Brand H. et al. Am. J. Hum. Genet. 97, 170-176 (2015) PubMed

2: Talkowski M.E. et al. Cell 149, 525-537 (2012) PubMed

3: Carvalho C.M. et al. Nat. Genet. 43, 1074-1081 (2011) PubMed

4: Brand H. et al. Am. J. Hum. Genet. 95, 454-461 (2014) PubMed


The case for copy number variations in autism

By Meredith Wadman

17 March 2008

Following a series of papers in the past two years, what seems irrefutable is that copy number variations ― in which a particular stretch of DNA is either deleted or duplicated ― are important in autism [1,2].

Already, "CNVs are the most common cause of autism that we can identify today, by far," notes Arthur Beaudet, a geneticist at the Baylor College of Medicine in Houston.

What confronts researchers now is uncovering when and how CNVs influence autism. Do these variations cause the disease directly by altering key genes, or indirectly, in combination with other distant genes, or are they coincidental observations with no link to the disease?

The answer seems to be all of the above.

"In some cases these CNVs are causing autism; in some they are adding to its complexity; and in some they are incidental," says Stephen Scherer, director of the Center for Applied Genomics at The Hospital for Sick Children in Toronto. "We need to figure out which are which."

In February, Scherer published the latest CNV paper identifying 277 CNVs in 427 unrelated individuals with autism [3]. In 27 of these patients, the CNVs are de novo, meaning that they appear in children with autism, but not in their healthy parents.

Among the key findings in that paper are de novo CNVs on chromosome 16, at the same spot previously identified by a report published in January by Mark Daly and his colleagues.

Hot spots:

Different teams have documented a few of these 'hot spots' on the genome where CNVs are seen in up to one percent of people with autism ― and virtually never in those without it.

There are intriguing suggestions that CNVs uncovered at these hot spots may not be autism-specific. For example, three of the patients found to have a duplication on chromosome 16 in the January paper have been diagnosed with developmental delay and not autism.

A laundry list of other CNVs have each been identified in only a single individual with autism, making it difficult to tag them as a cause of the disease.

"[When] people publish big lists of regions, there's an implicit thing that if my kid has this, it's going to have autism," says Evan Eichler, a Howard Hughes Medical Institute investigator at the University of Washington in Seattle. But, "there's no proof," he notes.

To replicate lone findings in other individuals with autism, some researchers are trying to screen much larger samples of individuals with autism.

"Screening 5,000 families instead of 500 would really be of huge benefit," says Jonathan Sebat of the Cold Spring Harbor Laboratory in New York. Sebat and Mike Wigler propelled the field forward last year with a a high-profile list of de novo CNVs4. Their team is gearing up to scan 1,500 families with just one affected child ― in whom de novo mutations are more likely to turn up.

Scherer's group is screening the most promising CNVs from their February paper ― those they identified in two or more unrelated people, or that overlap with a gene already suspected in autism ― in a larger sample of nearly 1,000 patients.

Complex scenarios:

The team is drilling down to find smaller changes: deletions or duplications shorter than 1,000 bases in length. But the answers are unlikely to be simple.

For instance, Scherer found one 277 kilobase deletion at the tip of chromosome 22 in a girl with autism. Another team had reported in 2006 [5] that mutations in this region cause autism in several families by crippling one of the body's two copies of the gene coding for SHANK3, a protein that is crucial for healthy communication between brain cells. In the same girl, however, Scherer also found something new: a duplication of a chunk of genome on chromosome 20 that is five times as big as the deletion on chromosome 22.

If the chromosome 22 deletion hadn't already been documented ― and if Scherer's study hadn't resolved down to 277 kilobases ― it would have been easy to assume that the chromosome 20 duplication was entirely responsible for the girl's autism.

As it stands, however, "probably some of the genes that are being duplicated on chromosome 20 are adding complexity to her autism," Scherer says, noting that the girl's symptoms include epilepsy and abnormal physical features.

The fact that the same hot spot has been implicated in different cognitive disorders adds to the complexity. A given CNV "is not always associated just with autism," says Eichler. "That's what's messing with people's minds."

Eichler raises another issue that researchers need to resolve: nomenclature.

Copy number variations are a subset of a bigger category of mutations called structural variations. These include other changes such as inversions and translocations of large chunks of sequence, which don't lead to a net gain or loss in sequence as deletions and duplications do, but can still have significant consequences for cognitive function [6].

"Copy number is not as good a term," says Eichler. "Structural variation includes inversion and translocation, [and is] a much more encompassing term."

References:

1: Jacquemont M.L. et al. J. Med. Genet. 43, 843-849 (2006) PubMed

2: Weiss L.A. et al. N. Engl. J. Med. 358, 667-675 (2008) PubMed

3: Marshall C.R. et al. Am. J. Hum. Genet. 82, 477-488 (2008) PubMed

4: Sebat J. et al. Science 316, 445-449 (2007) PubMed

5: Durand C.M. et al. Nat. Genet. 39, 25-27 (2006) PubMed

6: Bonaglia M.C. et al. Am. J. Hum. Genet. 69, 261-268 (2001) PubMed

[A biophysicist to mathematicians: Please note that this article, holding the conclusion that "irrefutable is that copy number variations ― in which a particular stretch of DNA is either deleted or duplicated ― are important in autism", originated in 2008 - the proverbial 7 years ago. Biophysicists are overjoyed when the eminently measurable "repeats" are "irrefutably" linked to "mysterious" diseases, such as autism, cancer and a slew of auto-immune diseases; see summary in Pellionisz (2012), Pellionisz et al. (2013). Gaining a mathematical handle, indeed, is a major step towards software-enabling algorithms that engage vast computer power to unlock "genomic mysteries". However, mathematicians tend to drill down to the definition of any new mathematical-looking entity. In the seven years since the above article, CNV (Copy Number Variation) has not been mathematically defined in a generally accepted manner. Some "define" a "copy" as a string of 1,000 bases; others define a "copy" as one of 10,000, 100,000, or even 1,000,000 bases. Too many "definitions" is "no definition". FractoGene is based on the universally accepted fact that the human genome is replete with repeats of different lengths - and since Pellionisz (2009) the measurable characteristics of control versus diseased genomes are their Zipf-Mandelbrot-Fractal-Parabolic-Distribution-Curves. After the proverbial 7 years, we stand ready for deployment. Andras_at_Pellionisz_com]
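
[Purely as an illustration of the kind of measurement alluded to above - not the FractoGene method itself, whose specifics are covered by the cited IP - here is a generic Python sketch that bins repeat lengths and tabulates a rank-frequency curve; on a log-log scale a Zipf-Mandelbrot-like distribution appears approximately linear. The input list of repeat lengths is made up; real input would come from genome annotation.]

from collections import Counter
import math

def rank_frequency(repeat_lengths, bin_width=100):
    """Bin repeat lengths, then return (rank, count) pairs ordered by decreasing count."""
    bins = Counter(length // bin_width for length in repeat_lengths)
    counts = sorted(bins.values(), reverse=True)
    return list(enumerate(counts, start=1))

# hypothetical repeat lengths in bases (e.g. Alu-sized ~300 bp, LINE-sized ~6,000 bp)
lengths = [300, 6000, 320, 310, 6100, 1500, 290, 305, 330, 1450, 280, 315]
for rank, count in rank_frequency(lengths):
    print("rank %d: %d repeats (log10 rank %.2f, log10 count %.2f)"
          % (rank, count, math.log10(rank), math.log10(count)))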


The mystery of the instant noodle chromosomes

July 23, 2015

[Figure: an example of a hierarchically folded ("crumpled") globule. Credit: L. Nazarov]

A group of researchers from the Lomonosov Moscow State University tried to address one of the least understood issues in modern molecular biology, namely, how strands of DNA pack themselves into the cell nucleus. The scientists concluded that packing of the genome into a special state called a "fractal globule", apart from other known advantages of this state, allows the genetic machinery of the cell to operate with maximum speed due to comparatively rapid thermal diffusion. The article describing their results was published in Physical Review Letters, one of the most prestigious physics journals, with an impact factor of 7.8.

A fractal globule is a mathematical term. If you drop a long spinning fishing line on the floor, it will instantly curl up into such an unimaginably vile tangle that you will either have to unravel it for hours, or run to the store for a new one. An entangled state like this is an example of the so-called equilibrium globule. A fractal globule is a much more convenient state. Sticking to the fishing-line example, a fractal globule is a lump in which the line is never tied into a knot; instead it is just curled into a series of loops, with no loops tangled with each other. Such a structure - a set of free loops of different sizes - can be unraveled by simply pulling its two ends.

Due to this structure of loops, or crumples, which is reminiscent of an instant noodle block, the Soviet physicists Alexander Grosberg, Sergey Nechayev and Eugene Shakhnovich, who first predicted it back in 1988, named this structure the "crumpled globule". In recent years it has more often been called a fractal globule. On the one hand, this new name simply sounds more sophisticated and serious than "crumpled globule"; on the other hand, it fully reflects the properties of such a globule, because, like all fractals, its structure - in this case a set of loops of different sizes - repeats at both small and large scales.

For a long time the predicted crumpled globule state remained a purely theoretical object. However, the results of recent studies indicate that the chromosomes in the cell nucleus may be packed into a fractal globule. There is no consensus on this issue in the scientific community, but specialists working in this area are much intrigued by the possibility, and during the last 5-7 years there has been a flood of research on fractal globule packing of the genome.

The idea that chromatin (that is to say, a long strand consisting of DNA and attached proteins) in a cell nucleus may be organized as a fractal globule makes intuitive sense. Indeed, the chromatin is essentially a huge library containing all the hereditary information "known" to a cell, in particular, all the information about the synthesis of all the proteins which the organism is in principle able to produce. It seems natural that such a huge amount of data, which should be preserved and kept readable in a predictable way, should be somehow organized. It makes no sense to let strands carrying different pieces of information become entangled and knotted around each other; such an arrangement would be akin to gluing or tying together the volumes in a library: obviously, it makes the contents of the books much less accessible to a visitor.

In addition, it seems natural that a strand in a fractal globule has, in the absence of knots, a greater freedom of movement, which is important for genome function: gene transcription regulation requires that individual parts of the genome meet each other at the right time, "activating" the signal for reading the entire system and pointing to the place where reading should start. Moreover, all of this must happen quickly enough.

"According to the existing theories if the polymer chain is folded into a regular equilibrium globule, the mean square of the chain link thermal displacement increases with time as time to the power 0.25",—says Mikhail Tamm, a senior researcher at the Department of Polymer and Crystal at the Physics Faculty of the Lomonosov Moscow State University.

According to Mikhail Tamm, he and his colleagues managed to come up with a somewhat similar theory for a link of a polymer chain folded in a fractal globule.

"We were able to evaluate the thermal dynamics inherent to this type of conformation. The computer simulations we have conducted are in good agreement with our theoretical result",—says Mikhail Tamm.

Scientists from the Lomonosov Moscow State University developed a computer modeling algorithm that allows them to prepare a chromatin chain packed in the fractal globule state and to monitor the thermal processes taking place there. Importantly, they managed to model a very long chain, consisting of a quarter of a million units, the longest accessible so far.

According to Mikhail Tamm, chains in the modeling need to be long in order to get meaningful results, but modeling of long chains is usually hampered by the fact that it takes them a very long time to equilibrate, while without proper equilibration the results on thermal diffusion as well as other characteristics of the chains are unreliable.

The researchers were able to solve this problem by combining properly constructed software with access to CPU time on the MSU supercomputer "Lomonosov", and to assess the dynamics of thermal motion in a fractal globule. It was found that the links of a chromatin chain packed in a fractal globule move faster than in a comparable equilibrium one. Indeed, the mean square thermal displacement of a link no longer grows in proportion to time to the power 0.25, but as time to the power 0.4. This means that the movement of the links turns out to be much faster. It seems to be an additional argument in support of the fractal globule model of the chromatin.
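
[For reference, the two scaling laws quoted in this article can be written compactly (in LaTeX notation); the exponents are those stated above - roughly 0.25 for an equilibrium globule and 0.4 for a fractal globule:]

\langle \Delta r^2(t) \rangle \propto t^{\alpha}, \qquad
\alpha_{\mathrm{equilibrium\ globule}} \approx 0.25, \qquad
\alpha_{\mathrm{fractal\ globule}} \approx 0.4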

The researchers hope that their work will help to provide better insight in the functioning of the gene storage and expression machinery in the cell nucleus.

"From the point of view of dynamics, we would like to understand what are the built-in characteristic times, what processes can occur simply due to thermal motion, and which ones inevitably require the use of active elements to speed up the functioning of DNA",—summed up Mikhail Tamm.

More information: Physical Review Letters DOI: 10.1103/PhysRevLett.114.178102


Can ‘jumping genes’ cause cancer chaos?

Science blog, July 10, 2015, by Kat Arney

[Fig. 2. of the science article linked below]

Statistically speaking, your genome is mostly junk.

Less than two per cent of it is made up of actual genes – stretches of DNA carrying instructions that tell cells to make protein molecules. A larger (and hotly debated) proportion is given over to regulatory ‘control switches’, responsible for turning genes on and off at the right time and in the right place. There are also lots of sequences that are used to produce what’s known as ‘non-coding RNA’. And then there’s a whole lot that is just boring and repetitive.

As an example, the human genome is peppered with more than half a million copies of a repeated virus-like sequence called Line-1 (also known as L1).

Usually these L1 repeats just sit there, passively padding out our DNA. But a new study from our researchers in Cambridge suggests that they can start jumping around within the genome, potentially contributing to the genetic chaos underpinning oesophageal cancer.

Let’s take a closer look at these so-called ‘jumping genes’, and how they might be implicated in cancer.

Genes on the hop

The secret of L1’s success is that it’s a transposon – the more formal name for a jumping gene. These wandering elements were first discovered in plants by the remarkable Nobel prize-winning scientist Barbara McClintock, back in 1950. [As we know, Barbara McClintock's discovery was denied in the most unprofessional manner from 1950 until 1983, when she received her Nobel prize. 33 years (a full generation) was so bad that Dr. McClintock could consider herself lucky to have survived the systemic denial. The set-back to science from that denial was much longer than 33 years, however. Consider that science actually proceeded "to fight the wrong enemy", to borrow a phrase from Nobelist Jim Watson. How many people died miserable deaths over the negligence? Andras_at_Pellionisz_dot_com]

They’re only a few thousand DNA ‘letters’ long, and many of them are damaged. But intact L1 transposons contain all the instructions they need to hijack the cell’s molecular machinery and start moving.

Firstly, their genetic code is ‘read’ (through a process called transcription) to produce a molecule of RNA, containing instructions both for a set of molecular ‘scissors’ that can cut DNA and for an unusual enzyme called reverse transcriptase, which can turn RNA back into DNA.

Together these molecules act as genetic vandals. The scissors pick a random place in the genome and start cutting, while the L1 RNA settles itself into the resulting gap. Then the reverse transcriptase gets to work, converting the RNA into DNA and weaving the invader permanently into the fabric of the genome.

This cutting and pasting is a risky business. Although many transposons will land safely in a stretch of unimportant genomic junk without causing any problems, there’s a chance that one may hopscotch its way into an important gene or control region, affecting its function.

So given that cancers are driven by faulty genes, could hopping L1 elements be responsible for some of this genetic chaos?

In fact, this idea isn’t new.

More than two decades ago, scientists in Japan and the US published a paper looking at DNA from 150 bowel tumour samples. In one of them they discovered that an L1 transposon had jumped into a gene called APC, which normally acts as a ‘brake’ on tumour growth. This presumably caused so much damage that APC could no longer work properly, leading to cancer.

Because every L1 ‘hop’ is a unique event, it’s very difficult to detect them in normal cells in the body. But tumours grow from individual cells or small groups of cells, known as clones. So if a transposon jump happens early on during cancer development, it will probably be detectable in the DNA of most – if not all – of the cells in a tumour.

Thanks to advances in DNA sequencing technology, it’s now possible to detect these events – something that researchers are starting to do in a range of cancer types.

Jumping genes and oesophageal cancer

In the study published today, the Cambridge team – led by Rebecca Fitzgerald and Paul Edwards – analysed the genomes of 43 oesophageal tumour samples, gathered as part of an ongoing research project called the International Cancer Genome Consortium.

Surprisingly, they found new L1 insertions in around three quarters of the samples. On average there were around 100 jumps per tumour, although some had up to 700. And in some cases they had jumped into important ‘driver’ genes known to be involved in cancer.
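
[A toy calculation, not from the paper: if, as noted earlier in the article, less than two per cent of the genome is made up of actual genes, and insertions landed uniformly at random (a deliberately crude assumption, since real L1 insertion is not uniform), then a tumour with ~100 new jumps would be very likely to have at least one land in gene-coding territory. A minimal Python sketch of that arithmetic:]

GENE_FRACTION = 0.02   # "less than two per cent" of the genome is actual genes (see above)

def prob_at_least_one_gene_hit(n_jumps, gene_fraction=GENE_FRACTION):
    """Probability that at least one of n uniform random insertions lands in a gene."""
    return 1.0 - (1.0 - gene_fraction) ** n_jumps

for n_jumps in (1, 100, 700):   # single jump, average per tumour, maximum reported
    print("%3d jumps -> P(>=1 gene hit) = %.3f" % (n_jumps, prob_at_least_one_gene_hit(n_jumps)))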

The findings also have relevance for other researchers studying genetic mutations in cancer. Due to technical issues with analysing and interpreting genomic data, it looks like new L1 insertions are easily mistaken for other types of DNA damage, and may be much more widespread than previously thought.

So what are we to make of this discovery?

Finding evidence of widespread jumping genes doesn’t prove that they’re definitely involved in tumour growth, although it certainly looks very suspicious, and there are a lot of questions still to be answered.

For a start, we need to know more about how L1 jumps affect important genes, and whether they’re fuelling tumour growth.

It’s also unclear why these elements go on the move in cancer cells in such numbers: are they the cause of the genetic chaos, or does their mobilisation result from something else going awry as cancer develops for other reasons?

Looking more widely, and given that it seems to be particularly tricky to correctly identify new L1 jumps in DNA sequencing data, it’s still relatively unknown how widespread they are across many other types of cancer.

Finding the answers to these questions is vital. Rates of oesophageal cancer are rising, particularly among men, yet survival remains generally poor. As part of our research strategy we’ve highlighted the urgent need to change the outlook for people diagnosed with the disease, through research into understanding its origins, earlier diagnosis and more effective treatments.

By understanding what’s going on as L1 elements hopscotch their way across the genome, we’ll gain more insight into the genetic chaos that drives oesophageal cancer.

In turn, this could lead to new ideas for better ways to diagnose, treat and monitor the disease in future. Let’s jump to it.

Reference:

Paterson et al. Mobile element insertions are frequent in oesophageal adenocarcinomas and can mislead paired end sequencing analysis. BioMed Central Genomics. DOI: 10.1186/s12864-015-1685-z.

[It is sinking in deeper and deeper that Nonlinear Dynamics (Chaos & Fractals) lurk behind cancer. The Old School, with its "genes" and "Junk DNA", is proving brutally oversimplified. Hundreds of millions are dying of the most dreadful illness ("the disease of the genome", a.k.a. "cancer") - and some may still hide behind the denial that the sole cause of cancer is a handful of "genes" ("oncogenes") going wild. While the linked science article does not dip into the mathematics, its cited Fig. 2. shows an obviously "non-random" pattern - look at most of the evolving fractals. Andras_at_Pellionisz_dot_com]


Why you should share your genetic profile [the Noble Academic Dream and the Harsh Business Climate]

Fifteen years ago, a scrappy team of computer geeks at UC Santa Cruz assembled the first complete draft of the human genome from DNA data generated by a global consortium, giving humanity its first glimpse of our genetic heritage.

And then we did something the private corporation competing with us never would have done: We posted the draft on the Web, ensuring that our genetic blueprint would be free and accessible to everyone, forever.

This opened the door to global research and countless scientific breakthroughs that are transforming medicine. Today, every major medical center offers DNA sequencing tests; we can sequence anybody’s genome for about $1,000.

This is a game-changer. The era of precision medicine is upon us.

Consider the 21st century war on cancer: When a patient is diagnosed with cancer, her doctor compares her tumor’s genome to those in an enormous worldwide network of shared genomes, seeking matches that point to the best treatment strategies and the best outcomes.

This is not fantasy. UC Santa Cruz already manages more than 1 quadrillion bytes of cancer-genomics data — the world’s largest collection of genomic data from the most diverse collection of cancerous tumors ever assembled for general scientific use.

A multinational consortium of children’s hospitals is enabling members to compare each child’s cancer genome to this huge set of pediatric and adult cancer genomes. This is how we will decode cancer. It’s how we will tailor treatment to individual patients. It will save lives.

But this will come to pass only if we work together.

Competition among medical centers can make them reluctant to share data with each other. There are ethical and privacy considerations for patients. We need to overcome these challenges, build a secure network of data-sharing, and usher in the long-sought era of precision medicine.

Patients can help by asking their doctors and medical centers to share their genetic profiles — securely — with researchers around the world through the Global Alliance for Genomics and Health. The alliance has mobilized hundreds of institutions worldwide to build the definitive open-source Internet protocols for sharing genomic data. Our goal is to speed doctors’ ability to tailor treatments to the genetic profiles of individual patients.

The power of this data network will be only as strong as it is vast. The bigger the pool of samples, the greater the likelihood of finding molecular matches that benefit patients, as well as patterns that shed new light on how normal cells become malignant. Genomics can help us decode diseases from asthma and arthritis to Parkinson’s and schizophrenia.

Fifteen years ago, when we released that first sequence of our genome, humanity’s genetic signature became open-source. I remember the feelings of awe and trepidation I experienced that day, realizing that we were passing through a portal through which we could never return, uncertain exactly what it would mean for humanity.

Today, the meaning is clear. We are finally realizing the promise of genomics-driven precision medicine.

David Haussler is professor of biomolecular engineering, director of the Genomics Institute at UC Santa Cruz, and a co-founder of the Global Alliance for Genomics and Health.

[David Haussler, a longtime colleague and friend, is one of the towering Giants of Genome Informatics. His uniquely profuse school at the Genomics Institute at UC Santa Cruz, which has turned out perhaps the largest number of brilliant Ph.D. graduates (at Stanford, throughout Academia and some even in business), puts the University of California at Santa Cruz (and its parent organization, the University of California System) at a special juncture of history.

There is no doubt that his Academic Dream ("let's all pitch in for free") is the Noblest goal of a High Road. We all believe in dreams and wish good luck to Dave. Incidentally, the dream of Al Gore to create a "free for all Information Superhighway" (the Internet) was based on a similarly Noble Aspiration. I took part (at that time, at NASA Ames Research Center in Silicon Valley) in putting together a "Blue Book" that outlined the future of the Internet - on a $2 Bn government budget. It was Bill Clinton who released the Internet (originally a shoe-string defense information network, designed to survive even if the Soviets knocked out major information hubs like NYC, D.C., Chicago, or even Colorado Springs). The defense backbone of the Internet is now stronger than ever - but President Clinton's decision to release massive development to Private Industry exploded the $2 Bn National Budget to levels at which, a few days ago, the valuation of a single company (Google) catapulted by $17 Bn in a single day.

With "one thousand dollar sequencing, a million dollar interpretation", it is easy to do the math for the budget necessary to build a "1 million human DNA fully sequenced" for a genome-based "precision medicine".

Since the Private Sector (led by Craig Venter) announced such a plan even before the US Government floated a sentence in the "State of the Union", we are talking about a $2 Trillion ticket (one Trillion from Government, one from Private Industry, predictably with not much overlap). This makes sense, since the US Health Care System ("Sick Care System", rather, as Francis Collins branded it) is in the yearly $2 Trillion range. To effectively change it, one would require commensurate funds. The promise to ask $200 Million from Congress, even if granted, would amount to a mere hundredth of one percent of the needed expenditure.
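
As a back-of-the-envelope sketch (using only the column's own round figures and assuming two roughly parallel one-million-genome programs), the arithmetic behind the "$2 Trillion ticket" looks like this:

```python
# Back-of-the-envelope arithmetic for the "$2 Trillion ticket" above, using
# the figures quoted in this column: $1,000 sequencing plus a "million dollar
# interpretation" per genome, and two roughly parallel 1-million-genome
# programs (Government and Private Industry). Illustrative assumptions only.

genomes_per_program = 1_000_000
cost_per_genome = 1_000 + 1_000_000        # sequencing + interpretation, USD

per_program = genomes_per_program * cost_per_genome
print(f"one program : ${per_program / 1e12:.2f} trillion")      # ~ $1.00 trillion
print(f"two programs: ${2 * per_program / 1e12:.2f} trillion")  # ~ $2.00 trillion
```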

The University of California System, on a Sacramento budget and with severe restrictions in its Charter, may be unlikely to catch the tiger of Global Private Industry by the tail. One might argue that even the entire budget of NIH (a yearly $30 Bn) might be unrealistic for this colossal task. On the other hand, in the Private Sector the combined valuation of Apple, Google and Microsoft is already above the $1 Trillion range - and it is predicted that Google or Apple might reach that valuation alone.

Granted, Google presently spends on "Google Genomics" only "on the side" - at best. However, they have already clinched a business model (see in this column) under which for-profit users of Google Genomics (such as Big Pharma, which can easily afford it) are obligated to pay license fees to the Broad Institute for its proprietary software toolkit. (This infuses massive domain expertise into Google's art & technology of handling data-deluge of any kind.) It is interesting to note that the amount of genomic data at Google presently amounts to a mere 1/3 of "YouTube". As I predicted in my 2008 Google Tech Talk YouTube, the problem is NOT "Information Technology", but "Information Theory".

It is predicted herein that massive amounts will be paid for the extremely precious "genomic data along with medical profile" of people with cancer. Individuals might never get a penny of it directly, just like you use Google for "free" (you pay when you buy as a result of a "click-through"). The existing business model and cash-flow are worked out through the monstrous advertising business and the coupled "recommendation engines". With cancer, when you opt for genome-based therapy, you will get a "cut" (a virtual payment) if you "freely donate" your genomic data and health profile. Surely, while arriving at a deal with the advertising business is fairly straightforward, forging viable business models with the colossal Health Care System is a bit more involved. However, it has already started - see in this column how Google could work out a business deal even with the non-profit Broad Institute.

Working with Intellectual Property holdings is a breeze - Andras_at_Pellionisz_dot_com ].


Why James Watson says the ‘war on cancer’ is fighting the wrong enemy

Andrew Porterfield | May 26, 2015 | Genetic Literacy Project

Since President Richard Nixon asked Congress for $100 million to declare a “war on cancer” in 1971, hundreds of billions of dollars worldwide have been dedicated to research unlocking the mystery of the various forms of the disease, and how to treat it. But some suggest the war may be being fought on the wrong front.

To be sure, our understanding of genetics, cellular growth and cancers has grown exponentially. We know how cancer can be linked to mutations of genes that either encourage abnormal cell growth, or wreck the internal system of checks and balances that normally stymie that growth. We have narrowed the number of those genes down to several hundred. And, we know about genes that can halt abnormal development. We’re inserting them into cancerous cells in trials. Perhaps most significantly, we’re at a stage in which cancer specialists prefer to refer to cancers by genetic makeup, instead of by the traditional organ of first appearance.

But for many cancers, none of this is working. To be sure, overall cancer death rates have decreased, by 1.8 percent a year for men, and 1.4 percent a year for women in recent decades. But death rates from some cancers have remained stubbornly constant, while others have risen. Additionally, the National Cancer Institute estimates that the number of people with cancer will increase from 14 million to 22 million over the next 20 years.

The thing about war is: if you’re fighting and the enemy’s numbers are increasing (or at least not dropping very much), victory probably isn’t near.

A spreading, migrating issue

One issue might be the fact that primary tumors—cancers that first appear in the body, and are recognized by that location, be it the liver, lung, brain or colon—aren’t the reason most people die from cancer. Most people die because of cancer cells that break off from primary tumors, and settle in other parts of the body. This process of metastasis is responsible for 90 percent of cancer deaths. However, only 5 percent of European government cancer research funds, and 2 percent of U.S. cancer research funds, are earmarked for metastasis research.

So, for as much as we understand the genetics of primary, initial tumors, we know far less about the cancers that truly kill. And to James Watson–the molecular biologist, geneticist and zoologist, best known as one of the co-discoverers of the structure of DNA in 1953–that’s a central problem with cancer research. In a recent “manifesto” published in Open Biology, Watson asked for another war:

The now much-touted genome-based personal cancer therapies may turn out to be much less important tools for future medicine than the newspapers of today lead us to hope. Sending more government cancer monies towards innovative, anti-metastatic drug development to appropriate high-quality academic institutions would better use National Cancer Institute’s (NCI) monies than the large sums spent now testing drugs for which we have little hope of true breakthroughs. The biggest obstacle today to moving forward effectively towards a true war against cancer may, in fact, come from the inherently conservative nature of today’s cancer research establishments.

Watson, who shared a Nobel Prize with Francis Crick and Maurice Wilkins for discovering the structure of DNA, is well known for his pronouncements, which often have been labeled immodest, insulting and worse. But in this case, he also may be right.

What do other scientists say?

Mark Ptashne, a cancer researcher at Memorial Sloan Kettering Cancer Center in New York, agrees that money is being misspent on the wrong kind of drugs. Cancer cells are smart enough to work around the drugs. And cancer cells that have migrated and reformed (metastasized) may be quite different from their original parent tumor cells. Still other cancers have metastasized, but from where is unknown. Finally, most adult tumors in the brain are metastatic. This all means that even if a treatment is effective for a primary cancer, it likely won’t be for a metastatic one.

Metastasis is extremely complicated. Very slowly, institutions are starting to look more closely at metastasis, and provide more research funding for it. But, as the Memorial Sloan Kettering Cancer Center warned, it could take a long time before treatments arise. And it is probably going to take more than 2-5 percent of government cancer research funding.

Dig in for a long war.

Andrew Porterfield is a writer, editor and communications consultant for academic institutions, companies and non-profits in the life sciences. He is based in Camarillo, California. Follow @AMPorterfield on Twitter.

--

[Jim Watson has been on record, at the Royal Society, since at least 2013: "Still dominating NCI's big science budget is The Cancer Genome Atlas (TCGA) project, which by its very nature finds only cancer cell drivers as opposed to vulnerabilities (synthetic lethals). While I initially supported TCGA getting big monies, I no longer do so. Further 100 million dollar annual injections so spent are not likely to produce the truly breakthrough drugs that we now so desperately need." - Andras_at_Pellionisz_dot_com ]


National Cancer Institute: Fractal Geometry at Critical Juncture of Cancer Research

[Dr. Simon Rosenfeld at the National Cancer Institute is on record with an original open access text, reproduced below (note the free use, mirrored here). The single-author original manuscript, naming the fractalist Dr. Grizzi (in Italy) as "corresponding author" (sic), see mirror, was submitted to the Journal "Fractal Geometry and Nonlinear Analysis in Medicine and Biology" (with an Italian doctor who knows a little bit about fractals as "Editor in Chief" of the brand new Journal on "Fractals"). Once the original manuscript, with appropriate references on fractals, was accepted, the single author "changed his mind" and replaced the original submission (compare to the "mirror") with a compromised, truncated pdf paper reflecting on a "Critical Junction" of Cancer Research. Excerpts below from the open access text (carrying the running title of the review article "Fractal Geometry and Nonlinear Analysis in Medicine and Biology") demonstrate another endorsement of the FractoGene approach. FractoGene papers are linked here to the free full pdf files of the original peer-reviewed articles cited in the open access text. Note that 40 of the 50 original references point to "fractal".

Those involved (see above) have been duly notified of the potential IP-issues but, perhaps out of embarrassment and since all of them (presently) pursue non-profit academic activities, they apparently decided to turn down even the ethical minimum of a "request for Erratum". NIH and its National Cancer Institute bear responsibility for the ethical conduct of taxpayer-supported academic decisions at a declared "Critical Junction". Those already pursuing for-profit activities (or with an ambition to do so) should henceforth be on "Notice that IP-infringements are monitored and proper consequences will ensue". Andras_at_Pellionisz_dot_com]

Critical Junction: Nonlinear Dynamics, Swarm Intelligence and Cancer Research

Simon Rosenfeld

National Cancer Institute, Division of Cancer Prevention, USA

E-mail : sr212a@nih.gov

DOI: 10.15761/FGNAMB.1000103

Complex biological systems manifest a large variety of emergent phenomena among which prominent roles belong to self-organization and swarm intelligence. Despite astoundingly wide repertoire of observed forms, there are comparatively simple rules governing evolution of large systems towards self-organization, in general, and towards swarm intelligence, in particular. In this work, an attempt is made to outline general guiding principles in exploration of a wide range of seemingly dissimilar phenomena observed in large communities of individuals devoid of any personal intelligence and interacting with each other through simple stimulus-response rules. Mathematically, these guiding principles are well captured by the Global Consensus Theorem (GCT) allowing for unified approach to such diverse systems as biological networks, communities of social insects, robotic communities, microbial communities, communities of somatic cells, to social networks, and to many other systems. The GCT provides a conceptual basis for understanding the emergent phenomena of self-organization occurring in large communities without involvement of a supervisory authority, without system-wide informational infrastructure, and without mapping of general plan of action onto cognitive/behavioral faculties of its individual members. Cancer onset and proliferation serves as an important example of application of these conceptual approaches. A growing body of evidence confirms the premise that disruption of quorum sensing, an important aspect of swarm intelligence, plays a key role in carcinogenesis. Other aspects of swarm intelligence, such as collective memory, adaptivity (a form of learning from experience) and ability for self-repair are the key for understanding biological robustness and acquired chemoresistance. Yet other aspects of swarm intelligence, such as division of labor and competitive differentiation, may be helpful in understanding of cancer compartmentalization and tumor heterogeneity.

Key words

global consensus theorem, swarm intelligence, biomolecular networks, carcinogenesis

Introduction

Swarm intelligence of social insects and microbial colonies vividly demonstrates how far the evolution may progress having at its disposal only simple rules of interaction between unsophisticated individuals. The Lotka-Volterra (LVS) family of mathematical models, being among the first models capable of describing the very complex systems with very simple rules of interactions, demonstrates how complex may be the behaviors of even a simple food web consisting of only one predator and one prey. The repertoire of behaviors of multispecies populations is virtually unlimited. In particular, it has been shown that swarm intelligence may originate from rather mundane reasons rooted in simple rules of interactions between these entities. The goal of this paper is to provide a brief overview of properties of the multidimensional nonlinear dynamical systems which have a potential of producing the phenomenon of self-organized behavior and manifesting themselves as swarm intelligence.

Swarm intelligence, definitions and manifestations

By definition, swarm intelligence is the organized behavior of large communities without global organizer and without mapping the global behavior onto the cognitive/behavioral abilities of the individual members of the community [1]. It should be emphasized that what is called here communities are not necessarily the communities of living entities like bee hives or ant hills or microbial colonies. Moreover, complexity of collective behavior of the community as a whole does not require its individual members to have any extensive analytical tools or even memory on their own. The key prerequisite for the possibility of community-wide self-organization is that individual members may interact following the stimulus-response rules. Large-scale community-wide behaviors and self-organized modalities are completely determined by these low-level local interactions. There are a number of closely related but distinctly different aspects of swarm intelligence. These are collective memory, adaptivity, division of labor, cooperation, sensing of environment (a.k.a., stigmergy) and quorum sensing. All these aspects are the emergent properties resulting from local member-to-member interactions without a general plan of action, without a supervisory authority, and without a system-wide information infrastructure. From the mathematical standpoint, a large system of locally interacting units is a dynamic network governed by the laws of nonlinear dynamics. The following question, therefore, is in order: what exactly are the laws of local interactions leading to emergence of complex behaviors which are referred to as swarm intelligence?

Mechanistic origins of self-organization and swarm intelligence

A comparatively simple, and abundantly well studied, example of a system manifesting the property of swarm intelligence is the neural network (NN) [2,3]. NN functionality originates from and closely mimics the neuronal networks constituting the nervous systems of higher organisms. Among the analytical tools collectively known as artificial intelligence, NNs retain the leading positions in a variety of computational tasks; among them are pattern recognition and classification, short- and long-term storage of information, prediction and decision making, optimization, and others. Due to the fundamental property of being universal approximators, NNs are capable, in principle, of representing any nonlinear dynamical system. Such systems may possess a number of asymptotically stable attractors. This means that starting with a large variety of initial conditions belonging to a certain basin of attraction the system may evolve towards one of several well-defined stable manifolds. This process is in fact nothing other than classification of initial states, which occurs in the system without any organizational force or supervisory authority.
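
To make this concrete, here is a minimal illustrative sketch (not part of the paper; patterns and parameters are arbitrary): a tiny Hopfield-style network stores two patterns as attractors, and a purely local update rule pulls a noisy initial state back to the nearest stored pattern, i.e. "classification of initial states" with no supervisory authority.

```python
import numpy as np

# Minimal Hopfield-style attractor network (illustrative sketch only).
# Two patterns are stored via a Hebbian rule; asynchronous local updates
# drive an initial state toward one of the stored attractors.

patterns = np.array([
    [ 1,  1,  1, -1, -1, -1,  1, -1],
    [-1,  1, -1,  1, -1,  1, -1,  1],
])

W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0.0)          # no self-connections

def relax(state, sweeps=20, seed=0):
    """Repeated local threshold updates until the state settles."""
    rng = np.random.default_rng(seed)
    s = state.copy()
    for _ in range(sweeps):
        for i in rng.permutation(len(s)):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

noisy = patterns[0].copy()
noisy[0] *= -1                    # perturb one unit of the first pattern
print(relax(noisy))               # recovers the first stored pattern
```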

The Lotka-Volterra Systems (LVS) form a large class of dynamical systems described by ordinary differential equations with quadratic nonlinearities [4]. Originally inspired by ecology and population dynamics, the LVS theory largely retains their flavor and terminology. In particular, the independent variables are assumed to be the population levels of the corresponding species, and the coefficients describe the rates of reproduction and extinction. Interactions between the species may be mutualistic (cooperative) or antagonistic (competitive). This terminology evokes dramatic visions of struggle for survival, either individually or collectively, so frequently observed in the world of living entities. However, from the mathematical standpoint, there is nothing dramatic in the LVS dynamics: all the systems describable by LVS, whether belonging to biological, physical, technological, social, or financial realms, will have similar dynamical behaviors and analogous emergent properties. For this reason, and in order to avoid direct ecological connotations, the variables in LVS are often called quasi-species, thus emphasizing that the actual nature of these species is of secondary importance.
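
For readers who want the quadratic pairwise interactions spelled out, here is a hedged sketch (not from reference [4]; parameter values are arbitrary): the classic two-species predator-prey Lotka-Volterra system, integrated with a simple Euler step.

```python
import numpy as np

# Classic two-species Lotka-Volterra system (illustrative parameters only):
#   dx/dt = x * (alpha - beta * y)     prey: growth minus predation
#   dy/dt = y * (delta * x - gamma)    predator: conversion minus death
# Only pairwise (quadratic) interaction terms appear, as described above.

def lotka_volterra(x0, y0, alpha=1.0, beta=0.5, delta=0.2, gamma=0.8,
                   dt=0.001, steps=20000):
    x, y = x0, y0
    trajectory = []
    for _ in range(steps):
        dx = x * (alpha - beta * y)
        dy = y * (delta * x - gamma)
        x, y = x + dt * dx, y + dt * dy   # crude Euler step, fine for a sketch
        trajectory.append((x, y))
    return np.array(trajectory)

traj = lotka_volterra(2.0, 1.0)
print(traj[-1])   # the two "quasi-species" keep cycling rather than settling
```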

A fundamental question pertaining to competitive LVS is the question of dynamic stability. In the context of population dynamics, stability means that, despite the fact that all the species are struggling with each other, they may nevertheless come to some sort of peaceful coexistence or consensus regarding the distribution of limited resources. Since nothing except the pair-wise interactions is included in LVS dynamics, this consensus cannot be a result of collective decision making or planning. The challenge and fundamental importance of the question of stability have been articulated by S. Grossberg: "The following problem, in one form or another, has intrigued philosophers and scientists for hundred of years: How do arbitrarily many individuals, populations, or states, each obeying unique and personal laws, ever succeed in harmoniously interacting with each other to form some sort of stable society, or collective mode of behavior? Otherwise expressed, if each individual obeys complex laws, and is ignorant of other individuals except via locally received signals, how is social chaos averted? How can local ignorance and global order, or consensus, be reconciled? ...What design constrains must be imposed on a system of competing populations in order that it be able to generate a global limiting pattern, or decision, in response to arbitrary initial data?...How complicated can a system be and still generate order? [5]"

The questions outlined above have been successfully resolved within a wide class of competitive nonlinear dynamical systems, with NNs and LVS being their particular cases. In order to avoid cumbersome mathematical notation and explicit definitions within this paper we will call these systems G-systems. The fundamental Global Consensus Theorem (GCT), proved by S. Grossberg in a series of publications [5-8], claims that within the class of G-systems the tendency to self-organization is rooted in the fairly simple nature of things: any complex system whose unstoppable growth is inhibited by progressively dwindling resources will end up with some sort of self-structuring and consensus regarding the distribution of resources. Generality and simplicity of the G-systems dynamics guarantees its applicability to a very wide class of natural, technological and societal phenomena. Transition from the dominance of one quasi-species to another may appear as a struggle for survival, and it is indeed an existential struggle in the predator-prey food chains. Although the metaphor of struggle for survival is widely used beyond the world of living entities, it is obvious from the GCT that the reasons for competitive dynamics leading to consensus may be much simpler and may have nothing to do with personal motivation of a living entity to survive. In this context, it is not out of place to recall that the co-founder of LVS, Alfred Lotka, pointed out that natural selection should be approached more like a physical principle subject to treatment by the methods of statistical mechanics, rather than as a struggle of living creatures motivated by the desire to survive [9].
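
As a complementary sketch (again an illustration not taken from the cited works, with arbitrary parameters), a competitive Lotka-Volterra community sharing a dwindling resource budget settles into a stable distribution of abundances - a toy picture of the "consensus" the GCT describes, reached with nothing but pairwise interactions.

```python
import numpy as np

# Competitive Lotka-Volterra community (illustrative sketch): each quasi-species
# grows but is inhibited by all others through a shared resource budget.
#   dx_i/dt = x_i * (r_i - sum_j A[i, j] * x_j)
# With self-limitation stronger than cross-competition, the community converges
# to a fixed coexistence pattern without any supervisory authority.

n = 5
rng = np.random.default_rng(0)
r = rng.uniform(0.8, 1.0, n)                       # intrinsic growth rates
A = 0.3 * np.ones((n, n)) + 0.7 * np.eye(n)        # competition matrix

x = rng.uniform(0.1, 1.0, n)                       # arbitrary initial abundances
dt = 0.01
for _ in range(50_000):
    x += dt * x * (r - A @ x)                      # only pairwise interactions

print(np.round(x, 3))   # a stable "consensus" distribution of the resources
```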

The GCT provides a deep insight into the seemingly miraculous property of complex hierarchical systems to be self-organized at each level without supervisory authority, without informational infrastructure, without necessity for its units to have understanding of the process as a whole, and without invoking the metaphor of struggle for survival. The GCT also provides clues on how such a complex emergent phenomenon as swarm intelligence may appear in systems consisting of only unsophisticated individuals devoid of any personal intelligence and interacting with each other only through simple pair-wise stimulus-response rules.

Swarm intelligence in G-systems

Chemical networks

Perhaps the simplest G-system fully satisfying the provisions of the GCT is a system of concurrent chemical reactions usually called a chemical network. It is not, however, immediately evident whether or not chemical constituents interacting through stimulus-response rules (chemical reactions) may form a network capable of solving intelligent tasks such as pattern recognition or computation. In this vein, the simplest model of a chemical neuron has been proposed by Okamoto et al. [10]. The possibility of connecting the Okamoto-type chemical neurons into a network has been analyzed in depth in a series of publications by Hjelmfelt and Ross [11-14]. In particular, in Hjelmfelt et al. [11,14] the feasibility of a chemical finite-state computing machine has been demonstrated; such a machine would include the most fundamental elements of traditional electronic computers, namely binary decoder, binary adder, stack of memory and internal clock. The possibility of a programmable chemical NN capable of storing patterns and solving pattern recognition problems has been proved in Hjelmfelt et al. [12]. At last, an ultimate computer science conjecture – whether or not a Turing Machine can be constructed from oscillating chemical reactions – has also been resolved affirmatively [13].
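
A hedged toy version of such a "chemical neuron" (a sketch of the general idea, not the Okamoto or Hjelmfelt-Ross constructions themselves): an output species is produced only when the combined input concentrations pass a threshold (steep Hill kinetics) and decays at a constant rate, so at steady state the unit behaves like a chemical AND gate.

```python
# Toy "chemical neuron" (illustrative sketch only, not the published models):
#   dC/dt = vmax * u^h / (K^h + u^h) - decay * C,   with u = [A] + [B].
# The steady-state output C* = production / decay switches sharply once the
# combined input passes the threshold K, i.e. a chemical thresholding unit.

def steady_output(a, b, K=1.5, h=8, vmax=1.0, decay=0.5):
    u = a + b                                        # combined "stimulus"
    production = vmax * u**h / (K**h + u**h)
    return production / decay                        # set dC/dt = 0

for a in (0.0, 1.0):
    for b in (0.0, 1.0):
        c = steady_output(a, b)
        print(f"inputs ({a}, {b}) -> output {c:.2f} ({'HIGH' if c > 1.0 else 'low'})")
```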

A systematic study of biochemical information-processing systems has been reported in [15]. A detailed comparison of computational capabilities of NNs and those of biochemical networks suggests the idea that these capabilities have very much in common. In a more general context, it should be noted that any system representable through an NN may be considered a version of a Turing Machine. And an even more powerful statement is also valid: any function computable by a Turing Machine can also be computed by an appropriately parameterized processor net constructed of biochemical entities [16]. In practical terms, all this means that each biochemical network may be thought of as an entity performing a certain computation and may be formally represented through an appropriately constructed Turing Machine. And conversely, any function computable by a Turing Machine may also be computed by a specially designed biochemical network.

The famous question posed by Alan Turing in his groundbreaking paper "Can a machine think?" [17] continues to be a highly disputed topic in computer science, cognitive science and philosophy [18]. However, given the convincingly demonstrated equivalence between the NNs and Turing Machines, between the chemical networks and NNs, between NNs and population dynamics, etc., it seems reasonable to pose similar questions: "Can a chemical network think?"; "Can a population of dumb individuals, as a whole, think?"; "Can a microbial community think?"; "Can a community of cells think?". From the discussion above, it is reasonable to infer that a swarm of locally interacting individuals lacking any personal intelligence can think at least in the same sense and at the same level of intelligence as Turing Machines and computers.

Robotic communities

A community of inanimate robots, mutually interacting only through stimulus-response rules but lacking any analytical tools for a premeditated collective strategy, is well qualified to be such a community of individuals interacting in accordance with LVS rules and satisfying the provisions of the GCT. Proof of the principle that these communities may possess the elements of self-organization and swarm intelligence has been convincingly demonstrated in [19,20]. In these works, a group of memoryless micro-robots was programmed to mimic individual behaviors of cockroaches. The micro-robots, however, were not hard-wired to have any analytical tools to gather information regarding behaviors of other robots or regarding the general plan of action. It has been shown experimentally that this community is capable of reproducing some patterns of collective behavior similar to those of real cockroaches. Division of labor in communities of robots has been studied in [21]. A comprehensive review of various aspects of swarm intelligence in communities of robots and biological entities is given in [22]. Cooperative behaviors in communities of autonomous mobile robots have been reviewed in [23].

Maltzahn et al. [24] constructed a system in which the synthetic biological and nanotechnological components communicate in vivo to enhance disease diagnostics and delivery of therapeutic agents. In these experiments, the swarms typically consisted of about one trillion nanoparticles. It has been shown “that communicating nanoparticle systems can be composed of multiple types of signaling and receiving modules, can transmit information through multiple molecular pathways, can operate autonomously and can target over 40 times higher doses of chemotherapeutics to tumors than non-communicating controls.”

Microbial communities

Highly sophisticated forms of swarm intelligence have been observed in microbial communities. These communities represent a perfect example of species in competition governed by the Lotka-Volterra dynamics [25-27]. Social organization of bacterial communities has been extensively analyzed in [28]. Bacterial communities are found to possess a form of inheritable collective memory and the ability to maintain self-identity. They are also capable of collective decision-making, purposeful alterations of the colony structures, and recognition and identification of other colonies. In essence, bacterial communities as a whole may be seen as multicellular organisms with loosely organized cells and a sophisticated form of intelligence [29].

Communities of somatic cells

From the perspective of Lotka-Volterra dynamics, somatic cells are just another example of locally interacting units possessing, as a community, the emergent property of swarm intelligence. As mentioned in [29], “Bacteria invented the rules for cellular organization.” However, in contrast to microbial communities which have a freedom of spatial restructuring, self-organization in a community of somatic cells is mostly manifested through collectively shaping their internal phenotypic traits [30]. All this means that a community of somatic cells acts as a self-sufficient intelligent superorganism capable of taking care of its own survival through cooperative manipulation of intra-cellular states.

Disruption of quorum sensing as a prerequisite for triggering carcinogenesis

Carcinogenesis is a complex systemic phenomenon encompassing the entire hierarchy of biological organization. Great emphasis in carcinogenesis is placed on the role of disruption of cell-to-cell signaling. With destruction of signaling pathways, not only is the normal regulation of individual cellular processes damaged, but a blow is also dealt to the, so to speak, mental capabilities of the community as a whole. Its collective memory is wiped out or distorted, the customary division of labor between subpopulations is shifted towards aberrant modalities, and community-wide self-defensive mechanisms are weakened or broken. In summary, the community as a whole falls into a state of disarray and amnesia in which it is feverishly searching for new ways towards survival. These processes in turn cause shifts in expression profiles and metabolic dynamics and eventually penetrate to the level of DNA, causing multiple mutations.

Quorum sensing (QS) is an important aspect of swarm intelligence. Agur et al. [31] provide a brief review of relevant biological facts and propose a mathematical model of QS boiled down to its simplest mechanistic elements. They arrive at the important insight "that cancer initiation is driven by disruption of the QS mechanism, with genetic mutations being only a side-effect of excessive proliferation." Detailed analysis of societal interactions and quorum sensing mechanisms in ovarian cancer metastases is given in [32]. These authors present compelling arguments supporting the view that QS "provides a unified and testable model for many long-observed behaviors of metastatic cells."
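
To make the idea tangible, here is a hedged toy model (a sketch of the concept only, not the model of Agur et al. [31]): cells secrete a signal proportional to their number and, when sensing is intact, stop proliferating once the signal crosses a quorum threshold; with sensing disrupted, the same population keeps expanding.

```python
# Toy quorum-sensing model (illustrative only): the quorum signal is taken to be
# proportional to population size; intact cells stop dividing once the signal
# reaches a threshold, while "QS-disrupted" cells ignore it and keep growing.

def grow(quorum_intact, r=0.1, threshold=100.0, steps=200, n0=1.0):
    n = n0
    for _ in range(steps):
        signal = n                                  # signal ~ cell number
        if quorum_intact and signal >= threshold:
            break                                   # quorum reached: stop proliferating
        n += r * n                                  # otherwise grow by r per step
    return n

print(f"intact QS    -> {grow(True):,.0f} cells")   # plateaus near the threshold
print(f"disrupted QS -> {grow(False):,.0f} cells")  # expands without restraint
```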

Swarm intelligence is a key to understanding acquired chemoresistance

Numerous observations confirm the notion that a cancer tumor may be regarded as a society of cells possessing the faculty of swarm intelligence. One of the important aspects of swarm intelligence is adaptivity which is a form of learning from experience.

It has also long been recognized that cancer cells, after the fleeting inhibitory effect of a chemotherapeutic agent, may develop the capability of resistance to treatment. These capabilities, termed acquired resistance, are manifestations of the robustness of cancer cells, both individually and collectively. In the literature, in attempts to conceptualize this complex phenomenon, there is a reductionist tendency to associate adaptivity with multiple layers of negative feedback loops [33]. It is obvious, however, that the entire system comprising myriads of such loops cannot succeed in fulfilling its task unless these individual controls are working coherently, sharing a common goal. The observed astounding coherence between all the innumerable elementary processes comprising tumor adaptivity allows one to see the tumor as a separate organ [34,35] and to talk about its defensive tactics [36]. Fundamentally, such capabilities are nothing other than manifestations of swarm intelligence in the community of tumor cells. It is, therefore, admissible to hypothesize that, when developing therapeutic strategies against cancer, one needs to recognize that the enemy is intelligent, capable of discerning the weapon applied against it and mounting a counteroffensive.

Conclusion

Complex hierarchy of perfectly organized entities is a hallmark of biological systems. Attempts to understand why's and how's of this organization lead inquiring minds to various levels of abstraction and depths of interpretation. In this paper, we have attempted to convey the notion that there exists a set of comparatively simple and universal laws of nonlinear dynamics which shape the entire biological edifice as well as all of its compartments. These laws are equally applicable to individual cells, as well as to biochemical networks within the cells, as well as to the societies of cells, as well as to the societies other than the societies of cells, as well as to the populations of individual organisms. These laws are blind, automatic, and universal; they do not require existence of a supervisory authority, system-wide informational infrastructure or some sort of premeditated intelligent design. In large populations of individuals interacting only by stimulus-response rules, these laws generate a large variety of emergent phenomena with self-organization and swarm intelligence being their natural manifestations.

References [bold highlights by AJP]

1. Mandelbrot B (1983) The Fractal Geometry of Nature. Freeman, San Francisco.

2. Leonardo da Vinci. Trattato della Pittura. ROMA MDCCCXVII. Nella Stamperia DE ROMANIS. A cura di Guglielmo Manzi Bibliotecario della Libreria Barberina.

3. Mandelbrot B (1977) Fractals, M.B. Form, Chance and Dimension. W.H. Freeman & Company, San Francisco.

4. Belaubre G (2006) L’irruption des Géométries Fractales dans les Sciences.Editions Académie Européenne Interdisciplinaire des Sciences (AEIS), Paris.

5. Loud AV (1968) A quantitative stereological description of the ultrastructure of normal rat liver parenchymal cells. J Cell Biol 37: 27-46. [Crossref]

6. Weibel ER, Stäubli W, Gnägi HR, Hess FA (1969) Correlated morphometric and biochemical studies on the liver cell. I. Morphometric model, stereologic methods, and normal morphometric data for rat liver. J Cell Biol 42: 68-91. [Crossref]

7. Mandelbrot B (1967) How long is the coast of britain? Statistical self-similarity and fractional dimension. Science 156: 636-638. [Crossref]

8. Paumgartner D, Losa G, Weibel ER (1981) Resolution effect on the stereological estimation of surface and volume and its interpretation in terms of fractal dimensions. J Microsc 121: 51-63. [Crossref]

9. Gehr P, Bachofen M, Weibel ER (1978) The normal human lung: ultrastructure and morphometric estimation of diffusion capacity. Respir Physiol 32: 121-140. [Crossref]

10. Rigaut JP (1984) An empirical formulation relating boundary length to resolution in specimens showing ‘‘non-ideally fractal’’ dimensions. J Microsc 13: 41–54.

11. Rigaut JP (1989) Fractals in Biological Image Analysis and Vision. In: Losa GA, Merlini D (Eds) Gli Oggetti Frattali in Astrofisica, Biologia, Fisica e Matematica, Edizioni Cerfim, Locarno, pp. 111–145.

12. Nonnenmacher TF, Baumann G, Barth A, Losa GA (1994) Digital image analysis of self-similar cell profiles. Int J Biomed Comput 37: 131-138. [Crossref]

13. Landini G, Rigaut JP (1997) A method for estimating the dimension of asymptotic fractal sets. Bioimaging 5: 65–70.

14. Dollinger JW, Metzler R, Nonnenmacher TF (1998) Bi-asymptotic fractals: fractals between lower and upper bounds. J Phys A Math Gen 31: 3839–3847.

15. Bizzarri M, Pasqualato A, Cucina A, Pasta V (2013) Physical forces and non linear dynamics mould fractal cell shape. Quantitative Morphological parameters and cell phenotype. Histol Histopathol 28: 155-174.

16. Losa GA, Nonnenmacher TF (1996) Self-similarity and fractal irregularity in pathologic tissues. Mod Pathol 9: 174-182. [Crossref]

17. Weibel ER (1991) Fractal geometry: a design principle for living organisms. Am J Physiol 261: L361-369. [Crossref]

18. Losa GA (2012) Fractals in Biology and Medicine. In: Meyers R (Ed.), Encyclopedia of Molecular Cell Biology and Molecular Medicine, Wiley-VCH Verlag, Berlin.

19. Santoro R, Marinelli F, Turchetti G, et al. (2002) Fractal analysis of chromatin during apoptosis. In: Losa GA, Merlini D, Nonnenmacher TF, Weibel ER (Eds.), Fractals in Biology and Medicine. Basel, Switzerland. Birkhäuser Press 3: 220-225.

20. Bianciardi G, Miracco C, Santi MD et al. (2002) Fractal dimension of lymphocytic nuclear membrane in Mycosis fungoides and chronic dermatitis. In: Losa GA, Merlini D, Nonnenmacher TF, Weibel ER (Eds.), Fractals in Biology and Medicine. Basel, Switzerland, Birkhäuser Press.

21. Losa GA, Baumann G, Nonnenmacher TF (1992) Fractal dimension of pericellular membranes in human lymphocytes and lymphoblastic leukemia cells. Pathol Res Pract 188: 680-686. [Crossref]

22. Mashiah A, Wolach O, Sandbank J, Uzie IO, Raanani P, et al. (2008) Lymphoma and leukemia cells possess fractal dimensions that correlate with their interpretation in terms of fractal biological features. Acta Haematol 119: 142-150. [Crossref]

23. Brú A, Albertos S, Luis Subiza J, García-Asenjo JL, Brú I (2003) The universal dynamics of tumor growth. Biophys J 85: 2948-2961. [Crossref]

24. Baish JW, Jain RK (2000) Fractals and cancer. Cancer Res 60: 3683–3688.

25. Tambasco M, Magliocco AM (2008) Relationship between tumor grade and computed architectural complexity in breast cancer specimens. Hum Pathol 39: 740-746. [Crossref]

26. Sharifi-Salamatian V, Pesquet-Popescu B, Simony-Lafontaine J, Rigaut JP (2004) Index for spatial heterogeneity in breast cancer. J Microsc 216: 110-122. [Crossref]

27. Losa GA, Graber R, Baumann G, Nonnenmacher TF (1998) Steroid hormones modify nuclear heterochromatin structure and plasma membrane enzyme of MCF-7 Cells. A combined fractal, electron microscopical and enzymatic analysis. Eur J Histochem 42: 1-9. [Crossref]

28. Landini G, Hirayama Y, Li TJ, Kitano M (2000) Increased fractal complexity of the epithelial-connective tissue interface in the tongue of 4NQO-treated rats. Pathol Res Pract 196: 251-258. [Crossref]

29. Roy HK, Iversen P, Hart J, Liu Y, Koetsier JL, et al. (2004) Down-regulation of SNAIL suppresses MIN mouse tumorigenesis: modulation of apoptosis, proliferation, and fractal dimension. Mol Cancer Ther 3: 1159-1165. [Crossref]

30. Losa GA, De Vico G, Cataldi M, et al. (2009) Contribution of connective and epithelial tissue components to the morphologic organization of canine trichoblastoma. Connect Tissue Res 50: 28-29.

31. Li H, Giger ML, Olopade OI, Lan L (2007) Fractal analysis of mammographic parenchymal patterns in breast cancer risk assessment. Acad Radiol 14: 513-521. [Crossref]

32. Rangayyan RM, Nguyen TM (2007) Fractal analysis of contours of breast masses in mammograms. J Digit Imaging 20: 223-237. [Crossref]

33. De Felipe J (2011) The evolution of the brain, the human nature of cortical circuits, and intellectual creativity. Front Neuroanat 5: 1-16. [Crossref]

34. King RD, Brown B, Hwang M, Jeon T, George AT; Alzheimer's Disease Neuroimaging Initiative (2010) Fractal dimension analysis of the cortical ribbon in mild Alzheimer's disease. Neuroimage 53: 471-479. [Crossref]

35. Werner G (2010) Fractals in the nervous system: conceptual implications for theoretical neuroscience. Front Physiol 1: 15. [Crossref]

36. Losa GA (2014) On the Fractal Design in Human Brain and Nervous Tissue. Applied Mathematics 5: 1725-1732.

37. Smith TG Jr, Marks WB, Lange GD, Sheriff WH Jr, Neale EA (1989) A fractal analysis of cell images. J Neurosci Methods 27: 173-180. [Crossref]

38. Smith TG Jr, Bejar TN (1994) Comparative fractal analysis of cultured glia derived from optic nerve and brain demonstrated different rates of morphological differentiation. Brain Res 634: 181–190.

39. Smith TG Jr, Lange GD, Marks WB (1996) Fractal methods and results in cellular morphology--dimensions, lacunarity and multifractals. J Neurosci Methods 69: 123-136. [Crossref]

40. Smith TG (1994) A Fractal Analysis of Morphological Differentiation of Spinal Cord Neurons in Cell Culture. In: Losa et al., (Eds.), Fractals in Biology and Medicine, Birkhäuser Press, Basel, vol.1.

41. Milosevic NT, Ristanovic D (2006) Fractality of dendritic arborization of spinal cord neurons. Neurosci Lett 396: 172-176. [Crossref]

42. Milosevic NT, Ristanovic D, Jelinek HF, Rajkovic K (2009) Quantitative analysis of dendritic morphology of the alpha and delta retinal ganglions cells in the rat: a cell classification study. J Theor Biol 259: 142-150. [Crossref]

43. Ristanovic D, Stefanovic BD, Milosevic NT, Grgurevic M, Stankovic JB (2006) Mathematical modelling and computational analysis of neuronal cell images: application to dendritic arborization of Golgi-impregnated neurons in dorsal horns of the rat spinal cord. Neurocomputing 69: 403–423.

44. Jelinek HF, Milosevic NT, Ristanovich D (2008) Fractal dimension as a tool for classification of rat retinal ganglion cells. Biol Forum 101: 146-150.

45. Bernard F, Bossu JL, Gaillard S (2001) Identification of living oligodendrocyte developmental stages by fractal analysis of cell morphology. J Neurosci Res 65: 439-445. [Crossref]

46. Pellionisz A, Roy GR, Pellionisz PA, Perez JC (2013) Recursive genome function of the cerebellum: geometric unification of neuroscience and genomics. Berlin: In: Manto M, Gruol DL, Schmahmann JD, Koibuchi N and Rossi F (Eds.), Springer Verlag, “Handbook of the Cerebellum and Cerebellar Disorders”. 1381-1423.

47. Pellionisz AJ (2008) The principle of recursive genome function. Cerebellum 7: 348-359. [Crossref]

48. Di Ieva A, Grizzi F, Jelinek H, Pellionisz AJ, Losa GA (2015) Fractals in the Neurosciences, Part I: General Principles and Basic Neurosciences. The Neuroscientist XX(X) 1–15.

49. Pellionisz A (1989) Neural geometry: towards a fractal model of neurons. Cambridge: Cambridge University Press.

50. Agnati LF, Guidolin D, Carone C, Dam M, Genedani S, et al. (2008) Understanding neuronal molecular networks builds on neuronal cellular network architecture. Brain Res Rev 58: 379–99. [Crossref]

[At "Critical Times in the fight against cancer" one "winner strategy" (but for true scientists, unethical, to so many working for so long and so hard on fractals) might be to "have it both ways". Do not look for the word "fractal" in the version the author modified once his paper as above was accepted in a brand new web-journal on "Fractals". This version contains plenty (40?), the other version has it the other way (Zero). For clarification, ask the Author, Editor-in-Chief or even poor Dr. Grizzi who was even named in this single-author something as "corresponding author" (!) - AJP]


Apple may soon collect your DNA as part of a new ResearchKit program

By Andre Revilla — May 7, 2015

Building a database of the human genome, mostly in an effort to study it, is nothing new. Since we first gained the ability to study DNA, scientists have been keen to study as many samples as possible, in an effort to discover more about disease in the human body, and degenerative disorders such as Parkinson’s disease. Now Apple is joining groups ranging from Google to the U.S. government in expressing an interest to collect a library of DNA samples.

Apple will be teaming up with scientists to collect DNA as part of its ResearchKit program, which launched in March. The program would collect consumers' health information through a secure portal, with the added opportunity for users with certain conditions to take part in a number of clinical studies. According to the MIT Technology Review’s report, Apple has two currently planned studies, one at the University of California, San Francisco, and the other with Mount Sinai Hospital in New York.

Related: Apple offering medical trials through Research Kit

Users would participate by providing a saliva sample and returning the completed kit to an Apple-approved laboratory. The report reads, “The data would be maintained by scientists in a computing cloud, but certain findings could appear directly on consumers’ iPhones as well.” Integrating apps that partner with DNA collection on a platform as popular as iOS would place Apple in a good position to lead the charge in a new realm of genetic databasing.

“Nudging iPhone owners to submit DNA samples to researchers would thrust Apple’s devices into the center of a widening battle for genetic information,” the MIT review states.

The studies are aimed at investigating 100 or so “medically important disease genes.” The future of the connected world is fascinating, and as the review points out, could see us swiping our genetic information at pharmacies to receive information on the drugs we’re picking up. Apple has not given a comment on the report.

[There is a veritable "feeding frenzy" around "DNA Data Banks" and "DNA APIs", as well as the inevitable trend that actual user-friendly applications of genomic data will run on high-powered mobile devices (e.g. the new iPhone with up to 256 GIGAByte of flash memory!!!). There are several contenders in a horse-race for the above highly lucrative goals - separately, and especially if they can be combined. Google Genomics publishes about its "DNA API" (without telling details). There is hardly any question that Google is super-expert in such an API from a computing viewpoint. However, a most logical company eminently suitable for this cardinal role could be Illumina - the strongest USA data-source of genomic information. Illumina, however, with its presently known priorities may not have this crucial item on its agenda & schedule. That may be regrettable, since such an asset could very significantly boost the valuation of Illumina - making it more resistant to any further "hostile take-over attempt by Roche/Genentech". (Genentech would also be very suitable for the above role(s), but as a fully owned subsidiary of the genomically leading Big Pharma (Roche) it seems unlikely that Roche is going to push this agenda.)

This leaves a most interesting and very suitable company - whenever Google Genomics triggers "Apple Genomics". (Somewhat unlikely, since Apple makes most cardinal business decisions super-secretly, though the visibly half-ready Apple HQ2 makes the world wonder how the cash-mega-rich Apple is going to expand its horizon.) What are the pros and cons of Apple launching an "Apple Genomics"?

No company in the world could possibly beat Apple in "user-friendly design of advanced computer systems". The new line of "wearables" (iWatch) already compels Apple to massively expand its API to accommodate the myriads of sensors, detectors and personal data collection and storage. This is a huge plus; moreover, Apple could emerge (after some rather feeble forays into Old School Genomics many years ago) as the undisputed hardware/software integrator in the historical R&D "explosion" of New School Genomics. A further positive factor is that Apple and Illumina are already on record of trying out this new field. (If Illumina were ever to submit to an M&A, imho a merger of Illumina with Apple might make more sense than Illumina under Roche.)

Some factors lessen the likelihood of such a major business decision. One is that "Calico" already drains resources - though the pursuit of "eternal youth" and "practical user-friendly applications of today's genomic data" represent no real internal competition.

Perhaps the most serious challenge is that Apple is not famous for the cross-disciplinary domain-expertise of genomics AND informatics. This challenge, however, can be very easily and quickly overcome in the highly incestuous Silicon Valley. andras_at_pellionisz_dot_com ]


Sequencing the genome creates so much data we don’t know what to do with it

The Washington Post

Robert Gebelhoff, July 7

Get ready for some incomprehensibly big numbers. [Not really - in my 2008 Google Tech Talk YouTube, see slide below, I pointed out, the proverbial 7 years ago, that "Genome information exploded over 25 orders of magnitude" - while a Googol is defined by 100 zeros. Thus, the IT (Information Technology) is definitely ready (though we are talking about billions of dollars). The problem was very clear even in my 2008 YouTube: "Information Theory" was not ready (some still don't have it) to interpret even a single full human DNA. Note that the entire DNA of both Dr. Jim Watson and Dr. Craig Venter had been sitting on the shelves (hard drives, rather) for years - but without software-enabling algorithmic approaches (such as FractoGene), "crunching A,C,T,G-s amounted to billions of dollars wasted". - Andras_at_Pellionisz_dot_com]



Scientists are predicting that genomics — the field of sequencing human DNA — will soon take the lead as the biggest data beast in the world, eventually creating more digital information than astronomy, particle physics and even popular Internet sites like YouTube. [Okay, take "particle physics" as probably the best example. Would anyone waste billions of dollars in building a super-collider (generating myriads of trajectories) before Quantum Theory was developed? That effort needed the entire Copenhagen Group, working busily for decades, to build an entirely new chapter in physics, mathematics and even philosophy. - Andras_at_Pellionisz_dot_com]

The claim, published Tuesday in a PLOS Biology study, is a testament to the awesome complexity of the human genome, but it also illustrates a pressing challenge for the 15-year-old field. As genomics expands at an exponential rate, finding the digital space to store and manage all of the data is a major hurdle for the industry.

Michael Schatz, co-author of the study and a professor at Cold Spring Harbor Laboratory in New York, called the data challenge one of the most important questions facing biology today.

"Scientists are really shocked at how far genomics has come," Schatz said. "Big data scientists in astronomy and particle physics thought genomics had a trivial amount of data. But we're catching up and probably going to surpass them."


To give some idea as to the amount of data we're talking about, consider YouTube, which generates the most data of any source per year — around 100 petabytes, according to the study. A petabyte is a quadrillion (that's a 1 followed by 15 zeroes) bytes, or about 1,000 times the average storage on a personal computer.

Right now, all of the human data generated through genomics — including around 250,000 sequences — takes up about a fourth of the size of YouTube's yearly data production. [We do not have major problems with YouTube, do we? It even generates money. Do not get scared of what Information Technology can do (think of meteorology, war games, the above-mentioned particle physics, financial data and calculations). Get scared of the scarcity of software-enabling Information Theory to interpret a single genome! - Andras_at_Pellionisz_dot_com] If the data were combined with all the extra information that comes with sequencing genomes and recorded on typical 4-gigabyte DVDs, Schatz said the result would be a stack about half a mile high.


But the field is just getting started. Scientists are expecting as many as 1 billion people to have their genomes sequenced by 2025. The amount of data being produced in genomics daily is doubling every seven months, so within the next decade, genomics is looking at generating somewhere between 2 and 40 exabytes a year.

An exabyte — just try to wrap your mind around this — is 1,000 petabytes, or about 1 million times the amount that can be stored on a home computer. In other words, that aforementioned stack of DVDs would easily start reaching into space.
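
To put such figures in perspective, here is a rough back-of-the-envelope calculation in Python, under stated assumptions (4 GB per DVD, 1.2 mm per disc, decimal prefixes); it is only an illustration, not a figure taken from the PLOS Biology study:

GB, PB, EB = 10**9, 10**15, 10**18   # decimal prefixes, as used in the article
DVD_BYTES = 4 * GB                   # "typical 4-gigabyte DVD"
DVD_THICKNESS_MM = 1.2               # assumed thickness of one disc

def dvd_stack_km(total_bytes):
    """Height, in kilometres, of a stack of 4 GB DVDs holding total_bytes."""
    discs = total_bytes / DVD_BYTES
    return discs * DVD_THICKNESS_MM / 1e6   # mm -> km

print(f"100 PB (YouTube's yearly output)      -> {dvd_stack_km(100 * PB):,.0f} km of DVDs")
print(f"2 EB (low end of the 2025 projection) -> {dvd_stack_km(2 * EB):,.0f} km of DVDs")
# The edge of space is conventionally put at roughly 100 km, so an
# exabyte-scale stack of DVDs would indeed "start reaching into space".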


The study gives a good illustration of how the microscopic details of human genetics rival the complexity of the far-reaching science of the universe. The mountain of data used to analyze human DNA is so large that Schatz jokes people will eventually have to substitute the term "astronomical" with a more appropriate word: "genomical."

"With all of this information, something new is going to emerge," he said. "It might show patterns of how mutations affect different diseases."

IBM's Watson Genomics initiative, for example, is crunching data on the entire genomes of tumors, with the hope of generating personalized medicine for cancer patients.


At some point, scientists might be able to save space by not storing sequences in full, similar to the way data is managed in particle physics, where information is read and filtered while it is generated. But at this point, the study says, such data cropping isn't as practical because it's hard to figure out what future data physicians will need for their research — especially when looking at broader human populations.

Right now, most genome research teams store their data through on-site hard drive infrastructure. The New York Genome Center, for example, is generating somewhere between 10 and 30 terabytes of data a day and storing it in an on-site system. They move old data they don't regularly use to cheaper and slower storage.

[The ultimate irony of mindless data-hoarding is likely to be that the information will be most efficiently stored in DNA. Full circle: spending billions, but accomplishing what, exactly? We have known since Thomas Kuhn that "knowledge never automatically transpires into understanding". - Andras_at_Pellionisz_dot_com]

"At this point, we're continuously expanding file storage," said Toby Bloom, deputy scientific director at the center. "The biggest hurdle is keeping track of what we have and finding what we need."

Organizations like Bloom's are eyeing the possibility of moving the data to cloud storage, but she said that's currently not as cost effective as expanding their physical storage infrastructure.

But size is not the only problem the field faces. Biological data is being collected from many places and in many different formats. Unlike Internet data, which is formatted relatively uniformly, the diverse sets of genomic data make it difficult for people to use them across datasets, the study says.

Companies like Amazon and Google are developing the infrastructure to put genomic data on public clouds, which would be especially helpful for smaller centers with limited IT staff, but could also help foster collaboration.

Google recently announced a partnership with the Broad Institute of MIT and Harvard aimed at providing its cloud services for scientists combined with a toolkit developed by the institute that can be used to analyze the data. The concept is to put a bunch of the world's genomic data on Google's servers, where scientists from all over can collaborate on a single platform.

"It's extremely likely to see (the cloud model) going forward," Schatz said. "It just makes more sense.""[Do not forget that according to Google, for-profit users, like Big Pharma, must pay license fees to Broad Institute, a Charitable Organization :-) Andras_at_Pellionisz_dot_com ]


The living realm depicted by the fractal geometry (endorsement of FractoGene by Gabriele A. Losa)

[Excerpts] In some recent reports, rather exciting, it has been argued that there is a trend towards a “Unified Fractal Model of the Brain” [46]. These authors suggested that the amount of information necessary to build just a tiny fraction of the human body, that is, just the cerebellum of the nervous system, was a task for which 1.3% of the information that the genome [in the form of "genes", insert by AJP] could contain was totally insufficient. “Fractal genome grows fractal organism; yielding the utility that fractality, e.g. self-similar repetitions of the genome can be used for statistical diagnosis, while the resulting fractality of growth, e.g. cancer, is probabilistically correlated with prognosis, up to cure” [47].

The brain is now accepted as one of nature’s complete networks [48], while the hierarchical organization of the brain, seen at multiple scales from genes to molecular micronetworks and macronetworks organized in building neurons, has a fractal structure as well [49] with various modules that are interconnected in small-world topology [50]. The theoretical significance is that the fractality found in DNA and organisms, for a long time “apparently unrelated,” was put into a “cause and effect” relationship by the principle of recursive genome function [47].


[46] Pellionisz A, Roy GR, Pellionisz PA, Perez JC (2013) Recursive genome function of the cerebellum: geometric unification of neuroscience and genomics. In: Manto M, Gruol DL, Schmahmann JD, Koibuchi N, Rossi F (Eds.), Handbook of the Cerebellum and Cerebellar Disorders. Springer Verlag, Berlin, pp. 1381-1423.

[47] Pellionisz AJ (2008) The principle of recursive genome function. Cerebellum 7: 348-359.

[49] Di Ieva A, Grizzi F, Jelinek H, Pellionisz AJ, Losa GA (2015) Fractals in the Neurosciences, Part I: General Principles and Basic Neurosciences. The Neuroscientist 20(4) 403-417.

[50] Pellionisz A (1989) Neural geometry: towards a fractal model of neurons. Cambridge: Cambridge University Press.

[In the recent series of top-level endorsements of FractoGene ("Fractal Genome Governs Growth of Fractal Organisms"), Gabriele Losa is the most established leader of "fractals in biology and medicine". Dr. Losa organized a series of International Meetings in Switzerland, published in four volumes. Thus, the acknowledgement by Dr. Losa that the already rather large field studying the fractality of DNA or the fractality of organisms simply overlooked their "cause and effect" relationship reminds us of a saying by Mandelbrot himself: "to see things that everybody is looking at but nobody notices". "FractoGene" could not be published earlier, since it reversed BOTH of the cardinal axioms of Old School Genomics (the "Central Dogma" and "Junk DNA" misnomers that Dr. Mattick labeled "the biggest mistake in the history of molecular biology").

The most striking aspect of my 2002 discovery was its utility. My FractoGene discovery also reversed the notion of "utility". In the Old School, the only useful (tiny) parts of the DNA were believed to be the "genes" (protein-coding segments, amounting to less than 1% in the human genome), and even within the "genes" the function of "introns" was either entirely denied, or the "non-coding" introns were misrepresented as mere "spacers" separating "genes".

My discovery deployed a measurable utility derived from a fact that has always been in plain sight: both the DNA and the organisms it governs are "replete with repeats". In a "cause and effect" relationship, the statistical correlation of the repeats (fractals) of the DNA and of the organisms it governs yielded precious utility for diagnosis, and the probabilistic predictions of the relationship of fractals yielded prognosis (a generic, hands-on illustration of measuring self-similarity in a DNA sequence follows this note). The "Best Methods" were amply "incorporated by reference" through thousands of pages of literature, both on fractals (e.g. Mandelbrot, Losa, etc.) and in advanced textbooks of statistical and probabilistic mathematics. Thus, 8,280,641 (now issued after an over-a-decade struggle with the US Patent Office, costing me over $1 M of personal money) was submitted as a patent to establish the priority date (Aug. 1, 2002); because of USPTO delays, 8,280,641 is in force till late March, 2026.

Once the regular patent was submitted, peer-reviewed scientific publications ensued: an invited Keynote Lecture in 2003, then a peer-reviewed scientific publication (with the late M.J. Simons, 2006), where the latter went on record both citing the original "heureka" diagram of the FractoGene discovery (Fig. 3) and making theoretical predictions. These theoretical predictions were later verified by independent experimental biologists. Once the most recent CIP to the 2002 filing was done (2007), FractoGene was presented in the peer-reviewed scientific publication "The Principle of Recursive Genome Function" (2008), along with wide public dissemination by the Google Tech Talk YouTube (2008).

The Principle of Recursive Genome Function was immediately accepted (in 2009 at Cold Spring Harbor, at the invitation of Prof. George Church, without objection by the participants, most notably by Jim Watson). Two weeks after the Cold Spring Harbor presentation, Eric Lander (and a dozen co-workers) put the Hilbert-fractal on the cover of Science Magazine, amounting to a message from the Science Adviser to Obama: "Mr. President, the Genome is Fractal!"

Now, after the proverbial seven-year delay, FractoGene is endorsed by the top (double-degree) biomathematician (Eric Schadt), by a fresh Stanford Nobelist in multi-scale biology (Michael Levitt), and now by the top expert in "fractals in biology & medicine" (Prof. Gabriele A. Losa). Non-profit academics compromise only their literacy of the published science by NOT citing any of the above references (publicly available for free download). However, as Genome Informatics becomes intertwined with Intellectual Property (occasionally representing very substantial efforts, e.g. since 1989, against massive head-winds and documented losses), for-profit users are advised to consider infringements. andras_at_pellionisz_dot_com ]
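
For readers who want a hands-on feel for what "self-similarity of a DNA sequence" can mean in practice, the minimal Python sketch below builds a Chaos Game Representation (CGR) of a sequence and estimates its box-counting dimension. This is a generic, textbook-style illustration only - emphatically not the patented FractoGene "Best Methods" - and the random placeholder sequence and grid scales are arbitrary assumptions; a uniformly random sequence fills the unit square (dimension near 2), whereas real genomic sequences, being "replete with repeats", leave characteristic depleted CGR patterns.

import numpy as np

# Corner assignment for the Chaos Game Representation of a DNA sequence.
CORNERS = {"A": (0.0, 0.0), "C": (0.0, 1.0), "G": (1.0, 1.0), "T": (1.0, 0.0)}

def cgr_points(seq):
    # Each base pulls the current point halfway toward its assigned corner
    # of the unit square; repeats and motifs leave self-similar traces.
    x, y = 0.5, 0.5
    pts = []
    for base in seq:
        if base not in CORNERS:
            continue
        cx, cy = CORNERS[base]
        x, y = (x + cx) / 2.0, (y + cy) / 2.0
        pts.append((x, y))
    return np.array(pts)

def box_counting_dimension(pts, exponents=range(2, 8)):
    # Count occupied boxes at dyadic scales 2**-k and fit log(count) vs log(2**k).
    inv_sizes, counts = [], []
    for k in exponents:
        n = 2 ** k
        occupied = set(map(tuple, np.floor(pts * n).astype(int)))
        inv_sizes.append(n)
        counts.append(len(occupied))
    slope, _ = np.polyfit(np.log(inv_sizes), np.log(counts), 1)
    return slope

rng = np.random.default_rng(0)
seq = "".join(rng.choice(list("ACGT"), size=100_000))   # placeholder sequence
print(f"estimated box-counting dimension: {box_counting_dimension(cgr_points(seq)):.3f}")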


Google and Broad Institute Team Up to Bring Genomic Analysis to the Cloud

By Christina Farr

JUNE 24, 2015

Google has teamed up with one of the world’s top genomics centers, the Broad Institute of MIT and Harvard, to work on a series of projects it claims will propel biomedical research.

For the first joint project, engineers from both organizations will bring “GATK,” the Broad Institute’s widely-used genome analysis toolkit, onto Google’s cloud service and into the hands of researchers.

“The limiting factor is no longer getting the DNA sequenced,” said Dr. Barry Starr, a Stanford geneticist and a contributor to KQED. “It is now interpreting all of that information in a meaningful way.”

The Broad Institute alone analyzed a massive 200 terabytes of raw data in a single month. In the past decade, the institute has genotyped more than 1.4 million biological samples.

Google isn’t the only tech company vying to use cloud-based technology to store and analyze this massive volume of genetic information. This is a point of competition between Google, IBM, Amazon, and Microsoft. ["Competition" of Google, IBM, Amazon, Microsoft? Does not sound at all like an "Open Source Non-Profit Charity". This horserace will largely depend on the Intellectual Property acquired from New School Genome Informatics - andras_at_pellionisz_dot_com]

But Google is now the only public cloud provider to offer the GATK toolkit as a service. By making the software available in the cloud, researchers can run it on large data-sets without access to local computing — and that frees up both time and resources.

“GATK was already available to researchers and tens of thousands have used the software to analyze their data,” said Starr. “Google adds the power of being able to handle much more data at a time.”

Google Genomics’ product manager Jonathan Bingham told KQED two groups will benefit most from this partnership: small research groups who lack sophisticated computing, and any individual who wants to analyze large genomic data sets without needing to download them.

“Broad Institute has got a tremendous amount of expertise working with large numbers of biological samples and huge volumes of genomic data,” Bingham explained. “Meanwhile, Google has built the infrastructure and tools to process and analyze the data and keep it secure.”

The toolkit will be available for free to nonprofits and academics. Businesses will need to pay to license it from the Broad Institute.

Some genetics experts say this announcement is evidence that the health industry is increasingly willing to embrace cloud computing. In the past, health organizations have been hesitant due to concerns about compliance and security.

“This suggests that the genomics industry has moved beyond the cloud debate,” said Jonathan Hirsch, president and co-founder of Syapse, a Silicon Valley-based company that wants to bring more genomics data into routine clinical use.

“It is OK for researchers and clinicians to do genomics work in the cloud, and trust that cloud provider’s hardware and software.”

In the future, Bingham said there may be opportunities to work on projects to further our genetic understanding of cancer and diabetes.

But for now, he said, the organizations are focused on “general purpose” tools that aren’t specific to a disease and can be used by researchers everywhere.


GlaxoSmithKline, Searching For Hit Drugs, Pours $95M Into DNA 'Dark Matter'

GlaxoSmithKline wants to better understand biology so it can discover more medicines, like every other drugmaker. It also wants to quit wasting money on drug candidates that look promising in the lab, but flop years later when given to hundreds or thousands of real people.

Today, London-based GSK is betting that one way around the problem will come from “the living genome” or what some call the “dark matter” of the genome. These mysterious stretches in the genetic instructions don’t contain genes that provide code for making proteins, but they do appear to provide important controls over what genes do in different cells, in different states of health and disease, and in response to different environments.

Rather than invest in its own labs, which have been downsized and re-organized in many ways, GSK is investing $95 million over the next five years, and potentially that amount and more over the subsequent five years, in a new nonprofit research center in Seattle called the Altius Institute for Biomedical Sciences. The institute, whose name means “higher” in Latin, is led by John Stamatoyannopoulos, a professor of genome sciences at the University of Washington. He was a leader in the international ENCODE consortium that published a batch of influential papers in the journal Nature in 2012. The findings elevated the importance of regulatory regions in the genome, and even raised some thoughtful questions about the basic definition of a “gene.”

Stam, as he is known for short, will lead a team of 40-80 molecular biologists, chemists, and computer scientists who will seek to find meaning in regions of the genome that control what they call “the cell’s operating system.” GSK is hoping that this understanding of gene control will help it find better molecular targets for drugs, and help it select the right compounds, right doses, target tissues, and all kinds of other aspects critical in drug R&D.


While the breathtaking advances in faster/cheaper DNA sequencing are making it possible to compare genomes from many people to look for differences that play a role in wellness and disease, Altius isn’t focused so much on the underlying sequences on their own. It will not set up a factory-style efficient genome sequencing center—it will contract that work out to others. The Altius group plans to use, and continuously improve technologies around imaging, chemistry, and computation to extract meaningful information from what Stamatoyannopoulos calls “the living genome.”

“The problem is that the genome only encodes some upstream potentiality, and doesn’t read out what the organism is actually doing,” Stamatoyannopoulos said. “It’s packaged in different ways in different cells…we are reading how the cell is working, and using the genome as a scaffold for all the things it does.” Looking at the downstream manifestation of the genome, in cells, he said, “is going to be much more relevant to clinical medicine.”

Lon Cardon, a senior vice president of alternative discovery and development at GlaxoSmithKline, said he and his team were fascinated by the ENCODE consortium’s series of publications starting in September 2012. “The light went on for us,” he said. Historically, pharma has looked at molecular targets as “static” entities, when the reality is much more fluid and dynamic in different cell and tissue types. Better understanding of what the targets are doing in live cells is essential to fundamental R&D challenges, Cardon said.

At the time of the ENCODE team’s public pronouncements, genomics leader Eric Lander at the Broad Institute likened it to Google Maps. The earlier Human Genome Project, he told The New York Times, “was like getting a picture of Earth from space. It doesn’t tell you where the roads are, it doesn’t tell you what traffic is like at what time of the day, it doesn’t tell you where the good restaurants are, or the hospitals or the cities or the rivers.” He called ENCODE a “stunning resource.”

The scientific consortium has continued to march ahead the past several years, but opinions are mixed on whether regulatory regions of the genome are ready for prime time in drug discovery.

“The maps being created from these efforts are absolutely helping lock into cell specific regulatory networks that when combined with methylation data and eQTL [expression quantitative trait loci] data can be very powerful in tuning you into causal regulators that are important for disease,” said Eric Schadt, the director of the Icahn Institute for Genomics and Multiscale Biology in New York.

David Grainger, a partner at Index Ventures in London, said, “John Stam clearly has a record of doing exciting stuff, and I’m sure he will do so again in Altius. Whether any of that will translate into value for a drug developer, only time will tell. Genomics and the control of gene expression would not necessarily have been an area I would have chosen for what is, in effect, company-funded blue skies research. But I look forward to them proving me wrong.”

GSK, like its industry peers, has been experimenting not just with different scientific approaches to discovery, but with various models for financing creative, motivated teams outside of its own walls. It has a corporate venture capital fund (SR One) that invests in biotech startups, a tight relationship with a venture firm (Avalon Ventures) that builds startups it might buy, and it tried (and closed) a number of internal centers for excellence. The idea of a big drug company putting big resources behind a semi-independent nonprofit institute isn’t exactly new—Merck & Co. did something similar in 2012 when it enlisted Peter Schultz to run the California Institute for Biomedical Research in San Diego.

In the past, pharma companies might have just written a check to sponsor research at an academic center like the University of Washington, sit back, and hope for good results to flow back to the company. But those arrangements haven’t borne much fruit. GSK could have just acquired as much of the intellectual property and technology as it could, and brought it in-house, but it was afraid that it might slow things down in a fast-moving field, Cardon said. In all likelihood, it will be easier to recruit the people it wants into a new organization with startup-like focus and urgency. Speed is of the essence in a field going through exponential advances in technology. “We want to stay ahead of that game,” Stamatoyannopoulos said.

While staying small and nimble, the institute will get some big company advantages. Altius will be able to use some of GSK’s fancy instruments, like imaging, chemistry, and robotics tools that it couldn’t possibly corral in an academic institution.

The institute and the company expect to have what sounds like an open-door relationship. Some GSK scientists will be able to go on periodic leaves from their regular jobs to work at the Seattle institute, taking what they learn back to the mother ship. Scientists at the institute say they have retained their academic freedom, including the right to publish all of their discoveries without prior review by GlaxoSmithKline, with one exception–when the work applies to proprietary compounds of the parent company.

Clearly, GSK is hoping for a return on its investment. The company is getting the first shot at licensing discoveries from Altius, and the right to spin companies out of it. The knowledge from Altius, ideally, should influence decision-making with a number of its experimental drugs.

The new center is expected to get up and running later this year in offices just north of Seattle’s famed Pike Place Market. Stamatoyannopoulos said he will retain his faculty position at the UW Genome Sciences department, and continue to oversee grant work he has there, including some of the ENCODE consortium efforts. The institute will have its own board of directors, and its own scientific advisory board, but it isn’t yet naming names or even saying how many members will be in each group. The agreement between the institute and the company covers a 10-year term, with $95 million of company support for the basic science and technology exploratory phase in the first 5 years and with additional funding in the latter years for specific drug discovery/development projects. The second half of the collaboration is expected to provide funding on par with the first five years, but could be even bigger, Stamatoyannopoulos said.

Incidentally, Stamatoyannopoulos said he and his team don’t use the “dark matter” analogy anymore when describing their work on the regulatory regions of the genome, mainly because they have shed light on where that regulatory DNA is. But there’s still plenty of mystery. “There of course is an enormous amount to learn–but now we have the flashlights and searchbeams,” Stamatoyannopoulos said in an e-mail. “I usually use ‘living genome’ to distinguish from research that focuses just on DNA sequence (the ‘dead genome’), which doesn’t change, while the cell’s regulatory network does back flips in response to its environment or a drug.”

Luke Timmerman is the founder and editor of Timmerman Report, a subscription publication for biotech insiders.

["The Principle of Recursive Genome Function" was published in a peer reviewed scientific publication seven years ago (also popularized on Google Tech Talk YouTube, visited by more than seventeen thousand viewers) and a full free pdf of the peer reviewed paper is available for everyone (see list of publications). While maintaining an obsolete view that genome only encodes some upstream potentiality, and doesn’t read out what the organism is actually doing is the prerogative of any scientist - though any Editor who is convinced otherwise should not let this misimpression spread -any peer-reviewed scientific publication should demonstrate and acknowledge the knowledge of existing literature on the crucial matter of "Recursive Genome Function". The above two articles clinch the trend that Big IT and Big Pharma fiercely compete now for the "high ground". This columnist is already on the Board of USA and India-based Companies, and is available. andras_at_pellionisz_dot_com]


Recurrent somatic mutations in regulatory regions of human cancer genomes (Nature Genetics, dominant author Michael Snyder)

[Popular journalist coverage:

Stanford Team IDs Recurrently Mutated Regulatory Sites Across Cancer Types

Jun 08, 2015 | a GenomeWeb staff reporter]

To identify the regulatory mutations, Mike Snyder's laboratory at Stanford first established an analysis workflow for whole-genome data from 436 individuals from the TCGA. They used two algorithms, MuTect and VarScan 2, to identify SNVs from eight different cancer subtypes.

Next, they annotated the mutation set with gene and regulatory information from the gene annotation project Gencode and RegulomeDB, a database of regulatory data that includes data on transcription factors, epigenetic marks, motifs, and DNA accessibility.

Overall, they found that mutations in coding exons represented between .036 percent and .056 percent of called mutations for each cancer type, while mutations in putative regulatory regions represented between 31 percent and 39 percent of called mutations for each cancer type. The large fraction of regulatory mutations "underscores the potential for regulatory dysfunction in cancer," the authors wrote.
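
As a rough sketch of how such fractions can be tallied - not the Snyder laboratory's actual workflow, and with invented interval coordinates standing in for Gencode exons and RegulomeDB regulatory elements - one can annotate each called SNV against sorted interval lists in Python:

from bisect import bisect_right

def build_index(intervals):
    # intervals: (chrom, start, end) tuples; assumed non-overlapping per chromosome.
    index = {}
    for chrom, start, end in intervals:
        index.setdefault(chrom, []).append((start, end))
    for chrom in index:
        index[chrom].sort()
    return index

def overlaps(index, chrom, pos):
    # True if position pos on chrom falls inside any annotated interval.
    ivs = index.get(chrom, [])
    i = bisect_right(ivs, (pos, float("inf"))) - 1
    return i >= 0 and ivs[i][0] <= pos < ivs[i][1]

def classify(snvs, coding_idx, regulatory_idx):
    counts = {"coding": 0, "regulatory": 0, "other": 0}
    for chrom, pos in snvs:
        if overlaps(coding_idx, chrom, pos):
            counts["coding"] += 1
        elif overlaps(regulatory_idx, chrom, pos):
            counts["regulatory"] += 1
        else:
            counts["other"] += 1
    total = sum(counts.values()) or 1
    return {k: round(v / total, 3) for k, v in counts.items()}

# Toy example with invented coordinates:
coding = build_index([("chr1", 1000, 1200), ("chr1", 5000, 5300)])
regulatory = build_index([("chr1", 800, 1000), ("chr1", 4000, 4500)])
snvs = [("chr1", 1100), ("chr1", 4100), ("chr1", 4200), ("chr1", 9000)]
print(classify(snvs, coding, regulatory))   # {'coding': 0.25, 'regulatory': 0.5, 'other': 0.25}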

The team identified a number of recurrently mutated genes and regulatory regions, and they replicated a number of known findings of recurrent mutations in driver genes, including mutations in the coding regions of TP53, AKT1, PIK3CA, PTEN, EGFR, CDKN2A, and KRAS.

They also identified recurrent mutations in the known TERT promoter, as well as recurrent mutations in eight new loci in proximity to, and therefore potential regulators of, known cancer genes, including GNAS, INPP4B, MAP2K2, BCL11B, NEDD4L, ANKRD11, TRPM2 and P2RY8.

In addition, they found positive selection for mutations in transcription factor binding sites. For instance, mutations in the binding sites of CEBP factors were "enriched and significant across all cancer types," the authors wrote. In addition, they found enrichment for mutations in transcription factor binding sites that were either likely to "destroy the site or increase affinity of the site for transcription factor binding," the authors wrote. Such mutations could either inactivate tumor suppressor genes or activate oncogenes.

"Overall, we expect that many regulatory regions will prove to have important roles in cancer, and the approaches and information employed in this study thus represent a significant advance in the analysis of such regions," the authors wrote.

---

ABSTRACT OF ORIGINAL PAPER: Aberrant regulation of gene expression in cancer can promote survival and proliferation of cancer cells. Here we integrate whole-genome sequencing data from The Cancer Genome Atlas (TCGA) for 436 patients from 8 cancer subtypes with ENCODE and other regulatory annotations to identify point mutations in regulatory regions. We find evidence for positive selection of mutations in transcription factor binding sites, consistent with these sites regulating important cancer cell functions. Using a new method that adjusts for sample- and genomic locus–specific mutation rates, we identify recurrently mutated sites across individuals with cancer. Mutated regulatory sites include known sites in the TERT promoter and many new sites, including a subset in proximity to cancer-related genes. In reporter assays, two new sites display decreased enhancer activity upon mutation. These data demonstrate that many regulatory regions contain mutations under selective pressure and suggest a greater role for regulatory mutations in cancer than previously appreciated.
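
To illustrate the general idea of calling a site "recurrently mutated" while adjusting for background mutation rates - a generic Poisson-style sketch, not the adjusted method actually used in the paper, with all rates below invented - one can compare the observed mutation count at a site with its expected count under sample- and locus-specific background rates:

from math import exp

def poisson_sf(k, lam):
    # P(X >= k) for X ~ Poisson(lam); k is small here, so sum the CDF directly.
    if k <= 0:
        return 1.0
    term = cdf = exp(-lam)
    for i in range(1, k):
        term *= lam / i
        cdf += term
    return max(0.0, 1.0 - cdf)

def recurrence_pvalue(observed, sample_rates, locus_factor=1.0):
    # Expected hits at this site = locus-specific factor times the sum of
    # per-sample background rates; "recurrent" means far more hits than expected.
    lam = locus_factor * sum(sample_rates)
    return poisson_sf(observed, lam)

# Toy example: 436 samples with an invented per-sample background rate.
rates = [2e-3] * 436
print(recurrence_pvalue(observed=2, sample_rates=rates))   # ~0.22, unremarkable
print(recurrence_pvalue(observed=9, sample_rates=rates))   # tiny p-value, looks recurrent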

["Seven years of hesitation" is famous in science. So is for The Principle of Recursive Genome Function (Pellionisz 2008) and the illustration of (fractal) recursive misregulation as the basis of cancer (Pellionisz Google Tech YouTube, 2008). The double paradigm-shift (reversal of both axioms of Old School Genomics) is now validated by first class, independent experimental results. While the Principle of Recursive Genome Function is not widely quoted after 7 years, Dr. Snyder et al. (2007) was among the first pioneers to go on record abolut the need of re-definition of genes and genome function. Now, with clear evidence that intergenic and even intronic non-coding sequences, in a recursive mode are responsible for the most dreaded genome regulation disease (cancer), it seems difficult to find alternative comprehensive theoretical framework for genome (mis)regulation. Andras_at_Pellionisz_dot_com]


Big Data (Stanford) 2015: Nobelist Michael Levitt (multi-scale biology) endorses the Fractal Approach to new school of genomics

Big Data at Stanford (2015) leveled the field of post-ENCODE genomics. On one hand, the insatiable demand for dwindling resources to generate Big (and Bigger) Data clearly ran into financial and data-privacy constraints. This was rather clear from the presentations by NIH (putting the $200 M nose of the camel into the $2 Trillion "Precision Medicine Initiative" by sequencing AND interpreting the full DNA of up to 2 million humans, with 1 million people covered by the government's effort, questionably overlapping with another 1 million covered by an alternative private effort). In rather sharp contrast, NSF, when asked whether it leaves paradigm-shift genomic R&D either to strategic DARPA projects or to the Private Sector, could only refer to a $28 M NSF program ("INSPIRE") that seems insufficient and rather hard even to qualify for. On the other hand, several start-up companies showed up (e.g. DNA Nexus, Seven Bridges, SolveBio, YouGenix - one CEO is a new member of the International Hologenomics Society), all eager to ramp up their genome interpretation business much more quickly than the already committed Big IT (Google Genomics, Amazon Web Services, IBM-Watson, Samsung, Sony, Apple, Siemens, SAP, etc.). In the forefront, therefore, are key algorithms (just as "search engine algorithms" determined in the Age of the Internet which company would emerge as a leader). From this viewpoint, it may be remarkable that FractoGene - already on record with no opposition by Nobelist Jim Watson upon its presentation at Cold Spring Harbor in 2009, and already enjoying repeated support by "multi-scale biologist" Eric Schadt - was at Big Data 2015 endorsed by Nobelist Michael Levitt (Stanford, "multi-scale biology"). Dr. Levitt provided an unsolicited public endorsement of it as a "very good idea".


Eric Schadt - Big Data is revealing about the world’s trickiest diseases

Technically Brooklyn

April 16, 2015

If you learned about cystic fibrosis during biology class in high school, it was probably described as an inevitable condition for those whose genes included a specific set of mutations. It was thought to be inevitable because no one had ever found anyone with those mutations who didn’t have it. On the other hand, no one was checking people’s genes to see if they had the mutations when they didn’t show symptoms.

During the 2015 Lynford Lecture at NYU Poly, Mt. Sinai Hospital’s Eric Schadt explained how a big data methodology revealed a remarkable truth: When scientists look at large sets of genomic data of broad pools of test patients, they find small numbers of people with the genetic markers that would make them genetically predisposed to various diseases, and yet they weren’t symptomatic.

The remarkable finding here is that genetics do not necessarily represent an individual’s fate and somehow these individuals’ bodies worked out ways around their genetic disadvantages.

Schadt refers to these people as “heroes” and he believes that by studying them the medical profession can find new strategies of care for patients who are symptomatic.

Schadt is the director of the Icahn Institute for Genomics and Multiscale Biology, among other appointments, at Mt. Sinai. His talk served as an exploration of a data-driven approach to determining strategies of care, as an argument for a network-oriented approach to determining multiple interventions against disease, and as an argument for encouraging non-expert investigation of biological problems.

For this latter point, we have the example of Suneris, a company whose completely novel approach to stopping bleeding was discovered by a college freshman, not a doctor.

Here are some other compelling points from Schadt’s talk:

Bias. A huge stumbling block in the healthcare system is the bias toward acute care. Acute care is treating problems. That’s what hospitals are set up to treat and that’s what they get paid the best to deal with. It is not, however, what is best for patients.

Lots of apps, lots of data. A lot of data is getting collected by something like 50,000-100,000 mobile apps that in one way or another relate to health. With all this data, it’s possible to start getting very serious about targeted, specific prevention strategies for individuals that treat them as a whole person.

Locus of power. In 5-10 years time, there will be far more data about your health outside of medical centers than inside them.

Massive new studies powered by apps. Mt. Sinai just launched an app in collaboration with Apple to study asthma sufferers and help them manage their condition as they did so. It’s in the App Store. Within six hours of announcing it with Tim Cook, Mt. Sinai had enrolled several thousand people, a number that would take traditional studies years to achieve.

Informed consent. Schadt called the informed consent process built into the app its “crowning achievement.” Subsequent testing showed that users who went through their informed consent dialogue understood what they were agreeing to better than people who went through an informed consent process with another person.

Data yields surprises. By building a complete model based on multiple networks and developing it to the point that they were able to model how different genes might express themselves under different conditions and different treatments, Mt. Sinai scientists were able to find a drug indicated for a wildly different use relating to irritable bowel syndrome. Big data makes it possible to find treatments by just running different inputs through models, regardless of indication or researcher assumptions.

[Eric Schadt is a double-degree mathematician, with a Ph.D. in Biomathematics from UCLA. He started to turn "Big Pharma" (Merck) towards Information Technology, and later became the Chief Scientist of the Silicon Valley genome sequencing company Pacific Biosciences, to interpret genome information; in 30 minutes of compute time he identified the Haiti epidemic strain. With $600 M, he established the Mount Sinai Center of Genomics and Multiscale Biology in Manhattan. Moved North to a suburb (454), now lectured in Brooklyn. The almost 2-hour-long video could be a Ph.D. thesis on the challenges of the sick-care-to-health-care, IT-led paradigm shift. He not only abandons the obsolete "gene/junk" dogma, but now also considers the "pathways" concept obsolete. He is a strong supporter of the fractal approach - expected to analyze parallel self-similar recursions. There are too many highly relevant comments in Eric's lecture to list; suffice it to mention that at BGI (China) there are about 50 (fifty) software developers for every single genome analyzer, while in the USA this number is 1-3 (roughly twenty times fewer). Another bullet point mentions that very soon there will be a lot more health data OUTSIDE the hospitals than within them. As an NYU Medical Center professor, I can state with some authority that such a "data center" will not be in Manhattan (real estate is way too expensive). Likewise, for the article below (IBM-Apple), in Silicon Valley it is actually very easy to tell where it will be located (hint: I have worked for some years as a Senior Research Council Advisor of the National Academy to NASA Ames Research Center; "next door" is one of the busiest Internet hubs...) andras_at_pellionisz_dot_com]


IBM Announces Deals With Apple, Johnson And Johnson, And Medtronic In Bid To Transform Health Care

IBM Almaden Research Center, Silicon Valley, California

Apple Second Campus, Silicon Valley

Forbes, April 15, 2015

Experts in health care and information technology agree on the future’s biggest opportunity: the creation of a new computational model that will link together all of the massive computers that now hold medical information. The question remains: who will build it, and how?

IBM is today staking its claim to be a major player in creating that cloud, and to use its Watson artificial intelligence – the one that won on the TV game show Jeopardy – to make sense of the flood of medical data that will result. The new effort uses new, innovative systems to keep data secure, IBM executives say, even while allowing software to use them remotely.

“We are convinced that by the size and scale of what we’re doing we can transform this industry,” says John Kelley, Senior Vice President, IBM Research. “I’m convinced that now is the time.”

Big Blue is certainly putting some muscle into medicine. Some 2,000 employees will be involved in a new Watson-in-medicine business unit. The Armonk, N.Y.-based computing giant is making two acquisitions, too, buying Cleveland’s Explorys, an analytics company that has access to 50 million medical records from U.S. patients, and Dallas’ Phytel, a healthcare services company that provides feedback to doctors and patients for follow-up care. Deal prices were not disclosed.

It is also announcing some big partnerships:

• Apple will work to integrate Watson-based apps into its HealthKit and ResearchKit tool systems for developers, which allow the collection of personal health data and the use of such data in clinical trials.

• Johnson & Johnson JNJ -0.81%, which is one of the largest makers of knee and hip implants, will use Watson to create a personal concierge service to prepare patients for knee surgery and to help them deal with its after effects.

• Medtronic, the maker of implantable heart devices and diabetes products, will use Watson to create an “internet of things” around its medical gadgets, collecting data both for patients’ personal use and, once it’s anonymized, for understanding how well the devices are working. Initially, the focus is on diabetes.

IBM’s pitch is that it will be able to create a new middle layer in the health care system – linking the old electronic records systems, some of which have components dating back to the 1970s, with a new, cloud-based architecture, because of its deep breadth of experience.

And there is no doubt that there is a need for data science that can parse the explosion of information that will soon be created by every patient. Already, there is too much information for the human brain. “If you’re an oncologist there are 170,000 clinical trials going on in the world every year,” says Steve Gold, VP, IBM Watson.

The question is how ready Watson is to take on the challenge. IBM isn’t the only one that sees opportunity here. The billionaire Patrick Soon-Shiong is aiming to create a system to do many of the same things with his NantHealth startup. Flatiron Health, a hot startup in New York, is creating analytics for cancer. The existing health IT giants, Cerner and Epic, both certainly have their eyes on trying to capture some of this new, interconnected market, lest it make them obsolete.

So far, Watson has been a black box when it comes to healthcare. IBM has announced collaborations with Anthem, the health insurer, and medical centers including M.D. Anderson, Memorial Sloan-Kettering Cancer Center, and The Cleveland Clinic. There are lots of positive anecdotal reports, but so far the major published paper from Watson is a computer science paper published by the Baylor College of Medicine that identified proteins that could be useful drug targets.

“I think that ultimately somebody’s going to figure out how to integrate all these sources of data, analyze them, sort the signal from the noise, and when someone can do that, it will improve the health care system,” says Robert Wachter, the author of The Digital Doctor: Hope, Hype, and Harm at the Dawn of Medicine’s Computer Age and associate chair of medicine at UCSF.

“Does this do that tomorrow? No. But do we need to create the infrastructure to do that? Yes. And are they probably the best-positioned company with the best track record to do this? I think so.”

–Sarah Hedgecock contributed reporting to this story.

[This is a global "game changer", since what I predicted over a decade ago has now actually happened. It is "news" but not a "surprise". IBM has long targeted the traditional "health care" market, but with genomics it was Google Genomics, Amazon Genomics and IBM cloud-genomics that prepared to change, by means of Information Technology, the $2 Trillion (USA) business of "IT matters for your health". The IBM announcement includes Apple (informally), but all the others, plus global IT companies like Samsung, Sony, Panasonic, BGI, Siemens, SAP (etc.), are also in the ring. Information Technology, however, is not even the hardest challenge (see my 2008 YouTube: Information Theory is the bottleneck). As for IT1 (Information Technology), it appears that "multicore" (beyond 128) is not the way to go - "cloud computing" is the name of the game today. However, as IBM Research at Almaden (Silicon Valley) points out, the "techie challenge" is far deeper; it is the "non-von-Neumann computing architecture" (with their prototype SYNAPSE chip, with over a million neurons and a gazillion connections among them, to learn e.g. pattern recognition by neural net algorithms - with power consumption an order of magnitude smaller than what a smartphone battery provides). Science-wise (minus the chip), such a neuronal network model was built a generation ago. As the above long lecture by Eric Schadt shows, however, the gap between the "medical establishment" and the "genome informatics specialists" is visibly stunning. - andras_at_pellionisz_dot_com]


An 'evolutionary relic' of the genome causes cancer

Pseudogenes, a sub-class of long non-coding RNA (lncRNA) that developed from the genome's 20,000 protein-coding genes but lost the ability to produce proteins, have long been considered nothing more than genomic "junk." Yet the retention of these 20,000 mysterious remnants during evolution has suggested that they may in fact possess biological functions and contribute to the development of disease.

Now, a team led by investigators in the Cancer Research Institute at Beth Israel Deaconess Medical Center (BIDMC) has provided some of the first evidence that one of these non-coding "evolutionary relics" actually has a role in causing cancer.

In a new study in the journal Cell, publishing online today, the scientists report that independent of any other mutations, abnormal amounts of the BRAF pseudogene led to the development of an aggressive lymphoma-like disease in a mouse model, a discovery that suggests that pseudogenes may play a primary role in a variety of diseases. Importantly, the new discovery also suggests that with the addition of this vast "dark matter" the functional genome could be tremendously larger than previously thought - triple or quadruple its current known size.

"Our mouse model of the BRAF pseudogene developed cancer as rapidly and aggressively as it would if you were to express the protein-coding BRAF oncogene," explains senior author Pier Paolo Pandolfi, MD, PhD, Director of the Cancer Center and co-founder of the Institute for RNA Medicine (iRM) at BIDMC and George C. Reisman Professor of Medicine at Harvard Medical School. "It's remarkable that this very aggressive phenotype, resembling human diffuse large B-cell lymphoma, was driven by a piece of so-called 'junk RNA.' As attention turns to precision medicine and the tremendous promise of targeted cancer therapies, all of this vast non-coding material needs to be taken into account. In the past, we have found non-coding RNA to be overexpressed, or misexpressed, but because no one knew what to do with this information it was swept under the carpet. Now we can see that it plays a vital role. We have to study this material, we have to sequence it and we have to take advantage of the tremendous opportunity that it offers for cancer therapy."

The new discovery hinges on the concept of competing endogenous RNAs (ceRNA), a functional capability for pseudogenes first described by Pandolfi almost five years ago when his laboratory discovered that pseudogenes and other noncoding RNAs could act as "decoys" to divert and sequester tiny pieces of RNA known as microRNAs away from their protein-coding counterparts to regulate gene expression.
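
A toy "sponge" model makes the decoy logic concrete: a fixed pool of microRNA partitions between a protein-coding target and a pseudogene decoy, so raising the decoy level frees up more of the target. The Python sketch below uses simple mass-action binding with invented totals and an invented dissociation constant; it is an illustration of the concept, not a model taken from the Cell paper.

def free_micro_rna(mrna_total, decoy_total, mir_total, kd=1.0, iters=60):
    # Bisection for the free microRNA level m satisfying conservation:
    #   mir_total = m + m*mrna_total/(kd + m) + m*decoy_total/(kd + m)
    lo, hi = 0.0, mir_total
    for _ in range(iters):
        m = 0.5 * (lo + hi)
        total = m + m * mrna_total / (kd + m) + m * decoy_total / (kd + m)
        if total > mir_total:
            hi = m
        else:
            lo = m
    return 0.5 * (lo + hi)

def unrepressed_target(mrna_total, decoy_total, mir_total, kd=1.0):
    # Target mRNA not bound (hence not repressed) by microRNA.
    m = free_micro_rna(mrna_total, decoy_total, mir_total, kd)
    return mrna_total * kd / (kd + m)

for decoy in (0, 5, 20, 100):
    free = unrepressed_target(mrna_total=10, decoy_total=decoy, mir_total=50)
    print(f"decoy level {decoy:>3} -> unrepressed BRAF-like target ~ {free:.2f}")

As the decoy level rises, microRNA is titrated away from the target and the unrepressed target level climbs, which is the qualitative behaviour the BRAF pseudogene experiments describe.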

"Our discovery of these 'decoys' revealed a novel new role for messenger RNA, demonstrating that beyond serving as a genetic intermediary in the protein-making process, messenger RNAs could actually regulate expression of one another through this sophisticated new ceRNA 'language,'" says Pandolfi. The team demonstrated in cell culture experiments that when microRNAs were hindered in fulfilling their regulatory function by these microRNA decoys there could be severe consequences, including making cancer cells more aggressive.

In this new paper, the authors wanted to determine if this same ceRNA "cross talk" took place in a living organism—and if it would result in similar consequences.

"We conducted a proof-of-principle experiment using the BRAF pseudogene," explains first author Florian Karreth, PhD, who conducted this work as a postdoctoral fellow in the Pandolfi laboratory. "We investigated whether this pseudogene exerts critical functions in the context of a whole organism and whether its disruption contributes to the development of disease." The investigators focused on the BRAF pseudogene because of its potential ability to regulate the levels of the BRAF protein, a well-known proto-oncogene linked to numerous types of cancer. In addition, says Karreth, the BRAF pseudogene is known to exist in both humans and mice.

The investigators began by testing the BRAF pseudogene in tissue culture. Their findings demonstrated that when overexpressed, the pseudogene did indeed operate as a microRNA decoy that increased the amounts of the BRAF protein. This, in turn, stimulated the MAP-kinase signaling cascade, a pathway through which the BRAF protein controls cell proliferation, differentiation and survival and which is commonly found to be hyperactive in cancer.

When the team went on to create a mouse model in which the BRAF pseudogene was overexpressed they found that the mice developed an aggressive lymphoma-like cancer. "This cancer of B-lymphocytes manifested primarily in the spleens of the animals but also infiltrated other organs including the kidneys and liver," explains Karreth. "We were particularly surprised by the development of such a dramatic phenotype in response to BRAF pseudogene overexpression alone since the development of full-blown cancer usually requires two or more mutational events."

Similar to their findings in their cell culture experiments, the investigators found that the mice overexpressing the BRAF pseudogene displayed higher levels of the BRAF protein and hyperactivation of the MAP kinase pathway, which suggests that this axis is indeed critical to cancer development. They confirmed this by inhibiting the MAP kinase pathway with a drug that dramatically reduced the ability of cancer cells to infiltrate the liver in transplantation experiments.

The Pandolfi team further validated the microRNA decoy function of the BRAF pseudogene by creating two additional transgenic mice, one overexpressing the front half of the BRAF pseudogene, the other overexpressing the back half. Both of these mouse models developed the same lymphoma phenotype as the mice overexpressing the full-length pseudogene, a result which the authors describe as "absolutely astonishing."

"We never expected that portions of the BRAF pseudogene could elicit a phenotype and when both front and back halves induced lymphomas, we were certain the BRAF pseudogene was functioning as a microRNA decoy," says Karreth.

The investigators also found that the BRAF pseudogene is overexpressed in human B-cell lymphomas and that the genomic region containing the BRAF pseudogene is commonly amplified in a variety of human cancers, indicating that the findings in the mouse are of relevance to human cancer development. Moreover, say the authors, silencing of the BRAF pseudogene in human cancer cell lines that expressed higher levels led to reduced cell proliferation, a finding that highlights the importance of the pseudogene in these cancers and suggests that a therapy that reduces BRAF pseudogene levels may be beneficial to cancer patients.

"While we have been busy focusing on the genome's 20,000 coding genes, we have neglected perhaps as many as 100,000 noncoding genetic units," says Pandolfi. "Our new findings not only tell us that we need to characterize the role of all of these non-coding pseudogenes in cancer, but, more urgently, suggest that we need to increase our understanding of the non-coding 'junk' of the genome and incorporate this information into our personalized medicine assays. The game has to start now—we have to sequence and analyze the genome and the RNA transcripts from the non-coding space."

[The game had started at least by 2002 (13 years ago), when FractoGene was submitted, but it is ready now with key IP (8,280,641 in force, with Trade Secrets to improve the Best Methods as of the last CIP in 2007 - that is, 8 years ago). andras_at_pellionisz_dot_com]

[In what is the equivalent of the "Flat Earth Society" in the "Junk DNA Upholding Blogspace", the grave concern about their untenable dogma is quite revealing. While unable to identify the proper DOI there, the question is raised whether the press release represents the views of the authors. For those behind the paywall, here is a verbatim paragraph from the paper - AJP:]

"Pseudogenes were considered genomic junk for decades, but their retention during evolution argues that they may possess important functions and that their deregulation could contribute to the development of disease. Indeed, several lines of evidence have associated pseudogenes with cellular transformation (Poliseno, 2012). Our study shows that aberrant expression of a pseudogene causes cancer, thus vastly expanding the number of genes that may be involved in this disease. Moreover, our work emphasizes the functional importance of the non-coding dimension of the transcriptome and should stimulate further studies of the role of pseudogenes in the development of disease."


Time Magazine Cover Issue - Closing the Cancer Gap

[We are beyond "the point of no return". As is widely known, potent (and expensive) cancer therapies might be next to ineffective for one person whose cancer is medically characterized as the same as another person's (for whom the same therapy could be dramatically effective). The emerging "precision medicine" in cancer has already reached "the point of no return". The Time Magazine Cover Story does not qualify as "good news or bad news" its box stating that "Less than 5% of the 1.6 million Americans diagnosed with cancer each year can take advantage of genetic testing" - to me it clearly indicates that 5% is actually "a point of no return". Granted, reimbursement for genomic testing by some insurance companies is "a struggle" and the 5% figure is unquestionably low, but the wide dissemination e.g. by Time Magazine (also with its title) shows that there is no other way to go, and the question is a matter of realization by the public that "science delivers" - of course, with proper time/money allocation. The lid on the news above (on non-coding "pseudogenes", held by dogmatics way too long to be "junk DNA for the purpose of doing nothing"; Ohno, 1972) is also blown away. andras_at_pellionisz_dot_com]


We have run out of money - time to start thinking!

Dr. Harold Varmus to Step Down as NCI Director

A Letter to the NCI Community

March 4, 2015

To NCI staff, grantees, and advisors:

I am writing to let you know that I sent a letter today to President Obama, informing him that I plan to leave the Directorship of the National Cancer Institute at the end of this month.

I take this step with a mixture of regret and anticipation. Regret, because I will miss this job and my working relationships with so many dedicated and talented people. Anticipation, because I look forward to new opportunities to pursue scientific work in the city, New York, that I continue to call home.

The nearly five years in which I have served as NCI Director have not been easy ones for managing this large enterprise—one that offers so much hope for so many. We have endured losses in real as well as adjusted dollars; survived the threats and reality of government shutdowns; and have not yet recovered all the funds that sequestration has taken away. This experience has been especially vivid to those of us who have lived in better times, when NIH was the beneficiary of strong budgetary growth. As Mae West famously said, "I’ve been rich and I’ve been poor, and rich is better."

While penury is never a good thing, I have sought its silver linings. My efforts to cope with budgetary limits have been guided by Lord Rutherford’s appeal to his British laboratory group during a period of fiscal restraint a century ago: "…we’ve run out of money, it is time to start thinking." Rather than simply hold on to survive our financial crisis without significant change, I have tried with essential help from my senior colleagues to reshape some of our many parts and functions. In this way, I have tried to take advantage of some amazing new opportunities to improve the understanding, prevention, diagnosis, and treatment of cancers, despite fiscal duress.

This is not the place for a detailed account of what we have achieved over the past five years. But a brief list of some satisfying accomplishments serves as a reminder that good things can be done despite the financial shortfalls that have kept us from doing more:

The NCI has established two new Centers: one for Global Health, to organize and expand a long tradition of studying cancer in many other countries; and another, for Cancer Genomics, to realize the promise of understanding and controlling cancer as a disorder of the genome.

Our clinical trials programs (now called the National Clinical Trials Network [NCTN] and the NCI Community Oncology Research Program [NCORP]) have been reconfigured to achieve greater efficiencies, adapt to the advent of targeted drugs and immunotherapies, and enhance the contributions of community cancer centers.

Research under a large NCI contract program in Frederick, Maryland, has been redefined as the Frederick National Laboratory for Cancer Research (FNLCR), with more external advice, a large new initiative to study tumors driven by mutant RAS genes, and greater clarity about FNLCR’s role as a supporter of biomedical research.

In efforts to provide greater stability for investigators in these difficult times, we have established a new seven year Outstanding Investigator Award; are discussing new awards to accelerate graduate and post-doctoral training; and are planning to provide individual support for so-called "staff scientists" at extramural institutions.

To strengthen the NCI-designated cancer centers, we are awarding more supplements to the centers’ budgets to encourage work in high priority areas; helping centers to share resources; and working with the center directors to develop more equitable funding plans.

The NCI has attempted to improve the grant-making process in various ways at a time when success rates for applicants have reached all-time lows:

We have engaged our scientists to identify inadequately studied but important questions about cancer—so-called Provocative Questions—and have provided funds for many well-regarded applications to address them.

We have pioneered the use of a descriptive account of an applicant’s past accomplishments, moving away from mere listings of publications, to allow a fairer appraisal of past contributions to science.

Our program leaders now make more nuanced decisions about funding many individual grants, considering a wide range of highly rated applications, not simply those with scores above an arbitrary pay-line.

And we have maintained NCI’s numbers of research project grants, despite the limits on our budget, while continuing to emphasize the importance of balancing unsolicited applications to do basic cancer research against an increasing call for targeted programs to deliver practical applications.

Of course, it is still too early to judge the long-term consequences of most of these actions. But we do know that many good things have happened in cancer research over the past five years as a result of existing investments:

Our understanding of cancer biology has matured dramatically with the near-completion of The Cancer Genome Atlas and with results from other programs that depend on genomics and basic science, including work with model systems.

Many new targeted therapies have been tested in clinical trials, and several have been approved for general use.

Remarkable clinical successes against several kinds of cancers have been reported with immunological tools—natural and synthetic antibodies, checkpoint inhibitors, and chimeric T cell receptors.

More widespread use of a highly effective vaccine against human papilloma viruses (HPV) and the several cancers they cause has been encouraged by further studies and by an important report from the President’s Cancer Panel.

Radiographic screening for lung cancers in heavy smokers—validated by a large-scale trial just after I arrived at the NCI—has now been endorsed for wide-spread use and for reimbursement by Medicare and other insurers.

New computational methods, such as cloud computing and improved inter-operability, are advancing the dream of integrating vast amounts of molecular data on many cancers into the daily care of such cancers.

Some of these advances are now essential features of the President’s recently announced Precision Medicine initiative that will focus initially on cancer.

Such accomplishments have been possible only because the NCI has been able to recruit and retain exceptional people during my years here; I am grateful to all of you. I am also grateful to the many selfless individuals who have made our advisory groups stronger than ever and to the cancer research advocates who regularly remind me—as well as Congress and the public—about the importance of our work to human welfare.

So what is next?

In my remaining few weeks in this position, I will continue to do the NCI Director’s job with customary energy, despite my inevitable status as a "lame duck." I will also schedule a Town Hall meeting to review some of the things that have happened during my tenure here—revisiting the ambitions I announced when I accepted the job and answering questions.

As I just learned today, the White House has approved the appointment of my chief deputy and close friend, Doug Lowy, to serve as Acting Director of the NCI, beginning on April 1st. This gives me enormous pleasure, because Doug—along with Jim Doroshow, the NCI’s Deputy Director for Clinical and Translational Research—made many of NCI’s recent accomplishments possible; is a distinguished scientist, who was recently honored by the President with a National Medal for Technology and Innovation for his work on human papilloma virus vaccines; and is a remarkably congenial person to work with. The NCI will be in excellent hands.

Finally, when I return to New York City full time on April 1st, I will establish a modestly sized research laboratory in the Meyer Cancer Center at the Weill-Cornell Medical College and serve as a senior advisor to the Dean. In addition, I plan to assist the recently founded New York Genome Center as it develops its research and service functions and helps regional institutions introduce genomics into cancer care.

While I look forward to these new adventures and to leading a life concentrated in one place, I know I will miss many of the people, authorities, and ideas that make the NCI Directorship such a stimulating and rewarding position.

With deep respect and gratitude to the entire NCI community,

Harold Varmus

Posted: March 4, 2015

----

--http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4199368/

Genome Res. 2014 Oct; 24(10): 1559–1571.

doi: 10.1101/gr.164871.113

PMCID: PMC4199368

Systems consequences of amplicon formation in human breast cancer

Koichiro Inaki,1,2,9 Francesca Menghi,1,2,9 Xing Yi Woo,1,9 Joel P. Wagner,1,2,3 Pierre-Étienne Jacques,4,5 Yi Fang Lee,1 Phung Trang Shreckengast,2 Wendy WeiJia Soon,1 Ankit Malhotra,2 Audrey S.M. Teo,1 Axel M. Hillmer,1 Alexis Jiaying Khng,1 Xiaoan Ruan,6 Swee Hoe Ong,4 Denis Bertrand,4 Niranjan Nagarajan,4 R. Krishna Murthy Karuturi,4,7 Alfredo Hidalgo Miranda,8 and Edison T. Liu1,2,7 (corresponding author)

1Cancer Therapeutics and Stratified Oncology, Genome Institute of Singapore, Genome, Singapore 138672, Singapore;

2The Jackson Laboratory for Genomic Medicine, Farmington, Connecticut 06030, USA;

3Department of Biological Engineering, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA;

4Computational and Systems Biology, Genome Institute of Singapore, Genome, Singapore 138672, Singapore;

5Université de Sherbrooke, Sherbrooke, Québec, J1K 2R1, Canada;

6Genome Technology and Biology, Genome Institute of Singapore, Genome, Singapore 138672, Singapore;

7The Jackson Laboratory, Bar Harbor, Maine 04609, USA;

8National Institute of Genomic Medicine, Periferico Sur 4124, Mexico City 01900, Mexico

Corresponding author: Edison T. Liu.

9These authors contributed equally to this work.

Although in earlier studies the major focus was to find specific driver oncogenes in amplicons and tumor suppressor genes in common regions of loss (primarily using loss of heterozygosity mapping), progressively there emerged an understanding that more than one driver oncogene may be present in any amplicon. Moreover, each amplicon or region of copy number loss alters the expression of many adjacent genes, some with proven conjoint cancer effects (Zhang et al. 2009; Curtis et al. 2012). Thus, any cancer is likely to be a composite of hundreds to thousands of gene changes that contribute to the cancer state. Although specific point mutations contribute to adaptive evolutionary processes, recent genomic analyses from controlled evolutionary experiments in model systems suggest that copy number changes through segmental duplications and rearrangements may play a more prominent role (Chang et al. 2013; Fares et al. 2013).

--

Mutations in noncoding DNA also cause cancer

New discovery could lead to novel field of study within cancer research.

October 12, 2014 - 06:2


An international group of cancer researchers has completed the first-ever systematic study of noncoding DNA. They found that, despite previous beliefs, mutations in the noncoding DNA can cause cancer.

Until now, scientists have only investigated 1.5 per cent of the total human DNA. This is the part of the DNA which consists of genes. The remaining 98.5 per cent of the DNA is called noncoding DNA and resides outside of the genes.

The study, just published in Nature Genetics, shows that the majority of cancer patients have mutations in both their genes and the areas outside the genes.

The discovery could lead to a completely new field of study within cancer research and prevention.

"In the long term this may lead to better diagnoses and treatments," says co-author postdoc Anders Jacobsen from the University of Copenhagen at the department of Computational and RNA Biology.

Over the past 10 years scientists have found more and more abnormalities in DNA which lead to cancer.

Colleague is excited

Finn Cilius Nielsen, Professor and Head of the Department of Genomic Medicine at Rigshospitalet, did not contribute to the study, but has read it and is very excited.

He says the study shows the importance of looking into the noncoding regions of our DNA.

“It's interesting and points to the fact that we could discover clinically relevant information from the noncoding regions," says Nielsen. "Studies like this one could come up with some vital explanations for the causes of cancer," he adds.

Examined 20 different cancer types

The scientists were looking at DNA mutations in 800 cancer patients with more than 20 different types of cancer.

They compared DNA from the patients' tumours with DNA from healthy tissue from the same patients. By doing so they were able to identify the differences between healthy and sick cells and the reason why the tumour had grown.

The scientists were interested in the noncoding regions of the DNA. These regions do not translate into protein as genes do -- instead, they have a different, biochemical task: they regulate how much of a particular gene is expressed, that is, whether the gene is "on" or "off".

“For the first time we have been able to see mutations in the noncoding DNA and how these can be the direct cause of cancer,” says Jacobsen.

Mutation gives cancer eternal life

Several mutations connected to the development of cancer were discovered by the scientists. They found that mutations in the region in front of the gene that controls the length of telomeres can trigger cancer.

Telomeres decide how many times a cell can divide, and every time a cell divides the telomeres become shorter.

This means that at some stage the telomeres are so short that the cells can no longer divide.

However, mutations in the region before the gene TERT make the gene hyperactive. The telomeres are then extended far beyond their normal length, and a mutation like this will make the cell keep on dividing -- eventually forming a tumour.

“This mutation in the noncoding part of the DNA basically gives the cancer cells eternal life," says Jacobsen. "It was exciting that our research proved to have such a concrete result."

The scientists found that this mutation was the most frequently occurring cancer-causing mutation outside the genes.

More studies in the future

Jacobsen is convinced there will be many more studies looking at the noncoding DNA in the future.

"Our study shows that there's something here which needs to be looked at. With more studies, we can get a much better insight into what happens in cells when cancer occurs,” says Jacobsen. “We can learn a lot about he different cancers and their causes from this. In the long run we hope to develope new treatments.”

Nielsen agrees that there is a need for further studies in the area.

"We need more studies of this kind. I think it'll happen naturally. Within the next 10 to 15 years we'll be able to do complete genome sequencing quickly and cheaply, and then we'll naturally look at mutations in the entire genome -- rather than just in the genes," he says.

--------------

Read the original story in Danish on Videnskab.dk

[Some of us have been thinking, moreover using high-performance computers for quite some time, aiming at the "NP-hard" problem of fractal pattern recognition. The first (double) disruption was to replace the mistaken dogmas of "Junk DNA" and the "Central Dogma". "Genes failed us" - the very concept of "oncogenes" seemed to exclude the obvious: not only the 571 "oncogenes" found so far, but potentially ALL genes can become "misregulated" by fractal defects, including defects in the vast seas of intergenic "non-coding" (not-Junk) DNA. Any qualified informatics-specialist or physicist would be mesmerized to witness a PERSON trying to figure out nuclear particles either in fission or fusion (once the "axiom" that the atom would not split was invalidated by its splitting). How many hundreds of millions will have to face a uniquely miserable death till a global effort is directed to the informatics and computing challenges of "genome misregulation" (a.k.a. cancer)? - andras_at_pellionisz_dot_com]


The Genome (both DNA and RNA) is replete with repeats. These are facts. The question is the mathematics (fractals) that is best suited to interpret self-similar repetitions

Isidore Rigoutsos (Greek-American mathematician) surprised the world in 2006 with the finding that DNA (coding or not) is replete with "pyknon"-s (repetitions). Pointing out their astounding feature of "self-similarity", Pellionisz interpreted Rigoutsos' "pyknon"-s as evidence that genome function must be understood in terms of fractals. In a study first shown at Cold Spring Harbor (2009), Pellionisz demonstrated for the smallest genome of a free-living organism (Mycoplasma genitalium) that the distribution of self-similar repetitions follows the Zipf-Mandelbrot Parabolic Fractal Distribution curve. (See Figure here). Within two weeks, Erez Lieberman, Eric Lander (and others) put the Hilbert fractal globule on the cover of Science.
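To make the rank-frequency idea concrete, here is a minimal Python sketch (an illustration only, not the pipeline of the cited Cold Spring Harbor study): it counts k-mer repeats in a toy sequence and compares the observed rank-frequency curve with a Zipf-Mandelbrot law f(r) = C/(r+q)^s. The sequence, the k-mer length and the parameter values are assumptions for demonstration.

# Sketch: rank-frequency of repeated k-mers vs. a Zipf-Mandelbrot curve.
# The toy sequence, k-mer length and parameter choices are illustrative only.
from collections import Counter

def kmer_counts(seq, k=6):
    """Count occurrences of every k-mer in a DNA string."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def zipf_mandelbrot(rank, C, q, s):
    """Zipf-Mandelbrot law: frequency ~ C / (rank + q)**s."""
    return C / (rank + q) ** s

if __name__ == "__main__":
    # Toy sequence standing in for a real genome (e.g. Mycoplasma genitalium).
    seq = ("ATGCGT" * 500) + ("GGCATC" * 300) + ("TTAGGC" * 100)
    freqs = sorted(kmer_counts(seq).values(), reverse=True)   # observed rank-frequency
    # Crude comparison with hand-picked parameters; a real analysis would fit
    # C, q and s by least squares on the log-log rank-frequency data.
    C, q, s = float(freqs[0]), 1.0, 1.0
    for r, f in enumerate(freqs[:10], start=1):
        print(f"rank {r:2d}  observed {f:5d}  Zipf-Mandelbrot {zipf_mandelbrot(r, C, q, s):8.1f}")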

Now about 40 co-authors (including the RNA pioneer John Mattick), with Rigoutsos as last author, have published in PNAS a paper available in full here.

Just a glance at their Fig. 7 (above) will instantly convince anyone that microRNAs (key agents of genome regulation, with dual valence) manifest "self-similar repetitions". [You may wonder what happens next, andras_at_pellionisz_dot_com]


On the Fractal Design in Human Brain and Nervous Tissue - Losa recognizes FractoGene

... the FractoGene “cause and effect” concept conceived that “fractal genome governs fractal growth of organelles, organs and organisms” [Pellionisz, A.J. (2012) The Decade of FractoGene: From Discovery to Utility-Proofs of Concept Open Genome-Based Clinical Applications. International Journal of Systemics, Cybernetics and Informatics, 17-28]. The Principle of this recursive genome function (PRGF) breaks through the double lock of central dogma and junk DNA barriers [Pellionisz, A. (1989) Neural Geometry: Towards a Fractal Model of Neurons. Cambridge University Press, Cambridge]. Decades of computer modeling of neurons and neuronal networks suggested that the amount of information necessary to build just a tiny fraction of the human body, i.e. just the cerebellum of the nervous system, was a task for which the 1.3% of the information that the genome could contain [as "genes"] was just totally insufficient [Pellionisz, A. (2008) The Principle of Recursive Genome Function. Cerebellum, 7, 348-359. http://dx.doi.org/10.1007/s12311-008-0035-y].

... Among the main fractal peculiarities worth noticing is the process of iteration, whose powerful dynamics allows specific generators to be properly iterated at different scales (small and large) without an a priori choice, by linking efficient genetic programming in order to achieve the formation of viable biological forms and living objects [Di Ieva, A., Grizzi, F., Jelinek, H., Pellionisz, A.J. and Losa, G.A. (2013) Fractals in the Neurosciences, Part I: General Principles and Basic Neurosciences. The Neuroscientist. PMID: 24362815].

How to cite this paper: Losa, G.A. (2014) On the Fractal Design in Human Brain and Nervous Tissue. Applied Mathematics, 5, 1725-1732. http://dx.doi.org/10.4236/am.2014.512165

[Recognition of FractoGene by Gabriele Losa (and co-publishing in 2014) is significant since Dr. Losa in Switzerland pioneered, in a four-volume meeting-book series prior to and during the Human Genome Project, an excellent compilation of book chapters both on the fractality of the genome and, separately, on the fractality of organisms. In fact, some contributions contained pointers to both fractalities. However, just about the time "to connect the dots", the Human Genome Project, with its historically mistaken focus on "genes" (motivated by the personal enthusiasm of Jim Watson, such that by mapping all human genes the "schizophrenia gene" should also be found), put the fractal pioneering by Dr. Losa on a back-burner. It took another decade till FractoGene (2002) "connected the dots": the "cause and effect" that fractal genome governs fractal growth of organelles, organs and organisms could break through the double lock of central dogma and junk DNA barriers that unfortunately still prevailed through the Losa Books (1-4). Outside that double straitjacket the enormous utility is now free to roam. "Google Alert" pointed to this Losa paper with delay - Dr. Pellionisz respectfully requests that .pdf reprints of publications pertinent to FractoGene be sent ASAP to andras_at_pellionisz_dot_com for proper contemporary compilation and cross-reference. Indeed, as heralded in the Google Tech Talk YouTube (2008), time is ripe for a postmodern meeting (with Proceedings). Those interested should contact Dr. Pellionisz]


CRACKING THE CODE OF HUMAN LIFE: The Birth of BioInformatics & Computational Genomics

[The March 16, 2015 issue of Pharmaceutical Intelligence, with the Introduction by Dr. Larry H. Bernstein, puts together an earlier assessment of the disruptive fractal approach to genomics with the new hope of genome editing. "Fractal defects" appear in an entirely new light with genome editing becoming a reality. Pharmaceutical Intelligence excerpts are edited by AJP; hyperlinks and the central email address corrected; andras_at_pellionisz_dot_com]

http://pharmaceuticalintelligence.com/contributors-biographies/members-of-the-board/larry-bernstein/

CRACKING THE CODE OF HUMAN LIFE: The Birth of BioInformatics & Computational Genomics

[About Dr. Larry H. Bernstein] - I retired from a five-year position as Chief of the Division of Clinical Pathology (Laboratory Medicine) at New York Methodist Hospital-Weill Cornell Affiliate, Park Slope, Brooklyn in 2008, followed by an interim consultancy at Norwalk Hospital in 2010. I then became engaged with a medical informatics project called “Second Opinion” with Gil David and Ronald Coifman, Emeritus Professor and Chairman of the Department of Mathematics in the Program in Applied Mathematics at Yale. I went to Prof. Coifman with a large database of 30,000 hemograms -- the most commonly ordered test in medicine because of the elucidation of the red cell, white cell and platelet populations in the blood. The problem boiled down to the level of noise that exists in such data, and to developing a primary evidence-based classification that technology did not support until the first decade of the 21st century.

Part II B: Computational Genomics

1. Three-Dimensional Folding and Functional Organization Principles of The Drosophila Genome

Sexton T, Yaffe E, Kenigsberg E, Bantignies F,…Cavalli G. Institut de Génétique Humaine, Montpellier GenomiX, and Weizmann Institute, France and Israel. Cell 2012; 148(3): 458-472.

http://dx.doi.org/10.1016/j.cell.2012.01.010/

http://www.cell.com/retrieve/pii/S0092867412000165

http://www.ncbi.nlm.nih.gov/pubmed/22265598

Chromosomes are the physical realization of genetic information and thus form the basis for its readout and propagation. The entire genome is linearly partitioned into well-demarcated physical domains that overlap extensively with active and repressive epigenetic marks.

Chromosomal contacts are hierarchically organized between domains. Global modeling of contact density and clustering of domains show that inactive domains are condensed and confined to their chromosomal territories, whereas active domains reach out of the territory to form remote intra- and interchromosomal contacts.

Moreover, we systematically identify specific long-range intrachromosomal contacts between Polycomb-repressed domains.

Together, these observations allow for quantitative prediction of the Drosophila chromosomal contact map, laying the foundation for detailed studies of chromosome structure and function in a genetically tractable system.

2A. Architecture Reveals Genome’s Secrets

Three-dimensional genome maps - Human chromosome

Genome sequencing projects have provided rich troves of information about stretches of DNA that regulate gene expression, as well as how different genetic sequences contribute to health and disease. But these studies miss a key element of the genome - its spatial organization -which has long been recognized as an important regulator of gene expression.

Regulatory elements often lie thousands of base pairs away from their target genes, and recent technological advances are allowing scientists to begin examining how distant chromosome locations interact inside a nucleus.

The creation and function of 3-D genome organization, some say, is the next frontier of genetics.

Mapping and sequencing may be completely separate processes. For example, it’s possible to determine the location of a gene - to “map” the gene - without sequencing it. Thus, a map may tell you nothing about the sequence of the genome, and a sequence may tell you nothing about the map. But the landmarks on a map are DNA sequences, and mapping is the cousin of sequencing. A map of a sequence might look like this:

On this map, GCC is one landmark; CCCC is another. Here the sequence itself is a landmark on a map (a toy illustration is sketched below). In general, particularly for humans and other species with large genomes, creating a reasonably comprehensive genome map is quicker and cheaper than sequencing the entire genome, because mapping involves less information to collect and organize than sequencing does.
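Since the map figure itself is not reproduced here, a minimal Python sketch of the idea follows: landmarks are just short sequences whose positions along a longer stretch of DNA constitute the "map". The toy sequence and the landmark motifs are illustrative assumptions.

# Sketch: a "map" as the positions of short landmark sequences along a DNA string.
def landmark_positions(sequence, landmark):
    """Return every position at which a landmark motif occurs."""
    hits, start = [], sequence.find(landmark)
    while start != -1:
        hits.append(start)
        start = sequence.find(landmark, start + 1)
    return hits

toy_sequence = "TTGCCATAGCCCCTTAAGCCGTACCCCGA"
for mark in ("GCC", "CCCC"):
    print(mark, "found at positions", landmark_positions(toy_sequence, mark))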

Completed in 2003, the Human Genome Project (HGP) was a 13-year project. The goals were:

* identify all the approximately 20,000-25,000 genes in human DNA,

* determine the sequences of the 3 billion chemical base pairs that make up human DNA,

* store this information in databases,

* improve tools for data analysis,

* transfer related technologies to the private sector, and

* address the ethical, legal, and social issues (ELSI) that may arise from the project.

Though the HGP is finished, analyses of the data will continue for many years. By licensing technologies to private companies and awarding grants for innovative research, the project catalyzed the multibillion-dollar U.S. biotechnology industry and fostered the development of new medical applications. When genes are expressed, their sequences are first converted into messenger RNA transcripts, which can be isolated in the form of complementary DNAs (cDNAs). A small portion of each cDNA sequence is all that is needed to develop unique gene markers, known as sequence tagged sites or STSs, which can be detected using the polymerase chain reaction (PCR). To construct a transcript map, cDNA sequences from a master catalog of human genes were distributed to mapping laboratories in North America, Europe, and Japan. These cDNAs were converted to STSs and their physical locations on chromosomes determined on one of two radiation hybrid (RH) panels or a yeast artificial chromosome (YAC) library containing human genomic DNA. This mapping data was integrated relative to the human genetic map and then cross-referenced to cytogenetic band maps of the chromosomes. (Further details are available in the accompanying article in the 25 October issue of SCIENCE).

Tremendous progress has been made in the mapping of human genes, a major milestone in the Human Genome Project. Apart from its utility in advancing our understanding of the genetic basis of disease, it provides a framework and focus for accelerated sequencing efforts by highlighting key landmarks (gene-rich regions) of the chromosomes. The construction of this map has been possible through the cooperative efforts of an international consortium of scientists who provide equal, full and unrestricted access to the data for the advancement of biology and human health.

There are two types of maps: genetic linkage map and physical map. The genetic linkage map shows the arrangement of genes and genetic markers along the chromosomes as calculated by the frequency with which they are inherited together. The physical map is representation of the chromosomes, providing the physical distance between landmarks on the chromosome, ideally measured in nucleotide bases. Physical maps can be divided into three general types: chromosomal or cytogenetic maps, radiation hybrid (RH) maps, and sequence maps.

2B. Genome-nuclear lamina interactions and gene regulation.

Kind J, van Steensel B. Division of Gene Regulation, Netherlands Cancer Institute, Amsterdam, The Netherlands.

The nuclear lamina, a filamentous protein network that coats the inner nuclear membrane, has long been thought to interact with specific genomic loci and regulate their expression. Molecular mapping studies have now identified large genomic domains that are in contact with the lamina.

Genes in these domains are typically repressed, and artificial tethering experiments indicate that the lamina can actively contribute to this repression.

Furthermore, the lamina indirectly controls gene expression in the nuclear interior by sequestration of certain transcription factors.

Mol Cell. 2010; 38(4):603-13. http://dx.doi.org/10.1016/j.molcel.2010.03.016

Molecular maps of the reorganization of genome-nuclear lamina interactions during differentiation

Peric-Hupkes D, Meuleman W, Pagie L, Bruggeman SW, Solovei I, …., van Steensel B. Division of Gene Regulation, Netherlands Cancer Institute, Amsterdam, The Netherlands.

To visualize the three-dimensional organization of chromosomes within the nucleus, we generated high-resolution maps of genome-nuclear lamina interactions during subsequent differentiation of mouse embryonic stem cells via lineage-committed neural precursor cells into terminally differentiated astrocytes. A basal chromosome architecture present in embryonic stem cells is cumulatively altered at hundreds of sites during lineage commitment and subsequent terminal differentiation. This remodeling involves both individual transcription units and multigene regions and affects many genes that determine cellular identity. Genes that move away from the lamina are concomitantly activated; others remain inactive yet become unlocked for activation in a next differentiation step. Thus, lamina-genome interactions are widely involved in the control of gene expression programs during lineage commitment and terminal differentiation.

Molecular Maps of the Reorganization of Genome-Nuclear Lamina Interactions during Differentiation

Molecular Cell 2010; 38(4): 603-613. http://dx.doi.org/10.1016/j.molcel.2010.03.016


Authors: Daan Peric-Hupkes, Wouter Meuleman, Ludo Pagie, Sophia W.M. Bruggeman, et al.

Various cell types share a core architecture of genome-nuclear lamina interactions. During differentiation, hundreds of genes change their lamina interactions. Changes in lamina interactions reflect cell identity. Release from the lamina may unlock some genes for activation

Fractal “globule”

About 10 years ago - just as the human genome project was completing its first draft sequence - Dekker pioneered a new technique, called chromosome conformation capture (3C), that allowed researchers to get a glimpse of how chromosomes are arranged relative to each other in the nucleus. The technique relies on the physical cross-linking of chromosomal regions that lie in close proximity to one another. The regions are then sequenced to identify which regions have been cross-linked. In 2009, using a high-throughput version of this basic method, called Hi-C, Dekker and his collaborators discovered that the human genome appears to adopt a “fractal globule” conformation - a manner of crumpling without knotting.
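One quantitative signature reported for the fractal globule is the way contact frequency falls off with genomic distance (an exponent near -1 in Lieberman-Aiden et al. 2009, versus about -3/2 for an equilibrium globule). The Python sketch below is an assumed, simplified illustration of how such a scaling exponent can be estimated from a Hi-C style contact matrix; the synthetic matrix merely mimics 1/s decay and is not real data.

# Sketch: estimate the contact-probability scaling exponent from a contact matrix.
import numpy as np

def contact_scaling(contacts):
    """Mean contact frequency at each genomic separation (in bins)."""
    n = contacts.shape[0]
    seps = np.arange(1, n)
    means = np.array([np.diagonal(contacts, offset=s).mean() for s in seps])
    return seps, means

# Synthetic matrix with contacts decaying roughly as 1/separation, standing in
# for real Hi-C data binned at, say, 1 Mb resolution.
rng = np.random.default_rng(0)
n = 200
i, j = np.indices((n, n))
contacts = 1.0 / (np.abs(i - j) + 1) + rng.normal(0, 1e-3, size=(n, n))

seps, means = contact_scaling(contacts)
slope, _ = np.polyfit(np.log(seps), np.log(np.clip(means, 1e-9, None)), 1)
print(f"estimated scaling exponent ~ {slope:.2f} (fractal-globule expectation: about -1)")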

In the last 3 years, Job Dekker and others have advanced the technology even further, allowing them to paint a more refined picture of how the genome folds—and how this influences gene expression and disease states. Dekker’s 2009 findings were a breakthrough in modeling genome folding, but the resolution—about 1 million base pairs—was too crude to allow scientists to really understand how genes interacted with specific regulatory elements. The researchers report two striking findings.

First, the human genome is organized into two separate compartments, keeping

* active genes separate and accessible

* while sequestering unused DNA in a denser storage compartment.

* Chromosomes snake in and out of the two compartments repeatedly

* as their DNA alternates between active, gene-rich and inactive, gene-poor stretches.

Second, at a finer scale, the genome adopts an unusual organization known in mathematics as a “fractal.” The specific architecture the scientists found, called

* a “fractal globule,” enables the cell to pack DNA incredibly tightly – the information density in the nucleus is trillions of times higher than on a computer chip — while avoiding the knots and tangles that might interfere with the cell’s ability to read its own genome. Moreover, the DNA can easily Unfold and Refold during

* gene activation,

* gene repression, and

* cell replication.

Dekker and his colleagues discovered, for example, that chromosomes can be divided into folding domains—megabase-long segments within which genes and regulatory elements associate more often with one another than with other chromosome sections.

The DNA forms loops within the domains that bring a gene into close proximity with a specific regulatory element at a distant location along the chromosome. Another group, that of molecular biologist Bing Ren at the University of California, San Diego, published a similar finding in the same issue of Nature. Dekker thinks the discovery of [folding] domains will be one of the most fundamental [genetics] discoveries of the last 10 years. The big questions now are

* how these domains are formed, and

* what determines which elements are looped into proximity.

“By breaking the genome into millions of pieces, we created a spatial map showing how close different parts are to one another,” says co-first author Nynke van Berkum, a postdoctoral researcher at UMass Medical School in Dekker‘s laboratory. “We made a fantastic three-dimensional jigsaw puzzle and then, with a computer, solved the puzzle.”
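As a rough illustration of what "solving the puzzle with a computer" can mean (not the published reconstruction pipeline), classical multidimensional scaling turns a matrix of pairwise distances back into 3-D coordinates, up to rotation and reflection. In practice the distances would first have to be inferred from Hi-C contact frequencies, since loci that touch often tend to be spatially close; the toy structure below is random.

# Sketch: classical multidimensional scaling recovers 3-D coordinates from distances.
import numpy as np

def classical_mds(dist, dim=3):
    """Embed points in `dim` dimensions from a matrix of pairwise distances."""
    n = dist.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (dist ** 2) @ J               # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:dim]         # keep the top eigenpairs
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))

rng = np.random.default_rng(1)
true_xyz = rng.normal(size=(50, 3))              # toy "true" 3-D structure
dist = np.linalg.norm(true_xyz[:, None, :] - true_xyz[None, :, :], axis=-1)
recovered = classical_mds(dist, dim=3)
print("recovered coordinates shape:", recovered.shape)   # (50, 3), up to rotation/reflection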

Lieberman-Aiden, van Berkum, Lander, and Dekker’s co-authors are Bryan R. Lajoie of UMMS; Louise Williams, Ido Amit, and Andreas Gnirke of the Broad Institute; Maxim Imakaev and Leonid A. Mirny of MIT; Tobias Ragoczy, Agnes Telling, and Mark Groudine of the Fred Hutchinson Cancer Research Center and the University of Washington; Peter J. Sabo, Michael O. Dorschner, Richard Sandstrom, M.A. Bender, and John Stamatoyannopoulos of the University of Washington; and Bradley Bernstein of the Broad Institute and Harvard Medical School.

2C. Three-dimensional structure of the human genome

Lieberman-Aiden et al. Comprehensive mapping of long-range interactions reveals folding principles of the human genome. Science, 2009; DOI: 10.1126/science.1181369.

Harvard University (2009, October 11). 3-D Structure Of Human Genome: Fractal Globule Architecture Packs Two Meters Of DNA Into Each Cell. ScienceDaily. Retrieved February 2, 2013, from http://www.sciencedaily.com/releases/2009/10/091008142957

Using a new technology called Hi-C, the team addressed the thorny question of how each of our cells stows some three billion base pairs of DNA while maintaining access to functionally crucial segments. The paper comes from a team led by scientists at Harvard University, the Broad Institute of Harvard and MIT, University of Massachusetts Medical School, and the Massachusetts Institute of Technology. “We’ve long known that on a small scale, DNA is a double helix,” says co-first author Erez Lieberman-Aiden, a graduate student in the Harvard-MIT Division of Health Science and Technology and a researcher at Harvard’s School of Engineering and Applied Sciences and in the laboratory of Eric Lander at the Broad Institute. “But if the double helix didn’t fold further, the genome in each cell would be two meters long. Scientists have not really understood how the double helix folds to fit into the nucleus of a human cell, which is only about a hundredth of a millimeter in diameter. This new approach enabled us to probe exactly that question.”

The mapping technique that Aiden and his colleagues have come up with bridges a crucial gap in knowledge—between what goes on at the smallest levels of genetics (the double helix of DNA and the base pairs) and the largest levels (the way DNA is gathered up into the 23 chromosomes that contain much of the human genome). The intermediate level, on the order of thousands or millions of base pairs, has remained murky. As the genome is so closely wound, base pairs in one end can be close to others at another end in ways that are not obvious merely by knowing the sequence of base pairs. Borrowing from work that was started in the 1990s, Aiden and others have been able to figure out which base pairs have wound up next to one another. From there, they can begin to reconstruct the genome—in three dimensions.

Even as the multi-dimensional mapping techniques remain in their early stages, their importance in basic biological research is becoming ever more apparent. “The three-dimensional genome is a powerful thing to know,” Aiden says. “A central mystery of biology is the question of how different cells perform different functions—despite the fact that they share the same genome.” How does a liver cell, for example, “know” to perform its liver duties when it contains the same genome as a cell in the eye? As Aiden and others reconstruct the trail of letters into a three-dimensional entity, they have begun to see that “the way the genome is folded determines which genes were

2D. “Mr. President; The Genome is Fractal !”

Eric Lander (Science Adviser to the President and Director of the Broad Institute) et al. delivered the message on the cover of Science Magazine (Oct. 9, 2009), and interest in this had been generated by the International HoloGenomics Society at a September meeting [Pellionisz, Sept. 16, 2009, Cold Spring Harbor]

First, it may seem to be trivial to rectify the statement in “About cover” of Science Magazine by AAAS.

The statement “the Hilbert curve is a one-dimensional fractal trajectory” needs mathematical clarification.

The mathematical concept of a Hilbert space, named after David Hilbert, generalizes the notion of Euclidean space. It extends the methods of vector algebra and calculus from the two-dimensional Euclidean plane and three-dimensional space to spaces with any finite or infinite number of dimensions. A Hilbert space is an abstract vector space possessing the structure of an inner product that allows length and angle to be measured. Furthermore, Hilbert spaces must be complete, a property that stipulates the existence of enough limits in the space to allow the techniques of calculus to be used. A Hilbert curve (also known as a Hilbert space-filling curve) is a continuous fractal space-filling curve first described by the German mathematician David Hilbert in 1891, as a variant of the space-filling curves discovered by Giuseppe Peano in 1890. For multidimensional databases, Hilbert order has been proposed to be used instead of Z order because it has better locality-preserving behavior.

Representation as Lindenmayer system

The Hilbert Curve can be expressed by a rewrite system (L-system).
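A minimal Python sketch of that rewrite system follows, using the standard textbook L-system rules for the Hilbert curve (axiom "A"; A -> -BF+AFA+FB-, B -> +AF-BFB-FA+, with F a unit step forward and +/- quarter turns); no claim is made that this is the exact notation intended in the article.

# Sketch: the Hilbert curve generated from its L-system rewrite rules.
RULES = {"A": "-BF+AFA+FB-", "B": "+AF-BFB-FA+"}

def expand(axiom, order):
    """Apply the rewrite rules `order` times."""
    s = axiom
    for _ in range(order):
        s = "".join(RULES.get(ch, ch) for ch in s)
    return s

def to_points(instructions):
    """Turn the turtle instructions into a list of grid vertices."""
    x, y, dx, dy = 0, 0, 1, 0
    points = [(x, y)]
    for ch in instructions:
        if ch == "F":                # step forward
            x, y = x + dx, y + dy
            points.append((x, y))
        elif ch == "+":              # turn left 90 degrees
            dx, dy = -dy, dx
        elif ch == "-":              # turn right 90 degrees
            dx, dy = dy, -dx
    return points

curve = to_points(expand("A", order=3))
print(len(curve), "vertices; first few:", curve[:5])   # 64 vertices for order 3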

While the paper itself does not make this statement, the new Editorship of the AAAS Magazine might have been even further ahead had the previous Editorship not rejected (without review) a Manuscript by 20+ Founders of the (formerly) International PostGenetics Society in December, 2006 - [only an Abstract by Pellionisz could be published, at his Symposium in his native Budapest, 2006, AJP].

Second, it may not be sufficiently clear for the reader that the reasonable requirement for the DNA polymerase to crawl along a “knot-free” (or “low knot”) structure does not need fractals. A “knot-free” structure could be spooled by an ordinary “knitting globule” (such that the DNA polymerase does not bump into a “knot” when duplicating the strand; just like someone knitting can go through the entire thread without encountering an annoying knot): Just to be “knot-free” you don’t need fractals. Note, however, that

* the “strand” can be accessed only at its beginning – it is impossible, e.g., to pluck a segment from deep inside the “globulus”.

This is where certain fractals provide a major advantage – that could be the “Eureka” moment for many readers. [Below, citing a heavily spammed email address instead of the secured andras_at_pellionisz_dot_com, the "Heureka explanation" borrows from here - AJP] For instance,

* the mentioned Hilbert-curve is not only “knot free” -

* but provides an easy access to “linearly remote” segments of the strand.

* If the Hilbert curve starts from the lower right corner and ends at the lower left corner, for instance

* the path shows the very easy access of what would be the mid-point

* if the Hilbert-curve is measured by the Euclidean distance along the zig-zagged path.

Likewise, even the path from the beginning of the Hilbert-curve is about equally easy to access – easier than to reach from the origin a point that is about 2/3 down the path. The Hilbert-curve provides an easy access between two points within the “spooled thread”; from a point that is about 1/5 of the overall length to about 3/5 is also in a “close neighborhood”.

This may be the “Eureka-moment” for some readers, to realize that

* the strand of “the Double Helix” requires quite a finesse to fold into the densest possible globuli (the chromosomes) in a clever way

* that various segments can be easily accessed. Moreover, in a way that distances between various segments are minimized.

This marvellous fractal structure is illustrated by the 3D rendering of the Hilbert-curve. Once you observe such a fractal structure, you’ll never again think of a chromosome as a “brillo mess”, will you? It will dawn on you that the genome is orders of magnitude more finessed than we ever thought.

Those embarking on a somewhat complex review of some historical aspects of the power of fractals may wish to consult the oeuvre of Mandelbrot (also, to celebrate his 85th birthday). For the more sophisticated readers, even the fairly simple Hilbert-curve (a representative of the Peano-class) becomes even more stunningly brilliant than just some “see-through density”. Those who are familiar with the classic “Traveling Salesman Problem” know that “the shortest path along which every given n locations can be visited once, and only once” requires fairly sophisticated algorithms (and a tremendous amount of computation if n>10, or much more). Some readers will be amazed, therefore, that for n=9 the underlying Hilbert-curve helps to provide an empirical solution.
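The Python sketch below shows that locality property in action as a simple (assumed, heuristic) shortcut for the Traveling Salesman Problem: visiting points in the order of their Hilbert-curve index usually yields a much shorter tour than a random order, although it is not an exact solver.

# Sketch: a Hilbert-curve ordering as a quick Traveling Salesman heuristic.
import math, random

def xy2d(n, x, y):
    """Hilbert index of grid point (x, y); n is the grid size, a power of 2."""
    d, s = 0, n // 2
    while s > 0:
        rx = 1 if x & s else 0
        ry = 1 if y & s else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:                          # rotate the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        s //= 2
    return d

def tour_length(points):
    """Total length of the closed tour visiting the points in the given order."""
    return sum(math.dist(points[i], points[(i + 1) % len(points)])
               for i in range(len(points)))

random.seed(0)
cities = [(random.randrange(256), random.randrange(256)) for _ in range(9)]
hilbert_tour = sorted(cities, key=lambda p: xy2d(256, *p))
print("random-order tour :", round(tour_length(cities), 1))
print("Hilbert-order tour:", round(tour_length(hilbert_tour), 1))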

refer to [Andras J. Pellionisz, andras_at_pellionisz_dot_com]

Briefly, the significance of the above realization, that the (recursive) Fractal Hilbert Curve is intimately connected to the (recursive) solution of the Traveling Salesman Problem, a core concept of Artificial Neural Networks, can be summarized as below.

Accomplished physicist John Hopfield (already a member of the National Academy of Sciences) aroused great excitement in 1982 with his (recursive) design of artificial neural networks and learning algorithms which were able to find reasonable solutions to combinatorial problems such as the Traveling Salesman Problem. (Book review by Clark Jeffries, 1991; see also J. Anderson, R. Rosenfeld, and A. Pellionisz (eds.), Neurocomputing 2: Directions for Research, MIT Press, Cambridge, MA, 1990):

“Perceptrons were modeled chiefly with neural connections in a “forward” direction: A -> B -> C -> D. The analysis of networks with strong backward coupling proved intractable. All our interesting results arise as consequences of the strong back-coupling” (Hopfield, 1982).
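To make the back-coupling concrete, here is a minimal Python sketch of a Hopfield-style associative memory (a toy illustration, not Hopfield's 1982 Traveling-Salesman network): the recursive update, in which every neuron's output feeds back through symmetric weights until the state stops changing, echoes the "strong back-coupling" the quote refers to.

# Sketch: a tiny Hopfield network; recall is a recursive (back-coupled) update.
import numpy as np

def train(patterns):
    """Hebbian weights from +/-1 patterns; no self-connections."""
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0.0)
    return W / n

def recall(W, state, max_steps=10):
    """Recursively update the whole state vector until it is a fixed point."""
    for _ in range(max_steps):
        new_state = np.sign(W @ state)
        new_state[new_state == 0] = 1
        if np.array_equal(new_state, state):
            break
        state = new_state
    return state

patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1, 1, 1, 1, -1, -1, -1, -1]])
W = train(patterns)
noisy = patterns[0].copy()
noisy[0] = -noisy[0]                      # corrupt one bit of the first pattern
print("recovered:", recall(W, noisy))     # settles back onto the stored pattern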

The Principle of Recursive Genome Function [Pellionisz, 2008, in a peer-reviewed science article, also disseminated as the Google Tech Talk YouTube "Is IT Ready for the Dreaded DNA Data Deluge"] surpassed the obsolete axioms that blocked, for half a century, the entry of recursive algorithms into the interpretation of the structure and function of the (Holo)Genome. This breakthrough, by uniting the two largely separate fields of Neural Networks and Genome Informatics, is particularly important for

* those who focused on Biological (actually occurring) Neural Networks, rather than abstract algorithms that may not, or, because of their core axioms, simply could not represent neural networks under the governance of DNA information.

3A. The FractoGene Decade

from Inception in 2002 to Proofs of Concept and Impending Clinical Applications by 2012

[Below, Pharmaceutical Intelligence lists the yearly milestones of FractoGene. The document that also contains all hyperlinks is here http://www.junkdna.com/the_fractogene_decade.pdf ]

Junk DNA Revisited (SF Gate, 2002)

The Future of Life, 50th Anniversary of DNA (Monterey, 2003)

Mandelbrot and Pellionisz (Stanford, 2004)

Morphogenesis, Physiology and Biophysics (Simons, Pellionisz 2005)

PostGenetics; Genetics beyond Genes (Budapest, 2006)

ENCODE-conclusion (Collins, 2007)

The Principle of Recursive Genome Function (paper, YouTube, 2008)

Cold Spring Harbor presentation of FractoGene (Cold Spring Harbor, 2009)

Mr. President, the Genome is Fractal! (2009)

HolGenTech, Inc. Founded (2010)

Pellionisz on the Board of Advisers in the USA and India (2011)

ENCODE – final admission (2012)

Recursive Genome Function is Clogged by Fractal Defects in Hilbert-Curve (2012)

Geometric Unification of Neuroscience and Genomics (2012)

US Patent Office issues FractoGene 8,280,641 to Pellionisz (2012)

http://www.junkdna.com/the_fractogene_decade.pdf

http://www.scribd.com/doc/116159052/The-Decade-of-FractoGene-From-Discovery-to-Utility-Proofs-of-Concept-Open-Genome-Based-Clinical-Applications

http://fractogene.com/full_genome/morphogenesis.html

[Below, Pharmaceutical Intelligence provides some excerpts from a 2002 article in SF-Gate (the electronic version of San Francisco Chronicle). This is a very lucid overview of the beginnings at 2002 - AJP]

When the human genome was first sequenced in June 2000, there were two pretty big surprises. The first was that humans have only about 30,000-40,000 identifiable genes, not the 100,000 or more many researchers were expecting. The lower – and more humbling – number means humans have just one-third more genes than a common species of worm.

The second stunner was how much human genetic material — more than 90 percent — is made up of what scientists were calling “junk DNA.”

The term was coined to describe similar but not completely identical repetitive sequences of amino acids (the same substances that make genes), which appeared to have no function or purpose. The main theory at the time was that these apparently non-working sections of DNA were just evolutionary leftovers, much like our earlobes.

If biophysicist Andras Pellionisz is correct, genetic science may be on the verge of yielding its third — and by far biggest — surprise.

With a doctorate in physics, Pellionisz is the holder of Ph.D.’s in computer sciences and experimental biology from the prestigious Budapest Technical University and the Hungarian National Academy of Sciences. A biophysicist by training, the 59-year-old is a former research associate professor of physiology and biophysics at New York University, author of numerous papers in respected scientific journals and textbooks, a past winner of the prestigious Humboldt Prize for scientific research, a former consultant to NASA and holder of a patent on the world’s first artificial cerebellum, a technology that has already been integrated into research on advanced avionics systems. Because of his background, the Hungarian-born brain researcher might also become one of the first people to successfully launch a new company by using the Internet to gather momentum for a novel scientific idea.

The genes we know about today, Pellionisz says, can be thought of as something similar to machines that make bricks (proteins, in the case of genes), with certain junk-DNA sections providing a blueprint for the different ways those proteins are assembled. The notion that at least certain parts of junk DNA might have a purpose is already gaining ground; many researchers, for example, now refer to some of those parts with a far less derogatory term: introns.

In a provisional patent application filed July 31, Pellionisz claims to have unlocked a key to the hidden role junk DNA plays in growth — and in life itself. His patent application covers all attempts to count, measure and compare the fractal properties of introns for diagnostic and therapeutic purposes.

[The patent with priority date of 2002 is now a USPTO-issued patent, 8,280,641, in force till late March 2026. The utility for "diagnostic and therapeutic purposes" has just gained a tremendous new market with "genome editing" unfolding. "Fractal Defects" in the genome producing "Fractal Defects" of the organism (perhaps most importantly, cancer) can be matched to the therapeutic agents (chemos) with the highest probability of being effective (80% of chemos are NOT effective for the genome of any particular individual). Beyond this vast market, editing out the fractal defects that initiate the derailment of fractal genome regulation holds a key to the ultimate "inner sanctum" of providing genomic cures based on mathematical understanding - AJP]

3B. The Hidden Fractal Language of Intron DNA

[Excerpts from San Francisco Chronicle, 2002 continued] -To fully understand Pellionisz’ idea, one must first know what a fractal is.

Fractals are a way that nature organizes matter. Fractal patterns can be found in anything that has a nonsmooth surface (unlike a billiard ball), such as coastal seashores, the branches of a tree or the contours of a neuron (a nerve cell in the brain). Some, but not all, fractals are self-similar and stop repeating their patterns at some stage; the branches of a tree, for example, can get only so small. Because they are geometric, meaning they have a shape, fractals can be described in mathematical terms. It’s similar to the way a circle can be described by using a number to represent its radius (the distance from its center to its outer edge). When that number is known, it’s possible to draw the circle it represents without ever having seen it before.

Although the math is much more complicated, the same is true of fractals. If one has the formula for a given fractal, it’s possible to use that formula to construct, or reconstruct, an image of whatever structure it represents, no matter how complicated.
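As a concrete (and deliberately simple) illustration of that point, the Python sketch below regenerates Barnsley's fern, a well-known plant-like fractal, from nothing more than four affine maps and their probabilities; the coefficients are the standard published ones, and the example is not tied to any genomic claim in the article.

# Sketch: a complex fractal shape reconstructed from a short "formula" (an IFS).
import random

# (a, b, c, d, e, f, probability) for x' = a*x + b*y + e ; y' = c*x + d*y + f
MAPS = [( 0.00,  0.00,  0.00, 0.16, 0.0, 0.00, 0.01),
        ( 0.85,  0.04, -0.04, 0.85, 0.0, 1.60, 0.85),
        ( 0.20, -0.26,  0.23, 0.22, 0.0, 1.60, 0.07),
        (-0.15,  0.28,  0.26, 0.24, 0.0, 0.44, 0.07)]

def fern_points(n=20000, seed=0):
    """Chaos game: iterate randomly chosen affine maps to trace out the fern."""
    random.seed(seed)
    x, y, pts = 0.0, 0.0, []
    for _ in range(n):
        a, b, c, d, e, f, _p = random.choices(MAPS, weights=[m[6] for m in MAPS])[0]
        x, y = a * x + b * y + e, c * x + d * y + f
        pts.append((x, y))
    return pts

pts = fern_points()
print(len(pts), "points; y ranges from",
      round(min(p[1] for p in pts), 2), "to", round(max(p[1] for p in pts), 2))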

The mysteriously repetitive but not identical strands of genetic material are in reality building instructions organized in a special type of pattern known as a fractal. It’s this pattern of fractal instructions, he says, that tells genes what they must do in order to form living tissue, everything from the wings of a fly to the entire body of a full-grown human.

In a move sure to alienate some scientists, Pellionisz has chosen the unorthodox route of making his initial disclosures online on his own Web site. He picked that strategy, he says, because it is the fastest way he can document his claims and find scientific collaborators and investors. Most mainstream scientists usually blanch at such approaches, preferring more traditionally credible methods, such as publishing articles in peer-reviewed journals.

[The San Francisco Chronicle could not possess the domain expertise to know that the double disruption (overturning both of the underlying axioms of Genomics, the Junk DNA and Central Dogmas) not only made it impossible in 2002 to publish against the prevailing bias of "peer review", but even in 2006 Science Magazine rejected (without review, a violation of their bylaws...) a publication co-authored with 20+ leading scientists worldwide. The enormous utility in the scientific breakthrough compelled the scientist-inventor, now seeking the proper class of entrepreneurs, to swiftly file with the USPTO and to spend well over a million dollars of his own money (to become the sole inventor and "clean as a whistle" owner) in the struggle to see the patent through; it was approved after ten years of wrangling, the USPTO finally throwing in the towel a week after ENCODE-II killed the Old School dogmas. Meanwhile, both the mathematical theory and the software-enabling algorithms had to go beyond the "best methods" of the time of the last CIP to the patent (2007 - now available as "trade secrets"), and once the priority was secured, peer-reviewed publications could resume. It is noteworthy that the scientist-inventor had published well over 100 peer-reviewed papers before his double-disruptive FractoGene. A previously issued patent of Pellionisz took NASA 10 years to apply in improving the avionics of F15 fighter jets. - AJP]

Basically, Pellionisz’ idea is that a fractal set of building instructions in the DNA plays a similar role in organizing life itself. Decode the way that language works, he says, and in theory it could be reverse engineered. Just as knowing the radius of a circle lets one create that circle, the more complicated fractal-based formula would allow us to understand how nature creates a heart or simpler structures, such as disease-fighting antibodies. At a minimum, we’d get a far better understanding of how nature gets that job done.

The complicated quality of the idea is helping encourage new collaborations across the boundaries that sometimes separate the increasingly intertwined disciplines of biology, mathematics and computer sciences.

Hal Plotkin, Special to SF Gate. Thursday, November 21, 2002. http://www.junkdna.com/Special to SF Gate/plotkin.htm

3C. Multifractal analysis

The human genome: a multifractal analysis. Moreno PA, Vélez PE, Martínez E, et al.

BMC Genomics 2011, 12:506. http://www.biomedcentral.com/1471-2164/12/506

Background: Several studies have shown that genomes can be studied via a multifractal formalism. Recently, we used a multifractal approach to study the genetic information content of the Caenorhabditis elegans genome. Here we investigate the possibility that the human genome shows a similar behavior to that observed in the nematode.

Results: We report here multifractality in the human genome sequence. This behavior correlates strongly with the presence of Alu elements and, to a lesser extent, with CpG islands and (G+C) content.

In contrast, no or only a weak relationship was found for LINE, MIR, MER, and LTR elements and for DNA regions poor in genetic information.

Gene function, clusters of orthologous genes, metabolic pathways, and exons tended to increase in frequency with the range of multifractality, and large gene families were located in genomic regions with varied multifractality.

Additionally, a multifractal map and classification for human chromosomes are proposed.

Conclusions: We propose a descriptive non-linear model for the structure of the human genome. This model reveals a multifractal regionalization where many regions coexist that are far from equilibrium, and this non-linear organization has significant molecular and medical genetic implications for understanding the role of Alu elements in genome stability and the structure of the human genome.

Given the role of Alu sequences in gene regulation, genetic diseases, human genetic diversity, adaptation and phylogenetic analyses, these quantifications are especially useful.
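For readers curious how such a multifractal characterization can be computed at all, here is a minimal Python sketch (an assumed illustration, not the pipeline of Moreno et al.): a chaos-game representation maps a DNA string into the unit square, and box-counting generalized dimensions D_q over several q give a crude multifractal profile, where a roughly constant D_q suggests a monofractal and a varying D_q a multifractal. The toy sequence is random, so its D_q values should all come out near 2.

# Sketch: generalized dimensions D_q of a chaos-game representation of a DNA string.
import numpy as np

CORNERS = {"A": (0.0, 0.0), "C": (0.0, 1.0), "G": (1.0, 1.0), "T": (1.0, 0.0)}

def chaos_game(seq):
    """Each base pulls the current point halfway toward its corner of the unit square."""
    x, y, pts = 0.5, 0.5, []
    for base in seq:
        cx, cy = CORNERS[base]
        x, y = (x + cx) / 2.0, (y + cy) / 2.0
        pts.append((x, y))
    return np.array(pts)

def generalized_dimensions(points, qs, box_sizes=(1/8, 1/16, 1/32, 1/64)):
    """D_q = slope of log(sum p_i^q) vs log(eps), divided by (q - 1); q != 1."""
    dims = []
    for q in qs:
        log_eps, log_mass = [], []
        for eps in box_sizes:
            idx = np.floor(points / eps).astype(int)
            _, counts = np.unique(idx, axis=0, return_counts=True)
            p = counts / counts.sum()
            log_eps.append(np.log(eps))
            log_mass.append(np.log(np.sum(p ** q)))
        slope, _ = np.polyfit(log_eps, log_mass, 1)
        dims.append(slope / (q - 1))
    return dims

rng = np.random.default_rng(0)
seq = "".join(rng.choice(list("ACGT"), size=20000))
for q, d in zip((0.5, 2.0, 3.0), generalized_dimensions(chaos_game(seq), (0.5, 2.0, 3.0))):
    print(f"D_{q} ~ {d:.2f}")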


Future of genomic medicine depends on sharing information: Eric Lander

Feb 26, 2015 01:34 AM, By Special Correspondent

[Eric Lander goes to Bangalore (Tata Auditorium) early March]

Eric S. Lander, one of the principal leaders of the Human Genome Project that mapped the entire human genetic code in 2003, said on Wednesday that the “real genome project” is about studying huge samples of genomic data to identify disease genes.

While phenomenal technological advances had helped reduce the cost of genome sequencing by a million-fold over the last decade, allowing researchers to map thousands of human genomes, the future of genomic medicine depended on “sharing information” between organisations and countries — including India — Professor Lander said.

In order for therapy to emerge from genetic research, “health systems around the world need to turn into learning systems” that share information, said Prof Lander, delivering a lecture on “The Human Genome and Beyond: A 35 year Journey of Genomic Medicine” as part of the three-city Cell Press-TNQ Distinguished Lectureship Series.

Prof. Lander envisaged a “DNA library” where genes can be cross-referenced to detect “spelling differences” and disease genes. The goal before the scientific community now was to find targets for therapeutic intervention, he said, to a packed auditorium comprising a large number of medical students. There was much to be learnt in the course of clinical care, said Prof. Lander, founding director of the Broad Institute of MIT and Harvard University.

While the “breathless hype” created around the Human Genome Project suggested that it would cure all disease in a couple of years, he said much progress had indeed been made over the last decade with the discovery of several genes responsible for diabetes, schizophrenia and heart attacks.

Prof. Lander will be speaking next on Friday at the JN Tata Auditorium in Bengaluru as part of the lectureship series.

[For Pellionisz, in his 2012 Bangalore-Hyderabad-Trivandrum lectureship series, the "Fractal Approach" was an "easy sale" in India - where the culture is replete with self-similar repetitions:

[Pellionisz' lectureship series in India, selling fractals, 2012]

Pellionisz initiated FractoGene in 2002 as a US patent application not because he is a scientist driven by money (see about a hundred academic publications aimed at geometrizing neuroscience). "Fractal Genome Grows Fractal Organism" was in 2002 a "double lucid heresy" (reversing both mistaken dogmas of old-school genomics: the "Junk DNA" misnomer and the "Central Dogma"). No peer review would accept it (even in 2006, prior to the release of ENCODE-I, Science rejected without review a manuscript submitted with dozens of world-class co-authors). In fact, after he published the seminal concept of fractal recursion in the genome in 1989 in a Cambridge University Press book (the Proceedings of a Neural Networks meeting on whose Program Committee Pellionisz served), his ongoing NIH grant was discontinued and his application to a new NIH program promoting informatics was not accepted (see the "acknowledgement" in the 1989 paper). Now the utility is a US patent in force (8,280,641), academically followed by Lander putting the Hilbert fractal on the cover of Science magazine (2009). In the "Global $2 Trillion Trilemma" (see essay below), India can contribute huge numbers of human genomes (both control and cancerous), along with much less regulated personal data and much more economical genome-based chemo-matching. Pellionisz put forward this plan in the Proceedings of his award-winning lecture tour in India. Francis Collins toured Bangalore at about the same time, and now Eric Lander has a chance to bring the international collaboration to success with Ratan Tata. The video of Eric's pitch (taped in New Delhi) answers the reporter's question "what is the single biggest thing (towards a breakthrough in understanding the genomic underpinning of e.g. cancer)?" in an interesting manner: "The diagram of a cell".

With due respect, a fractal diagram of a (Purkinje) cell, generated by the fractal recursive genome function, is already available, and India is keenly aware of the powerful architecture of self-similar repetitions (fractals) both by a presentation and Proceedings.

[Samples from presentation in lecture-tour of Pellionisz in India, 2012]

Eric Lander also visited Bangalore and Chennai, and concluded with the prediction that 'India Will Lead the Genetic Revolution':

The New Indian Express

By Papiya Bhattacharya

BENGALURU: India will lead the genetic revolution, said Broad Institute of MIT and Harvard core member Prof Eric Lander, while delivering the last of his lectures in the Cell Press-TNQ India Distinguished Lectureship Series 2015 here on Friday.

“India is a country of a billion people. It has a special role to play because of its huge diversity of environment, people, their exposure to these environments and a large percentage of consanguinity. All these factors can be put to good use to study the existence and function of human genes for India,” he said.

Lander is one of the leaders of the Human Genome Project. He and his colleagues are known for sequencing the human genome in 2000, and they have a standing interest in applying genomics to understand the molecular basis of human physiology and disease.

Lander has a PhD in Mathematics from Oxford University, where he was a Rhodes scholar. He later became a biologist and a geneticist.

His mathematical talent came in handy when he turned to interpreting the human genome and its sequence.

On Friday, he spoke on the history of genetics, from its birth in 1911 to 1980, and on to the project in which he and his collaborators spent $3 billion to sequence the human genome.

“Now the job is to find the genes responsible for diseases so that drugs can target those genes and the proteins they make and help in treating diseases,” he said.

The future belongs to precision medicine, where all medical decisions, medicines and products will be tailored to suit the patient's individual needs of body and genome, he added.


Genetic Geometry Takes Shape

By: Ivan Amato

February 25, 2015

The nuclei from a half-million human cells could all fit inside a single poppy seed. Yet within each and every nucleus resides genomic machinery that is incredibly vast, at least from a molecular point of view. It has billions of parts, many used to activate and silence genes — an arrangement that allows individual cells to specialize as brain cells, heart cells and some 200 other different cell types. What’s more, each cell’s genome is atwitter with millions of mobile pieces that swarm throughout the nucleus and latch on here and there to tweak the genetic program. Every so often, the genomic machine replicates itself.

At the heart of the human genome’s Lilliputian machinery is the two meters’ worth of DNA that it takes to embody a person’s 3 billion genetic letters, or nucleotides. Stretch out all of the genomes in all of your body’s trillions of cells, says Tom Misteli, the head of the cell biology of genomes group at the National Cancer Institute in Bethesda, Md., and it would make 50 round trips to the sun. Since 1953, when James Watson and Francis Crick revealed the structure of DNA, researchers have made spectacular progress in spelling out these genetic letters. But this information-storage view reveals almost nothing about what makes specific genes turn on or off at different times, in different tissue types, at different moments in a person’s day or life.

To figure out these processes, we must understand how those genetic letters collectively spiral about, coil, pinch off into loops, aggregate into domains and globules, and otherwise assume a nucleus-wide architecture. “The beauty of DNA made people forget about the genome’s larger-scale structure,” said Job Dekker, a molecular biologist at the University of Massachusetts Medical School in Worcester who has built some of the most consequential tools for unveiling genomic geometry. “Now we are going back to studying the structure of the genome because we realize that the three-dimensional architecture of DNA will tell us how cells actually use the information. Everything in the genome only makes sense in 3-D.”

Genome archaeologists like Dekker have invented and deployed molecular excavation techniques for uncovering the genome’s architecture with the hope of finally discerning how all of that structure helps to orchestrate life on Earth. For the past decade or so, they have been exposing a nested hierarchy of structural motifs in genomes that are every bit as elemental to the identity and activity of each cell as the double helix.

A Better Genetic Microscope

A close investigation of the genomic machine has been a long time in coming. The early British microscopist Robert Hooke coined the word cell as a result of his mid-17th-century observations of a thin section of cork. The small compartments he saw reminded him of monks’ living quarters — their cells. By 1710, Antonie van Leeuwenhoek had spied tiny compartments within cells, though it was Robert Brown, of Brownian motion fame, who coined the word nucleus to describe these compartments in the early 1830s. A half-century later, in 1888, the German anatomist Heinrich Wilhelm Gottfried von Waldeyer-Hartz peered through his microscope and decided to use the word chromosome — meaning “color body” — for the tiny, dye-absorbing threads that he and others could see inside nuclei with the best microscopes of their day.

During the 20th century, biologists found that the DNA in chromosomes, rather than their protein components, is the molecular incarnation of genetic information. The sum total of the DNA contained in the 23 pairs of chromosomes is the genome. But how these chromosomes fit together largely remained a mystery.

Then in the early 1990s, Katherine Cullen and a team at Vanderbilt University developed a method to artificially fuse pieces of DNA that are nearby in the nucleus — a seminal feat that made it possible to analyze the ultrafolded structure of DNA merely by reading the DNA sequence. This approach has been improved over the years. One of its latest iterations, called Hi-C, makes it possible to map the folding of entire genomes.

The first step in a Hi-C experiment is to treat a sample of millions of cells with formaldehyde, which has the chemical effect of cross-linking strands of DNA wherever two strands happen to be close together. Those two nearby bits might be some distance away along the same chromosome that has bent back onto itself, or they may be on separate but adjacent chromosomes.

Next, researchers mince the genomes, harvest the millions of cross-linked snippets, and sequence the DNA of each snippet. The sequenced snippets are like close-up photos of the DNA-DNA contacts in the 3-D genome. Researchers map these snippets onto existing genome-wide sequence data to create a listing of the genome’s contact points. The results of this matching exercise are astoundingly data-rich maps — they look like quilts of nested, color-coded squares of different sizes — that specify the likelihood of any two segments of a chromosome (or even two segments of an entire genome) to be physically close to one another in the nucleus.
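
For readers who want to see what turning those mapped snippets into a "contact map" amounts to computationally, below is a minimal, hedged sketch: mapped contact pairs are binned at an arbitrary 1 Mb resolution and counted per pair of bins. The input tuple format, the bin size and the toy coordinates are assumptions for illustration only; production Hi-C pipelines add extensive filtering and normalization on top of this counting step.

from collections import defaultdict

BIN_SIZE = 1_000_000   # 1 Mb bins -- an arbitrary illustrative resolution

def genomic_bin(chrom, pos):
    """Assign a genomic coordinate to a fixed-size bin."""
    return (chrom, pos // BIN_SIZE)

def contact_matrix(pairs):
    """Count contacts per unordered pair of bins.

    pairs: iterable of (chrom_a, pos_a, chrom_b, pos_b) -- assumed to be
    Hi-C contact snippets already mapped to genomic coordinates.
    """
    counts = defaultdict(int)
    for chrom_a, pos_a, chrom_b, pos_b in pairs:
        key = tuple(sorted([genomic_bin(chrom_a, pos_a), genomic_bin(chrom_b, pos_b)]))
        counts[key] += 1        # symmetric: each contact counted once per bin pair
    return counts

# Toy usage: two contacts fall in the same pair of chr1 bins, one is inter-chromosomal.
toy_pairs = [
    ("chr1", 1_200_000, "chr1", 5_300_000),
    ("chr1", 1_900_000, "chr1", 5_100_000),
    ("chr1", 1_000_000, "chr2", 2_000_000),
]
for (bin_a, bin_b), n in sorted(contact_matrix(toy_pairs).items()):
    print(bin_a, bin_b, "contacts:", n)

Plotting such a count table as a heat map gives the nested, quilt-like squares of color described above.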

So far, most Hi-C data depict an average contact map using contact hits pooled from all of the cells in the sample. But researchers have begun to push the technique so that they can harvest the data from single cells. The emerging capability could lead to the most accurate 3-D renderings yet of chromosomes and genomes inside nuclei.

In addition, Erez Lieberman Aiden, the director of the Baylor College of Medicine Center for Genome Architecture, and his colleagues have recently cataloged DNA-DNA contacts in intact nuclei, rather than in DNA that previously had to be extracted from nuclei, a step that adds uncertainty to the data. The higher-resolution contact maps enable the researchers to discern genomic structural features on the scale of 1,000 genetic letters — a resolution about 1,000 times finer than before. It is like looking right under the hood of a car instead of squinting at the engine from a few blocks away. The researchers published their views of nine cell types, including cancer cells in both humans and mice, in the December 18, 2014, issue of Cell.

The Power of Loops

Using sophisticated algorithms to analyze the hundreds of millions — in some cases, billions — of contact points in these cells, Aiden and his colleagues could see that these genomes pinch off into some 10,000 loops. Cell biologists have known about genomic loops for decades, but were not previously able to examine them with the level of molecular resolution and detail that is possible now. These loops, whose fluid shapes Dekker likens to “snakes all curled up,” reveal previously unseen ways that the genome’s large-scale architecture might influence how specific genes turn on and off, said Miriam Huntley, a doctoral student at Harvard University and a co-author of the Cell article.

In the different cell types, the loops begin and end at different specific chromosomal locations, so each cell line’s genome appears to have a unique population of loops. And that differentiation could provide a structural basis to help explain how cells with the same overall genome nonetheless can differentiate into hundreds of different cell types. “The 3-D architecture is associated with which program the cell runs,” Aiden said.

What do these loops do? Misteli imagines them “swaying in the breeze” inside the fluid interior of the nucleus. As they approach and recede from one another, other proteins might swoop in and stabilize the transient loop structure. At that point, a particular type of protein called a transcription activator can kick-start the molecular process by which a gene gets turned on.

Misteli muses that each cell type — a liver cell or a brain cell, for example — could have a signature network of these transient loop-loop interactions. Loop structures could determine which genes get activated and which get silenced.

Yet the researchers are careful to note that they’ve only found associations between structure and function — it’s still too early to know for sure if one causes the other, and the direction in which the causal arrow points.

As they mined their data on inter-loop interactions, Aiden, Huntley and their colleagues were also able to discern a half-dozen larger structural features in the genome called subcompartments. Aiden refers to them as “spatial neighborhoods in the nucleus” — the nucleic equivalent of New York City’s midtown or Greenwich Village. And just as people gravitate toward one neighborhood or another, different stretches of chromosomes carry a kind of molecular zip code for certain subcompartments and tend to slither toward them.

These molecular zip codes are written in chromatin, the mix of DNA and protein that makes up chromosomes. Chromatin is built when DNA winds around millions of spool-like protein structures called nucleosomes. (This winding is why two meters of DNA can cram inside nuclei with diameters just one-three-hundred-thousandth as wide.)

A large cast of biomolecular players finesses different swaths of this contorted chromatin into more closed or open shapes. Roving parts of the genomic machine can better access the open sections, and so have a better chance of turning on the genes located there.

The increasingly detailed hierarchical picture of the genome that researchers like Dekker, Misteli, Aiden and their colleagues have been building goes something like this: Nucleotides assemble into the famous DNA double helix. The helix winds onto nucleosomes to form chromatin, which winds and winds in its turn into formations similar to what you get when you keep twisting the two ends of a string. Amid all of this, the chromatin pinches off here and there into thousands of loops. These loops, both on the same chromosome and on different ones, engage one another in subcompartments.

As researchers gradually gain more insight into the genome’s hierarchy of structures, they will get closer to figuring out how this macromolecular wonder works in all of its vastness and mechanistic detail. The National Institutes of Health has launched a five-year, $120 million program called 4D Nucleome that is sure to build momentum in the nuclear-architecture research community, and a similar initiative is being launched in Europe. The goal of the NIH program, as described on its website, is “to understand the principles behind the three-dimensional organization of the nucleus in space and time (the fourth dimension), the role nuclear organization plays in gene expression and cellular function, and how changes in the nuclear organization affect normal development as well as various diseases.”

Or, as Dekker says, “It will finally allow us to see the living genome in action, and that would ultimately tell us how it actually works.”

[By the completion of the Human Genome Project in 2001, and especially after the shock of finding the following year (2002) that the mouse has essentially the same tiny set of "genes", thinkers had to seek principles of genome function. This was not easy, since the celebrated principle of genome STRUCTURE (the Double Helix, 1953) biased thinking towards a linear (though twisted) "thread". Nothing can take away the significance of the discovery of the double-stranded structure, since it is the basis of how the genome propagates itself. Nonetheless, the structure (and its propagation) says essentially nothing about how the genome functions - how the genome governs the growth of living organisms. (Transcription is serial, but different kinds of proteins are produced in parallel even within a single cell; moreover, the regulation of protein production is obviously interactive in a parallel manner.) The above journalistic reminder takes us back to 2002, when Job Dekker (and co-workers) discovered and developed an experimental method (3C) to measure the frequency of interaction between any two genomic loci. The parallel function of the genome was thereby experimentally established. Along a separate line of thinking, since 1989 Pellionisz has shown that the single cell of a Purkinje neuron develops its branchlets in a parallel fashion (just as any tree grows branchlets and leaves in parallel, certainly not serially one after the other). Moreover, the growth of the cell proved to be fractal, requiring the Principle of Recursive Genome Function (Pellionisz, 2008). It was the brilliance of Eric Lander - who was handed a copy of the manuscript of "The Principle of Recursive Genome Function", dedicated to him, in 2007 - that connected the two lines of thought, by way of Erez Lieberman's spectacular improvement of Dekker's 3C experimental technique into "Hi-C". The importance of the principle of "structural closeness" in a massively parallel function is elaborated here. The resulting Science cover article (Lieberman, Mirny, Lander, Dekker et al., 2009) experimentally clinched the "fractal globule" of DNA (theoretically predicted by Grosberg et al., 1988, 1993). Already in 2002 the "FractoGene" utility IP was secured by Pellionisz - that genomic fractals are in a cause-effect relationship with the fractal growth of organisms (8,280,641) - a finding corroborated in the case of cancer by Mirny et al. (2011; see assorted further independent experimental evidence linking fractal defects of the genome to cancer, autism, schizophrenia and autoimmune diseases in Pellionisz, 2012). The correlation of genomic variants with cancer therapies is now an exploding area of activity (see Foundation Medicine, with Founding Adviser Eric Lander, and Roche having invested $1 Bn into FMI). Nobody claims (any more) any objection against "The Principle of Recursive Genome Function", and "the fractal approach" is now almost taken for granted in the "New School Genomics" (based on fractal/chaotic nonlinear dynamics), with FractoGene - even in mere "patent trolling mode" - estimated at $500 M, and with an exclusive-license value heralded back in 2002 that far surpasses this conservative valuation. Andras_at_Pellionisz_dot_com]
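
As a purely illustrative aside, the geometric idea behind the "fractal globule" icon - a one-dimensional strand folded so that sequence neighbours remain spatial neighbours, without knots - can be sketched with the classic 2-D Hilbert curve. The short Python sketch below generates Hilbert-curve coordinates; it is an illustration of the space-filling, self-similar folding concept only, not the Grosberg polymer model and not any Hi-C analysis code. The grid order chosen is arbitrary.

def hilbert_d2xy(order, d):
    """Map distance d along the Hilbert curve to (x, y) on a 2**order x 2**order grid."""
    x = y = 0
    t = d
    s = 1
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                      # rotate/reflect the quadrant if needed
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

if __name__ == "__main__":
    order = 3                            # 8 x 8 grid, 64 points along the "strand"
    path = [hilbert_d2xy(order, d) for d in range(4 ** order)]
    print(path[:8])                      # consecutive indices land on adjacent grid cells

Indices that are close along the curve map to grid points that are close in space, which is exactly the property the contact maps above reveal for chromatin.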


The $2 Trillion Trilemma of Global Precision Medicine

As shown below, BGI of China just bought the San Diego-based Irys System to try to cope with some of the analytics of the "Dreaded DNA Data Tsunami". Also, Switzerland-based Roche, which acquired Silicon Valley's Genentech for $44 Bn years ago, has now bought into Boston-based Foundation Medicine for a billion dollars. All this infiltration of the $2 Trillion US Health Care system ("Sick Care", rather) comes at the time (see news items below) when the USA officially launched its "Precision Medicine" programs. Similar to the Government/Private Sector duel of the Human Genome Project (led by Francis Collins/Craig Venter), now Venter's initiative to sequence 1 million humans towards "precision medicine" was announced (see news below), closely followed by the competing US Government initiative at $215 M in the 2016 budget.

The point is made here that the $2 Trillion traditional Sick Care service of the USA simply cannot be transformed into the newfangled "Genome-based Precision Medicine" - unless it is done globally. For either the USA, Asia or Europe, doing it alone is just not economically feasible.

As the Battelle Report elaborated (see coverage in this column), the $3 Bn Human Genome Project (concluding in 2001) generated about $1 Trillion of business in the USA alone.

Motivated by the earlier and present numbers (and the identical leaders), let's ponder the expected figures of what will most likely be a several-decade-long "Global Precision Medicine Program" (with cancer in focus).

First, one of the most often cited guesstimates in genomics is that the present numbers for a single human genome run at "one thousand dollars of sequencing and a million dollars of analysis". Based on this, the two competing US initiatives will together run well over $2 Trillion (just the DNA sequencing might run up a $1 Bn bill, as 1 M x $1,000 = $1 Bn in EACH US-based initiative). "Precision Medicine" thus appears to be a very noble goal - but not very good mathematics against a US Government budget proposal of $215 M for next year - even if that budget item were approved by Congress.
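
A back-of-the-envelope check of these figures, under the stated "one thousand dollar sequencing and a million dollar analysis" guesstimate (the per-genome analysis cost is, of course, only the often-quoted rough figure, not a measured number):

# Back-of-the-envelope check of the figures quoted above, under the often cited
# "one thousand dollar sequencing and a million dollar analysis" guesstimate.
COHORT_SIZE = 1_000_000        # people per initiative
SEQUENCING_COST = 1_000        # USD per genome (sequencing only)
ANALYSIS_COST = 1_000_000      # USD per genome (interpretation; rough guesstimate)

sequencing_total = COHORT_SIZE * SEQUENCING_COST    # $1 billion per initiative
analysis_total = COHORT_SIZE * ANALYSIS_COST        # $1 trillion per initiative

print(f"Sequencing, per initiative: ${sequencing_total:,}")
print(f"Analysis, per initiative:   ${analysis_total:,}")
print(f"Two initiatives combined:   ${2 * (sequencing_total + analysis_total):,}")

The arithmetic makes the essay's point plain: the analysis term dominates, and only algorithmic (software-enabling) interpretation can collapse it.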

The $2 Trillion ticket appears more interesting in a global sense. China has lately announced that it will "shop around in the USA for about $2 Trillion worth". Sony has expressed interest in San Diego-based Illumina. Tata Consultancy Services is exploring ways of cooperating with the USA on the colossal amount of software needed for e.g. fractal genome analytics. Investments from Europe (Roche in pharmaceuticals, Siemens in medical instrumentation) round out the global picture. The present US "Sick Care", a vastly lucrative $2 Trillion-a-year for-profit business, simply carries too much inertia to respond adequately to small-scale initiatives (in the range of a couple of hundred million dollars) aiming at reform towards "Precision Medicine". The US faces the trilemma of going it alone (extremely unlikely to succeed in a reasonable time-frame), letting either Asia or Europe forge ahead with the US just following the trend, or figuring out the best ways of global cooperation, also in economic terms.

Obviously the best resolution of the trilemma is a choreographed cooperation - especially since such a transition has already taken place in the disruption from land-line phone systems to smart mobile phone systems, and some lessons can be used directly. China and India simply skipped the development of their land-line phone systems and went directly to the superior technology (with one billion cell phones in use in India). Also, in China hospitals are often far apart - necessitating a "Precision Therapy" technology that is largely IT-based.

As with the earlier disruption (in phone service), some key innovations will make a crucial difference - for instance, the innovation of locating the exact coordinates of the cell-phone user, which makes it possible to serve him/her with "precision service" whenever location is crucial.

Likewise, the Information Theory and Technology of genome interpretation is presently most advanced in the USA, and is already the most desired essential component of "Precision Therapy". By far the most important challenge is, similar to DNA sequencing, to lower the "one million dollar interpretation" price-tag, Moore's-Law style.

Clouds and awesome personal computers (disguised as "smart phones") will not listen to anything but (software-enabling) algorithms.

This is what FractoGene genome interpretation - a double disruption, overturning the two most fundamental (but wrong) axioms of Genomics - accomplished: "Fractal genome governs growth of fractal organisms".

Implementing the "FractoGene Operator" is a new industry, following in the footsteps of the advanced geometry of nonlinear dynamics.

Is this entirely novel? Not at all. Those who figured out how "fractal laws govern the fractal fluctuation of stock prices" used software-enabling algorithms and made fortunes.
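
As one concrete, hedged illustration of such a "software-enabling algorithm" for fractal analysis, the sketch below estimates the Hurst exponent of a series by classical rescaled-range (R/S) analysis - the kind of diagnostic long applied to price series and, by analogy, to genome-derived signals. It is a generic textbook method, not the proprietary algorithms alluded to above; the window sizes and the white-noise test data are arbitrary choices.

import math
import random

def rescaled_range(series):
    """R/S statistic of one window: range of cumulative deviations / standard deviation."""
    n = len(series)
    mean = sum(series) / n
    dev = [x - mean for x in series]
    cum, total = [], 0.0
    for d in dev:
        total += d
        cum.append(total)
    r = max(cum) - min(cum)
    s = math.sqrt(sum(d * d for d in dev) / n)
    return r / s if s > 0 else 0.0

def hurst(series, window_sizes=(16, 32, 64, 128, 256)):
    """Slope of log(mean R/S) versus log(window size) estimates the Hurst exponent H."""
    xs, ys = [], []
    for w in window_sizes:
        chunks = [series[i:i + w] for i in range(0, len(series) - w + 1, w)]
        rs = [rescaled_range(c) for c in chunks if len(c) == w]
        if rs:
            xs.append(math.log(w))
            ys.append(math.log(sum(rs) / len(rs)))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)

if __name__ == "__main__":
    random.seed(1)
    steps = [random.gauss(0, 1) for _ in range(4096)]
    # theoretically 0.5 for uncorrelated noise; small-sample R/S estimates run slightly higher
    print(f"Estimated H of white-noise increments: {hurst(steps):.2f}")

A Hurst exponent near 0.5 indicates uncorrelated fluctuations, while persistent (fractal, long-memory) series score closer to 1.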

andras_at_pellionisz_dot_com


BGI Pushing for Analytics - Research Documents Rapid Detection of Structural Variation in a Human Genome Using BioNano's Irys System

SAN DIEGO and SHENZHEN, China, Feb. 9, 2015 /PRNewswire/ -- BioNano Genomics, Inc., the leader in genome mapping, and BGI, the world's largest genomics organization, highlight the publication of a peer-reviewed research article and its accompanying data* in GigaScience. This article describes the rapid detection of structural variation in a human genome using the high-throughput, cost-effective genome mapping technology of the Irys® System. Structural variations are known to play an important role in human genetic diversity and disease susceptibility. However, comprehensive, efficient and unbiased discovery of structural variations has previously not been possible through next generation sequencing (NGS) and DNA arrays with their inherent technology limitations.

This study showed that the Irys System was able to detect more than 600 structural variations larger than 1kb in a single human genome. Approximately 30 percent of detected structural variations affected coding regions, responsible for making proteins. Proteins participate in virtually every process within cells, suggesting that these structural variations may have a deep impact on human health. The Irys System also accurately mapped the sequence of a virus that had integrated into the genome. The ability to provide this type of information may help inform how virus sequence integration can lead to diseases such as cancer.

"We found that BioNano's Irys System helps overcome the technological issues that have severely limited our understanding of the human genome," said Xun Xu, deputy director at BGI. "In a matter of days and with fewer than three IrysChip®, we were able to collect enough data for de novo assembly of a human genome and perform comprehensive structural variation detection without additional technologies or multiple library preparations. BioNano has since improved throughput of the Irys System enabling enough data for human genome de novo assembly to be collected in one day on a single IrysChip."

Genome maps built using the Irys System reveal biologically and clinically significant order and orientation of functionally relevant components in complex genomes. This includes genes, promoters, regulatory elements, the length and location of long areas of repeats, as well as viral integration sites.

"The Irys System provides a single, cost-effective technology platform solution to assemble a comprehensive view of a genome and discover and investigate structural variations," said Han Cao, Ph.D., founder and chief scientific officer of BioNano Genomics. "The Irys System enables de novo assembly of genomes containing complex, highly variable regions and accurate detection of all types of structural variation, both balanced and imbalanced, within complex heterogeneous samples."

The Irys System has previously been used to map the 4.7-Mb highly variable human major histocompatibility complex (MHC) region and to enable a de novo assembly of a 2.1-Mb region in the highly complex genome of Aegilops tauschii, one of three progenitor genomes that make up today's wheat.

BGI acquired the Irys System in 2014 to enable comprehensive exploration of structural variation in the human genome and to provide vastly improved assemblies for various organisms that have very complex genomic structure, including those organisms where no reference exists. Together with other available platforms, BGI aims to provide researchers with the most comprehensive information and comprehensive interpretation.

The article is one of the first articles that are part of GigaScience's series Optical Mapping: New Applications, Advances, and Challenges (http://www.gigasciencejournal.com/series/OpticalMapping), and is available through this link: http://www.gigasciencejournal.com/content/3/1/34.

*The data for this study, as part of the journal's mission of making published research reproducible and data reusable, are available in the Journal's linked database, GigaDB, at http://dx.doi.org/10.5524/100097

[The Francis Collins-led US Government and the Craig Venter-led US private sector are not alone in the duel to sequence and analyze 1 million people. Once the wholly purchased sequencing technology of Complete Genomics is fully absorbed (made cheaper, faster, better), China's BGI - with its centralized system combining the advantages of both government subsidy and global entrepreneurship - could quite conceivably beat the two leading US efforts. Don't forget that Switzerland-based Roche, having acquired Genentech and now Foundation Medicine, makes the horse-race at least a foursome. The Shenzhen/San Diego setup of BGI/BioNano Genomics is rather interesting at the outset, not only because it makes the sprint truly global, but also because, if only 30 percent of the detected structural variants (no longer SNPs, but stretches larger than 1 kb) fall in coding regions, it means that 70 percent of the detected "structural variants" lie in the non-coding (in the Old School, "Junk") parts of the fractal genome. The "Chinese solution" to penetrating the vastly lucrative US (cancer) hospital market is also interesting: "they just buy it". Earlier, BGI bought the Silicon Valley jewel Complete Genomics to save it from a bankruptcy caused by the glut of the "dreaded DNA data deluge". In 2014 BGI "just bought the Irys System" (why bother with licensing or infringement?). Incidentally, as calculated below, the true cost of the Tsunami (after the 2008 Data Deluge) is estimated at $2 Trillion. This is exactly the Chinese budget to shop around for US technologies and businesses - about $2 Trillion. andras_at_pellionisz_dot_com]
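
How does a figure like the "30 percent of structural variants in coding regions" (and hence 70 percent in non-coding DNA) discussed above get computed? In essence, by interval overlap against a coding annotation. The hedged sketch below shows the idea with invented toy coordinates; it is not the BioNano/BGI pipeline, and real analyses work from standard annotation files rather than hand-written interval lists.

def overlaps(a, b):
    """True if two (chrom, start, end) intervals share at least one base."""
    return a[0] == b[0] and a[1] < b[2] and b[1] < a[2]

def coding_fraction(variants, coding_regions):
    """Fraction of structural variants overlapping any annotated coding interval."""
    hit = sum(1 for v in variants if any(overlaps(v, c) for c in coding_regions))
    return hit / len(variants) if variants else 0.0

# Toy data: 3 structural variants, 1 of which overlaps an annotated coding region.
variants = [("chr1", 100_000, 102_000), ("chr1", 500_000, 501_500), ("chr2", 10_000, 12_000)]
coding = [("chr1", 101_000, 101_200), ("chr2", 50_000, 52_000)]
print(f"Fraction in coding regions: {coding_fraction(variants, coding):.0%}")

The complement of this fraction is what the note above highlights as falling in the non-coding parts of the genome.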


Round II of "Government vs Private Sector" - or "Is Our Understanding of Genome Regulation Ready for the Dreaded DNA Data Tsunami?"

[News items over the last two weeks - Venter's private-sector initiative and the US Government's promise of the same goal (to sequence the genomes of 1 million people) - inevitably trigger strong memories of earlier, markedly similar parallel events. In addition, I warned in my 2008 Google Tech Talk YouTube "Is IT Ready for the Dreaded DNA Data Deluge" that data gathering, in itself, not only falls short of "science" (it is an industry), but, if the supply of data is not matched by demand, may result in an unsustainable business model for DNA sequencing companies. The last seven years have proven that billions of dollars of valuation of "sequencing companies" were lost due to the glut (oversupply) of DNA data without matching analysis. Complete Genomics (a US investment and crown jewel of Silicon Valley) had to be sold to China for a mere $117 M. Data gathering is a necessary, but in itself not sufficient, ingredient of science. Perhaps the bottom line is best expressed by Altshuler: "No amount of genome sequencing would ever lead to a new medicine directly." The bottleneck is our understanding of genome regulation. Andras_at_Pellionisz_dot_com]

--

Who was next to President Obama at the perhaps critical get-together (2011)?

[Almost three years before President Obama stood shoulder-to-shoulder with a cancer patient (see above, Ms. Elana Simon), Obama had the chance to have next to him another cancer patient (see below, Steve Jobs). The iconic leader of the world's most valuable company (Apple) claimed in his memoirs that he would perhaps be the first cancer patient to be cured by (repeated) genome sequencing and rough preliminary analysis - or one of the last to die for lack of it, since sequencing of his genome came too late for him, and too early for science. The Silicon Valley IT Giants (as labeled by "Financetwitter") could have decided at the February 2011 dinner in the home of John Doerr to launch Calico, Google Genomics and the sequencing (and analysis?) of one million humans. It is unclear whether that decision was debated at the dinner, or mentioned at all. (Please let me know, andras_at_pellionisz_dot_com.) We all wish that Ms. Elana Simon will not necessarily be the "first" whom genome sequencing and precision medicine help, but she will certainly be among the hundreds of millions who will benefit from this effort. Since just sequencing a genome costs at present $1,000, it is clear that the "sequencing part" of the project (both in the government and in the private sector) is going to run to billions of dollars. It is very common these days to quote "one thousand dollars of sequencing and one million dollars of analytics"; at such rates the two competing projects together should be planned at a Grand Total well over Two Trillion Dollars - unless a theoretical (software-enabling, algorithmic) understanding of fractal recursive genome function crushes that perhaps untenable further two-trillion-dollar debt down to a sustainable expenditure. Earlier (see the 2008 YouTube) a similar projection was made: unless the dreaded DNA data deluge is matched by appropriate analytics, the billions of dollars invested in sequencing technologies would produce an oversupply of data - and billions of dollars of investment would be lost, or sold to China (for $117 M).]

U.S. proposes effort to analyze DNA from 1 million people

Reuters

BY TONI CLARKE AND SHARON BEGLEY

WASHINGTON, Fri Jan 30, 2015, 12:22pm EST (Reuters) - The United States has proposed analyzing genetic information from more than 1 million American volunteers as part of a new initiative to understand human disease and develop medicines targeted to an individual's genetic make-up.

At the heart of the "precision medicine" initiative, announced on Friday by President Barack Obama, is the creation of a pool of people - healthy and ill, men and women, old and young - who would be studied to learn how genetic variants affect health and disease.

Officials hope genetic data from several hundred thousand participants in ongoing genetic studies would be used and other volunteers recruited to reach the 1 million total.

"Precision medicine gives us one of the greatest opportunities for new medical breakthroughs we've ever seen," Obama said, promising that it would "lay a foundation for a new era of life-saving discoveries."

The near-term goal is to create more and better treatments for cancer, Dr. Francis Collins, director of the National Institutes of Health (NIH), told reporters on a conference call on Thursday. Longer term, he said, the project would provide information on how to individualize treatment for a range of diseases.

The initial focus on cancer, he said, reflects the lethality of the disease and the significant advances against cancer that precision medicine has already made, though more work is needed.

The president proposed $215 million in his 2016 budget for the initiative. Of that, $130 million would go to the NIH to fund the research cohort and $70 million to NIH's National Cancer Institute to intensify efforts to identify molecular drivers of cancer and apply that knowledge to drug development.

A further $10 million would go to the Food and Drug Administration to develop databases on which to build an appropriate regulatory structure; $5 million would go to the Office of the National Coordinator for Health Information Technology to develop privacy standards and ensure the secure exchange of data.

The effort may raise alarm bells for privacy rights advocates who have questioned the government's ability to guarantee that DNA information is kept anonymous.

Obama promised that "privacy will be built in from day one."

SEQUENCING 1 MILLION GENOMES

The funding is not nearly enough to sequence 1 million genomes from scratch. Whole-genome sequencing, though plummeting in price, still costs about $1,000 per genome, Collins said, meaning this component alone would cost $1 billion.

Instead, he said, the national cohort would be assembled both from new volunteers interested in "an opportunity to take part in something historic," and existing cohorts that are already linking genomic data to medical outcomes.

The most ambitious of these is the Million Veteran Program, launched in 2011 by the Department of Veterans Affairs. Aimed at making genomic discoveries and bringing personalized medicine to veterans, it has enrolled more than 300,000 veterans and determined DNA sequences of about 200,000.

The VA was a pioneer in electronic health records, which it will use to link the genotypes to vets' medical histories.

Academic centers have, with NIH funding, also amassed thousands of genomes and linked them to the risk of disease and other health outcomes. The Electronic Medical Records and Genomics Network, announced by NIH in 2007, aims to combine DNA information on more than 300,000 people and look for connections to diseases as varied as autism, appendicitis, cataracts, diabetes and dementia.

In 2014, Regeneron Pharmaceuticals Inc launched a collaboration with Pennsylvania-based Geisinger Health System to sequence the DNA of 100,000 Geisinger patients and, using their anonymous medical records, look for correlations between genes and disease. The company is sequencing 50,000 samples per year, spokeswoman Hala Mirza said.

"NAIVE ASSUMPTION"

Perhaps the most audacious effort is by the non-profit Human Longevity Inc, headed by Craig Venter. In 2013 it launched a project to sequence 1 million genomes by 2020. Privately funded, it will be made available to pharmaceutical companies such as Roche Holding AG.

"We're happy to work with them to help move the science," Venter said in an interview, referring to the administration's initiative.

But because of regulations surrounding medical privacy, he said, "we can't just mingle databases. It sounds like a naive assumption" if the White House expects existing cohorts to merge into its 1 million-genomes project.

Venter raced the government-funded Human Genome Project to a draw in 2000, sequencing the entire human genome using private funding in less time than it took the public effort.

Collins conceded that mingling the databases would be a challenge but insisted it is doable.

"It is something that can be achieved but obviously there is a lot that needs to be done," he said.

Collating, analyzing and applying the data to develop drugs will require changes to how products are reviewed and approved by health regulators.

Dr. Margaret Hamburg, the FDA's commissioner, said precision medicine "presents a set of new issues for us at FDA." The agency is discussing new ways to approach the review process for personalized medicines and tests, she added.

(Reporting by Toni Clarke in Washington; Editing by Cynthia Osterman and Leslie Adler)

--

J. Craig Venter, Ph.D., Co-Founder and CEO, Human Longevity, Inc. (HLI) Participates in White House Precision Medicine Event

Prepared Statement by J. Craig Venter, Ph.D.

LA JOLLA, Calif., Jan. 30, 2015 /PRNewswire/ -- It is gratifying to see that the Obama Administration realizes the great power and potential for genomic science as a means to better understand human biology, and to aid in disease prevention and treatment. I was honored to participate in today's White House event outlining a potential new, government-funded precision medicine program.

Since the 1980s my teams have been focused on advancing the science of genomics—from the first sequenced genome of a free living organism, the first complete human genome, microbiome and synthetic cell— to better all our lives.

We founded HLI in 2013 with the goal of revolutionizing healthcare and medicine by systematically harnessing genomics data to address disease. Our comprehensive database is already in place with thousands of complete human genomes, microbiomes and phenotypic information together with accompanying clinical records, and is enabling the pharmaceutical industry, academics, physicians and patients to use these data to advance understanding about disease and wellness, and to apply them for personalized care.

We envisioned a new era in medicine when we founded HLI in which millions of lives will be improved through genomics and comprehensive phenotype data.

Now, through sequencing and analyzing thousands of genomes with private funds – with the goal of reaching 1 million genomes by 2020 – we believe that we can get a holistic understanding of human biology and the individual.

It is encouraging that the US government is discussing taking a role in a genomic-enabled future, especially funding the Food and Drug Administration (FDA) to develop high-quality, curated databases and develop additional genomic expertise. We agree, though, that there are still significant issues that must be addressed in any government-funded and led precision medicine program. Issues surrounding who will have access to the data, privacy and patient medical/genomic records are some of the most pressing.

We look forward to continuing the dialogue with the Administration, FDA and other stakeholders as this is an important initiative in which government must work hand in hand with the commercial sector and academia.

Additional Background on Human Longevity, Inc.

HLI, a privately held company headquartered in San Diego, CA was founded in 2013 by pioneers in the fields of genomics and stem cell therapy. Using advances in genomic sequencing, the human microbiome, proteomics, informatics, computing, and cell therapy technologies, HLI is building the world's largest and most comprehensive database of human genomic and phenotype data.

The company is also building advanced health centers – called HLI Health Hubs – which will be the embodiment of our philosophies of genomic science-based longevity care – where we will apply this learning and deliver it to the general public for the greatest benefit. Individuals and families will be seen in welcoming environments for one-stop, advanced evaluations (advanced genotype and phenotype analysis including whole body MRI, wireless digital monitoring, etc.). Our first prototype center is slated to open in July 2015 in San Diego, California.

--

Obama gives East Room rollout to Precision Medicine Initiative

http://news.sciencemag.org/biology/2015/01/obama-gives-east-room-rollout-precision-medicine-initiative

By Jocelyn Kaiser, 30 January 2015, 4:15 pm

President Barack Obama this morning unveiled the Precision Medicine Initiative he’ll include in his 2016 budget request to a White House East Room audience packed with federal science leaders, academic researchers, patient and research advocacy groups, congressional guests, and drug industry executives. By and large, they seemed to cheer his plan to find ways to use genomics and other molecular information to tailor patient care.

After poking fun at his own knowledge of science—a model of chromosomes made from pink swim noodles “was helpful to me,” he said—Obama explained what precision medicine is: “delivering the right treatments, at the right time, every time to the right person.” Such an approach “gives us one of the greatest opportunities for new medical breakthroughs that we have ever seen,” he added. He went on to describe the $215 million initiative, which includes new support for cancer genomics and molecularly targeted drug trials at the National Cancer Institute (NCI), and a plan to study links among genes, health, and environment in 1 million Americans by pooling participants in existing cohort studies.

“So if we have a big data set—a big pool of people that’s varied—then that allows us to really map out not only the genome of one person, but now we can start seeing connections and patterns and correlations that helps us refine exactly what it is that we’re trying to do with respect to treatment,” the president explained in his 20-minute speech, flanked by a red-and-blue model of the DNA double helix.

In the room were various patients, from Elana Simon, a young survivor of a rare liver cancer who has helped sequence her cancer type, who introduced the president; to towering former basketball great Kareem Abdul-Jabbar, who apparently takes targeted therapy for his leukemia; and cystic fibrosis patient William Elder, a 27-year-old medical student and guest at the State of the Union address who takes a new drug aimed at the genetic flaw underlying his form of the disease.

Representative Diana DeGette (D–CO), who has been working on 21st Century Cures, a plan to speed drug development, and Senator Lamar Alexander (R–TN), who has similar aims, were also present.

Sitting in the front row were the two lieutenants who will carry out the bulk of the precision medicine plan: National Institutes of Health (NIH) Director Francis Collins and NCI Director Harold Varmus. Another attendee was Craig Venter, who led a private effort to sequence the human genome in the late 1990s that competed with a public effort led by Collins. (Fifteen years ago, Venter sat in the same room with Collins when President Bill Clinton announced the first rough draft of the human genome.) Venter is now CEO of a company called Human Longevity Inc. that aims to sequence 1 million participants’ genomes by 2020—a new private competitor to Collins’s federal cohort study, perhaps.

Many other genome-medical biobank projects at academic health centers and companies are clamoring to be part of the 1 million–person cohort study. NIH will begin to explore which studies to include at an 11 to 12 February meeting (agenda here) that will also examine issues ranging from data privacy to using electronic medical records.

Amid all the hoopla, one prominent human geneticist in the audience offered a cautionary note. David Altshuler, who recently left the Broad Institute for Vertex Pharmaceuticals in Boston, which makes Elder’s cystic fibrosis drug, warns that although the new 1 million American cohort study may uncover new possible drug targets, it will be 10 to 15 years before any such discoveries lead to a successful drug.

“This is the first step,” Altshuler says. “No amount of genome sequencing would ever lead to a new medicine directly.”

---

Pellionisz' 2008 Google Tech YouTube

Forget the genome, Australian scientists crack the 'methylome' for an aggressive type of breast cancer

Sydney Morning Herald

February 3rd, 2015

http://www.smh.com.au/technology/sci-tech/forget-the-genome-australian-scientists-crack-the-methylome-for-an-aggressive-type-of-breast-cancer-20150202-1342al.html

Decoding the letters of the human genome revolutionised scientists' understanding of the role of genetic mutations in many diseases, including about one in every five cancers.

Now a team of Australian scientists have gone a step further, inventing a way to decipher another layer of information that garnishes genes, called methyl groups, which may explain the cause of many more cancers.

Methyl groups hang off sections of DNA like Christmas lights and act like a switch, affecting how genes are expressed in different cell types. Collectively called the methylome, they can also switch off tumour suppressor genes and switch on cancer promoting genes.
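
As a hedged, generic illustration of how a "methylome" is summarized numerically (this is not the Garvan Institute's pipeline; the site names and read counts below are invented toy data), each CpG site's methylation level is commonly reported as the fraction of bisulfite-sequencing reads at that site that are methylated:

def methylation_level(methylated_reads, unmethylated_reads):
    """Per-CpG methylation level = methylated reads / total reads at that site."""
    total = methylated_reads + unmethylated_reads
    return methylated_reads / total if total else float("nan")

# Toy data: (site, methylated read count, unmethylated read count)
cpg_sites = [("chr1:10468", 18, 2), ("chr1:10470", 3, 17), ("chr1:10483", 10, 10)]
for site, m, u in cpg_sites:
    print(f"{site}\tmethylation = {methylation_level(m, u):.2f}")

Patterns across many such sites, compared between tumour groups, are the "methylation signatures" referred to in the article.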

Susan Clark from the Garvan Institute of Medical Research and her team have for the first time translated the methylome of breast cancer, finding distinct patterns associated with different types of breast cancer.

They have also found a way to classify women with the worst type of breast cancer, triple-negative, into two groups: those with a highly aggressive form and those with a lower-risk variety with a longer survival time. At present there is no reliable way to divide triple-negative cancers, which do not respond to targeted treatment, into these sub-groups.

With further testing, methylation signatures may be used as predictive biomarkers that doctors use to prescribe more appropriate treatments for women diagnosed with breast cancer in the future.

Professor Clark's team are the first in the world to sequence large chunks of the methylome from samples of cancer tissue that had been archived for up to two decades.

Using historical samples meant they could trace which methylation patterns were linked to patient survival times.

Cancer specialist Paul Mainwaring, who was not involved in the research, said Professor Clark's new technique to decode the entire methylome will have significant implications for cancer research in general.

"The power of this technology is that it's allowing us to get a much sharper view on how cancer starts, progresses, metastasizes, behaves and a new avenue of treatment," said Dr Mainwaring from ICON Cancer Care in Brisbane.

"We'll still be talking about this paper in 20 years," he said.

While specific faults in a person's DNA sequence have been shown to increase their risk of certain cancers – such as the BRCA2 mutation, which significantly increases a woman's chance of developing breast tumours – in about two-thirds of cancers there are no changes to the DNA code.

In many of these cases scientists are finding changes to the genome that do not affect the underlying code, principally through DNA methylation.

"Every cancer has some sort of mutational profile, but there are multiple layers of where those abnormalities can occur. This is giving us the ability to read one of those layers," he said.

Dr Mainwaring said the exciting part about identifying methylation patterns was that they are potentially reversible.

"It's the bit of the genome we may be able to influence most, certain regions can be changed either by diet, exercise or drugs," he said.

Professor Clark and team's research was funded by the National Breast Cancer Foundation and has been published in the leading scientific journal Nature Communications.


Houston, We've Got a Problem!

[FractoGene (2002) yielded fractal defect mining, consistent with the repeats algorithmically described as pyknons by Rigoutsos (2006), and was disseminated in the Google Tech Talk YouTube of 2008, a year before the Hilbert-fractal of genome folding appeared on the cover of Science in 2009]

Paraphrasing the infamous alarm so well pictured in "Apollo 13" of the US Space Program, one would be urged to cry out now: "USA Genome Project, We've Got a Problem!"

One thing is amiss: there is no "Command Center" to call with the increasingly obvious alarm that even Craig Venter articulated years ago - that "our concepts of genome regulation are frighteningly unsophisticated". The Old School of genomics - with the fairy tale of 1.3% genes and 98.7% junk, and with the bad joke of Crick's Central Dogma falsely arbitrating that "protein to DNA recursion can never happen" - has now totally unraveled. Yet the "New School of Hologenomics", based on the advanced mathematics of non-linear dynamics, is only budding after hardly more than its first decade (hear double-degree biomathematician Eric Schadt).

Whom to alert? Though even very small countries (see the Estonian Genome Project, Latvian Genome Project, etc.) have their "National Genome Project", the USA-led international project that produced the $3 Bn sequencing of a single genome expired a decade and a half ago. Some consider the NIH-led "ENCODE" its continuation (2003-2007, prompted e.g. by my personal debate with Dr. Francis Collins at the 50th Anniversary of the Double Helix, arguing the importance of settling the very disturbing result that only about 20 thousand genes were found, and that according to my 2002 FractoGene 98.7% of the human genome was NOT JUNK). ENCODE-II (2007-2012) was even less of a "continuation". ENCODE-II essentially reinforced the surprise that "the human genome is pervasively transcribed", and attached a suspiciously arbitrary-looking number (80%) to the "functional" parts of the genome (the exons and introns of genes plus vast seas of intergenic, non-directly-coding DNA). However, neither the original US-led Human Genome Project nor ENCODE I-II addressed the basic question of the algorithmic interpretation of (recursive) genome function.

In the absence of any overarching "USA Genome Project" (NHGRI, DoE, NSF, DARPA etc. compete for taxpayer dollars, thus by definition their activities are scattered), whom to alert, for instance, that "microexons" (see two articles below) not only await a definition, but are often treated self-contradictorily? For instance, a paper lists "microexons" of 1 nt "length". Since an "exon" is defined as protein-coding sequence (triplets of A, C, T, G in an open reading frame), nothing shorter than 3 nt can be called a "microexon". Since a single base cannot code for a protein (an amino acid, rather), the referred single nucleotide could well be part of an "intron". The mathematically dual valence of exons, introns and intergenic non-coding DNA was exposed in a Springer textbook, but the advanced mathematics of, e.g., the significance of dual valence (and fractal eigenstates) is not easily digestible for non-mathematically-minded workers. This is most unfortunate, since after the "genome diseases" a.k.a. cancers, now autism has established the case that these major diseases are so complex, involving myriads of coding and non-coding DNA structural variants, that the recent Newsweek cover applies: "You cannot cure a disease that you do not understand". By now it is totally clear that neither cancer nor autism can be cured, or even understood, without an algorithmic (mathematical) approach to genome regulation. It is commendable, therefore, that one of the leading "agencies" is not an "agency" of the government sector at all - but the charitable Simons Foundation (headed by the most accomplished mathematician Jim Simons, who made $Billions with his stock-market algorithms). Mathematics is also not much of a problem for world-leading Information Technology companies (e.g. my Google YouTube points out near its end that even the Internet is fractal). Thus Google Genomics, Amazon Web Services and IBM in the USA, SAP or Siemens in Germany, and Samsung, Sony or even TATA in Asia are the entities most likely to heed (and lucratively profit from) this "alert". One challenge is that cross-domain expertise (genomics AND informatics) is required - presently a still somewhat unusual combination - but advisership is available. Andras_at_Pellionisz_dot_com


Small snippets of genes may have big effects in autism

Kate Yandell

22 January 2015

Small pieces of DNA within genes, dubbed ‘microexons,’ are abnormally regulated in people with autism, suggests a study of postmortem brains published 18 December in Cell [1]. These sequences, some as short as three nucleotides, moderate interactions between key proteins during development.

“The fact that we see frequent misregulation in autism is telling us that these microexons likely play an important role in the development of the disorder,” says lead researcher Benjamin Blencowe, professor of molecular genetics at the University of Toronto.

Genes are made up of DNA sequences called exons, separated by swaths of noncoding DNA. These exons are mixed and matched to form different versions of a protein. This process, called alternative splicing, is thought to be abnormal in autism.

Many sequencing studies tend to skip over microexons because they are not recorded in reference sequences. Although researchers have known about microexons for decades, they were unsure whether the small segments had any widespread purpose.

The new study confirms microexons’ importance, suggesting that these tiny sequences can have big effects on brain development.

“It’s really a new landscape of regulation that’s associated with a disorder,” says Blencowe. “We have a big challenge ahead of us to start dissecting the function of these microexons in more detail.”

Blencowe and his team developed a tool that flags short segments of RNA flanked by sequences that signal splice sites. They used the tool to identify microexons in RNA sequences from various cell types and species throughout development.
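
To give a flavour of what "flagging short segments flanked by splice signals" can mean computationally, here is a deliberately simplified, hedged sketch: it scans a genomic sequence for very short candidate exons bounded by canonical splice-site dinucleotides (an upstream intron ending in AG, a downstream intron starting with GT), with the 3-27 nt size range borrowed from the study described above. It is not the Blencowe lab's tool, which works on RNA-seq junction reads with far more stringent criteria; the toy sequence below is invented.

import re

# Candidate "microexon": 3-27 nt, preceded by an upstream intron ending in AG
# and followed by a downstream intron starting with GT. A naive rule, for
# illustration only.
MICROEXON_PATTERN = re.compile(r"AG([ACGT]{3,27}?)(?=GT)")

def candidate_microexons(genomic_seq):
    """Return (position, sequence) pairs for short AG...GT-bounded segments."""
    seq = genomic_seq.upper()
    return [(m.start(1), m.group(1)) for m in MICROEXON_PATTERN.finditer(seq)]

toy_sequence = "ccccAGGCAGTAgtaaaaagCATGTccc"    # invented example sequence
for pos, exon in candidate_microexons(toy_sequence):
    print(f"candidate microexon at {pos}: {exon} ({len(exon)} nt)")

In real data, such naive candidates would then be checked against junction-spanning reads, reading-frame context and cross-species conservation.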

In the brain, microexons are highly conserved across people, mice, frogs, zebrafish and other vertebrates. Alternatively spliced microexons are more likely to be present in neurons than in other cell types, suggesting that they have an important, evolutionarily conserved role in neurons.

Irregular splicing:

The researchers analyzed patterns of microexon splicing in the postmortem brains of 12 people with autism and 12 controls between 15 and 60 years of age.

Nearly one-third of alternatively spliced microexons are present at abnormal levels in autism brains compared with control brains, they found. By contrast, only 5 percent of exons longer than 27 nucleotides are differentially spliced in autism brains.

Genes with microexons that are misregulated in autism tend to be involved in the formation of neurons and the function of synapses — the junctions between neurons. Both of these processes are implicated in autism.

Microexons are particularly likely to be misregulated in autism-linked genes, such as SHANK2 and ANK2. What’s more, the expression of a gene called nSR100, which regulates splicing of microexons, is lower in the brains of people with autism than in those of controls.

One future goal is to determine the biology underlying these differences, says Daniel Geschwind, director of the University of California, Los Angeles Center for Autism Research and Treatment. nSR100 belongs to a module of genes that includes transcription factors — which regulate the expression of other genes — and those that modify chromatin, which helps package DNA into the nucleus. Many of these genes have known links to autism.

To look at microexon splicing throughout development, Blencowe and his team sequenced RNA from mouse embryonic stem cells as they differentiated into neurons. Microexon levels tend to spike after the cells finish dividing, hinting at a role in the late stages of neuronal maturation.

Studying microexon regulation at various stages of normal development in people is another logical next step, says Lilia Iakoucheva, assistant professor of psychiatry at the University of California, San Diego, who was not involved in the study. “Then, of course, we can study gene expression in autism brains and then talk about what’s regulated correctly and what’s misregulated.”

As a complement to the postmortem data, the researchers could also look at how microexons are regulated in developing neurons derived from people with autism, says Chaolin Zhang, assistant professor of systems biology at Columbia University in New York, who was not involved in the study.

“We should not underestimate the potential of more detailed characterization of these splicing variants,” he says. “They really expand the genome and [its] complexity in an exponential way.”

Yang Li, a postdoctoral fellow at Stanford University in California also applauds the attention to the microexons. “There’s still not enough recognition that different [forms of proteins] can have very different functions,” he says. “This is especially true in the brain.”

In an independent study published in December in Genome Research, Li and his colleagues reported that microexons in the brain tend to encode amino acids in locations that are likely to affect protein-protein interactions [2]. They also found that the autism-linked RBFOX gene family regulates microexon splicing in the brain.

“I definitely think that microexons are important because of how conserved they are in terms of brain function,” says Li. “But I don’t know if they cause autism.”

News and Opinion articles on SFARI.org are editorially independent of the Simons Foundation.

References:

1. Irimia M. et al. Cell 159, 1511-1523 (2014) PubMed

2. Li Y.I. et al. Genome Res. 25, 1-13 (2015) PubMed


Autism genomes add to disorder's mystery

By GEOFFREY MOHAN

Los Angeles Times,

January 26, 2015

Less than a third of siblings with autism shared the same DNA mutations in genes associated with the disorder, according to a new study that is the largest whole-genome sequencing for autism to date.

Canadian researchers sequenced whole genomes from 170 siblings with autism spectrum disorder and both their parents. They found that these sibling pairs shared the same autism-relevant gene variations only about 31% of the time, according to the study published online Monday in the journal Nature Medicine.

More than a third of the mutations believed to be relevant to autism arose in a seemingly random way, the study also found.

“It isn’t really autism; it’s autisms,” said the study’s lead investigator, Dr. Stephen W. Scherer, head of the Center for Applied Genomics, Genetics and Genome Biology at the Hospital for Sick Children in Toronto. In some cases, he added, “it’s like lightning striking twice in the same family.”

The results are part of 1,000 whole genomes that are being made available to researchers via a massive Google database that autism advocates hope will grow to 10 times that size by next year.

The effort, spearheaded by the research and advocacy group Autism Speaks, has been somewhat controversial from the start, with some questioning whether results from the relatively costly and time-consuming process will be too complicated or obscure to yield significant breakthroughs.

Indeed, researchers associated with the effort acknowledged that much of their data remain a mysterious ocean of jumbled, deleted or inserted DNA code, much of which is not located on areas of the genome that program the proteins that directly affect biological functions.

“You might expect that you’d see some commonalities in the mutations between kids in the same family, but that’s actually not the case here,” said Rob Ring, chief science officer of Autism Speaks. “We’re not really sure what might explain that at this time.”

Said Scherer: “We’ve really just scratched the surface of this data.”

That’s where Google’s cloud-based data capabilities will come in, according to Ring and Scherer. Making these whole genomes – potentially 10,000 of them – available to any researcher could yield unexpected connections and order in data that are the equivalent of more than 13 years of streaming high-definition television programming.
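
[For orientation, here is a hedged back-of-envelope version of such data-volume comparisons, as a minimal Python sketch. The per-genome footprint (roughly 200 GB of raw reads for a 30x whole genome) and the HD streaming bitrate (roughly 5 Mbit/s) are assumptions of this sketch, not figures from the article, so the output is only an order-of-magnitude check; it lands in the same ballpark as the "more than 13 years" quoted above.]

# Order-of-magnitude check of the "years of HD streaming" comparison.
# Assumed (not from the article): ~200 GB raw data per 30x genome, 5 Mbit/s HD stream.
GB_PER_GENOME = 200
N_GENOMES = 1000                  # the batch of genomes currently being shared
HD_STREAM_MBPS = 5

total_bits = GB_PER_GENOME * N_GENOMES * 8e9          # gigabytes -> bits
stream_seconds = total_bits / (HD_STREAM_MBPS * 1e6)  # seconds at the assumed bitrate
stream_years = stream_seconds / (365.25 * 24 * 3600)

print(f"~{GB_PER_GENOME * N_GENOMES / 1000:.0f} TB total, "
      f"roughly {stream_years:.0f} years of HD streaming")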

Even the more limited data from several hundred genomes sequenced in the study proved difficult to handle. “We couldn’t transfer it over the Internet,” said Scherer. “We had to buy hard drives and Fed-Ex them.”

Autism Speaks hopes the database will attract researchers from varied fields, including those outside of genetics.

“It may be a genetic code as it rolls off of sequencers, but it’s just data and numbers,” Ring said.

Other sequencing studies have examined more children diagnosed with autism, but involved single siblings with the diagnosis and have focused on a narrower part of the genome – a little more than 1% of the genome that codes the proteins that carry out biological processes.

The Canadian study is the largest of so-called "multiplex" families with more than one child diagnosed with the disorder.

The researchers had examined a smaller batch of 32 family genomes in 2013, uncovering damaging variations in four genes not previously correlated with autism spectrum disorder. That study also identified mutations in 17 other known or suspected autism genes, and the small DNA coding variations it found accounted for about 19% of the autism cases. The current study found autism-relevant mutations in 36 of the 85 families studied. Those mutations were shared by siblings in only 11 of those 36 families (roughly the 31% sharing rate cited above), and 10 of those 11 were inherited.

Advocates for whole-genome sequencing argue that their approach picks up kinds and sizes of mutations, including much smaller additions and deletions of code, that are not detected by other forms of sequencing. The study noted that more than 95% of one particular category of coding variation would have been missed by narrower approaches.

The cost and time involved in whole genome sequencing are rapidly declining, while cloud-based computing opens up massive computational power that could potentially make sense of the vast database, advocates say.


Critics have argued that turning up more small oddities may not necessarily be helpful, given that many are so rare that it will be hard to make any statistical sense of them. Even some of the strongest “autism gene” candidates are associated with only a small fraction of autism cases, they note.

Still, genomics is increasingly examining the potential roles of vast stretches of DNA that do not directly code proteins, or that lie outside of genes. Those areas can affect how genes are expressed and how they interact with the environment.

Autism Speaks has committed $50 million to the whole-genome sequencing effort so far, Ring said. The portal to the 1,000 genomes should be in place by the second quarter this year, he said.


Hundreds of Millions Sought for Personalized Medicine Initiative

Jan 26, 2015

US President Barack Obama will seek hundreds of millions of dollars to fund the new personalized medicine initiative he announced in his State of the Union address last week, the New York Times reports.

Such a program would bring about "a new era of medicine  —  one that delivers the right treatment at the right time," Obama said in his speech.

According to the Times, this initiative may have broad, bipartisan support. "This is an incredible area of promise," says Senator Bill Cassidy (R-La.), who is also a gastroenterologist.

The funds would go to both the National Institutes of Health to support biomedical research and to the Food and Drug Administration to regulate diagnostic tests.

Ralph Snyderman, the former chancellor for health affairs at Duke University, tells the Times that he is excited by the prospect of the initiative. "Personalized medicine has the potential to transform our health care system, which consumes almost $3 trillion a year, 80 percent of it for preventable diseases," Snyderman says.

Though new treatments are expensive, Snyderman says personalized therapies will save money, as they will only be given to people for whom they'll work.

[The purpose of "State of the Union Address" by US Presidents is to seek maximally broad-based political support. Thus, most everybody gets a little of the thinly spread promises. However, any "Initiative" would have to be 1) worked out by experts, 2) pushed through (often requiring years) the legislative system of Congress. While according to the above it is questionable how much effect and when such "initiative" might have e.g. on the NIH (with already a thirthy thousand millions of dollars, yearly, thus "hundreds of millions" might barely make a dent with NIH). The Statement might be very useful to stimulate task # 1) (to work out by domain experts the most cost-effective plan). In this regard, in the multiple quality of a) someone whose NIH grant-continuation was cut in 1989 when the colossal disruption by Genomics became a "perceived threat on the establishment" (see acknowledgement in Pellionisz, 1989), b) someone who already contributed to governement-blueprints, see "Decade of the Brain Initiative", c) someone who worked out the mathematical (geometrical) algorithmic approach to unification of neuroscience and genomics (Pellionisz et al, 2013) this worker would add two further improvements that the US government could plan for - if influencing by "hundreds of millions of dollars" the "$3 trillion dollar health care system" is meant as a real catalyzer. First, with the new involvement of the government in health care insurance system, some "catalyzer monies" could be well spent to shape the US health insurance system into the direction of Germany (see news below), France, UK, Canada (where instead of a for-profit "sick care system" health-care is a non-profit government service). Second, (as the news below also clearly indicates), "personalized medicine" will happen by massive involvement of Information Technology giants (SAP in Germany, Google Genomics, Amazon Web Services, IBM etc. in the USA). These monstrous companies, however, typically have a rather hard time embracing "paradigm-shifts" (see the classic best-seller of Christensen "The Innovator's Dilemma"). Indeed, there is a new crop of "personalized medicine start-ups" in the USA (most notably Foundation Medicine in Boston, that is already a post-IPO $Bn business). Government incentives on the scale of "hundreds of millions of dollars" could boost the (existing) "SBIR programs" seeking innovative IT-based solutions for personalized medicine. This is all the more important, since judging from the past history, informatics falls much more into the forte of NSF, DOE, DARPA (etc), rather than the mostly still "old schooler"-dominated NIH. This opinion could be based on the Memoirs of Mandelbrot, that recalls the opportunity "to mathematize biology". The now late Mandelbrot deliberately declined the offer (though it came along with ample funding) since his opinion "biologists were not ready for advanced mathematics" (an opinion he upheld till his passing away; The Fractalist, 2012). This worker would like to note here, that there is also a third, much superior opportunity as well, to be elaborated elsewhere. Andras_at_Pellionisz_dot_com.]


SAP Teams with ASCO to Fight Cancer

[SAP of Germany uses Big Data to fight cancer together with USA - Video]

SAP is teaming with the American Society of Clinical Oncology (ASCO) to develop CancerLinQ, a big data solution that will transform care for cancer patients. The collaboration brings data and expertise from ASCO, a non-profit physician group with over 35,000 members worldwide, onto SAP HANA. CancerLinQ will give doctors new insights in seconds when they are deciding on personalized treatment plans with patients.

[In the USA, Health Care ("Sick Care", rather) is well known to be a for-profit business. Thus, it is in the interest of both hospital systems and Pharma to try as many chemotherapies on a single patient as possible. Since about 80% of chemo regimens do NOT work for any particular individual, there is a lot of "repeat customer" business for "sick care" experimenting on humans. This is fortunately not true for countries like Germany, France, the UK, Canada (even China...), where health care is not a for-profit business but a government-paid public service. For the government budget of such countries it is extremely important to minimize ineffective expenditure - and in Germany, rich enough to afford expensive cancer medication yet smart and motivated enough to use "Big Data" (genome-matching) to personalize cancer medicine, both SAP and Siemens are already engaged in genome-matched chemo-personalization. In the USA, at least three major IT companies (Google Genomics, Amazon/Illumina, IBM/New York Genome Center) are already engaged in genome analytics, and Boston-based Foundation Medicine is already a post-IPO business beyond a $Bn valuation. Now the USA is facing increasingly potent, and much more motivated, competition from Germany, Japan (Riken/Sony), Korea (Samsung) and even China (BGI). While the earlier trend used to be to travel to the USA for the best medical care, these days some cancer patients leave the USA for Germany for more personalized medicine. A key to the best matching is THE ALGORITHM - andras_at_pellionisz_dot_com]


Human Longevity, Genentech Ink Deal to Sequence Thousands of Genomes

Jan 14, 2015 | a GenomeWeb staff reporter

NEW YORK (GenomeWeb) – Human Longevity today announced it has signed a multi-year agreement with Genentech to conduct whole genome sequencing and analysis on tens of thousands of patient samples provided by the drug developer.

Human Longevity will sequence the genomes at 30x coverage with the Illumina HiSeq X Ten machines in its genomic sequencing center, the firm said in a statement.
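
[To put "30x coverage" into perspective, here is a minimal Python sketch of the arithmetic. The genome length (~3.1 Gb) and the 2 x 150 bp read format of the HiSeq X are standard figures, but the per-sample numbers below are illustrative assumptions, not details disclosed in the announcement.]

# Rough arithmetic behind 30x whole-genome coverage on a HiSeq X-class instrument.
GENOME_SIZE_BP = 3.1e9     # approximate haploid human genome length
MEAN_COVERAGE = 30         # depth quoted in the announcement
READ_LENGTH_BP = 150       # HiSeq X paired-end reads are 2 x 150 bp

total_bases = GENOME_SIZE_BP * MEAN_COVERAGE      # ~93 billion bases per sample
total_reads = total_bases / READ_LENGTH_BP        # ~620 million reads per sample
read_pairs = total_reads / 2                      # ~310 million read pairs

print(f"{total_bases / 1e9:.0f} Gb of sequence, ~{total_reads / 1e6:.0f} M reads "
      f"(~{read_pairs / 1e6:.0f} M pairs) per genome")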

"We are excited to be working with Genentech so that patient samples can be analyzed according to more precise genetic categories," Human Longevity CEO Craig Venter said in a statement. "The application of our capabilities to discover new diagnostics and targeted therapies is one of the most relevant today."

Genentech Senior VP James Sabry also said that the partnership would advance the firm's drug discovery program.

All sample and patient data elements will be de-identified to protect privacy, the firms added.

Financial details of the agreement were not disclosed.

Human Longevity continues to sign deals giving it more genomes to sequence as it builds its human genotype and phenotype database. Earlier this week, the firm announced it had signed a deal to sequence genomes for the oncology testing firm Personal Genome Diagnostics. In November 2014, the firm signed a deal to gain access to the Twins UK registry and sequence samples from it.

Last week, Genentech signed a deal with 23andMe to sequence the genomes of 3,000 people in the Parkinson's disease community.

[Craig Venter churns it up, again! The announcement is somewhat uncharacteristically understated. The title does not mention that "Genentech" is no longer an independent company (it is a subsidiary of Roche), and it glosses over the brilliance of how Craig's latest move toward the private sector put not just Roche but also Illumina, Amazon and Google into a fiercely competitive mode - serving the interest of science (Craig Venter's style...). Venter rather recently appeared to compete against Google (by snatching Franz Och). As we know, Craig answered the rhetorical question "what's the difference between Celera and God?" with "we had computers". IBM wanted to do it for him for free - but he built the largest computer system instead. Now Illumina can either remain "the King" by providing sequencers - or, with a monopoly on algorithms, it can in addition catapult either Amazon Web Services or the competitors (Google and/or IBM). The world will never be the same - andras_at_pellionisz_dot_com]


UCSC Receives $1M Grant from Simons Foundation to Create Human Genetic Variation Map

Jan 13, 2015 | a GenomeWeb staff reporter

NEW YORK (GenomeWeb) – Researchers at the University of California Santa Cruz's Genomics Institute have received a grant for up to $1 million from the Simons Foundation that will support a one-year pilot project to create a comprehensive map of human genetic variation for biomedical research.

Co-leading the project is David Haussler, a professor of biomolecular engineering and director of the Genomics Institute at UC Santa Cruz, and Benedict Paten, a research scientist at the Genomics Institute.

They'll work with scientists at the Broad Institute, Memorial Sloan Kettering Cancer Center, UC San Francisco, Oxford University, the Wellcome Trust Sanger Institute, and the European Bioinformatics Institute to develop algorithms and formulate the best mathematical approaches for constructing a new graph-based human reference genome structure that will better account for and reflect the different kinds of variation that occur across populations. They'll test algorithms developed as part of the project on tricky parts of the genome within the first six months of the pilot, Paten said in a statement.
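
[To make the idea of a "graph-based reference" concrete, below is a minimal Python sketch in which variants are stored as alternative paths through a graph rather than as edits against a single linear sequence. The class and method names are inventions of this sketch, not the data model being developed by the project or by the Global Alliance task team.]

# A minimal sketch of a sequence graph: one linear reference plus alternate paths.
class SequenceGraph:
    def __init__(self, reference: str):
        self.nodes = list(reference)   # one node per reference base (kept simple)
        self.alt_edges = {}            # (start, end) -> alternative sub-sequence

    def add_variant(self, start: int, end: int, alt: str):
        """Store an alternate path that replaces reference[start:end] (end > start)."""
        self.alt_edges[(start, end)] = alt

    def spell(self, take_alt_at=()):
        """Spell one haplotype, following the chosen alternate paths."""
        out, i = [], 0
        while i < len(self.nodes):
            edge = next(((s, e) for (s, e) in self.alt_edges
                         if s == i and (s, e) in take_alt_at), None)
            if edge:
                out.append(self.alt_edges[edge])   # take the alternative path
                i = edge[1]                        # rejoin the reference after it
            else:
                out.append(self.nodes[i])          # stay on the reference path
                i += 1
        return "".join(out)

g = SequenceGraph("ACGTACGT")
g.add_variant(2, 3, "T")    # SNV: G -> T at position 2
g.add_variant(4, 6, "A")    # deletion: "AC" collapses to "A"
print(g.spell())                              # reference path: ACGTACGT
print(g.spell(take_alt_at={(2, 3), (4, 6)}))  # variant path:   ACTTAGT

[In such a structure, newly sequenced reads can in principle be mapped against all paths at once, which is what removes the bias toward one arbitrarily chosen reference sequence.]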

The researchers will use a dataset of more than 300 complete and ethnically diverse human genomes sequenced by researchers at the Broad Institute to construct the reference structure and they'll also leverage work done to create a standard data model for the structure by members of the reference variation task team, a subgroup of the data working arm of the Global Alliance for Genomics and Health that Paten co-leads.

The project aims to overcome the limitations of the current model for analyzing human genomic data, which relies on mapping newly sequenced data to a single set of arbitrarily chosen reference sequences resulting in biases and mapping ambiguities. "One exemplary human genome cannot represent humanity as a whole, and the scientific community has not been able to agree on a single precise method to refer to and represent human genome variants," Haussler said in a statement. "There is a great deal we still don't know about human genetic variation because of these problems."

Paten added that the proliferation of different genomic databases within the biomedical research community has resulted in hundreds of specialized coordinate systems and nomenclatures for describing human genetic variation. This poses problems for tools such as the widely used UCSC Genome Browser which was developed and is maintained by UCSC researchers. "For now, all our browser staff can do is to serve the data from these disparate sources in their native, mutually incompatible formats," Paten said in a statement. "This lack of comprehensive integration, coupled with the over-simplicity of the reference model, seriously impedes progress in the science of genomics and its use in medicine."

The diversity of genomes in the Broad's dataset, Paten continued, offers a rich data resource that will be used "to define a comprehensive reference genome structure that can be truly representative of human variation." The plan is eventually to expand the graph-structure to include many more genomes, he said.

The researchers expect to have a draft variation map available by the end of the year. Paten and Haussler have also outlined the follow-up activities needed to extend the pilot project and fully realize their vision for the new map.

The new map will make it easier to detect and analyze both simple and complex variants that contribute to conditions with a genetic component such as autism and diabetes. It will also be a valuable tool for understanding recent human evolution, according to the researchers.

[The news talks about "algorithms" and "maps" (of genomic variations). Given that Jim Simons is a most brilliant mathematician (with autism in the family), it is likely that he invested this sum - relatively minor on his scale - toward having more "algorithms", rather than just "maps", around. "Pathways" and "maps" already abound; both mathematicians and computers are yearning for software-enabling ALGORITHMS to distinguish genomic variants responsible for normal human diversity from pathological genomic variants. It is almost self-evident that some variants are "self-similar" - thus one of the many (?) algorithmic approaches might be a measure of self-similarity (fractality); a purely illustrative sketch follows below. andras_at_pellionisz_dot_com]
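
[As one concrete - and purely illustrative - example of measuring self-similarity, below is a minimal Python sketch of box counting applied to a toy, Cantor-like set of genomic positions. It is emphatically not the FractoGene algorithm nor any published variant-calling method; it only shows that "a measure of fractality" can be turned into a few lines of software.]

import math

def box_counting_dimension(positions, track_length, box_sizes):
    """Fit log(#occupied boxes) against log(1/box_size) by least squares."""
    xs, ys = [], []
    for size in box_sizes:
        occupied = {p // size for p in positions if 0 <= p < track_length}
        xs.append(math.log(1.0 / size))
        ys.append(math.log(len(occupied)))
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope   # the estimated box-counting dimension

def cantor_positions(level, offset=0, length=3**8):
    """Toy Cantor-like dust of positions on a track of length 3^8."""
    if level == 0:
        return [offset]
    third = length // 3
    return (cantor_positions(level - 1, offset, third)
            + cantor_positions(level - 1, offset + 2 * third, third))

pts = cantor_positions(8)
print(box_counting_dimension(pts, 3**8, box_sizes=[3, 9, 27, 81, 243]))
# For a Cantor set the expected dimension is log 2 / log 3, about 0.63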


Silencing long noncoding RNAs with genome-editing tools with full .pdf.

Methods Mol Biol. 2015;1239:241-50. doi: 10.1007/978-1-4939-1862-1_13.

Gutschner T.

Abstract

Long noncoding RNAs (lncRNAs) are a functional and structural diverse class of cellular transcripts that comprise the largest fraction of the human transcriptome. However, detailed functional analysis lags behind their rapid discovery. This might be partially due to the lack of loss-of-function approaches that efficiently reduce the expression of these transcripts. Here, I describe a method that allows a specific and efficient targeting of the highly abundant lncRNA MALAT1 in human (lung) cancer cells. The method relies on the site-specific integration of RNA-destabilizing elements mediated by Zinc Finger Nucleases (ZFNs).

See full .pdf of Chapter 13 here

[Genome Editing, an effort that has long been brewing and that broke through with full force by 2015, calls for a crucially important "heads up". In earlier times, efforts toward effective modification of the genome used to be labelled "Gene Surgery". Thus, some readers may be under the impression that the classic misunderstanding ("the genome is your destiny and there is no way to change it") needs perhaps only slight cosmetics: genes (the protein-coding, non-contiguous, fractally scattered parts of the genome) could, in theory, be altered. This recent paper should totally dispel any such misunderstanding. First, the paper is not even about "genes" versus "non-coding DNA" of the genome - it provides an experimentally verifiable method to alter the function of RNA long mistakenly believed to be "function-less", more particularly of long noncoding RNAs (lncRNAs). The effort would be totally misspent if lncRNAs were without important function in genome regulation - critical to cancer(s), in this case lung cancer, one of the most dreadful and rampant diseases. The first words of the abstract, however, clinch that lncRNAs are "a functional and structural diverse class of cellular transcripts that comprise the largest fraction of the human transcriptome". The Fractal Approach (FractoGene), since its inception (concept in 1989 and utility in 2002), has long been kept at bay (in order to delay a humanly and materially very expensive total paradigm-shift as long as possible) by the rhetorical question "what is the importance of a mathematical (algorithmic) theory of fractal recursive genome function?". For some time the answer was "to find fractal defects in the genome that are in a cause-and-effect relationship with, e.g., cancer development by misregulation". That reason has been justified in itself (as a recent Newsweek cover issue on cancer very properly stated, "You can not cure a disease that you do not understand" - scribbling some equations underneath the graphics). But with Genome Editing, which (also) matured over the "wilderness of genomics" (from the Double Helix in 1953 to the end of Encode-2 in 2012), the enormous importance of "fractal defect mining" followed by "genome editing" can be explained even to those in elementary school. Before "spelling checkers" and "word processors", anybody could write maybe important sets of letters (as does this columnist, for whom English is the sixth language...), but occasionally laden with typos. In natural languages such errors are not nearly as important as in "computer languages" (codes, rather). Anybody who ever wrote a line of code knows all too well that a freshly written code (even if it is "interpreted", not "compiled") should, for best results, undergo the dove-tailing process of "syntax checking" and subsequently "debugging". (A recursive computer code may produce an infinitely repeating "uncontrolled cycle" if the "stop" symbol is missing or is in error. While it is common sense that "wash, rinse, repeat" is "meaningful enough", coders itch to add "after repeating the cycle n times, do not run it for the n+1st occasion".) A toy illustration of this point about recursion and its stopping condition is sketched below. This trivialization may not be superfluous, since it also brings up the question that came back into fashion after 20 years: how similar, or how profoundly different, are natural languages from the code of recursive genome function?
Without a serious probe into this question (for which the NIH newly allocated $28 million), perhaps one might want to read and cite (beyond its 28 citations) the full pdf of the 2008 peer-reviewed paper on "The Principle of Recursive Genome Function" - andras_at_pellionisz_dot_com]
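
[The recursion point above can be made concrete in a few lines. The Python sketch below iterates a toy L-system-like rewrite rule - loosely Cantor-like, chosen only for illustration and implying nothing about actual genome grammar - and shows where the explicit stopping condition ("after n rounds, stop") lives in such code.]

# A recursive rewrite rule with an explicit stopping condition.
# Without the `depth == 0` check the recursion would never terminate
# ("wash, rinse, repeat" with no n-th-cycle stop).

def grow(axiom: str, rules: dict, depth: int) -> str:
    """Recursively rewrite `axiom` using `rules`, stopping after `depth` rounds."""
    if depth == 0:                      # the indispensable "stop" symbol
        return axiom
    rewritten = "".join(rules.get(ch, ch) for ch in axiom)
    return grow(rewritten, rules, depth - 1)

rules = {"A": "ABA", "B": "BBB"}        # 'A': filled segment, 'B': gap (Cantor-like)
for n in range(4):
    print(n, grow("A", rules, n))
# 0 A
# 1 ABA
# 2 ABABBBABA
# 3 ABABBBABABBBBBBBBBABABBBABA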


Key words Cancer, CRISPR, Genome engineering, Homologous recombination, MALAT1, LncRNA, Single cell analysis, TALEN, Zinc finger nuclease

1 Introduction

LncRNAs represent a novel and exciting class of transcripts usually defined by their size (>200 nucleotides) and the lack of an open reading frame of significant length (<100 amino acids). Several studies link the expression of these transcripts to human diseases, e.g., cancer [1]. Functional analysis using RNA interference-mediated knockdown approaches is a common strategy to infer a gene’s cellular role. However, these widely used approaches have multiple limitations [2] and might have limited efficiency for lncRNA research due to the intracellular localization (nuclear) and secondary structure of a large fraction of lncRNA molecules.

To overcome these limitations, a novel gene targeting method was developed to reduce the expression of the lncRNA MALAT1 in human A549 lung cancer cells [3]. MALAT1 is a ~8 kb long, highly abundant, nuclear transcript which was originally discovered in a screen for lung cancer metastasis associated genes [4, 5]. The targeting method relies on the site-specific integration of a selection marker (here: GFP) and RNA-destabilizing elements or transcriptional stop signals, e.g., poly(A) signals, into the promoter region of the MALAT1 gene. The integration is mediated by ZFNs that specifically introduce a DNA double-strand break (DSB) [6]. The induced DNA damage activates the cellular repair pathways, namely Nonhomologous End Joining (NHEJ) or Homologous Recombination (HR). By providing an appropriate template (donor plasmid), the HR pathway can be used to repair the DSB and to integrate exogenous DNA sequences (Fig. 1). Application of this method to human lung cancer cells yielded a stable, specific and more than 1,000-fold reduction of MALAT1 expression, and functional analysis established MALAT1 as an active regulator of lung cancer metastasis [7]. Importantly, the method's concept is of broad applicability and allows targeting of protein-coding genes as well as other lncRNAs using any kind of recently developed genome targeting tools, e.g., ZFNs, TALENs, or the CRISPR/Cas9 system.
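
[For readers who prefer code to diagrams, here is a minimal, hedged Python sketch of what the donor-mediated integration does at the sequence level, and why the genotyping "Integration-PCR" described below yields a longer product once the cassette is in place. All sequences, arm lengths and names are toy values invented for the example; they are not MALAT1 coordinates or sequences from the chapter.]

def integrate(locus: str, left_arm: str, right_arm: str, cassette: str) -> str:
    """Simulate HR: place `cassette` between the two homology arms in `locus`."""
    i = locus.index(left_arm) + len(left_arm)      # end of the left homology arm
    j = locus.index(right_arm)                     # start of the right homology arm
    return locus[:i] + cassette + locus[j:]        # arm-cassette-arm allele

def integration_pcr_size(allele: str, fwd_primer: str, rev_primer_rc: str) -> int:
    """Product size for primers flanking the cut site (reverse primer given as its
    top-strand sequence to keep the toy example simple)."""
    start = allele.index(fwd_primer)
    end = allele.index(rev_primer_rc) + len(rev_primer_rc)
    return end - start

wild_type = "TTTT" + "AAAACCCC" + "GG" + "TTTTGGGG" + "CCCC"   # arms flank the cut site "GG"
cassette = "GFPGFPGFPpolyA"                                     # toy selection marker + poly(A)
edited = integrate(wild_type, "AAAACCCC", "TTTTGGGG", cassette)

fwd, rev = "AAAACCCC", "TTTTGGGG"      # primers spanning the cleavage site
print(integration_pcr_size(wild_type, fwd, rev))   # short product (no integration)
print(integration_pcr_size(edited, fwd, rev))      # longer product (cassette inserted)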

2 Materials

Store all components according to the manufacturer's recommendations. Use ultrapure water for nucleic acid analysis. ZFNs are commercially available from Sigma-Aldrich. Alternative methods were described that allow homemade generation of ZFNs [8, 9] or fast assembly of TALENs [10]. CRISPR/Cas9 plasmids are available from Addgene.

[Fig. 1 is not reproduced here; only its labels "ZF" and "Fok I" (zinc-finger arrays fused to the FokI nuclease domain) survived extraction]

2.1 Cloning

1. Plasmid containing a selection marker of choice, e.g., Green fluorescent protein (GFP) followed by a poly(A) signal, e.g., bovine growth hormone (bGH) poly(A) signal.

2. Genomic DNA from cell line(s) subjected to modifications.

3. Genomic DNA isolation kit.

4. Proofreading DNA Polymerase.

5. Cloning primer for homology arms with appropriate restriction sites.

6. Agarose and agarose gel chamber.

7. Gel purification kit.

8. Restriction enzymes needed for cloning of homology arms.

9. PCR purification kit.

10. T4 DNA Ligase.

11. Competent bacteria.

12. LB-Medium: 5 g/L yeast extract, 10 g/L Tryptone, 10 g/L NaCl.

13. LB-Agar plates: LB-Medium with 15 g/L Agar.

14. Antibiotics, e.g., Ampicillin, Kanamycin.

15. Plasmid DNA preparation kits.

2.2 Cell Culture and Transfection

1. Cell line of choice.

2. Appropriate complete cell culture medium for the cell line of interest containing supplements, serum, and antibiotics where appropriate.

3. Transfection reagent of choice.

4. Cell culture plates (96-well, 24-well, 6-well, 10 and 15 cm).

5. 0.05 or 0.25 % Trypsin-EDTA.

6. Phosphate-buffered saline (PBS).

7. 12×75 mm tube with cell strainer cap.

8. Conical centrifuge tubes.

2.3 Single Cell Analysis

1. Cell sorter.

2. Power SYBR Green Cells-to-CT Kit (Life Technologies, Carlsbad, CA, USA).

3. qPCR primer for reference and target gene.

4. DirectPCR lysis reagent (Peqlab, Wilmington, DE, USA) or mammalian genomic DNA MiniPrep Kit.

5. Integration-PCR primer spanning the nuclease cleavage site.

6. DNA-Polymerase of choice suitable for genotyping PCR.

7. PCR strip tubes or 96-well PCR plates and adhesive films.

8. Thermocycler.

3 Methods

3.1 Cloning of a Donor Plasmid

The targeting approach requires cloning of a donor plasmid (Subheading 3.1) and its transfection into cells together with ZFNs (or any other gene editing tool) (Subheading 3.2). After cell expansion, cells need to be enriched using Fluorescence Activated Cell Sorting (FACS) (Subheading 3.3). FACS is also used to distribute single cells into 96-wells for clonal growth. Finally, cell clones are analyzed for site-specific integration events and target gene expression levels (Subheading 3.4). See Fig. 2 for a protocol workflow. Design and cloning of gene-specific ZFNs or other gene-editing tools is highly user-specific and will not be covered here.

1. Use proofreading DNA polymerases and genomic DNA to PCR amplify about 800 nt long left and right homology arms (see Note 1).

2. Run PCR program for 30 cycles and with an elongation time of 1 min per 1 kb.

3. Load PCR products on an agarose gel (1 % w/v) and let run at 5–8 V/cm.

4. Purify PCR products using a Gel Extraction kit according to manufacturer’s recommendations. Elute in 30 μL pre-warmed water (50–60°C). Measure concentration of PCR products.

5. Use about 400 ng of PCR product and incubate for 1 h at 37 °C with appropriate restriction enzymes.

6. Purify PCR products using a PCR purification kit according to manufacturer’s recommendations. Elute in 20 μL pre-warmed water (50–60 °C) and determine concentrations.

7. In parallel, prepare the donor plasmid accordingly by digesting and purifying the plasmid with the same reagents and protocols.

8. Clone the first homology arm into the donor plasmid by ligating the PCR product and the prepared plasmid using T4 DNA ligase. Use a 3:1 molar ratio (PCR product to plasmid) for optimal ligation efficiency.

9. Transform competent E. coli, e.g., by heat shock (42 °C for 30–45 s, then on ice for 2 min).

10. Streak E. coli on LB plates containing appropriate antibiotics.

11. Incubate plates for 12–16 h at 37 °C.

12. Pick single colonies and inoculate 2.5–5 mL LB-Medium con- taining antibiotics.

13. Grow colonies for 8–12 h and isolate plasmid DNA using a Mini-Prep kit.

14. Sequence-verify your clone harboring the first homology arm.

[Fig. 2 Workflow for lncRNA knockout (reproduced here as text only): Cloning of ZFNs and donor plasmid → Transfection of ZFN and donor plasmid → Expansion of cells (10 d) → 1st FACS: enrich for GFP+ cells → Expansion of cells (10 d) → 2nd FACS: single cell sort of GFP+ cells → Expansion of single cell clones (14–21 d) → Transfer of clones to 24-well plates → Expansion of single cell clones (5–10 d) → Transfer to 96-well and 6-well plates (1–2 d) → Genotyping or expression analysis / Expansion and storage of clones (5–10 d) → Identification of KO clones → Functional analysis of KO clones. Single, homozygous clones can be obtained within 6–8 weeks after ZFN and donor plasmid transfection]

15. Continue cloning the second homology arm into the plasmid obtained above. Repeat steps 7–14 accordingly.

16. Use 20–40 μL of the starting culture used for the Mini-Prep and inoculate 25–35 mL LB-Medium containing antibiotics.

17. Perform plasmid DNA isolation using a Midi-Prep kit.

3.2 Transfection of ZFNs and Donor Plasmid

The optimal transfection protocol highly depends on the cell line that is subjected to manipulations. Transfection conditions should thus be optimized in advance. The protocol introduced here was successfully applied to human A549 lung cancer cells.

1. Seed cells (2–3×10^5 per 6-well) in 2 mL cell culture medium (+10 % FBS, no antibiotics) (see Note 2).

2. The next day, prepare the plasmid mix by combining 3 μg donor plasmid and 0.5 μg of each ZFN plasmid (1 μg ZFN plasmids in total) (see Note 3).

3. Combine the plasmid mix (4 μg) with 8 μL Turbofect transfection reagent (Thermo Scientific) in serum-/antibiotics-free cell culture medium (final volume = 200 μL). Mix briefly.

4. Incubate for 15 min at room temperature.

5. Add the transfection mix dropwise to the cells and shake the plate back and forth for equal distribution.

6. Incubate cells for 4–6 h with the transfection mix.

7. Remove the medium and add fresh, complete growth medium to the cells.

8. Cells might be evaluated for GFP expression prior to further processing.

3.3 Cell Sorting

1. Expand cells for 10 days after donor and ZFN plasmid transfection.

2. Remove the medium, wash cells once with PBS and add Trypsin–EDTA.

3. Incubate cells at 37 °C and allow them to detach (5–15 min).

4. Resuspend cells in complete cell culture medium and transfer into a conical centrifuge tube.

5. Spin down cells at 500×g for 5 min.

6. Completely remove the cell culture medium and resuspend the cell pellet in 2–4 mL PBS/FBS (1 % v/v) by pipetting up and down (see Note 4).

7. Pipet cells into BD Falcon 12×75 mm tubes using the cell strainer cap to filter the cell suspension.

8. Perform steps 2–7 with GFP-negative wild-type cells.

9. Put cells on ice and continue with cell sorting.

10. Use GFP-negative cells to adjust instrument settings and set the threshold for GFP selection.

11. Perform cell sorting to enrich for GFP-positive cells. Sort cells into 1.5 mL reaction tubes containing 50–100 μL complete cell culture medium (see Note 5).

12. Spin down cells in a tabletop centrifuge (800×g, 5 min) and remove supernatant.

13. Resuspend cells in complete growth medium and seed into appropriate cell culture plates (see Note 6).

14. Expand cells for about 10 days to obtain at least one confluent 10 cm plate for further processing.

15. Add 200 μL complete growth medium per well into 96-well plate. Prepare 5–10 plates per cell line/construct/ZFN (see Note 7).

16. Prepare cells and adjust instrument settings as described in steps 2–10.

17. Sort GFP-positive cells into 96-well plates. GFP-negative wild-type cells might be sorted as well to obtain appropriate negative control clones for subsequent biological experiments.

18. Incubate cells at 37 °C. Add 100 μL complete medium after 5–7 days (see Note 8).

3.4 Cell Clone Analysis

1. About 7–10 days after sorting, inspect 96-well plates and mark wells that contain cells.

2. Replace the cell culture medium in the respective wells by carefully removing the old medium using a 200 μL pipet and sterile tips.

3. Continuously inspect 96-wells and mark wells that contain cells.

4. About 14–21 days after cell sorting first single cell clones might be ready for transfer into 24-well plates: Remove medium, wash once with PBS and add about 40 μL Trypsin–EDTA per 96-well. After incubation at 37 °C inspect cells for complete detachment. Resuspend cell clones in about 150 μL complete medium and transfer into 24-wells containing additional 500 μL complete growth medium.

5. After another 5–10 days, cells in 24-well plates might be confluent and are assigned an identification number. Then, cell clones are simultaneously transferred to 96-well and 6-well plates: Remove medium, wash once with PBS and add about 100 μL Trypsin–EDTA per 24-well. After incubation at 37 °C inspect cells for complete detachment. Resuspend cell clones in about 400 μL complete medium and transfer 100 μL into a 96-well and 400 μL into a 6-well containing an additional 2 mL complete growth medium.


[Fig. 3 Genotyping of cell clones by Integration-PCR (gel image not reproduced; lane labels included 1 kb DNA ladder, heterozygous, homozygous and wildtype clones, scored as "Integration" vs. "No Integration"). Primers cover the ZFN cleavage site. Monoallelic and biallelic integration events can be detected due to the different product sizes. In this example, 1 out of 12 clones harbored a biallelic integration of the selection marker after the selection process and thus showed a strong reduction in lncRNA expression (not shown)]


6. The next day, cells in 96-wells are subjected to gene expression or genotyping analysis using the Power SYBR Green Cells-to-CT kit (Life Technologies), the DirectPCR lysis reagent (Peqlab) or the GenElute mammalian genomic DNA MiniPrep Kit (Sigma-Aldrich), according to the manufacturer's recommendations.

7. For genotyping analysis an Integration-PCR is performed using primer pairs that span the ZFN cleavage site (see Note 9). A site-directed integration will lead to a longer PCR product (Fig. 3) (see Note 10); a toy example of calling genotypes from such band sizes is sketched after this list.

8. Corresponding positive, homozygous clones in the 6-well plates are further expanded and transferred to 10 cm plates (see Note 11).

9. Single cell clones might be frozen and stored in liquid nitrogen.
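
[The toy genotype-calling sketch referred to in step 7 above, in Python: given the band sizes seen on the gel (cf. Fig. 3), clones can be binned as wild-type, monoallelic or biallelic. The product sizes and tolerance below are invented for illustration; real values depend on the primers and the integrated cassette.]

WT_SIZE = 500          # assumed product size without integration (bp)
KI_SIZE = 2300         # assumed product size with the integrated cassette (bp)
TOLERANCE = 50         # assumed sizing tolerance of the gel/analyzer (bp)

def call_genotype(band_sizes):
    """Bin a clone by which Integration-PCR products it shows."""
    has_wt = any(abs(b - WT_SIZE) <= TOLERANCE for b in band_sizes)
    has_ki = any(abs(b - KI_SIZE) <= TOLERANCE for b in band_sizes)
    if has_ki and not has_wt:
        return "biallelic integration"
    if has_ki and has_wt:
        return "monoallelic integration"
    if has_wt:
        return "no integration (wild-type)"
    return "no product / re-test"

clones = {"clone_01": [510], "clone_02": [495, 2280], "clone_03": [2310]}
for name, bands in clones.items():
    print(name, "->", call_genotype(bands))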

4 Notes

1. Homology arms should be cloned from the same cell line that will be used for genome editing due to potential single nucleotide polymorphisms (SNPs). Homologous recombination strongly depends on perfect homology and can be impaired by SNPs.

2. The cell line(s) used for ZFN-mediated integration of exogenous DNA must possess a certain homologous recombination rate. Several cell lines might be tested, if no integration events are detected.

3. Although not absolutely required, linearization of the donor plasmid might increase integration rates. Please note that linearized plasmids are less stable and thus a modified transfection protocol might be used. In this case, ZFN plasmids might be transfected prior to the donor plasmid to allow ZFN protein expression.

4. Careful pipetting should be performed to prevent disruption of cells while obtaining a single cell suspension, which is critical for subsequent single cell sorting. Addition of EDTA (1 mM final conc.) to the PBS/1 % FBS solution might be beneficial to prevent cell aggregation.

5. A total of 1–3 % of GFP-positive cells can be anticipated, but this rate might vary and depends on multiple parameters. Depending on the instrument and exact settings, up to 4×10^5 cells can be sorted into one 1.5 mL reaction tube.

6. Antibiotics should be added to the cell culture medium after cell sorting to avoid contaminations.

7. The cell lines’ capability to grow as a single cell colony should be tested beforehand. If a cell sorter (e.g., BD Bioscience FACS Aria II) is used, optimal sorting conditions should be determined in advance. Roughly, 10–40 single cell colonies can be expected per 96-well plate.

8. Some cell lines might show an improved single cell growth if conditioned medium or medium with a higher serum concentration is used (max. 20 % v/v). If conditioned medium is used, sterile-filter it before applying to single cells to avoid contaminations.

9. Alternatively, a Junction-PCR can be performed for genotyping. Here, one primer anneals to a sequence region outside the homology arms and the second primer specifically binds to the newly integrated (exogenous) sequence, e.g., the selection marker (here: GFP).

10. Different amounts of donor plasmid should be tested if high rates of random, nonspecific donor plasmid integrations are observed, i.e., GFP-positive cells that lack a site-specific integration of the donor plasmid. Also, an efficient counter-selection strategy could be applied, e.g., cloning the herpes simplex virus thymidine kinase gene outside the homology arms. Nonspecific integration and expression of this suicide gene confers sensitivity towards ganciclovir [11].

11. In theory, targeted integration on both chromosomes is necessary to obtain an efficient gene knockdown. However, cancer cells might show diverse degrees of gene amplifications and deletions. Also, epigenetically silenced or imprinted genes as well as genes localized on the X or Y chromosomes represent exceptions to the rule. Thus, a single, site-specific integration might already lead to an efficient silencing. On the other hand, multiple integration events must occur simultaneously in human polyploid cells (e.g., hepatocytes, heart muscle cells, megakaryocytes) or in amplified chromosome regions to significantly impair target gene expression.

Acknowledgement

The author wishes to acknowledge the support of his colleagues at the German Cancer Research Center (DKFZ) Heidelberg who helped to establish this method and to set up the protocol. A special thanks goes to Matthias Groß and Dr. Monika Hämmerle for critical reading of the manuscript. T.G. is supported by an Odyssey Postdoctoral Fellowship sponsored by the Odyssey Program and the CFP Foundation at The University of Texas MD Anderson Cancer Center.

References

1. Gutschner T, Diederichs S (2012) The hallmarks of cancer: a long non-coding RNA point of view. RNA Biol 9(6):703–719. doi:10.4161/rna.20481

2. Jackson AL, Linsley PS (2010) Recognizing and avoiding siRNA off-target effects for target identification and therapeutic application. Nat Rev Drug Discov 9(1):57–67. doi:10.1038/nrd3010

3. Gutschner T, Baas M, Diederichs S (2011) Noncoding RNA gene silencing through genomic integration of RNA destabilizing elements using zinc finger nucleases. Genome Res 21(11):1944–1954. doi:10.1101/gr.122358.111

4. Gutschner T, Hammerle M, Diederichs S (2013) MALAT1—a paradigm for long noncoding RNA function in cancer. J Mol Med 91(7):791–801. doi:10.1007/s00109-013-1028-y

5. Ji P, Diederichs S, Wang W, Boing S, Metzger R, Schneider PM, Tidow N, Brandt B, Buerger H, Bulk E, Thomas M, Berdel WE, Serve H, Muller-Tidow C (2003) MALAT-1, a novel noncoding RNA, and thymosin beta4 predict metastasis and survival in early-stage non-small cell lung cancer. Oncogene 22(39):8031–8041. doi:10.1038/sj.onc.1206928

6. Miller JC, Holmes MC, Wang J, Guschin DY, Lee YL, Rupniewski I, Beausejour CM, Waite AJ, Wang NS, Kim KA, Gregory PD, Pabo CO, Rebar EJ (2007) An improved zinc-finger nuclease architecture for highly specific genome editing. Nat Biotechnol 25(7):778–785. doi:10.1038/nbt1319

7. Gutschner T, Hammerle M, Eissmann M, Hsu J, Kim Y, Hung G, Revenko A, Arun G, Stentrup M, Gross M, Zornig M, MacLeod AR, Spector DL, Diederichs S (2013) The noncoding RNA MALAT1 is a critical regulator of the metastasis phenotype of lung cancer cells. Cancer Res 73(3):1180–1189. doi:10.1158/0008-5472.CAN-12-2850

8. Fu F, Voytas DF (2013) Zinc Finger Database (ZiFDB) v2.0: a comprehensive database of C(2)H(2) zinc fingers and engineered zinc finger arrays. Nucleic Acids Res 41(Database issue):D452–D455. doi:10.1093/nar/gks1167

9. Sander JD, Dahlborg EJ, Goodwin MJ, Cade L, Zhang F, Cifuentes D, Curtin SJ, Blackburn JS, Thibodeau-Beganny S, Qi Y, Pierick CJ, Hoffman E, Maeder ML, Khayter C, Reyon D, Dobbs D, Langenau DM, Stupar RM, Giraldez AJ, Voytas DF, Peterson RT, Yeh JR, Joung JK (2011) Selection-free zinc-finger-nuclease engineering by context-dependent assembly (CoDA). Nat Methods 8(1):67–69. doi:10.1038/nmeth.1542

10. Cermak T, Doyle EL, Christian M, Wang L, Zhang Y, Schmidt C, Baller JA, Somia NV, Bogdanove AJ, Voytas DF (2011) Efficient design and assembly of custom TALEN and other TAL effector-based constructs for DNA targeting. Nucleic Acids Res 39(12):e82. doi:10.1093/nar/gkr218

11. Moolten FL, Wells JM (1990) Curability of tumors bearing herpes thymidine kinase genes transferred by retroviral vectors. J Natl Cancer Inst 82(4):297–300


Who Owns the Biggest Biotech Discovery of the Century? There’s a bitter fight over the patents for CRISPR, a breakthrough new form of DNA editing.

Control over genome editing could be worth billions. [Yes, there is already ample independent experimental evidence that "fractal defects" of the genome are linked to cancers, schizophrenia, autism, auto-immune diseases, etc. Of course, one first needs to find such "fractal defects" (see US patent 8,280,641, in force), so that one would know what to edit out. FractoGene is a result of the "geometrization of genomics". Since mathematization of biology is rarely well received by non-mathematically-minded biologists, the result of understanding the sensorimotor coordination function of cerebellar neural nets really broke through not in biology (AJP was actually denied continuation of his grant support since the actual mathematics contradicted the "Central Dogma" - though Francis Crick confessed later that he knew neither mathematics nor what the word "Dogma" actually meant; it just "sounded good"). One of the most successful fighter jets in history, the F15 (Israel shot down all enemy aircraft without losing a single F15), could in fact be landed "on one wing" by a superb Israeli pilot; the patent-version of Pellionisz' "Tensor Network Theory" led to automation by NASA, such that the landing could be done by any lesser pilot, purely by automation. Geometrization of the function of the cerebellar neural net immediately yielded the Alexander von Humboldt Prize from Germany (such that on a 6-month lecture tour in Germany the concepts were widely disseminated, and the inventor faced the trilemma of switching his professorship at New York University to one in Germany, or to his native Hungary - or returning to Silicon Valley; today's decisions also include BRICS countries, as the USA is without a streamlined "Genome Program" - genomics is scattered from NIH to NSF, DARPA, DoE and even Homeland Defense). For NASA, it took a decade from the blueprint to a successful implementation. Indeed, intellectual property can be mind-boggling, especially when university and/or government parties are involved in invention and/or assignment; at the time Dr. Pellionisz turned to developing the advanced geometry of recursive genome function, he steered clear of any such cumbersome involvement. This, of course, meant that the inventor financed the entire development "out of pocket" and could not pay for "accelerated issuance" of his patent. It took more than a full decade for the USPTO to understand and to issue patent 8,280,641 (though, in retrospect, it may appear "yeah, sure" to some now - and the patent is in force till late March of 2026). There is a single inventor, and the patent is personal property (assigned to none other than the inventor). Now, some agencies have a hard time explaining the $100 million project of "cataloging cancer mutations" (the number of mutations is not infinite, given the finite amount of information compressed into the genome, but it is certainly astronomical, and it makes no sense either scientifically or economically to waste taxpayers' money on "big data" projects that result mostly in prolonged suffering). At least three leading "cloud computing companies" are already set up for hunting "fractal defects" - with myriads of "wet labs" to hone "genome editing" to clean up genomic glitches. Help is available, given appropriate arrangements - andras_at_pellionisz_dot_com]

By Antonio Regalado on December 4, 2014

Last month in Silicon Valley, biologists Jennifer Doudna and Emmanuelle Charpentier showed up in black gowns to receive the $3 million Breakthrough Prize, a glitzy award put on by Internet billionaires including Mark Zuckerberg. They’d won for developing CRISPR-Cas9, a “powerful and general technology” for editing genomes that’s been hailed as a biotechnology breakthrough.

Not dressing up that night was Feng Zhang (see 35 Innovators Under 35, 2013), a researcher in Cambridge at the MIT-Harvard Broad Institute. But earlier this year Zhang claimed his own reward. In April, he won a broad U.S. patent on CRISPR-Cas9 that could give him and his research center control over just about every important commercial use of the technology.

How did the high-profile prize for CRISPR and the patent on it end up in different hands? That’s a question now at the center of a seething debate over who invented what, and when, that involves three heavily financed startup companies, a half-dozen universities, and thousands of pages of legal documents.

“The intellectual property in this space is pretty complex, to put it nicely,” says Rodger Novak, a former pharmaceutical industry executive who is now CEO of CRISPR Therapeutics, a startup in Basel, Switzerland, that was cofounded by Charpentier. “Everyone knows there are conflicting claims.”

At stake are rights to an invention that may be the most important new genetic engineering technique since the beginning of the biotechnology age in the 1970s. The CRISPR system, dubbed a “search and replace function” for DNA, lets scientists easily disable genes or change their function by replacing DNA letters. During the last few months, scientists have shown that it’s possible to use CRISPR to rid mice of muscular dystrophy, cure them of a rare liver disease, make human cells immune to HIV, and genetically modify monkeys (see “Genome Surgery” and “10 Breakthrough Technologies 2014: Genome Editing”).
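
[As a toy rendering of the "search and replace" metaphor, the Python sketch below locates a 20-nucleotide target followed by an "NGG" PAM - the requirement that guides Cas9 to its site - and splices in a replacement string. It illustrates only the targeting logic; it is not a model of the actual cutting, repair-template or cellular repair steps, and the sequences are invented.]

import re

def find_target(dna: str, guide: str) -> int:
    """Return the start of `guide` in `dna` if it is followed by an NGG PAM, else -1."""
    for m in re.finditer(re.escape(guide), dna):
        pam = dna[m.end():m.end() + 3]
        if len(pam) == 3 and pam[1:] == "GG":       # N-G-G
            return m.start()
    return -1

def edit(dna: str, guide: str, replacement: str) -> str:
    """Replace the targeted 20-nt stretch with `replacement` (the 'replace' half)."""
    i = find_target(dna, guide)
    if i < 0:
        return dna                                  # no PAM-adjacent match: leave unchanged
    return dna[:i] + replacement + dna[i + len(guide):]

guide = "ACGTACGTACGTACGTACGT"                      # 20-nt guide (toy sequence)
locus = "TTTT" + guide + "TGG" + "CCCC"             # target followed by a TGG PAM
print(edit(locus, guide, "ACGTACGTACGAACGTACGT"))   # single-letter T -> A swap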

No CRISPR drug yet exists. But if CRISPR turns out to be as important as scientists hope, commercial control over the underlying technology could be worth billions.

The control of the patents is crucial to several startups that together quickly raised more than $80 million to turn CRISPR into cures for devastating diseases. They include Editas Medicine and Intellia Therapeutics, both of Cambridge, Massachusetts. Companies expect that clinical trials could begin in as little as three years.

Zhang cofounded Editas Medicine, and this week the startup announced that it had licensed his patent from the Broad Institute. But Editas doesn’t have CRISPR sewn up. That’s because Doudna, a structural biologist at the University of California, Berkeley, was a cofounder of Editas, too. And since Zhang’s patent came out, she’s broken off with the company, and her intellectual property—in the form of her own pending patent—has been licensed to Intellia, a competing startup unveiled only last month. Making matters still more complicated, Charpentier sold her own rights in the same patent application to CRISPR Therapeutics.

In an e-mail, Doudna said she no longer has any involvement with Editas. “I am not part of the company’s team at this point,” she said. Doudna declined to answer further questions, citing the patent dispute.

Few researchers are now willing to discuss the patent fight. Lawsuits are certain and they worry anything they say will be used against them. “The technology has brought a lot of excitement, and there is a lot of pressure, too. What are we going to do? What kind of company do we want?” Charpentier says. “It all sounds very confusing for an outsider, and it’s also quite confusing as an insider.”

Academic labs aren’t waiting for the patent claims to get sorted out. Instead, they are racing to assemble very large engineering teams to perfect and improve the genome-editing technique. On the Boston campus of Harvard’s medical school, for instance, George Church, a specialist in genomics technology, says he now has 30 people in his lab working on it.

Because of all the new research, Zhang says, the importance of any patent, including his own, isn’t entirely clear. “It’s one important piece, but I don’t really pay attention to patents,” he says. “What the final form of this technology is that changes people’s lives may be very different.”

The new gene-editing system was unearthed in bacteria—organisms that use it as a way to identify, and then carve up, the DNA of invading viruses. That work stretched across a decade. Then, in June 2012, a small team led by Doudna and Charpentier published a key paper showing how to turn that natural machinery into a “programmable” editing tool, to cut any DNA strand, at least in a test tube.

The next step was clear—scientists needed to see if the editing magic could work on the genomes of human cells, too. In January 2013, the laboratories of Harvard’s Church and Broad’s Zhang were first to publish papers showing that the answer was yes. Doudna published her own results a few weeks later.

Everyone by then realized that CRISPR might become an immensely flexible way to rewrite DNA, and possibly to treat rare metabolic problems and genetic diseases as diverse as hemophilia and the neurodegenerative disease Huntington’s.

Venture capital groups quickly began trying to recruit the key scientists behind CRISPR, tie up the patents, and form startups. Charpentier threw in with CRISPR Therapeutics in Europe. Doudna had already started a small company, Caribou Biosciences, but in 2013 she joined Zhang and Church as a cofounder of Editas. With $43 million from leading venture funds Third Rock Ventures (see “50 Smartest Companies: Third Rock Ventures”), Polaris Partners, and Flagship Ventures, Editas looked like the dream team of gene-editing startups.

In April of this year, Zhang and the Broad won the first of several sweeping patents that cover using CRISPR in eukaryotes—or any species whose cells contain a nucleus (see “Broad Institute Gets Patent on Revolutionary Gene-Editing Method”). That meant that they’d won the rights to use CRISPR in mice, pigs, cattle, humans—in essence, in every creature other than bacteria.

The patent came as a shock to some. That was because Broad had paid extra to get it reviewed very quickly, in less than six months, and few knew it was coming. Along with the patent came more than 1,000 pages of documents. According to Zhang, Doudna’s prediction in her own earlier patent application that her discovery would work in humans was “mere conjecture”; instead, he says, he was the first to show it, in a separate and “surprising” act of invention.

The patent documents have caused consternation. The scientific literature shows that several scientists managed to get CRISPR to work in human cells. In fact, its easy reproducibility in different organisms is the technology’s most exciting hallmark. That would suggest that, in patent terms, it was “obvious” that CRISPR would work in human cells, and that Zhang’s invention might not be worthy of its own patent.


What’s more, there’s scientific credit at stake. In order to show he was “first to invent” the use of CRISPR-Cas in human cells, Zhang supplied snapshots of lab notebooks that he says show he had the system up and running in early 2012, even before Doudna and Charpentier published their results or filed their own patent application. That timeline would mean he hit on the CRISPR-Cas editing system independently. In an interview, Zhang affirmed he’d made the discoveries on his own. Asked what he’d learned from Doudna and Charpentier’s paper, he said “not much.”

Not everyone is convinced. “All I can say is that we did it in my lab with Jennifer Doudna,” says Charpentier, now a professor at the Helmholtz Centre for Infection Research and Hannover Medical School in Germany. “Everything here is very exaggerated because this is one of those unique cases of a technology that people can really pick up easily, and it’s changing researchers’ lives. Things are happening fast, maybe a bit too fast.”

This isn’t the end of the patent fight. Although Broad moved very swiftly, lawyers for Doudna and Charpentier are expected to mount an interference proceeding in the U.S.—that is, a winner-takes-all legal process in which one inventor can take over another’s patent. Who wins will depend on which scientist can produce lab notebooks, e-mails, or documents with the earliest dates.

“I am very confident that the future will clarify the situation,” says Charpentier. “And I would like to believe the story is going to end up well.”


NIH grants aim to decipher the language of gene regulation

Bethesda, Md., Jan. 5, 2015 - The National Institutes of Health has awarded grants of more than $28 million aimed at deciphering the language of how and when genes are turned on and off. These awards emanate from the recently launched Genomics of Gene Regulation (GGR) program of the National Human Genome Research Institute (NHGRI), part of NIH.

"There is a growing realization that the ways genes are regulated to work together can be important for understanding disease," said Mike Pazin, Ph.D., a program director in the Functional Analysis Program in NHGRI's Division of Genome Sciences. "The GGR program aims to develop new ways for understanding how the genes and switches in the genome fit together as networks. Such knowledge is important for defining the role of genomic differences in human health and disease."

With these new grants, researchers will study gene networks and pathways in different systems in the body, such as skin, immune cells and lung. The resulting insights into the mechanisms controlling gene expression may ultimately lead to new avenues for developing treatments for diseases affected by faulty gene regulation, such as cancer, diabetes and Parkinson's disease.

Over the past decade, numerous studies have suggested that genomic regions outside of protein-coding regions harbor variants that play a role in disease. Such regions likely contain gene-control elements that are altered by these variants, which increase the risk for a disease.

"Knowing the interconnections of these regulatory elements is critical for understanding the genomic basis of disease," Dr. Pazin said. "We do not have a good way to predict whether particular regulatory elements are turning genes off or activating them, or whether these elements make genes responsive to a condition, such as infection. We expect these new projects will develop better methods to answer these types of questions using genomic data."

[There is an interesting new scenario. This columnist (AJP; andras_at_pellionisz_dot_com) has devoted close to half a century of very hard work to developing an advanced geometrical understanding of the function of neural and genomic systems, as they arise from their so well known and so beloved structure. Geometrization (mathematization) of biology, however, is rather poorly received: when Mandelbrot was offered the chance to lead it, with very significant resources, he declined the offer since "biologists were not ready"; Benoit upheld this impression throughout his life, as shown in his Memoirs.


End of cancer-genome project prompts rethink: Geneticists debate whether focus should shift from sequencing genomes to analysing function.

Nature, 2015 January 5.

A mammoth US effort to genetically profile 10,000 tumours has officially come to an end. Started in 2006 as a US$100-million pilot, The Cancer Genome Atlas (TCGA) is now the biggest component of the International Cancer Genome Consortium, a collaboration of scientists from 16 nations that has discovered nearly 10 million cancer-related mutations.

The question is what to do next. Some researchers want to continue the focus on sequencing; others would rather expand their work to explore how the mutations that have been identified influence the development and progression of cancer.

“TCGA should be completed and declared a victory,” says Bruce Stillman, president of Cold Spring Harbor Laboratory in New York. “There will always be new mutations found that are associated with a particular cancer. The question is: what is the cost–benefit ratio?”

Stillman was an early advocate for the project, even as some researchers feared that it would drain funds away from individual grants. Initially a three-year project, it was extended for five more years. In 2009, it received an additional $100 million from the US National Institutes of Health plus $175 million from stimulus funding that was intended to spur the US economy during the global economic recession.

The project initially struggled. At the time, the sequencing technology worked only on fresh tissue that had been frozen rapidly. Yet most clinical biopsies are fixed in paraffin and stained for examination by pathologists. Finding and paying for fresh tissue samples became the programme’s largest expense, says Louis Staudt, director of the Office for Cancer Genomics at the National Cancer Institute (NCI) in Bethesda, Maryland.

Also a problem was the complexity of the data. Although a few ‘drivers’ stood out as likely contributors to the development of cancer, most of the mutations formed a bewildering hodgepodge of genetic oddities, with little commonality between tumours. Tests of drugs that targeted the drivers soon revealed another problem: cancers are often quick to become resistant, typically by activating different genes to bypass whatever cellular process is blocked by the treatment.

Despite those difficulties, nearly every aspect of cancer research has benefited from TCGA, says Bert Vogelstein, a cancer geneticist at Johns Hopkins University in Baltimore, Maryland. The data have yielded new ways to classify tumours and pointed to previously unrecognized drug targets and carcinogens. But some researchers think that sequencing still has a lot to offer. In January, a statistical analysis of the mutation data for 21 cancers showed that sequencing still has the potential to find clinically useful mutations (M. S. Lawrence et al. Nature 505, 495–501; 2014).

On 2 December, Staudt announced that once TCGA is completed, the NCI will continue to intensively sequence tumours in three cancers: ovarian, colorectal and lung adenocarcinoma. It then plans to evaluate the fruits of this extra effort before deciding whether to add back more cancers.

Expanded scope

But this time around, the studies will be able to incorporate detailed clinical information about the patient’s health, treatment history and response to therapies. Because researchers can now use paraffin-embedded samples, they can tap into data from past clinical trials, and study how mutations affect a patient’s prognosis and response to treatment. Staudt says that the NCI will be announcing a call for proposals to sequence samples taken during clinical trials using the methods and analysis pipelines established by the TCGA.

The rest of the International Cancer Genome Consortium, slated to release early plans for a second wave of projects in February, will probably take a similar tack, says co-founder Tom Hudson, president of the Ontario Institute for Cancer Research in Toronto, Canada. A focus on finding sequences that make a tumour responsive to therapy has already been embraced by government funders in several countries eager to rein in health-care costs, he says. “Cancer therapies are very expensive. It’s a priority for us to address which patients would respond to an expensive drug.”

The NCI is also backing the creation of a repository for data not only from its own projects, but also from international efforts. This is intended to bring data access and analysis tools to a wider swathe of researchers, says Staudt. At present, the cancer genomics data constitute about 20 petabytes (10^15 bytes), and are so large and unwieldy that only institutions with significant computing power can access them. Even then, it can take four months just to download them.
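
[A back-of-the-envelope check by this editor, under an assumed sustained transfer rate: downloading 20 petabytes in roughly four months (about \(10^{7}\) seconds) implies

\[
\frac{20 \times 10^{15}\ \text{bytes} \times 8\ \text{bits/byte}}{1.0 \times 10^{7}\ \text{s}}
\;\approx\; 1.6 \times 10^{10}\ \text{bits/s}
\;\approx\; 16\ \text{Gb/s},
\]

a sustained rate available only to institutions with exceptional network capacity, which is precisely the access problem the repository is meant to solve.]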

Stimulus funding cannot be counted on to fuel these plans, acknowledges Staudt. But cheaper sequencing and the ability to use biobanked biopsies should bring down the cost, he says. “Genomics is at the centre of much of what we do in cancer research,” he says. “Now we can ask questions in a more directed way.”

Nature 517, 128–129 (08 January 2015) doi:10.1038/517128a


Variation in cancer risk among tissues can be explained by the number of stem cell divisions

Cristian Tomasetti [1,*], Bert Vogelstein [2,*]

Science 2 January 2015:

Vol. 347 no. 6217 pp. 78-81

DOI: 10.1126/science.1260825

Author Affiliations:

[1] Division of Biostatistics and Bioinformatics, Department of Oncology, Sidney Kimmel Cancer Center, Johns Hopkins University School of Medicine and Department of Biostatistics, Johns Hopkins Bloomberg School of Public Health, 550 North Broadway, Baltimore, MD 21205, USA.

[2] Ludwig Center for Cancer Genetics and Therapeutics and Howard Hughes Medical Institute, Johns Hopkins Kimmel Cancer Center, 1650 Orleans Street, Baltimore, MD 21205, USA.

*Corresponding author. E-mail: ctomasetti@jhu.edu (C.T.); vogelbe@jhmi.edu (B.V.)

ABSTRACT

Some tissue types give rise to human cancers millions of times more often than other tissue types. Although this has been recognized for more than a century, it has never been explained. Here, we show that the lifetime risk of cancers of many different types is strongly correlated (0.81) with the total number of divisions of the normal self-renewing cells maintaining that tissue’s homeostasis. These results suggest that only a third of the variation in cancer risk among tissues is attributable to environmental factors or inherited predispositions. The majority is due to “bad luck,” that is, random mutations arising during DNA replication in normal, noncancerous stem cells. This is important not only for understanding the disease but also for designing strategies to limit the mortality it causes.

EDITOR'S SUMMARY

Crunching the numbers to explain cancer

Why do some tissues give rise to cancer in humans a million times more frequently than others? Tomasetti and Vogelstein conclude that these differences can be explained by the number of stem cell divisions. By plotting the lifetime incidence of various cancers against the estimated number of normal stem cell divisions in the corresponding tissues over a lifetime, they found a strong correlation extending over five orders of magnitude. This suggests that random errors occurring during DNA replication in normal stem cells are a major contributing factor in cancer development. Remarkably, this “bad luck” component explains a far greater number of cancers than do hereditary and environmental factors.
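
[Illustrative note by this editor: the 0.81 correlation quoted above is computed on a log-log scale, lifetime risk against total stem cell divisions. The minimal Python sketch below shows that kind of calculation on invented tissue values, not the authors' 31-tissue dataset.

# Minimal, illustrative sketch (hypothetical numbers, not the Tomasetti-Vogelstein
# data): correlate log10(lifetime cancer risk) with log10(total lifetime stem cell
# divisions) across a few made-up tissues.
import numpy as np

# (tissue, total lifetime stem cell divisions, lifetime cancer risk)
tissues = [
    ("tissue_A", 1e12, 4.8e-2),
    ("tissue_B", 1e10, 2.0e-3),
    ("tissue_C", 1e9,  6.0e-4),
    ("tissue_D", 1e7,  3.0e-5),
]

log_divisions = np.log10([t[1] for t in tissues])
log_risk      = np.log10([t[2] for t in tissues])

# Correlation on the log-log scale, as in the published plot
r = np.corrcoef(log_divisions, log_risk)[0, 1]
print(f"log-log correlation r = {r:.2f}")
print(f"r squared = {r**2:.2f}")

On the real dataset the reported correlation is 0.81; its square, about 0.65, is evidently the source of the "two-thirds" figure quoted in the press coverage below.]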

Cancer’s Random Assault

By DENISE GRADY

JAN. 5, 2015

New York Times

It may sound flippant to say that many cases of cancer are caused by bad luck, but that is what two scientists suggested in an article published last week in the journal Science. The bad luck comes in the form of random genetic mistakes, or mutations, that happen when healthy cells divide.

Random mutations may account for two-thirds of the risk of getting many types of cancer, leaving the usual suspects — heredity and environmental factors — to account for only one-third, say the authors, Cristian Tomasetti and Dr. Bert Vogelstein, of Johns Hopkins University School of Medicine. “We do think this is a fundamental mechanism, and this is the first time there’s been a measure of it,” said Dr. Tomasetti, an applied mathematician.

Though the researchers suspected that chance had a role, they were surprised at how big it turned out to be.

“This was definitely beyond my expectations,” Dr. Tomasetti said. “It’s about double what I would have thought.”

The finding may be good news to some people, bad news to others, he added.

Smoking greatly increases the risk of lung cancer, but for other cancers, the causes are not clear. And yet many patients wonder if they did something to bring the disease on themselves, or if they could have done something to prevent it.

“For the average cancer patient, I think this is good news,” Dr. Tomasetti said. “Knowing that over all, a lot of it is just bad luck, I think in a sense it’s comforting.”

Among people who do not have cancer, Dr. Tomasetti said he expected there to be two camps.

“There are those who would like to control every single thing happening in their lives, and for those, this may be very scary,” he said. “ ‘There is a big component of cancer I can just do nothing about.’

“For the other part of the population, it’s actually good news. ‘I’m happy. I can of course do all I know that’s important to not increase my risk of cancer, like a good diet, exercise, avoiding smoking, but on the other side, I don’t want to stress out about every single thing or every action I take in my life, or everything I touch or eat.’ ”

Dr. Vogelstein said the question of causation had haunted him for decades, since he was an intern and his first patient was a 4-year-old girl with leukemia. Her parents were distraught and wanted to know what had caused the disease. He had no answer, but time and time again heard the same question from patients and their families, particularly parents of children with cancer.

“They think they passed on a bad gene or gave them the wrong foods or exposed them to paint in the garage,” he said. “And it’s just wrong. It gave them a lot of guilt.”

Dr. Tomasetti and Dr. Vogelstein said the finding that so many cases of cancer occur from random genetic accidents means that it may not be possible to prevent them, and that there should be more of an emphasis on developing better tests to find cancers early enough to cure them.

“Cancer leaves signals of its presence, so we just have to basically get smarter about how to find them,” Dr. Tomasetti said.

Their conclusion comes from a statistical model they developed using data in the medical literature on rates of cell division in 31 types of tissue. They looked specifically at stem cells, which are a small, specialized population in each organ or tissue that divide to provide replacements for cells that wear out.

Dividing cells must make copies of their DNA, and errors in the process can set off the uncontrolled growth that leads to cancer.

The researchers wondered if higher rates of stem-cell division might increase the risk of cancer simply by providing more chances for mistakes.

Dr. Vogelstein said research of this type became possible only in recent years, because of advances in the understanding of stem-cell biology.


The analysis did not include breast or prostate cancers, because there was not enough data on rates of stem-cell division in those tissues.

A starting point for their research was an observation made more than 100 years ago but never really explained: Some tissues are far more cancer-prone than others. In the large intestine, for instance, the lifetime cancer risk is 4.8 percent — 24 times higher than in the small intestine, where it is 0.2 percent.

The scientists found that the large intestine has many more stem cells than the small intestine, and that they divide more often: 73 times a year, compared with 24 times. In many other tissues, rates of stem cell division also correlated strongly with cancer risk.
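
[Editorial note, as a schematic reading of the comparison above (not the authors' exact bookkeeping): the quantity plotted against lifetime risk in the Science paper is, in essence, the total number of stem cell divisions a tissue accumulates over a lifetime,

\[
D_{\text{tissue}} \;\approx\; N_{\text{stem}} \times d \times T,
\]

where \(N_{\text{stem}}\) is the number of self-renewing stem cells in the tissue, \(d\) the number of divisions per stem cell per year (73 for the large intestine versus 24 for the small intestine, per the article), and \(T\) the number of years over which the tissue is maintained. A larger stem cell pool and a faster division rate both multiply into a larger \(D\), consistent with the roughly 24-fold higher lifetime risk quoted for the large intestine.]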

Some cancers, including certain lung and skin cancers, are more common than would be expected just from their rates of stem-cell division — which matches up with the known importance of environmental factors like smoking and sun exposure in those diseases. Others more common than expected were linked to cancer-causing genes.

To help explain the findings, Dr. Tomasetti cited the risks of a car accident. In general, the longer the trip, the higher the odds of a crash. Environmental factors like bad weather can add to the basic risk, and so can defects in the car.

“This is a good picture of how I see cancer,” he said. “It’s really the combination of inherited factors, environment and chance. At the base, there is the chance of mutations, to which we add, either because of things we inherited or the environment, our lifestyle.”

Dr. Kenneth Offit, chief of the clinical genetics service at Memorial Sloan Kettering Cancer Center in Manhattan, called the article “an elegant biological explanation of the complex pattern of cancers observed in different human tissues.”


Finding the simple patterns in a complex world (Barnsley: "Cancers are fractals")

An ANU mathematician has developed a new way to uncover simple patterns that might underlie apparently complex systems, such as clouds, cracks in materials or the movement of the stockmarket.

The method, named fractal Fourier analysis, is based on a new branch of mathematics called fractal geometry.

The method could help scientists better understand the complicated signals that the body gives out, such as nerve impulses or brain waves.

"It opens up a whole new way of analysing signals," said Professor Michael Barnsley, who presented his work at the New Directions in Fractal Geometry conference at ANU.

"Fractal Geometry is a new branch of mathematics that describes the world as it is, rather than acting as though it's made of straight lines and spheres. There are very few straight lines and circles in nature. The shapes you find in nature are rough."

The new analysis method is closely related to conventional Fourier analysis, which is integral to modern image handling and audio signal processing.

"Fractal Fourier analysis provides a method to break complicated signals up into a set of well understood building blocks, in a similar way to how conventional Fourier analysis breaks signals up into a set of smooth sine waves," Professor Barnsley said.

Professor Barnsley's work draws on the work of Karl Weierstrass from the late 19th Century, who discovered a family of mathematical functions that were continuous, but could not be differentiated.
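
[For reference, the classical example Weierstrass published is the everywhere-continuous, nowhere-differentiable function

\[
W(x) \;=\; \sum_{n=0}^{\infty} a^{n} \cos\!\left(b^{n}\pi x\right), \qquad 0 < a < 1,
\]

with \(b\) a positive odd integer satisfying \(ab > 1 + \tfrac{3\pi}{2}\) in Weierstrass's original proof; Hardy later showed that \(ab \geq 1\) suffices. The graph of such a function is rough at every magnification, which is precisely the fractal character Professor Barnsley builds on.]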

"There are terrific advances to be made by breaking loose from the thrall of continuity and differentiability," Professor Barnsley said.

"The body is full of repeating branch structures – the breathing system, the blood supply system, the arrangement of skin cells, even cancer is a fractal."

[Michael Barnsley - with Benoit Mandelbrot, the founder of the field, now gone - is a paramount leader of both the mathematics of fractals and its applications. Although the hitherto most lucrative application (fractal prediction of the obviously non-differentiable stock-price curves) was led by neither of them (see Elliott Wave Theory), chances are that the required mathematical, algorithmic and software development will call for such significant investment that "cloud computing companies" might spearhead or even monopolize the industry of FractoGene. Cloud computing provides the capital, the infrastructure and the built-in capacity to enforce royalties for algorithms run on myriads of their servers. 2015 is likely to be the year when the horse-race fully unfolds - andras_at_pellionisz_dot_com]


A fractal geometric model of prostate carcinoma and classes of equivalence

[There is no need to read the poster - or the paper in print. Just looking at the Romanesco broccoli (or the similarly widespread Hilbert fractal) will remind everyone by 2015 that "fractal genome grows fractal organisms" (FractoGene). What other concept grasps the essence of Recursive Genome Function? - Pellionisz_dot_com]