Newsletter of HoloGenomics

Genomics, Epigenomics integrated into Informatics:


fractal cancer - 4,620,000 Google hits, July 17, 2017

fractal cancer - 4,240,000 Google hits, June 28, 2017

fractal cancer - 1,730,000 Google hits, June 5, 2017 (what happened by the end of May?)

fractal cancer - 961,000 Google hits, June 3, 2017

fractal genome - 416,000 Google hits, May 23, 2017


fractogene - "fractal genome grows fractal organisms"

21,900 Google hits, July 17, 2017

4,230 Google hits, June 28, 2017

2,630 Google hits, June 3, 2017

FractoGene catapults above 1 million hits of cause or effect

Dr. Losa on FractoGene of Dr. Pellionisz

"The theoretical significance is that the fractality found in DNA and organisms, for a long time “apparently unrelated,” was put into a “cause and effect” relationship by the principle of recursive genome function"





Four Zero Eight 891-718_Seven

The Next Big Thing in Silicon Valley:

Dawning to Apple (and Google...) that Personal Genome will end up on Smartphones

Grail combines Microsoft, Amazon Founders, Illumina Board Dir. (Gates, Bezos, Flatley)

Google-Stanford Joint Venture for Precision Medicine

Intel going for Precision Medicine

GE Health sponsors Venter's Human Longevity (in Silicon Valley, with Dr. Och)

All within a 15-mile radius in Silicon Valley.

Technology is ready, Biology is catching up, fast.

Key is the secured Intellectual Property

FractoGene - "Fractal Genome Grows Fractal Organisms"

You See Above The Future: "Double Technology Disruption" Of New School Genomics:

either Apple Smartphone Genome Analytics by SmidgION of Oxford Nanopore, or

Google getting serious about building its own iPhone

A Compilation by Andras J. Pellionisz, see Contact, Bio and References here


Skip to HoloGenomics Table of Contents

2007 Post-Encode

2007 Pre-Encode




The FractoGene Decade (2002-2012)

Pellionisz' FractoGene, 2002 (early media coverage)

Table of Contents

(Jul 19) Whole-genome sequencing reveals mutations outside of protein-coding regions
(Jul 02) Psst, the human genome was never completely sequenced. Some scientists say it should be
(Jun 21) University of Debrecen, Hungary and BGI Group sign MOU (China penetrates Central European Genome Market)
(Jun 17) Fractal Cancer - Fractal Genome - Critical Juncture for NIH and its National Cancer Institute
(Jun 15) Lung Cancer - a Fractal Viewpoint (why is Academia hesitant to embrace disruptive breakthroughs)
(Jun 02) Fractal Cancer - Sokolov Chapter in Cancer Nanotechnology Textbook
(May 31) Researchers discover hundreds of unexpected mutations from new gene editing technology
(May 12) The future of forensic neurosciences with fractal geometry
(Mar 02) Why Genomics Isn't All It's Cracked Up To Be
(Feb 21) Craig Venter Mapped The Genome. Now He's Trying To Decode Death
(Feb 18) China Looks for Fractal Experts - request from Hohhot
(Feb 18) Bill Gates: Bioterrorism could kill more than nuclear war — but no one is ready to deal with it (except a few unmentioned...)
(Feb 16) How Trump can make the most of a nonpartisan cancer 'moonshot'
(Feb 12) China aggressively challenges US lead in precision medicine
(Feb 09) Patriots cheerleader and MIT researcher Theresa Oei does it all - the new generation leaps over the nonsense of Junk DNA
(Feb 02) NIH to expand critical catalog for genomics research
(Feb 04) The mysterious 98%: Scientists look to shine light on the 'dark genome'
(Feb 02) Debunking the ‘Junk DNA’ Theory
(Feb 01) Two Infants Cured of Terminal Cancer by Breakthrough Gene-Editing Therapy
(Jan 30) Could Cancer Drugs Treat Autism?
(Jan 29) Identification of neutral tumor evolution across cancer types (Tumor Evolution Is Fractal!)
(Jan 26) The Trillioner is Getting IT
(Jan 15) Illumina Taps Garret Hampton, One of the World’s Leading Clinical Genomics Experts, to Head its Clinical Genomics Unit
(Jan 12) Illumina Promises To Sequence Human Genome For $100 -- But Not Quite Yet
(Jan 09) Craig Venter Used Own Posh Health Clinic To Diagnose His Cancer
(Jan 09) Illumina Strikes Deals With Philips, IBM to Interpret Cancer Genomic Data
(Jan 06) Grail Seeks to Raise $1B in Series B; Illumina's Stake to Fall Below 20 Percent
(Jan 05) Biden looks to continue cancer work (Moon Shot exploded before launch)
(Jan 01) Hungary Launches National Oncogenomics Program
(Dec 15) Washington D.C. making history
(Dec 12) Mr. President-Elect, The Genome is Fractal! Mathematical analysis reveals (fractal) architecture of the human genome
(Dec 10) Maryland congressman in running to head NIH?
(Dec 01) iHope - gratis full sequencing (and interpretation?) by Illumina
(Nov 29) Moon Shots - GenomeWeb writes
(Nov 26) The Massive "Next Big Thing" - National Genome Projects
(Nov 20) The Third Hungarian Notation
(Oct 08) Non-coding portions of genome are found to play role in cancer
(Sep 24) Microsoft initiatives treat cancer as a computing problem
(Sep 19) 'Junk DNA' tells mice - and snakes - how to grow a backbone
(Sep 10) The Oncoming Double Disruption of Technology of New School Genomics; SmidgION by Oxford Nanopore
(Sep 05) Dr. Pellionisz comments on the 7 R&D articles posted on Labor Day, 2016. Andras_at_Pellionisz_dot_com
(Sep 05) Multiscale modeling of cellular epigenetic states: stochasticity in molecular networks, chromatin folding in cell nuclei, and tissue pattern formation of cells
(Sep 05) Fractal Dimension of Tc-99m DTPA GSA Estimates Pathologic Liver Injury due to Chemotherapy in Liver Cancer Patients
(Sep 05) Unique fractal evaluation and therapeutic implications of mitochondrial morphology in malignant mesothelioma
(Sep 05) Systems Medicine of Cancer: Bringing Together Clinical Data and Nonlinear Dynamics of Genetic Networks
(Sep 05) ASXL2 promotes proliferation of breast cancer cells by linking ERα to histone methylation
(Sep 05) Researchers identify two proteins important for the demethylation of DNA
(Sep 05) Mathematical Modelling and Prediction of the Effect of Chemotherapy on Cancer Cells
(Sep 01) Breast cancer researchers look beyond genes to identify more drivers of disease development
(Aug 30) HPE Synergy Shows Promise And Progress With Early Adopters
(Aug 22) Illumina - Can GRAIL Deal A Death Blow In The War Against Cancer?
(Aug 15) Precision medicine: Analytics, data science and EHRs in the new age
(Aug 10) Stanford Medicine, Google team up to harness power of data science for health care
(Aug 09) Leroy Hood 2002; "The Genome is a System" (needs some system theory)
(Aug 08) What if Dawkins' "Selfish Gene" had been "Selfish FractoGene"?
(Jul 25) DNA pioneer James Watson: The cancer moonshot is ‘crap’ but there is still hope
(Jul 23) Science Cover Issue 2016 July 22 with Fractal folding of DNA and of Proteins
(Jul 22) CRISPR Immunotherapy ahead
(Jul 21) Qatari genomes provide a reference for the Middle East
(Jul 17) CIA Chief Claims that Genome Editing May Be Used For Biological Warfare
(Jul 14) Are Early Harbingers of Alzheimer’s Scattered Across the Genome?
(Jul 04) [Independence Day] A 27-year-old who worked for Apple as a teenager wants to make a yearly blood test to diagnose cancer — and he just got $5.5 million from Silicon Valley VCs to pull it off
(Jun 25) President Obama Hints He May Head to Silicon Valley for His Next Job
(Jun 12) Is This the Biggest Threat Yet to Illumina?
(May 24) Big talk about big data, but little collaboration
(Apr 25) Can Silicon Valley cure cancer?
(Apr 14) Life Code (Bioinformatics): The Biggest Disruption Ever? (Juan Enriquez)
(Apr 02) Craig Venter: We Are Not Ready to Edit Human Embryos Yet
(Mar 29) Big Data Meets Big Biology in San Diego on March 31: The Agenda
(Mar 08) Illumina Forms New Company to Enable Early Cancer Detection via Blood-Based Screening
(Mar 08) Illumina CEO Jay Flatley Built The DNA Sequencing Market. Now He's Stepping Down
(Mar 07) CRISPR: gene editing is just the beginning
(Mar 01) Geneticists debate whether focus should shift from sequencing genomes to analysing function.
(Feb 10) Top U.S. Intelligence Official Calls Gene Editing a WMD Threat
(Feb 06) Craig Venter: We Are Not Ready to Edit Human Embryos Yet
(Feb 01) UK scientists gain licence to edit genes in human embryos
(Jan 30) Why Eric Lander morphed from science god to punching bag
(Jan 24) Easy DNA Editing Will Remake the World. Buckle Up.
(Jan 23) Genome Editing and the Future of Human Species
(Jan 20) Chinese-scientists-create-designer-dogs-by-genetic-engineering
(Jan 16) Gene edited pigs may soon become organ donors
(Jan 13) New life for pig-to-human transplants
(Jan 10) Genome Editing [What is the code that we are editing?]
(Jan 03) CRISPR helps heal mice with muscular dystrophy
(Jan 01) Credit for CRISPR: A Conversation with George Church

For archived HoloGenomics News articles see Archives above


Whole-genome sequencing reveals mutations outside of protein-coding regions

Mutations in gene promoters reveal specific pathway pathologies in pancreatic cancer

May 8, 2017

Cold Spring Harbor Laboratory


A new wave of research is significantly improving our ability to target cancer cells by studying 'the other 98 percent' of DNA in human chromosomes, beyond the 2 percent that encodes proteins. Researchers looked at cells sampled from 308 people with pancreatic cancer, finding mutations in gene promoter regions that provide important clues about pathways perturbed in the illness and suggesting new targets for future treatments.

Over the last decade, it has made good sense to study the genetic drivers of cancer by sequencing a tiny portion of the human genome called the exome -- the 2% of our three billion base pairs that "spell out" the 21,000 genes in our chromosomes. If cancer is a disease precipitated by changes in genes, after all, we need to know lots about how and when different genes change in the many distinctive subtypes of cancer.

But a new wave of research, exemplified by a study published in Nature Genetics by a team at Cold Spring Harbor Laboratory (CSHL), is significantly improving our ability to target cancer cells by studying "the other 98%" of DNA in human chromosomes, sometimes called the genome's "dark matter."

Research led by Michael Feigin, Ph.D., a postdoctoral researcher in the laboratory of CSHL Professor David Tuveson, M.D., Ph.D., looked closely at cells sampled from 308 people with pancreatic cancer, one of the most lethal malignancies, with a 5-year survival rate of only 8%. Importantly, the full genome of the sampled pancreatic cancer cells was sequenced, not just the 2% that comprises the exome.

This enabled Feigin and colleagues including computational biologist Tyler Garvin, Ph.D., formerly of Adjunct Associate Professor Michael Schatz's lab, to focus narrowly on genome segments called gene promoters. These segments of DNA typically lie adjacent to, but not within, the sequences of the genes that they regulate. Therefore, promoters are "invisible" when only the exomes of cells are sequenced, as has been commonplace in cancer genetics research.

"Promoters are important in determining when specific genes are turned on and off," says Feigin, "and I became interested in figuring out whether mutations within promoters -- as opposed to within the genes they regulate -- consistently affect the way cancers develop and sustain themselves."

The team "looked all across the genome," Feigin says, "and, interestingly, while we did find mutations in promoters, we never found clusters of these mutations near any of the genes that prior research had already told us were typically mutated in pancreatic cancer."

Genes called KRAS and p53 are mutated in the majority of pancreas cancer cells, for example. But mutations in promoters sifted out of mountains of data by the team's novel mathematical formula, or algorithm, called GECCO, lay in genes never before implicated in pancreatic cancer.

Feigin points out that mutations in a promoter can affect how much protein is generated by the gene it regulates. In this way these mutations are unlike those usually found in KRAS and p53, for example, which impair or otherwise alter the function of the proteins they encode.

While the promoter mutations were not near known pancreatic cancer genes, the team found that they affected some of the same biological pathways in cells. Most prominent among these were promoters affecting genes involved in cell adhesion and axon guidance. Both pathways involve cascades of interactions among dozens or hundreds of proteins, each one encoded by a different gene.

The new data thus "adds depth to our understanding of things that go awry in these critical pathways, sometimes promoting cancer formation, other times providing cancer cells with advantages that enable them to crowd out healthy cells," comments Dr. Tuveson, who in addition to leading a lab at CSHL is the Director of CSHL's NCI-designated Cancer Center and Director of Research for the Lustgarten Foundation, the nation's largest philanthropic funder of pancreatic cancer research.

The cell adhesion pathway affected by newly discovered mutations in gene promoter regions is important for obvious reasons in cancer: cancer cells want to grow and proliferate, a process that can culminate in their migration from their tissue of origin. Once they have broken free, they can travel via the bloodstream to other places in the body, a process called metastasis that is often responsible for cancer fatalities.

The axon guidance pathway associated with promoter mutations has a less obvious but no less important role in pancreatic cancer. "In pancreas cancer, nerves are often attracted to or get attracted to the tumor," explains Feigin, "and sometimes they grow right through the tumor. This is one of the reasons pancreas cancer is so painful."

It's possible, Feigin says, that axon guidance signals -- and indeed cell adhesion signals -- "are actually being used by tumor cells" to gain advantages over healthy cells. "Tumors, for example, can actually spread via nerves; this is called peri-neural invasion."

A question naturally arises: if these and several other pathways were already implicated in pancreatic cancer, what is the advantage of the new knowledge about promoter mutations? The answer, the team explains, has to do with finding ways to fight pancreatic cancer, one of the major cancer types that remains profoundly resistant to all existing treatments. The more that is known about defects in specific pathways in specific cancer types, the more specific molecular targets -- pathway components -- appear in the sights of researchers trying to disable or enhance a given pathway.

Materials provided by Cold Spring Harbor Laboratory

Note: Content may be edited for style and length.

Journal Reference:

Michael E Feigin, Tyler Garvin, Peter Bailey, Nicola Waddell, David K Chang, David R Kelley, Shimin Shuai, Steven Gallinger, John D McPherson, Sean M Grimmond, Ekta Khurana, Lincoln D Stein, Andrew V Biankin, Michael C Schatz, David A Tuveson. Recurrent noncoding regulatory mutations in pancreatic ductal adenocarcinoma. Nature Genetics, 2017; DOI: 10.1038/ng.3861

Psst, the human genome was never completely sequenced. Some scientists say it should be

By SHARON BEGLEY @sxbegle JUNE 20, 2017

Stat News

The feat made headlines around the world: “Scientists Say Human Genome is Complete,” the New York Times announced in 2003. “The Human Genome,” the journals Science and Nature said in identical ta-dah cover lines unveiling the historic achievement.

There was one little problem.

“As a matter of truth in advertising, the ‘finished’ sequence isn’t finished,” said Eric Lander, who led the lab at the Whitehead Institute that deciphered more of the genome for the government-funded Human Genome Project than any other. “I always say ‘finished’ is a term of art.”

“It’s very fair to say the human genome was never fully sequenced,” Craig Venter, another genomics luminary, told STAT.

“The human genome has not been completely sequenced and neither has any other mammalian genome as far as I’m aware,” said Harvard Medical School bioengineer George Church, who made key early advances in sequencing technology.

What insiders know, however, is not well-understood by the rest of us, who take for granted that each A, T, C, and G that makes up the DNA of all 23 pairs of human chromosomes has been completely worked out. When scientists finished the first draft of the human genome, in 2001, and again when they had the final version in 2003, no one lied, exactly. FAQs from the National Institutes of Health refer to the sequence’s “essential completion,” and to the question, “Is the human genome completely sequenced?” they answer, “Yes,” with the caveat that it’s “as complete as it can be” given available technology.

Perhaps nobody paid much attention because the missing sequences didn’t seem to matter. But now it appears they may play a role in conditions such as cancer and autism.

“A lot of people in the 1980s and 1990s [when the Human Genome Project was getting started] thought of these regions as nonfunctional,” said Karen Miga, a molecular biologist at the University of California, Santa Cruz. “But that’s no longer the case.” Some of them, called satellite regions, misbehave in some forms of cancer, she said, “so something is going on in these regions that’s important.”

Miga regards them as the explorer Livingstone did Africa — terra incognita whose inaccessibility seems like a personal affront. Sequencing the unsequenced, she said, “is the last frontier for human genetics and genomics.”

Church, too, has been making that point, mentioning it at both the May meeting of an effort to synthesize genomes, and at last weekend’s meeting of the International Society for Stem Cell Research. Most of the unsequenced regions, he said, “have some connection to aging and aneuploidy” (an abnormal number of chromosomes such as what occurs in Down syndrome). Church estimates 4 percent to 9 percent of the human genome hasn’t been sequenced. Miga thinks it’s 8 percent.

The reason for these gaps is that DNA sequencing machines don’t read genomes like humans read books, from the first word to the last. Instead, they first randomly chop up copies of the 23 pairs of chromosomes, which total some 3 billion “letters,” so the machines aren’t overwhelmed. The resulting chunks contain from 1,000 letters (during the Human Genome Project) to a few hundred (in today’s more advanced sequencing machines). The chunks overlap. Computers match up the overlaps, assembling the chunks into the correct sequence.

That’s between difficult and impossible to do if the chunks contain lots of repetitive segments, such as TTAATATTAATATTAATA, or TTAATA three times. “The problem is, when you have the same exact words, it’s hard to assemble,” said Lander, just as if jigsaw puzzle pieces show the same exact blue sky.

In 2004, the genome project reported that there were 341 gaps in the sequence. Most of the gaps — 250 — are in the main part of each chromosome, where genes make the proteins that life runs on. These gaps are tiny. Only a few gaps — 33 at last count — lie in or near each chromosome’s centromere (where the two parts of a chromosome connect) and telomeres (the caps at the end of chromosomes), but these 33 are 10 times as long in total as the 250 gaps.

That makes the centromeres in particular the genome’s uncharted Zambezi. Evan Eichler of the University of Washington said every chromosome has such sequence-defying repetitive elements — think of them as DNA stutters — including an infamous one that’s 171 letters long and repeated end-to-end for thousands of letters.

At the beginning of the Human Genome Project, said Lander, now director of the Broad Institute of MIT and Harvard, “it became very clear these highly repetitive sequences would not be tractable with existing technology. It wasn’t a cause of a great deal of agonizing at the time,” since he and other project leaders expected the next generation of scientists to find a solution.

That hasn’t really happened, partly because there hasn’t been much motivation to map these regions. “I’m between agnostic and a little skeptical that these bits will be important for disease, but maybe I’m saying that because we can’t read them,” Lander said.

As new sequencing technology has begun allowing scientists to peek into unsequenced territory, however, they have seen that “these tough-to-sequence regions frequently have important genes,” said Michael Hunkapiller, chairman and CEO of Pacific Biosciences, which makes DNA sequencers. (In 1998, Hunkapiller recruited Venter to his new company, Celera Genomics, to race the government-backed genome project; the race ended in a de facto tie.)

PacBio’s “reason for being” is to increase the length of DNA segments that can be read and assemble them, Hunkapiller said. Longer reads have an effect like enlarging jigsaw puzzle pieces; even though the pieces still contain a lot of repeated blue sky, the greater size makes it more likely they’ll also contain something sufficiently novel to make assembling them easier. PacBio’s maximum DNA read is now about 60,000 letters, Hunkapiller said, and averages 15,000.

With such long reads, Lander said, “you could get through a lot of these nasty [unsequenced] regions.”

That’s looking more and more like a worthy undertaking, and not only because the unsequenced regions might contain actual protein-making genes. There is evidence that the non-gene parts — especially the DNA stutters — “clearly have disease implications,” Hunkapiller said. “Three-quarters of the [genome] differences between one person and another are in [such] variants” rather than the single-letter spelling differences in A’s, T’s, C’s, and G’s which get all the attention. In a 2007 paper, Venter (now the chairman of Human Longevity Inc.) and his team showed that there are more person-to-person differences like this, called structural variants, than there are single-letter changes.

Yet about 90 percent of the structural variants, the vast majority of which weren’t sequenced by either the genome project or a later effort called the 1000 Genomes Project, “have been missed,” Eichler and his colleagues reported last year.

One reason the stutters are unusually influential is that this repetitive DNA can move around, make copies of itself, flip its orientation, and do other acrobatics that “can have quite dramatic functional effects,” Hunkapiller said. For one thing, repetitive elements around the centromeres, called satellites, might cause a dividing cell to become cancerous, Miga said, because they can destabilize the entire genome.

When researchers at Stanford University tried to find the genetic cause of a young man’s mysterious disease, which caused non-cancerous tumors to grow throughout his body, they found nothing using the standard whole-genome sequencing, Hunkapiller said. But the “long reads” made possible by the PacBio machines “looked for structural variants and found the problem right away,” he said.

The stutters might even be what makes us human. Some of these complex duplications “appear to be important for the evolution of higher neuroadaptive function” — aka brain development, Eichler said. A gene called ARHGAP11B, which was created by one such duplication, causes the cortex to develop the myriad folds that support complex thought; SRGAP2C, also a duplication, triggers brain development.

“These are new genes that evolved specifically in our lineage over the last few million years,” said Eichler. The same duplications can also produce DNA rearrangements “associated with neurodevelopmental disorders such as autism and intellectual disability.”

“Finish the sequence!” hasn’t become a rallying cry, but maybe it should be, Venter said: “I’d be the last one to give you a quote saying that we don’t need to bother with these [unsequenced] regions.”

(What is common to Venter, Lander and Church? They are all personally familiar with my FractoGene fractal approach, pursued since 2002. True, none of them ever referenced e.g. my paper The Principle of Recursive Genome Function, published in 2008, though I put the manuscript into the hands of Lander in 2007. Yes, ten years ago. But they never told us, either, that "the genome was NOT fully sequenced" (shall I put a smiley in here??). My emphasis on "repeats", since fractals are self-similar repetitions, was also very well known to Collins - not mentioned above - since the 2003 Monterey 50-year memorial of The Double Helix. I took my FractoGene to a presentation in the Venter Institute, and George Church invited my "fractal approach presentation" to his Cold Spring Harbor meeting (a couple of weeks before the Lander et al. Science paper on fractal folding of the genome appeared). They are all correct that repeats were a massive inconvenience for most sequencing technologies, which hiccuped on repeats unless they used very long reads, like Pacific Biosciences. Also, the fractal interpretation of the cardinal significance of repeats was grossly overlooked, perhaps the biggest mistake in the history of molecular biology. These days, when fractal cancer yields over FOUR MILLION HITS on Google, the World is waking up to the importance of genome repeat fractal defects that cause cancer. There is also much interest now in long-read sequencing, with BGI massively investing in it, as fractals are huge in China, Japan, India etc., where ancient culture is deeply and pervasively rooted in "self-similar repetitions". Craig Venter has personal experience with fractal cancer. He became the first genomics leader, ever, to go on record that his prostate cancer originated from "extremely few repeats" (documented by his "full" DNA sequence, on the shelves for 15 years before the diagnosis).
It is unknown if the pancreatic cancer of Steve Jobs originated from aberrant DNA repeats. Though he was "fully" sequenced (multiple times), and all the computing power of Apple could have been made available for the type of very early diagnosis that Grail, Inc. is presently up to, Steve Jobs - and so many hundreds of millions - had to perish from the most dreadful genome disease of cancer, for lack of timely response to wake-up calls. It was more than "wake-up calls". Fairly recently, a funding request to an eminent autism agency, to quantitate the fractality of repeats in the genomes of autistic patients, was turned down - about the time when a high-ranking autism researcher opined that autism was the result of social upbringing. András J. Pellionisz)

University of Debrecen, Hungary and BGI Group sign MOU (China Penetrates Central European Genome Market)


Budapest, Hungary – University of Debrecen (UD), Hungary and BGI Group signed a Memorandum of Understanding, Tuesday 20th June, regarding cooperation on the establishment of a local NGS sequencing hub for local population cohort studies and diagnostic testing, and wider collaboration in the research and development of genetic and genomic sciences.

The MOU was signed by Prof Dr Bartha Elek, Deputy Rector of Education, UD, and Dr. Ning Li, Chief Development Officer of BGI Group Community on Precision Medicine. The signing took place during the CEEC-China Health Ministers’ Meeting, Budapest, Hungary, which aims to facilitate greater cooperation, knowledge sharing and investment in the field of public health. The conference itself can be seen in the context of China’s Belt and Road Initiative, which aims to foster economic development along the old Silk Road, linking it to Europe. BGI Group has already built close ties with various countries and regions along ‘the Belt and Road’, such as Laos and South Africa, in the fields of scientific research, industrial development and people’s livelihoods.

According to the agreement, UD and BGI Group will establish joint scientific and innovation research proposals, joint R&D projects and work together developing national and international research and innovation projects. UD and BGI Group will also explore the possibility of establishing a sequencing hub for local population cohort studies and diagnostic testing. The lab will be staffed by UD, with BGI Group providing its sequencing expertise for the successful establishment of the laboratory, and sequencing platforms and ongoing support for the operation of the laboratory.

The University of Debrecen, as a leading and prominent institution of Hungarian higher education and in line with the spirit of the Magna Charta of European Universities, is dedicated to developing and improving universal scholarship and Hungarian society by providing high-quality, versatile and interdisciplinary educational as well as research and development programs. The main focus of the UD’s activities is directed at the health industry. UD has established a center for clinical genomics and personalized medicine, and many of its research groups are involved in genomic, transcriptomic and proteomic research and development.

BGI Group was established in 1999 with participation in the original Human Genome Project. Since then, BGI Group has grown into one of the world’s largest genomics organizations, experienced in genomic research, commercial NGS services and sequencer manufacturing. BGI Group is committed to providing solutions to address the research, pharmaceutical, and clinical markets and is dedicated to improving human health and empowering large-scale human, plant and animal genomics research.

Dr Ning Li commented, “UD is the leading genomics institution in Hungary. We believe that by combining their local strengths and BGI Group’s sequencing expertise, international network and sequencing platforms, this MOU establishes the beginning of a long-term strategic partnership between UD and BGI Group, which will lead to our two institutions jointly addressing key issues in human health.”

(I reported on the USD 31 M National Oncogenomics Consortium of Hungary that started January 1st this year, centered in the capital, Budapest. Until this announcement - thus far only a "Memorandum of Understanding" - the Beijing Genomics Institute, which has the backing of the government of China, had not penetrated Central Europe. Debrecen is a major city in the east of Hungary, far smaller than Budapest. It is unclear if China decided to penetrate the Central European genome market motivated by plans to develop a workshop in Hungary for my FractoGene approach, with the possibility of exporting prototypes to the vast and lucrative US oncogenomics market. We do remember, however, that the full acquisition of Complete Genomics of Silicon Valley by BGI was also meant as a legal pathway to access US markets - András J. Pellionisz)

Fractal Cancer - Fractal Genome - Critical Juncture for NIH and its National Cancer Institute

Before the end of this May, fractal cancer yielded about 666 thousand hits on Google, with fractal genome also around half a million hits. In the early days of this June, the numbers shot up by more than a million additional hits! Something must have happened. One may wonder what is going on in the National Cancer Institute (under the direction of NIH). Certainly, the appointment by President Trump of Norman Sharpless (from the University of North Carolina Cancer Center) as Director of the National Cancer Institute might be one of the factors. UNC has recently published on fractal features of cancer. About the same time, an intramural researcher at NCI published, in a new journal on fractals, an article on cancer (interestingly, entitled "Critical Juncture") awash with fractal references (at least 37 fractal out of 50 total), reproduced below:


1. Mandelbrot B (1983) The Fractal Geometry of Nature. Freeman, San Francisco.

2. Leonardo da Vinci. Trattato della Pittura. ROMA MDCCCXVII. Nella Stamperia DE ROMANIS. A cura di Guglielmo Manzi Bibliotecario della Libreria Barberina.

3. Mandelbrot B (1977) Fractals: Form, Chance and Dimension. W.H. Freeman & Company, San Francisco.

4. Belaubre G (2006) L’irruption des Géométries Fractales dans les Sciences. Editions Académie Européenne Interdisciplinaire des Sciences (AEIS), Paris.

5. Loud AV (1968) A quantitative stereological description of the ultrastructure of normal rat liver parenchymal cells. J Cell Biol 37: 27-46. [Crossref]

6. Weibel ER, Stäubli W, Gnägi HR, Hess FA (1969) Correlated morphometric and biochemical studies on the liver cell. I. Morphometric model, stereologic methods, and normal morphometric data for rat liver. J Cell Biol 42: 68-91. [Crossref]

7. Mandelbrot B (1967) How long is the coast of Britain? Statistical self-similarity and fractional dimension. Science 156: 636-638. [Crossref]

8. Paumgartner D, Losa G, Weibel ER (1981) Resolution effect on the stereological estimation of surface and volume and its interpretation in terms of fractal dimensions. J Microsc 121: 51-63. [Crossref]

9. Gehr P, Bachofen M, Weibel ER (1978) The normal human lung: ultrastructure and morphometric estimation of diffusion capacity. Respir Physiol 32: 121-140. [Crossref]

10. Rigaut JP (1984) An empirical formulation relating boundary length to resolution in specimens showing ‘‘non-ideally fractal’’ dimensions. J Microsc 13: 41–54.

11. Rigaut JP (1989) Fractals in Biological Image Analysis and Vision. In: Losa GA, Merlini D (Eds) Gli Oggetti Frattali in Astrofisica, Biologia, Fisica e Matematica, Edizioni Cerfim, Locarno, pp. 111–145.

12. Nonnenmacher TF, Baumann G, Barth A, Losa GA (1994) Digital image analysis of self-similar cell profiles. Int J Biomed Comput 37: 131-138. [Crossref]

13. Landini G, Rigaut JP (1997) A method for estimating the dimension of asymptotic fractal sets. Bioimaging 5: 65–70.

14. Dollinger JW, Metzler R, Nonnenmacher TF (1998) Bi-asymptotic fractals: fractals between lower and upper bounds. J Phys A Math Gen 31: 3839–3847.

15. Bizzarri M, Pasqualato A, Cucina A, Pasta V (2013) Physical forces and non linear dynamics mould fractal cell shape. Quantitative Morphological parameters and cell phenotype. Histol Histopathol 28: 155-174.

16. Losa GA, Nonnenmacher TF (1996) Self-similarity and fractal irregularity in pathologic tissues. Mod Pathol 9: 174-182. [Crossref]

17. Weibel ER (1991) Fractal geometry: a design principle for living organisms. Am J Physiol 261: L361-369. [Crossref]

18. Losa GA (2012) Fractals in Biology and Medicine. In: Meyers R (Ed.), Encyclopedia of Molecular Cell Biology and Molecular Medicine, Wiley-VCH Verlag, Berlin.

19. Santoro R, Marinelli F, Turchetti G, et al. (2002) Fractal analysis of chromatin during apoptosis. In: Losa GA, Merlini D, Nonnenmacher TF, Weibel ER (Eds.), Fractals in Biology and Medicine. Basel, Switzerland. Birkhäuser Press 3: 220-225.

20. Bianciardi G, Miracco C, Santi MD et al. (2002) Fractal dimension of lymphocytic nuclear membrane in Mycosis fungoides and chronic dermatitis. In: Losa GA, Merlini D, Nonnenmacher TF, Weibel ER (Eds.), Fractals in Biology and Medicine. Basel, Switzerland, Birkhäuser Press.

21. Losa GA, Baumann G, Nonnenmacher TF (1992) Fractal dimension of pericellular membranes in human lymphocytes and lymphoblastic leukemia cells. Pathol Res Pract 188: 680-686. [Crossref]

22. Mashiah A, Wolach O, Sandbank J, Uzie IO, Raanani P, et al. (2008) Lymphoma and leukemia cells possess fractal dimensions that correlate with their interpretation in terms of fractal biological features. Acta Haematol 119: 142-150. [Crossref]

23. Brú A, Albertos S, Luis Subiza J, García-Asenjo JL, Brú I (2003) The universal dynamics of tumor growth. Biophys J 85: 2948-2961. [Crossref]

24. Baish JW, Jain RK (2000) Fractals and cancer. Cancer Res 60: 3683–3688.

25. Tambasco M, Magliocco AM (2008) Relationship between tumor grade and computed architectural complexity in breast cancer specimens. Hum Pathol 39: 740-746. [Crossref]

26. Sharifi-Salamatian V, Pesquet-Popescu B, Simony-Lafontaine J, Rigaut JP (2004) Index for spatial heterogeneity in breast cancer. J Microsc 216: 110-122. [Crossref]

27. Losa GA, Graber R, Baumann G, Nonnenmacher TF (1998) Steroid hormones modify nuclear heterochromatin structure and plasma membrane enzyme of MCF-7 Cells. A combined fractal, electron microscopical and enzymatic analysis. Eur J Histochem 42: 1-9. [Crossref]

28. Landini G, Hirayama Y, Li TJ, Kitano M (2000) Increased fractal complexity of the epithelial-connective tissue interface in the tongue of 4NQO-treated rats. Pathol Res Pract 196: 251-258. [Crossref]

29. Roy HK, Iversen P, Hart J, Liu Y, Koetsier JL, et al. (2004) Down-regulation of SNAIL suppresses MIN mouse tumorigenesis: modulation of apoptosis, proliferation, and fractal dimension. Mol Cancer Ther 3: 1159-1165. [Crossref]

30. Losa GA, De Vico G, Cataldi M, et al. (2009) Contribution of connective and epithelial tissue components to the morphologic organization of canine trichoblastoma. Connect Tissue Res 50: 28-29.

31. Li H, Giger ML, Olopade OI, Lan L (2007) Fractal analysis of mammographic parenchymal patterns in breast cancer risk assessment. Acad Radiol 14: 513-521. [Crossref]

32. Rangayyan RM, Nguyen TM (2007) Fractal analysis of contours of breast masses in mammograms. J Digit Imaging 20: 223-237. [Crossref]

33. De Felipe J (2011) The evolution of the brain, the human nature of cortical circuits, and intellectual creativity. Front Neuroanat 5: 1-16. [Crossref]

34. King RD, Brown B, Hwang M, Jeon T, George AT; Alzheimer's Disease Neuroimaging Initiative (2010) Fractal dimension analysis of the cortical ribbon in mild Alzheimer's disease. Neuroimage 53: 471-479. [Crossref]

35. Werner G (2010) Fractals in the nervous system: conceptual implications for theoretical neuroscience. Front Physiol 1: 15. [Crossref]

36. Losa GA (2014) On the Fractal Design in Human Brain and Nervous Tissue. Applied Mathematics 5: 1725-1732.

37. Smith TG Jr, Marks WB, Lange GD, Sheriff WH Jr, Neale EA (1989) A fractal analysis of cell images. J Neurosci Methods 27: 173-180. [Crossref]

38. Smith TG Jr, Bejar TN (1994) Comparative fractal analysis of cultured glia derived from optic nerve and brain demonstrated different rates of morphological differentiation. Brain Res 634: 181–190.

39. Smith TG Jr, Lange GD, Marks WB (1996) Fractal methods and results in cellular morphology--dimensions, lacunarity and multifractals. J Neurosci Methods 69: 123-136. [Crossref]

40. Smith TG (1994) A Fractal Analysis of Morphological Differentiation of Spinal Cord Neurons in Cell Culture. In: Losa et al., (Eds.), Fractals in Biology and Medicine, Birkhäuser Press, Basel, vol.1.

41. Milosevic NT, Ristanovic D (2006) Fractality of dendritic arborization of spinal cord neurons. Neurosci Lett 396: 172-176. [Crossref]

42. Milosevic NT, Ristanovic D, Jelinek HF, Rajkovic K (2009) Quantitative analysis of dendritic morphology of the alpha and delta retinal ganglions cells in the rat: a cell classification study. J Theor Biol 259: 142-150. [Crossref]

43. Ristanovic D, Stefanovic BD, Milosevic NT, Grgurevic M, Stankovic JB (2006) Mathematical modelling and computational analysis of neuronal cell images: application to dendritic arborization of Golgi-impregnated neurons in dorsal horns of the rat spinal cord. Neurocomputing 69: 403–423.

44. Jelinek HF, Milosevic NT, Ristanovich D (2008) Fractal dimension as a tool for classification of rat retinal ganglion cells. Biol Forum 101: 146-150.

45. Bernard F, Bossu JL, Gaillard S (2001) Identification of living oligodendrocyte developmental stages by fractal analysis of cell morphology. J Neurosci Res 65: 439-445. [Crossref]

46. Pellionisz A, Roy GR, Pellionisz PA, Perez JC (2013) Recursive genome function of the cerebellum: geometric unification of neuroscience and genomics. In: Manto M, Gruol DL, Schmahmann JD, Koibuchi N, Rossi F (Eds.), Handbook of the Cerebellum and Cerebellar Disorders. Springer Verlag, Berlin, pp. 1381-1423.

47. Pellionisz AJ (2008) The principle of recursive genome function. Cerebellum 7: 348-359. [Crossref]

48. Di Ieva A, Grizzi F, Jelinek H, Pellionisz AJ, Losa GA (2015) Fractals in the Neurosciences, Part I: General Principles and Basic Neurosciences. The Neuroscientist XX(X) 1–15.

49. Pellionisz A (1989) Neural geometry: towards a fractal model of neurons. Cambridge: Cambridge University Press.

50. Agnati LF, Guidolin D, Carone C, Dam M, Genedani S, et al. (2008) Understanding neuronal molecular networks builds on neuronal cellular network architecture. Brain Res Rev 58: 379–99. [Crossref]


The NCI may not, however, have sufficient domain expertise for a full-fledged fractal approach to cancer. Norman Sharpless (who holds a bachelor's degree in mathematics, and will serve under an NIH Director who started his career in quantum physics) may make a difference at this Critical Juncture. The mathematization of biology, however, has met extraordinarily stiff resistance thus far. (Even the above intramural worker turned back and forth on the issue - with the interim NCI Director unable to answer requests for clarification.) A renewed leadership may wish to change that.

Lung cancer - a fractal viewpoint (why is Academia hesitant to embrace disruptive breakthroughs?)

Frances E. Lennon, Gianguido C. Cianci, Nicole A. Cipriani, Thomas A. Hensing, Hannah J. Zhang, Chin-Tu Chen, Septimiu D. Murgu, Everett E. Vokes, Michael W. Vannier, and Ravi Salgia

Section of Hematology/Oncology (F.E.L., E.E.V., R.S.), Department of Pathology (N.A.C.), Department of Radiology (H.J.Z., C.-T.C., M.W.V.), Department of Medicine (S.D.M.), University of Chicago, 5841 South Maryland Avenue, MC 2115 Chicago, IL 60637, USA. Department of Cell and Molecular Biology, Feinberg School of Medicine, Northwestern University, 303 East Chicago Avenue, Chicago, IL 60611, USA (G.C.C.). NorthShore University Health System, 2650 Ridge Avenue, Evanston, IL 60201, USA (T.A.H.)

Nat Rev Clin Oncol. Author manuscript; available in PMC 2016 Aug 18.

Published in final edited form as:

Nat Rev Clin Oncol. 2015 Nov; 12(11): 664–675.

Published online 2015 Jul 14. doi: 10.1038/nrclinonc.2015.108

PMCID: PMC4989864


Correspondence to: R.S. rsalgia@medicine.bsd.uchicago.edu



Fractals are mathematical constructs that show self-similarity over a range of scales and non-integer (fractal) dimensions. Owing to these properties, fractal geometry can be used to efficiently estimate the geometrical complexity, and the irregularity of shapes and patterns observed in lung tumour growth (over space or time), whereas the use of traditional Euclidean geometry in such calculations is more challenging. The application of fractal analysis in biomedical imaging and time series has shown considerable promise for measuring processes as varied as heart and respiratory rates, neuronal cell characterization, and vascular development. Despite the advantages of fractal mathematics and numerous studies demonstrating its applicability to lung cancer research, many researchers and clinicians remain unaware of its potential. Therefore, this Review aims to introduce the fundamental basis of fractals and to illustrate how analysis of fractal dimension (FD) and associated measurements, such as lacunarity (texture), can be performed. We describe the fractal nature of the lung and explain why this organ is particularly suited to fractal analysis. Studies that have used fractal analyses to quantify changes in nuclear and chromatin FD in primary and metastatic tumour cells, and clinical imaging studies that correlated changes in the FD of tumours on CT and/or PET images with tumour growth and treatment responses are reviewed. Moreover, the potential use of these techniques in the diagnosis and therapeutic management of lung cancer is discussed.



Despite advances in diagnosis and therapy, lung cancer remains the number one cause of cancer-related mortality in the USA; in 2015, the number of newly diagnosed lung cancer cases is expected to reach 221,200, with the number of deaths caused by lung cancer predicted to reach 158,040—accounting for 27% of all cancer-related deaths.1 At diagnosis, the majority (57%) of patients with lung cancer have locally advanced or metastatic disease, and thus a very poor prognosis.1 Indeed, the estimated overall 5-year survival rate for patients with lung cancer is only 17%.1 Although the incidence of this disease has decreased slightly in recent years, more than 400,000 patients are currently living with lung cancer in the USA alone, and lung cancer continues to account for more cancer-related deaths than the next three most common cancer types combined (breast, colon, and prostate cancer).1,2 The development of more-effective diagnosis, treatment, and surveillance tools, therefore, remains a critical and immediate goal for lung cancer research.

The alterations in lung structure that define the appearance of lung cancer in medical images are more readily perceived than measured. For example, lung nodules are most often characterized by size alone, despite the intricately detailed information present in images, especially CT scans. Use of classic, integer dimension (1D, 2D, 3D, and so on) Euclidean geometry, which is routinely used in computer graphics and medical image analysis, can distinguish gross differences in geometry (volume, density, and so on); however, information that is hidden in the complexity of the structure under examination (such as texture and statistical properties of shape) can often be missed. Images of the lung obtained at different magnifications exhibit self-similarity, thus, they are amenable to characterization and measurement using fractal geometry—fractals are mathematical constructs that can have non-integer (fractal) dimensions and efficiently capture structural features that repeat over a range of scales. The purpose of this Review is to introduce fractals and illustrate the potential of fractal analysis for imaging in patients with lung cancer, with regard to analysis of CT scans as well as histological slides. The potential benefits of using fractals to quantify characteristics of lung cancer lesions and measure response to therapy are explained.


Understanding fractals

In biology we are often presented with complex and irregular shapes, such as cell membranes, vascular and neuronal networks, and tumours (Figure 1a,b). The characterization of these structures using simple geometrical quantities, such as length or volume (which, although useful, do not fully characterize the complexity of the shape), can be challenging. Tumour volume is typically used as a measure of tumour burden, and can provide clinically useful information; however, this measure is not ideal, and volume estimates can be unreliable for smaller tumours or those with unfavourable anatomical features, such as high structural complexity and irregular borders.3 Fractal geometry is a mathematical concept that can be used to quantify structures that are poorly represented by conventional Euclidean geometry and might, therefore, be a useful additional parameter in classifying biological structures. Fractals are characterized by three properties: firstly, self-similarity, whereby any small piece of the object is an exact replica of the whole; secondly, scaling—fractals appear the same over multiple scales (for example, at the microscopic and macroscopic levels), a property often referred to as ‘scale invariance’; thirdly, they have a fractional (non-integer) dimension.4

Figure 1 | Examples of biological and mathematical fractal patterns. Biological fractals may be statistically self-similar over a limited range of scales, known as a scaling window. a | Rat pulmonary arterial vasculature, imaged via contrast-enhanced CT angiography, ...

In the strict mathematical context, fractal geometry applies to structures with recursive patterns, such as iterative branching structures or the Hilbert curve (Figure 1c,d).5 True fractals are limited to mathematically generated curves, such as the Koch curve or Sierpinski gasket, which are infinitely self-similar.6 Natural fractal objects differ from archetypical mathematical fractals in two important ways. Firstly, they exhibit fractal properties, such as self-similarity, only within a limited (finite) range of scales, referred to as the ‘scaling window’. This scaling window is usually between 1–3 orders of magnitude. Secondly, they can be fractal in a quantitative statistical sense, rather than a strict geometrical sense. An object is statistically self-similar if “a statistical property of every small piece of an object is not significantly different from the same statistical property measured on the whole object”.4,7 This definition of a fractal can be applied to a wide variety of natural and biological objects, as well as related temporal series. For example, coastlines, clouds, mountain ranges, stock-market prices, and heart-rate fluctuations all exhibit statistical self-similarity.4


Fractal dimension

A fundamental feature of fractals is that their measured metrics (such as length, area, and volume) change depending on the scale of the unit used for measurement. Benoit Mandelbrot,8 who developed this concept and coined the term ‘fractal’, gave the measurement of Britain’s coastline as a now classic example. Mandelbrot stated that when measuring a complex irregular curved shape, the outcome of the measurement is dependent on the scale of the units used. In other words, if one were to measure the coast in steps of 1 km in length, for example, and then repeat the process with steps that are 1 m in length, one would obtain considerably different coastal lengths. This disparity arises because using smaller units reveals more of the complexity and detail of the object—that is, the line measured follows a more curvilinear course and thus increases the length measured. As such, a fractal structure has no fixed length, but rather its length is dependent on the scale at which it is considered.
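Mandelbrot's coastline effect can be made concrete with the Koch curve, a true fractal whose measured length grows by a factor of 4/3 every time the ruler shrinks by a factor of 3. The sketch below is our own illustration (function name and output format are ours, not from the Review):

```python
def koch_length(level, base=1.0):
    # Each refinement of the Koch curve replaces every segment with four
    # segments one third as long, so "walking" the curve with a ruler of
    # size base / 3**level yields a measured length of base * (4/3)**level.
    return base * (4.0 / 3.0) ** level

# Shrinking the ruler never converges to a fixed length: it keeps growing.
for level in range(5):
    ruler = 3.0 ** -level
    print(f"ruler size {ruler:.4f} -> measured length {koch_length(level):.4f}")
```

The divergence of the measured length as the ruler shrinks is exactly the "no fixed length" property described above.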

Fractal dimension (FD) is a non-integer value that describes the intrinsic shape of an object; it relates to the relationship between the measured metric and the scale used. In Euclidean geometry, a line has a dimension of 1, a plane has a dimension of 2, and a cube a dimension of 3—lines, planes and cubes are examples of 1D, 2D, and 3D objects, respectively. A fractal, however, can have a dimension between those values, such as 1.50 or 2.33—a fractional spatial dimension. FD represents a quantitative characteristic to describe morphological complexity, and provides information on the self-similarity properties of the shape. For temporal series, FD analysis can be used to quantify short-range and long-range complexity within the data set. When plotting the values of any chosen measure over time, a curve is generated. Simple smooth curves, such as a straight line, a sine wave, or an exponential, are by definition 1D. As the time-series fluctuations become increasingly complex, with many more short-range details, the trace of the data will become rougher, filling more and more of the (2D) page on which it is plotted. Consequently, the FD will increase above one and towards two for a very complex, plane-filling curve. Thus, a small FD can be interpreted as denoting a lack of complexity and the presence of long-range variations only, whereas a large FD indicates an abundance of short-term variation.9

Several methods have been developed to estimate the FD of an object, with the aim of optimizing computation time, or in search of better precision. However, most approaches follow the same basic premise: measure the particular characteristic of an object at different length scales, plot these points (characteristic versus scale), and fit a least-squares regression line. The slope of the resulting line will be an estimate of the FD of the object.10 Herein, we limit our description to the box-counting method, which calculates the Minkowski–Bouligand dimension.11 This method is one of the most-commonly used and is easily understood. Furthermore, other methods for calculating FD, such as the grid method, are based on the box-counting method. In the box-counting method a set of boxes of a defined size (r), that is, a grid, is placed over the object to be measured, and the number of boxes in that grid (Nr) that are needed to completely cover the shape is counted. This process is then repeated using grids comprising boxes of a different size. The log of Nr is then plotted against the log 1/r, and the slope of the resulting line is the FD (Figure 2). More-complete descriptions of the box-counting and other methods of calculating FD (including the grid, mass-radius, dilation, and power-spectrum methods) have been described elsewhere.10,12–17 Several software packages are available that offer the ability to calculate FD (Box 1).
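The box-counting recipe just described fits in a few lines of code. The sketch below is our own minimal illustration (helper names are ours): it counts non-empty boxes at several grid sizes, fits log Nr against log 1/r, and checks the result on a Sierpinski carpet, whose exact dimension is log 8 / log 3 ≈ 1.893:

```python
import numpy as np

def box_count_fd(img, sizes):
    """Estimate the FD of a binary image: for each box size r, count the
    boxes N_r containing any foreground pixel, then fit log N_r against
    log(1/r); the slope of the regression line estimates the FD."""
    counts = []
    for r in sizes:
        h, w = img.shape
        n = 0
        for i in range(0, h, r):          # partition into r x r boxes
            for j in range(0, w, r):
                if img[i:i + r, j:j + r].any():
                    n += 1
        counts.append(n)
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

def sierpinski_carpet(level):
    # Deterministic Sierpinski carpet: known FD = log 8 / log 3 ≈ 1.893.
    img = np.ones((1, 1), dtype=bool)
    for _ in range(level):
        z = np.zeros_like(img)
        img = np.block([[img, img, img],
                        [img, z,   img],
                        [img, img, img]])
    return img

carpet = sierpinski_carpet(5)  # 243 x 243 pixels
print(round(box_count_fd(carpet, sizes=(1, 3, 9, 27, 81)), 2))  # → 1.89
```

Because the carpet is exactly self-similar at powers of 3, the log-log points fall on a straight line and the fit recovers the theoretical dimension almost exactly; for real images the sizes should span the scaling window of the object.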

Figure 2 | The box-counting method of calculating FD. Box counting is among the most commonly used methods to calculate the FD of shapes, such as the one presented in this figure. Firstly, the number of boxes (Nr), each of different side lengths (r), that are needed ...

Box 1

Fractal image processing software

Many open-source software packages, which are freely available, enable the computation of fractal geometrical properties from images and the synthesis of fractal patterns. Additionally, modules in some commercially available Picture Archiving and Communication Systems (PACS) and medical image-processing workstations support calculation of fractal properties. Some of the best-known free software modules for fractal image processing include:

▪ ImageJ52

▪ FracLac plugin for ImageJ53

▪ FracLab89

▪ Fractalyse90

▪ Ultra Fractal 591

▪ FDim92

▪ 3DFD87

Of note, FD is not a unique descriptor of the complexity of a shape or object. In fact, shapes that look very different can have the same or a similar FD. FD should, therefore, be considered alongside other parameters and not in isolation. Another useful parameter to examine in this context is ‘lacunarity’.

Lacunarity

Lacunarity is a geometric measurement used to describe the ‘texture’ or distribution and size of gaps within an object or image—that is, the way in which the object fills space.18 This property can also be thought of as a measure of rotational (or translational) invariance, and can be used to describe both fractal and nonfractal images.13,15 Images or patterns with high heterogeneity are more rotationally variant (they appear different if rotated) and have higher lacunarity in comparison to those with low heterogeneity (Figure 3). Similarly to FD, lacunarity can also be calculated using box-counting methods. However, in this setting, the variation in number of pixels occupied per box at different scales (pixel density) is measured, rather than variation in the number of boxes covering the shape.13 A number of other methods for calculating lacunarity exist, which are beyond the scope of this Review, and more in-depth details and discussion on lacunarity can be found elsewhere.18–20
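A box-based lacunarity estimate along these lines can be sketched as follows. This is our own minimal illustration, assuming the common gliding-box definition in which lacunarity at box size r is taken as variance/mean² + 1 of the pixel "masses" seen through the window:

```python
import numpy as np

def lacunarity(img, r):
    """Gliding-box lacunarity at box size r: slide an r x r window over a
    binary image, record the pixel 'mass' in each window, and combine the
    first two moments. Large gaps -> high variance relative to the mean
    -> high lacunarity; a homogeneous image gives exactly 1.0."""
    h, w = img.shape
    masses = [img[i:i + r, j:j + r].sum()
              for i in range(h - r + 1)
              for j in range(w - r + 1)]
    m = np.asarray(masses, dtype=float)
    return m.var() / m.mean() ** 2 + 1.0

solid = np.ones((16, 16), dtype=int)   # homogeneous: no gaps
gappy = np.zeros((16, 16), dtype=int)  # one dense cluster surrounded by gaps
gappy[:4, :4] = 1
print(lacunarity(solid, 4))  # 1.0: translation-invariant, minimal texture
print(lacunarity(gappy, 4))  # well above 1.0: highly heterogeneous
```

Computing the same quantity over a range of r values gives a lacunarity curve, which is the texture descriptor usually reported alongside FD.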

Figure 3 | Lacunarity. Lacunarity is a measure of the texture or distribution of gaps within an image. It can be helpful to think of lacunarity as an indicator of rotational invariance. The images on the right are 90° rotations of the images on the left. ...


Fractal nature of the normal lung

The macroscopic lung

One of the most well-known examples of a biological fractal is the lung, which was identified as a fractal by Mandelbrot himself.4 The fractal design and structure of the lung play a large part in determining its functional capacity as a gas-exchanger.21 Two key elements necessary for optimum lung function are a very large surface area and a very thin tissue barrier—the surface area of the lung is estimated to be similar to that of a tennis court and yet it fits into the limited space of the chest cavity.21 Lung morphogenesis is an iterative process that begins with the bifurcation of the developing trachea into the left and right lung buds. Following a sequential branching pattern, the lung buds grow and divide to form a fractal space-filling, tree-like architecture with 23 generations of branching.21–23 Similarly, the pulmonary vasculature develops alongside the airways (Figure 1a), and the gas-exchange surfaces are formed on the peripheral generations of the branching system. Through this developmental pathway, the packing of a large surface area into a very limited space is achieved, along with the coordinated branching and interface of the airway and vascular systems.21,24 The lung can be further divided into the conducting airways (bronchi and bronchioles), which comprise the first 10–16 generations of branching, followed by the gas-exchanging portion (bronchioles, alveolar ducts, and alveolar sacs), also referred to as the acinus, which is composed of nine generations of branches. The acinus has a slightly different branching structure to the conducting region of the lung, and its architecture is elegantly described by the Hilbert curve fractal (Figure 1d).25

The fractal nature of the lung branching system confers error or fault tolerance to the lung. In his examination of error tolerance in fractal systems, and the lung in particular, Bruce West26,27 reported that a classic scaling model (dichotomous branching system with one constant scaling fraction) of the lung would propagate an error in an exponential fashion (presumably leading to a serious malformation and dysfunction), whereas the fractal branching model (multiple scales) was essentially unresponsive to error. Thus, the fractal nature of the lung probably ensures that development of this organ is robust and can adapt more easily to genetic alterations or other insults that affect its architecture while preserving its overall morphology and function. Fractal branching is, therefore, a fundamental and crucial feature of lung development.

The fractal branching pattern of the lung also regulates recruitment of the terminal airspaces from a previously unventilated compartment during inhalation. Suki et al.28,29 reported that airway inflation in the lung proceeds in a manner in which ‘avalanches’ of airway openings are seen; the terminal airway or alveolar spaces open in bursts rather than all at once and opening of one space can initiate the opening of other peripheral spaces.28,29 These avalanches are regulated in part by discontinuities in airway resistance along the fractal branching tree, which in turn are regulated by airflow resistance and airway elasticity.28,29 This model also dictates that both the magnitude and timing of pressure changes at the airway entrance (inhalation) are critical in determining the extent of avalanche propagation and, hence, alveolar recruitment/inflation.28,29 Changes in airway structure or elasticity, as occurs in many lung pathologies, including asthma, emphysema, and lung cancer, disrupt the avalanche process leading to decreased alveolar recruitment and declines in lung function.30 In an autopsy study of airway remodelling in asthma, Boser et al.31 reported that airway ‘pruning’ could be detected based on subtle changes in the FD of the lung airways, which was calculated using 2D digital images of negative-pressure silicone-rubber casts. These changes in FD were apparent before any obvious changes in the lung structure that could be visualized on 3D casts.31 FD analysis has also been applied to CT images of the lung and can be useful to detect early pathological changes, and this approach is discussed further in the following sections.

The microscopic lung

Fractal patterns are also observed on a cellular and subcellular level in the lungs and other tissues. The alveolar surface, including the cell membranes of individual cells, can be considered fractal; if these structures are examined at increasing magnification, increasing levels of detail and complexity can be observed.32 The same concept can also be applied to the membranes of subcellular organelles, such as those of the mitochondria, nucleus, and endoplasmic reticulum.33,34

In addition, the organization and packing of chromatin within the cell nucleus (both DNA and associated proteins) has been revealed to be fractal.35,36 The ‘fractal globule’ model of polymer collapse, whereby polymer condensation results in a dense but completely unknotted globule owing to topological constraints that prevent one region of the chain from passing across another, was first proposed by Grosberg et al.37 as a model for chromatin organization in 1988. The chromatin fractal globule model essentially considers chromatin as a polymer. According to this model, the chromatin collapse begins with the formation of increasing numbers of small crumples, leading to the formation of a thicker ‘polymer-of-crumples’, which in turn forms large crumples. Grosberg et al.37 postulated that the resulting ‘globule’, with its hierarchy of crumples, formed a self-similar fractal structure. Evidence reported by Bancaud et al. and Lieberman-Aiden et al.35,36 supports the fractal globule model of chromatin packing, and indicates that this model allows for the dynamic folding and unfolding of any genomic location, to enable active gene transcription, for example, while preserving dense chromatin packing elsewhere. Thus, alterations in the FD could be suggestive of modifications in chromatin packing and might be of prognostic or diagnostic importance in multiple cancer pathologies, as is discussed in the ‘Pathology’ section of this Review.38 Of note, bright-light and confocal microscopy images of nuclei can be quickly and easily used to measure the FD of chromatin and/or the nucleus.38

Fractals and lung genetics

We have described how fractal geometry can be applied to the analysis of the complex shapes and anatomy of the lung, from whole-organ to subcellular scales. However, this geometrical framework has also shown potential for the analysis of biological complexities beyond the physical aspects of the lung. For example, the information stored in the genome and the temporal properties of lung function are amenable to analyses based on fractals.

The fractal nature of DNA sequences can be explored in silico using simple ‘DNA walks’ or ‘chaos game’ representations. A DNA walk is a vectorial representation of the DNA sequence.39,40 Using this technique, the two pairs of complementary DNA nucleotides (A–T and G–C) can be used to transform the DNA sequence into a 2D trajectory; starting at the origin, the walk is moved by one unit/position for each sequential base, up for ‘A’, down for ‘T’, right for ‘G’, and left for ‘C’, to produce a trace. The technique can be applied quickly to any length of DNA sequence, from single genes to entire genomes. A DNA walk can reveal patterns, such as palindromes, repeats, GC skew, translocations and gene duplications, and initially revealed the fractal nature of DNA by uncovering long-range correlations in nucleotide sequences.40,41 We have provided a comparison of the DNA walk trajectories for the ALK and EML4 genes, and an EML4–ALK fusion-protein gene relevant to lung cancer (Figure 4a).
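The DNA-walk construction described above is straightforward to reproduce. The sketch below follows the stated convention (up for A, down for T, right for G, left for C); the function name and handling of ambiguity codes are our own choices, not code from the Review:

```python
def dna_walk(seq):
    """2D DNA walk: map each base to a unit step (A up, T down, G right,
    C left) and accumulate the trajectory starting from the origin."""
    steps = {'A': (0, 1), 'T': (0, -1), 'G': (1, 0), 'C': (-1, 0)}
    x, y = 0, 0
    path = [(0, 0)]
    for base in seq.upper():
        dx, dy = steps.get(base, (0, 0))  # ignore N and other ambiguity codes
        x, y = x + dx, y + dy
        path.append((x, y))
    return path

# Complementary bases cancel, so a balanced sequence returns to the origin:
print(dna_walk("AGTC")[-1])  # (0, 0)
```

Plotting the returned (x, y) points as a polyline gives the walk trace; long-range correlations show up as a trajectory that drifts far more (or less) than a random walk of the same length would.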

Figure 4 | Fractal analysis of DNA sequences. Nucleotide sequences of DNA exhibit fractal properties including self-similarity, which can be illustrated in silico using DNA walks or chaos games. In the DNA walk approach, the nucleotide sequence is represented vectorially, ...

Fractal DNA sequences can also be displayed by means of a chaos game representation.42 In a chaos game, a fractal image is generated using a series of random points within a confined space. The chaos game can be played as follows: label the corners of a triangle with the numbers 1–6, assigning two numbers to each corner; starting at a random point within the triangle, roll a die and move halfway from the starting point towards the corner labelled with the number rolled; repeat this process, rolling the die and moving halfway from the current position towards the corner labelled with the new number, for multiple iterations. After sufficient iterations, the ensemble of points visited during the game forms a fractal image, in this case the Sierpinski gasket (Supplementary Video 1). The chaos game can be thought of as an iterative mapping function—the same operation is repeated using the outcome of one iteration as the input for the next operation. In a DNA chaos game, performed within a square with each of the corners assigned to one of the four DNA bases, the DNA sequence (nonrandom) is used, instead of random numbers, to dictate the movements and construct a fractal pattern. Figure 4b illustrates a chaos game representation for the DNA sequence of chromosome 2 (Supplementary Video 2 highlights the self-similarity of the chromosome 2 chaos game fractal when the image is examined over a range of scales). The first report of a DNA chaos game representation was published in 1990,42 although it might only be now, in the era of ‘big data’, that the full potential of iterative mapping, as a computationally efficient, alignment-free sequence comparison and analysis tool, can be exploited. Indeed, iterative mapping functions are now used in many sequence analysis and genomic reconstruction algorithms.43 These programmes can improve the quality of draft genome assemblies while reducing the time and computing memory needed.44
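The DNA chaos game is equally compact to implement. In the sketch below (our own illustration; the corner assignment is one common convention, not necessarily the one used for Figure 4b), each base pulls the current point halfway toward its corner of the unit square, and the visited cells are accumulated in a count grid that can be rendered as the fractal portrait:

```python
def chaos_game_counts(seq, size=64):
    """DNA chaos game inside the unit square: each base moves the current
    point halfway toward its assigned corner; returns a size x size grid
    counting how often each cell was visited."""
    corners = {'A': (0.0, 0.0), 'C': (0.0, 1.0),
               'G': (1.0, 1.0), 'T': (1.0, 0.0)}
    grid = [[0] * size for _ in range(size)]
    x = y = 0.5  # start at the centre of the square
    for base in seq.upper():
        if base not in corners:
            continue  # skip N and other ambiguity codes
        cx, cy = corners[base]
        x, y = (x + cx) / 2.0, (y + cy) / 2.0  # move halfway to the corner
        grid[min(int(y * size), size - 1)][min(int(x * size), size - 1)] += 1
    return grid
```

Rendering the grid as an image (dark cells for high counts) reproduces the characteristic chaos-game portrait of a genome; a purely random sequence fills the square uniformly, whereas repeats and compositional biases carve out the self-similar gaps seen in Figure 4b.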

Fractal analysis of lung physiology

Along with its structural features, some functional aspects of the lung also exhibit fractal properties. For example, breathing or respiratory rate variability (BRV or RRV) follows a nonrandom fractal pattern that shows long-range correlations over time.45,46 The FD of BRV increases with age, indicating that the breathing rate becomes more complex. The identification of fractal patterns in breathing rate, together with an increased understanding of how the fractal structure of the lung regulates alveolar inflation, has led to alterations in the use of mechanical ventilation apparatus. Programming a mechanical ventilator with a fractal or biologically variable breathing pattern has been shown to increase arterial oxygenation and to enhance respiratory sinus arrhythmia.46,47 In a study of respiratory variation during mechanical ventilation, Gutierrez et al.48 reported that patients who had the lowest RRV during ventilation support had the highest mortality. Similarly, Seely and colleagues49 reported that altered BRVs are associated with extubation failure in patients undergoing mechanical ventilation. Not all of these studies used fractal analyses per se; however, their results imply that the fractal properties of breathing/respiration rates are not accidental. Furthermore, a change in the FD of BRV might explain differences in the efficiency of the entire respiratory system, and could potentially be predictive of respiratory complications or failure. However, the clinical relevance of fractal alterations in breathing and respiration rates specific to lung cancer remains to be determined.
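Long-range correlations of the kind reported for BRV are commonly quantified with detrended fluctuation analysis (DFA). The sketch below is a minimal implementation under standard assumptions (linear detrending, a short range of window sizes, our own function name), not the exact algorithm of the cited studies: an exponent near 0.5 indicates uncorrelated noise, whereas the values reported for healthy breathing exceed 0.5.

```python
import numpy as np

def dfa_exponent(x, scales=(8, 16, 32, 64, 128)):
    """Detrended fluctuation analysis: return the scaling exponent alpha.

    alpha ~ 0.5 for uncorrelated noise; alpha > 0.5 indicates the long-range
    correlations described for breathing-rate variability. Minimal sketch only.
    """
    x = np.asarray(x, dtype=float)
    y = np.cumsum(x - x.mean())  # integrated profile of the series
    fluctuations = []
    for n in scales:
        n_windows = len(y) // n
        f2 = []
        for i in range(n_windows):
            seg = y[i * n:(i + 1) * n]
            t = np.arange(n)
            coeffs = np.polyfit(t, seg, 1)  # linear detrending per window
            f2.append(np.mean((seg - np.polyval(coeffs, t)) ** 2))
        fluctuations.append(np.sqrt(np.mean(f2)))
    # the slope of log F(n) versus log n is the DFA exponent
    alpha, _ = np.polyfit(np.log(scales), np.log(fluctuations), 1)
    return alpha

rng = np.random.default_rng(1)
alpha_white = dfa_exponent(rng.normal(size=8192))  # uncorrelated noise, ~0.5
```

Applied to a series of breath-to-breath intervals instead of synthetic noise, the same estimator yields the kind of exponent these studies report.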


Fractal alterations in lung cancer

We have highlighted how fractals can describe multiple aspects of the lung: morphology, physiology, and genetics. In the following sections, we review some examples of the application of fractal geometry to the characterization of the cancerous lung.


As discussed, fractals can be used to describe the complex anatomy and cellular characteristics of the normal lung. The gross morphological changes caused by tumour growth should, therefore, have a substantial impact on the FD of the lung tissue (Figure 5). To date, we have seen only limited reports in the literature examining the relationship between FD of lung tumour histopathology images and tumour type. However, FD has been used to differentiate between adenocarcinoma and squamous-cell carcinoma (SCC) of the lung.50 In this study, images of microarrays of formalin-fixed paraffin-embedded (FFPE) tissues stained with fluorescent anti-pan-cytokeratin antibodies were digitally processed to extract the outlines of epithelial structures, and the FD for each image was then calculated using the box-counting method.50 Adenocarcinoma specimens (n = 88) were found to have a statistically significantly lower FD than SCC samples (n = 64) (1.702 versus 1.780; P = 1.179 × 10−8).50 Furthermore, the authors found a positive correlation between survival durations and FD. This correlation was not statistically significant; however, the authors noted that not enough time had passed for their sample population to reach the median survival duration.50 These results suggest that tissue FD might be a useful biomarker in the histological classification of lung cancer subtypes and that it might even be a possible indicator of prognosis.
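The box-counting method used in the study discussed above can be illustrated with a short numpy sketch (the function name and demonstration images are our own; the cited study applied the method to outlines of epithelial structures extracted from stained tissue images):

```python
import numpy as np

def box_counting_fd(binary_img, box_sizes=(2, 4, 8, 16, 32)):
    """Estimate the box-counting fractal dimension of a 2D binary image.

    Counts occupied boxes N(s) at each box size s and fits the slope of
    log N(s) against log(1/s); that slope is the FD estimate.
    """
    img = np.asarray(binary_img, dtype=bool)
    counts = []
    for s in box_sizes:
        # trim so the image tiles exactly into s-by-s boxes
        h, w = (img.shape[0] // s) * s, (img.shape[1] // s) * s
        trimmed = img[:h, :w]
        # reduce each s-by-s box to "occupied or not", then count
        boxes = trimmed.reshape(h // s, s, w // s, s).any(axis=(1, 3))
        counts.append(boxes.sum())
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope

# Sanity checks: a filled square has FD ~ 2; a single straight line has FD ~ 1.
filled = np.ones((256, 256), dtype=bool)
line = np.zeros((256, 256), dtype=bool)
line[128, :] = True
```

For a fractal boundary, such as a tumour outline, the estimate falls between these two limits, which is why values such as 1.702 versus 1.780 can separate histological subtypes.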

Figure 5


Fractal analysis of lung cancer histology. Representative haematoxylin and eosin stained tissue slides for normal lung and four common lung cancer histologies (adenocarcinoma, large-cell carcinoma, small-cell carcinoma and a lepidic-type adenocarcinoma; ...

Another interesting and potentially powerful application of FD analysis in pathology was reported by Vasiljevic et al.51 In a large-scale study of bone metastasis (comprising 1,050 patients), this group attempted to identify the primary cancer type (lung, renal, or breast) based on a multifractal classification of cells from haematoxylin and eosin (H&E)-stained slides.51 Multifractal structures represent a further level of complexity compared with (mono)fractals—they are not well described by a single FD but rather by a whole spectrum of FDs. One instance in which multifractality can arise is a scenario in which the FD of an object varies with x–y position (that is, across subregions of the region assessed). For example, one might obtain a different FD when considering the nucleus of a cell, its mitochondria, and its membrane. Using the ImageJ52 plugin FracLac,53 Vasiljevic and colleagues calculated a number of multifractal parameters, including the maximal generalized FD and the maximal multifractal spectrum for each intensity-thresholded image, and then rated the fractal parameters for accuracy, sensitivity, specificity, and precision.51 The accuracy of classification for each parameter in all three groups of primary tumour types was >73%;51 sensitivity, specificity, and precision were greater than 60%, 76%, and 59%, respectively, for each parameter.51 These results indicate that a multifractal analysis of metastatic lesions might be useful in determining the site of the primary tumour in patients with cancer of unknown primary (CUP).
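The 'whole spectrum of FDs' that characterizes a multifractal is commonly summarized by the generalized (Rényi) dimensions D_q. The sketch below estimates D_q by box counting under simplifying assumptions (q = 1 requires a separate limit and is omitted; the function name is our own); it is not the FracLac implementation used in the cited study.

```python
import numpy as np

def generalized_dimension(img, q, box_sizes=(4, 8, 16, 32)):
    """Estimate the generalized (Renyi) dimension D_q of a non-negative 2D array.

    For each box size s, the image mass is partitioned into boxes with mass
    fractions p_i, and D_q is the scaling of log(sum_i p_i^q)/(q - 1) with
    log(s). A monofractal has the same D_q for all q; a multifractal does not.
    """
    img = np.asarray(img, dtype=float)
    logs, logm = [], []
    for s in box_sizes:
        h, w = (img.shape[0] // s) * s, (img.shape[1] // s) * s
        boxes = img[:h, :w].reshape(h // s, s, w // s, s).sum(axis=(1, 3))
        p = boxes[boxes > 0] / boxes.sum()
        logm.append(np.log(np.sum(p ** q)) / (q - 1))
        logs.append(np.log(s))
    slope, _ = np.polyfit(logs, logm, 1)
    return slope

# For a uniform image, D_q = 2 for every q: a monofractal spectrum.
uniform = np.ones((256, 256))
```

An image whose intensity distribution varies across subregions, such as a stained nucleus, gives a D_q that decreases with q, and the width of that spectrum is the kind of multifractal parameter Vasiljevic and colleagues rated for classification accuracy.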

Alterations in chromatin or nuclear FD have been detected in cancer cells, compared with normal tissues, and could potentially prove to be useful diagnostic and prognostic indicators. These differences should not come as a surprise, as alterations in higher-order chromatin structure are associated with changes in gene-expression profiles that occur in all cancers and even pre-malignant cells.54,55 Indeed, the first papers reporting studies that examined changes in chromatin FD in cancer appeared in the early 1990s.56 An attractive feature of FD analysis of chromatin is that the approach can be readily applied to tissue or cells stained with common histological stains, such as H&E. No additional tissue processing is needed, which also enables the retrospective analysis of banked slides. A review by Metze,38 who has published numerous papers on chromatin FD analysis, outlines the potential diagnostic applications of this approach in cancer. Although no dedicated studies on chromatin FD alterations in lung cancer have been reported to date, this approach has been applied to numerous other cancer types. For instance, a study by Streba et al.57 examined differences in nuclear and vascular FD between primary hepatocellular carcinomas and liver metastases from other tumour types, including adenocarcinoma of the lung. Binary images were generated from haematoxylin-stained tumour slides and these images were analysed using ImageJ and FracLac.57 Although this study was limited to one sample for each tumour type, the authors reported statistically significant differences in nuclear and vascular FD between different tumour types, and between the tumour and surrounding normal tissues.57 The authors suggest that, used in this way, FD analysis could be a useful diagnostic tool to differentiate between primary hepatic tumours and liver metastases, including hepatic metastases derived from adenocarcinoma of the lung.
However, much larger studies are needed to confirm the usefulness of FD in pathology.

For the purposes of this Review, we performed a limited analysis of FD and lacunarity on histological samples from different lung tumour types (Figure 5). We selected six areas at random within each FFPE tumour section (one specimen for each tumour type); images were converted to greyscale and the selected regions were analysed using FracLac (see Supplementary Figure 1).53 We found that the FD of all the assessed lung cancer histological subtypes was increased, to various degrees, compared with normal lung tissue, whereas lacunarity was considerably decreased. Of note, the values for the six regions of each cancer subtype showed marked clustering, although whether this similarity in FD and lacunarity would extend to different specimens of the same histological subtype remains unclear. Nevertheless, these findings suggest that this type of image analysis, together with chromatin, cellular, and radiological analyses, could be useful in the clinical diagnosis and classification of patients with lung cancer.
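Lacunarity, the second measure used in the analysis above, is commonly computed with the gliding-box algorithm: a box of side r slides over the image and the first and second moments of the box masses are compared. The following is a minimal numpy sketch (assuming a binary or greyscale image; the function name is our own, not FracLac's):

```python
import numpy as np

def gliding_box_lacunarity(img, r):
    """Gliding-box lacunarity at box size r: E[M^2] / E[M]^2 over box masses M.

    Values near 1 indicate a homogeneous, translation-invariant texture;
    higher values indicate larger gaps and greater heterogeneity.
    """
    img = np.asarray(img, dtype=float)
    # integral-image trick: the mass of every r-by-r box in O(1) per box
    c = np.cumsum(np.cumsum(img, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))
    m = c[r:, r:] - c[:-r, r:] - c[r:, :-r] + c[:-r, :-r]
    return np.mean(m ** 2) / np.mean(m) ** 2

# A completely filled image has lacunarity exactly 1 at any box size;
# a sparse, gappy image has lacunarity > 1.
full = np.ones((64, 64))
sparse = (np.random.default_rng(2).random((64, 64)) < 0.05).astype(float)
```

Repeating the calculation over a range of r values gives the lacunarity curve; the decrease we observed in tumour tissue corresponds to this ratio moving towards 1.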

Radiological imaging

Numerous genetic alterations associated with lung cancer have been identified that might contribute to the development of specific types of lung cancer, patterns of metastasis, drug resistance, or disease recurrence.58 One characteristic common to all lung cancers is the alteration of the normal lung morphology. These morphological abnormalities can be clearly observed in histological samples and radiographic images (Figures 5 and 6), and implementation of FD analysis to detect clinically relevant changes during lung cancer development and treatment is already beginning to be considered in radiology.59–62 Noise in an image can reduce its diagnostic utility, by obscuring or masking some tumour characteristics (such as texture and fine structure). This loss of detail might hinder correct identification of tumour subtype or accurate staging. An advantage of FD calculation is that it is relatively immune to the effects of noise in an image. Al-Kadi63 compared the effects of noise in CT images on FD and other commonly used texture measurements, including model-based methods (Gaussian Markov random fields), statistics-based methods (co-occurrence matrices, run-length matrices, autocovariance function) and wavelet-based methods (Gabor filters, wavelet packet transform). He reported that FD and wavelet packet transform calculations were least susceptible to noise and also gave the best characterization of lung tissue.63 For FD, this advantage arises because the calculation encompasses multiple scales, across which the effects of noise differ and are, therefore, minimized.

Figure 6


Lung cancer progression and fractal dimension. Sequential CT images of a patient diagnosed with stage I adenocarcinoma of the lung who declined treatment and showed progressive tumour growth over 5 years were subjected to fractal analysis; contrast-enhanced ...

Miwa and colleagues62 have shown that FD analysis of 18F-fluorodeoxyglucose (FDG) uptake imaged by PET can be useful for differential diagnosis of malignant and benign pulmonary nodules. They reported that heterogeneity of FDG uptake in the nodules, reported as density-FD (d-FD; a quantitative measure of intra-lesion heterogeneity of FDG uptake, with higher values indicating greater heterogeneity), was significantly lower in malignant non-small-cell lung cancer (NSCLC) than in benign (inflammatory) nodules (P <0.05).62 Miwa and co-workers62 hypothesized that the higher d-FD of the benign inflammatory nodules reflected the metabolically more-varied components (bronchial tissue, inflammatory cells, vascular cells, and the stroma, for example) of benign nodules compared with the relatively homogeneous malignant tissue. Furthermore, the researchers reported that, although the greater diagnostic accuracy of d-FD compared with maximum standardized uptake value (SUVmax) was not statistically significant, d-FD was not dependent on tumour size (whereas SUVmax was correlated with lesion size).62 This finding indicates that d-FD might provide unique information about the tumour, which could potentially be of diagnostic utility.

Dimitrakopoulou-Strauss et al.64 argue that FD analysis performed on time–activity curves might be more suitable than spatial-based calculations (such as SUV) for kinetic analysis of FDG-PET data. In a feasibility study by this group, temporal FD analysis was included as a possible predictor of survival of patients with NSCLC (n = 14) following chemotherapy.64 In this study, dynamic FDG-PET studies were performed before the initiation of chemotherapy and again after the first cycle, and the FD of tumour FDG time–activity curves was calculated independently for each voxel, thus yielding a FD parametric image.64 The pre-chemotherapy and post-chemotherapy FD parametric images were compared and were found to be statistically different (P = 0.05); however, the results did not help to classify the patients with respect to survival durations.64,65
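Temporal FD analysis of a time–activity curve can be illustrated with Higuchi's method, a standard estimator of the FD of a 1D signal. This is a generic sketch (the function name and the choice of kmax are ours), not necessarily the algorithm used by Dimitrakopoulou-Strauss et al.: a smooth curve gives an FD near 1, an extremely irregular one an FD near 2.

```python
import numpy as np

def higuchi_fd(x, kmax=8):
    """Higuchi fractal dimension of a 1D time series (e.g. a time-activity curve).

    For each lag k, the mean normalised curve length L(k) is computed over all
    starting offsets; the FD is the slope of log L(k) against log(1/k).
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    ks = range(1, kmax + 1)
    lengths = []
    for k in ks:
        lk = []
        for m in range(k):
            idx = np.arange(m, n, k)  # subsampled series at lag k, offset m
            if len(idx) < 2:
                continue
            dist = np.abs(np.diff(x[idx])).sum()
            norm = (n - 1) / ((len(idx) - 1) * k)  # standard length normalisation
            lk.append(dist * norm / k)
        lengths.append(np.mean(lk))
    slope, _ = np.polyfit(np.log(1.0 / np.array(list(ks))), np.log(lengths), 1)
    return slope
```

Computing this per voxel over the FDG time–activity curves would yield the kind of FD parametric image described above.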

Kido et al.60 applied FD analysis to thin-section CT images of bronchioalveolar carcinoma (BAC) and non-BAC tumours to examine differences in the internal texture of the carcinoma and the peripheral (lung–carcinoma interface) texture. 3D density surfaces were constructed based on CT data for selected regions of interest (internal areas of the tumour and the tumour–stroma interface) and were characterized by FD analysis.60 The researchers concluded that FD-based analysis could differentiate small, localized BACs (n = 30), which have a good prognosis, from non-BAC tumours (n = 40), which have a poorer prognosis.60 BACs had higher FD values, indicating greater structural complexity calculated by either the internal or peripheral texture analysis.60 The authors attributed the higher FD of BACs to the high level of aerogenous components, which resulted in a greater variability in the CT data, thus creating more texture in the 3D density surface. Similarly, a study by Al-Kadi and Watson66 assessed whether aggressive and nonaggressive lung tumours could be classified using fractal analysis of contrast-enhanced CT-image time series. The study demonstrated that the FD of lung-tumour tissue was higher than that of normal lung tissue, and that tumour FD was strongly correlated with FDG-PET SUVs.66 The study also reported that more-aggressive tumours (stages III–IV) had a higher FD compared with nonaggressive tumours (stage I), and the accuracy of distinguishing between advanced-stage and early-stage tumours based on FD analysis was 83.3%.66 However, these conclusions were based on evaluation of data from a limited number of patients, and require confirmation in a larger-scale study. The researchers also observed that, independently of tumour stage, higher FD corresponded to lower lacunarity, which the authors hypothesized could reflect the increased homogeneity of these tumours.
The implications of lower lacunarity in contrast-enhanced CT images have not been investigated, and its relationship to tumour heterogeneity remains to be confirmed. However, this property could be another indicator of tumour vascularization or differentiation. In future studies, correlating radiological imaging data with histopathological examination would be useful to address this issue.

To explore the clinical potential of fractal-based analysis for ourselves, we used FracLac to perform an FD analysis of a time series of CT images from a patient diagnosed with stage I adenocarcinoma of the lung who declined treatment (Supplementary Figure 2).53 The CT images illustrate the natural progression of the tumour over time (Figure 6). We measured the FD of the tumour–stroma interface and found that it increased over time from 1.4095 to 1.625. In addition, we calculated the FD of pretreatment and post-treatment CT images for a patient diagnosed with ALK-positive stage IV adenocarcinoma of the lung who responded to an ALK-targeted therapy (Figure 7); we used a method of FD analysis similar to that reported by Hayano et al.,59,67 who detected differences in the FD of tumours in patients with hepatocellular carcinoma before and after treatment with one cycle of either sunitinib or bevacizumab. The FD of the lung tumour on contrast-enhanced CT images decreased from 1.1237 before treatment to 1.0597 post-treatment—a change of 0.064 following treatment in this patient. By comparison, the FD of the patient’s other, unaffected lung showed only a minor change of 0.016 (1.0556 before treatment to 1.0396 post-treatment). These results suggest that further investigation of the use of FD as a predictive indicator of therapeutic response or progression is warranted.

Figure 7

Figure 7

Treatment response and FD. CT images of a patient diagnosed with ALK-positive stage IV adenocarcinoma of the lung who responded to an ALK-targeted therapy were analysed to assess changes in FD of the lungs. The FDs of the tumour area in the right lung ...


Future directions

We have outlined a number of examples of instances in which FD has been used to characterize lung cancer. Perhaps one of the most-exciting uses of fractal analysis is its application to clinical radiological imaging. FD might be useful in predicting and assessing therapeutic response, as reported by Hayano et al.59,67 Importantly, using fractal analysis, this group was able to detect a change in FD before any detectable change in overall tumour size or volume.59,67 FD alterations soon after initiation of treatment could have critical clinical implications as a potential early indicator of therapeutic response. We believe there is a solid rationale for the addition of FD to the list of parameters (such as tumour volume and SUVmax) routinely considered when examining CT and PET images, because this measure can provide additional useful information that might improve clinical decision-making.

The majority of reports in the literature of the application of fractal analysis to tumour histopathology have focused on detecting changes in the vascularization of tumour samples. Fractal analysis is often used to study neovascularization changes associated with diabetic retinopathy,68–70 and similar techniques are now being applied to quantify tumour vascularity.71–76 In their study of retinopathy, Lee et al.69 demonstrated that FD is sensitive to new vessel formation, but is relatively insensitive to image preprocessing and vessel thickness per se, thereby making it eminently suitable to measure neovascularization-associated changes in tissues. Several reports have emerged in which vascular FD calculated from histological samples has been used to assign tumour stage in head and neck, and brain cancers.71–76 These publications suggest that fractal analysis of lung cancer vascular networks could prove useful in detecting early changes due to tumour growth and angiogenesis, and in assessing tumour perfusion.

In cases of limited sample availability, fractal analyses of nuclei or chromatin might have utility, as FD analysis can determine the phenotype from fewer cells than traditional pathology methods require. In a study that used May–Grünwald–Giemsa-stained bone-marrow slides to determine the fractal characteristics of multiple myeloma, Ferro et al.77 reported that 30–40 cells were sufficient to calculate the mean FD (using the Minkowski–Bouligand box-counting method; Figure 2) of the nuclei in a sample. Thus, this method could aid in the identification of tumour-positive samples in which cell numbers are limited, such as fine-needle aspirates.

In 2014, Al-Kadi76 proposed a computer-aided decision-support system for analysis of histopathological images of brain tumours. This analysis method first decomposes the image into wavelet packets, each representing different length-scales. Wavelet decomposition is a mathematical representation akin to the more-common Fourier transform: in the latter, the ‘signal’ is decomposed in terms of sines and cosines of different spatial frequencies (length-scales); in the former, wavelets are used instead. Wavelets are more-generalized oscillatory functions that can form a complete orthonormal set (packet), and can be used to represent data. Fractal analysis can then be used to choose a set of packets that best denotes the distinguishing features of the image. The result of a Fourier transform of an image is itself an image, but in Fourier space, also known as k-space. In a similar manner, wavelet decomposition of images produces further images, with each wavelet packet producing a different image. These second-generation images are then analysed using fractal geometry to determine which packet more completely and accurately describes the original medical image. This approach highlights how the fractal analysis framework can be applied not only to raw or minimally processed images, but also to higher-order datasets derived from them. Furthermore, this method enables the use of colour images in the analysis, removing the need for binary images and preserving more information on tissue texture. This approach could be readily applied to assessments of lung tumour histopathology samples in the future. Indeed, on the basis of these reports relating to multiple cancer types, the application of fractal analysis in lung cancer histopathology has the potential to provide clinically relevant information on multiple aspects of cancer, including vascularization, malignancy, and even patient prognosis.

However, FD analysis is not limited to determination of cell and tumour types; it can be useful in characterizing cellular behaviours in vitro, such as migration, apoptosis, and differentiation. These measurements might be useful for the characterization of lung cancer cell lines. For example, Pasqualato et al.78 reported that the FD of the cell membrane profile of colon cancer cells increased as the cells started to migrate in a wound healing assay (at 0–3 h time points) and then decreased again at 24 h when they had closed the wound. Changes in membrane FD might, therefore, be indicative of the migratory potential or activity of cells. Induction of apoptosis can also cause changes in nuclear and cellular FD and lacunarity before detection of other classic markers of apoptosis, such as DNA fragmentation or membrane damage.79 Such changes could be used to screen for toxicity or resistance to drug treatments in vitro. FD analysis of F-actin has been used to detect cytoskeletal changes in response to shear and mechanical stress, silencing of the gene encoding Rho-GDP dissociation inhibitor α (Rho-GDIα; a factor involved in the regulation of small GTPase function, particularly in the context of F-actin dynamics), or stimulation of cell differentiation.80–83 These examples indicate the usefulness of FD analysis in basic research, and similar evaluations could be readily applied to lung cancer cell lines to monitor invasiveness and/or metastatic potential, drug cytotoxicity, and epithelial–mesenchymal transition.


Challenges of fractal analysis

Fractal analysis of clinical and biological images has the potential to become a powerful and useful tool; however, a number of issues remain to be resolved. It should be underscored that many of these limitations are, in general, common to all computer-aided image-analysis techniques. Principal among these issues is a lack of standardization. As discussed, numerous algorithms have been developed for FD calculation, each of which can give slightly discordant results depending on the image analysed. Therefore, it is important that researchers note which method they have used for their calculations. Furthermore, users can often be confronted with a ‘black box’ scenario: they might possess limited knowledge of the inner workings of the algorithms. This lack of technical expertise means that the user might not fully appreciate how sensitive the analysis tools are to the input data. Factors that affect the quality of the input data include region of interest (ROI)-selection accuracy, image resolution, signal-to-noise ratio, and image bit depth. For instance, the resolution of the image is important, as it could limit the fractal scaling window. Image preparation or processing techniques can also introduce some variation into the results. Ideally, tools for fractal analysis should be provided with guidance and tutorials, including examples that highlight these sensitivity issues.

Of note, many of the current FD-analysis tools are limited to assessment of binary images. This means that, in many cases, information contained within the image is lost during the binarization process, and the use of different binarization algorithms could introduce further variation to the results. The development of tools capable of analysing binary, greyscale, or colour images, such as that reported by Al-Kadi,76 will increase the applicability and, probably, the sensitivity of fractal analysis.

Some attempts have been made to develop standardized protocols for FD analysis, but a great deal of variation remains in the techniques being used.84 The use of reference or calibration images, similar to the Brodatz images used in digital imaging for texture classification,85,86 would be extremely useful to researchers and developers, and could enable better calibration of and comparison between image-analysis tools and datasets.

For histological analyses, variation in the processing of the samples themselves (fixation methods, sectioning, and staining) can affect the tissue and is another common problem facing image analysis. These sample artefacts could be subtle enough as to be imperceptible by simple microscopic observation, but could, nevertheless, compound more-sensitive fractal analyses. Careful sample handling and preparation is, therefore, needed to ensure the preservation of tissue and cellular morphology, and to minimize sample-handling artefacts.

Current fractal analysis tools are also largely restricted to evaluation of 2D images. Fractal analysis of clinical images in 3D will probably be even more informative and useful than the current 2D-based approaches. Although some tools for the analysis of 3D images have been developed, many are not readily accessible or require some computer-programming knowledge. A web-based tool, 3DFD, has been developed for 3D analysis of brain MRI images (Box 1).87 The 3DFD online interface allows users to upload their own images for analysis, and the programme uses a box-counting methodology to calculate the FD. 3DFD is an excellent example of a user-friendly fractal analysis tool. The development of similarly user-friendly, widely available tools for the analysis of lung CT or PET images would surely prove invaluable. Vakoc et al.88 have used 3D intravital microscopy in mice and FD analysis to detect differences between the normal and tumour vasculatures in the brain in vivo. Eventually, similar studies in patients with lung cancer could provide information that aids drug development and clinical decision-making.
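The box-counting approach extends directly from pixels to voxels, which is in essence what a 3DFD-style tool computes from a segmented volume. The sketch below assumes a binary (already segmented) volume and uses our own function name; it is not the 3DFD implementation itself.

```python
import numpy as np

def box_counting_fd_3d(volume, box_sizes=(2, 4, 8, 16)):
    """3D box-counting FD of a binary volume (e.g. a segmented vascular tree).

    Counts occupied s-by-s-by-s boxes N(s) at each box size s and fits the
    slope of log N(s) against log(1/s), exactly as in the 2D estimator.
    """
    vol = np.asarray(volume, dtype=bool)
    counts = []
    for s in box_sizes:
        d, h, w = ((n // s) * s for n in vol.shape)  # trim to a multiple of s
        boxes = vol[:d, :h, :w].reshape(d // s, s, h // s, s, w // s, s)
        counts.append(boxes.any(axis=(1, 3, 5)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope
```

A filled cube gives an FD of 3 and a single plane of voxels an FD of 2; a segmented tumour vasculature from CT or intravital microscopy would fall in between.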

Conclusions


Lung cancer is a heterogeneous disease entity that can be classified into many histological and molecular subtypes. Clinically, lung cancer can be highly aggressive in terms of primary tumour growth and metastasis to the lymph nodes and other organs. New tools are required to obtain a better understanding of the biology of lung cancer development, progression, and molecular heterogeneity. Fractal-based analyses are versatile and sensitive tools that have many potential applications in lung cancer research. Substantial effort is needed to increase the utility of fractal analysis, but already a number of examples of its use in other cancer types have been presented. Importantly, FD calculations can be applied at multiple levels, ranging from DNA-sequence analysis to assessment of cellular, tissue, and whole-organ images. The development of web-based and ImageJ-based software packages (Box 1) has brought fractal analysis within the reach of scientists and clinicians with an interest in quantitative image analysis. The examples presented herein serve only to highlight the potential of fractal analysis, and certainly do not represent an exhaustive illustration of the possible uses of this approach. Indeed, we hope to promote, and look forward to, the development of many more applications of fractal analysis in investigations of the unique anatomy and biology of the lung, and lung cancer.

Key points

▪ Cancer-related structural alterations in lung tissue and individual cells can often be readily observed, but can be difficult to quantify using conventional metrics, such as length or volume

▪ Fractals are mathematical constructs that appear infinitely self-similar over a range of scales

▪ Many biological entities, including the lung, can be considered as fractals within a limited scaling range known as a ‘scaling window’

▪ A fractal dimension (FD) is a non-integer value that relates how the detail and complexity of an object changes with scale

▪ FD can be used to quantify complex shapes and patterns in a range of clinical and biological images, including those illustrating DNA, cellular architectural, histopathological, and radiological features

▪ Fractal dimension can detect subtle changes in images and could potentially provide clinically useful information relating to tumour type, stage, and response to therapy


Supplementary Material

Figure 1


Figure 2


Acknowledgements


The work of the authors is supported in part by the NIH National Cancer Institute (grant P30 CA014599 to the University of Chicago Cancer Research Foundation). The work of R.S. is supported by the Mesothelioma Applied Research Foundation, the Guy Geleerd Memorial Golf Invitational–V-Foundation for Cancer Research.



Competing interests

The authors declare no competing interests.

Author contributions

F.E.L., G.C.C., N.A.C., T.A.H., M.W.V. and R.S. wrote the article and contributed to all stages of the preparation of the manuscript for submission. In addition, H.J.Z. and C.-T.C. contributed to researching data for the article, and S.D.M. and E.E.V. made substantial contributions to discussion of content. All authors reviewed/edited the manuscript before submission.

Supplementary information is linked to the online version of the paper at

References


1. American Cancer Society. Cancer Facts and Figures 2015. 2015 [online],

2. Siegel R, Ma J, Zou Z, Jemal A. Cancer statistics, 2014. CA Cancer J. Clin. 2014;64:9–29. [PubMed]

3. Mozley PD, et al. Change in lung tumor volume as a biomarker of treatment response: a critical review of the evidence. Ann. Oncol. 2010;21:1751–1755. [PubMed]

4. Mandelbrot BB. The Fractal Geometry of Nature. W. H. Freeman & Co. Ltd; 1982.

5. Peitgen H-O, Jürgens H, Saupe D. Chaos and Fractals: New Frontiers of Science. 2nd edn. Springer-Verlag; 2004.

6. Legner P. Fractals. Mathigon—World of Mathematics. 2015 [online],

7. Ristanović D, Milosević NT. Fractal analysis: methodologies for biomedical researchers. Theor. Biol. Forum. 2012;105:99–118. [PubMed]

8. Mandelbrot B. How long is the coast of Britain? Statistical self-similarity and fractional dimension. Science. 1967;156:636–638. [PubMed]

9. Eghball B, Hergert GW, Lesoing GW, Ferguson RB. Fractal analysis of spatial and temporal variability. Geoderma. 1999;88:349–362.

10. Lopes R, Betrouni N. Fractal and multifractal analysis: a review. Med. Image Anal. 2009;13:634–649. [PubMed]

11. Dubuc B, Quiniou JF, Roques-Carmes C, Tricot C, Zucker SW. Evaluating the fractal dimension of profiles. Phys. Rev. A. 1989;39:1500–1512. [PubMed]

12. Jelinek HF, Fernandez E. Neurons and fractals: how reliable and useful are calculations of fractal dimensions? J. Neurosci. Methods. 1998;81:9–18. [PubMed]

13. Karperien A, Ahammer H, Jelinek HF. Quantitating the subtleties of microglial morphology with fractal analysis. Front. Cell. Neurosci. 2013;7:3. [PMC free article] [PubMed]

14. Nonnenmacher TF, Baumann G, Barth A, Losa GA. Digital image analysis of self-similar cell profiles. Int. J. Biomed. Comput. 1994;37:131–138. [PubMed]

15. Smith TG, Jr, Lange GD, Marks WB. Fractal methods and results in cellular morphology—dimensions, lacunarity and multifractals. J. Neurosci. Methods. 1996;69:123–136. [PubMed]

16. Iannaccone PM, Khokha M. Fractal Geometry in Biological Systems: An Analytical Approach. CRC Press; 1996.

17. Peleg S, Naor J, Hartley R, Avnir D. Multiple resolution texture analysis and classification. IEEE Trans. Pattern Anal. Mach. Intell. 1984;6:518–523. [PubMed]

18. Tolle CR, McJunkin TR, Gorsich DJ. An efficient implementation of the gliding box lacunarity algorithm. Physica D. 2008;237:10.

19. Plotnick RE, Gardner RH, Hargrove WW, Prestegaard K, Perlmutter M. Lacunarity analysis: a general technique for the analysis of spatial patterns. Phys. Rev. E Stat. Phys. Plasmas Fluids Relat. Interdiscip. Topics. 1996;53:5461–5468. [PubMed]

20. Borys P, Krasowska M, Grzywna ZJ, Djamgoz MB, Mycielska ME. Lacunarity as a novel measure of cancer cells behavior. Biosystems. 2008;94:276–281. [PubMed]

21. Weibel ER. What makes a good lung? Swiss Med. Wkly. 2009;139:375–386. [PubMed]

22. Iber D, Menshykau D. The control of branching morphogenesis. Open Biol. 2013;3:130088. [PMC free article] [PubMed]

23. Kitaoka H, Takaki R, Suki B. A three-dimensional model of the human airway tree. J. Appl. Physiol. (1985) 1999;87:2207–2217. [PubMed]

24. Glenny RW. Emergence of matched airway and vascular trees from fractal rules. J. Appl. Physiol. (1985) 2011;110:1119–1129. [PubMed]

25. Fleury V, Gouyet J-F, Léonetti M, editors. Branching in Nature: Dynamics and Morphogenesis of Branching Structures, From Cell to River Networks. Springer-Verlag; 2001.

26. West BJ. Physiology in fractal dimensions: error tolerance. Ann. Biomed. Eng. 1990;18:135–149. [PubMed]

27. Nelson TR, West BJ, Goldberger AL. The fractal lung: universal and species-related scaling patterns. Experientia. 1990;46:251–254. [PubMed]

28. Alencar AM, et al. Physiology: dynamic instabilities in the inflating lung. Nature. 2002;417:809–811. [PubMed]

29. Suki B, et al. Mechanical failure, stress redistribution, elastase activity and binding site availability on elastin during the progression of emphysema. Pulm. Pharmacol. Ther. 2012;25:268–275. [PubMed]

30. Bates JH, Suki B. Assessment of peripheral lung mechanics. Respir. Physiol. Neurobiol. 2008;163:54–63. [PMC free article] [PubMed]

31. Boser SR, Park H, Perry SF, Ménache MG, Green FH. Fractal geometry of airway remodeling in human asthma. Am. J. Respir. Crit. Care Med. 2005;172:817–823. [PubMed]

32. Gehr P, Bachofen M, Weibel ER. The normal human lung: ultrastructure and morphometric estimation of diffusion capacity. Respir. Physiol. 1978;32:121–140. [PubMed]

33. Losa GA. The fractal geometry of life. Riv. Biol. 2009;102:29–59. [PubMed]

34. Landini G, Rippin JW. Quantification of nuclear pleomorphism using an asymptotic fractal model. Anal. Quant. Cytol. Histol. 1996;18:167–176. [PubMed]

35. Bancaud A, et al. Molecular crowding affects diffusion and binding of nuclear proteins in heterochromatin and reveals the fractal organization of chromatin. EMBO J. 2009;28:3785–3798. [PMC free article] [PubMed]

36. Lieberman-Aiden E, et al. Comprehensive mapping of long-range interactions reveals folding principles of the human genome. Science. 2009;326:289–293. [PMC free article] [PubMed]

37. Grosberg AY, Nechaev SK, Shakhnovich EI. The role of topological constraints in the kinetics of collapse of macromolecules. J. Phys. (France) 1988;49:2095–2100.

38. Metze K. Fractal dimension of chromatin: potential molecular diagnostic applications for cancer prognosis. Expert Rev. Mol. Diagn. 2013;13:719–735. [PMC free article] [PubMed]

39. Peng CK, et al. Fractal landscape analysis of DNA walks. Physica A. 1992;191:25–29. [PubMed]

40. Peng CK, et al. Long-range correlations in nucleotide sequences. Nature. 1992;356:168–170. [PubMed]

41. Arakawa K, et al. Genome Projector: zoomable genome map with multiple views. BMC Bioinformatics. 2009;10:31. [PMC free article] [PubMed]

42. Jeffrey HJ. Chaos game representation of gene structure. Nucleic Acids Res. 1990;18:2163–2170. [PMC free article] [PubMed]

43. Almeida JS. Sequence analysis by iterated maps, a review. Brief. Bioinform. 2014;15:369–375. [PMC free article] [PubMed]

44. Tsai IJ, Otto TD, Berriman M. Improving draft assemblies by iterative mapping and assembly of short reads to eliminate gaps. Genome Biol. 2010;11:R41. [PMC free article] [PubMed]

45. Peng CK, et al. Quantifying fractal dynamics of human respiration: age and gender effects. Ann. Biomed. Eng. 2002;30:683–692. [PubMed]

46. West BJ. Fractal physiology and the fractional calculus: a perspective. Front. Physiol. 2010;1:12. [PMC free article] [PubMed]

47. Mutch WA, Graham MR, Girling LG, Brewster JF. Fractal ventilation enhances respiratory sinus arrhythmia. Respir. Res. 2005;6:41. [PMC free article] [PubMed]

48. Gutierrez G, et al. Decreased respiratory rate variability during mechanical ventilation is associated with increased mortality. Intensive Care Med. 2013;39:1359–1367. [PubMed]

49. Seely AJ, et al. Do heart and respiratory rate variability improve prediction of extubation outcomes in critically ill patients? Crit. Care. 2014;18:R65. [PMC free article] [PubMed]

50. Lee LH, et al. Digital differentiation of non-small cell carcinomas of the lung by the fractal dimension of their epithelial architecture. Micron. 2014;67:125–131. [PubMed]

51. Vasiljevic J, et al. Application of multifractal analysis on microscopic images in the classification of metastatic bone disease. Biomed. Microdevices. 2012;14:541–548. [PubMed]

52. US National Institutes of Health. ImageJ. 2015 [online],

53. Karperien A. FracLac for ImageJ. US National Institutes of Health; 2013. [online],

54. Fudenberg G, Getz G, Meyerson M, Mirny LA. High order chromatin architecture shapes the landscape of chromosomal alterations in cancer. Nat. Biotechnol. 2011;29:1109–1113. [PMC free article] [PubMed]

55. Misteli T. Higher-order genome organization in human disease. Cold Spring Harb. Perspect. Biol. 2010;2:a000794. [PMC free article] [PubMed]

56. Irinopoulou T, Rigaut JP, Benson MC. Toward objective prognostic grading of prostatic carcinoma using image analysis. Anal. Quant. Cytol. Histol. 1993;15:341–344. [PubMed]

57. Streba CT, et al. Fractal analysis differentiation of nuclear and vascular patterns in hepatocellular carcinomas and hepatic metastasis. Rom. J. Morphol. Embryol. 2011;52:845–854. [PubMed]

58. Shtivelman E, et al. Molecular pathways and therapeutic targets in lung cancer. Oncotarget. 2014;5:1392–1433. [PMC free article] [PubMed]

59. Hayano K, Yoshida H, Zhu AX, Sahani DV. Fractal analysis of contrast-enhanced CT images to predict survival of patients with hepatocellular carcinoma treated with sunitinib. Dig. Dis. Sci. 2014;59:1996–2003. [PubMed]

60. Kido S, Kuriyama K, Higashiyama M, Kasugai T, Kuroda C. Fractal analysis of internal and peripheral textures of small peripheral bronchogenic carcinomas in thin-section computed tomography: comparison of bronchioloalveolar cell carcinomas with nonbronchioloalveolar cell carcinomas. J. Comput. Assist. Tomogr. 2003;27:56–61. [PubMed]

61. Michallek F, Dewey M. Fractal analysis in radiological and nuclear medicine perfusion imaging: a systematic review. Eur. Radiol. 2014;24:60–69. [PubMed]

62. Miwa K, et al. FDG uptake heterogeneity evaluated by fractal analysis improves the differential diagnosis of pulmonary nodules. Eur. J. Radiol. 2014;83:715–719. [PubMed]

63. Al-Kadi OS. Assessment of texture measures susceptibility to noise in conventional and contrast enhanced computed tomography lung tumour images. Comput. Med. Imaging Graph. 2010;34:494–503. [PubMed]

64. Dimitrakopoulou-Strauss A, et al. Prediction of short-term survival in patients with advanced nonsmall cell lung cancer following chemotherapy based on 2-deoxy-2-[F-18]fluoro-D-glucose-positron emission tomography: a feasibility study. Mol. Imaging Biol. 2007;9:308–317. [PubMed]

65. Dimitrakopoulou-Strauss A, Pan L, Strauss LG. Quantitative approaches of dynamic FDG-PET and PET/CT studies (dPET/CT) for the evaluation of oncological patients. Cancer Imaging. 2012;12:283–289. [PMC free article] [PubMed]

66. Al-Kadi OS, Watson D. Texture analysis of aggressive and nonaggressive lung tumor CE CT images. IEEE Trans. Biomed. Eng. 2008;55:1822–1830. [PubMed]

67. Hayano K, Lee SH, Yoshida H, Zhu AX, Sahani DV. Fractal analysis of CT perfusion images for evaluation of antiangiogenic treatment and survival in hepatocellular carcinoma. Acad. Radiol. 2014;21:654–660. [PubMed]

68. Doubal FN, et al. Fractal analysis of retinal vessels suggests that a distinct vasculopathy causes lacunar stroke. Neurology. 2010;74:1102–1107. [PMC free article] [PubMed]

69. Lee J, Zee BC, Li Q. Detection of neovascularization based on fractal and texture analysis with interaction effects in diabetic retinopathy. PLoS ONE. 2013;8:e75699. [PMC free article] [PubMed]

70. Talu S. Fractal analysis of normal retinal vascular network. Oftalmologia. 2011;55:11–16. [PubMed]

71. Di Ieva A, et al. Computer-assisted and fractal-based morphometric assessment of microvascularity in histological specimens of gliomas. Sci. Rep. 2012;2:429. [PMC free article] [PubMed]

72. Di Ieva A, et al. Fractal dimension as a quantitator of the microvasculature of normal and adenomatous pituitary tissue. J. Anat. 2007;211:673–680. [PMC free article] [PubMed]

73. Di Ieva A, et al. Euclidean and fractal geometry of microvascular networks in normal and neoplastic pituitary tissue. Neurosurg. Rev. 2008;31:271–281. [PubMed]

74. Di Ieva A, Grizzi F, Sherif C, Matula C, Tschabitscher M. Angioarchitectural heterogeneity in human glioblastoma multiforme: a fractal-based histopathological assessment. Microvasc. Res. 2011;81:222–230. [PubMed]

75. Goutzanis LP, et al. Vascular fractal dimension and total vascular area in the study of oral cancer. Head Neck. 2009;31:298–307. [PubMed]

76. Al-Kadi OS. A multiresolution clinical decision support system based on fractal model design for classification of histological brain tumours. Comput. Med. Imaging Graph. 2014;41:67–79. [PubMed]

77. Ferro DP, et al. Fractal characteristics of May-Grünwald-Giemsa stained chromatin are independent prognostic factors for survival in multiple myeloma. PLoS ONE. 2011;6:e20706. [PMC free article] [PubMed]

78. Pasqualato A, et al. Shape in migration: quantitative image analysis of migrating chemoresistant HCT-8 colon cancer cells. Cell Adh. Migr. 2013;7:450–459. [PMC free article] [PubMed]

79. Pantic I, Harhaji-Trajkovic L, Pantovic A, Milosevic NT, Trajkovic V. Changes in fractal dimension and lacunarity as early markers of UV-induced apoptosis. J. Theor. Biol. 2012;303:87–92. [PubMed]

80. Fuseler JW, Millette CF, Davis JM, Carver W. Fractal and image analysis of morphological changes in the actin cytoskeleton of neonatal cardiac fibroblasts in response to mechanical stretch. Microsc. Microanal. 2007;13:133–143. [PubMed]

81. Park SH, et al. Texture analyses show synergetic effects of biomechanical and biochemical stimulation on mesenchymal stem cell differentiation into early phase osteoblasts. Microsc. Microanal. 2014;20:219–227. [PubMed]

82. Qian AR, et al. Fractal dimension as a measure of altered actin cytoskeleton in MC3T3-E1 cells under simulated microgravity using 3-D/2-D clinostats. IEEE Trans. Biomed. Eng. 2012;59:1374–1380. [PubMed]

83. Qi YX, Wang XD, Zhang P, Jiang ZL. Fractal and Image Analysis of Cytoskeletal F-Actin Orgnization in Endothelial Cells under Shear Stress and Rho-GDIα Knock Down. In: Lim CT, Goh JC, editors. 6th World Congress of Biomechanics (WCB 2010): In Conjunction with 14th International Conference on Biomedical Engineering (ICBME) and 5th Asia Pacific Conference on Biomechanics (APBiomech). IFMBE Proceedings. Vol. 31. Springer; 2010. pp. 1051–1054.

84. Di Ieva A. Fractal analysis of microvascular networks in malignant brain tumors. Clin. Neuropathol. 2012;31:342–351. [PubMed]

85. Brodatz P. Textures: A Photographic Album for Artists and Designers. Peter Smith Publisher, Incorporated; 1981.

86. Florindo JB, Landini G, Bruno OM. Texture descriptors by a fractal analysis of three-dimensional local coarseness. Digit. Signal Process. 2015;42:70–79.

87. Jimenez J, et al. A Web platform for the interactive visualization and analysis of the 3D fractal dimension of MRI data. J. Biomed. Inform. 2014;51:176–190. [PubMed]

88. Vakoc BJ, et al. Three-dimensional microscopy of the tumor microenvironment in vivo using optical frequency domain imaging. Nat. Med. 2009;15:1219–1223. [PMC free article] [PubMed]

89. Véhel JL, Legrand P. Signal and image processing with FracLab. In: Novak MM, editor. Thinking in Patterns: Fractals and Related Phenomena in Nature. World Scientific; 2004. pp. 321–322.

90. Thé MA. Fractalyse—Fractal Analysis Software. 2015 [online],

91. Silijkerman F. Ultra fractal 5. 2014 [online],

92. Reuter M. Image Analysis: Fractal Dimension—FDim. 2015 [online],

(This "University article" and the NIH-NCI "Institute article", both appearing in Academia at about the same time, illustrate that the fractality of the genome, as well as the fractality of organisms such as cancers, is at this point universally admitted. Objective and independent measures are the Google hit counts of fractal cancer and fractal genome - together occasionally surpassing 2 million hits. Recognition, however, is given to my FractoGene causal connection "Fractal Genome Grows Fractal Organisms" only by a select few - those who understand, and actually wish well, the disruption by reversal of the two mistaken axioms, Junk DNA and the Central Dogma, to arrive at The Principle of Recursive Genome Function in 2008. A remarkable example is the "Principle of Recursive Genome Function" itself, where the Google count nears one fifth of a million hits - while Academia cited the peer-reviewed science paper 37 times. Yes, thirty-seven, over about its first decade. I am often asked the obvious question of why Academia is hesitant to embrace disruptive breakthroughs. Fractality of the DNA, or fractality of organisms, are features - and features disrupt nothing. Features can be piled up ad infinitum (well, almost ad infinitum; more precisely, until the point when the underlying framework collapses under the colossal weight of rapidly escalating evidence). Realizing the cause-and-effect connection of fractal DNA growing fractal organisms, like cancer, however, led to a reversal of both underlying false axioms of modern molecular biology. This new Principle is doubly disruptive - its acknowledgement is extremely dangerous for the establishment. Recognition implies the imperative to follow - and those biologists who actually have training in mathematics know best that describing features is fairly simple, but following through to statistical relationships and probabilistic predictions gets rather difficult very soon.
To use the metaphor of the weather: taking measurements of clouds, winds, temperature, barometric pressure and so on is straightforward - but predicting the weather is a colossal industry, built on nonlinear dynamics as a science and accomplished by a big-data infrastructure. While my FractoGene (that fractal DNA grows fractal organisms) was outright ridiculed in public by a crass remnant at a University outside of the US, an ignoramus in mathematics, today I challenge anyone to dare cast doubt that "fractal DNA grows fractal organisms". Within half a decade, the Principle was generalized into a geometrical unification of neuroscience and genomics - and though all papers are posted for full free .pdf download, glancing at the degree of preparedness in nonlinear dynamics required now and for coming generations most likely scared away a good portion of ordinary workers in Academia. Just as in meteorology, however, the mathematical and computational difficulties of predicting the weather were taken up by a new industry, computer-based meteorology, since the expense was more than justified by the practical importance of the subject matter for mankind. Likewise, the fight against genome regulation diseases, like cancer, already more than justifies the trend that major IT companies worldwide take up the computational challenges inherent in postmodern genome informatics. - holgentech_at_gmail_dot_com)

Fractal Cancer - Sokolov Chapter in Cancer Nanotechnology Textbook

Cancer Nanotechnology: Methods and Protocols, Methods in Molecular Biology, vol. 1530, DOI 10.1007/978-1-4939-6646-2_13, © Springer Science+Business Media New York 2017

Chapter 13 Fractal Analysis of Cancer Cell Surface

Igor Sokolov and Maxim E. Dokukin

Abstract Fractal analysis of the cell surface is a rather sensitive method which has recently been introduced to characterize cell progression toward cancer. The surface of fixed and freeze-dried cells is imaged with atomic force microscopy (AFM) in ambient conditions. Here we describe the method to perform the fractal analysis specifically developed for the AFM images. Technical details, potential difficulties, and points of special attention are described.

Key words Fractal analysis, Cancer progression, Physics of cancer

1 Introduction Fractal geometry is one of the more intriguing mathematical constructs. If a surface is fractal, its geometry repeats itself at different scales [1]. These complex disorderly patterns are typically formed under far-from-equilibrium conditions [2], or emerge from chaos [1]. Examples of fractal shape range from the large-scale structure of the universe [3] to the forms of continental coastlines [4], trees [5], the grain structures of many metals, ceramics and minerals [6], clouds [7], and even some artistic creations [8]. Some biological tissues [9] show fractal patterns. Recently a fractal structure of chromatin has been used to show how the cell’s nucleus holds the molecules that manage nuclear DNA in the right location [10]. The idea of a possible connection between cancer and fractals (self-affinity) has been suggested in a number of works [11–13]. It was proposed that an imbalance of various biochemical reactions, which is typically associated with cancer, could result in chaotic formation of various geometrical characteristics of cancer and, consequently, in the appearance of fractal geometry, because chaos typically results in the appearance of fractals. It was indeed demonstrated that tumor vasculature and antiangiogenesis show explicit fractal behavior [12, 14]. Cancer-specific fractal behavior of tumors at the macroscale has recently been found when analyzing tumor perimeters [9, 15].

Fractal geometry was also found in the structure of tumor antiangiogenesis [12, 14]. Similar analysis at the micro- and submicron scales was done in both neoplastic and normal cells. Various morphometric analyses have been applied to individual cells [16, 17] and cell nuclei [18]. The one-dimensional perimeter of cross-sections of cells or cell nuclei was analyzed in those works. However, fractal analysis at the micro- and submicron scales did not show a transition to fractal geometry when cells become cancerous.

Both cancer and normal cells demonstrated good fractal behavior, although with different fractal dimensions [17, 18]. Moreover, those works were descriptive in nature; they were based on the analysis of previously well-known morphometric features of the cell boundary (pseudopodia) and did not provide any noticeable improvement in the identification of cancer cells [19].

Recently it has been shown that analysis of the fractal dimension of the cell surface imaged with atomic force microscopy (AFM) yields a strong segregation between normal, immortal (precancerous), and malignant human cervical epithelial cells [20, 21]. However, a fractal dimension can be calculated for any surface, not necessarily a fractal one (a fractal dimension can formally be assigned to any surface). The study of the emergence of fractal geometry itself on the cell surface has been reported just recently [22, 23].

It was shown that there was a strong correlation between multi-fractality (a parameter introduced to characterize the deviation from an ideal fractal) and the stage of progression to cancer. Multi-fractality reaches zero, which corresponds to a simple or ideal fractal, at the stage when immortal cells turn cancerous. Moreover, cancer cells deviate from the ideal fractal with an increasing number of cell divisions, which is typically associated with cancer progression.
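The role of multi-fractality as a "distance from an ideal fractal" can be illustrated numerically. The sketch below is our own toy example, not the authors' AFM pipeline, and all function names are ours: it estimates generalized (Rényi) dimensions D_q for a binomial cascade measure. With equal weights the measure is a simple (mono)fractal and the spread D_0 - D_2 vanishes; with unequal weights it is a genuine multifractal with a nonzero spread.

```python
import numpy as np

def binomial_cascade(levels, w):
    """Multiplicative binomial cascade on [0,1]: each dyadic interval passes
    a fraction w of its mass to one half and 1 - w to the other."""
    m = np.array([1.0])
    for _ in range(levels):
        m = np.concatenate([m * w, m * (1.0 - w)])
    return m

def renyi_dimension(q, w, levels=range(4, 10)):
    """Estimate D_q from the slope of log(sum p^q)/(q-1) vs log(box size)."""
    xs, ys = [], []
    for k in levels:
        p = binomial_cascade(k, w)
        ys.append(np.log(np.sum(p ** q)) / (q - 1.0))
        xs.append(-k * np.log(2.0))          # log of the box size 2^-k
    return np.polyfit(xs, ys, 1)[0]

# Equal weights: an ideal (mono)fractal, so the spread is essentially zero
mono = renyi_dimension(0, 0.5) - renyi_dimension(2, 0.5)
# Unequal weights: a true multifractal, so the spread is clearly nonzero
multi = renyi_dimension(0, 0.7) - renyi_dimension(2, 0.7)
```

For q = 1 the formula needs the limiting (information-dimension) form, omitted here for brevity.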

Here we describe the method to perform the fractal analysis which was used in the AFM works cited above. This method was specifically developed for the AFM images. Non-resonant AFM modes (as well as HarmoniX) allow recording maps of topography (the surface image), rigidity, adhesion, dissipation energy, maximal (peak) force, and sometimes phase (the shift of the oscillating cantilever due to the interaction with the cell surface). For the purposes of fractal analysis, it was found that the adhesion map channel gives the best results [20]. Technical details, potential difficulties, and points of special attention are described. Application of this method is demonstrated for human cervical epithelial cells at different stages of progression to cervical cancer, from normal to immortal (premalignant) to malignant. There are different methods of processing images to analyze their fractal behavior.

Fractal analysis is typically done by calculating the self-correlation function. A power-law dependence of the self-correlation function on the geometrical scale is the definitive behavior of a fractal. The fractal behavior can also be tested by utilizing Fourier analysis [24]. Because Fourier analysis of two-dimensional images is typically present in any AFM software, we found the latter method to be particularly suitable for the fractal analysis of AFM images. Here we describe the method and the technical steps needed to do the analysis.
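As a rough illustration of the Fourier route described above (our own sketch under simplifying assumptions, not the published procedure; `spectral_surface` and `fourier_fd` are hypothetical names), one can synthesize a self-affine surface with a prescribed power-law spectrum and then recover its fractal dimension from the slope beta of the radially averaged power spectrum, using the common relation D = (8 - beta) / 2 for isotropic surfaces:

```python
import numpy as np

def spectral_surface(n, D, seed=0):
    """Synthesize an isotropic self-affine n x n surface of fractal dimension
    D (2 < D < 3) by spectral synthesis: amplitude ~ k^(-beta/2), beta = 8 - 2D."""
    rng = np.random.default_rng(seed)
    beta = 8.0 - 2.0 * D
    k = np.hypot(*np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n)))
    k[0, 0] = 1.0                      # dummy value to avoid division by zero
    amp = k ** (-beta / 2.0)
    amp[0, 0] = 0.0                    # zero out the DC term (zero-mean surface)
    phase = np.exp(2j * np.pi * rng.random((n, n)))
    return np.real(np.fft.ifft2(amp * phase))

def fourier_fd(z):
    """Fractal dimension from the slope of the radially averaged power spectrum."""
    n = z.shape[0]
    psd = np.abs(np.fft.fft2(z)) ** 2
    k = np.hypot(*np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n))).ravel()
    p = psd.ravel()
    sel = (k > 4.0 / n) & (k < 0.25)   # mid-frequency "scaling window"
    edges = np.logspace(np.log10(4.0 / n), np.log10(0.25), 16)
    which = np.digitize(k[sel], edges)
    kk = [k[sel][which == i].mean() for i in range(1, 16) if (which == i).any()]
    pp = [p[sel][which == i].mean() for i in range(1, 16) if (which == i).any()]
    beta = -np.polyfit(np.log(kk), np.log(pp), 1)[0]
    return (8.0 - beta) / 2.0

surface = spectral_surface(256, D=2.4)
estimate = fourier_fd(surface)         # recovers a value near the input D = 2.4
```

As in the chapter's discussion of scaling windows, the fit is restricted to a mid-frequency band: the lowest and highest spatial frequencies are dominated by finite-size and pixel-level effects.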

(Sokolov and Dokukin in upstate New York proved, years ago now, that a diagnosis of cancer can be established with 100% certainty. Fractal Cancer in Google search already shot up to 1.7 million hits in early June. Mirny et al. directly linked cancer to fractality of the genome half a dozen years ago. FractoGene by Pellionisz, "fractal genome grows fractal organisms", provides utilities to establish statistical diagnosis, probabilistic prognosis and precision therapy. The breakthrough Principle of Recursive Genome Function, opening up the class of recursive algorithms (presently with the leading fractal approach), will reach its first decade since its peer-reviewed science publication - AJP)

Researchers discover hundreds of unexpected mutations from new gene editing technology

Newsweek, May 31, 2017


For the past few years, a new scientific tool known as CRISPR-Cas9 has been hailed as the future of medicine. The technology, which has been the center of both extreme fascination and a bitter patent dispute between two research groups, enables scientists to edit genomes. That is, they can remove harmful genes that cause diseases and replace them with normal genes that don’t—at least in theory. While exciting to many, the idea has also elicited fears that the technology could create dangerous mutations and be used in unbridled ways, for example in attempts to create superhumans and designer babies.

Jennifer Doudna, molecular biologist at University of California, Berkeley, explains CRISPR-Cas9 gene-editing technology at the 2016 Milken Institute Global Conference in Beverly Hills, CA, May 2016. A new study found unexpected mutations in mice that underwent a CRISPR-Cas9 procedure.

According to a new report, such fears may be well founded. The study, published in Nature Methods, found that using CRISPR-Cas9 to edit a genome can result in hundreds of unintended mutations being introduced. For the report, researchers sequenced the genomes of mice that had already undergone CRISPR-Cas9 procedures. They then scrutinized the edited genomes for any changes in the mouse genes—and they found plenty. The technology had accomplished the original intended task of correcting a gene that causes blindness, but it had also resulted in 1,500 other small changes and 100 large changes. Not one of those changes had been predicted by the researchers.

The small changes were what are known as single-nucleotide mutations. With these alterations, just one nucleotide—that is, the chemicals known as A, T, G, and C that make up the building blocks of DNA—switches to a different one. But even that tiny change can have a big effect: single-nucleotide mutations are involved in many diseases.
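In principle, the "small changes" are found by comparing the edited genome with the reference, position by position. The following toy sketch (purely illustrative; real variant calling from whole-genome sequencing involves read alignment, quality filtering, and much more) flags single-nucleotide differences between two already-aligned sequences:

```python
def snv_positions(reference, edited):
    """Positions where two already-aligned sequences differ by a single base."""
    if len(reference) != len(edited):
        raise ValueError("sequences must be aligned to equal length")
    return [i for i, (a, b) in enumerate(zip(reference, edited)) if a != b]

# One single-nucleotide difference between reference and edited sequence:
ref    = "GATTACAGATTACA"
edited = "GATTACAGCTTACA"
print(snv_positions(ref, edited))   # -> [8]
```

Performed genome-wide, this kind of comparison is what surfaced the unintended changes reported in the study.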

“We’re still upbeat about CRISPR,” Vinit Mahajan, who researches ophthalmology at Stanford University and co-authored the study, said in a statement. “We know that every new therapy has some potential side effects, but we need to be aware of what they are.”

Meanwhile, the first clinical trials to study CRISPR-Cas9 in humans have begun. Last October, researchers at Sichuan University in China injected genetically modified cells into the first of 10 lung cancer patients. At the University of Pennsylvania, a study to edit genes in cells from patients with several types of cancer, including melanoma and myeloma, has been approved by the National Institutes of Health. And researchers at Peking University, in Beijing, are awaiting approval for their study of CRISPR-Cas9 to treat bladder cancer and prostate cancer.

Whatever the future of CRISPR-Cas9 may be, it will no doubt be tightly regulated. The approach so far is not only untested but also raises important ethical concerns. The discovery of unexpected consequences in this mouse study is sure to add fuel to that laboratory fire.

Jennifer Doudna, the molecular biologist at the University of California, Berkeley, who co-discovered CRISPR-Cas9, sees no cause for concern. “I don’t think any conclusions can be drawn from the work,” she says. The study used the technology in a different way than is employed by the vast majority of labs, says Doudna, and the alternative approach could have been responsible for the unexpected mutations. That assertion is supported by the fact that none of the genome sites that prior research highlighted as most susceptible to accidental mutations due to CRISPR-Cas9 were affected. Doudna also says the statistical analysis reported in the paper “does not hold up to rigorous scrutiny.”

(We made the point this Spring, see the .pdf below, that investors had better spend their money on understanding the "genomic text" - before editing it almost at random. For instance, Microsoft spell checkers could only be built after language domain experts were sure about the spelling rules of particular languages. Using a more brutal example, no nuclear devices, reactors or bombs, could possibly be built in a safe and effective manner before nuclear physics was ready with quantum mechanics - AJP.

Pellionisz, A.J., Ramanujam, M.V., Rajan, E.G. (2017) Genome Editing – A Novel Business Opportunity for India as a BRICS Country to Excel in Global Genomics Enterprise. In: Proceedings of ICSCI 2017 Hyderabad, India Conference, pp. 1-3. [FULL TEXT PDF] )

The future of forensic neurosciences with fractal geometry

Franco Posa

Clinical studies in criminology, Sementina, Switzerland

Gabriele A. Losa

Institute for Interdisciplinary Scientific Studies. Locarno, Switzerland

DOI: 10.15761/FGNAMB.1000135


Digital imaging techniques have enabled researchers to gain insight into the complex structure-functional processes involved in neo-cortex maturation and in brain development, processes already recognized in anatomical and histological preparations. Despite such refined technical progress, most diagnostic records still sound elusive and unreliable because of the use of conventional morphometric approaches based on a single scale of measure, which is inadequate for investigating the irregular cellular components and structures that shape nervous and brain tissues. Instead, these could be efficiently analyzed by adopting principles and methodologies derived from Fractal Geometry. Through his masterpiece, The Fractal Geometry of Nature, Benoît Mandelbrot provided a novel epistemological framework for interpreting real life and the natural world as they are, avoiding approximation or subjective bias. Founded upon a body of well-defined laws and coherent principles, Fractal Geometry is a powerful tool for recognizing and quantitatively describing a good many kinds of complex shapes, living forms, organized patterns, and morphologic features long-range correlated with a broad network of functional interactions and metabolic processes that contribute to building up the adaptive responses that make life sustainable. Scale-free dynamics characterize biological systems, which develop through the iteration of single generators on different scales, thus preserving their self-similar traits. In recent decades several studies have contributed to showing how relevant the recognition of fractal properties may be for a better understanding of the biology of nervous tissues in healthy conditions, in pathological states, and in those cases that pertain to forensic criminology.

Key words

fractal geometry, fractal dimension, scaling window, resolution power, neurons, brain tissue, forensic neurosciences

The New Weltanschauung on Nature's Complexity

The Fractal Geometry of Nature [1], Benoît Mandelbrot’s masterpiece, evokes a new ‘‘Weltanschauung,’’ providing a novel epistemological approach for interpreting the natural world and a more intelligent vision of life itself. Fractal geometry, which was founded on a body of well-defined laws and coherent principles, including those derived from chaos theory, allows the recognition and quantitative description of complex shapes, images, and other figures that are usually created through unlimited iterations of a simple generator (often a mathematical motif) by means of computer-aided design (CAD). CAD figures undecipherable using classical geometry were referred to as ‘‘fractals’’ because of their peculiarity, which lies in the reproducibility of their shape over a range of scales and in a non-integer dimension, called the ‘‘fractal dimension’’ after the Latin word fractus. Non-Euclidean iterated figures – now including fractals – have often been considered to bear a resemblance to pathological entities or mathematical monsters despite – or owing to – their beauty, richness and fascinating shapes. Nowadays, most of these have become explicable and even familiar, since Mandelbrot’s assertion that they can almost be considered a general rule of Nature. This led Mandelbrot to conclude that «clouds are not spheres, mountains are not cones, coastlines are not circles, and bark is not smooth, nor does lightning travel in a straight line». Subsequently, it was noted that these virtual figures share some morphological traits and self-similar properties which could be encountered not only in elements of the inanimate world, but also (though less evidently) in complex forms, functions and shapes belonging to the plant and animal realms. Living forms develop according to organized morphological patterns correlated with a complex system of functional metabolic interactions which make adaptive responses possible. 
Iteration, self-similarity, form invariance upon scaling, nonequilibrium thermodynamics, self-organization, and energy dissipation are among the mechanisms reputed to sustain the emergence and maintenance of living forms, in contrast to homeostasis, linearity, smoothness, regularity, and thermodynamic reversibility, which pertain to a more traditional vision based upon the concepts and rules of Euclidean geometry and adequate for an ideal world [2]. Over the past decade, a large amount of experimental evidence has accumulated showing that biological elements do indeed express statistically self-similar patterns and fractal properties within a defined interval of scales. This is termed the ‘‘scaling window,’’ in which a direct relationship between the observation scale and the measured size/length of an object, or the frequency of a temporal event, can be ascertained and in turn quantified by a characteristic fractal dimension (FD) [3]. In other words, the FD of a biological component remains constant within the scaling window, and serves to quantify variations in length, area or volume with changes in the dimensions of the measuring scale. However, real ‘‘fractality’’ exists only when the experimental scaling range covers at least two orders of magnitude, although fractality over many orders of magnitude has been observed in various natural fields. Hence, defining a ‘‘scaling range’’ of length measurements appears to be an inescapable requisite for assessing the fractality of any biological element. Experimental evidence of a definite scale interval avoids any ambiguous assignment to objects or figures lacking that requirement, and confirms Mandelbrot’s assertion that «fractals are not a panacea; they are not everywhere» [1].
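The idea of an FD measured within a scaling window can be made concrete with box counting. In this sketch (our illustration; the names and parameters are assumptions), points on the Sierpinski triangle are generated by the "chaos game", the number of occupied grid boxes N(s) is counted at several grid scales s inside a chosen window, and the log-log slope estimates the FD, whose theoretical value is log 3 / log 2 ≈ 1.585:

```python
import numpy as np

def sierpinski_points(n_pts=100_000, seed=1):
    """Sample the Sierpinski triangle with the 'chaos game' iteration."""
    rng = np.random.default_rng(seed)
    verts = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3.0) / 2.0]])
    choices = rng.integers(3, size=n_pts)
    p = np.array([0.1, 0.1])
    pts = np.empty((n_pts, 2))
    for i, c in enumerate(choices):
        p = (p + verts[c]) / 2.0        # jump halfway toward a random vertex
        pts[i] = p
    return pts

def box_counting_fd(pts, scales=(4, 8, 16, 32, 64)):
    """Fit N(s) ~ s^FD over the chosen scaling window of grid sizes s."""
    counts = [len({tuple(box) for box in np.floor(pts * s).astype(int)})
              for s in scales]
    return np.polyfit(np.log(scales), np.log(counts), 1)[0]

fd = box_counting_fd(sierpinski_points())   # close to log 3 / log 2 ~ 1.585
```

Shrinking or widening the `scales` window changes the estimate, which is exactly why the article insists that a defined scaling range, ideally spanning two orders of magnitude, must accompany any reported FD.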

Adoption of the fractal geometry in biology and medicine

Most cells, tissues and organs – in either the animal or the vegetal realm – are systems in which the component parts and unit fragments assemble with different levels of complexity and organization. This means that a single fragment or element may, on various scales, reproduce the whole object from which it is derived; in other words, it is self-similar, albeit in a statistical sense. Very few of these shapes can be analytically described or evaluated using Euclidean geometry, which was developed to trace the regular and ideal geometrical forms that are practically unknown in natural and biological systems. Although the first coherent essay on fractal geometry was published in French more than 30 years ago [2,4], it is worth considering exactly how and when the ‘‘heuristic introduction’’ of such an innovative discipline occurred or, more pointedly stated, when ‘‘the irruption of fractal geometry’’ into the life sciences, such as biology and medicine, actually took place. Although there is no precise date, it is generally agreed that this introduction occurred within the ‘‘golden age’’ of cell biology – that is, between the 1960s and 1990s [5].

The morphofunctional complexity of cells and tissues

According to ‘‘the state of the art,’’ there was a pressing need to consider the morphological complexity of cells and tissues using a systemic approach, whilst at the same time developing instruments that could accomplish that goal without introducing any shape approximation or smoothing – a condition which could not be satisfactorily achieved with conventional analytical methods. In fact, conventional disciplines such as morphometry and stereology yielded experimental data on the quantitative description of membranes that were usually controversial, leaving many questions unresolved and thereby preventing a true consensus among investigators. To highlight the striking debate that led to such turmoil within the biologist community, it may suffice to recall the original description of the first case study, conducted several years earlier [6], on the application of fractal geometry in cell biology; notably, the discovery that cellular membrane systems have fractal properties arose from uncertainty in observations regarding the extent of such membranes. When the results of the first studies on the morphometry of liver cell membranes were reported, the values obtained were much higher than had been reported by others. There followed much debate as to which of these estimates was correct, and whether liver cells contain 6 or 11 m² of membranes per cm³ – a quite significant difference. The question was also raised as to whether the stereological methods used were reliable, since it appeared possible that the same method might yield different results if the measurements were made at different magnifications of the electron micrographs. Ultimately, systematic measurement of liver cell membranes revealed that the estimates of surface density increased with increasing resolution.
Shortly after the conclusion of the experimental phase, Mandelbrot suggested that the results should be interpreted in light of the likely effect of the ‘‘resolution scale,’’ in analogy with the ‘‘Coast of Britain effect’’ [4]. This resolved the discrepancy between the estimates, explaining why measurements of irregular liver cell membranes at higher magnification yielded higher values than were obtained at lower magnification [6]. It must be stressed here that the scaling effect applies mainly to cellular membranes with a folded surface or an indented, wrinkled profile, such as the inner mitochondrial membrane. In contrast, measurement of the surface density of the outer mitochondrial membrane, which is almost smooth, was only slightly affected by the resolution effect. Several authors further addressed the problem of relating boundary length to resolution, notably with regard to specimens showing ‘‘non-ideally fractal’’ dimensions, such as lung tissue, and to cell tissues in different physiopathological conditions [5].
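The resolution effect follows Richardson's law: for a fractal boundary, the apparent length (or surface density) grows as a power of the measuring resolution, L(ε) = F·ε^(1−D). A minimal sketch, with an assumed, purely illustrative dimension D = 1.25 for an irregular membrane profile (not the measured value for liver membranes):

```python
# Richardson's law: apparent length grows as the ruler eps shrinks when D > 1.
def apparent_length(eps, D, F=1.0):
    return F * eps ** (1.0 - D)

for eps in (1.0, 0.1, 0.01):
    print(eps, round(apparent_length(eps, D=1.25), 2))
# 1.0  -> 1.0
# 0.1  -> 1.78
# 0.01 -> 3.16
```

A smooth (D = 1) profile would return the same length at every resolution, which is why the nearly smooth outer mitochondrial membrane was only slightly affected while the folded inner membrane was not.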

The analytical (quantitative) description of complex shapes

Founded upon a body of well-defined laws and coherent principles, including those derived from chaos theory, fractal geometry allows the recognition and quantitative description of complex shapes and living forms: biologic tissues are organized patterns of morphologic features correlated through a broad network of functional interactions and metabolic processes that shape the adaptive response and make the accomplishment of life possible [5]. Obviously, this stands in opposition to the older vision based on the rules of Euclidean geometry and on widely adopted concepts such as homeostasis, linearity, smoothness and thermodynamic reversibility, which emanate from a cultural but idealized way of envisaging things and facts. It later became evident that biologic elements, unlike deterministic mathematical structures, express statistical self-similar patterns and fractal properties within a defined interval of scales, called the ‘‘scaling window,’’ in which the relationship between the observation scale and the measured size or length of the object can be established and quantified by the fractal dimension [FD] [3]. The FD of a biological component remains constant within the scaling window and serves to quantify the variation in length, area or volume with changes in the size of the measuring scale. Recourse to the principles of fractal geometry has revealed that most biological elements, whether at the cellular, tissue or organ level, have self-similar structures within a defined scaling domain which can be characterized by means of the FD. However, concrete ‘‘fractality’’ exists only when the experimental scaling range covers at least two orders of magnitude, that is, spans two decades on the scaling axis.
Data spanning several decades of scale have been reported in many other fields: thus, defining a ‘‘scaling range’’ appears to be an inescapable requisite for assessing the fractality of any biological element. Its presence excludes any ambiguous assignment and strengthens Mandelbrot’s statement that «Fractals are not a panacea; they are not everywhere». In the epilogue of his work, ‘‘The path to fractals,’’ he wrote: «The reader knows well that the probability distribution of fractals is hyperbolic, and that the study of fractals is rife with other power law relationships». Nowadays, fractal geometry provides the theoretical and methodological framework for unraveling temporal processes and complex biological structures, as underlined by the increased frequency of outstanding publications, which definitely reinforces a prescient sentence written more than twenty years ago: «Fractal Geometry: a Design Principle for Living Organisms» [7]. It is worth pointing out that the fractal dimension is a statistical measure which quantifies the morphological and structural complexity of cellular components and biological tissues. It is also a numerical descriptor which serves to measure qualitative morphological traits and self-similar properties at various levels: cellular, tissue or organ. Mandelbrot stated in his book that «A fractal set is a set in metric space for which the Hausdorff-Besicovitch dimension D is greater than the topological dimension DT». In Nature, a fractal object is defined by its structural properties, namely by surface rugosity and irregularity or absence of smoothness, form invariance on scaling, self-similarity, and morphostructural and functional complexity. The Richardson-Mandelbrot equation, L(ε) = L0·ε^(1-D), provides the mathematical basis for understanding geometric and spatial fractal structures, and for measuring and interpreting them: in logarithmic coordinates it becomes the equation of a straight line with slope 1-D, from which the dimensional exponent D can be calculated to yield the numerical value of the fractal dimension [FD]. As mentioned in the Introduction, biological fractals – also called asymptotic natural fractals – show self-similar scaling properties (fractality) within a fractal window, graphically represented by region II in the middle of the three typical regions of a bi-asymptotic fractal, limited by a lower (εmin) and an upper (εmax) bound, where a straight line can be drawn and the fractal dimension [FD] calculated from its slope. While the practical evaluation of the fractal dimension can be obtained by various quantitative approaches, the box-counting method, based on counting the non-empty boxes N(ε) at a variable grid length ε, is by far the most reliable.
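The box-counting procedure just described can be sketched as follows: count the non-empty boxes N(ε) at several grid sizes and take the slope of log N versus log(1/ε). The implementation is a minimal illustration, checked here against a Sierpinski carpet, whose theoretical FD = log 8 / log 3 ≈ 1.893; the generator and grid sizes are assumptions of this sketch, not part of the cited methodology.

```python
import numpy as np

def box_count(img, eps):
    """Count non-empty eps x eps boxes in a square binary image."""
    s = img.shape[0]
    n = 0
    for i in range(0, s, eps):
        for j in range(0, s, eps):
            if img[i:i + eps, j:j + eps].any():
                n += 1
    return n

def box_counting_dimension(img, sizes):
    """FD = slope of log N(eps) vs log(1/eps) over the given grid sizes."""
    counts = [box_count(img, e) for e in sizes]
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

def carpet(level):
    """Sierpinski carpet: iterate the 3x3 generator with a hollow center."""
    m = np.ones((1, 1), dtype=bool)
    for _ in range(level):
        m = np.block([[m, m, m], [m, np.zeros_like(m), m], [m, m, m]])
    return m

img = carpet(5)                                       # 243 x 243 grid
fd = box_counting_dimension(img, sizes=[1, 3, 9, 27, 81])
print(round(fd, 2))  # → 1.89
```

Because the grid sizes are exact powers of three aligned with the generator, the log-log points are collinear and the fit recovers log 8 / log 3 almost exactly; on real micrographs the slope is taken only over the empirically established scaling window.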

Cell membranes and cell organelles

Particularly at the electron microscopy level, fractal analysis has proved useful for the objective investigation of cell shape and cell phenotype, of fine cytoplasmic structures, and of the organization of cell membranes, nuclear components and other subcellular organelles, in normal or pathological tissues and in cell cultures over time. The fractal tool has also been employed to document the feasibility of using ultrastructural changes in the cell surface and nuclear inter(eu)chromatin to assess the early phases of apoptotic cell death. Ultrastructural changes involving a loss of heterochromatin irregularity due to its increased condensation (lower FD) were evident well before the detection of conventional cell markers, which were only measurable during the active phases of apoptosis [8]. Furthermore, the nuclear complexity of healthy human lymphocytes was shown to undergo a reduction of FD during the apoptotic process. Measuring the FD of euchromatin and heterochromatin nuclear domains helped to discriminate lymphoid cells found in mycosis fungoides from those in chronic dermatitis [5]. It has been observed that the complex structure of the living cell is critical for cellular function, and that the spatial organization of the cell may be even more important for cellular properties than its genetic, epigenetic, or physiological state. Yet relatively little is known about the mechanisms that produce the complex spatial organization of a living cell [9].

Applications in normal and pathologic tissues

For an objective description of neoplastic and pathologic traits of cell tissues by the fractal approach, the main condition is the experimental definition of a scaling interval, rather than a unique dimensional scale selected a priori. A critical reading of the literature shows that this distinctive requirement is insufficiently taken into account and inadequately applied in many investigations. Histological images are typically interpreted in a qualitative manner by clinicians trained to classify abnormal features such as structural irregularities or high indices of mitosis. A more quantitative, and hopefully more reproducible, approach, which may serve as a useful adjunct for trained observers, is to analyze images with computational tools. Herein lies the potential of fractal analysis as a morphometric measure of the irregular structures typical of tumor growth. Among the most promising fields of investigation, for which fractal geometry provides an original approach and the fractal dimension represents more than an additional geometrical parameter or just ‘‘a useful adjunct,’’ are cell heterogeneity, the architectural organization of tissues, the tumor-parenchymal border, cellular/nuclear morphology, developmental and morphogenetic processes in tissues and organs in healthy, pathologic, or tumor conditions, and the pathologies of the vascular architecture. Tumor grading on histology specimens is difficult to assess because tumors often consist of a heterogeneous mixture of cells with varying degrees of irregularity as well as local variations in cellular differentiation. Measuring the fractal dimension could aid pathologists in grading heterogeneity and in determining the spatial extent of poorly differentiated regions of breast and prostate tumors [10]. Cell heterogeneity, known to contribute in a decisive way to the histological grading of human breast cancer, has been examined by means of geostatistics and the Hurst fractal parameter [11].
Several examples seem to indicate that the occurrence of morphogenetic dynamics, the emergence of complex patterns, and the architectural organization of active tissues and tumor masses may be driven by constructive mechanisms related to fractal principles, including deterministic and/or random iteration of constituent units with varying degrees of self-similarity, scaling properties, and form conservation. Preservation of the tissue architecture and cell polarity of organs, and the eventual restoration of organized traits in tumor tissues deconstructed and deregulated at various levels, is an emerging field of interest, since it has been observed that biological entities organize with their own degrees of structural and behavioral complexity, develop on different spatial and time scales, and follow modifications induced by drug or endocrine treatment as well. It has been argued that reversion to a more ‘‘physiological’’ fractal dimension implies reduced morphologic instability and an increase in cell connectivity, which emphasizes the relevance of fractal shape parameters as descriptors of cell transition from one phenotype to another. The fractal dimension has been used as a characterization parameter of premalignant and malignant epithelial lesions of the floor of the mouth in humans, and of architectural changes of the epithelial-connective tissue interface (ECTI) of the buccal mucosa during aging [12]. The onset of fundamental phenomena such as development, growth and cell death during different stages of carcinogenesis and cell differentiation, e.g., from mesenchymal to smooth muscle cells, has been adequately investigated by fractal geometry. One highly promising approach appears to be the combination of fractal analysis, which provides a quantitative description of shapes, with radiographic imaging, so as to discriminate malignant tumor masses from benign ones and from normal tissue structures.
The computed FD of the contour of a mass may be useful for characterizing shape and gray-scale complexity, which may vary between benign masses and malignant tumors in mammograms [13].

The complexity of human brain and nervous cells

The original conception of Galen (Pergamon, 129-216 AD), which confined the superior functions of the brain within three cerebral cells (spheres), persisted for several centuries, up to the Renaissance period culminating with Leonardo da Vinci (1452-1519). As pointed out in an exhaustive review [2], the first outstanding breakthrough in knowledge of the brain was accomplished by Andreas Vesalius with his famous work “De humani corporis fabrica” (1543); the cerebral surface convolutions were described without a detailed identification of their inner morphological pattern. Nevertheless, he conjectured a probable link between brain structures and psychological functions. Relevant investigations were subsequently performed by Marcello Malpighi (1628-1694), who suggested the existence of a nervous fluid filling the cerebral glands; by Thomas Willis (1621-1675), who demonstrated an arterial circuit formed by anastomosis of the internal carotid and vertebral arteries; and by Vicq d’Azyr (1746-1796), who revealed convolutions in unidentified areas of the external brain surface. Albrecht von Haller (1708-1777) had also conjectured the existence of a secretory function. Franz Joseph Gall (1758-1828) and Johann Spurzheim (1776-1828) devised phrenological maps assigning specific brain functions. Later, Paul Broca (1824-1880) localized cerebral zones with specialized activities, such as that for language, arguing that “Nous parlons avec l’hémisphère gauche” (“We speak with the left hemisphere”). Carl Wernicke (1848-1905) identified an area in the temporal lobe whose damage may provoke the selective loss of the capacity to understand spoken words. In the late 19th and early 20th centuries there appeared the outstanding contributions of two coeval scientists, both Nobel Laureates: Camillo Golgi (1843-1926) postulated the “reticular theory,” suggesting that the nervous system is a syncytium of nervous fibers forming an intricate, diffuse network along which the nervous impulse may propagate.
On the other side, Santiago Ramón y Cajal (1852-1934) developed the “neuron theory,” in which the relationship between nerve cells is not one of continuity but of contiguity, accomplished through small membranous spines protruding from a neuron’s dendrites, each typically receiving input from a single axonal synapse. In recent decades, the implementation of powerful imaging techniques such as Positron Emission Tomography (PET), functional Magnetic Resonance Imaging (fMRI) and Computed Axial Tomography (CAT), in concomitance with the growth of theoretical knowledge provided by the innovative fractal geometry and modern mathematics, has opened an unprecedented analytical breakthrough. Digital images recovered from sequential body cross-sections, according to Cavalieri’s principle for volume determination, have made it possible to combine a close representation of whole structures with the quantitative fractal evaluation of their anatomical and morphological peculiarities. The evolutionary concurrence of two major events, i.e., “the tremendous expansion and the differentiation of the neocortex,” has contributed to the development of the human brain [14]. Today, modern neurosciences recognize the presence of fractal properties in the brain at various levels, i.e., anatomical, functional, pathological, molecular, and epigenetic, but not so long ago there was no analytical method able to objectively describe the complexity of biological systems such as the brain. The intricacy of mammalian brain folds led Mandelbrot to argue that «A quantitative study of such folding is beyond standard geometry, but fits beautifully in fractal geometry». At that time, however, there was no certainty about the brain’s geometry or about neuron branching.
Anatomical-histological evidence that the complexity of the plane-filling maze formed by the dendrites of cerebellar Purkinje cells is lower in non-mammalian species than in mammals led Mandelbrot to comment: «It would be very nice if this corresponded to a decrease in fractal dimension (FD), but the notion that neurons are fractals remains conjectural» [1]. Since then, a wealth of investigations has documented the fractal organization of the brain and nervous tissue, and the implications of fractals for the neurosciences have been unambiguously affirmed. Among the relevant applications of fractal analysis to nervous and brain tissue were pioneering studies which showed that the fractal dimension is an unbiased measure of the complexity of neuronal borders and branching patterns and of the time course of morphological development and differentiation of spinal cord neurons in culture, increasing from 1.1 for the least differentiated neuron up to 1.5 for the most differentiated cell. Further studies have confirmed that the fractal dimension correlates with the increase in morphological complexity and neuronal maturity. In some recent and rather exciting reports, it has been argued that there is a trend towards a “Unified Fractal Model of the Brain” [15]. These authors suggested that the amount of information necessary to build just a tiny fraction of the human body – that is, just the cerebellum of the nervous system – was a task for which 1.3% of the information that the genome could contain was totally insufficient. «Fractal genome grows fractal organism; yielding the utility that fractality, e.g. self-similar repetitions of the genome can be used for statistical diagnosis, while the resulting fractality of growth, e.g. cancer, is probabilistically correlated with prognosis, up to cure» [15].
The brain is now accepted as one of nature’s most complex networks, while the hierarchical organization of the brain, seen at multiple scales from genes to molecular micronetworks and macronetworks organized in building neurons, has a fractal structure as well, with various modules interconnected in a small-world topology. The theoretical significance is that the fractality found in DNA and organisms, for a long time “apparently unrelated,” was put into a “cause and effect” relationship by the principle of recursive genome function [16,17].

Fractal geometry in brain and nervous diseases

Fractal analysis has been applied to anatomical/histological images and high-resolution magnetic resonance images in order to quantify the developmental complexity of the human cerebral cortex, the alterations of the diseased brain in epilepsy, schizophrenia, stroke, multiple sclerosis, amyotrophic lateral sclerosis (ALS) and cerebellar degeneration, and the morphological differentiation of the peripheral nervous system. FD values estimated for the brain white matter (WM) skeleton, surface and general structure in both controls and ALS patients revealed no significant WM changes between controls and ALS patients or among the ALS subgroups. A highly significant reduction of the fractal dimension was observed in the cortical ribbon of Alzheimer’s disease patients with respect to control subjects [18]. The 3-D fractal dimension of Purkinje neurons decreased from 1.723 to 1.254, indicating a significant reduction of dendritic complexity in a cortical development disorder. Fractal analysis has made it possible to quantitatively describe the complex morphological forms in which astrocytes occur in the brain in ischemic/hemorrhagic stroke and Alzheimer’s disease (AD). In contrast, fractal dimension values were found to be higher in the gray matter (GM) of multiple sclerosis (MS) patients compared to controls, indicating that GM tissue in MS has higher morphological complexity, perhaps due to the presence of an inflammatory component (i.e., microglia activation) and cellular changes in the GM. In large neurons of the human dentate nucleus the fractal dimension [FD] has been found to correlate with the increase in morphological complexity and neuronal maturity.
Lastly, a quantitative evaluation of the surface fractal dimension may allow one not only to measure the complex geometrical architecture, but also to model the development and growth of tumor neovascular systems and to explore the morphological variability of vasculatures in nature, in particular the microvasculature of normal and adenomatous pituitary tissue. Fractal analysis was recently applied to patients with cerebral arteriovenous malformations (AVM): increased FD values, related to structural vascular complexity, were due to the increased number of feeding arteries in patients suffering from AVM. In the normal human retina, blood vessels or vascular trees exhibit an FD of 1.7, the same fractal dimension found for a diffusion-limited growth process – a finding which may have implications for understanding the embryological development of the retinal vascular system. A recent study described a method for quantifying the cerebral blood flow (CBF) distribution in Alzheimer’s disease (AD) from Single Photon Emission Computed Tomography (SPECT) images obtained with Technetium (99mTc) exametazime (HMPAO), by means of three-dimensional fractal analysis (3D-FA). The fractal dimension obtained by 3D-FA correlated well with cognitive impairment as assessed by neuropsychological tests, and could represent a useful method for objectively evaluating the progression of AD.

Attractive perspective in forensic sciences

For centuries the human brain remained a black box, hidden from experimental scientific investigation by reason of dominant philosophical thought and aesthetic perception, combined with the lack of an adequate methodology. Nowadays the brain can be observed in vivo while a subject is undertaking many different cognitive or motor tasks, by examining neuroimages obtained with recently developed methodologies such as CAT, fMRI, MRI, PET and SPECT. Nevertheless, an enormous challenge remains for the future: the features and structures revealed by neuroimages should be submitted to a stringent quantitative analysis, i.e., evaluated by a scale-free fractal approach that takes into account shape complexity and morphological richness, thus avoiding any approximation or smoothing of peculiar features. It has been observed that, together with genetic heritage, factors such as volume and functional changes of the amygdala may intensify the propensity toward aggressive behaviour in abused children. Recently, neuroimaging techniques have indicated that an increase in the white matter of the corpus callosum, a reduction of gray matter in the prefrontal cortex, or a decrease of posterior hippocampus volume may appear in many individuals with antisocial conduct [19]. A stereological report has revealed that violent mentally disordered individuals suffer from a significant loss of anterior cingulate volume [20]. Despite the evidence brought by the experimental yet conventional studies quoted above, there is no doubt that a definitive assessment can only be obtained by measuring complex traits over time, namely by means of fractal geometry.


While the complexity of brain and nervous tissue remains incompletely understood, the present survey provides experimental data confirming that biological processes, including growth, proliferation, apoptosis, epigenetic and genetic mechanisms, and morphologic/ultrastructural and functional organization, occurring in living elements and complex organized tissues, may follow fractal rules. Among the main fractal peculiarities worth noticing is the process of iteration, whose powerful dynamics allows specific generators to be iterated at different scales (small and large) without an a priori choice, linked to efficient genetic programming, in order to achieve the formation of viable biological forms and living objects. The broad consensus on the fractal nature of the brain and nervous cell system, sustained by theoretical, experimental and heuristic foundations, is nowadays consolidated, and comes more than thirty years after the publication of The Fractal Geometry of Nature, in which Mandelbrot recognized that «the notion that neurons are fractals remains conjectural». Its relevance and contribution to the cultural development of mankind (comprising both humanistic and scientific thinking) is keenly underlined by the observation, made some years ago, that fractal geometry could be considered a biological design principle for living organisms [7]. More recently it was reported that «fractals would surely be a most extraordinary design principle for operational economy in complex systems» [21]. The experimental evidence indicates that the majority of complex biological shapes and structures can be depicted as fractal entities: hence, fractal analysis is a most valuable tool for measuring dimensional, geometrical and functional parameters of the cells, tissues and organs that occur within the animal and vegetal realms.
In a comprehensive treatise dedicated to the fractal geometry of the brain, it has been affirmed that «fractals have definitively entered into the realms of clinical neurosciences» [22]. Now we may hope that forensic neurosciences, too, will fully benefit from fractal geometry, with its theoretical and methodological background, in order to unpack complex brain relics.


Mandelbrot B (1983) The Fractal Geometry of Nature. Freeman, San Francisco.

Mandelbrot B (1977) Fractals: Form, Chance and Dimension. W.H. Freeman and Company, San Francisco.

Losa GA, Nonnenmacher TF (1996) Self-similarity and fractal irregularity in pathologic tissues. Mod Pathol 9: 174-182. [Crossref]

Mandelbrot B (1967) How long is the coast of Britain? Statistical self-similarity and fractional dimension. Science 156: 636-638. [Crossref]

Losa GA (2012) Fractals in Biology and Medicine. In: Meyers R (Ed.), Encyclopedia of Molecular Cell Biology and Molecular Medicine. Wiley-VCH Press, Berlin.

Paumgartner D, Losa G, Weibel ER (1981) Resolution effect on the stereological estimation of surface and volume and its interpretation in terms of fractal dimensions. J Microsc 121: 51-63. [Crossref]

Weibel ER (1991) Fractal geometry: a design principle for living organisms. Am J Physiol 261: L361-369. [Crossref]

Losa GA, Castelli C (2005) Nuclear patterns of human breast cancer cells during apoptosis: characterization by fractal dimension and (GLCM) co-occurrence matrix statistics. Cell Tissue Res 322: 257-267. [Crossref]

Marshall WF (2011) Origins of cellular geometry. BMC Biol 9: 57. [Crossref]

Losa GA (2015) The Living Realm depicted by the Fractal Geometry. Fractal Geometry and Nonlinear Anal in Med and Biol 9: 57-66.

Sharifi-Salamatian V, Pesquet-Popescu B, Simony Lafontaine J, Rigaut JP (2004) Index for spatial heterogeneity in breast cancer. J Microsc 216: 110-122. [Crossref]

Landini G, Hirayama Y, Ti L, Kitano M (2000) Increased fractal complexity of the epithelial-connective tissue interface in the tongue of 4NQO- treated rats. Pathol Res Practice 196: 251-258. [Crossref]

Rangayyan RM, Nguyen TM (2007) Fractal analysis of contours of breast masses in mammograms. J Digit Imaging 20: 223-237. [Crossref]

De Felipe J (2011) The evolution of the brain, the human nature of cortical circuits, and the intellectual creativity. Front Neuroanat 5: 1-16. [Crossref]

Di Ieva A, Grizzi F, Jelinek H, Pellionisz AJ, Losa GA (2014) Fractals in the Neurosciences, Part I: General Principles and Basic Neurosciences. Neuroscientist 20: 403-417. [Crossref]

Pellionisz AJ (2008) The principle of recursive genome function. Cerebellum 7: 348-359. [Crossref]

Pellionisz A (1989) Neural geometry: towards a fractal model of neurons. Cambridge University Press, Cambridge.

King RD, Brown B, Hwang M, Jeon T, George AT; Alzheimer's Disease Neuroimaging Initiative (2010) Fractal dimension analysis of the cortical ribbon in mild Alzheimer's disease. Neuroimage 53: 471-479. [Crossref]

Yang Y, Glenn AL, Raine A (2008) Brain abnormalities in antisocial individuals: implications for the law. Behav Sci Law 26: 65-83. [Crossref]

Kumari V, Uddin S, Premkumar P, Young S, Gudjonsson GH, et al. (2014) Lower anterior cingulate volume in seriously violent men with antisocial personality disorder or schizophrenia and a history of childhood abuse. Aust N Z J Psychiatry 48: 153-161. [Crossref]

Werner G (2010) Fractals in the nervous system: conceptual implications for theoretical neuroscience. Front Physiol 1: 15. [Crossref]

Di Ieva A (Ed.) (2016) The Fractal Geometry of the Brain. Springer, New York. ISBN 978-1-4939-3995-4.

Why Genomics Isn't All It's Cracked Up To Be

Forbes, Jan 10, 2017


Opinions expressed by Forbes Contributors are their own.

[Emperor has no clothes would be good - but No Emperor. Slide in Pellionisz Google Tech Talk YouTube 2008]

Answer by Drew Smith, former R&D director at MicroPhage and SomaLogic, on Quora:

As you can gather from this post, I am skeptical that genomics will have much impact on improving human health. An immediate objection to this skepticism would be that genomic-driven advances have simply not had enough time to work their way into tangible treatments. It has only been 15 years since the first human genome was published. Fair enough. But the obvious follow-on question is, how much time is enough time?

I’d say that 15 years is more than enough time for a truly powerful advance to make an impact. Consider these examples:

• The germ theory of infectious disease was formulated in the mid-1870s. The first scientific vaccines were developed in the 1880s.

• Blood types were first described in 1901; the first successful transfusions were made in 1907.

• Insulin was discovered in 1921; the first diabetics were treated with it in 1922.

• HIV was confirmed to be the cause of AIDS in 1984; the first anti-retroviral drug was approved in 1987.

• The role of LDL in hypercholesterolemia was described in 1974; the first statin was approved in 1987.

Let’s note that none of these discoveries and treatments are for niche diseases. Infectious diseases were the leading killers in every country until the mid-20th century. Heart disease has now taken its place, and diabetes is not far behind. Even in the age of elaborate clinical trials, big advances get translated into big cures in 15 years or less. By this criterion, genomics does not deserve to be labeled a big advance.

Why not? No one, certainly not me, disputes that genes are fundamental to the workings of life, and that DNA sequences are fundamental to the workings of genes. But there are two factors that interfere with our ability to draw straight lines between DNA sequences and health risks and outcomes: distance and chaos.

DNA, with few exceptions, plays no direct role in health and disease but instead is several steps removed. Our well-being is created and maintained by effector molecules that do all the work: proteins and metabolites. It is true that protein sequences are mostly determined by DNA sequences, and that protein levels are under genetic control. But there is a long series of steps required to translate DNA into a fully processed and localized protein, and every one of these steps is subject to feedback and modification by other proteins as well as by metabolites.

The large number of steps between gene and finished protein means that gene expression contains an element of chaos. Chaos does not mean randomness, although certainly there is a degree of randomness in gene expression [mathematically, this is just very naive]. Genetically identical single cells and individual genes within them have long been known to behave unpredictably, even though their behavior at the population level can be highly predictable. Chaos instead means that the number of interactions in a system gives rise to a combinatorial explosion in which the number of possible outcomes is so great that they cannot be predicted by any model much simpler than the system itself [the Forbes author does not seem to know a lot about fractals and chaos].
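A minimal concrete example of deterministic chaos (a standard textbook illustration, not the article's own example) is the logistic map at r = 4: a one-line deterministic rule whose trajectories diverge from arbitrarily small differences in the starting point.

```python
def logistic(x, r=4.0):
    """One step of the logistic map, a deterministic but chaotic rule."""
    return r * x * (1.0 - x)

def trajectory(x0, steps=50):
    """Iterate the map `steps` times from initial value x0."""
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic(xs[-1]))
    return xs

a = trajectory(0.2)
b = trajectory(0.2 + 1e-9)  # perturb the initial condition in the 9th decimal

# Early on the two trajectories are indistinguishable...
print(abs(a[1] - b[1]))
# ...but the tiny difference is amplified roughly twofold per step,
# so within a few dozen iterations the trajectories are unrelated.
print(max(abs(x - y) for x, y in zip(a, b)))
```

No randomness is involved anywhere; the unpredictability comes entirely from the sensitivity of the rule to its starting point.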

Weather, of course, is the classic example of a chaotic system, one in which butterflies cause blizzards. The number of potential genetic interactions in a cell is not as large as the number of interactions in the atmosphere (the number of air molecules is some 100 tredecillion, which is a 1 followed by 44 zeros), but it is large. In fact, the study of these interactions is its own "-omics" discipline, "interactomics". A map of the genetic interactome of a single-celled yeast looks like this:

[Figure: the genetic interactome of a single-celled yeast. From Genetic Networks]

To a first approximation, every gene is connected to and influenced by every other gene. Complex animals like humans have much more complex interactomes.

It gets worse. There is not one human genome, of course. Even though humans are remarkably uniform at the genetic level — we are about 99.5% identical — there are still an enormous number of possible variations. The number of single-nucleotide changes that are found in at least 1% of the population is estimated at 10-30 million. The number of unique changes across our population is about 420 billion (60 mutations per genome replication x 7 billion living humans). Only about 1% of these changes are meaningful. But 1% of a very large number is still a large number.
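The arithmetic in this paragraph checks out using the article's own figures (60 mutations per genome replication, 7 billion living humans, about 1% of changes meaningful):

```python
# Figures taken from the article itself, not independent estimates.
mutations_per_replication = 60
living_humans = 7_000_000_000

unique_changes = mutations_per_replication * living_humans
meaningful_changes = unique_changes // 100  # the article's "about 1%"

print(unique_changes)      # 420 billion unique changes across the population
print(meaningful_changes)  # still 4.2 billion meaningful ones
```

Even the "small" 1% slice leaves billions of potentially meaningful variants to interpret.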

And it keeps getting worse. We don't know how to reliably recognize meaningful changes in DNA sequences. When we do recognize them, we can't usually predict how those changes will play out. We can only do this retrospectively, not prospectively: we have to identify people with a medical condition, analyze their genomes, and then try to sort the overwhelming number of meaningless changes from the few that are meaningful.

Our ability to create genetic information has far outstripped our ability to store and analyze it. Sure, it may cost only $1,000 to sequence a genome, but a conservative estimate puts the costs of analysis at over $100,000 per genome, and that is for an analysis that is not very informative or powerful.

Genomics is not a pseudoscience or a hoax. It is an immensely important discipline that has yielded many insights about the human condition and will continue to do so. But putting it to use is far harder than many of its enthusiasts have led the public to believe. Don't hold your breath waiting for a new era of medical breakthroughs based on genomics. The most realistic expectation is that progress in genomics will resemble that of the other great science of complex phenomena, weather forecasting: slow, painful, incremental progress driven largely by improvements in computational power.

[Reply #4 reads - from Andras Pellionisz. Ten years ago (2007) the massive ENCODE effort, funded by US taxpayers, ended with the "official" result that the stronger of the two axioms of Old School Genomics, the false claim that 99% of the human genome was "Junk DNA", proved to be untenable. "The scientific community will need to rethink some long-held views," so ordered Francis Collins, and resigned. My manuscript "The Principle of Recursive Genome Function" (2007, printed in 2008) reversed both mistaken axioms: the "Junk DNA" of Susumu Ohno (1972) and the already dead "Central Dogma" of Francis Crick (1956). Beyond the peer-reviewed science paper, I also popularized PRGF on Google Tech Talk YouTube (Pellionisz 2008), showing a slide of Old School Genomics (of the genes only!) as "The Emperor Has No Clothes" (look it up). Reversing axioms is never enough; the new principle has to propose a "better theory". This was my FractoGene ("Fractal Genome Grows Fractal Organisms"). The recursion led to nonlinear dynamics; now one face of the fractal/chaos mathematics is being "discovered" ten years after PRGF. In all fairness, a single decade is nothing compared, e.g., to the Vatican reversing after 300 years the "heresy" of Giordano Bruno (for which he was burned and his ashes thrown into the river Tiber). Or, for modern sciences: when the axiom that "atoms don't split" was found to be utterly false and radioactivity was discovered (Becquerel 1896), development of the needed new mathematics (quantum mechanics) was started by Planck within five years (1901), but the breakthrough of major practical application (the fission nuclear bomb) emerged only in 1945, a full half-century after the initial discovery. After a single decade, we are still at an early stage of PostModern Genomics (of the full genome, genes "anything but Junk" and all) after PRGF, and of the fractal/chaotic mathematics that PostModern Genomics has to build meticulously.
How much time will be needed for a major practical breakthrough? Remember the Manhattan Project: it is not only a function of time, but also a direct function of the funds allocated by advanced societies! If one is impatient with the advancement of, e.g., PostModern Oncogenomics, establish National Oncogenomics Initiatives, as both of my home countries (tiny Hungary and the powerful USA) are doing as we speak (the USA somewhat occluded by the long shadow of Old School Genomics). Presently China is in the lead, both with a massive Precision Therapy Program in place and with support of fractal/chaos mathematics and software.]

Craig Venter Mapped The Genome. Now He's Trying To Decode Death

Matthew Herper, FORBES STAFF

February 21

I cover science and medicine, and believe this is biology's century.

THE WORLD'S MOST EXTREME physical exam starts in the world's plushest exam room, complete with a couch, a private bathroom and a teeming fruit plate. It will be my home for an entire day. First come the blood tests, vial after vial. Then two 35-minute sessions in an MRI tube, where REM and U2 try to drown out the clanks as the machine takes pictures of my entire body. There's an ultrasound of my heart. Salade Niçoise for lunch. A stool sample. A cognitive test in which letters flash on a computer screen at a dizzying pace. And a CT scan of my heart as well, which originally seemed so over-the-top for someone my age that I tried to get out of it.

"In Vietnam, I used to do autopsies on 18-to-22-year-olds, and a lot of them had cardiovascular disease," J. Craig Venter, the architect of the process, says with a shrug, before adding, ominously, "We find things. The question is what you do with it."

Yes, it's that Craig Venter, the man in the late 1990s who, frustrated by the slow progress of the government-funded Human Genome Project, launched an effort that sequenced human DNA two years earlier than planned (he was subsequently the first human to have his complete DNA sequenced). He hasn't slowed down since. He sailed around the world in a voyage inspired by Darwin's journey on the Beagle, discovering thousands of new species along the way. He has created synthetic life and started three companies, and was almost a billionaire before being fired from one of the most promising, Celera Genomics.

Now he's back with his most ambitious project since his historic breakthrough 17 years ago. He's raised $300 million from investors including Celgene and GE Ventures for a new firm, Human Longevity, that's trying to take the DNA information he helped unlock and figure out how to leverage it to cheat death for years, or even decades.

Core to the effort is the $25,000 executive physical, branded the Health Nucleus, that I'm taking (disclosure: I got tested for free). It's certainly very thorough--and, to many doctors, precisely the wrong approach, owing to all the false positives. "Study after study of various kinds of screening measures has shown they do more harm than good," says Steven Nissen, the chairman of cardiology at the Cleveland Clinic. "You do a total body MRI and you're lucky if you don't find something. I don't think it's good medicine."

Venter scoffs. "We're screening healthy people, and a lot of physicians don't like that," he acknowledges. "My response is: How do you know they're healthy? We use a definition of health out of the Middle Ages: If you look okay and you feel okay, you're deemed healthy. We have a different way of looking at people."

Now 70, Venter cites himself. Last year, he underwent his own physical and says he found prostate cancer, which was removed last November. The man he has called his "scientific muse," Nobel laureate Hamilton Smith, 85, found he had a deadly lymphoma in his lung. It has also been treated, and Smith says his prognosis is good.

The famously gruff Venter is entirely comfortable ticking off the establishment, no matter what that establishment is, and the feeling is mutual. His DNA breakthrough was one of the great scientific accomplishments of the 20th century, yet he never won a Nobel Prize. Academics view him as someone interested in profits over science. "He's a very insecure person who compensates by coming across as very arrogant and aggressive," says one former collaborator. Similarly, Venter's discoveries have upended industries, yet his business track record, including a brief flirtation with billionairehood, is checkered, as connections to past backers and bosses have gone up in flames. "He has irritated a lot of people," says Harvard genetics professor George Church, a Venter fan. "It's a pity."

Thus, Human Longevity offers Venter a last chance to square his legacy, awe the scientists and make billions in the process, all the while shaking the foundation of a topic that precisely 100% of homo sapiens have a keen interest in: how and when each of us will die.


VENTER HAS DISPLAYED POTENTIAL, BOTH achieved and unrealized, almost since birth. Growing up in Millbrae, California, near what was emerging as Silicon Valley, he had such bad grades that by high school his worried mother sometimes checked his arms for track marks. The first glimmer of his future success was in swimming. He was initially mediocre, but when a coach sent him home for the summer with tips, his competitive streak kicked in. He spent three months training furiously and never again lost a race. "Had things been different I would have been competing for the Olympics," Venter says. "But Lyndon Johnson changed that for me with the draft."

Swimming unlocked his potential, but Vietnam made him who he is. At age 20 he served as a Navy hospital corpsman, triaging troops who came back from battle, including the Tet Offensive. Deciding who would live and who would die was so traumatic that he says he considered suicide and swam far out to sea intending to drown. He says he had a change of heart a mile out after a shark prodded him. But he'd go through Vietnam again. "Knowing the outcome and what it did for my personal growth, I would force myself to do it again if I had the choice," Venter says.

After he returned to the States, he went to community college, then the University of California, San Diego, where he initially wanted to be a doctor but discovered science. He eventually completed his Ph.D. in physiology and pharmacology, became a professor at the State University of New York at Buffalo in 1976 and, in 1984, joined the National Institutes of Health.

At the NIH the themes that would define his career locked into place: productivity, perceived greed, the conflicts between pure science and industry money. Using a new technology, he discovered thousands of human genes. The NIH made the unprecedented decision to patent them in his name, and colleagues blamed Venter, calling him greedy. Nobel laureate James Watson said he was "horrified." Venter insists he was always against the patents but that the NIH did it anyway.

Frustrated, he started a nonprofit institute in 1992, with a unique model. He raised money from venture capitalists, on the condition that he share his data with a for-profit company, Human Genome Sciences, before he published it. The relationship ended unhappily in 1997 because of arguments over data disclosure, with Venter walking away from $40 million in research funding. "I paid a lot of money to get rid of [Human Genome Sciences]," Venter says.

But in 1995, Venter's institute made a real breakthrough: the first genome, or map of the genetic code of an organism, in this case a type of bacterium. It was a suggestion from Ham Smith. They had met at a scientific conference in Spain in 1993 and gone out drinking, starting a two-decade-plus collaboration. Foreshadowing his later race with the Human Genome Project, Venter and Smith's bacterial genome map beat similar projects in academia by many months.

That led a California unit of lab equipment maker Perkin-Elmer, which made DNA sequencers, to approach Venter. If he could sequence a bacterial genome, why not use the company's newest machines to sequence a human genome?

Venter couldn't say no, which led to Celera Genomics' founding in 1998. It not only succeeded in overtaking the $3 billion Human Genome Project, an international consortium funded largely by the U.S. government, but it also mapped the genomes of the fruit fly and the mouse, both important laboratory animals. In the process, Venter angered scientists globally, aghast that such research would be driven by profit rather than knowledge. At the time, James Watson reportedly became so enraged he compared Venter to Hitler, asking colleagues who they were going to be--Chamberlain or Churchill?

But the pressure of private enterprise ultimately spurred results, both at Celera and the public group, which improved their methods and accelerated their research. As a result, the two groups jointly announced they had mapped the entire human genome--an achievement that our grandkids will be reading about in their textbooks--at the White House on June 26, 2000.

In the age of the dot-com boom, Celera became a highflier, raising $855 million in a stock offering in February 2000 and peaking at a market capitalization of $14 billion just before the entire market started to collapse in March. Venter's stake briefly surpassed $700 million. He says he gave half his shares to his nonprofit foundation, which then sold half of them, netting more than $150 million, which has funded his science ever since.

It was a necessary scientific nest egg. Celera struggled to invent drugs and diagnostic tests based on its pioneering research, and Venter bickered constantly with the board. They wanted Celera to become a pharma giant and invent medicines in-house. Venter simply wanted to be a scientist and sell other companies his data. He was fired in January 2002, days before a quarter of his stock options would vest. "Being fired in the way it was done was about as slimy as anybody could do it," Venter says. Celera limped along until 2011, when it was sold to Quest Diagnostics for $344 million. ( Forbes estimates that Venter's current net worth, based on his stakes in his two startups, is $300 million.) Venter's baby had essentially been sold for parts.

WITH HUMAN LONGEVITY, VENTER HOPES TO solve the problem that ultimately limited the efficacy of Celera and the Human Genome Project. Those two groups produced an "average" DNA sequence. That's incredibly important for a science textbook, but for individuals, it's the differences--how one person's genes are different from another's, leading to different noses, eye colors and, yes, diseases--that matter.

Venter says that, thanks to new technology, he can generate the data that can determine those differences. At Celera, Venter loved to show off his 25,000-square-foot rooms of DNA sequencing machines. But just one modern desktop DNA sequencer is as powerful as a thousand of those rooms and can map a person's genome in days for about $1,000. The original Human Genome Project took more than a decade and at least $500 million to do the same thing. (Illumina, the San Diego firm that makes the desktop sequencers, is a big investor in Human Longevity.)

Human Longevity initially sequenced DNA from 40,000 people who had participated in clinical trials for the pharmaceutical companies Roche and AstraZeneca. Venter says this work has led to the discovery of genetic variations that can be found in young people but not older ones--meaning the young folks had genes incompatible with surviving into old age. Figuring out what these genes do could be the kind of breakthrough that would turn the promise of genome sequencing into a lifesaver.

Venter decided that he also needed a study of people that could collect even more data than you can get from a clinical trial. Hence, the $25,000 physical. And because people pay, it's not only a source of data but also a revenue generator. At the moment, close to 500 people have gone through the physical. Venter hopes to be able to serve 2,000 annually as early as this year, which would generate $50 million in revenue. This isn't exactly covered by Medicare. The market, for the moment, will be the wealthy and the occasional company looking out for key executives--the promise of health as the ultimate luxury item.

Doctors hate it. "I'm massively skeptical," says Benjamin Davies, a urologist at the University of Pittsburgh. "We've been down this road of investigating healthy patients, and it's been a sordid road." He points to a recent study that used CT scans to screen for lung cancer: 60% of patients needed follow-up tests, but only 1.5% had cancer. Otis Brawley, the chief medical officer of the American Cancer Society, said Venter's work sounded like "fascinating science," so long as the people taking the physical understand that this is research, not medicine.

Venter believes the problem with earlier screening tests is that they give too little data, not too much. He is his own evidence. He was the first person to get his DNA sequenced, and the results made him think his risk for most types of cancer was low. When he got prostate cancer, he asked his researchers why. They found what he calls "the likely perpetrator."

It's a change in the way his body responds to the hormone testosterone. Testosterone works by tripping a cellular receptor (think of it as a switch). The gene for that receptor is more effective if it has fewer "repeats" (bits of repeated, garbled genetic code). Testosterone makes prostate cancer grow, so a man with 22 repeats and an inefficient receptor has a lowered risk of the disease. Venter's androgen receptor had just six repeats.

"Basically, I have a supersensitive testosterone receptor," Venter says. "Everybody thought I had balls of steel. In fact, I have only six repeats in my androgen receptor."

But Venter's constant search for more data about his own biology also made the problem worse, illustrating one of the true dangers of something like his $25,000 physical. Years before, Venter learned that his testosterone levels were low and decided to take testosterone supplements. (Most doctors don't recommend doing this.) That almost certainly made his tumor grow faster.

About 40% of Health Nucleus' patients have found out they have something serious. Some, like Ham Smith's lung cancer, absolutely needed to be treated. Venter insists Smith's tumor might have killed him had it been discovered a few weeks later. But for most of Human Longevity's patients, the results are not so clear-cut. I'm lucky: My MRI results showed nothing save that my hippocampus, a part of the brain that forms memories, is of only average size. (My DNA sequence isn't in yet.)

I've been thinking a lot about what I would do if I'd learned about a tumor or an aneurysm, and whether this whole endeavor is a bad idea. But I also haven't been able to get myself to regret going through it. Knowledge about yourself is a very seductive offer. It's one that Venter hopes will give him the data to finally deliver on the genome's promise.

[I publicly call Craig (to his delight...) "The Tesla of Genomics" (and George Church "The Edison of Genomics"). We are friends, and an increasingly large circle of Postmodern Genomics knows full well that "The Principle of Recursive Genome Function" governs life (and therefore, death) through the repeats in our non-coding DNA (such as "receptors", maiden name "Junk DNA"). Craig Venter is the first to publicly claim that his cancer was actually caused by an erroneous (in his case, too few) number of repeats in the regulatory DNA. A "portrait" of the scope and pattern of the repeats in his genome was made in 2011, but its fractal nature was missed (the Zipf-Mandelbrot Fractal Parabolic Distribution Curve was not plotted, though the best method of the IP was made explicitly available by 2009). It is unfortunate that Craig did not focus much on math in school, and when I presented FractoGene at his Venter Institute in early 2007 he could not be present for family reasons. Nobelist Ham Smith, however, asked a brilliant question: "Andras, those fractal repeats you show here in the smallest free-living DNA - would they be there if the genome were composed of random A,C,T,G?" Since I had not done that "random test" at the time of the presentation, I could only answer: "Ham, I do not think so, but I have never done this very easy but brilliant test. I will get back to you in days, when I am home in Silicon Valley." Of course, it took only about half an hour for me to code the test and search for the fractal repeats. NONE OF THEM SHOWED UP IN A RANDOM SEQUENCE. George Church knows math (he can think in code and can write code) and, along with others like Eric Schadt and Nobelist Michael Levitt, finds the fractal recursive genome function "revolutionary" - Andras_at_Pellionisz_dot_com]
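Ham Smith's control is easy to reproduce in spirit: in a random A,C,T,G string of realistic length, long exact repeats essentially never occur, so repeats found in real genomes are not chance artifacts. Below is a rough sketch of such a test; the sequence length and the 25-base window are arbitrary choices for illustration, not the original half-hour code.

```python
import random
from collections import Counter

def duplicated_kmers(seq, k):
    """Count distinct k-mers that occur more than once in seq."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    return sum(1 for c in counts.values() if c > 1)

random.seed(42)
random_dna = "".join(random.choice("ACGT") for _ in range(100_000))
repeat_dna = "CAG" * 33_334  # a purely repetitive sequence of similar length

# In 100 kb of random DNA, a repeated 25-mer is astronomically unlikely
# (about 5e9 window pairs against 4**25 ~ 1.1e15 possible 25-mers).
print(duplicated_kmers(random_dna, 25))
# The repetitive sequence, by contrast, has massively duplicated 25-mers:
# exactly 3 distinct ones, one per phase of the CAG repeat.
print(duplicated_kmers(repeat_dna, 25))
```

The asymmetry is the point of the control: whatever repeat structure a real genome shows above this random baseline is signal, not noise.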

China Looks for Fractal Experts - request from Hohhot

Thematic Volume Special Issue on: Advanced Fractal Computing Theorem and Application

Objectives, Purpose and Value of the Recommended Theme

Today, fractal-based computing has been applied in a variety of research domains and applications, since fractality is a fixed and distinctive characteristic of natural objects. Since the 1980s, theoretical research on and practical application of fractal computing have been a hotspot in natural science and engineering. However, open issues in this domain still await solution, such as the generation mechanism of the k-M set with complex exponents, the fractal properties of complex functions, the fractal properties of natural objects, and so on.

Fractal computing is a form of computing that exploits the self-similarity of information to improve on classical computing methods. Using fractal computing technology, researchers can identify fixed structures and properties of complex dynamical processes in engineering and improve the theoretical and practical systems in these domains.

Fractal-based information-processing technologies, such as fractal compression and fractal-based information classification and recognition, have also attracted many researchers for years. In recent years, fractal computing has also been applied in newer domains such as bioinformatics and geology.

Recently there has also been much theoretical and technological research in the domain of fractal computing, with ever more methods emerging. Moreover, fractal properties serve as feature components in many classifiers. The objective of this thematic issue is therefore to present state-of-the-art developments in fractal computing theory and application, including mathematical analysis and novel engineering applications.

Potential topics include, but are not limited to:

Theoretical analysis in fractal

Fractals with mathematical function

Fractal analysis with natural object

Generalized Mandelbrot set

Nonlinear dynamics with fractal

Fractal of quaternion

Analysis of fixed and periodic points

Fractal graphical analysis

Analysis with fractal dimension

Fractal compression

Fractal based encoding and decoding method

Application of fractal in science and engineering

Other interesting and promising domains regarding fractal
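Several of the topics above ("Analysis with fractal dimension", "Fractal analysis with natural object") rest on box-counting dimension estimates. As a self-contained illustration (not taken from the call), the middle-thirds Cantor set is a case where the estimate recovers the exact dimension log 2 / log 3 ≈ 0.631 at any finite depth:

```python
import math

def cantor_cells(depth):
    """Occupied cells of the middle-thirds Cantor set on a grid of 3**depth boxes.

    Cells are integer indices, so the construction is exact (no float rounding).
    """
    cells = [0]
    for _ in range(depth):
        # each surviving interval keeps only its first and last third
        cells = [3 * c + d for c in cells for d in (0, 2)]
    return cells

def box_dimension(depth):
    """Box-counting estimate: log(N boxes) / log(1 / box size)."""
    n_boxes = len(cantor_cells(depth))  # 2**depth occupied boxes
    scale = 3 ** depth                  # box side is 3**-depth
    return math.log(n_boxes) / math.log(scale)

print(box_dimension(8))  # ~0.6309, i.e. log 2 / log 3
```

For natural objects the occupied cells come from measurement rather than construction, but the counting step and the log-log ratio are the same.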

Suggested Schedule

Submission of extended abstract: Friday, 7 Dec 2016

Manuscript Due: Friday, 3 Feb 2017

First Round of Reviews: Friday, 10 Mar 2017

Second Round of Reviews: Friday, 25 Mar 2017

Publication Date: Friday, 2 Jun 2017

Editorial Team

Lead Guest Editor

Dr. Shuai Liu, College of Computer Science, Inner Mongolia University, China



Guest Editors

Dr. Xiaochun Cheng, School of Computing Science, Middlesex University, UK



Dr. Peng Xu, College of Science, China Jiliang University, China




[Never heard of the remote and carefully isolated city of Hohhot, the site of "Inner Mongolia University, China"? Below is a picture from 2011, with the added info that "The Chinese leadership also has command bunkers at Hohhot in Inner Mongolia"]

Bill Gates: Bioterrorism could kill more than nuclear war — but no one is ready to deal with it
(except a few unmentioned ...)

By Avi Selk February 18

A genetically engineered virus is easier to make and could kill more people than nuclear weapons — and yet no country on Earth is ready for the threat, Bill Gates warned world leaders Saturday.

No one on his panel at the Munich Security Conference argued with him.

“The next epidemic has a good chance of originating on a computer screen,” said Gates, who made a fortune at Microsoft, then spent much of it fighting disease through his global foundation.

Whether “by the work of nature or the hands of a terrorist,” Gates said, an outbreak could kill tens of millions in the near future unless governments begin “to prepare for these epidemics the same way we prepare for war.”

His co-panelists shared some of the same fears.

“Disease and violence are killing fewer people than ever before, but it's spreading more quickly,” said Erna Solberg, the prime minister of Norway. “We have forgotten how catastrophic those epidemics have been.”

She recalled the Black Death, which she said killed more than half her country's population and created a 200-year recession in Europe.

“It's not if, but when these events are going to occur again,” said Peter Salama, executive director of the World Health Organization. “We need to ramp up our preparedness.”

Gates, who founded the Bill & Melinda Gates Foundation with his wife in 2000, has been worrying about the world's ability to stop a deadly pandemic since Ebola killed thousands two years ago, while governments and militaries struggled to stop it from spreading through West Africa.

“NATO countries participate in joint exercises in which they work out logistics such as how fuel and food will be provided, what language they will speak, and what radio frequencies will be used,” Gates wrote in 2015 in the New England Journal of Medicine. “Few, if any, such measures are in place for response to an epidemic.”

He took the same message to Reddit a year later, when a commenter asked which technologies the world was better off without.

“I am concerned about biological tools that could be used by a bioterrorist,” Gates wrote. “However the same tools can be used for good things as well.”

Before his panel on Saturday, Gates told the Telegraph: “It would be relatively easy to engineer a new flu strain” by combining a version that spreads quickly with one that kills quickly. Unlike a nuclear war, such a disease would not stop killing once released.

At Munich, Gates ran down all the ways that the world's great powers were unprepared: governments out of touch with the companies that make vaccines, international health departments out of touch with each other, and militaries that may not have considered responding to a biological threat.

“Who's this alternate group that's going to deal with the panic?” Gates said. “Who's got the planes and the budget? Maybe the fire department?”

While some others on the panel — “Small Bugs. Big Bombs” — focused on the threat of natural diseases, Gates called for “germ games” simulations, better monitoring to spot outbreaks early, and systems to develop vaccines within weeks — rather than the 10-year lead time he said was more common.

“We need a new arsenal of weapons: antiviral drugs, antibodies, vaccines and new diagnostics,” he said.

The Centers for Disease Control's website lists seven agents — including anthrax, plague and bleeding fevers such as Ebola — as potential ingredients in a bioterrorist's cookbook.

The center's sections on surveillance and “planning for all bioterrorism” cite research papers that are mostly more than a decade old.

In his New England Journal of Medicine article, Gates said the United States' last epidemic simulation took place in 2001. At the end of President George W. Bush's administration, a bipartisan report accused the U.S. government of doing too little to address the threat of bioterrorism. Two years into Barack Obama's presidency, a congressional panel gave the government an 'F' in preparedness.

On Saturday, the Munich panelists named only a handful of countries working fast enough to identify and address the threat.

“Rwanda is a leader,” Gates said. “If an epidemic started there, we'd see it quickly.”

[Rwanda - or other small countries isolated in African jungles - may well become "testing grounds" for entirely new "designer viruses" made possible by genome editing. As seen below, BGI of China works feverishly behind locked doors on its bold ambition. Although back in 2012 The Atlantic already outlined a frightening scenario, "Hacking the President's DNA", and e.g. George Church, Eric Schadt and Craig Venter (along with a few thought-leaders of Genome Informatics, with Bill Gates now welcomed to the club) are keenly aware of the dangers, at U.S. government levels there is not much preparedness as yet. Except, perhaps, that the instincts of the current President (Donald Trump) may be right that shaking hands with an un(ac)counted number of people might not be such a great idea, after all. Andras_at_Pellionisz_dot_com]

How Trump can make the most of a nonpartisan cancer 'moonshot'


(Archive photo of Nancy Brinker - in pink, first row - leading her Bridge Walk against breast cancer in Budapest, 2002 as U.S. Ambassador to Hungary)

While many are focusing on what newly appointed Health and Human Services Secretary Tom Price will do about repealing ObamaCare, or cuts to Medicare and Medicaid, attention should also be paid to some of the other critical areas now under his administrative scope.

One key initiative should be sustaining the nation's rich heritage of biomedical research, publicly entrusted to the National Institutes of Health (NIH). The NIH benefited from passage of the 21st Century Cures Act in December, which contained $4.8 billion in new funding - $1.8 billion of which is reserved for the cancer "moonshot" initiative launched by then-President Obama, who put then-Vice President Joe Biden in charge of what became the latter's namesake effort.

And although some in Congress criticized the act for providing too many concessions to the pharmaceutical industry at the expense of patient safety, the influx of additional money for research into cancer, as well as brain diseases and opioid abuse, is most welcome if dispensed in a measured, responsible manner dealing with prevention as well as research and treatment.

During the past year, we've written several times in The Hill about some of the shortcomings related to moonshot, especially regarding the partisan and nationalistic nature of its origin, and the lack of clarity in its organization.

And today, there still seems to be some confusion about its leadership and direction.

In a recent conversation with Rep. Debbie Wasserman Schultz (D-Fla.), the congresswoman noted that Biden would continue the effort through a foundation he was starting. And a member of moonshot's blue ribbon panel said that the initiative was now under the direction of the National Cancer Institute's acting director, Douglas Lowy.

To succeed, the Trump administration needs to assume the leadership Biden was given previously as vice president to foster cooperation and collaboration among the various federal agencies and institutions with relevant resources, and to guide moonshot in a direction that maximizes its value to public health.

We've already recommended that the president make Lowy's appointment permanent, and would add now that his administration not repeat the mistake of his predecessor in his partisanship, but rather invite Biden and others passionate and experienced in the cancer community to fight the disease that does not discriminate in its incidence or mortality.

Nancy G. Brinker is the founder of Susan G. Komen, the world's largest breast cancer charity. She has also served as U.S. ambassador to Hungary, U.S. chief of protocol and as a Goodwill Ambassador for Cancer Control to the U.N.'s World Health Organization. She is now continuing her work in media and consulting and has taken a leave of absence from Komen's board.

Rosenthal is an independent journalist who covers issues, controversies and trends in oncology as special correspondent for MedPage Today. He is the founder of the National Cancer Institute's Designated Cancer Centers Public Affairs Network and helped organize a number of national medicine-and-the-media conferences.

Both Brinker and Rosenthal have been co-chairing cancer forums for the Concordia Summit. The opinions expressed belong solely to the authors.

[Fifteen years ago (2002), when Nancy Brinker was the U.S. Ambassador to Hungary, among other things mobilizing Hungarian society for the fight against breast cancer (see archive picture above), while I had just conceived FractoGene (fractal genome grows fractal organisms - now a US patent in force for about the Next Decade), neither of us could possibly have thought that in 2017 Hungary would be (in some respects) ahead of the USA!

WHAT !? - It must be someone from Hungary who would dare to say that!

Well, I came from Hungary to Stanford in 1973 ... but, in part upon my advice in Budapest in 2006, the bold-looking statement is true in 2017 in a small but very essential way. In the USA, as Nancy Brinker so eloquently observes, "there still seems to be some confusion about its leadership and direction", and the remedy appears to her "to foster cooperation and collaboration among the various federal agencies and institutions with relevant resources". This is about the Moon Project(s) in the USA - in plural, since there are at least three, presently all in limbo. Meanwhile, tiny Hungary has just accomplished perhaps the most important goal - it created a new, independent and focused STRUCTURE, with leadership and direction, for a National Oncogenomics Initiative. Over a decade before I took up the informatics challenge inherent in postmodern genomics, I had worked with NASA (to utilize my innovation, learnt from bird-brain neural nets, of how to automatically land F15 fighter planes with one wing chopped off). Since it was "The Decade of the Brain", I realized from the inside that none of the many US government agencies ALONE could ever accomplish an "out of the box" NEW challenge. Attempting to promote cooperation among agencies that compete for the same pool of government money (simply put, taxpayer dollars) proved to be inherently futile; Pellionisz, A. (1990) "USA Civilian Neural Network Program" to NASA, NIH, NIMH and NSF (Senior Research Associate of the USA National Academy to NASA).

We all agree that we should perhaps learn something here from the original "Moon Project". Indeed! Sputnik of the Soviet Union shocked the West 60 years ago, on Oct. 4, 1957. How did the US respond to the challenge of Intercontinental Ballistic Missiles? Within a year (Oct. 1, 1958) it established an entirely new and independent government agency, NASA, to focus resources. Based on that STRUCTURAL foundation, JFK could show "leadership and direction" for the (original) "Moon Shots".

Andras_at_Pellionisz_dot_com ]

China aggressively challenges US lead in precision medicine

Ylan Mui | February 9, 2017 | Washington Post

["Genomic Sputnik", 2017]

The United States has long been the [genomic] industry’s undisputed leader,…but now China is emerging as America’s fiercest competitor….

“I’m very frustrated at how aggressively China is investing in this space while the U.S. is not moving with the same kind of purpose,” said Eric Schadt, director of the Icahn Institute for Genomics and Multiscale Biology at Mount Sinai. “China has established themselves as a really competitive force.”

For China, the genomics revolution has been a chance to showcase its technical prowess as well as cultivate homegrown innovation…To succeed over the next generation, China hopes to emulate Western-style entrepreneurship to transform its economy.

[T]his past spring, Chinese officials launched a $9 billion investment in precision medicine, a wide-ranging initiative to not only sequence genes, but also develop customized new drugs using that data. The funding dwarfs a similar effort announced by President Obama a year ago that has an uncertain future in Trump’s new administration.

“The U.S. system has more dexterity and agility than the Chinese system,” said [Denis Simon, executive vice chancellor of Duke Kunshan University in China]. “But the learning curve in China is very powerful, and the Chinese are moving fast. The question is not if. The question is when.”

[This column brought up the memory - as well as a new and even more formidable possibility - of a "Sputnik-shock" a long time ago. This time the challenger is not the no-longer-existing Soviet Union, but China and other quicker-thinking nations. While in the USA (and particularly in Canada) some Old School remnants still argue to death to those suffering e.g. from cancer that 99% of the human genome is "Junk", the bright new generation - even with an obvious prior agenda - totally gets it that the 1% is unlikely to cut it (see below). True, there is some mention that "genes are turned on and off" - as if the art of piano music were entirely explained by saying "just turn some keys on, and leave others off". Or as if, for the "Moon Shot", it were enough to say "before this decade is out, take a man to the Moon and bring him safely back". Some of us remember that after the rockets kept blowing up, Wernher von Braun and 500+ private contractor companies had to sweat it. Above, it takes a double-degree biomathematician, the Director of the Icahn Institute of New York, Dr. Eric Schadt, to signal that Personalized Medicine for cancers is a field in which the US is already beaten by China. (The $9 Bn initiative by China is not particularly transparent to the West. To peek into BGI of China, see recent past and the future.) Though Bill Gates cautiously did not mention any particular nations, he clearly warned in Davos about strategic aspects as possibly "Very, very dangerous". To illustrate the sort of mathematics that will go beyond "turning on and off" (to covariant and contravariant fractals), let us just look at the two recent introductory pages below, originating from an interesting set of countries. Andras_at_Pellionisz_dot_com]

Patriots cheerleader and MIT researcher Theresa Oei does it all - the new generation leaps over the nonsense of Junk DNA

Theresa Oei (23) skips the tired question that some Old Experts still agonize over - "if" the non-coding areas regulate the genome. She wants to understand "how".

By Bill Whelan

WickedLocal Cambridge (MIT)

... On weekdays, she splits genes. On Sundays, she just does splits.

Cambridge resident Theresa Oei is a 23-year-old rookie cheerleader with the New England Patriots. She's also a molecular biophysicist at the Broad Institute of the Massachusetts Institute of Technology and Harvard. ..

What are your future aspirations?

I'd like to get my teaching certification for Irish stepdancing. I'm also hoping to pursue a Ph.D. next fall. I'm looking at Harvard programs, MIT, going back to Yale or Stanford. I'd like to look at genetic regulation and understanding how the human genome is controlled and influenced by non-coding areas of the genome.

What does that mean?

The genome codes for lots of proteins. Say this sequence of the human genome codes for a protein in the heart, for example. But there are also huge regions of the genome that don't seem to code for anything and don't make a particular protein.

Some people thought it was just junk DNA, but it seems these regions have influence on regulating things such as how much protein to make or it silences a particular sequence...

[An Old Expert proposed for 2003-2007 what he labeled a "Big Science" project: a massive taxpayer-funded effort to find out whether in the human genome only the 1% (the "genes") is transcribed, and thus can be functional, or whether the until-then discarded "Junk DNA" can also be functional. Even before all the taxpayer money was spent, the answer was a devastating result for the Old Establishment: "the human genome is thoroughly transcribed". Never mind. Though a feeble conclusion said in 2007 (that is, TEN YEARS AGO) that "the scientific community will need to rethink some long-held views", instead of marking "return to sender" on every grant application that kept focusing on "genes only" (along with the request "please re-submit taking into consideration the results that the non-coding DNA needs attention as well"), the establishment largely ignored its own results. Abroad, even a second 4-year round was organized to ask the "if" question again (2008-2012), only to arrive at an even stronger affirmation that "the entire human genome is teeming with life". Meanwhile, in the USA an entire decade passed until this week, when 0.001 of the NIH budget was allocated, "funding availability permitted", to start addressing "how". The New Generation, devoting their future to "how", are not without a few concepts already established, however. Even if young "Theresas" all around are not necessarily all mathematically minded, "it seems" that "fractal genome regulates fractal growth of organisms" - thus e.g. fractal defects of the recursive genome function might be "edited out" to stay away from genome-regulation diseases, e.g. cancers, auto-immune diseases, autism et al. Andras_at_Pellionisz_dot_com]

NIH to expand critical catalog for genomics research

[NIH at the brink of reform acknowledges that "Junk DNA" shapes repetitive (Erez Lieberman and I say FRACTAL) function]

Bethesda, Md., Thurs., Feb. 2, 2017 - The National Institutes of Health (NIH) plans to expand its Encyclopedia of DNA Elements (ENCODE) Project, which is generating a fundamental genomics resource used by many scientists to study human health and disease. Funded by the National Human Genome Research Institute (NHGRI), part of NIH, the ENCODE Project strives to catalog all the genes and regulatory elements - the parts of the genome that control whether genes are active or not - in humans and select model organisms. With four years of additional support, NHGRI builds on a long-standing commitment to developing freely available genomics resources for use by the scientific community.

"ENCODE has created high-quality and easily accessible sets of data, tools and analyses that are being used extensively in studies to interpret genome sequences and to understand the consequence of genomic variation," said Elise Feingold, Ph.D., a program director in the Division of Genome Sciences at NHGRI.

"These awards provide the opportunity to strengthen this foundation by expanding the breadth and depth of the resource."

Since launching in 2003, ENCODE has funded a network of researchers to develop and apply methods for mapping candidate functional elements in the genome, and to analyze the enormous database of generated genomic information. The data and tools generated by ENCODE are organized by two groups: a data coordinating center, which houses the data and provides access to the resource through an open-access portal, and a data analysis center, which synthesizes the data into an encyclopedia for use by the research community.

Pending the availability of funds, NHGRI plans to commit up to $31.5 million in the current fiscal year (FY17) for these awards. With this funding, ENCODE will expand the scope of these efforts to include characterization centers, which will study the biological role that candidate functional elements may play, and develop methods to determine how they contribute to gene regulation in a variety of cell types and model systems. Additionally, the project will enhance the ENCODE catalog by developing a way to incorporate data provided by the research community, and will use biological samples from research participants who have explicitly consented for unrestricted sharing of their genomic data.

At its core, ENCODE is about enabling the scientific community to make discoveries; that is, using basic science approaches to understand genomes at the most fundamental level. Its catalog of genomic information can be used for a variety of research projects - for example, generating hypotheses about what goes wrong in specific diseases or understanding the processes that determine how the same genome sequence is used in different parts of the body to make cells with specialized functions. More than 1,600 scientific publications by the research community have used ENCODE data or tools.

"We found that many of the people that are using the ENCODE resource are doing so for disease studies, and this attests to its translational value," said Mike Pazin, Ph.D., a program director in NHGRI's Division of Genome Sciences.

Identifying the genome's features: the mapping centers

ENCODE's mapping centers have been part of the project since its inception. These groups aim to pinpoint the genomic locations of genes and the regulatory elements that control them. With these new awards, the mapping centers will study a broader diversity of biological samples, including those from individuals with various diseases, as well as highly specialized cells, to expand the catalog of candidate functional elements in the human and mouse genomes.

"In the past, ENCODE has focused on identifying functional elements in healthy individuals; but gene expression may be regulated differently in people that are unhealthy versus those that are healthy," said Dr. Pazin. "Diseased tissues may help with the detection of new functional elements."

"An important aspect of the ENCODE Project is to identify collections of cell types for use in creating an incredibly detailed map of the genome and its features," said Erez Lieberman Aiden, Ph.D., assistant professor in the Department of Genetics at the Baylor College of Medicine and at Rice University, and a first-time ENCODE grantee who will run one of ENCODE's eight mapping centers. "If we put them all together, the whole is much more valuable than the sum of each of the parts."

Dr. Aiden's research focuses on how the genome folds up inside the nucleus in three dimensions. "Scientists tend to think of a chromosome as a long, linear string of letters. In actuality, the string folds up, forming loops and other shapes," he said.

For the roughly two meters of DNA to fit into a cell nucleus, it must be packed into chromatin. This tight packing within the nucleus then can put two parts of the genome - a gene and its regulatory element, for example - in close contact. Having a better understanding of where these loops occur will clarify the relationship of genomic features that were thought to lie far apart.
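To put a number on why those loops matter, here is a small illustrative sketch, not from the NIH announcement: the "fractal globule" picture associated with Dr. Aiden's Hi-C work predicts that contact probability between two loci falls off roughly as 1/s with genomic separation s, far more slowly than the ~s^-1.5 expected of an equilibrium (random-walk) polymer. The function and exponents below are the textbook idealizations, used only to illustrate the scale.

```python
# Toy contact-probability scaling for chromatin folding models.
# "fractal" uses the ~1/s decay of the fractal-globule model;
# "equilibrium" uses the ~s**-1.5 decay of a random-walk polymer.
# Purely illustrative; real Hi-C exponents vary by cell type and distance range.

def contact_probability(s, model="fractal"):
    """Relative contact probability for two loci s base pairs apart."""
    if s <= 0:
        raise ValueError("separation must be positive")
    exponent = -1.0 if model == "fractal" else -1.5
    return s ** exponent

near = contact_probability(1_000)       # loci 1 kb apart
far = contact_probability(1_000_000)    # loci 1 Mb apart
# Under fractal scaling a 1 Mb contact is only ~1000x rarer than a 1 kb one;
# under equilibrium scaling it would be ~30,000x rarer.
print(far / near)
```

The slow 1/s decay is one way to see why a regulatory element a million letters away can still plausibly touch its target gene often enough to control it.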

A more detailed look at genome function: the characterization centers

ENCODE will establish five characterization centers to investigate how large numbers of genomic elements function in specific biological settings. The ENCODE characterization centers will take advantage of newly developed technologies to characterize many elements at the same time.

"We want to try and breathe life into the functional aspects of the catalog that ENCODE has created," said Will Greenleaf, Ph.D., assistant professor in the Genetics Department at Stanford University, and a new ENCODE investigator who will run one of the characterization centers. "Understanding how regulatory elements work together to bring about gene expression is something we're really excited about."

Dr. Greenleaf's research involves changing various candidate regulatory elements through methods like CRISPR-Cas9, a gene-editing technique that can precisely clip out sections of the genome. His group, in collaboration with Michael Bassik, Ph.D.'s group at Stanford University, will then characterize how these cells grow under a variety of conditions to document what happens when regulatory elements are missing from the genome.
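The bookkeeping behind such a growth screen can be sketched in a few lines. This is an invented miniature, not the Greenleaf/Bassik pipeline: each candidate element is deleted in a pool of cells, and elements whose loss impairs growth show up as guides that "drop out" (a negative log2 fold change in read counts after growth). All names and counts are hypothetical.

```python
# Hypothetical dropout-screen arithmetic: compare read counts per candidate
# element before and after cell growth. A strongly negative log2 fold change
# suggests deleting that element hurt the cells.

import math

def log2_fold_change(before, after, pseudocount=1):
    """Per-element log2 ratio of read counts after vs. before growth."""
    return {element: math.log2((after.get(element, 0) + pseudocount)
                               / (before.get(element, 0) + pseudocount))
            for element in before}

before = {"enhancer_A": 100, "enhancer_B": 100, "neutral_ctrl": 100}
after = {"enhancer_A": 12, "enhancer_B": 95, "neutral_ctrl": 104}

lfc = log2_fold_change(before, after)
# Cells lacking enhancer_A grew poorly, so its guides dropped out of the pool:
print(min(lfc, key=lfc.get))  # → enhancer_A
```

The pseudocount is a common guard against dividing by zero when a barcode vanishes entirely from one sample.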

"We've sequenced the human genome, but it's written in a language that we don't understand. ENCODE is a way to learn the logic and grammar of that language, so that we can unlock the power of sequencing the genome for understanding both human health and disease," he said.

Analyzing the catalog: the computational analysis projects

In a third facet of ENCODE, researchers will develop computational and statistical approaches that will make the ENCODE catalog even more useful for studying both disease mechanisms and fundamental biology.

"In any given cell type, you may have 30,000-50,000 sites in the genome that modulate gene expression. How do we even think about that? Or visualize that? Or work out which elements in the genome are regulating which genes - and when and how? You have to compute," said Christina Leslie, Ph.D., associate member of the Computational and Systems Biology Program at the Sloan Kettering Institute for Cancer Research who's been awarded funds to work on one of six computational projects.

Her research uses predictive models of gene regulation that incorporate data on which sites in the genome are accessible, and decodes the DNA signals at these sites. Her previous work had investigated this phenomenon looking at the genome in a linear, or one-dimensional, form. Now, she is incorporating data from other ENCODE projects, such as Dr. Aiden's work on mapping chromatin loops in three dimensions, to further our understanding of genome function.
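As a toy illustration of what "decoding the DNA signals" at accessible sites can mean computationally, the sketch below scores a sequence by the short subsequences (k-mers) it contains, a common first step in sequence models of chromatin accessibility. The weights here are invented (GC-rich 3-mers arbitrarily marked as "open"), not Dr. Leslie's actual learned model.

```python
# Minimal k-mer featurization plus a linear score, as a stand-in for the
# learned sequence models used in accessibility prediction. Weights are
# invented for illustration only.

from itertools import product

def kmer_counts(seq, k=3):
    """Count every overlapping k-mer of length k in a DNA sequence."""
    counts = {"".join(p): 0 for p in product("ACGT", repeat=k)}
    for i in range(len(seq) - k + 1):
        kmer = seq[i:i + k]
        if kmer in counts:  # skips k-mers containing non-ACGT letters
            counts[kmer] += 1
    return counts

# Invented weights: pretend GC-rich 3-mers mark open (accessible) chromatin.
WEIGHTS = {kmer: kmer.count("G") + kmer.count("C") - 1.5
           for kmer in kmer_counts("", 3)}

def accessibility_score(seq):
    """Linear score: weighted sum of k-mer counts (higher = more likely open)."""
    return sum(WEIGHTS[k] * n for k, n in kmer_counts(seq).items())

print(accessibility_score("GCGCGCGC") > accessibility_score("ATATATAT"))  # → True
```

A real model would learn those weights from genome-wide accessibility data rather than hard-coding them, and modern versions add the 3D contact information described above.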

Bringing data to the community: the data coordinating and data analysis centers

The ENCODE Project is bringing together laboratories that are generating vast amounts of data with groups that integrate these data through the power of computational research. The data coordinating center and the data analysis center support ENCODE members by connecting all participants to the data, and creating avenues of easy access by the greater research community.

"As a community resource, ENCODE data must be rapidly and freely available to researchers so they can immediately put it to use in their own work. This is where the data coordinating center and data analysis center play such critical roles," said Dan Gilchrist, Ph.D., a program director in NHGRI's Division of Genome Sciences. "Thanks to these centers, ENCODE data are findable, accessible, interoperable and reusable - maximizing their utility to the research community."

Recipients of the awards are:

Mapping Centers

Bradley Bernstein, M.D., Ph.D. and Chad Nusbaum, Ph.D.; Broad Institute, Cambridge, Massachusetts

Erez Lieberman Aiden, Ph.D.; Baylor College of Medicine, Houston

Mats Ljungman, Ph.D.; University of Michigan, Ann Arbor

Richard Myers, Ph.D. and Eric Mendenhall, Ph.D.; HudsonAlpha Institute for Biotechnology, Huntsville, Alabama.; University of Alabama in Huntsville

Yijun Ruan, Ph.D.; The Jackson Laboratory, Bar Harbor, Maine

Michael Snyder, Ph.D.; Stanford University, California

John Stamatoyannopoulos, M.D.; Altius Institute for Biomedical Sciences, Seattle, Washington

Barbara Wold, Ph.D. and Ali Mortazavi, Ph.D.; California Institute of Technology, Pasadena; University of California, Irvine

Characterization Centers

Nadav Ahituv, Ph.D. and Jay Shendure, M.D., Ph.D.; University of California, San Francisco; University of Washington, Seattle

William Greenleaf, Ph.D. and Michael Bassik, Ph.D.; Stanford University, California

John Lis, Ph.D. and Haiyuan Yu, Ph.D.; Cornell University, Ithaca, New York

Len Pennacchio, Ph.D. and Axel Visel, Ph.D.; University of California, Lawrence Berkeley National Laboratory, Berkeley, California

Yin Shen, Ph.D. and Bing Ren, Ph.D.; University of California, San Francisco; Ludwig Institute for Cancer Research/University of California, San Diego School of Medicine

Computational Analysis

Michael Beer, Ph.D.; Johns Hopkins University, Baltimore, Maryland

Christina Leslie, Ph.D.; Sloan Kettering Institute for Cancer Research, New York, New York

Alkes Price, Ph.D. and Soumya Raychaudhuri, M.D., Ph.D.; Harvard University, Cambridge, Massachusetts.; Brigham and Women's Hospital, Boston, Massachusetts

Jonathan Pritchard, Ph.D.; Stanford University, California

Ting Wang, Ph.D., Barak Cohen, Ph.D. and Cedric Feschotte, Ph.D.; Washington University in St. Louis; University of Utah, Salt Lake City

Xinshu Grace Xiao, Ph.D.; University of California, Los Angeles

Data Coordinating Center

J. Michael Cherry, Ph.D.; Stanford University, California

Data Analysis Center

Zhiping Weng, Ph.D. and Mark Gerstein, Ph.D.; University of Massachusetts Medical School, Worcester; Yale University, New Haven, Connecticut

NHGRI is one of the 27 institutes and centers at the National Institutes of Health. The NHGRI Extramural Research Program supports grants for research, and training and career development at sites nationwide. Additional information about NHGRI can be found at

National Institutes of Health (NIH): NIH, the nation's medical research agency, includes 27 institutes and centers and is a component of the U.S. Department of Health and Human Services. NIH is the primary federal agency conducting and supporting basic, clinical and translational medical research, and is investigating the causes, treatments and cures for both common and rare diseases. For more information about NIH and its programs, visit


Sheena Faherty, Ph.D.

NHGRI Communications

(301) 443-3523

Last Updated: February 3, 2017

[The issue of "Junk DNA" playing a crucial role in fractal recursive genome function hit the news in 2002 - yes, 15 years ago! At that time the double "lucid heresy" could not be published - instead it had to be filed as a US Patent, now in force for about the next decade. Soon after, in February 2003 - yes, 14 years ago! - we celebrated in Monterey, California the 50th Anniversary of the discovery of the Double Helix. Jim Watson, Francis Collins, Craig Venter (etc.) were all there. I congratulated Craig for beating the government with his private-sector genome sequencing and asked him about the so-called "Junk DNA". He said "This is a most important question that you should have asked all of us". Encouraged, I pulled Francis Collins to the side at the Monterey Aquarium evening session to make the sharp point to him: "If the human DNA, fully sequenced a year before, is 98% identical in its genes with the newly sequenced mouse genome - yet the human genome is about 1/3 larger - it must be the "Junk DNA" (whatever function it plays) that makes the difference!" While visibly bothered by this phenomenon, he found his peace of mind by replying to me "we have some huge genomes, e.g. that of rice, that make the conclusion on the function of "Junk DNA" perhaps questionable". Nonetheless, when he returned to D.C. from sunny California, he requested a good bit of taxpayers' dollars for ENCODE (2003-2007) to "study" whether that "silent majority of 98% of dark genome" was truly that silent. Nope - the "Junk DNA" was found to be teeming with life. The ENCODE program ended in 2007 (slightly ahead of schedule, since my PostGenetics Society had already concluded in November 2006, at the "First in the World" International Conference in Budapest, Hungary, that genomics must face a "double disruption", overturning BOTH the "Junk DNA" and the "Central Dogma" mistaken axioms).

While it is not very transparent in the announcement, ENCODE concluded in 2007 (a decade ago) with this verdict: Dr. Collins issued a mandate that "the scientific community will need to rethink some long-held views". And now it is 2017.

So let us face it. Since from 2007 till 2017 we have wasted a FULL DECADE, the primary personal responsibility rests on the NIH head, Francis Collins. Some pioneers - denied not just funds, but even a publication on PostGenetics in Science in 2007 - went ahead, and even made the burning issue widely public in a Google Tech Talk on YouTube (2008). However, the minute Francis Collins was appointed NIH Chief, he flipped into his "establishment mode" (that is his forte: how to get the largest chunks of taxpayer money without necessarily ruffling the feathers of the bigwigs of the "science" establishment). Now that most likely we'll see a "rather different" NIH emerging, there will be some interesting questions. It seems highly questionable whether the new Director (who may, or may not, be Francis Collins) would leave this $31 M "swan song" ("legacy of the establishment") intact. Questions may arise about "availability of funds" - or the plan might have to change to follow a revised scenario. True pioneers might not be forgotten, and perhaps the entire program will be massively restructured. It is about time. Andras_at_Pellionisz_dot_com]

The mysterious 98%: Scientists look to shine light on the 'dark genome'

February 3, 2017 by Dana Smith


After the 2003 completion of the Human Genome Project – which sequenced all 3 billion "letters," or base pairs, in the human genome – many thought that our DNA would become an open book. But a perplexing problem quickly emerged: although scientists could transcribe the book, they could only interpret a small percentage of it.

The mysterious majority – as much as 98 percent – of our DNA does not code for proteins. Much of this "dark matter genome" is thought to be nonfunctional evolutionary leftovers that are just along for the ride. However, hidden among this noncoding DNA are many crucial regulatory elements that control the activity of thousands of genes. What is more, these elements play a major role in diseases such as cancer, heart disease, and autism, and they could hold the key to possible cures.

As part of a major ongoing effort to fully map and annotate the functional sequences of the human genome, including this silent majority, the National Institutes of Health (NIH) on Feb. 2, 2017, announced new grant funding for a nationwide project to set up five "characterization centers," including two at UC San Francisco, to study how these regulatory elements influence gene expression and, consequently, cell behavior.

The project's aim is for scientists to use the latest technology, such as genome editing, to gain insights into human biology that could one day lead to treatments for complex genetic diseases.

Importance of Genomic Grammar

After the shortfalls of the Human Genome Project became clear, the Encyclopedia of DNA Elements (ENCODE) Project was launched in September 2003 by the National Human Genome Research Institute (NHGRI). The goal of ENCODE is to find all the functional regions of the human genome, whether they form genes or not.

"The Human Genome Project mapped the letters of the human genome, but it didn't tell us anything about the grammar: where the punctuation is, where the starts and ends are," said NIH Program Director Elise Feingold, PhD. "That's what ENCODE is trying to do."

The initiative revealed that millions of these noncoding letter sequences perform essential regulatory actions, like turning genes on or off in different types of cells. However, while scientists have established that these regulatory sequences have important functions, they do not know what function each sequence performs, nor do they know which gene each one affects. That is because the sequences are often located far from their target genes – in some cases millions of letters away. What's more, many of the sequences have different effects in different types of cells.

The new grants from NHGRI will allow the five new centers to work to define the functions and gene targets of these regulatory sequences. At UCSF, two of the centers will be based in the labs of Nadav Ahituv, PhD, and Yin Shen, PhD. The other three characterization centers will be housed at Stanford University, Cornell University, and the Lawrence Berkeley National Laboratory. Additional centers will continue to focus on mapping, computational analysis, data analysis and data coordination.

Cellular Barcodes Reveal Regulatory Function

New technology has made identifying the function and targets of regulatory sequences much easier. Scientists can now manipulate cells to obtain more information about their DNA, and, thanks to high-throughput screening, they can do so in large batches, testing thousands of sequences in one experiment instead of one by one.

"It used to be extremely difficult to test for function in the noncoding part of the genome," said Ahituv, a professor in the Department of Bioengineering and Therapeutic Sciences. "With a gene, it's easier to assess the effect because there is a change in the corresponding protein. But with regulatory sequences, you don't know what a change in DNA can lead to, so it's hard to predict the functional output."

Ahituv and Shen are both using innovative techniques to study enhancers, which play a fundamental role in gene expression. Every cell in the human body contains the same DNA. What determines whether a cell is a skin cell or a brain cell or a heart cell is which genes are turned on and off. Enhancers are the secret switches that turn on cell-type specific genes.

During a previous phase of ENCODE, Ahituv and collaborator Jay Shendure, PhD, at the University of Washington, developed a technique called lentivirus-based massive parallel reporter assay to identify enhancers. With the new grant, they will use this technology to test for enhancers among 100,000 regulatory sequences previously identified by ENCODE.

Their approach pairs each regulatory sequence with a unique DNA barcode of 15 randomly generated letters. A reporter gene is stuck in between the sequence and the barcode, and the whole package is inserted into a cell. If the regulatory sequence is an enhancer, the reporter gene will turn on and activate the barcode. The DNA barcode will then code for RNA in the cell.

Once the researchers see that the reporter gene is turned on, they can easily sequence the RNA in the cell to see which barcode is activated. They then match the barcode back to its corresponding regulatory sequence, which the scientists now know is an enhancer.

"With previous enhancer assays, you had to test each sequence one by one," Ahituv explained. "With our approach, we can clone thousands of sequences along with thousands of barcodes and test them all at once."
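The barcode bookkeeping behind such a massively parallel reporter assay can be sketched in a few lines of Python. This is a hypothetical, drastically simplified illustration (all barcodes and sequence names are made up); a real assay counts millions of reads and normalizes against the input DNA library:

```python
from collections import Counter

# Each candidate regulatory sequence is paired with a unique 15-nt barcode.
# Barcodes recovered from the cell's RNA indicate that their paired sequence
# drove reporter expression, i.e. behaved like an enhancer.
barcode_to_sequence = {
    "ACGTACGTACGTACG": "candidate_enhancer_1",
    "TTGCATTGCAGGTCA": "candidate_enhancer_2",
}

rna_barcode_reads = [
    "ACGTACGTACGTACG", "ACGTACGTACGTACG", "TTGCATTGCAGGTCA",
    "ACGTACGTACGTACG", "GGGGGGGGGGGGGGG",  # last read: sequencing noise
]

def enhancer_activity(reads, mapping):
    """Count RNA barcode reads per candidate sequence; unmapped reads are dropped."""
    counts = Counter()
    for bc in reads:
        seq = mapping.get(bc)
        if seq is not None:
            counts[seq] += 1
    return counts

activity = enhancer_activity(rna_barcode_reads, barcode_to_sequence)
print(activity)  # higher counts suggest stronger enhancer activity
```

The point of the design is exactly what Ahituv describes: because every barcode is unique, thousands of candidate sequences can be pooled into one experiment and demultiplexed afterwards by counting.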

Deleting Sequences to Understand Their Role

Shen, an assistant professor in the Department of Neurology and the Institute for Human Genetics, is taking a different approach to characterize the function of regulatory sequences. In collaboration with her former mentor at the Ludwig Institute for Cancer Research and UC San Diego, Bing Ren, PhD, she developed a high-throughput CRISPR-Cas9 screening method to test the function of noncoding sequences. Now, Shen and Ren are using this approach to identify not only which sequences have regulatory functions, but also which genes they affect.

Shen will use CRISPR to edit tens of thousands of regulatory sequences in a large pool of cells and track the effects of the edits on a set of 60 pairs of genes that commonly co-express.

For this work, each cell will be programmed to reflect two fluorescent colors – one for each gene – when a pair of genes is turned on. If the light in a cell goes out, the scientists will know that its target gene has been affected by one of the CRISPR-based sequence edits. The final step is to sequence each cell's DNA to determine which regulatory sequence edit caused the change in gene expression.

By monitoring the colors of co-expressed genes, Shen will reveal the complex relationship between numerous functional sequences and multiple genes, which was beyond the scope of traditional sequencing techniques.

"Until the recent development of CRISPR, it was not possible to genetically manipulate non-coding sequences in a large scale," said Shen. "Now, CRISPR can be scaled up so that we can screen thousands of regulatory sequences in one experiment. This approach will tell us not only which sequences are functional in a cell, but also which gene they regulate."
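The readout logic of such a pooled screen can be sketched as follows. This is a hypothetical toy model (region and gene names are made up, and the real analysis works from sorted cell populations and sequencing counts, not booleans): each cell carries one regulatory-sequence edit, later identified by sequencing, and reports a pair of co-expressed genes as two fluorescent channels.

```python
# Each record is one cell: which region was edited, and whether each
# fluorescent reporter ("light") is still on.
cells = [
    {"edit": "region_A", "fluor": {"geneX": True,  "geneY": True}},   # both lights on
    {"edit": "region_B", "fluor": {"geneX": False, "geneY": True}},   # geneX light out
    {"edit": "region_C", "fluor": {"geneX": True,  "geneY": False}},  # geneY light out
    {"edit": "region_A", "fluor": {"geneX": True,  "geneY": True}},
]

def candidate_regulators(cell_pool):
    """Map each edited region to the genes whose fluorescent signal went out."""
    hits = {}
    for cell in cell_pool:
        lost = {gene for gene, on in cell["fluor"].items() if not on}
        if lost:
            hits.setdefault(cell["edit"], set()).update(lost)
    return hits

hits = candidate_regulators(cells)
print(hits)  # region_B appears linked to geneX, region_C to geneY
```

This captures the inference Shen describes: the edit that co-occurs with a light going out is the candidate regulator of that gene.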

Can Dark Matter DNA Treat Disease?

By cataloging the functions of thousands of regulatory sequences, Shen and Ahituv hope to develop rules about how to predict and interpret other sequences' functions. This would not only help illuminate the rest of the dark matter genome, it could also reveal new treatment targets for complex genetic diseases.

"A lot of human diseases have been found to be associated with regulatory sequences," Ahituv said. "For example, in genome-wide association studies for common diseases, such as diabetes, cancer and autism, 90 percent of the disease-associated DNA variants are in the noncoding DNA. So it's not a gene that's changed, but what regulates it."

As the price for sequencing a person's genome has dropped significantly, there is talk about using precision medicine to cure many serious diseases. However, the hurdle of how to interpret mutations in noncoding DNA remains.

"If we can characterize the function and identify the gene targets of these regulatory sequences, we can start to reveal how their mutations contribute to diseases," Shen said. "Eventually, we may even be able to treat complex diseases by correcting regulatory mutations."

Debunking the ‘Junk DNA’ Theory


The Wire

A.J. Rachel, a molecular biologist, was determined to prove that the Y heterochromatin was not junk.

Though human DNA, or the genome as it is called, is a chain of around three billion molecules called base pairs, only small segments of them called genes are involved in making/coding proteins. There are 20,000 such genes and together they constitute less than 2% of the whole genome. In the early days of genomics, only genes were considered useful. The rest of the genome was termed junk DNA. This irked scientists for years. How could 98% of the genome be just sitting there doing nothing?

Our genome is organised in each cell of our body as 23 pairs of chromosomes. One of these 23 pairs comprises the ‘sex chromosomes’, typically either XX (in biological females) or XY (in biological males). As chromosome-Y is the one granting a mammalian individual biological maleness, most of its genes tend to have roles specific to males, such as biological sex determination and sperm development. But here’s where the mystery starts: chromosome-Y has very few genes – just about 50-60 of the total 20,000 human genes, the rest existing in all the other chromosomes.

In fact, one of the largest chunks of junk DNA in the human genome lies on the Y chromosome (chrY) and is called the Y heterochromatin. This is a block of DNA 40 million base pairs long (out of a total length of 59 million base pairs). For a long time, this region seemed to have no discernible function. It is composed mostly of sequence repeats and has no genes (DNA segments that can code for proteins). Because of this poverty of genes, chrY is known as one of the genome’s largest ‘gene deserts’.

Rachel asks Y

A.J. Rachel, a molecular biologist in CCMB, was determined to prove that the Y heterochromatin was not junk. “It’s just basic intuition. Nature is not wasteful. If something is present, it has a function,” she said to me matter-of-factly as we settled down in her office. It is with this unshakeable belief, and years of training as a biologist, that Rachel set her sights on solving this important piece of the junk DNA puzzle when she joined the centre 30 years ago.

For a long time, this large chunk on chrY was thought to be functionally inert. It was thought not to participate in processes like transcription where genes are copied into molecules called messenger RNAs (mRNA) and translation where this mRNA codes for proteins. But Rachel and her colleagues had a breakthrough. They used a DNA probe and identified two transcripts (mRNA) that were produced from this region. For the first time, it was proved that this region is not inert after all.

But this was not enough for Rachel to definitively junk the junk hypothesis. “We needed to find out the function of these transcripts.” Further testing showed that these transcripts would go on to physically mix with another transcript produced from a gene situated in chromosome-1.

This mixing called ‘splicing’ is an important modification that happens before protein synthesis. Splicing refers to the editing of the mRNA to produce a more mature mRNA that is ready to code for proteins. Usually it involves the splitting of the mRNA, the disposal of the unwanted portions, and the rejoining of the wanted portions.
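The cut-and-rejoin logic of splicing can be illustrated with a toy sketch. The sequences below are made up, and real splicing is performed by the spliceosome using signals far more complex than string matching; this only mirrors the "split, discard, rejoin" description above:

```python
def splice(pre_mrna, introns):
    """Remove each intron (unwanted portion) once, rejoining the exons."""
    mature = pre_mrna
    for intron in introns:
        mature = mature.replace(intron, "", 1)
    return mature

# Exon 1 + a (fictional) intron + Exon 2
pre = "AUGGCC" + "GUAAGU...INTRON...AG" + "UUUCGA"
mature = splice(pre, ["GUAAGU...INTRON...AG"])
print(mature)  # "AUGGCCUUUCGA" - the two exons joined
```

In trans-splicing, discussed next, the joined pieces come from transcripts of two different chromosomes rather than from one pre-mRNA.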

When a part of mRNA from one chromosome splices into the mRNA from another, as was happening between chrY and chromosome-1, it is called ‘trans-splicing’. This is exceedingly rare. Rachel’s team not only discovered this but also found out that this trans-splicing by chrY was important to regulate how much protein the chr1 mRNA should synthesize and when the protein synthesis should occur.

What does this all mean?

The most exciting part of this discovery was that this trans-splicing event happens only in the cells of the testes. From Rachel’s experiments it was clear that chrY is regulating a gene that is transcribed specifically in the testes. What does this tell us, she wondered. Firstly, it shows that this DNA in the chrY that was once considered junk is regulating protein synthesis in the testes even though this region does not code for any protein itself. “No trans-splicing between coding mRNA and noncoding mRNA was known till then – and none with chrY. So this means chrY may not just be sitting there determining sex and doing nothing else.”

Secondly this region in chrY is species-specific, Rachel pointed out. The sequence they were studying was present only in humans, not, say, in mice. “You see, male fertility is being regulated by species-specific repeats (this region) present in the chrY. So if by chance some repeats from another species come, they cannot regulate genes involved in human male fertility.” This fits in with our knowledge that cross-species fertilisation does not work. “Man cannot cross with mouse, clearly.”

Fundamental discoveries like these are adding new dimensions to what we know about the human genome. Even though less than 2% of the genome codes for proteins, today over 75% of the genome is known to be transcribed into RNA. These RNAs may not make it to the protein stage, but they all have some function or the other. “100% of the genome will have a function,” affirms Rachel, “we just have to discover it.”

This study from Rachel’s lab was published in the journal Genome Research in 2007, and was considered a landmark of sorts. In fact, it even came up for discussion in both houses of Parliament that year, revealed Rachel. “It was the time we thought India lost out in the Human Genome Project. This study came up to show that Indian scientists are still discovering novel transcripts and functions, so how can you say that India did not contribute? That was the discussion.”

To bolster their findings, Rachel and her colleagues experimented with mouse chrY and here too they found non-coding RNAs involved in regulation of autosomal (other than sex chromosomes) genes in mouse testes. “This proved that it is not an isolated event we found in the human genome but seems to be a pervasive phenomenon.”

The ‘basic intuition’ that basic biologists so rely on to design their studies may or may not be something they are born with, but what is certain is that years of training go into honing that skill. For Rachel, signs pointing her to a life of science started popping up early – in the form of back-to-back scholarships.

Early life and scholarships

“I grew up in Kottarakara, a town in Kerala. I did my basic education in a Malayalam medium school there,” she said. “I used to love reading scientists’ biographies. Marie Curie was a favourite.” Curie’s achievements despite coming from a modest background resonate with Rachel. “I believe it’s not school that matters but intelligence. Good students will come up anywhere.”

Though her parents were school teachers, she recalls her father being very academically oriented. “He made me write these exams. Through one such exam in class six, I got something called the residential school scholarship (this was given to 200 students all over India).” With this, Rachel got admission to the Rishi Valley school, still considered one of the best residential schools in India.

After finishing her schooling in Rishi Valley, Rachel continued her impressive run by securing herself the science talent scholarship (now evolved into NTSE). This scholarship would facilitate her education in the basic sciences up to doctoral level. “My journey in science began with this. After doing my BSc and MSc in zoology at Women’s College in Trivandrum, I left for Banaras Hindu University to do my PhD.” At this stage, Rachel had also won the CSIR fellowship to aid her research.

Rachel joined CCMB in 1976 and 30 years on, “I’m still enjoying the basic sciences!” Almost 60, Rachel is due to retire in five months (“Do you still want to interview me?” she’d asked, earlier).

Grateful to India, yet worried

Though she completed a couple of postdocs in US universities, Rachel was not significantly tempted by the idea of settling down there. And there were many opportunities too. “I was even invited by a professor during a conference at BHU to come do any work I liked in his lab. But I thought why can’t we do good work in India, why should we go out always? I also had that feeling of – India nurtured me, gave me scholarships – so why can’t I do good work here.”

She was somewhat vindicated later on, following the 2007 Y-chromosome discovery. “When I published the paper, some visiting scientists asked me: ‘who is your foreign collaborator?’ I informed them there is no foreign collaborator. They replied ‘no foreign collaborator? All this work done in India?!’” she said, laughing at the perceived irony.

Which is not to say that doing science is a smooth ride here. “There are definitely ‘n’ number of hurdles in India,” admitted Rachel. “The scientific atmosphere in India needs to change. They are trying to introduce too much of bureaucracy. They don’t leave scientists alone.” Interestingly, she is not talking about the government but scientists themselves, the ones at higher levels. “Scientists themselves do politics. And honestly, in the last 30 years I have only seen it getting worse.”

“You need a free mind. Leave the scientists alone. The government gives us projects so that we can work regardless of red tapism, but the execution is really not good. The minute people come into power, they try to prevent others from doing science. I resent that.” Rachel suspects that this culture could be what is preventing India perhaps from rising as a world power in science.

Rachel never felt discriminated against as a woman in her career, though she says that it could be because she had more time and energy, as she did not get married and start a family. She joked, “I used to like standing up and telling the men what the Y chromosome does! Ha ha. I’m a lady and I’m doing this. Just fun times…”

Not much of a retirement

Now that she is approaching her retirement, Rachel has some plans, but none of them involve sitting back and taking it easy. “I don’t care for power, position, I just want to be left alone to do science. [This attitude] has helped me so far but I still have work to do – I have two DBT projects which will go on till 2019. Once the director gives me permission, I will continue till then – here or anywhere else.”

And after that, Rachel wants to continue contributing, but in a different way: “As long as I can, I want to be energetic and active. The rest of the time, I want to devote to the have-nots – children especially.” She already volunteers with organisations like Don Bosco to interact with street children, and she’d like to give more time to this. “After retirement I will go and work with one of the mission fields of the churches. Somewhere where the need is. Hyderabad has been home, but I don’t know what the future holds. It doesn’t – shouldn’t – matter where I am as long as I can help those who need it.”

[Old School Genomics faced this issue in 2002 and 2007 (when this manuscript was written); it hit the Internet media by Google Tech Talk YouTube in 2008 - and went on to become a US patent in force for about the forthcoming decade. Most interestingly, the issue resurfaced in India by the end of January, 2017. Within a few days, when it will be due, Rachel's point that the culture of science politics often brutally suppresses breakthroughs will be elaborated here beyond India - for the USA, Europe, Canada and Australia as well, with the specific exception and singularity of China! The issue has global strategic importance - thus some key countries might wish to do something about it at this final junction. Andras_at_Pellionisz_dot_com]


Two Infants Cured of Terminal Cancer by Breakthrough Gene-Editing Therapy (UK)

Big Think

January 29, 2017 by PAUL RATNER

A group of British doctors successfully eliminated cancer in two infants with leukemia by using genetically modified immune cells from a donor. The accomplishment opens a new age of cancer therapy treatment.

Cancer cured by genome editing

This medical first was carried out by doctors from London’s Great Ormond Street Hospital on two children, aged 11 and 16 months, who were not responding to other forms of therapy. Scientists manipulated the donor T cells to be able to kill the leukemia cells, with chemotherapy following the new experimental approach. Now one of the children has been cancer-free for a year and the other for 18 months.

The difference in this treatment was that the engineered T-cells (known as CAR-T) came from another person, while usual T-cell therapy involves removing immune cells from the patient, modifying them and giving them back to the patient. What’s remarkable about this approach is that the cells can be collected from donors, treated and stored before they are needed, making it possible for the patient to receive them immediately upon diagnosis. They would not have to wait for their own T cells to be modified. Additionally, blood from one donor could supply hundreds of treatments, reducing costs and increasing efficiency.

“We estimate the cost to manufacture a dose would be about $4,000,” said Julianne Smith, vice president of CAR-T development for Cellectis, supplier of the universal cells, in an interview with Technology Review. “That’s compared to a cost of around $50,000 to alter a patient’s cells and return them.”

The novel treatment is not yet available to the general public, but CAR-T cell therapy is currently in phase II clinical trials in the U.S. There is also the question of whether the infants are actually cured, because doctors usually wait a few years before declaring someone completely cancer-free.

Some critics have pointed out that because chemotherapy was also used as part of the treatment, it's not entirely clear if the modified T-cells were the main cause of the improvements. But the doctors point to the long-lasting effects of their treatment and are enthusiastic about its potential in future treatments.

[There are "naysayers" with every breakthrough. However, just stop for a moment and ask yourself: "would I rather see kids perish of a cancer that does not respond to chemotherapy - or try, even for the first time, an experimental breakthrough to save a waning life?" At the same time, along with renewed optimism that cancer and other genomic ailments might be curable in our lifetime, the inevitable rise of "genome editing" is certainly scary IF WE DO NOT KNOW HOW THE CODE WORKS THAT WE ARE EDITING. Bill Gates, the richest person (likely the first dollar-trillioner, ever) spectacularly illuminates both aspects. On one hand, as a software super-expert he knows that "ignorant editing" could be "very, very dangerous" (see below his Davos speech). At the same time, upon the news of the start-up with the very telling name "Editas", Bill Gates with fellow investors poured $120 M into developing the technology of genome editing. We have seen a similar "sense of urgency" before: when for strategic defense encrypted messages had to be mathematically understood, Alan Turing, a mathematician, was commissioned by Great Britain to interpret the code of German messages. The appropriate agency is unlikely to be an old-fashioned (or even new-fashioned) NIH - much more likely a DARPA, or Lawrence Livermore, the single Homeland Defense Laboratory. Andras_at_Pellionisz_dot_com]

Could Cancer Drugs Treat Autism?

Five years ago, on Charlie Ryan’s second birthday, a big lump mysteriously formed on the side of his abdomen. At the emergency room his parents took him to, doctors suggested the lump was a hernia caused by some unknown trauma, and referred the family to a surgeon. The surgeon told them it was a benign tumor, and sent them home.

Charlie already had a host of medical issues. He’d been born with an abnormally large head and other features of autism, including being nonverbal. Now this.

Like many a baffled and worried parent, his mother, Autumn Ryan, turned to Google, typing in Charlie’s ailments and coming up with a possible cause: a mutation in PTEN, a gene that reins in cell growth. Further searching led her to websites dedicated to families whose children have PTEN mutations. It all looked familiar—and worrisome. “I read stories of little boys who haven’t lived, descriptions of these children with a multitude of bumps all over their bodies,” Ryan recalls. “I was freaked out.”

Two years later, after visits to multiple doctors and the nine months it took for his genetic test to be analyzed, her hunch proved right. Charlie has a mutation in PTEN. Ryan immediately faxed the results to Charis Eng, a PTEN expert at the Cleveland Clinic in Ohio, whose name she had come across in her research. A few months later, the family made their first trip from their home in Tulsa, Oklahoma, to see Eng and her colleague, autism expert Thomas Frazier. Together, Eng and Frazier have treated more than two dozen children like Charlie—who all have PTEN mutations, autism, and large heads.

In Eng, Ryan finally found someone who understood Charlie’s condition. “It’s like going to the person who has all the knowledge in the world, and you can ask her any question,” Ryan says of Eng.

Eng was first and foremost a cancer geneticist, and stumbled upon PTEN-linked autism through her work in that field. In 1997, she discovered the genetic root of Cowden syndrome, a rare condition characterized by tumor-like growths and a high lifetime risk of many cancers. Following people with the syndrome, as well as their unaffected family members, she noticed that a few relatives of people with Cowden syndrome have an autism diagnosis. She thought little of it; it was probably a coincidence. But then Eng noticed a few more children with autism in the families. And then a few more. “We started going, ‘Huh, how come there’s all this autism in the family members?’” she recalls.

Curious to see if this trend was more than a fluke, Eng teamed up in 2004 with Merlin Butler, a clinical geneticist at the University of Kansas Medical Center in Kansas City. Eng and Butler screened 18 children with an autism diagnosis and macrocephaly, because enlarged head size is a hallmark of both Cowden syndrome and autism. They found that three of the 18 children have mutations in PTEN. Mutations in PTEN have been linked to dozens of cancers, but the gene had never before been shown to have any effect on social skills or behavior. “Wow,” Eng recalls thinking. “PTEN plays a role in a neurodevelopmental disorder, too.”

The finding was so unexpected that it was a tough sell. The Lancet and the New England Journal of Medicine both rejected the paper. “They didn’t believe us,” says Eng, now head of the Cleveland Clinic’s Genomic Medicine Institute. “Their reaction was: ‘It’s a cancer gene. How can this be?’”

To Eng, it was perfectly plausible that a mutation in the gene could lead to autism as well as to the many cancers of Cowden syndrome. Cancer arises when PTEN mutations release their brake on cell growth and proliferation, and cells grow out of control. When PTEN mutations cause an overgrowth of nerve fibers in the brain, Eng reasoned, they might instead lead to autism.

The Journal of Medical Genetics ultimately published the study in 2005. Other groups soon confirmed the findings, and today the subtype of autism with PTEN mutations and macrocephaly—called PTEN-ASD—is estimated to represent up to 2 percent of all autism cases.

[There is a slew of "genome regulation diseases", most notably cancer (which, when not halted by treatment, is like a "melt-down" of the genome). Recently Craig Venter blamed his prostate cancer on the "extremely few repeats" at a suspected locus of his X chromosome. Very large repeats (often labeled by the mathematically undefined "Copy Number Variation"), however, occur not only with cancers, but also in autism, schizophrenia and scores of auto-immune diseases, summed up by Pellionisz as early as 2012 in the Proceedings of a Lecture-Tour awarded by India (see full free .pdf). As for cancer, in a peer-reviewed paper a year ago by eminent British cancer researchers (see below), the mathematical analysis showed features of fractality in tumor evolution (see Fig. 5. below), demonstrating a high-precision fit to a power-law. (In his 2009 presentation at Cold Spring Harbor, Pellionisz showed a curve-fitting of the repeats of even the smallest genome of free-living organisms to the Zipf-Mandelbrot-Fractal-Parabolic-Distribution-Curve, see here.) In our time, full genome sequencing and curve-fitting of the repeats to the Zipf-Mandelbrot-Fractal-Parabolic-Distribution-Curve is eminently doable. "Experts" would not have to argue if e.g. autism has genomic roots - and if it does, autism is much more hopeful than many presently think. - Andras_at_Pellionisz_dot_com]
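Fitting a repeat rank-frequency table to a Zipf-Mandelbrot curve, f(r) = C / (r + q)^s, is indeed routine numerics today. A minimal sketch on synthetic data (the repeat counts, parameter values, and the simple grid-search are all illustrative assumptions, not anyone's published pipeline):

```python
import numpy as np

# Synthetic rank-frequency data drawn from a Zipf-Mandelbrot law with
# multiplicative noise; rank 1 = the most frequent repeat.
rng = np.random.default_rng(1)
true_C, true_q, true_s = 1000.0, 2.0, 1.2
rank = np.arange(1, 101)
freq = true_C / (rank + true_q) ** true_s * np.exp(rng.normal(0, 0.02, rank.size))

def fit_zipf_mandelbrot(rank, freq, q_grid=np.linspace(0.0, 5.0, 101)):
    """Grid-search q; for each q fit log f = log C - s*log(r + q) by least squares."""
    best = None
    for q in q_grid:
        slope, logC = np.polyfit(np.log(rank + q), np.log(freq), 1)
        resid = np.log(freq) - (slope * np.log(rank + q) + logC)
        sse = float(resid @ resid)
        if best is None or sse < best[0]:
            best = (sse, q, -slope, np.exp(logC))
    return best[1], best[2], best[3]  # q, s, C

q, s, C = fit_zipf_mandelbrot(rank, freq)
print(f"q = {q:.2f}, s = {s:.2f}, C = {C:.0f}")
```

On a log-log plot such data trace the gently curved ("parabolic") line that distinguishes Zipf-Mandelbrot from a pure power law.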

Identification of neutral tumor evolution across cancer types (Tumor Evolution Is Fractal!)

Despite extraordinary efforts to profile cancer genomes, interpreting the vast amount of genomic data in the light of cancer evolution remains challenging. Here we demonstrate that neutral tumor evolution results in a power-law distribution of the mutant allele frequencies reported by next-generation sequencing of tumor bulk samples. We find that the neutral power-law fits with high precision 323 of 904 cancers from 14 types, selected from different cohorts. In malignancies identified as neutral, all clonal selection occurred prior to the onset of cancer growth and not in later-arising subclones, resulting in numerous passenger mutations that are responsible for intra-tumor heterogeneity. Reanalyzing cancer sequencing data within the neutral framework allowed the measurement, in each patient, of both the in vivo mutation rate and the order and timing of mutations. This result provides a new way to interpret existing cancer genomic data and to discriminate between functional and non-functional intra-tumor heterogeneity.
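The neutrality test in the abstract rests on a simple prediction: under neutral growth, the cumulative number of subclonal mutations with allele frequency at least f is linear in 1/f, M(f) = (mu/beta)(1/f - 1/f_max). A minimal sketch on synthetic data (parameter values and noise level are illustrative assumptions, not the authors' pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate the neutral prediction M(f) = (mu/beta) * (1/f - 1/f_max)
# with measurement noise, over a subclonal frequency window.
mu_over_beta = 10.0
f_max = 0.25                       # upper bound of the subclonal frequency range
f = np.linspace(0.05, f_max, 50)   # allele frequencies from bulk sequencing
M = mu_over_beta * (1.0 / f - 1.0 / f_max) + rng.normal(0, 0.5, f.size)

# Least-squares fit of M against 1/f; a high R^2 is consistent with neutrality,
# and the slope estimates the per-division mutation rate mu/beta.
x = 1.0 / f
slope, intercept = np.polyfit(x, M, 1)
residuals = M - (slope * x + intercept)
r_squared = 1.0 - residuals.var() / M.var()

print(f"estimated mu/beta = {slope:.2f}, R^2 = {r_squared:.3f}")
```

A tumor whose allele-frequency data fit this line poorly would, by this criterion, show evidence of subclonal selection rather than neutral evolution.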

Fig. 5. of full free paper. "Warning" - it has mathematics in it!

The Trillioner is Getting IT

Bill Gates is likely to become the first Dollar Trillioner, ever. Thus, he could easily afford a stance of "blessed ignorance". He does not - look at how eager and naturally curious he is


At the World Economic Forum (in Davos, Switzerland), now for the second year, he was the chief spokesman of Genome Informatics (outshining even someone with a huge government budget; I will sum that up separately). In 2016, Bill Gates spoke about genome editing with great enthusiasm (and $100 M of his own money in Editas). Indeed, Microsoft Word is unbeatable for its "spell checkers" to edit what you write in most any language. Bill Gates spent vast amounts of funds on the actual understanding of myriads of human languages to be able to do that. Thus, he is the first to know that "editing the human genome" will reach its true potential only when science will not just know, but also understand, the mathematical code.

Until then, Bill Gates is again the first to know that "editing something you do not understand" can be "very, very dangerous".

Thus, this year at the World Economic Forum (in Davos, Switzerland), "Bill Gates Warns That Damage Caused By Bioterrorism Could Be 'Very, Very Huge'":

CRISPR and other powerful new biotechnologies have made science that was once constrained to fancy high-end labs increasingly accessible. This is, of course, mostly a good thing. But it also means that those with nefarious intentions have easier access to the same technologies, too.

Speaking at the World Economic Forum in Davos, Bill Gates warned that the global community has not taken the threat of bioterrorism quite seriously enough. He urged governments and private organizations to make “substantial investments” to prepare for potential bioterrorism attacks.

“What preparedness will look like for intentionally caused things, that needs to be discussed,” he said during a panel last week. “It’s very hard to rate the probability of bioterrorism, but the potential damage is very, very huge."

Gates’ warning came on the heels of an announcement that the Bill and Melinda Gates Foundation will join governments from Germany, Japan and Norway in creating a Coalition for Epidemic Preparedness Innovations to develop new vaccines and strategies for responding to disease outbreaks. The Gates Foundation already spends sizable amounts of cash on research aimed at eliminating diseases, like malaria.

Gates is not the first to raise concerns about bioterrorism threats recently. In November, a new report by the US President’s Council of Advisors on Science and Technology advised President Obama to revamp the country’s biodefense strategies in response to advanced technologies like CRISPR.

Rapid advancement in biotechnology, the council wrote, “holds serious potential for destructive use by both states and technically-competent individuals with access to modern laboratory facilities.”

“Molecular biologists, microbiologists, and virologists can look ahead and anticipate that the nature of biological threats will change substantially over the coming years—in ways both predictable and unpredictable,” the report read. “The US Government’s past ways of thinking and organizing to meet biological threats need to change to reflect and address this rapidly-developing landscape.”

Last year, a top national security official called gene-editing a weapon of mass destruction alongside nuclear detonation, chemical weapons and cruise missiles.

Tools like CRISPR could potentially be used to destroy a nation’s food supply, to interfere with a person’s biology, or to boost the virulence of a virus so that it might better spread. (Such scenarios are, in fact, the premise of a new J.Lo-produced sci-fi show called C.R.I.S.P.R.)

For now, thankfully, these particular terrors are all just hypotheticals. But if we want to keep it that way, officials would do well to take Gates’ warning seriously.

(Bill Gates gets IT, and it is virtually certain that the major powers of the world will take the warning very, very seriously. On the other hand, Bill Gates, though he listens very attentively to Genomics, does not have the domain expertise to identify the fractal mathematics of recursive genome function, and how its regulation is derailed. Comment by Andras_at_Pellionisz_dot_com)

Illumina Taps Garret Hampton, One of the World’s Leading Clinical Genomics Experts, to Head its Clinical Genomics Unit

January 03, 2017 09:00 AM Eastern Standard Time

Garret Hampton - from Genentech (Roche) to Illumina

SAN DIEGO--(BUSINESS WIRE)--Illumina, Inc. (NASDAQ: ILMN), continuing its strategy to bring the power of genomics into clinical applications, today announced it has named one of the world’s top clinical genomics experts to head its clinical genomics unit. Garret Hampton, PhD, will join as Executive Vice President of Clinical Genomics for the company, starting January 9, 2017.

“This is a strategically important hire for Illumina,” said Francis deSouza, President and Chief Executive Officer of Illumina. “Garret has been on the front lines of oncology and brings deep expertise in clinical genomics to the leadership team, key to his new role leading our clinical genomics organization.”

Garret will be responsible for leading the clinical genomics group, including the reproductive and genetic health and oncology businesses, regulatory, clinical and medical affairs, CLIA labs, and the Chief Medical Officer’s organization. He joins Illumina from Genentech, Inc., where he was Global Head of Oncology Biomarker Development and Companion Diagnostics, co-led the Roche Personalized Medicine R&D Initiative and chaired the Roche/Foundation Medicine Joint R&D Committee. He previously held scientific and management roles at Celgene Corporation, Genomics Institute of the Novartis Research Foundation, and University of California at San Diego. Garret holds a BA in natural sciences and genetics and MA in natural sciences from Trinity College in Dublin, Ireland, and a PhD in cancer genetics from Imperial Cancer Research Fund and the University College London.

(Those of us who participated in the rather similar Internet boom may detect a familiar phenomenon occurring at this time in New School Genomics. Disruptive technologies often hinge on key people – thus formative times are marked by a "musical chairs" of competing companies snatching lead "movers and shakers" from one another. Note that Garret played a key role at Genentech – a fully owned subsidiary of Roche. That Big Pharma has twice attempted a hostile takeover of Illumina. Perhaps as a pre-emptive move against a third attempt, Illumina, at the time of announcing a new generation of sequencers, also allied itself with IBM and Philips, and lured a key person from Roche-Genentech into its fold. Such "musical chairs" hires were commonplace when the Internet took off – since money itself is nothing without people who carry "competitive advantage" in their skulls. In our time, Grail (in Silicon Valley) not only essentially gutted Illumina, but also lured away Franz Joseph Och (originally the mastermind of Google Translate, first snatched by Human Longevity) – now at Grail. Very clever! Roche as a Big Pharma is now in direct competition with Big IT (IBM and Philips); thus, while Roche may execute a third hostile takeover attempt on Illumina, grand-master Jay Flatley, now atop Grail, will not be acquired. This is one of the reasons he could pitch for a billion dollars – a round that will very likely be over-subscribed. Comment by Andras_at_Pellionisz_dot_com)

Illumina Promises To Sequence Human Genome For $100 -- But Not Quite Yet

January 9th

Matthew Herper , FORBES STAFF
I cover science and medicine, and believe this is biology's century.

Illumina, the largest maker of DNA sequencers, is launching a new DNA sequencer with a new architecture that it says could push the cost of decoding a human genome from $1,000 to $100 – although that decrease will not come for years.

“This is good news, affirming that the field is still so healthy that price-plummeting is still considered good for business,” says George Church, the Robert Winthrop Professor of Genetics at Harvard Medical School.

The new DNA sequencers, called the NovaSeq 5000 and the NovaSeq 6000, could be about 70% faster than existing machines, by Church’s math – not a big increase in this field. But Illumina promises rapid increases as new parts and software upgrade the machines, meaning that by the end of the year the machines will be six times faster than their predecessors. Illumina has annual sales of $2.4 billion.

In 2006, Illumina’s first DNA sequencer could sequence a human genome at a cost of $300,000. For Illumina’s highest-end products, the cost is currently $1,000 per genome, including reagents and the amortization of its machines. The machines themselves are still costly: The NovaSeq 5000 and 6000 cost $850,000 and $985,000, respectively.

But those drops in cost have generated a huge amount of medical knowledge. Five years ago, only a small number of human genomes had been sequenced. Two years ago, when Illumina made its last big product announcement, 65,000 human genomes had been sequenced. Now more than 500,000 have been sequenced, Illumina says.

"We feel like in the high end of the market this continues to put distance between us and any possible player in the market on any dimension: Quality, throughput or cost per genome,” says Francis deSouza, Illumina’s chief executive. “This is a phenomenal machine."

However, deSouza said the $100 number was more a roadmap–something that would probably happen in more than three years and fewer than ten.

Prominent genomics researchers were happier about more prosaic features. First, the new machines are sold individually – Illumina’s current top-of-the-line products, the X5 and X10, are sold in bundles and can’t be bought one at a time. Second, they don’t come with restrictions on what kind of research they can be used for. Illumina does not allow the X5 and X10 to be used for exomes: looking at only the parts of the genome that contain known genes. Those restrictions will still exist for the older machines, Illumina says.

“It's good they are moving away from the model of requiring users to buy a certain number of instruments as a bundle, and not allowing you to do some types of assays on an instrument you bought and paid for (dearly), which was just silly,” wrote Elaine Mardis, co-director of the Institute for Genomic Medicine at Nationwide Children’s Hospital in Ohio, in an email.

DeSouza says that the new machines are also easier to use, with the number of steps in its workflow cut from 38 to 8. The machine has been made more idiot-proof, with RFID chips checking to make sure components are loaded properly. Creating the new architecture, he says, required 17 major innovations across every part of the sequencer, although it uses the same basic chemistry as previous models.

Another question is read length. DNA sequencers actually only read small bits of DNA, which are assembled like a puzzle. The Illumina machine is still stuck at 150 base pairs (pairs of DNA letters). But some new technologies are allowing much longer readlengths–at a much higher cost per DNA base pair.
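(Illustration: the "puzzle" assembly of short reads mentioned above can be sketched as a minimal suffix-prefix overlap merge. This is a toy example only; the reads are invented, and real assemblers use far more sophisticated graph-based methods. AJP)

```python
def merge_reads(a, b, min_overlap=3):
    """Merge read `b` onto read `a` using the longest suffix of `a`
    that matches a prefix of `b` (one toy greedy-assembly step).

    Returns the merged string, or None if no overlap of at least
    `min_overlap` bases exists.
    """
    # Try the longest possible overlap first, then shorter ones.
    for k in range(min(len(a), len(b)), min_overlap - 1, -1):
        if a[-k:] == b[:k]:
            return a + b[k:]
    return None  # no sufficient overlap found


if __name__ == "__main__":
    # Two short invented "reads" sharing the 4-base overlap "CAGG":
    print(merge_reads("GATTACAGG", "CAGGTTAC"))  # GATTACAGGTTAC
```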

One is Oxford Nanopore, which already sells a portable DNA sequencer and is expected to launch one for laboratories this year. The question is whether Oxford can even make a dent in the Illumina marketing machine.

“I think there’s going to be blood in the water at some point in 2017 or 2018,” says Ewan Birney, joint director of the European Bioinformatics Institute. He notes that Nanopore, which he consults for, has some advantages: an experiment on its machine can be run in a matter of minutes or hours (one run on the NovaSeq takes 40 hours), and the Nanopore machine will cost only $75,000, which could help labs start to try it out.

Many other DNA sequencing upstarts – Ion Torrent, Pacific BioSciences, Complete Genomics – have tried and failed. None has even made a dent. At present, the company holds the vast majority of the market for DNA sequencing worldwide. But in order to make its investors happy, it’s also going to have to make that market grow beyond research. That's as big a challenge as any competitor.

(Matthew Herper, FORBES staff writer, is one of the most knowledgeable journalists covering New School Genomics. In my opinion, while the above article is an excellent and very focused analysis of sequencing ("get info"), it could perhaps benefit from a wider perspective that also includes interpretation ("use info"). For the last decade, since 2008, New School Genomics has been in a crisis, with much blood already in the water. Complete Genomics almost went bankrupt (surviving only as a USD 118 M fully owned tiny subsidiary of the Chinese BGI). This happened for lack of demand for full DNA sequences: the only mathematical principle of recursive genome function was academically acknowledged, but industry failed to swiftly put it into use (because the IP was not secured for over a decade by the sluggish USPTO). The double disruption is not trivial to accommodate, since it reverses both axioms: the flabbergastingly narrow-minded misnomer of "Junk DNA", and the stubbornly upheld "Central Dogma", which has died of a thousand wounds from contradicting facts – but for a combination of reasons was not allowed to be buried (not unlike Lenin, symbol of the Soviets).

What is the crisis of New School Genomics all about? The fact that "sequencing" became a commodity by about 2008, yet the totally disruptive "sequence interpretation" was not faced fair and square by investors until very recently, with the USD 1 Bn investment pitch for Grail. (The announcement appeared just one week ago this year, on January 5th.)

The promise of full genome sequencing for about the time and cost of a dinner is not at all "news of this year". It was first voiced a full decade ago by the founder of Pacific Biosciences. Thank God it did not happen (and, as the article above is diligent to make totally clear, it will not happen for several more years). This is good, since the "glut" of full human DNA sequences that the industry could not properly interpret indeed bankrupted a slew of companies over a decade – and will derail massive investments now and in the future until the algorithmic basis of genome regulation is sound. Note that the highest-quality full human DNA sequence (that of Craig Venter) has been sitting on the shelf for 15 years, and it was only realized retroactively, weeks ago (see news item below), that his prostate cancer was likely caused by "far too few repeats" at a locus of his X chromosome. The utility of measuring aberrant self-similar repeats is among the claims of the patent in force.
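(Illustration: measuring self-similar repeats can be sketched as counting the longest tandem run of a motif, such as the CAG repeats of the androgen receptor gene mentioned below. This is a toy sketch only; the sequence and method are invented for illustration and are not those of the patent claims. AJP)

```python
def longest_tandem_repeat(seq, motif):
    """Return the longest run of consecutive `motif` copies in `seq`."""
    best = run = 0
    i = 0
    step = len(motif)
    while i + step <= len(seq):
        if seq[i:i + step] == motif:
            run += 1
            best = max(best, run)
            i += step
        else:
            # restart the scan one base after the current run began
            i = i - run * step + 1
            run = 0
    return best


if __name__ == "__main__":
    # Toy locus carrying 6 CAG repeats (an invented sequence):
    locus = "TTACC" + "CAG" * 6 + "GATTA"
    print(longest_tandem_repeat(locus, "CAG"))  # 6
```

An aberrantly low or high count, relative to the population norm for the locus, is what would be flagged for further study.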

The challenge is much larger than creating a start-up. Take the example of Illumina (way beyond a start-up, but recently gutted by Grail) turning for help to Big IT: IBM and Philips. (Or, for a real start-up in the shadow of Illumina, Edico turning immediately to Big IT: Dell.) Big IT is absolutely needed for the Big Data that the inevitably large databases of human genomes call for. However, there are two further requirements. One recalls Steve Jobs’ announcement of the iPhone exactly a decade ago.

Ask for full preso at holgentech_at_gmail_dot_com

Steve Jobs brilliantly pointed out, exactly a decade ago, that his invention was mapped out in the two dimensions spanned by the "smartness" and "user friendliness" axes. Cell phones prior to his invention were not very smart and were difficult to use. Second-generation devices (in blue) were smart, but were even more difficult to use. It took the grand master of "user friendliness of computing" (Apple, under the leadership of Steve Jobs) to invent and develop the iPhone, which combined extreme smartness with extreme user friendliness. Instead of communication, take the issue of genomics. The human genome, a plain text of 6 billion A,C,T,G-s, is by far the most user-UNFRIENDLY, yet most brilliant, super-smart "encryption of life". Amazingly, as depicted by the top right insert (top picture of this column, where a nano-device for sequencing is plugged into a smartphone), the future of the super-smart genome information that is brutally user-unfriendly is clear. It will be made user-friendly by Apple, Google or Samsung, perhaps by Sony (or their equivalents in China). However, as Steve Jobs added to his presentation with a smile, "yes, we protected the IP, to safeguard the sizable investment, by patent(s)". Comment by Andras_at_Pellionisz_dot_com.)

Venter - and The Principle of Recursive Genome Function

Craig Venter Used Own Posh Health Clinic To Diagnose His Cancer

Alex Lash

December 8th, 2016

Xconomy, San Francisco

Speaking at a conference in San Francisco Wednesday, geneticist Craig Venter revealed that he had just had surgery for prostate cancer three weeks ago. A health workup at his own high-end clinic, Health Nucleus, pinpointed the cancer.

Famous for leading one of two teams in the 1990s that raced to unlock the human genome, Venter said he had surgery soon after the diagnosis because the cancer was classified as “high grade,” meaning likely to grow and spread quickly, according to the National Cancer Institute.

“It was a surprise for me,” he said. There were no indications, such as elevated levels of the prostate-specific antigen (PSA) protein—a potential warning sign for prostate cancer—that Venter had the malignancy.

Private clients who presume themselves healthy can get the same workup for $25,000 at the San Diego clinic, which launched last year. Part of the regimen includes a whole genome analysis. Venter’s DNA had an unusual repeating pattern in the code of the androgen receptor gene, a key part of the hormonal system that regulates hair loss, muscle and bone growth, and other biological fundamentals. A high number of the repeats signals lower risk for prostate cancer, Venter said. But his X chromosome, where the gene resides, had an extremely low number of the repeats.

But as much as he used the Health Nucleus clinic to catch his cancer, he seemed eager Wednesday to use his cancer to tout his clinic, volunteering his health information and cracking jokes with his interviewer from The Economist magazine. “I’ve always been accused of having balls of steel,” Venter said, poking fun at his own in-your-face reputation and penchant for self-promotion. “It’s because I only have six repeats.”

Venter’s own genome was also the first to be published in full, all 6 billion letters, in 2007. He has branched into synthetic biology, founding a company to produce biofuels and other products, and he led a team that reported this year building a bacterium that used the minimum number of genes possible.

More than four hundred people have used the Health Nucleus clinic, according to spokeswoman Heather Kowalski. The clinic is part of Venter’s firm Human Longevity (HLI), founded in 2014 and amply funded since then. The startup’s eventual goal is to sequence 100,000 human genomes a year and supplement them with other layers of health data for a massive database that HLI can charge researchers to access—and that HLI itself could eventually use to discover drugs or create other medical products. “We’re trying to match traits from people in the clinic to what we find in the genome,” Venter said.

Venter said Wednesday that 40 percent of the Health Nucleus clients, who buy the service believing they are healthy, are found to have “serious disease,” and 20 percent have life-threatening disease. He did not elaborate, but spokeswoman Kowalski said the diseases ranged from cancer to metabolic conditions to aneurysms and more. The clients get a whole genome analysis, brain and body scans, sequencing of their microbiome and metabolites—chemicals produced by the body’s processes that are floating around in the blood—and various other tests.

Venter’s comments add to the debate about testing for cancer in otherwise healthy people. For decades, many clinicians, medical societies, and advocacy groups like Susan G. Komen have urged healthy people to get regular testing for various cancers. Screening saves lives, goes the mantra. But many experts have come to question the widespread use of mammograms to detect breast cancer and the PSA test for prostate cancer. The risk is over-diagnosis: unnecessary biopsies, surgeries, and drug regimens for people wrongly diagnosed or whose cancer might never have progressed.

But proponents of new forms of screening, particularly blood tests, want to create reliable diagnoses of DNA and other proteins to catch cancer as early as possible, arguing that cancer before it spreads is much more treatable.

The race is on to collect reams of personal health data and turn them into businesses. Joining HLI are Seattle’s Arivale, the San Francisco Bay Area’s 23andMe and Invitae (NASDAQ: NVTA), and many others. Venter said HLI has accumulated about 20 petabytes of data, putting it “in Amazon [data storage service]’s one-percent club, right up there with video streamers and porn sites.” The firm spends $1 million a month on storage and computing, he said.

The “concierge” Health Nucleus service, which insurance won’t cover, also raises the question of whether such services and products to help healthy people stay healthy will be limited to rich people. An audience member at the conference Wednesday asked Venter and other speakers if insurance would ever pay for “catching things at an early stage.”

Not until there’s a large body of evidence that tests, scans, or medicines actually help, Venter answered. Brian Kennedy, the president and CEO of the Buck Institute for Research on Aging in Novato, CA—which recently spun out a company that’s pursuing drugs to treat “diseases of aging”—noted one precedent. “High cholesterol isn’t a disease,” he said—meaning it can show up in otherwise healthy people. But insurance companies pay for cholesterol-lowering statins, Kennedy said, because of decades of studies to show that lower cholesterol decreases the risk of cardiac disease.

(Craig is someone I personally and publicly call "the Tesla of Genomics" – not only because, with the privately funded Celera built from scratch, he beat the colossal government project of human DNA sequencing, but for a slew of breakthroughs. Craig's personal and scientific revelation above perhaps permits us to go beyond the emotion of optimism: if anybody can master the genome, it is he! On a personal note, this is much different from Steve Jobs, who – with total respect and admiration – had neither domain expertise in genomics, nor doctors at the more advanced stage of knowledge we are at today. Beyond personal notes of empathy and optimism, Craig revealed something that I believe is an epoch-making scientific admission. Not (yet?) in a full science paper, but in the short communication above, Craig Venter openly blames his condition on "an extremely low number of the repeats". Why is this so highly significant, when "everybody admits that the genome is replete with repeats"? The significance is in "The Principle of Recursive Genome Function". The peer-reviewed science paper and its popularization by a Google Tech Talk YouTube (both in 2008) laid down the principle of recursion (a double lucid heresy at that time): the growth of (fractal) organisms is governed by the (fractal) genome, where each recursion picks up auxiliary information from the genome to sustain both healthy growth and the required "full stop", beyond which growth can turn into pathology. It is often overlooked that, e.g., for the Mandelbrot set there is no mathematical limit to the number of recursions (in any computer plot, however, the number of recursions, though it can run to many millions, is finite, since computers do not run forever). For FractoGene, e.g. the growth of a 4-stage Lindenmayer fractal of a Purkinje brain cell, it is imperative not only to govern the growth, but also to drive it to a full stop at maturity, and not let it recurse beyond (into pathology).
The principle was depicted in the embedded Google Tech Talk YouTube video (Pellionisz).

FractoGene has been endorsed over the years by eminent biomathematicians (the double-degreed Eric Schadt) and a Nobelist (Michael Levitt). The novel corroboration by Craig Venter, an outstanding leader of genomics as well as a patient, can thus be called remarkable. Also noteworthy is that the utility of correlating aberrant repeats with pathology is specifically claimed by US patent 8,280,641, issued to Pellionisz (in force for most of the forthcoming decade), and that perhaps the most immediate applications of the new field of "genome editing" may be focused on correcting the erroneous number of repeats. - Andras_at_Pellionisz_dot_com)
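(Illustration: the requirement that recursion be driven to a "full stop" can be sketched with a toy Lindenmayer system. The rewrite rule and the stage limit below are hypothetical stand-ins chosen for illustration, not the actual FractoGene model of Purkinje cell growth. AJP)

```python
def lindenmayer(axiom, rules, max_stage):
    """Rewrite `axiom` by `rules` for at most `max_stage` recursions.

    The hard stage limit stands in for the biological "full stop":
    without it the string, like an idealized fractal, would grow forever.
    """
    state = axiom
    for _ in range(max_stage):
        state = "".join(rules.get(ch, ch) for ch in state)
    return state


if __name__ == "__main__":
    # Toy branching rule (hypothetical): 'F' grows a segment,
    # '[' and ']' mark a branch point that is left unchanged.
    rules = {"F": "F[F]F"}
    for stage in range(5):
        # String length grows geometrically with each recursion stage.
        print(stage, len(lindenmayer("F", rules, stage)))
```

Capping the recursion at, say, 4 stages yields a mature, finite structure; removing the cap lets the rewriting recurse without bound, the toy analogue of growth derailing into pathology.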

Illumina Strikes Deals With Philips, IBM to Interpret Cancer Genomic Data

Jan 09, 2017 | staff reporter

NEW YORK (GenomeWeb) – Illumina announced two separate collaborations today, with Royal Philips and with IBM, to advance the analysis and interpretation of genomic data for cancer.

The company's strategic collaboration with Philips aims to integrate Illumina's sequencing systems and Philips' IntelliSpace Genomics clinical informatics platform and to coordinate marketing and sales of the combined solution.

Genomic data from Illumina's instruments will be acquired using the BaseSpace Sequence Hub and processed through Philips' IntelliSpace Genomics solution for oncology, which combines data from multiple sources, such as radiology, immunohistochemistry, digital pathology, medical records, and lab tests. Labs adopting the solution will have access to advanced analytics, deep learning technologies and literature, guidelines, and other evidence in a single view.

In addition, Philips and Illumina plan to collaborate on clinical research with health systems in the US that want to develop precision medicine programs in oncology.

"Until now the ability to use genomic data with the aim of having a precise diagnosis of cancer and improve treatment was mostly for the domain of academic centers," said Jeroen Tas, CEO of Connected Care and Health Informatics at Philips, in a statement. "Through this collaboration we will unlock the value of genomics for a much wider group of laboratories and care providers to help them advance genomics initiatives at greater speed with the aim to offer precision medicine with better outcomes for their patients."

"We believe that this collaboration will provide an excellent path for our next-generation sequencing systems to be incorporated into health systems in the US and worldwide," Francis deSouza, president and CEO of Illumina, added in a statement.

Separately, IBM and Illumina plan to integrate Watson for Genomics and Illumina's BaseSpace and tumor sequencing process in order to standardize and simplify genomic data interpretation.

As a result, researchers using Illumina's TruSight Tumor 170 cancer sequencing panel will have access to information to help interpret the variant data. In particular, Watson for Genomics will comb professional guidelines, medical literature, clinical trials compendia, and other sources to provide information for each genomic alteration and to produce a report in a process that will take minutes.

The Watson for Genomics software, which adds data from about 10,000 scientific articles and 100 new clinical trials every month, will be available early this year to support the TruSight Tumor 170 assay.

"To enable precision cancer medicine on a large scale, we need new tools to overcome the data barriers of genomic research,” said John Leite, vice president of oncology at Illumina, in a statement. "With a comprehensive assay of Illumina and the power of Watson, we hope to deliver a rapid turnaround of the genomic alteration results."

"This partnership lays the groundwork for more systematic study of the impact of genomics in oncology," said Deborah DiSanzo, general manager of IBM Watson Health, in a statement. "Together we are poised to help researchers realize the potential of precision oncology by expanding access to valuable genome sequencing from Illumina and reliable, standardized genomic interpretation from Watson."

(Illumina-Grail more than double the stakes of Genome Interpretation to fight cancer. This is a major strategic twist in the rapid developments of 2017. While Grail is essentially a start-up with a "clean slate" regarding Genome Interpretation, the Illumina-IBM-Philips alliance is monstrous. Such "double-edged" approaches have not been uncommon earlier, e.g. with personal computers. Start-ups like Apple and Microsoft took a very focused approach in hardware and software development, while the giant IBM with its OS/2 became a spectacular failure on both counts. It is impossible to predict at this time which branch of the self-generated "double take" by Illumina (spinning off Grail while at the same time creating a major global alliance with Philips and IBM) will emerge as the winner. My personal observation is that Illumina, having been "gutted" and reduced to a "commodity company" by Grail (which took the high road of the Holy Grail), created the Grand Alliance to fend off yet another (third) hostile takeover attempt by Roche. Even if that is the case (nothing but speculation), "cornering" the commodity source of sequencing might already be too late. China is just about ready with miniaturized sequencers (certainly cheaper, if not faster and better, than Illumina's), and on top of it Oxford Nanopore (and others) are already out with disruptive nanotechnology for sequencing. In any case, the global re-arrangement of major IT forces around New School Genomics amplified the colossal need (heralded almost a decade ago by the Google Tech Talk YouTube) that the gap between Information Technology and the Information Theory of Genome Function must be bridged. The genome regulation disease of cancer is highly unlikely to be fought successfully without an algorithmic (software-enabling) understanding of genome regulation itself. Comment by Andras_at_Pellionisz_dot_com)

Grail Seeks to Raise $1B in Series B; Illumina's Stake to Fall Below 20 Percent

Jan 05, 2017 | staff reporter

NEW YORK (GenomeWeb) – Illumina's liquid biopsy startup Grail is aiming to raise $1 billion in its Series B financing round, Illumina said after the close of the market on Thursday.

The $1 billion would be raised from undisclosed private and strategic investors, from which Grail has received interest, Illumina said. Grail said that it has tapped Goldman Sachs to serve as placement agent for the additional financing it plans to raise in its Series B round, which it will close before the end of the first quarter.

Grail plans to use the proceeds to develop and validate its blood-based test for cancer screening, including large-scale clinical trials like the previously announced Circulating Cell-free Genome Atlas study and other trials that are anticipated to sequence hundreds of thousands of patients. It will also use proceeds to repurchase a portion of Illumina's stake.

The financing round "will provide Grail the resources to develop its first products and embark on the large-scale trials required to demonstrate the stringent performance requirements of a cancer screening test," Jay Flatley, Grail's chairman and Illumina's executive chairman, said in a statement.

Illumina also noted today that it will accelerate Grail's path to becoming an independent company. Illumina will no longer have representation on Grail's board of directors and will reduce its ownership to slightly less than 20 percent. Illumina also plans to update its supply and commercialization agreements with Grail to reflect a market-based arrangement.

Grail will become one of Illumina's "largest customers of sequencing instruments and consumables over time, providing royalties on future Grail tests and through appreciation of our ownership interest," Illumina CEO Francis de Souza said in a statement.

In a research note, Tim Evans, a senior analyst at Wells Fargo, wrote that the change in Grail and Illumina's business relationship would likely have a positive impact on Illumina's profitability. Previously, Illumina had anticipated that Grail would be dilutive, and Wells Fargo estimated it would have a $0.15 per share dilutive impact. But now, "we believe Grail will swing to an EPS benefit," Evans wrote. Depending on how much Grail spends per year on sequencing instruments and consumables, it could have a positive impact of more than $0.20 annually.

Not all analysts had such a positive reaction to the news, however. Paul Knight with Janney wrote that while the change in the business relationship between Illumina and Grail will improve the visibility of Illumina's revenue estimates, "transparency is just as quickly impaired."

Knight added that due to the hurdles of developing diagnostics, even if Grail launches a commercial test within Illumina's three-year timeline goal, "we don't see significant Grail revenue until 2020-2025 as FDA approval and private reimbursement are part of the treacherous post-commercial launch process." Moreover, when Illumina launched Grail, the company said that only it could sequence at the price points necessary to enable the required clinical trials, but now "Grail will have to pay market prices," Knight added.


Company Will Raise $1 Billion To Create Blood Test To Detect Cancer

Matthew Herper , FORBES STAFF

JAN 5, 2017

Grail, a San Francisco startup that aims to invent a blood test that can detect cancer early, announced this afternoon that it plans to raise $1 billion in venture capital in its second financing round, a sum that puts the biotech startup in a class with tech names like Uber, Facebook and AirBnb.

As part of the announcement, Grail is also spinning out of its parent company, San Diego's Illumina, the $2.4 billion (sales) firm that makes most of the DNA sequencing machines that scientists and doctors use to study human biology, diagnose rare genetic diseases and pick treatments for cancer patients. Illumina is keeping a 20% stake in Grail.

“We founded Grail a year ago to enable early cancer detection via a blood-based screening test powered by Illumina sequencing technology,” said Jay Flatley, Executive Chairman of Illumina and current Chairman of Grail. “This raise, when completed, will provide Grail the resources to develop its first products and embark on the large-scale trials required to demonstrate the stringent performance requirements of a cancer screening test.”

The spinout is likely to be popular with many of Illumina’s investors. Grail executives have discussed conducting clinical trials involving hundreds of thousands of patients. These would be expensive, and could be a drag on Illumina's profitability. Illumina shares fell 23% last year.

In a press release, Grail said it has received "indications of interest" from investors who would commit $1 billion. That means the money has not been received, and that the size of the funding round could grow. Grail said in an interview that the announcement was made now so Illumina could give clearer guidance to its investors, and because there may be others interested in investing.

The $1 billion round represents a victory, and a gamble, for ARCH venture capitalist Robert Nelsen, who has been pushing venture investments in biotechnology into the nosebleed realms favored by the Silicon Valley set. Traditionally, biotech companies have limited capital, raising first tens and then hundreds of millions as drugs progress. Nelsen has recently engineered much bigger deals with Juno Therapeutics (cancer, $120 million raised in its first round in 2013) and Denali Therapeutics (Alzheimer's and Parkinson's, $217 million raised in a round in 2015). Those deals, however, would be dwarfed by this one.

Grail was created as a unit of Illumina in late 2015, based on internal research at Illumina that showed sequencing DNA in the bloodstream again and again made it possible to pick up floating bits of DNA from cancer cells much more accurately than scientists previously believed. This data has not been published. At the time, investors including ARCH Venture Partners, Jeff Bezos and Bill Gates invested $100 million in the company.

As chief executive, Illumina hired Jeff Huber, a former Google executive. He says he was inspired to take the job in part by the death of his wife Laura, a healthy 46-year-old, from colorectal cancer. "It’s very mission-driven for me," he said in an interview this afternoon.

Huber says the company has made progress toward its audacious goal: "A test that will detect all of the major cancer types." That will require what he calls "ultra-intense genome sequencing and two or three orders of magnitude deeper than anyone else is doing."

"Cancer is not one thing," says Huber. "It is really driven by mutations. Every case of cancer is unique. It is a snowflake. (I would say, fractal, AJP). Being able to [find it] with medical and statistical rigor drives clinical studies of unprecedented scale." Already Grail has announced that it has started a 10,000-patient clinical trial, but Huber says studies involving hundreds of thousands of patients will be required, as will machine-learning technologies that the company is in the midst of building up.
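Huber's depth argument can be made concrete with a toy binomial model (a hedged illustration only, not Grail's actual method, whose data remain unpublished): a tumor variant circulating at a 0.1% allele fraction is essentially invisible at the ~30x depth of a standard whole genome, but becomes reliably detectable when coverage is pushed two to three orders of magnitude deeper.

```python
import math

def detect_probability(allele_fraction, depth, min_reads=3):
    """Probability of seeing at least `min_reads` tumor-derived reads at a
    locus, under a simple binomial model of sequencing coverage
    (illustrative only; ignores sequencing error and fragment biology)."""
    p_fewer = sum(
        math.comb(depth, k) * allele_fraction**k * (1 - allele_fraction) ** (depth - k)
        for k in range(min_reads)
    )
    return 1.0 - p_fewer

# A ctDNA variant at 0.1% allele fraction: essentially invisible at a
# standard 30x whole-genome depth, detectable at ultra-deep 30,000x.
print(f"30x:     {detect_probability(0.001, 30):.5f}")
print(f"30,000x: {detect_probability(0.001, 30000):.5f}")
```

Real cell-free DNA assays must additionally suppress sequencing-error artifacts (for example with molecular barcodes), which is why depth alone is necessary but not sufficient.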

The history of creating diagnostic tests in cancer is long and bitter and pockmarked with failure. For existing tests, like PSA for prostate cancer or mammography for breast cancer, debates rage about whether or not the tests harm more patients with additional surgeries and test procedures than they help. For many doctors, Grail's vision conjures up a whole population of people who are warned they have cancer before there is anything they can do. That's something that the company will have to grapple with even as it also wrestles with a disease that has afflicted people for the entirety of human history.

In fact, looked at another way, $1 billion isn't that much. Companies including Pfizer and Johnson & Johnson have spent that on individual experimental medicines, investments that sometimes ended up producing mainly financial losses. Huber says that Grail is going to be "prudent" with the money it is about to raise. "It is [a] capital-intensive path," he says.

(The era of "Genome Sequencing" is over - the sequencing business has become a commodity. Jay Flatley, the genius of Illumina, has just completed a 180-degree turn-around: having left Illumina to create Grail, he is now weaning the totally disruptive GENOME INTERPRETATION start-up off the commodity business of sequencing in Grail's Series B. One should note that this is a private-industry initiative on the scale of a $Billion, directly comparable to the imaginative but largely imaginary (presently un-funded) "government Moon-Shots". Flatley, one of the most brilliant CEOs of Modern Genomics, will escalate his new venture in Silicon Valley, already tied to Big IT (albeit hitherto by snatching one Google expert, Jeff Huber, as CEO of Grail, and another Google expert, Franz Och, to handle Big Data...). Now it is up to Silicon Valley Big IT to decide who will spearhead turning the smartest but most user-unfriendly secret of the universe (the genome, 6 Bn A,C,T,G-s) into the Next Big Thing of Silicon Valley. Comment by Andras_at_Pellionisz_dot_com)

Biden looks to continue cancer work (Moon Shot exploded before launch)


Biden Looks to Continue Cancer Work

Jan 05, 2017

Vice President Joe Biden plans to continue his cancer work after leaving the White House, the Washington Post reports.

Biden is currently heading up the cancer moonshot initiative announced by President Barack Obama in his last State of the Union Address in 2016 that aims to increase cancer research funding and break down barriers between different cancer research silos. The recently passed 21st Century Cures Act includes some $1.8 billion for the effort.

Biden tells the Post that his new nonprofit organization will tackle many of the same data sharing and clinical trial participation issues as the moonshot effort, but also others like drug pricing.

"I'm going to begin a national conversation and get Congress and advocacy groups in to make sure these treatments are accessible for everyone, including these vulnerable underserved populations, and that we have a more rational way of paying for them while promoting innovation," Biden tells the Post. Biden's son Beau died of brain cancer in 2015.

He adds that the nonprofit, tentatively being called the Biden Cancer Initiative, will be located in either Washington, DC, or Wilmington, Del., and won't be associated with any cancer center.

(With all admiration for the departing Joe Biden, the above communication can be considered most unfortunate, at least in its timing. To this minute of January 5th, there is no announcement of a Head of NIH, the agency that will govern most research and development in health sciences and services. With the above announcement, Joe Biden made it much harder for Francis Collins to be re-appointed, as the above "non-profit entity" - which, most interestingly, will not be associated with any cancer center - can be perceived as a pathway set up to side-track the new establishment (even if funds became available, albeit at a fraction of the proposed but not funded $1.8 billion). Given the stance of the proposed Chief of the Department of Health and Human Services (Tom Price), should Price be confirmed by the Senate, this particular Moon Shot looks almost as if it exploded on the launch-pad - AJP)

Hungary Launches National Oncogenomics Program

January 1st, 2017

Dr. Pellionisz in Silicon Valley, California received confirmation that the 9 Billion HUF (about USD 30 M) Hungarian National Oncogenomics Program was approved and funded by the Government of Hungary. Led by Drs. István Vályi-Nagy and Miklós Kásler, with the participation of scores of hospitals in the capital (Budapest) and planned to expand across the nation and beyond, the program - not very big by US standards - nonetheless beats to the punch the "Moon Shot" projects of the US government, which have been loudly heralded but hitherto not funded.

Through the instrument of licensing the double-disruptive FractoGene patent of Dr. Pellionisz - following the business model of Skype of Estonia - Hungary can be closely allied with one of the global leaders among Big IT companies that already have a vested interest in New School Genome Informatics.

(From Dr. Pellionisz, cabinet-level advisor to the government)

Washington D.C. making history

1) Lame duck President signs a bill that is not funded

President Barack Obama makes remarks before signing the 21st Century Cures Act on Tuesday in the South Court Auditorium in the Eisenhower Executive Office Building on the White House complex in Washington.(Photo: Pablo Martinez Monsivais / AP)

Washington — President Barack Obama on Tuesday signed into law a $6.3 billion legislative package that expedites government review of drugs and medical devices, boosts cancer and Alzheimer’s research and combats opioid abuse.

At the White House ceremony, Republican Rep. Fred Upton of southwest Michigan stood to the right of the president as Obama used a dozen pens to sign the legislation, known as the 21st Century Cures Act. It was likely the last bill signing of Obama’s eight-year presidency.

Obama thanked Upton and other architects of the bill, including Reps. Frank Pallone, D-New Jersey; Diana DeGette, D-Colorado; and Sen. Lamar Alexander, R-Tennessee.

“We are bringing to reality the possibility of new breakthroughs to some of the greatest health challenges of our time,” Obama said.

In a statement with DeGette, Upton said, “Patients needed a game-changer, and it is our hope that history will look back at the Cures effort as the moment in time when the tide finally turned against disease.”

The legislation authorizes 10 years worth of funding, including $1.8 billion for a cancer research “moonshot” effort supported by Vice President Joe Biden.

“God willing, this bill will literally, not figuratively, save lives,” said Biden, whose 46-year-old son, Beau, died last year after a long battle with brain cancer.

“But most of all what it does is give millions of Americans hope. Probably not one of you in this audience or anyone listening to this who hasn’t had a family member or friend or someone touched by cancer.”

The bill restructures federal mental health programs and envisions $4.8 billion in funding for the National Institutes of Health, IF future Congresses appropriate the funds. (And that is a very big IF - AJP)

The bill also aims to improve and “modernize” the development of new drugs and treatments. Critics had wanted the measure to include controls for drug pricing and say that speeding up the drug and device approvals will compromise patient safety.

“We’re building on the FDA’s work to modernize clinical trial design so that we’re updating necessary rules and regulations to protect consumers so that they’re taking into account this genetic biotech age,” Obama said. “And we’re making sure that patients’ voices are incorporated into the drug development process.”

The package includes $1 billion over two years, including $500 million in 2017, in grant money for states to help prevent and treat abuse of opioids and other addictive drugs such as heroin. Obama somberly noted that the number of opioid overdoses has nearly quadrupled since 1999.

When he first proposed the spending earlier this year, the White House said Michigan would be eligible for an estimated $28 million over the two years, depending on the strength of the state’s plan to combat the epidemic.

Michael Botticelli, director of National Drug Control Policy for the White House, said Tuesday that states will have some flexibility in how they deploy the grant funding, but it’s intended in part to close gaps in treatment coverage in under-served areas of the country.

“We know that the states really needed federal resources, and they should expect federal resources to really look at how we continue to prevent additional prescription drug misuse and specifically how we think about closing the treatment gap, ensuring everybody gets access to treatment,” Botticelli told reporters.


2) President-elect Trump creates a basis for a consensus on how private industry should get government projects funded and done

The meeting between President-elect Donald J. Trump and the nation’s tech elite was hyped as something out of “The Apprentice”: The new boss tells his minions to shape up. It turned out to be a charm offensive, a kind of “Dancing With the Silicon Valley Stars.”

“This is a truly amazing group of people,” the president-elect said on Wednesday in a 25th-floor conference room at Trump Tower in Manhattan. The gathering included Jeff Bezos of Amazon; Elon Musk of Tesla; Timothy D. Cook of Apple; Sheryl Sandberg of Facebook; Larry Page and Eric Schmidt of Alphabet, Google’s parent company; and Satya Nadella of Microsoft, among others. “I’m here to help you folks do well,” Mr. Trump said.

He kept going in that vein. “There’s nobody like you in the world,” he enthused. “In the world! There’s nobody like the people in this room.” Anything that the government “can do to help this go along,” he made clear, “we’re going to be there for you.”

And that was just in the first few minutes. The candidate who warned during the presidential campaign that Amazon was going to have antitrust problems, that Apple needed to build its iPhones in the United States instead of China, was nowhere to be seen.

(The full list, according to Trump staff, included:

Safra Catz - Oracle

Jeff Bezos - Amazon

Tim Cook - Apple

Brian Krzanich - Intel

Larry Page - Alphabet

Eric Schmidt - Google

Chuck Robbins - Cisco

Ginni Rometty - IBM

Sheryl Sandberg - Facebook

Elon Musk - Tesla

Satya Nadella - Microsoft

Alex Karp - Palantir)

(There will be money for spending on innovative health-care, especially for disruptive breakthroughs. One cannot, and maybe should not, take it for granted, however, that the government knows best how to be cost-effective - the best course is to go to private contractors that are competitive. Brace for a USD 6.3 Billion battle to create a new structure - Andras_at_Pellionisz_dot_com)

Mr. President-Elect, The Genome is Fractal! Mathematical analysis reveals architecture of the human genome

October 20, 2016


Mathematical analysis has led researchers in Japan to a formula that can describe the movement of DNA inside living human cells. Using these calculations, researchers may be able to reveal the 3D architecture of the human genome. In the future, these results may allow scientists to understand in detail how DNA is organized and accessed by essential cellular machinery.

Previous techniques of studying the genome's architecture have relied on methods that require killing the cells. This research project, involving collaborators at multiple institutes in Japan, used alternative molecular and cell biology techniques to keep the cells alive and collect data about the natural movement of DNA.

DNA is often envisaged as a stable and static code, but the genome as a whole is actually an active molecule that moves around and changes shape. Currently, scientists can sequence the entire basic code of DNA, but knowing the larger-scale 3D architecture of the genome would reveal more information about how cells use the code.

While the cell is growing, DNA is stored as an unraveled spool of string; certain portions (euchromatin) are more loosely wound, and therefore accessible to the cellular machinery that turns DNA into protein, than other areas that are kept tightly wound (heterochromatin). When the cell prepares to split in half during cell division, it packages all of the chromatin into tightly-wound, X-shaped chromosomes.

"Our calculations consider the fractal dimensions of the DNA, which shows how densely the DNA is packed inside the cell. The way the DNA is packed may indicate how the cell uses certain genes," said Soya Shinkai, PhD, Assistant Professor at Hiroshima University and first author of the research paper.

Along the "string" of chromatin are regularly-spaced, barrel-shaped "beads" of DNA-protein complexes called nucleosomes. Researchers tracked nucleosomes' movement around the cell to understand where and how chromatin is stored.

Researchers labeled the nucleosomes with fluorescent tags and took microscopy images during the growth phase of human cells. They then used theories of polymer physics to quantify the movement of the nucleosomes.

"Every second, a 10 nanometer-sized nucleosome can move 100 nanometers. The constant, subtle random forces within the cell make the chromatin move around so much," said Shinkai.

Before a cell can use a gene, the DNA must be completely unwound. Areas of chromatin containing frequently used genes are less tightly wrapped than areas of chromatin with infrequently used genes. A model to visualize how chromatin is packed within the cell could allow researchers to understand which genes are accessed most or least often and how the genome is physically organized.

"Our calculations are relevant to local chromatin structures, but this method could also be extended to whole chromosomes. These mathematical formulas are a theory for how to interpret the visual data from microscope images of DNA moving within the cell," said Yuichi Togashi, PhD, Associate Professor at Hiroshima University and last author of the research paper.

Future research projects will include finding new microscopy and DNA labeling techniques to visually track the movement of individual nucleosomes over longer periods of time.

The four researchers who published the recent paper are experts in theoretical and computational biophysics, structural biology, cell biology, and molecular biology. The results come from a research project made possible by an interdisciplinary team of scientists associated with the Research Center for the Mathematics on Chromatin Live Dynamics at Hiroshima University. Other collaborators for the project include the National Institute of Genetics (Japan), Keio University, and Sokendai Graduate University for Advanced Studies.

More information: Soya Shinkai et al, Dynamic Nucleosome Movement Provides Structural Information of Topological Chromatin Domains in Living Human Cells, PLOS Computational Biology (2016). DOI: 10.1371/journal.pcbi.1005136

Journal reference: PLoS Computational Biology
Provided by: Hiroshima University


The mammalian genome is organized into submegabase-sized chromatin domains (CDs) including topologically associating domains, which have been identified using chromosome conformation capture-based methods. Single-nucleosome imaging in living mammalian cells has revealed subdiffusively dynamic nucleosome movement. It is unclear how single nucleosomes within CDs fluctuate and how the CD structure reflects the nucleosome movement. Here, we present a polymer model wherein CDs are characterized by fractal dimensions and the nucleosome fibers fluctuate in a viscoelastic medium with memory. We analytically show that the mean-squared displacement (MSD) of nucleosome fluctuations within CDs is subdiffusive. The diffusion coefficient and the subdiffusive exponent depend on the structural information of CDs. This analytical result enabled us to extract information from the single-nucleosome imaging data for HeLa cells. Our observation that the MSD is lower at the nuclear periphery region than the interior region indicates that CDs in the heterochromatin-rich nuclear periphery region are more compact than those in the euchromatin-rich interior region with respect to the fractal dimensions as well as the size. Finally, we evaluated that the average size of CDs is in the range of 100–500 nm and that the relaxation time of nucleosome movement within CDs is a few seconds. Our results provide physical and dynamic insights into the genome architecture in living cells.
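The central quantitative step the abstract describes, extracting a subdiffusive exponent from the nucleosome mean-squared displacement (MSD), reduces to fitting the power law MSD(t) = D * t**beta, with beta < 1 indicating subdiffusion. A minimal sketch on synthetic data (a generic illustration, not the authors' code):

```python
import math

def fit_subdiffusion(times, msd):
    """Least-squares fit of MSD(t) = D * t**beta in log-log space.
    Returns (D, beta); beta < 1 indicates subdiffusive motion, as
    reported for nucleosomes within chromatin domains."""
    n = len(times)
    xs = [math.log(t) for t in times]
    ys = [math.log(m) for m in msd]
    mx, my = sum(xs) / n, sum(ys) / n
    beta = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs
    )
    d_coeff = math.exp(my - beta * mx)
    return d_coeff, beta

# Synthetic subdiffusive data: MSD = 0.01 * t**0.5 (e.g., um^2 vs. seconds)
times = [0.1, 0.2, 0.5, 1.0, 2.0, 5.0]
msd = [0.01 * t**0.5 for t in times]
d_coeff, beta = fit_subdiffusion(times, msd)
print(f"D = {d_coeff:.4f}, beta = {beta:.2f}")  # recovers beta = 0.5
```

In the paper, the fitted exponent and coefficient are then related to the fractal dimension and compactness of chromatin domains; that mapping is model-specific and not reproduced here.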

[The outgoing US President's Science Adviser reported that the genome was fractal in 2009 (within 2 years after I handed him the manuscript of my fractal model, The Principle of Recursive Genome Function). Because of the fragmented organization (NIH, NSF, DOE, DARPA, etc.), the interdisciplinary teams so vital for deciphering genome regulation have not been set up very effectively. Instead, the old school of genomics was preoccupied with (repeated) multi-year projects elaborating the obvious: that 98.7 percent of the human genome is NOT "Junk" DNA. The new President-Elect may not even have a "Science Advisor" (yet), but just as the genuine "Moon Shot" needed the creation of a focus (NASA), the various "Cancer Moon Shot" ideas may not succeed unless an organizational interdisciplinary focus is created first to establish the inevitable intrinsic mathematical platform. Similar organizational plans were put forward much earlier in brain research (1990 - awarded the Humboldt Prize by Germany, a country now ready to focus on Precision Medicine) but were ignored in the USA. Andras_at_Pellionisz_dot_com]

Maryland congressman in running to head NIH?

By Jocelyn KaiserDec. 1, 2016 , 5:00 PM

Representative Andy Harris (R–MD), an anesthesiologist who has shown a keen interest in the National Institutes of Health (NIH) while in Congress, has put his hat in the ring for NIH director in the incoming administration of President-elect Donald Trump, he told ScienceInsider today. Harris says he knows that some biomedical scientists would view him as a controversial choice, but argues that his blend of research and political experience would make him a good advocate for addressing NIH’s flaws and for growing the agency’s budget in a time of fiscal restraint.

“I have conducted both clinical research and basic science research. And I have the background in the political arena to understand how funding occurs, how policies can change in new directions, and how reform can be accomplished,” says Harris, a fiscal and social conservative representing eastern Maryland.

Harris says he has spoken with the Trump transition team about his interest in NIH and that Representative Tom Price (R–GA), whom Trump has tapped to head the Department of Health and Human Services, NIH’s parent agency, is also aware of his desire to be considered for the NIH directorship.

Harris was once a co-investigator on NIH-funded research as a faculty member at Johns Hopkins University in Baltimore, Maryland. He was elected to the Maryland Senate in 1998 and to the House of Representatives in 2011. As a member of the House spending panel that funds NIH and one of the lawmakers who shaped the 21st Century Cures biomedical innovation bill, he has pushed for policy changes at the agency, such as requiring it to develop an overarching strategic plan.

Harris’s pet issue has been the plight of young investigators, who receive their first NIH research grant at an average age of 42. But some of his proposals have been fiercely opposed by the scientific community, who say they are not the right way to address the problem and could hurt mid- and late-career scientists. One Harris proposal would require NIH to find ways to lower the average age at first grant by 4 years, to 38, but the idea has not gone anywhere. And a provision in an early version of Cures that would have created a pot of money specifically for new investigators was eventually watered down to an initiative within the director’s office to oversee programs aimed at young researchers.

Harris says that as NIH director, “I'd build on what 21st Century Cures started but use the position of the director to push these ideas even further.” That includes also shifting more of the agency’s research funding to diseases that exact a high financial toll on society, such as Alzheimer’s. Although that idea also is controversial, Harris says future funding increases for NIH will depend on the “argument that more funding is a good investment because of the potential return to the federal budget,” he says.

Harris acknowledges that he would be a controversial choice to head NIH. “There will be people in the scientific community who view reformers as something to be wary about. There will be others in the scientific community who view problems such as the relative neglect of young investigators as an immediate problem that needs to be solved,” he says.

Some Republicans in Congress, meanwhile, want the Trump transition team to keep current NIH Director Francis Collins on for another 4 years, according to a report today in The Hill Extra. Harris says he’s a Collins fan, too: “He's done an excellent job. It's going to be a decision between the new administration and Dr. Collins but I have only the highest praise for Dr. Collins.”

Rumors about Harris as a possible NIH pick have been circulating in the research community for days, to mixed reactions. Some prominent scientists and lobbyists who preferred not to be quoted by name said he would be a disaster for NIH; but others said he could be better than other alternatives. Tony Mazzaschi, a longtime NIH watcher who is now senior director for policy and research at the Association of Schools and Programs of Public Health in Washington, D.C., praises Harris as “a respected voice on health care matters with his GOP colleagues” and someone who “is knowledgeable about academic public health and academic medicine given his long association with Johns Hopkins.”

Still, Mazzaschi adds, Harris “would be an out-of-the-box choice as NIH director given his political background and that he doesn't have a track record as a research visionary or as the leader of a major research enterprise.”

Some researchers are concerned about how Harris might approach human embryonic stem cell research. In 2005, when Harris was a member of the Maryland legislature, he led an ultimately unsuccessful effort to derail legislation creating a state stem cell research fund. In the end, Maryland in 2006 became one of the first four states to establish such a fund.

Harris’s interest in the NIH directorship was first reported by CQ Roll Call earlier today.

iHope - gratis full sequencing (and interpretation?) by Illumina

Illumina Launches Philanthropic Sequencing Program for Children With Rare Diseases

Dec 01, 2016 | a GenomeWeb staff reporter

NEW YORK (GenomeWeb) – Illumina has launched a philanthropic whole-genome sequencing program in conjunction with the Foundation for Children of the Californias, the Rare Genomics Institute, and the University of California, San Francisco Benioff Children's Hospital.

The program, called iHope, is aimed at children with undiagnosed rare diseases who cannot afford diagnostic sequencing. In its first year, Illumina expects to sequence 100 patients and their parents.

"Understanding the scope and size of the population affected by rare diseases, we have a moral imperative to increase the visibility of this global health problem and help find solutions for the children and families who are suffering," Illumina President and CEO Francis deSouza said in a statement.

The iHope program partners will select eligible participants who are referred to the program by clinical experts. Illumina will perform clinical whole-genome sequencing for selected individuals at no cost in its CLIA-certified and CAP-accredited laboratory.

"Whole-genome sequencing has already shown its value in identifying rare and undiagnosed diseases and, as we learn more, I believe that the process will become a routine part of medical practice," Jimmy Lin, founder and president of the Rare Genomics Institute, added. "Children will no longer have to suffer through a crusade of testing."

[While the government (of the Old Establishment) agonizes over built-in "duplication of efforts" (Cancer Moon Shots by both a lame-duck President and his deputy, without funding and unlikely to be handed "blank checks in duplicate"), Private Industry has already cut in with one of the most brilliant and wonderful charitable programs. With a name that suggests a very likely partner (in full genome interpretation of just one hundred human full genomes in the first year), the program is called iHope. Illumina is the King of Sequencing (at least until a massive challenge from China's copy and miniaturization of the sequencing technology developed by Silicon Valley gem Complete Genomics), but Illumina has not focused in the past on "algorithmic interpretation of human full genomes". iHope [pun intended] that this initiative will lead to a breakthrough! - Andras_at_Pellionisz_dot_com]

Moon Shots - GenomeWeb writes

Precision Medicine Has Bipartisan Support, Proponents Assure Amid Trump Administration Transition

GenomeWeb, Nov 22, 2016 | Turna Ray

NEW YORK (GenomeWeb) – At a cocktail reception in Boston last week ahead of an annual meeting on personalized medicine, attendees milled around not talking about the latest advances in genomics or the challenges of companion diagnostics development. They were too preoccupied with the impact of the Presidential elections the week before.

Will the new administration value genomics research and personalized medicine projects going on around the country that depend on government funding? How will a change in administration and priorities impact projects such as the Precision Medicine Initiative (PMI) and the Cancer Moonshot? Who will head up the US Department of Health and Human Services, the National Institutes of Health, and the US Food and Drug Administration? [Now we know: Tom Price. When STAT asked Price about Vice President Joe Biden’s cancer moonshot this year, he said of course he supported medical research — but he wasn’t willing to give the government a blank check. “We’re all in favor of increasing funding for cancer research,” he said. “The problem that the administration has is that they always want to add funding on, they never want to decrease funding somewhere else. That’s what needs to happen.” - AJP]

And will these new government officials continue efforts of the last administration to advance data sharing, privacy protections, and integrated systems critical for the implementation of personalized medicine?

Ed Abrahams, president of the advocacy organization Personalized Medicine Coalition, which hosted the 12th Annual Personalized Medicine Conference last week, tried to reassure stakeholders. At the cocktail reception, he articulated what was already on the minds of many attendees: the recent Presidential elections would bring change. But he also said that "the promise of personalized medicine has not changed," and reminded the crowd that "the commitment to it, going back to President [George W.] Bush, is bipartisan."

Abrahams didn't shy away, however, from admitting that persuading policymakers to increase investment in and spur greater adoption of personalized medicine is challenging and would remain so. Two days later, leaders in the House of Representatives and Senate announced they wouldn't approve the 2017 appropriations bill, and chose instead to advance another continuing resolution that would fund the federal government until March 31, 2017.

This in turn, raises questions about whether the new Congress controlled by Republicans will bolster NIH funding by $2 billion, as legislators had previously intended. Under a Senate proposal advanced over the summer, the planned $34 billion NIH budget included $300 million for the PMI, which seeks to enroll 1 million Americans, collect genomic and a variety of other data, and use that information to accelerate the development of personalized care. In 2016, the PMI received around $200 million in funding.

The Senate budget proposal also sought a $100 million increase for the BRAIN Initiative, which seeks to advance technologies for imaging, mapping, and studying the brain; and a $216.3 million increase for the National Cancer Institute. Last year, the BRAIN Initiative received $150 million and the NCI got $5.21 billion.

Though it's not unusual to put appropriations on hold in an election year, Daryl Pritchard, PMC's VP of science policy, told GenomeWeb he's not sure where funding for PMI will end up in 2017 after the continuing resolution runs out in March. He expects Congress might not pass full appropriations for 2017 after March since the fiscal year will be half over.

Meanwhile, Eric Dishman, who is heading efforts to enroll the 1 million participants in the PMI (newly named the All of Us Research Program), has been working feverishly since July to set up the recruitment process. However, at the Personalized Medicine Conference last week, he said that enrollment wouldn't start this fall as originally projected, but would likely begin in the first quarter of 2017.

The NIH gave the All of Us program $55 million for fiscal year 2016, likely enough to begin bringing in participants and collecting certain kinds of information from them. However, according to Dishman, discussions are just starting on how to approach genomic testing, data collection, reporting, and counseling. He suggested that the price point for genomic sequencing will need to drop further to enable testing a million participants. Additional funds will certainly be necessary to reach the PMI's enrollment goal over four years, keep participants engaged and participating long term, and collect the different kinds of data the project has planned.

While there might be "an attempt to do major surgery" on the 2017 budget, Russ Altman, a professor of bioengineering, genetics, and medicine at Stanford University, is also optimistic that PMI has bipartisan support. Altman, a member of the committee that advises the NIH director on the initiative, met with Senator Lamar Alexander at a PMI event in February at the White House and said the Republican senator from Tennessee was fully supportive of President Obama on this front.

"However, the new administration is not necessarily in sync with either party on this, so we will need to see," Altman said over e-mail. "However, there is reason to be optimistic since the Republicans have a history of supporting basic science research and understand how it can be an economic driver."

But Altman cautioned that stakeholders involved with PMI shouldn't take bipartisan support as a given, and that the initiative could be at risk of being scrapped if Republicans consider it just another Obama legacy project. It might be effective, he proposed, to present PMI as a basic research project that can bolster the US leadership in science internationally, given that other countries, such as the UK, have launched similar large-scale genomics and personalized medicine projects.

"I think that the NIH has to make sure that they get in front of the new administration and present the benefits and promise of the PMI in very clear terms — the scientific and economic long-term benefits, the benefits to healthcare long term and the competitiveness of the US in the next generation of health," he said. "If they can do this, it should be safe."

Meanwhile, the funding for the Cancer Moonshot — which aims to achieve 10 years of progress in cancer research in five — doesn't have any appropriations. So, there's even more uncertainty around Vice President Joe Biden's pet project, for which the Obama administration had requested $750 million in 2017, the majority of which would go to NIH with $75 million earmarked for the FDA.

At the Personalized Medicine Conference last week, Greg Simon, executive director of the Cancer Moonshot task force, said he hadn't met with the Trump transition team yet. "The bad news is I have no idea what's going to happen. The good news is we're supposed to know by January 20th," said Simon, who was an aide to former VP Al Gore from 1991 to 1997, and headed up FasterCures, a think tank focused on accelerating medical research.

But he was optimistic that the Cancer Moonshot has widespread support across academia, industry, and government. "I've worked on a lot of stuff that I felt was important," he said, reflecting on his 30 years in Washington. "Nothing has the bipartisan support like this has bipartisan support."

Speculation on the future of the Cancer Moonshot has increased over the last few days, after Trump's website issued a statement that the president-elect and vice president-elect had dinner with billionaire biotech entrepreneur Patrick Soon-Shiong. He is CEO of NantHealth, which offers a service called GPS Cancer that integrates whole-genome sequencing, whole-transcriptome sequencing, and quantitative proteomics to provide oncologists with a molecular profile of a cancer patient. Soon-Shiong last year also launched his own Cancer Moonshot 2020 — an effort to join pharma and government around immunotherapy clinical trials — around the time Biden's Cancer Moonshot was announced.

Regardless of where the government lands on the Cancer Moonshot, the private sector has committed to sharing data and launching precision oncology programs in support of the national project, and those efforts will continue to advance — and Biden will remain a champion.

"There are all the reasons in the world too keep it going at the government level," Simon reflected at the meeting. "But apart from that, Vice President Biden has said publicly many times he will continue to do this work from wherever he is, in terms of convening people around problems … and helping the international community work together better."

[Those of us who worked with NASA at any time remember the "original Moon Shot". First, it was a political idea to affirm the symbolic and technological superiority of the USA over the Soviet Union. Later, it became a victory accomplished... Well, not exactly by NASA alone, but more like taking Wernher von Braun out of the prisoner-of-war camp to finesse the breakthrough, and assigning the heavy lifting to over 500 USA-based private contractors, like Boeing, North American Aviation, the Douglas Aircraft Corporation, the Rocketdyne Division of North American Aviation, IBM, etc. The "Cancer Moon Shot" will be very different - but ultimately may turn out to be rather similar. Tom Price will hardly give a blank check to Biden for his Moon Shot. Ever since the President-Elect and Vice-President-Elect had dinner with Patrick Soon-Shiong, they may have spotted in a cancer moon shot a "yuge business opportunity" for the US private sector, China-style! Thus, the question may be whether a China-style cancer moon shot will win in China, or in the USA by taking advantage of the heavy lifting of US-HQ-ed "Big IT". Just as with the original Moon Shot, it is difficult to overestimate the significance of the outcome of this round. Those of us favoring the USA root again for the US in the "Cancer Moon Shot". Andras_at_Pellionisz_dot_com]

The Massive "Next Big Thing" - National Genome Projects

China took away not only our manufacturing jobs, but also our leadership in Modern Genomics. This has been predictable for at least half a decade: see the long foreshadowing of the US fading out in "Global Industrialization of Genomics" (in "HoloGenomics News," at about the time China's BGI acquired the gem of Silicon Valley, Complete Genomics, for a token $100 M). Today, a single bundle of Intellectual Property in this field is worth much more. (For scale: Lenovo's mobile phone business is set to grow even larger - in January the Chinese company announced it planned to buy Motorola Mobility from Google for $2.9 billion.)

[Emergence of BGI to take over leadership at the time of the sale of Complete Genomics, see also the open challenge here]

"The China Precision Medicine Cloud" claims World leadership on May 24, 2016: 

A world-leading platform to benefit patients and health

Huawei, WuXi AppTec and WuXi NextCODE launch the most powerful, proven and quality-certified national-scale cloud infrastructure for using the genome to improve medicine and health

- Will benefit patients by accelerating capabilities in discovery, clinical medicine and wellness

- The world's leading genomics platform hosted on China's secure and reliable cloud network

- A unified infrastructure to serve the China Precision Medicine Initiative, genome discovery of unrivalled scale, as well as clinical diagnostics and drug development

Not to be totally outdone, Germany, France, the UK, the Netherlands, Belgium, Korea, Canada, Estonia, Poland, Switzerland, Hungary (see below), the entire EU, Kuwait, Qatar, Saudi Arabia (and too many other countries to claim a full list; do your own Googling) have also announced their focused efforts towards genome-based precision medicine of their special populations.

Where is the USA National Genome Program? 

There used to be one, kind of. At that time (say, two decades ago) the USA was far ahead in the lead with "The Human Genome Project". Over the decades, it ended up as a minuscule subsidiary of NIH - one of the 27 Institutes and Centers that share about $30 Bn per year. From this, the NHGRI's 2016 requested budget was less than $516 Million (M as in Million, not B as in Billion).

At the same time, noble and mathematically eminently able humanitarians rack their brains over whether, e.g., autism is a mathematically well-identifiable (genomic) disease, or a mere confluence of "social syndromes". A recent funding request to actually measure the Zipf-Mandelbrot-Fractal-Parabolic distribution curve of Copy Number Variations in autistic versus control human genomes was turned down. This came about a quarter of a Century after an NIH request to expand an existing (hard-fought) grant towards the geometrization of neuroscience and genomics was turned down. One could tweet: "SAD".
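
For readers unfamiliar with the curve in question: the Zipf-Mandelbrot law predicts that the frequency of the r-th most common item falls off as f(r) = C / (r + q)^s. Below is a minimal sketch of estimating the exponent from rank-ordered counts; the CNV counts are invented for illustration, and this is in no way the method of the rejected proposal:

```python
import math

def zipf_mandelbrot(rank, C, q, s):
    """Predicted frequency of the item at a given rank: f(r) = C / (r + q)**s."""
    return C / (rank + q) ** s

# Hypothetical Copy Number Variation counts, already sorted into rank order
# (illustrative numbers only - not measured data).
counts = [1000, 480, 310, 225, 175, 142, 119, 102]

# Crude log-log slope between the first and last ranks estimates the
# exponent s (with q assumed 0); a real fit would regress over all ranks.
s_est = -(math.log(counts[-1]) - math.log(counts[0])) / (math.log(8) - math.log(1))
print(round(s_est, 2))  # roughly 1.1 - close to the classic Zipf slope of 1
```

A measured rank-frequency curve that bends away from this straight log-log line is exactly what the q (shift) parameter of the Mandelbrot generalization is there to capture.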

The US government lost out. Private Industry (Big IT) is going to win big. The question is whether it wins in the USA (e.g. Apple or Google making not only smartphones here, but also edging decisively into user-friendly genome interpretation based on hard mathematical foundations) - or in the Far East (by the government in China, by Samsung in Korea, etc.).

Sapienti sat.

[Andras_at_Pellionisz_dot_com, Four Zero Eight 891-7187]

The Third Hungarian Notation

To be Hungarian is to Think Different. A reason may be the rather unique Hungarian language. It does not belong to the family of Indo-European languages (see the chart below; the distinct grey patch in the very center of Europe). Hungarian is very unlike English: phonetic and precise, but without having to worry, as in German, about the "gender" of nouns (there is none). In addition to singular and plural, Hungarian also has a dual mode (as in Sanskrit or Russian): "my eye is blind" means both eyes are blind; if only one is, "I am blind in half of my eye". Agglutination can define very complex yet precise meanings - many people, including John von Neumann, believed that mastery of the Hungarian language predisposes one to mathematical thinking. When addressing someone there is a choice of only two levels of formality - not the mind-boggling nuances of Japanese. However, as in Japanese, we put the Family Name first, followed by the Given Name.

[Countries in green speak predominantly Indo-European languages. Hungary in the Center does not]

Perhaps the simplest "Hungarian Notation" is that of Charles Simonyi (Simonyi Károly in the Hungarian). His convention, introduced while he was Chief Architect at Microsoft, is a clever way of naming variables in computer languages: each name carries a short prefix that encodes the variable's kind.
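
A minimal sketch of the convention (in Python; the function and the exact prefix set below are illustrative, not Simonyi's original Microsoft code):

```python
# A small illustration of Hungarian notation: each variable name carries
# a prefix hinting at its kind, so a mismatched use is visible at a glance.
# Classic prefixes: sz = zero-terminated string, i = index, c = count, f = flag.

def count_vowels(sz_name):
    """Count the vowels in a string; every identifier wears its kind-prefix."""
    c_vowels = 0                                         # c: a count
    for i_char in range(len(sz_name)):                   # i: an index
        f_is_vowel = sz_name[i_char].lower() in "aeiou"  # f: a boolean flag
        if f_is_vowel:
            c_vowels += 1
    return c_vowels

print(count_vowels("Simonyi"))  # 3 (i, o, i)
```

The payoff is that an expression such as `sz_name + c_vowels` immediately looks wrong to the reader: a string is being mixed with a count, and the prefixes say so before the compiler or interpreter does.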

Another, more profound Hungarian Notation, defining the architecture of digital computers, was that of John von Neumann. He realized that since both computer instructions and data are expressed as strings of zeros and ones, the memory of a digital computer can store both the instructions and the data. For non-experts it is outright impossible to tell which sequences of zeros and ones, e.g. on a hard drive, are program instructions and which are data of all sorts. (When you buy a new computer, its storage overwhelmingly contains program instructions and hardly any data, perhaps some data templates. As your computer system "evolves", your hard drive fills mostly with data files: documents, pictures, movies, etc. Physically, everything is zeros and ones - one has to be an expert in informatics to tell sequences that represent program instructions, with their very different syntax, from data.)
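
The point that raw bytes do not announce what they are can be shown with a toy sketch (Python; the bytes are arbitrary, and the "opcode" reading is only a hypothetical hex dump, not real machine code):

```python
# The same four bytes, read three ways: as ASCII text, as a little-endian
# 32-bit integer, and as a hex dump one might hand to a disassembler.
# Nothing in the bytes themselves says which reading is the "right" one -
# that is exactly the von Neumann stored-program point.
import struct

raw = bytes([0x48, 0x69, 0x21, 0x00])

as_text = raw[:3].decode("ascii")      # "Hi!"
as_int = struct.unpack("<I", raw)[0]   # 2189640 (0x00216948)
as_dump = raw.hex(" ")                 # "48 69 21 00"

print(as_text, as_int, as_dump)
```

Only the context - which region of memory the processor's instruction pointer is walking through - decides whether such bytes are executed as instructions or operated on as data.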

The third "Hungarian Notation" was introduced deliberately in Budapest, 2006 (yes, just over a Decade ago!). Ever since Ohno (1972) who made "the biggest mistake in the history of molecular biology" to "scientifically argue" (in a false manner) that there is "Too Much Junk in our DNA", many rejected the "convenient oversight" to relieve scientists from the staggering task of trying to interpret the entire genome (the first person to ask a question after Ohno's talk expressed his disbelief first, with Mattick and Simpson followed). Yet, nobody provided a mathematically comprehensive explanation how "Junk" DNA is "anything but" - till Pellionisz took his 2002 FractoGene concept to Budapest in 2006 (along with seriously ill but still very active Malcolm Simons, who embraced FractoGene by a joint peer-reviewed publication, 2006).

Below is the (edited and abbreviated translation) from the original Hungarian report:


The “Junk” disappeared: A new era started in the understanding of DNA

By Márton Münz and Tamás Simon
Translated, edited and abbreviated by Andras J. Pellionisz
Origo (weekly, in the Hungarian language), October 16, 2006 [Note the date: a full Decade ago!]

Scientific research rarely delivers a breakthrough that puts an entire science on a new foundation. In biology, precisely such a revolution started: after a Century of blindness we begin to see the information coded in the genome in its entirety, beyond the "genes". A new world opens up for us and its significance and its practical implications are presently impossible to assess. For us, it is especially exciting that the brand new science called PostGenetics was born in our country, in Budapest.

A new science was born

"In Hungary a new science was born" declared [to the Journal of Origo] Dr. András Pellionisz, a Hungarian biophysicist who works in California to at the Immungenomics and Immunomics World Congress on 2006. October 12. "This new science is PostGenetics, a postmodern period for genomics. The idea of delivering this breakthrough here emerged in my visit a year ago, and we have worked hard to develop the announcement made today."

The congress was a great success. Over five hundred scientists from 46 countries gathered, among them many Hungarian specialists working at home or abroad. As a satellite conference of the congress, it was the first international science meeting held on the question of so-called "Junk DNA", and its conclusion - the dismissal of "Junk DNA" - triggered the formation of the International PostGenetics Society.

Encryption in the "Junk" DNA

András Pellionisz discovered how we can read the part of our genome that used to be considered "unreadable". In 1966, as an electrical engineer, he joined the Budapest research group of János Szentágothai, professor of anatomy [see the booklet of the Szentágothai Centenary held in New York, 2012]. Szentágothai realized that if we want to understand the functioning of neural networks, professional mathematicians and informaticians need to be hired, and Pellionisz accomplished historic breakthroughs in solving the problem of neural networks. Specializing in the "geometrization of biology", his professional interest in recent times turned to the analysis of how information is encrypted in the genome - an achievement discovering new, previously unseen layers of the function of DNA.

How to read your genome?

There are astonishing reasons why, for a long time, we did not think that vital information is hidden in the 98.7 percent of the hereditary material outside the genes. When researchers tried to "read text" there, they could not comprehend any meaning. We now know why the conventional method could not have found meaningful information: the regions of DNA long mis-labeled "Junk" are written in another language. It is one thing to read the sequences containing the protein-coding genes; the "Junk" DNA stores its information based on a very different principle.

Key to the secret: Fractals

It is not only the genes but also the "Junk" DNA that contains important information. However, it can be read only by different methods. The theorem of Pellionisz is that, in addition to the genes, the incorrectly labeled "Junk" DNA - now also called non-coding DNA, as it does not code for protein - holds the true "blueprint" for how to assemble the raw materials (proteins) that are directly coded by the very small number of genes.

Pellionisz revealed the intrinsic mathematics distinct to genome structure and function: geometry - and precisely the kind of geometry so common in nature, fractals. Fractals are special mathematical objects whose distinguishing characteristic is "self-similarity": a minor detail, magnified, recreates the whole shape with a similar structure. Fractals created by theoretical methods therefore exhibit infinite internal complexity, but in everyday life we are also surrounded by many such shapes. Lightning has a fractal-like pattern, as do the arborization of trees, the veins of leaves, the surfaces of cauliflower and broccoli, and the over-arching network of blood vessels in the body; even the growth of bacterial colonies can be understood by the methods of fractal geometry. Nature uses fractals (also in the genome) because they are an excellent method to compress information, explained the Hungarian biophysicist. In this respect it is no wonder that the genome, responsible for the storage of hereditary information, is also arranged in a fractal manner, just like the organisms it grows. The utility lies in the cause-and-effect of fractal geometries: statistical correlation for diagnosis and probabilistic prognosis for precision therapy.
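
Self-similarity can even be made quantitative through a box-counting dimension. A minimal sketch on the textbook Cantor set (a self-contained mathematical illustration, not genome data):

```python
import math

def cantor_intervals(level):
    """Return the intervals of the Cantor set after `level` middle-third removals."""
    intervals = [(0.0, 1.0)]
    for _ in range(level):
        nxt = []
        for a, b in intervals:
            third = (b - a) / 3.0
            nxt.append((a, a + third))  # keep the left third
            nxt.append((b - third, b))  # keep the right third
        intervals = nxt
    return intervals

# At level n the set is covered by 2**n boxes of size (1/3)**n, so the
# box-counting dimension is log(2**n) / log(3**n) = log 2 / log 3 ~ 0.6309:
# a non-integer dimension, the hallmark of a fractal.
n = 6
boxes = len(cantor_intervals(n))
dim = math.log(boxes) / math.log(3 ** n)
print(boxes, round(dim, 4))  # 64 0.6309
```

The compression angle is visible in the same sketch: the entire infinitely detailed set is generated by one tiny rule ("keep the outer thirds") applied recursively, rather than by storing every point.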

Pellionisz and his colleagues deciphered the secret code. Fractal algorithms provide a new way of finding fractal elements and fractal clusters - and they have been identified in the DNA. These self-similar, repeated sequences found in the genome, "patterns of patterns", used to be hidden as a kind of repetitive garbage lurking behind the clear (but very few) protein-coding genes. Someone who reads the genome in a linear manner, hunting for "genes", will not notice these sequences, or would even deliberately throw them out.
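
The simplest possible way to see repeats that a linear hunt for genes ignores is to count k-mers (length-k substrings). The sketch below is a generic illustration on a toy sequence; it is emphatically not the patented FractoGene method:

```python
from collections import Counter

def repeated_kmers(dna, k):
    """Return the k-mers occurring more than once, with counts, most common first."""
    counts = Counter(dna[i:i + k] for i in range(len(dna) - k + 1))
    return [(kmer, n) for kmer, n in counts.most_common() if n > 1]

# A toy sequence with an obvious self-similar repeat (illustrative only).
seq = "ACGTACGTTTACGTACGT"
print(repeated_kmers(seq, 4)[0])  # ('ACGT', 4)
```

A gene-finder scanning left to right for open reading frames would simply pass over such repeats; a repeat-counting pass makes the "patterns of patterns" the primary object of study.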

Genes do not make us human

Although reading the "Junk" DNA seems a little complicated at first, the deciphering biophysicist maintains that the mathematics of fractals is fortunately not overly complicated. Pellionisz says that he brought PostGenetics to Hungary because his homeland excels in mathematics and IT, a background several orders of magnitude more cost-effective than the experimental wet-labs of biology. It should be considered whether it would be worthwhile to concentrate our resources on this research, and mostly on its application, because for the moment we are ahead of the Western countries - said the researcher, warning that this opportunity will not remain open for long. Results were already popping up: for example, Csaba Szalai and colleagues (MTA Immunogenomics Research Group), with the help of Pellionisz and using fractal geometric characteristics, found differences in a chromosomal region that contains genetic susceptibility to asthma.

The DNA outside of the genes could also have an important evolutionary role. A very interesting piece of new information, for example: between chimpanzee and human, the genes differ by only 0.1%, while the regions that contain no genes differ forty-fold more, by about 4%. It is obvious that this 4% surplus carries the important information that makes us human, while the genes code for "raw materials" that are virtually identical across all living systems. It can be said that complexity at the higher levels of organization is defined not by the protein-coding DNA, but by a more complicated "blueprint" that builds the superior organism.

Testing the DNA beyond the "genes" is also a very practical question, Pellionisz said. It turns out to be essential to look for errors in the DNA outside the genes, since for many diseases this is what provides understanding and treatment: in thousands of diseases the key is not in the genes but in the "Junk" DNA. The research also underlines its economic usefulness: the fractal approach has already found such structures in a bacterium, which may lead to reducing the cost of hydrogen produced by living systems.


A new science is born before our very eyes. The discovery, made when analyzing the genome, shocked some researchers. In recent decades [since Ohno, 1972] we were trapped, because we tested only a small portion of our DNA and not the vast part that operates as a regulatory system. The truth is ever more certain: the genes we wanted miracles from simply encode how proteins, the ingredients of the organism, are produced. The information on how to architect those ingredients into even a single cell is coded "beyond the genes". The way living systems, e.g. cells, are built is determined by self-similar repeats, microRNAs, small interfering RNAs, pyknons, satellite DNAs, introns and a slew of other regulatory elements - all in the "non-coding" regions of the genome. The genes provide the raw materials only. The program is encrypted, it seems, in the vast seas of non-coding DNA that was, based on a mistaken "scientific" rationale [by Ohno, 1972, read full text above], mis-labeled "Junk" DNA. This information is in "a different language" than the one in which genes code for proteins. Non-linear dynamics and fractal-geometry methods are needed to read the regulatory system. The secret of fractal DNA governing fractal organisms [FractoGene] was discovered first by the Hungarian biophysicist [AJP], who explains that this lends Hungarian researchers an excellent opportunity to capitalize on their talents.

In closing, let us recall a train of thought of the world-famous Hungarian mathematician John von Neumann, dating from 1945. Von Neumann earned outstanding merit in architecting electronic computers. The von Neumann architecture - the principle of the stored program - has computers use the same binary memory for instructions as well as data. Thus the memory of a computer stores not only the data, but also the instructions to be performed on the data. [Every coder knows that instructions have a very strict syntax, while data, though also binary, can come in all sorts of formats, very different in syntax from "instructions" - AJP]. The data and the program are stored in an apparently identical manner, and the strings of zeros and ones can be told apart only by those who understand the "von Neumann architecture" - the fundamental theory of nearly all modern computers. But as if evolution had recognized the von Neumann principle, in the same DNA "memory" we find both the protein-coding instructions, stored in the genes, and the regulatory data for how to design organisms from the ingredients. The latter is in the "non-[protein]-coding" DNA, and it is written in a different language!


So what happened in Budapest after 2006? Hungary froze for a Decade. The government of Hungary at that time, instead of doing what was best for the people, instead of launching any kind of innovative application project, shot with rubber bullets at those taking part in the celebration of the 50th Anniversary of the 1956 Hungarian Revolution (including yours truly, who most fortunately was not shot in either 1956 or 2006). This is somewhat worse than how Jim Watson answered the question "what happened after you published the Double Helix?" ("For seven years, nothing happened. Our double helix paper was not cited even once," Jim Watson replied). Hungary was in a miserable condition in 2006 - both economically and politically, carrying the heavy burden of an "Iron Curtain Country" (imposed at Yalta by Roosevelt and Stalin in 1945 and confirmed by the betrayal of Eisenhower and Khrushchev in 1956). The last thing Hungary could afford in 2006 was massive investment in innovation.

Also, Pellionisz did not keep it a secret that he filed a USA patent on the utility of his FractoGene discovery in 2002 (see an eminently lucid article by a San Francisco Chronicle [SF Gate] journalist).

In all fairness, though fractals are inherently recursive, "The Principle of Recursive Genome Function" was not available on paper in 2006 (submitted in 2007, published in a peer-reviewed science paper in 2008). Meanwhile, subsequent to 2007 (the last CIP of his patent submission), improvements on "best methods" were accumulating as trade secrets. Such improvements - e.g. the nonlinear dynamics of recursion from contravariant protein-coding DNA to covariant "non-coding" DNA - cannot easily be engaged without close cooperation (see fractal geometry in a recent peer-reviewed book chapter).

While Pellionisz intended to cultivate New School Genome Informatics in his native Hungary (and elsewhere in Central Europe, most notably in Poland), several challenges needed to be solved first. The FractoGene US patent (8,280,641) was awarded several years after 2006 (no serious investment is made before Intellectual Property rights are secured; about a Decade remains with the patent in force). IP is the key to the most lucrative genome-informatics market: thousands of US cancer health institutions and Big Information Technology companies are all racing to capture the enormous business opportunity.

Development of the Intellectual Property portfolio (the US patent and ensuing Trade Secrets) came first; exploration of the Big Information Technology companies capable of defending IP in the USA came second; and negotiations with for-profit innovation entities of Hungary and similar Central European regions that can establish joint ventures with the US-based IP holder came third. The business structure and financial details cannot be disclosed at this time, but Dr. Pellionisz wrapped up an extended trip in Central Europe with success and a prominent appointment to fulfill responsibilities. Part of the long-haul success story is a set of cabinet-level decisions in Hungary to (1) invest heavily in innovation, especially in the field of precision medicine, and (2) lower the business tax in Hungary to 9% by January 1st, 2017 - the lowest in the entire European Union. [Inquiries to Andras_at_Pellionisz_dot_com, or Four-zero-eight 891-7187]

Non-coding portions of genome are found to play role in cancer

September 27, 2016 by Chris Palmer, Science Writer

The human body produces 100,000 or more different proteins. Yet, amazingly, only two percent of the human genome actually encodes proteins. Nearly 80 percent of the rest of the genome is transcribed into RNA that does not code for proteins. Two big questions facing scientists are: How much of this "non-coding" RNA is actually functional? And does it play a role in disease?

A team of scientists at Cold Spring Harbor Laboratory (CSHL) screened thousands of non-coding RNAs to find those that were expressed at high levels in two types of aggressive breast cancer. As they describe today in a paper appearing in Cell Reports, when they reduced the level of some of the most over-expressed of these RNAs from mammary tumor samples, cellular features characteristic of cancer spread were significantly reduced.

Of the handful of different types of non-coding RNA, the most abundant and least understood are long non-coding RNAs, or lncRNAs. About 16,000 lncRNAs have been identified in humans, but functions for the vast majority are unknown.

"Since so much of the genome is being transcribed into RNA, it would seem that there would be a vast wealth of potential therapeutic targets out there that have not really been studied," says the team leader, CSHL Professor David Spector, who is also Director of Research at the Laboratory.

While the exact functions of most lncRNAs remain to be discovered, it has already been shown that in some cases their over-expression is linked to specific cancers, including breast cancer, prostate cancer and leukemia. Earlier this year, Spector's team demonstrated that a lncRNA called Malat1 was a critical regulator of breast cancer progression. Eliminating that particular lncRNA in a mouse model of luminal B breast cancer caused the cells within the primary tumor to change character, and resulted in a significant reduction in metastasis.

"That study provided significant motivation for us to look for other lncRNAs that might also be over-expressed and impact breast cancer," says Spector.

Spector and his team, led by postdoctoral fellow Sarah Diermeier, systematically sifted through the vast database of lncRNAs to identify those that are expressed more often in tumors, relative to normal mammary cells.

The team found several hundred lncRNAs that were expressed at higher than normal levels in both types of aggressive mouse tumors that they tested: luminal B and Her-2 positive. They then performed an extensive computational analysis to prioritize a subset of 30 of these lncRNAs that they dubbed Mammary Tumor Associated RNAs, or MaTARs.

In collaboration with Ionis Pharmaceuticals, Spector and his colleagues designed a series of molecules that bind tightly to, and thereby destroy, specific RNA sequences. They used these so-called "antisense" molecules to wipe out individual MaTARs in mammary cancer-derived organoids - three-dimensional models of tumor cells that reproduce many features of real tumors.

The researchers found that individually eliminating 20 of the 30 MaTARs in these organoids diminished features associated with cancer, including cell proliferation, invasion, and migration.

"We now have an innovative way of destroying RNA targets inside live cells and assessing whether a tumor is dependent on them for survival," says Spector.

The team's next step is administering antisense molecules to degrade specific MaTARs in mice, in the hope that this will decrease primary tumor mass and/or metastasis. Should those experiments be successful, Spector's team will perform additional preclinical tests in human tumor samples to better identify which subgroups of patients would benefit most from being treated with antisense molecules to eradicate certain lncRNAs or clusters of lncRNAs.

"We think these tests will have particular relevance for personalized medicine," says study first author Diermeier "We imagine a situation where organoids can be derived from an individual's tumor, grown up in a dish, and act as a platform for figuring out which antisense molecules comprise the optimal treatment for a patient."

The research described in this release was supported by a National Cancer Institute grant 5PO1CA013106-Project 3, the Manhasset Women's Coalition Against Breast Cancer, the Simons Foundation, and a Cancer Center Support Grant to Cold Spring Harbor Laboratory (2P30CA45508).

[Ten years ago (October 2006) I organized the International Symposium that became the first body of scientists to declare that "Junk DNA" was anything but. I also presented in Budapest, Hungary, that the so-called "non-coding" DNA is actually "not directly coding": it regulates the fractal growth. By now, a Decade later, there is no sane genomist who would stick with "the biggest mistake in the history of molecular biology", calling 98% of the human DNA "Junk". The Principle of Recursive Genome Function elaborated how proteins return to the DNA to retrieve "auxiliary information" on how the fractal genome governs the growth of fractal organisms. A Decade later, I fly to Budapest, Hungary, to connect the global venture with the $30 M "Oncogenomics Initiative" of the government of Hungary. Essentially, Hungary (and Poland), with their formerly centralized health care, can now match digitized medical data with genome sequences. This one only took a Decade...]

Microsoft initiatives treat cancer as a computing problem

By Rick Massimo

September 21, 2016

WASHINGTON — Medical research has traditionally treated cancer as a disease to be cured, but Microsoft’s latest efforts to aid medical professionals treat it as a puzzle to be solved.

The company recently announced a range of initiatives in which computer scientists are working out the complexities of cancer and the best options for treatments. The efforts range from a way to sort through the mountain of research data on cancer to a “moonshot” effort to program cells to fight cancer and other diseases.

Much of the work happens at the genetic level, Microsoft and its associated scientists say.

“We’re in a revolution with respect to cancer treatment,” said David Heckerman, a scientist and senior director of the genomics group at Microsoft.

“Even 10 years ago people thought that you treat the tissue: You have brain cancer, you get brain cancer treatment. You have lung cancer, you get lung cancer treatment. Now, we know it’s just as, if not more, important to treat the genomics of the cancer, e.g. which genes have gone bad in the genome.”

That generates a mountain of information, both in terms of genetic mutations and combinations, and in the research on the genomes.

Bloomberg Technology reports that there are more than 800 cancer medicines and vaccines in clinical trials. The reports on all these drugs are far more than any oncologist can sift through — and that’s where “machine learning” comes in.

Microsoft gives as an example of machine learning a program’s ability to recognize photos of cats based on previous photos of cats a system has seen. The key is to translate that to the task of sifting through research.

Microsoft’s Hoifung Poon says his Hanover project is designed to help scientists sift through all the data. He showed Bloomberg Technology pictures of a patient whose cancer had been knocked back but had reappeared.

“There are already hundreds of these kinds of specifically targeted drugs, so even if you think let’s pair two drugs, there are tens of thousands of options,” Poon said. “It’s very hard to wrestle with. You might need several drugs to lock down all of the tumor’s pathways.”

Meanwhile, research is looking at the cancer gene directly.

“The tools that are used to model and reason about computational processes — such as programming languages, compilers and model checkers — are used to model and reason about biological processes,” Microsoft said in the statement.

Jasmin Fisher, a Microsoft researcher and biochemist in Cambridge, England, says that she’s taking a computational approach to the process that turns a cell cancerous. She’s trying to figure out the behavior of a cell the way a computer scientist would decode a computer program he didn’t create. The goal, Microsoft says, is to figure out in a schematic way the behavior of a healthy cell, compare it to a cancerous cell, and work out how it can be fixed.

“If you can figure out how to build these programs, and then you can debug them, [cancer is] a solved problem,” she said.

[There is a difference between software (as lines of instructions) and a mathematical code (e.g. how Z=Z^2+C encodes a Mandelbrot fractal). These days ALL BIG IT companies are engaged in Genome Informatics, from different angles. My tenet is aimed at identifying the mathematics intrinsic to recursive genome regulation (see my FractoGene; "Fractal Genome Grows Fractal Organisms", with the complexity of organisms corresponding to the amount of "Junk DNA").]
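The distinction can be made concrete: the entire Mandelbrot set is specified by iterating the single recursion z → z² + c for each point c. A minimal Python sketch of this membership test (an illustration added for this compilation, not code from any cited work):

```python
def in_mandelbrot(c, max_iter=100):
    """Return True if the point c appears to belong to the Mandelbrot set."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c          # the entire "mathematical code" is this one recursion
        if abs(z) > 2.0:       # escape radius 2: the orbit diverges, c is outside
            return False
    return True

print(in_mandelbrot(0j))       # the origin never escapes -> True
print(in_mandelbrot(1 + 1j))   # escapes within a few iterations -> False
```

A few lines of recursion thus encode unbounded structural complexity, which is the sense in which a fractal "code" differs from software written as explicit lines of instructions.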

‘Junk DNA’ tells mice—and snakes—how to grow a backbone

By Diana Crow, Aug. 1, 2016, 11:45 AM


Why does a snake have 25 or more rows of ribs, whereas a mouse has only 13? The answer, according to a new study, may lie in “junk DNA,” large chunks of an animal’s genome that were once thought to be useless. The findings could help explain how dramatic changes in body shape have occurred over evolutionary history.

Scientists began discovering junk DNA sequences in the 1960s. These stretches of the genome—also known as noncoding DNA—contain the same genetic alphabet found in genes, but they don’t code for the proteins that make us who we are. As a result, many researchers long believed this mysterious genetic material was simply DNA debris accumulated over the course of evolution. But over the past couple decades, geneticists have discovered that this so-called junk is anything but. It has important functions, such as switching genes on and off and setting the timing for changes in gene activity.

Recently, scientists have even begun to suspect that noncoding DNA plays an important role in evolution. Body shape is a case in point: “There’s an immense amount of variation in body length across vertebrates, but within species the number of ribs and so forth stays almost exactly the same,” says developmental biologist Valerie Wilson of the University of Edinburgh. “There must be some ways to alter the expression of those [genes] regulating evolution to generate this massive amount of variation that we see across the vertebrates.”

To explore this question further, researchers led by developmental biologist Moises Mallo of the Gulbenkian Institute of Science in Oeiras, Portugal, turned to an unusual mouse. Most mice have 13 pairs of ribs, but a few strains of mutant mice bred by Mallo and colleagues have 24 pairs. Their rib cages extend all the way along their backbone, down to the hind legs, similar to those of snakes.

Snakes, such as this Gaboon viper, can have more than 100 pairs of ribs.

Stefan3345/Wikimedia Commons

The research team traced the extra ribs to a mutation deactivating a gene called GDF11, which puts the brakes on another gene that helps stem cells retain their ability to morph into many cell types. Without GDF11 to slow down that second gene—OCT4—the mice grew extra vertebrae and ribs. But GDF11 seemed just fine in snakes. So what was regulating vertebral growth in snakes? The researchers decided to look at the DNA surrounding OCT4 to see whether something else was going on.

The OCT4 gene itself is similar in snakes, mice, and humans, but the surrounding noncoding DNA—which also plays a role in slowing down OCT4—looks different in snakes. To see whether this junk DNA gives snakes a longer-lasting growth spurt, Mallo and his colleagues spliced noncoding snake DNA into normal mouse embryos near OCT4. The embryos grew large amounts of additional spinal cord, suggesting that this junk DNA does indeed play a role in body shape regulation, the team reports this month in Developmental Cell.

But the researchers will have to do more to definitively confirm their findings, says developmental biologist and snake specialist Michael Richardson of Leiden University in the Netherlands, who was not involved in the study. Snakes would have to be genetically engineered with noncoding DNA that switches off OCT4 early, as it does in most other vertebrates. If this noncoding DNA is in fact the cause of snakes’ extra-long midsections, then snakes with that version governing OCT4 would be much shorter. Unfortunately, genetically engineering snakes is almost impossible because there’s no way to get access to very early embryos. “When the snake lays an egg, it’s already got a little head and about 26 vertebrae, so it’s already well on the way [to becoming a fully formed snake]. That way we miss out on the early genes,” Richardson explains.

Developmental biologists say OCT4 could be another example of evolution using noncoding DNA to change up animal anatomy. “We know that oftentimes it’s not the gene itself that changed—it’s the flanking regions or the regulatory regions,” Richardson says. “What they’ve shown quite clearly here is that the OCT4 gene isn’t different but the timing [of its expression] is prolonged.”

Snakes are an extreme variation. Almost all vertebrates have a head, a neck, a rib cage, and a tail (or tail region), but the lengths of those sections vary wildly among different species. “A flamingo has a very long neck, but snakes have a huge trunk. It’s not only the tail that’s longer,” Mallo explains. “The ingredients are not changing. The amounts and the timing of adding ingredients are.”

[The above is an obvious proof of "The Principle of Recursive Genome Function" (2008). No matter how visibly obvious "Jumping Genes" were, most people missed the key concept for 40 years. Therefore, one cannot really complain that after a mere 8 years the Fractal Recursion to "non-coding DNA" is breaking through. An earlier Fractal Paper (co-authored by Pellionisz and the late Malcolm Simons, an early champion of "Junk is anything but" who, in his late years, commuted from Melbourne to me in Silicon Valley and became convinced of my FractoGene) made predictions that were supported in just about 4 years by independent experimentalists. A core concept of Fractal Recursion is not just what to repeat, but WHEN TO STOP recursion. The paper gave a lucid explanation of repetitions coming to a halt when no supporting information is available any more. Andras_at_Pellionisz_dot_com]

The Oncoming Double Technology Disruption of New School Genomics: SmidgION by Oxford Nanopore

[My intellectual "mentor" (whom I could never meet), John von Neumann, played a dual role in how the "Nuclear Age" depended on two parallel disruptions: the disruptive science of "Nuclear Physics" (suddenly, Newtonian mechanics had to yield to the brand-new science of quantum mechanics), and, to make it all happen, the disruptive "Nuclear Technology" that had to be developed (at the cost of the Manhattan Project). Likewise, New School Genomics has to build up its very own mathematical foundation (biology is obviously a multi-scale mathematical object, see Eric Schadt or Michael Levitt). In parallel, the technology of both sequencing and interpretation requires disruptive development. Below, we see it coming together in the "user-friendly" smartphone (in a way, predicted in my YouTube of 2010 and recently advocated by Eric Schadt and Eric Topol together), culminating in the miniaturized Oxford Nanopore-based USB device (see below). At the time of Google teaming up with Stanford, the fundamental questions that Eric Schadt and Eric Topol discussed arise in full grandiosity: should some developments (just as with nuclear technology) be kept "proprietary", or must they be spread widely and openly in Academia (so that a new crop of students can be groomed for the colossal challenge)? Here we have it: Google is already engaged, and Apple is on the brink of becoming perhaps even more crucial, thanks to Steve Jobs' "DNA" of user-friendliness. Technology alone is not enough, however. For the Manhattan Project, technology was marshalled by the generals, but scientists figured out how it would all work together! For ultimate miniaturization, our technology may want to adopt the same "proprietary compression" that the genome itself uses: the fractal algorithm. It was Barnsley who showed that fractals can provide 30,000-fold compression! Big IT is already competing; for anyone to win, it may be crucial to secure proprietary IP. - Andras_at_Pellionisz_dot_com]

SmidgION uses the same core nanopore sensing technology as MinION and PromethION but will be designed for use with smartphones or other mobile, low-power devices. It is designed to cater for a broad range of field-based analyses; potential applications may include remote monitoring of pathogens in an outbreak of infectious disease; on-site analysis of environmental samples such as water/metagenomics samples; real-time species ID for analysis of food, timber, wildlife or even unknown samples; field-based analysis of agricultural environments; and much more.

Dr. Pellionisz comments on the 7 R&D articles posted on Labor Day, 2016. Andras_at_Pellionisz_dot_com

Jim Watson was asked "what happened when you came up with The Double Helix?" His answer was revealing: "Nothing happened! For 7 years nobody even referenced our work." Barbara McClintock, with her astounding paradigm shift of "jumping genes", had to wait 40 years till she "suddenly" received proper recognition.

Below I present SEVEN articles that appeared over a year, to show the fruits of labor of my original contribution "FractoGene": Fractal Genome Grows Fractal Organisms. The priority date of the "lucid heresy", a double-disruption concept, was secured in 2002 and heralded immediately; the patent issued in 2012, with about the next decade left in force. With the two cardinal mistakes ("Junk DNA" and "Central Dogma") reversed, "The Principle of Recursive Genome Function" emerged, meeting the first requirement of a peer-reviewed science paper, with the additional requirement satisfied that predictions were made and confirmed by independent experimentalists. The subject of the patent and additional trade secrets was announced widely in a Google Tech Talk YouTube with many thousands of views, opening a "New School Genomics". FractoGene explains genome regulation in mathematical terms: the DNA fractal, through an operator, generates fractal (physiological as well as pathological, e.g. cancerous) growth of organisms. In itself, the correlation yields statistical diagnosis and probabilistic prognosis (and precision therapy). George Church of Harvard invited the FractoGene presentation to his Cold Spring Harbor Conference (September 2009), and within weeks substantiation by independent workers arrived, a mere 7 years after the "Eureka" concept (the Science cover of the Fractal Globule as the Hilbert fractal, October 2009). Within 2 more short years, Mirny established that a compromised fractal globule is linked to cancer (2011).

The second 7-year period (2009-2016) produced ample R&D evidence by independent researchers that the FractoGene concept is not only on the right track, but presently is the single most coherent mathematical (software-enabling) handle on genome (mis)regulation. Early proponents, as mentioned, were George Church (Harvard) and Eric Schadt (at that time at Pacific Biosciences, now Head of the $600 M Mount Sinai Institute of Genomics and Multiscale Biology). In addition to the 7 articles of the last year below, it is noteworthy that the approach was endorsed by the pioneer of "fractals in biology" (G. Losa of Switzerland) and Stanford Nobelist Michael Levitt.

All "Academic" requirements, therefore, were satisfied by 2016, after the 2 x 7 = 14 years of effort since 2002.

So what is so special about 2016?

The announcement a few weeks ago, that with the new leadership of Stanford University from Sept. 1, 2016, a Stanford-Google joint effort was launched, aimed at "Precision Medicine" (most importantly, a paramount effort to beat cancer).

Why is it so vastly important? Because in "non-profit Academia" (e.g. Stanford University, or the countless academic institutions behind the 7 landmark papers cited below) anybody is "free to use the utility inherent in correlating genomic and organismal fractals". However, Google is the colossus of "FOR-PROFIT" Big IT (and it is not anchored to any single Big Pharma, for instance). Therefore, the driver of the New School of Genomics (just as it was with the Internet) has shifted from mostly government-supported non-profit institutions to the ruthlessly "for-profit" business of the biggest of Big Information Technology!

Already, there are unmistakable signs of "consolidation" resulting from the entirely changed ecosystem of New School Genomics. Stanford and Google will lead, but because of their sheer size and complexity they will not move the fastest. Smaller entities will mushroom, ultimately to be bought by Google (or perhaps Apple?). Investors will scramble to secure available Intellectual Property (patents & trade secrets).

Some, like me, worked on "the Internet Boom", thus the pattern is unmistakable:

"The Next Big Thing in Silicon Valley will be Genome Informatics".

by andras_at_pellionisz_dot_com ]

Multiscale modeling of cellular epigenetic states: stochasticity in molecular networks, chromatin folding in cell nuclei, and tissue pattern formation of cells

Jie Liang,1,* Youfang Cao,2 Gamze Gürsoy,1 Hammad Naveed,3 Anna Terebus,1 and Jieling Zhao1

Crit Rev Biomed Eng. Author manuscript; available in PMC 2016 Aug 8.

Published in final edited form as:

Crit Rev Biomed Eng. 2015; 43(4): 323–346.

doi: 10.1615/CritRevBiomedEng.2016016559

PMCID: PMC4976639



Genome sequences provide the overall genetic blueprint of cells, but cells possessing the same genome can exhibit diverse phenotypes. There is a multitude of mechanisms that control cellular epigenetic states and dictate the behavior of cells. Among these, networks of interacting molecules, often under stochastic control, can have a different landscape of cellular states depending on the specific wirings of molecular components and the physiological conditions. In addition, chromosome folding in three-dimensional space provides another important control mechanism for selective activation and repression of gene expression. Fully differentiated cells with different properties grow, divide, and interact through mechanical forces and communicate through signal transduction, resulting in the formation of complex tissue patterns. Developing quantitative models to study these multi-scale phenomena and to identify opportunities for improving human health requires development of theoretical models, algorithms, and computational tools. Here we review recent progress made in these important directions.

The fractal globule (FG) model[13] was the first model developed to describe the global folding properties of the human genome, as it can explain the scaling relationship between Pc(s) and s. However, it does not account for the leveling-off effects observed in FISH experiments.[10,11] Subsequently, the Strings and Binders Switch (SBS) model was developed, which pointed to a more heterogeneous structural ensemble, in which the scaling properties of the individual structures depend on the concentration of binder molecules such as architectural proteins.[53] However, scaling in the SBS model strongly depends on the choice of model parameters, and all observed scaling properties cannot be accounted for with a fixed set of parameters.

[Dr. Pellionisz comments on the 7 R&D articles posted on Labor Day, 2016. Andras_at_Pellionisz_dot_com]

Fractal Dimension of Tc-99m DTPA GSA Estimates Pathologic Liver Injury due to Chemotherapy in Liver Cancer Patients.

Ann Surg Oncol. 2016 Jul 20. [Epub ahead of print]

Hiroshima Y1, Shuto K1, Yamazaki K2, Kawaguchi D1, Yamada M2, Kikuchi Y1, Kasahara K1, Murakami T1, Hirano A1, Mori M1, Kosugi C1, Matsuo K1, Ishida Y2, Koda K1, Tanaka K3.

Author information


Chemotherapy-induced liver injury after potent chemotherapy is a considerable problem in patients undergoing liver resection. The aim of this study was to assess the relationship between the fractal dimension (FD) of Tc-99m diethylenetriaminepentaacetic acid (DTPA) galactosyl human serum albumin (GSA) and pathologic change of liver parenchyma in liver cancer patients who have undergone chemotherapy.


We examined 34 patients (10 female and 24 male; mean age, 68.5 years) who underwent hepatectomy. Hepatic injury was defined as steatosis more than 30 %, grade 2-3 sinusoidal dilation, and/or steatohepatitis Kleiner score ≥4. Fractal analysis was applied to all images of Tc-99m DTPA GSA using a plug-in tool on ImageJ software (NIH, Bethesda, MD). A differential box-counting method was applied, and FD was calculated as a heterogeneity parameter. Correlations between FD and clinicopathological variables were examined.


FD values of patients with steatosis and steatohepatitis were significantly higher than those without (P < .001 and P < .001, respectively). There was no difference between the FD values of patients with and without sinusoidal dilatation (P = .357). Multivariate logistic regression showed FD as the only significant predictor for steatosis (P = .005; OR 36.5; 95 % CI 3.0-446.3) and steatohepatitis (P = .012; OR 29.1; 95 % CI 2.1-400.1).


FD of Tc-99m DTPA GSA was the significant predictor for fatty liver disease in patients who underwent chemotherapy. This new modality is able to differentiate steatohepatitis from steatosis; therefore, it may be useful for predicting chemotherapy-induced pathologic liver injury.
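The box-counting idea behind a fractal-dimension estimate like the one above can be sketched as follows. This is a plain binary box count, assuming square images with power-of-two sides; the differential box-counting variant used in the study additionally accounts for the gray-level range within each box, and the authors ran their analysis as an ImageJ plug-in, which is not reproduced here:

```python
import numpy as np

def box_count(img, box_size):
    """Count box_size x box_size boxes that contain at least one set pixel."""
    n = img.shape[0] // box_size
    trimmed = img[:n * box_size, :n * box_size]
    blocks = trimmed.reshape(n, box_size, n, box_size)
    return int(np.any(blocks, axis=(1, 3)).sum())

def fractal_dimension(img, box_sizes=(2, 4, 8, 16)):
    """Estimate the box-counting dimension as the log-log slope of N(s) vs s."""
    counts = [box_count(img, s) for s in box_sizes]
    slope, _ = np.polyfit(np.log(box_sizes), np.log(counts), 1)
    return -slope  # N(s) ~ s**(-D), so the dimension is minus the slope

# Sanity check: a completely filled square is 2-dimensional.
square = np.ones((64, 64), dtype=bool)
print(round(fractal_dimension(square), 2))  # -> 2.0
```

A single FD number summarizing spatial heterogeneity is what allows a scintigraphy image to serve as a predictor in the regression models of the abstract.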

[Dr. Pellionisz comments on the 7 R&D articles posted on Labor Day, 2016. Andras_at_Pellionisz_dot_com]

Unique fractal evaluation and therapeutic implications of mitochondrial morphology in malignant mesothelioma

Sci Rep. 2016; 6: 24578.

Published online 2016 Apr 15. doi: 10.1038/srep24578

PMCID: PMC4832330

Frances E. Lennon,1 Gianguido C. Cianci,2 Rajani Kanteti,1 Jacob J. Riehm,1 Qudsia Arif,3 Valeriy A. Poroyko,4 Eitan Lupovitch,5 Wickii Vigneswaran,6 Aliya Husain,3 Phetcharat Chen,7 James K. Liao,7 Martin Sattler,8 Hedy L. Kindler,1 and Ravi Salgia,1,*

Author information ► Article notes ► Copyright and License information►


Malignant mesothelioma (MM) is an intractable disease with limited therapeutic options and grim survival rates. Altered metabolic and mitochondrial functions are hallmarks of MM and most other cancers. Mitochondria exist as a dynamic network, playing a central role in cellular metabolism. MM cell lines display a spectrum of altered mitochondrial morphologies and function compared to control mesothelial cells. Fractal dimension and lacunarity measurements are a sensitive and objective method to quantify mitochondrial morphology and, most importantly, are a promising predictor of response to mitochondrial inhibition. Control cells have high fractal dimension and low lacunarity and are relatively insensitive to mitochondrial inhibition. MM cells exhibit a spectrum of sensitivities to mitochondrial inhibitors. Low mitochondrial fractal dimension and high lacunarity correlate with increased sensitivity to the mitochondrial inhibitor metformin. Lacunarity also correlates with sensitivity to Mdivi-1, a mitochondrial fission inhibitor. MM and control cells have similar sensitivities to cisplatin, a chemotherapeutic agent used in the treatment of MM. Neither oxidative phosphorylation nor glycolytic activity correlated with sensitivity to either metformin or Mdivi-1. Our results suggest that mitochondrial inhibition may be an effective and selective therapeutic strategy in mesothelioma, and identify mitochondrial morphology as a possible predictor of response to targeted mitochondrial inhibition.
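The lacunarity measurement paired with fractal dimension above can be sketched with the standard gliding-box definition (a hedged illustration assuming the common Allain-Cloitre formulation; the study's actual image-analysis pipeline is not reproduced here):

```python
import numpy as np

def lacunarity(img, box_size):
    """Gliding-box lacunarity: second moment of the box 'mass' (sum of pixel
    values per box) divided by the squared first moment. Equals 1 for a
    perfectly uniform texture; higher values mean larger gaps at this scale."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    masses = [
        img[i:i + box_size, j:j + box_size].sum()
        for i in range(h - box_size + 1)
        for j in range(w - box_size + 1)
    ]
    masses = np.asarray(masses)
    return masses.var() / masses.mean() ** 2 + 1.0

# A uniform image has the minimum lacunarity of exactly 1.
print(lacunarity(np.ones((16, 16)), 4))  # -> 1.0
```

Fractal dimension captures how much space a mitochondrial network fills; lacunarity captures how "gappy" that filling is, which is why the two measures together discriminate morphologies that a single number cannot.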

[Dr. Pellionisz comments on the 7 R&D articles posted on Labor Day, 2016. Andras_at_Pellionisz_dot_com]

Systems Medicine of Cancer: Bringing Together Clinical Data and Nonlinear Dynamics of Genetic Networks

Comput Math Methods Med. 2016; 2016: 7904693.

Published online 2016 Jan 11. doi: 10.1155/2016/7904693

PMCID: PMC4737019

Systems Medicine of Cancer: Bringing Together Clinical Data and Nonlinear Dynamics of Genetic Networks

Konstantin B. Blyuss, 1 Ranjit Manchanda, 2 Jürgen Kurths, 3 Ahmed Alsaedi, 4 and Alexey Zaikin 5 , 6 , *

1Department of Mathematics, University of Sussex, Falmer, Brighton BN1 9QH, UK

2Barts Cancer Institute, Queen Mary University of London, London EC1M 6BQ, UK

3Potsdam Institute for Climate Impact Research, 14473 Potsdam, Germany

4Department of Mathematics, King AbdulAziz University, Jeddah 21589, Saudi Arabia

5Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod 603950, Russia

6Institute for Women's Health and Department of Mathematics, University College London, London WC1E 6AU, UK

*Alexey Zaikin: Email:

Author information ▼ Article notes ► Copyright and License information ►

The last few years have witnessed major developments in experimental techniques for analysis of cancer, including full genome sequencing, measurement of multiple oncomarkers, DNA methylation profile, genomic profile, or transcriptome of the pathological tissue. Whilst these advances have provided some additional understanding of cancer onset and development, the problem of cancer is far from solved. Since the amounts of data emerging from these are very substantial, and the data itself can be extremely heterogeneous (qualitative, quantitative, and verbal descriptions), this makes standard data analysis techniques not practically applicable. One promising direction of addressing the problem of analysis of cancer data is to use modern and sophisticated methods from systems biology and cybernetics. Some challenges of this approach are connecting the results of mathematical analysis with real clinical data, and bridging the existing gaps between the communities of clinicians and applied mathematicians.

This special issue showcases some of the most recent developments in the areas of nonlinear dynamics, mathematical analysis and modeling, data analysis, and simulations in the area of cancer. Having received 11 submissions, six best papers were chosen and published to provide an overview of the research field and to motivate further study.

In “Optimal Placement of Irradiation Sources in the Planning of Radiotherapy: Mathematical Models and Methods of Solving,” O. Blyuss et al. analyse optimal choice of placement of irradiation sources during radiotherapy as an optimization problem. Using the techniques of nondifferentiable optimization and an approximate Klepper's algorithm, the authors derive a new approach for solving this problem and illustrate their methodology with actual numerical simulations of different scenarios.

The paper “Time-Delayed Models of Gene Regulatory Networks” by K. Parmar et al. provides an overview of existing mathematical techniques applicable to modeling the dynamics of gene regulatory networks. The authors focus on the effects of transcriptional and translational time delays and demonstrate how the stability of different steady states and the associated behavior change depending on system parameters and the time delays. They contrast the dynamics of the fast-mRNA regime, as described by a reduced model, with the dynamics of the full system to illustrate possible differences in behaviour and to highlight the important role played by the time delays.

H. Namazi and M. Kiminezhadmalale in their paper titled “Diagnosis of Lung Cancer by Fractal Analysis of Damaged DNA” discuss how DNA sequences emerging from patient blood plasma can be studied by analysing DNA walks. Comparing the features of DNA profiles for healthy individuals with those for lung cancer patients, the authors derive several predictive criteria for lung cancer based on Hurst exponents and fractal dimension.
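The "DNA walk" construction referenced above can be sketched as follows. The purine/pyrimidine mapping and the crude increment-scaling Hurst estimator used here are common conventions and are assumptions of this illustration; the paper's exact mapping and estimator may differ:

```python
import numpy as np

def dna_walk(seq):
    """Map purines (A/G) to +1 and pyrimidines (C/T) to -1, then cumulate."""
    steps = np.array([1 if base in "AG" else -1 for base in seq])
    return np.cumsum(steps)

def hurst_exponent(walk, lags=(8, 16, 32, 64)):
    """Crude Hurst estimate: for a self-affine walk, the standard deviation of
    increments over lag n scales as n**H, so H is the log-log slope."""
    stds = [np.std(walk[n:] - walk[:-n]) for n in lags]
    slope, _ = np.polyfit(np.log(lags), np.log(stds), 1)
    return slope

# An uncorrelated random sequence gives H near 0.5; persistent long-range
# correlations push H above 0.5, which is the kind of feature compared
# between healthy and patient-derived DNA profiles.
rng = np.random.default_rng(0)
random_seq = "".join(rng.choice(list("ACGT"), size=4096))
print(round(hurst_exponent(dna_walk(random_seq)), 2))
```

The predictive criteria in the paper rest on exactly this kind of summary statistic: a sequence is reduced to a walk, and the walk to a few roughness numbers.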

The circadian physiology, clock genes, and cell cycle may critically affect results of cancer chronotherapeutics; hence, investigation of gene regulatory networks controlling circadian rhythms is an irreplaceable part of systems medicine of cancer. R. Heussen and D. Whitmore study how the circadian clock is entrained by light, and they experimentally investigate whether there exists a threshold for this synchronization. Moreover, analysis of the constructed numerical model shows that stochastic effects are an essential feature of the circadian clock that provides an explanation of signal decay from the zebrafish cell lines in prolonged darkness.

Some cancer types are especially dangerous because of their high progression rate, and malignant gliomas represent one of the most severe types of tumors. Modern medical approaches offer sophisticated treatment procedures based on microsurgical tumor removal combined with radio- and chemotherapy. The success of these surgical resections depends on the clarity of intraoperative diagnostics of human gliomas, and the review of O. Tyurikova et al. surveys the wide diversity of modern diagnostic methods used in the course of glial tumor resections.

Recent developments in systems biology and medicine have enabled us to analyse and infer networks of interactions and to search for new network oncomarkers. S.-M. Wang et al., in their paper about the identification of dysregulated genes and pathways in clear cell renal cell carcinoma, provide an example of this research direction. Using systematic tracking of the dysregulated modules of reweighted protein-protein interaction networks, they successfully identified dysregulated genes and pathways for this type of cancer, thus gaining insights into possible biological markers or targets for drug development.

We hope that the readers will find the papers published in this special issue interesting, and this will encourage and foster further research on developing new and efficient techniques in systems biology and data analysis for predicting the onset and for monitoring the progression of cancer.

Konstantin B. Blyuss

Ranjit Manchanda

Jürgen Kurths

Ahmed Alsaedi

Alexey Zaikin

[Dr. Pellionisz comments on the 7 R&D articles posted on Labor Day, 2016. Andras_at_Pellionisz_dot_com]

ASXL2 promotes proliferation of breast cancer cells by linking ERα to histone methylation.

Oncogene. 2016 Jul 14;35(28):3742-52. doi: 10.1038/onc.2015.443. Epub 2015 Dec 7.

Park UH1, Kang MR2, Kim EJ3, Kwon YS1, Hur W4, Yoon SK4, Song BJ5, Park JH6, Hwang JT6, Jeong JC7, Um SJ1.

Author information


Estrogen receptor alpha (ERα) has a pivotal role in breast carcinogenesis by associating with various cellular factors. Selective expression of additional sex comb-like 2 (ASXL2) in ERα-positive breast cancer cells prompted us to investigate its role in chromatin modification required for ERα activation and breast carcinogenesis. Here, we observed that ASXL2 interacts with ligand E2-bound ERα and mediates ERα activation. Chromatin immunoprecipitation-sequencing analysis supports a positive role of ASXL2 at ERα target gene promoters. ASXL2 forms a complex with histone methylation modifiers including LSD1, UTX and MLL2, which all are recruited to the E2-responsive genes via ASXL2 and regulate methylations at histone H3 lysine 4, 9 and 27. The preferential binding of the PHD finger of ASXL2 to the dimethylated H3 lysine 4 may account for its requirement for ERα activation. On ASXL2 depletion, the proliferative potential of MCF7 cells and tumor size of xenograft mice decreased. Together with our finding on the higher ASXL2 expression in ERα-positive patients, we propose that ASXL2 could be a novel prognostic marker in breast cancer.

PMID: 26640146 DOI: 10.1038/onc.2015.443

[Dr. Pellionisz comments on the 7 R&D articles posted on Labor Day, 2016. Andras_at_Pellionisz_dot_com]

Researchers identify two proteins important for the demethylation of DNA

January 12, 2016

Scientists at the Institute of Molecular Biology (IMB) in Mainz have identified a missing piece of the puzzle in understanding how epigenetic marks are removed from DNA. The research on DNA demethylation sheds new light on a fundamental process that is important in development and in diseases such as cancer. Epigenetics is defined as heritable changes in gene expression that do not derive from changes in the DNA sequence itself.

Epigenetic processes play a central role in a broad spectrum of diseases, such as cardiovascular disease, neurodegenerative disorders and cancer. One of the most prominent epigenetic processes is DNA methylation, where one of the four bases of animal DNA is marked by a methyl group. DNA methylation typically reduces the activity of surrounding genes.

A lot is known about how methyl marks are put onto the DNA, but how they are removed – a process called DNA demethylation – and, thus, how genes are reactivated is still not well understood. In their recent study, published in Nature Structural and Molecular Biology, IMB scientists have identified two proteins, Neil1 and Neil2, that are important for the demethylation of DNA. "These proteins are a missing link in the chain of events that explain how DNA can be efficiently demethylated," said Lars Schomacher, first author on the paper.

Intriguingly, DNA demethylation has been shown to involve proteins of the DNA repair machinery. Thus, epigenetic gene regulation and genome maintenance are linked. Schomacher and his colleagues identified Neil1 and Neil2 as two more repair factors that not only protect the DNA's integrity but are also involved in DNA demethylation. The researchers showed that the role of the Neils is to boost the activity of another protein, Tdg, which is known to be of central importance for DNA demethylation.

Both the Neils and Tdg are essential proteins for survival and development. Schomacher et al. carried out experiments where they removed either one of these proteins in very early frog embryos. They found that the embryos had severe problems developing and died before reaching adulthood.

Failure in setting and resetting methyl marks on DNA is involved in developmental abnormalities and cancer, where cells forget what type they are and start to divide uncontrollably. Understanding which proteins are responsible for DNA demethylation will help us to understand more about such disease processes, and may provide new approaches to develop treatments for them.

More information: Lars Schomacher et al. Neil DNA glycosylases promote substrate turnover by Tdg during DNA demethylation, Nature Structural & Molecular Biology (2016). DOI: 10.1038/nsmb.3151

[Dr. Pellionisz comments on the 7 R&D articles posted on Labor Day, 2016. Andras_at_Pellionisz_dot_com]

Mathematical Modelling and Prediction of the Effect of Chemotherapy on Cancer Cells

Hamidreza Namazi, Vladimir V. Kulish & Albert Wong

Scientific Reports 5, Article number: 13583 (2015)


Published online: 28 August 2015


Cancer is a class of diseases characterized by out-of-control cell growth that damages DNA. Many treatment options for cancer exist, the primary ones including surgery, chemotherapy, radiation therapy, hormonal therapy, targeted therapy and palliative care. Which treatments are used depends on the type, location and grade of the cancer, as well as the person’s health and wishes. Chemotherapy is the use of medication (chemicals) to treat disease; more specifically, chemotherapy typically refers to the destruction of cancer cells. Considering the diffusion of drugs in cancer cells and the fractality of DNA walks, in this research we model and predict the effect of chemotherapy on cancer cells using the Fractional Diffusion Equation (FDE). The employed methodology is useful not only for analysing the effect of the specific drug and cancer considered in this research, but can also be extended to different drugs and cancers.


Cell production and death are regulated in the human body in an orderly way. In the case of cancer, however, the division and growth of cells is out of control. The damaged cells start to occupy more and more space in a part of the body, expelling the useful healthy cells; the resulting mass is called a tumor. Fighting these cancer cells and changing the way they proliferate and accumulate is therefore a critical issue in medical science.

Scientists have developed different methods for the treatment of cancer, among them surgery, chemotherapy, radiation therapy, hormonal therapy, targeted therapy and palliative care. Employing less-invasive methods has always had a critical role in patient treatment. Chemotherapy, as a method of cancer treatment, applies drugs that affect the cancer cell’s ability to divide and reproduce. The drug weakens and destroys the cancer cells, either applied directly to the cancer site or delivered through the bloodstream.

Over the years, researchers have worked on mathematical modelling of the effect of chemotherapy on cancer treatment, employing different types of differential equations. For instance, Pillis et al. developed a mathematical model based on a system of ordinary differential equations that analyses cancer growth on a cell-population level after chemotherapy1. Despite the overall success of this mathematical model, it could not explain the effects of IL-2 on a growing tumour. So, in another work, Pillis et al. updated their model by introducing new parameters whose values are drawn from empirical data specific to different patients. The new model allows for production of endogenous IL-2, IL-2-stimulated NK cell proliferation and IL-2-dependent CD8+ T-cell self-regulation2. In another work, using a system of delayed differential equations, Liu and Freedman proposed a mathematical model of vascular tumor treatment using chemotherapy. This model represents the number of blood vessels within the tumor and changes in the mass of healthy cells and competing parenchyma cells; using the proposed model, they considered a continuous treatment for tumor growth3 (see also4,5). In a closer approach, some researchers focused specifically on mathematical modelling of the diffusion of anti-cancer drugs into the cancer tumor6,7,8,9,10,11.

Beyond all the work done on mathematical modelling of cancer treatment using chemotherapy, no work has been reported that analyses and models this treatment by linking the DNA walk and drug diffusion. In this research we model the response of the tumor to an anti-cancer drug. For this purpose we consider the diffusion of the drug in a solid tumor. This diffusion causes the damaged cells to die, and thus healthy cells appear.

In the following, we first discuss the DNA walk as a random multi-fractal walk. We then discuss chemotherapy and the diffusion of the drug in the tumor. Considering these two topics, we develop the Fractional Diffusion Equation (FDE) that maps the effect of drug diffusion onto the DNA walk. Results and discussion are presented in the last sections.
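
The "DNA walk" the authors build on can be illustrated with a short, hypothetical sketch. The purine/pyrimidine step rule below is the classic convention from the DNA-walk literature, but the code, the random test sequence and the window sizes are invented for illustration and are not from the paper:

```python
import math
import random

# Hypothetical sketch (not the authors' code): a "DNA walk" maps a
# nucleotide sequence to a 1-D trajectory (purine = +1, pyrimidine = -1).
# The scaling of the walk's fluctuations is what makes it (multi)fractal,
# the property the Fractional Diffusion Equation approach builds on.

STEP = {"A": 1, "G": 1, "C": -1, "T": -1}  # purines up, pyrimidines down

def dna_walk(seq):
    """Cumulative displacement y(n) of the DNA walk."""
    y, walk = 0, []
    for base in seq:
        y += STEP[base]
        walk.append(y)
    return walk

def scaling_exponent(walk, windows=(8, 16, 32, 64, 128)):
    """Crude fluctuation analysis: RMS displacement F(L) over disjoint
    windows of length L. If F(L) ~ L**H, H = 0.5 is an uncorrelated walk;
    H != 0.5 signals long-range (fractal) correlations."""
    logs = []
    for L in windows:
        disps = [walk[i + L] - walk[i] for i in range(0, len(walk) - L, L)]
        f = math.sqrt(sum(d * d for d in disps) / len(disps))
        logs.append((math.log(L), math.log(f)))
    # least-squares slope of log F(L) versus log L
    n = len(logs)
    mx = sum(x for x, _ in logs) / n
    my = sum(y for _, y in logs) / n
    return sum((x - mx) * (y - my) for x, y in logs) / sum(
        (x - mx) ** 2 for x, _ in logs)

random.seed(0)
seq = "".join(random.choice("ACGT") for _ in range(20000))
H = scaling_exponent(dna_walk(seq))
print(round(H, 2))  # close to 0.5 for a random (uncorrelated) sequence
```

On a random sequence the exponent sits near 0.5; the DNA-walk literature reports that real genomic sequences, especially non-coding regions, drift away from 0.5, which is the long-range-correlation effect motivating fractal models of the genome.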

[Dr. Pellionisz comments on the 7 R&D articles posted on Labor Day, 2016. Andras_at_Pellionisz_dot_com]

Breast cancer researchers look beyond genes to identify more drivers of disease development

August 29, 2016

Science Daily

Dr. Lupien, Toronto (Canada)

Breast cancer researchers have discovered that mutations found outside of genes that accumulate in estrogen receptor positive breast tumours throughout their development act as dominant culprits driving the disease.

The research, published online today in Nature Genetics, focuses on the most common type of breast cancer, estrogen receptor positive, says principal investigator Mathieu Lupien, Senior Scientist, Princess Margaret Cancer Centre, University Health Network and Associate Professor in the Department of Medical Biophysics, University of Toronto.

“By investigating acquired mutations found outside of genes through the power of epigenetics, we have identified that functional regulatory components can be altered to impact the expression of genes to promote breast cancer development,” says Dr. Lupien.

The multi-institutional research team collaborated with the Princess Margaret Genomics Centre and Bioinformatics group to analyze changes in the DNA sequence that accumulate in patients’ tumours with respect to the epigenetic identity of estrogen receptor-positive breast cancer cells.

“Thinking of genes as the source of light in the human genome, our research shows that driver mutations will not only hit the light bulbs but also directly alter light switches and dimmers that serve as functional regulatory components,” says Dr. Lupien.

“We now have the opportunity to start mining the genome for driver mutations not only in genes but also in other functional regulatory components to expand our capacity to identify the best biomarkers and to delineate the fundamental biology of each tumour to help advance personalized cancer medicine for patients.”

Dr. Lupien’s research builds on a previous study that identified why 44 known genetic variations increased breast cancer risk (Nature Genetics, Sept. 23, 2012).

The convergence of more knowledge about inherited risk variants and the role of acquired mutations should readily enable translating the science into more precise clinical tests to diagnose and monitor patients, he says.

Story Source:

The above post is reprinted from materials provided by University Health Network (UHN). Note: Content may be edited for style and length.

Journal Reference:

Swneke D Bailey, Kinjal Desai, Ken J Kron, Parisa Mazrooei, Nicholas A Sinnott-Armstrong, Aislinn E Treloar, Mark Dowar, Kelsie L Thu, David W Cescon, Jennifer Silvester, S Y Cindy Yang, Xue Wu, Rossanna C Pezo, Benjamin Haibe-Kains, Tak W Mak, Philippe L Bedard, Trevor J Pugh, Richard C Sallari, Mathieu Lupien. Noncoding somatic and inherited single-nucleotide variants converge to promote ESR1 expression in breast cancer. Nature Genetics, 2016; DOI: 10.1038/ng.3650

[The above research appears to shake the foundations of the old school, that "genes are responsible for everything". There is the "oncogene-school" with the core belief that cancerous (mutant) gene(s), maybe just a single gene, are the culprit of cancer. Today, there are so many hundreds (or thousands...) of so-called "oncogenes" (genes that are involved in cancer, mutant or not) that many believe that in terminal cases practically all genes (and non-genes...) are affected by zillions of mutations. This school is based on the rationale that "genes pump proteins" (the protein materials of tumors). The above article points to the other side of the "which came first, the chicken or the egg" question. The New School maintains that "cancer is the melt-down of genome REGULATION", during which mutations spread along zillions of "pathways". Think of Chernobyl. Was it the uranium rods (emitting enormous energy by their fission) that were the culprit of the "melt-down"? In the case of Chernobyl we know for sure that the complex and delicate REGULATORY SYSTEM went out of control, and the uncontrolled fission energy blew up half of Europe. Since the "non-coding DNA" (maiden name "Junk DNA") used to be pretty much unknown, as a matter of course all attention was focused on protein-pumping genes. While the "chicken or egg" question applies to cancer theories as well, in my personal opinion a mathematical understanding of the genome REGULATION SYSTEM is inevitable for truly effective "Cancer Moon Projects". Andras_at_Pellionisz_dot_com]

HPE Synergy Shows Promise And Progress With Early Adopters

Aug. 29, 2016


by Patrick Moorhead

Hewlett Packard Enterprise has placed a big bet with HPE Synergy—the company is a pioneer in the composable infrastructure market, and is the furthest along in customer enablement. In the rapidly changing world of IT, composable infrastructure could be the next big thing in enterprise infrastructure. Designed to treat hardware like software (what is often referred to as "infrastructure as code"), it has the ability to allocate the optimal resources for each application—with the goal of lowering infrastructure costs, providing flexibility as a resource, and accelerating time-to-market for customers. HPE Synergy was launched in December 2015, and touted as the first platform in the market purposefully built for composability. Hewlett Packard Enterprise is making progress with building out the composable infrastructure ecosystem and, though it is still too early to say definitively, they are seeing some success with Synergy’s early beta customers.

HudsonAlpha Institute for Biotechnology, a nonprofit specializing in genomics research, education, and medical treatment, was HPE Synergy’s first customer. Genomics is a highly data-intensive field (HudsonAlpha generates more than one petabyte of data a month), and in order to handle the intense workload demands, HudsonAlpha had to rethink their infrastructure—HPE Synergy promised the flexibility and compute power they needed to get the job done. The solution is well-aligned with HudsonAlpha’s existing strategy—the institute already manages its infrastructure via resource pools. HudsonAlpha says Synergy’s Direct Attach Storage (DAS) simplifies storage for maximum efficiency—a must when dealing with such large volumes of data. Hewlett Packard Enterprise’s partnership with Docker is also a selling point—HudsonAlpha views containers as being critical for delivery of microservices. In addition, HudsonAlpha says HPE Synergy delivers the agility needed for collaboration between thousands of researchers worldwide—the platform is quick to get users road-ready and running with new applications.

As it currently stands, they are in the beta stage of deployment—but they have started running production-level workloads on HPE Synergy. In a testament to the platform’s ease of installation, HudsonAlpha was able to set up the hardware and complete the install process in-house before the HPE support team even arrived. They’re currently using Docker Swarm, Docker Machine, and DevOps tools like Vagrant on top of Synergy. They’ve constructed their own templates for the platform to allow smoother transitions between tenants, and developers have begun to deploy their own workloads to the hardware without requiring the assistance of operations. According to Jim Hudson (Co-Founder and Chairman of the institute), an analysis of the human genome that used to take about 2 days to complete can now be accomplished in 45 minutes with HPE Synergy—an impressive jump. As HudsonAlpha’s existing infrastructure is swapped out for Synergy, they say they will continue to measure gains in efficiency and capability through comparison of the two. I think we’re going to continue to see good results.

Other early testimonials of HPE’s new solutions have also been positive. Rich Lawson, Senior IT Architect at Dish Network (one of the first 100 Synergy customers), praised Synergy’s flexibility, and its ability to unlock the full potential of the public cloud. Greg Peterson (VP of HPE Solutions at Avnet, Inc.) lauded HPE Hyper Converged 380’s ease of deployment and management, saying that “the solution works as advertised.” We’ll continue to monitor as more early adopters report back on their experiences with HPE Synergy, but so far it’s looking pretty good.

The other, very important piece of the puzzle is the work that HPE is doing to expand the composable infrastructure ecosystem. I’ve said it before, and I’ll say it again—I think HPE “gets strategic partnering,” even though the company-wide approach is new. They’ve spent the first half of 2016 integrating HPE OneView with tools from their partners—Docker, Chef, nLyte, Eaton, SaltStack, Ansible, and VMTurbo, just to name a few. The crux of the entire composable movement is to make it easier for customers to drive automation with whatever tools they already have. Expanding the composable ecosystem is going to be an ongoing task for years to come, but HPE appears to be making some good strides through the collaborations with their many partners.

In conclusion, I’m not quite ready to call HPE Synergy a composable slam-dunk yet—signs are looking positive, but it’s still too early in the testing period to say. I do feel comfortable saying that these proof points are an indicator that HPE can deliver on their promise of composable infrastructure. It’s not just a nice buzz-phrase anymore, it’s actually a viable way of doing things—and it’s only going to get more viable as HPE continues to build out the composable ecosystem. If HPE Synergy’s beta customers continue to report positive results, I think we could be looking at a big shift in enterprise infrastructure.

[It is not only HP - the entire Silicon Valley is in a scramble upon the announcement of the World's most potent Joint Venture (by Google with Stanford going for genome-based precision medicine). HP just acquired SGI for $275 Million, and as we see above, Jim Hudson is also one of those who have their mind on "human genome analysis"... Those of us who went with the Internet from a tiny US Government pet project (so that system administrators of mainframes could chat by email...), and switched gears from Government to hand the Internet over to Public Domain Big Business, know and understand the kind of "scramble" that is taking place in Silicon Valley as we speak! Andras_at_Pellionisz_dot_com]

Illumina - Can GRAIL Deal A Death Blow In The War Against Cancer?

Aug. 22, 2016 About: Illumina, Inc. (ILMN)



The scientific and business communities have joined forces in a war against cancer - will it be a death knell for the dreaded disease?

Illumina's startup venture GRAIL has aspirations of developing an early-detection cancer screening test and unlocking a $20 billion industry.

The new venture faces scientific uncertainties and a growing field of competing researchers.

Despite the significant challenges it faces, management claims that GRAIL possesses key advantages which give it an edge against rivals.

Will investors be able to gain their own edge in predicting the outcome of this scientific endeavor?

Introduction and Significance to Illumina

Earlier this year, Illumina (NASDAQ:ILMN) announced plans to form a separate entity to be among the pioneers in liquid biopsy research with the goal of creating a simple blood test for early detection of all major types of cancer. Illumina is a 52 percent owner of the new venture GRAIL, which has also received well-publicized backing from Microsoft (NASDAQ:MSFT) Founder Bill Gates and Amazon (NASDAQ:AMZN) CEO Jeff Bezos, among other notable investors.

The new initiative complements Illumina's core business of DNA sequencing equipment very well, and the company can use this expertise to improve its odds of success in developing its early detection cancer test. Illumina is dominant in DNA sequencing machines with an estimated 75% market share which reaches as high as 90% among premium sequencing devices according to Morningstar estimates. The total addressable market of its core business has been estimated by management to be in excess of $20 billion, which represents an opportunity for significant future growth from the $2.2 billion of revenue Illumina booked in 2015 while holding 75% of the market.

While the future of the business is very promising, expectations are also very high, with the stock's price-earnings multiple near 60. The stock is a high-potential, high-risk investment that appeals strongly to many enterprising, growth-oriented investors. Illumina's dominant core business has been discussed in detail in prior coverage; the analysis today will focus on its new investment in GRAIL and its prospects for achieving the inspiring goals that management has set for it. The company's ambition is to bring a cancer-screening test to market by 2019. With demand for Illumina's DNA sequencing machines already quite robust, any incremental success for GRAIL could contribute significantly to the future growth the company must achieve to fulfill the promising potential its investors envision.

The test GRAIL hopes to develop is a form of liquid biopsy, a new scientific technology that has attracted substantial attention from cancer researchers. Many have been quick to recognize the most optimistic prospects of GRAIL, for which management has estimated a total addressable market size between $20 billion and $200 billion. However, what has been harder to come by is analysis regarding the company's likelihood of capturing this market, its potential future market share, and the challenges that must be overcome before all of this can transpire. The following will set forth an evaluation of why the new field of liquid biopsy is so promising, what challenges face those researching it, and how GRAIL stacks up against the competition within this emerging industry.

Liquid Biopsy Overview & Opportunities

Tissue biopsy has long been the standard in diagnosing and profiling cancer tumors in patients. The method entails extracting a tissue sample from a patient through invasive and sometimes painful surgical procedures. A more patient-friendly method has emerged as an alternative and possibly even a future replacement. The new method, liquid biopsy, serves a similar purpose as tissue biopsy, but uses blood or other body fluids instead of tissue.

Cancer tumors mutate over time and their characteristics change as the disease progresses from early to more advanced stages. During the process, dying cancer cells give off DNA into the blood stream. Advances in DNA sequencing technologies have made it possible to harvest the enormous amount of information contained in our blood by isolating genetic markers of cancer. This makes diagnosis of cancer through a blood test theoretically possible.

An additional benefit of blood-based liquid biopsy is the ease of obtaining a sample; any doctor's office is surely equipped to draw blood. This is an advantage over the invasive and painful method of tissue biopsy, which often is performed only once on a patient. In contrast, liquid biopsies can be performed multiple times throughout an illness, giving care providers a better view of the disease, an understanding of its progression over time, feedback on the success of current treatment, and insight into how treatment should change as the illness evolves.

Liquid biopsy is a rapidly developing new field with five current and possible future applications:

1. Screening for Early Cancer Detection

2. Profiling Cancers

3. Monitoring Treatment

4. Developing Personalized Treatment

5. Assessing Possible Recurrence

The most promising but also most difficult among these is early detection of cancer. An early detection test capable of diagnosing all major types of cancer is referred to by some in the industry as the holy grail of liquid biopsy research. This new tool against cancer has ignited a major technological race among researchers and companies poised to deepen their understanding of this new procedure and bring solutions to the marketplace. The emergence of liquid biopsy presents many opportunities but, along with them, perhaps just as many challenges as well.

Challenges of Liquid Biopsy

While successfully developing liquid biopsy tests could revolutionize cancer diagnosis and treatment, the technology faces steep challenges as well. One such challenge relates to the two methods of isolating cancer DNA. The primary method being researched relates to circulating tumor DNA (ctDNA). While more prominent, this method obtains information from DNA shed by dying cancer cells. However, some researchers will point out that DNA from dying cells doesn't accurately represent the characteristics of the disease inside the body.

The alternative method, which assesses circulating tumor cells and is known as CTC, attempts to isolate whole tumor cells, not just DNA fragments as with ctDNA. Dr. Daniel Haber, one such researcher, offers the following justification for his preference for CTC over ctDNA: "A fragment from a dying tumor cell 'doesn't tell me anything about the biology of the living tumor.'" CTC-based research offers a more thorough understanding of a specific cancer. Another advantage is that CTC makes it possible to assess the effectiveness of a treatment against a tumor cell, a benefit not shared by ctDNA. The drawback is that circulating tumor cells are extremely rare in the bloodstream, and remarkably difficult to isolate. While Dr. Haber believes that "we're on the cusp of having a standardized, affordable technology" in CTC, the propensity of most researchers to direct their efforts toward ctDNA could suggest otherwise. Still, it's important to keep in mind that there is a competing method with arguably more favorable characteristics.

Another challenge threatening the success of ctDNA is the difficulty of developing tests with sufficient sensitivity (ability to identify the presence of genetic markers) while also ensuring that the test is accurate (not prone to identify false positives). With the extensive amount of health-related data that can be obtained from an individual's blood, separating out the extremely small amount of cancerous DNA from the voluminous amount of healthy DNA can make sensitivity of cancer screening tests a troublesome proposition. Additionally, millions of false positives could result from a test that is only 90 percent accurate.

If false positives become prevalent in cancer screens they could undermine the hopes of early detection efforts if the risk cannot be sufficiently mitigated. The problem is that positive test results for individuals who have no cancer present could potentially lead to unnecessary additional biopsies and treatments. Similarly, there is a risk of overdiagnosis. Because our bodies are equipped to deal with and dispose of the frequent cell mutations they experience, many early stage cancers may never advance to a more threatening phase. Screening tests without long-term trials could be found to result in unnecessary tests and treatments for patients with tumors that would have never spread inside the body. This fact emphasizes that, even for tests with sufficient sensitivity and accuracy, it can still be a challenge to determine when treatment should be administered to a patient who tests positive.
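
The false-positive arithmetic behind these concerns follows directly from Bayes' rule. The sketch below is purely illustrative; the population size, the 0.5% prevalence and the sensitivity/specificity values are assumptions chosen for the example, not figures from the article:

```python
# Illustrative arithmetic only (all numbers below are assumptions): Bayes'
# rule shows why screening a low-prevalence population needs extreme
# specificity, as with Guardant's quoted 99.9999% figure.

def screening_outcomes(population, prevalence, sensitivity, specificity):
    sick = population * prevalence
    healthy = population - sick
    true_pos = sick * sensitivity            # cancers correctly flagged
    false_pos = healthy * (1 - specificity)  # healthy people flagged anyway
    ppv = true_pos / (true_pos + false_pos)  # chance a positive is real
    return round(false_pos), ppv

# Screening 100 million people for a cancer with 0.5% prevalence,
# with a test that is "90 percent accurate" in both directions:
fp, ppv = screening_outcomes(100_000_000, 0.005, 0.90, 0.90)
print(fp, round(ppv, 3))    # 9950000 0.043 -- ~10M false positives

# The same screen at 99.9999% specificity:
fp2, ppv2 = screening_outcomes(100_000_000, 0.005, 0.90, 0.999999)
print(fp2)                  # roughly 100 false positives in 100 million
```

At 90% specificity, fewer than 1 in 20 positive results would be a real cancer, which is exactly why overdiagnosis and unnecessary follow-up biopsies dominate the discussion above.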

Despite these major challenges, many remain highly optimistic about the future of liquid biopsy. Victor Velculescu, a lead researcher at Johns Hopkins, expects these issues to resolve favorably and considers an early detection test for cancer to be probable within the next five years, which is consistent with Illumina's 2019 target date. With that prospect in mind, numerous organizations have positioned themselves to capitalize on the opportunity.

Sizing Up the Competition

One player becoming more prominent in the field is Guardant Health. Much like Illumina's moonshot GRAIL, Guardant Health's Project Lunar program has begun making strides towards developing a simple blood test capable of detecting most types of cancer at an early stage. Guardant's early successes include a study where it performed its liquid biopsy blood screening test side by side with the standard tissue biopsy and produced similar results 98 percent of the time. Guardant itself claims to be able to identify circulating tumor DNA with 1,000 times the accuracy of standard sequencing methods, and its Guardant 360 test has attained specificity (accuracy) reaching as much as 99.9999 percent, thereby generally ruling out false positives. Guardant has enlisted the assistance of research institutions including Massachusetts General Hospital, Perelman School of Medicine at the University of Pennsylvania, Robert H. Lurie Comprehensive Cancer Center at Northwestern University, UC San Francisco, and others.

In response to Illumina's announcement regarding its plans for GRAIL, Sequenom (NASDAQ:SQNM) CEO Dirk van den Boom noted that, "Sequenom had already made significant progress on the technology." The company had been seeking a partner to bring its test to market prior to its recent acquisition bid from LabCorp (NYSE:LH). Trovagene uses liquid biopsy to monitor the progression of cancer in patients already diagnosed and keep tabs on changes in the disease over time. Janssen Diagnostics, a subsidiary of Johnson & Johnson (NYSE:JNJ), has an FDA-approved liquid biopsy test. Life sciences giant Roche also has aspirations in the field. And this is just a small sampling of the large number of companies making a bid in the field.

In total, liquid biopsy research has attracted attention from 38 companies in the U.S. alone, and there are approximately 350 clinical trials under way to learn more about the new technology. With the crowded research space, investors will undoubtedly be interested in learning how GRAIL measures up against its competition.

GRAIL's Competitive Positioning

Illumina is interested in early detection screening tests for cancer; its ambitions do not extend to other potential applications of liquid biopsy, such as monitoring, assessing possibility of recurrence, or other areas according to CEO Jay Flatley. In addition, Flatley emphasizes that monitoring for recurrence and analyzing tumors has dominated much of the current research. Instead, GRAIL is aiming to carve out a niche in the most scientifically complex area of the field: early detection of all major types of cancer.

The company itself claims that it is "uniquely positioned" and that it can achieve superior accuracy in its testing due to its technology, which enables deep sequencing capabilities. Another part of GRAIL's unique positioning comes from scale: the company's plan is to obtain clinical data from hundreds of thousands of people. One third party, Andre Marziali, CEO of Boreal Genomics, noted that GRAIL will be capable of amassing data well beyond the scope possible with any other current competitor, and further stated that, whether it succeeds or fails, "GRAIL will accelerate the arrival of ctDNA screening."

GRAIL describes its advantage as making small amounts of ctDNA detectable by improving signal to noise, thereby giving the test greater sensitivity and reducing the risk that cancer goes unnoticed in early stages. While the company has promoted its expectations of achieving greater sensitivity, it has also been careful to stress that eliminating false positives will be among its primary concerns. Based on these communications, it might be inferred that management would be willing to sacrifice a certain amount of sensitivity to achieve greater accuracy and fewer false positives, a plan that helps to overcome one of the major liquid biopsy challenges if GRAIL can deliver on this promise in its testing.

Without question, GRAIL still faces major challenges in executing its plan. One author may have put it best with the comment that "GRAIL's plans will require development of biological [I would say informatics of genome regulation, AJP] understanding that is presently unknown, and strategies that contradict much current thinking in public health, making GRAIL's initiatives extremely ambitious." Still, the rush of research activity into this field shows that the scientific community senses a major opportunity. And GRAIL possesses capabilities and ambitions which afford it a reasonable possibility of capitalizing on that promise.


As part of the larger company, GRAIL is a relatively minor investment which represents a small amount of incremental risk for Illumina, a company about which there is much to be optimistic. However, a careful analysis of the considerations discussed above will reveal to investors what they likely already know: Illumina's side project GRAIL is a startup with infinite promise, but speculative prospects, significant competition and scientific uncertainty. As with many biotech and scientific endeavors, early investors who lack expertise in the field but hope to gain an edge in predicting the outcome of this technological race may find themselves hard pressed to do so. Despite this, any incremental value Illumina can create from its investment in GRAIL will be a welcome additional benefit for shareholders of the promising core business.

Disclosure: I/we have no positions in any stocks mentioned, and no plans to initiate any positions within the next 72 hours.

I wrote this article myself, and it expresses my own opinions. I am not receiving compensation for it (other than from Seeking Alpha). I have no business relationship with any company whose stock is mentioned in this article.

Additional disclosure: All investments involve risks. Investors are encouraged to do their own due diligence prior to making buy or sell decisions.

[The fundamental idea that blood tests can reveal early signs of certain types of cancer is already well-established practice. In your annual physical, your blood test most likely includes a check of the "PSA number", the Prostate-Specific Antigen (PSA) test. It is very affordable, and if for a man it stays close to 1 in a steady manner for years, one can be reasonably relaxed about the health of the prostate. The Big Idea of Grail catapults such a singular test into an entirely new dimension. Instead of measuring an antigen (a protein), the Illumina-dominated Grail can look at the genome, the full genome, of suspicious cells circulating very early in the bloodstream. The colossal science question, of course, is how "to find a needle in a haystack": in my terms, looking for Fractal Defects, mostly in the Non-Coding (Regulatory, maiden name "Junk") DNA. Cancer leads to a genomic melt-down with millions of mutations and even chromosomes breaking up. In Chernobyl (and other nuclear reactors) there was nothing wrong with the blazing Uranium (or Plutonium) fuel; THE PROBLEM WAS A BREAK-DOWN OF THE REGULATORY SYSTEM! What makes Grail a true Holy Grail is its potential for Full Human Sequencing, using the World's best and most powerful X10 sequencers (or X5 for smaller Countries, like my homeland Hungary of only 10 million people, which due to her post-Communist history has a single Central Health Insurance System, with all medical records digitized for at least the last 20 years!). Thus, it is very likely that FractoGene patent 8,280,641 can be used to correlate the higher fractality of certain cell surfaces (with cancer, the cells become more "spiky") with Fractal Defects detected in the genome: such cells can be fished out and their genome fully sequenced (in a "repeat-customer mode", as the article points out).
Correlation of the fractality of the shape with Fractal Defects detected in the Genome can point any of the leading 38 competitors in the field towards the "holy grail": a monopoly in the vast cancer market (estimated anywhere between $20 Billion and $200 Billion). The time to seize it is now. Andras_at_Pellionisz_dot_com]
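The "fractality of certain cell surfaces" invoked above is typically quantified as a fractal dimension. A minimal box-counting sketch (my own illustration of the general technique, not the patented method; it assumes a binary 2-D NumPy image of the cell outline) might look like:

```python
import numpy as np

def box_counting_dimension(img, sizes=(2, 4, 8, 16, 32)):
    """Estimate the fractal (box-counting) dimension of a binary 2-D image.

    For each box size s, count how many s-by-s boxes contain at least one
    foreground pixel, then fit log(count) against log(1/s); the slope is
    the estimated dimension.
    """
    counts = []
    for s in sizes:
        # Trim the image so it tiles evenly into s-by-s boxes.
        h, w = (img.shape[0] // s) * s, (img.shape[1] // s) * s
        boxes = img[:h, :w].reshape(h // s, s, w // s, s)
        counts.append((boxes.sum(axis=(1, 3)) > 0).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope
```

A filled square yields a dimension near 2, a one-pixel line near 1, and a "spiky" cancerous cell boundary would land somewhere in between, higher than a smooth outline.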


The leading 8 of the 38 Competitors in Liquid Biopsy Analytics

8 Companies Developing Liquid Biopsy Cancer Tests


Cancer blood tests, or “liquid biopsies” as they are called, promise to be a huge niche in molecular diagnostics (MDx). Cowen & Co estimates that using DNA blood tests for cancer screening will be a $10 billion a year market in just 4 years. Illumina just announced that they are developing a universal blood test to identify early-stage cancers in people with no symptoms of the disease. The Illumina venture is called “Grail”, and has taken in more than $100 million in Series A financing from investors that include ARCH Venture Partners, Sutter Hill Ventures and Bezos Expeditions.

With Illumina having worked on this project for 18 months already, competing companies in this space should be worried. While over 38 companies are actively targeting this space, the chart below from Piper Jaffray shows the key players that we’ll take a closer look at:


MYRIAD GENETICS

We first wrote about Myriad in November of 2014, and since then the stock price is up +28%, giving the Company a market cap of nearly $3 billion. Myriad’s initial product was a 25-gene genetic test called myRisk that identifies an elevated risk for eight important cancers using just a simple blood sample. Since then, the Company has developed an entire suite of “liquid biopsy” tests, with some lofty goals for where it wants to be in 2020, and Myriad has been making good progress towards them. Revenues for Q3 2015 were $183 million, which was down slightly from the previous quarter. Over 90,000 physicians have ordered Myriad’s tests, with over 2 million tests being performed so far.


ONCOCYTE

Founded in 2009, Oncocyte is developing non-invasive liquid biopsy diagnostic tests in areas of high unmet need in oncology, specifically the 13-16 million lung and breast cancer patients. With an IPO taking place just several weeks ago, the Company must be wondering whether their timing could have been any worse. Investors who bought shares on the first day of that IPO would have lost more than -50% of their investment in just 14 days, resulting in a market cap for OCX of just $104 million. Oncocyte has yet to generate any revenues.


VERMILLION

Founded in 1993, Austin-based Vermillion has lost -78% of its share price value in the past 5 years, giving the Company a market cap of just $86 million. Vermillion has developed the first FDA-cleared, multi-biomarker blood test that helps assess the risk of ovarian cancer prior to surgery. The Company brought in minuscule revenues in Q3 2015, making investors wonder just when this product offering will take off.


VERACYTE

Since their market debut in 2013, VCYT is down over -50%, giving the Company a market cap of just $171 million. Veracyte’s liquid biopsy tests are targeting lung and thyroid cancer, two diseases that often require invasive procedures for an accurate diagnosis. The Company has shown strong revenue growth along with equally strong operating losses: 2014 showed revenues of $38 million on losses of $29 million.


FOUNDATION MEDICINE

While their first product, “FoundationOne”, required an actual tissue biopsy, their second product, “FoundationOne Heme”, is a liquid biopsy which was released after we first wrote about Foundation Medicine back in March of 2014. We also wrote about Foundation more recently, in January 2015, when their share price soared +95% on the back of a strategic partnership with Roche. While we hoped that partnership would establish a support level for the stock, it didn’t. Since then, FMI has sunk -65%, giving the Company a current market cap of $560 million. Revenues appear to be steadily growing over time, with Q3 2015 revenues coming in at around $25 million.


GENOMIC HEALTH

Since their market debut in 2005, this $1 billion market cap company has returned +166%, compared to a NASDAQ return of +113% over the same time frame. Similar to Foundation Medicine, Genomic Health has made a strong business of detecting cancer in tissues obtained from biopsies using their Oncotype DX test suite, which brought in revenues of $275 million in 2014. In January of 2015, the Company announced their intention to release a liquid biopsy test in 2016, priced much lower than their Oncotype DX test, which costs around $4,500.


BIOCEPT

We first wrote about Biocept in May 2014, and since then the stock is down over -60%, giving the Company a market cap of just $30 million. Biocept is developing their OncoCEE platform, which claims to offer 10-100X the sensitivity of competing platforms in detecting cancer mutations in the blood stream. While the stock price languishes, Biocept continues to strengthen their patent portfolio and sign commercial agreements. Revenues last quarter continued to be minuscule.

GUARDANT (Private)

While not mentioned in the above chart from Piper Jaffray, Guardant is a company we wrote about in March 2014 which is developing the GUARDANT360, a test that looks for tumor DNA which is shed into the bloodstream for almost every type of cancer. Just last week, Guardant closed a massive funding round of $100 million, the same amount of money put forward by Illumina to launch Grail. Guardant360 is currently being used by 20,000 patients with cancer per year at a price point of $5,400.

Precision medicine: Analytics, data science and EHRs in the new age

[For those who may not know: EHR stands for "Electronic Health Records", an obsolete standard from the Office of the National Coordinator for Health Information Technology, which launched the interoperability initiative in 2004 - TWELVE YEARS AGO - AJP]

The promise of genomics and personalized care are closer than many realize. But clinical systems and EHRs are not ready yet. While policymakers and innovators play catch-up, here’s a look at what you need to know.

By John Andrews August 15, 2016

Considering how fast technology advances in the healthcare industry, it seems natural that a once-innovative concept could become obsolete in the span of, say, a dozen years. Knowledge, comprehension and capabilities continue moving forward, and if the instruments of support don't keep pace, it can cause a rift to appear. If nothing is done, it can exacerbate into a seismic event.

Some contend that this situation exists with the rapid advancement of precision medicine continually outstripping the static state of electronic health records. Medical research is forging ahead with genomic discoveries, while EHRs remain essentially the same as when the Office of the National Coordinator for Health Information Technology launched the interoperability initiative in 2004.

Over that time, healthcare provider IT teams have worked tirelessly at implementing systems with EHR capability and towards industry-wide interoperability. If the relationship between science and infrastructure has hit a seemingly intractable bottleneck, what are the reasons for it?

"It depends on how you look at it," noted Nephi Walton, MD, biomedical informaticist and genetics fellow at Washington University School of Medicine in St. Louis. "One of the problems I have seen is when new functionality is created in EHRs, it is not necessarily well integrated into the overall data structure and many EHRs as a result have a database structure underneath them that is unintelligible with repetitive data instances. We often seem to be patching holes and building on top of old architecture instead of tearing down and remodeling the right way."

Walton addressed the disconnect between the growth in precision medicine and the limitations of healthcare IT infrastructure at a presentation during the recent HIMSS Big Data and Healthcare Analytics Forum in San Francisco.

"IT in healthcare tends to lag a bit behind other industries for a number of reasons," he said. "One of them is that healthcare IT is seen as a cost center rather than a revenue-generating center at most institutions, so fewer resources are put into it."

Overall, EHR limitations have resonated negatively among providers since they were introduced, said Dave Bennett, executive vice president of product and strategy at Orion Health in Scottsdale, Ariz.

"The EHR reality has fallen painfully short of the promise of patient-centric quality care, happy practitioners and reduced costs," he said. "In recent surveys, EHRs are identified as the top driver of dissatisfaction among providers. According to the clinical end-users of EHRs, it takes too long to manage menial tasks, it decreases face-to-face time with patients, and it degrades the quality of documentation. In one sentence, it does not bring value to providers and consumers."

Despite the limitations though, EHR designs aren't to blame, Bennett said.

"It is not the technology in itself – it is the technology usability that needs a new approach to successfully deliver data-driven healthcare," he said. "We need to redesign the EHR with the patient in mind and build a technology foundation that allows the EHR full integration into the care system. Today's EHRs are good for billing and documenting but are not really designed to be real-time and actionable. They cannot support an ecosystem of real-time interactions, and they lack the data-driven approaches that retail, financial, and high tech industries have taken to optimize their ecosystems."

Strengthening weak links

Technological disparity doesn't just exist between medical research and EHRs, but in how EHRs are used within health systems, added Jon Elwell, CEO for Boise, Idaho-based Kno2.

"One of the biggest struggles in healthcare IT today is the widely uneven distribution of healthcare providers, facilities and systems along the maturity continuum towards a totally digital healthcare future," he said. "One healthcare system is only as technologically advanced as the least mature provider or facility within its network."

For example, he said an advanced accountable care organization may be using EHRs in every department and using direct messaging to exchange patient charts and other content with others in the network. However, he said, it is still common for some to be using faxes to communicate, "thrusting the advanced system back into the dark ages."

As an industry, providers "have to work harder to develop solutions that prevent early adopters from being dragged down to the lowest common technology denominator," Elwell said. "These new solutions should extend a hand up to less-advanced providers and facilities by providing easy ways for them to adopt digital processes, particularly when it comes to interoperability."

Aligning vision with reality

The Office of the National Coordinator for Health IT, which evolved alongside EHRs over the past 12 years, hasn't sat idly by as the imbalance has gradually appeared. Jon White, MD, deputy national coordinator, is fully aware of the situation and says it is time to take a fresh look at precision medicine and EHRs.

"What we need to do is bring reality in with our vision," he said. "It's not just science, but the IT infrastructure that supports it."

With its roots going back to 2000, precision medicine sprung up from genome sequencing and has continued to map its route forward. White says at its inception the ONC realized that information infrastructure needed improvement and the EHR initiative was designed to get the process moving.

"The view of precision medicine and the vision for precision medicine has broadened considerably beyond the genome, which is still a viable part of the precision medicine field," White said. "But it is really about people's data and understanding how it relates to one another."

Precision medicine is being given a cross-institutional approach, with new types of science and analysis emerging and a new methodology being envisioned, White said. For IT, a solid and dynamic infrastructure has been built "where little existed before, and over the past seven years EHR adoption has gone from 30 percent of physicians to 95 percent now."

So the vast majority of provider organizations are now using EHRs and the systems are operating with the clinical utility that was expected, White said. Next steps for interoperability and enhanced functionality, he added, are a logical part of the long-term process.

"EHRs are doing a lot of the things we want them to do," he said. "We're at a place where we have the information structure and need to understand how to best use it as well as continuing to adapt and evolve the systems."

More introspection needed

In order for EHRs to gain more functionality and interoperability to achieve a wider scope of utilization, more has to be done with the inner machinations of the process, Walton said.

"I don't think there has been much of a focus on interoperability between systems, especially now where you have a few major players that have pretty much taken over the market," he said. "I fear that as we have less choices, there will be less innovation and I sense now that EHR vendors are more likely to dictate what you want than to give into what you need. The overarching problem with interoperability is that there is no common data model – not only between vendors, but between instances of a particular vendor. There really needs to be a standard data model for healthcare."

Yet while precision medicine – especially as it relates to genomics – continues to emerge, analysts like Eric Just, vice president of technology at Salt Lake City-based Health Catalyst, aren't sure IT infrastructure is solely to blame for the problem.

"I'm not really convinced that EHR interoperability is the true rate limiter here, save for a few very advanced institutions," he said. "Practical application of genomics in a clinical setting requires robust analytics, the ability to constantly ingest new genomic evidence, and there needs to be a clinical vision for actually incorporating this data into clinical care. But very few organizations have all of these pieces in place to actually push up against the EHR limits."

To be sure, White acknowledged that academic institutions who pushed EHRs for research purposes do want more functionality and capability from electronic records.

"Those large academic institutions have been telling their vendors that when it comes to EHRs, 'this is our business and we need you to meet our needs,'" he said.

When presenting on the topic of precision medicine and EHRs, Just said he senses "a big rift" between academic and non-academic centers on the topic.

"Our poll shows that maybe the issue is not EHRs, but the science that needs to be worked out," he said. "A lot of progress is being made, but analyzing the whole genome and bringing it to the medical record is not an agenda that many organizations are pushing. And those that are don't have a clear vision of what they're looking for."

Charting new horizons

Because precision medicine's advancement is growing so rapidly, it is understandable that EHRs will be limited, Just said.

"These new analyses have workflows no one has seen before, they need to be developed and current technology won't allow it," he said. "EHRs are good at established workflows, but we need to open workflows so that third parties can develop extensions to the EHR."

As it exists today, the healthcare IT infrastructure is "simply genomic unaware," said Chris Callahan, vice president of Cambridge, Mass.-based GeneInsight, meaning that genetic data has no accommodations within the records.

"Epic and Cerner don't have a data field in their system called 'variant,' for example, the basic unit of analysis in genetics," he said. "It's simply not represented in the system. They are not genomic ready."

Enakshi Singh is a genomic researcher who sees firsthand academia's quest for higher EHR functionality. As a senior product manager for genomics and healthcare for SAP in Palo Alto, Calif., she is at the center of Stanford School of Medicine's efforts to apply genomic data at the point of care. In this role, she works with multidisciplinary teams to develop software solutions for real-time analyses of large-scale biological, wearable and clinical data.

"The interoperability win will be when patients can seamlessly add data to their EHRs," she said. "But at this point, today's EHR systems can't handle genomic data or wearable data streams."

EHRs may not be equipped for ultra-sophisticated data processing and storage, but Singh also understands that they reflect the limitations of the medical establishment when it comes to genomic knowledge. Every individual has approximately three billion characteristics in their genomic code, with three million variants that are specific to each person.

"General practitioners aren't equipped to understand the three million characteristics that make each individual unique," she said.

One reason for precision medicine's growth is how the cost of sequencing has shrunk, Singh said. The first genomic sequence in 2000 took 13 years and $13 billion from a large consortium to produce. Today a genome can be sequenced for $1,000, which has led to a stampede of consumers wanting to find out their genetic predispositions, she said.

Singh's colleague Carlos Bustamante, professor of biomedical data science and genetics at Stanford, calls the trend "a $1,000 genome for a $1 million interpretation."

The frontier for genomics and precision medicine continues to be vast and wide, Singh said. Of the three million variants, "we only know a fraction of what that means. When we talk about complex diseases, it's an interplay of multiple different characters and mutations and how it's related to environment and lifestyle. Integrating this hasn't evolved yet."

The other challenge is connecting with clinical data and sets that have shown to play a role in disease, how to integrate at the point of care and create assertions based on profile information. Singh is involved with building new software that takes new data streams and provides for quick interpretation. The Stanford hospital clinic is in the process of piloting a genomic service, where anyone at the hospital can refer patients to the service for a swab and sequencing.

"They will work the process and curate it, filter down what's not important and go down the list related to symptoms," Singh said. "This replaces searching through databases. What we have done is to create a prototype app that automates and streamlines the workflow for interpreting more patients. Current workflow without the prototype is 50 hours per patient, and ours dramatically cuts that time down. It's not close to being in clinical decision support yet, but it did go through 30 patients with the genomic service."


Workflow and analytics

With a background in EHRs, Jeffrey Wu, director of product development for Health Catalyst, specializes in population health. To adequately utilize EHRs for genomics, Wu is developing an analytics framework capable of bringing data in from different sources, which could include genomics as part of the much broader precision medicine field. Ultimately, he said it's about giving EHRs the capability to handle a more complete patient profile.

"Right now there is minimal differentiation between patients, which makes it harder to distinguish between them," Wu said. "Standardizing the types of genomes and the type of care for those genomes will make EHRs more effective."

Wu explained that his project has two spaces – the EHRs are the workflow space, coinciding with a separate analytics engine for large computations and complex algorithms.

"These two architectures live separately," he said. "Our goal is to get those integration points together to host the capabilities and leverage up-and-coming technologies to get the data in real time."

Stoking the FHIR

A key tool in helping vendors expand the EHR's functionality is FHIR – Fast Healthcare Interoperability Resources, an open healthcare standard from HL7 that has been available for trial use since 2014.

SMART on FHIR is the latest platform offering, designed to provide a complete open standards-based technology stack. SMART on FHIR is designed so developers can integrate a vast array of clinical data with ease.

Joshua Mandel, MD, research scientist in biomedical informatics at Harvard and lead architect of the SMART Health IT project, is optimistic that SMART on FHIR and a pilot project called Sync for Science will give vendors the incentive and the platform to move EHR capability in a direction that can accommodate advancing medical science.

"When ONC and the National Institutes of Health were looking for forward-thinking ways to incorporate EHR data into research, using the SMART on FHIR API was a natural fit," he said. "It's a technology that works for research, but also provides a platform for other kinds of data access as well. The technology fits into the national roadmap for providing patient API access, where patients can use whatever apps they choose, and connect those apps to EHR data. In that sense, research is just one use case – if we have a working apps ecosystem, then researchers can leverage that ecosystem just the same as any other app developer."

With Sync for Science, Mandel's team at the Harvard Department of Biomedical Informatics is leading a technical coordination effort that is funded initially for 12 months to work with seven EHR vendors – Allscripts, athenahealth, Cerner, drchrono, eClinicalWorks, Epic, and McKesson/RelayHealth – to ensure that each of these vendors can implement a consistent API that allows patients to share their clinical data with researchers.

Sync for Science – known as S4S – is designed to help any research study ask for (and receive, if the patient approves) patient-level electronic health record data, Mandel said. One important upcoming study is the Precision Medicine Initiative.


"It's important to keep in mind that much of the most interesting work will involve aggregation of data from multiple modalities, including self-reports, mobile devices/sensors, 'omics' data, and the EHR," he said. "S4S is focused on this latter piece – making the connection to the EHR. This will help keep results grounded in traditional clinical concepts like historical diagnoses and lab results."

The project is focused on a relatively small "summary" data set, known as the Meaningful Use Common Clinical Data Set. It includes the kind of basic structured clinical data that makes up the core of a health record, including allergies, medications, lab results, immunizations, vital signs, procedure history, and smoking status. The timeline is structured so that the pilot should be completed by the end of December and Mandel expects that the technical coordination work will be finished by that time. The next step, he says, is to test the deployments with actual patients.
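To make that summary data set concrete: in FHIR, the lab-results slice of the Common Clinical Data Set arrives as a JSON Bundle of Observation resources. A minimal parsing sketch (the endpoint URL in the comment is hypothetical; a real SMART on FHIR app would first obtain an OAuth2 access token and then GET the Bundle over HTTPS):

```python
import json

def extract_observations(bundle):
    """Pull (code display, value, unit) triples from a FHIR Bundle of
    Observation resources, e.g. the lab results in the Meaningful Use
    Common Clinical Data Set."""
    results = []
    for entry in bundle.get("entry", []):
        res = entry.get("resource", {})
        if res.get("resourceType") != "Observation":
            continue
        code = res.get("code", {}).get("text", "unknown")
        qty = res.get("valueQuantity", {})
        results.append((code, qty.get("value"), qty.get("unit")))
    return results

# A real app would GET this from the EHR's FHIR endpoint, e.g.
#   https://ehr.example.com/fhir/Observation?patient=123&category=laboratory
# (URL hypothetical) with a bearer token; here we parse a canned reply.
sample = json.loads("""
{
  "resourceType": "Bundle",
  "entry": [
    {"resource": {"resourceType": "Observation",
                  "code": {"text": "Hemoglobin"},
                  "valueQuantity": {"value": 13.5, "unit": "g/dL"}}}
  ]
}
""")
print(extract_observations(sample))  # prints [('Hemoglobin', 13.5, 'g/dL')]
```

The appeal of S4S is exactly that such a reader works unchanged against any of the seven vendors' implementations, because the Bundle and Observation shapes are fixed by the standard rather than by each EHR.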

"We're still working out the details of how these tests will happen," Mandel said. "One possibility is that the Precision Medicine Initiative Cohort Program will be able to run these tests as part of their early participant app data collection workflow."

Built on the FHIR foundation, S4S is designated as the linchpin for broadening interoperability to encompass research, clinical data and patient access. FHIR is organized around profiles: the use-case data and the data types that characterize it. S4S is building a FHIR profile so that data such as demographics, medications and laboratory results can be accessed and donated to precision medicine.

As a proponent of S4S, the ONC sees the program as an extension of "the fundamental building blocks for interoperability," White said. The APIs that are integral to the S4S effort have been used in EHRs for a long time, but he said vendors kept them proprietary.

"When we told vendors in 2015 that they would need to open APIs so that there could be appropriate access to data, they agreed, and moreover, they said they would lead the charge," White said.

MU and MACRA influence

When the industry started on the EHR and interoperability initiative in 2004, meaningful use hadn't been conceived of yet. With MU's arrival as part of President Obama's ARRA program, healthcare providers were suddenly diverted from the original development plan with an extra layer of bureaucracy.

Walton talks about its impact on the overall effort: "Meaningful use had some value but largely missed the goals of its intention," he said. "I think a lot of people essentially played the system to achieve the financial benefit of meaningful use without necessarily being concerned about how that translated into benefits for patients. Meaningful use has pushed people to start talking about interoperability, which is good, but it has not gone much further than that. Most of the changes in EHRs around meaningful use were driven by billing and financial reimbursement, but it has opened the door to more possibilities."

The broader problem, says Wayne Oxenham, president of Orion Health's North America operation, is that a business-to-business model did not really exist in healthcare, "so incentives were not aligned, and MU was only focusing on EHR interoperability and quality measures that provide no value versus proactive care models."

In essence, Oxenham said "MU did not deliver much. The program tried to do too much by wanting to digitize healthcare in 10 years and curiously, their approach was only focused on the technology instead of focusing on the patient and creating value. The point was to improve outcomes and stabilize costs, not to exchange documents that did not necessarily need to be shared, and they brought no value when stored in a locker deep in a clinical portal. MU missed the point – it just helped digitize processes that were and are still oriented towards billing, but aren't focused on optimizing care and using the data in meaningful ways."

As with MU, new certification requirements for the Medicare Access and CHIP Reauthorization Act of 2015 (MACRA) could also influence the dynamics of precision medicine and genomics, but Walton contends that it isn't an issue at this point.

"I don't think MU has inhibited the development at all, and people are still trying to wrap their heads around MACRA," Walton said. "A big part of the problem is that there has not really been a financial incentive to pursue this and many healthcare IT innovations are driven by billing and trying to increase revenue. I think that MU has tied up a lot of healthcare IT resources but I don't know that I can say they would have been working on precision medicine if they were not tied up."

Eric Rock, CEO of Plano, Texas-based Vivify, calls MU "a measuring stick used to drive quality through financial penalties or gains, the strongest driver for healthcare industry change." While he considers MU a "good move," he says "it perhaps wasn't strong enough to make a 'meaningful' difference with interoperability or on the cost of care."

Forthcoming CMS bundles, such as the recent Comprehensive Care for Joint Replacement model, could advance the MU incentive component further, he said.

"The impact that CMS bundles and other value-based care models will have is a much stronger demand by providers towards healthcare interoperability in a landmark effort to reduce costs," Rock said. "As a result, winning large contracts may require a commitment to a new level of interoperability."


Altering the trajectory

If the current trajectory of precision medicine-EHR imbalance continues, it won't be for a lack of trying by medical science and the healthcare IT industry to curb it. Programs like Sync for Science need time to develop and produce results. At this point, however, there are a lot of questions about how the "technology gap" issue will proceed and whether it will continue to widen.

From a technology perspective, Walton believes the focus needs to be on scaling horizontally.

"Right now EHRs are primarily based on massive servers rather than distributing tasks across multiple computers," he said. "You can't handle the data from precision medicine this way – it's just not going to work, especially when you have multiple departments trying to access and process it at the same time."

True cloud computing, whether internally or externally hosted, is needed for this type of data, Walton said, because "the database infrastructure behind EHRs and clinical data warehouses is not geared towards precision medicine and cannot handle the data generated. There are clinical data warehouses that can handle the data better but they are not usually updated in real time, which you need for an effective system for precision medicine. This will require investments in very fast hardware and distributed computing and we have a ways to go on this front."

On the current trajectory, precision medicine is "slowly sweeping into standards of care and what we are doing is going little-step by little-step to find places where personalized medicine is applicable and can be used," Callahan said.

The only way the current trajectory will change is if reimbursement patterns change, he said.

"If and when payers latch onto the idea that personalized medicine is actually a key enabler of population health, then they should pay for it as an investment," he said. "That will be a game changer, a new trajectory. Right now the payer community views precision medicine and genetics as just another costly test, and people don't know what it means or what the clinical utility of it is. That is the exact wrong way to think about it. Precision medicine and genetics are key enablers for population management. When you can get your head around that idea, when you can marry the idea, then you really start to see things change."

[Precision Medicine, very much like computing (from mainframes by IBM, to minicomputers by DEC, to personal computers by Apple), will be disrupted by entirely new paradigms, secured since 2004. Andras_at_Pellionisz_dot_com]

Stanford Medicine, Google team up to harness power of data science for health care

Stanford Medicine will use the power, security and scale of Google Cloud Platform to support precision health and more efficient patient care.

Stanford Medicine and Google are working together to transform patient care and medical research through data science.

The new collaboration combines Stanford Medicine’s excellence in health-care research and clinical work with Google’s expertise in cloud technology and data science. Stanford’s forthcoming Clinical Genomics Service, which puts genomic sequencing into the hands of clinicians to help diagnose disease, will be built using Google Genomics, a service that applies the same technologies that power Google Search and Maps to securely store, process, explore and share genomic data sets.

Stanford Medicine includes the Stanford School of Medicine, Stanford Health Care and Stanford Children’s Health. Together, Stanford Medicine and Google will build cloud-based applications for exploring massive health-care data sets, a move that could transform patient care and medical research.

“Stanford Medicine and Google are committing to major investments in preventing and curing diseases that afflict ordinary people worldwide. We’re proud to be setting this milestone for the future of patient care and research,” said Lloyd Minor, MD, dean of the School of Medicine.

The agreement — considered key to Stanford Health Care’s development of the Clinical Genomics Service — makes Google Inc. a formal business associate of Stanford Medicine. As such, Google and Stanford will both comply with the Health Insurance Portability and Accountability Act, a federal law that regulates the privacy and security of medical information. HIPAA requires that Stanford Medicine patient data stored on Google Cloud Platform servers stay private. Patient information will be encrypted, both in transit and on servers, and kept on servers in the United States.

Analyzing genetic data

With Google Genomics, Stanford Medicine will build its new Clinical Genomics Service on the Google Cloud Platform, expanding genomics research and establishing new methods of real-time data analysis for efficient patient care. “We are excited to support the creation of the Clinical Genomics Service by connecting our clinical care technologies with Google’s extraordinary capabilities for cloud data storage, analysis and interpretation, enabling Stanford to lead in the field of precision health,” said Pravene Nath, chief information officer for Stanford Health Care.

The Clinical Genomics Service will enable physicians at Stanford Health Care and Stanford Children’s Health to order genome sequencing for patients who have distinctive or unusual symptoms that might be caused by a wayward gene. The genomic data would then go to the Google Cloud Platform to join masses of aggregated and anonymous data from other Stanford patients. “As the new service launches,” said Euan Ashley, MRCP, DPhil, a Stanford associate professor of medicine and of genetics, “we’ll be doing hundreds and then thousands of genome sequences.”

The Clinical Genomics Service aims to make genetic testing a normal part of health care for patients. “Genetic testing is built into the whole system,” said Ashley. A physician who thinks a genome-sequencing test could help a patient can simply request sequencing along with other blood tests, he said. “The DNA gets sequenced and a large amount of data comes back,” he said. At that point, Stanford can use Google Cloud to analyze the data to decide which gene variants might be responsible for the patient’s health condition. Then a data curation team will work with the physician to narrow the possibilities, he said.

“This collaboration will enable Stanford to discover new ways to advance medicine to the benefit of Stanford patients and families,” said Ed Kopetsky, chief information officer at Lucile Packard Children’s Hospital Stanford and Stanford Children’s Health. “Together, Stanford Medicine and Google are making a major contribution and commitment in curing diseases that afflict children not just in our community, but throughout the world. It’s an extraordinary investment, and we’re proud to play such a large role in transforming patient care and research.”

Ashley noted that medicine mostly deals in small data, such as lab tests. But genomic studies, patient health records, medical images from MRI and CT scans, and wearable devices that monitor activity, gait or blood chemistry involve huge amounts of data that can allow doctors and researchers alike to analyze myriad aspects of patient health in ways that lead to improved medical decisions and products that are tailored to the patient — the essence of a precision health approach.

Focusing on precision health

“In the past few years, the amount of available data about health care has exploded,” said Minor. “While researchers are learning to integrate this big data, putting it to work for individual patients, in real time, is a huge challenge. Our collaboration with Google will help us to meet this challenge.”

Sam Schillace, vice president of engineering for industry solutions at Google Cloud Platform, said, “I’m excited because this agreement brings together expertise in three areas: data science, life science research and clinical care. The next decade of improvements in understanding and advancing health care is going to come from leaders in those three areas working together to build the next generation of platforms, tools and data.”

It’s all consistent with Stanford Medicine’s focus on precision health. “You could imagine that, going forward, potentially every patient could be sequenced,” said Michael Halaas, chief information officer for the School of Medicine. “The technology challenge we need to solve is how to derive useful insights from data and apply it directly to the care of a patient in near real time and also make progress on research.”

Halaas said the Stanford-Google agreement does more than provide Stanford with server space. “It’s not just stacks of servers,” he said. “It includes layers and layers of innovative technology. This agreement allows us to do the analytics in a way that is fast and secure.”

Minor said, “We’ll be working with Google to build innovative technology that will enable Stanford to lead in precision health, the goal of which is to anticipate and prevent disease in the healthy and precisely diagnose and treat disease in the ill.”

Data as the engine that drives research

Large-scale patient data is already helping answer research questions at Stanford. For example, Ami Bhatt, MD, PhD, an assistant professor of medicine and of genetics, is exploring changes in patient microbiomes that can precede symptoms of a disease such as cancer.

Another study is looking at alarm data from patient hospital rooms. The de-identified, or anonymized, data has been accumulating at Stanford’s adult and children’s hospitals for about 15 years, said Ashley, but until now no one has studied it. Hospitalized patients are typically hooked up to monitors that display their heart rate, blood-oxygen levels and other basic data, with alarms that go off if the measurements suggest something is wrong. The problem is that the alarms go off when nothing is wrong — sometimes when the patient just moves. Health-care providers often turn off the alarms so patients can rest and nurses can concentrate on people who need care. An artificial-intelligence approach in the works could use the alarm data to distinguish false alarms from real ones.

The analytics applications and virtual supercomputers available through Google Genomics could pave the way for other kinds of projects, as well. Working with Google’s engineers, Stanford researchers could make advances in visual learning that might, for example, enable computers to distinguish malignant tumors from benign ones in medical images.

The Stanford-Google collaboration is a critical step on the path to precision health, Minor said. “This is the foundational work for bringing patient health information and other big data to the bedside,” he said.

Google Tech Talk YouTube, 2008

[Pellionisz started his US career of geometrization of neuroscience and genomics at Stanford, 1973]

Leroy Hood 2002: The Genome is a System (needs some system theory)

Book Excerpt: ‘Hood: Trailblazer of the Genomics Age’

Armed with a vision for revolutionizing healthcare, biologist Leroy Hood wasn't going to let anything — or anyone — stand in his way.

08.08.2016 / BY Luke Timmerman

TWO WEEKS BEFORE Christmas 1999, Lee Hood appeared to have it all: A loving family. Money. Fame. Power. He counted Bill Gates, one of the world’s richest men, as a friend and supporter. Eight years earlier, Gates had given the University of Washington $12 million to lure the star biologist from Caltech in what the Wall Street Journal called a “major coup.”

Hood’s assignment on arrival: build a first-of-its-kind research department at the intersection of biology, computer science, and medicine.

Even at 61, the former high school football quarterback could still do 100 pushups in a row. He ran at least three miles a day. He climbed mountains. He traveled the world to give scientific talks to rapt audiences. At a time when many men slow down, Hood maintained a breakneck pace, sleeping just four to five hours a night. He owned a luxurious art-filled mansion on Lake Washington, but otherwise cared little for the finer things in life, sporting a cheap plastic wristwatch and driving an aging Toyota Camry. Those who worked closely with him said he still had the same wonder and enthusiasm for science he had as a student.

Yet here, at the turn of the millennium, Hood was miserable.

His once-controversial vision for “big science” was becoming a reality through the Human Genome Project, yet he didn’t feel like a winner. He felt suffocated. He had a new vision, a more far-sighted and expansive one that he insisted would revolutionize healthcare. But he felt the university bureaucrats were blind to the opportunity. They kept getting in his way. It was time, Hood felt, to have a difficult conversation with his biggest supporter.

On a typically dark and gray December day in Seattle, Hood climbed into his dinged-up Camry and drove across the Highway 520 floating bridge over Lake Washington to meet Gates, the billionaire CEO of Microsoft. Hood shared some startling news: he had resigned his endowed Gates-funded professorship at UW. He wanted to start a new institute free from university red tape. It was the only way to fulfill his dream for biology in the 21st century.

Gates was well aware of Hood’s record of achievements and its catalytic potential. Hood had led a team at Caltech that invented four research instrument prototypes in the 1980s, including the first automated DNA sequencer. The improved machines that followed made the Human Genome Project possible and transformed biology into more of a data-driven, quantitative science. Researchers no longer had to spend several years — an entire graduate student’s career — just to determine the sequence of a single gene. With fast, automated sequencing tools, a new generation of biologists could think broadly about the billions of units of DNA.

The sequences were obviously important, as they held the instructions to make proteins that do the work within cells. Thousands of labs around the world — at research institutions, diagnostic companies, and drugmakers — used the progeny of Hood’s prototype instruments as everyday workhorses. George Rathmann, the founding CEO at Amgen, biotech’s first big success story, once said Hood “accelerated the whole field of biotechnology by a number of years.”

Building on his success at Caltech, Hood had recruited teams at the University of Washington that continued to create new technologies to further medical innovation. One machine analyzed the extent to which genes are expressed in biological samples; another analyzed large numbers of proteins simultaneously. One device quickly sorted through different types of cells.

Hood certainly wasn’t the only biologist looking ahead, imagining what these automated tools could enable. Once scientists had the full parts list of the human genome on their computers, many believed it would lead to greater understanding of disease, paving the way for precise diagnostics and, ultimately, “personalized medicine.” But Hood had an unusually clear and far-reaching view for how biologists could fully exploit the new instruments. His enthusiasm inspired many bright scientists to devote themselves to his vision and to do their best work.

Not that he inspired everyone. Many people throughout his career saw a man who took excessive credit for discoveries made by others, including young scientists who toiled for him around the clock. Critics saw a self-promoting narcissist, someone who could be blind to the ways his actions sometimes hurt people. He had contradictions: Influenced by his teachers, Hood was dedicated to science education his entire career, yet he did little to mentor his own graduate students.

Passionate as he was about his vision, Hood could be strangely detached from the people he asked to carry it out. He had a big ego — an “unshakable confidence,” in his own words, and he thought he was entitled to special treatment, which frustrated university leaders. When people complained Hood was out of control, he usually turned a deaf ear, dismissing them as bureaucrats or whiners. He was quick to point the finger at others when things went wrong, while hardly ever admitting a mistake of his own.

Like many biotech entrepreneurs, Hood made promises he couldn’t keep. He predicted his work would lead to a personalized medicine revolution within a few years. It didn’t. Competitive to the core, he drove himself to stay at the cutting edge. That meant starting multiple projects at once, getting them operational, then leaping ahead to the next thing. He left the slow, painstaking, meticulous work of science to others.

Ever since James Watson and Francis Crick discovered the double helix structure of DNA in 1953, biologists had pursued the promise of molecular biology. Scientists spent the last half of the 20th century drilling ever deeper into understanding one gene, and usually the one protein created by that gene’s instructions, at a time. Each gene was studied in isolation. It was a thrilling time, as biologists saw there was an underlying unity to life: the DNA code was present in animals, plants, bacteria — every living organism on the planet. The code held genetic information that had so much influence over life on Earth. Many great discoveries had been made using a narrow, deep approach that sought to understand the meaning of the code in many contexts — different animals, different disease states, different environments.

Hood himself, as a graduate student, carried out important immunology research in this tradition. Yet at the start of the 21st century, Hood believed biology was ready for more ambitious goals. He believed traditional reductionism, looking at one gene at a time, was outdated “small science.” Biology was maturing from a cottage industry into a modern science with fast automated research tools. The time was right, he argued, for scientists to look at hundreds or thousands of genes and proteins together in the complex symphony that makes up a whole human organism with trillions of cells. Biology, like physics, had an opportunity to turn into “big science” — fueled by big money, big teams, and big goals.

The way to tackle such complexity, Hood said, was through what he called “systems biology.” It was a new twist on an old idea that involved bringing scientists together from various disciplines of biology, chemistry, physics, mathematics, and computer science. He wanted the power to recruit the right people to the University of Washington for this mission. These people didn’t always speak the same language, but Hood saw himself as the leader who could cross the scientific cultural divide.

He would break every rule and custom of academia if necessary. If he needed to offer a computer scientist a salary that was competitive with Microsoft’s wages, he would. He wanted to choose whom to hire and whom to fire. He needed the flexibility to raise his own money from wealthy donors, some of whom had struck riches in the original internet boom of the late 1990s. If he wanted to out-license a technology for further development to a company, or start a new company, he didn’t want to ask permission. Hood demanded a multi-million dollar facility with enough room for all of his scientists, complete with a floor layout that tore down traditional walls between departments to enable more collaboration. As negotiations at the University of Washington dragged on, Hood realized he wasn’t going to get what he wanted. And university officials were growing weary with his entrepreneurial break-all-the-rules attitude. Hood had approached a wealthy donor without permission. He was actively organizing an independent nonprofit while still on the state payroll. When officials suggested he had run afoul of a new state ethics law, Hood felt threatened and embittered. Abruptly, he quit.

For his final act, Hood wanted to be the boss. New ideas need new organizations to support them, he said. Years earlier, while at Caltech, he had learned this lesson the hard way. Nineteen established instrument companies told him they weren’t interested in developing his DNA sequencer. Hood and some venture capitalists started a successful new company, Applied Biosystems. Emboldened by that experience, Hood now decided he was ready for a whole new kind of risk. In his early 60s, he decided to give up his department chairmanship and tenured faculty position. He had to start his own research institute. It was time to put his money and his reputation on the line.

There were sensitivities that needed to be considered. Would all of Hood’s federally funded grants transfer to his new institute in an orderly manner? How many of his bright students and postdocs would leave an accredited university to join a risky venture? Would peers be supportive, or would they dismiss the institute as a flight of fancy unworthy of grant funding? Could he find lab space? How much of his own money would he need to spend? Would he be condemned by the media for turning his back on the University of Washington?

Many of those questions would take months or years to resolve. But on that gloomy day in December 1999, Hood wanted to break the news to Gates in person. He knew it would be bad form for Gates to hear about it on the evening news. Hood had his assistant call Gates’ office and request a face-to-face meeting. The two men had been close, so Hood got on the calendar. Gates had heard the gist of the institute-within-the-university idea and was curious to hear what was so important that it couldn’t wait.

The men sat down in a couple of comfortable chairs in Gates’ office in Building 8 at the Microsoft campus in Redmond. Hood came quickly to the point. He’d resigned his endowed Gates professorship at UW because the bureaucracy of a public institution would never be flexible enough to let him achieve his goals for multi-disciplinary systems biology. Hardly stopping for breath, Hood barreled through his long list of grievances with administrators who didn’t share his vision. In the same breath, he rhapsodized about the opportunity for systems biology.

The billionaire listened for a solid 15 to 20 minutes. When Gates asked whether the dispute could be resolved some other way, Hood said he had tried for three years to set up such an institute within the university.

When Hood had said his piece, Gates cut to the heart of the matter.

“How are you going to fund this institute?” he asked.

“Well, that’s part of the reason I’m here…” Hood replied.

Gates interjected.

“I never fund anything I think is going to fail,” he said.

Hood was stunned.

He hadn’t expected Gates to commit on the spot to bankrolling a new institute. But he didn’t expect to be flatly dismissed. Gates was a logical thinker, not the impulsive type. He was a kindred spirit, an entrepreneur, a fellow impatient optimist. Years earlier, they bonded on a safari in East Africa; Hood listened to Gates talk about the “digital divide” as hippopotamuses grunted in the night. Often, Gates peppered his biologist friend with questions about the human immune system, widely considered to be the most sophisticated adaptive intelligence system in the universe. The recruitment of Hood helped raise the University of Washington to international prominence in genomics and biotechnology during the 1990s. Given that success, Hood thought he could talk his friend into providing as much as $200 million for a new institute.

Hood didn’t realize it at the time, but Gates was starting to think more seriously about how to make an impact by giving away most of his fortune, by tackling diseases that plague the world’s least fortunate people. By contrast, Hood’s brand of systems biology was abstract, and its applications were likely to come first in rich countries.

The harsh truth, for Hood at least, took years to sink in. Gates didn’t give his institute a penny in its first five years.

Their friendship didn’t end, but the two men would never be quite so close again. “I definitely disappointed Lee,” Gates said years later.

Reflecting on Hood’s split with the university nearly 15 years later, Gates took a nuanced view. He was intrigued by Hood’s new vision, but he also saw why he didn’t work well with others in the university. “He’s a wonderful guy, but a very demanding guy. He’s kind of a classic great scientist,” Gates said. “These things are never black and white.”

On the drive home that fateful day in December 1999, Hood wondered whether he had said something wrong, failed to make a case. But it was a fleeting emotion. Moments of self-doubt, to the extent he had them, were brief. He confided in his wife, Valerie Logan, the one person he knew would give him the support he needed, no matter what. He brooded for a while. “It shocked and hurt me,” Hood said. “It was a statement of skepticism from someone I had hoped would support me.”

Others who were close to the situation understood why the meeting had gone badly. “Bill had not, at that time, been schooled in philanthropy,” said Roger Perlmutter, a former student of Hood’s who went on to run R&D at Amgen and Merck. “This gift to the University of Washington to create Molecular Biotechnology was surely the biggest thing he had done in philanthropy. It was all done to bring Lee here. And then in short order, it unravels? It was a kick in the teeth.”

If Hood’s first thought was that he had possibly damaged his relationship with his most important benefactor, his second thought was that his vision was right and he needed to find other support. He had some money already. Much of it was through his shares in Amgen, the biotech company he advised from its early days, and which had become one of the best performing stocks of the 1990s. He’d also made millions from royalties on DNA sequencers sold by another company he helped start — Applied Biosystems. And Hood had other wealthy friends and companies he could call on for help.

There was a lot to think about beyond science. Where to begin on starting a new institute? Even though he was hailed as one of biotech’s great first-generation entrepreneurs, Hood had never played an executive role in running those enterprises. Now, he would have to act as a startup CEO responsible for not just vision and fundraising, but day-to-day operations. He knew he wasn’t a skilled administrator. He was impatient in meetings, lacked empathy, and made clear to all around him that he didn’t want to hear any bad news. He had a bad habit of avoiding sensitive personnel matters, like whether to fire people.

None of that deterred him. Hood’s son, Eran, an environmental scientist, once said of his father: “He’s always sort of had this narrative in his life of him struggling against people who are trying to keep him from doing what he wants to do. I always joke they should take the Tupac Shakur song, ‘Me Against the World’ and rewrite it as ‘Lee Against the World.’ They could take out the district attorneys and the crooked cops and put in university presidents and the medical school deans who just don’t know, don’t understand, his vision.”

The path ahead was clear. Hood had to prove his vision was right. He would push himself around the clock, to the far ends of the Earth, spend his last nickel. Nothing, he was determined, would get in his way.

Luke Timmerman is the founder and editor of Timmerman Report, a subscription publication for biotech professionals. He is also a contributing biotech writer at Forbes, and the co-host of Signal, a biweekly biotech podcast at STAT. In 2015, he was named one of the 100 most influential people in biotech by Scientific American Worldview.

[Lee Hood, an M.D./Ph.D., went public in 2002 with "The Genome is a System" (implying that it needs some system theory - obviously beyond the realm of Old School Genomics). In the same year, I did not go public, but filed the FractoGene patent, holding that "The Genome is Fractal and Grows Fractal Organisms". Seven years later (2009), the Hilbert fractal of the genome was the cover picture of Science. Explaining genome (regulation) by fractal system theory, or by any other system theory, can scare some; Genomeweb encapsulated this in their one-liner "Paired Ends: Lee Hood, Andras Pellionisz". Now, another seven years later (2016), the big information-theory companies (Google, Apple, Samsung, Siemens, Microsoft and all) will have to gladly employ mathematical algorithms to meaningfully program their computers. Andras_at_Pellionisz_dot_com ]

What if Dawkins' "The Selfish Gene" had been the "Selfish FractoGene"?

[From "very near misses" to an almost complete embrace of my 2002 FractoGene concept (given to the hands of Dawkins in 2003 in Monterey and to Mandelbrot 2004 in Stanford), the 2004 original of "The Ancestor's Tale" by Dawkins & Wong has a new, 2016 edition (see fractal title-page below). Mandelbrot is gone (2012). While in his entire life he deliberately avoided "mathematization of biology" although he was offered extremely substantial funds since "biologists were not ready" (see his Memoirs), among the few of his illustrations he did feature the "Cauliflower Romanesca" (brought into limelight by Pellionisz, 2008).

Now, in 2016, another giant of influence, Richard Dawkins (Charles Simonyi Emeritus Professor of Oxford), has embraced the fractal approach to biology. I add some comments on what might have happened had Dawkins' "Selfish Gene" been the "Selfish FractoGene". - Andras_at_Pellionisz_dot_com

BBC Science & Environment published an utterly fascinating recollection of Richard Dawkins' "The Selfish Gene", 40 years on. Their title, though, was different]:

The gene's still selfish: Dawkins' famous idea turns 40

By Jonathan Webb

Science reporter, BBC News

24 May 2016

From the section Science & Environment

Richard Dawkins, 40 years after "The Selfish Gene".

[Excerpts from the BBC article] Prof Dawkins hunches over his laptop to dig up examples of biomorphs - the computer-generated "creatures" he conceived in the 1980s to illustrate artificial selection:

Apple Macintosh software "Biomorphs", used for his book The Blind Watchmaker.

[I switched from IBM-type mainframes (the kind Mandelbrot used) to the precursors of home computers: DEC-15 graphic computers with an optical cursor (a predecessor of the "mouse"). I built a computer model of the entire cerebellar neural network (of the frog), containing 1.68 million brain cells. From the arduous process of programming the growth of the five types of brain cells, among them the most spectacular neuron, the vast array of self-similar Purkinje cells, I knew from my "99% perspiration" (from 1965 to 1989) how much information one has to put in to "grow" the entire cerebellar neural network. This is why I never believed that the less than 1% of the human genome, "the genes", was enough information; it just could not be that the rest was "Junk DNA". The moment the Apple Macintosh appeared, I threw out all the obsolete hardware from my New York University Medical Center office and put a Macintosh both at work and in my home (Waterside Plaza, NYC 10010 - a five-minute walk away). Thus, I could work around the clock on software development for the mathematization (geometrization) of biology.

Among the first concepts to check out was a follow-up on Mandelbrot's musing in his famous "fractal book" that "clouds are not spheres, mountains are not cones", and even lightning does not travel along a straight line, with the additional allusion that maybe even brain cells are fractal. Just as Dawkins did, I used Lindenmayer L-string replacement (one class of fractal recursive algorithms) to duplicate an existing Purkinje cell (from the cerebellum of the guinea pig). All fractal algorithms are recursive: in the famous Mandelbrot Set, every new Z point is generated from the previous Z point squared, plus a constant C. Mandelbrot's "trick" on the Gaston Julia iteration was to follow the recursion with Z as a complex number. Few people know how to "square" a complex number, with its real and imaginary components, but it is just a couple of lines of code for a computer. With enough repetitions on an IBM mainframe, a new world opened up: the Mandelbrot Set, an epitome of "complexity generated from simplicity".

In my Cambridge University Press book chapter on the fractal Purkinje cell, I even alluded in 1989 that "far into the future" the recursive process necessary to generate a fractal brain cell might lead to The Principle of Recursive Genome Function (2008). With his "Biomorphs", Richard Dawkins had a "very near miss", indeed. Especially after teaming up with Charles Simonyi (Microsoft VP, mastermind of Microsoft Office), they must have been thoroughly familiar with the recursive "programming" of the "Biomorphs" (based on the recursive L-string replacement algorithm). However, neither Dawkins nor Simonyi devoted their focus to fractals or to neuroscience at the level of brain cells. Dawkins certainly developed a keen interest in the DNA as a code, but neither separately nor together did Dawkins and Simonyi make the "Eureka!" connection between fractal brain cells and the fractal DNA that generates them!

Dawkins was interested enough in the DNA as a code to come over from Oxford to the Monterey 2003 50th Anniversary Meeting of "The Double Helix" (Jim Watson was there; Francis Crick was already too ill to attend). I made the short drive from Silicon Valley to Monterey for all three days of the celebration, rubbing elbows with those enthused by the fact that in just over half a century not only had the full human genome been sequenced, but by 2003 we also knew that the full mouse DNA sequence was 98% identical (in its genes) to the human. The "only difference" was that the so-called "non-coding DNA" (maiden name: "Junk DNA") was one-third less in the mouse.

Throughout the three days (with my FractoGene patent already filed in 2002) I was very vocal about my FractoGene interpretation. I prepared a mini-CD whose cover pictured the four fractal stages of Purkinje neuron growth, illustrating with four lines of self-similar DNA sequence snippets how the obviously repetitious DNA is not only fractal, but that its fractality is in a cause-and-effect relationship with the fractal growth of the Purkinje cell it governs. In the "evening entertainment time" (at the Monterey Aquarium) I had an informal but rather lengthy chat with Francis Collins on why "Junk DNA" cannot be discarded. His single main objection was that some genomes are actually much larger than the human DNA (he cited the rice). Nonetheless, he (and also Craig Venter) seemed to be thinking hard about Ohno's "biggest mistake in the history of molecular biology" (1972). Upon return to Washington, D.C., Francis Collins requested funds for ENCODE-I to find out!
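The Z → Z² + C recursion mentioned above really is "just a couple of lines of code". A minimal sketch in Python (the function name and escape test are illustrative; Python's built-in `complex` type does the "squaring" for us):

```python
def mandelbrot_escape(c: complex, max_iter: int = 100) -> int:
    """Iterate z -> z*z + c from z = 0; return the iteration count at
    which |z| exceeds 2 (escape), or max_iter if it stays bounded."""
    z = 0 + 0j
    for n in range(max_iter):
        if abs(z) > 2:
            return n
        z = z * z + c  # squaring a complex number: one line of code
    return max_iter

# 0 lies inside the Mandelbrot set (never escapes); 1+1j escapes fast.
print(mandelbrot_escape(0 + 0j))  # 100
print(mandelbrot_escape(1 + 1j))  # 2
```

Repeating this test over a grid of C values, and coloring each point by its escape count, is all it takes to render the familiar image of the set.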

At the Monterey meeting I personally met Richard Dawkins (the only time I encountered him face-to-face). When I put my mini-CD with the FractoGene cover into his hands, he looked at it and I will never forget his face and body language. He did not say so, but to me the answer seemed to be "What did I miss with my Biomorphs!!! - This guy must be right!!!". I had plenty of copies of the mini-CD to distribute over the 3 days to all who took one. The FractoGene concept, however, was a "double heresy" - it reversed both fundamental dogmas of Old School Genomics. Any "recursion" was a horrific violation of the still-living Crick's "Central Dogma" - that the DNA>RNA>Protein "arrow" NEVER recurses back to DNA - and Ohno's "Junk DNA" nonsense made any "recursion" pointless, since the non-coding DNA was not supposed to carry any information. This is why I could only publish my fractal Purkinje cell model in a Cambridge University Press book (of a Copenhagen "Neural Net" meeting, where I was the program chairman...). Given the initial stonewalling I saw no chance to publish my FractoGene theory - thus I filed the utility it implies as a US patent in 2002, granted in 2012 (in force for about the next decade).

It is pure fantasy to ponder at this point "what if" Dawkins and Simonyi had made the cause-and-effect link between the repetitive (to me, obviously fractal) DNA and the (to everybody, obviously fractal) Lindenmayer L-string replacement algorithm.
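The Lindenmayer L-string replacement recursion behind the "Biomorphs" is equally compact. A minimal Python sketch of the generic rewriting step (my own formulation, shown here with Lindenmayer's classic 1968 algae rules rather than Dawkins' actual Biomorph rules):

```python
def l_system(axiom: str, rules: dict, generations: int) -> str:
    """Lindenmayer string rewriting: in each generation, every symbol
    is replaced in parallel by its rule (symbols without a rule are kept)."""
    s = axiom
    for _ in range(generations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Lindenmayer's original algae system: A -> AB, B -> A
print(l_system("A", {"A": "AB", "B": "A"}, 4))  # -> ABAABABA
```

Interpreting the symbols of the grown string as drawing commands (branch, turn, grow) is what turns such self-similar strings into self-similar, fractal-looking organisms on screen.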

Perhaps the Apple software efforts (both by Dawkins and myself) would have resulted in a very early "Big IT interest" of Apple in New School Genomics. Apple, in fact, did make a little attempt, e.g. using their Tower to speed up BLAST, one of the most often used algorithms of Old School Genomics... Perhaps by the time Steve Jobs got cancer, Apple would have been very deeply into genome informatics... Perhaps Steve's dilemma in his memoirs - that he would either be the first to be cured with the help of computers, or the last one to die without computers helping full force - could have ended the other way? If Simonyi had picked up the ball, perhaps Microsoft would now be a towering Big IT of New School Genome Informatics (where Bill Gates just forked over $100 M to Editas, one of the genome-editing companies). Charitable contributions of Charles Simonyi extend to Hungary (he is a son of Károly Simonyi, the famous physicist professor of Hungary, whose lectures I was blessed enough to listen to...). When in 2006 I organized a "PostGenetics" World Symposium in Hungary, to trigger biophysicists and software developers of Hungary to focus on New School Genome Informatics, perhaps I could have turned to Charles Simonyi to tide us over the problem that ENCODE-I only came out with its "NO JUNK!" results in 2007 (2006 was a bit early...). Bill Gates was asked very recently if his genome had ever been fully sequenced. ("No," he answered, adding "but it would be the first thing I would do if I learned of any sign of cancer.") Craig Venter had his full DNA sequenced ages ago (and had both types of skin-cancer episodes) - but he is on record multiple times that the data sit on the shelves awaiting breakthroughs in interpretation. What seems to be sorely missing is a focus on the mathematics of genome regulation - since cancers are clearly "a disease of genome mis-regulation".

Richard Dawkins completed his turn to embrace a "fractal approach" by 2016. With Yan Wong, he incorporated into the new 2016 edition of The Ancestor's Tale a "fractal interactive interpretation of evolution", based on the October 16, 2012 PLOS paper by Rosindell and Harmon (published just a few days after my FractoGene USPTO 8,280,641 was granted), amply referring to the brilliant OneZoom interactive web-portal (see below). Fractals made the cover picture of Science in 2009 and 2016 (see in this column), and now "fractal evolution" by Dawkins and Wong.


... he is particularly excited by the way whole-genome sequences can inform our extended family tree. ... As the technology gets cheaper and faster, Prof Dawkins says with excitement, "it will become possible to lay out the complete tree of life".

To emphasise the point, he returns to the laptop and opens OneZoom, a dazzling, all-encompassing representation of the tree of life which uses fractal shapes to allow continued expansion.

OneZoom - an interactive fractal model of evolution, based on the original PLOS paper "OneZoom: A Fractal Explorer for the Tree of Life" by Rosindell and Harmon, 2012, Oct. 16

The Ancestor's Tale's 2004 first edition never used the word "fractal", but the 2016 edition (with co-author Yan Wong) totally adopts a fractal viewpoint:

DNA pioneer James Watson: The cancer moonshot is ‘crap’ but there is still hope


JULY 20, 2016

James Watson, whose 1953 discovery of the structure of DNA with Francis Crick launched the revolution in molecular biology, says recent heart surgery has wreaked havoc on his long-term memory (though not his tennis serve: the 88-year-old can still reach 100 miles per hour). At a celebration of his friend Arthur Pardee’s 95th birthday last weekend at the American Academy of Arts and Sciences in Cambridge, Mass., however, Watson showed no signs of cognitive slowdown, much less of forgetting the world-changing events of 63 years ago.

His acerbic and impolitic wit was also in fine form. Describing one scientist who gave a talk at the meeting, Watson said, he is “so brilliant, he reminds me of Francis,” including being so much smarter than everyone else that “no one wants to work with him.” The public’s embrace of antioxidants may well be fatally misguided, he said, rattling off biochemical data on how reducing antioxidants in cancer cells may be the key to destroying them — while consuming high levels of antioxidants as pills or even in foods may increase the risk of dying of cancer, as he argued in a 2013 paper.

Watson spoke to STAT at the Academy and by phone. Here are excerpts from those conversations and from his remarks at the birthday bash:

On the cancer moonshot announced this year by President Obama:

The depressing thing about the “cancer moonshot” is that it’s the same old people getting together, forming committees, and the same old ideas, and it’s all crap . . .

On the prospects of curing cancer:

Everyone wants to sequence DNA [to treat cancer], but I don’t think that will help you cure late-stage cancer, because the mutations in metastatic cancer are not the same as those that started the cancer. I was pessimistic about curing cancer when gene-targeted drugs began to fail, but now I’m optimistic.

On what he sees as the best hope for treating and even curing advanced (metastatic) cancer: an experimental drug from Boston Biomedical (for which Watson is a paid consultant):

Papers have identified the gene STAT3, a transcription factor [that turns on other genes], as expressed in most kinds of cancer. It causes cancer cells to become filled with antioxidants [which neutralize many common chemotherapies]. In the presence of the experimental drug that targets STAT3, cancers become sensitive to chemotherapies like paclitaxel and docetaxel again. This is the most important advance in the last 40 years. It really looks like late-stage cancer will be partly stopped by a drug.

On his involvement in current cancer research:

I’m not at war with the cancer community, but they ignore me and I ignore them.

On his own anticancer regimen:

I take metformin [a widely used diabetes drug] and aspirin; I try not to eat too much sugar, and I exercise. Put all together, they probably reduce my cancer risk 50 percent. At 88, I give myself five years to see 80 percent of cancers treatable. What we can now say is that lots of untreatable cancers have become treatable. When does “treatable” mean “curable”? I’m not sure, but living five years with pancreatic cancer would be quite something. I don’t want to die until I see that most cancers have become curable.

[The government - and its NIH branch (more precisely the National Cancer Institute of NIH) - is in a crisis; directors have changed more than once lately. Not only does Watson (without question the greatest living champion of genomics) profoundly disagree with the government's solution, but even an intramural scientist of the National Cancer Institute, who is very well versed in mathematics, displays a "schizophrenic" stance. He first posted a version of his essay built on the realization that the underlying mathematics is fractal (citing some 50 papers, from Mandelbrot to Pellionisz, mostly on fractals). Somebody must have tapped him on the shoulder: "Do you know what this would mean? - We would all have to understand fractals!" (Actually it is very easy: just keep repeating a recursive operator; e.g. the "mind-boggling complexity" of the Mandelbrot set is totally determined by Z=Z^2+C. The "glitch" is that the new Z is a complex number, and some do not know how to square a complex number, while adding a constant is trivial.) Nonetheless, the second version of the same NIH paper was stripped of all fractal mathematics. It is a pity, since even medical doctors can remain totally immune to "squaring complex numbers", as computers do much more formidable computations for us on a daily basis. I am absolutely positive that "nobody wants to die (of cancer) before cancers have become curable". Neither Steve Jobs, nor Bill Gates - nobody. For that matter, I would bet that e.g. Bill Gates or Tim Cook (etc.) would, by a simple stroke of a pen, fork over any amount of computation for free if they, or any loved ones, could be liberated from the curse of cancer. What is the problem then? (Since hardly anybody would be totally satisfied with just metformin, aspirin and exercise...)
The problem is, as Jim so brutally clearly explains in his hallmark style, that the Old School of Genomics has presently locked horns with a New School of Genomics (thus we have the Janus double-face of the same NIH Cancer Institute paper). I am totally optimistic, since genome misregulation (a.k.a. cancer) will be solved with the awesome assistance of computers. If it is Apple, the tool will be as "user friendly" to MDs as e.g. their iPhone (e.g. to recommend the therapy with the best probability of being effective for a certain individual genome; easier than Googling an item). I developed an enormous amount of software in my life, and don't even know how many Gb of memory my old iPhone had (I no longer care, since the new model has plenty). Andras_at_Pellionisz_dot_com ]

Science Cover Issue 2016 July 22 with Fractal folding of DNA and of Proteins

[Twice the 7-year "critical time" has lapsed since FractoGene (2002). In September 2009, Pellionisz presented it at George Church's Cold Spring Harbor meeting. Weeks later, the Science cover picture was the Hilbert fractal folding (October, 2009). Now, after another 7 years, the Science cover picture in 2016 shows the self-similar sets of protein elements whose "design" is in the fractal DNA. No further comment is needed, but the following brief Chinese publication on "editing out fractal defects" clearly points to the epoch-making applications, e.g. to defeat cancer - Andras_at_Pellionisz_dot_com]

CRISPR Immunotherapy Trial Ahead

Jul 22, 2016

Chinese researchers will be starting a human trial investigating a CRISPR/Cas9-based therapy for metastatic non-small cell lung cancer next month, Nature News reports.

The trial, led by Sichuan University's Lu You, involves isolating T cells from patients who've failed chemotherapy, radiation therapy, and other treatments. The CRISPR/Cas9 approach will then be used to knock out the gene encoding the PD-1 protein that typically prevents the immune system from attacking healthy cells. Through this, the researchers hope to boost the patients' immune response to cancer. The trial, Nature News adds, is starting small, with just one patient and with low doses of altered cells and will gradually increase both the cohort size and dosage.

"Treatment options are very limited," Lu tells Nature News. "This technique is of great promise in bringing benefits to patients, especially the cancer patients whom we treat every day."

Approval for the trial, which the researchers received earlier this month, took about six months, Lu says. A similar trial in the US, led by the University of Pennsylvania's Edward Stadtmauer, has received approval from a National Institutes of Health advisory panel, but it also needs the go-ahead from the Food and Drug Administration and its institutional review board.

Qatari genomes provide a reference for the Middle East

Published online 20 July 2016

Researchers have assembled a reference genome to reflect the variants in Middle Eastern populations.

Written by Sedeer El-Showk

Weill Cornell Medicine - Qatar. A reference genome specific to Arab populations was recently published by geneticists in Qatar and New York, facilitating future research into genetic diseases and the application of precision medicine in the region [1].

It is hoped that the availability of a Qatari reference genome will enable doctors to treat Arab patients using precision medicine, a process which involves incorporating information about a patient’s genome in the prediction, diagnosis, and treatment of diseases.

“Precise genome interpretation is key to the successful integration of genomics in healthcare decision-making,” says the study’s lead author, Khalid Fakhro of Weill Cornell Medicine in Qatar (WCM-Q) and the Sidra Medical and Research Center.

Genomic information about Arab populations has so far been sparse, limiting benefit from precision medicine and other fruits of modern genomics.

“When we started sequencing the first 100 Qatari genomes, we were surprised by the unusually high numbers of variants being called. Either the chemistry [of the sequencing reaction] was introducing systematic errors or there was something biologically interesting going on,” says Fakhro. The sequenced Qatari genomes differed from the standard reference genome at more than four million locations — roughly 25% more than expected if sequencing a Caucasian or East Asian genome.

Decoding the genomes of Qatari Bedouins revealed that indigenous Arabs have probably been present in the Arabian Peninsula since the out-of-Africa migration, representing one of the oldest populations outside Africa [2]. As a result of this ancient divergence, variants which are quite rare in other groups may be quite common in populations of the region, rendering the standard reference genome inadequate for studying them.

To overcome this, the team embarked on creating a reference genome which could be used as the standard for further work in Gulf Arab populations. They collected samples from more than 1,000 Qataris and analysed them using the genomics and advanced computing resources available at WCM-Q and WCM-NY. By only using data from individuals who were unrelated and whose four grandparents had all been born in Qatar, the team ensured that the reference genome would be representative of Qataris in particular and Arabs in general.

“For now, this reference genome will be a good approximation for most Middle Eastern Arabs, but it’s still not better than a truly local reference for each subpopulation,” says Fakhro, who would like to see Arab scientists work together to build a rich data set of Arab genomes.

The researchers found that roughly one out of every six differences between the Qatari genomes and the standard reference were in fact common among Qataris; these were therefore incorporated into the new reference genome. “Surprisingly, some of these were previously reported as Mendelian disease-causing variants,” says Fakhro, explaining that they may have been incorrectly labelled as rare because their prevalence in this population was not known.
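The incorporation step described above - promoting alternate alleles that are in fact the majority in the local population into the reference - can be sketched generically. This is an illustration of the idea only, not the authors' actual pipeline; the function, data layout, and the 50% threshold are my assumptions:

```python
def build_population_reference(reference, alt_freqs, threshold=0.5):
    """Return a copy of the reference (position -> allele) in which an
    alternate allele replaces the reference allele wherever its
    frequency in the local population exceeds the threshold."""
    new_ref = dict(reference)
    for pos, (alt_allele, freq) in alt_freqs.items():
        if freq > threshold:
            new_ref[pos] = alt_allele
    return new_ref

# Hypothetical positions: only the variant that is the population
# majority (frequency > 0.5) is folded into the new reference.
ref = {101: "A", 202: "G", 303: "T"}
alt = {101: ("C", 0.83), 202: ("T", 0.04)}
print(build_population_reference(ref, alt))  # -> {101: 'C', 202: 'G', 303: 'T'}
```

Calling variants against such a population-adjusted reference is what reduces the flood of "differences" (and false positives) the article describes.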

"This study is the result of several years of hard work by colleagues in Qatar,” says Fowzan Alkuraya, a human geneticist at King Faisal Specialist Hospital and Research Center in Saudi Arabia. “I hope future studies will explore some of the features of the genome, but this is already a big step forward for genomics research in Qatar."

Alkuraya was not involved in this study. But Saudi Arabia is home to the Saudi Human Genome Project, which aims to discover disease-causing genetic variants in the Saudi population by sequencing 100,000 genomes.

Likewise, the Qatar Genome Programme was launched in 2014 with the goal of sequencing the entire Qatari population to deliver precision medicine. Both projects, along with studies of Arab samples elsewhere in the world, stand to benefit greatly from the new reference genome, which will reduce their computational burden and make false positives much less common.

“Qatar has made a serious commitment to be a leader in precision medicine, and we will continue to do our best to discover as much as possible for the Arab world from the Arab world,” says Fakhro.


Fakhro, R. et al. The Qatar Genome: A population-specific tool for precision medicine in the Middle East. Human Genetic Variation (2016)

Rodriguez-Flores, J.L. et al. Indigenous Arabs are descendants of the earliest split from ancient Eurasian populations. Genome Research (2016)

[There is a worldwide rush to sequence populations (full DNA) that appear to exhibit much more divergence than previously thought. First, why this unexpected diversity? The reason is that the 19 thousand or so "genes" provide "the basic materials", pumping out proteins (through RNA). They are more or less the same in humans as in the mouse (found in 2002, the year after the human sequence, when the mouse turned out to have 98% the same "genes"). Even the tiny worm C. elegans has about the same number, and the same kinds, of genes. Not so for the vast amount (98% in human) of "Junk DNA" - which, after a tumultuous struggle since Ohno's 1972 "So much Junk DNA in our Genome", we all agree is anything but "Junk". The non-coding DNA provides the fractal design of how the basic materials are architected together - and the different groups of organisms, from worms to mammals, are just about as different as the great variety of "fractal flowers, leaves, trees". Okay, but what is the significance of deep-sequencing the full human genomes of different groups of people? The paper emphasizes "precision medicine" (the new term for "personalized medicine"). There is, however, an even more basic utility. Just think of tightly knit ethnic groups in which arranged marriages are frequent. Enormous fortunes are scrupulously considered: how to preserve them while merging and dividing. Think about this. With all the calculation and pondering about material wealth, should the most important treasure we all have (our hereditary material) be considered perhaps even more carefully? Indeed! There are already scores of structural variants (fractal defects) that one person can harbor without him/her or descendants showing any health problems at all - except if both parents harbor some identical defects. The closer knit the community, the more likely it is that in their arranged marriages the partners are blood relatives.
Thus, the probability that "all remains in the family" enhances the chance that rather serious malfunctions can be inherited from both sides - UNLESS the arrangement of marriages meticulously investigates not just the merged wealth, but also the compatibility of closely similar genomes. Does it mean that certain matches should not happen? Fortunately, there is no such bad verdict. There are already methods to select for full development those fertilized eggs that, e.g. in a 50-50% chance, do NOT inherit the defect. Sounds simple? At the level of different societies, it is not all that simple. Affordability is only one question - the expertise and cultural acceptance are much more difficult challenges. Yet such a positive use will absolutely happen! Andras_at_Pellionisz_dot_com ]

CIA Chief Claims that Genome Editing May Be Used For Biological Warfare

CIA Director John Brennan claims that advances in genome editing pose a threat to national security and may be used to create biological weapons.

NEW YORK (Sputnik) — Advances in genome editing pose a threat to national security and may be used to create biological weapons, Central Intelligence Agency Director John Brennan said at the Council on Foreign Relations on Wednesday.

"Nowhere are the stakes higher for our national security than in the field of biotechnology," Brennan stated. "Recent advances in genome editing that offer great potential for breakthroughs in public health are also a cause for concern because the same methods can be used to create genetically engineered biological warfare agents."

In April, the Department of Homeland Security’s Office of Health Affairs Acting Director Aaron Firoved testified in Congress that synthetic biology and gene editing offer terrorist organizations the potential to modify organisms for malicious purposes, such as manmade pathogens that can rapidly cause disease outbreaks.

Moreover, subnational terrorist entities such as Daesh "would have few compunctions in wielding such a weapon," Brennan noted.

Brennan called on the international community to create national and global strategies to counter such threats, along with a consensus on the laws, standards and authorities needed to counter them.

[Nuclear science started with the almost trivial observation that radioactive rocks left some ugly dark spots on a photographic plate. Soon, basic axioms of physics/chemistry (Dalton's "atom theory") became obsolete, since elements in the Mendeleev table CAN transform into other element(s) - by nuclear fission or fusion. The hard part for science was that Quantum Theory and Nuclear Physics had to be created (by the Copenhagen group) to mathematically understand the new science. Still, it remained a brave new but innocent effort of science. Leo Szilárd, however, composed a "heads up" letter, and with Edward Teller they had Einstein sign it for the President's perusal. It pointed out that unheard-of amounts of energy could be unleashed by nuclear fission - and WWII could be won! Incidentally, the same Leo Szilárd filed the patent for nuclear reactors (a peaceful utilization of nuclear energy) - however, his patent was not released for many, many years. One may also note that John von Neumann architected a new device (the computer) to help deal with the mathematical monstrosities of the challenge.

Old School Genetics used to be like Dalton's "atom theory". "Genes" counted, but the rest - 98.5% - was "Junk DNA". "Genes" weren't "jumping", weren't "scattered in pieces" (each was a contiguous sequence, albeit interrupted by totally inexplicable "introns"), and proteins were "never" recursive to the non-coding DNA (see Crick's obsolete Central Dogma). New School HoloGenomics (Genomics plus Epigenomics) overturned all these primitive dogmas. The mathematical laws of genome regulation are in the domain of fractal-chaotic nonlinear dynamics (and e.g. cancer is a systemic breakdown of hologenome regulation - better strengthen your immune system!). Still, HoloGenomics has been a very benevolent, basic-science effort with anti-disease applications (though we pointed out years ago that, with an understanding of hologenome regulation, a totally new type of "antibiotics" could be made by shutting down the pathogens' genome function).

Curiously, a huge amount of massive downloads from this column by the Homeland Defense Data Center (Cheyenne, Wyoming) sobered me just a couple of days ago: New School HoloGenomics may have left the station of innocence. In any event, massive funds will be available to mathematically understand genome regulation (e.g. what fractal defects could be edited out to fight diseases, or ... ). Look for the Homeland Defense facility (at Livermore, California), DARPA, NSF (etc.) suddenly considering the mathematical understanding of hologenome function a super-urgent matter. Andras_at_Pellionisz_dot_com ]

Are Early Harbingers of Alzheimer’s Scattered Across the Genome? [FractoGene!]

13 Jul 2016

Large genome-wide association studies have implicated fewer than two dozen genes in AD, but some scientists believe thousands of other variants may influence the course of the disease, even if only by a smidgen. When considered en masse, these genetic weenies may punch above their weight, according to a study published in the July 6 Neurology. Researchers led by Elizabeth Mormino of Massachusetts General Hospital in Charlestown report that thousands of genetic variants, which by themselves fail to pass significance thresholds imposed by GWAS, nevertheless associate with cognitive and structural brain changes that precede the onset of dementia. They propose using polygenic risk scores derived from these GWAS runners-up to more completely assess genetic risk for AD, rather than focusing on the few strongest loci.

“If we restrict ourselves to the top hits, we may be missing some useful genetic information that is buried beneath the surface of GWAS data,” Mormino told Alzforum. Commenters agreed that the most important implication of the study was that more genetic associations exist below the statistical thresholds used for AD GWAS.

Some studies estimate that genetic variation explains more than half the risk of developing AD (see Gatz et al., 2006). However, the 21 variants thus far found in GWAS together account for only 2 percent of the AD risk attributable to genetics (see Oct 2013 news; Jul 2013 conference coverage). The ApoE4 gene alone takes care of another 6 percent. That leaves a sizable portion of genetic underpinnings of AD unaccounted for.

To unearth the hidden associations, researchers have started lumping together genetic polymorphisms that fall below the statistical GWAS criteria. They generate polygenic risk scores (PGRS) based on how many of these polymorphisms a person has (see Maher 2015). Some studies have revealed that many weakly associated loci cumulate to strengthen the genetic contribution underlying a given disease, while others, such as a recent polygenic study on diabetes, have found that not to be the case (see Fuchsberger et al., 2016). One recent polygenic study analyzed data from the International Genomics of Alzheimer’s Project (IGAP) to find that the collective association of thousands of variants accounted for far more of the AD heritability than did the 21 GWAS hits (see Escott-Price et al., 2015). A more recent study by Sonya Foley and colleagues at Cardiff University, Wales, correlated polygenic risk scores with low hippocampal volume in healthy young adults, suggesting a genetic basis for changes in the brain that precede dementia (see Foley et al., 2016).

Mormino and colleagues explored more broadly whether polygenic risk scores associate with pathological changes that precede the onset of dementia. The researchers started by revisiting the GWAS data from IGAP, which included more than 17,000 people with AD and 37,000 healthy controls. They lowered the significance threshold from a stringent p value of 5x10^-8 to a more liberal cutoff, hoping to uncover variants that together might associate with AD. When they chose a p value of 1x10^-2, more than 16,000 polymorphisms emerged. Using this expanded set of SNPs, the researchers generated polygenic risk scores for individual people in the Alzheimer’s Disease Neuroimaging Initiative (ADNI) dataset, based on how many of these SNPs each person harbored. Compared with just considering the GWAS hits alone, those scores increased the researchers’ ability to distinguish 166 patients with AD dementia from 194 older cognitively normal people in ADNI by about fourfold, Mormino et al. report. Lowering the significance threshold further (i.e., 10^-1 or more), and thus including even more polymorphisms, brought no further improvement in the ability to distinguish between AD patients and controls.
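The score construction described in the paragraph above - select SNPs below a relaxed p-value cutoff, then sum each person's risk-allele counts, typically weighted by GWAS effect sizes - can be sketched as follows. This is a generic illustration of a polygenic risk score, not the Mormino et al. pipeline; all names, SNP IDs, and numbers are hypothetical:

```python
def polygenic_risk_score(genotypes, effects, p_values, p_cutoff=1e-2):
    """Sum of risk-allele counts (0, 1 or 2 per SNP), weighted by GWAS
    effect sizes, over the SNPs that pass the relaxed p-value cutoff."""
    score = 0.0
    for snp, count in genotypes.items():
        if p_values.get(snp, 1.0) <= p_cutoff:
            score += count * effects.get(snp, 0.0)
    return score

# Hypothetical person: rs0001 and rs0002 pass the 1e-2 cutoff, rs0003 fails.
genotypes = {"rs0001": 2, "rs0002": 1, "rs0003": 2}
effects   = {"rs0001": 0.15, "rs0002": 0.08, "rs0003": 0.30}
p_values  = {"rs0001": 3e-4, "rs0002": 8e-3, "rs0003": 0.4}
score = polygenic_risk_score(genotypes, effects, p_values)  # 2*0.15 + 1*0.08
```

Relaxing `p_cutoff` is exactly the knob the study turned: it pulls thousands of sub-threshold SNPs into the sum, each contributing only "a smidgen".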

Armed with this expanded set of SNPs, the researchers searched for association between PGRS and early biomarkers of AD in 526 ADNI participants who at baseline were cognitively normal or had mild cognitive impairment but no dementia. The researchers found that higher PGRS associated with lower baseline performance on memory tests and with smaller hippocampal volumes. Over a nearly three-year follow-up period, higher PGRS scores associated with faster decline in memory and executive function, but curiously not with decreases in hippocampal volume. The higher scores also associated with progression: People with high PGRS were more likely to progress to MCI, or from MCI to AD.

What about polygenic risk and amyloid burden? In 505 participants without dementia from ADNI2, higher PGRS correlated with having a positive amyloid-PET scan. In 272 ADNI1 participants with available CSF data, higher PGRS trended toward lower CSF Aβ levels though this association missed statistical significance. The researchers attributed this to the smaller number of participants.

The researchers next tried to relate PGRS to changes that occur decades prior to disease onset. To do this, they measured PGRS in more than 1,300 healthy volunteers, aged 18 to 35, from the Brain Genomics Superstruct Project, an open-access neuroimaging dataset run by Randy Buckner at Massachusetts General Hospital in Boston. As with the Welsh study, high polygenic risk scores marginally associated with small hippocampal volume, indicating that polygenic risk influences brain structure at an early age, even five decades prior to the typical onset of AD in people who ultimately get it. Furthermore, given that most people at this age have no amyloid deposition yet, this polygenic influence on brain structure is likely amyloid-independent, Mormino told Alzforum. “The potential implications of this result are that changes associated with AD may begin earlier than we thought, and also have a genetic basis,” she said.

The polygenic risk score accounted for a modest amount of variance in disease factors. For example, the score explained 2.3 percent of the variance in baseline memory, 3.2 percent of the variance in longitudinal memory, 1 percent of the variance in Aβ deposition, and 2 percent of the variance in baseline hippocampal volume in the ADNI cohort, and just 0.2 percent of the variance of hippocampal volume in the young cohort. Mormino said this is likely due to the presence of unknown rare variants with large effect sizes, or to synergistic relationships between variants, for which the polygenic scores at present cannot account. Furthermore, Mormino said she was measuring association with intermediate phenotypes, such as Aβ deposition and hippocampal volume, rather than the final outcome of AD that had been used to select the polymorphisms in the first place. Because having a small hippocampus, for example, does not always lead to AD, the strength of the associations is inherently limited.

Could polygenic risk scores be used to identify candidates for prevention trials? Mormino considers this use a long way off. Gerard Schellenberg of the University of Pennsylvania in Philadelphia agreed, saying that polygenic risk scores are unlikely to predict who will develop a given phenotype because they account for such a small percentage of the variance. Instead he sees value in combined scores of genetic and non-genetic risk, such as cardiovascular health and lifestyle factors. Schellenberg pointed out a fundamental implication of this paper. “The most important aspect of the study was not the potential application of PGRS, but rather the implication that unknown genes associated with AD still exist,” he said. “But just because they exist doesn’t mean we’ll find them.”

Nick Martin of QIMR Berghofer Medical Research Institute in Queensland, Australia, saw the findings as a motivation to do even larger GWAS to uncover these hidden risk variants. “The lesson of this paper is that even larger sample sizes will find new loci, which may lead us to new pathways and new biology,” he wrote. “This is certainly the experience with the recent success of GWAS for schizophrenia, where 108 separate loci have now been identified, leading us to previously unsuspected biology and strong leads for new therapies.”—Jessica Shugart

[The Old School of Genomics rapidly yields to New School HoloGenomics. Gone are the days (decades, rather) when for a single phenotype (e.g. a disease, from schizophrenia to Alzheimer's) "there should be a single gene responsible". It is not so simple. The genome-epigenome (hologenome) obviously demands a SYSTEMIC understanding of its regulatory system. Genes (in the sense of protein-coding DNA fragments) are not necessarily "bunched up" into any solid block. They are scattered across very large LINEAR distances (which can become minuscule distances through the demonstrated "fractal folding" of the genome). Singling out any one element (coding or not) may be valid in the case of so-called Mendelian diseases, where a single-point SNP mutation may produce, through a premature termination codon, an "incomplete protein" ranging from "not very effective" to severe or even lethal. We are not so lucky with late-onset regulatory diseases, especially with cancers. To appreciate the need to raise our heads and look from the very different SYSTEMIC perspective of the New School of Hologenomics, please consider the dramatic visualization, a movie of a well-known collapse of a SYSTEM.

Play YouTube

3.7 million viewers were interested enough to view this 4-minute video. Suppose the structure was riveted. Would it be an interesting approach to investigate certain rivets that were sheared? It certainly happened. Would it be interesting to track down the serial number of the first rivet that snapped, and name it, e.g., APOE-4? Is it possible that that certain rivet had some weakness compared to others? Not only possible, but rather likely. Still, such a track of "Old School Investigation" may be fundamentally misguided, e.g. by the conclusion that the certain rivet was a "collapse rivet" and actually caused the collapse of the bridge. From a different, SYSTEMIC viewpoint it is obvious that the oscillation-dampening SYSTEM was the cause of the collapse; THE CAUSE WAS A DESIGN FAULT, not any large and distributed set of rivets lurking in the structure. From this viewpoint it is rather futile, e.g., to count the number of rivets that were eventually sheared. Just counting and naming sheared rivets would not advance bridge architecture in a meaningful way. Obviously, the resonance properties of the design were ill-understood (and were not investigated in wind-tunnel experiments). Was it possible at all, having learnt the SYSTEMIC PROPERTIES, to strengthen the design? Absolutely! Much has been learnt by understanding the design defects, and new bridges were built by "editing in" stiffening elements, possibly using the very same kind of rivets as the bridge that collapsed not because of a "rivet problem" but because of a design problem. I know some people don't like metaphors (others do...), and one might scratch his/her head wondering what we can do. "Ask What You Can Do For Your Genome!" says the motto of HolGenTech. Ask me, if interested.]

[Independence Day] A 27-year-old who worked for Apple as a teenager wants to make a yearly blood test to diagnose cancer — and he just got $5.5 million from Silicon Valley VCs to pull it off

Business Insider

Lydia Ramsey

When Gabe Otte went to college, he already had an Apple internship under his belt. In undergrad, his computer-science professors told him to diversify and pick another area to focus on so he wouldn't get bored in class; he chose biology.

Now, at 27, he just got $5.5 million in a seed-funding round led by Andreessen Horowitz's bio fund to build out a blood test that screens for the earliest signs of cancer. Founders Fund, Data Collective Venture Capital, and Third Kind Venture Capital are also in on the round.

"What we're aiming to do is develop a test that healthy patients would take as part of their annual physical that tells you whether or not somebody's going to have cancer," Otte told Business Insider.

Freenome, a Philadelphia-based startup that Otte founded two years ago along with Dr. Charles Roberts and Riley Ennis, wants to use supercomputing to crunch the human genome (the entire genetic material that gives our body instructions on how to live and grow) to look for any signs of cancer that are hanging out in the body.

These tests are often referred to as "liquid biopsies," since unlike solid-tumor biopsies, these "liquid" versions just pick up clues from the blood. They rely on something called "circulating tumor DNA," or the bits of DNA that are released from dying tumor cells into the bloodstream. Knowing what abnormalities a specific tumor possesses could help link cancer patients to treatments that specifically target those mutations, as a more effective approach to cancer treatment, which is how the tests are being used right now. Better yet, the hope is that you might be able to accurately find these bits of DNA before a person's even diagnosed with cancer.

It's something everyone from Guardant Health, which has been running liquid biopsies on people with cancer to monitor how the disease progresses, to Illumina spin-off Grail is working on. In May, Guardant launched LUNAR, a collaboration with research institutes that will study Guardant's own diagnostic cancer blood test.

Where Freenome wants to be different

Right now, liquid biopsies screen for specific, vetted genetic mutations found in a small fraction of the blood that give you information you can act on in the course of a patient's treatment. At the moment, that's a bit more limited than solid-tumor biopsies, which is why those are still considered the gold standard by doctors.

But Otte said he wants to go after the whole genome.

"When I talk to a biologist, they'll be like, 'What's wrong with that? You know what you're looking for, you're going and looking for it,'" he said. That's much different from the reaction he gets when he talks to people in tech, who think the idea of throwing out data is ridiculous. "If you can get more data, if you can get the entire genome, why would you throw that out?"

Crunching all that data takes a lot of computing power and money, but with the cost of genome sequencing falling quickly, Otte seemed convinced that he can do it cheaply, working methodically. Even the latest funding round wasn't necessary to keep the lights on, he said, thanks to a grant the company received in 2015.

"It needs to be done in this step-wise way, with these validations along the way," he said. "Otherwise, chances are you're going to throw a bunch of money down the drain."

Another Theranos?

The story lines may sound similar (revolutionary blood-testing technology, young founders, promises to disrupt the healthcare industry), but Andreessen Horowitz, to validate the test, sent Freenome five blinded blood samples for the company to sequence on its technology to show that it works. Having blinded samples meant that Freenome's team didn't know what was in the blood ahead of time, so they couldn't fudge the results to make the tech look more advanced and accurate than it actually is. It's a key difference from what's being reported on Theranos and its relationship with Walgreens.

Otte said he's committed to publishing all work and working with regulatory agencies to make sure everything is up to standards. So far, Freenome has tested out its technology in hundreds of samples, and the next step is to validate it on a larger, thousands-of-samples level.

Andreessen Horowitz's first bio-fund investment was in November, in twoXAR, which is looking at how to use digital tools in the drug-discovery process. Vijay Pande told Business Insider that the bio fund already has multiple other investments in addition to Freenome, but most are working quietly at the moment. Pande seemed confident that this next wave of health-tech entrepreneurs would fare better than others have in the past, in part because founders are getting a better understanding of both computer science and lab sciences.

"There's this new crop of founders that can go deep in biology, and can also go extremely deep in computer science," he said. "They're well versed in both areas, which is much different than founders five to 10 years ago."


[Theranos, Guardant, Grail ... and now Freenome. Those who recall their personal experience with "the Internet boom" know this type of explosion very well. Although Google, Apple, Intel (etc.) in Silicon Valley could easily acquire Illumina (stock value about $20 Bn, roughly the same as Motorola Mobility...), the "Internet Revolution" did not happen to huge companies (see Christensen's bestseller "The Innovator's Dilemma": the bigger and better firms are, the more likely they are to fail to make the tight turns that even "multiple disruptions" demand). In the Internet Boom "the king of computers" IBM failed miserably both with its home computer hardware (who remembers the extinct "IBM PC"?) and with its home computer software (who remembers the extinct "OS/2"?). The winners were totally "noname" (freename) companies; rather, small bunches of young innovators who leaped to Freedom from the slavery of huge companies, like Steve Jobs' Apple and Bill Gates' Microsoft (helped by Charles Simonyi from Hungary, the VP who developed Microsoft Office, perhaps the most enduring home computer software). To be fair, even Intel was sparked to become a monster from a tiny company a generation before, by young employee number 4, the now late Andy Grove from Hungary. Lydia Ramsey is also right to point out another key factor. While Big Pharma Roche has been jockeying (twice) to do a hostile takeover of Illumina, and thus to accomplish the vision of Complete Genomics (2008) of a "sequencing and Google-type data center", few in the Big Pharma domain resonate with advanced mathematics, as e.g. Eric Schadt (a double-degree biomathematician, in his case at Merck) can testify. In my case, when I advocate "The Principle of Recursive Genome Function" and thus conclude with FractoGene (fractal genome grows fractal organisms), typically I meet a "blank face": many have no (mathematical) idea of what a "fractal" might be.
At best, some eminent Venture Capitalists whom I would rather not name here vaguely remember "fractals as pretty pictures" and direct me to the Eric Schadt type of mathematicians. Thus, while huge entities are now jockeying for exclusive rights to the FractoGene patent (8,280,641, in force for the next decade), Roche is still stuck at the stage of "searching for known markers"; so I largely gave up on "Big Pharma" and vetted "Big IT" instead, all of whom are very keen on the "next big thing in Silicon Valley". With all said and done, my small incubator, less than a mile from Apple's new Spaceship HQ, could pick up those talented developers (and their investors...) who see hyperescalation in a way "independent" of huge entities like Apple, Google, Amazon and the like. Today, as Illumina, PacBio and other manufacturers of sequencers (a commodity) are keen to develop markets around the World for their products in order to survive, my HolGenTech, Inc. is working with my homeland, Central European Hungary. If the "developing rim" of the EU could produce Skype (in Estonia), the five times larger Hungary, with an excellent record of mathematicians, computer scientists, etc., can easily produce hyperescalation. Andras_at_Pellionisz_dot_com ]

President Obama Hints He May Head to Silicon Valley for His Next Job


...the soon-to-be former Commander-in-Chief is considering what his next job could be, and it could be one that will shake up Silicon Valley

In a recent interview with Bloomberg, Obama expressed his interest in the tech industry of Silicon Valley, hinting that his next job may be as a venture capitalist.

"Well, you know, it’s hard to say. But what I will say is that—just to bring things full circle about innovation—the conversations I have with Silicon Valley and with venture capital pull together my interests in science and organization in a way I find really satisfying. You know, you think about something like precision medicine: the work we’ve done to try to build off of breakthroughs in the human genome; the fact that now you can have your personal genome mapped for a thousand bucks instead of $100,000; and the potential for us to identify what your tendencies are, and to sculpt medicines that are uniquely effective for you. That’s just an example of something I can sit and listen and talk to folks for hours about."

["Listen and talk to folks for hours about it" is by far the easiest part. "Talking about it" will not make it happen. Investing in the bottleneck of genome informatics makes it happen, and ultra-hard as it is, it needs all the help it can get. Actually accomplishing multiple paradigm shifts is quite a bit harder than "listening to it", and not everybody gets it right away. Sometimes it takes decades, unfortunately. For instance, a 2008 YouTube Google Tech Talk precisely predicted that with the advent of powerful and affordable genome sequencing there would be such a glut of full human DNA sequences that sequencer companies would head towards bankruptcy, taking billions of dollars of investments with them into their not-so-bright future (remember Silicon Valley's Complete Genomics, which had to be sold to China for the peanuts of $118 M to avert bankruptcy?). Even today, both Illumina and PacBio are trying hard to build up a market for the "dreaded DNA data deluge". While I wonder whether the would-be venture capitalist took the 58 minutes to view the fairly technical video, many thousands of experts did. Nonetheless, now we have a "deja vu". Al Gore became famous for "inventing the Internet" and became a Silicon Valley Venture Capitalist (at Kleiner Perkins Caufield & Byers, by the way, along with Colin Powell). A key factor, probably, is that politicians at the top certainly know "where the next huge pile of money hides". So did Dick Cheney (WellAware), and many others. If Obama wants big money, he might end up with the now fairly mature Google (where Google Genomics is still under reorg in "Alphabet"). If he is up to "making a new difference", it is not inconceivable that he might team up with Francis Collins and actually make genome-based precision oncology a reality in the private sector in Silicon Valley.
Politicians becoming Venture Capitalists are almost always a landmark that a formerly "government" sector has become "private sector", and as a result (see the classic example of the Internet) has hyperescalated in funds, success and utility. Andras_at_Pellionisz_dot_com.]

Is This the Biggest Threat Yet to Illumina?

The launch of a new gene-sequencing machine made by Pacific Biosciences is underway

Illumina (NASDAQ:ILMN) has held go-to status as the dominant maker of gene-sequencing devices for years, but a new, next-generation gene-sequencing machine made by Pacific Biosciences (NASDAQ:PACB) aims to change that. Can Illumina fend off this competitor?

First, a bit of background

As the cost to sequence a gene falls and researchers increasingly discover the genetic causes of disease, drugmakers are flocking to create genetically inspired, personalized medicine. As a result, there's been a tidal wave of demand for gene-sequencing machines that allow researchers to peer into and analyze DNA.

Currently, Illumina, which markets high-throughput machines that can sequence an entire genome for less than $1,000, is the leading manufacturer of these machines, controlling an estimated 90% market share. Globally, Illumina boasts an installed base of more than 7,500 machines, including 300 top-of-the-line HiSeq X sequencers. As a result, sales of Illumina's sequencers and the consumables used to run them topped $2.2 billion last year, up 19% from 2014.

Mounting a threat

Up until now, Pacific Biosciences has been a bit player in the gene-sequencing industry.

The company's installed base of 160 units and its $93 million in 2015 sales, including $30 million in milestone payments from Roche Holdings, positions it a far cry south of Illumina.

However, Pacific Biosciences' next-generation sequencer -- the Sequel -- may allow it to mount its most significant threat to Illumina yet.

The Sequel's $350,000 price tag is roughly half that of the company's previous RS II model. Thanks to new technology, the Sequel is also far smaller and lighter than its predecessor. Additionally, the Sequel can deliver longer reads than Illumina's machines, and researchers may find that advantage compelling, especially if they're working in clinical research.

Overall, Sequel's cost and performance advantage could lead to Pacific Biosciences selling a bunch of them -- something that may already be beginning.

In Q4, Pacific Biosciences reported that it received 49 orders for the Sequel, including orders for 10 machines that it installed at customer sites in December. That's arguably a solid start for the Sequel, considering that the company recorded only 40 orders for the RS II in all of 2014.

Hang on a minute...

Does the Sequel mean that Illumina's best days are behind it?

Not necessarily. The Sequel is an intriguing machine, but Illumina is far from on the ropes. Illumina's product line is broader, it's deeply entrenched with its customers, and it's got deep pockets that can allow it to respond to the Sequel with products of its own. Pacific Biosciences has less than $85 million in cash on the books exiting last quarter, while Illumina has a $1.2 billion cash stockpile. That's a big advantage and it should give Illumina the financial flexibility to navigate around the Sequel.

Perhaps a bigger risk to Illumina will come from Roche Holdings. Roche's deal with Pacific Biosciences allows it to rebrand the Sequel and sell it to clinicians in the field of in vitro diagnostics. Given that Roche is a global powerhouse, and that the role of sequencers as a tool for diagnosing illness and determining treatment protocols could be huge, Roche's sequencing business could pose a big threat to Illumina's MiSeqDx machine, which targets the in vitro diagnostics market.

Looking ahead

Genetic sequencing is fueling the development of increasingly complex medicine and, arguably, that's where the drug industry is heading. If Pacific Biosciences can convince researchers that Sequel's cost and long read advantage are worth it, then this stock could become one of the market's most intriguing growth stories over the next few years.

However, the Sequel's launch is in the early days, so there's no guarantee that customers will continue to flock to it, or if they do, that Pacific Biosciences will turn profitable. Because of those risks, Pacific Biosciences may be worth keeping an eye on, but it's still a high-risk investment.

[The determinant of an Illumina/PacBio battle is NOT the set of their sequencing parameters. The winner will be the sequencer-commodity company that realizes the original concept of Complete Genomics: not only rolling out basic tools but also supplying a steady stream of ever-improving versions of genome-analytics software. Since the technology of Complete Genomics was sold to China, a likely winner might be BGI. Illumina's stock dipped 24% in a week in late April, due to lackluster European sales, while BGI already has 13 established outlets in Europe. Fortunately, not yet in Central (formerly East) Europe. Andras_at_Pellionisz_dot_com]

Big talk about big data, but little collaboration

May 13, 2016, 07:00 am

By Nancy G. Brinker and Eric T. Rosenthal, contributors

There's been much talk lately about big data's potential value in treating cancer, but little effort has been made to make big data bigger — and more effective — by sharing what's being collected.

Big data in clinical cancer care involves collecting vast amounts of data about patients that can be analyzed to identify trends, associations and patterns that would help oncology professionals develop better and more tailored therapies for cancer patients.

Today, most of this information is available for only the estimated 4 percent of patients involved in clinical trials.

Big data initiatives — in government, academia and industry — are striving to collect genomic and other patient information from the remaining 96 percent of cases through electronic health records (EHR) and other cancer-related registries.

However, not all EHRs and registries are compatible with one another; there is a lack of standardization; there is not consensus about what constitutes "good" data; there are privacy issues involved; not all patients and physicians understand or are incentivized to contribute to the effort; much of the data collected is proprietary; and many of the entities involved in big data initiatives are competing rather than collaborating.

There are currently numerous big data initiatives underway. Some examples include:

On the federal level: The president's Precision Medicine Initiative, the vice president's cancer "moonshot" program, and the Department of Veterans Affairs' Million Veteran Program all depend on big data.

On the academic and professional society level: The American Society of Clinical Oncology (ASCO) — the world's largest clinical oncology organization — is developing CancerLinQ, a health-information technology platform "assembling vast amounts of usable, searchable, real-world cancer information into a powerful database"; and the American Association for Cancer Research (AACR) — the world's oldest and largest scientific organization focused on cancer research — is working on Project Genie (Genomics, Evidence, Neoplasia, Information, Exchange) to "provide the 'critical mass' of genomic and clinical data necessary to improve clinical decision making and catalyze new clinical and translational research."

On the academic and for-profit level: Together, IBM's Watson supercomputer and the New York Genome Center are developing a "national tumor registry to match genetic characteristics with available treatments for patients."

On the for-profit level: Flatiron Health is "building a disruptive software platform that connects cancer centers across the country."

In addition, there have been numerous conferences convened over the last few years to discuss the issues related to what big data is, how it can be standardized, and how it can be used more meaningfully for patient care.

But what many of these efforts lack is the oversight and will to make these newly created silos share the big data they are collecting to provide a comprehensive clearinghouse of information benefitting all.

Congress has a vested interest in ensuring that government plays a constructive role in the promise that big data can bring to reducing healthcare costs through disease prevention and treatment. Great leaps in society take place through public-private partnerships.

Perhaps Congress should consider legislation to make data-sharing mandatory for all information gathered through any efforts supported by federal dollars and to encourage collaboration between public and private entities for the common good.

Brinker is the founder of Susan G. Komen, the world's largest breast cancer charity. She was previously a Goodwill Ambassador for Cancer Control to the U.N.'s World Health Organization; U.S. chief of protocol; and U.S. ambassador to Hungary. Rosenthal is an independent journalist who covers issues, controversies and trends in oncology as special correspondent for MedPage Today. He is the founder of the Designated Cancer Centers Public Affairs Network, and helped organize a number of national medicine-and-the-media conferences. The opinions expressed belong solely to the authors.

[Having worked through the "Internet Boom" that started from a tiny US government program and exploded into a mega-business of the private domain, I am rather skeptical that the genie of genome-based oncology can ever be pushed back into the government bottleneck by legislation of whatever Congress we elect in November. Nonetheless, let's suppose it happens: that the same US government that forbids by HIPAA the sharing of private health data would somehow make data-sharing mandatory (first of all, any such legislation would take YEARS to emerge). How about the rest of the World? Already, the largest number of full human DNA sequences are available NOT IN THE USA, but in China! Would their government obey US legislation? Not very likely... India, though thus far she is the "sleeping giant of genome-based oncology", can catapult at any time (in part, precisely to counterbalance the lead in genomics, especially in genome editing, of arch-enemy China). Would any US legislation be effective in India? Highly unlikely, since even the legislature of India is not that effective... How about Europe (EU or not)? Having worked with Government, Academia and Industry (not just in the USA, but also in Germany, India and my homeland Hungary), I have my own proposal on the table of interested parties. Andras_at_Pellionisz_dot_com]

Can Silicon Valley Cure Cancer?

Sean Parker, creator of Napster, first President of Facebook

By Roni Selig and Ben Tinker, CNN

Silicon Valley thrives on disrupting the traditional ways we do many things: how we educate, consume music and other media, communicate with others, even how we stay healthy. Bill Gates and Dr. Patrick Soon-Shiong know a few things about how to spend a lot of money to disrupt mainstream research while searching for cures in medicine.

Sean Parker hopes to join their ranks. In 1999, he co-founded the file-sharing service Napster, and in 2004, he became the first president of Facebook. Today, Parker announced his latest endeavor: a $250 million bet on eradicating cancer, through the Parker Institute for Cancer Immunotherapy. He says it is just a matter of time until his plan works.

What's unique about Parker's Institute is its structure and design.

It brings together six of the country's leading cancer centers to have them share intellectual property, enabling more than 300 researchers at more than 40 labs across the country to have immediate access to each other's findings.

The institute will license the research findings from each of the cancer centers in order to share.

"That removes a lot of the bureaucratic barriers that would've prevented scientists from immediately sharing or capitalizing upon each others' discoveries," Parker said.

The participating centers are Memorial Sloan Kettering Cancer Center, Stanford Medicine, the University of Texas MD Anderson Cancer Center, the University of Pennsylvania and the University of California campuses in Los Angeles and San Francisco.

"To do the research that really moves the field forward, you need a lot of collaboration, but you (also) need one big, open sandbox for everyone to play in, in order for that collaboration to take place," said Parker. "So a breakthrough made by one scientist at one center is immediately available to be used by any scientist within the network, and they improve upon it. They can move the ball down the field, so to speak, and as a result of that, things can happen much, much faster."

"Sharing enormous amounts of data is not new in the scientific community," said Jean Claude Zenklusen, director of the Cancer Genome Atlas Project at the National Cancer Institute. He cites the Human Genome Project and the Cancer Genome Atlas as examples of successful projects where researchers have access to each other's results.

During his 2016 State of the Union address, President Barack Obama announced the establishment of a new White House Cancer Moonshot Task Force to accelerate cancer research, for which he wants a budget of $1 billion. But the problem with government-funded research, said Parker, is that potentially life-saving projects take too long to get funded.

"In our case, it could be 48 hours before a trial is funded, and (just) several weeks before we have approval to conduct that trial in actual humans," said Parker.

According to the FDA, when a sponsor submits a study as part of the initial application for a new drug, the agency has 30 days to review the application and place the study on "hold" if there are any obvious reasons why it should not be conducted. Barring a hold, the study may begin with Institutional Review Board approval.

Parker wants the researchers to lead the charge, not institutions.

"Our model is completely different from the model of a grant-making organization," said Parker. "We internally develop this road map, working with every single scientist. Everything is exhaustively debated. We tell them to throw out their mediocre ideas that maybe they were waiting to get funded or they were standing in line effectively trying to get funding for one of their ideas from the NCI. We say, 'Throw it all away. Tell us the most ambitious thing you want to work on. We want you working on that.' "

Lessons from Silicon Valley

"There's something very entrepreneurial about the (institute's) way of thinking, because entrepreneurs need to be very focused," said Parker. "Entrepreneurs don't have unlimited time. They don't have unlimited resources -- and if they're going to change the world, they need to place bets. I wouldn't call them gambles because you're placing a bet on something where you have every reason to believe that it works and you're choosing amongst all of the things that you could potentially be doing -- the highest value, the highest impact thing."

Every year, 14 million people are diagnosed with cancer and 8.2 million people die of cancer-related causes. To Sean Parker and his team, those numbers are unacceptable.

Parker points to the hundreds of billions of dollars invested to yield a very small increase in overall life expectancy.

"Chemo, radiation, surgery and some targeted drugs are capable of treating about 50% of all cancers. The other 50% are a death sentence, and there hasn't been a significant paradigm shift in the way we treat cancer in quite a long time," said Parker. "There have been a lot of false starts and promises made about new treatment modalities that never materialized, or they resulted in this incremental three- to six-month average life extension, which is what qualifies as a new drug."

"We're focused on immunotherapy for a reason ... because it's a treatment modality that has the potential to treat all cancers," said Parker, founder and chairman of the new foundation. Immunotherapy, back in the 1970s, was seen as a high-tech breakthrough therapy, using the body's immune system to fight cancer cells.

"Cancer cells are very smart," said Dr. Jeffrey Bluestone, president and CEO of the Parker Institute. "They mutate, change and learn how to escape the drugs we use to try to treat them. By training (the immune system) to see unique cancer markers ... when it sees it again, it can attack again."

It's personal

One of Parker's best friends, prominent Hollywood producer Laura Ziskin, founded Stand up to Cancer, and she was instrumental in shaping Parker's thinking about the disease. She died of complications from breast cancer in 2011.

"(Laura) was surrounded by all the best doctors in the world and had access to all the resources in the world, so if anybody should have lived, or anybody could have beaten cancer, it should have been Laura," said Parker.

"Her death at age 61 galvanized me to do more, and now I look back on it with a certain degree of frustration and angst, because if only I had done a little bit more a little bit faster, if only we had built this network sooner. The treatments that are coming out of even some of our trials now potentially could have cured her. It's a tough thing."

"Twenty years from now, we should look back on cancer as something that our parents worried about -- and even though we'll probably never live in a world without cancer, the treatments should be relatively easy and extremely effective -- so it's not something we have to worry about," said Parker.

[Silicon Valley has finally become the epicenter of New School Genomics, culminating in health care applications. Google Genomics, Inc., ex-Googler Franz Och of Human Longevity, Inc. (of Venter), Grail, Inc. (of Jay Flatley's Illumina), now the Parker Institute (and HolGenTech, Inc. - along with the venerable Genentech, Inc.) all crowd into a 40-mile radius at the center of Silicon Valley. This is reminiscent of the Internet, when a meager government effort exploded the "brick and mortar" industries into advanced IT companies, and private enterprise will take it from here in an alliance with USA and global industrial partners. While the "Big Players" used to be the Boston, San Diego and Houston areas in the USA, plus the UK and China, novel participants are tiny Poland, Lithuania and the Sleeping Giant of India - andras_at_pellionisz_dot_com]

Life Code (Bioinformatics): The Most Disruptive Technology Ever?


[One and a half decades after the full human DNA was revealed, the "Old School" makes room for the "New School" by openly admitting that just knowing all the A,C,T,G letters of the Life Code means precious little without a mathematical understanding of the Code. Juan Enriquez has no personal stake (he is not a Turing-type "code breaker"). Thus, both schools can take it from him that pure biochemistry (the 6 billion letters along a Double Helix) must make room for "Bioinformatics" - which is to deliver "The Most Disruptive Technology Ever": the mathematical understanding of the Code of Life. My FractoGene approach sums it all up - "Fractal genome grows fractal organisms" - and thus gives us a mathematical handle for interpreting the Code. Many (mistakenly) believe that "the fractal approach" at once makes the hologenomic fractal operator fully understood. No breakthrough, ever, is an end. Breakthroughs are always new beginnings. Nonetheless, the "heureka!" of realizing the cause-and-effect between the de facto fractality of DNA and that of the organisms it grows (brain cells, lungs, cancer tumors, etc.) immediately yielded utility (US Patent 8,280,641, in force for the Next Decade) - andras_at_pellionisz_dot_com]

Craig Venter: We Are Not Ready to Edit Human Embryos Yet

Unless we have sufficient knowledge and wisdom we should not proceed

Discussions on human genome modifications to eliminate disease genes and/or for human enhancement are not new, and have been commonplace since the first discussions on sequencing the human genome occurred in the mid-1980s. Many a bioethicist has made a career from such discussions, and currently on Amazon there are dozens of books on a wide range of human enhancement topics, including those that predict that editing our genes will lead to the end of humanity. There are also thousands of news stories on the new DNA editing tools called CRISPR.

So why is genome editing so different? If we can use CRISPR techniques to change the letters of the genetic code known to be associated with rare genetic disorders such as Tay-Sachs disease, Huntington’s disease, cystic fibrosis, sickle cell anemia or ataxia telangiectasia, why wouldn’t we just do so and eliminate the diseases from human existence? The answer is both simple and complex at the same time: just because the techniques have become easier to perform, the ethical issues are not easier. In fact, the technical ease of CRISPR-based genome editing has changed hypothetical, esoteric arguments limited largely to “bioethicists” into here-and-now discussions and decisions for all of us.

For me there are three fundamental issues of why we should proceed with extreme caution in this brave new world.

1. Insufficient knowledge: Our knowledge of the human genome is only now beginning to emerge as we sequence tens of thousands of genomes. We have little or no detailed knowledge of how (with a few exceptions) changing the genetic code will affect development, or of the subtlety associated with the tremendous array of human traits. Genes and proteins rarely have a single function in the genome, and we know of many cases in experimental animals where changing a “known function” of a gene results in developmental surprises. Only a small percentage of human genes are well understood; for most we have little or no clue as to their role.

2. The slippery slope argument: If we allow editing of disease genes, it will open the door to all gene editing for human enhancement. This needs no further explanation: it is human nature and inevitable in my view that we will edit our genomes for enhancements.

3. The global ban on human experimentation: From Mary Shelley’s Frankenstein to Nazi war crimes to the X-Men, we have pondered human experimentation. Unless we have sufficient knowledge and wisdom we should not proceed.

CRISPRs and other gene-editing tools are wonderful research tools to understand the function of DNA coding and should proceed. The U.K. approval of editing human embryos to understand human development has no impact on actual genome editing for disease prevention or human enhancement. Some of the experiments planned at the Crick Institute are simple experiments akin to gene knockouts in mice or other species where CRISPR will be used to cut out a gene to see what happens. They will yield some interesting results, but most, I predict, will be ambiguous or not informative as we have seen in this field before.

The only reason the announcement is headline-provoking is that it seems to be one more step toward editing our genomes to change life outcomes. We need to proceed with caution and with open public dialogue so we are all clear on where this exciting science is taking us. I do not think we are ready to edit human embryos yet. I think the scientific community needs to focus on obtaining a much more complete understanding of the whole-genome sequence as our software of life before we begin re-writing this code.

[Venter is the "Tesla of Genomics"; not only in the greatest and most formative achievements, but also in the sense that he is keenly aware of the edge between the known and the unknown. His statement above is consistent with the view of Juan Enriquez (above) - both making it crystal clear that between the ability to have a "text" available and the ability to "edit" it, an indispensable element is hitherto largely missing. No sane person should edit any "text" (code, rather) that one does not understand. This is most obvious to software coders; it is unthinkable to "edit" a code without understanding it. Likewise, at the dawn of the nuclear age, it was obvious that certain atoms violate the axiom of the "old school" (that the atom is the smallest unit of an element and cannot be split). Physicists immediately started tinkering with radioactive materials - the best of the best often died of such tinkering. When the enormity of the scientific, economic (and even geopolitical) significance of nuclear physics became clear, mankind embarked on one of its greatest endeavors: creating quantum mechanics (a new branch of mathematical physics) and developing nuclear physics - before starting to build awesome instruments to harness nuclear fission and fusion for energy. I am not sure that in modern "new school" genomics the parallel is equally clear to all - perhaps because of the multidisciplinary nature of bioinformatics. Andras_at_Pellionisz_dot_com]

Big Data Meets Big Biology in San Diego on March 31: The Agenda

Bruce V. Bigelow

March 9th, 2016

Xconomy, San Diego

In less than a month, Xconomy is bringing some of the big guns in life sciences together in San Diego to talk about the opportunities that are emerging for tech and software innovation in fields like genomics, biotechnology, and digital health. We can now give you a preview of what it’s going to look like.

There’s no question that big data and big biology are coming together in a big way. The only question is how it’s going to happen.

We’re laying out at least part of that roadmap on March 31 at “Big Data Meets Big Science” at the Illumina Theater at the Alexandria, which is on the Torrey Pines mesa at 10996 Torreyana Road. In genomics, much of the technology roadmap already has been charted by Illumina (NASDAQ: ILMN), the San Diego-based maker of next-generation genome sequencing technology, consumables, and genetic analysis tools—and Illumina president Francis deSouza is kicking off our forum.

DeSouza, who will be taking over as Illumina CEO in July, also will talk with venture investors about the bets they are placing on IT innovations in the life sciences. If you’re an entrepreneur, innovator, or investor in the IT sector, you’ll want to be there. I’ve asked other speakers to also highlight the big trends in their respective fields, and to provide examples of the innovation needs they see in high-performance computing, predictive analytics, data storage, and software development.

Examples abound:

—Grail, a San Francisco startup founded earlier this year, is developing diagnostic technology sensitive enough to detect early stage cancer. The nature of the technology challenge, though, became apparent when Grail recently named Jeff Huber—who spent 12 years as the top engineer at Google—as its CEO.

—You don’t need to be a data scientist to innovate in healthcare. Amid a public furor over drug pricing, Santa Monica, CA-based GoodRx and New York’s Blink Health have developed online tools that help consumers get their generic drugs at prices that are far lower than the prices pharmacies typically charge customers who are paying out of pocket instead of through insurance.

—In San Diego, Edico Genome has developed a processor that has been optimized specifically for next-generation genome sequencing machines—reducing the time needed to map a patient’s whole genome from 20 hours to 20 minutes.

Edico Genome is on our agenda. Edico’s founding CEO, Pieter Van Rooyen, will take the stage with one of his principal investors, Lucian Iancovici, a senior investment manager at Qualcomm Life, to discuss how Edico got started.

Nicholas Schork, professor and director of human biology at the J. Craig Venter Institute, will provide an overview of the fast-moving trends in genomics, and offer his insights on IT needs. We’ll also hear Franz Och, an expert in machine learning and language translation, explain why he left a plum job as the head of Google Translate to become the chief data scientist at Human Longevity, a San Diego startup founded by the human genome pioneer J. Craig Venter.

Finally, we have scheduled a series of quick pitches from CureMetrix, Sentrian, Nervana Systems, Applied Proteomics, and ChromaCode to highlight how the local tech community is innovating in life sciences.

We’ve posted the agenda for Big Data Meets Big Bio here. Tickets are available at this link. I’m looking forward to this event, and to seeing you at the Alexandria at Torrey Pines on March 31.

[Dr. Pellionisz has participated in past Xconomy meetings, e.g. in Seattle, when Dr. Eric Schadt publicly affirmed that the CPU-intensive FractoGene approach to the interpretation of full human DNA sequences would be a massive paradigm shift. Those meetings, however, came only at the halfway point between "Is IT Ready for the Dreaded DNA Data Deluge?" (2008) and the present day, when full human DNA sequencing is affordable (<$1,000) and Big Data is ready with likewise affordable (private) cloud computing. FractoGene (US patent 8,280,641, in force through the next decade) draws statistical diagnosis and probabilistic prognosis from "fractal genome grows fractal organisms". Beyond the double bottleneck of formerly prohibitively expensive full human DNA sequencing and cumbersome, pricey "CPU farms" (before cloud computing), a third massive factor makes the deployment of the FractoGene patent both timely and extremely lucrative. "Fractal defects" were computed by Pellionisz as early as 2007, but at that time there was no way to know when any "genome defects" could be eliminated. Now, with Genome Editing (almost certainly a Nobel for the top triad this year), the motto of Pellionisz' HolGenTech, Inc., "Ask what you can do for your Genome!", has also stepped up to a beta stage, now in advanced negotiations. "Fractal defects", when identified, could be edited out! We have arrived at an age when the industry of full human genome sequencing (a commodity, hitherto struggling with a glut of too many sequences) dovetails with the cloud computing industry to make even CPU-intensive (fractal) interpretation of DNA possible - and the fractal defects found could be edited out. Dr. Pellionisz will attend the meeting in San Diego to talk to interested parties. (Four-zero-8) 891-7187. ]

An amusing postscript: Big Data is like Teenage Sex: everyone talks about it, nobody really knows how to do it, and everyone thinks everyone else is doing it, so everyone claims they are doing it... (Dan Ariely, Duke University)

Illumina Forms New Company [Grail, see] to Enable Early Cancer Detection via Blood-Based Screening

Significant Development in the War on Cancer

[Press Release by Illumina]

SAN DIEGO--(BUSINESS WIRE)--Jan. 10, 2016-- Illumina, Inc. (NASDAQ:ILMN) today announced GRAIL, a new company formed to enable cancer screening from a simple blood test. Powered by Illumina sequencing technology, GRAIL will develop a pan-cancer screening test by directly measuring circulating nucleic acids in blood.

Detecting cancer at the earliest stages dramatically increases long-term survival, hence the successful development of a pan-cancer screening test for asymptomatic individuals would make the first major dent in global cancer mortality.

GRAIL’s unique structure enables it to take on this grand challenge. GRAIL has been formed as a separate company, majority owned by Illumina. GRAIL is initially funded by over $100 million in Series A financing from Illumina and ARCH Venture Partners, with participating investments from Bezos Expeditions, Bill Gates and Sutter Hill Ventures. GRAIL’s unique relationship with Illumina provides the ability to economically sequence at the high depths necessary to create a screening test with the required sensitivity and a hoped-for level of specificity never before achievable for cancer screening.

“We hope today is a turning point in the war on cancer,” said Jay Flatley, Illumina Chief Executive Officer and Chairman of the Board of GRAIL. “By enabling the early detection of cancer in asymptomatic individuals through a simple blood screen, we aim to massively decrease cancer mortality by detecting the disease at a curable stage.”

“The holy grail in oncology has been the search for biomarkers that could reliably signal the presence of cancer at an early stage,” said Dr. Richard Klausner, formerly Illumina Chief Medical Officer and NCI Director, and a Director of GRAIL. “Illumina’s sequencing technology now allows the detection of circulating nucleic acids originating in the cancer cells themselves, a superior approach that provides a direct rather than surrogate measurement.”

“GRAIL’s rigorous, science-based approach with leading medical and policy advisors worldwide is unprecedented in the fight to defeat cancer,” said Robert Nelsen, Managing Director and Co-Founder of ARCH Venture Partners and a Director of GRAIL.

GRAIL has secured the counsel of a world-class set of industry and cancer experts for the company’s advisory board, including Dr. Richard Klausner; Dr. Jose Baselga, Physician In Chief at Memorial Sloan Kettering and President of the American Association of Cancer Research; Dr. Brian Druker, Director, OHSU Knight Cancer Institute; Mostafa Ronaghi, Chief Technology Officer at Illumina; Don Berry, Professor at MD Anderson Cancer Center; Timothy Church, Professor at the University of Minnesota School of Public Health and Charles Swanton, Group Leader at the Francis Crick Institute. The company will initially have a five-member Board of Directors, including Jay Flatley, William Rastetter (Chairman of Illumina), Dr. Richard Klausner, Robert Nelsen, and the CEO. The company is actively recruiting a CEO.

About Illumina

Illumina is improving human health by unlocking the power of the genome. Our focus on innovation has established us as the global leader in DNA sequencing and array-based technologies, serving customers in the research, clinical and applied markets. Our products are used for applications in the life sciences, oncology, reproductive health, agriculture and other emerging segments. To learn more, visit and follow @illumina.

About GRAIL – Learn more at

Forward-Looking Statement for Illumina

This release contains forward looking statements that involve risks and uncertainties, such as Illumina’s expectations for the performance and utility of products to be developed by GRAIL. Important factors that could cause actual results to differ materially from those in any forward-looking statements include challenges inherent in developing, manufacturing, and launching new products and services and the other factors detailed in our filings with the Securities and Exchange Commission, including our most recent filings on Forms 10-K and 10-Q, or in information disclosed in public conference calls, the date and time of which are released beforehand. We do not intend to update any forward-looking statements after the date of this release.


Source: Illumina, Inc.

Illumina, Inc.
Rebecca Chambers, 858-255-5243
Eric Endicott, 858-882-6822
Grail: The Problem

Cancer is a leading cause of death worldwide, with over 14 million new cases per year and over 8 million deaths annually. Cancer incidence is expected to increase more than 70% over the next 20 years. At least half of all cancers in the United States are diagnosed in Stage III and Stage IV, leading to lower survival rates. Detecting cancer at the earliest stages dramatically increases the probability of a cure and long-term survival.


Grail: The Premise

Ultra-deep sequencing to detect circulating tumor DNA has the potential to be the holy grail for early cancer detection in asymptomatic individuals. Most tumors shed nucleic acids into the blood. Circulating tumor DNA is a direct measurement of cancer DNA, rather than an indirect measure of the effects of cancer.
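The need for "ultra-deep" sequencing can be made concrete with a toy sampling model (my illustration, not Grail's actual assay): if tumor-derived fragments make up a fraction f of circulating cell-free DNA, the chance of catching at least one of them at a given locus among N reads is 1 - (1 - f)^N, so rare early-stage signals demand very high read depth.

```python
# Toy sampling model (illustrative only, not Grail's actual method):
# probability of seeing at least one tumor-derived fragment at a locus,
# given tumor fraction f and read depth N.
def detection_prob(f, depth):
    """P(at least one tumor read) = 1 - (1 - f)**depth."""
    return 1.0 - (1.0 - f) ** depth

# An assumed early-stage tumor fraction of 0.01% of cell-free DNA.
f = 0.0001
for depth in (100, 1_000, 10_000, 60_000):
    print(f"depth {depth:>6}: P(detect) = {detection_prob(f, depth):.3f}")
```

At this assumed tumor fraction, a depth of 100 reads detects almost nothing, and tens of thousands of reads are needed before detection becomes likely - one reading of why the press release stresses sequencing "at the high depths necessary".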

Grail: The Promise

The mission of Grail is to enable the early detection of cancer in asymptomatic individuals through a blood screen – with the goal of massively decreasing global cancer mortality by detection at a curable stage. Grail will leverage the power of "ultra-deep" sequencing technology, the best talent in the field and the passion of its leadership to deliver on that promise.


Jeff Huber, Key Google X Exec, Departs to Lead Cancer Diagnosis Startup Grail

Jeff Huber, a veteran Google executive instrumental in the formation of its Google X research lab, is leaving the company to become CEO of Grail, a biotech startup that develops blood tests to detect cancer.

It’s a deeply personal move. Huber’s wife, Laura, passed away from cancer last year. Huber posted about the move on his Google+ page:

My work at Grail is dedicated in remembrance of my wife, Laura. She fought an incredibly brave battle with her cancer, but it was ultimately a losing battle since it was diagnosed so late (at stage 4). If Grail had existed before, and caught her cancer earlier, it’s very possible she’d still be with us today. Things don’t “happen for a reason,” but you can find purpose and meaning in things that do happen. When Grail succeeds, hopefully many, many people and their loved ones can be spared the cancer experience Laura endured.

Grail splashed onto the market last month with a $100 million Series A round that included some marquee investors, like Bill Gates and Jeff Bezos. Beyond the technical challenge, the startup must convince regulators that its approach is a valid early diagnosis tool — at a time when biotech companies are under particular scrutiny, thanks to the public issues with Theranos.

Huber has been at Google since 2003. He was a critical engineering SVP for its ad, apps and maps units before shifting over to Google X in 2013. In 2014, he joined the board of Illumina, a genetics research company that also invested in Grail.

[This is a unique development in which the monopoly of the genome sequencing commodity leads into a paradigm shift, triggered by a human tragedy. From the viewpoint of DNA sequencing, Jay Flatley created a virtual monopoly - with the danger lurking that the oversupply of DNA "Big Data" might crush the sequencing industry if it is not matched in time with a virtually unlimited demand. Enter Jeff Huber, the Google techie, whose young wife, Laura, was devastated by a stage IV cancer. Once cancer has metastasized throughout the body, modern science is virtually helpless - since "oncogenes", even if some are suppressed, yield to the flare-up of further "oncogenes", and the relentless onslaught sooner or later overwhelms even the most formidable therapies. It is a staggering fact that about half of newly diagnosed cancer patients are already at the third or fourth stage. Obviously, the sooner a cancer can be caught, the better the prognosis for effective therapy, up to a cure (defined as at least five cancer-free years). The new company (Grailbio) is not only a brilliant combination of a business model in which the sequencing commodity is paired with an unlimited demand (cancer screening), but is also a bright promise for much earlier detection of cancer.

There is, however, a question lurking in some minds, with the potential of extending this awesome initiative even further. While virtually everybody believes that cancer is the "disease of the genome", the culprits are most often believed to be some "oncogenes". Since every gene by definition produces RNA and amino acids (conglomerating into proteins), Grailbio in its present initial form aims at intercepting the disease at the (early) stage when non-cellular RNA can be detected in the blood. This is much, much sooner than detecting (often large) tumors (of proteins).

Others, like me, are convinced that cancer is actually a "genome REGULATION disease" - in which the pathological "acting up" of (often terribly mutant) "oncogenes" is not the primary cause but a secondary consequence. My fractal approach - which presently splits the National Cancer Institute into halves (see (July 19) National Cancer Institute: Fractal Geometry at Critical Juncture of Cancer Research) - holds that genome regulation is derailed by primary problems in the non-coding (regulatory) DNA. There is, therefore, a distinct possibility of emboldening the Grail approach by fully sequencing the cellular DNA - and by looking for "fractal defects" that are characteristic of the true onset of a cancerous process. True, such an approach calls for CPU-heavy computation (which is likely to thrill Jeff Huber), but the beauty is that the Intellectual Property is secured for the Next Decade. Andras_at_Pellionisz_dot_com.]

Illumina CEO Jay Flatley Built The DNA Sequencing Market. Now He's Stepping Down

Jay Flatley is stepping down as chief executive of Illumina, the largest maker of the DNA sequencing machines that have transformed the study of biology and the invention of new drugs.

He will be replaced as chief executive by Francis deSouza, an executive Illumina hired from Intel in 2013 and who was seen as Flatley’s heir apparent. Flatley will remain as executive chairman, focusing on strategy and on expanding the use of DNA sequencing in medicine.

“This is a magnificent team,” Flatley said in an interview. “While I hired a lot of them, it’s not me that made this all happen.”

Flatley, 63, had held the CEO role for 17 years. He grew San Diego-based Illumina’s revenue from $500,000 to $2.2 billion, and its headcount from 30 to 4,800. But the bigger impact was in cutting the cost of sequencing a human genome. In 2001, it was $100 million (or, by some estimates, $3 billion). Now it is less than $1,000. This has allowed researchers to understand genetics in ways that were previously unimaginable. (For more, see this profile I wrote of Flatley in 2014.)
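Taking the article's own figures ($100 million in 2001, under $1,000 today), the claim that sequencing costs fell far faster than Moore's Law is easy to check with back-of-envelope arithmetic (assuming the usual rough statement of Moore's Law as a halving every two years):

```python
import math

# Back-of-envelope check using the article's figures (assumptions:
# $100M in 2001, $1,000 by 2016, Moore's-Law halving every 2 years).
start_cost, end_cost = 100e6, 1e3
years = 2016 - 2001

fold_drop = start_cost / end_cost        # 100,000-fold cost reduction
halvings = math.log2(fold_drop)          # ~16.6 halvings in 15 years
moores_fold = 2 ** (years / 2)           # only ~181-fold at Moore's pace

print(f"sequencing: {fold_drop:,.0f}-fold drop ({halvings:.1f} halvings)")
print(f"Moore's Law over {years} years: ~{moores_fold:,.0f}-fold")
```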

Other companies, notably 454 Life Sciences, got to faster, cheaper DNA sequencing first. But Flatley beat them at being faster and cheaper. Again and again, he confounded competitors who thought their new, flashy ideas could catch up to Illumina’s sequencers. Thanks largely to perfect execution, helped by an aggressive legal strategy, no one has. Illumina has a near stranglehold on most types of DNA sequencing, which is now being used to help pick cancer drugs and to screen for birth defects.

Whether or not a successor can replicate that success is an open question. New DNA sequencers, from companies like Genia and Oxford Nanopore, are far less accurate than Illumina’s machines but as small as thumb drives. DeSouza says he is ready for the challenge.

[Jay Flatley will go down in history as a giant of the first period after the Human Genome Project. He was by far the most successful CEO, with a record of wrestling down the price of full human genome sequencing from the Sky to Earth - much faster than Moore's Law did it for microprocessors. His Era is bygone not at all because he was not successful - but precisely because he WAS SO SUCCESSFUL. I predicted all this in a 2008 YouTube video (Is IT Ready for the Dreaded DNA Data Deluge?), also published in the peer-reviewed science paper "The Principle of Recursive Genome Function" (2008). Billions of dollars could have been saved if many - especially government agencies - had listened! The news item directly below shows yet another "Big Science" effort doing nothing but amassing "Big Data" going down in flames. Gathering more and more data never automatically translates into understanding - said the classic by Thomas Kuhn, "The Structure of Scientific Revolutions" (published over half a century ago). According to yet another classic, The Innovator's Dilemma, the Great Firm of Illumina now faces a crucial question with its new leadership. Either it goes on as the eminent lead "sequencer" - or it realizes that, with the massive disruption of Genome Editing (which requires knowledge of the sequence, as well as "error mining" technologies such as FractoGene), Illumina must embrace the new times. Given the excellence of the past and new leaders, I am optimistic. Andras_at_Pellionisz_dot_com.]

CRISPR: gene editing is just the beginning

The real power of the biological tool lies in exploring how genomes work.


Heidi Ledford

07 March 2016

Whenever a paper about CRISPR–Cas9 hits the press, the staff at Addgene quickly find out. The non-profit company is where study authors often deposit the molecular tools that they used in their work, and where other scientists immediately turn to get their hands on these reagents. “We get calls within minutes of a hot paper publishing,” says Joanne Kamens, executive director of the company in Cambridge, Massachusetts.

Addgene's phones have been ringing a lot since early 2013, when researchers first reported [1, 2, 3] that they had used the CRISPR–Cas9 system to slice the genome in human cells at sites of their choosing. “It was all hands on deck,” Kamens says. Since then, molecular biologists have rushed to adopt the technique, which can be used to alter the genome of almost any organism with unprecedented ease and finesse. Addgene has sent 60,000 CRISPR-related molecular tools — about 17% of its total shipments — to researchers in 83 countries, and the company's CRISPR-related pages were viewed more than one million times in 2015.

Much of the conversation about CRISPR–Cas9 has revolved around its potential for treating disease or editing the genes of human embryos, but researchers say that the real revolution right now is in the lab. What CRISPR offers, and biologists desire, is specificity: the ability to target and study particular DNA sequences in the vast expanse of a genome. And editing DNA is just one trick that it can be used for. Scientists are hacking the tools so that they can send proteins to precise DNA targets to toggle genes on or off, and even engineer entire biological circuits — with the long-term goal of understanding cellular systems and disease.

“For the humble molecular biologist, it's really an extraordinarily powerful way to understand how the genome works,” says Daniel Bauer, a haematologist at the Boston Children's Hospital in Massachusetts. “It's really opened the number of questions you can address,” adds Peggy Farnham, a molecular biologist at the University of Southern California, Los Angeles. “It's just so fun.”

Here, Nature examines five ways in which CRISPR–Cas9 is changing how biologists can tinker with cells.

Broken scissors

There are two chief ingredients in the CRISPR–Cas9 system: a Cas9 enzyme that snips through DNA like a pair of molecular scissors, and a small RNA molecule that directs the scissors to a specific sequence of DNA to make the cut. The cell's native DNA repair machinery generally mends the cut — but often makes mistakes.

That alone is a boon to scientists who want to disrupt a gene to learn about what it does. The genetic code is merciless: a minor error introduced during repair can completely alter the sequence of the protein it encodes, or halt its production altogether. As a result, scientists can study what happens to cells or organisms when the protein or gene is hobbled.

But there is also a different repair pathway that sometimes mends the cut according to a DNA template. If researchers provide the template, they can edit the genome with nearly any sequence they desire at nearly any site of their choosing.
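The targeting rule just described - a guide RNA matching a ~20-nucleotide DNA sequence, plus Cas9's requirement for an adjacent NGG "PAM" motif - can be sketched as a toy target-site search (the sequences below are invented for illustration):

```python
import re

# Toy illustration of Cas9 target selection: the enzyme cuts where the
# 20-nt guide sequence matches the DNA and is immediately followed by
# an NGG PAM motif. All sequences here are invented for the example.
def find_target_sites(genome, guide):
    """Return start positions where the guide plus an NGG PAM occur."""
    pattern = re.escape(guide) + r"(?=[ACGT]GG)"   # lookahead for the PAM
    return [m.start() for m in re.finditer(pattern, genome)]

genome = "TTACGGATCTGACTGCATGGACCTATGGCGA"
guide = "GATCTGACTGCATGGACCTA"   # 20 nt
print(find_target_sites(genome, guide))
```

Sites that match the guide but lack the PAM are skipped, which is one reason real guide-design tools examine both the target sequence and its genomic context.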

In 2012, as laboratories were racing to demonstrate how well these gene-editing tools could cut human DNA, one team decided to take a different approach. “The first thing we did: we broke the scissors,” says Jonathan Weissman, a systems biologist at the University of California, San Francisco (UCSF).

Weissman learned about the approach from Stanley Qi, a synthetic biologist now at Stanford University in California, who mutated the Cas9 enzyme so that it still bound DNA at the site that matched its guide RNA, but no longer sliced it. Instead, the enzyme stalled there and blocked other proteins from transcribing that DNA into RNA. The hacked system allowed them to turn a gene off, but without altering the DNA sequence [4].

The team then took its 'dead' Cas9 and tried something new: the researchers tethered it to part of another protein, one that activates gene expression. With a few other tweaks, they had built a way to turn genes on and off at will [5].

Weissman and his colleagues, including UCSF systems biologist Wendell Lim, further tweaked the method so that it relied on a longer guide RNA, with motifs that bound to different proteins. This allowed them to activate or inhibit genes at three different sites all in one experiment [7]. Lim thinks that the system can handle up to five operations at once. The limit, he says, may be in how many guide RNAs and proteins can be stuffed into a cell. “Ultimately, it's about payload.”

That combinatorial power has drawn Ron Weiss, a synthetic biologist at the Massachusetts Institute of Technology (MIT) in Cambridge, into the CRISPR–Cas9 frenzy. Weiss and his colleagues have also created multiple gene tweaks in a single experiment [8], making it faster and easier to build complicated biological circuits that could, for example, convert a cell's metabolic machinery into a biofuel factory. “The most important goal of synthetic biology is to be able to program complex behaviour via the creation of these sophisticated circuits,” he says.

CRISPR epigenetics

When geneticist Marianne Rots began her career, she wanted to unearth new medical cures. She studied gene therapy, which targets genes mutated in disease. But after a few years, she decided to change tack. “I reasoned that many more diseases are due to disturbed gene-expression profiles, not so much the single genetic mutations I had been focused on,” says Rots, at the University Medical Center Groningen in the Netherlands. The best way to control gene activity, she thought, was to adjust the epigenome, rather than the genome itself.

The epigenome is the constellation of chemical compounds tacked onto DNA and the DNA-packaging proteins called histones. These can govern access to DNA, opening it up or closing it off to the proteins needed for gene expression. The marks change over time: they are added and removed as an organism develops and its environment shifts.

In the past few years, millions of dollars have been poured into cataloguing these epigenetic marks in different human cells, and their patterns have been correlated with everything from brain activity to tumour growth. But without the ability to alter the marks at specific sites, researchers are unable to determine whether they cause biological changes. “The field has met a lot of resistance because we haven't had the kinds of tools that geneticists have had, where they can go in and directly test the function of a gene,” says Jeremy Day, a neuroscientist at the University of Alabama at Birmingham.

CRISPR–Cas9 could turn things around. In April 2015, Charles Gersbach, a bioengineer at Duke University in Durham, North Carolina, and his colleagues published a system for adding acetyl groups — one type of epigenetic mark — to histones [9], using the broken scissors to carry enzymes to specific spots in the genome.

The team found that adding acetyl groups to proteins that associate with DNA was enough to send the expression of targeted genes soaring, confirming that the system worked and that, at this location, the epigenetic marks had an effect. When he published the work, Gersbach deposited his enzyme with Addgene so that other research groups could use it — and they quickly did. Gersbach predicts that a wave of upcoming papers will show a synergistic effect when multiple epigenetic markers are manipulated at once.

The tools need to be refined. Dozens of enzymes can create or erase an epigenetic mark on DNA, and not all of them have been amenable to the broken-scissors approach. “It turned out to be harder than a lot of people were expecting,” says Gersbach. “You attach a lot of things to a dead Cas9 and they don't happen to work.” Sometimes it is difficult to work out whether an unexpected result arose because a method did not work well, or because the epigenetic mark simply doesn't matter in that particular cell or environment.

Rots has explored the function of epigenetic marks on cancer-related genes using older editing tools called zinc-finger proteins, and is now adopting CRISPR–Cas9. The new tools have democratized the field, she says, and that has already had a broad impact. People used to say that the correlations were coincidental, Rots says — that if you rewrite the epigenetics it will have no effect on gene expression. “But now that it's not that difficult to test, a lot of people are joining the field.”

CRISPR code cracking

Epigenetic marks on DNA are not the only genomic code that is yet to be broken. More than 98% of the human genome does not code for proteins. But researchers think that a fair chunk of this DNA is doing something important, and they are adopting CRISPR–Cas9 to work out what that is.

Some of it codes for RNA molecules — such as microRNAs and long non-coding RNAs — that are thought to have functions apart from making proteins. Other sequences are 'enhancers' that amplify the expression of the genes under their command. Most of the DNA sequences linked to the risk of common diseases lie in regions of the genome that contain non-coding RNA and enhancers. But before CRISPR–Cas9, it was difficult for researchers to work out what those sequences do. “We didn't have a good way to functionally annotate the non-coding genome,” says Bauer. “Now our experiments are much more sophisticated.”

Farnham and her colleagues are using CRISPR–Cas9 to delete enhancer regions that are found to be mutated in genomic studies of prostate and colon cancer. The results have sometimes surprised her. In one unpublished experiment, her team deleted an enhancer that was thought to be important, yet no gene within one million bases of it changed expression. “How we normally classify the strength of a regulatory element is not corresponding with what happens when you delete that element,” she says.

More surprises may be in store as researchers harness CRISPR–Cas9 to probe large stretches of regulatory DNA. Groups led by geneticists David Gifford at MIT and Richard Sherwood at the Brigham and Women's Hospital in Boston used the technique to create mutations across a 40,000-letter sequence, and then examined whether each change had an effect on the activity of a nearby gene that made a fluorescent protein [10]. The result was a map of DNA sequences that enhanced gene expression, including several that had not been predicted on the basis of gene regulatory features such as chromatin modifications.

Delving into this dark matter has its challenges, even with CRISPR–Cas9. The Cas9 enzyme will cut where the guide RNA tells it to, but only if a specific but common DNA sequence is present near the cut site. This poses little difficulty for researchers who want to silence a gene, because the key sequences almost always exist somewhere within it. But for those who want to make very specific changes to short, non-coding RNAs, the options can be limited. “We cannot take just any sequence,” says Reuven Agami, a researcher at the Netherlands Cancer Institute in Amsterdam.

Researchers are scouring the bacterial kingdom for relatives of the Cas9 enzyme that recognize different sequences. Last year, the lab of Feng Zhang, a bioengineer at the Broad Institute of MIT and Harvard in Cambridge, characterized a family of enzymes called Cpf1 that work similarly to Cas9 and could expand sequence options [11]. But Agami notes that few alternative enzymes found so far work as well as the most popular Cas9. In the future, he hopes to have a whole collection of enzymes that can be targeted to any site in the genome. “We're not there yet,” he says.

CRISPR sees the light

Gersbach's lab is using gene-editing tools as part of an effort to understand cell fate and how to manipulate it: the team hopes one day to grow tissues in a dish for drug screening and cell therapies. But CRISPR–Cas9's effects are permanent, and Gersbach's team needed to turn genes on and off transiently, and in very specific locations in the tissue. “Patterning a blood vessel demands a high degree of control,” he says.

Gersbach and his colleagues took their broken, modified scissors — the Cas9 that could now activate genes — and added proteins that are activated by blue light. The resulting system triggers gene expression when cells are exposed to the light, and stops it when the light is flicked off [12]. A group led by chemical biologist Moritoshi Sato of the University of Tokyo rigged a similar system [13], and also made an active Cas9 that edited the genome only after it was hit with blue light [14].

Others have achieved similar ends by combining CRISPR with a chemical switch. Lukas Dow, a cancer geneticist at Weill Cornell Medical College in New York City, wanted to mutate cancer-related genes in adult mice, to reproduce mutations that have been identified in human colorectal cancers. His team engineered a CRISPR–Cas9 system in which a dose of the compound doxycycline activates Cas9, allowing it to cut its targets [15].

The tools are another step towards gaining fine control over genome editing. Gersbach's team has not patterned its blood vessels just yet: for now, the researchers are working on making their light-inducible system more efficient. “It's a first-generation tool,” says Gersbach.


Cancer researcher Wen Xue spent the first years of his postdoc career making a transgenic mouse that bore a mutation found in some human liver cancers. He slogged away, making the tools necessary for gene targeting, injecting them into embryonic stem cells and then trying to derive mice with the mutation. The cost: a year and US$20,000. “It was the rate-limiting step in studying disease genes,” he says.

A few years later, just as he was about to embark on another transgenic-mouse experiment, his mentor suggested that he give CRISPR–Cas9 a try. This time, Xue just ordered the tools, injected them into single-celled mouse embryos and, a few weeks later — voilà. “We had the mouse in one month,” says Xue. “I wish I had had this technology sooner. My postdoc would have been a lot shorter.”

Researchers who study everything from cancer to neurodegeneration are embracing CRISPR–Cas9 to create animal models of the diseases (see page 160). It lets them engineer more animals, in more complex ways, and in a wider range of species. Xue, who now runs his own lab at the University of Massachusetts Medical School in Worcester, is systematically sifting through data from tumour genomes, using CRISPR–Cas9 to model the mutations in cells grown in culture and in animals.

Researchers are hoping to mix and match the new CRISPR–Cas9 tools to precisely manipulate the genome and epigenome in animal models. “The real power is going to be the integration of those systems,” says Dow. This may allow scientists to capture and understand some of the complexity of common human diseases.

Take tumours, which can bear dozens of mutations that potentially contribute to cancer development. “They're probably not all important in terms of modelling a tumour,” says Dow. “But it's very clear that you're going to need two or three or four mutations to really model aggressive disease and get closer to modelling human cancer.” Introducing all of those mutations into a mouse the old-fashioned way would have been costly and time-consuming, he adds.

Bioengineer Patrick Hsu started his lab at the Salk Institute for Biological Studies in La Jolla, California, in 2015; he aims to use gene editing to model neurodegenerative conditions such as Alzheimer's disease and Parkinson's disease in cell cultures and marmoset monkeys. That could recapitulate human behaviours and progression of disease more effectively than mouse models, but would have been unthinkably expensive and slow before CRISPR–Cas9.

Even as he designs experiments to genetically engineer his first CRISPR–Cas9 marmosets, Hsu is aware that this approach may be only a stepping stone to the next. “Technologies come and go. You can't get married to one,” he says. “You need to always think about what biological problems need to be solved.”

Nature 531, 156–159 (10 March 2016) doi:10.1038/531156a

Geneticists debate whether focus should shift from sequencing genomes to analysing function.

[If one replaces "whether" with "how to", the article makes even more sense - AJP]


Heidi Ledford

05 January 2015

A mammoth US effort to genetically profile 10,000 tumours has officially come to an end. Started in 2006 as a US$100-million pilot, The Cancer Genome Atlas (TCGA) is now the biggest component of the International Cancer Genome Consortium, a collaboration of scientists from 16 nations that has discovered nearly 10 million cancer-related mutations.

The question is what to do next. Some researchers want to continue the focus on sequencing; others would rather expand their work to explore how the mutations that have been identified influence the development and progression of cancer.

“TCGA should be completed and declared a victory,” says Bruce Stillman, president of Cold Spring Harbor Laboratory in New York. “There will always be new mutations found that are associated with a particular cancer. The question is: what is the cost–benefit ratio?”

Stillman was an early advocate for the project, even as some researchers feared that it would drain funds away from individual grants. Initially a three-year project, it was extended for five more years. In 2009, it received an additional $100 million from the US National Institutes of Health plus $175 million from stimulus funding that was intended to spur the US economy during the global economic recession.

The project initially struggled. At the time, the sequencing technology worked only on fresh tissue that had been frozen rapidly. Yet most clinical biopsies are fixed in paraffin and stained for examination by pathologists. Finding and paying for fresh tissue samples became the programme’s largest expense, says Louis Staudt, director of the Office for Cancer Genomics at the National Cancer Institute (NCI) in Bethesda, Maryland.

Also a problem was the complexity of the data. Although a few ‘drivers’ stood out as likely contributors to the development of cancer, most of the mutations formed a bewildering hodgepodge of genetic oddities, with little commonality between tumours. Tests of drugs that targeted the drivers soon revealed another problem: cancers are often quick to become resistant, typically by activating different genes to bypass whatever cellular process is blocked by the treatment.

Despite those difficulties, nearly every aspect of cancer research has benefited from TCGA, says Bert Vogelstein, a cancer geneticist at Johns Hopkins University in Baltimore, Maryland. The data have yielded new ways to classify tumours and pointed to previously unrecognized drug targets and carcinogens. But some researchers think that sequencing still has a lot to offer. In January, a statistical analysis of the mutation data for 21 cancers showed that sequencing still has the potential to find clinically useful mutations (M. S. Lawrence et al. Nature 505, 495–501; 2014).

On 2 December, Staudt announced that once TCGA is completed, the NCI will continue to intensively sequence tumours in three cancers: ovarian, colorectal and lung adenocarcinoma. It then plans to evaluate the fruits of this extra effort before deciding whether to add back more cancers.

Expanded scope

But this time around, the studies will be able to incorporate detailed clinical information about the patient’s health, treatment history and response to therapies. Because researchers can now use paraffin-embedded samples, they can tap into data from past clinical trials, and study how mutations affect a patient’s prognosis and response to treatment. Staudt says that the NCI will be announcing a call for proposals to sequence samples taken during clinical trials using the methods and analysis pipelines established by the TCGA.

The rest of the International Cancer Genome Consortium, slated to release early plans for a second wave of projects in February, will probably take a similar tack, says co-founder Tom Hudson, president of the Ontario Institute for Cancer Research in Toronto, Canada. A focus on finding sequences that make a tumour responsive to therapy has already been embraced by government funders in several countries eager to rein in health-care costs, he says. “Cancer therapies are very expensive. It’s a priority for us to address which patients would respond to an expensive drug.”

The NCI is also backing the creation of a repository for data not only from its own projects, but also from international efforts. This is intended to bring data access and analysis tools to a wider swathe of researchers, says Staudt. At present, the cancer genomics data constitute about 20 petabytes (1 petabyte is 10^15 bytes), and are so large and unwieldy that only institutions with significant computing power can access them. Even then, it can take four months just to download them.
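A back-of-the-envelope calculation shows why downloads of this corpus take months. This sketch is not from the article; the 10 Gb/s sustained link speed is an assumed figure for illustration only:

```python
# Rough estimate: time to download ~20 petabytes of cancer-genomics data.
# The 10 Gb/s sustained link speed is an assumption for illustration.

PETABYTE = 10**15              # bytes
data_bytes = 20 * PETABYTE     # the ~20 PB corpus cited in the article

link_bits_per_s = 10 * 10**9   # assumed 10 gigabits per second
link_bytes_per_s = link_bits_per_s / 8

seconds = data_bytes / link_bytes_per_s
days = seconds / 86400
months = days / 30

print(f"{days:.0f} days (~{months:.1f} months)")
```

Even at a sustained 10 Gb/s, the transfer runs to roughly half a year, which is consistent with the article's "four months" figure for well-connected institutions.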

Stimulus funding cannot be counted on to fuel these plans, acknowledges Staudt. But cheaper sequencing and the ability to use biobanked biopsies should bring down the cost, he says. “Genomics is at the centre of much of what we do in cancer research,” he says. “Now we can ask questions in a more directed way.”

Nature 517, 128–129 (08 January 2015) doi:10.1038/517128a

Top U.S. Intelligence Official Calls Gene Editing a WMD Threat

MIT Technology Review, Feb. 9, 2016

Antonio Regalado

Easy to use. Hard to control. The intelligence community now sees CRISPR as a threat to national safety.

Genome editing is a weapon of mass destruction.

That’s according to James Clapper, U.S. director of national intelligence, who on Tuesday, in the annual worldwide threat assessment report of the U.S. intelligence community, added gene editing to a list of threats posed by “weapons of mass destruction and proliferation.”

Gene editing refers to several novel ways to alter the DNA inside living cells. The most popular method, CRISPR, has been revolutionizing scientific research, leading to novel animals and crops, and is likely to power a new generation of gene treatments for serious diseases (see “Everything You Need to Know About CRISPR’s Monster Year”).

It is gene editing’s relative ease of use that worries the U.S. intelligence community, according to the assessment. “Given the broad distribution, low cost, and accelerated pace of development of this dual-use technology, its deliberate or unintentional misuse might lead to far-reaching economic and national security implications,” the report said.

The choice by the U.S. spy chief to call out gene editing as a potential weapon of mass destruction, or WMD, surprised some experts. It was the only biotechnology appearing in a tally of six more conventional threats, like North Korea’s suspected nuclear detonation on January 6, Syria’s undeclared chemical weapons, and new Russian cruise missiles that might violate an international treaty.

The report is an unclassified version of the “collective insights” of the Central Intelligence Agency, the National Security Agency, and half a dozen other U.S. spy and fact-gathering operations.

Although the report doesn’t mention CRISPR by name, Clapper clearly had the newest and the most versatile of the gene-editing systems in mind. The CRISPR technique’s low cost and relative ease of use—the basic ingredients can be bought online for $60—seems to have spooked intelligence agencies.

“Research in genome editing conducted by countries with different regulatory or ethical standards than those of Western countries probably increases the risk of the creation of potentially harmful biological agents or products,” the report said.

The concern is that biotechnology is a “dual use” technology—meaning normal scientific developments could also be harnessed as weapons. The report noted that new discoveries “move easily in the globalized economy, as do personnel with the scientific expertise to design and use them.”

Clapper didn’t lay out any particular bioweapons scenarios, but scientists have previously speculated about whether CRISPR could be used to make “killer mosquitoes,” plagues that wipe out staple crops, or even a virus that snips at people’s DNA.

“Biotechnology, more than any other domain, has great potential for human good, but also has the possibility to be misused,” says Daniel Gerstein, a senior policy analyst at RAND and a former under secretary at the Department of Homeland Security. “We are worried about people developing some sort of pathogen with robust capabilities, but we are also concerned about the chance of misutilization. We could have an accident occur with gene editing that is catastrophic, since the genome is the very essence of life.”

Piers Millet, an expert on bioweapons at the Woodrow Wilson Center in Washington, D.C., says Clapper’s singling out of gene editing on the WMD list was “a surprise,” since making a bioweapon—say, an extra-virulent form of anthrax—still requires mastery of a “wide raft of technologies.”

Development of bioweapons is banned by the Biological and Toxin Weapons Convention, a Cold War–era treaty that outlawed biological warfare programs. The U.S., China, Russia, and 172 other countries have signed it. Millet says that experts who met in Warsaw last September to discuss the treaty felt a threat from terrorist groups was still remote, given the complexity of producing a bioweapon. Millet says the group concluded that “for the foreseeable future, such applications are only within the grasp of states.”

The intelligence assessment drew specific attention to the possibility of using CRISPR to edit the DNA of human embryos to produce genetic changes in the next generation of people—for example, to remove disease risks. It noted that fast advances in genome editing in 2015 compelled “groups of high-profile U.S. and European biologists to question unregulated editing of the human germ line (cells that are relevant for reproduction), which might create inheritable genetic changes.”

So far, the debate over changing the next generation’s genes has been mostly an ethical question, and the report didn’t say how such a development would be considered a WMD, although it’s possible to imagine a virus designed to kill or injure people by altering their genomes.

[No public comment, Andras_at_Pellionisz_dot_com]

Craig Venter: We Are Not Ready to Edit Human Embryos Yet


J. Craig Venter @JCVenter Feb. 2, 2016

J. Craig Venter, a TIME 100 honoree, is a geneticist known for being one of the first to sequence the human genome.

Unless we have sufficient knowledge and wisdom we should not proceed

Discussions of human genome modification to eliminate disease genes and/or for human enhancement are not new; they have been commonplace since the first discussions of sequencing the human genome in the mid-1980s. Many a bioethicist has made a career from such discussions, and Amazon currently lists dozens of books on a wide range of human enhancement topics, including some that predict that editing our genes will lead to the end of humanity. There are also thousands of news stories on the new DNA-editing tools called CRISPR.

So why is genome editing so different? If we can use CRISPR techniques to change the letters of the genetic code known to be associated with rare genetic disorders such as Tay-Sachs disease, Huntington’s disease, cystic fibrosis, sickle cell anemia or ataxia telangiectasia, why wouldn’t we just do so and eliminate these diseases from human existence? The answer is both simple and complex at the same time: just because the techniques have become easier to perform, the ethical issues are no easier. In fact, the technical ease of CRISPR-based genome editing has turned hypothetical, esoteric arguments limited largely to “bioethicists” into here-and-now discussions and decisions for all of us.

For me there are three fundamental issues of why we should proceed with extreme caution in this brave new world.

1. Insufficient knowledge: Our knowledge of the human genome is only now beginning to emerge as we sequence tens of thousands of genomes. We have little or no detailed knowledge of how (with a few exceptions) changing the genetic code will affect development, or of the subtlety associated with the tremendous array of human traits. Genes and proteins rarely have a single function in the genome, and we know of many cases in experimental animals where changing a “known function” of a gene results in developmental surprises. Only a small percentage of human genes are well understood; for most we have little or no clue as to their role.

2. The slippery slope argument: If we allow editing of disease genes, it will open the door to all gene editing for human enhancement. This needs no further explanation: it is human nature and inevitable in my view that we will edit our genomes for enhancements.

3. The global ban on human experimentation: From Mary Shelley’s Frankenstein to Nazi war crimes to the X-Men, we have pondered human experimentation. Unless we have sufficient knowledge and wisdom we should not proceed.

CRISPRs and other gene-editing tools are wonderful research tools for understanding the function of DNA coding, and such research should proceed. The U.K. approval of editing human embryos to understand human development has no impact on actual genome editing for disease prevention or human enhancement. Some of the experiments planned at the Crick Institute are simple experiments akin to gene knockouts in mice or other species, where CRISPR will be used to cut out a gene to see what happens. They will yield some interesting results, but most, I predict, will be ambiguous or uninformative, as we have seen in this field before.

The only reason the announcement is headline-provoking is that it seems to be one more step toward editing our genomes to change life outcomes. We need to proceed with caution and with open public dialogue so we are all clear on where this exciting science is taking us. I do not think we are ready to edit human embryos yet. I think the scientific community needs to focus on obtaining a much more complete understanding of the whole-genome sequence as our software of life before we begin re-writing this code.

[If anybody, Venter (the "Tesla of Genomics") would know best. The Venter Institute "edited out" a rather small number of genes from the genome of Mycoplasma genitalium (the smallest genome of any free-living organism). The "edited" (synthesized) version would not work for 15 agonizing years. Why? Because even the smallest genome contains some 7% "non-coding" (regulatory) DNA, and after slightly reducing the number of "genes", nobody knew how to modify those regulatory sequences to kick the protein pumps ("genes") alive. It is pure fantasy to "edit any text" without understanding what it means. Even single-letter spell-checkers are hopeless in some cases (bad - bed), since the letter in the middle can be an error in one context and perfectly correct in another. If 15 years of sophisticated "trials" were needed to finally arrive at a "working version" of a slightly reduced "set of genes", imagine how many Centuries would be needed to "get the editing right", e.g. in the case of cancers, without a mathematical understanding of genome regulation. FractoGene ("Fractal DNA governs growth of fractal organisms") is presently the only mathematical (fractal geometry) approach that serves as a basis for such a "cause and effect" understanding of how DNA, pristine or laden with Fractal Defects, governs the growth of physiological or cancerous organisms. The precious (but fiercely debated) IP of Genome Editing must be combined with the IP of Genome Regulation. Andras_at_Pellionisz_dot_com]
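The bad/bed point, that a single-letter correction is only decidable from surrounding context, can be sketched with a toy checker. Everything here (the word lists and the context table) is made up purely to illustrate the argument:

```python
# Toy illustration of the "bad vs. bed" point: both words are valid,
# so checking letters in isolation cannot flag an error; only the
# surrounding context can. The context table below is hypothetical.

# Hypothetical table of which words plausibly follow a given word.
LIKELY_NEXT = {
    "to": {"bed"},          # "went to bed" is idiomatic; "to bad" is not
    "a":  {"bad", "bed"},   # "a bad idea" and "a bed" are both fine
}

def resolve(prev_word, candidates):
    """Return the single candidate the context allows, else None."""
    allowed = [w for w in candidates if w in LIKELY_NEXT.get(prev_word, set())]
    return allowed[0] if len(allowed) == 1 else None

print(resolve("to", ["bad", "bed"]))  # -> bed   (context decides)
print(resolve("a",  ["bad", "bed"]))  # -> None  (context cannot decide)
```

The second call is the crux: with "a" as context, both spellings remain legal, so no letter-level rule can label either one an error, which is the comment's argument against "editing" a genome whose regulatory language is not yet understood.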

UK scientists gain licence to edit genes in human embryos

Team at Francis Crick Institute permitted to use CRISPR–Cas9 technology in embryos for early-development research.


Ewen Callaway

01 February 2016

Scientists in London have been granted permission to edit the genomes of human embryos for research, UK fertility regulators announced. The 1 February approval by the UK Human Fertilisation and Embryology Authority (HFEA) represents the world's first endorsement of such research by a national regulatory authority.

"It’s an important first. The HFEA has been a very thoughtful, deliberative body that has provided rational oversight of sensitive research areas, and this establishes a strong precedent for allowing this type of research to go forward," says George Daley, a stem-cell biologist at Children's Hospital Boston in Massachusetts.

The HFEA has approved an application by developmental biologist Kathy Niakan, at the Francis Crick Institute in London, to use the genome-editing technique CRISPR–Cas9 in healthy human embryos. Niakan’s team is interested in early development, and it plans to alter genes that are active in the first few days after fertilization. The researchers will stop the experiments after seven days, after which the embryos will be destroyed.

The genetic modifications could help researchers to develop treatments for infertility, but will not themselves form the basis of a therapy.

Robin Lovell-Badge, a developmental biologist at the Crick institute, says that the HFEA’s decision will embolden other researchers who hope to edit the genomes of human embryos. He has heard from other UK scientists who are interested in pursuing embryo-editing research, he says, and expects that more applications will follow. In other countries, he says, the decision “will give scientists confidence to either apply to their national regulatory bodies, if they have them, or just to go ahead anyway”.

Development genes

Niakan’s team has already been granted a licence by the HFEA to conduct research using healthy human embryos that are donated by patients who had undergone in vitro fertilization (IVF) at fertility clinics. But in September last year, the team announced that it had applied to conduct genome editing on these embryos — five months after researchers in China reported that they had used CRISPR–Cas9 to edit the genomes of non-viable human embryos, which sparked a debate about how or whether to draw the line on gene editing in human embryos.

At a press briefing last month, Niakan said that her team could begin experiments within “months” of the HFEA approving the application. Its first experiment will involve blocking the activity of a ‘master regulator’ gene called OCT4, also known as POU5F1, which is active in cells that go on to form the fetus. (Other cells in the embryo go on to form the placenta.) Her team plans to end its test-tube experiments within a week of fertilization, when the fertilized egg has reached the blastocyst stage of development and contains up to 256 cells.

“I am delighted that the HFEA has approved Dr Niakan’s application,” said Crick director Paul Nurse in a statement. “Dr Niakan’s proposed research is important for understanding how a healthy human embryo develops and will enhance our understanding of IVF success rates, by looking at the very earliest stage of human development.”

A local research ethics board (which is similar to an institutional review board in the United States) will now need to approve the research that Niakan’s team has planned. When approving Niakan's application, the HFEA said that no experiments could begin until such ethics approval was granted.

International impact

Sarah Chan, a bioethicist at the University of Edinburgh, UK, says that the decision will reverberate well beyond the United Kingdom. “I think this will be a good example to countries who are considering their approach to regulating this technology. We can have a well-regulated system that is able to make that distinction between research and reproduction,” she says.

It remains illegal to alter the genomes of embryos used to conceive a child in the United Kingdom, but researchers say that the decision to allow embryo-editing research could inform the debate over deploying gene-editing in embryos for therapeutic uses in the clinic.

“This step in the UK will stimulate debate on legal regulation of germline gene editing in clinical settings,” says Tetsuya Ishii, a bioethicist at Hokkaido University in Sapporo, Japan, who notes that some countries do not explicitly prohibit reproductive applications.

“This type of research should prove valuable for understanding the many complex issues around germline editing," adds Daley. "Even though this work isn’t explicitly aiming toward the clinic, it may teach us the potential risks of considering clinical application.”

Nature doi:10.1038/nature.2016.19270

[There is a big difference between genome editing in human embryos for purposes of research and genome editing (e.g. for "single nucleotide polymorphism" diseases) as a cure. As for complex genome (mis)regulation diseases like cancer, the mathematical (fractal) language of the genome must first be known. In natural languages, even for single-letter spelling mistakes a spell-checker may not be effective despite full knowledge of the language (bad or bed could be correct, depending on the context). For more substantial editing of natural languages, for instance editing the grammar, full command of the language is indispensable. Likewise, it might not even make sense, and could be outright dangerous, to start editing e.g. cancerous genomes before the fractal mathematics of genome regulation is understood - Andras_at_Pellionisz_dot_com]

Why Eric Lander morphed from science god to punching bag



Genome-sequencing pioneer Eric Lander, one of the most powerful men in American science, did not embezzle funds from the institute he leads, sexually harass anyone, plagiarize, or fabricate data. But he became the target of venomous online attacks last week because of an essay he wrote on the history of CRISPR, the revolutionary genome-editing technology pioneered partly by his colleagues at the Broad Institute in Cambridge, Mass.

To be sure, Lander gave his foes some openings. He and the journal Cell, which published his essay last week, failed to disclose Lander’s potential conflict of interest when it comes to CRISPR. The essay, other scientists said, got several key facts wrong, and Lander later added what he called clarifications. Stirring the greatest anger, critics charged that rather than writing an objective history he downplayed the role of two key CRISPR scientists who happen to be women.

Those missteps triggered a bitter online war, including the Twitter hashtag #landergate. Biologist Michael Eisen of the University of California, Berkeley, deemed his essay “science propaganda at its most repellent” and called for its retraction, while anonymous scientists on the post-publication review site PubPeer ripped into Lander’s motives and character. The attacks spread well beyond science, with a feminist website charging that “one man tried to write women out of CRISPR.”

The episode created cracks in a dam that had long held back public criticism of Lander. The outpouring of rage directed at him arises from what one veteran biomedical researcher calls “pent-up animosity” toward Lander and the Broad Institute, where he serves as director, that has built up over years.

“Science can be a blood sport,” said science historian and policy expert Robert Cook-Deegan of Duke University.
“This seems to be one of those times.”

Some of the brickbats hurled at Lander reflect professional jealousy, especially since he took an unconventional path into the top echelons of molecular biology. Some seem to be payback for the egos Lander bruised over the years, dating to his role in the Human Genome Project in the late 1990s. Some of the anger seems to stem from still-simmering animosity over what Lander and his institute represent to many: the triumph of Big Science in biology.

Lander, 58, told STAT that, while he does not peruse social media, the criticism that he’s aware of “does not feel personal in any way. I appreciate that there are a lot of diverse perspectives, and science needs those.”

Current and former colleagues contacted by STAT described Lander as brilliant, prickly, and brash, as having “an ego without end,” as “a visionary” who “doesn’t suffer fools gladly,” and as “an authentic genius” who “sees things the rest of us don’t.” Lander won a MacArthur Foundation “genius” award in 1987 at age 30. Since 2009, he has co-chaired President Obama’s scientific advisory council.

“Anything I want to say, he’s ahead of me,” said one scientist who has worked closely with Lander on issues of science policy. “With normal mortals you can see wheels grinding in their head, but with Eric you can’t.”

The Broad rose from nonexistence in 2003 to the pinnacle of molecular biology. By 2008 three Broad scientists, including Lander, ranked in the top 10 most-cited authors of recent papers in molecular biology and genetics. In 2011, Lander had more “hot papers” (meaning those cited most by other scientists) in any field, not just biology, than anyone else over the previous two years, according to Thomson Reuters’ ScienceWatch. By 2014, 8 out of what ScienceWatch called “the 17 hottest-of-the-hot researchers” in genomics were at the Broad. The institute was punching well above its weight.
It attracted eye-popping donations, including $650 million for psychiatric research from the foundation of philanthropist Ted Stanley in 2014 and, since its 2003 founding, $800 million from Los Angeles developer Eli Broad and his wife Edythe. It won $176.5 million in research grants from the National Institutes of Health in fiscal year 2015, ranking it 34th. Larger institutions got more — $604 million for Johns Hopkins, $563 million for the University of California, San Francisco — but the Broad’s smaller number of core researchers were leaving rivals in the dust in terms of their contributions to and influence in science.

To many biomedical researchers at other institutions, said Cook-Deegan, “it feels that these guys from Boston, with more money than God, are trying to muscle in. . . . People [at the Broad] think they work at the best biomedical research institution in the world, and at meetings they let everyone know that.” Cook-Deegan admires Lander: he nominated him for the prestigious Abelson Award for public service to science, which will be given to Lander next month by the American Association for the Advancement of Science.

Apart from the resentment Lander inspires because of the Broad’s success, there is lingering animus over what Lander represents: Big Science. Physics became Big Science — dominated by huge collaborations rather than lone investigators — decades ago with the advent of atomic accelerators. A key 2015 paper on the Higgs boson (“the God particle”) had 5,154 authors. Biology went that route with the launch of the Human Genome Project, the international effort to determine the sequence of 6 billion molecular “letters” that make up human DNA.

Lander was not present at the creation of the $3 billion project in 1990, but the sequencing center he oversaw at the Whitehead Institute became a powerhouse in the race to complete it. Much of that work was done by robots and involved little creativity (once scientists figured out how to do the sequencing).
Some individual investigators felt they couldn’t compete against peers at the sequencing centers in the race for grants. “He became a symbol of plowing lots of resources into industrialized, mindless science that could be run by machines and technicians and so wasn’t real biology,” said one scholar of that period. “Eric came to embody Big Science in that way.”

More than that, Lander played an outsized role in the project relative to his background and experience. A mathematician by training, after he graduated from Princeton in 1978 and earned a PhD in math in 1981 at Oxford University as a Rhodes Scholar, he taught managerial economics at Harvard Business School from 1981 to 1990. He slowly became bored by the MBA world and enchanted with biology, however, and in 1990 founded the genome center at the Whitehead. It was hardly the pay-your-dues, do-your-molecular-biology-PhD-and-postdoctoral-fellowship route to a leading position in the white-hot field of genomics.

“Eric appeared to be an upstart to some people in the science establishment, a mathematician interloper in the tight club of molecular biology,” said Fintan Steele, former director of communications and scientific education at the Broad.

By the late 1990s, confidential National Institutes of Health documents estimated that the genome project was on track to be no more than two-thirds finished by 2005, when it was supposed to be completed, according to histories of the effort. That would have been a disaster: geneticist Craig Venter and his company, Celera, had launched a competing genome-sequencing project and boasted that they would beat the public project to the finish line. Worse, Venter intended to patent DNA sequences, meaning whatever Celera sequenced first would be owned by a for-profit company.
In early 1998, James Watson, codiscoverer of DNA’s double-helix structure and former head of the genome project, asked Lander to persuade NIH to spend more money, faster. Lander thought the problem went beyond funding. The sequencing project was “too bloody complicated, with too many groups,” he told the New Yorker in 2000. Tapping his business acumen, Lander decided the project needed to become more focused, with fewer groups. He also thought that allowing two dozen sequencing labs to each claim part of the genome for their own was “madness,” he told author Victor McElheny for a 2010 book on the genome project. If any lab was slow, the whole project would be late.

Lander, therefore, pushed to reorganize the genome project. Scientists who disagreed with his strategy “bellowed in protest,” according to James Shreeve’s 2004 book “The Genome War,” and Lander’s “constant demands” for his lab to sequence more and more “led to a crescendo of heated conversations.” But Lander’s strong-arming worked: the public effort battled Venter to a tie, with both releasing “drafts” of the human genome in 2001. Lander was first among equals, the lead author of the Nature paper unveiling the “book of life.”

His success left some veteran geneticists bitter at the upstart who helped rescue the highest-profile scientific endeavor of the 1990s. But “competing with Venter excused a lot of behavior,” said New York University bioethicist Arthur Caplan, a member of the Celera advisory board at the time.

Lander attributes the genome project’s “huge success” to, among other things, the fact that “it had the flexibility to bring in people with different perspectives and skills.” On weekly phone calls for five years, he said, “we debated and argued about everything imaginable.”

In 2003 Lander was instrumental in moving the genome center from the Whitehead to the just-created Broad. “It wasn’t just the genome center that he took,” said Steele, the former Broad staffer.
“It was also the substantial funding that supported the center.” That move was spurred in part by the fact that the genome center had outgrown the Whitehead; it constituted three-quarters of the Whitehead’s budget.

The departure of Lander and his genome center to the Broad generated hard feelings at the Whitehead. One veteran of that battle recalls it as “very bloody,” especially because the Whitehead wasn’t raising much money and feared that Lander would vacuum up potential donors. For several years after, Whitehead annual reports showed a picture of its facility in Cambridge’s Kendall Square with the next-door Broad conspicuously missing.

In the biotech hotbed that surrounds the Massachusetts Institute of Technology, it seems every biology PhD has founded a company. Lander is a cofounder of Millennium Pharmaceuticals, Infinity Pharmaceuticals, Verastem, and the cancer vaccine startup Neon Therapeutics. He is a founding advisor to cancer genomics company Foundation Medicine and has close ties to venture capital firm Third Rock Ventures, a major investor in the CRISPR company Editas.

Although his involvement in the for-profit world hardly makes him unusual — MIT, like many universities, encourages scientists to translate their research into drugs and other products — it has, nonetheless, added to the resentment. With Foundation, said a former Broad scientist, “there was a belief that the Broad researchers had done all this work on cancer genomics, and Foundation is built on that. People were asking, ‘Are these guys going to get rich on our work?’”

The most serious misstep in Lander’s Cell essay was arguably a failure to disclose a potential conflict of interest: the Broad is engaged in a bitter fight with the University of California system over CRISPR patents. Lander reported this to Cell, but the journal’s policy is not to note such “institutional” conflicts.
A review of CRISPR coauthored by Berkeley’s Jennifer Doudna in the same issue has no disclosure either, even though she cofounded the CRISPR company Caribou Biosciences, and the Twitterverse has not attacked her.

Critics say Lander downplayed seminal CRISPR research by Doudna and her key collaborator, Emmanuelle Charpentier, and overstated the contributions of Broad biologist Feng Zhang. That has been portrayed as sexist, an impression supported by the title of the essay: Heroes of CRISPR. With too-frequent cases of sexism and outright sexual harassment by leading scientists, sensitivities on this are high, but his defenders say Lander has long been a strong supporter of women in science.

“He has always been one of my greatest advocates,” said Harvard and Broad biologist Pardis Sabeti, who did key genetics work on the recent Ebola outbreak. “He has hired strong, tough, brilliant women scientists for the Broad, and has made it one of the best places for women scientists to work.”

Lander said that he wanted his Cell essay simply “to turn the spotlight on 20 years of the backstory of CRISPR,” showing that science is an “ensemble” enterprise in which even key discoveries struggle to be recognized — journals rejected early CRISPR papers. “But I guess it’s only natural that some people will want to focus on current conflicts,” he said.

Correction: An earlier version of this story failed to attribute to other scientists the claims of errors in Eric Lander’s essay. It also called his response to those claims corrections when he described them as clarifications.

[Some simply "join the fray", but I do not. My observation is that there are different kinds of workers in science: original contributors can easily be distinguished from integrators - and Eric certainly excels as the latter. In the next segment I am not talking about Eric but about the rest of us. Enthused by reading John von Neumann's "The Computer and the Brain" in 1958, I devoted my efforts to a single goal of science (yes, "hypothesis driven": that there is mathematics even to biology). Von Neumann alluded on the last page of his book that we do not know the mathematics of the brain - but that it is certainly different from any known mathematics. I spent half a century on the question, and the result is astoundingly clear. Geometry is the intrinsic mathematics of the brain, and it is united with the geometrization of genome informatics. Those who are truly interested in the elaboration can look up my homepage. The geometry of the metrical (smooth and differentiable) space-time domain uses tensor geometry; Google "Tensor Network Theory". As a result, two basic tenets of Big Science's "The Brain Project" are simply no longer true. First, "imaging" of either the structure or function of the brain has been proven to be a "necessary means but an insufficient goal". Second, it is just not true that "we do not understand, in the mathematical sense, any brain function". Tensor Network Theory has established that the cerebellar neural networks act as the metric tensor of the space-time coordination space, transforming (covariant) motor intentions into precise (contravariant) motor execution. TNT has been proven experimentally by independent workers, gave rise to Neurophilosophy, and yielded an artificial cerebellum - based on my blueprint as a Senior National Academy Council adviser to NASA - that enabled badly injured F-15 fighters to land on one wing. Germany lured me with the Humboldt Prize for Senior Distinguished American Scientists, and our Neurocomputing-II (MIT Press book, with 1575 citations) appeared.
I was, however, only half done. Two essentials kept me awake through long nights. First, the principal (Purkinje) neuron of the cerebellar network that coordinates movements in a metrical space-time domain appeared to be a fractal mathematical object (Cambridge University Press book chapter, 1989). Second, that publication clinched that the Purkinje cell can only be grown by a recursive genome function.

Eric, with training in mathematics, could have integrated the cerebellar neural network models (his brother had directed his attention to them). His business-school training, however, set a goal different from understanding: making the Human Genome Project the epitome of Big Science (only to be "tied" by the competing private-sector approach of the Tesla of Genomics, Craig Venter). The world was, however, frozen by the flabbergasting result(s) of "full DNA sequencing". In 2001, there were far fewer "genes" in the full human genome than anybody expected. Ohno's "Junk DNA" came in handy. The next year (2002) it became clear that the "genes" of human and mouse are not only similar in number - they are 98% the same! Clearly, the very significant difference lay in the amount of "Junk DNA". For me, on February 14, 2002, this yielded the "Eureka moment" (Fig. 3, reproduced once the 2002 provisional filing was followed by regular submission). Looking at some repeats with visible self-similarity, I connected dots that had been known, but separate, before. FractoGene is: "Fractal Genome Grows Fractal Organisms". Though heralded instantly, by the 50th Anniversary of the Double Helix the FractoGene discovery (of a "cause and effect" of two fractals) met a deafening silence. No "Integrator of the Eric kind" was anywhere in sight. In spite of a peer-reviewed science publication with the late Malcolm Simons (among the first to take a stand against "Junk DNA", seeking desperately what it IS, if not junk, and who came to terms with FractoGene), and in spite of ENCODE-I (and, 7 years after, ENCODE-II) eroding even the old definition of "gene" (since it is, e.g., fractured; fractal), it was very difficult for most scientists to handle the "double disruption": FractoGene reversed both principal axioms (the false claims of Junk DNA and of the Central Dogma). One needed fellow mathematician-genomists, playing the important (if progressive) role of an Integrator.
In September 2007, Eric paid a visit to lecture at the University of San Francisco. I put my manuscript, The Principle of Recursive Genome Function (2008, dedicated to Eric), personally into Eric's hand. He instantly looked into it and said: "Wow, it even has (fractal) math in it! I will read it on the plane". The Edison of Genomics, with the most plentiful set of original contributions, George Church (yes, "the other person, at Harvard") suddenly invited me to his own Cold Spring Harbor Lab meeting for a September 16, 2009 presentation of my Fractal Approach. Little did I know that "the other person, at Broad" was brewing a massive fractal project! It appeared as the Hilbert fractal on the cover of Science Magazine on October 9, 2009 (senior, last author: Eric Lander). The actual work on the DNA globule was pioneered by Dekker and done mostly by Erez Lieberman. The Integrator reached back a couple of decades to Grosberg, and much deeper (to Hilbert). Thus, some original contributors could be skipped. Overall, this still helped me a lot: "Mr. President, The Genome is Fractal!". The double-degree mathematician-genomist Eric Schadt endorsed my fractal approach, and lately so did Nobel Laureate Michael Levitt of Stanford. Much earlier, in 2006, I had taken my PostGenetics with FractoGene to my native Hungary - the first international symposium in history to recall "Junk DNA". I started to pour out mined Fractal Defects of various diseases caused by genomic glitches. "Interesting - but what to do with them?" went the overwhelming reply. Today, with Genome Editing a reality, the IP (8,280,641) is suddenly precious (especially since it remains in force for more than the coming decade). We have a real chance to edit these defects out for a cure. Fractal Defects occurring in the regulatory DNA (maiden name: "Junk DNA") are the most likely to cause complex genomic misregulations such as cancers, Alzheimer's, Parkinson's, etc.
Editing any code (or text) assumes, however, that we know the mathematical "language" before we edit. Herein lies the ultimate merit of FractoGene, as it has been developed since 2002 and will be over many decades to come. I am not too likely to be with it for most of those decades - but I am glad I sowed the seeds for the recursive dual representation of proteins by coding and non-coding DNA, and for unifying the sparsely metrical functional spaces of neuroscience and genomics. Integrators, looking the other way, may have missed their chance. Edisons, Teslas, etc. can greatly benefit from mathematical understanding - perhaps even more than from "novel" Big Science projects randomly launched (Moonshots everywhere, resulting in Big Data rather than even a little understanding). As the mathematically savvy know, an integral is useful - but you can only benefit from an integral if the original function is defined. Perhaps most importantly, the value of an integral can be floated by a totally arbitrary constant, "C". With a high "C", "a Secret of the Genome is that it is Fractal" (22:22), though pioneers are skipped; with a low "C", the fractal genome goes unmentioned. The NIH Cancer Institute is on the track of Fractals. - Andras_at_Pellionisz_dot_com]
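For readers curious about the Hilbert fractal invoked above: it is a space-filling curve, used on the October 9, 2009 Science cover to illustrate the knot-free "fractal globule" model of genome folding, in which neighbors along the one-dimensional sequence remain neighbors in space. Below is a minimal sketch of the standard index-to-coordinate mapping for the Hilbert curve; the function name is mine, and this is illustrative code, not code from any cited work.

```python
def hilbert_point(order: int, d: int):
    """Map index d (0 <= d < 4**order) along a Hilbert curve to an (x, y)
    cell on a 2**order x 2**order grid, using the classic bitwise recipe."""
    x = y = 0
    s = 1
    while s < 2 ** order:
        rx = 1 & (d // 2)
        ry = 1 & (d ^ rx)
        if ry == 0:                      # rotate/flip the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        d //= 4
        s *= 2
    return x, y

# Consecutive indices always land on adjacent grid cells -- the property
# that makes this curve a useful cartoon of dense yet unentangled folding.
print([hilbert_point(1, d) for d in range(4)])  # [(0, 0), (0, 1), (1, 1), (1, 0)]
```

The adjacency property is the whole point of the analogy: walking along the genomic "thread" never requires a long jump in space, just as in the fractal-globule picture of chromatin.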

Easy DNA Editing Will Remake the World. Buckle Up.


Amy Maxmen

SPINY GRASS AND SCRAGGLY PINES creep amid the arts-and-crafts buildings of the Asilomar Conference Grounds, 100 acres of dune where California's Monterey Peninsula hammerheads into the Pacific. It's a rugged landscape, designed to inspire people to contemplate their evolving place on Earth. So it was natural that 140 scientists gathered here in 1975 for an unprecedented conference.

They were worried about what people called “recombinant DNA,” the manipulation of the source code of life. It had been just 22 years since James Watson, Francis Crick, and Rosalind Franklin described what DNA was—deoxyribonucleic acid, four different structures called bases stuck to a backbone of sugar and phosphate, in sequences thousands of bases long. DNA is what genes are made of, and genes are the basis of heredity.

Preeminent genetic researchers like David Baltimore, then at MIT, went to Asilomar to grapple with the implications of being able to decrypt and reorder genes. It was a God-like power—to plug genes from one living thing into another. Used wisely, it had the potential to save millions of lives. But the scientists also knew their creations might slip out of their control. They wanted to consider what ought to be off-limits.

By 1975, other fields of science—like physics—were subject to broad restrictions. Hardly anyone was allowed to work on atomic bombs, say. But biology was different. Biologists still let the winding road of research guide their steps. On occasion, regulatory bodies had acted retrospectively—after Nuremberg, Tuskegee, and the human radiation experiments, external enforcement entities had told biologists they weren't allowed to do that bad thing again. Asilomar, though, was about establishing prospective guidelines, a remarkably open and forward-thinking move.

At the end of the meeting, Baltimore and four other molecular biologists stayed up all night writing a consensus statement. They laid out ways to isolate potentially dangerous experiments and determined that cloning or otherwise messing with dangerous pathogens should be off-limits. A few attendees fretted about the idea of modifications of the human “germ line”—changes that would be passed on from one generation to the next—but most thought that was so far off as to be unrealistic. Engineering microbes was hard enough. The rules the Asilomar scientists hoped biology would follow didn't look much further ahead than ideas and proposals already on their desks.

Earlier this year, Baltimore joined 17 other researchers for another California conference, this one at the Carneros Inn in Napa Valley. “It was a feeling of déjà vu,” Baltimore says. There he was again, gathered with some of the smartest scientists on earth to talk about the implications of genome engineering.

The stakes, however, have changed. Everyone at the Napa meeting had access to a gene-editing technique called Crispr-Cas9. The first term is an acronym for “clustered regularly interspaced short palindromic repeats,” a description of the genetic basis of the method; Cas9 is the name of a protein that makes it work. Technical details aside, Crispr-Cas9 makes it easy, cheap, and fast to move genes around—any genes, in any living thing, from bacteria to people. “These are monumental moments in the history of biomedical research,” Baltimore says. “They don't happen every day.”

Using the three-year-old technique, researchers have already reversed mutations that cause blindness, stopped cancer cells from multiplying, and made cells impervious to the virus that causes AIDS. Agronomists have rendered wheat invulnerable to killer fungi like powdery mildew, hinting at engineered staple crops that can feed a population of 9 billion on an ever-warmer planet. Bioengineers have used Crispr to alter the DNA of yeast so that it consumes plant matter and excretes ethanol, promising an end to reliance on petrochemicals. Startups devoted to Crispr have launched. International pharmaceutical and agricultural companies have spun up Crispr R&D. Two of the most powerful universities in the US are engaged in a vicious war over the basic patent. Depending on what kind of person you are, Crispr makes you see a gleaming world of the future, a Nobel medallion, or dollar signs.

The technique is revolutionary, and like all revolutions, it's perilous. Crispr goes well beyond anything the Asilomar conference discussed. It could at last allow genetics researchers to conjure everything anyone has ever worried they would—designer babies, invasive mutants, species-specific bioweapons, and a dozen other apocalyptic sci-fi tropes. It brings with it all-new rules for the practice of research in the life sciences. But no one knows what the rules are—or who will be the first to break them.

IN A WAY, humans were genetic engineers long before anyone knew what a gene was. They could give living things new traits—sweeter kernels of corn, flatter bulldog faces—through selective breeding. But it took time, and it didn't always pan out. By the 1930s refining nature got faster. Scientists bombarded seeds and insect eggs with x-rays, causing mutations to scatter through genomes like shrapnel. If one of hundreds of irradiated plants or insects grew up with the traits scientists desired, they bred it and tossed the rest. That's where red grapefruits came from, and most barley for modern beer.

Genome modification has become less of a crapshoot. In 2002, molecular biologists learned to delete or replace specific genes using enzymes called zinc-finger nucleases; the next-generation technique used enzymes named TALENs.

Yet the procedures were expensive and complicated. They only worked on organisms whose molecular innards had been thoroughly dissected—like mice or fruit flies. Genome engineers went on the hunt for something better.

As it happened, the people who found it weren't genome engineers at all. They were basic researchers, trying to unravel the origin of life by sequencing the genomes of ancient bacteria and microbes called Archaea (as in archaic), descendants of the first life on Earth. Deep amid the bases, the As, Ts, Gs, and Cs that made up those DNA sequences, microbiologists noticed recurring segments that were the same back to front and front to back—palindromes. The researchers didn't know what these segments did, but they knew they were weird. In a branding exercise only scientists could love, they named these clusters of repeating palindromes Crispr.
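In double-stranded DNA, "back to front and front to back" has a precise meaning: a sequence is palindromic when it equals its own reverse complement, so it reads the same on both strands. A minimal sketch of that check (illustrative only; the function names are mine, and the EcoRI site is a standard textbook example of a DNA palindrome):

```python
# A DNA "palindrome" equals its own reverse complement,
# i.e. it reads identically 5'->3' on both strands.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(seq: str) -> str:
    """Complement each base, then reverse the sequence."""
    return "".join(COMPLEMENT[base] for base in reversed(seq))

def is_palindromic(seq: str) -> bool:
    return seq == reverse_complement(seq)

print(is_palindromic("GAATTC"))   # True  (the EcoRI recognition site)
print(is_palindromic("GATTACA"))  # False (odd length can never qualify)
```

In practice CRISPR repeats are only partially palindromic, so real repeat-finding tools score approximate matches rather than demanding exact equality as this sketch does.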

Then, in 2005, a microbiologist named Rodolphe Barrangou, working at a Danish food company called Danisco, spotted some of those same palindromic repeats in Streptococcus thermophilus, the bacteria that the company uses to make yogurt and cheese. Barrangou and his colleagues discovered that the unidentified stretches of DNA between Crispr's palindromes matched sequences from viruses that had infected their S. thermophilus colonies. Like most living things, bacteria get attacked by viruses—in this case they're called bacteriophages, or phages for short. Barrangou's team went on to show that the segments served an important role in the bacteria's defense against the phages, a sort of immunological memory. If a phage infected a microbe whose Crispr carried its fingerprint, the bacteria could recognize the phage and fight back. Barrangou and his colleagues realized they could save their company some money by selecting S. thermophilus species with Crispr sequences that resisted common dairy viruses.

As more researchers sequenced more bacteria, they found Crisprs again and again—half of all bacteria had them. Most Archaea did too. And even stranger, some of Crispr's sequences didn't encode the eventual manufacture of a protein, as is typical of a gene, but instead led to RNA—single-stranded genetic material. (DNA, of course, is double-stranded.)

That pointed to a new hypothesis. Most present-day animals and plants defend themselves against viruses with structures made out of RNA. So a few researchers started to wonder if Crispr was a primordial immune system. Among the people working on that idea was Jill Banfield, a geomicrobiologist at UC Berkeley, who had found Crispr sequences in microbes she collected from acidic, 110-degree water from the defunct Iron Mountain Mine in Shasta County, California. But to figure out if she was right, she needed help.

Luckily, one of the country's best-known RNA experts, a biochemist named Jennifer Doudna, worked on the other side of campus in an office with a view of the Bay and San Francisco's skyline. It certainly wasn't what Doudna had imagined for herself as a girl growing up on the Big Island of Hawaii. She simply liked math and chemistry—an affinity that took her to Harvard and then to a postdoc at the University of Colorado. That's where she made her initial important discoveries, revealing the three-dimensional structure of complex RNA molecules that could, like enzymes, catalyze chemical reactions.

The mine bacteria piqued Doudna's curiosity, but when Doudna pried Crispr apart, she didn't see anything to suggest the bacterial immune system was related to the one plants and animals use. Still, she thought the system might be adapted for diagnostic tests.

Banfield wasn't the only person to ask Doudna for help with a Crispr project. In 2011, Doudna was at an American Society for Microbiology meeting in San Juan, Puerto Rico, when an intense, dark-haired French scientist asked her if she wouldn't mind stepping outside the conference hall for a chat. This was Emmanuelle Charpentier, a microbiologist at Umeå University in Sweden.

As they wandered through the alleyways of old San Juan, Charpentier explained that one of Crispr's associated proteins, named Csn1, appeared to be extraordinary. It seemed to search for specific DNA sequences in viruses and cut them apart like a microscopic multitool. Charpentier asked Doudna to help her figure out how it worked. “Somehow the way she said it, I literally—I can almost feel it now—I had this chill down my back,” Doudna says. “When she said ‘the mysterious Csn1’ I just had this feeling, there is going to be something good here.”

Back in Sweden, Charpentier kept a colony of Streptococcus pyogenes in a biohazard chamber. Few people want S. pyogenes anywhere near them. It can cause strep throat and necrotizing fasciitis—flesh-eating disease. But it was the bug Charpentier worked with, and it was in S. pyogenes that she had found that mysterious yet mighty protein, now renamed Cas9. Charpentier swabbed her colony, purified its DNA, and FedExed a sample to Doudna.

Working together, Charpentier’s and Doudna’s teams found that Crispr made two short strands of RNA and that Cas9 latched onto them. The sequence of the RNA strands corresponded to stretches of viral DNA and could home in on those segments like a genetic GPS. And when the Crispr-Cas9 complex arrives at its destination, Cas9 does something almost magical: It changes shape, grasping the DNA and slicing it with a precise molecular scalpel.

Here’s what’s important: Once they’d taken that mechanism apart, Doudna’s postdoc, Martin Jinek, combined the two strands of RNA into one fragment—“guide RNA”—that Jinek could program. He could make guide RNA with whatever genetic letters he wanted; not just from viruses but from, as far as they could tell, anything. In test tubes, the combination of Jinek’s guide RNA and the Cas9 protein proved to be a programmable machine for DNA cutting. Compared to TALENs and zinc-finger nucleases, this was like trading in rusty scissors for a computer-controlled laser cutter. “I remember running into a few of my colleagues at Berkeley and saying we have this fantastic result, and I think it’s going to be really exciting for genome engineering. But I don’t think they quite got it,” Doudna says. “They kind of humored me, saying, ‘Oh, yeah, that’s nice.’”
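The "genetic GPS" idea above can be sketched in a few lines. It is widely reported that SpCas9 requires an NGG PAM immediately downstream of the ~20-nt protospacer and cuts roughly 3 bp upstream of the PAM; everything else here (the function name and the sequences) is invented for illustration, and real guide design must also scan the reverse strand and weigh off-target matches:

```python
import re

def find_cas9_sites(dna: str, guide: str):
    """Return (cut_index, protospacer) pairs where the guide matches a
    protospacer immediately followed by an NGG PAM on this strand.
    The blunt cut is placed ~3 bp upstream (5') of the PAM."""
    pattern = re.escape(guide) + "(?=[ACGT]GG)"  # lookahead leaves the PAM unconsumed
    return [(m.end() - 3, m.group()) for m in re.finditer(pattern, dna)]

# Toy target: a 20-nt protospacer followed by a TGG PAM.
dna = "TTTT" + "GACGTTACCGAGATCGATCA" + "TGG" + "AAAA"
guide = "GACGTTACCGAGATCGATCA"
print(find_cas9_sites(dna, guide))  # [(21, 'GACGTTACCGAGATCGATCA')]
```

The lookahead is a deliberate choice: the PAM must be present next to the match but is not itself part of the protospacer the guide RNA pairs with.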

On June 28, 2012, Doudna’s team published its results in Science. In the paper and in an earlier corresponding patent application, they suggest their technology could be a tool for genome engineering. It was elegant and cheap. A grad student could do it.

The finding got noticed. In the 10 years preceding 2012, 200 papers mentioned Crispr. By 2014 that number had more than tripled. Doudna and Charpentier were each recently awarded the $3 million 2015 Breakthrough Prize. Time magazine listed the duo among the 100 most influential people in the world. Nobody was just humoring Doudna anymore.

MOST WEDNESDAY AFTERNOONS, Feng Zhang, a molecular biologist at the Broad Institute of MIT and Harvard, scans the contents of Science as soon as they are posted online. In 2012, he was working with Crispr-Cas9 too. So when he saw Doudna and Charpentier's paper, did he think he'd been scooped? Not at all. “I didn't feel anything,” Zhang says. “Our goal was to do genome editing, and this paper didn't do it.” Doudna's team had cut DNA floating in a test tube, but to Zhang, if you weren't working with human cells, you were just screwing around.

That kind of seriousness is typical for Zhang. At 11, he moved from China to Des Moines, Iowa, with his parents, who are engineers—one computer, one electrical. When he was 16, he got an internship at the gene therapy research institute at Iowa Methodist hospital. By the time he graduated high school he'd won multiple science awards, including third place in the Intel Science Talent Search.

When Doudna talks about her career, she dwells on her mentors; Zhang lists his personal accomplishments, starting with those high school prizes. Doudna seems intuitive and has a hands-off management style. Zhang … pushes. We scheduled a video chat at 9:15 pm, and he warned me that we'd be talking data for a couple of hours. “Power-nap first,” he said.

Zhang got his job at the Broad in 2011, when he was 29. Soon after starting there, he heard a speaker at a scientific advisory board meeting mention Crispr. “I was bored,” Zhang says, “so as the researcher spoke, I just Googled it.” Then he went to Miami for an epigenetics conference, but he hardly left his hotel room. Instead Zhang spent his time reading papers on Crispr and filling his notebook with sketches on ways to get Crispr and Cas9 into the human genome. “That was an extremely exciting weekend,” he says, smiling.

Just before Doudna's team published its discovery in Science, Zhang applied for a federal grant to study Crispr-Cas9 as a tool for genome editing. Doudna's publication shifted him into hyperspeed. He knew it would prompt others to test Crispr on genomes. And Zhang wanted to be first.

Even Doudna, for all of her equanimity, had rushed to report her finding, though she hadn't shown the system working in human cells. “Frankly, when you have a result that is exciting,” she says, “one does not wait to publish it.”

In January 2013, Zhang's team published a paper in Science showing how Crispr-Cas9 edits genes in human and mouse cells. In the same issue, Harvard geneticist George Church edited human cells with Crispr too. Doudna's team reported success in human cells that month as well, though Zhang is quick to assert that his approach cuts and repairs DNA better.

That detail matters because Zhang had asked the Broad Institute and MIT, where he holds a joint appointment, to file for a patent on his behalf. Doudna had filed her patent application—which was public information—seven months earlier. But the attorney filing for Zhang checked a box on the application marked “accelerate” and paid a fee, usually somewhere between $2,000 and $4,000. A series of emails followed between agents at the US Patent and Trademark Office and the Broad's patent attorneys, who argued that their claim was distinct.

A little more than a year after those human-cell papers came out, Doudna was on her way to work when she got an email telling her that Zhang, the Broad Institute, and MIT had indeed been awarded the patent on Crispr-Cas9 as a method to edit genomes. “I was quite surprised,” she says, “because we had filed our paperwork several months before he had.”

The Broad win started a firefight. The University of California amended Doudna's original claim to overlap Zhang's and sent the patent office a 114-page application for an interference proceeding—a hearing to determine who owns Crispr—this past April. In Europe, several parties are contesting Zhang's patent on the grounds that it lacks novelty. Zhang points to his grant application as proof that he independently came across the idea. He says he could have done what Doudna's team did in 2012, but he wanted to prove that Crispr worked within human cells. The USPTO may make its decision as soon as the end of the year.

The stakes here are high. Any company that wants to work with anything other than microbes will have to license Zhang's patent; royalties could be worth billions of dollars, and the resulting products could be worth billions more. Just by way of example: In 1983 Columbia University scientists patented a method for introducing foreign DNA into cells, called cotransformation. By the time the patents expired in 2000, they had brought in $790 million in revenue.

It's a testament to Crispr's value that despite the uncertainty over ownership, companies based on the technique keep launching. In 2011 Doudna and a student founded a company, Caribou, based on earlier Crispr patents; the University of California offered Caribou an exclusive license on the patent Doudna expected to get. Caribou uses Crispr to create industrial and research materials, potentially enzymes in laundry detergent and laboratory reagents. To focus on disease—where the long-term financial gain of Crispr-Cas9 will undoubtedly lie—Caribou spun off another biotech company called Intellia Therapeutics and sublicensed the Crispr-Cas9 rights. Pharma giant Novartis has invested in both startups. In Switzerland, Charpentier cofounded Crispr Therapeutics. And in Cambridge, Massachusetts, Zhang, George Church, and several others founded Editas Medicine, based on licenses on the patent Zhang eventually received.

Thus far the four companies have raised at least $158 million in venture capital.

ANY GENE TYPICALLY has just a 50–50 chance of getting passed on. Either the offspring gets a copy from Mom or a copy from Dad. But in 1957 biologists found exceptions to that rule, genes that literally manipulated cell division and forced themselves into a larger number of offspring than chance alone would have allowed.

A decade ago, an evolutionary geneticist named Austin Burt proposed a sneaky way to use these “selfish genes.” He suggested tethering one to a separate gene—one that you wanted to propagate through an entire population. If it worked, you'd be able to drive the gene into every individual in a given area. Your gene of interest graduates from public transit to a limousine in a motorcade, speeding through a population in flagrant disregard of heredity's traffic laws. Burt suggested using this “gene drive” to alter mosquitoes that spread malaria, which kills around a million people every year. It's a good idea. In fact, other researchers are already using other methods to modify mosquitoes to resist the Plasmodium parasite that causes malaria and to be less fertile, reducing their numbers in the wild. But engineered mosquitoes are expensive. If researchers don't keep topping up the mutants, the normals soon recapture control of the ecosystem.

Push those modifications through with a gene drive and the normal mosquitoes wouldn't stand a chance. The problem is, inserting the gene drive into the mosquitoes was impossible. Until Crispr-Cas9 came along.
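The "flagrant disregard of heredity's traffic laws" can be sketched with a toy population model. All numbers and names below are illustrative, not from any actual mosquito study: a drive-carrying heterozygote converts its wild-type chromosome with some "homing" efficiency, so the drive allele spreads far faster than a Mendelian 50-50 coin flip would allow.

```python
import random

def next_gen(pop, c):
    """One generation of random mating. pop is a list of genotypes,
    each a pair of alleles: 'D' (drive) or 'w' (wild type)."""
    offspring = []
    for _ in range(len(pop)):
        g = [random.choice(random.choice(pop)),
             random.choice(random.choice(pop))]
        # Homing: in a heterozygote, the drive copies itself onto the
        # wild-type chromosome with efficiency c.
        if 'D' in g and 'w' in g and random.random() < c:
            g = ['D', 'D']
        offspring.append(tuple(g))
    return offspring

def drive_freq(pop):
    return sum(g.count('D') for g in pop) / (2.0 * len(pop))

random.seed(42)
pop = [('D', 'D')] * 10 + [('w', 'w')] * 990   # release 1% engineered carriers
for _ in range(12):
    pop = next_gen(pop, c=0.9)
print(round(drive_freq(pop), 2))   # the drive allele has nearly swept the population
```

Under ordinary Mendelian inheritance (set `c=0`), the 1% allele frequency would simply drift; with homing it sweeps toward fixation within a dozen generations, which is why the Harvard lab works behind locked doors.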

Today, behind a set of four locked and sealed doors in a lab at the Harvard School of Public Health, a special set of mosquito larvae of the African species Anopheles gambiae wriggle near the surface of shallow tubs of water. These aren't normal Anopheles, though. The lab is working on using Crispr to insert malaria-resistant gene drives into their genomes. It hasn't worked yet, but if it does … well, consider this from the mosquitoes' point of view. This project isn't about reengineering one of them. It's about reengineering them all.

Kevin Esvelt, the evolutionary engineer who initiated the project, knows how serious this work is. The basic process could wipe out any species. Scientists will have to study the mosquitoes for years to make sure that the gene drives can't be passed on to other species of mosquitoes. And they want to know what happens to bats and other insect-eating predators if the drives make mosquitoes extinct. “I am responsible for opening a can of worms when it comes to gene drives,” Esvelt says, “and that is why I try to ensure that scientists are taking precautions and showing themselves to be worthy of the public's trust—maybe we're not, but I want to do my damnedest to try.”

Esvelt talked all this over with his adviser—Church, who also worked with Zhang. Together they decided to publish their gene-drive idea before it was actually successful. They wanted to lay out their precautionary measures, way beyond five nested doors. Gene drive research, they wrote, should take place in locations where the species of study isn't native, making it less likely that escapees would take root. And they also proposed a way to turn the gene drive off when an engineered individual mated with a wild counterpart—a genetic sunset clause. Esvelt filed for a patent on Crispr gene drives, partly, he says, to block companies that might not take the same precautions.

Within a year, and without seeing Esvelt's papers, biologists at UC San Diego had used Crispr to insert gene drives into fruit flies—they called them “mutagenic chain reactions.” They had done their research in a chamber behind five doors, but the other precautions weren't there. Church said the San Diego researchers had gone “a step too far”—big talk from a scientist who says he plans to use Crispr to bring back an extinct woolly mammoth by deriving genes from frozen corpses and injecting them into elephant embryos. (Church says tinkering with one woolly mammoth is way less scary than messing with whole populations of rapidly reproducing insects. “I'm afraid of everything,” he says. “I encourage people to be as creative in thinking about the unintended consequences of their work as the intended.”)

Ethan Bier, who worked on the San Diego fly study, agrees that gene drives come with risks. But he points out that Esvelt's mosquitoes don't have the genetic barrier Esvelt himself advocates. (To be fair, that would defeat the purpose of a gene drive.) And the ecological barrier, he says, is nonsense. “In Boston you have hot and humid summers, so sure, tropical mosquitoes may not be native, but they can certainly survive,” Bier says. “If a pregnant female got out, she and her progeny could reproduce in a puddle, fly to ships in the Boston Harbor, and get on a boat to Brazil.”

These problems don't end with mosquitoes. One of Crispr's strengths is that it works on every living thing. That kind of power makes Doudna feel like she opened Pandora's box. Use Crispr to treat, say, Huntington's disease—a debilitating neurological disorder—in the womb, when an embryo is just a ball of cells? Perhaps. But the same method could also possibly alter less medically relevant genes, like the ones that make skin wrinkle. “We haven't had the time, as a community, to discuss the ethics and safety,” Doudna says, “and, frankly, whether there is any real clinical benefit of this versus other ways of dealing with genetic disease.”

That's why she convened the meeting in Napa. All the same problems of recombinant DNA that the Asilomar attendees tried to grapple with are still there—more pressing now than ever. And if the scientists don't figure out how to handle them, some other regulatory body might. Few researchers, Baltimore included, want to see Congress making laws about science. “Legislation is unforgiving,” he says. “Once you pass it, it is very hard to undo.”

In other words, if biologists don't start thinking about ethics, the taxpayers who fund their research might do the thinking for them.

All of that only matters if every scientist is on board. A month after the Napa conference, researchers at Sun Yat-sen University in Guangzhou, China, announced they had used Crispr to edit human embryos. Specifically they were looking to correct mutations in the gene that causes beta thalassemia, a disorder that interferes with a person's ability to make healthy red blood cells.

The work wasn't successful—Crispr, it turns out, didn't target genes as well in embryos as it does in isolated cells. The Chinese researchers tried to skirt the ethical implications of their work by using nonviable embryos, which is to say they could never have been brought to term. But the work attracted attention. A month later, the US National Academy of Sciences announced that it would create a set of recommendations for scientists, policymakers, and regulatory agencies on when, if ever, embryonic engineering might be permissible. Another National Academy report will focus on gene drives. Though those recommendations don't carry the weight of law, federal funding in part determines what science gets done, and agencies that fund research around the world often abide by the academy's guidelines.

THE TRUTH IS, most of what scientists want to do with Crispr is not controversial. For example, researchers once had no way to figure out why spiders have the same gene that determines the pattern of veins in the wings of flies. You could sequence the spider and see that the “wing gene” was in its genome, but all you’d know was that it certainly wasn’t designing wings. Now, with less than $100, an ordinary arachnologist can snip the wing gene out of a spider embryo and see what happens when that spider matures. If it’s obvious—maybe its claws fail to form—you’ve learned that the wing gene must have served a different purpose before insects branched off, evolutionarily, from the ancestor they shared with spiders. Pick your creature, pick your gene, and you can bet someone somewhere is giving it a go.

Academic and pharmaceutical company labs have begun to develop Crispr-based research tools, such as cancerous mice—perfect for testing new chemotherapies. A team at MIT, working with Zhang, used Crispr-Cas9 to create, in just weeks, mice that inevitably get liver cancer. That kind of thing used to take more than a year. Other groups are working on ways to test drugs on cells with single-gene variations to understand why the drugs work in some cases and fail in others. Zhang’s lab used the technique to learn which genetic variations make people resistant to a melanoma drug called Vemurafenib. The genes he identified may provide research targets for drug developers.

The real money is in human therapeutics. For example, labs are working on the genetics of so-called elite controllers, people who can be HIV-positive but never develop AIDS. Using Crispr, researchers can knock out a gene called CCR5, which makes a protein that helps usher HIV into cells. You’d essentially make someone an elite controller. Or you could use Crispr to target HIV directly; that begins to look a lot like a cure.

Or—and this idea is decades away from execution—you could figure out which genes make humans susceptible to HIV overall. Make sure they don’t serve other, more vital purposes, and then “fix” them in an embryo. It’d grow into a person immune to the virus.

But straight-out editing of a human embryo sets off all sorts of alarms, both in terms of ethics and legality. It contravenes the policies of the US National Institutes of Health, and in spirit at least runs counter to the United Nations’ Universal Declaration on the Human Genome and Human Rights. (Of course, when the US government said it wouldn’t fund research on human embryonic stem cells, private entities raised millions of dollars to do it themselves.) Engineered humans are a ways off—but nobody thinks they’re science fiction anymore.

Even if scientists never try to design a baby, the worries those Asilomar attendees had four decades ago now seem even more prescient. The world has changed. “Genome editing started with just a few big labs putting in lots of effort, trying something 1,000 times for one or two successes,” says Hank Greely, a bioethicist at Stanford. “Now it’s something that someone with a BS and a couple thousand dollars’ worth of equipment can do. What was impractical is now almost everyday. That’s a big deal.”

In 1975 no one was asking whether a genetically modified vegetable should be welcome in the produce aisle. No one was able to test the genes of an unborn baby, or sequence them all. Today swarms of investors are racing to bring genetically engineered creations to market. The idea of Crispr slides almost frictionlessly into modern culture.

In an odd reversal, it’s the scientists who are showing more fear than the civilians. When I ask Church for his most nightmarish Crispr scenario, he mutters something about weapons and then stops short. He says he hopes to take the specifics of the idea, whatever it is, to his grave. But thousands of other scientists are working on Crispr. Not all of them will be as cautious. “You can’t stop science from progressing,” Jinek says. “Science is what it is.” He’s right. Science gives people power. And power is unpredictable.

[The ominous last paragraph aside, why should this column take special notice of Genome Editing? A formal reason is that the motto of HolGenTech, Inc. has been for years "Ask what you can do for your genome". Now the answer, in theory, is obvious: "If there are defects in your genome, get them edited out". However, there is the well-known question "What is the difference between theory and practice?" "In theory, there is no difference. The difference is in practice." Genome Editing may be "easy" (as the title of this summary says) IF YOU KNOW WHAT TO EDIT OUT AND WHAT THE REPLACEMENT SHOULD BE. In simple cases, like well-known single nucleotide polymorphisms (the ethical barrier - outside of China - aside), genome editing is truly a straightforward process. It is like clicking on a red-lined word in a spell-checker: the single character is changed. However, to edit a language with complex glitches, one must understand the meaning - there is no way around it. "Fractal DNA grows fractal organisms" provides the mathematics (fractal geometry) that leads us to such understanding. If you think (and everyone should) that Genome Editing "Will remake the World", size up the value of (mathematical) understanding put together with the mechanism of editing! andras_at_pellionisz_dot_com]

Could You Be any Cuter? Genome Editing and the Future of the Human Species

GEORGE W. SLEDGE, JR., MD, Chief of Oncology at Stanford University

Thursday, May 14, 2015

If you want to see what the future holds for us, let me suggest two recent articles. The first, published in the March 5th issue of the MIT Technology Review by Antonio Regalado, is called “Engineering the Perfect Baby.” The second, published in Nature just a week later by a group of concerned scientists, is called “Don’t Edit the Human Germ Line.” Both discuss recent advances that, for all practical purposes, turn science fiction into science. It’s an interesting story.

The story goes back three years to the development of CRISPR/Cas-9 technology for gene editing by Jennifer Doudna and Emmanuelle Charpentier. CRISPRs (short for Clustered Regularly Interspaced Short Palindromic Repeats) are short DNA segments in which segments of viral DNA are inserted, which are then transcribed to a form of RNA (cr-RNA). This viral-specific cr-RNA then directs the nuclease Cas9 to the invading complementary viral DNA, which is cleaved.
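The repeat-spacer layout just described can be sketched in a few lines. This is a deliberately simplified illustration: the repeat and "viral" sequences are invented, and real arrays have leader sequences and imperfect repeats. The point is the architecture: identical repeats alternate with spacers copied from past invaders, and splitting the array on the repeat recovers the spacer library the cell transcribes into crRNAs.

```python
def extract_spacers(array, repeat):
    """Split a CRISPR array on its (idealized, identical) repeat and
    return the spacers - the stored snippets of viral DNA."""
    return [piece for piece in array.split(repeat) if piece]

repeat = "GTTTTAGAGCTA"                            # invented repeat sequence
viral_snippets = ["ACGTACGTACGT", "TTGACCAAGGTC"]  # invented "past invader" DNA
array = repeat + "".join(s + repeat for s in viral_snippets)
print(extract_spacers(array, repeat))   # → ['ACGTACGTACGT', 'TTGACCAAGGTC']
```

Each recovered spacer, once transcribed into crRNA, is what steers Cas9 to the matching viral sequence on its next visit.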

We do not think of bacteria as either needing or having an immune system, but CRISPR/Cas9 functions as one in the prokaryote/bacteriophage arms race. It is elegant and simple, a profoundly cool invention far down on the evolutionary tree that somehow failed to make it to mammals.

Doudna and Charpentier had the exceedingly clever, and in retrospect quite obvious, idea that this could be used to edit specific DNA sequences. I say “in retrospect quite obvious,” but it is the sort of retrospective obviousness that turns previously obscure professors working in equally obscure fields into Nobel laureates, as their 2012 Science CRISPR/Cas-9 paper certainly will.

Molecular biologists love this technology, and for good reason. With CRISPR/Cas-9 one can add or subtract genes almost at will. The technology, while not perfect (more on this later), is a straightforward, off-the-shelf tool kit that allows practically anyone to manipulate the genome of practically any cell. It is a game changer for laboratory research. The technology has launched an astonishing number of papers, several new biotech start-ups, and (already) the inevitable ugly patent lawsuits over who got there first.

Because bacterial DNA and human DNA are forged from the same base elements, what one can do in E. coli one can do in H. sapiens. Whether it is wise for H. sapiens to reproduce E. coli technology is the real question.

What Regalado’s article suggests, and what the Nature article confirms, is that we are close to a tipping point in human history. It is easily conceivable that CRISPR tech can be used to edit the genes of human germ-line cells. We will, in the very near future, be able to alter a baby’s genome, with almost unimaginable consequences.

Is this a line we want to cross? Some, unsurprisingly, find this prospect disturbing. The authors of the Nature paper suggested a moratorium on gene editing of human stem cells until we can work out all of the important practical and ethical issues. Let us slow down, they say, take a deep breath, think things through, and then proceed with caution.

A wonderful idea, but a bit too late, as it turns out. March was so last month. A group of Chinese investigators at the Sun Yat-Sen University in Guangzhou took human stem cells (defective leftovers from a fertility clinic) and used CRISPR/Cas-9 to introduce the β-globin gene. β-globin mutations are responsible for beta thalassemia, which afflicts a significant population of patients.

The paper was published in the April 18 issue of Protein & Cell (a journal I had never heard of before), reportedly after having been rejected by Nature and Science on ethical grounds. It is rather like when Gregor Mendel published his article on the genetics of peas in Proceedings of the Natural History Society of Brünn, only now we have PubMed and the world is a very small place. I suspect Protein & Cell’s impact factor just took a quantum leap upwards.

The paper suggests we are not quite there yet: of the 86 embryos where the authors used CRISPR/Cas-9 to introduce the gene, only 4 “took”, and many had off-target mutational events, not a good thing if you are trying to eliminate a genetic defect. In other words, don’t expect this to be available at your local fertility clinic next week.

But if not next week, then maybe next year, or the year after: this field is moving at light speed, and the Chinese doctors were (or so a recent Science article suggests) using last year’s techniques. Lots of very smart people are piling into the field. This will soon be feasible, then eventually trivial, technology.

And as for a moratorium on gene editing of human stem cells? It might stick for a while, but I am not sanguine about its long-term prospects. I think it is a given that any moratorium will eventually fail.

To answer why this is the case, just look at the history of attempts to limit the use of new technologies:

First, the atomic bomb. In 1945, after the first nuclear explosion at Alamogordo, a group of Manhattan Project scientists, led by Leo Szilard (who famously first thought of the nuclear chain reaction that would occur once one split the uranium atom), petitioned the President to halt the use of the bomb. The petition, dated July 17, 1945, stated “the nation which sets the precedent of using these newly liberated forces of nature for purposes of destruction may have to bear the responsibility of opening the door to an era of devastation on an unimaginable scale."

The powers that be were not amused. The US government had spent two billion 1945 dollars developing the A bomb as a war measure, it faced the likelihood of an invasion of Japan with untold potential casualties, and it had little sympathy for Japanese civilians. It also saw the bomb as a long-term source of political and military power. The niggling objections of the atomic scientists (and by no means all objected) were ignored, and literally within weeks Hiroshima and Nagasaki ushered in the Atomic Age, in all its frightful glory.

That decision tells you that technologies rapidly get out of control of those who create them. In the Atomic Age, one at least needed a well-heeled nation-state to back you if you wanted to build a bomb, a partial barrier (though only partial: impoverished Pakistan, two generations later, is capable of immolating its neighbors). And nation-states, since 1945, have thankfully not used these weapons on other nation-states, though nuclear proliferation sadly continues.

But in the Genome Era, just about any college biology graduate soon will be able to insert genes that eliminate defects or increase function. For practical purposes, Liechtenstein and Monaco could be the biologic equivalent of today’s nuclear powers five years from now. Unless the moratorium is worldwide, all you would need to do would be to fly somewhere that didn’t share the biomedical ethical stance of the Nature authors. And if I knew I carried a deadly genetic defect, I would do anything to save my children from the same fate.

By the way, you might say that comparing the atom bomb to CRISPR/Cas9 is a somewhat ridiculous comparison given the relative significance of the two. And you would be right, though perhaps not in the way you might first think: CRISPR/Cas9 is likely to be far more significant in the long run. A technology that allows a species to intentionally evolve new characteristics is far more important for the history of that species. Gills, anyone? Chlorophyll rather than melanin in your skin? All those pesky vitamins we don’t make ourselves? Edit them in.

The somewhat more pertinent analogy, and one commented on by many, is the Asilomar conference. After Cohen and Boyer performed the first recombinant DNA experiments, there was a similar terror of Dr. Frankenstein experiments by mad scientists. The city fathers of Cambridge, Massachusetts, appropriately frightened by the proximity of Harvard and MIT, passed a law banning the use of recombinant DNA technology within its city limits.

The then-small community of molecular biologists met at the Asilomar conference center (near Monterey) in 1975 and voluntarily developed limits on certain types of genetic experiments until their safety could be determined. It was a highly moral stance by the leaders of a new biologic revolution, but also a highly practical one, as it decreased public opposition to recombinant DNA technology.

The moratorium turned out to be a brief one (no one, to my knowledge, has ever been killed by recombinant DNA, at least not yet), and with its lifting the biotech industry was born, and we never gave those early qualms a second thought.

I’ve been to Asilomar several times: my Oncology division at Stanford holds its annual scientific retreat there. It is a lovely state park on the Pacific coast, and a great place to hold a conference: watching the sunset over the ocean at Asilomar is an awe-inspiring experience.

But Asilomar is just not the right model for what is happening today. Molecular biology is ubiquitous, a global enterprise carried on by tens or hundreds of thousands of scientists, not the small handful in the 1970s. A few academic scientists no longer drive it; big pharma and biotech call the shots, and can be expected to remain highly ethical just so long as no obscene profits can be made from a new technologic development.

Jennifer Doudna has suggested that we need an Asilomar equivalent for CRISPR/Cas9 gene editing of embryos, and indeed there has already been a preliminary meeting of scientists, lawyers, and bioethicists in Napa Valley’s Carneros Inn earlier this year. By the way, the Carneros Inn is even nicer than Asilomar: one should always hold scientific retreats at great resorts in wine country. It greatly improves the meeting outputs.

The Asilomar scientists had what were, in essence, short-term concerns: will recombinant DNA, let loose on the world, be the scientific equivalent of the Four Horsemen of the Apocalypse? Well, no, and we knew the answer quickly.

But CRISPR-Cas9 stem cell germ-line editing, once the technical wrinkles are worked out, is a technology whose medical and social implications will take generations to play out. The pressure to use it for medical purposes will be enormous. Edit out or fix a gene that causes some dreadful neurodegenerative disease (a Huntington’s chorea or its equivalent) and no one will notice the difference for forty or fifty years. These diseases will go away, and who will miss them? And who among my great-grandchildren will even care, it having been something they have always lived with?

Perhaps (one already knows the objections) we should not assign God-like powers over creation to ourselves, but how long will that dike hold when a Senator’s or a billionaire’s or a dictator’s misbegotten embryo needs genomic resuscitation?

And edit in something that makes one smarter or faster or—dare I say—cuter? Cosmetic editing will be popular the moment we figure out how to do it. Pretty much the first law of the consumer electronics industry is that every new technical advance (viz: VCR, CD-ROM, streaming video) is used almost immediately for pornography. I can only imagine what will happen with gene editing.

I simply do not trust us not to use CRISPR/Cas-9 germ-line editing. There is a certain technologic imperialism that renders it inevitable. We always want to play with the cool new toys, and this one will be really, really easy to play with. What will my descendants look like? Probably not like me. And there are those who would say that is a good thing.

[This overview of Genome Editing is not the latest - meanwhile Drs. Doudna and Charpentier received the $3M Breakthrough Prize for their pioneering work, and a couple of days ago a third contributor (Dr. Zhang) was prominently featured - according to some, somewhat myopically so - by a "history overview". We do not get into the issue of the personalities behind such reviews. Geography, yes; the recent review shows that even tiny Lithuania edged into postmodern genomics - and the Global Map of Economy is certain to change:

The Twenty-Year Story [as interpreted by E.L.] of CRISPR Unfolded across Twelve Cities in Nine Countries. For each "chapter" in the CRISPR "story," the map shows the sites where the primary work occurred and the first submission dates of the papers. Green circles refer to the early discovery of the CRISPR system and its function; red to the genetic, molecular biological, and biochemical characterization; and blue to the final step of biological engineering to enable genome editing.

[Back to the Stanford review with the "cute" title yet pondering utterly serious global issues, the historical comparison of the impact of "nuclear science and technology" is particularly worth considering. When the atom, that was axiomatically not supposed to split, did split, scientists were flabbergasted for some time. Likewise, when the human DNA was fully sequenced, scientists were flabbergasted by the meager number of "genes", followed by the even more staggering realization next year that the mouse has not only essentially the same number, but practically the same genes as we do. Although the utility of FractoGene ("Fractal DNA grows fractal organisms") was submitted to the US Patent Office 2002 and "Fractal Defects" were revealed by 2007 (the last CIP-date of the patent filing), old school genomists were staring at Fractal Defects with glazed eyes "So what? Can we do anything about them?" For nuclear science and technology the scale of interest catapulted when the very practial benefit was realized (colossal energy released either by nuclear fission or fusion). The Stanford review, with "cute" title, masks the similarly profound global implications of Genome Editing. Exploration of the horizon of what this science and technology may mean for homo sapiens glosses beyond imminent practial opportunities. Let us take another example of an explosion of technology; the Internet. We all know that it started as a small scale utility of computer system administrators to email along the massively connected net. The technology truly took off when private industry discovered the immense profit-making ability by e.g. Amazon, eBay (etc). (Amazon is today the World's largest "store" without a single "brick and mortar store" at all). Genome Editing will not take off in the distant future by making us "cuter". Rather, small countries (e.g. Denmark, Lithuania, etc) may invent extremely lucrative ways to turn genome editing (which is definitely not GMO) into enormous profit. 
(Back to the Internet: Skype, developed in Estonia by two students, yielded the biggest investment return ever, while the HQ and the core of its developers are still in Tallinn, Estonia.) It is just guesswork at this moment which twists will catapult which country into the lead, e.g. by a combination of Mining Fractal Defects and the use of Genome Editing to elegantly get rid of them. True, some people do not like to live by metaphors. We cannot resist providing the visual metaphor that "getting rid of inclusions in diamonds" is already a very profitable business. Of course, inclusions in diamonds are visible - while one has to use FractoGene to find Fractal Defects in the much murkier DNA - andras_at_pellionisz_dot_com]



The Telegraph

Jan. 22.

Madhumita Murgia

Chinese scientists create 'designer dogs' by genetic engineering

Two beagles created using the CRISPR technology were customised to be born with double the muscle mass of a typical dog

Dogs were given double muscle by deleting a single gene called myostatin.

[Note that Genome Editing is totally different from GMO. Genome Editing does not introduce foreign DNA sequence (as someone might alter an existing text with foreign thoughts). Genome Editing can "fix the spelling" (as a word-processor spell-checker does), or, as in this case, take away (not add) a snippet from an existing DNA - AJP]
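The spell-checker analogy above can be sketched in a few lines of code. This is a toy illustration only (the sequences are made up, and real editing involves cellular repair machinery, not string operations), but it makes the distinction concrete: a deletion edit leaves only letters that were already in the genome, while a GMO-style change splices in sequence that was never there.

```python
# Toy contrast: genome editing as deletion of an existing snippet,
# versus GMO-style insertion of foreign DNA. Sequences are invented.

def delete_snippet(genome: str, snippet: str) -> str:
    """Remove every occurrence of an existing snippet (editing-style change)."""
    return genome.replace(snippet, "")

def insert_foreign(genome: str, site: int, foreign: str) -> str:
    """Splice in DNA that was never part of the genome (GMO-style change)."""
    return genome[:site] + foreign + genome[site:]

native = "ATGGCCTTAGGCTTAACG"
edited = delete_snippet(native, "TTAGGC")     # only native letters remain
gmo    = insert_foreign(native, 6, "GAATTC")  # introduces a foreign sequence

assert edited == "ATGGCCTTAACG"
assert set(edited) <= set(native)  # the edit added nothing new
```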

Belgian Blue cattle (a bull pictured) naturally lack the myostatin gene and hence are heavily muscled

[Note that the bull above naturally lacks the myostatin gene, probably as a result of selection by human breeders over many generations of cattle. Copying such an "invention of nature" in other livestock could yield massive economic benefits to agriculture and animal husbandry - AJP]

You've heard of designer babies in science fiction, but it's getting closer to reality: scientists in China claim they are the first to use gene editing to create "designer dogs" with special characteristics.

Two beagle puppies, called Tiangou and Hercules, were created to be extra muscly - with double the muscle mass of a typical dog - by deleting a single gene called myostatin.

The team from the Guangzhou Institutes of Biomedicine and Health reported their results last week in the Journal of Molecular Cell Biology, saying the goal was to create dogs with other DNA mutations, including ones that mimic human diseases such as Parkinson’s and muscular dystrophy, so human treatments could be tested on them.

The muscle-enhanced beagles Tiangou and Hercules were created using a gene-editing technology called CRISPR-Cas9 - a sort of cut-and-paste tool for DNA that allows you to design living creatures the way you want on a computer, and then actually create them.

A natural genetic disorder, caused by the myostatin gene being knocked out, leads to exceptionally muscled whippets.

"It’s the one of the most precise and efficient ways we have of editing DNA in any cell, including humans," said Professor George Church of Harvard University, who is a pioneer in the field of genetic engineering.

It works by digitally designing a piece of nucleic acid that recognises a single place in your genome, and then allows cutting and editing at that point.
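The "recognises a single place in your genome" step can be sketched as a string search. The code below is a simplified, hypothetical illustration (the genome and guide strings are invented): it scans for a 20-nucleotide protospacer immediately followed by an NGG PAM, the motif required by the commonly used SpCas9 enzyme. Real design tools also search the reverse strand and score near-matches to flag off-target sites.

```python
# Minimal sketch of guide-RNA target recognition: find positions where a
# 20-nt guide sequence matches the genome and is followed by an NGG PAM.
import re

def find_target(genome: str, guide: str) -> list[int]:
    """Return start positions where the guide matches and is followed by NGG."""
    hits = []
    for i in range(len(genome) - len(guide) - 2):
        if genome[i:i + len(guide)] == guide:
            pam = genome[i + len(guide): i + len(guide) + 3]
            if re.fullmatch(r"[ACGT]GG", pam):  # N-G-G
                hits.append(i)
    return hits

guide  = "GATCGGATCCATGCAATTGC"            # 20 nt, invented
genome = "TTACG" + guide + "CGG" + "TTT"   # guide followed by a CGG PAM
print(find_target(genome, guide))          # one cut site, at position 5
```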

According to the MIT Technology Review and the published paper, Chinese researchers inserted this DNA-modifying tool into more than 60 dog embryos and cut out the myostatin gene which blocks muscle production, so that the beagles’ bodies would produce extra muscle.

Of the 27 puppies born, only two of the dogs, Tiangou and Hercules, had both copies of the gene disrupted which should have led to physical changes.

Tiangou, the female beagle, showed obvious physical changes compared to other puppies, while Hercules was still producing some myostatin and was less muscled.

Only a few weeks previously, the Beijing Genomics Institute said it had created designer 'micropigs' that will be sold for $1600 as pets.

Since the technique is relatively simple, many believe humans could be next. In April another Chinese team reported altering human embryos in the laboratory, to try curing beta-thalassemia through gene editing.

"We have already modified embryos of both pigs and primates," Professor Church told the Telegraph. "It might actually be safer, and developmentally important to make corrections in a sperm or embryo, rather than a young child or an adult."

For instance, he said, gene editing can be used to correct some forms of blindness, but it has to be done on babies, or young children, before their neurons become solidified and more resistant to change in adulthood.

But because the technology is so new, the long-term effects are still unclear. "There has to be extensive testing on animals and human adults first," Professor Church said.

Gene edited pigs may soon become organ donors

Denmark News

January 16, 2016

George Church (Harvard) - a pioneer of Genome Editing

WASHINGTON: US scientists are closing in on their bid to create designer pigs through new gene-editing techniques to source hearts, livers and kidneys suitable for transplant into seriously ill people.

After two key gene changes, the scientists say they have cleared the path to the lifesaving transplants.

In a paper published in Science journal, they describe using the CRISPR editing method in pig cells to destroy DNA sequences at 62 sites in the animal's genome that could be potentially harmful to a human recipient. Previous efforts with the technology have only managed to cut away six areas of the genome at one go.
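The multiplex idea behind those 62 simultaneous edits can be sketched simply: because the viral remnant is a repeated sequence, one guide matching the shared motif lets a single reagent disrupt every copy at once. The snippet below is an invented illustration (the motif and genome are made up; the real work targeted a catalytic site of the PERV pol gene across its genomic copies).

```python
# Rough sketch of multiplex editing: one shared motif, many copies,
# all disrupted by the same targeting sequence. Sequences are invented.

REMNANT = "AGGTCAGT"  # stand-in for a shared retroviral motif

def count_copies(genome: str) -> int:
    """Count intact copies of the remnant motif."""
    return genome.count(REMNANT)

def disrupt_all(genome: str) -> str:
    """Model a cut-and-repair event that scrambles every copy of the motif."""
    return genome.replace(REMNANT, "AGGT--GT")  # frame-breaking lesion

genome = "CCC" + REMNANT + "TTT" + REMNANT + "AAA" + REMNANT
print(count_copies(genome))               # 3 intact copies found
print(count_copies(disrupt_all(genome)))  # 0 intact copies remain
```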

The latest result is the most extreme example to date of the selective trimming of unwanted parts of the genome possible through CRISPR.

The latest study, led by Dr. George Church, a geneticist at Harvard Medical School, has shown that it is feasible to drastically edit the genome of pigs to remove native porcine endogenous retrovirus (PERV), which has been shown to move from pig to human cells in a dish, and to infect human cells transplanted into mice with weak immune systems.

The report states that the pig DNA is riddled with many copies of a DNA sequence that is the remnant of a virus and can still produce infectious viral particles.

Church, who first presented his study at a workshop at the National Academy of Sciences on October 5, strongly believes the technology will one day make it possible for pig organs to be used as a substitute for human organs for patients in need of a transplant and for whom there are no suitable donor organs.

The wait for suitable donor organs is considerable. In the US alone about 122,500 people are waiting for a life-saving organ transplant, and some have argued that a steady supply of pig organs could make up the shortage, because they are similar in size to those of people.

But so far, no one has been able to get around the violent immune response that pig cells provoke.

Working towards a breakthrough, Church has co-founded a biotechnology company 'eGenesis' to produce pigs for organ transplants.

Pig-to-human transplants are not novel. Currently, pig heart valves that have been scrubbed and depleted of pig cells are commonly used to repair faulty human heart valves.

But whole pig organs, which are functionally similar to human organs, have so far not been used for transplant due to associated risks.

Besides studying the potential risks, Church's team is also looking to address ethical concerns of human genome editing.

The ethical debate has been ignited in the wake of reports that biologists in China are allegedly carrying out the first experiment to alter the DNA of human embryos. British scientists have subsequently asked for permission to edit human embryos.

[Interestingly enough, this article was printed in a journal in Denmark - a small European country with more pigs than people. Next door, Holland has long benefitted from a highly lucrative flower industry by coming up with formerly non-existent beauties (black tulips, etc.). People pay premium prices for such genomic novelties - in the case of Holland, non-edible GMO flowers. Denmark prospers from highly advanced agriculture, e.g. its pork industry. A pound of pork fetches a couple of dollars at the check-out counter. Imagine what a person ANYWHERE would pay for an organ transplant to save his or her life (consider how much Steve Jobs gladly paid for a liver transplant); a wild guess is 1,000x or even a million-x per pound. Would a person think twice if the "replacement organ" were porcine? Most likely, yes. However, after having learnt the scientific background and the alternatives, after careful contemplation he or she may opt for it. This article is closely connected to an earlier Nature publication (see below), mentioning that "Novartis initially planned to spend more than $1 billion on xenotransplantation". One can tell that this issue has enormous potential for health care, the science of genome analytics, and the global economy. - AJP]

New life for pig-to-human transplants

Nature 527, 152–154 (12 November 2015) doi:10.1038/527152a

Sara Reardon

10 November 2015

Gene-editing technologies have breathed life into the languishing field of xenotransplantation.

Pale on its bed of crushed ice, the lung looks like offal from a butcher’s counter. Just six hours ago, surgeons at the University of Maryland’s medical school in Baltimore removed it from a hefty adult pig and, with any luck, it will soon be coaxed back to life, turning a rich red and resuming its work in the chest of a six-year-old baboon.

An assistant brings the lung to Lars Burdorf and his fellow surgeons, who currently have their hands in the baboon’s splayed chest. The team then begins the painstaking process of connecting the organ to the baboon’s windpipe and stitching together the appropriate arteries and blood vessels. But this 5-hour, US$50,000 operation is just one data point in a much longer experiment — one that involves dozens of labs and decades of immunological research and genetic engineering to produce a steady and safe source of organs for human transplantation. If the baboon’s immune system tolerates this replacement lung, it will be a sign that the team is on the right track.

Robin Pierson heads the Maryland lab, which has performed about 50 pig-to-primate transplants like this one to test different combinations of genetic modifications in the pig and immune-suppressing drugs in the primate. Even so, the team has not had a primate survive for longer than a few days. The complexities of the immune system and the possibility of infection by pig viruses are formidable and drove large companies out of the field in the early 2000s.

That trend may now be reversing, thanks to improved immunosuppressant drugs and advances in genome-editing technologies such as CRISPR/Cas9. These techniques allow scientists to edit pig genes, which could cause rejection or infection, much more quickly and accurately than has been possible in the past. In October, eGenesis, a life-sciences company in Boston, Massachusetts, announced that it had edited the pig genome in 62 places at once.

Some researchers now expect to see human trials with solid organs such as kidneys from genetically modified pigs within the next few years (see ‘Choice cuts’). United Therapeutics, a biotechnology company in Silver Spring, Maryland, has spent $100 million in the past year to speed up the process of making transgenic pigs for lung transplants — the first major industry investment in more than a decade. It says that it wants pig lungs in clinical trials by 2020. But others think that the timeline is unrealistic, not least because regulators are uneasy about safety and the risk of pig organs transmitting diseases to immunosuppressed humans.

“I think we’re getting closer, in terms of science,” says transplant surgeon Jeremy Chapman of the University of Sydney’s Westmead Hospital in Australia. “But I’m not yet convinced we’ve surpassed all the critical issues that are ahead of us. Xenotransplantation has had a long enduring reality that every time we knock down a barrier, there’s another one just a few steps on.”

Long history

Surgeons have been attempting to put baboon and chimpanzee kidneys into humans since at least the 1960s. They had little success — patients died within a few months, usually because the immune system attacked and rejected the organ. But the idea of xenotransplantation persisted. It could, proponents say, help to save the lives of the tens of thousands of people around the world who die each year while waiting for a suitable human donor. And having a steady supply of farm-grown organs would allow doctors to place recipients on immunosuppressant drugs days ahead of surgery, which should improve survival rates.

When details about why non-human organs are rejected began to emerge in the 1990s, the transplantation field was ready to listen. In 1993, surgeon David Cooper of the University of Pittsburgh in Pennsylvania and his colleagues discovered that most of the human immune reaction was directed at a single pig antigen: a sugar molecule called α-1,3-galactose, or α-gal, on cell surfaces that can cause organ rejection within minutes [1]. An enzyme called α-1,3-galactosyltransferase is necessary for producing this sugar, and knocking out the gene that produces the enzyme should temper the reaction.

This discovery and other advances in transplantation medicine made the problem seem more tractable to big pharmaceutical companies. In 1996, Novartis in Basel, Switzerland, began to invest heavily in xenotransplantation research, says Geoffrey MacKay, who was the firm’s business director for transplants and immunology at the time and oversaw the xenotransplantation effort. “They wanted to not only put a dent into the organ shortage but really solve it via transgenic pigs.” MacKay is currently interim chief executive at eGenesis.

Novartis initially planned to spend more than $1 billion on xenotransplantation, including both scientific research and planning the infrastructure that would be needed to grow pigs in germ-free facilities around the world. Other companies put some skin in the game, including Boston-based Genzyme and PPL Therapeutics, the British company that collaborated in the creation of Dolly, the first cloned sheep. Regulators such as the US Food and Drug Administration (FDA) began to draw up the guidance and standards that companies would need to meet before the technology could be moved into people.

But the immune system turned out to be much more complex than anticipated, and baboons that received pig organs never survived longer than a few weeks, even when the researchers were able to suppress α-gal production with drugs. A second major concern, especially to regulators, was the risk of infection. Even if pigs could be kept entirely sterile, the pig genome is sprinkled with dozens of dormant porcine endogenous retroviruses (PERVs), and studies conflicted as to whether these could become active in humans.

The challenges proved too daunting, and in the early 2000s Novartis killed its xenotransplantation programme, reshuffling or laying off its researchers. Other companies followed suit. It became, Pierson says, “the third rail of biotech to discuss xenotransplantation as a business plan”.

For the next ten years, the business side of the field went dark, at least as far as solid-organ transplants were concerned. Meanwhile, a few research teams and start-up companies began pursuing pig tissue transplants: a much simpler goal than using solid organs because the immune response is not as severe. In April, Chinese regulators approved the use of pig corneas from which all the cells have been removed [2]. Also on the near horizon are pig insulin-producing islet cells that might be transplanted into people with diabetes.

The first commercially available islets are likely to come from technology designed by Living Cell Technologies (LCT), a biotech company based in Auckland, New Zealand, that has developed a process to encapsulate pig islet cells in a gelatinous ‘dewdrop’ that protects them from the human immune system. The product, called DIABECELL, is currently in late-stage clinical trials in several countries. Patients implanted with the cells have survived more than nine years without evidence of immune rejection or infection [3].

“I think people are coming around to look at xenotransplantation in a more-favourable light knowing that we have strong safety data,” says LCT research lead Jackie Lee. Diatranz Otsuka Limited, in Auckland, is now running the programme.

Solid organs still pose a challenge. The handful of researchers who have continued to work with them have solved some of the problems that vexed Novartis, such as identifying other key pig antigens and the correct combinations of immunosuppressant drugs. But different organs have different problems: kidneys may be safer than hearts, for instance. Lungs, as Pierson’s team has discovered, are extremely difficult to transplant, because they have extensive networks of blood vessels, which provide more opportunities for primate blood to meet pig proteins and to coagulate. Pierson’s current trials use lungs from an α-gal-knockout pig that includes five human genes. The baboon is treated with a combination of four immunosuppressant drugs.

Most US researchers, including Pierson and Cooper, have relied on pigs made by a regenerative-medicine company called Revivicor in Blacksburg, Virginia, that spun out from PPL Therapeutics. In 2003, Revivicor co-founder David Ayares and his colleagues created the first cloned pig genetically modified to delete α-gal [4]. The company has since been experimenting with altering other protein antigens that trigger the immune system or cause human blood to coagulate.

These modifications have greatly lengthened the time that an organ can survive in a baboon. In one trial, surgeon Muhammad Mohiuddin at the National Heart, Lung, and Blood Institute in Bethesda, Maryland, and his colleagues took the heart from an α-gal-free pig that had two human genes that protect from coagulation and sewed it into the abdomen of a baboon [5]. The organ did not replace the baboon’s heart, but the animal lived with the implant for two and a half years.

Mohiuddin says that the group is now attempting a ‘life-supporting’ transplant by replacing the baboon’s heart with a pig heart. The longest life-supporting transplant was published in June [6], when Cooper’s group announced that a kidney transplant from a Revivicor pig with six modified genes supported a baboon for 136 days.

Generation game

But the process is slow, Cooper says. It generally takes several generations of breeding to knock out both copies of just one given gene in a pig. Deleting multiple genes or swapping them for their human counterparts takes many more generations, because every litter contains pigs with different combinations of the modified genes.
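The back-of-envelope arithmetic behind that slowness is worth making explicit. Assuming unlinked genes and simple Mendelian segregation (an assumption; real breeding programs face linkage, litter sizes, and generation times on top of this), crossing two animals heterozygous for each edit yields an all-homozygous knockout with probability (1/4)^n for n genes, so the expected number of piglets needed grows exponentially:

```python
# Sketch of why conventional breeding scales badly: a heterozygote x
# heterozygote cross gives a homozygous knockout at one gene with
# probability 1/4, and (1/4)**n for n independent genes.

def expected_piglets(n_genes: int) -> float:
    """Expected piglets examined before one all-homozygous animal appears."""
    p = 0.25 ** n_genes
    return 1 / p

for n in (1, 2, 3, 5):
    print(n, expected_piglets(n))
# 1 gene  ->    4 piglets on average
# 2 genes ->   16
# 3 genes ->   64
# 5 genes -> 1024
```

CRISPR sidesteps this entirely by cutting both copies of every targeted gene in a single embryo, which is why a multi-gene pig can now be made in one generation.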

That is why so many are excited about precise genome-editing tools such as CRISPR/Cas9, which can precisely cut both copies of a gene - or genes - straight from a pig embryo in one go. “Our first [α-]gal-knockout pig took three full years,” says Joseph Tector, a transplant surgeon at Indiana University in Indianapolis. “Now we can make a new pig from scratch in 150 days.” His group recently used CRISPR to knock out two pig genes simultaneously [7]. The researchers are now beginning to transplant CRISPR-modified pig organs into macaques, one of which has survived for more than three months.

Eventually, gene editing might even eliminate the need for immunosuppression, says Bernhard Hering, a transplant surgeon at the University of Minnesota in Minneapolis. His group is using CRISPR to create pig islets that could be transplanted without the need for drugs. Partly because of LCT’s success with encapsulated islets, many are hopeful that islet cells will be the first genetically modified tissue to make it into clinical trials, paving the regulatory pathway for the more-difficult organs. A non-profit organization has built a germ-free facility in which to raise Hering’s pigs.

Technology revival

The gene-editing advances have brought new investment into the field. In 2011, United Therapeutics acquired Revivicor for about $8 million and announced an ambitious plan to start clinical trials of gene-edited pig lungs by the end of the decade. The company’s co-chief executive, Martine Rothblatt, secured land in North Carolina for a farm that could produce 1,000 pig organs per year and says she expects to break ground by 2017. The facility’s elaborate plans include solar panels and helicopter landing pads to help speed fresh organs to those in need.

In 2014, United Therapeutics formed a $50-million partnership with the biotech firm Synthetic Genomics (SGI) in La Jolla, California, founded by genome-sequencing pioneer Craig Venter. Rather than simply knocking out antigens, SGI is also engineering tissues that sidestep rejection in a different way — such as pig cells that produce surface receptors that act as ‘molecular sponges’ and sop up human immune signalling factors that would otherwise attack the organ. CRISPR and other methods also allow the researchers to make tweaks such as lowering a gene’s expression rather than deleting it completely, says Sean Stevens, head of SGI’s mammalian synthetic-biology group. In September, United Therapeutics committed another $50 million.

Peter Cowan, an immunologist at St Vincent’s Hospital in Melbourne, Australia, is taking a different approach. His group has made pigs that generate antibodies against human immune cells. In their design, the antibodies would be made only by transplanted liver cells, ensuring that the immune system is suppressed just around the organ.

eGenesis was founded in April by bioengineer Luhan Yang and geneticist George Church of the Wyss Institute and Harvard University in Cambridge, Massachusetts. MacKay says that the firm plans to begin transplanting organs into primates next year. To that end, Church says that the company has made embryos that have more than 20 genetic alterations to cell-surface antigens and other factors and is ready to implant the embryos into female pigs. One of its first publications used CRISPR to inactivate 62 occurrences of PERV genes in pig kidney cells [8]. The researchers have since transferred the cells’ nuclei into pig embryos.

Incidentally, few researchers in the field see the PERV problem as a major safety concern. The virus replicates poorly in human tissues and the risk of spreading it is virtually non-existent, says Jay Fishman, an infectious-disease specialist at Massachusetts General Hospital in Boston. He says that researchers have tracked dozens of people who received unregulated porcine skin grafts, and none seems to have developed disease.

But dealing with PERVs may be a regulatory necessity. The FDA said in an e-mail to Nature that it is still concerned about the possibility of disease caused by PERVs. There are other pathogens to worry about, too. Most major epidemics start with an animal pathogen that jumps to humans, warns Peter Collignon, an infectious-disease scientist at the Australian National University in Canberra. “If you want to do the perfect experiment for finding new novel viruses and letting them multiply, this is it.”

Unless xenotransplants are proved to be extremely safe, the FDA suggests that they be limited to people with life-threatening conditions who have no other options. It will be even harder to get organs from genetically modified pigs to market, the agency says, because regulators must approve both the genetic construct used to make the animal and the organ itself.

Even if safety can be assured, questions remain about whether pig organs would work correctly in their new home, Chapman says. It is unclear whether a pig kidney would, for instance, respond to the human hormones that regulate urination, or whether proteins produced by a pig liver would interact correctly with human systems. And because pigs live for only about ten years, their organs might not survive a human lifetime. Even using a xenotransplant as a ‘bridge’ until a suitable human donor is found will be difficult. After a heart transplant, for instance, fibrous tissue forms around the new organ, making second transplants very difficult, Chapman says.

Given the long list of known hurdles, the surprise setbacks that researchers encounter along the way can be particularly disheartening. About half an hour after its surgery at the University of Maryland, the baboon with a pig’s lung woke up in a cage wearing a small vest that monitored its vital signs. The lung functioned well overnight and was even able to provide enough oxygen to the animal when blood flow to its other lung was temporarily blocked. But the next day, the animal became ill and had to be killed. That was unexpected, Pierson says, because the pig’s multiple genetic modifications seem to have worked well with the baboon’s immune system. A post-mortem examination revealed that fluid had accumulated in the lung and the organ had developed blood clots. Like so many other aspects of xenotransplantation, Pierson says, “this is a problem that we are still learning about”.

Conceptual illustration of a pig farm capable of producing 1,000 organs for transplant per year. Centrally located operating theatres would have helipads for shipping fresh organs for transplant.

[For comment, see the connected article above - andras_at_pellionisz_dot_com]

Genome Editing [What is the code that we are editing?]

MIT Technology Review

The Experiment

By Christina Larson

Until recently, Kunming, capital of China’s southwestern Yunnan province, was known mostly for its palm trees, its blue skies, its laid-back vibe, and a steady stream of foreign backpackers bound for nearby mountains and scenic gorges. But Kunming’s reputation as a provincial backwater is rapidly changing. On a plot of land on the outskirts of the city—wilderness 10 years ago, and today home to a genomic research facility—scientists have performed a provocative experiment. They have created a pair of macaque monkeys with precise genetic mutations.

Last November, the female monkey twins, Mingming and Lingling, were born here on the sprawling research campus of Kunming Biomedical International and its affiliated Yunnan Key Laboratory of Primate Biomedical Research. The macaques had been conceived via in vitro fertilization. Then scientists used a new method of DNA engineering known as CRISPR to modify the fertilized eggs by editing three different genes, and the embryos were implanted into a surrogate macaque mother. The twins’ healthy birth marked the first time that CRISPR has been used to make targeted genetic modifications in primates - potentially heralding a new era of biomedicine in which complex diseases can be modeled and studied in monkeys.

CRISPR, which was developed by researchers at the University of California, Berkeley, Harvard, MIT, and elsewhere over the last several years, is already transforming how scientists think about genetic engineering, because it allows them to make changes to the genome precisely and relatively easily (see “Genome Surgery,” March/April). The goal of the experiment at Kunming is to confirm that the technology can create primates with multiple mutations, explains Weizhi Ji, one of the architects of the experiment.

Ji began his career at the government-affiliated Kunming Institute of Zoology in 1982, focusing on primate reproduction. China was “a very poor country” back then, he recalls. “We did not have enough funding for research. We just did very simple work, such as studying how to improve primate nutrition.” China’s science ambitions have since changed dramatically. The campus in Kunming boasts extensive housing for monkeys: 75 covered homes, sheltering more than 4,000 primates—many of them energetically swinging on hanging ladders and scampering up and down wire mesh walls. Sixty trained animal keepers in blue scrubs tend to them full time.

The lab where the experiment was performed includes microinjection systems, which are microscopes pointed at a petri dish and two precision needles, controlled by levers and dials. These are used both for injecting sperm into eggs and for the gene editing, which uses “guide” RNAs that direct a DNA-cutting enzyme to genes. When I visited, a young lab technician was intently focused on twisting dials to line up sperm with an egg. Injecting each sperm takes only a few seconds. About nine hours later, when an embryo is still in the one-cell stage, a technician will use the same machine to inject it with the CRISPR molecular components; again, the procedure takes just a few seconds.

During my visit in late February, the twin macaques were still only a few months old and lived in incubators, monitored closely by lab staff. Indeed, Ji and his coworkers plan to continue to closely watch the monkeys to detect any consequences of the pioneering genetic modifications.

The Impact

By Amanda Schaffer

The new genome-editing tool called CRISPR, which researchers in China used to genetically modify monkeys, is a precise and relatively easy way to alter DNA at specific locations on chromosomes. In early 2013, U.S. scientists showed it could be used to genetically engineer any type of animal cells, including human ones, in a petri dish. But the Chinese researchers were the first to demonstrate that this approach can be used in primates to create offspring with specific genetic alterations.

“The idea that we can modify primates easily with this technology is powerful,” says Jennifer Doudna, a professor of molecular and cell biology at the University of California, Berkeley, and a developer of CRISPR. The creation of primates with intentional gene alterations could lead to powerful new ways to study complex human diseases. It also poses new ethical dilemmas. From a technical perspective, the Chinese primate research suggests that scientists could probably alter fertilized human eggs with CRISPR; if monkeys are any guide, such eggs could grow to be genetically modified babies. But “whether that would be a good idea is a much harder question,” says Doudna.

The prospect of designer babies remains remote and far from the minds of most researchers developing CRISPR. Far more imminent are the potential opportunities to create animals with mutations linked to human disorders. Experimenting with primates is expensive and can raise concerns about animal welfare, says Doudna. But the demonstration that CRISPR works in monkeys has gotten “a lot of people thinking about cases where primate models may be important.”

At the top of that list is the study of brain disorders. Robert Desimone, director of MIT’s McGovern Institute for Brain Research, says that there is “quite a bit of interest” in using CRISPR to generate monkey models of diseases like autism, schizophrenia, Alzheimer’s disease, and bipolar disorder. These disorders are difficult to study in mice and other rodents; not only do the affected behaviors differ substantially between these animals and humans, but the neural circuits involved in the disorders can be different. Many experimental psychiatric drugs that appeared to work well in mice have not proved successful in human trials. As a result of such failures, many pharmaceutical companies have scaled back or abandoned their efforts to develop treatments.

Primate models could be especially helpful to researchers trying to make sense of the growing number of mutations that genetic studies have linked to brain disorders. The significance of a specific genetic variant is often unclear; it could be a cause of a disorder, or it could just be indirectly associated with the disease. CRISPR could help researchers tease out the mutations that actually cause the disorders: they would be able to systematically introduce the suspected genetic variants into monkeys and observe the results. CRISPR is also useful because it allows scientists to create animals with different combinations of mutations, in order to assess which ones—or which combinations of them—matter most in causing disease. This complex level of manipulation is nearly impossible with other methods.

Guoping Feng, a professor of neuroscience at MIT, and Feng Zhang, a colleague at the Broad Institute and McGovern Brain Institute who showed that CRISPR could be used to modify the genomes of human cells, are working with Chinese researchers to create macaques with a version of autism. They plan to mutate a gene called SHANK3 in fertilized eggs, producing monkeys that can be used to study the basic science of the disorder and test possible drug treatments. (Only a small percentage of people with autism have the SHANK3 mutation, but it is one of the few genetic variants that lead to a high probability of the disorder.)

The Chinese researchers responsible for the birth of the genetically engineered monkeys are still focusing on developing the technology, says Weizhi Ji, who helped lead the effort at the Yunnan Key Laboratory of Primate Biomedical Research in Kunming. However, his group hopes to create monkeys with Parkinson’s, among other brain disorders. The aim would be to look for early signs of the disease and study the mechanisms that allow it to progress.

The most dramatic possibility raised by the primate work, of course, would be using CRISPR to change the genetic makeup of human embryos during in vitro fertilization. But while such manipulation should be technically possible, most scientists do not seem eager to pursue it.

Indeed, the safety concerns would be daunting. When you think about “messing with a single cell that is potentially going to become a living baby,” even small errors or side effects could turn out to have enormous consequences, says Hank Greely, director of the Center for Law and the Biosciences at Stanford. And why even bother? For most diseases with simple genetic causes, it wouldn’t be worthwhile to use CRISPR; it would make more sense for couples to “choose a different embryo that doesn’t have the disease,” he says. This is already possible as part of in vitro fertilization, using a procedure called preimplantation genetic diagnosis.

It’s possible to speculate that parents might wish to alter multiple genes in order to reduce children’s risk, say, of heart disease or diabetes, which have complex genetic components. But for at least the next five to 10 years, that, says Greely, “just strikes me as borderline crazy, borderline implausible.” Many, if not most, of the traits that future parents might hope to alter in their kids may also be too complex or poorly understood to make reasonable targets for intervention. Scientists don’t understand the genetic basis, for instance, of intelligence or other higher-order brain functions—and that is unlikely to change for a long time.

Ji says creating humans with CRISPR-edited genomes is “very possible,” but he concurs that “considering the safety issue, there would still be a long way to go.” In the meantime, his team hopes to use genetically modified monkeys to “establish very efficient animal models for human diseases, to improve human health in the future.”

[2016 hit with full force; the potential of genome editing is both real and colossal. Perhaps the only comparable episode in the history of science and technology came when, after the embarrassing realization that the smallest units of the elements (atoms), which were not supposed to split, did split, the turmoil yielded the staggering realizations that a) unbelievable amounts of energy are released by the fission of large atoms, and even larger amounts can be gained from the fusion of small atoms. Suddenly, a scientific embarrassment turned into a horse-race of superpowers a) to develop the underlying mathematics of nuclear physics (quantum mechanics) and b) to spend Manhattan-Project-sized funds to hone technology that could actually deliver on the promise. With genome editing, we are at the first stage (a) at the moment (the realization of staggering potential). The question, however, is inevitable: "What code are we editing?". Simply put, with very few exceptions aside, those highly skilled in the art of genome editing do not really know the mathematics of the code they edit. To illustrate this point, we invoke the metaphor of the generally held notion that "genes" are like the keys of a piano: each key creates a tone of a certain frequency. An improvement on such a "theory of genome function" advanced lately is that "genes are turned on and off". Thus, piano music is brutally reduced to "turning keys on and off". Chopin probably would not like that crass oversimplification very much. True, half a year ago the metaphor advanced to "The human genome: a complex orchestra". This is better. One still lacks a true understanding of the art by which a music director creates magnificent music from individual instruments. A colossal amount of funds is spent on generating Big Data, and now genome editing is virtually unstoppable in throwing out parts of the genome (particularly of its regulatory system) and replacing pieces with something else that is supposed to be better.
Imagine a nuclear industry (either peaceful or otherwise) rushing ahead without proper mathematical understanding. It could destroy the world as we know it, some could say. Instead of a trickle at best, we need a massive effort towards laying down the mathematical underpinning of genome regulation, ASAP. - andras_at_pellionisz_dot_com]

CRISPR helps heal mice with muscular dystrophy

Science News

By Jocelyn Kaiser 31 December 2015

The red-hot genome editing tool known as CRISPR has scored another achievement: Researchers have used it to treat a severe form of muscular dystrophy in mice. Three groups report today in Science that they wielded CRISPR to snip out part of a defective gene in mice with Duchenne muscular dystrophy (DMD), allowing the animals to make an essential muscle protein. The approach is the first time CRISPR has been successfully delivered throughout the body to treat grown animals with a genetic disease.

DMD, which mainly affects boys, stems from defects in the gene coding for dystrophin, a protein that helps strengthen and protect muscle fibers. Without dystrophin, skeletal and heart muscles degenerate; people with DMD typically end up in a wheelchair, then on a respirator, and die around age 25. The rare disease usually results from missing DNA or other defects in the 79 exons, or stretches of protein-coding DNA, that make up the long dystrophin gene.

Researchers haven’t yet found an effective treatment for the disorder. It has proven difficult to deliver enough muscle-building stem cells into the right tissues to stop the disease. Conventional gene therapy, which uses a virus to carry a good version of a broken gene into cells, can’t replace the full dystrophin gene because it is too large. Some gene therapists are hoping to give people with DMD a “micro” dystrophin gene that would result in a short but working version of the protein and reduce the severity of the disease. Companies have also developed compounds that cause the cell’s DNA-reading machinery to bypass a defective exon in the dystrophin gene and produce a short but functional form of the crucial protein. But these so-called exon-skipping drugs haven’t yet won over regulators because they have side effects and only modestly improved muscle performance in clinical trials.

Now, CRISPR has entered the picture. The technology, which Science dubbed 2015’s Breakthrough of the Year, relies on a strand of RNA to guide an enzyme called Cas9 to a precise spot in the genome, where the enzyme snips the DNA. Cells then repair the gap either by rejoining the broken strands or by using a provided DNA template to create a new sequence. Scientists have already used CRISPR to correct certain genetic disorders in cells taken from animals or people and to treat a liver disease in adult mice. And last year, researchers showed CRISPR could repair flawed dystrophin genes in mouse embryos.
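The targeting step described above (a guide RNA leading Cas9 to a precise spot) can be sketched as a string search: for the commonly used S. pyogenes Cas9, a candidate site is a 20-nt protospacer matching the guide, immediately followed by an "NGG" PAM. A minimal, hypothetical Python sketch (the sequences are invented for illustration, and a real search would also scan the reverse-complement strand):

```python
# Minimal sketch: locate candidate SpCas9 target sites in a DNA string.
# A site is a 20-nt protospacer matching the guide RNA, immediately
# followed by an "NGG" PAM (N = any base). Sequences below are invented.

def find_cas9_sites(dna, guide):
    """Return 0-based start positions of guide matches followed by NGG."""
    sites = []
    for i in range(len(dna) - len(guide) - 2):
        if dna[i:i + len(guide)] == guide:
            pam = dna[i + len(guide): i + len(guide) + 3]
            if len(pam) == 3 and pam[1:] == "GG":   # "NGG" rule
                sites.append(i)
    return sites

guide = "ACGTAGCTAGCTAGGTTACA"              # invented 20-nt guide
dna   = "AAAA" + guide + "TGGCCCC"          # match at 4, PAM "TGG"
print(find_cas9_sites(dna, guide))          # → [4]
```

Cas9 cuts a few bases upstream of the PAM; real guide-design tools additionally score off-target near-matches elsewhere in the genome, which is exactly the concern the studies above checked for.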

But using CRISPR to treat people who already have DMD seemed impractical, because mature muscle cells in adults don’t typically divide and therefore don’t have the necessary DNA repair machinery turned on for adding or correcting genes. CRISPR could, however, be used to snip out a faulty exon so that the cell’s gene reading machinery would make a shortened version of dystrophin—similar to the exon-skipping and microgene approaches.
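The reading-frame logic behind snipping out a faulty exon can be illustrated with a toy calculation: the downstream codons stay readable only if the removed exon's length is a multiple of three. A minimal sketch with invented exon lengths (the real dystrophin gene has 79 exons of varying sizes):

```python
# Toy sketch of the exon-excision rationale: removing an exon preserves
# the reading frame downstream only if the removed length is divisible
# by 3, so later codons are not shifted. Exon lengths are invented.

def stays_in_frame(exon_lengths, skipped_index):
    """True if excising one exon keeps downstream codons in frame."""
    return exon_lengths[skipped_index] % 3 == 0

exons = [120, 96, 151, 78, 63]   # invented lengths in nucleotides
for i, n in enumerate(exons):
    verdict = "in-frame" if stays_in_frame(exons, i) else "frameshift"
    print(f"exon {i} ({n} nt): {verdict}")
```

An out-of-frame excision would garble every codon after the cut, which is why both the exon-skipping drugs and the CRISPR strategies described here target exons whose removal restores, rather than breaks, the frame.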

Now, three teams have done just this in young mice with DMD. Graduate student Chengzu Long and others in Eric Olson’s group at University of Texas Southwestern Medical Center in Dallas used a harmless adeno-associated virus to carry DNA encoding CRISPR’s guide RNA and Cas9 into the mice’s muscle cells and cut out the faulty exon. In the treated mice, which had CRISPR-ferrying viruses injected directly into muscles or into their bloodstream, heart and skeletal muscle cells made a truncated form of dystrophin, and the rodents performed better on tests of muscle strength than untreated DMD mice. Teams led by biomedical engineer Charles Gersbach of Duke University in Durham, North Carolina, and Harvard stem cell researcher Amy Wagers, both collaborating with CRISPR pioneer Feng Zhang of Harvard and the Broad Institute in Cambridge, Massachusetts, report similar results. CRISPR’s accuracy was also reassuring. None of the teams found much evidence of off-target effects—unintended and potentially harmful cuts in other parts of the genome.

The Wagers team also showed that the dystrophin gene was repaired in muscle stem cells, which replenish mature muscle tissue. That is “very important,” Wagers says, because the therapeutic effects of CRISPR may otherwise fade, as mature muscle cells degrade over time.

The treatment wasn’t a cure: The mice receiving CRISPR didn’t do as well on muscle tests as normal mice. However, “there’s a ton of room for optimization of these approaches,” Gersbach says. And as many as 80% of people with DMD could benefit from having a faulty exon removed, Olson notes. However, he adds, researchers are years away from clinical trials. His group now plans to show CRISPR performs equally well in mice with other dystrophin gene mutations found in people, then establish that the strategy is safe and effective in larger animals.

Other muscular dystrophy researchers are encouraged. “Collectively the approach looks very promising for clinical translation,” says Jerry Mendell of Nationwide Children’s Hospital in Columbus. Adds Ronald Cohn of the Hospital for Sick Children in Toronto, Canada: “The question we all had is whether CRISPR gene editing can occur in vivo in skeletal muscle.” The new studies, he says, are “an incredibly exciting step forward.”

[Genome editing is likely to become the most promising revolutionary methodology to really cure diseases caused by a single nucleotide polymorphism (one letter of A, C, T, G) that changes a normally protein-coding codon into a stop codon, thereby producing a "truncated" protein. There are thousands of such diseases. With DMD the problem is that, in addition to single point mutations of the DNA, non-coding RNAs have also been implicated (thus, it is listed as a "Junk DNA disease"). Genome editing is presently in its infancy, focusing on animal models (in this case, mice). Further, non-coding DNA and non-coding RNA, along with other "fractal defects", have not yet been replaced by "spell-checked" sequence snippets, to the knowledge of the FractoGene inventor. One must be careful in assessing the integrity of "protein-coding gene"(s), as it is becoming evident (see the publication on "microexons") that "genes" are found fractured in the old school (fractal in the new school) - Andras_at_Pellionisz_dot_com]

Credit for CRISPR: A Conversation with George Church

The Scientist, Dec 29, 2015

George Church ["The Edison of Genomics - AJP"]

The media frenzy over the gene-editing technique highlights shortcomings in how journalists and award committees portray contributions to scientific discoveries.

Jennifer Doudna, Emmanuelle Charpentier, and Feng Zhang are widely cited as the primary developers of CRISPR/Cas9 technology. These researchers were undoubtedly key to the development of the bacterial immune defense system into a powerful and accessible gene-editing tool, but by assigning credit to just three individuals, most news reports overlook the contributions of countless other scientists, including George Church, who alerted The Scientist to this issue after reading an article on December’s Human Gene Editing Summit.

In the article, my colleague Jef Akst highlighted Doudna, Charpentier, and Zhang as the three seminal figures in the development of CRISPR/Cas9 technology: “The attendees are a veritable who’s who of genome editing: Jennifer Doudna of the University of California, Berkeley, Emmanuelle Charpentier of Max Planck Institute for Infection Biology, and Feng Zhang of the Broad Institute of MIT and Harvard—the three discoverers of the CRISPR-Cas9 system’s utility in gene editing—plus dozens of other big names in genome science,” Akst wrote. In assigning the lion’s share of credit for CRISPR/Cas9 gene editing to Doudna, Charpentier, and Zhang, Akst echoed countless articles on the technology’s origin story.

“I’m trying not to complain,” Church told me when we chatted a few days later. “I’m just making what I thought was a little technical correction, which was the particular way she phrased it.” His point? He and many other scientists also contributed to developing the “CRISPR-Cas9 system’s utility in gene editing.”

If you’ve read anything about CRISPR, you’re likely familiar with the following: in a 2012 Science paper, Doudna, Charpentier, and their colleagues published the first account of programming the CRISPR/Cas9 system to precisely cut naked plasmid and double-stranded DNA. Zhang and his colleagues applied this precision-cutting approach to mouse and human cells in vitro, publishing their results in a February 2013 issue of Science.

But, as is the case whenever intensive scientific inquiry is involved, the story was not nearly so simple. Although it’s not often included with the aforementioned studies, Church’s team published a similar study—using CRISPR/Cas9 to edit genes in human stem cells—in the same issue of Science as Zhang and his colleagues.

Church emphasized that Doudna and Charpentier were major players in elevating CRISPR/Cas9, a naturally occurring form of immune defense employed by bacteria to fight off invading viruses, from a biological curiosity to a potentially transformative gene-editing tool. “They were definitely pioneers in studying this particular enzyme system,” he said. But he contends that their specific contributions don’t constitute the whole story of the technology’s development. “The spark that [Doudna] had was that CRISPR would be a programmable cutting device,” Church said. “But getting it to do precise editing, via homologous recombination, was a whole other thing.”

The CRISPR/Cas system is a naturally occurring form of immune defense employed by bacteria to fight off invading viruses. A small constellation of researchers aided in describing, isolating, and studying CRISPR decades before it was ever imagined as a gene-editing tool.

In 1987, Yoshizumi Ishino and his colleagues at Osaka University in Japan published the sequence of a peculiar short repeat, called iap, in the DNA of E. coli. Eight years later, Francisco Mojica from the University of Alicante in Spain and his colleagues characterized what would become known as a CRISPR locus; the researchers later realized that what they and others had considered disparate repeat sequences actually shared common features.

Mojica and his colleague Ruud Jansen coined the term CRISPR (for clustered regularly-interspaced short palindromic repeats) in correspondence with each other in the late 1990s and early 2000s, and Jansen used it in print for the first time in 2002. A steady trickle of research on the prokaryotic immune module followed, with industry scientists such as Philippe Horvath and Rodolphe Barrangou from dairy manufacturer Danisco joining academic researchers—among them, Luciano Marraffini at Rockefeller University, John Van der Oost at Wageningen University in the Netherlands, Sylvain Moineau of Canada’s Laval University, Virginijus Siksnys at Vilnius University in Lithuania, and Eugene Koonin of the US National Center for Biotechnology Information—pursuing a more robust understanding of how CRISPR worked in nature. This early work on CRISPR was “kind of a community effort,” said Church.

Zhang agreed. “This is a remarkable scientific story in its own right, and the work on genome editing . . . was only possible because of a strong, global foundation of basic research into the biology of CRISPR,” he wrote in an email to The Scientist. “Many researchers contributed to the discovery and understanding of CRISPR,” he added. “Any discussion of the development of CRISPR into the genome-editing tool it is today would be incomplete without recognizing the critical contributions of each of these individuals and their teams.”

Now that the technology is being applied, its origin story has been oversimplified in both published accounts and by award organizations. “It’s a litany now,” Church said. “It’s like a hymn.”

And of all the researchers who might deserve more credit for developing CRISPR, Church contends that he’s at the top of the list. “There were definitely at least two teams [Doudna’s and Charpentier’s] involved in getting cutting to work,” Church continued, “and then there were two teams [Zhang’s and mine] that got it to work in humans with homologous recombination. So you could say two and two. But to oversimplify that back down to three, is like consciously omitting one.”

Why that happened isn’t readily apparent, said Doudna. “Looking at peer-reviewed publications, George Church published a paper at the same time in the same issue of Science magazine as Feng Zhang on using CRISPR technology in human cells,” she told The Scientist. “It’s very clear what’s in the scientific record.”

That CRISPR/Cas9 gene-editing was a larger collaborative effort that extends beyond Doudna, Charpentier, and Zhang is an issue that others have spoken and written about. An economic manifestation of the debate, in the form of a patent dispute, has even sprung up within the oft-cited CRISPR trinity. Then there are the prizes. In 2014, Doudna and Charpentier were awarded a $3 million Breakthrough Prize. And last year Thomson Reuters predicted a Nobel Prize in Chemistry for the duo. (The 2015 honors went to a trio of DNA repair researchers instead.)

Meanwhile, the media continues to perpetuate the condensed CRISPR origin story when mentioning the technology’s evolution in the space of a sentence or two. Part of that oversimplification is rooted in the fact that most modern life-science researchers aren’t working to uncover broad biological truths. These days the major discoveries lie waiting in the details, meaning that any one lab is unlikely to shed all the necessary light on a complex phenomenon—much less on how to adopt that phenomenon for human purposes—in isolation. That reality does little to allay what is probably a fundamental human urge to pin a few names and faces on major breakthroughs.

But how do we fix a problem of public perception that stems from the very nature of scientific discovery in the modern age? Doudna had a suggestion. “I think it’s great that journalists look into this and explain the process of science,” she said. “Things don’t happen overnight; they happen through a process of investigation. And very typically there are multiple laboratories that are working in an area, and it’s almost universally true.”

[Comment by Andras_at_Pellionisz_dot_com below]

Pellionisz, Fig 16 of 2009

[George Church invited me to his Cold Spring Harbor meeting in 2009. I was already searching at that time for "Fractal Defects", see above. At that time, there was already an established industry to sequence full genomes. However, there was not yet an established industry for Synthetic Genomics (to cheaply manufacture sequences of any design), nor was George Church fully geared at that time for Genome Editing (to insert the edited correct version to replace "Fractal Defects"). Today, we have the full triad! Full sequencing is a commodity. In the spirit of the conclusion of the talk with Prof. George Church (the accomplishments of multiple laboratories and broad biological truths), a truly enterprising revolutionary move became possible. A triad can be put together even for non-coding DNA segments: a) the protected intellectual property of FractoGene to compute Fractal Defects (in force for more than the next decade), b) synthetic genomics to cheaply manufacture an edited replacement sequence, and c) genome-editing patents (and, I assume, tons of pre-existing trade secrets, causing a feeding frenzy in genome editing), though editors must first know what the mathematical (fractal) language of non-coding regulatory DNA is. Already in 2009, "glitches could be found". The famed seven years later, by 2016, "glitches might become edited out by a synthetic correct sequence". "Presenilin", linked to Alzheimer's, is present also in mice, and even in the tiny genome of C. elegans. Fractal Defects, found since 2007, were shown also for Parkinson's-linked sequences (and other genomic syndromes). Presented to the Parkinson Institute, they were not ready for funding before there were means to do something definite about them. A lucid cartoon of Genome Editing is here. - andras_at_pellionisz_dot_com ]


Genome misfolding unearthed as new path to cancer

IDH mutations disrupt how the genome folds, bringing together disparate genes and regulatory controls to spur cancer growth

[Compare to Defects of Hilbert-Fractal Folding Clog "Proximity", see Figure above Table of Contents here, from 2012 Proceedings - Andras_at_Pellionisz_dot_com]


Nature, December 23, 2015

In a landmark study, researchers from the Broad Institute and Massachusetts General Hospital reveal a completely new biological mechanism that underlies cancer. By studying brain tumors that carry mutations in the isocitrate dehydrogenase (IDH) genes, the team uncovered some unusual changes in the instructions for how the genome folds up on itself. Those changes target key parts of the genome, called insulators, which physically prevent genes in one region from interacting with the control switches and genes that lie in neighboring regions. When these insulators run amok in IDH-mutant tumors, they allow a potent growth factor gene to fall under the control of an always-on gene switch, forming a powerful, cancer-promoting combination. The findings, which point to a general process that likely also drives other forms of cancer, appear in the December 23rd advance online issue of the journal Nature.

"This is a totally new mechanism for causing cancer, and we think it will hold true not just in brain tumors, but in other forms of cancer," said senior author Bradley Bernstein, an institute member at the Broad Institute and a professor of pathology at Massachusetts General Hospital. "It is well established that cancer-causing genes can be abnormally activated by changes in their DNA sequence. But in this case, we find that a cancer-causing gene is switched on by a change in how the genome folds." [Yes, this paper has its seeds in the 2009 "Mr. President, the Genome is Fractal" Science cover article, featuring the Hilbert curve for the fractal globule of DNA folding. Dr. Bernstein was among the co-authors, with Erez Lieberman as first author and Dr. Eric Lander as lead author. Eric Lander is acknowledged in the reviewed Bernstein et al. Nature paper [full pdf] - AJP]

When extended from end to end, the human genome measures some six and a half feet. Although it is composed of smaller, distinct pieces (the chromosomes), it is now recognized that the pieces of the genome fold intricately together in three dimensions, allowing them to fit compactly within the microscopic confines of the cell. More than mere packaging, these genome folds consist of a series of physical loops, like those of a tied shoelace, that bring distant genes and gene control switches into close proximity.

By creating these loops -- roughly 10,000 of them in total -- the genome harnesses form to regulate function. "It has become increasingly clear that the functional unit of the genome is not a chromosome or even a gene, but rather these loop domains, which are physically separated -- and thereby insulated -- from neighboring loop domains," said Bernstein.
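The "proximity through folding" idea can be made concrete with a space-filling Hilbert curve, the 2-D analogue of the "fractal globule" metaphor: positions adjacent along the 1-D sequence stay spatially close after folding, while positions far apart along the sequence can land side by side. A small Python sketch using the standard 1-D-index-to-(x, y) Hilbert mapping (purely illustrative; the genome's folding is 3-D and far less regular):

```python
# Illustrative 2-D analogue of "compact folding preserves locality":
# the standard Hilbert-curve mapping from a 1-D position d to (x, y).

def hilbert_d2xy(order, d):
    """Map 1-D index d to (x, y) on a 2**order x 2**order Hilbert curve."""
    x = y = 0
    s = 1
    while s < 2 ** order:
        rx = 1 & (d // 2)
        ry = 1 & (d ^ rx)
        if ry == 0:                      # rotate the quadrant when needed
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        d //= 4
        s *= 2
    return x, y

# Points adjacent along the 1-D "sequence" stay adjacent after "folding":
pts = [hilbert_d2xy(3, d) for d in range(64)]
assert all(abs(x1 - x2) + abs(y1 - y2) == 1
           for (x1, y1), (x2, y2) in zip(pts, pts[1:]))
```

Note also the converse: at order 2, index 1 maps to (1, 0) and index 14 maps to (2, 0), thirteen steps apart along the sequence yet spatial neighbors. That is the geometric sense in which folding can bring a distant gene and a regulatory switch into contact, and why disrupting the fold's insulators matters.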

But Bernstein's group did not set out to study this higher-order packing of the genome. Instead, they sought a deeper molecular understanding of glioma, a form of brain cancer, including the highly aggressive form, glioblastoma. Relatively little progress has been made in the last two decades in treating these often incurable malignancies. In order to unlock these tumors' biology, Bernstein and his colleagues combed through vast amounts of data from recent cancer genome projects, including the Cancer Genome Atlas (TCGA). They detected an unusual trend in IDH-mutant tumors: When a growth factor gene, called PDGFRA, was switched on, so was a faraway gene, called FIP1L1. When PDGFRA was turned off, so, too, was FIP1L1.

"It was really curious, because we didn't see this gene expression signature in other contexts -- we didn't see it in gliomas without IDH mutations," said Bernstein.

What made this signature stand out is that the two genes in question sit in different genomic loops, which are separated by an insulator. Just as the loops of a tied shoelace come together at a central knot, two insulators in the genome bind to one another, forming a loop. These insulators join together through the action of multiple proteins, which bind to specific regions of the genome, called CTCF sites.

Bernstein and his team were surprised to find that this strange phenomenon could be seen across the genome, involving many other CTCF sites and gene pairs, suggesting that IDH-mutant tumors have a global disruption in genome insulation. But how does this happen, and what role does IDH play?

IDH gene mutations signify one of the early success stories to flow from the large-scale sequencing of tumor genomes. Historically, IDH genes were thought to be run-of-the-mill "housekeeping" genes, not likely drivers of cancer -- exactly the kinds of unexpected finds scientists hoped to uncover through systematic searches of the cancer genome.

Fast forward a few years, and the biology of IDH-mutant tumors remains poorly understood. IDH encodes an enzyme that, when mutated, produces a toxic metabolite that interferes with a variety of different proteins. Exactly which ones are relevant in cancer is unknown, but what is known is that the DNA of IDH-mutant tumors is modified in an important way -- it carries an unusually large number of chemical tags, called methyl groups. The significance of this hypermethylation is not yet clear. "Based on the genome-wide defect in insulation that we observed in IDH-mutant gliomas, we looked for a way to put all these pieces of the IDH puzzle together," said Bernstein.

Using a combination of genome-scale approaches, he and his colleagues found that the hypermethylation in IDH-mutant gliomas localizes to CTCF sites across the genome, where it disrupts their insulator functions.

Taken together with their earlier results, their work shows that PDGFRA and FIP1L1, which are normally confined to separate loop domains and rarely interact, become closely associated in IDH-mutant tumors -- like untying a shoelace and then retying it in a new configuration. This unusual relationship emerges as a result of the hypermethylation at the intervening CTCF site.

"A variety of other tumors carry IDH mutations, including forms of leukemia, colon cancer, bladder cancer, and many others," said Bernstein. "It will be very interesting to see how generally this applies beyond glioma."

Although these early findings need to be extended through additional studies of IDH-mutant gliomas as well as other forms of IDH-mutant cancers, they offer some intriguing insights into potential therapeutic approaches. These include IDH inhibitors, which are now in clinical development, as well as agents that reduce the associated DNA methylation or target the downstream cancer genes.

[This landmark paper, clinching experimental support for the fractal approach advanced by Pellionisz since 2002, will be commented on in appropriate detail - andras_at_pellionisz_dot_com.

A most interesting case in point is how the NIH National Cancer Institute struggles to come to terms with my "fractal approach", already endorsed by major, highly mathematically minded leaders (Nobelist Michael Levitt of Stanford, double-degree biomathematician Eric Schadt of New York, Eric Lander of Broad/MIT, pioneer of fractals in biology and medicine Gabriele Losa [et al.] of Switzerland, Govindarajan Ethirajan of India, etc.).

A significant sector of the Old School is, however, still hesitant to embrace advanced mathematics. It is becoming an embarrassment, since (as illustrated by the May 2015 YouTube video by a bright layperson, Wai h tsang) the unification of neuroscience and genomics is almost taken for granted, just by looking around and spotting "fractals everywhere, sprouting from fractal seeds", by almost every bright (lay)person. Even the behavior of "old schools" shows the typical fractal "self-similarity": repeating the same mistake over and over again. It has happened many times in the history of science that major disruptions were recognized only after undue delays of several decades. For FractoGene (2002), the first "critical seven years" resulted in the recognition of the genome as a Hilbert fractal (2009). Now, after a second critical seven years, in 2016 the overwhelming evidence may become too embarrassing for true scholars to keep hiding the facts.]

The Fractal Brain and Fractal Genome [by layperson Wai h tsang]

YouTube of Wai h tsang

[Googling "Pellionisz" will reveal a good number of peer-reviewed publications (references available through the "Professional Page"), as well as a 2008 Google Tech Talk YouTube on nonlinear dynamics (fractals and chaos) as the common intrinsic mathematics of both the genome and the brain. Particularly important was, after ENCODE-1, to clinch the Principle of Recursive Genome Function. Starting from a fractal model of the Purkinje cell (Pellionisz, 1989, also shown in the above video), the FractoGene approach first explained how a fractal genome grows fractal organ(ism)s, and the 2012 textbook chapter on the "geometrical unification of genomics and neuroscience" elaborates on the topic. Happily, the geometrical approach to biology of Pellionisz (since 1965) has apparently found its way to bright minds of a younger generation everywhere. Somewhat sadly, "Old School" neuroscience and genomics has had a rather hard time coming to terms with advanced mathematics (see in the "news" column, e.g., that the NIH Cancer Institute published a paper in two versions: one based on fractals, while the other version is completely devoid of even the word, let alone citing pioneers). Breakthrough, however, is inevitable, though over a quarter of a century has already been wasted (and counting). Meanwhile, hundreds of millions died, e.g., of cancer. Time to wake up; the tardiness of the old school is becoming an embarrassment for professionals. - AJP]

2016 - The Genome Appliance; Taking the Genome Further in Healthcare

Tech Crunch - December, 2015

Brendan Frey is the CEO and president of Deep Genomics.

Collecting genome data is reliable, fast and cheap. Yet, interpreting that data is unreliable, slow, and expensive — when it’s even possible.

Today, genome interpretation is a burgeoning science, but it’s not yet a technology. A stricken patient has their genes sequenced and their mutations identified. But then, it can take a highly trained, and highly paid, expert many hours to make a judgment call on a single unfamiliar mutation.

All too often, the result is no diagnosis, no therapy and gut-wrenching uncertainty. The problem is made worse because there are not enough knowledgeable experts to handle the rising tide of genome data, and there never will be — exponential growth in the number of human experts is not a viable option.

Genome interpretation is already a pain point for doctors, hospitals, diagnostic labs, pharmaceutical companies and insurance providers. That means it’s also a pain point for everyday patients and their families, whether they know it or not.

The capability gap between the collection of genome data and the interpretation of it is widening faster than ever. If that gap is allowed to continue growing unabated, it represents a shameful lost opportunity to avoid heartache and struggle for millions of people.

How will computer-aided genome interpretation be used to improve the lives of patients? Dozens of ventures are attempting to answer this question and, when the dust settles, healthcare will look dramatically different than it does now.

There are exciting entrepreneurial opportunities in genome-driven personalized medicine, arising from huge potential value and extreme uncertainties in the five-year perspective. We can think of them as rungs on the ladder of information value.

First Rung: Genetic Data Generation And Secure Data Storage

These entrepreneurial opportunities provide the raw material for genomic medicine: whole genome sequences, exome sequences, gene panels and rich phenotype information such as an individual’s predisposition to disease.

This data can be used to determine the set of mutations that a patient has, compared to a reference genome, or it can be used to determine the mutations that tumor cells have, compared to healthy cells. Large databases form crucial resources that support higher rungs on the ladder.
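The comparison step described above can be sketched as a toy variant scan. This is illustrative only (real pipelines call variants from aligned sequencing reads, not from raw strings); the sequences and the function name `call_snvs` are invented for the example:

```python
# Toy illustration: list single-nucleotide differences between a patient
# sequence and a reference sequence of the same (aligned) length.
def call_snvs(reference: str, patient: str):
    """Return (position, ref_base, alt_base) for every mismatch."""
    if len(reference) != len(patient):
        raise ValueError("sequences must be aligned to equal length")
    return [(i, r, p)
            for i, (r, p) in enumerate(zip(reference, patient))
            if r != p]

reference = "ACGTACGTAC"
patient   = "ACGAACGTTC"
print(call_snvs(reference, patient))  # [(3, 'T', 'A'), (8, 'A', 'T')]
```

The same diff, run tumor-versus-normal instead of patient-versus-reference, yields the somatic mutations mentioned above.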

Examples include the sequencers developed and in development at Illumina, PacBio and Oxford Nanopore, the data storage systems in development at Google Genomics and DNAnexus, and the genotype-phenotype data being generated at 23andMe and Human Longevity.

The uncertainties here mainly involve rapidly dropping costs of genome sequencing and phenotyping technologies on the one hand, and increasing concerns about patient confidentiality on the other.

Second Rung: Data Organization, Brokering And Visualization

The value added here is in sharing and comparing the data of individual patients, as well as integrating diverse kinds of large-scale datasets. Pertinent datasets may be public or private, and may have conditions attached, such as those involving confidentiality, non-competition and complex licensing.

Brokering “data trades” in a technologically streamlined manner is crucial. These opportunities do not produce actionable information, but they provide important support for higher rungs on the ladder.

Examples include NextBio, SolveBio and DNAstack. Here, there is uncertainty in the gain in value that can be achieved by combining and sharing genomic data, since without proper interpretation and without addressing patient confidentiality the data may not be actionable.

Third Rung: Software To Bridge The Genotype-Phenotype Gap

This is the most challenging, yet potentially highest-value, entrepreneurial opportunity. Currently, there is a lack of technologies that can reliably link genotype to phenotype and address the crucial question of how genetic modifications, whether natural or therapeutic, impact molecular and biological processes involved in disease. Bridging this gap would be highly disruptive in several verticals, including genetic testing, drug targeting, patient stratification, precision medicine and insurance.

In a recent study, it was shown that the success rate of drugs at phase three in clinical trials could be doubled when even the most simplistic genome interpretation data is taken into account. Imagine what could be achieved if accurate systems for genome interpretation were broadly available.

Bridging the genotype-phenotype gap is the most difficult challenge on the ladder, because it addresses a very complex, multi-faceted task.

The genome is a digital recipe book for building cells, written in a language that no human will ever fully understand. [Define "fully", or replace "fully understand" with "understand without the intrinsic mathematics of nature" - AJP]. Our only window into this tiny, complex world is by high-throughput experiments such as DNA and RNA sequencing, proteomics assays, single-cell experiments and gene editing with CRISPR-Cas9 screens.

Identifying valuable experiments is one way entrepreneurs on this rung can create value, but only if they have the computational know-how to make sense of the data. Machine learning is by far the best technology at our disposal for using such data to discover how the underlying biology works. [This is debatable. "Machine Learning" (maiden name: "Artificial Intelligence") was declared "Brain Dead" by the originator & champion of AI (Marvin Minsky) when we developed the entire new field of "Neural Net Algorithms" [1571 citations] - AJP]. It will play a crucial role in bridging the genotype-phenotype gap.
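In its simplest supervised form, the machine-learning approach Frey describes amounts to fitting a model from genotype features to a phenotype label. A minimal sketch, with an invented toy dataset (four variant sites per individual, phenotype driven by site 0) and plain logistic regression by gradient descent standing in for the far richer models Deep Genomics actually uses:

```python
# Minimal sketch: logistic regression mapping a binary genotype vector
# (1 = variant present at a site) to a phenotype label. Toy data only.
import math

def train(X, y, epochs=2000, lr=0.5):
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid
            g = p - yi                       # gradient of the log-loss
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    z = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1 if z > 0 else 0

X = [[1, 0, 0, 1], [1, 1, 0, 0], [0, 0, 1, 1], [0, 1, 1, 0]]
y = [1, 1, 0, 0]
w, b = train(X, y)
print([predict(w, b, x) for x in X])  # [1, 1, 0, 0]
```

The hard part of the third rung is not this fitting step but choosing features and labels that actually reflect molecular biology, which is where the data-hungry deep models come in.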

For this rung, there is no uncertainty about the transformative nature of the technologies and their value. The uncertainty lies in how successful we can be, from a technological standpoint, in bridging the gap. Do we have enough data? The right type of data? The right machine learning algorithms? [The "uncertainty" is not in our technology savvy - the deep question is if "reverse engineering methods" are suitable to "reverse engineer" a natural system that was never ever "engineered" in the first place - AJP]

Fourth Rung: Diagnostics, Therapies, Precision Medicine And Insurance

These opportunities derive their value from directly addressing the needs of patients. Going forward, this rung will increasingly benefit from the lower rungs on the ladder, and companies that fail to leverage the full stack of the ladder will be left behind. Currently, companies on the fourth rung struggle to make full use of genomic data because good systems for genome interpretation are not yet available.

For instance, the reliability of the current generation of computational tools for genome interpretation is unclear, according to the American College of Medical Genetics and Genomics, the widely accepted oversight body. This will inevitably change as systems for genome interpretation improve and are proven.

Examples of diagnostic companies include Counsyl, Invitae, Myriad and Human Longevity’s Health Nucleus; examples of pharmaceutical companies that are increasingly using these systems include the big pharmas, plus data-driven companies such as 23andMe and Capella Biosciences. Risks here include the uncertainties involved in obtaining regulatory approval and sidestepping the dreaded 10-year drug development cycle.

A Way Forward

Bridging the genotype-phenotype gap is one of the most important outstanding challenges for which machine learning is truly needed. Facebook, Google and DeepMind have made amazing progress in helping computers catch up to humans in understanding images, speech and language, but humans already do these tasks every day and we excel at them. Genome interpretation is different; not a part of our daily lives, yet, in a sense, more urgent.

The gap between our ability to merely collect genetic information and our ability to interpret it at scale is widening faster than ever. [Might wish to re-visit the Google Tech Talk forecast in 2008 "Is IT ready for the Dreaded DNA Data Deluge" - AJP]. Closing that gap will change the lives of hundreds of millions of people.

Our objective in this industry should be to multiply by 10X the scale, speed and, most of all, accuracy of genome interpretation. I believe we can do this in three to five years by accelerating the pace of development in computational methods for genome interpretation, and especially machine learning.

Genome interpretation is a software problem that will require the concerted efforts of genome biologists, machine learning experts and software engineers. ["Software" is to put mathematical algorithm(s) into executable lines of code. If the algorithms are unsuitable, cost of software development, often very pricey, might be a waste. Further, in the changing climate of IP protection, securing software is not the best approach. 2016 will emerge as the year of "the genome appliance" - AJP]

Whole-Genome Analysis of the Simons Simplex Collection (SSC)

SFARI is pleased to announce that it has awarded five grants in response to the Whole-Genome Analysis for Autism Risk Variants request for applications.

We are also announcing plans for the release of whole-genome sequencing data from the Simons Simplex Collection (SSC) for analysis by the entire research community. There are currently 560 genomes available, and we expect that all 2,160 genomes (from 540 SSC quad families) will be available by the end of February 2016.

[An entry from 3 months ago ("Head of Mental Health Institute Leaving for Google Life Sciences"; New York Times, Sept. 15, 2015 - see further down in this column) may be very relevant here. (Dr. Thomas R. Insel (63), head of NIH-NIMH, resigned by November, 2015 to join Google Life Sciences.) Readers may wish to correlate the Sept. 15 and Dec. 15 news; two landmarks just a few months apart that signal the shift of focus towards privately funded modern genome informatics combatting autism - Andras_at_Pellionisz_dot_com]

The role of big data in medicine


An interview with Eric Schadt

November, 2015

The role of big data in medicine is one where we can build better health profiles and better predictive models around individual patients so that we can better diagnose and treat disease.

One of the main limitations with medicine today and in the pharmaceutical industry is our understanding of the biology of disease. Big data comes into play around aggregating more and more information around multiple scales for what constitutes a disease—from the DNA, proteins, and metabolites to cells, tissues, organs, organisms, and ecosystems. Those are the scales of the biology that we need to be modeling by integrating big data. If we do that, the models will evolve, the models will build, and they will be more predictive for given individuals.

It’s not going to be a discrete event—that all of a sudden we go from not using big data in medicine to using big data in medicine. I view it as more of a continuum, more of an evolution. As we begin building these models, aggregating big data, we’re going to be testing and applying the models on individuals, assessing the outcomes, refining the models, and so on. Questions will become easier to answer. The modeling becomes more informed as we start pulling in all of this information. We are at the very beginning stages of this revolution, but I think it’s going to go very fast, because there’s great maturity in the information sciences beyond medicine.

The life sciences are not the first to encounter big data. We have information-power companies like Google and Amazon and Facebook, and a lot of the algorithms that are applied there—to predict what kind of movie you like to watch or what kind of foods you like to buy—use the same machine-learning techniques. Those same types of methods, the infrastructure for managing the data, can all be applied in medicine.

In the past three or four years, we’ve hired more than 300 people, spanning from the hardware side and big data computing to the sequence informatics and bioinformatics to the CLIA-certified genomics core—to generate the information—to the machine-learning and predictive-modeling guys and the quantitative guys, to build the models. And then we’ve linked that up to all the different disease-oriented institutes at Mount Sinai, and to some of the clinics directly, to start pushing this information-driven decision making into the clinical arena.

Not all the physicians were on board and, of course, there are lots of people who will try to cause all sorts of fear about what kind of world we’re going to transform into if we are basing medical decisions on sophisticated models where nobody really understands what’s happening. So it was all about partnering with individuals such as key physicians who were viewed as thought leaders—leading their area within the system—and carrying out the right kinds of studies with those individuals.

In all of these different areas, we’re recruiting experts, and we view what we build as sort of a hub node that we want linked to all the different disease-oriented institutes to enable them to take advantage of this great engine. But you need people to help translate it, and that’s what these key hires have done. They have a strong foot within the Icahn Institute, but they also care about disease. And so they form their whole lab around the idea of how to more efficiently translate the information from the big information hub out to the different disease areas. That’s still done mainly by training individuals within those labs to be able to operate at a lower level. I think what needs to happen beyond that is better engagement through software engineering: user-interface designers, user-experience designers who can develop the right kinds of interfaces to engage the human mind in that information.

One of the biggest problems around big data, and the predictive models that could build on that data, really centers on how you engage others to benefit from that information. Beyond the tools that we need to engage noncomputational individuals in this type of information and decision making, training is another element. They’ve grown up in a system that is very counter to this information revolution. So we’ve started placing much more emphasis on the coming generation of physicians and on how we can transform the curriculum of the medical schools. I think it’s a fundamental transformation of the medical-school curriculum, and even the basic life sciences, where it becomes more quantitative, more computational, and where everybody’s taking statistics and combinatorics and machine learning and computing.

Those are just the tools you need to survive. And it has to start at that earlier stage, because it’s very, very difficult to take somebody already trained in biology or a physician and teach them the mathematics and computer science that you need to play that game.

Bringing together the right talent (YouTube video)

Researchers ID Copy Number Changes Associated With Cancer in Normal Cells


NEW YORK (GenomeWeb) – Researchers from Uppsala University in Sweden have identified copy number alterations typically associated with cancer in normal cells of breast cancer patients, suggesting that the mutations could be early indicators of disease. [Same is true for Autism - high time to measure them for early and exact diagnosis and precision therapy - AJP]

Reporting their work recently in Genome Research, the researchers aimed to look for markers that predict a risk for breast cancer in individuals without a hereditary risk. Approximately 10 percent of women in developed countries get non-familial breast cancer, also called sporadic breast cancer. The disease is heterogeneous and individuals differ in clinical manifestation, radiologic appearance, prognosis, and outcome. Yet, there are no good markers to predict a woman's risk for developing the disease.

Mammography is used to screen older women, yet it has a limited sensitivity and often only identifies disease once a tumor poses a significant mortality risk, the authors wrote.

In order to look for potential markers that could predict risk of disease at an earlier stage, the researchers studied 282 female sporadic breast cancer patients who underwent mastectomy. From each patient, they evaluated primary tumor tissue, several normal-looking tissue samples at various distances from the tumor, and normal blood or skin samples.

The team characterized all the samples via microarrays, and three with low-coverage whole-genome sequencing. Of 1,162 non-tumor breast tissue samples, 183 samples from 108 patients had at least one aberration. The researchers noted that the more sites they sampled from a patient, the more likely they were to find one containing an aberration, suggesting that the identified aberrations may represent only a part of all aberrations that might exist in the studied individuals.

Twenty-seven samples had highly aberrant genomes, affecting over 39 percent of the genomes. Alterations spanned large regions, even whole chromosomes, and there were differences between individual cells, suggesting heterogeneity.
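A "fraction of the genome affected" figure like the 39 percent above is computed by merging the aberrant segments (so overlaps are not double-counted) and dividing the covered length by the genome length. A minimal sketch with invented coordinates:

```python
# Sketch: fraction of a genome covered by copy-number-aberrant segments,
# merging overlapping intervals before summing. Coordinates are invented.
def affected_fraction(segments, genome_length):
    """segments: list of (start, end) half-open intervals."""
    merged = []
    for start, end in sorted(segments):
        if merged and start <= merged[-1][1]:
            # Overlaps (or abuts) the previous interval: extend it.
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    covered = sum(end - start for start, end in merged)
    return covered / genome_length

segments = [(0, 30), (20, 50), (70, 80)]   # two overlapping, one disjoint
print(affected_fraction(segments, 100))    # 0.6
```

The same per-sample fraction is what the authors use as "mutation load" when stratifying samples in the next step.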

Next, they stratified the remaining 157 tissue samples by mutation load. Because the goal was to identify the earliest markers of breast cancer, they first looked at the samples with a low mutation load.

Copy number gains were the most frequent alteration observed, suggesting that "oncogenic activation (up-regulation) of genes via increased copy number might be a pre-dominant mechanism for initiation of the SBC disease process," the authors wrote.

The authors confirmed that the genomic alterations identified in the normal breast tissue were also found in the primary tumor, with two exceptions. In one case, the team identified a deletion to a tumor suppressor gene that was not present in the tumor, and in another case, the researchers found eight alterations in the normal tissue, only four of which were in the primary tumor.

The most common event in samples with low mutational loads was a copy number gain of ERBB2, which was also the third most common event among all samples. The researchers also found this event in patients' epithelial and mesenchymal cells, demonstrating that "early predisposing genetic signatures are present in normal breast parenchyma as an expression of field cancerization and are not likely to be derived from migrating tumor cells," the authors wrote.

Recurrent gains to receptor genes were also found in EGFR, FGFR1, IGF1R, NGFR, and LIFR.

"Our analysis represents a snapshot picture of a progressive process that is likely going on for many years, if not decades," the authors wrote.

The findings raise important questions about tumor resection and point to a new method of early detection, although further validation studies are needed to determine their clinical significance.

For instance, tumor resection in breast cancer patients is a well-established standard of care; however, there is debate about how much tissue should be removed to ensure all cancer cells are taken. The authors reported that their study provides some evidence for altered cells "sometimes located at unexpected distances" from primary tumors. If those cells are left behind, they "may represent the source of local recurrence," the authors wrote.

In addition, if the findings are confirmed, they could point the way toward better and earlier diagnostics. For instance, in the future, researchers could potentially design imaging tests to detect the proteins located on the cell surface of breast cells that are encoded by cancer genes that have copy number gains.

"Such tests could detect an ongoing disease process much earlier (years, possibly even decades) compared to mammography," the authors wrote.

["FractoGene is Fractal Genome Grows Fractal Organisms". This "Heureka!" cause-and-effect realization translates into immediate use for early and mathematically exact diagnosis, and for precision therapy. Mr./Mrs. Billionaire, do you want to wait until pathological fractal organisms (cancerous tumors) show up? By exact measurement of fractal defects and their correlation, one can make a statistical diagnosis and a probabilistic prognosis for precision therapy. A slew of conditions (cancer, autism, schizophrenia, auto-immune diseases, etc.) are already closely linked to fractal defects of the genome (which is replete with repeats in all cases, healthy or not). Andras_at_Holgentech_dot_com]
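One standard way to quantify the fractality of a DNA sequence (not AJP's proprietary method, which the source does not detail) is the chaos game representation of Jeffrey (1990) followed by box counting. A minimal sketch, using an arbitrary random toy sequence:

```python
# Sketch: chaos game representation (CGR) of a DNA string, then a crude
# two-scale box-counting estimate of its fractal dimension.
import math
import random

CORNERS = {'A': (0.0, 0.0), 'C': (0.0, 1.0), 'G': (1.0, 1.0), 'T': (1.0, 0.0)}

def cgr_points(seq):
    """Iterate the chaos game: step halfway toward each base's corner."""
    x, y = 0.5, 0.5
    pts = []
    for base in seq:
        cx, cy = CORNERS[base]
        x, y = (x + cx) / 2.0, (y + cy) / 2.0
        pts.append((x, y))
    return pts

def box_count(pts, n):
    """Number of cells in an n-by-n grid occupied by at least one point."""
    return len({(min(int(x * n), n - 1), min(int(y * n), n - 1))
                for x, y in pts})

random.seed(0)
seq = ''.join(random.choice('ACGT') for _ in range(5000))
pts = cgr_points(seq)
# Dimension estimate from two grid resolutions (8x8 vs 16x16).
d = math.log(box_count(pts, 16) / box_count(pts, 8)) / math.log(2)
print(round(d, 2))  # close to 2.0 for a uniform random sequence
```

A uniform random sequence fills the CGR square, so its estimated dimension approaches 2.0; real genomes, with their repeat structure, leave characteristic gaps in the CGR and yield lower, sequence-specific dimensions — the kind of quantity one could compare between healthy and pathological genomes.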

Genome Pioneer: We Have The Dangerous Power To Control Evolution (Interview with Craig Venter)

The World Post

Oct. 5, 2015

[Before reading further, see Dr. Barnsley's Classic - Craig, you'll need FractoGene!]

[The "genes" failed us - your "complexion" is fractal, so is your genome (FractoGene). Franz Och might provide yet another proof of FractoGene and Craig's Longevity, Inc. might get a license... andras_at_pellionisz_dot_com]

You are the pioneering cartographer of the human genome. How much do we know? What percentage of the functions of genes do we know today?

The cell that we’ve designed in the computer has the smallest genome of any self-replicating organism. In this case, 10 percent of the genes, or on the order of about 50 genes in that organism, are of unknown function. All we know is that if certain genes are not present, you can’t get a living cell.

The human genome is almost the flip side. I would say that we only know well the functions of, maybe, 10 percent of our genome. We know a lot about a little bit; we know far less about a lot more. We don’t know most of the real functions of most of the genes. A big percentage of that can potentially come in the next decade as we scale up to get huge numbers and use novel computing to gain a deeper understanding.


How are you discovering the genes that determine a person’s facial features?

The way it works in reality is that your genes determine your face, so it’s not a wild stretch of the imagination that it might be doable, right? We all look a little bit different based on the small differences in our genetic code.

We have a series of cameras that snap a 3-D photograph of faces and take about 30,000 unique measurements -- the distance between your eyes, for example, and other physical parameters. We then look into the genome for those 30,000 measurements to see if we can find parts of the genetic code that clearly determine that factor.
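In its simplest form, the search Venter describes reduces to testing each genomic variant for association with each facial measurement across individuals, e.g. via a correlation between genotype dosage and the measurement. A toy sketch (all numbers invented; real studies use far larger cohorts and proper significance testing):

```python
# Toy sketch: correlate genotype dosage (0/1/2 copies of a variant) with
# one facial measurement across six individuals. All values are invented.
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

dosage       = [0, 1, 2, 0, 2, 1]                      # variant copies
eye_distance = [30.1, 31.0, 32.2, 29.8, 32.0, 31.2]    # mm, invented
r = pearson(dosage, eye_distance)
print(round(r, 2))  # strong positive correlation in this toy data
```

Repeating this test for each of the 30,000 measurements against millions of variants is what makes the problem computationally heavy, and why "it's not a simple algorithm."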

Obviously, there’s a lot of variation across the human species, so it’s not a simple algorithm. I'm less confident we will be able to take your genome sequence to predict your voice, though, but we’ll get approximations of it. Perfect pitch is genetic. Cadence is genetic. But there are a lot of other things that go into how we sound.

[A deeper, genome-industry-wide analysis of Dr. Craig Venter's landmark release is available upon request - Andras_at_Pellionisz_dot_com]

Genetic Analysis Supports Prediction That Spontaneous Rare Gene Mutations Cause Half Of All Autism Cases

A new study appears to show devastating “ultra-rare” gene mutations play a causal role in roughly half of all cases of Autism Spectrum Disorder.

Quantitative study identifies 239 genes whose “vulnerability” to devastating de novo mutation makes them priority research targets

Peter Tarr • Cold Spring Harbor Laboratory

Cold Spring Harbor, NY – A team led by researchers at Cold Spring Harbor Laboratory (CSHL) this week publishes in PNAS a new analysis of data on the genetics of autism spectrum disorder (ASD). One commonly held theory is that autism results from the chance combinations of commonly occurring gene mutations, which are otherwise harmless. But the authors’ work provides support for a different theory.

They find, instead, further evidence to suggest that devastating “ultra-rare” mutations of genes that they classify as “vulnerable” play a causal role in roughly half of all ASD cases. The vulnerable genes to which they refer harbor what they call an LGD, or likely gene-disruption. These LGD mutations can occur “spontaneously” between generations, and when that happens they are found in the affected child but not found in either parent.

Although LGDs can impair the function of key genes, and in this way have a deleterious impact on health, this is not always the case. The study, whose first author is the quantitative biologist Ivan Iossifov, a CSHL assistant professor and on faculty at the New York Genome Center, finds that “autism genes” – i.e., those that, when mutated, may contribute to an ASD diagnosis – tend to have fewer mutations than most genes in the human gene pool.

This seems paradoxical, but only on the surface. Iossifov explains that genes with devastating de novo LGD mutations, when they occur in a child and give rise to autism, usually don’t remain in the gene pool for more than one generation before they are, in evolutionary terms, purged. This is because those born with severe autism rarely reproduce.

The team’s data helps the research community prioritize which genes with LGDs are most likely to play a causal role in ASD. The team pares down a list of about 500 likely causal genes to slightly more than 200 best “candidate” autism genes.

The current study also sheds new light on the transmission to children of LGDs that are carried by parents who harbor them but whose health is nevertheless not severely affected. Such transmission events were observed and documented in the families used in the study, part of the Simons Simplex Collection (SSC). When parents carry potentially devastating LGD mutations, these are more frequently found in the ASD-affected child than in their unaffected children, and most often come from the mother.

This result supports a theory first published in 2007 by senior author Michael Wigler, a CSHL professor, and Kenny Ye, a statistician at Albert Einstein College of Medicine. They predicted that unaffected mothers are “carriers” of devastating mutations that are preferentially transmitted to children affected with severe ASD. Females have an as yet unexplained factor that protects them from mutations which, when they occur in males, will be significantly more likely to cause ASD. It is well known that at least four times as many males as females have ASD.

Wigler’s 2007 “unified theory” of sporadic autism causation predicted precisely this effect. “Devastating de novo mutations in autism genes should be under strong negative selection,” he explains. “And that is among the findings of the paper we’re publishing today. Our analysis also revealed that a surprising proportion of rare devastating mutations transmitted by parents occurs in genes expressed in the embryonic brain.” This finding tends to support theories suggesting that at least some of the gene mutations with the power to cause ASD occur in genes that are indispensable for normal brain development.

The work described here was supported by the Simons Foundation Autism Research Initiative.

“Low load for disruptive mutations in autism genes and their biased transmission” appears in the Early Edition of Proceedings of the National Academy of Sciences the week of September 21, 2015. The authors are: Ivan Iossifov, Dan Levy, Jeremy Allen, Kenny Ye, Michael Ronemus, Yoon-ha Lee, Boris Yamrom and Michael Wigler. The paper can be obtained [in full .pdf] at:

About Cold Spring Harbor Laboratory

Celebrating its 125th anniversary in 2015, Cold Spring Harbor Laboratory has shaped contemporary biomedical research and education with programs in cancer, neuroscience, plant biology and quantitative biology. Home to eight Nobel Prize winners, the private, not-for-profit Laboratory is more than 600 researchers and technicians strong. The Meetings & Courses Program hosts more than 12,000 scientists from around the world each year on its campuses in Long Island and in Suzhou, China. The Laboratory’s education arm also includes an academic publishing house, a graduate school and programs for middle and high school students and teachers. For more information, visit .

[Autism is a "genome disease" that is leading with the time-proven best approach to science. First, based on preliminary knowledge, theories emerge. Experiments then follow the predictions of a theory, and can either support the theory or contradict it - leading to improvements of any given theory. (In this particular case, one potential improvement is to cover not only the "autism genes" but also the 98% of the DNA that is not genic. Structural variants of the so-called "non-coding, non-genic" sequences are also very well known to be among the causes of genomic diseases.) It is certainly not a sheer accident that SFARI (led by a world-class mathematician, James Simons) supports all sorts of approaches to understanding a genome disease, but as a mathematician clearly prefers those that do not stop at "big data gathering" and are instead based on (at the moment still competing) scientific theories. Based on my mathematization of neuroscience and genomics, having pursued this theory/experimentation approach for nearly half a century (1966-2015), I expect that the sheer cost of genome analysis will force a return to this "mathematical theory-based approach". Andras_at_Pellionisz_dot_com]

Sorry, Obama: Venter has no plans to share genomic data

J. Craig Venter plans to keep the genomic data gathered at Human Longevity tight to the chest.

Much like the White House’s Precision Medicine Initiative, the genomics luminary has announced plans to sequence one million genomes by 2020. So, in keeping with the current vogue of open-sourcing data, does Venter have any interest in commingling his genomic database with the government’s?

“Unlikely,” Venter said, during a keynote speech at the Mayo Clinic’s Individualizing Medicine Conference in Rochester, Minnesota this week.

“I think this notion that you can have genome sequences from public databases is extremely naive,” Venter said. “We’re worried there will be future lawsuits from people who were guaranteed anonymity who will clearly not have it.”

This stance will likely inform Venter’s policy on Human Longevity’s new consumer-facing genomics business, which was just announced today. In collaboration with a South African health insurer, Human Longevity will soon offer whole exome sequencing that tells individuals about their most medically relevant genetic information – for just $250.

This public offering could dramatically increase Human Longevity’s access to larger swaths of diverse DNA – helping make that goal of one million sequenced genomes by 2020 a reality.

Venter said that he’ll keep the Human Longevity data private because it’s challenging to deal with the accuracy and quality of data when it comes from multiple sources. While the genomes studied at Human Longevity are all sequenced with Illumina’s HiSeq X Ten, Venter has his doubts about the machines and methods used to sequence genomes from other organizations.

“We get the highest quality of data of any center off the Illumina X10 sequencers, and don’t want to comingle it with data from other sources that don’t necessarily have the same degrees of accuracy.”

The Human Longevity database will be built on self-generated data, he said, though it’ll likely share information about allele frequencies.

It was interesting to have Venter come straight out and say why Human Longevity is keeping its data proprietary. Venter has skirted the issue in the past, despite participating in White House precision medicine events. Last year, he wrote:

It is encouraging that the US government is discussing taking a role in a genomic-enabled future, especially funding the Food and Drug Administration (FDA) to develop high-quality, curated databases and develop additional genomic expertise. We agree, though, that there are still significant issues that must be addressed in any government-funded and led precision medicine program. Issues surrounding who will have access to the data, privacy and patient medical/genomic records are some of the most pressing.

We look forward to continuing the dialogue with the Administration, FDA and other stakeholders as this is an important initiative in which government must work hand in hand with the commercial sector and academia.

The Mayo Clinic discussion was a much more finite stance on his concerns of privacy in data sharing – and the consistency of data quality. Different scientists and different machines will interpret data from next-generation sequencing in a different manner.

But we’re not looking at a VHS vs. Betamax situation here – it’s unlikely that Human Longevity is competing with the government. This looks more like a matter of efficiency and pushing forward at a pace that’s easier in the private sector than in a bureaucracy.

[Just in the middle of Silicon Valley (Mountain View - Cupertino) we now have Google, Apple and Human Longevity competing in the Genome Informatics business. I never thought I would live to see this day! The old wisdom said "war is too important to be left to the generals". The up-to-date version is "Genome Informatics is too important to be left to government bureaucrats". Not that they are not good, but they do not create wealth; their greatness is in redistributing it as they please. The very same transition happened to the Internet. It started as a government (defense) information network (of computer system managers) - but it became such a hugely important business tool that President Clinton handed the Internet over to Silicon Valley private industry. In experts' hands, it took off in unprecedented ways. The Internet industry, of course, is both very capital-intensive and extremely competitive. Those who ever invested the kind of money needed to cart in "the Next Big Thing" (or even just a major part of their life-effort) are "unlikely" to throw all their efforts to the wind. Surprising? Venter says "naive". He may be right. Andras_at_Pellionisz_dot_com.]

J. Craig Venter to Offer DNA Data to Consumers

A genomic entrepreneur plans to sell genetic workups for as little as $250. But $25,000 gets you “a physical on steroids.”

MIT Technology Review

By Antonio Regalado on September 22, 2015

Fifteen years ago, scientific instigator J. Craig Venter spent $100 million to race the government and sequence a human genome, which turned out to be his own. Now, with a South African health insurer, the entrepreneur says he will sequence the medically important genes of its clients for just $250.

Human Longevity Inc. (HLI), the startup Venter launched in La Jolla, California, 18 months ago, now operates what’s touted as the world’s largest DNA-sequencing lab. It aims to tackle one million genomes inside of four years, in order to create a giant private database of DNA and medical records.

In a step toward building the data trove, Venter’s company says it has formed an agreement with the South African insurer Discovery to partially decode the genomes of its customers, returning the information as part of detailed health reports.

The deal is a salvo in the widening battle to try to bring DNA data to consumers through novel avenues and by subsidizing the cost of sequencing. It appears to be the first major deal with an insurer to offer wide access to genetic information on a commercial basis.

Jonathan Broomberg, chief executive of Discovery Health, which insures four million people in South Africa and the United Kingdom, says the genome service will be made available as part of a wellness program and that Discovery will pay half the $250, with individual clients covering the rest. Gene data would be returned to doctors or genetic counselors, not directly to individuals. The data collected, called an “exome,” is about 2 percent of the genome, but includes nearly all genes, including major cancer risk factors like the BRCA genes, as well as susceptibility factors for conditions such as colon cancer and heart disease. Typically, the BRCA test on its own costs anywhere from $400 to $4,000.

“I hope that we get a real breakthrough in the field of personalized wellness,” Broomberg says. “My fear would be that people are afraid of this and don’t want the information—or that even at this price point, it’s still too expensive. But we’re optimistic.” He says he expects as many as 100,000 people to join over several years.

Venter founded Human Longevity with Rob Hariri and Peter Diamandis (see “Microbes and Metabolites Fuel an Ambitious Aging Project”), primarily to amass the world’s largest database of human genetic and medical information. The hope is to use it to tease out the roles of genes in all diseases, allow accurate predictions about people’s health risks, and suggest ways to avoid those problems. “My view is that we know less than 1 percent of the useful information in the human genome,” says Venter.

The company this year began accumulating genomes by offering to sequence them for partners including Genentech and the Cleveland Clinic, which need the data for research. Venter said HLI keeps a “de-identified” copy along with information about patients’ health. HLI will also retain copies of the South Africans’ DNA information and have access to their insurance records.

“It will bring quite a lot of African genetic material into the global research base, which has been lacking,” says Broomberg.

Deals with other insurers could follow. Venter says that only with huge numbers will the exact relationship between genes and traits become clear. For instance, height—largely determined by how tall a person’s parents are—is probably influenced by at least hundreds of genes, each with a small effect.

Citing similar objectives, the U.S. government this year said it would assemble a study of one million people under Obama’s precision-medicine initiative (see “U.S. to Develop DNA Study of One Million People”), but it may not move as fast as Venter’s effort.

HLI has assembled a team of machine-learning experts in Silicon Valley, led by the creator of Google Translate, to build models that can predict health risks and traits from a person’s genes (see “Three Questions for J. Craig Venter”). In an initial project, Venter says, volunteers have had their facial features mapped in great detail and the company is trying to show it can predict from genes exactly what people look like. He says the project is unfinished but that just from the genetic code, HLI “can already describe the color of your eyes better than you can.”

Venter also said that this October the company will open a “health nucleus” at its La Jolla headquarters, with expanded genetic and health services aimed at self-insured executives and athletes. The center, the first of several he hopes to open, will carry out a full analysis of patients’ genomes, sequence their gut bacteria or microbiome, analyze more than two thousand other body chemicals, and put them through a full-body MRI scan. “Like an executive physical on steroids,” he says.

The health nucleus service will be priced at $25,000. These individuals would also become part of the database, Venter said, and would receive constant updates as discoveries are made.

While the quality of Venter’s science is not in much doubt, this is the first time since he was a medic in Vietnam that he’s doled out medicine directly. “I think it’s a good concept,” says Charis Eng, chair of the Cleveland Clinic’s Genomic Medicine Institute, which collaborates with Venter’s company. “But we who practice genomic medicine—we say HLI has absolutely no experience with patient care. I want to inject caution: it needs to be medically sound as well as scientifically sound.”

Venter has a history of selling big concepts to investors and then using their money to carry out exciting, but not necessarily profitable, science. In 1998 he formed Celera Genomics to privately sequence the human genome, but he was later booted as its president when its business direction changed. The economics of his current plan are also uncertain. Venter’s pitch is that with tens of thousands and ultimately a million genomes, he will uncover the true meaning of each person’s DNA code. But all those discoveries lie in the future.

And at a cost of around $1,000 to $1,500 each, a million completely sequenced genomes add up to an expense of more than a billion dollars. HLI has so far raised $80 million, but Venter says he is now meeting with investors in order to raise far larger sums.
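The billion-dollar figure above is easy to sanity-check in a few lines of Python; the per-genome cost range and the one-million-genome target are the article's numbers, while the variable names are mine:

```python
# Back-of-envelope check of the sequencing budget quoted in the article:
# $1,000-$1,500 per genome, target of one million genomes.
COST_PER_GENOME_LOW = 1_000    # USD, article's low estimate
COST_PER_GENOME_HIGH = 1_500   # USD, article's high estimate
TARGET_GENOMES = 1_000_000

low_total = COST_PER_GENOME_LOW * TARGET_GENOMES
high_total = COST_PER_GENOME_HIGH * TARGET_GENOMES

# prints: Total cost: $1.0B - $1.5B
print(f"Total cost: ${low_total/1e9:.1f}B - ${high_total/1e9:.1f}B")
```

Either end of the range dwarfs the $80 million raised so far, which is why the article notes Venter is seeking far larger sums.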

Venter says he intends to offer several other common kinds of testing, including pre-conception screening for parents (to learn if they carry any heritable genetic risks), sequencing of tumors from cancer clinics, and screening of newborns. Those plans could bring HLI into competition with numerous other startups and labs that offer similar services.

“It would be just one more off-the-shelf genetic testing company, if the entire motivation weren’t to build this large database,” he says. “The future game is 100 percent in data interpretation. If we are having this conversation five to 10 years from now, it’s going to be very different. It will be, ‘Look how little we knew in 2015.’”

[Those who know Craig will have little doubt that he will very rapidly become "the next-generation 23andMe". True, the trailblazing 23andMe started 9.5 years ago, when affordable technology just wasn't there to interrogate more than SNPs (Single Nucleotide Polymorphisms, max. 1.6 million bases out of the full genome of 6.2 billion bases). Now the technology of full genome sequencing is affordable. Yet there are two main issues to seriously contemplate. First, it paints a sad picture of the US that 23andMe was seriously set back by the FDA and thus cannot provide health advice - likewise, Craig had to go through South Africa and the United Kingdom to avoid shooting himself in the foot in his homeland. Second, while "exomics" (checking the integrity of the amino-acid-coding sections, the "exons") is certainly a big step ahead (there are over 5,000 known "Mendelian diseases" that can be traced back to structural variants of exons), focusing on only "less than 2%" of the genome is unlikely to yield clues for e.g. cancer, autism, auto-immune diseases etc. When Craig says "The future game is 100 percent in data interpretation", not only do I absolutely agree, but I would sharpen his focus: within that 100 percent, 99 percent of the game is understanding genome regulation - which, according to ENCODE-2, involves the "more than 80%" of the full human genome that is functional. While Craig, for personal reasons, was absent when I presented my FractoGene at his Institute (2007), based on Recursive Genome Function then already in manuscript, his Institute was preoccupied (for 15 years...) with kicking into action a marginally reduced gene-set (of the smallest DNA of free-living organisms). The assumption was that "there is not much regulation, if any, of the ~300 genes, so don't bother with it". Why did it take 15 years to kick the reduced set to life? Why is Craig's full DNA sitting on the shelves without an understanding of how it works?
The solution may lie NOT in comparing gazillions of genomes - but in better understanding a single one. NOTE [to those with domain expertise in physics]: Generating "Big Data" with a super-expensive "super-collider" is unthinkable in physics without an underlying quantum-physics model. Computers never "compare" gazillions of particle trajectories - they compute how the experimentally observed trajectories DIFFER from those predicted by models worked out over many decades. Genomics could waste any number of billions, or even trillions, of dollars by trying to skip even the basics of solid mathematical theory. Yes, such theories exist. Look up just the peer-reviewed papers; argue if you can. Provide yours if that looks better. Andras_at_Pellionisz_dot_com.]
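The model-first workflow argued for above - score each observation against a model's prediction and study the residuals, rather than comparing raw observations to each other - can be sketched in miniature. The free-fall "model" and the noisy "observations" below are invented purely for illustration:

```python
# Toy contrast with "compare everything to everything": here we score
# observations against a prediction and inspect the residuals only.
def model(t: float) -> float:
    """Hypothetical predicted trajectory: free-fall distance y = (g/2) t^2."""
    return 4.9 * t * t

# (time, observed position) pairs -- made-up noisy measurements
observations = [(0.0, 0.0), (1.0, 5.1), (2.0, 19.4), (3.0, 44.3)]

# The interesting signal is where reality departs from the model.
residuals = [y - model(t) for t, y in observations]
worst = max(abs(r) for r in residuals)
print(f"largest deviation from model: {worst:.2f}")  # prints 0.20
```

The point of the analogy: the volume of data to reason about collapses from all pairwise comparisons to a short list of model deviations, which is only possible once a predictive model exists.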

Google (NASDAQ: GOOG) Dips into Healthcare Business

Alphabet Inc., the newly formed holding company tied to Google (NASDAQ: GOOG), is also joining hands with big investment projects, one of them being advanced medical research. The list of medical companies under the Google umbrella includes Google X, the research laboratory and Calico, the biotechnology company. The life sciences team of the company is also a part of it. Yet to be given a formal name, the team is slated to work on new technologies, pushing from the R&D stage to the final clinical testing.

Alphabet has offered minimal disclosure on its healthcare initiatives. Investment banks, however, believe that the company is on its way to building a new multimillion-dollar business, and investors believe that Google's initiatives will unlock substantial value. The picture will become clearer in the fourth quarter, when the company divides its finances into two parts: Alphabet and Google Inc. Industry reports say that Google's substantial efforts reveal that it is targeting huge markets with wide-ranging projects. The advantage of such an approach is that the company can recoup its investments even with modest success.

Google's strengths lie in three major technological trends: genome sequencing, health data digitization and the shift toward paying for healthcare based on its value. The company's expertise in cloud computing helps with data digitization, and the other two are taken care of by the Life Sciences and Calico companies. The former inked a partnership with the medical device company Dexcom (NASDAQ: DXCM) to manufacture high-technology diabetes products. The market for such goods is estimated to be worth a minimum of $20 billion.

High Technology Investments

The Life Sciences team has worked on a number of projects in the past, like the nanopill (for detecting cancer), a special sensor for monitoring patients with multiple sclerosis, and a baseline study to make the most comprehensive portrait of the human genome and body. The company also worked on a contact lens with an embedded chip so that blood sugar levels can be monitored in individuals suffering from diabetes.

Investment analysts, however, are not putting any headline number on the total basket as of yet. The reason is that development in the medical field proceeds very slowly, hindered by research and regulatory unpredictability. It is estimated that during Q4, when Google demarcates its core business results from Alphabet for the first time, the company will show R&D costs in the $3 billion to $5 billion region outside Google Inc. A considerable chunk of this money will probably be spent on healthcare.

[This is a very promising preview - especially in light of the news item below. By listing "genome sequencing" as a Google strength, the journalist probably meant the more appropriate "genome analytics by Big Data" (which started some 2 years ago as "Google Genomics"). All in all (along with the news that Microsoft also threw its hat into the ring, see a couple of entries below), we are at the point predicted in 2009 (see the YouTube remark by pharma-guru Dr. Nikolich, at 104:45, "what if Microsoft would acquire a pharma company, or Google would buy and build a pharma company because it makes sense"). In a mere six years, both (and more; see Intel, GE Health, Amazon Web Services, Apple etc.) are happening to the tune of many $Billions. Dr. Insel, for instance, even at NIMH gathered genome sequence data on autism (and NCI did so for cancer). Would any NIH Institute (NIMH, NCI, or any of the 27 Institutes and Centers) at any time become a world leader in Informatics? Personally, my experience does not suggest any strong likelihood. On the other hand, there is hardly any doubt that we are already in a formative period in the IT business. Sure, it may take anything from half a year to 2 years until a full-blown IT-driven Health-Care pie is sliced out. Based on what? Mostly on cross-domain expertise - and, since this is a competitive business ruled by entrenched IP, secured Intellectual Property, I would say. andras_at_pellionisz_dot_com]

Head of Mental Health Institute Leaving for Google Life Sciences

New York Times


SEPT. 15, 2015

Thomas R. Insel (63), head of NIH-NIMH, resigns by November 2015 for Google Life Sciences

Dr. Thomas R. Insel, the director of the National Institute of Mental Health, announced on Tuesday that he planned to step down in November, ending his 13-year tenure at the helm of the world’s leading funder of behavioral-health research to join Google Life Sciences, which seeks to develop technologies for early detection and treatment of health problems.

The announcement is no small personnel matter for the behavioral sciences.

Losing Dr. Insel leaves the agency — which is growing in importance and visibility in the wake of the Obama administration’s brain initiative — with a large hole, mental health experts in and out of government said. Dr. Insel has been an agreeable, determined, politically shrewd presence at an agency that has often taken fire from advocacy groups, politicians and scientists.

In hiring him, Google, which is in the process of reorganizing into a new company called Alphabet, lands a first-rate research scientist and administrator with an exhaustive knowledge of brain and behavioral sciences. He has also recruited some of the top researchers into the brain sciences from other fields.

“Tom’s leaving is a great loss for all of us,” said Dr. E. Fuller Torrey, the executive director of the Stanley Medical Research Institute, a nonprofit that supports research into severe mental illnesses. “He refocused N.I.M.H. on its primary mission — research on the causes and better treatments for individuals with serious mental illness.”

Dr. Francis S. Collins, the director of the National Institutes of Health, appointed Dr. Bruce Cuthbert as acting director of the agency while he looks for a replacement. Dr. Cuthbert, who has held leadership positions within the N.I.M.H., has made it clear he would prefer to continue work on initiatives within the agency, rather than run it, the agency said.

In an interview, Dr. Collins said he planned to fill the position as quickly as he could, “but realistically that means six months at minimum, and maybe not until next summer.” He said he would appoint a search committee, made up of institute directors and outside experts, and would consult with Dr. Insel closely. He said that he and Dr. Insel agreed in broad terms about the direction of the agency, but that the search “would not be about zeroing in on a clone of Tom; there are others out there who will have a slightly different view and that’s fine.”

Dr. Insel’s jump to the private sector represents a clear shift in his own thinking, if not necessarily behavioral sciences as a whole.

A brain scientist who made his name studying the biology of attraction and pair bonding, Dr. Insel took over the N.I.M.H. in 2002 and steered funding toward the most severe mental disorders, like schizophrenia, and into basic biological studies, at the expense of psychosocial research, like new talk therapies. His critics — and there were plenty — often noted that biological psychiatry had contributed nothing useful yet to diagnosis or treatment, and that Dr. Insel’s commitment to basic science was a costly bet, with uncertain payoffs.

“The basic science findings are fascinating, but have failed so far to provide clinically meaningful help to a single patient,” said Dr. Allen James Frances, an emeritus professor of psychiatry at Duke University. “Meanwhile, we neglect 600,000 of the severely ill, allowing them to languish in prison or on the street because treatment and housing are shamefully underfunded.”

In his new job, Dr. Insel will do an about-face of sorts, turning back to the psychosocial realm, only this time with a new set of tools. One project he has thought about is detecting psychosis early, using language analytics — algorithms that show promise in picking up the semantic signature of the disorganized thinking characteristic of psychosis.

“The average duration of untreated psychosis in the U.S. is 74 weeks, which is outrageous, completely unacceptable,” he said in an interview. “I think it’s not unreasonable, with data analytics — Google’s sweet spot — to get that down to seven weeks, by 2020.”

Moment-to-moment mental tracking has also become a commercial reality, he said, and that technology could help identify, and more precisely address, the sources of depression and anxiety, including social interactions or sleep disruption. “The idea is to use the power of data analytics to make behavioral studies much more objective than they have been before,” he said.

[Wikipedia insert - AJP

At NIMH he quickly focused on serious mental illnesses, such as schizophrenia, bipolar illness, and major depressive disorder with a defining theme of these illnesses as disorders of brain circuits. Building on the genomics revolution, he created large repositories of DNA and funded many of the first large genotyping and sequencing efforts to identify risk genes. He established autism as a major area of focus for NIMH and led a large increase of NIH funding for autism research. Under his leadership, autism, as a developmental brain disorder, became a prototype for mental disorders, most of which also emerge during development].

["Budget cuts hit autism research" insert - AJP

Federal support for autism research quadrupled between 2003 and 2010, but those boom days are over, National Institute of Mental Health director Thomas Insel told attendees at the International Meeting for Autism Research in San Diego yesterday. The base budget for the National Institutes of Health (NIH) was slashed by $1.6 billion this year, forcing one percent cuts across the board. Meanwhile, $122 million earmarked for autism research from the American Recovery and Reinvestment Act — the stimulus bill passed early in the Obama administration — ran out in 2010. “We’re concerned and we hope that you are concerned as well,” Insel told the audience. “We are at a turning point.” In 2009, the last year for which numbers are available, the NIH funded two-thirds of the $314 million spent on autism research. This year’s cuts will affect both investigators who already have grants — which will receive one percent less than in 2010 — and those applying for funding. “We also won’t have as much as we like for new and competing grants this year,” Insel said. “We will be reducing the number of new awards very significantly.” The current Congress also appears unlikely to reauthorize the Combating Autism Act of 2006, which created the Interagency Autism Coordinating Committee, which sets priorities for government-funded research, said Insel. “There’s a bit of a taboo in Congress these days to do disease-specific authorizations,” Insel said. Public-private partnerships are one strategy to help meet the federal funding shortfall, Insel suggested.]

Google Life Sciences is already developing a contact lens that tracks glucose levels for diabetes management, and a heart activity monitor worn on the wrist. Dr. Insel’s ideas for mood and language tracking are, for now, just that — ideas. The company has not yet decided on where first to invest in mental health. [While cancer is the low-hanging fruit for genomic precision diagnosis/therapy, autism and schizophrenia are also eminent candidates. With these "genomic diseases" massive re-arrangements of genomic sequences are already proven - AJP]

When he steps down in November, Dr. Insel, 63, will have been the longest-serving director of N.I.M.H since Dr. Robert H. Felix, the agency’s founder, who left in 1964. Dr. Insel’s tenure spanned four presidential terms, during which he honed an easygoing political persona and an independent vision of the agency’s direction. He was, especially in recent years, outspoken in defense of his methodologies, at one point publicly criticizing establishment psychiatry for its system of diagnosis, which relies on observing behaviors and not any biological markers.

In taking the Life Sciences job, he and his wife, Deborah, a writer, will be moving to the Bay Area, a place they once knew well, when he spent time studying at the University of California, San Francisco. Both of their children were born in that city. But that was more than 20 years ago, and some things have changed, he said.

“We were just out there, looking for a tiny cottage,” he said. “We’re still recovering from sticker shock.”

[See comment after the 6-months old news below - AJP]


Outgoing U.S. cancer chief reflects on his record, what’s next


By Jocelyn Kaiser 5 March 2015

Nobelist Harold Varmus (75), head of NIH-NCI, resigns March 15, 2015 for the New York Genome Center (etc.)

Late yesterday afternoon, as Washington, D.C., was readying to shut down for a snowstorm, National Cancer Institute (NCI) chief and Nobel Prize–winning cancer biologist Harold Varmus announced that he is stepping down at the end of this month. Although few even on his own staff were expecting the news, it was not a big surprise coming less than 2 years before the end of the Obama administration, when many presidential appointees leave for their next job.

In a resignation letter to the research community, Varmus decried the harsh budget climate he has faced and pointed to a list of accomplishments, from creating an NCI center for global health to launching a project to find drugs targeting RAS, an important cell signaling pathway in cancer. “I think he’s done a wonderful job under difficult circumstances,” says cancer biologist Tyler Jacks of the Massachusetts Institute of Technology in Cambridge and chair of NCI’s main advisory board. “He brought tremendous scientific credibility to the position. And he managed to do some new and creative things.” NCI Deputy Director Douglas Lowy will serve as acting director.

In a phone interview this morning as the first snowflakes began to fall, Varmus reflected on his time at NCI and what he will do when he returns full time to New York City. (He has been commuting from his home there to NCI in Bethesda, Maryland.) He will run a “modestly sized” lab at Weill Cornell Medical College in New York City, Varmus wrote in his letter, as well as serve as an adviser to its dean, and work with the New York Genome Center.

[Nobelist Dr. Varmus, at the age of 75, did not overly surprise, 6 months ago, those who carefully monitor the government-to-private-sphere exodus. With Dr. Varmus' outstanding achievements, staying on as a government bureaucrat seemed less attractive than dropping his commute from NYC to DC and contributing greatly to an elite mix of local (NYC) private institutions. An entry in this column already shows that, at a "Critical Juncture in the fight against cancer", the departure of Dr. Varmus already produced some remarkable symptoms of a profound crisis at NCI. Nobody seems to deny that "cancer is a disease of the genome" - yet some are clueless (bordering on professional/ethical vulnerability) as to whether one or another theory of informatics is the way to go. Not an easy job to sort out for the head of NIH (whose Ph.D. started from quantum physics).

The case of Dr. Varmus seems "routine" compared to the totally stunning switch by Dr. Insel (63) from NIMH to Google Life Sciences. I never expected that in my lifetime I would witness the dramatic switch of another outstanding scientist, Dr. Insel. It is not just a huge move in the exodus from government bureaucracy to the for-profit private sphere. When I published in 1989 (Cambridge University Press) a Fractal Model of a Brain Cell (along with the explicit pointer that genome-driven growth of fractal structures calls for "recursive genome function", ibid. pp. 461-462), my existing NIH grant was discontinued, and my application for the new NIMH Program "Mathematical/Computational/Theoretical Neuroscience" (cited ibid. p. 462) was declined. The "reason" was that with my principle of recursive genome function I overturned BOTH cardinal axioms of Old School Genomics ("Junk DNA" AND "Central Dogma"). In all fairness to Dr. Insel, it is cardinal to point out that all this happened BEFORE he became director of NIMH.

Now we see the Director of NIMH, who stepped in shortly after the above double fiasco and turned increasingly from neuroscience to genomics (e.g. in autism), on his way to one of the biggest of the Big IT companies (Google Life Sciences - Apple is even bigger). With the help of Big IT, there is no limit to gainfully engaging the world's most sophisticated algorithmic/computing power, guided by the top domain expertise of New School Genomics/Neuroscience. In part the IP already exists; in part a beautiful mathematics is already emerging to unify neuroscience and genomics. andras_at_pellionisz_dot_com]

Bill Gates and Google back genome editing firm Editas

Wired UK (Aug. 15)

Bill Gates and Google are among some of the high-profile backers of a genome editing company that's raised $120 million (£77 million) to help develop DNA-editing technology.

According to Bloomberg, Editas Medicine Inc. has received funding from Boris Nikolic, former chief adviser for science and technology to Microsoft founder Bill Gates; Gates himself has also backed the funding.

In a statement released by the Cambridge, Massachusetts-based biotech company, it was also revealed that Nikolic has joined its board. Other notable investors in Editas include Silicon Valley's Google Ventures and venture-capital firm Deerfield Management Co.

The funding is designed to support development of Crispr-Cas9, a technology that can be used to treat potentially deadly diseases by "fixing" faulty genes. Editas is currently testing the technology to help correct eye disorders, and is collaborating with Juno Therapeutics Inc., a firm which genetically engineers immune-system cells to help fight cancer.

The pioneering technology enables scientists to "correct" the human genome by removing the malfunctioning sections of DNA -- almost like using highly precise scissors -- and putting healthy, "working" ones in their place. Unlike many other genome editing methods currently used, Crispr is relatively cheap and easy to use, attracting interest from a broad range of scientists looking to modify everything from human cells to plants.
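The "precise scissors" metaphor above can be caricatured as a find-cut-splice operation on a string. Everything here is a toy: the sequences are invented, and real Crispr-Cas9 biology (guide RNAs, PAM sites, cellular repair pathways) is vastly more involved:

```python
# Toy illustration of "cutting out" a faulty DNA stretch and splicing
# in a repaired one. DNA is modeled as a plain string of A/C/G/T.
def edit(genome: str, faulty: str, repaired: str) -> str:
    """Replace the first occurrence of `faulty` with `repaired`."""
    site = genome.find(faulty)       # locate the target site
    if site == -1:
        return genome                # no target site: leave sequence untouched
    # "cut" before and after the faulty stretch, then "splice" the repair in
    return genome[:site] + repaired + genome[site + len(faulty):]

dna = "ATGCCGTAGGCTTAA"
print(edit(dna, "TAGG", "TCGG"))  # prints ATGCCGTCGGCTTAA
```

The precision the article emphasizes corresponds to the `find` step: the edit happens only where the target sequence matches, leaving the rest of the "genome" untouched.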

However, the technology has also generated controversy, with some scientists calling for Crispr to be banned from modifying the "human germline": human sperm, eggs and embryos. Although Editas CEO Katrine Bosley said that the company is yet to begin human trials on its treatments, the company has assured that it doesn't work on the human germline.

Jim Flynn, managing partner at Deerfield, which has invested in the project to the tune of $20 million (£13.8 million), said Crispr has "broad applicability". Acknowledging the long-term goals of the company, which is joined by other genome editing firms in the field such as Intellia Therapeutics Inc. and Precision BioSciences Inc., Bosley commented: "This is a marathon that we are in here, and all of these investors understand that."

[A totally new dimension is opened for the HolGenTech, Inc. logo "Ask what you can do for your genome"! With FractoGene ("Fractal DNA grows fractal organisms") there is already a potential to find "fractal defects" in the full human DNA (defects that can change e.g. a fractal lung into one with a cancerous tumor - by the way, cancerous growth is also fractal, but defective). Many may ask (it is worth writing a book on the subject) "what can I do for my genome?" Genome Editing, while already a reality in rudimentary form, should not be misinterpreted as an immediate quick solution; it does, however, turn the presently rather lethargic attitude into a hopeful stance. It is a matter of the will of the medical community, the amount of resources devoted, and the time required to carry "Genome Editing" into regular medical practice. It may take years or even decades. Just think, however: if Steve Jobs had had more years and Apple had devoted at least a few percent of its resources, we could all be better off already. Bill Gates apparently fully understands the issue! - andras_at_pellionisz_dot_com]

Zephyr Health grabs $17.5M with infusion from Google Ventures

This health data company expands Google Venture’s portfolio as healthtech becomes top focus of the VC’s investments

Zephyr Health, an up-and-coming health data company, has just completed a new funding round of $17.5 million with the lead investment coming from Google Ventures. The company to date has raised $33.5 million, including participation from investors Kleiner Perkins Caufield & Byers and Icon Ventures.

The company collects data via its Illuminate solution from multiple sources (epidemiology data, sales and profile data for healthcare providers (HCPs), and hospitals according to Zephyr’s website) in order to better inform health professionals on appropriate treatment regimens for patients. The data can also be used to measure which drugs and products are more popular by region and to adjust their sales strategies accordingly with predictive analytics. Their data sync can also integrate with other office organizing software like Salesforce and Oracle according to the company.

Google Ventures increases focus on Health Startups

Google Ventures touts having invested in over 300 companies, comprising a very diverse portfolio up to this point. According to the VC’s website they have “a unique focus on machine learning and life science investing.” The health section of GV’s portfolio jumped from the smallest to largest recipient of its funds between 2012 and 2014. That shift might be reflected in the growth of Google’s Life Sciences division in 2013, which could be poised for more growth following the corporate shakeup that gave birth to Alphabet Inc. two weeks ago.

The health section of that investment strategy is hearty. The VC lists Bill Maris, Krishna Yeshwant, Blake Byers, Scott Davis, Anthony Philippakis and Ben Robbins among its top investing partners. GV has invested in genetics startup 23andme, oncology data company Flatiron, genomic treatment firm Editas Medicine and several more. Flatiron itself has been the recipient of $100 million in Google Ventures investments.

Zephyr counts Genentech, Gilead, Medtronic, Onyx and Amgen among its corporate clients.

The company was founded in 2011 by CEO William King. While the company has its headquarters in San Francisco, they maintain offices in London and Pune, India.


Google co-founder and Alphabet president Sergey Brin published a blog post this morning announcing Life Sciences as the first new company created under the Alphabet umbrella. The move was expected, as Alphabet CEO Larry Page wrote during the announcement of the new holding company that this area of focus was the perfect example of why Google needed to restructure itself. It's a bold bet with an enormous potential reward, but one that is far removed from Google's core business, and not likely to be financially self-sustaining anytime soon.

There are a number of already public projects that will be rolled up into life sciences:

Smart contact lenses that can monitor the blood sugar levels of diabetics

Nanoparticles that can be used to detect and fight cancer

A baseline study that will create the richest portrait yet of the human body and genome

The Life Sciences company will be headed up by Andy Conrad, who was previously the head of....Google Life Sciences. Not much will change under Alphabet, in other words, besides a shuffling of titles and corporate structure. Still, there is no denying that the company's goals are an exciting use of Google's ample profits for humanity, if perhaps not as appealing to investors in Google's advertising business.

"They’ll continue to work with other life sciences companies to move new technologies from early stage R&D to clinical testing — and, hopefully — transform the way we detect, prevent, and manage disease," wrote Brin. "The team is relatively new but very diverse, including software engineers, oncologists, and optics experts. This is the type of company we hope will thrive as part of Alphabet and I can’t wait to see what they do next."

Update: This post originally stated that Calico would be part of the new Life Sciences company. Calico was already an independent company from Google and will remain that way under Alphabet. It will not be rolled up into Life Sciences.

What About the Moon?

Aug 13, 2015

Google's reorganization as Alphabet has left many people wondering just what the move means for the company's various ventures, including its biotech aspirations.

As FierceBiotech notes, this restructuring could open up the company's 'moonshot' ventures, including Calico, Google's project aimed at exploring human longevity, to the scrutiny of investors.

Currently, Calico benefits from an undisclosed budget and a "long leash," FierceBiotech says. At Forbes, Matthew Herper notes that the company, headed by Arthur Levinson, has "been incredibly quiet, and deliberate, and I have no idea what they're doing."

He adds that Calico is "stocked with world-class scientists, people like David Botstein, who helped invent the science of genomics, and Cynthia Kenyon, one of the world's top aging researchers."

But as re/code reports, part of this rearrangement at Google is to increase transparency. And Forbes' Herper, among others, wonders how the glare of investors might affect the prospect of moonshot projects like Calico.

"[G]iving investors a view of how the base business is working through separate financial reports will help calm their nerves," he says, "But do the pitchforks ever come out from the myopic crowds? Could Calico ever be stuck in the terrible, deceitful purgatory of the biotechnology industry, where companies try to break up years-long scientific endeavors into quarterly bites?"

In an email, Levinson tells Herper not to worry, as he doesn't expect "Calico's mission, directions or goals (either near or long-term)" to change because of the restructuring.

In the end, Herper says that the restructuring itself may not matter. "It's a dramatic way for [Google's Sergey] Brin and [Larry] Page to say that they will remake their company to protect their bets on alpha, their moonshots, that that stuff isn't changing," he says.

[The massive reorganization of traditional Google, creating Alphabet, Inc. with subsidiaries such as the (new) "G" Google and "L" Life, further distancing the spin-off Calico (etc.), might take a while. It seems presently unknown, for instance, how Google Genomics will emerge from the reorg. It is unlikely to remain part of the "core business" ("G", Google). "Getting a little bit pregnant with genomics" has happened to quite a number of companies that I closely witnessed over decades. Thus, to me as a presently neutral genome-informatics domain expert, the options appear to be the following: (a) Since "G" in the Alphabet is already taken (by Google search/advertisement), HoloGenomics "H" might become one of the subsidiaries, deemed presently marginally profitable but "the next big thing with extremely lucrative profits". (b) A lesser alternative is to put Google Genomics under "Life". (c) An even weaker option is to have it tucked underneath "Cloud". (d) The default is to abort Genome Informatics altogether. In the "Cloud space", Amazon Genomics is already doing better, and Apple is already claiming the most lucrative software/cloud/smartphone "combination slice" of the Genome pie (see their announcement on July 15 in this column, together with the new information that Apple, in addition to the "Spaceship" 2nd HQ ready next year, bought yet another campus of 47 acres with the office capacity of the "Spaceship"). Which of the four Big IT companies in the USA (Google, Apple, Amazon or Microsoft) becomes the winner in the genome space largely depends on an issue long neglected by companies that have "gotten a little bit pregnant with genomics". A proper "mother" company needs to ensure not only a massive amount of resources, but has to be blessed with the "domain expertise" to carry such a baby to term. There is a noteworthy historical precedent.
"Nuclear Technology" (peaceful or not) could have happened either in Germany or in the USA, and "grey matter" made for the fortunate outcome. Success depended on which power could recruit the best of the "Copenhagen group" of quantum mechanics. Without a breakthrough theory it would have been not only foolish but utterly dangerous to start "nuclear technology" lacking the underpinning of "nuclear physics" - andras_at_pellionisz_dot_com]

Evolution 2.0 by Perry Marshall

[The Book and the Website]

"Armed with computer science and electrical engineering, Perry fights an uphill battle to unite the space between those who believe evolution is random and those who believe species are designed by God, who in some cases deny evolution itself. Some will never yield their 'God-given right to be atheists'. For them, Perry's fluid reasoning, his vivid, readable explanations, easy style and enjoyable storytelling may be deemed 'unreasonable' or 'argued to death'.

Unless, of course, someone wins the technology prize (capped at $10 Million)! Should that happen, nobody will argue with success. Until then, people will be debating this book for years.

Judge the book by the science within its pages - and enjoy the story"

This is how I (with further degrees, one also a Ph.D. in biology) endorsed Perry's book. Both Perry Marshall and I agree on the key tenet that "Evolution is a fact". Nonetheless, nobody, not even "Evolution experts in biology," is satisfied with "Evolution as a theory" - in fact, "Evolution expert biologists" fiercely fight one another; see the respective blog in-fights. Darwin's simplistic concept of "random mutation & natural selection" hardly satisfies anyone in our times. Thus the real and admirably daring question is depicted by the diagram of Evolution "version two" - where the O-shaped figure contains two "designs". The inner core is the man-made design of a mechanical clock, while the outer shell is a Nature-generated fractal design. Most readers will readily see that my FractoGene ("fractal DNA grows fractal organisms") opts for the latter.

With a few eminently readable "family conversations," Perry demolishes the misbelief that an apparently complex pattern can conveniently be dismissed with the label "random". Read on his page 281:

"My own musical sweet spot is an odd place where hard rock overlaps with jazz. One day I had the music cranked up, playing a rock/jazz piece that's right in my zone. My wife walks in the room. "Will you please turn that down?" "Oh, you don't like the distorted guitars?" "I don't mind the guitar all that much actually. But I can hear the entire bass line in the other room and I can't stand the randomness."

"Randomness?! That's not random. It's fractal!"

Perry Marshall links, at the bottom of the same page (281), to my 2002 website "The Evolution Revolution", and cites for the fractal mathematics a textbook chapter (Pellionisz et al., 2013) co-authored e.g. with Jean-Claude Perez (France), which presents a direct new line of evidence for the fractality of the genome.

Genome researchers raise alarm over big data

Storing and processing genome data will exceed the computing challenges of running YouTube and Twitter, biologists warn.

Erika Check Hayden

07 July 2015

The computing resources needed to handle genome data will soon exceed those of Twitter and YouTube, says a team of biologists and computer scientists who are worried that their discipline is not geared up to cope with the coming genomics flood.

Other computing experts say that such a comparison with other ‘big data’ areas is not convincing and a little glib. But they agree that the computing needs of genomics will be enormous as sequencing costs drop and ever more genomes are analysed.

By 2025, between 100 million and 2 billion human genomes could have been sequenced, according to the report, which is published in the journal PLoS Biology. The data-storage demands for this alone could run to as much as 2–40 exabytes (1 exabyte is 10^18 bytes), because the amount of data that must be stored for a single genome is about 30 times larger than the size of the genome itself, to make up for errors incurred during sequencing and preliminary analysis.

The team says that this outstrips YouTube’s projected annual storage needs of 1–2 exabytes of video by 2025 and Twitter’s projected 1–17 petabytes per year (1 petabyte is 10^15 bytes). It even exceeds the 1 exabyte per year projected for what will be the world’s largest astronomy project, the Square Kilometre Array, to be sited in South Africa and Australia. But storage is only a small part of the problem: the paper argues that computing requirements for acquiring, distributing and analysing genomics data may be even more demanding.
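As a sanity check, the storage range quoted above can be reproduced with back-of-envelope arithmetic. The assumptions here (roughly 3.2 billion bases per genome, about 2 bits per compressed base) are not from the article; only the 30-fold overhead is:

```python
# Back-of-envelope check of the storage range quoted above.
# Assumptions (not from the article): ~3.2e9 bases per human genome and
# ~2 bits (0.25 bytes) per compressed base call; the 30-fold overhead
# for raw reads and intermediate files is the article's figure.
GENOME_BASES = 3.2e9
BYTES_PER_BASE = 0.25   # ~2-bit encoding of A/C/G/T
OVERHEAD = 30           # stored data vs. finished genome, per the article

def storage_exabytes(n_genomes):
    """Total storage in exabytes (1 EB = 1e18 bytes) for n genomes."""
    return n_genomes * GENOME_BASES * BYTES_PER_BASE * OVERHEAD / 1e18

print(f"{storage_exabytes(100e6):.1f} EB")  # 100 million genomes
print(f"{storage_exabytes(2e9):.1f} EB")    # 2 billion genomes
```

Under these assumptions the result lands in the same ballpark as the article's 2–40 exabyte range.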

Major change

“This serves as a clarion call that genomics is going to pose some severe challenges,” says biologist Gene Robinson from the University of Illinois at Urbana-Champaign (UIUC), a co-author of the paper. “Some major change is going to need to happen to handle the volume of data and speed of analysis that will be required.”

Narayan Desai, a computer scientist at communications giant Ericsson in San Jose, California, is not impressed by the way the study compares the demands of other disciplines. “This isn’t a particularly credible analysis,” he says. Desai points out that the paper gives short shrift to the way in which other disciplines handle the data they collect — for instance, the paper underestimates the processing and analysis aspects of the video and text data collected and distributed by Twitter and YouTube, such as advertisement targeting and serving videos to diverse formats.

Nevertheless, Desai says, genomics will have to address the fundamental question of how much data it should generate. “The world has a limited capacity for data collection and analysis, and it should be used well. Because of the accessibility of sequencing, the explosive growth of the community has occurred in a largely decentralized fashion, which can't easily address questions like this," he says. Other resource-intensive disciplines, such as high-energy physics, are more centralized; they “require coordination and consensus for instrument design, data collection and sampling strategies”, he adds. But genomics data sets are more balkanized, despite the recent interest of cloud-computing companies in centrally storing large amounts of genomics data.

Coordinated approach

Astronomers and high-energy physicists process much of their raw data soon after collection and then discard them, which simplifies later steps such as distribution and analysis. But genomics does not yet have standards for converting raw sequence data into processed data.

The variety of analyses that biologists want to perform in genomics is also uniquely large, the authors write, and current methods for performing these analyses will not necessarily translate well as the volume of such data rises. For instance, comparing two genomes requires comparing two sets of genetic variants. “If you have a million genomes, you’re talking about a million-squared pairwise comparisons,” says Saurabh Sinha, a computer scientist at the UIUC and a co-author of the paper. “The algorithms for doing that are going to scale badly.”
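Sinha's scaling point can be made concrete: the number of all-pairs comparisons grows quadratically, so a thousandfold increase in genomes means roughly a millionfold increase in comparisons. A minimal illustration:

```python
# Why all-pairs genome comparison scales badly: the number of pairwise
# comparisons grows quadratically with the number of genomes.
def n_pairs(n_genomes):
    """Number of unordered pairs among n genomes: n * (n - 1) / 2."""
    return n_genomes * (n_genomes - 1) // 2

for n in (1_000, 1_000_000):
    print(f"{n:>9,} genomes -> {n_pairs(n):,} pairwise comparisons")
```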

Observational cosmologist Robert Brunner, also at the UIUC, says that, rather than comparing disciplines, he would have liked to have seen a call to arms for big-data problems that span disciplines and that could benefit from a coordinated approach — such as the relative dearth of career paths for computational specialists in science, and the need for specialized types of storage and analysis capacity that will not necessarily be met by industrial providers.

“Genomics poses some of the same challenges as astronomy, atmospheric science, crop science, particle physics and whatever big-data domain you want to think about,” Brunner says. “The real thing to do here is to say what are things in common that we can work together to solve.”

Nature doi:10.1038/nature.2015.17912

[During the summer of 2015 practically all Big IT companies of the world signed up for "Genomics turned Informatics" - originally heralded by LeRoy Hood, 2002. The line-up is marked by Microsoft now also joining the fray in Silicon Valley, alongside Intel, Apple and a reorganized Google Genomics, all claiming a slice of the silicon pie. The analysis will show how the present challenge differs from previous disruptive science/technology endeavors; it needs much more cohesion than at any time in the history of basic-science breakthroughs translated into immediate applications - Andras_at_Pellionisz_dot_com]

Intricate DNA flips, swaps found in people with autism

Jessica Wright

A surprisingly large proportion of people with autism have complex rearrangements of their chromosomes that were missed by conventional genetic screening, researchers reported 2 July in the American Journal of Human Genetics [1].

The study does not reveal whether these aberrations are more common in people with autism than in unaffected individuals. But similar chromosomal rearrangements that either duplicate or delete stretches of DNA, called copy number variations, are important contributors to autism as well as to other neuropsychiatric disorders. These more complex variations are likely to be no different, says lead researcher Michael Talkowski, assistant professor of neurology at Harvard University.

Talkowski’s team found intricate cases of molecular origami in which two duplications flank another type of structural variation, such as an inversion or deletion.

“This is going to become an important class of variation to study in autism, long term,” Talkowski says.

The finding is particularly important because current methods of genetic analysis are not equipped to detect this type of chromosomal scrambling. The go-to method for clinical testing — which compares chopped-up fragments of an individual’s DNA with a reference genome on a chip — can spot duplications or deletions. But this method cannot tell when a DNA sequence has been flipped or moved from one chromosomal location to another, for example.

Variations like this even confound genome-sequencing technologies. Last year, for example, researchers published the results of two massive projects that sequenced every gene in thousands of people with autism. But because these genetic jumbles often fall outside gene-coding regions, they remained unnoticed.

“The complexity of genomic variation is far beyond what current genomic sequencing can see,” says James Lupski, professor of molecular and human genetics at the Baylor College of Medicine in Houston, Texas, who was not involved in the study. “We don't have the analysis tools to see it, even though it's right there before our very eyes.”

Complex chromosomes:

Researchers have long had hints that complex variations exist, but they had no idea how prevalent they are. In 2012, using a method that provides a rough picture of the shape of chromosomes, Talkowski and his team found pieces of DNA swapped between chromosomes in 38 children who have either autism or another neurodevelopmental disorder [2].

Lupski’s team also found examples in which two duplications bracket a region that appears in triplicate [3]. Then last year, Talkowski and his colleagues reported one example of a chromosomal duplication that flanks a flipped, or inverted, section of DNA [4].

In the new study, the researchers looked at 259 individuals with autism and found that as many as 21, or 8 percent, harbor this type of duplication-inversion-duplication pattern. And a nearly equal number of individuals have other forms of rearrangement, such as deleted segments sandwiched between duplications.

The researchers were able to reveal these complex variants by sequencing each genome in its entirety. The traditional method chops up the genome into fragments that are about 100 bases long. When mapped back to a reference genome, however, these short fragments may miss small duplications or rearrangements.

The new method instead generates larger fragments, containing roughly 3,700 nucleotides apiece. Scientists then sequence the 100 nucleotides at the ends of each fragment. When mapped back to a reference genome, the large fragments reveal structural changes. For example, when a pair of sequenced ends brackets more DNA than is found in the reference sequence, that fragment may contain a duplication.

Because the approach generates multiple overlapping fragments, researchers also end up with about 100 pieces of sequence that include the junctions, or borders, of the rearranged fragments. The abundance of overlapping sequences provides significantly more detail than the standard method, which covers each nucleotide only a few times.
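The mapping logic described above can be sketched as a toy classifier. The expected fragment length (about 3,700 nucleotides) comes from the article; the tolerance and labels are hypothetical, and real pipelines model the full insert-size distribution rather than a hard cutoff:

```python
# Toy structural-variant flagging from mate-pair mapping, sketching the
# logic described above. EXPECTED_INSERT is the article's figure; the
# TOLERANCE and category names are illustrative assumptions.
EXPECTED_INSERT = 3700   # nucleotides spanned by a concordant fragment
TOLERANCE = 500          # hypothetical allowance for normal variation

def classify_pair(ref_span, same_orientation):
    """Classify one mate pair by the distance its ends span on the reference."""
    if not same_orientation:
        return "possible inversion"             # one end maps flipped
    if ref_span > EXPECTED_INSERT + TOLERANCE:
        return "possible deletion in sample"    # reference holds extra DNA
    if ref_span < EXPECTED_INSERT - TOLERANCE:
        return "possible insertion/duplication" # ends bracket extra sample DNA
    return "concordant"

print(classify_pair(3700, True))
print(classify_pair(9000, True))
print(classify_pair(1200, False))
```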

“The researchers have found a more novel way to sequence and dug in to an insane degree — it’s work that almost no one else would want to try to attempt, because it’s so difficult,” says Michael Ronemus, research assistant professor at Cold Spring Harbor Laboratory in New York, who was not involved in the study. “The findings give us a sense of how common these things might be in human genomes in general.”

Whether these rearrangements are important contributors to autism and neurodevelopmental disorders is still an open question — one that Talkowski and his colleagues are gearing up to address. The genomes they sequenced came from the Simons Simplex Collection, a database that includes the DNA of children with autism and their unaffected parents and siblings. (The collection is funded by the Simons Foundation, this site's parent organization.)

The researchers are using their methods to sequence the genomes of the children’s relatives. This experiment will reveal whether complex variants are more common in people with autism than in unaffected family members.

Already, there are hints that the rearrangements contribute to autism risk in some individuals. Overall, the variants in the study duplicate 27 genes, introduce 3 mutations and in one case fuse two genes together. (The particular genes involved depend on where the mix-up occurs in the genome.) Sequencing studies have tied one of the duplicated genes, AMBP, to autism. And a regulatory gene that is disrupted by the rearrangement, AUTS2, also has strong links to the disorder.

News and Opinion articles on this site are editorially independent of the Simons Foundation.


1: Brand H. et al. Am. J. Hum. Genet. 97, 170-176 (2015) PubMed

2: Talkowski M.E. et al. Cell 149, 525-537 (2012) PubMed

3: Carvalho C.M. et al. Nat. Genet. 43, 1074-1081 (2011) PubMed

4: Brand H. et al. Am. J. Hum. Genet. 95, 454-461 (2014) PubMed

The case for copy number variations in autism

Meredith Wadman

17 March 2008

Following a series of papers in the past two years, what seems irrefutable is that copy number variations ― in which a particular stretch of DNA is either deleted or duplicated ― are important in autism [1, 2].

Already, "CNVs are the most common cause of autism that we can identify today, by far," notes Arthur Beaudet, a geneticist at the Baylor College of Medicine in Houston.

What confronts researchers now is uncovering when and how CNVs influence autism. Do these variations cause the disease directly by altering key genes, or indirectly, in combination with other distant genes, or are they coincidental observations with no link to the disease?

The answer seems to be all of the above.

"In some cases these CNVs are causing autism; in some they are adding to its complexity; and in some they are incidental," says Stephen Scherer, director of the Center for Applied Genomics at The Hospital for Sick Children in Toronto. "We need to figure out which are which."

In February, Scherer published the latest CNV paper identifying 277 CNVs in 427 unrelated individuals with autism [3]. In 27 of these patients, the CNVs are de novo, meaning that they appear in children with autism, but not in their healthy parents.

Among the key findings in that paper are de novo CNVs on chromosome 16, at the same spot previously identified by a report published in January by Mark Daly and his colleagues.

Hot spots:

Different teams have documented a few of these 'hot spots' on the genome where CNVs are seen in up to one percent of people with autism ― and virtually never in those without it.

There are intriguing suggestions that CNVs uncovered at these hot spots may not be autism-specific. For example, three of the patients found to have a duplication on chromosome 16 in the January paper have been diagnosed with developmental delay and not autism.

A laundry list of other CNVs has each been identified in only a single individual with autism, making it difficult to tag them as a cause of the disease.

"[When] people publish big lists of regions, there's an implicit thing that if my kid has this, it's going to have autism," says Evan Eichler, a Howard Hughes Medical Institute investigator at the University of Washington in Seattle. But, "there's no proof," he notes.

To replicate lone findings in other individuals with autism, some researchers are trying to screen much larger samples of individuals with autism.

"Screening 5,000 families instead of 500 would really be of huge benefit," says Jonathan Sebat of the Cold Spring Harbor Laboratory in New York. Sebat and Mike Wigler propelled the field forward last year with a high-profile list of de novo CNVs [4]. Their team is gearing up to scan 1,500 families with just one affected child ― in whom de novo mutations are more likely to turn up.

Scherer's group is screening the most promising CNVs from their February paper ― those they identified in two or more unrelated people, or that overlap with a gene already suspected in autism ― in a larger sample of nearly 1,000 patients.

Complex scenarios:

The team is drilling down to find smaller changes: deletions or duplications shorter than 1,000 bases in length. But the answers are unlikely to be simple.

For instance, Scherer found one 277 kilobase deletion at the tip of chromosome 22 in a girl with autism. Another team had reported in 2006 [5] that mutations in this region cause autism in several families by crippling one of the body's two copies of the gene coding for SHANK3, a protein that is crucial for healthy communication between brain cells. In the same girl, however, Scherer also found something new: a duplication of a chunk of genome on chromosome 20 that is five times as big as the deletion on chromosome 22.

If the chromosome 22 deletion hadn't already been documented ― and if Scherer's study hadn't resolved down to 277 kilobases ― it would have been easy to assume that the chromosome 20 duplication was entirely responsible for the girl's autism.

As it stands, however, "probably some of the genes that are being duplicated on chromosome 20 are adding complexity to her autism," Scherer says, noting that the girl's symptoms include epilepsy and abnormal physical features.

The fact that the same hot spot has been implicated in different cognitive disorders adds to the complexity. A given CNV "is not always associated just with autism," says Eichler. "That's what's messing with people's minds."

Eichler raises another issue that researchers need to resolve: nomenclature.

Copy number variations are a subset of a bigger category of mutations called structural variations. These include other changes such as inversions and translocations of large chunks of sequence, which don't lead to a net gain or loss in sequence as deletions and duplications do, but can still have significant consequences for cognitive function [6].

"Copy number is not as good a term," says Eichler. "Structural variation includes inversion and translocation, [and is] a much more encompassing term."


1: Jacquemont M.L. et al. J. Med. Genet. 43, 843-849 (2006) PubMed

2: Weiss L.A. et al. N. Engl. J. Med. 358, 667-675 (2008) PubMed

3: Marshall C.R. et al. Am. J. Hum. Genet. 82, 477-488 (2008) PubMed

4: Sebat J. et al. Science 316, 445-449 (2007) PubMed

5: Durand C.M. et al. Nat. Genet. 39, 25-27 (2006) PubMed

6: Bonaglia M.C. et al. Am. J. Hum. Genet. 69, 261-268 (2001) PubMed

[A biophysicist to mathematicians: Please note that this article, holding the conclusion "irrefutable is that copy number variations ― in which a particular stretch of DNA is either deleted or duplicated ― are important in autism," originated in 2008 - the proverbial seven years ago. Biophysicists are overjoyed when the eminently measurable "repeats" are "irrefutably" linked to "mysterious" diseases, such as autism, cancer and a slew of auto-immune diseases; see the summary in Pellionisz (2012), Pellionisz et al. (2013). Gaining a mathematical handle, indeed, is a major step towards software-enabling algorithms that engage vast computer power to unlock "genomic mysteries". However, mathematicians tend to drill down to the definition of any new mathematical-looking entity. In the seven years since the above article, CNV (Copy Number Variation) has not been mathematically defined in a generally accepted manner. Some "define" a "copy" as a string composed of 1,000 bases; others define a "copy" composed of 10,000, 100,000, or even 1,000,000 bases. Too many "definitions" is "no definition". FractoGene is based on the universally accepted fact that the human genome is replete with repeats of different lengths - and since Pellionisz (2009) the measurable characteristics of control versus diseased genomes are their Zipf-Mandelbrot-Fractal-Parabolic Distribution Curves. After the proverbial seven years, we stand ready for deployment. Andras_at_Pellionisz_com]
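To illustrate the kind of measurement this comment invokes, here is a deliberately naive sketch that tallies the total lengths of exact tandem repeats in a toy sequence and ranks them by frequency. The repeat finder, motif size and sequence are illustrative assumptions only, not the published FractoGene method:

```python
# Naive illustration: collect the total lengths of exact 2-base tandem
# repeats in a toy sequence, then rank repeat lengths by occurrence.
# The sequence and repeat finder are illustrative only.
from collections import Counter

def tandem_repeat_lengths(seq, unit=2, min_copies=3):
    """Collect total lengths of runs where a unit-base motif repeats."""
    lengths, i = [], 0
    while i + unit <= len(seq):
        motif, j = seq[i:i + unit], i + unit
        while seq[j:j + unit] == motif:
            j += unit
        if (j - i) // unit >= min_copies:
            lengths.append(j - i)   # total bases covered by the run
            i = j
        else:
            i += 1
    return lengths

seq = "ACACACACGT" * 3 + "TGTGTGTGTGTG" + "AGAGAG"
counts = Counter(tandem_repeat_lengths(seq))
for rank, (length, freq) in enumerate(counts.most_common(), 1):
    print(f"rank {rank}: repeat length {length} occurs {freq} time(s)")
```

A rank-frequency table like this, plotted on log-log axes for a whole genome, is the sort of distribution curve the comment refers to.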

The mystery of the instant noodle chromosomes

July 23, 2015

An example of a hierarchically folded globule. Credit: L. Nazarov

A group of researchers from the Lomonosov Moscow State University has tried to address one of the least understood issues in modern molecular biology: how strands of DNA pack themselves into the cell nucleus. The scientists concluded that packing the genome in a special state called the "fractal globule", apart from the other known advantages of this state, allows the genetic machinery of the cell to operate at maximum speed due to comparatively rapid thermal diffusion. The article describing their results was published in Physical Review Letters, one of the most prestigious physics journals, with an impact factor of 7.8.

"Fractal globule" is a mathematical term. If you drop a long spinning fishing line on the floor, it will immediately collapse into such an unimaginably vile tangle that you will either have to unravel it for hours or run to the store for a new one. An entangled state like this is an example of the so-called equilibrium globule. The fractal globule is a much more convenient state. Sticking to the fishing-line example, a fractal globule is a lump where the line is never fastened in a knot; instead it is just curled into a series of loops, with no loops tangled with each other. Such a structure - a set of free loops of different sizes - can be unraveled by just pulling its two ends.

Due to this structure of loops, or crumples, which resembles the structure of an instant-noodle block, the Soviet physicists Alexander Grosberg, Sergey Nechayev and Eugene Shakhnovich, who first predicted it back in 1988, named this structure the "crumpled globule". In recent years it is more often called a fractal globule. On the one hand, the new name just sounds more sophisticated and serious than "crumpled globule"; on the other hand, it fully reflects the properties of such a globule because, like all fractals, its structure - in this case a set of loops of different sizes - is repeated at small and large scales.
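A common two-dimensional stand-in for such an unknotted, space-filling fold (an analogy not from this article) is the Hilbert curve: a path that fills a grid densely, never crosses itself, and keeps neighbors along the chain close in space. A minimal sketch using the standard distance-to-coordinate conversion:

```python
# Standard Hilbert-curve distance-to-coordinate conversion (d2xy),
# used here as a toy 2D analogue of an unknotted fractal globule.
def hilbert_d2xy(order, d):
    """Map distance d along a Hilbert curve on a 2**order x 2**order grid to (x, y)."""
    x = y = 0
    t, s, n = d, 1, 1 << order
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:              # rotate the quadrant if needed
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

order = 3                        # an 8 x 8 grid of 64 "monomers"
n = 1 << order
path = [hilbert_d2xy(order, d) for d in range(n * n)]
adjacent = all(abs(x1 - x2) + abs(y1 - y2) == 1
               for (x1, y1), (x2, y2) in zip(path, path[1:]))
print(f"every cell visited once: {len(set(path)) == n * n}; chain never jumps: {adjacent}")
```

The check at the end verifies the two properties that make the analogy work: the path covers the whole territory, yet consecutive "monomers" always stay spatial neighbors.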

For a long time the predicted crumpled-globule state remained a purely theoretical object. However, the results of recent studies indicate that the chromosomes in the cell nucleus may indeed be packed into a fractal globule. There is no consensus on this issue in the scientific community, but the specialists working in this area are much intrigued by this possibility, and during the last 5-7 years there has been a flood of research on fractal-globule packing of the genome.

The idea that chromatin (that is to say, a long strand consisting of DNA and attached proteins) in a cell nucleus may be organized as a fractal globule makes intuitive sense. Indeed, the chromatin is essentially a huge library containing all the hereditary information "known" to a cell - in particular, all the information about the synthesis of all the proteins the organism is in principle able to produce. It seems natural that such a huge amount of data, which should be preserved and kept readable in a predictable way, should be somehow organized. It makes no sense to let the strands carrying different parts of the information become entangled and knotted around each other; such an action seems akin to gluing or tying together the volumes in a library: obviously, it makes the contents of the books much less accessible to a visitor.

In addition, it seems natural that a strand in a fractal globule has, in the absence of knots, a greater freedom of movement, which is important for genome function: gene transcription regulation requires that individual parts of the genome meet each other at the right time, "activating" the signal for reading the entire system and indicating the place where reading should start. Moreover, all of this must happen quickly enough.

"According to the existing theories, if the polymer chain is folded into a regular equilibrium globule, the mean square of the chain link's thermal displacement increases with time as time to the power 0.25," says Mikhail Tamm, a senior researcher at the Department of Polymer and Crystal Physics at the Physics Faculty of the Lomonosov Moscow State University.

According to Mikhail Tamm, he and his colleagues managed to come up with a somewhat similar theory for a link of a polymer chain folded in a fractal globule.

"We were able to evaluate the thermal dynamics inherent to this type of conformation. The computer simulations we have conducted are in good agreement with our theoretical result," says Mikhail Tamm.

Scientists from the Lomonosov Moscow State University developed a computer modeling algorithm that makes it possible to prepare a chromatin chain packed into the fractal globule state and to monitor the thermal processes taking place there. Importantly, they managed to model a very long chain, consisting of a quarter of a million units - the longest modeled so far.

According to Mikhail Tamm, chains in the modeling need to be long in order to get meaningful results, but modeling of long chains is usually hampered by the fact that it takes them a very long time to equilibrate, while without proper equilibration the results on thermal diffusion as well as other characteristics of the chains are unreliable.

The researchers solved this problem by combining properly constructed software with CPU time on the MSU supercomputer "Lomonosov", and assessed the dynamics of thermal motion in a fractal globule. It was found that the links of a chromatin chain packed into a fractal globule move faster than in a comparable equilibrium one. Indeed, the mean square thermal displacement of a link no longer grows as time to the power 0.25, but as time to the power 0.4, which means the movement of the links turns out to be much faster. This seems to be an additional argument in support of the fractal globule model of chromatin.
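The two scaling laws quoted here can be compared with a toy calculation. The sketch below is purely illustrative (it is not the MSU simulation, and the unit prefactor is an arbitrary assumption): it shows how quickly the t^0.4 law for a fractal globule outpaces the t^0.25 law for an equilibrium one.

```python
# Illustrative comparison of subdiffusive mean-square-displacement (MSD)
# scaling: equilibrium globule (MSD ~ t^0.25) vs fractal globule (MSD ~ t^0.4).
def msd(t, alpha, prefactor=1.0):
    """Mean square displacement under anomalous diffusion: MSD = A * t**alpha."""
    return prefactor * t ** alpha

for t in (10, 1_000, 100_000):
    eq = msd(t, 0.25)   # equilibrium globule exponent
    fg = msd(t, 0.40)   # fractal globule exponent (the new result)
    print(f"t = {t:>7}: fractal/equilibrium MSD ratio = {fg / eq:.1f}")
```

The ratio grows as t^0.15, so over long times the difference in mobility becomes substantial.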

The researchers hope that their work will help to provide better insight into the functioning of the gene storage and expression machinery in the cell nucleus.

"From the point of view of dynamics, we would like to understand what the built-in characteristic times are, what processes can occur simply due to thermal motion, and which ones inevitably require active elements to speed up the functioning of DNA," summed up Mikhail Tamm.

More information: Physical Review Letters DOI: 10.1103/PhysRevLett.114.178102

Can ‘jumping genes’ cause cancer chaos?

Category: Science blog July 10, 2015 Kat Arney

[Fig. 2. of the science article linked below]

Statistically speaking, your genome is mostly junk.

Less than two per cent of it is made up of actual genes – stretches of DNA carrying instructions that tell cells to make protein molecules. A larger (and hotly debated) proportion is given over to regulatory ‘control switches’, responsible for turning genes on and off at the right time and in the right place. There are also lots of sequences that are used to produce what’s known as ‘non-coding RNA’. And then there’s a whole lot that is just boring and repetitive.

As an example, the human genome is peppered with more than half a million copies of a repeated virus-like sequence called Line-1 (also known as L1).

Usually these L1 repeats just sit there, passively padding out our DNA. But a new study from our researchers in Cambridge suggests that they can start jumping around within the genome, potentially contributing to the genetic chaos underpinning oesophageal cancer.

Let’s take a closer look at these so-called ‘jumping genes’, and how they might be implicated in cancer.

Genes on the hop

The secret of L1’s success is that it’s a transposon – the more formal name for a jumping gene. These wandering elements were first discovered in plants by the remarkable Nobel prize-winning scientist Barbara McClintock, back in 1950. [As we know, Barbara McClintock's discovery was denied in the most unprofessional manner from 1950 till 1983, when she received her Nobel Prize. Thirty-three years (a full generation) of denial was so bad that Dr. McClintock could consider herself lucky to have survived it. The set-back to science, however, lasted much longer than 33 years. Consider that science actually proceeded "to fight the wrong enemy", to borrow a phrase from Nobelist Jim Watson. How many people died miserable deaths because of the negligence? Andras_at_Pellionisz_dot_com ]

They’re only a few thousand DNA ‘letters’ long, and many of them are damaged. But intact L1 transposons contain all the instructions they need to hijack the cell’s molecular machinery and start moving.

Firstly, their genetic code is ‘read’ (through a process called transcription) to produce a molecule of RNA, containing instructions both for a set of molecular ‘scissors’ that can cut DNA and for an unusual enzyme called reverse transcriptase, which can turn RNA back into DNA.

Together these molecules act as genetic vandals. The scissors pick a random place in the genome and start cutting, while the L1 RNA settles itself into the resulting gap. Then the reverse transcriptase gets to work, converting the RNA into DNA and weaving the invader permanently into the fabric of the genome.
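The cut-and-paste mechanism described above can be caricatured in a few lines of code. This is a deliberately crude toy model (the sequences, lengths and the uniform choice of insertion site are all assumptions for illustration), showing only the end result: a copy of the element spliced into a random position of the host sequence.

```python
import random

# Toy model of an L1 retrotransposition event: pick a random site in the
# "genome" string and splice in a copy of the element.
def insert_l1(genome: str, l1: str, rng: random.Random) -> tuple[str, int]:
    """Return the genome with one L1 copy inserted, and the insertion site."""
    site = rng.randrange(len(genome) + 1)
    return genome[:site] + l1 + genome[site:], site

rng = random.Random(42)            # fixed seed for a reproducible example
genome = "ATGCCGTA" * 4
l1_copy = "ttaaaggg"               # lowercase marks the new insertion
mutated, site = insert_l1(genome, l1_copy, rng)
print(f"inserted at position {site}: {mutated}")
```

Whether such a jump matters depends entirely on where `site` lands: in "junk" it is silent, but inside a gene or control region it can disrupt function.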

This cutting and pasting is a risky business. Although many transposons will land safely in a stretch of unimportant genomic junk without causing any problems, there’s a chance that one may hopscotch its way into an important gene or control region, affecting its function.

So given that cancers are driven by faulty genes, could hopping L1 elements be responsible for some of this genetic chaos?

In fact, this idea isn’t new.

More than two decades ago, scientists in Japan and the US published a paper looking at DNA from 150 bowel tumour samples. In one of them they discovered that an L1 transposon had jumped into a gene called APC, which normally acts as a ‘brake’ on tumour growth. This presumably caused so much damage that APC could no longer work properly, leading to cancer.

Because every L1 ‘hop’ is a unique event, it’s very difficult to detect them in normal cells in the body. But tumours grow from individual cells or small groups of cells, known as clones. So if a transposon jump happens early on during cancer development, it will probably be detectable in the DNA of most – if not all – of the cells in a tumour.

Thanks to advances in DNA sequencing technology, it’s now possible to detect these events – something that researchers are starting to do in a range of cancer types.

Jumping genes and oesophageal cancer

In the study published today, the Cambridge team – led by Rebecca Fitzgerald and Paul Edwards – analysed the genomes of 43 oesophageal tumour samples, gathered as part of an ongoing research project called the International Cancer Genome Consortium.

Surprisingly, they found new L1 insertions in around three quarters of the samples. On average there were around 100 jumps per tumour, although some had up to 700. And in some cases they had jumped into important ‘driver’ genes known to be involved in cancer.

The findings also have relevance for other researchers studying genetic mutations in cancer. Due to technical issues with analysing and interpreting genomic data, it looks like new L1 insertions are easily mistaken for other types of DNA damage, and may be much more widespread than previously thought.

So what are we to make of this discovery?

Finding evidence of widespread jumping genes doesn’t prove that they’re definitely involved in tumour growth, although it certainly looks very suspicious, and there are a lot of questions still to be answered.

For a start, we need to know more about how L1 jumps affect important genes, and whether they’re fuelling tumour growth.

It’s also unclear why these elements go on the move in cancer cells in such numbers: are they the cause of the genetic chaos, or does their mobilisation result from something else going awry as cancer develops for other reasons?

Looking more widely, and given that it seems to be particularly tricky to correctly identify new L1 jumps in DNA sequencing data, it’s still relatively unknown how widespread they are across many other types of cancer.

Finding the answers to these questions is vital. Rates of oesophageal cancer are rising, particularly among men, yet survival remains generally poor. As part of our research strategy we’ve highlighted the urgent need to change the outlook for people diagnosed with the disease, through research into understanding its origins, earlier diagnosis and more effective treatments.

By understanding what’s going on as L1 elements hopscotch their way across the genome, we’ll gain more insight into the genetic chaos that drives oesophageal cancer.

In turn, this could lead to new ideas for better ways to diagnose, treat and monitor the disease in future. Let’s jump to it.


Paterson et al. Mobile element insertions are frequent in oesophageal adenocarcinomas and can mislead paired-end sequencing analysis. BMC Genomics. DOI: 10.1186/s12864-015-1685-z.

[It is sinking in deeper and deeper that Nonlinear Dynamics (Chaos & Fractals) lurk behind cancer. The Old School view, with its "genes" and "Junk DNA", is exposed as brutally oversimplified. Hundreds of millions are dying of the most dreadful illness ("the disease of the genome", a.k.a. "cancer") - and some may still hide in the denial that the sole cause of cancer is a handful of "genes" ("oncogenes") going wild. While the linked science article does not dip into the mathematics, its cited Fig. 2. shows an obviously "non-random" pattern - look at most of the evolving fractals. Andras_at_Pellionisz_dot_com]

Why you should share your genetic profile [the Noble Academic Dream and the Harsh Business Climate]

Fifteen years ago, a scrappy team of computer geeks at UC Santa Cruz assembled the first complete draft of the human genome from DNA data generated by a global consortium, giving humanity its first glimpse of our genetic heritage.

And then we did something the private corporation competing with us never would have done: We posted the draft on the Web, ensuring that our genetic blueprint would be free and accessible to everyone, forever.

This opened the door to global research and countless scientific breakthroughs that are transforming medicine. Today, every major medical center offers DNA sequencing tests; we can sequence anybody’s genome for about $1,000.

This is a game-changer. The era of precision medicine is upon us.

Consider the 21st century war on cancer: When a patient is diagnosed with cancer, her doctor compares her tumor’s genome to those in an enormous worldwide network of shared genomes, seeking matches that point to the best treatment strategies and the best outcomes.

This is not fantasy. UC Santa Cruz already manages more than 1 quadrillion bytes of cancer-genomics data — the world’s largest collection of genomic data from the most diverse collection of cancerous tumors ever assembled for general scientific use.

A multinational consortium of children’s hospitals is enabling members to compare each child’s cancer genome to this huge set of pediatric and adult cancer genomes. This is how we will decode cancer. It’s how we will tailor treatment to individual patients. It will save lives.

But this will come to pass only if we work together.

Competition among medical centers can make them reluctant to share data with each other. There are ethical and privacy considerations for patients. We need to overcome these challenges, build a secure network of data-sharing, and usher in the long-sought era of precision medicine.

Patients can help by asking their doctors and medical centers to share their genetic profiles — securely — with researchers around the world through the Global Alliance for Genomics and Health. The alliance has mobilized hundreds of institutions worldwide to build the definitive open-source Internet protocols for sharing genomic data. Our goal is to speed doctors’ ability to tailor treatments to the genetic profiles of individual patients.

The power of this data network will be only as strong as it is vast. The bigger the pool of samples, the greater the likelihood of finding molecular matches that benefit patients, as well as patterns that shed new light on how normal cells become malignant. Genomics can help us decode diseases from asthma and arthritis to Parkinson’s and schizophrenia.

Fifteen years ago, when we released that first sequence of our genome, humanity’s genetic signature became open-source. I remember the feelings of awe and trepidation I experienced that day, realizing that we were passing through a portal through which we could never return, uncertain exactly what it would mean for humanity.

Today, the meaning is clear. We are finally realizing the promise of genomics-driven precision medicine.

David Haussler is professor of biomolecular engineering, director of the Genomics Institute at UC Santa Cruz, and a co-founder of the Global Alliance for Genomics and Health.

[David Haussler, a longtime colleague and friend, is one of the towering Giants of Genome Informatics. His uniquely profuse school at the Genomics Institute at UC Santa Cruz, turning out perhaps the largest number of brilliant Ph.D. graduates (at Stanford, throughout Academia, and some even in business), puts the University of California at Santa Cruz (and its parent organization, the University of California System) at a special juncture of history.

There is no doubt that his Academic Dream ("let's all pitch in for free") is the Noblest goal of a High Road. We all believe in dreams and wish good luck to Dave. Incidentally, the dream of Al Gore to create a "free for all Information Superhighway" (the Internet) was based on a similarly Noble Aspiration. I took part (at that time, at NASA Ames Research Center in Silicon Valley) in putting together a "Blue Book" that outlined the future of the Internet - on a $2 Bn government budget. It was Bill Clinton who released the Internet (originally a shoe-string defense project for an information network capable of surviving even if the Soviets blew up major information hubs like NYC, D.C., Chicago, or even Colorado Springs). The defense backbone of the Internet is now stronger than ever - but President Clinton's decision to release massive development to Private Industry exploded that $2 Bn National Budget to levels at which, a few days ago, the valuation of just a single company (Google) catapulted by $17 Bn in a single day.

With "one thousand dollar sequencing, a million dollar interpretation", it is easy to do the math for the budget necessary to build a "1 million human DNA fully sequenced" for a genome-based "precision medicine".
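The math alluded to above is simple enough to spell out. A minimal back-of-the-envelope sketch, taking the column's own figures at face value ("$1,000 sequencing, a million dollar interpretation", one million fully sequenced genomes):

```python
# Back-of-the-envelope budget for a "1 million human DNA fully sequenced"
# precision-medicine program, using the figures quoted in the text.
genomes = 1_000_000
cost_per_genome = 1_000 + 1_000_000   # sequencing + interpretation, in USD
total = genomes * cost_per_genome
print(f"total: ${total / 1e12:.3f} trillion")
```

The result is on the order of one trillion dollars per program, consistent with the "$2 Trillion ticket" (one ticket each for Government and Private Industry) discussed next.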

Since the Private Sector (led by Craig Venter) announced such a plan even before the US Government floated a sentence in the "State of the Union", we are talking about a $2 Trillion ticket (one from Government, one from Private Industry, predictably with not much overlap). This makes sense, since the US Health Care System ("Sick Care System", rather, branded by Francis Collins) is in the yearly $2 Trillion range. To effectively change it, one would require commensurate funds. The promise to ask $200 Million from Congress, even if granted, would amount to 1% of the needed expenditure.

The University of California System, on a Sacramento budget and with severe restrictions in its Charter, may be unlikely to catch the tiger of Global Private Industry by the tail. One might argue that even the entire budget of NIH (a yearly $30 Bn) might be unrealistic for this colossal task. On the other hand, in Private Sector, Apple, Google, Microsoft valuation combined is already above the $1 Trillion range - and it is predicted that Google or Apple might reach that valuation alone.

Granted, Google presently spends on "Google Genomics" only "on the side" - at best. However, they have already clinched a business model (see in this column) under which for-profit users of Google Genomics (such as Big Pharma, which can easily afford it) are already obligated to pay license fees to the Broad Institute for their proprietary software toolkit. (It infuses massive domain expertise into the art & technology of "handling data-deluge" of any kind by Google.) It is interesting to note that the amount of genomic data at Google presently amounts to a mere 1/3 of "YouTube". As I predicted in my 2008 Google Tech Talk YouTube, the problem is NOT "Information Technology", but "Information Theory".

It is predicted herein that massive amounts will be paid to people with cancer for their extremely precious "genomic data along with medical profile". Individuals might never get a penny of it directly, just as you use Google for "free" (you pay when you buy as a result of a "click-through"). The existing business model and cash-flow are worked out through the monstrous advertising business & coupled "recommendation engines". With cancer, when you opt for genome-based therapy, you will get a "cut" (a virtual payment) if you "freely donate" your genomic data and health profile. Surely, while arriving at a deal with the advertising business is fairly straightforward, forging viable business models with the colossal Health Care System is a bit more involved. However, it has already started; see in this column that Google could even work out a business deal with the non-profit Broad Institute.

Working with Intellectual Property holdings is a breeze - Andras_at_Pellionisz_dot_com ].

Why James Watson says the ‘war on cancer’ is fighting the wrong enemy

Andrew Porterfield | May 26, 2015 | Genetic Literacy Project

Since President Richard Nixon asked Congress for $100 million to declare a “war on cancer” in 1971, hundreds of billions of dollars worldwide have been dedicated to research unlocking the mystery of the various forms of the disease, and how to treat it. But some suggest the war may be being fought on the wrong front.

To be sure, our understanding of genetics, cellular growth and cancers has grown exponentially. We know how cancer can be linked to mutations of genes that either encourage abnormal cell growth, or wreck the internal system of checks and balances that normally stymie that growth. We have narrowed the number of those genes down to several hundred. And, we know about genes that can halt abnormal development. We’re inserting them into cancerous cells in trials. Perhaps most significantly, we’re at a stage in which cancer specialists prefer to refer to cancers by genetic makeup, instead of by the traditional organ of first appearance.

But for many cancers, none of this is working. To be sure, overall cancer death rates have decreased, by 1.8 percent a year for men, and 1.4 percent a year for women in recent decades. But death rates from some cancers have remained stubbornly constant, while others have risen. Additionally, the National Cancer Institute estimates that the number of people with cancer will increase from 14 million to 22 million over the next 20 years.

The thing about war is: if you’re fighting and the enemy’s numbers are increasing (or at least not dropping very much), victory probably isn’t near.

A spreading, migrating issue

One issue might be the fact that primary tumors—cancers that first appear in the body, and are recognized by that location, be it the liver, lung, brain or colon—aren’t the reason most people die from cancer. Most people die because of cancer cells that break off from primary tumors, and settle in other parts of the body. This process of metastasis is responsible for 90 percent of cancer deaths. However, only 5 percent of European government cancer research funds, and 2 percent of U.S. cancer research funds, are earmarked for metastasis research.

So, for as much as we understand the genetics of primary, initial tumors, we know far less about the cancers that truly kill. And to James Watson–the molecular biologist, geneticist and zoologist, best known as one of the co-discoverers of the structure of DNA in 1953–that’s a central problem with cancer research. In a recent “manifesto” published in Open Biology, Watson asked for another war:

The now much-touted genome-based personal cancer therapies may turn out to be much less important tools for future medicine than the newspapers of today lead us to hope. Sending more government cancer monies towards innovative, anti-metastatic drug development to appropriate high-quality academic institutions would better use National Cancer Institute’s (NCI) monies than the large sums spent now testing drugs for which we have little hope of true breakthroughs. The biggest obstacle today to moving forward effectively towards a true war against cancer may, in fact, come from the inherently conservative nature of today’s cancer research establishments.

Watson, who shared a Nobel Prize with Francis Crick and Maurice Wilkins for discovering the structure of DNA, is well known for his pronouncements, which have often been labeled immodest, insulting and worse. But in this case, he also may be right.

What do other scientists say?

Mark Ptashne, a cancer researcher at Memorial Sloan Kettering Cancer Center in New York, agrees that money is being misspent on the wrong kind of drugs. Cancer cells are smart enough to work around the drugs. And cancer cells that have migrated and reformed (metastasized) may be quite different from their original parent tumor cells. Still other cancers have metastasized, but from where is unknown. Finally, in the brain, most adult tumors there are metastatic. This all means that even if a treatment is effective for a primary cancer, it likely won’t be for a metastatic one.

Metastasis is extremely complicated. Very slowly, institutions are starting to look more closely at metastasis, and provide more research funding for it. But, as the Memorial Sloan Kettering Cancer Center warned, it could take a long time before treatments arise. And it's probably going to take more than the current 2-5 percent of government cancer research funding.

Dig in for a long war.

Andrew Porterfield is a writer, editor and communications consultant for academic institutions, companies and non-profits in the life sciences. He is based in Camarillo, California. Follow @AMPorterfield on Twitter.


[Jim Watson is on record with the Royal Society at least as late as 2013: "Still dominating NCI's big science budget is The Cancer Genome Atlas (TCGA) project, which by its very nature finds only cancer cell drivers as opposed to vulnerabilities (synthetic lethals). While I initially supported TCGA getting big monies, I no longer do so. Further 100 million dollar annual injections so spent are not likely to produce the truly breakthrough drugs that we now so desperately need." - Andras_at_Pellionisz_dot_com ]

National Cancer Institute: Fractal Geometry at Critical Juncture of Cancer Research

[Dr. Simon Rosenfeld at the National Cancer Institute is on record with an original open access text, reproduced below (note the free use, mirrored here). The single-author original manuscript, naming the fractalist Dr. Grizzi (in Italy) as "corresponding author" (sic), see mirror, was submitted to the Journal "Fractal Geometry and Nonlinear Analysis in Medicine and Biology" (with an Italian doctor who knows a little bit about fractals as "Editor in Chief" of the brand-new Journal on "Fractals"). Once the original manuscript, with appropriate references on fractals, was accepted, the single author "changed his mind" and replaced the original submission (compare to "mirror") with a compromised, truncated pdf paper reflecting on a "Critical Junction" of Cancer Research. Excerpts below from the open access text (carrying the running title of the review article, "Fractal Geometry and Nonlinear Analysis in Medicine and Biology") demonstrate another endorsement of the FractoGene approach. FractoGene papers are linked here to the free full pdf files of the original peer-reviewed articles cited in the open access text. Note that 40 of the 50 original references point to "fractal".

Those involved (see above) have been duly notified of the potential IP-issues but, perhaps out of embarrassment and since all of them (presently) pursue non-profit academic activities, apparently decided to turn down even the ethical minimum of "requests for Erratum". NIH and its National Cancer Institute bear responsibility for the ethical conduct of taxpayer-supported academic decisions at a declared "Critical Junction". Those already pursuing for-profit activities (or with an ambition to do so) should henceforth be on "Notice that IP-infringements are monitored and proper consequences will ensue". Andras_at_Pellionisz_dot_com]

Critical Junction: Nonlinear Dynamics, Swarm Intelligence and Cancer Research

Simon Rosenfeld

National Cancer Institute, Division of Cancer Prevention, USA


DOI: 10.15761/FGNAMB.1000103

Complex biological systems manifest a large variety of emergent phenomena among which prominent roles belong to self-organization and swarm intelligence. Despite an astoundingly wide repertoire of observed forms, there are comparatively simple rules governing the evolution of large systems towards self-organization, in general, and towards swarm intelligence, in particular. In this work, an attempt is made to outline general guiding principles in the exploration of a wide range of seemingly dissimilar phenomena observed in large communities of individuals devoid of any personal intelligence and interacting with each other through simple stimulus-response rules. Mathematically, these guiding principles are well captured by the Global Consensus Theorem (GCT), allowing for a unified approach to such diverse systems as biological networks, communities of social insects, robotic communities, microbial communities, communities of somatic cells, social networks, and many other systems. The GCT provides a conceptual basis for understanding the emergent phenomena of self-organization occurring in large communities without involvement of a supervisory authority, without system-wide informational infrastructure, and without mapping of a general plan of action onto the cognitive/behavioral faculties of its individual members. Cancer onset and proliferation serves as an important example of the application of these conceptual approaches. A growing body of evidence confirms the premise that disruption of quorum sensing, an important aspect of swarm intelligence, plays a key role in carcinogenesis. Other aspects of swarm intelligence, such as collective memory, adaptivity (a form of learning from experience) and the ability for self-repair, are key to understanding biological robustness and acquired chemoresistance. Yet other aspects of swarm intelligence, such as division of labor and competitive differentiation, may be helpful in understanding cancer compartmentalization and tumor heterogeneity.

Conclusion

Complex hierarchy of perfectly organized entities is a hallmark of biological systems. Attempts to understand why's and how's of this organization lead inquiring minds to various levels of abstraction and depths of interpretation. In this paper, we have attempted to convey the notion that there exists a set of comparatively simple and universal laws of nonlinear dynamics which shape the entire biological edifice as well as all of its compartments. These laws are equally applicable to individual cells, as well as to biochemical networks within the cells, as well as to the societies of cells, as well as to the societies other than the societies of cells, as well as to the populations of individual organisms. These laws are blind, automatic, and universal; they do not require existence of a supervisory authority, system-wide informational infrastructure or some sort of premeditated intelligent design. In large populations of individuals interacting only by stimulus-response rules, these laws generate a large variety of emergent phenomena with self-organization and swarm intelligence being their natural manifestations.

Key words

global consensus theorem, swarm intelligence, biomolecular networks, carcinogenesis


Swarm intelligence of social insects and microbial colonies vividly demonstrates how far evolution may progress with only simple rules of interaction between unsophisticated individuals at its disposal. The Lotka-Volterra (LVS) family of mathematical models, among the first capable of describing very complex systems with very simple rules of interaction, demonstrates how complex the behaviors of even a simple food web consisting of only one predator and one prey can be. The repertoire of behaviors of multispecies populations is virtually unlimited. In particular, it has been shown that swarm intelligence may originate from rather mundane causes rooted in simple rules of interaction between these entities. The goal of this paper is to provide a brief overview of the properties of multidimensional nonlinear dynamical systems which have the potential of producing self-organized behavior and manifesting themselves as swarm intelligence.

Swarm intelligence, definitions and manifestations

By definition, swarm intelligence is the organized behavior of large communities without global organizer and without mapping the global behavior onto the cognitive/behavioral abilities of the individual members of the community [1]. It should be emphasized that what is called here communities are not necessarily the communities of living entities like bee hives or ant hills or microbial colonies. Moreover, complexity of collective behavior of the community as a whole does not require its individual members to have any extensive analytical tools or even memory on their own. The key prerequisite for the possibility of community-wide self-organization is that individual members may interact following the stimulus-response rules. Large-scale community-wide behaviors and self-organized modalities are completely determined by these low-level local interactions. There are a number of closely related but distinctly different aspects of swarm intelligence. These are collective memory, adaptivity, division of labor, cooperation, sensing of environment (a.k.a., stigmergy) and quorum sensing. All these aspects are the emergent properties resulting from local member-to-member interactions without a general plan of action, without a supervisory authority, and without a system-wide information infrastructure. From the mathematical standpoint, a large system of locally interacting units is a dynamic network governed by the laws of nonlinear dynamics. The following question, therefore, is in order: what exactly are the laws of local interactions leading to emergence of complex behaviors which are referred to as swarm intelligence?

Mechanistic origins of self-organization and swarm intelligence

A comparatively simple, and abundantly well studied, example of a system manifesting the property of swarm intelligence is the neural network (NN) [2,3]. NN functionality originates from and closely mimics the neuronal networks constituting the nervous systems of higher organisms. Among the analytical tools collectively known as artificial intelligence, NNs retain the leading positions in a variety of computational tasks; among them are pattern recognition and classification, short- and long-term storage of information, prediction and decision making, optimization, and others. Due to the fundamental property of being universal approximators, the NNs are capable, in principle, of representing any nonlinear dynamical system. Such systems may possess a number of asymptotically stable attractors. This means that, starting from a large variety of initial conditions belonging to a certain basin of attraction, the system may evolve towards one of several well-defined stable manifolds. This process is in fact nothing other than classification of initial states, which occurs in the system without any organizing force or supervisory authority.
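The attractor dynamics described here can be illustrated with a Hopfield-style network, a classic example of a neural network whose stable states classify initial conditions. The sketch below is a hedged toy (the patterns, network size and update schedule are arbitrary choices, not anything from the paper): a corrupted input falls into the basin of attraction of the stored pattern it most resembles.

```python
import numpy as np

# Minimal Hopfield-style network: two stored patterns over six binary units.
patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1, 1, 1, -1, -1, -1]])
# Hebbian weights; zero diagonal so no unit excites itself.
W = patterns.T @ patterns
np.fill_diagonal(W, 0)

def recall(state, steps=10):
    """Iterate the local stimulus-response update until the state settles."""
    s = state.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
    return s

# A corrupted copy of the first pattern (last unit flipped) is pulled back
# to the stored pattern - classification with no supervisor.
noisy = np.array([1, -1, 1, -1, 1, 1])
print(recall(noisy))
```

Each unit only responds to the weighted signals it receives, yet the network as a whole "decides" which stored pattern the input belongs to.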

Lotka-Volterra systems (LVS) form a large class of dynamical systems described by ordinary differential equations with quadratic nonlinearities [4]. Originally inspired by ecology and population dynamics, LVS theory largely retains their flavor and terminology. In particular, the independent variables are assumed to be the population levels of the corresponding species, while the coefficients describe the rates of reproduction and extinction. Interactions between the species may be mutualistic (cooperative) or antagonistic (competitive). This terminology evokes dramatic visions of the struggle for survival, individual or collective, so frequently observed in the world of living entities. From the mathematical standpoint, however, there is nothing dramatic in LVS dynamics: all systems describable by LVS, whether belonging to the biological, physical, technological, social or financial realms, will exhibit similar dynamical behaviors and analogous emergent properties. For this reason, and in order to avoid direct ecological connotations, the variables in LVS are often called quasi-species, thus emphasizing that the actual nature of these species is of secondary importance.
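
As a concrete sketch, a two-quasi-species competitive LVS of the form dx_i/dt = r_i x_i (1 - sum_j a_ij x_j / K_i) can be integrated numerically (the parameter values below are hypothetical, chosen for illustration rather than taken from [4]); from very different initial populations it settles on one and the same equilibrium:

```python
import numpy as np

# Two competing quasi-species under Lotka-Volterra dynamics:
#   dx_i/dt = r_i * x_i * (1 - sum_j a[i][j] * x_j / K_i)
# Hypothetical parameters; the cross-competition coefficients (0.5)
# are weak enough that stable coexistence is the outcome.

r = np.array([1.0, 0.8])        # intrinsic growth rates
K = np.array([10.0, 10.0])      # carrying capacities (limited resources)
a = np.array([[1.0, 0.5],       # a[i][j]: pressure of species j on i
              [0.5, 1.0]])

def simulate(x0, dt=0.01, steps=5000):
    """Forward-Euler integration of the competitive LVS."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x += dt * r * x * (1.0 - (a @ x) / K)
    return x

# Very different initial populations converge to the same equilibrium
# x* = K / 1.5 = [6.667, 6.667]: a "consensus" on resource sharing
# reached through pair-wise competition alone, with no planner.
print(simulate([1.0, 9.0]))
print(simulate([9.0, 1.0]))
```

The independence of the final state from the initial populations is the mechanistic core of the "consensus" discussed in the next paragraphs.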

A fundamental question pertaining to competitive LVS is that of dynamic stability. In the context of population dynamics, stability means that, despite the fact that all the species are struggling with one another, they may nevertheless arrive at some sort of peaceful coexistence, a consensus regarding the distribution of limited resources. Since nothing except pair-wise interactions is included in LVS dynamics, this consensus cannot be the result of collective decision making or planning. The challenge and fundamental importance of the question of stability have been articulated by S. Grossberg: "The following problem, in one form or another, has intrigued philosophers and scientists for hundreds of years: How do arbitrarily many individuals, populations, or states, each obeying unique and personal laws, ever succeed in harmoniously interacting with each other to form some sort of stable society, or collective mode of behavior? Otherwise expressed, if each individual obeys complex laws, and is ignorant of other individuals except via locally received signals, how is social chaos averted? How can local ignorance and global order, or consensus, be reconciled? ... What design constraints must be imposed on a system of competing populations in order that it be able to generate a global limiting pattern, or decision, in response to arbitrary initial data? ... How complicated can a system be and still generate order?" [5]

The questions outlined above have been successfully resolved within a wide class of competitive nonlinear dynamical systems, of which NNs and LVS are particular cases. In order to avoid cumbersome mathematical notation and explicit definitions, within this paper we will call these systems G-systems. The fundamental Global Consensus Theorem (GCT), proved by S. Grossberg in a series of publications [5-8], claims that within the class of G-systems the tendency towards self-organization is rooted in the fairly simple nature of things: any complex system whose unstoppable growth is inhibited by progressively dwindling resources will end up with some sort of self-structuring and a consensus regarding the distribution of resources. The generality and simplicity of G-system dynamics guarantee its applicability to a very wide class of natural, technological and societal phenomena. The transition from the dominance of one quasi-species to another may appear to be a struggle for survival, and it is indeed an existential struggle in predator-prey food chains. Although the metaphor of the struggle for survival is widely used beyond the world of living entities, it is obvious from the GCT that the reasons for competitive dynamics leading to consensus may be much simpler and may have nothing to do with the personal motivation of a living entity to survive. In this context, it is not out of place to recall that the co-founder of LVS, Alfred Lotka, pointed out that natural selection should be approached more like a physical principle, subject to treatment by the methods of statistical mechanics, than as a struggle of living creatures motivated by the desire to survive [9].

The GCT provides a deep insight into the seemingly miraculous property of complex hierarchical systems of being self-organized at each level without a supervisory authority, without an informational infrastructure, without any necessity for their units to understand the process as a whole, and without invoking the metaphor of the struggle for survival. The GCT also provides clues as to how such a complex emergent phenomenon as swarm intelligence may appear in systems consisting only of unsophisticated individuals devoid of any personal intelligence and interacting with each other solely through simple pair-wise stimulus-response rules.

Swarm intelligence in G-systems

Chemical networks

Perhaps the simplest G-system fully satisfying the provisions of the GCT is a system of concurrent chemical reactions, usually called a chemical network. It is not immediately evident, however, whether chemical constituents interacting through stimulus-response rules (chemical reactions) can form a network capable of solving intelligent tasks such as pattern recognition or computation. In this vein, the simplest model of a chemical neuron was proposed by Okamoto et al. [10]. The possibility of connecting Okamoto-type chemical neurons into a network has been analyzed in depth in a series of publications by Hjelmfelt and Ross [11-14]. In particular, in Hjelmfelt et al. [11,14] the feasibility of a chemical finite-state computing machine has been demonstrated; such a machine would include the most fundamental elements of traditional electronic computers, namely a binary decoder, a binary adder, a memory stack and an internal clock. The possibility of a programmable chemical NN capable of storing patterns and solving pattern-recognition problems was proved in Hjelmfelt et al. [12]. Finally, an ultimate computer-science conjecture, whether a Turing Machine can be constructed from oscillating chemical reactions, has also been resolved affirmatively [13].
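
To make the idea of a chemical neuron tangible, the toy below is a hypothetical Hill-type sketch, not Okamoto's actual reaction scheme [10]: an output species is produced cooperatively from two input species and degraded at a constant rate, so that at steady state it behaves like a thresholded unit computing a chemical AND:

```python
# Hypothetical "chemical neuron" (a sketch, not the Okamoto scheme
# [10]): the output species is produced cooperatively from inputs A
# and B and degraded at a constant rate. Its steady-state level then
# follows a sharp Hill-type curve of the product [A][B], so the unit
# "fires" only when both chemical inputs are sufficiently high.

def chemical_neuron(conc_a, conc_b, n=4, k=1.0):
    """Return 1 ("fires") if the Hill-type activation exceeds 1/2."""
    drive = conc_a * conc_b
    activation = drive ** n / (k ** n + drive ** n)
    return 1 if activation > 0.5 else 0

print(chemical_neuron(2.0, 2.0))   # both inputs high -> 1
print(chemical_neuron(2.0, 0.1))   # one input low   -> 0
```

The Hill exponent n plays the role of the steepness of a neuron's activation function; stimulus and response are both plain chemical concentrations.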

A systematic study of biochemical information-processing systems has been reported in [15]. A detailed comparison of the computational capabilities of NNs with those of biochemical networks suggests that these capabilities have much in common. In a more general context, it should be noted that any system representable through an NN may be considered a version of a Turing Machine. An even more powerful statement is also valid: any function computable by a Turing Machine can also be computed by an appropriately parameterized processor net constructed of biochemical entities [16]. In practical terms, all this means that each biochemical network may be thought of as an entity performing a certain computation and may be formally represented by an appropriately constructed Turing Machine; and conversely, any function computable by a Turing Machine may also be computed by a specially designed biochemical network.

The famous question posed by Alan Turing in his groundbreaking paper, "Can a machine think?" [17], continues to be a highly disputed topic in computer science, cognitive science and philosophy [18]. However, given the convincingly demonstrated equivalence between NNs and Turing Machines, between chemical networks and NNs, between NNs and population dynamics, and so on, it seems reasonable to pose similar questions: "Can a chemical network think?"; "Can a population of dumb individuals, as a whole, think?"; "Can a microbial community think?"; "Can a community of cells think?". From the discussion above, it is reasonable to infer that a swarm of locally interacting individuals lacking any personal intelligence can think at least in the same sense, and at the same level of intelligence, as Turing Machines and computers.

Robotic communities

A community of inanimate robots interacting only through stimulus-response rules, and lacking any analytical tools for a premeditated collective strategy, is well qualified as a community of individuals interacting in accordance with LVS rules and satisfying the provisions of the GCT. Proof of the principle that such communities may possess the elements of self-organization and swarm intelligence has been convincingly given in [19,20]. In these works, a group of memoryless micro-robots was programmed to mimic the individual behaviors of cockroaches. The micro-robots, however, were not hard-wired with any analytical tools for gathering information about the behaviors of other robots or about a general plan of action. It has been shown experimentally that such a community is capable of reproducing patterns of collective behavior similar to those of real cockroaches. Division of labor in communities of robots has been studied in [21]. A comprehensive review of various aspects of swarm intelligence in communities of robots and biological entities is given in [22]. Cooperative behaviors in communities of autonomous mobile robots have been reviewed in [23].
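
The flavor of such experiments can be caught in a few lines. The toy below is a hypothetical simplification, not the actual controller of [19,20]: memoryless agents hop among a few shelter sites, and each agent's only rule is that its probability of staying put grows with the number of agents already at its site; a dominant cluster nevertheless tends to emerge.

```python
import random

# Toy stimulus-response aggregation (a hypothetical simplification of
# the cockroach-robot experiments, not the controller of [19,20]).
# Each memoryless agent occupies one of a few sites. Its single local
# rule: the more agents share its site, the likelier it is to stay.
# No map, no memory, no global information; yet clustering emerges.

random.seed(1)
SITES, AGENTS, STEPS = 4, 40, 8000
pos = [random.randrange(SITES) for _ in range(AGENTS)]

for _ in range(STEPS):
    i = random.randrange(AGENTS)
    neighbors = sum(p == pos[i] for p in pos) - 1
    p_stay = neighbors / (neighbors + 1)   # local stimulus-response rule
    if random.random() > p_stay:           # lonely agents always wander
        pos[i] = random.randrange(SITES)

counts = sorted(pos.count(s) for s in range(SITES))
print(counts)   # a single dominant cluster typically holds most agents
```

The positive feedback (crowded sites retain agents longer) is the whole mechanism; it is the same "rich get richer" amplification of fluctuations that underlies shelter aggregation in real cockroach groups.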

Maltzahn et al. [24] constructed a system in which synthetic biological and nanotechnological components communicate in vivo to enhance disease diagnostics and the delivery of therapeutic agents. In these experiments, the swarms typically consisted of about one trillion nanoparticles. It has been shown "that communicating nanoparticle systems can be composed of multiple types of signaling and receiving modules, can transmit information through multiple molecular pathways, can operate autonomously and can target over 40 times higher doses of chemotherapeutics to tumors than non-communicating controls."

Microbial communities

Highly sophisticated forms of swarm intelligence have been observed in microbial communities. These communities are a perfect example of species in competition governed by Lotka-Volterra dynamics [25-27]. The social organization of bacterial communities has been extensively analyzed in [28]. Bacterial communities are found to possess a form of inheritable collective memory and the ability to maintain self-identity. They are also capable of collective decision-making, purposeful alteration of colony structures, and the recognition and identification of other colonies. In essence, a bacterial community as a whole may be seen as a multicellular organism with loosely organized cells and a sophisticated form of intelligence [29].

Communities of somatic cells

From the perspective of Lotka-Volterra dynamics, somatic cells are just another example of locally interacting units possessing, as a community, the emergent property of swarm intelligence. As noted in [29], "Bacteria invented the rules for cellular organization." However, in contrast to microbial communities, which have the freedom of spatial restructuring, self-organization in a community of somatic cells is mostly manifested through the collective shaping of their internal phenotypic traits [30]. All this means that a community of somatic cells acts as a self-sufficient, intelligent superorganism capable of taking care of its own survival through the cooperative manipulation of intracellular states.

Disruption of quorum sensing as a prerequisite for triggering carcinogenesis

Carcinogenesis is a complex systemic phenomenon encompassing the entire hierarchy of biological organization. Great emphasis in carcinogenesis is placed on the role of disruption of cell-to-cell signaling. With the destruction of signaling pathways, not only is the normal regulation of individual cellular processes damaged, but a blow is also dealt to the mental capabilities, so to speak, of the community as a whole. Its collective memory is wiped out or distorted, the customary division of labor between subpopulations is shifted towards aberrant modalities, and community-wide self-defensive mechanisms are weakened or broken. In summary, the community as a whole falls into a state of disarray and amnesia in which it feverishly searches for new ways to survive. These processes in turn cause shifts in expression profiles and metabolic dynamics and eventually penetrate to the level of DNA, causing multiple mutations.

Quorum sensing (QS) is an important aspect of swarm intelligence. Agur et al. [31] provide a brief review of the relevant biological facts and propose a mathematical model of QS boiled down to its simplest mechanistic elements. They arrive at the important insight "that cancer initiation is driven by disruption of the QS mechanism, with genetic mutations being only a side-effect of excessive proliferation." A detailed analysis of societal interactions and quorum-sensing mechanisms in ovarian cancer metastases is given in [32]. These authors present compelling arguments supporting the view that QS "provides a unified and testable model for many long-observed behaviors of metastatic cells."
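
A bare-bones numerical caricature of quorum sensing (a hypothetical sketch, not the actual model of [31]): cells secrete a shared signal that decays at some rate, and the collective behavior switches on only if the signal crosses a quorum threshold. Disrupting signaling, here modeled as accelerated degradation of the signal, prevents quorum from ever being reached, whatever the cells do individually.

```python
# Toy quorum-sensing switch (a hypothetical sketch, not the model of
# [31]): n cells secrete a shared autoinducer S, which decays at a
# given rate. The population "reaches quorum" if S ever crosses a
# threshold. Disrupted signaling (fast degradation) blocks quorum.

def quorum_reached(n_cells, secretion=1.0, degradation=0.05,
                   threshold=100.0, dt=0.1, t_max=50.0):
    """Euler-integrate dS/dt = n*secretion - degradation*S."""
    s, t = 0.0, 0.0
    while t < t_max:
        s += dt * (n_cells * secretion - degradation * s)
        t += dt
        if s >= threshold:
            return True
    return False

print(quorum_reached(10))                    # intact signaling: True
print(quorum_reached(10, degradation=2.0))   # disrupted: False
print(quorum_reached(3))                     # too few cells: False
```

The steady-state signal level is n*secretion/degradation, so either lowering the cell density or degrading the signal keeps the population below threshold: the collective switch depends on the community, not on any single cell.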

Swarm intelligence is a key to understanding acquired chemoresistance

Numerous observations support the notion that a cancer tumor may be regarded as a society of cells possessing the faculty of swarm intelligence. One of the important aspects of swarm intelligence is adaptivity, which is a form of learning from experience.

It has also long been recognized that cancer cells, after the fleeting inhibitory effect of a chemotherapeutic agent, may develop resistance to treatment. These capabilities, termed acquired resistance, are manifestations of the robustness of cancer cells, both individually and collectively. In the literature, in attempts to conceptualize this complex phenomenon, there is a reductionist tendency to associate adaptivity with multiple layers of negative feedback loops [33]. It is obvious, however, that the entire system comprising myriads of such loops cannot succeed in fulfilling its task unless these individual controls work coherently, sharing a common goal. The astounding coherence observed among all the innumerable elementary processes comprising tumor adaptivity allows one to see the tumor as a separate organ [34,35] and to speak of its defensive tactics [36]. Fundamentally, such capabilities are nothing other than manifestations of swarm intelligence in the community of tumor cells. It is therefore admissible to hypothesize that, when developing therapeutic strategies against cancer, one needs to recognize that the enemy is intelligent, capable of discerning the weapon applied against it and of mounting a counteroffensive.


A complex hierarchy of perfectly organized entities is a hallmark of biological systems. Attempts to understand the whys and hows of this organization lead inquiring minds to various levels of abstraction and depths of interpretation. In this paper, we have attempted to convey the notion that there exists a set of comparatively simple and universal laws of nonlinear dynamics which shape the entire biological edifice as well as all of its compa