Moving Next-Generation Sequencing into the Clinic

On a glorious day (but with Arctic-like winds!) earlier this week, I attended a symposium on exploiting NGS in the diagnostic genetics clinic.

The speakers were a good mix of clinicians, bioinformaticians and biomedical researchers. The organisers got things off to a smooth start, and the keynote talk was given by:

Dr Anneke Seller, Director of Genetics Laboratories, Oxford NHS Trusts

“Transforming genetic testing in the NHS: the application of next generation sequencing to the diagnosis of Mendelian disorders”

Dr Seller guided us along a timeline of the development of genetic testing in the Oxford region of the NHS, noting that their main methods have focused on small panels of genes typed by either Sanger sequencing or one of the NGS platforms.  She explained how diagnosis of the variants causing hypertrophic cardiomyopathy (HCM) has moved from denaturing HPLC, to high-resolution melting curve analysis, and now to HaloPlex PCR with sequencing on the Illumina MiSeq platform.  Using NGS increased clinical sensitivity, or “diagnostic yield”, and, when combined with control population data, improved the classification of variants found in HCM, making it easier to define them as “unclassified” rather than as “likely pathogenic”.

The Oxford Clinical Genetics Labs currently validate variants using Sanger sequencing, but plan to stop this soon.  Looking ahead, their goal is to use whole-exome sequencing to increase the success rate in finding causative variants.  Dr Seller emphasised the need to introduce better bioinformatics, rather than continuing to struggle with data in Excel spreadsheets. Finally, she argued that it is essential that the NHS transforms clinical genetic testing through the widespread introduction of NGS.

Elliott Margulies, from Illumina UK, spoke about:

“Whole Genome Sequencing and Analyses for the Clinic”

Dr Margulies introduced the Illumina sequencing platform briefly and then talked about some recent technical developments including:

  • the ability to use smaller samples as sources of DNA, e.g. formalin-fixed, paraffin-embedded (FFPE) tissues
  • an open-source alignment and variant-calling tool, iSAAC
  • a modified file format for sequence variants, called gVCF, which records calls for non-variant positions as well as variant sites
  • a tool to make it easier to filter a list of DNA sequence variants, tentatively called iAFT

iSAAC is claimed to be able to align reads and call variants from a whole-genome sequencing dataset in about 24 hours when run on a 12-core computer with 64 GB of RAM.

The variant filtering tool has a graphical user interface and is built on open-source components such as the Ensembl VEP, drawing on data from various sources, including the NHLBI Exome Sequencing Project.  Using this tool reduces the scale of the problem of finding causative variants, but, when questioned by an audience member, Dr Margulies emphasised that the final decision is still the responsibility of the clinician.
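To give a flavour of what that kind of filtering involves, here is a minimal sketch of my own (this is not iAFT; the file name and the “AF” population-frequency annotation are assumptions): keep only the variants that are rare in a control population, since common variants are unlikely to cause a rare Mendelian disorder.

    # Minimal sketch of population-frequency filtering of annotated variants.
    # This is NOT Illumina's iAFT; it only illustrates the kind of first-pass
    # filter such tools apply. It assumes a VCF whose INFO field carries an
    # allele-frequency annotation "AF" taken from a control population.

    MAX_POPULATION_AF = 0.001  # variants commoner than this are unlikely to
                               # cause a rare Mendelian disorder, so drop them

    def info_to_dict(info_field):
        """Turn a VCF INFO string (key=value;key=value;flag) into a dict."""
        out = {}
        for item in info_field.split(";"):
            key, _, value = item.partition("=")
            out[key] = value
        return out

    def rare_variants(vcf_path, max_af=MAX_POPULATION_AF):
        """Yield VCF data lines whose population allele frequency is below max_af."""
        with open(vcf_path) as handle:
            for line in handle:
                if line.startswith("#"):              # skip header lines
                    continue
                fields = line.rstrip("\n").split("\t")
                info = info_to_dict(fields[7])        # INFO is the 8th VCF column
                # keep unannotated variants; for multi-allelic sites, use the first AF
                af = float(info.get("AF", "0").split(",")[0])
                if af < max_af:
                    yield line

    if __name__ == "__main__":
        for record in rare_variants("patient_variants.vcf"):  # hypothetical file
            print(record, end="")

In practice a tool like iAFT layers many further filters on top of this (consequence type, inheritance model, gene panels), but frequency in control populations is usually the first and most powerful cut.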

Looking ahead, Elliott Margulies predicted a clinical “ecosystem” in which a DNA sample is taken at birth, much like the current heel-prick blood sample, and used for whole-genome sequencing, with exome sequencing as a follow-up for some individuals, all linked to an electronic health record maintained throughout life.

The third speaker of the morning session was Matthew Addis, from Arkivum:

“Managing retention and access of genomics data”

We were presented with some salutary and entertaining tales of catastrophic data loss, and then Matthew Addis explained the painstaking and rigorous approach that Arkivum take to ensure that their clients always have a backup copy.  Physical copies in multiple locations and regular checks on data integrity are key aspects of the system, which even includes a copy held in escrow by a third party.

In the last talk of the morning, Bas Vroling, from Bio-Prodict, spoke about:

“3DM: Data integration and next-generation variant effect predictions”

Using 3-dimensional protein models for every protein in a superfamily as their starting point, Bio-Prodict have built a tool that, they claim, integrates multiple data sources to infer the functional effect of sequence variants.  The delightful aspect of this approach is that 3-D models of proteins from non-human species can be used to infer the effect of variants in the human homologue.

One example that Dr Vroling gave showed how a variant found in a protein involved in long-QT syndrome in horses could be used to predict the effect of variants in the equivalent human protein.  Using a large set of validated variants found in long-QT syndrome, the detection sensitivity of 3DM was 95%, compared with ~65% achieved by a standard tool, PolyPhen.  The potential of the 3DM tool is clear, but whether it can be scaled up to cope with the complete set of proteins encoded in the human genome remains to be seen.
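For anyone unfamiliar with the jargon, “detection sensitivity” here is simply the fraction of the known pathogenic variants that the tool flags as damaging. A trivial sketch, with counts invented purely for illustration:

    # Sensitivity = true positives / (true positives + false negatives).
    # The counts below are made up; only the formula matters.
    validated_pathogenic = 200     # known disease-causing variants in the test set
    flagged_by_predictor = 190     # of those, how many the tool calls damaging

    sensitivity = flagged_by_predictor / validated_pathogenic
    print(f"sensitivity = {sensitivity:.0%}")   # -> 95%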

I’ve put summaries for the afternoon talks in another post.
