The Evolution of Proteomics – Professor John Yates


The final instalment of The Evolution of Proteomics series features an interview with Professor John Yates from the Department of Molecular Medicine at Scripps Research. The Yates laboratory is focused on developing strategies and tools in proteomics to answer basic biological questions.

The work of Yates and his lab has been instrumental in driving the evolution of proteomics, with key achievements including the development of shotgun proteomics, the creation of the SEQUEST algorithm allowing tandem mass spectrometry (MS) to be correlated with protein sequences, and of course the development of Multidimensional Protein Identification Technology (MudPIT) that resulted in a shift from traditional 2D gel-based MS techniques to liquid chromatography approaches in proteomics.

Molly Campbell: In a 2018 talk you mentioned the idea that proteomics was a “great unintended consequence of genomics”. In your opinion, what have been some of the most exciting breakthroughs in proteomics?

John Yates (JY): The biggest breakthrough is that proteomics exists at all. Back when genomes were first being sequenced, protein biochemistry analysis focused on one protein at a time – it was laborious, you could spend an entire year trying to sequence just one protein. It was also incredibly inefficient, relative to what we can do today. Now, in just a few hours, you can sequence an entire protein complex and identify what each component of the protein complex is doing. The advances have been stunning.

The reason that proteomics is a great unintended consequence of genomics is that nobody was talking about the impact of genome sequencing on protein biochemistry; it really was something that came out of nowhere and had a huge impact. When you read the report by the National Academy of Sciences in the US on why the human genome should be sequenced, most of the discussion centers around “oh, bioinformatics will figure out what everything does, and we’ll learn about how cells work” and so forth, with really no discussion of the impact it might have on protein biochemistry.

MC: The Yates Laboratory at the Scripps Research Institute develops and applies MS-based proteomics techniques to study conditions such as Alzheimer’s disease, schizophrenia and depression. How can a proteomics approach enhance our understanding of the pathophysiology of these conditions?

JY: These are very complicated diseases. There have been a number of genome-wide association studies (GWAS) trying to figure out the genetic components of these diseases, and unfortunately they have been somewhat unsuccessful. As a result of such studies, the concept of “missing heritability” came about. But maybe it’s not missing heritability? Maybe it’s not genetics; maybe it’s the environment, together with the genes, that is affecting protein networks in ways that we don’t quite understand yet.

Alzheimer’s disease in particular seems to be a disease of a breakdown in the proteostasis system, the system that maintains protein folding and degrades proteins. When proteins misfold, you get an accumulation of misfolded proteins in the brain that becomes toxic to cells and so forth. There are a number of diseases now which are clearly failures in the proteostasis network, where protein misfolding can result in a loss of function or a gain of function. So, we really need to study these diseases at the protein level, as you will only get so far with genetics and genomics. In order to do more at the protein level, we still need to advance our technology so that we’re competitive with genomics technologies.

Molly Campbell (MC): You have pioneered the development of several methods and software systems that have shaped proteomics research. What technical challenges do you face in further refining proteomics techniques so that they are increasingly sensitive and specific?

JY: Some of the trends that are occurring in the field include people trying to come up with ways to be more efficient and more high-throughput. One of the complaints from funding agencies is that you can sequence literally thousands of genomes very quickly, but you can’t do the same in proteomics. There’s a push to increase the throughput of proteomics so that we are more compatible with genomics. One of the really exciting things, in my opinion, is the move of proteomics to the single cell. People are finally making progress on cells that are biologically relevant, not just those that are packed with a few proteins, such as red blood cells. That’s going to be a great area.

I just went to a think tank, sponsored by one of the NIH Institutes, that was discussing single cell proteomics. I think there’s enough excitement there that funding agencies can start putting some money into it to advance it.

One of the things that we are dependent upon in the MS field is for instrument manufacturers to keep advancing the technology. Some of the very fundamental basic research in MS takes place in academia, but really in order to make that technology useful it must be commercialized and advanced with the quality control and standards that commercialization brings to the instruments. It’s always exciting when you go to ASMS to see what instruments or technologies are going to be introduced by the manufacturers. 

MC: Please can you tell us about your recently published work in cystic fibrosis, and how this research may help to identify novel drug targets? 

JY: One of the papers we published looked at the interactome of the protein that is involved with cystic fibrosis, called the cystic fibrosis transmembrane conductance regulator (CFTR).

We compared the interactome of the wild-type version of the protein with that of the most common disease form of the protein, ΔF508, and found a disease-specific interactome. As we began to study the interactomes, we found about 40 proteins where, if we knocked down their expression, we could influence the maturation of the disease form of the protein in some fashion.

We tested a handful of these, about eight, to make sure that they actually restore channel function. Out of the eight that we tested, seven did. The ones that are enzymes would be fairly easy to target with drugs, as you can inhibit their activity.

We actually did an experiment where we took one of the proteins that we were studying and found an inhibitor for it published in the literature. We made the inhibitor, tested it, and found that we could rescue the mutant form of the protein. We’ve identified a number of proteins, which are potential targets, where if you inhibit their activity, you can rescue the protein.

We’ve also recently published work on modifications of CFTR. A number of the proteins that interact with CFTR are kinases and phosphatases, so we started looking at the modifications of CFTR and found some that looked like they may be important to the decision-making process of whether the protein is mature and should be sent to the cell surface. We established that there is in fact a post-translational modification code that determines whether a protein is mature. I’m not sure how that would turn into the creation of drug targets, but it is certainly interesting biochemistry.

Read the previous instalment of The Evolution of Proteomics, an interview with Professor Alexander Makarov.

MC: Your research encompasses the areas of bioinformatics and software development, methods development and biological applications. Do you face any difficulties in integrating these elements, and if so, how do you overcome those difficulties? 

JY: It’s not really that difficult to integrate them; the challenge has always been trying to prioritize which elements need to be done first (especially in a lab where a lot of people are doing different things)!

We’ve got a fairly robust and well-established pipeline of software tools, used for a wide variety of purposes by anybody doing any kind of proteomics research at Scripps.

Where a lot of people spend time is addressing the question of “what biology have I discovered in my experiment?” and trying to come up with tools that help people become more efficient at answering it. When we have group meetings and discussions about bioinformatics, they are always the most contentious, heated and lively discussions; it’s a very important topic.

MC: As an expert in quantitative proteomics with many years’ experience in the field, what do you envision for the future of proteomics?

JY: These are always tough questions. I think proteomics is going to advance in a few areas. It is going to become more sensitive as we push down towards single-cell analysis. It’s going to become more high-throughput so that we can analyze more patient samples and so forth, putting it on par with RNA-seq-type strategies. The scale of proteomics is going to advance to the point where we can obtain an entire proteome in a single experiment, and we may actually be close to that now. Some of the single-cell experiments we’re seeing detect 1,200 to 1,500 proteins, and RNA-seq experiments are only seeing around 3,000 or so genes, so we aren’t far off. Another main goal in proteomics is to bring down the cost of mass spectrometers.

Professor John Yates was speaking with Molly Campbell, Science Writer, Technology Networks. 
