Making Sense of Genomic Data: Where are we in the function annotation race?


With genome-sequencing initiatives increasing worldwide and the associated costs falling, a huge amount of genomic data is now accumulating. In contrast, the functions of only around 1% of these sequences are currently known from experimental studies. The gap between unannotated sequences (those whose function is not known) and annotated sequences will continue to widen, since experimental functional characterisation of such large amounts of genomic data is not feasible. To bridge this widening gap, computational function prediction approaches will be essential.

The information encoded in the genome is translated into proteins, which carry out the biological functions required for the proper functioning of a cell. Proteins are made up of a linear chain of amino acids (determined by the nucleotide sequence of genes) linked together by peptide bonds, and they can be as diverse as the functions they serve. Depending on their amino-acid composition and sequence, proteins fold into their native three-dimensional conformation, which allows them to interact with other proteins or molecules and perform their function. Proteins are often considered the ‘workhorse’ molecules of the cell and perform diverse roles – as biological catalysts, structural elements and carrier molecules, or in cell signalling and cellular metabolism, among others. Consequently, to better understand the cell at the molecular level and ‘decode’ the available genomic data, it is essential to characterise protein functions.

The functional role of a protein can be studied or described in many different ways – by the molecular function or biochemical activity of the protein, its role in a biological process, or its relatedness to a disease. Hence, the term ‘protein function’ can be very ambiguous unless the context in which the function of a protein is described is stated clearly. Free-text function descriptions in the literature have proved too vague and unspecific to describe protein functions accurately; this led to the development of a common, organised protein annotation vocabulary – the Gene Ontology. This is the largest and most widely used resource of protein function terms and can be used to assign functions to proteins in different contexts, irrespective of the source organism.

The conventional method for predicting protein function is a protein sequence (or structure) homology search to identify similar sequences in a protein sequence (or structure) database, followed by transferring the known function of the most similar sequence (or structure). This is based on the principle that evolutionarily related proteins with high sequence (or structure) similarity have similar, if not identical, functions. However, this approach is error prone when the protein in question differs substantially from every protein of known function in the database, when proteins do not follow a simple linear relationship between similarity and function, or when proteins ‘moonlight’.

Typically, function prediction methods assume that the more similar two proteins are (in sequence or structure), the more alike their functions. In reality, however, minor variations in protein sequence or structure can sometimes lead to a substantial change in molecular function, while very different proteins can perform similar or identical functions.

Moonlighting proteins pose additional challenges for function prediction methods since, without any significant change in their sequence or structure, they can carry out more than one distinct function depending on where they are localised or on their concentration. Phosphoglucose isomerase is one such ‘moonlighting’ protein: it functions as a cell-metabolism enzyme inside the cell and as a nerve growth factor outside the cell.

To predict the functions of uncharacterised genomic sequences, most recent function prediction methods combine several sequence-based and structure-based approaches using machine learning, since most sequences are hard to characterise with any single method. Many tools are available today that provide computational function predictions exploiting different approaches. However, it is essential for experimental biologists to understand whether this vast number of function prediction methods is of any value to them and whether the predictions they make can be relied upon. The Critical Assessment of Function Annotation (CAFA) experiment is one major bioinformatics initiative addressing this; it aims to provide an unbiased, large-scale assessment of protein function prediction methods: to determine which methods perform better and to gauge the ability of the whole field to provide function predictions for the colossal amounts of genomic data currently available.

The first CAFA experiment, in 2013, succeeded in providing an understanding of the performance of existing function prediction methods. At the same time, it highlighted the major challenges and limitations of automated function prediction for computational biologists, database curators and experimental biologists. One of the main challenges is the need for accurate, experimentally determined functions of characterised proteins in databases, since all methods can only make predictions based on the protein function data available. However, significant biases have recently been identified in these databases, owing to the growing use of high-throughput experiments, which contribute only very general functional terms to experimental protein annotations. As a result, almost 25% of the characterised proteins in public databases currently have ‘binding’ listed as their function. Moreover, the experimentally known functions of most proteins are incomplete, as experiments are biased by experimenter choice and annotations are limited by the scope of those experiments. These biases not only affect our understanding of protein function in general but also propagate into the predictions made by methods that learn from these data. Hence, it is essential that both developers of automated function prediction methods and experimental biologists who use computational function annotations to guide their experiments are aware that such experimental biases exist.

Oracle SQL Developer v 4.0.0 – new GUI for multi-db management

Oracle have released a new version of their GUI database development tool: Oracle SQL Developer v 4.0.0.
This tool is provided free (it’s the various database engine flavours for which Oracle charge) and offers a wealth of features, such as browsing/manipulating database objects, writing/debugging/running SQL queries, managing security, comparing databases and running a large range of reports.

Particularly helpful is the ability to connect to a number of different database engines in addition to Oracle; so far I have successfully connected to MySQL and PostgreSQL, but any database with a compliant JDBC (Java Database Connectivity) driver should work.

As SQL Developer is a pretty hefty tool, I’ll mention two topics here, which are helpful to have sorted early on:

  1. How to connect to a PostgreSQL database (or other non-Oracle db)
  2. How to avoid JavaVM errors due to heap space

Connecting to a PostgreSQL database

  1. Download the JDBC driver from: http://jdbc.postgresql.org/download.html
    Use this version: JDBC41 Postgresql Driver, Version 9.3-1100 as it is compatible with JVM 1.7, used in SQL Developer 4.
  2. In SQL Developer 4, link to the JDBC driver via ‘Tools’, ‘Preferences’, ‘Add Entry’.
  3. Now create a new connection (via ‘File’ / ‘New’ or the green cross in the ‘Connections’ pane). If you have successfully linked the JDBC library, you will see a new ‘PostgreSQL’ tab on the ‘New Connection’ dialogue. Give the connection a name, set your database username & password and you should be ready to go…
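For reference, behind the connection dialogue the PostgreSQL JDBC driver uses a connection URL of the standard form shown below; the hostname, port and database name here are placeholder values, so substitute your own:

jdbc:postgresql://dbhost.example.com:5432/mydatabase

Knowing this format is handy for double-checking the hostname/port/database values you enter in the dialogue, or if you later want to connect from your own code using the same driver.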

Increasing memory available to JavaVM

You need to edit the file ide.conf – this will be under the installation folder:

../../sqldeveloper/ide/bin/ide.conf

Find the following lines (probably at the bottom) and increase the memory available, for example:

# The maximum memory the JVM can take up.
AddVMOption -Xmx2048M

# The initial memory available on startup.
AddVMOption -Xms512M

***

In a future post I hope to look at database migration – there is a comprehensive wizard to allow migration of existing MySQL or PostgreSQL databases to Oracle. When I understand the options and get it working fully, I’ll let you know!

Please note this is an early version (4.0.0.13) of the SQL Developer tool and there may well be odd bugs and quirks until it reaches a more mature release…

 

Migrating home directory files using Dolphin

This is one convenient way to start moving your old home directory files over to your new home directory:

1) If you haven't yet got Ian to activate your UCL account, then do so.
2) Log in to your machine as your UCL user.
3) Start Dolphin (e.g. run ‘dolphin &’ from a terminal).
4) Right-click on the ‘Places’ panel of the Dolphin window and click ‘Add Entry’.
5) In the resulting pop-up window, enter the remote location of your old home directory, e.g.
label: old_home
Location: sftp://username@bsmlx**/home/bsm3/lees
(substitute username and ** as appropriate)

6) Click ‘Split’ at the top of the Dolphin window to give yourself a split screen, then click on your new home directory in the ‘Places’ panel and start copying things over.
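If you prefer the command line, a rough equivalent using rsync over ssh is sketched below; the host placeholder (**) and path mirror the entry above and should be substituted in the same way, and ~/old_home_copy is just a hypothetical destination directory:

$ rsync -av username@bsmlx**:/home/bsm3/lees/ ~/old_home_copy/

The trailing slash on the source path means the contents of the old home directory are copied into ~/old_home_copy/ rather than into an extra nested directory.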

Pre-processing metagenomic data

A vital part of metagenome analysis is first ensuring that you are working with good quality data. During my PhD I have used the tools PRINSEQ and DeconSeq (Schmieder and Edwards, 2011) to process and filter metagenomic data sequenced using the 454 pyrosequencing technology.

PRINSEQ generates summary statistics, which also give an indication of data quality. Data can be filtered on a number of criteria, including the removal of duplicate sequences, the removal of sequences containing more than x% ambiguous (i.e. “n”) bases, and the removal of sequences with a mean quality score below x. Sequences can be trimmed to a defined length, trimmed to exclude poor-quality bases, and trimmed of poly-A/T tails. Finally, your output data can be re-formatted between FASTA+QUAL and FASTQ formats and between DNA and RNA sequences, and sequence IDs can be changed.
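As a rough illustration (not a recommendation of thresholds), a prinseq-lite command combining several of these filters might look like the sketch below; the file names are placeholders and the exact flags and values should be checked against the PRINSEQ documentation for your version:

$ prinseq-lite.pl -fastq my_reads.fastq \
    -min_qual_mean 20 -ns_max_p 1 -derep 1 \
    -trim_tail_left 5 -trim_tail_right 5 -min_len 60 \
    -out_format 3 -out_good my_reads_filtered -out_bad null

Here reads with a mean quality below 20 or more than 1% Ns are discarded, exact duplicates are removed, poly-A/T tails of five or more bases are trimmed from either end, and anything shorter than 60 bp after trimming is dropped.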

DeconSeq is typically used after PRINSEQ and removes ‘contaminant’ sequences, i.e. sequences belonging to species you do not wish to analyse. Reads are aligned (using BWA-SW) against a reference database of the chosen contaminant species, and any matching sequences are removed.
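For the standalone version, a typical invocation might look like the sketch below; ‘hsref’ stands in for whichever contaminant database you have configured locally, and the input file name is a placeholder:

$ perl deconseq.pl -f my_reads_filtered.fastq -dbs hsref

Reads matching the contaminant database end up in the ‘contaminated’ output file and the remainder in the ‘clean’ one; check the DeconSeq documentation for how to configure databases and for the identity and coverage thresholds appropriate to your data.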

Both of these tools are free to use and are available either online or through a standalone version.

Schmieder R and Edwards R: Quality control and preprocessing of metagenomic datasets. Bioinformatics 2011, 27:863-864. [PMID: 21278185]

Coping with millions of small files: appending to a tar archive

Most file systems will struggle when you read and write millions of tiny files. Exactly how much a particular file system will struggle depends on a bunch of factors: the file system type (ext3, GPFS, etc.), hardware configuration (RAM / networking bottlenecks) and so on. However, the take-home message is that storing many millions of tiny files on standard file systems (especially network file systems) is going to cause performance problems.

We recently came across this when performing millions of structural comparisons (with SSAP). Each comparison results in a small file, so running 25 million of these comparisons on the HPC caused a number of problems with storage.

The solution? Well, as always, there are lots.

You could store the file contents in a database (rather than on a file system). The downside is that this brings the overhead of running the database: dependencies, concurrency issues when accessing it from many places at the same time, and generally more tools in the chain that can go wrong.

Since we’re running this in an HPC environment, we’ll want a solution that is simple, scales well, and requires few dependencies. A possible alternative would be to store all these small files in a single large archive.

Looking at the tar man page, we can create a new tar archive with the ‘-c’ flag:

$ tar -cvf my_archive.tar existing_file

and we can append extra files to that archive with the ‘-r’ flag:

$ tar -rvf my_archive.tar extra_file1
$ tar -rvf my_archive.tar extra_file2

you can then list the contents of that archive with the ‘-t’ flag:

$ tar -tf my_archive.tar
existing_file
extra_file1
extra_file2

So, looking at a real example…

Let’s say we split our job of 25 million pairwise comparisons into 2500 lists, each containing 10000 pairs:

$ split --lines 10000 -a 4 -d ssap_pairs.all ssap_pairs.

That will result in 2500 files, each containing 10000 lines (named ssap_pairs.0000, ssap_pairs.0001, …).
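As a quick sanity check (a hedged one-liner, assuming the chunks were written to the current directory), you can confirm how many were produced:

$ ls ssap_pairs.[0-9]* | wc -l
2500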

We can then submit a job array to the scheduler so that a single script can process each of these input files.

$ qsub -t 1-2500 ssap.submit.pbs

Rather than post the full contents of the script ‘ssap.submit.pbs’ here – I’ll just focus on the loop that creates these 10000 small alignment files:

# name the archive file
ALN_TAR_FILE=ssap_alignments.`printf "%04d" $SGE_TASK_ID`.tgz

# create the archive (start the archive off with an existing file)
tar -cvf $ALN_TAR_FILE ssap_pairs.all

while read pairs; do
    # name of the alignment file that SSAP will produce
    # (e.g. 1abcA001defA00.list)
    aln_file=`echo "$pairs" | tr -d " "`.list

    # run SSAP
    $BIN_DIR/SSAP $pairs >> ssap_results.out

    # append alignment file to archive
    tar -rvf $ALN_TAR_FILE $aln_file

    # remove alignment file
    \rm $aln_file

done < $PAIRS_FILE

# compress then copy the archive back to the original working directory
gzip $ALN_TAR_FILE 
cp $ALN_TAR_FILE.gz ${SGE_O_WORKDIR}

Job done.
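When you later need an individual alignment you don't have to unpack everything; here is a hedged sketch, using the archive and alignment file names from the script above:

# list the contents of one compressed archive
$ tar -tzf ssap_alignments.0001.tgz.gz

# extract a single alignment file from it
$ tar -xzf ssap_alignments.0001.tgz.gz 1abcA001defA00.list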

Running SSAPs within your scripts

This is just a post to help people within the CATH group use our internal Perl modules (it may end up on CPAN, but probably not in the very near future..).

If you’ve got the CATH Perl environment set up okay then you should be able to type the following in a terminal:

perldoc Cath::Ssap

and that documentation should help you get to a script like:

#!/usr/bin/env perl

use strict;
use warnings;

use Cath::Ssap;

my $cath_version = '3.5';

my $ssap = Cath::Ssap->new( query => '1oaiA00', match => '1oksA00', version => $cath_version );

my $result = $ssap->run(); # returns Cath::Ssap::Entry

print $result;

# 1oaiA00 1oksA00 59 50 71.82 40 67 2 4.20

$result->query; # 1oaiA00
$result->match; # 1oksA00
$result->query_length; # 59
$result->match_length; # 50
$result->ssap_score; # 71.82
$result->aligned_residues; # 40
$result->percent_overlap; # 67
$result->percent_identity; # 2
$result->rmsd; # 4.20
$result->simax; # 6.195

Obviously you’d need to set version to something sensible for your work (e.g. ‘3.5’, as in the example above, or ‘latest’).

If you’re doing tons of SSAPs then there are things to help with automating that too – have a look at:

perldoc Cath::SsapList
perldoc Cath::SsapList::Matrix

Finally, bear in mind we’ve already calculated SSAP comparisons between domains within superfamilies and stored the results in the database:

perldoc Cath::SsapList::DBIC

If any of that code saves you any time, then please feel free to repay some of that time by adding documentation to those files…

CATH v3.5 has been released

The next version of CATH (v3.5) is now available at www.cathdb.info.

This release brings together a number of features including: a large update to the underlying classification database, a number of new data features and a complete recoding of the web pages.

  • 20,000+ new domains classified from 5000+ PDB structures
  • Integrated CATH + Gene3D: added genomic sequence data
  • New data: Functional Family alignments
  • New web pages: faster, easier to search, more user-friendly

 

Public Release of CATH v3.3


The latest version of CATH (v3.3) has now been officially released and is available at www.cathdb.info. This release includes 226 new superfamilies (total 2,593), 1,148 new sequence families (total 10,019) and 14,473 newly classified protein domains.


The release was actually finalised a while ago, but it has taken us a few months to generate all the extra data and extra features that the site now contains. These extra features include:

  • Structural clusters and alignments
  • Structural comparisons
  • Functional annotations
  • Greatly improved search facilities
  • More intuitive interface (hopefully)

More information on the release can be found on the links below. Please do let us know if you have any comments, suggestions, requests by sending an email to cathteam@biochem.ucl.ac.uk.

Links to additional information: