Thursday, 12 September 2024

Do NOT use SDFdb

It's not even a real database. All it does is:

import sdfdb
db = sdfdb.SDFdb("mysdf.sdf")
molfile = db.get_mol("mytitle")

Ok, so it's quick to index even quite a large SD file. But if Andrew Dalke ever sees the corners it cuts...! I mean, it doesn't even support Windows line endings.
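(If you're stuck with a CRLF file, one workaround is to rewrite it with Unix line endings before indexing - a quick sketch, with the filenames invented:)

# Normalise Windows (CRLF) line endings to Unix (LF) before indexing.
# Binary mode throughout, so nothing else is altered.
with open("mysdf_windows.sdf", "rb") as inp, open("mysdf.sdf", "wb") as out:
    for line in inp:
        out.write(line.replace(b"\r\n", b"\n"))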

And the code? Well, it speaks for itself:

import re

class SDFdb:
    def __init__(self, fname):
        self.fname = fname
        self.file = open(self.fname, "rb")
        self._create_index()

    def _create_index(self):
        # Match a record separator line followed by the title line of
        # the next record (note the raw bytes pattern)
        patt = re.compile(rb"\$\$\$\$\n(.*)\n")
        chunksize = 100000
        self.lookup = {}  # title -> record index
        self.start = []   # record index -> byte offset of record start
        idx = 0
        position = 0
        # The first record is not preceded by a $$$$ line, so handle
        # its title separately
        title = self.file.readline().rstrip(b"\n").decode("ascii")
        self.lookup[title] = idx
        idx += 1
        self.start.append(position)
        position = self.file.tell()
        while text := self.file.read(chunksize):
            if not text.endswith(b"\n"):
                text += self.file.readline()
            # Invariant: text ends with "\n"
            if text.endswith(b"\n$$$$\n"):
                text += self.file.readline()
            # Invariant: text never ends with "\n$$$$\n", so every
            # separator in this chunk has its title line alongside it
            for m in patt.finditer(text):
                title = m.group(1).decode("ascii")
                if title in self.lookup:
                    print(f"WARNING: Duplicate title {title}")
                self.lookup[title] = idx
                offset = m.start() + 5  # skip past the "$$$$\n" separator
                idx += 1
                self.start.append(position + offset)
            position = self.file.tell()
        self.start.append(self.file.tell()) # should be EOF

    def get_mol(self, title):
        idx = self.lookup.get(title, None)
        if idx is None:
            raise KeyError(f"Title '{title}' not found")
        self.file.seek(self.start[idx])
        text = self.file.read(self.start[idx+1]-self.start[idx])
        return text.decode("ascii")

    def close(self):
        self.file.close()

...though maybe I can see it would be useful for a large number of random accesses into a large (well-behaved) SD file.

That is, if they weren't already in a database. (Or *cough* they were in a database but you thought this was simpler than writing the code to batch up queries.)
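For the record, that use case is about as hard as this (the titles are invented):

import sdfdb

# Random access to a handful of records, in any order, after a single
# indexing pass over the file
db = sdfdb.SDFdb("mysdf.sdf")
for title in ["mytitle", "myothertitle"]:
    print(db.get_mol(title))
db.close()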

Tuesday, 4 June 2024

Every ChEMBL everywhere, all at once

The team over at ChEMBL has just announced a symposium to celebrate 15 years of ChEMBLdb and 10 of SureChEMBL. With this in mind (or perhaps for other reasons), I have attempted to extract data from all 34 releases of ChEMBLdb. Let's see how I got on...

One of the things I love about ChEMBL is that they still provide all of the download files for all previous releases. But before w'get wget, let's see if there's an easier way, namely Charles Tapley Hoyt's ChEMBL Downloader - if it can do the heavy lifting for me, that'll save a whole bunch of work. Looking into it, however, I found that the ability to run SQL queries relies on an SQLite database being available, which is only the case for ChEMBL 21 or later.

So old skool it is, which leads to the question of which database format to use. Well, it turns out that there's only one database format that is available for every single release of ChEMBL, and that's MySQL. One bash loop later, and I have all 34 chembl_nn_mysql.tar.gz files (well, two extra wgets for 22_1 and 24_1). A bit of unzipping later and my home folder is 300G heavier and we are ready to roll.

That is, if we can navigate the slightly differently named/arranged files in every case. Is the unzipped folder chembl_nn_mysql, or is it chembl_nn with the chembl_nn_mysql folder inside that? Are the setup instructions in INSTALL or INSTALL_mysql? Is the SQL data in a .sql file or a .dmp file? It is of course nitpicky to even mention this given the nature of what I'm getting for free from ChEMBL, but have you ever tried to install 34 ChEMBLs? :-) In the end, a carefully crafted find/grep was able to pull out the correct create command:

for d in `find . | grep INSTALL | sort`; do grep "create database" $d | sed "s/mysql>//g"; done

This is simply 'create database chembl_nn' from version 1 until version 22, and then from 23 onwards it changes to specify the character set as UTF-8. For the actual import, I used a Python script to work out the right commands, wrote them to a shell script and then ran them. Just another 700G later, and it's all available in MySQL.
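Something like the following would do the trick (a sketch only - the credentials are invented, the database name is guessed from the top-level folder, and the UTF-8 character set needed from release 23 onwards is left out):

import glob
import os

# Find each release's SQL dump (.sql for some releases, .dmp for others,
# in varying folder layouts) and write one 'create database' plus one
# import command per release to a shell script.
dumps = sorted(glob.glob("chembl_*/**/*.sql", recursive=True)
               + glob.glob("chembl_*/**/*.dmp", recursive=True))
with open("import_all.sh", "w") as script:
    for dump in dumps:
        dbname = dump.split(os.sep)[0].replace("_mysql", "")  # e.g. chembl_16
        script.write(f'mysql -u myuser -pmypassword -e "create database {dbname}"\n')
        script.write(f'mysql -u myuser -pmypassword {dbname} < "{dump}"\n')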

What I wanted to do was simply export a set of SMILES, InChIs and ChEMBL IDs from each of the versions. What I thought would be a problem turned out not to be at all: while the schema as a whole has grown and changed quite a bit over the years, the only effect for me was that 'compounds' became 'compound_structures' in ChEMBL 9.
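By way of illustration, the per-release export might be sketched as follows (I'm using mysql.connector here, the column names are those of recent releases, and the credentials are invented):

import csv
import mysql.connector

# Sketch of a per-release export of ChEMBL IDs, SMILES and InChIs.
# The table rename at ChEMBL 9 is the only schema change handled;
# the earliest releases need more care, as discussed below.
def export_release(version):
    table = "compound_structures" if version >= 9 else "compounds"
    conn = mysql.connector.connect(user="myuser", password="mypassword",
                                   database=f"chembl_{version}")
    cursor = conn.cursor()
    cursor.execute(
        f"SELECT md.chembl_id, cs.canonical_smiles, cs.standard_inchi"
        f" FROM molecule_dictionary md"
        f" JOIN {table} cs ON md.molregno = cs.molregno")
    with open(f"chembl_{version}.tsv", "w", newline="") as out:
        writer = csv.writer(out, delimiter="\t")
        writer.writerow(["chembl_id", "canonical_smiles", "standard_inchi"])
        writer.writerows(cursor)
    conn.close()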

More of an issue was that the data stored has changed over the years. I had forgotten that neither ChEMBL 1 nor ChEMBL 2 had ChEMBL IDs; in fact, it wasn't until version 8 that they appeared on the scene. Before that, there was an attempt to use ChEBI IDs, and at the very start there was just the molregno. Furthermore, version 1 doesn't have Standard InChI - it just has a non-standard one (why, or what the settings are, I don't know ...Added 30/06/2024: Standard InChI was not available until shortly before ChEMBL 1 - that would explain it!). The Standard InChIs, at least, we can pull in from version 2 by matching on molregno and SMILES, and then calculate any missing ones off-line.
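For the off-line InChI calculation, one option is RDKit (a sketch; other toolkits would do just as well):

from rdkit import Chem

def smiles_to_standard_inchi(smi):
    """Return the Standard InChI for a SMILES string, or None if it fails to parse."""
    mol = Chem.MolFromSmiles(smi)
    if mol is None:
        return None
    # RDKit's MolToInchi generates Standard InChI by default
    return Chem.MolToInchi(mol)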

There was a point to all this, but that'll wait for another day. What I've shown here is that despite 15 years covering 34 releases, it's possible to recreate every single release and relive those glory days (ah, ChEMBL 16, will I ever forget you?).

Thursday, 28 March 2024

Learning the LINGO

LINGO is a simple similarity measure for SMILES strings, based on shared n-grams between the strings themselves. In many respects, the relationship between an ECFP4 fingerprint and a molecule is analogous to the relationship between a LINGO profile and a SMILES string. While it has obvious drawbacks, its charm lies in its ease of calculation, which lends itself to high-speed implementations by some very clever people (Haque/Pande/Walters, Grant...Sayle).

I'm not going to add an entry to that canon, but I recently realised that Python's Counter class supports union and intersection, which makes it just perfect for an elegant implementation:

import itertools
import collections

from mtokenize import tokenize

def sliding_window(iterable, n):
    """Collect data into overlapping fixed-length chunks or blocks
    This is one of the recipes in the itertools documentation.

    >>> ["".join(x) for x in sliding_window('ABCDEFG', 4)]
    ['ABCD', 'BCDE', 'CDEF', 'DEFG']
    """
    it = iter(iterable)
    window = collections.deque(itertools.islice(it, n-1), maxlen=n)
    for x in it:
        window.append(x)
        yield tuple(window)

def lingo(seqA, seqB):
    """Implement a 'count' version of LINGO given tokenized SMILES strings"""
    counts = [collections.Counter(sliding_window(seq, 4)) for seq in [seqA, seqB]]
    intersection = counts[0] & counts[1]  # min of each 4-gram's counts
    union = counts[0] | counts[1]         # max of each 4-gram's counts
    # Count-based Tanimoto: total of the min counts over total of the
    # max counts (len() would give the binary, set-based variant instead)
    tanimoto = sum(intersection.values()) / sum(union.values())
    return tanimoto

if __name__ == "__main__":
    import doctest
    doctest.testmod()

    smiA = "c1ccccc1C(=O)Cl"
    smiB = "c1ccccc1C(=O)N"
    print(lingo(tokenize(smiA), tokenize(smiB)))

For 'mtokenize' I used the SMILES tokenizer code from the previous blog post. As an aside, try renaming mtokenize.py to tokenize.py and making the corresponding change to the import statement. See how long it takes you to work out why this causes an error :-)
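If you don't have that tokenizer to hand, a rough stand-in along these lines (a quick sketch, not the code from that post) is enough to run the example above:

import re

# Rough-and-ready SMILES tokenizer: bracket atoms, the two-letter
# organic-subset halogens Cl and Br, %nn ring closures, then any
# single character. Not exhaustive, but enough for the SMILES above.
_TOKEN = re.compile(r"\[[^\]]*\]|Cl|Br|%\d\d|.")

def tokenize(smi):
    """Split a SMILES string into a list of tokens.

    >>> tokenize("CC(=O)Cl")
    ['C', 'C', '(', '=', 'O', ')', 'Cl']
    """
    return _TOKEN.findall(smi)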