Thursday 14 June 2018

Cheminformatics for deep learners: Canonical SMILES and why you should avoid them

I've already gotten the "valid SMILES" issue off my chest, but there's something else that's been bugging me about deep learning papers in chemistry, and that's the use of canonical SMILES for training. Why would you use canonical SMILES? And if you did, wouldn't that introduce some problems?

This is one of those cases where it just feels like a mistake to me, and I'm going to do my best to articulate my concerns. I'm not familiar enough with the mechanics of DNNs to be sure, but I hope those in the field will consider the points I raise.

The problem

So, canonical SMILES. Behind the scenes, the canonicalisation procedure assigns a unique label to each atom (typically 1 to N). These labels are then used to generate a canonical SMILES, typically (but not necessarily) by starting at the atom with the lowest label. The canonicalisation procedure is based on the attributes of the graph, with the result that the first atom in a canonical SMILES tends to favour particular atom types and avoid others.
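To make this concrete, here is a minimal RDKit sketch (my own illustration, using benzoic acid as an arbitrary example) showing the canonical labels and the SMILES written from them:

    # A minimal sketch using RDKit; other toolkits expose similar functionality.
    # CanonicalRankAtoms returns the canonical label assigned to each atom,
    # and MolToSmiles uses these labels when writing the canonical string.
    from rdkit import Chem

    mol = Chem.MolFromSmiles("OC(=O)c1ccccc1")  # benzoic acid, an arbitrary example
    ranks = list(Chem.CanonicalRankAtoms(mol))
    for atom in mol.GetAtoms():
        print(atom.GetIdx(), atom.GetSymbol(), ranks[atom.GetIdx()])
    print(Chem.MolToSmiles(mol))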

For example, if you look at the atom types of the first atom in the canonical SMILES generated by RDKit for ChEMBL molecules, you will find that the second most common atom type in ChEMBL (namely, *-C(-*)=*) never appears as the first atom of a canonical SMILES string. This is by design, and you'll see similar behaviour with other toolkits - SMILES strings tend to start with degree-1 atoms.
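If you want to check this yourself, the following hedged sketch tallies the element and degree of the first atom of each canonical SMILES; "smiles.txt" is a placeholder for your own file of SMILES strings (e.g. extracted from ChEMBL):

    # For each input SMILES, write the RDKit canonical SMILES and re-parse it;
    # atom 0 of the re-parsed molecule is the first atom in the string.
    from collections import Counter
    from rdkit import Chem

    counts = Counter()
    with open("smiles.txt") as f:  # placeholder for your own SMILES file
        for line in f:
            fields = line.split()
            if not fields:
                continue
            mol = Chem.MolFromSmiles(fields[0])
            if mol is None:
                continue
            first = Chem.MolFromSmiles(Chem.MolToSmiles(mol)).GetAtomWithIdx(0)
            counts[(first.GetSymbol(), first.GetDegree())] += 1
    print(counts.most_common(10))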

So what if the distribution of atom types is different for the first atom?

Well, firstly, I predict that as a result these atom types will be over-represented in structures generated by DNNs (and others under-represented). If you train on canonical SMILES, then the probabilities for the first atom will be determined by the atom types favoured as starting atoms by canonical SMILES. Consider the extreme example where fluorine always occurs as the first atom in any canonical SMILES that contains it; you would then see an increased number of fluorines as the first atom of the generated molecules. Now, you could argue that the probabilities for the remaining atoms will be adjusted accordingly, but I believe that there is a strong edge effect and that any correction will attenuate as the SMILES string becomes longer.

Secondly, this bias makes the DNN's job harder. Instead of a relatively even distribution of atom types at all points in the SMILES string, the distribution will depend on the distance from the starting atom. The network is now trying to learn something about the properties of canonical SMILES instead of concentrating on the task at hand...

...which brings me nicely to the third point. Predictive models attempt to deduce a property value from the structure, and a SMILES string is used by DNNs as a proxy for the structure. Using a canonical SMILES string is another step removed. What about a molecule with a very similar structure but a very different canonical SMILES? Surely the goal of a robust model is to handle this well. Restricting good predictive power to only those structures that are both similar and have similar canonical SMILES is to develop a model with reduced applicability. A fun task is to measure the degree to which this fitting to canonical SMILES occurs; this is left to the reader.
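To get started on that measurement, here is one hedged sketch: generate several alternative SMILES for the same molecule and look at the spread of a trained model's predictions. Note that model.predict is a hypothetical stand-in for whatever prediction call your DNN exposes, and doRandom=True requires a reasonably recent RDKit:

    from rdkit import Chem

    mol = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")  # aspirin, an arbitrary example
    variants = {Chem.MolToSmiles(mol, doRandom=True) for _ in range(20)}
    print(variants)
    # preds = [model.predict(smi) for smi in variants]  # hypothetical model
    # print(max(preds) - min(preds))  # a large spread suggests fitting to the SMILES form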

The solution

The solution is simple: use random SMILES - a single one, or multiple. The use of multiple random SMILES has already been described by Thomas Bergwinkl, and subsequently by Esben Jannik Bjerrum as a 'data augmentation technique', but I see it as simply avoiding the inherent bias of canonical SMILES. Either way, I like this quote from Thomas:
The output for alternative representations of a molecule should be the same, if you understand SMILES. Using alternative representations in the test data allows to verify if the neural network understands SMILES. Spoiler: After a while the output becomes very close for the alternatives!
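For the record, here is a hedged sketch of the atom-renumbering approach (essentially the augmentation trick Bjerrum describes); the function name and defaults are my own:

    import random
    from rdkit import Chem

    def random_smiles(smi, n=10, seed=1):
        """Return up to n distinct random (non-canonical) SMILES for one molecule."""
        rng = random.Random(seed)
        mol = Chem.MolFromSmiles(smi)
        order = list(range(mol.GetNumAtoms()))
        out = set()
        for _ in range(n):
            rng.shuffle(order)  # shuffle the atom order...
            shuffled = Chem.RenumberAtoms(mol, order)
            out.add(Chem.MolToSmiles(shuffled, canonical=False))  # ...and write without canonicalising
        return out

    print(random_smiles("OC(=O)c1ccccc1"))  # benzoic acid, an arbitrary example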

So why do people use canonical SMILES in the first place? I have a theory about this.

I believe it's because generative models converge more quickly to generating syntactically valid SMILES strings when trained on canonical SMILES. And for some reason, the percentage of syntactically valid SMILES strings generated by the model has become a figure of merit.

But this makes no sense - who cares what this percentage is? Sure, we can all overfit to canonical SMILES and get high percentages quickly. But how is this a good thing? You know that feeling you get when you see a QSAR model with a very high R2 on training data? That's how I feel when I see a high value for this percentage. If the model is actually doing what it's supposed to be doing (i.e. learning the underlying structure rather than the training set of canonical SMILES), then the percentage should really be lower. What do I care if the percentage of syntactically valid SMILES is 1%? So long as that 1% solves my problem, it's irrelevant - these models spew out structures at thousands per second (I presume, but even so).
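For reference, here is a minimal sketch of how this figure of merit is typically computed - the fraction of generated strings that a toolkit will parse; the generated list is a placeholder for your model's raw output:

    from rdkit import Chem

    generated = ["CCO", "c1ccccc1O", "C1CC", "CC(C"]  # placeholder model output
    n_valid = sum(Chem.MolFromSmiles(s) is not None for s in generated)
    print(100.0 * n_valid / len(generated), "% valid")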

Please let him stop talking now

Okay, okay - I'm done. What do you think?

Wednesday 6 June 2018

Notes from the 11th International Conference on Chemical Structures

Just back from the recent ICCS in Noordwijkerhout, the Netherlands. I really enjoyed it although, as you can see from the picture, it was hard work at times.

Here are my notes on the scientific program, which I have just extracted from Twitter. A big thank-you to ThreadReader, without which I couldn't have done the extraction. Naturally, all errors are my own - I may have misunderstood something or lost the thread of the talk. Also, just to note, I didn't take notes on all of the talks.

If you are interested in a particular talk, paste the provided link into a browser to see whether anyone on Twitter commented on the tweet.

If you want to follow the entire Twitter conversation on the meeting, which used the hashtag #11thICCS, go to this link. Again, if a particular tweet has replies, you need to click on it to see them.

As well as taking notes, I also presented a poster entitled "Can we agree on the structure represented by a SMILES string? A benchmark dataset". For more info, follow the link over to the NextMove blog.

Note to self:
Next time include the hashtag with every tweet. Otherwise it's hard to extract, and hard for attendees to follow automatically.

Image credit:
Image courtesy of Jason Cole.