{ "cells": [ { "cell_type": "markdown", "id": "f651ee02", "metadata": {}, "source": [ "# Analysis of T231 complex with `af_analysis`\n", "\n", "This notebook uses the `af_analysis` library to post-process AlphaFold2, ColabFold and AF3 models of the T231 complex. The main steps are:\n", "\n", "- Load all models from the `T231/` run directory into a `Data` object and a summary `DataFrame`.\n", "- Compute reference-based quality metrics (e.g. DockQ, LRMS) against the experimental structure 8RMO.\n", "- Use `af_analysis.analysis` to derive inter-chain PAE and interface-level confidence scores (e.g. ipTM).\n", "- Use `af_analysis.docking` to compute docking-oriented scores on the peptide–receptor interface (ipSAE, pDockQ2, PAE- and pLDDT-based metrics, LIS, etc.).\n", "- Use `af_analysis.clustering` to cluster models based on backbone RMSD and visualise the conformational landscape.\n", "\n", "All these scores are stored as columns in `my_data.df` and are later correlated and visualised to compare methods (AF3 / ColabFold / AF2 via the `weight` column).\n", "\n", "\n", "You can download the required data here:\n", "\n", "wget https://owncloud.rpbs.univ-paris-diderot.fr/owncloud/index.php/s/KS8j1KQLOqBTMHw\n", "\n", "unzip the data:\n", "\n", "```bash\n", "unzip T231.zip\n", "```\n", "\n", "If you dont have it create the `af_analysis` environment:\n", "\n", "```bash\n", "conda create -n af_analysis_2 python==3.12 jupyter ipympl notebook==7.3.2\n", "conda activate af_analysis_2\n", "# Get the last version of af_analysis\n", "pip install git+https://github.com/samuelmurail/af_analysis.git@main\n", "```\n" ] }, { "cell_type": "code", "execution_count": null, "id": "6fd638c8-51a8-471c-8c68-d548ac8adae6", "metadata": {}, "outputs": [], "source": [ "import af_analysis\n", "import seaborn as sns\n", "import matplotlib.pyplot as plt\n", "from tqdm.auto import tqdm\n", "\n", "import scipy.stats\n", "\n", "import pdb_numpy\n", "from pdb_numpy.analysis import dockQ\n", "from pdb_numpy import visu\n", "\n", "from af_analysis import analysis, plot, docking, clustering" ] }, { "cell_type": "code", "execution_count": null, "id": "519159a3-35d3-496d-aa2e-e4f2cfb9def5", "metadata": {}, "outputs": [], "source": [ "# To debug use:\n", "# af_analysis.show_log()" ] }, { "cell_type": "code", "execution_count": null, "id": "7946059c-97d4-4c0a-995d-ac154881ddc0", "metadata": {}, "outputs": [], "source": [ "%matplotlib widget" ] }, { "cell_type": "markdown", "id": "62ae4355", "metadata": {}, "source": [ "## Loading models with `af_analysis.Data`\n", "\n", "We create a `Data` object using `af_analysis.Data('T231/', format='full_massivefold')`, where `T231/` is the data path, and `full_massivefold` is the data format. `af_analysis` supported format are :\n", "* AlphaFold 2\n", "* AlphaFold 3\n", "* ColabFold\n", "* AlphaFold-Multimer\n", "* AlphaPulldown\n", "* Boltz1\n", "* Chai-1\n", "* MassiveFold\n", "\n", "\n", "- The `Data` class scans the `T231/` directory and registers every predicted complex (AF3 / ColabFold / AF2 runs) produced by the MassiveFold pipeline.\n", "- It builds `my_data.df`, a `pandas.DataFrame` where each row corresponds to one model with metadata such as:\n", "\n", " - File paths to the PDB/mmCIF (`pdb` column).\n", " - Run identifier / method in the `weight` column (e.g. AF3 vs ColabFold vs AF2).\n", " - Built-in AlphaFold scores like `ranking_confidence`, `ipTM`, `pTM`, mean pLDDT, etc. 
"    - Built-in AlphaFold scores like `ranking_confidence`, `ipTM`, `pTM`, mean pLDDT, etc. (depending on what is available).\n", "\n", "This `DataFrame` is the central table to which we will progressively add new columns containing all the analysis and docking scores." ] }, { "cell_type": "code", "execution_count": null, "id": "b5f9c85e-2f2b-4615-bcec-720f134ea9b6", "metadata": {}, "outputs": [], "source": [ "data_path = 'T231/'\n", "PDB_ref = '8RMO'\n", "#data_path = 'H1140/'\n", "#PDB_ref = '9ERT'\n", "\n", "#data_path = 'A0A2U7UDN4/'\n", "\n", "my_data = af_analysis.Data(data_path, format='full_massivefold')" ] }, { "cell_type": "markdown", "id": "fa201d4b", "metadata": {}, "source": [ "### Task 1 – Explore the prediction table\n", "- Inspect `my_data.df`. Answer:\n", "    - How many rows (models) are there in total?\n", "    - How many models per method in the `weight` column (AF3 / ColabFold / AF2)?\n", "    - List the main score columns that are already present (e.g. `ranking_confidence`, `ipTM`, `pTM`, pLDDT-based metrics).\n", "- Write your code for this exploration in the next code cell and summarise your observations in 2–3 sentences." ] }, { "cell_type": "code", "execution_count": null, "id": "fc202d3b", "metadata": {}, "outputs": [], "source": [ "# Task 1: basic inspection of my_data.df\n", "# 1) Show the first 5 rows\n", "# 2) Count the number of models by 'weight'\n", "# 3) Print the available score columns\n", "\n", "# your code here" ] }, { "cell_type": "code", "execution_count": null, "id": "ef5c6cb2-bfaf-4ee1-8bb3-ab6311aceab5", "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "id": "cdcde353", "metadata": {}, "source": [ "### Task 2 – Understand chain mapping for DockQ\n", "\n", "\n", "Explain:\n", "- Why we use `rec_chains=['A','B']` and `lig_chains=['C']` for the predicted complex.\n", "- Why the native receptor/ligand chains are `['H','L']` / `['F']` in 8RMO.\n", "- What could go wrong in the DockQ computation if you accidentally swapped or mismatched these chain IDs.\n", "\n", "Below you can have a look at the chains and sequences of the native structure, as well as those of the first model.\n", "\n", "> ⚠️ **The DockQ computed here is not the peptide CAPRI score**: be very careful if you reuse this code with a peptide–protein complex."
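, "\n", "\n", "Before running DockQ, it can help to sanity-check the chain IDs and sizes on both sides. Here is a minimal sketch (it assumes `get_aa_seq()` returns a dict keyed by chain ID):\n", "\n", "```python\n", "# List chains and sequence lengths for the first model and the native structure\n", "model_seqs = pdb_numpy.Coor(my_data.df['pdb'][0]).get_aa_seq()\n", "native_seqs = pdb_numpy.Coor(pdb_id=PDB_ref).get_aa_seq()\n", "\n", "print('Model chains :', list(model_seqs.keys()))\n", "print('Native chains:', list(native_seqs.keys()))\n", "\n", "for chain, seq in model_seqs.items():\n", "    print(f'model  chain {chain}: {len(seq)} residues')\n", "for chain, seq in native_seqs.items():\n", "    print(f'native chain {chain}: {len(seq)} residues')\n", "```\n"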
] }, { "cell_type": "code", "execution_count": null, "id": "87be4fec-fe02-4526-bb77-ee31de9346fc", "metadata": {}, "outputs": [], "source": [ "coor = pdb_numpy.Coor(my_data.df['pdb'][0])\n", "coor.get_aa_seq()" ] }, { "cell_type": "code", "execution_count": null, "id": "afed0346-9850-4e25-b61b-33a2e92e2e07", "metadata": {}, "outputs": [], "source": [ "visu.get_nglview(coor)" ] }, { "cell_type": "code", "execution_count": null, "id": "ca0042bb-0e9b-4c42-b8fe-9481852eec3a", "metadata": {}, "outputs": [], "source": [ "ref = pdb_numpy.Coor(pdb_id=PDB_ref)\n", "ref.get_aa_seq()" ] }, { "cell_type": "code", "execution_count": null, "id": "57a59d00-8e08-4f1c-9e09-b586e90d3201", "metadata": {}, "outputs": [], "source": [ "visu.get_nglview(ref)" ] }, { "cell_type": "code", "execution_count": null, "id": "aca8d0ab-7a64-41b6-9c3f-5841c78c8d85", "metadata": {}, "outputs": [], "source": [ "# Compute dockQ\n", "# If the operation is too long, you can load the computed value directly in the next cell\n", "\n", "if 0:\n", " \n", " ref_protein = ref.select_atoms('protein')\n", " dockq_list = []\n", " lrms_list = []\n", " irms_list = []\n", " fnat_list = []\n", " \n", " for pdb in tqdm(my_data.df['pdb'], total=len(my_data.df)):\n", " \n", " coor = pdb_numpy.Coor(pdb)\n", " \n", " dockq_res = dockQ(\n", " coor,\n", " ref_protein,\n", " rec_chains=['A', 'B'],\n", " lig_chains=['C'],\n", " native_rec_chains=['H', 'L'],\n", " native_lig_chains=['F'],\n", " )\n", " # print(dockq_res['DockQ'][0])\n", " \n", " dockq_list.append(dockq_res['DockQ'][0])\n", " lrms_list.append(dockq_res['LRMS'][0])\n", " irms_list.append(dockq_res['iRMS'][0])\n", " fnat_list.append(dockq_res['Fnat'][0])\n", " \n", " # Add data in dataframe\n", " my_data.df['dockq'] = dockq_list\n", " my_data.df['Fnat'] = fnat_list\n", " my_data.df['LRMS'] = irms_list\n", " my_data.df['iRMS'] = lrms_list\n", " \n", " # Save Data as csv file\n", " my_data.export_csv('T231_dockq.csv')" ] }, { "cell_type": "code", "execution_count": null, "id": "bce42dde-801c-4f26-b655-0e11afcd6653", "metadata": {}, "outputs": [], "source": [ "my_data = af_analysis.Data(csv='T231_dockq.csv')" ] }, { "cell_type": "code", "execution_count": null, "id": "eecac09a-624b-4cc2-aad8-49c53be43ecb", "metadata": {}, "outputs": [], "source": [ "my_data.df.columns" ] }, { "cell_type": "markdown", "id": "607bf751-0e4d-40b0-911c-e9e64edf0206", "metadata": {}, "source": [ "### Task 3 – Explore DockQ and LRMS distributions\n", "* a) Print min, max, mean DockQ and LRMS\n", "* b) Plot a histogram of DockQ\n", "* c) Make a scatterplot of DockQ vs LRMS (optional: colour points by 'weight')" ] }, { "cell_type": "code", "execution_count": null, "id": "9f29e13a-67ec-401c-9fc2-ec79dd6c26e0", "metadata": {}, "outputs": [], "source": [ "# your code here" ] }, { "cell_type": "markdown", "id": "a2513589", "metadata": {}, "source": [ "### Task 4 – Is ipTM a good predictor?\n", "After you compute and plot DockQ vs ipTM:\n", "- Report the correlation coefficient (r).\n", "- Comment briefly: does higher ipTM generally correspond to higher DockQ for this complex?\n", "- Do you see differences between methods (`weight`) in this relationship?" 
] }, { "cell_type": "code", "execution_count": null, "id": "4520d341", "metadata": {}, "outputs": [], "source": [ "# Task 4 – Relate DockQ to ipTM\n", "# Compute the Pearson correlation between 'dockq' and 'ipTM'\n", "# and make a scatterplot DockQ vs ipTM coloured by 'weight'.\n", "\n", "# Hint: use scipy.stats.pearsonr or linregress and seaborn.scatterplot.\n", "\n", "# your code here" ] }, { "cell_type": "markdown", "id": "4a307f41", "metadata": {}, "source": [ "## Global summary plots (`af_analysis.plot`)\n", "\n", "`plot.show_info(my_data)` provides an overview of the prediction set:\n", "\n", "- Summaries and distributions of key columns in `my_data.df` (e.g. `ranking_confidence`, `ipTM`, `pTM`, pLDDT-based metrics).\n", "- Basic per-method comparisons via the `weight` column (AF3 vs ColabFold vs AF2), depending on what is present in the DataFrame.\n", "\n", "Use this function to quickly check that the run has been parsed correctly and to spot gross differences between methods / settings before diving into more detailed docking analysis.\n", "\n", "> ⚠️ **Sort your dataframe before inspecting data**: You may want to sort you data based on `ipTM`, `ranking_confidence`, or your prefered score." ] }, { "cell_type": "code", "execution_count": null, "id": "89d579c0-99ab-4cec-bc6c-faa7cb672bcc", "metadata": {}, "outputs": [], "source": [ "plot.show_info(my_data)" ] }, { "cell_type": "markdown", "id": "85348242-ed0b-41a1-a421-b48c01c4175c", "metadata": {}, "source": [ "You may experience some bugs due to conflicts with Matplotlib, Jupyter and JavaScript. It depends on a lot of factors, and solving them is extremely time-consuming. Here is an alternative which is less smooth but more convenient:" ] }, { "cell_type": "code", "execution_count": null, "id": "642ab654-f63b-435b-923a-7c9b29559fe7", "metadata": {}, "outputs": [], "source": [ "my_data.show_plot_info()" ] }, { "cell_type": "markdown", "id": "34832182", "metadata": {}, "source": [ "## Docking-oriented interface metrics (`af_analysis.docking`)\n", "\n", "This block applies several functions from `af_analysis.docking` to enrich `my_data.df` with interface scores, one row per model:\n", "\n", "- `docking.pae_pep(my_data)`: PAE-based summary over ligand residues (smallest chain), to quantify the uncertainty of the ligand conformation.\n", "- `docking.pae_contact_pep(my_data)`: PAE restricted to ligand residues that are in contact with the receptor (interface contacts).\n", "- `docking.pdockq2_lig(my_data)`: a DockQ-like confidence score for the ligand, derived from AF outputs (e.g. PAE, pLDDT, contacts).\n", "- `docking.ipSAE_lig(my_data)`: interface score based on PAE matrix and ligand interface.\n", "- `docking.LIS_pep(my_data)`: Local Interaction Score derived from the PAE matrix.\n", "- `docking.ipTM_d0_lig(my_data)`: ipTM-like recomputed based on PAE matrix.\n", "- `docking.plddt_contact_pep(my_data)`: mean pLDDT over ligand residues that are in contact with the receptor.\n", "- `docking.plddt_pep(my_data)`: mean pLDDT over all ligand residues.\n", "\n", "All these functions add new columns (named after the corresponding score) to `my_data.df`, which are then correlated with experimental DockQ to identify the most informative AF-derived metrics." 
] }, { "cell_type": "code", "execution_count": null, "id": "cb88b3d6-cd1b-4d3c-b1bd-62d7de0a66cb", "metadata": {}, "outputs": [], "source": [ "docking.LIS_pep(my_data)\n", "docking.cLIS_lig(my_data)\n", "docking.iLIS_lig(my_data)\n", "\n", "docking.pae_pep(my_data)\n", "docking.pae_contact_pep(my_data)\n", "docking.pdockq2_lig(my_data)\n", "docking.ipSAE_lig(my_data)\n", "\n", "docking.ipTM_d0_lig(my_data)\n", "docking.plddt_contact_pep(my_data)\n", "docking.plddt_pep(my_data)" ] }, { "cell_type": "markdown", "id": "a9019b68", "metadata": {}, "source": [ "### Task 5 – Which AF metrics best predict DockQ?\n", "Using your summary table:\n", "- Identify the 2–3 metrics with the highest R² with DockQ.\n", "- Briefly discuss why these particular metrics might correlate well with experimental interface quality." ] }, { "cell_type": "code", "execution_count": null, "id": "a457da52-f6a0-483e-bf51-02b48beb9cb2", "metadata": {}, "outputs": [], "source": [ "my_data.df.columns" ] }, { "cell_type": "code", "execution_count": null, "id": "8a4c789b", "metadata": {}, "outputs": [], "source": [ "# Task 5 – Screen AF-derived metrics\n", "# For each metric below, compute R^2 with DockQ\n", "# (using linear regression or correlation) and build a summary table\n", "# sorted from best to worst predictor of DockQ.\n", "\n", "metrics = ['ranking_confidence', 'ipTM', 'pTM',\n", " 'ipSAE_lig', 'pdockq2_lig', 'ipTM_d0_lig',\n", " 'PAE_pep_rec', 'PAE_rec_pep', 'PAE_contact_pep_rec', 'PAE_contact_rec_pep',\n", " 'plddt_contact_lig', 'plddt_pep', 'LIS_rec_pep', 'LIS_pep_rec',\n", " \"cLIS_rec_lig\", \"cLIS_lig_rec\", \"iLIS_rec_lig\", \"iLIS_lig_rec\",\n", " 'ipTM_d0_rec_lig', 'ipTM_d0_lig_rec']\n", "\n", "# your code here" ] }, { "cell_type": "code", "execution_count": null, "id": "7be24fbb-e27a-4731-bc8d-659806975f33", "metadata": {}, "outputs": [], "source": [ "ref_score = 'dockq'\n", "#ref_score = 'LRMS'\n", "\n", "for score in metrics:\n", "\n", " slope, intercept, r_value, p_value, std_err = scipy.stats.linregress(my_data.df[ref_score], my_data.df[score])\n", "\n", " print(f\" {score:20} r={r_value:6.3f}\")" ] }, { "cell_type": "code", "execution_count": null, "id": "1f4f5ca0-49e1-4584-a166-ae3ad0481681", "metadata": {}, "outputs": [], "source": [ "ref_score = 'dockq'\n", "#ref_score = 'LRMS'\n", "\n", "for score in metrics:\n", "\n", " slope, intercept, r_value, p_value, std_err = scipy.stats.linregress(my_data.df[ref_score], my_data.df[score])\n", "\n", " plt.figure()\n", " sns.scatterplot(my_data.df, y=ref_score, x=score, hue=\"weight\")\n", " plt.title(f'{score} r = {r_value:.3f}' )\n", " plt.show()" ] }, { "cell_type": "markdown", "id": "d773ff89", "metadata": {}, "source": [ "## Model clustering with `af_analysis.clustering`\n", "\n", "Here we use `clustering.hierarchical` from `af_analysis.clustering` on a subset of models:\n", "\n", "- We first remove AF3 models (`no_af3_df = my_data.df[my_data.df['weight'] != 'af3']`) because AF3 uses a different output model with different atom number, which complicates direct RMSD comparisons.\n", "\n", "- `clustering.hierarchical(no_af3_df, threshold=1, align_selection=\"backbone and chainID A B\", rmsd_scale=False)` will do:\n", "\n", " - Aligns models on the selected backbone atoms of chains A and B.\n", " - Computes pairwise RMSD distances between models (chain C).\n", " - Performs hierarchical clustering with a cut at 1 Å (via `threshold`).\n", " - Projects models into 2D MDS space and adds columns such as `'MDS 1'`, `'MDS 2'` and `'cluster'` to the DataFrame.\n", 
"\n", "These clustering results are later visualised with scatter plots and boxplots to relate cluster identity to DockQ/LRMS and to identify structurally distinct solution families." ] }, { "cell_type": "code", "execution_count": null, "id": "cf422bed-833a-42e4-9bce-a69e8c0ebe71", "metadata": {}, "outputs": [], "source": [ "## Atom number in af3, cf, and af2 is different\n", "# Here we are analysing only cf and af2 results\n", "\n", "no_af3_df = my_data.df[my_data.df['weight'] != 'af3']\n", "\n", "# clustering.hierarchical(no_af3_df , threshold=1, align_selection={\"T231\": \"backbone and chainID A B\"}, rmsd_scale=False)\n", "clustering.hierarchical(no_af3_df , threshold=1, align_selection=\"backbone and chainID A B\", rmsd_scale=False)\n" ] }, { "cell_type": "markdown", "id": "cb031bcf-4c7d-48e6-943b-00d9271439e1", "metadata": {}, "source": [ "### Task 6 – Visualise clusters in MDS space\n", "\n", "Make a scatterplot of 'MDS 1' vs 'MDS 2' for 'no_af3_df',\n", "coloured by 'cluster' and sized or coloured by DockQ." ] }, { "cell_type": "code", "execution_count": null, "id": "62f7e2f2-149e-4ee7-956d-8d55231cb254", "metadata": {}, "outputs": [], "source": [ "# your code here" ] }, { "cell_type": "code", "execution_count": null, "id": "0d2221db-8ce1-4cb6-968e-29efcf3b3826", "metadata": {}, "outputs": [], "source": [ "plt.figure()\n", "sns.scatterplot(no_af3_df, x='MDS 1', y='MDS 2', hue=\"cluster\")" ] }, { "cell_type": "code", "execution_count": null, "id": "10824a3e-538a-4b96-b2da-03e009d17c7d", "metadata": {}, "outputs": [], "source": [ "plt.figure()\n", "sns.boxplot(no_af3_df, x='cluster', y='dockq', hue=\"cluster\")" ] }, { "cell_type": "code", "execution_count": null, "id": "d47fb2e3-8fb9-45c1-894e-4858409d8b26", "metadata": {}, "outputs": [], "source": [ "plt.figure()\n", "sns.boxplot(no_af3_df, x='cluster', y='ipTM_d0_lig_rec', hue=\"cluster\")" ] }, { "cell_type": "markdown", "id": "e09b13d5", "metadata": {}, "source": [ "### Task 7 – Interpret the clusters\n", "After making the plots:\n", "- Do some clusters correspond to clearly higher DockQ or lower LRMS?\n", "- Are certain methods (`weight`) enriched in specific clusters?\n", "Write a short paragraph (4–5 sentences) interpreting your results." ] }, { "cell_type": "code", "execution_count": null, "id": "1c5cf948", "metadata": {}, "outputs": [], "source": [ "# Task 7 – Compare DockQ and LRMS across clusters\n", "# Recreate boxplots of DockQ and LRMS by 'cluster' for 'no_af3_df'.\n", "\n", "# your code here" ] }, { "cell_type": "code", "execution_count": null, "id": "5d3e9454-9174-42c6-9768-9b7231adbd0a", "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "id": "7dd875eb", "metadata": {}, "outputs": [], "source": [ "# Task 6 – Identify and describe the best model\n", "# 1) Find the index of the model with maximal DockQ\n", "# 2) Print its method ('weight') and key scores\n", "# (DockQ, LRMS, ipTM, pdockq2_lig, ipSAE_lig)\n", "\n", "# your code here" ] }, { "cell_type": "code", "execution_count": null, "id": "216ee698-dfc0-48c5-8c5c-3d466af31687", "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "id": "1bf4f4c6", "metadata": {}, "source": [ "### Task 8 – Which method performs best?\n", "Using your boxplots and summary statistics:\n", "- Which method has the best average DockQ?\n", "- Does the best average method also show the lowest LRMS?\n", "- Comment on the variability (spread) of scores for each method." 
] }, { "cell_type": "code", "execution_count": null, "id": "fda82780", "metadata": {}, "outputs": [], "source": [ "# Task 8 – Compare methods (AF3 / ColabFold / AF2)\n", "# a) Make boxplots of DockQ by 'weight'\n", "# b) Make boxplots of LRMS by 'weight'\n", "# c) Optionally, build a small summary table of mean/median DockQ and LRMS per method.\n", "\n", "# your code here" ] }, { "cell_type": "code", "execution_count": null, "id": "dec04edd-4681-4c6c-8ee5-092f985a98fc", "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "id": "97543f99-cb7a-4c53-9285-708051748b28", "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "id": "27f6c729-0779-4256-8ddf-7303ee67d96a", "metadata": {}, "source": [ "### Task 9 – Extension to another complex\n", "\n", "Re-run (part of) this workflow on 'H1140/' \n", "adapting the chain IDs for DockQ and comparing the behaviour of\n", "metrics between complexes.\n", "\n", "Create a new notebook based on this one." ] }, { "cell_type": "code", "execution_count": null, "id": "9abd9b2c-9798-479f-8b80-6b8726da45b4", "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.12.8" } }, "nbformat": 4, "nbformat_minor": 5 }