
Train a machine learning model on a collection#

Here, we iterate over the artifacts within a collection to train a machine learning model at scale.

import lamindb as ln
💡 connected lamindb: testuser1/test-scrna
ln.settings.transform.stem_uid = "Qr1kIHvK506r"
ln.settings.transform.version = "1"
ln.track()
💡 notebook imports: lamindb==0.71.0 torch==2.3.0
💡 saved: Transform(uid='Qr1kIHvK506r5zKv', name='Train a machine learning model on a collection', key='scrna5', version='1', type='notebook', updated_at=2024-05-06 19:34:55 UTC, created_by_id=1)
💡 saved: Run(uid='8zzSHSjnhCVbs7tRGuZ1', transform_id=5, created_by_id=1)

Query our collection:

collection = ln.Collection.filter(
    name="My versioned scRNA-seq collection", version="2"
).one()
collection.describe()
Collection(uid='0DOiM9qW27jgqmb5UCqF', name='My versioned scRNA-seq collection', version='2', hash='HNR3VFV60_yqRnUka11E', visibility=1, updated_at=2024-05-06 19:34:29 UTC)

Provenance:
  📎 transform: Transform(uid='ManDYgmftZ8C5zKv', name='Standardize and append a batch of data', key='scrna2', version='1', type='notebook')
  📎 run: Run(uid='z81MRZ4LN68TXmVRvvDI', started_at=2024-05-06 19:33:59 UTC, is_consecutive=True)
  📎 created_by: User(uid='DzTjkKse', handle='testuser1', name='Test User1')
  📎 input_of (core.Run): ['2024-05-06 19:34:41 UTC']
Features:
  var: FeatureSet(uid='rtDhckqSahZZL3Tw84qW', n=36508, type='number', registry='bionty.Gene')
    'SHROOM2', 'LINC01589', 'IGLVI-70', 'LINC01635', 'ALDH9A1', 'RIPK3', 'TRAF1', 'LINC00690', 'PHKG2', 'ATF7', 'TBC1D22A-AS1', 'TTC6', 'OR4E2', 'CEP97', 'GABRA1', 'LINC02821', 'TNFRSF11B', 'RCAN2', 'PALM3', 'NAV1', ...
  obs: FeatureSet(uid='fsbbypYLWPZMyFa8AQE1', n=4, registry='core.Feature')
    🔗 donor (12, core.ULabel): 'A31', 'D496', 'A37', 'A29', 'A36', '637C', '640C', 'A52', 'D503', '582C', ...
    🔗 tissue (17, bionty.Tissue): 'ileum', 'mesenteric lymph node', 'liver', 'jejunal epithelium', 'blood', 'caecum', 'thoracic lymph node', 'thymus', 'sigmoid colon', 'skeletal muscle tissue', ...
    🔗 cell_type (40, bionty.CellType): 'megakaryocyte', 'gamma-delta T cell', 'alpha-beta T cell', 'CD14-positive, CD16-negative classical monocyte', 'plasma cell', 'memory B cell', 'non-classical monocyte', 'classical monocyte', 'dendritic cell', 'effector memory CD8-positive, alpha-beta T cell, terminally differentiated', ...
    🔗 assay (3, bionty.ExperimentalFactor): '10x 5' v2', '10x 3' v3', '10x 5' v1'
Labels:
  📎 tissues (17, bionty.Tissue): 'ileum', 'mesenteric lymph node', 'liver', 'jejunal epithelium', 'blood', 'caecum', 'thoracic lymph node', 'thymus', 'sigmoid colon', 'skeletal muscle tissue', ...
  📎 cell_types (40, bionty.CellType): 'megakaryocyte', 'gamma-delta T cell', 'alpha-beta T cell', 'CD14-positive, CD16-negative classical monocyte', 'plasma cell', 'memory B cell', 'non-classical monocyte', 'classical monocyte', 'dendritic cell', 'effector memory CD8-positive, alpha-beta T cell, terminally differentiated', ...
  📎 experimental_factors (3, bionty.ExperimentalFactor): '10x 5' v2', '10x 3' v3', '10x 5' v1'
  📎 ulabels (12, core.ULabel): 'A31', 'D496', 'A37', 'A29', 'A36', '637C', '640C', 'A52', 'D503', '582C', ...
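
Before building a dataset, it can help to see which artifacts back the collection. A quick sketch, assuming the linked artifacts are exposed via the collection's artifacts relation in this version of lamindb:

# list the artifacts that make up the collection (assumes the .artifacts accessor)
collection.artifacts.df()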

Create a map-style dataset#

Let us create a map-style dataset using mapped(): a MappedCollection. This is what, for example, the PyTorch DataLoader expects as input.

Under the hood, it performs a virtual join of the features of the underlying AnnData objects and thus allows working with very large collections.

You can either perform a virtual inner join:

with collection.mapped(obs_keys=["cell_type"], join="inner") as dataset:
    print(len(dataset.var_joint))
749

Or a virtual outer join:

dataset = collection.mapped(obs_keys=["cell_type"], join="outer")
len(dataset.var_joint)
36508
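
The numbers make the difference concrete: join="inner" restricts to the 749 genes shared by all artifacts in the collection, while join="outer" takes the union of all 36,508 genes, with values for genes absent from a given artifact filled in (typically with zeros).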

This is compatible with a PyTorch DataLoader because it implements __getitem__ over a list of backed AnnData objects. The cell at index 5 in the collection can be accessed like:

dataset[5]
{'X': array([ 0.   ,  0.   ,  0.   , ...,  0.   ,  0.   , -0.456], dtype=float32),
 '_store_idx': 0,
 'cell_type': 39}

The labels are encoded into integers:

dataset.encoders
{'cell_type': {'megakaryocyte': 0,
  'gamma-delta T cell': 1,
  'alpha-beta T cell': 2,
  'CD14-positive, CD16-negative classical monocyte': 3,
  'plasma cell': 4,
  'memory B cell': 5,
  'non-classical monocyte': 6,
  'classical monocyte': 7,
  'dendritic cell': 8,
  'effector memory CD8-positive, alpha-beta T cell, terminally differentiated': 9,
  'CD8-positive, alpha-beta memory T cell, CD45RO-positive': 10,
  'CD16-negative, CD56-bright natural killer cell, human': 11,
  'macrophage': 12,
  'naive B cell': 13,
  'group 3 innate lymphoid cell': 14,
  'mucosal invariant T cell': 15,
  'lymphocyte': 16,
  'effector memory CD4-positive, alpha-beta T cell, terminally differentiated': 17,
  'germinal center B cell': 18,
  'effector memory CD4-positive, alpha-beta T cell': 19,
  'CD8-positive, CD25-positive, alpha-beta regulatory T cell': 20,
  'CD16-positive, CD56-dim natural killer cell, human': 21,
  'conventional dendritic cell': 22,
  'naive thymus-derived CD4-positive, alpha-beta T cell': 23,
  'CD8-positive, alpha-beta memory T cell': 24,
  'dendritic cell, human': 25,
  'alveolar macrophage': 26,
  'B cell, CD19-positive': 27,
  'CD4-positive, alpha-beta T cell': 28,
  'T follicular helper cell': 29,
  'naive thymus-derived CD8-positive, alpha-beta T cell': 30,
  'animal cell': 31,
  'progenitor cell': 32,
  'CD38-positive naive B cell': 33,
  'CD4-positive helper T cell': 34,
  'mast cell': 35,
  'plasmablast': 36,
  'plasmacytoid dendritic cell': 37,
  'regulatory T cell': 38,
  'cytotoxic T cell': 39}}
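
To map the integer labels back to names, invert the encoder dictionary; a minimal sketch in plain Python:

# build a decoder from the cell_type encoder shown above
decoder = {i: label for label, i in dataset.encoders["cell_type"].items()}
decoder[dataset[5]["cell_type"]]  # -> 'cytotoxic T cell'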

Create a PyTorch DataLoader#

Let us use a weighted sampler:

from torch.utils.data import DataLoader, WeightedRandomSampler

# the label_key used for weights doesn't have to be among the obs_keys passed to mapped()
sampler = WeightedRandomSampler(
    weights=dataset.get_label_weights("cell_type"), num_samples=len(dataset)
)
dataloader = DataLoader(dataset, batch_size=128, sampler=sampler)
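
get_label_weights() returns one weight per observation, so that cells from rare cell types are drawn more often. A quick sanity check, assuming the return value is an array-like with one entry per cell:

weights = dataset.get_label_weights("cell_type")
assert len(weights) == len(dataset)  # one weight per observation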

We can now iterate through the data loader:

for batch in dataloader:
    pass
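
PyTorch's default collate function stacks the per-cell dictionaries into a dictionary of batched tensors with the same keys as a single item. A sketch of inspecting a batch (shapes are illustrative; the final batch may be smaller than 128):

print(batch["X"].shape)          # e.g. torch.Size([128, 36508]) for the outer join
print(batch["cell_type"].shape)  # e.g. torch.Size([128])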

Close the connections in MappedCollection:

dataset.close()
In practice, use a context manager:
with collection.mapped(obs_keys=["cell_type"]) as dataset:
    sampler = WeightedRandomSampler(
        weights=dataset.get_label_weights("cell_type"), num_samples=len(dataset)
    )
    dataloader = DataLoader(dataset, batch_size=128, sampler=sampler)
    for batch in dataloader:
        pass
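
Putting it together: a minimal end-to-end training sketch. The model is an assumption for illustration (a single linear layer trained with cross-entropy on the encoded cell_type labels), not something prescribed by lamindb:

import torch
from torch import nn

with collection.mapped(obs_keys=["cell_type"], join="inner") as dataset:
    # hypothetical toy classifier: one linear layer over the joint gene space
    model = nn.Linear(len(dataset.var_joint), len(dataset.encoders["cell_type"]))
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    sampler = WeightedRandomSampler(
        weights=dataset.get_label_weights("cell_type"), num_samples=len(dataset)
    )
    dataloader = DataLoader(dataset, batch_size=128, sampler=sampler)

    for batch in dataloader:  # one epoch
        x = batch["X"].float()         # (batch, n_genes)
        y = batch["cell_type"].long()  # encoded integer labels
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()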