Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2022.10.23.513389v1?rss=1
Authors: Zhao, B., Wei, D., Xiong, Y., Ding, J.
Abstract: The ever-increasing availability of single-cell transcriptomic data offers unrivaled opportunities to profile cellular states in diverse biological processes at high resolution, driving substantial advances in our understanding of the complex mechanisms underlying these processes. Owing to protocol and technology constraints, single-cell measurements within a study are often performed in batches, which unavoidably introduces biological and technical differences among measurements from the same study. This complicates the joint analysis of single-cell data from different batches, particularly when the measurements were assayed with different technologies. Several methods have recently been developed to remove such batch effects; however, challenges remain, including but not limited to the risk of over-correction, the need to assume a particular gene expression distribution, and expensive computation. To mitigate these limitations, we develop scCobra, a novel deep learning method that combines contrastive learning, domain adaptation, and generative adversarial networks to remove batch effects in single-cell RNA-seq data. A contrastive learning network learns latent embeddings that represent the cells, domain adaptation batch-normalizes the latent embeddings of cells from distinct batches, and a generative adversarial network further optimizes the blending of batches. The method requires no prior assumption about the gene expression distribution. We applied scCobra to one simulated and two real single-cell datasets with significant experimental differences. scCobra outperforms the benchmarked methods in batch correction and biological conservation, and its running efficiency is among the best.
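The paper itself provides no code here, but the abstract's architecture description (contrastive embeddings, per-batch normalization of the latent space, and an adversarial blending step) can be illustrated with a minimal, hypothetical PyTorch sketch. Everything below is an assumption for illustration, not the authors' implementation: the network sizes, the dropout-based augmentation, the NT-Xent contrastive loss, the per-batch standardization used as a stand-in for the domain-adaptation step, and the 0.1 adversarial loss weight are all invented for this example.

```python
# Hypothetical sketch of an scCobra-style training step (not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Maps gene-expression vectors to latent cell embeddings."""
    def __init__(self, n_genes, latent_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_genes, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

def nt_xent(z1, z2, temperature=0.5):
    """Contrastive (NT-Xent) loss between two augmented views of the same cells."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)                        # (2N, d)
    sim = z @ z.t() / temperature                         # pairwise cosine similarity
    mask = torch.eye(sim.size(0), dtype=torch.bool)
    sim = sim.masked_fill(mask, float('-inf'))            # exclude self-similarity
    n = z1.size(0)
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

class Discriminator(nn.Module):
    """Adversary that tries to predict each cell's batch of origin from its embedding."""
    def __init__(self, latent_dim, n_batches):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.ReLU(),
            nn.Linear(64, n_batches),
        )

    def forward(self, z):
        return self.net(z)

# --- toy training step: random data stands in for normalized scRNA-seq counts ---
n_cells, n_genes, n_batches = 128, 2000, 2
x = torch.rand(n_cells, n_genes)
batch_labels = torch.randint(0, n_batches, (n_cells,))

enc, disc = Encoder(n_genes), Discriminator(32, n_batches)
opt_enc = torch.optim.Adam(enc.parameters(), lr=1e-3)
opt_disc = torch.optim.Adam(disc.parameters(), lr=1e-3)

# Two stochastic "views" of each cell via dropout augmentation.
z1 = enc(F.dropout(x, p=0.2))
z2 = enc(F.dropout(x, p=0.2))

# Per-batch standardization of embeddings: a simple stand-in for domain adaptation.
z = z1.clone()
for b in range(n_batches):
    idx = batch_labels == b
    z[idx] = (z[idx] - z[idx].mean(0)) / (z[idx].std(0) + 1e-6)

# 1) Update the discriminator to classify the batch of origin.
d_loss = F.cross_entropy(disc(z.detach()), batch_labels)
opt_disc.zero_grad(); d_loss.backward(); opt_disc.step()

# 2) Update the encoder: contrastive loss plus an adversarial term that
#    rewards embeddings the discriminator cannot tell apart by batch.
adv_loss = -F.cross_entropy(disc(z), batch_labels)
loss = nt_xent(z1, z2) + 0.1 * adv_loss
opt_enc.zero_grad(); loss.backward(); opt_enc.step()
```

The per-batch standardization above is only one plausible reading of "batch-normalize the latent embeddings"; a domain-specific batch-normalization layer or a gradient-reversal layer would be equally standard choices for the domain-adaptation step.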
Copyright belongs to the original authors. Visit the link for more info.
Podcast created by Paper Player, LLC