Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2023.04.20.537615v1?rss=1
Authors: Chasles, S., Major, F.
Abstract: Prediction of RNA secondary structure from single sequences still needs substantial improvements. The application of machine learning (ML) to this problem has become increasingly popular. However, ML algorithms are prone to overfitting, limiting the ability to learn more about the inherent mechanisms governing RNA folding. It is natural to use high-capacity models when solving such a difficult task, but poor generalization is expected when too few examples are available. Here, we report the relation between capacity and performance on a simpler related problem: determining whether two sequences are fully complementary. Our analysis focused on the impact of model architecture and capacity as well as dataset size and nature on classification accuracy. We observed that low-capacity models are better suited for learning with mislabelled training examples, while large capacities improve the ability to generalize to structurally dissimilar data. Given a more complex task like RNA folding, it comes as no surprise that the scarcity of usable examples hinders the applicability of machine learning techniques to this field.
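The toy task the authors study, deciding whether two RNA sequences are fully complementary, has a simple exact solution. As a minimal sketch (function and variable names here are illustrative, not from the paper): two sequences of equal length are fully complementary when each base of one pairs canonically (A-U, C-G) with the corresponding base of the other read in the antiparallel direction.

```python
# Illustrative sketch of the paper's binary classification target:
# are two RNA sequences fully (Watson-Crick) complementary?
PAIRS = {"A": "U", "U": "A", "C": "G", "G": "C"}

def fully_complementary(seq1: str, seq2: str) -> bool:
    """True if seq2, read in the antiparallel direction,
    pairs base-for-base with seq1 via canonical A-U / C-G pairs."""
    if len(seq1) != len(seq2):
        return False
    return all(PAIRS.get(a) == b for a, b in zip(seq1, reversed(seq2)))

print(fully_complementary("GAUC", "GAUC"))  # True: G-C, A-U, U-A, C-G
print(fully_complementary("GAUC", "GAUU"))  # False: G cannot pair with U here
```

The interest of the paper is not this rule itself but whether ML models of varying capacity can recover it from (possibly mislabelled) examples.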
Copyright belongs to the original authors. Visit the link for more info.
Podcast created by Paper Player, LLC