Hey there! I am Yerzhaisang Taskali, an ML Researcher with experience in applying neural networks to improve the performance of Relational Database Management Systems.
Projects
About
Master's degree from Skoltech
I am interested in Kaggle competitions, and I enjoy applying machine learning algorithms to problems in fields where ML is not yet standard practice.
Adaptive Query Optimization using Neural Networks
Cardinality estimation is an essential open problem in database query optimization. For complex queries, cardinality estimation errors can lead to the selection of inefficient execution plans and thereby increase query execution time by up to a hundred times. In this work, we propose a new neural-network-based approach and introduce the design of a new common feature space. The experimental evaluation shows that neural networks generalize better than previous approaches and thus find more efficient execution plans. Moreover, our approach helps overcome a further limitation related to memory consumption.
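To give a flavor of the idea, here is a minimal sketch of a learned cardinality estimator. This is not the project's exact architecture: it assumes each query has already been encoded as a fixed-size feature vector, and the model simply regresses log-cardinality, which keeps the target range manageable across many orders of magnitude.

```python
import torch
import torch.nn as nn

class CardinalityEstimator(nn.Module):
    """Illustrative MLP that maps a query feature vector to log(cardinality)."""

    def __init__(self, feature_dim: int, hidden_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feature_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),  # predicts log(cardinality)
        )

    def forward(self, query_features: torch.Tensor) -> torch.Tensor:
        return self.net(query_features).squeeze(-1)

# Hypothetical training step: in practice, features and true cardinalities
# would come from executed query plans; here they are random placeholders.
model = CardinalityEstimator(feature_dim=64)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
features = torch.randn(32, 64)
true_card = torch.randint(1, 10**6, (32,)).float()
optimizer.zero_grad()
loss = nn.functional.mse_loss(model(features), torch.log(true_card))
loss.backward()
optimizer.step()
```

Regressing in log space is the standard trick here: an absolute error of 100 rows matters enormously for a 10-row predicate but not at all for a 10-million-row scan.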
Exploring Autoencoders and Contrastive Learning in application to Deep Reinforcement Learning
Deep Reinforcement Learning combines deep neural networks with policy-based reinforcement learning algorithms to learn the optimal set of actions an agent takes in a virtual environment to attain the maximum possible reward. Training a reinforcement learning agent on high-dimensional images is time-inefficient and often requires a huge number of actions to learn the optimal policy. In our work, we aimed to speed up this process by learning from hidden representations obtained by training several autoencoders and a contrastive learning architecture on raw images collected from the virtual environment. We compare various approaches to learning efficient data encodings by relating the maximum reward achieved to the number of actions taken by the agent.
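A minimal sketch of the representation-learning idea follows (illustrative, not the project's exact models): a convolutional autoencoder compresses raw environment frames into a low-dimensional latent vector, and the agent's policy is then trained on those latents instead of raw pixels. Frame size and layer widths here are arbitrary assumptions.

```python
import torch
import torch.nn as nn

class FrameAutoencoder(nn.Module):
    """Compresses 84x84 RGB frames to a latent vector and reconstructs them."""

    def __init__(self, latent_dim: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=4, stride=2, padding=1),   # 84 -> 42
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2, padding=1),  # 42 -> 21
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 21 * 21, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64 * 21 * 21),
            nn.Unflatten(1, (64, 21, 21)),
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),  # 21 -> 42
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, kernel_size=4, stride=2, padding=1),   # 42 -> 84
        )

    def forward(self, frames: torch.Tensor):
        latent = self.encoder(frames)
        return self.decoder(latent), latent

# Reconstruction loss pushes the encoder to keep the information needed to
# rebuild the frame; the policy network would consume `latent`, not pixels.
model = FrameAutoencoder()
frames = torch.rand(8, 3, 84, 84)  # placeholder batch of environment frames
recon, latent = model(frames)
loss = nn.functional.mse_loss(recon, frames)
```

A contrastive encoder would replace the reconstruction loss with an objective that pulls augmented views of the same frame together in latent space; the downstream setup for the agent stays the same.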
Review of "What's Hidden in a Randomly Weighted Neural Network?"
We present an overview of the article by Vivek Ramanujan et al., "What's Hidden in a Randomly Weighted Neural Network?". The article describes an algorithm for finding subnetworks that perform well with their initial weights, without any training. We then run experiments on architectures (ResNet-34, ResNet-18, U-Net, VGG-11) and datasets not covered in the article. We empirically show that as randomly weighted neural networks with fixed weights become wider and deeper, the results of these "untrained subnetworks" approach those of networks with trained weights.
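The core mechanism in the paper is often called edge-popup. Below is a simplified sketch of the idea, not the authors' exact implementation: the weights stay frozen at their random initialization, and only a per-weight "score" is trained; the forward pass keeps the highest-scoring fraction of weights, and gradients reach the scores through a straight-through estimator.

```python
import torch
import torch.nn as nn

class TopKMask(torch.autograd.Function):
    """Binary mask keeping the top-k scores; straight-through backward."""

    @staticmethod
    def forward(ctx, scores, keep_ratio):
        k = max(1, int(keep_ratio * scores.numel()))
        # Threshold = (n - k + 1)-th smallest value, i.e. the k-th largest.
        threshold = scores.flatten().kthvalue(scores.numel() - k + 1).values
        return (scores >= threshold).float()

    @staticmethod
    def backward(ctx, grad_output):
        # Straight-through: pass gradients to the scores unchanged.
        return grad_output, None

class SubnetLinear(nn.Module):
    """Linear layer with frozen random weights; only the scores are learned."""

    def __init__(self, in_features, out_features, keep_ratio=0.5):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features),
                                   requires_grad=False)  # frozen at init
        self.scores = nn.Parameter(torch.randn(out_features, in_features))
        self.keep_ratio = keep_ratio

    def forward(self, x):
        mask = TopKMask.apply(self.scores.abs(), self.keep_ratio)
        return nn.functional.linear(x, self.weight * mask)

layer = SubnetLinear(784, 256)
out = layer(torch.randn(16, 784))  # only layer.scores receives gradients
```

Training then proceeds with an ordinary optimizer over the scores alone, so the "learning" consists entirely of choosing which random weights to keep.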