Deep dive into uncertainty quantification methods in machine learning, covering proper estimation of prediction uncertainty, the limitations of point predictions, and implementation techniques.
A comprehensive exploration of reinforcement learning's impact on modern language models, covering RLHF, DPO, and future perspectives.
Deep dive into the internals of the transformer architecture, including a step-by-step implementation and optimization techniques.
Deep dive into modern computer vision model architectures, from data preparation to deployment, with hands-on implementation of vision transformers.
Deep dive into synthetic data generation for LLM training and the growing importance of small language models in practical applications.
The talk explored the core principles of LLMs, covering prompt engineering, RAG, fine-tuning, system design, and methods for evaluating performance, reliability, and ethics.
Exploration of operational research, optimization solvers, and Data Envelopment Analysis for optimizing complex decision-making.
Discussion on adapting open-source LLMs to the Georgian language through tokenizer transfer and continual pretraining.
Review of causal inference methods that go beyond conventional correlation-based techniques to answer 'what if' questions.
Discussion of time series forecasting algorithms, benchmarks, common pitfalls, and best practices.