Master Deep Learning and Generative AI with PyTorch – From Beginner to AI Researcher
About the Course
This course is a complete, end-to-end journey into Deep Learning, Generative AI, and Large Language Models using PyTorch.
You start from the foundations of mathematics and neural networks and progress all the way to research-driven implementations of Transformers, Vision Transformers, Generative Models, NLP systems, and Computer Vision architectures.
Every concept is explained from first principles and reinforced with hands-on PyTorch coding, enabling you to build, understand, and customize AI models confidently.
This course also covers advanced topics like BERT, GPT, LLaMA, Swin Transformers, Stable Diffusion, and multimodal AI systems.
What Will You Learn?
- Master PyTorch from beginner level to AI research-grade development
- Understand deep learning from fundamentals to complex architectures
- Implement neural networks from scratch without black-box shortcuts
- Build real-world projects in regression, classification, NLP, CV, and generative AI
- Learn all activation functions, loss functions, and optimizers with PyTorch
- Develop strong intuition for backpropagation, gradients, and training dynamics
- Implement Transformer-based architectures and Vision Transformers from research papers
- Build and understand large language models and multimodal AI systems
- Work with text, image, and generative models used in cutting-edge AI research
- Create portfolio-ready projects aligned with industry and research standards
Course Content
Programming & Data Science Fundamentals for AI
This section covers all the essential tools you need to start your journey in AI and Deep Learning. You’ll build a strong foundation in Python programming and core data science libraries including NumPy, Pandas, Matplotlib, and Seaborn.
You’ll learn how to write clean Python code, work with arrays and datasets, perform data analysis, and visualize results effectively—skills that are mandatory before moving into machine learning and deep learning with PyTorch.
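To give you a feel for the tooling, here is a tiny illustrative sketch (synthetic data, not a course dataset) that combines NumPy, Pandas, and Matplotlib:

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical data: 100 noisy (x, y) pairs
rng = np.random.default_rng(seed=0)
x = rng.uniform(0, 10, size=100)
y = 2.5 * x + rng.normal(0, 2, size=100)

df = pd.DataFrame({"x": x, "y": y})
print(df.describe())              # quick statistical summary with Pandas

plt.scatter(df["x"], df["y"], s=10)
plt.xlabel("x")
plt.ylabel("y")
plt.title("Synthetic data for a first regression exercise")
plt.show()
```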
- Intro (03:00)
- Python Full Course for Beginners (16:43:34)
- Python NumPy Full Tutorial for Beginners (04:33:00)
- Pandas Full Course with Practical (01:42:05)
- Matplotlib Full Tutorial (04:11:06)
- Python Seaborn Tutorial (03:58:27)
- Git Full Tutorial for Beginners (02:48:56)
- Git and GitHub Tutorial for Beginners (01:14:17)
Core Deep Learning Concepts – From Perceptron to Backpropagation
Dive into the most important foundations of deep learning. This section covers Perceptrons, Multi-Layer Perceptrons (MLPs), forward propagation, and backpropagation, giving you the practical understanding and skills to build neural networks from scratch. Each lecture is hands-on and explained step-by-step, so you can apply these concepts directly in PyTorch projects.
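As a preview of the hands-on style, the sketch below (a toy single-neuron example, not the lecture code) runs one forward pass, one backpropagation step, and one weight update in PyTorch:

```python
import torch

# A single perceptron: y_hat = sigmoid(w·x + b), trained on one example
x = torch.tensor([1.0, 2.0])
y = torch.tensor(1.0)
w = torch.tensor([0.1, -0.2], requires_grad=True)
b = torch.tensor(0.0, requires_grad=True)

y_hat = torch.sigmoid(w @ x + b)   # forward propagation
loss = (y_hat - y) ** 2            # squared-error loss
loss.backward()                    # backpropagation: dL/dw, dL/db

with torch.no_grad():              # one gradient-descent update step
    w -= 0.1 * w.grad
    b -= 0.1 * b.grad
print(loss.item(), w, b)
```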
- Perceptron Explained: The Foundation of Neural Networks (48:49)
- Perceptron Explained: The Foundation of Neural Networks 2 (18:42)
- MLP (Artificial Neural Network) Explained with Notation (25:00)
- Feed Forward Propagation in Neural Networks Explained (42:00)
- Feed Forward Propagation in Neural Networks Explained (27:00)
- Backpropagation in Neural Networks (52:00)
- Core Deep Learning Concepts – Quiz 1
PyTorch: From Fundamentals to AI Research-Level Development
Master PyTorch from first principles, starting with tensors and autograd, and progressing to research-grade model development. Learn how to write clean, scalable, and experiment-ready code used in real AI labs and research teams. By the end, you’ll be able to read papers, implement architectures, and run serious AI experiments in PyTorch.
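For a flavour of the fundamentals covered here, this minimal sketch (toy shapes chosen purely for illustration) touches tensor creation, reshaping, matrix multiplication, tensor attributes, and autograd:

```python
import torch

a = torch.arange(12, dtype=torch.float32).reshape(3, 4)   # creation + reshape
b = torch.randn(4, 2)                                      # random tensor
c = a @ b                                                  # matrix multiplication
print(c.shape, c.dtype, c.device)                          # tensor attributes

x = torch.tensor(2.0, requires_grad=True)                  # autograd in one line
y = x ** 3 + 2 * x
y.backward()
print(x.grad)   # dy/dx = 3*x^2 + 2 = 14
```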
- How to Set Up PyTorch in VS Code & Google Colab (2026 Ultimate Guide) (12:00)
- PyTorch Tensor Creation – torch.tensor(), torch.as_tensor(), torch.from_numpy() Complete Guide (33:56)
- PyTorch Tensor Initialization: torch.zeros(), torch.ones(), and torch.empty() Explained (23:00)
- torch.tensor() vs torch.Tensor() Explained – PyTorch Beginner Mistake! (07:00)
- Tensor Initialization – torch.full(), torch.eye(), torch.diag() Methods Explained (18:00)
- PyTorch Random Tensors Explained: torch.rand(), torch.randn(), torch.randint() Tutorial (11:00)
- Advanced Random Functions – torch.randperm(), torch.multinomial(), torch.normal() Guide (20:00)
- torch.zeros_like(), torch.ones_like(), torch.empty_like(), torch.full_like() Tutorial | Ali Hassan (05:00)
- Random Like Functions – torch.rand_like(), torch.randn_like(), torch.randint_like() | Ali Hassan (08:00)
- Sequence Generation – torch.arange(), torch.linspace(), torch.logspace() Tutorial | Ali Hassan (20:00)
- Tensor Device Creation – Creating Tensors on CPU vs GPU with .to(), .cuda(), .cpu() | Ali Hassan (12:00)
- Complex Tensors – torch.complex(), torch.polar(), torch.view_as_complex() Tutorial | Ali Hassan (27:00)
- Memory Layout – torch.contiguous(), torch.is_contiguous(), memory_format Parameter | Ali Hassan (27:00)
- Tensor Reshaping – .reshape() vs .view() vs .resize_() Complete Comparison | Ali Hassan (26:00)
- Tensor Attributes – .dtype, .device, .requires_grad Properties Explained | Ali Hassan (07:00)
- Advanced Tensor Creation in PyTorch – torch.stack(), torch.cat(), torch.chunk() | Ali Hassan (16:00)
- Tensor Cloning in PyTorch – .clone(), .detach(), .copy_() Explained | Ali Hassan (15:00)
- Tensor Conversion in PyTorch – .numpy(), .tolist(), .item() Methods | Ali Hassan (09:00)
- Tensor Validation – torch.is_tensor(), torch.is_storage(), torch.is_complex() Functions | Ali Hassan (09:00)
- Tensor Comparison in PyTorch – torch.equal(), torch.allclose(), torch.isclose() Explained | Ali Hassan (15:00)
- Tensor Info Functions in PyTorch – .shape, .size(), .dim(), .stride() Explained | Ali Hassan (08:00)
- Tensor Utilities – torch.numel(), torch.element_size(), torch.storage_offset() Guide | Ali Hassan (11:00)
- torch.set_default_dtype() – PyTorch’s Global Floating-Point Precision Setter | Ali Hassan (07:00)
- Tensor Memory – .storage(), .data_ptr(), .untyped_storage() Advanced Guide | Ali Hassan (12:00)
- Broadcasting Tensors – torch.broadcast_tensors(), torch.broadcast_to() Tutorial | Ali Hassan (28:00)
- Basic Arithmetic in PyTorch – .add(), .sub(), .mul(), .div() Operations | Ali Hassan (13:00)
- Advanced Arithmetic in PyTorch – .addcdiv(), .addcmul(), .lerp() Functions | Ali Hassan (14:00)
- Power Operations in PyTorch – .pow(), .sqrt(), .rsqrt(), .square() Complete Tutorial | Ali Hassan (10:00)
- Exponential Functions – .exp(), .exp2(), .expm1() Guide | Ali Hassan (13:00)
- Logarithmic Operations in PyTorch – .log(), .log10(), .log2(), .log1p(), .logaddexp(), .logaddexp2() (23:00)
- Trigonometric Functions in PyTorch – .sin(), .cos(), .tan(), .asin(), .acos(), .atan() | Ali Hassan (13:00)
- Hyperbolic Functions in PyTorch – .sinh(), .cosh(), .tanh(), .asinh(), .acosh(), .atanh() | Ali Hassan (11:00)
- Rounding Operations – .round(), .floor(), .ceil(), .trunc(), .frac() Guide | Ali Hassan (16:00)
- Sign and Absolute Functions in PyTorch – .abs(), .sign(), .signbit(), .copysign() | Ali Hassan (10:00)
- Clamping Operations – .clamp(), .clamp_min(), .clamp_max() Complete Guide | Ali Hassan (10:00)
- Remainder Operations in PyTorch – .remainder(), .fmod() Tutorial | Ali Hassan (14:00)
- Comparison Operations in PyTorch – .eq(), .ne(), .lt(), .le(), .gt(), .ge() Guide | Ali Hassan (12:00)
- Logical Operations in PyTorch – .logical_and(), .logical_or(), .logical_not(), .logical_xor() | Ali Hassan (18:00)
- Finite Checks in PyTorch – .isfinite(), .isinf(), .isnan(), .isneginf(), .isposinf() | Ali Hassan (07:00)
- Type Checking Functions in PyTorch – .is_complex(), .is_floating_point(), .is_signed() | Ali Hassan (08:00)
- Precision Control in PyTorch – .half(), .float(), .double(), Precision Conversion | Ali Hassan (04:00)
- Dimension Manipulation in PyTorch – .squeeze(), .unsqueeze(), .flatten() Tutorial | Ali Hassan (13:00)
- Tensor Transposition in PyTorch – .transpose(), .t(), .permute() Methods Explained | Ali Hassan (13:00)
- Tensor Expansion in PyTorch – .expand(), .expand_as(), .repeat() Complete Guide | Ali Hassan (10:00)
- Advanced Tensor Expansion in PyTorch – .repeat_interleave(), .tile(), Broadcasting | Ali Hassan (08:00)
- Tensor Slicing in PyTorch – .narrow(), .select(), .slice() Methods Tutorial | Ali Hassan (44:00)
- Tensor Splitting in PyTorch – torch.split(), torch.chunk(), torch.tensor_split() | Ali Hassan (17:00)
- Tensor Uniqueness in PyTorch – torch.unique(), torch.unique_consecutive() Tutorial | Ali Hassan (08:00)
- Tensor Unfolding and Folding in PyTorch – .unflatten(), torch.nn.Unfold(), torch.nn.Fold() | Ali Hassan (28:00)
- PyTorch Tensor Indexing – select(), take(), fill(), copy(), and put() Explained | Ali Hassan (15:00)
- Advanced Indexing in PyTorch – .gather(), .scatter(), .scatter_add(), torch.index_add() | Ali Hassan (17:00)
- Tensor Masking in PyTorch – .masked_select(), .masked_fill(), .masked_scatter() Guide | Ali Hassan (09:00)
- Conditional Operations in PyTorch – torch.where(), .masked_fill_() | Ali Hassan (10:00)
- Tensor Padding in PyTorch – torch.nn.functional.pad() and Padding Modes Explained | Ali Hassan (18:00)
- Tensor Sorting in PyTorch – torch.sort(), torch.argsort(), and torch.topk() Explained | Ali Hassan (09:00)
- Tensor Flipping in PyTorch – torch.flip(), torch.fliplr(), torch.flipud() Tutorial | Ali Hassan (05:00)
- Tensor Rolling in PyTorch – torch.roll(), torch.rot90() Explained | Ali Hassan (04:00)
- Sum Operations in PyTorch – .sum(), .nansum(), .cumsum(), .cumprod(), keepdim | Ali Hassan (29:00)
- PyTorch Mean Operations Explained – .mean(), .nanmean(), .median(), .nanmedian() | Ali Hassan (20:00)
- PyTorch Statistics Operations Explained – .std(), .var(), .std_mean(), .var_mean() | Ali Hassan (21:00)
- PyTorch Min/Max Operations – .min(), .max(), .aminmax(), torch.amin(), torch.amax() | Ali Hassan (10:00)
- PyTorch Index-Finding Operations – .argmin(), .argmax(), .mode(), torch.argwhere() | Ali Hassan (16:00)
- PyTorch Product Operations Explained – .prod() Function Tutorial | Ali Hassan (04:00)
- PyTorch Matrix Trace Operations – .trace() and .diagonal().sum() Explained | Ali Hassan (04:00)
- PyTorch Triangular Matrices Explained – .tril() and .triu() Tutorial | Ali Hassan (13:00)
- PyTorch Matrix Multiplication Explained – torch.mm(), torch.matmul(), torch.bmm(), @, .mv(), .ger() (58:00)
- PyTorch Linear Layers Explained – nn.Linear(), nn.LazyLinear(), nn.Bilinear() Tutorial | Ali Hassan (24:00)
- PyTorch Fundamentals Quiz
Hands-On Project: Linear Regression with PyTorch
Apply your deep learning knowledge with a real-world Linear Regression project using PyTorch. This section walks you through data preparation, model building, training, and evaluation, giving you practical experience and confidence to implement AI models from scratch.
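The project follows the standard PyTorch training recipe; a minimal sketch of that recipe on synthetic data (not the project's dataset) looks like this:

```python
import torch
import torch.nn as nn

# Synthetic data: y = 3x + 1 with noise, standing in for the project's real dataset
X = torch.linspace(0, 1, 100).unsqueeze(1)
y = 3 * X + 1 + 0.1 * torch.randn_like(X)

model = nn.Linear(1, 1)
loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for epoch in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

print(model.weight.item(), model.bias.item())   # should approach 3 and 1
```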
- Linear Regression with PyTorch: Hands-On Project for Beginners (02:57:00)
- Linear Regression Basics in PyTorch
Mathematics for Artificial Intelligence
Learn the essential mathematics that powers artificial intelligence. This section covers linear algebra, calculus, probability, statistics, and discrete math, giving you the foundation to understand algorithms, machine learning, and AI models. Each concept is explained practically, so you can apply it to real-world AI problems with confidence.
- Lecture 1: Math for AI (Artificial Intelligence) – Real, Rational, Complex, Logarithms, Exponents (03:49:59)
- Lecture 2: Math for AI (Artificial Intelligence) – Set Theory, Mathematical Logic (02:14:00)
- Lecture 3: Matrix Theory & Linear Systems | From Basics to AI Applications (04:00:00)
- Lecture 4: Sequences, Series, Factorials, Permutations, Combinations & Binomial Theorem Explained (03:57:00)
- Lecture 5: Complete Geometry and Mensuration: From Basic Points to 3D Shapes (02:32:00)
- Mathematics for AI
All Activation Functions in Deep Learning – Explained with PyTorch
Learn and implement all major activation functions used in deep learning. This section covers: Linear, Threshold, Sigmoid, Tanh, Softmax, ReLU, LeakyReLU, Parametric ReLU (PReLU), ELU, SELU, Swish, Softplus, and Mish, with step-by-step explanations and practical PyTorch coding. By the end, you’ll know when and how to use each function in your neural networks to build efficient, high-performing AI models.
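As a quick preview, the snippet below (illustrative values only) evaluates a few of these activations with PyTorch's built-ins and writes Swish by hand:

```python
import torch
import torch.nn.functional as F

x = torch.linspace(-3, 3, 7)

print(torch.sigmoid(x))        # sigmoid: 1 / (1 + exp(-x))
print(torch.tanh(x))           # tanh
print(F.relu(x))               # ReLU: max(0, x)
print(F.leaky_relu(x, 0.1))    # Leaky ReLU with slope 0.1 for x < 0
print(F.softmax(x, dim=0))     # softmax over a vector (sums to 1)

swish = x * torch.sigmoid(x)   # Swish written by hand: x * sigmoid(x)
print(torch.allclose(swish, F.silu(x)))   # matches PyTorch's built-in SiLU
```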
- Linear Activation Function Explained || Why Use a Linear Activation Function? (03:00)
- Coding the Linear Activation Function in PyTorch: Step-by-Step Guide (02:00)
- Threshold Activation Function Explained || Why Use a Threshold Activation Function? (05:00)
- Coding the Threshold Activation Function in PyTorch: Step-by-Step Guide (05:00)
- Sigmoid Activation Function Explained & Its Derivative || Why Use a Sigmoid Activation Function? (01:13:00)
- Coding the Sigmoid Activation Function in PyTorch: Step-by-Step Guide (02:00)
- Tanh Activation Function Explained & Its Derivative || Why Use a Tanh Activation Function? (15:00)
- Coding the Tanh Activation Function in PyTorch: Step-by-Step Guide (02:00)
- Softmax Activation Function Explained & Its Derivative || Why Use a Softmax Activation Function? (23:00)
- Coding the Softmax Activation Function in PyTorch: Step-by-Step Guide (04:00)
- Rectified Linear Unit (ReLU) Activation Function Explained & Its Derivative (29:00)
- Coding the Rectified Linear Unit (ReLU) Activation Function in PyTorch: Step-by-Step Guide (03:00)
- Leaky ReLU (Leaky Rectified Linear Unit) Activation Function Explained & Its Derivative (19:00)
- Coding the Leaky ReLU (Leaky Rectified Linear Unit) Activation Function in PyTorch: Step-by-Step Guide (05:00)
- Parametric ReLU (PReLU) Activation Function Explained & Its Derivative (07:00)
- Coding the Parametric ReLU (PReLU) Activation Function in PyTorch: Step-by-Step Guide (03:00)
- Exponential Linear Unit (ELU) Activation Function Explained & Its Derivative (16:00)
- Coding the Exponential Linear Unit (ELU) Activation Function in PyTorch: Step-by-Step Guide (04:00)
- Scaled Exponential Linear Unit (SELU) Activation Function Explained & Its Derivative (13:00)
- Coding the Scaled Exponential Linear Unit (SELU) Activation Function in PyTorch: Step-by-Step Guide (03:00)
- Swish Activation Function Explained & Its Derivative (18:00)
- Coding the Swish Activation Function in PyTorch: Step-by-Step Guide (01:00)
- Softplus Activation Function Explained & Its Derivative (11:00)
- Coding the Softplus Activation Function in PyTorch: Step-by-Step Guide (04:00)
- Mish Activation Function Explained & Its Derivative (08:00)
- Coding the Mish Activation Function in PyTorch: Step-by-Step Guide (01:00)
- All Activation Functions in Deep Learning – Explained with PyTorch
All Loss & Cost Functions in Deep Learning – Explained with PyTorch
Master all key loss and cost functions used in neural networks and AI. This section covers: Mean Squared Error (MSE), Mean Absolute Error (MAE), Mean Bias Error (MBE), Root Mean Squared Error (RMSE), Root Mean Squared Log Error (RMSLE), Huber Loss (Smooth L1), Log-Cosh Loss, Binary Cross-Entropy (BCE), Categorical Cross-Entropy, and other important cost functions, with step-by-step PyTorch implementations. You’ll learn how each function works, when to use it, and how it impacts model performance.
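For a taste of the implementations, this sketch (made-up predictions and targets) computes MSE by hand, checks it against nn.MSELoss, and touches MAE, Huber, and BCE:

```python
import torch
import torch.nn as nn

y_true = torch.tensor([2.0, 0.0, 1.5])
y_pred = torch.tensor([2.5, -0.5, 1.0])

mse_manual = ((y_pred - y_true) ** 2).mean()          # MSE written by hand
print(torch.allclose(mse_manual, nn.MSELoss()(y_pred, y_true)))

mae = nn.L1Loss()(y_pred, y_true)                     # MAE (L1 loss)
huber = nn.SmoothL1Loss()(y_pred, y_true)             # Huber / Smooth L1 loss

# Binary cross-entropy on raw logits (numerically stable form)
logits = torch.tensor([0.8, -1.2, 2.0])
labels = torch.tensor([1.0, 0.0, 1.0])
bce = nn.BCEWithLogitsLoss()(logits, labels)
print(mae, huber, bce)
```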
- All Loss (Cost) Functions Explained || Loss Function vs Cost Function | Convex and Non-Convex Loss (13:00)
- Mean Squared Error (MSE) (L2 Loss) Cost Function Explained & Its Derivative (33:00)
- Coding the Mean Squared Error (MSE) (L2 Loss) Cost Function in PyTorch: Step-by-Step Guide (03:00)
- Mean Absolute Error (MAE) (L1 Loss) Cost Function Explained & Its Derivative (13:00)
- Coding the Mean Absolute Error (MAE) (L1 Loss) Cost Function in PyTorch: Step-by-Step Guide (03:00)
- Mean Bias Error (MBE) Cost Function Explained & Its Derivative (04:00)
- Coding the Mean Bias Error (MBE) Cost Function in PyTorch: Step-by-Step Guide (03:00)
- Root Mean Squared Error (RMSE) Cost Function Explained & Its Derivative (05:00)
- Coding the Root Mean Squared Error (RMSE) Cost Function in PyTorch: Step-by-Step Guide (05:00)
- Root Mean Squared Logarithmic Error (RMSLE) Cost Function Explained & Its Derivative (15:00)
- Root Mean Squared Logarithmic Error (RMSLE) Cost Function in PyTorch: Step-by-Step Guide (05:00)
- Huber Loss (Smooth L1 Loss) Cost Function Explained & Its Derivative (16:00)
- Huber Loss (Smooth L1 Loss) Cost Function in PyTorch: Step-by-Step Guide (09:00)
- Log-Cosh Loss Cost Function Explained & Its Derivative (05:00)
- Log-Cosh Loss Cost Function in PyTorch: Step-by-Step Guide (02:00)
- Binary Cross-Entropy Loss (BCE) (Log Loss) Cost Function Explained & Its Derivative (43:00)
- Binary Cross-Entropy Loss (BCE) (Log Loss) Cost Function in PyTorch: Step-by-Step Guide (07:00)
- Categorical Cross-Entropy Loss Cost Function Explained & Its Derivative (37:00)
- Coding Categorical Cross-Entropy Loss Cost Function in PyTorch: Step-by-Step Guide (06:00)
- All Loss & Cost Functions in Deep Learning – Explained with PyTorch
All Optimizers in Deep Learning – Explained with PyTorch
Learn all major optimization algorithms that make neural networks train effectively. This section covers: Gradient Descent (Batch, Stochastic, Mini-Batch), Exponentially Weighted Moving Average (EWMA), SGD with Momentum, Nesterov Accelerated Gradient (NAG), AdaGrad, RMSProp, and Adam (Adaptive Moment Estimation) — all explained theoretically and implemented step-by-step in PyTorch. By the end, you’ll know how to select and apply the right optimizer to achieve faster convergence and better model performance.
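All of these optimizers share the same PyTorch interface, so switching between them is a one-line change, as this toy sketch (minimising a simple quadratic) illustrates:

```python
import torch

w = torch.tensor([5.0], requires_grad=True)

# Any of the optimizers covered here can be swapped in on the same line:
optimizer = torch.optim.SGD([w], lr=0.1, momentum=0.9)                    # SGD + Momentum
# optimizer = torch.optim.SGD([w], lr=0.1, momentum=0.9, nesterov=True)   # NAG
# optimizer = torch.optim.Adagrad([w], lr=0.1)                            # AdaGrad
# optimizer = torch.optim.RMSprop([w], lr=0.01)                           # RMSProp
# optimizer = torch.optim.Adam([w], lr=0.1)                               # Adam

for step in range(50):            # minimise f(w) = w^2
    optimizer.zero_grad()
    loss = (w ** 2).sum()
    loss.backward()
    optimizer.step()
print(w)                          # converges towards 0
```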
- Batch Gradient Descent Explained || Stochastic Gradient Descent (SGD) || Mini-Batch Gradient Descent (01:56:00)
- Coding Batch, SGD, and Mini-Batch Gradient Descent in PyTorch: Step-by-Step Guide (04:00)
- Exponentially Weighted Moving Average (EWMA) Explained || Understanding EWMA and Its Applications (45:00)
- SGD with Momentum Explained || Boosting Gradient Descent with Momentum (34:00)
- Coding SGD with Momentum Optimizer in PyTorch: Step-by-Step Guide (02:00)
- Nesterov Accelerated Gradient (NAG) Optimizer Explained & Its Derivation (15:00)
- Coding Nesterov Accelerated Gradient (NAG) Optimizer in PyTorch: Step-by-Step Guide (02:00)
- AdaGrad & RMSProp Optimizers Explained & Their Derivation (26:00)
- Coding AdaGrad & RMSProp Optimizers in PyTorch: Step-by-Step Guide (03:00)
- Adam (Adaptive Moment Estimation) Optimizer Explained & Its Derivation (13:00)
- Coding Adam (Adaptive Moment Estimation) Optimizer in PyTorch: Step-by-Step Guide (01:00)
- All Optimizers in Deep Learning – Explained with PyTorch
Hands-On Project: Logistic Regression with PyTorch
Build practical skills with a Logistic Regression project in PyTorch. You’ll learn data preprocessing, model creation, training, and evaluation, giving you hands-on experience and a portfolio-ready AI project to showcase your machine learning expertise.
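The core of the project is a linear layer trained with binary cross-entropy; a minimal sketch on made-up data (not the project's dataset) looks like this:

```python
import torch
import torch.nn as nn

# Hypothetical binary-classification data: 2 features, 200 samples
X = torch.randn(200, 2)
y = (X[:, 0] + X[:, 1] > 0).float().unsqueeze(1)   # toy labelling rule

model = nn.Linear(2, 1)                  # logistic regression = linear layer + sigmoid
loss_fn = nn.BCEWithLogitsLoss()         # the sigmoid is folded into the loss
optimizer = torch.optim.SGD(model.parameters(), lr=0.5)

for epoch in range(300):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

accuracy = ((torch.sigmoid(model(X)) > 0.5).float() == y).float().mean()
print(f"training accuracy: {accuracy.item():.2f}")
```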
- Logistic Regression with PyTorch: Hands-On Project for Beginners (03:12:00)
- Hands-On Project: Logistic Regression with PyTorch
Hands-On Project: Classification with Neural Networks in PyTorch
Get practical experience building classification models using neural networks in PyTorch. This project walks you through data preparation, model building, training, and evaluation, giving beginners hands-on skills and a portfolio-ready project to showcase real AI expertise.
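A minimal sketch of the idea, using random toy data and an arbitrary small MLP rather than the project's actual setup:

```python
import torch
import torch.nn as nn

# Toy multi-class setup: 4 input features, 3 classes, random data for illustration
X = torch.randn(120, 4)
y = torch.randint(0, 3, (120,))

model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 3))
loss_fn = nn.CrossEntropyLoss()           # expects raw logits + integer class labels
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)

for epoch in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

predictions = model(X).argmax(dim=1)
print((predictions == y).float().mean())  # training accuracy on the toy data
```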
- Classification with Neural Networks in PyTorch: Hands-On Project for Beginners (01:23:00)
- Hands-On Project: Classification with Neural Networks in PyTorch
Improving Neural Network Performance
Learn how to optimize and stabilize your neural networks for better performance. This section covers: vanishing and exploding gradients; overfitting and underfitting; regularization techniques (L1, L2, and Elastic Net); dropout for robust models; and all key normalizations (batch, layer, group, instance, RMS, and input normalization). All concepts are explained theoretically and implemented step-by-step in PyTorch, giving you hands-on experience to build efficient and high-performing AI models.
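The sketch below (an arbitrary toy model, not the lecture code) shows how several of these techniques plug together in PyTorch:

```python
import torch
import torch.nn as nn

# A small MLP that combines techniques from this section
model = nn.Sequential(
    nn.Linear(20, 64),
    nn.BatchNorm1d(64),   # batch normalization stabilises activations
    nn.ReLU(),
    nn.Dropout(p=0.3),    # dropout fights overfitting
    nn.Linear(64, 1),
)

# L2 regularization is the weight_decay argument of the optimizer
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)

x = torch.randn(32, 20)
model.train()             # dropout/batchnorm behave differently in training mode
print(model(x).shape)     # torch.Size([32, 1])
model.eval()              # switch them off for evaluation
```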
- Vanishing & Exploding Gradients Explained || Why Do Gradients Vanish or Explode? (18:06)
- Overfitting & Underfitting Explained || Why Do Models Overfit or Underfit? (27:00)
- Regularization in Deep Learning || L1, L2, and Elastic Net Explained! (42:00)
- Coding Regularization in PyTorch || L1, L2, and Elastic Net (07:00)
- Dropout in Deep Learning Explained || Preventing Overfitting in Neural Networks! (09:00)
- All Normalizations Explained: Batch, Layer, Instance, Group, RMS (02:57:00)
- Coding All Normalizations in PyTorch: Batch, Layer, Instance, Group, RMS (00:00)
- Improving Neural Network Performance
Natural Language Processing (NLP) – Sentiment Analysis, LSTM & Seq2Seq Models
Master NLP techniques with hands-on PyTorch projects. This section covers: sentiment analysis using word embeddings and a neural bag of words; recurrent neural networks (LSTM); and sequence-to-sequence (Seq2Seq) models. You’ll learn how to process text, build NLP models, and implement real-world applications, gaining practical experience to handle AI language tasks and build portfolio-ready projects.
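As a preview, here is a minimal sketch of an LSTM sentiment classifier (toy vocabulary size and random token ids, purely for illustration):

```python
import torch
import torch.nn as nn

vocab_size, embed_dim, hidden_dim = 1000, 64, 128

class SentimentLSTM(nn.Module):
    def __init__(self):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, 1)       # one logit: positive vs negative

    def forward(self, token_ids):
        x = self.embedding(token_ids)            # (batch, seq_len, embed_dim)
        _, (h_n, _) = self.lstm(x)               # h_n: final hidden state
        return self.fc(h_n[-1])                  # (batch, 1)

batch = torch.randint(0, vocab_size, (8, 25))    # 8 "sentences" of 25 tokens each
print(SentimentLSTM()(batch).shape)              # torch.Size([8, 1])
```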
- Must Watch (00:00)
- Introduction to NLP (47:18)
- End-to-End NLP Pipeline (01:18:00)
- Text Preprocessing (01:07:00)
- Text Representation (01:44:00)
- Word2Vec (01:16:00)
- Hands-On Project: Sentiment Analysis with Word Embeddings in PyTorch (04:20:00)
- Hands-On Project: Sentiment Analysis with LSTM in PyTorch (02:40:00)
- Hands-On Project: Build a Mini Google Translate using Seq2Seq in PyTorch (05:41:00)
- Natural Language Processing (NLP) – Sentiment Analysis, LSTM & Seq2Seq Models
Implementing Transformers from Scratch in PyTorch (Research-Driven Approach)
Implement the Transformer architecture line-by-line in PyTorch while systematically studying the original research paper. Translate mathematical formulations—self-attention, multi-head attention, positional encoding, and normalization—directly into working code. This section builds true architectural understanding, preparing you for LLM research, model scaling, and advanced AI engineering roles.
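At the heart of the implementation is scaled dot-product attention; a minimal sketch of that single building block (toy shapes, no multi-head splitting) looks like this:

```python
import math
import torch

def scaled_dot_product_attention(q, k, v, mask=None):
    """Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V, as in Vaswani et al. (2017)."""
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / math.sqrt(d_k)
    if mask is not None:
        scores = scores.masked_fill(mask == 0, float("-inf"))
    return torch.softmax(scores, dim=-1) @ v

q = k = v = torch.randn(2, 10, 64)     # (batch, seq_len, d_k)
print(scaled_dot_product_attention(q, k, v).shape)   # torch.Size([2, 10, 64])
```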
- Implementing Transformers from Scratch in PyTorch (12:46:00)
- Implementing Transformers from Scratch in PyTorch
Computer Vision with PyTorch: From Fundamentals to Modern Architectures
Learn how machines see, understand, and reason about images using PyTorch. This section covers core computer vision concepts, feature learning, and modern deep learning approaches used in real-world systems. Build a strong foundation for image classification, detection, segmentation, and vision-based AI research.
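For a feel of the building blocks, this short sketch (arbitrary toy input size) traces tensor shapes through a convolution and a pooling layer:

```python
import torch
import torch.nn as nn

# Tracing shapes through a tiny convolutional stack (RGB input, 3x32x32)
x = torch.randn(1, 3, 32, 32)

conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1)
pool = nn.MaxPool2d(kernel_size=2)

h = conv(x)          # (1, 16, 32, 32): padding=1 keeps the spatial size
h = pool(h)          # (1, 16, 16, 16): pooling halves height and width
h = h.flatten(1)     # (1, 4096): flattened for a classification head
print(h.shape)
```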
- Convolutional Layers: nn.Conv2d, Filters, Padding, Kernels, and Image Types (Grayscale & RGB) CV 001 (45:00)
- Image Classification with Logistic Regression in PyTorch | Beginner Friendly Tutorial | Ali Hassan (01:19:00)
- Image Classification with MLP in PyTorch – Step-by-Step Tutorial | Ali Hassan (03:40:00)
- Master PyTorch Conv2d: Filters, Edge Detection, LazyConv2d & Output Size Formula CV 002 | Ali Hassan (01:26:00)
- Pooling Layers in PyTorch Explained – MaxPool2d, AvgPool2d, AdaptiveMaxPool2d & AdaptiveAvgPool2d (56:00)
- LeNet-5 from Scratch in PyTorch – Image Classification with LeNet-5 | Ali Hassan (40:00)
- (01:26:00)
- PyTorch Min–Max Normalization | Scale Images & Feature Maps to [0,1] (with Code) | Ali Hassan (17:00)
- PyTorch nn.Sequential Explained | Build Neural Networks Step by Step | Ali Hassan (09:00)
- Learning Rate Finder in PyTorch | Best LR & Exponential Learning Rate Scheduler Explained | Ali Hassan (01:42:00)
- AlexNet from Scratch in PyTorch | Image Classification Tutorial | Ali Hassan (01:14:00)
- VGGNet from Scratch in PyTorch | VGG11, VGG13, VGG16, VGG19 for Image Classification | Ali Hassan (01:24:00)
- (02:56:00)
- Computer Vision with PyTorch: From Fundamentals to Modern Architectures
Vision Transformer (ViT): From Research Paper to PyTorch Implementation
Implement the Vision Transformer from scratch in PyTorch by rigorously studying the original research paper. Translate patch embeddings, positional encoding, self-attention, and classification heads from theory into clean, modular code. Gain a deep research-level understanding of how Transformers replace CNNs in modern computer vision systems.
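As a preview, here is a minimal sketch of the patch-embedding step (standard ViT-Base sizes used purely for illustration):

```python
import torch
import torch.nn as nn

# Patch embedding as in ViT: split a 224x224 image into 16x16 patches and
# project each patch to the model dimension with a strided convolution.
patch_size, embed_dim = 16, 768
patch_embed = nn.Conv2d(3, embed_dim, kernel_size=patch_size, stride=patch_size)

img = torch.randn(1, 3, 224, 224)
patches = patch_embed(img)                   # (1, 768, 14, 14)
tokens = patches.flatten(2).transpose(1, 2)  # (1, 196, 768): one token per patch

cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
tokens = torch.cat([cls_token.expand(1, -1, -1), tokens], dim=1)  # prepend [CLS]
print(tokens.shape)                          # torch.Size([1, 197, 768])
```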
- Vision Transformer (ViT): From Research Paper to PyTorch Implementation (06:52:00)
- Vision Transformer (ViT): From Research Paper to PyTorch Implementation
Generative Models – From Fundamentals to Advanced Architectures
Learn how AI systems create new data instead of just predicting it. This section builds strong intuition behind modern generative models, training strategies, and real-world use cases. A foundation that prepares you for advanced Generative AI, image synthesis, text generation, and future models.
- Hands-On Project: Building a Generative Model from Scratch with PyTorch (02:22:00)
- Hands-On Project: Image Generation from Scratch with Deep Convolutional Generative Models (PyTorch) (02:40:00)
- Generative Models – From Fundamentals to Advanced Architectures
BERT From Scratch in PyTorch – Research-Grade Implementation
In this section, you will implement BERT (Bidirectional Encoder Representations from Transformers) completely from scratch using PyTorch, without relying on high-level libraries. You’ll start by understanding the original BERT research paper and its core ideas, then translate each concept into clean, modular code.
You will build every major component step by step, including token embeddings, segment embeddings, positional encodings, multi-head self-attention, encoder stacks, and layer normalization. You will also implement Masked Language Modeling (MLM) and Next Sentence Prediction (NSP) to understand how BERT is pre-trained in real research and industry settings.
By the end of this section, you won’t just use BERT—you’ll fully understand how it works internally, preparing you for LLM research, model fine-tuning, custom transformer architectures, and advanced NLP engineering roles.
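As a small preview of the first building block, this sketch (standard BERT-Base sizes, toy inputs) sums token, segment, and position embeddings:

```python
import torch
import torch.nn as nn

# BERT input representation: token + segment + position embeddings, summed
vocab_size, max_len, d_model = 30522, 512, 768

tok_emb = nn.Embedding(vocab_size, d_model)
seg_emb = nn.Embedding(2, d_model)        # sentence A vs sentence B
pos_emb = nn.Embedding(max_len, d_model)  # learned positional embeddings
norm = nn.LayerNorm(d_model)

token_ids = torch.randint(0, vocab_size, (1, 10))   # a toy 10-token sequence
segment_ids = torch.zeros(1, 10, dtype=torch.long)
positions = torch.arange(10).unsqueeze(0)

x = norm(tok_emb(token_ids) + seg_emb(segment_ids) + pos_emb(positions))
print(x.shape)   # torch.Size([1, 10, 768]): ready for the encoder stack
```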
What You’ll Build
- A full BERT encoder architecture in PyTorch
- Masked Language Modeling (MLM) training pipeline
- Next Sentence Prediction (NSP) objective
- A research-ready BERT implementation you can extend or fine-tune
- BERT (13:00:00)
Stable Diffusion From Scratch in PyTorch
In this section, you will build Stable Diffusion from scratch using PyTorch, starting from diffusion theory to a working text-to-image generation pipeline. You’ll understand forward and reverse diffusion processes, noise schedules, and how latent diffusion models drastically reduce computational cost.
You will implement core components including UNet denoising networks, text conditioning, CLIP-style embeddings, and sampling strategies used in modern generative AI systems. This section gives you a research-level understanding of how tools like Stable Diffusion and Midjourney work internally.
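As a preview, here is a minimal sketch of the forward (noising) process with a simple linear schedule (toy latent shape, illustrative only):

```python
import torch

# Forward diffusion: q(x_t | x_0) = N(sqrt(alpha_bar_t) * x_0, (1 - alpha_bar_t) * I)
T = 1000
betas = torch.linspace(1e-4, 0.02, T)            # linear noise schedule
alphas_bar = torch.cumprod(1.0 - betas, dim=0)   # cumulative product of (1 - beta)

def add_noise(x0, t):
    """Sample x_t directly from x_0 at timestep t."""
    noise = torch.randn_like(x0)
    a_bar = alphas_bar[t]
    x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * noise
    return x_t, noise                            # the UNet learns to predict this noise

x0 = torch.randn(1, 4, 64, 64)                   # a latent image, as in latent diffusion
x_t, noise = add_noise(x0, t=500)
print(x_t.shape)
```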
- Stable Diffusion From Scratch in PyTorch (22:00:00)
Swin Transformer From Scratch – Hierarchical Vision Transformers
This section focuses on implementing the Swin Transformer (Shifted Window Transformer) from scratch in PyTorch. You’ll learn how Swin introduces hierarchical feature learning, window-based self-attention, and shifted windows to make transformers scalable for high-resolution vision tasks.
You will code window attention, patch merging, shifted window mechanisms, and full Swin blocks step by step. By the end, you’ll understand why Swin Transformers outperform CNNs and vanilla ViTs in real-world computer vision systems.
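For a taste of the window machinery, this sketch (typical Swin-T sizes used for illustration) partitions a feature map into non-overlapping windows:

```python
import torch

def window_partition(x, window_size):
    """Split a feature map (B, H, W, C) into non-overlapping windows."""
    B, H, W, C = x.shape
    x = x.view(B, H // window_size, window_size, W // window_size, window_size, C)
    windows = x.permute(0, 1, 3, 2, 4, 5).reshape(-1, window_size, window_size, C)
    return windows   # (num_windows * B, window_size, window_size, C)

x = torch.randn(2, 56, 56, 96)          # stage-1 feature map in a Swin-T-like setup
print(window_partition(x, 7).shape)     # torch.Size([128, 7, 7, 96])
```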
- Swin Transformer From Scratch (08:00:00)
LLaMA 2 From Scratch – Large Language Model Engineering
In this section, you will implement LLaMA 2 from scratch using PyTorch, gaining deep insight into how modern open-source large language models are designed and trained. You’ll study architectural optimizations such as RMSNorm, Rotary Positional Embeddings (RoPE), grouped-query attention, and efficient transformer blocks.
You will build a decoder-only transformer architecture suitable for large-scale language modeling and research experimentation. This section prepares you for LLM fine-tuning, scaling laws, and advanced NLP research.
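As a small preview, here is a minimal sketch of RMSNorm, one of the architectural changes covered in this section (toy dimensions):

```python
import torch
import torch.nn as nn

class RMSNorm(nn.Module):
    """RMSNorm as used in LLaMA: scale by the root mean square, no mean subtraction."""
    def __init__(self, dim, eps=1e-6):
        super().__init__()
        self.eps = eps
        self.weight = nn.Parameter(torch.ones(dim))

    def forward(self, x):
        rms = x.pow(2).mean(dim=-1, keepdim=True).add(self.eps).rsqrt()
        return x * rms * self.weight

x = torch.randn(1, 16, 4096)        # (batch, seq_len, model_dim)
print(RMSNorm(4096)(x).shape)       # torch.Size([1, 16, 4096])
```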
- LLaMA 2 From Scratch (18:00:00)
PaliGemma – Multimodal Vision–Language Model From Scratch
This section explores PaliGemma, a multimodal vision-language model, implemented from scratch in PyTorch. You’ll learn how visual encoders and language models are fused to enable image understanding, visual question answering, and multimodal reasoning.
You will integrate vision embeddings with transformer-based language models, handle cross-modal attention, and understand how multimodal LLMs power modern AI assistants. This section builds a strong foundation for Vision-Language research and multimodal AI systems.
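As a rough illustration of prefix-style fusion (illustrative dimensions, not the exact PaliGemma configuration), image features can be projected into the language model's embedding space and prepended to the text tokens:

```python
import torch
import torch.nn as nn

# Sketch of prefix-style multimodal fusion: project image features into the
# language model's embedding space and prepend them to the text tokens.
d_model = 2048
image_features = torch.randn(1, 256, 1152)      # e.g. patch features from a vision encoder
text_embeddings = torch.randn(1, 32, d_model)   # embedded text prompt tokens

project = nn.Linear(1152, d_model)              # align vision features with the LM
image_tokens = project(image_features)          # (1, 256, 2048)

sequence = torch.cat([image_tokens, text_embeddings], dim=1)   # (1, 288, 2048)
print(sequence.shape)   # this combined sequence is fed to the decoder-only LM
```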
- PaliGemma – Multimodal Vision–Language Model From Scratch (16:00:00)
GPT From Scratch – Decoder-Only Transformer Architecture
In this section, you will implement GPT (Generative Pre-trained Transformer) from scratch in PyTorch. You’ll start with causal self-attention and masking, then build transformer decoder blocks exactly as described in the original research.
You’ll understand autoregressive language modeling, token prediction, and text generation pipelines. By the end, you’ll have a fully working GPT model that you can train, fine-tune, and extend for research or production use.
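As a preview, this sketch (toy sequence length) shows the causal mask that makes the decoder autoregressive:

```python
import torch

# Causal (autoregressive) masking: token i may only attend to tokens 0..i
seq_len = 6
scores = torch.randn(1, seq_len, seq_len)                 # raw attention scores
causal_mask = torch.tril(torch.ones(seq_len, seq_len, dtype=torch.bool))

scores = scores.masked_fill(~causal_mask, float("-inf"))  # hide future positions
attn = torch.softmax(scores, dim=-1)
print(attn[0, 0])   # the first token attends only to itself
```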
- GPT From Scratch – Decoder-Only Transformer Architecture (15:00:00)
U-Net From Scratch in PyTorch – Backbone of Diffusion Models
In this section, you will implement U-Net from scratch using PyTorch, the backbone architecture behind diffusion models, image segmentation, and generative vision systems.
You will build encoder-decoder pathways, skip connections, downsampling and upsampling blocks, and understand why U-Net is critical for tasks requiring precise spatial information. This section directly supports advanced topics like Stable Diffusion, medical imaging, and image-to-image translation.
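For a feel of the architecture, this sketch (one illustrative level with toy channel sizes) shows the downsample, upsample, and skip-connection pattern:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# One U-Net level: downsample, process, upsample, then concatenate the skip connection
enc = nn.Conv2d(64, 128, kernel_size=3, padding=1)
up = nn.ConvTranspose2d(128, 64, kernel_size=2, stride=2)
dec = nn.Conv2d(128, 64, kernel_size=3, padding=1)   # 128 = 64 (skip) + 64 (upsampled)

skip = torch.randn(1, 64, 64, 64)       # encoder feature map kept for the skip path
x = F.max_pool2d(skip, 2)               # (1, 64, 32, 32)
x = F.relu(enc(x))                      # (1, 128, 32, 32)
x = up(x)                               # (1, 64, 64, 64)
x = torch.cat([skip, x], dim=1)         # (1, 128, 64, 64): skip connection
print(dec(x).shape)                     # torch.Size([1, 64, 64, 64])
```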
- U-Net From Scratch in PyTorch – Backbone of Diffusion Models (08:00:00)
And a lot more is coming.