Ahead of AI

Sitemap - 2024 - Ahead of AI

Noteworthy AI Research Papers of 2024 (Part One)

LLM Research Papers: The 2024 List

Understanding Multimodal LLMs

Building A GPT-Style LLM Classifier From Scratch

Building LLMs from the Ground Up: A 3-hour Coding Workshop

New LLM Pre-training and Post-training Paradigms

Instruction Pretraining LLMs

Developing an LLM: Building, Training, Finetuning

LLM Research Insights: Instruction Masking and New LoRA Finetuning Experiments

How Good Are the Latest Open LLMs? And Is DPO Better Than PPO?

Using and Finetuning Pretrained Transformers

Tips for LLM Pretraining and Evaluating Reward Models

A LoRA Successor, Small Finetuned LLMs Vs Generalist LLMs, and Transparent LLM Research

Improving LoRA: Implementing Weight-Decomposed Low-Rank Adaptation (DoRA) from Scratch

Support Independent AI Research

Model Merging, Mixtures of Experts, and Towards Smaller LLMs

Understanding and Coding Self-Attention, Multi-Head Attention, Causal-Attention, and Cross-Attention in LLMs

© 2025 Raschka AI Research (RAIR) Lab LLC