Sitemap - 2023 - Ahead of AI
Ten Noteworthy AI Research Papers of 2023
Practical Tips for Finetuning LLMs Using LoRA (Low-Rank Adaptation)
A Potential Successor to RLHF for Efficient LLM Alignment and the Resurgence of CNNs
From Self-Alignment to LongLoRA
LLM Training: RLHF and Its Alternatives
The Missing Bits: Llama 2 Weights Have Changed
New Foundation Models: CodeLlama and other highlights in Open-Source AI
Llama 2, Flash-Attention 2, and More
Large Language Models and Nearest Neighbors
Long Contexts and Scaling Transformers to 1,000,000,000 Tokens
State of Computer Vision 2023: From Vision Transformers to Neural Radiance Fields
Accelerating PyTorch Model Training
Understanding Encoder And Decoder LLMs
Direct Preference Optimization for Human Feedback and More
LLM Tuning & Dataset Perspectives
Finetuning LLMs Efficiently with Adapters
Transformers for Long Inputs and Less Training Data
Insights from Large-Scale LLM Training Runs
Understanding Parameter-Efficient LLM Finetuning: Prompt Tuning And Prefix Tuning
Finetuning Large Language Models
Understanding Large Language Models
TrAIn Differently: Do We Need Reinforcement Learning with Human Feedback (RLHF)?
RevAIval of Ideas: From Next-Generation Convolutional Neural Networks to LLMs