Sitemap - 2023 - Ahead of AI

Ten Noteworthy AI Research Papers of 2023

Research Papers in Nov 2023: Tackling Hallucinations, Boosting Reasoning Abilities, and New Insights into the Transformer Architecture

Practical Tips for Finetuning LLMs Using LoRA (Low-Rank Adaptation)

Research Papers in Oct 2023: A Potential Successor to RLHF for Efficient LLM Alignment and the Resurgence of CNNs

AI and Open Source in 2023

LLM Business and Busyness: Recent Company Investments and AI Adoption, New Small Openly Available LLMs, and LoRA Research

Research Papers Aug-Sep 2023: From Self-Alignment to LongLoRA

LLM Training: RLHF and Its Alternatives

The Missing Bits: Llama 2 Weights Have Changed

New Foundation Models: CodeLlama and Other Highlights in Open-Source AI

Research Highlights Jul-Aug 2023: Llama 2, Flash-Attention 2, and More

Large Language Models and Nearest Neighbors

AI Research Highlights June-July 2023: Long Contexts and Scaling Transformers to 1,000,000,000 Tokens

State of Computer Vision 2023: From Vision Transformers to Neural Radiance Fields

Accelerating PyTorch Model Training

Understanding Encoder and Decoder LLMs

AI Research Highlights May-June 2023: Direct Preference Optimization for Human Feedback and More

LLM Tuning & Dataset Perspectives

About LayerNorm Variants in the Original Transformer Paper, and Some Other Interesting Historical Tidbits About LLMs

Finetuning LLMs Efficiently with Adapters

AI Research Highlights April-May 2023: Transformers for Long Inputs and Less Training Data

Insights from Large-Scale LLM Training Runs

Understanding Parameter-Efficient LLM Finetuning: Prompt Tuning and Prefix Tuning

Finetuning Large Language Models

Understanding Large Language Models

Large Language Models 3.0

TrAIn Differently: Do We Need Reinforcement Learning with Human Feedback (RLHF)?

RevAIval of Ideas: From Next-Generation Convolutional Neural Networks to LLMs

Looking Back at 2022: A Big Year for AI