61 Comments
Daniel Kleine:

Apart from the architectural differences, what would be interesting to know is which text data the LLMs have been trained on. From my POV, it's really unfortunate that this info is typically not disclosed, even for open-source LLMs. Not just the amount of training data (e.g., the number of tokens) but also the data quality would matter as factors for a true scientific comparison.

Sebastian Raschka, PhD:

Agreed. I summarized some of the approaches last year (https://magazine.sebastianraschka.com/p/new-llm-pre-training-and-post-training), but it's tricky due to the lack of full disclosure. Btw, on that note, I recently stumbled upon https://libremodel.xyz/, which aims to be transparent in that regard. It's not going to be a SOTA model, but it's still interesting.

Daniel Kleine:

With a budget of under $1,000, wow!

I found the recent SmolLM3 data and training procedure quite interesting: https://huggingface.co/blog/smollm3#data-mixture-and-training-stages

Leo Benaharon:

Amazing article! This is evidence that we haven't hit a wall yet with LLMs, as all these labs haven't converged on the same architecture.

Cohere Labs is also doing some great work for open source and has some interesting research. I feel a lot of people don't know who they are, as they are trying to appeal to businesses/governments.

Sebastian Raschka, PhD:

Good point. Cohere flies a bit under the radar in open-weight LLM circles, maybe because of the enterprise focus that you mentioned. I think their Command model is also >1.5 years old now and more RAG-focused, so I didn't include it (but please correct me if I'm wrong).

Nathan Ren:

Really amazing and very helpful!

Thank you so much for consistently sharing such valuable articles.

One more thing: I must tell the whole world that your book, Build a Large Language Model (From Scratch), is definitely worth a read! It was the true catalyst that sparked my journey into the LLM field!

Sebastian Raschka, PhD:

Thanks for the kind words and also for kindly recommending my book! It's awesome to hear that it's been a career-starter!

Lirio:

Amazing article!

Dante:

I appreciate all the work you are putting into compiling this information.

Paul T:

> MLA is a clever trick to reduce KV cache memory use while even slightly outperforming MHA in terms of modeling performance.

What's the intuition for why MLA improves performance? It seems that compressing would, if anything, be slightly lossy.

Is this just a non-statistically-significant result, where we should just say "no evidence that it's worse"? Or is there some mechanical reason that it is indeed better?

Sebastian Raschka, PhD:

Good question. This is based on the results shown in Figure 4. You are right, you'd expect slightly worse performance since it's a workaround/approximation. My guess is that the additional "layer" adds more expressiveness.
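To make that concrete, here is a minimal sketch of the idea with made-up dimensions and module names (illustrative only, not DeepSeek's actual implementation): the hidden state is down-projected to a small latent that is cached, and learned up-projections reconstruct keys and values from it.

```python
import torch
import torch.nn as nn

# Hypothetical sizes chosen purely for illustration
d_model, d_latent, n_heads, d_head = 1024, 256, 16, 64

class MLAKeyValue(nn.Module):
    """Minimal sketch of MLA-style KV compression (not DeepSeek's exact code)."""
    def __init__(self):
        super().__init__()
        # Down-project the hidden state into a small latent; this latent is what gets cached
        self.kv_down = nn.Linear(d_model, d_latent, bias=False)
        # Learned up-projections reconstruct per-head keys and values from the cached latent
        self.k_up = nn.Linear(d_latent, n_heads * d_head, bias=False)
        self.v_up = nn.Linear(d_latent, n_heads * d_head, bias=False)

    def forward(self, x):
        latent = self.kv_down(x)   # (batch, seq, d_latent) -> stored in the KV cache
        k = self.k_up(latent)      # (batch, seq, n_heads * d_head)
        v = self.v_up(latent)
        return k, v, latent

x = torch.randn(1, 8, d_model)
k, v, latent = MLAKeyValue()(x)
# The cache holds 256 values per token instead of 2 * 16 * 64 = 2048 for full K and V
print(latent.shape, k.shape)
```

Those extra learned down-/up-projections are the additional "layer" mentioned above.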

Daniel Kleine:

Great overview!

As a small side note, I noticed that in Fig. 4, the bottom left comment appears to read 'MQA.' Should this perhaps be 'MLA' instead?

Sebastian Raschka, PhD:

Good catch, thanks!

Daniel Kleine:

I might have found another small nitpick: In Fig. 19, on the right side, the comment for the intermediate layer dimension should be 1536 (as it must be divisible by 8). Would you mind checking if this is correct?

Sebastian Raschka, PhD:

Thanks, that should indeed be 1536, not 1535 (I need to start wearing glasses).

Daniel Kleine:

:D Thanks! Could you please also update Fig. 1 for Qwen3?

I have also noticed that the arrows on the left side of Fig. 10 seem a bit misaligned. Would you mind taking a look to see if they could be adjusted for clarity?

BTW, I really like the visual comparisons of the architectures!

Sebastian Raschka, PhD:

👍👍

Daniel Kleine:

Thanks!

Thor Avenstrup:

Thanks for the article! Just a small correction: Gemma 3 uses GELU, not SiLU, in the feed forward (in the figure comparing Gemma 3 27B to Mistral Small 3.1 24B).

Sebastian Raschka, PhD:

Thanks for the note! You are absolutely right; it must have been a copy-and-paste error (I coincidentally had a section on GELU vs. SiLU in my most recent article). Anyway, it should be fixed now!
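For anyone curious about the difference, here is a small illustrative comparison of the two activations (just a sketch; the exact Gemma 3 and Mistral feed-forward blocks differ in how they gate and project):

```python
import torch
import torch.nn.functional as F

x = torch.linspace(-4, 4, steps=9)

silu = F.silu(x)                      # SiLU / swish: x * sigmoid(x)
gelu = F.gelu(x, approximate="tanh")  # GELU: x * Phi(x), using the common tanh approximation

print(torch.stack([x, silu, gelu]))
# Both curves are smooth and nearly identical for large |x|; they only differ slightly
# around zero, which is part of why the two are easy to mix up in a figure.
```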

Yechan.ai:

Thanks. This is insanely useful for study

weah:

Amazing article!!! Thanks, but I have a question: Qwen3-235B-A22B employs a fully integrated Mixture-of-Experts (MoE) architecture rather than alternating between dense and MoE layers every two layers, doesn't it?

Sebastian Raschka, PhD:

Thanks for the note, you are absolutely right. This must have been a copy-paste error from the Llama 4 architecture. Just fixed it.
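For readers following along, a purely illustrative sketch of the distinction (placeholder labels, not the actual Qwen3 or Llama 4 module names): MoE in every block versus alternating dense/MoE blocks.

```python
# "DenseFFN" and "MoEFFN" are placeholder labels, not real Qwen3 or Llama 4 modules.
n_layers = 8

# Qwen3-235B-A22B style: an MoE feed-forward in every transformer block
qwen3_style = ["MoEFFN" for _ in range(n_layers)]

# Llama 4 style: dense and MoE feed-forwards alternate between blocks
llama4_style = ["DenseFFN" if i % 2 == 0 else "MoEFFN" for i in range(n_layers)]

print(qwen3_style)
print(llama4_style)
```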

Narendran:

Very Helpful. Thanks

Paolo Perrone:

this is fantastic

rizkhi_33:

nice

active_sky:

Are you interested in explaining the principles of RoPE (which has almost become a de facto standard in every architecture mentioned in your articles)? The mathematical derivation process is really giving me a headache.😭

Sebastian Raschka, PhD:

One day! I have a long list of things I want to do but so little time 😅
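In the meantime, here is a minimal sketch of the core RoPE idea for anyone who wants a head start; it's a simplified illustration (the half-split rotation variant), not any particular model's implementation:

```python
import torch

def rope(x, base=10000.0):
    """Apply rotary position embeddings to x of shape (seq_len, head_dim).
    Simplified illustration: pairs of dimensions are rotated by position-dependent
    angles, so query-key dot products end up depending on relative position."""
    seq_len, head_dim = x.shape
    half = head_dim // 2
    # One rotation frequency per dimension pair (higher pairs rotate more slowly)
    inv_freq = base ** (-torch.arange(0, half, dtype=torch.float32) / half)
    pos = torch.arange(seq_len, dtype=torch.float32)
    angles = torch.outer(pos, inv_freq)        # (seq_len, half)
    cos, sin = torch.cos(angles), torch.sin(angles)
    x1, x2 = x[:, :half], x[:, half:]          # split dimensions into pairs
    # Standard 2D rotation applied to each (x1, x2) pair
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)

q = torch.randn(8, 64)   # (sequence length, head dimension), made-up sizes
q_rot = rope(q)
print(q_rot.shape)       # torch.Size([8, 64])
```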

active_sky:

Thank you for your reply!

me:

Are you sure that the Qwen MoE has a dense MLP in every other layer?

Sebastian Raschka, PhD:

Good callout. That was from something else. AFAIK they don't have dense MLP layers.

vishal chauhan:

Great insight! I think the biggest leap forward will come from architectural changes that eliminate the auto-regressive nature of current LLMs. Curious to see who comes up with clever solutions to break that paradigm.
