Large Language Models 1.0. It's been about half a decade since the emergence of the original transformer architecture, followed by BERT, BLOOM, GPT-1 through GPT-3, and many more. This generation of large language models (LLMs) peaked with PaLM, Chinchilla, and LLaMA. What this first generation of transformers has in common is that they were all pretrained on large unlabeled text corpora.