This is fantastic. Really looking forward to going through each of these papers. The rate of progress is so fast that collections like these are essential so that people who are not at the core of the field can keep up with the key insights.
Awesome compilation! Really helpful for folks who are starting out. It would be amazing if you could write a similar blog for computer vision to catch up on SOTA like diffusion models, vision transformers, etc.
One minor typo: in the BERT section you write "..The BERT paper above introduces the original concept of masked-language modeling, and next-sentence prediction remains an influential decoder-style architecture..", which I think should be "..encoder-style architecture..."
Really liked the article. Shall be following the links for further reading. Much thanks! One thing: in "Scaling Down to Scale Up: A Guide to Parameter-Efficient Fine-Tuning" you misspelled Deshpande, one of the authors; the p and h are interchanged. Great read though! I would love more ML articles. Even historical ones.
I just subscribed today. Thanks for compiling such a great list. I am looking forward to reading through the ones I have yet to read. Also, I have all three books. I am looking forward to future newsletter articles.
What are your thoughts on the applicability of older Apache systems like Hadoop and Ambari? The distributed filesystem strikes me as useful for clusters.
I'd love to read additional ML & AI articles from you, outside of your existing newsletter format! So you've got my vote ✅
Great work! Congratulations!!
I vote YES as well. Please keep going on ML & AI topics. Thanks for sharing.
Stunning explanations! Thanks
Thanks for the article!
A minor note: I think there is a typo in the title "BlenderBot 3: A Deployed Conversational Agent that Continually Learns to Responsibly Rngage" — "Rngage" should be "Engage".
I like your idea of posting some additional articles related to machine learning and AI.
Hi Sebastian,
I love your blog. Your blogs are very helpful to keep me updated on emerging technology.
Request you to write blog on
1) Why/how LLMs learn in-context without training on data (zero-shot learning)
2) Prompt chaining
This is really good. I also enjoyed reading your book on ML with PyTorch and scikit-learn. Recommended to everyone.
This is a really great and detailed article. It seems like everything is converging towards transformers.
Transformers are literally taking over AI.
Do you think that transformers will be the monad for AI for the coming decade?
Love your content and eagerly look forward to it. Keep it going!
For sure, your content is always a great read!