8 Comments

Another great analogy, Sairam. I am fascinated with the comparisons.

Thanks, Karena. I really appreciate it :)

Thanks for this series, Sairam. Are you planning to include BERT and transformers as well?

Glad you like this series, Abhimanyu. I'll do a language model series after this one. I did a vision transformer series before starting this one so that might be something you'd like to check out. :)

Vincent going at it!

Haha, thanks Ivan. Yeah, he's not planning on stopping.

The story about Vincent actually motivated me to experiment more with my own writing haha. Really helpful analogy to explain VAEs.

Also, you mention at the end that "computer vision still needs to be solved," and that has sprung up a lot of questions in my head! Why? What's the difference?

Thanks, Michelle :) it was a note for myself (too) to experiment more.

What I meant is that computer vision as a field still has a lot of problems that need to be solved. Things like image recognition have been solved really well, but there are a few other problems that aren't solved to that level yet. Generative models have exploded onto the scene, and now I feel the tide has shifted towards them. Hence my statement 😁