Stable Genius — AI Models in the Generative Media Space Have Blown Up in the Past Year

Rohit Majumdar
5 min read · Jan 16, 2023

The last couple of years have seen a plethora of artificial intelligence (AI/ML) models generate a lot of buzz, especially among the extremely online cognoscenti. When OpenAI released GPT-3, a large language model built on deep neural networks, in an incrementally phased manner, it unleashed the imaginations of experts and hobbyists alike about where such large language models (LLMs) could be used effectively. The use cases span diverse areas: building automated chatbots for customer service teams, cutting down the time it takes to write well-converting marketing copy and product descriptions, and translating legacy code bases into easy-to-understand English. There have been more than enough think pieces recently about the coming wave of AI-generated media.

Photo by Birmingham Museums Trust on Unsplash

Just as the internet of the ’90s and 2000s reshaped how we consume text, then audio, then video, these large AI models are working their way through media in a similar sequence. They started with text: GPT-3 was trained on humongous amounts of it, with around 175 billion parameters. Naturally, the next medium in line to be seized by AI models is images, or visual media.

Stability AI is one of the companies at the forefront of this hype fueled space. As an alternative to…


Written by Rohit Majumdar

Writing about technology, media & society. Internet and digital economy enthusiast. Remixing ideas.
