You might not have heard about Stable Diffusion. As of writing this article, it is less than a few weeks old. Or perhaps you have heard about it and some of the hubbub around it. It is an AI model that can generate images based on a text prompt or an input image. Why is it important, how do you use it, and why should you care?

This year we have seen several image-generation AIs, such as DALL-E 2, Imagen, and even Craiyon. Nvidia's Canvas AI allows someone to create a crude image with various colors representing different elements, such as mountains or water; Canvas can then transform it into a beautiful landscape.

What makes Stable Diffusion special? For starters, it is open source under the Creative ML OpenRAIL-M license, which is relatively permissive. Additionally, you can run Stable Diffusion (SD) on your own computer rather than via the cloud, accessed through a website or API. The developers recommend a 3xxx-series NVIDIA GPU with at least 6 GB of VRAM to get decent results, but due to its open-source nature, patches and tweaks enable it to run CPU-only, AMD-powered, or even Mac-friendly.

This touches on the most important thing about SD: its open-source community. There are dozens of repos with different features, web UIs, and optimizations. People are training new models, or fine-tuning existing ones to better generate particular styles of content. There are plugins for Photoshop and Krita. Other models are incorporated into the flow, such as image upscaling or face correction. The speed at which all of this has come into existence is dizzying.

How do you use it? After playing with SD on our home desktop and fiddling around with a few of the repos, we can confidently say that SD isn't as good as DALL-E 2 when it comes to generating abstract concepts.

[Image: "Boston Terrier with a mermaid tail, at the bottom of the ocean, dramatic, digital art"]

That doesn't make it any less incredible. Many of the incredible examples you see online are cherry-picked, but the fact that you can fire up your desktop with a low-end RTX 3060 and crank out a new image every 13 seconds is mind-blowing. Step away for a glass of water, and you have ~15 images to sift through when you come back. Many of them are decent and can be iterated on (more on that later).

If you are interested in playing around with it, go to Hugging Face, dreamstudio.ai, or Google Colab and use their web-based interfaces (all currently free). Or follow a guide and get it set up on your own machine (any guide we write here would be woefully out of date within a few weeks).

The real magic of SD and other image generation is the interaction between human and computer. Don't think of this as a "put in a thing, get a new thing out" system; it can loop back on itself. One artist recently did this, starting with a very simple drawing of Seattle. He fed the drawing into SD, asking for a "digital fantasy painting of the Seattle city skyline." Hopefully, you can tell which one he drew and which one SD generated. He fed the result back in, changing it to have a post-apocalyptic vibe. He then drew a simple spaceship into the sky and asked SD to turn it into a beautiful spaceship, and after a few passes it fit into the scene beautifully. Adding birds and a low-strength pass brought it all together in a gorgeous scene.

SD struggles with consistency between generation passes, as demonstrated by another artist's attempt to change a video of someone walking so that they wear a different outfit. She combines the output of DALL-E (SD should work just fine here) with EBSynth, an AI that is good at taking one modified frame and extrapolating how the change should apply to subsequent frames. And it turns out, it DOES work for clothes! The results are incredible. It's not perfect, and if you look closely there are lots of artifacts, but it was good enough for her project.

Ultimately, this will be another tool for expressing ideas faster and in more accessible ways. While what SD generates might not be used as final assets, it could be used to generate textures for a prototype game, or a logo for an open-source project.

[Image: generated by the author via SD]

Why should you care? Hopefully, you can see how exciting and powerful SD and its accompanying cousin models are.