AI Art Generators Can Be Fooled Into Making NSFW Images: Nonsense words can get around DALL-E 2's and Stable Diffusion's filters
Aren’t there NSFW filters after the generation? Bing Image Creator, for example, will frequently generate images for borderline-NSFW[1] prompts but only show you a subset of the four it generated, not all of them. Some prompts are also rejected before any generation takes place at all. But I don’t see how this trick would help you get through the filter that runs after the generation (roughly the kind of two-stage pipeline sketched below).
[1] “Borderline-NSFW” really just means anything involving women or violence; the filter on that thing can be extremely prudish and oftentimes a bit nonsensical (e.g. “woman in bikini” is blocked, but “woman in 1950s bikini” is fine).
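To be clear about what I mean by a filter "after the generation", here is a minimal Python sketch of a two-stage moderation pipeline: a prompt-level check before anything is generated, then a per-image NSFW classifier afterwards that only surfaces the images that pass. Every name, blocklist term, and threshold here is made up for illustration; this is not Bing's (or anyone's) actual implementation.

```python
# Hypothetical two-stage moderation pipeline, for illustration only.
from dataclasses import dataclass
from typing import List

BLOCKED_TERMS = {"gore", "nude"}   # assumed prompt-stage blocklist
NSFW_THRESHOLD = 0.7               # assumed cutoff for the image-stage classifier


@dataclass
class GeneratedImage:
    data: bytes
    nsfw_score: float  # assumed score from some post-generation classifier


def prompt_is_allowed(prompt: str) -> bool:
    """Stage 1: reject certain prompts before any generation happens."""
    words = prompt.lower().split()
    return not any(term in words for term in BLOCKED_TERMS)


def filter_generated(images: List[GeneratedImage]) -> List[GeneratedImage]:
    """Stage 2: generate N images, but only surface the ones that pass."""
    return [img for img in images if img.nsfw_score < NSFW_THRESHOLD]


def moderate(prompt: str, images: List[GeneratedImage]) -> List[GeneratedImage]:
    if not prompt_is_allowed(prompt):
        return []                    # rejected before generation
    return filter_generated(images)  # subset of the batch shown to the user


if __name__ == "__main__":
    batch = [GeneratedImage(b"...", s) for s in (0.1, 0.4, 0.85, 0.95)]
    shown = moderate("woman in 1950s bikini", batch)
    print(f"showing {len(shown)} of {len(batch)} generated images")
```

The point is that the nonsense-word attack in the article only targets stage 1; if a stage-2 image classifier exists, it operates on the generated pixels and doesn't care what the prompt said.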