Example prompt: pink hair guy in glasses, photograph, sporty body, cinematic lighting, clear eyes, perfect face, blush, beautiful nose, beautiful eyes, detailed eyes
 
Anime Diffusion2 is a latent text-to-image diffusion model based on the Vintedois (22h) Diffusion model, trained on BLIP captions for a Danbooru set, Demon Slayer imagery, and art from 4chan.
This model is open access and available to all under the CreativeML OpenRAIL-M license, which further specifies rights and usage.
The model can be used for entertainment purposes and as a generative art assistant.
import torch
from diffusers import StableDiffusionPipeline

# Load the model and move it to the GPU
pipe = StableDiffusionPipeline.from_pretrained(
    'AlexWortega/AnimeDiffuion2',
    torch_dtype=torch.float32
).to('cuda')

negative_prompt = "low-res, duplicate, poorly drawn face, ugly, undetailed face"
prompt = "pink hair guy in glasses, photograph, sporty body, cinematic lighting, clear eyes, perfect face, blush, beautiful nose, beautiful eyes, detailed eyes"
num_samples = 1

# Generate without tracking gradients
with torch.inference_mode():
    images = pipe([prompt] * num_samples,
                  negative_prompt=[negative_prompt] * num_samples,
                  height=512, width=512,
                  num_inference_steps=50,
                  guidance_scale=8,
                  ).images

images[0].save("test.png")
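
If you want several variations of the same prompt in one call, a minimal sketch along the same lines is shown below; the batch size of 4, the seed 42, and the shortened prompt are arbitrary choices for illustration, not part of the original example.

import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    'AlexWortega/AnimeDiffuion2',
    torch_dtype=torch.float32
).to('cuda')

# A fixed seed makes the batch reproducible; 42 is an arbitrary choice
generator = torch.Generator('cuda').manual_seed(42)

with torch.inference_mode():
    images = pipe(
        'pink hair guy in glasses, photograph, cinematic lighting',
        negative_prompt='low-res, duplicate, poorly drawn face, ugly, undetailed face',
        num_images_per_prompt=4,   # four variations of the prompt in one pass
        num_inference_steps=50,
        guidance_scale=8,
        generator=generator,
    ).images

# Save each variation under its own file name
for i, img in enumerate(images):
    img.save(f"test_{i}.png")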
This project would not have been possible without the incredible work by the CompVis researchers.
To reach me, here is my blog: