Published on:
9 November 2023
Primary Category:
Computer Vision and Pattern Recognition
Paper Authors:
Jingwen Chen,
Yingwei Pan,
Ting Yao,
Tao Mei
Presents the text-driven stylized image generation task
Proposes ControlStyle, a diffusion model augmented with a trainable modulation network
Uses diffusion regularizations to enable training on unpaired content and style data
Achieves higher quality than two-stage generate-then-stylize pipelines
Demonstrates generalization to styles unseen during training
Text-guided stylized image creation
This paper introduces a method for generating stylized images guided jointly by a text prompt and an example style image, without requiring an existing content image. It builds on a pre-trained text-to-image diffusion model and modulates its generation with style-image conditions through a trainable modulation network, enabling high-quality stylized image generation within a single model rather than a two-stage generate-then-stylize pipeline.
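The paper does not include code here, but the core idea of modulating a frozen text-to-image diffusion backbone with a trainable style branch can be sketched abstractly. The following is a minimal, hypothetical NumPy sketch (all names and the zero-initialized projection are illustrative assumptions, not the authors' implementation): a frozen denoising step is augmented by a residual style injection whose projection starts at zero, so at initialization the pre-trained diffusion prior is preserved exactly.

```python
import numpy as np

rng = np.random.default_rng(0)


class ModulatedDenoiser:
    """Hypothetical sketch of modulation-style conditioning:
    a frozen text-to-image denoiser plus a trainable branch that
    injects style-image features through a zero-initialized
    projection (illustrative, not the paper's actual code)."""

    def __init__(self, dim=8):
        self.frozen_w = rng.standard_normal((dim, dim)) * 0.1  # frozen backbone weights
        self.style_w = rng.standard_normal((dim, dim)) * 0.1   # trainable style encoder
        self.zero_w = np.zeros((dim, dim))                     # zero-init projection

    def denoise(self, noisy_latent, style_feat=None):
        h = noisy_latent @ self.frozen_w            # frozen denoising step
        if style_feat is not None:
            s = np.tanh(style_feat @ self.style_w)  # encode style features
            h = h + s @ self.zero_w                 # residual style injection
        return h


model = ModulatedDenoiser()
x = rng.standard_normal((1, 8))
style = rng.standard_normal((1, 8))

# At initialization the zero projection contributes nothing, so the
# styled output equals the frozen model's output; training then moves
# zero_w away from zero to blend in the style condition.
base = model.denoise(x)
styled = model.denoise(x, style)
print(np.allclose(base, styled))  # True at initialization
```

The zero-initialized projection is a common trick for adding a conditioning branch to a frozen generative model without degrading it at the start of training; here it stands in for however the modulation network actually couples style features into the diffusion model.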