{"work":{"id":"98e51b10-54bd-4251-8a2d-f79bd6215c19","openalex_id":null,"doi":null,"arxiv_id":"2308.06721","raw_key":null,"title":"IP-Adapter: Text Compatible Image Prompt Adapter for Text-to-Image Diffusion Models","authors":null,"authors_text":"Hu Ye, Jun Zhang, Sibo Liu, Xiao Han, Wei Yang","year":2023,"venue":"cs.CV","abstract":"Recent years have witnessed the strong power of large text-to-image diffusion models for the impressive generative capability to create high-fidelity images. However, it is very tricky to generate desired images using only text prompt as it often involves complex prompt engineering. An alternative to text prompt is image prompt, as the saying goes: \"an image is worth a thousand words\". Although existing methods of direct fine-tuning from pretrained models are effective, they require large computing resources and are not compatible with other base models, text prompt, and structural controls. In this paper, we present IP-Adapter, an effective and lightweight adapter to achieve image prompt capability for the pretrained text-to-image diffusion models. The key design of our IP-Adapter is decoupled cross-attention mechanism that separates cross-attention layers for text features and image features. Despite the simplicity of our method, an IP-Adapter with only 22M parameters can achieve comparable or even better performance to a fully fine-tuned image prompt model. As we freeze the pretrained diffusion model, the proposed IP-Adapter can be generalized not only to other custom models fine-tuned from the same base model, but also to controllable generation using existing controllable tools. With the benefit of the decoupled cross-attention strategy, the image prompt can also work well with the text prompt to achieve multimodal image generation. 
The project page is available at \\url{https://ip-adapter.github.io}.","external_url":"https://arxiv.org/abs/2308.06721","cited_by_count":null,"metadata_source":"pith","metadata_fetched_at":"2026-05-14T23:48:19.245041+00:00","pith_arxiv_id":"2308.06721","created_at":"2026-05-08T18:23:55.357148+00:00","updated_at":"2026-05-14T23:48:19.245041+00:00","title_quality_ok":true,"display_title":"IP-Adapter: Text Compatible Image Prompt Adapter for Text-to-Image Diffusion Models","render_title":"IP-Adapter: Text Compatible Image Prompt Adapter for Text-to-Image Diffusion Models"},"hub":{"state":{"work_id":"98e51b10-54bd-4251-8a2d-f79bd6215c19","tier":"hub","tier_reason":"10+ Pith inbound or 1,000+ external citations","pith_inbound_count":68,"external_cited_by_count":null,"distinct_field_count":7,"first_pith_cited_at":"2023-10-30T13:12:40+00:00","last_pith_cited_at":"2026-05-13T11:44:14+00:00","author_build_status":"not_needed","summary_status":"needed","contexts_status":"needed","graph_status":"needed","ask_index_status":"not_needed","reader_status":"not_needed","recognition_status":"not_needed","updated_at":"2026-05-15T03:47:26.501771+00:00","tier_text":"hub"},"tier":"hub","role_counts":[{"context_role":"background","n":2}],"polarity_counts":[{"context_polarity":"background","n":2}],"runs":{"context_extract":{"job_type":"context_extract","status":"succeeded","result":{"enqueued_papers":25},"error":null,"updated_at":"2026-05-14T09:38:39.670068+00:00"},"graph_features":{"job_type":"graph_features","status":"succeeded","result":{"co_cited":[{"title":"SDXL: Improving Latent Diffusion Models for High-Resolution Image Synthesis","work_id":"8034c587-fba6-4941-87ba-c98f2ac962cb","shared_citers":22},{"title":"Hierarchical Text-Conditional Image Generation with CLIP Latents","work_id":"0c6a768b-70b8-4242-bb0e-459f1008c9fc","shared_citers":15},{"title":"Qwen-Image Technical Report","work_id":"d06d7ecc-7579-4f89-a60b-4278a0f3c562","shared_citers":15},{"title":"An Image is Worth One 
Word: Personalizing Text-to-Image Generation using Textual Inversion","work_id":"ca618c21-3ba6-448e-bd86-bcecff3cdeb5","shared_citers":14},{"title":"Denoising Diffusion Implicit Models","work_id":"8fa2128b-d18c-405c-ac92-0e669cf89ac0","shared_citers":14},{"title":"Classifier-Free Diffusion Guidance","work_id":"acf2c588-c088-4a6c-938e-150ad7c666d7","shared_citers":11},{"title":"InstantID: Zero-shot Identity-Preserving Generation in Seconds","work_id":"85490b0d-f13f-4217-a587-51a62742c242","shared_citers":11},{"title":"DINOv2: Learning Robust Visual Features without Supervision","work_id":"26b304e5-b54a-4f26-be7e-83299eca52e4","shared_citers":9},{"title":"Qwen2.5-VL Technical Report","work_id":"69dffacb-bfe8-442d-be86-48624c60426f","shared_citers":9},{"title":"Wan: Open and Advanced Large-Scale Video Generative Models","work_id":"ad3ebc3b-4224-46c9-b61d-bcf135da0a7c","shared_citers":9},{"title":"Emerging Properties in Unified Multimodal Pretraining","work_id":"e0cfd82c-f5d4-44fd-b531-ec73ab0a805b","shared_citers":8},{"title":"FLUX.1 Kontext: Flow Matching for In-Context Image Generation and Editing in Latent Space","work_id":"5dfe19d5-3541-4803-8fe9-3c8b9e29b281","shared_citers":8},{"title":"HunyuanVideo: A Systematic Framework For Large Video Generative Models","work_id":"881efa7e-7e73-4c66-9cc3-2803e551061c","shared_citers":8},{"title":"OmniGen2: Towards Instruction-Aligned Multimodal Generation","work_id":"d3153e5f-b6e2-4ab3-9f41-e24e24d64496","shared_citers":8},{"title":"Prompt-to-Prompt Image Editing with Cross Attention Control","work_id":"196f7eef-d65a-47e4-b815-9a188f6aedcf","shared_citers":8},{"title":"Stable Video Diffusion: Scaling Latent Video Diffusion Models to Large Datasets","work_id":"4f68eada-27e3-437a-a2fe-6e4ca524d0d3","shared_citers":8},{"title":"GPT-4 Technical Report","work_id":"b928e041-6991-4c08-8c81-0359e4097c7b","shared_citers":7},{"title":"PixArt-$\\alpha$: Fast Training of Diffusion Transformer for 
Photorealistic Text-to-Image Synthesis","work_id":"77157568-e4be-4041-bb20-388177fc59d0","shared_citers":7},{"title":"Qwen3-VL Technical Report","work_id":"1fe243aa-e3c0-4da6-b391-4cbcfc88d5c0","shared_citers":7},{"title":"CogVideoX: Text-to-Video Diffusion Models with An Expert Transformer","work_id":"f38fc088-12aa-4bf4-9ecd-08d3e797ccb7","shared_citers":6},{"title":"Qwen2-VL: Enhancing Vision-Language Model's Perception of the World at Any Resolution","work_id":"8abcfe4f-e0fb-44b7-9123-448fac95f90a","shared_citers":6},{"title":"Score-Based Generative Modeling through Stochastic Differential Equations","work_id":"d9110e53-a5d4-4794-a4c5-a575e91c31ad","shared_citers":6},{"title":"Step1X-Edit: A Practical Framework for General Image Editing","work_id":"3392f2c8-a1cb-4d6c-8c82-2cdccffa33f9","shared_citers":6},{"title":"Auto-Encoding Variational Bayes","work_id":"97d95295-30e1-42b4-bbf6-85f0fa4edb44","shared_citers":5}],"time_series":[{"n":4,"year":2024},{"n":2,"year":2025},{"n":57,"year":2026}],"dependency_candidates":[]},"error":null,"updated_at":"2026-05-14T09:48:22.316666+00:00"},"identity_refresh":{"job_type":"identity_refresh","status":"succeeded","result":{"items":[{"title":"Qwen3 Technical Report","outcome":"unchanged","work_id":"25a4e30c-1232-48e7-9925-02fa12ba7c9e","resolver":"local_arxiv","confidence":0.98,"old_work_id":"25a4e30c-1232-48e7-9925-02fa12ba7c9e"}],"counts":{"fixed":0,"merged":0,"unchanged":1,"quarantined":0,"needs_external_resolution":0},"errors":[],"attempted":1},"error":null,"updated_at":"2026-05-14T09:38:43.517034+00:00"},"summary_claims":{"job_type":"summary_claims","status":"succeeded","result":{"title":"IP-Adapter: Text Compatible Image Prompt Adapter for Text-to-Image Diffusion Models","claims":[{"claim_text":"Recent years have witnessed the strong power of large text-to-image diffusion models for the impressive generative capability to create high-fidelity images. 
However, it is very tricky to generate desired images using only text prompt as it often involves complex prompt engineering. An alternative to text prompt is image prompt, as the saying goes: \"an image is worth a thousand words\". Although existing methods of direct fine-tuning from pretrained models are effective, they require large computing resources and are not compatible with other base models, text prompt, and structural controls.","claim_type":"abstract","evidence_strength":"source_metadata"}],"why_cited":"Pith tracks IP-Adapter: Text Compatible Image Prompt Adapter for Text-to-Image Diffusion Models because it crossed a citation-hub threshold.","role_counts":[]},"error":null,"updated_at":"2026-05-14T09:48:30.400326+00:00"}},"summary":{"title":"IP-Adapter: Text Compatible Image Prompt Adapter for Text-to-Image Diffusion Models","claims":[{"claim_text":"Recent years have witnessed the strong power of large text-to-image diffusion models for the impressive generative capability to create high-fidelity images. However, it is very tricky to generate desired images using only text prompt as it often involves complex prompt engineering. An alternative to text prompt is image prompt, as the saying goes: \"an image is worth a thousand words\". Although existing methods of direct fine-tuning from pretrained models are effective, they require large computing resources and are not compatible with other base models, text prompt, and structural controls. 
","claim_type":"abstract","evidence_strength":"source_metadata"}],"why_cited":"Pith tracks IP-Adapter: Text Compatible Image Prompt Adapter for Text-to-Image Diffusion Models because it crossed a citation-hub threshold.","role_counts":[]},"graph":{"co_cited":[{"title":"SDXL: Improving Latent Diffusion Models for High-Resolution Image Synthesis","work_id":"8034c587-fba6-4941-87ba-c98f2ac962cb","shared_citers":22},{"title":"Hierarchical Text-Conditional Image Generation with CLIP Latents","work_id":"0c6a768b-70b8-4242-bb0e-459f1008c9fc","shared_citers":15},{"title":"Qwen-Image Technical Report","work_id":"d06d7ecc-7579-4f89-a60b-4278a0f3c562","shared_citers":15},{"title":"An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion","work_id":"ca618c21-3ba6-448e-bd86-bcecff3cdeb5","shared_citers":14},{"title":"Denoising Diffusion Implicit Models","work_id":"8fa2128b-d18c-405c-ac92-0e669cf89ac0","shared_citers":14},{"title":"Classifier-Free Diffusion Guidance","work_id":"acf2c588-c088-4a6c-938e-150ad7c666d7","shared_citers":11},{"title":"InstantID: Zero-shot Identity-Preserving Generation in Seconds","work_id":"85490b0d-f13f-4217-a587-51a62742c242","shared_citers":11},{"title":"DINOv2: Learning Robust Visual Features without Supervision","work_id":"26b304e5-b54a-4f26-be7e-83299eca52e4","shared_citers":9},{"title":"Qwen2.5-VL Technical Report","work_id":"69dffacb-bfe8-442d-be86-48624c60426f","shared_citers":9},{"title":"Wan: Open and Advanced Large-Scale Video Generative Models","work_id":"ad3ebc3b-4224-46c9-b61d-bcf135da0a7c","shared_citers":9},{"title":"Emerging Properties in Unified Multimodal Pretraining","work_id":"e0cfd82c-f5d4-44fd-b531-ec73ab0a805b","shared_citers":8},{"title":"FLUX.1 Kontext: Flow Matching for In-Context Image Generation and Editing in Latent Space","work_id":"5dfe19d5-3541-4803-8fe9-3c8b9e29b281","shared_citers":8},{"title":"HunyuanVideo: A Systematic Framework For Large Video 
Generative Models","work_id":"881efa7e-7e73-4c66-9cc3-2803e551061c","shared_citers":8},{"title":"OmniGen2: Towards Instruction-Aligned Multimodal Generation","work_id":"d3153e5f-b6e2-4ab3-9f41-e24e24d64496","shared_citers":8},{"title":"Prompt-to-Prompt Image Editing with Cross Attention Control","work_id":"196f7eef-d65a-47e4-b815-9a188f6aedcf","shared_citers":8},{"title":"Stable Video Diffusion: Scaling Latent Video Diffusion Models to Large Datasets","work_id":"4f68eada-27e3-437a-a2fe-6e4ca524d0d3","shared_citers":8},{"title":"GPT-4 Technical Report","work_id":"b928e041-6991-4c08-8c81-0359e4097c7b","shared_citers":7},{"title":"PixArt-$\\alpha$: Fast Training of Diffusion Transformer for Photorealistic Text-to-Image Synthesis","work_id":"77157568-e4be-4041-bb20-388177fc59d0","shared_citers":7},{"title":"Qwen3-VL Technical Report","work_id":"1fe243aa-e3c0-4da6-b391-4cbcfc88d5c0","shared_citers":7},{"title":"CogVideoX: Text-to-Video Diffusion Models with An Expert Transformer","work_id":"f38fc088-12aa-4bf4-9ecd-08d3e797ccb7","shared_citers":6},{"title":"Qwen2-VL: Enhancing Vision-Language Model's Perception of the World at Any Resolution","work_id":"8abcfe4f-e0fb-44b7-9123-448fac95f90a","shared_citers":6},{"title":"Score-Based Generative Modeling through Stochastic Differential Equations","work_id":"d9110e53-a5d4-4794-a4c5-a575e91c31ad","shared_citers":6},{"title":"Step1X-Edit: A Practical Framework for General Image Editing","work_id":"3392f2c8-a1cb-4d6c-8c82-2cdccffa33f9","shared_citers":6},{"title":"Auto-Encoding Variational Bayes","work_id":"97d95295-30e1-42b4-bbf6-85f0fa4edb44","shared_citers":5}],"time_series":[{"n":4,"year":2024},{"n":2,"year":2025},{"n":57,"year":2026}],"dependency_candidates":[]},"authors":[]}}