GPT-Image-1 Edit is OpenAI’s powerful inpainting and multi-reference image editing model that brings Photoshop-like control into the realm of natural language. Built for developers, creators, and AI toolmakers, it enables detailed visual edits, image composition, and object-level replacements—all guided by intuitive prompts. Whether you're modifying product shots, refining creative assets, or combining multiple visual inputs into a single composition, GPT-Image-1 Edit makes it seamless.
Unlike the standard image generation model, GPT-Image-1 Edit focuses on transforming existing images. It supports four core editing capabilities:
Inpainting with Masks: Upload an image and a corresponding mask (with an alpha channel). The model uses the prompt to fill the transparent areas of the mask with entirely new content—ideal for object replacement, background changes, or localized edits.
Multi-Image Composition: Upload multiple reference images and describe how they should be combined. GPT-Image-1 Edit intelligently merges them into a single, coherent image. For example, uploading a soap bar, bath bomb, and lotion can generate a photorealistic gift basket.
Style-Guided Transformation: Use one or more images to define the visual style, and let the model adapt another image accordingly.
Partial Overwrites: Provide an image and ask the model to modify specific elements—like changing the sky in a landscape or replacing a product label.
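For the inpainting flow above, the mask must be an image with an alpha channel, where fully transparent pixels mark the region the model should regenerate. As a minimal illustration of what such a mask looks like at the byte level, here is a stdlib-only sketch that writes an RGBA PNG which is opaque white everywhere except a transparent rectangular "hole" (the dimensions and rectangle are illustrative; in practice you would export a mask from an image editor or a library like Pillow):

```python
import struct
import zlib


def _chunk(tag, data):
    """Assemble one PNG chunk: length, tag, data, CRC-32."""
    return (struct.pack(">I", len(data)) + tag + data
            + struct.pack(">I", zlib.crc32(tag + data) & 0xFFFFFFFF))


def make_mask(width, height, hole):
    """Return the bytes of an RGBA PNG: opaque white everywhere except
    the `hole` rectangle (x0, y0, x1, y1), which is fully transparent."""
    x0, y0, x1, y1 = hole
    rows = bytearray()
    for y in range(height):
        rows.append(0)  # filter type 0 (None) for this scanline
        for x in range(width):
            alpha = 0 if (x0 <= x < x1 and y0 <= y < y1) else 255
            rows += bytes((255, 255, 255, alpha))  # one RGBA pixel
    # IHDR: width, height, 8-bit depth, color type 6 (RGBA)
    ihdr = struct.pack(">IIBBBBB", width, height, 8, 6, 0, 0, 0)
    return (b"\x89PNG\r\n\x1a\n"
            + _chunk(b"IHDR", ihdr)
            + _chunk(b"IDAT", zlib.compress(bytes(rows)))
            + _chunk(b"IEND", b""))


# Write a 512x512 mask with a transparent region to replace.
with open("mask.png", "wb") as f:
    f.write(make_mask(512, 512, (100, 100, 400, 400)))
```

The mask's pixel dimensions should match the input image, and only the alpha channel matters to the edit: the RGB values under the transparent region are ignored.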
The API uses standard multipart/form-data for image uploads and supports images up to 25MB, with alpha channels for masks. Like the generation model, output can be customized in terms of format (PNG, JPEG, WebP), background transparency, and compression level.
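To make the multipart/form-data shape concrete, here is a stdlib-only sketch that assembles an edit request body by hand. The endpoint URL and field names (`model`, `prompt`, `image`, `mask`) follow OpenAI's images edit API, but treat the exact parameter set as an assumption and check the current API reference; in production you would normally let an SDK or HTTP library build this for you:

```python
import io
import uuid

# Assumed endpoint path; verify against the current OpenAI API reference.
EDITS_URL = "https://api.openai.com/v1/images/edits"


def build_edit_request(prompt, image_bytes, mask_bytes=None, url=EDITS_URL):
    """Assemble a multipart/form-data body for an image edit request.
    Returns (url, headers, body); the mask part is optional."""
    boundary = uuid.uuid4().hex
    buf = io.BytesIO()

    def field(name, value):
        # A plain text form field, e.g. model or prompt.
        buf.write(f"--{boundary}\r\nContent-Disposition: form-data; "
                  f'name="{name}"\r\n\r\n{value}\r\n'.encode())

    def file_field(name, filename, data):
        # A binary file part carrying the PNG bytes.
        buf.write(f"--{boundary}\r\nContent-Disposition: form-data; "
                  f'name="{name}"; filename="{filename}"\r\n'
                  f"Content-Type: image/png\r\n\r\n".encode())
        buf.write(data)
        buf.write(b"\r\n")

    field("model", "gpt-image-1")
    field("prompt", prompt)
    file_field("image", "image.png", image_bytes)
    if mask_bytes is not None:
        file_field("mask", "mask.png", mask_bytes)
    buf.write(f"--{boundary}--\r\n".encode())  # closing boundary

    headers = {"Content-Type": f"multipart/form-data; boundary={boundary}"}
    return url, headers, buf.getvalue()
```

To send it, pass the body and headers to `urllib.request.Request` (or any HTTP client) along with an `Authorization: Bearer <API key>` header.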
GPT-Image-1 Edit blurs the line between generative and traditional design tools. It empowers creators to edit with words, reduce manual retouching time, and generate asset variants rapidly. With its fine control over visual composition and the ability to combine references, it becomes a core building block for next-gen design tools, AI-assisted workflows, and content platforms looking to scale visual operations.