const axios = require('axios');
const FormData = require('form-data');

const api_key = "YOUR API-KEY";
const url = "https://api.segmind.com/v1/flux-kontext-dev";

const reqBody = {
  "seed": 42,
  "prompt": "Replace the background with a bokeh light effect, zooming in on the subject, keeping the person’s pose, position, scale, and camera angle identical. Only change the surrounding environment.",
  "guidance": 7,
  "input_image": "https://images.segmind.com/generations/901b78cd-381a-4284-8391-306d4c7409e1/6852c6602b5db2d623cc373cfa699b55.webp",
  "aspect_ratio": "match_input_image",
  "output_format": "png",
  "output_quality": 90,
  "num_inference_steps": 35,
  "disable_safety_checker": false
};

(async function () {
  try {
    const formData = new FormData();
    // Append each request field as a string (multipart fields are text).
    // input_image is passed as a URL here; convert and append images as
    // Base64 if necessary.
    for (const [key, value] of Object.entries(reqBody)) {
      formData.append(key, String(value));
    }
    const response = await axios.post(url, formData, {
      headers: {
        'x-api-key': api_key,
        ...formData.getHeaders()
      }
    });
    console.log(response.data);
  } catch (error) {
    console.error('Error:', error.response ? error.response.data : error.message);
  }
})();
seed: Set a random seed for reproducibility. Use a fixed value for consistent results; leave blank for randomness.
prompt: Describe the generation task or image edits. Use clear, specific instructions for best results.
guidance: Determines adherence to the prompt. Use higher values for more precision.
min: 0, max: 10
input_image: Reference image for context. Use a jpeg, png, gif, or webp image for varied image integrations.
aspect_ratio: Select the aspect ratio. Allowed values include match_input_image; match the input for consistency, or use a fixed ratio for specific dimensions.
output_format: Specify the output format. Allowed values: png, jpg, webp. Use webp for compressibility; jpg or png for compatibility.
output_quality: Determines output image quality; applies to jpg outputs. Default is 90 for high fidelity.
min: 0, max: 100
num_inference_steps: Specify the number of processing steps. Use more steps for complex images and fewer for faster results.
min: 4, max: 50
disable_safety_checker: Turns off the NSFW filter. Use with caution if you need unrestricted content.
To keep track of your credit usage, inspect the response headers of each API call. The x-remaining-credits header indicates the number of credits remaining in your account. Monitor this value to avoid disruptions in your API usage.
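A small helper makes the header check explicit. This sketch assumes the axios response headers object with lower-cased header names:

```javascript
// Read the remaining-credit count from a Segmind API response.
// `headers` is the plain object axios exposes as response.headers;
// Node's HTTP parser lower-cases header names.
function remainingCredits(headers) {
  const raw = headers['x-remaining-credits'];
  return raw === undefined ? null : Number(raw);
}

// Usage (inside the try block of the sample above):
// console.log('Credits left:', remainingCredits(response.headers));
```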
FLUX.1 Kontext is an advanced generative AI model that unifies image generation and in-context editing in a single framework. Leveraging state-of-the-art generative flow matching, it combines semantic cues from both text prompts and reference images to create new views and apply precise edits. Unlike traditional pipelines, FLUX.1 Kontext maintains object and character consistency across multiple editing steps, making it ideal for iterative design loops and rapid prototyping.
Q: How does FLUX.1 Kontext ensure consistency across edits?
A: By leveraging generative flow matching, the model retains semantic embeddings for characters and objects, preserving their appearance step after step.
Q: What image formats are supported?
A: FLUX.1 Kontext accepts JPEG, PNG, GIF, and WebP as input_image sources.
Q: Can I control the level of creativity versus accuracy?
A: Yes. Adjust the guidance parameter (0–10) to balance prompt fidelity and creative freedom.
Q: What’s the recommended number of inference steps?
A: We suggest 25–35 steps for most workflows. Fewer steps speed up generation; more steps enhance detail.
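The guidance and num_inference_steps trade-offs above are easiest to explore with a small sweep. This hypothetical helper (not part of the Segmind API) just builds candidate request bodies, clamping values to the documented ranges:

```javascript
// Hypothetical helper: build request bodies for a guidance/steps sweep,
// clamping to the documented ranges (guidance 0-10, steps 4-50).
function sweepBodies(base, guidanceValues, stepValues) {
  const clamp = (v, lo, hi) => Math.min(hi, Math.max(lo, v));
  const bodies = [];
  for (const g of guidanceValues) {
    for (const s of stepValues) {
      bodies.push({
        ...base,
        guidance: clamp(g, 0, 10),
        num_inference_steps: clamp(s, 4, 50),
      });
    }
  }
  return bodies;
}

// Example: 2 guidance values x 2 step counts -> 4 request bodies,
// each of which could be POSTed with the sample code above.
// const bodies = sweepBodies(reqBody, [3.5, 7], [25, 35]);
```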
Q: Is FLUX.1 Kontext suitable for style transfer?
A: Absolutely. It delivers leading results on both global and local style transfer tasks within KontextBench.
Q: How fast is the model?
A: Optimized for interactive use, FLUX.1 Kontext delivers results significantly faster than comparable state-of-the-art systems, supporting rapid prototyping and real-time editing.
Integrated via Replicate. Commercial use is allowed.