const axios = require('axios');
const FormData = require('form-data');

const api_key = "YOUR API-KEY";
const url = "https://api.segmind.com/v1/hunyuan3d-2.1";

const reqBody = {
  "seed": 42,
  "image": "https://segmind-resources.s3.amazonaws.com/output/f9e83c41-45aa-4ac0-b126-50e17b2ff935-kangaroo.jpg",
  "steps": 30,
  "num_chunks": 8000,
  "max_facenum": 20000,
  "guidance_scale": 7.5,
  "generate_texture": true,
  "octree_resolution": 256,
  "remove_background": true
};

(async function() {
  try {
    const formData = new FormData();
    // Append each request field; values are stringified because multipart form
    // fields are sent as text (raw booleans would make form-data fail)
    for (const key in reqBody) {
      if (reqBody.hasOwnProperty(key)) {
        formData.append(key, String(reqBody[key]));
      }
    }
    // The image is passed as a URL here; a local file could be appended as a stream or Base64 string instead
    const response = await axios.post(url, formData, {
      headers: {
        'x-api-key': api_key,
        ...formData.getHeaders()
      }
    });
    console.log(response.data);
  } catch (error) {
    console.error('Error:', error.response ? error.response.data : error.message);
  }
})();
seed: Sets the random seed.
image: URL of the input image for 3D conversion. Use clear images for best results.
steps: Defines inference steps. Lower for speed, higher for quality. (min: 5, max: 50)
num_chunks: Sets mesh generation chunk count. 5000 for speed, 8000 for detail. (min: 1000, max: 200000)
max_facenum: Controls max mesh faces. 15000 for smaller models, 20000 for larger. (min: 10000, max: 200000)
guidance_scale: Adjusts guidance scale. 7.5 for balanced generation, 10 for creative. (min: 1, max: 20)
generate_texture: Enables texture generation. Set true for detailed textures.
octree_resolution: Choose octree resolution. 256 for balance, 512 for detailed meshes. Allowed values:
remove_background: Option to remove the image background. Enable for isolated objects.
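If you build request bodies programmatically, the documented ranges above can be enforced before sending. The helper below is a hypothetical sketch (RANGES and clampParams are not part of the API or any SDK); it simply clamps numeric fields to the min/max limits listed above.

// Hypothetical helper: clamp numeric parameters to the documented ranges
const RANGES = {
  steps: [5, 50],
  num_chunks: [1000, 200000],
  max_facenum: [10000, 200000],
  guidance_scale: [1, 20]
};

function clampParams(body) {
  const clamped = { ...body };
  for (const [key, [min, max]] of Object.entries(RANGES)) {
    if (typeof clamped[key] === 'number') {
      clamped[key] = Math.min(Math.max(clamped[key], min), max);
    }
  }
  return clamped;
}

// Example: an out-of-range steps value is pulled back to the maximum of 50
console.log(clampParams({ steps: 120, num_chunks: 8000 })); // { steps: 50, num_chunks: 8000 }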
To keep track of your credit usage, you can inspect the response headers of each API call. The x-remaining-credits property will indicate the number of remaining credits in your account. Ensure you monitor this value to avoid any disruptions in your API usage.
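For example, a small helper like the following (a sketch; logRemainingCredits is a hypothetical name, not part of any SDK) can be called with the axios response from the sample above:

// Hypothetical helper: log the remaining credit balance from an axios response
function logRemainingCredits(response) {
  // axios exposes response header names in lower case
  const remainingCredits = response.headers['x-remaining-credits'];
  if (remainingCredits !== undefined) {
    console.log(`Remaining credits: ${remainingCredits}`);
  }
}
// Usage inside the try block of the sample above: logRemainingCredits(response);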
Hunyuan3D 2.1 is an open-source generative AI model for converting 2D images into high-fidelity 3D assets. It combines two specialized modules: Hunyuan3D-Shape for precise image-to-mesh reconstruction and Hunyuan3D-Paint for physically based rendering (PBR) texture synthesis. Designed to run on macOS, Windows, and Linux, Hunyuan3D 2.1 supports full fine-tuning and customization. Developers and 3D artists benefit from photorealistic outputs with accurate reflections, subsurface scattering, and metalness effects, outpacing many closed-source and research models.
Tune num_chunks (1,000–200,000) and max_facenum (10,000–200,000) for mesh complexity; guidance_scale (1–20) balances adherence to the input against creative variation. To maximize output quality and consistency:
- Use seed=42 for reproducible results, or a random seed for exploration.
- Keep steps=30 (range 5–50) for a balance of speed and fidelity.
- Raise num_chunks to 8,000+ and max_facenum to 20,000+ for detailed surfaces.
- Set generate_texture=true and choose octree_resolution=512 for higher-resolution textures.
- Enable remove_background=true to isolate objects cleanly.
- guidance_scale=7.5 preserves input likeness; increase it toward 10 for creative variations.
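As a quick reference, these recommendations map onto request bodies along the following lines (an illustrative sketch; the values are examples within the documented ranges, and draftBody/qualityBody are not API names):

// Draft preset: favors speed (fewer steps, chunks, and faces; no texture pass)
const draftBody = {
  seed: 42,
  image: "https://segmind-resources.s3.amazonaws.com/output/f9e83c41-45aa-4ac0-b126-50e17b2ff935-kangaroo.jpg",
  steps: 15,
  num_chunks: 5000,
  max_facenum: 15000,
  guidance_scale: 7.5,
  generate_texture: false,
  octree_resolution: 256,
  remove_background: true
};

// Quality preset: favors detail (more steps, denser mesh, textured at higher octree resolution)
const qualityBody = {
  ...draftBody,
  steps: 50,
  num_chunks: 8000,
  max_facenum: 20000,
  generate_texture: true,
  octree_resolution: 512
};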
Q: How do I convert a product image to a 3D model?
A: Pass your image URL to the image parameter and run Hunyuan3D-Shape. Adjust steps and max_facenum to refine mesh fidelity.
Q: Can I fine-tune Hunyuan3D 2.1 on my own dataset?
A: Yes. The repository includes training scripts and weight checkpoints. You can fine-tune on domain-specific images.
Q: What file formats are supported for export?
A: Generated meshes export as OBJ or glTF, with PBR texture maps as PNG or JPEG.
Q: Is Hunyuan3D 2.1 suitable for real-time applications?
A: For real-time use, reduce the steps and num_chunks settings, then optimize the output in your engine's LOD pipeline.
Q: Where can I get the code and weights?
A: Visit the official GitHub repository under an open-source license for full access and community contributions.