POST
javascript
const axios = require('axios');
const fs = require('fs');
const path = require('path');

// Read a local image file and return its contents as a base64 string.
async function toB64(imgPath) {
  const data = fs.readFileSync(path.resolve(imgPath));
  return Buffer.from(data).toString('base64');
}

const api_key = "YOUR API-KEY";
const url = "https://api.segmind.com/v1/sam-3d-objects";

const data = {
  "image": "https://segmind-resources.s3.amazonaws.com/input/574b0eaa-a474-47fd-8e48-9eb107f76d8e-car.png",
  "prompt": "car",
  "mask": null, // optional: a mask image to target specific regions
  "seed": 123,
  "include_artifacts": false
};

(async function() {
  try {
    const response = await axios.post(url, data, { headers: { 'x-api-key': api_key } });
    console.log(response.data);
  } catch (error) {
    // error.response is undefined for network-level failures, so guard the access
    console.error('Error:', error.response ? error.response.data : error.message);
  }
})();
RESPONSE
image/jpeg
HTTP Response Codes
200 - OK : Image Generated
401 - Unauthorized : User authentication failed
404 - Not Found : The requested URL does not exist
405 - Method Not Allowed : The requested HTTP method is not allowed
406 - Not Acceptable : Not enough credits
500 - Server Error : Server had some issue with processing

Attributes


image (str) *

Provide an image as a URL, file path, or base64 string to create a 3D object. Use a vehicle image URL for car modeling.
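
Since the endpoint also accepts base64 strings, a local file can be wrapped with the toB64 helper from the request sample above. This is a minimal sketch; './my-car.png' is a hypothetical path:

javascript
// Sketch: passing a local file as a base64 string instead of a URL.
// toB64 is the helper defined in the request sample above.
async function buildLocalPayload() {
  return {
    "image": await toB64('./my-car.png'), // hypothetical local path
    "prompt": "car",
    "seed": 123,
    "include_artifacts": false
  };
}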


prompt (str)

Guide object selection with a text prompt. Use 'bicycle' or 'tree' for specific object creation.


mask (image)

Use a mask image to specify 3D reconstruction regions. Optional for complex scenes, useful for precise object targeting.


point_coords (str)

Define object location with point coordinates. For example: [[x1, y1]] or [[x1, y1], [x2, y2]]


bounding_box (str)

Specify object region with a bounding box. Example: [50, 75, 350, 400]


seed (int, default: 123, min: 0, max: 1000000)

Set a random seed for repeatability. Reuse the same seed to reproduce a result across runs; change it to explore variations.


include_artifacts (bool)

Include processing artifacts in a ZIP for debugging. Enable for detailed analysis or disable for faster outputs.

To keep track of your credit usage, you can inspect the response headers of each API call. The x-remaining-credits property will indicate the number of remaining credits in your account. Ensure you monitor this value to avoid any disruptions in your API usage.
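
As a minimal sketch of that check (reusing url, data, and api_key from the request sample above):

javascript
// Sketch: reading the x-remaining-credits header after a call.
// axios lowercases header names, so the key is accessed as written.
(async () => {
  const response = await axios.post(url, data, { headers: { 'x-api-key': api_key } });
  console.log('Remaining credits:', response.headers['x-remaining-credits']);
})();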

SAM 3D Objects: Image-to-3D Reconstruction Model

Edited by Segmind Team on December 11, 2025.


What is SAM 3D Objects?

SAM 3D Objects, developed by Facebook Research, is a foundation model that transforms a single 2D image into a complete 3D reconstruction with detailed shape, texture, and spatial layout. It handles real-world scenarios that challenge conventional 3D generation models, interpreting occluded objects, cluttered scenes, and complex spatial orientations. By combining progressive training with human-in-the-loop refinement, it can infer depth and detail even from partially visible image regions. As part of Meta's SAM 3D suite, it derives a full 3D understanding directly from flat images, eliminating the need for multi-view inputs or depth sensors.

Key Features of SAM 3D Objects

  • Single-Image 3D Reconstruction: It generates complete 3D models from one flat image, with no multi-view capture required.
  • Occlusion and Clutter Handling: It accurately reconstructs objects even when they are partially hidden or embedded in cluttered scenes.
  • Multi-Modal Input Options: It supports text prompts, point coordinates, bounding boxes, and custom masks for precise object targeting.
  • Human-Evaluated Quality: The model outperforms previous 3D generation models in 'blind human evaluations' across diverse object categories.
  • Texture and Layout Preservation: It effectively maintains realistic surface details and spatial relationships in 3D space.
  • Reproducible Outputs: It includes seed control for consistent results across experiments or iterations.

Best Use Cases

  • E-commerce and Product Visualization: Transform catalog photos into interactive 3D models for AR try-on or immersive shopping experiences. SAM 3D Objects is useful for online stores selling furniture, vehicles, and consumer electronics.

  • Game Development and Virtual Environments: Rapidly convert concept art or reference images into 3D assets for prototyping levels, props, or environmental objects without manual modeling.

  • Architecture and Urban Planning: Generate 3D representations of buildings, streetscapes, or interior spaces from photographic surveys for visualization and analysis.

  • Cultural Heritage Preservation: Digitize artifacts, sculptures, and architectural elements from archival photographs when physical scanning isn't possible.

Prompt Tips and Output Quality

  • Effective Text Prompts: SAM 3D Objects works best with clear, single-object identifiers that help it isolate the target subject from background elements. Use specific object names like "bicycle," "tree," or "car" rather than descriptive phrases.

  • Precision Targeting: For complex scenes with multiple objects, combine methods: start with a text prompt, then refine with point coordinates [[x, y]] to indicate the object's center, or define a bounding box [x1, y1, x2, y2] to explicitly frame your target area (see the sketch after this list).

  • Mask for Maximum Control: When automatic selection struggles with occlusions or overlapping objects, provide a custom mask image highlighting exactly which pixels to reconstruct in 3D.

  • Seed Management: Use consistent seed values (like 42) when iterating on the same image to evaluate how different prompts or coordinates affect results; randomize seeds for exploring variations.

  • Input Quality Matters: High-resolution images with clear lighting and minimal motion blur produce sharper, more detailed 3D geometry and textures. Avoid heavily compressed JPEGs when possible.
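
As a rough sketch of that layered targeting, using the request fields from the attributes above. The coordinate and box values here are hypothetical, and how many hints you combine in one call is up to your workflow:

javascript
// Sketch: combining a prompt, point coordinates, and a bounding box
// to pin down one object in a cluttered scene. point_coords and
// bounding_box are JSON-formatted strings per the attribute docs above.
const targetedData = {
  "image": "https://segmind-resources.s3.amazonaws.com/input/574b0eaa-a474-47fd-8e48-9eb107f76d8e-car.png",
  "prompt": "car",                       // start with a plain object name
  "point_coords": "[[210, 160]]",        // hypothetical point near the object's center
  "bounding_box": "[50, 75, 350, 400]",  // or frame the target region explicitly
  "seed": 42,                            // fixed seed for comparable iterations
  "include_artifacts": false
};
// Pass targetedData as the request body in place of data in the sample above.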

FAQs

Is SAM 3D Objects open-source?
SAM 3D Objects is developed and released by Facebook Research as a foundation model; check Meta's official repositories for the current licensing terms and model weights.

What file formats does the model output?
SAM 3D Objects generates standard 3D formats compatible with major rendering and game engines. You can enable "Include Artifacts" for additional processing files useful for debugging or custom pipelines.
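
Since the sample response above is served as image/jpeg, one way to persist the binary output is sketched below, reusing axios, fs, url, data, and api_key from the request sample; 'output.jpg' is a hypothetical filename:

javascript
// Sketch: saving the binary response to disk instead of logging it.
(async () => {
  const response = await axios.post(url, data, {
    headers: { 'x-api-key': api_key },
    responseType: 'arraybuffer' // receive raw bytes rather than a parsed body
  });
  fs.writeFileSync('output.jpg', Buffer.from(response.data));
})();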

How does this compare to NeRF or Gaussian Splatting models?
Unlike NeRF, which requires multiple viewpoints, SAM 3D Objects works from a single image. Also, it is optimized for diverse real-world objects rather than scene-specific reconstruction, making it versatile for general-purpose 3D generation.

Can I reconstruct multiple objects from a single image?
Yes, but it is best to process them separately: use distinct prompts, coordinates, or bounding boxes for each object in successive API calls rather than attempting batch reconstruction.
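
A minimal sketch of that per-object loop, reusing url, data, and api_key from the request sample above; both bounding boxes and the second object name are hypothetical:

javascript
// Sketch: reconstructing two objects with one API call per target.
const targets = [
  { prompt: "car",  bounding_box: "[50, 75, 350, 400]" },
  { prompt: "tree", bounding_box: "[400, 60, 620, 380]" }
];
(async () => {
  for (const target of targets) {
    // Override the shared request body with this target's prompt and box.
    const response = await axios.post(url, { ...data, ...target },
      { headers: { 'x-api-key': api_key } });
    console.log(`Reconstructed "${target.prompt}": HTTP ${response.status}`);
  }
})();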

What parameters should I tweak for the best results?
Begin with the image and your text prompt. If the generated output doesn't include the intended object, provide point coordinates near its center. For images with multiple objects or cluttered scenes, specify a bounding box. Use custom masks only when automatic segmentation repeatedly fails.

Does it work with artistic or stylized images?
SAM 3D Objects is trained primarily on photographic data, so photorealistic renders and high-quality photographs yield the most accurate 3D reconstructions. Performance degrades with heavily stylized, abstract, or illustrated inputs.