POST
```javascript
const axios = require('axios');
const fs = require('fs');
const path = require('path');

// Helper function to convert a local image into base64 format
async function toB64(imgPath) {
  const data = fs.readFileSync(path.resolve(imgPath));
  return Buffer.from(data).toString('base64');
}

const api_key = "YOUR API-KEY";
const url = "https://api.segmind.com/v1/o3";

const data = {
  "messages": [
    { "role": "user", "content": "tell me a joke on cats" },
    { "role": "assistant", "content": "here is a joke about cats..." },
    { "role": "user", "content": "now a joke on dogs" }
  ]
};

(async function() {
  try {
    const response = await axios.post(url, data, { headers: { 'x-api-key': api_key } });
    console.log(response.data);
  } catch (error) {
    // error.response is undefined for network-level failures, so fall back to error.message
    console.error('Error:', error.response ? error.response.data : error.message);
  }
})();
```
RESPONSE
application/json
HTTP Response Codes
200 - OK: Request completed successfully
401 - Unauthorized: User authentication failed
404 - Not Found: The requested URL does not exist
405 - Method Not Allowed: The requested HTTP method is not allowed
406 - Not Acceptable: Not enough credits
500 - Server Error: The server encountered an issue while processing the request

Attributes


messages (Array)

An array of objects, each containing a role and content.


role (str)

One of "user", "assistant", or "system".


content (str)

A string containing the user's query or the assistant's response.

To keep track of your credit usage, inspect the response headers of each API call. The x-remaining-credits header indicates the number of credits remaining in your account; monitor this value to avoid disruptions in your API usage.
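As a sketch, a small helper can pull this header out of an axios-style response. The header name x-remaining-credits comes from the note above; the helper name and numeric parsing are illustrative:

```javascript
// Extract the remaining-credit count from an axios-style response object.
// Node's HTTP client lowercases header names, so we read the lowercase key;
// returns null if the header is absent.
function getRemainingCredits(response) {
  const value = response.headers && response.headers['x-remaining-credits'];
  return value === undefined || value === null ? null : Number(value);
}

// Example with a mock response shaped like axios's:
const mockResponse = { headers: { 'x-remaining-credits': '57' } };
console.log(getRemainingCredits(mockResponse)); // 57
```

In a real call you would pass the `response` returned by `axios.post` directly, e.g. `getRemainingCredits(response)` right after logging `response.data`.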

OpenAI o3 | Frontier Reasoning Language Model

What is OpenAI o3?

OpenAI o3 is a frontier-class reasoning model released in April 2025, designed to tackle the hardest problems in coding, mathematics, science, and visual perception. Unlike traditional large language models that generate responses instantly, o3 employs extended chain-of-thought reasoning — spending more compute time thinking through a problem before answering. This deliberate reasoning process makes it one of the most accurate models available for tasks requiring logical rigor and multi-step problem solving.

Available via Segmind's serverless API at /v1/o3, the model accepts both text and image inputs, making it a powerful tool for multimodal reasoning workflows.

Key Features

  • Extended chain-of-thought reasoning with a configurable reasoning_effort parameter (low, medium, high)
  • Multimodal input support — text prompts with optional image attachments (up to 5 files, 10MB each)
  • Tool-use capable — designed to reason over web search results, Python outputs, and visual data
  • 200k token context window with up to 100k output tokens
  • Frontier benchmark performance — 91.6% on AIME 2024, 69.1% on SWE-Bench Verified, 75.7% on ARC-AGI, and 25.2% on EpochAI FrontierMath (no prior model exceeded 2%)
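As a rough sketch of using the reasoning_effort parameter listed above, it would be sent alongside messages in the request body. The field name and accepted values come from the feature list; treat the exact payload shape as an assumption to verify against the live API:

```javascript
// Build a request body for the o3 endpoint with a chosen reasoning effort.
// "reasoning_effort" accepts "low", "medium", or "high" per the feature list;
// the validation here is illustrative.
function buildO3Request(messages, reasoningEffort = 'medium') {
  const allowed = ['low', 'medium', 'high'];
  if (!allowed.includes(reasoningEffort)) {
    throw new Error(`reasoning_effort must be one of: ${allowed.join(', ')}`);
  }
  return { messages, reasoning_effort: reasoningEffort };
}

const body = buildO3Request(
  [{ role: 'user', content: 'Prove that the square root of 2 is irrational.' }],
  'high'
);
console.log(body.reasoning_effort); // high
```

The resulting object can be passed as the `data` argument to `axios.post` exactly as in the request example at the top of this page.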

Best Use Cases

Complex Coding Tasks: o3 achieves 69.1% on SWE-Bench Verified, making it the go-to model for debugging intricate bugs, architecting systems, and writing production-grade code from detailed specifications.

Advanced Mathematics: Scoring 91.6% on AIME 2024 and solving 25.2% of EpochAI FrontierMath problems, o3 handles graduate-level math with precision that far exceeds prior models.

Scientific Research & Analysis: Ideal for synthesizing research papers, writing experiment methodologies, and reasoning over complex scientific datasets.

Visual Reasoning: Submit images alongside text prompts for spatial reasoning, diagram interpretation, chart analysis, or solving visual puzzles — o3 can even interpret blurry or low-quality images.
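The toB64 helper in the request example above produces a base64 string; one plausible way to attach it is an OpenAI-style content array with an image_url part. The data-URL structure below mirrors the OpenAI chat format and is an assumption worth checking against Segmind's schema:

```javascript
// Build a user message that pairs a text prompt with a base64-encoded image.
// The content-array / image_url structure follows the OpenAI chat format
// and is assumed here, not confirmed by this page.
function buildImageMessage(prompt, base64Image, mimeType = 'image/png') {
  return {
    role: 'user',
    content: [
      { type: 'text', text: prompt },
      { type: 'image_url', image_url: { url: `data:${mimeType};base64,${base64Image}` } }
    ]
  };
}

// 'iVBORw0KGgo=' stands in for the output of toB64('chart.png')
const msg = buildImageMessage('What trend does this chart show?', 'iVBORw0KGgo=');
console.log(msg.content[1].image_url.url.startsWith('data:image/png;base64,')); // true
```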

Legal & Financial Analysis: o3's large context window and strong reasoning over ambiguous, disparate information make it well-suited for contract review, financial-modeling logic, and regulatory analysis.

Prompt Tips and Output Quality

Getting the best results from o3 requires clear, structured prompts:

  • Be explicit: State the objective, constraints, and desired output format upfront. o3 is not a conversational model — it rewards specificity.
  • Control compute with reasoning_effort: Set to high for maximum accuracy on hard tasks; low for faster, cheaper responses on simpler queries.
  • Use developer messages to clearly separate system-level instructions from user queries.
  • Keep function/tool lists focused: Overlapping tool descriptions lead to hesitation or hallucinated calls. Fewer, well-defined tools produce the most reliable results.
  • Provide full context: Unlike lightweight models, o3 uses all the context you give it. Include relevant code, documents, or data directly in the prompt.
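A minimal sketch of the "separate system-level instructions from user queries" tip, using the role field documented in the Attributes section above (the instruction text itself is just an example):

```javascript
// Keep system-level instructions in their own message so they are not
// mixed into the user's query.
function withSystemInstructions(systemText, userText) {
  return {
    messages: [
      { role: 'system', content: systemText },
      { role: 'user', content: userText }
    ]
  };
}

const payload = withSystemInstructions(
  'You are a senior code reviewer. Return findings as a numbered list.',
  'Review this function for race conditions: ...'
);
console.log(payload.messages[0].role); // system
```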

FAQs

Q: What is the difference between o3 and o4-mini? o3 is the premium reasoning model optimized for maximum accuracy on complex tasks. o4-mini is faster and approximately 10x cheaper, making it ideal for high-volume or cost-sensitive workloads where top-tier reasoning depth is not essential.

Q: Does o3 support image inputs? Yes. You can attach up to 5 images (up to 10MB each) alongside your text prompt. o3 analyzes the images during its internal reasoning phase before generating a response.

Q: What is the reasoning_effort parameter? It controls how much compute o3 spends reasoning before responding. High effort yields better accuracy on hard tasks; low effort produces faster, cheaper results. This parameter lets you tune the cost-accuracy tradeoff per request.

Q: Is o3 suitable for production coding workflows? Yes — o3 scores 69.1% on SWE-Bench Verified, a significant step up from prior models. For best results, provide full context and specify exactly what you want returned rather than expecting the model to infer missing pieces.

Q: How does o3 handle long documents or large codebases? o3 supports a 200k token context window, enough to process extensive codebases, legal contracts, or research papers in a single request.

Q: Can o3 hallucinate? Like all LLMs, o3 can hallucinate — particularly when given many tools with overlapping or vague descriptions. Clear, concise tool definitions and explicit prompts significantly reduce this risk.