POST
javascript
const axios = require('axios');

const api_key = "YOUR API-KEY";
const url = "https://api.segmind.com/v1/deepseek-chat";

// A multi-turn conversation: prior user/assistant turns give the model context
const data = {
  "messages": [
    { "role": "user", "content": "tell me a joke on cats" },
    { "role": "assistant", "content": "here is a joke about cats..." },
    { "role": "user", "content": "now a joke on dogs" }
  ]
};

(async function () {
  try {
    const response = await axios.post(url, data, {
      headers: { 'x-api-key': api_key }
    });
    console.log(response.data);
  } catch (error) {
    // error.response is undefined for network-level failures, so guard before reading it
    console.error('Error:', error.response ? error.response.data : error.message);
  }
})();
RESPONSE
application/json
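A representative response shape, assuming the endpoint returns an OpenAI-style chat completion payload (all field values below are illustrative, not captured from a real call):

{
  "id": "chatcmpl-...",
  "object": "chat.completion",
  "model": "deepseek-chat",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Here is a joke about dogs..."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 25,
    "completion_tokens": 12,
    "total_tokens": 37
  }
}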
HTTP Response Codes
200 - OK : Response generated successfully
401 - Unauthorized : User authentication failed
404 - Not Found : The requested URL does not exist
405 - Method Not Allowed : The requested HTTP method is not allowed
406 - Not Acceptable : Not enough credits
500 - Server Error : Server had some issue with processing
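
When calling the endpoint from code, these statuses surface on the axios error object, so you can map them to the descriptions above. A minimal sketch (the postChat helper is ours for illustration, not part of the API):

javascript
const axios = require('axios');

// Descriptions mirroring the status table above
const STATUS_MESSAGES = {
  401: 'User authentication failed',
  404: 'The requested URL does not exist',
  405: 'The requested HTTP method is not allowed',
  406: 'Not enough credits',
  500: 'Server had some issue with processing'
};

// POSTs to the endpoint and turns HTTP errors into readable ones
async function postChat(url, data, apiKey) {
  try {
    const response = await axios.post(url, data, { headers: { 'x-api-key': apiKey } });
    return response.data; // 200 - OK
  } catch (error) {
    const status = error.response ? error.response.status : null;
    const detail = STATUS_MESSAGES[status] || error.message;
    throw new Error(`Request failed (${status ?? 'network error'}): ${detail}`);
  }
}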

Attributes


messages (array)

An array of objects containing the role and content.


role (str)

Can be "user", "assistant", or "system".


content (str)

A string containing the user's query or the assistant's response.
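
For example, a request body exercising all three roles (the system message here is illustrative):

{
  "messages": [
    { "role": "system", "content": "You are a helpful assistant." },
    { "role": "user", "content": "tell me a joke on cats" },
    { "role": "assistant", "content": "here is a joke about cats..." },
    { "role": "user", "content": "now a joke on dogs" }
  ]
}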

To keep track of your credit usage, you can inspect the response headers of each API call. The x-remaining-credits header indicates the number of credits remaining in your account; monitor this value to avoid any disruptions in your API usage.
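
With the axios client from the example above, the header can be read straight off the response. A minimal sketch (the throwaway "ping" prompt is just for illustration):

javascript
const axios = require('axios');

async function checkRemainingCredits(apiKey) {
  const response = await axios.post(
    'https://api.segmind.com/v1/deepseek-chat',
    { messages: [{ role: 'user', content: 'ping' }] },
    { headers: { 'x-api-key': apiKey } }
  );
  // axios normalizes response header names to lower case
  console.log('Remaining credits:', response.headers['x-remaining-credits']);
  return response.data;
}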

DeepSeek Chat

DeepSeek V3 represents a major advancement in open-source AI models. It is an open-source 671B-parameter Mixture-of-Experts (MoE) model with 37B parameters activated per token, featuring innovative load balancing and multi-token prediction. Trained on 14.8T tokens, it achieves state-of-the-art performance across benchmarks, incorporates reasoning capabilities distilled from DeepSeek-R1, and supports a 128K context window.

Key Features of DeepSeek Chat

  • Speed Improvement: DeepSeek V3 processes 60 tokens per second, representing a 3x speed increase over its predecessor

  • Enhanced Capabilities: The model demonstrates improved overall performance across various tasks

  • Architecture: Built on a 671B-parameter Mixture-of-Experts (MoE) architecture, with 37B parameters activated per token

  • Training Scale: Trained on 14.8 trillion high-quality tokens

  • API Compatibility: Maintains compatibility with previous versions for seamless transition

  • Open Source: Both the model and associated research papers are freely available to the community