const axios = require('axios');

const api_key = "YOUR API-KEY";
const url = "https://api.segmind.com/v1/sync.so-lipsync-2-pro";

// Request payload: source video, dialogue audio, and sync options
const data = {
  "video_url": "https://segmind-resources.s3.amazonaws.com/output/a741b039-226c-43c2-9bd0-c301f058d314-UntitledVideo-ezgif.com-crop-video.mp4",
  "audio_url": "https://segmind-resources.s3.amazonaws.com/output/80e96316-7e75-4733-b80c-049a0a6787cb-c9f17960-96b5-4119-8b7e-4ae0c9f21e2f-audio-AudioTrimmer.com-AudioTrimmer.com.mp3",
  "sync_mode": "loop",
  "temperature": 0.5,
  "auto_active_speaker_detection": true,
  "occlusion_detection_enabled": false
};

(async function () {
  try {
    // Authenticate with the x-api-key request header
    const response = await axios.post(url, data, { headers: { 'x-api-key': api_key } });
    console.log(response.data);
  } catch (error) {
    // error.response is undefined for network-level failures, so fall back to error.message
    console.error('Error:', error.response ? error.response.data : error.message);
  }
})();
video_url: The video URL for synchronization. Use high-quality links for best results.
audio_url: The audio URL for synchronization. Use clear audio files for precision.
sync_mode: Manages video-audio length mismatch. Allowed values include 'loop' and 'cut_off'; use 'loop' for repetitive audio and 'cut_off' for trimming.
temperature: Controls expression in the lip sync (min: 0, max: 1). Use 0.3 for calm, 0.8 for dynamic expressions.
auto_active_speaker_detection: Detects and syncs the active speaker automatically. Enable for multi-speaker scenarios.
occlusion_detection_enabled: Detects occlusion, which slows generation. Disable for faster processing.
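As a rough sketch of how these parameters combine, the payload below swaps in 'cut_off' mode and a higher temperature. The URLs are placeholders, and only the parameters documented above are used.

// Hypothetical alternative payload: trim the video to the audio length
// and allow more expressive mouth movement (placeholder URLs)
const expressiveRequest = {
  "video_url": "https://example.com/input-video.mp4",
  "audio_url": "https://example.com/dialogue.mp3",
  "sync_mode": "cut_off",                  // trim instead of looping
  "temperature": 0.8,                      // more dynamic expressions
  "auto_active_speaker_detection": false,  // single on-screen speaker
  "occlusion_detection_enabled": true      // accept slower generation when faces are partly hidden
};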
To keep track of your credit usage, you can inspect the response headers of each API call. The x-remaining-credits header indicates the number of credits remaining in your account. Monitor this value to avoid disruptions in your API usage.
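As a minimal sketch, the snippet below reads that header from the same axios call shown earlier; it reuses url, data, and api_key from the example above, and relies on axios exposing response header names in lowercase.

// Reuses axios, url, data, and api_key from the example above
axios.post(url, data, { headers: { 'x-api-key': api_key } })
  .then((response) => {
    // axios lowercases response header names
    console.log('Remaining credits:', response.headers['x-remaining-credits']);
  })
  .catch((error) => {
    console.error('Error:', error.response ? error.response.data : error.message);
  });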
Lipsync-2-Pro is an advanced AI model developed by Sync Labs that creates hyper-realistic lip synchronization videos. The model works across different video formats to edit dialogue within a video while preserving facial expressions and even minute details with painstaking accuracy. Powered by diffusion-based super-resolution, it renders 4K video outputs quickly, with naturalistic results that require no speaker training in different languages and no further refinement. Lipsync-2-Pro is a boon for film studios, content creators, and digital artists thanks to its ability to produce professional-level, perfectly synced videos.
For optimal results, use clean audio and high-quality video inputs, start with the default temperature of 0.5, and disable occlusion detection when you need faster processing.
Q: How does Lipsync-2-Pro handle different languages? A: The AI model automatically adapts to any language to create the speaker's natural mouth movements without needing language-specific training.
Q: What video formats are supported? A: Lipsync-2-Pro works with multiple formats, which include live-action footage, 3D animations, and AI-generated videos up to 4K resolution.
Q: Do I need to train the model for different speakers? A: No. A key advantage of the model is that it works instantly, with no speaker-specific training or fine-tuning.
Q: How can I optimize processing speed? A: Disable occlusion detection for faster processing and provide clean audio input for best results.
Q: What's the recommended temperature setting? A: It is recommended to start with the default 0.5 setting; adjust lower (0.3) for subtle movements or higher (0.8) for more expressive results based on your content needs.
Q: Can it handle multiple speakers in one scene? A: Yes, enable auto_active_speaker_detection for flawless synchronization in multi-speaker videos.
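Pulling those recommendations together, here is a small hypothetical helper. buildLipsyncRequest is not part of the Segmind API; it simply assembles a request body from the parameters documented above, using the FAQ guidance as defaults.

// Hypothetical helper: builds a request body with the FAQ guidance as defaults
function buildLipsyncRequest(videoUrl, audioUrl, { expressive = false, multiSpeaker = false } = {}) {
  return {
    video_url: videoUrl,
    audio_url: audioUrl,
    sync_mode: "loop",
    // Start from the default 0.5; raise to 0.8 for more expressive delivery
    temperature: expressive ? 0.8 : 0.5,
    // Enable only when more than one speaker appears on screen
    auto_active_speaker_detection: multiSpeaker,
    // Leave disabled for faster processing
    occlusion_detection_enabled: false
  };
}

// Example: a multi-speaker scene with expressive delivery (placeholder URLs)
const requestBody = buildLipsyncRequest(
  "https://example.com/scene.mp4",
  "https://example.com/dialogue.mp3",
  { expressive: true, multiSpeaker: true }
);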