const axios = require('axios');
const fs = require('fs');
const path = require('path');

// Convert a local image file to a base64 string
function toB64(imgPath) {
  const data = fs.readFileSync(path.resolve(imgPath));
  return Buffer.from(data).toString('base64');
}

// Fetch a remote image and convert it to a base64 string
async function urlToB64(imageUrl) {
  const res = await axios.get(imageUrl, { responseType: 'arraybuffer' });
  return Buffer.from(res.data).toString('base64');
}

const api_key = "YOUR API-KEY";
const url = "https://api.segmind.com/v1/live-portrait";

(async function () {
  try {
    const data = {
      // face_image must be base64-encoded image data, so the remote sample
      // image is downloaded and encoded first (use toB64 for a local file)
      "face_image": await urlToB64('https://segmind-sd-models.s3.amazonaws.com/display_images/liveportrait-input.jpg'),
      "driving_video": "https://segmind-sd-models.s3.amazonaws.com/display_images/liveportrait-video.mp4",
      "live_portrait_dsize": 512,
      "live_portrait_scale": 2.3,
      "video_frame_load_cap": 128,
      "live_portrait_lip_zero": true,
      "live_portrait_relative": true,
      "live_portrait_vx_ratio": 0,
      "live_portrait_vy_ratio": -0.12,
      "live_portrait_stitching": true,
      "video_select_every_n_frames": 1,
      "live_portrait_eye_retargeting": false,
      "live_portrait_lip_retargeting": false,
      "live_portrait_lip_retargeting_multiplier": 1,
      "live_portrait_eyes_retargeting_multiplier": 1
    };
    const response = await axios.post(url, data, { headers: { 'x-api-key': api_key } });
    console.log(response.data);
  } catch (error) {
    // error.response is undefined for network-level failures
    console.error('Error:', error.response ? error.response.data : error.message);
  }
})();
face_image: An image with a face.
driving_video: A video to drive the animation.
live_portrait_dsize: Size of the output image (min: 64, max: 2048).
live_portrait_scale: Scaling factor for the face (min: 1, max: 4).
video_frame_load_cap: The maximum number of frames to load from the driving video. Set to 0 to use all frames.
live_portrait_lip_zero: Enable lip zero.
live_portrait_relative: Use relative positioning.
live_portrait_vx_ratio: Horizontal shift ratio (min: -1, max: 1).
live_portrait_vy_ratio: Vertical shift ratio (min: -1, max: 1).
live_portrait_stitching: Enable stitching.
video_select_every_n_frames: Select every nth frame from the driving video. Set to 1 to use all frames.
live_portrait_eye_retargeting: Enable eye retargeting.
live_portrait_lip_retargeting: Enable lip retargeting.
live_portrait_lip_retargeting_multiplier: Multiplier for lip retargeting (min: 0.01, max: 10).
live_portrait_eyes_retargeting_multiplier: Multiplier for eye retargeting (min: 0.01, max: 10).
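The numeric parameters above have documented min/max ranges. As a convenience, the ranges can be checked client-side before sending a request; the helper below is a sketch based only on the ranges listed here, not part of any official SDK (the API presumably also validates server-side).

```javascript
// Documented ranges for the numeric Live Portrait parameters
const RANGES = {
  live_portrait_dsize: [64, 2048],
  live_portrait_scale: [1, 4],
  live_portrait_vx_ratio: [-1, 1],
  live_portrait_vy_ratio: [-1, 1],
  live_portrait_lip_retargeting_multiplier: [0.01, 10],
  live_portrait_eyes_retargeting_multiplier: [0.01, 10],
};

// Return a list of human-readable range violations (empty if payload is valid)
function validatePayload(payload) {
  const errors = [];
  for (const [key, [min, max]] of Object.entries(RANGES)) {
    if (key in payload && (payload[key] < min || payload[key] > max)) {
      errors.push(`${key} must be between ${min} and ${max}`);
    }
  }
  return errors;
}
```

For example, `validatePayload({ live_portrait_dsize: 32 })` reports one violation, while a payload using only in-range values returns an empty list.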
To keep track of your credit usage, you can inspect the response headers of each API call. The x-remaining-credits property will indicate the number of remaining credits in your account. Ensure you monitor this value to avoid any disruptions in your API usage.
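Reading the header might look like the sketch below. Note that axios lower-cases header names, and the helper's defensive parsing (returning null when the header is absent or non-numeric) is an assumption for illustration, not documented API behavior.

```javascript
// Extract the remaining-credit balance from a response's headers object.
// Returns a number, or null if the header is missing or unparsable.
function remainingCredits(headers) {
  const raw = headers['x-remaining-credits'];
  const value = Number(raw);
  return Number.isFinite(value) ? value : null;
}
```

After a request with axios, this would be called as `remainingCredits(response.headers)`.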
Live Portrait is an advanced AI-driven portrait animation framework. Unlike mainstream diffusion-based methods, Live Portrait leverages an implicit-keypoint-based framework for creating lifelike animations from single source images.
Efficient Animation: LivePortrait synthesizes lifelike videos from a single source image, using it as an appearance reference. The motion (facial expressions and head pose) is derived from a driving source such as a video, audio, text, or generated motion.
Stitching and Retargeting: Instead of following traditional diffusion-based approaches, LivePortrait explores and extends the potential of implicit-keypoint-based techniques. This approach effectively balances realism and expressiveness.
Bring life to historical figures: Imagine educational content or documentaries featuring animated portraits of historical figures with realistic expressions. Live Portrait allows you to create engaging narratives by adding subtle movements and emotions to portraits.
Create engaging social media content: Stand out from the crowd with captivating animated profile pictures or eye-catching social media posts featuring your own portrait brought to life. Live Portrait lets you personalize your content and grab attention with dynamic visuals.
Enhance e-learning experiences: Make educational content more interactive and engaging for learners of all ages. Animate portraits of educators or characters to explain concepts in a lively and memorable way.
Personalize avatars and characters: Design unique and expressive avatars for games, apps, or virtual reality experiences. Live Portrait allows you to create avatars with realistic facial movements that enhance user interaction.