Welcome to LartistAI Docs
Let's make your app AI-powered with simple, powerful APIs.
Quick Start
Get up and running in four simple steps
What Can You Build?
Choose from our comprehensive suite of AI models
Images
Generate and edit stunning images
Videos
Create dynamic videos from text or images
Audio
Generate speech and convert voices
3D/4D
Create 3D models and 4D content
Getting Started
Follow these simple steps to start using LartistAI APIs.
1. Get Your API Key
Sign up for free and get your API key to start making requests
2. Choose Your Integration Method
Pick the method that works best for your project
REST API
Integrate directly into your applications with simple HTTP requests
curl -X POST "https://apiplateform.richdalelab.com/api/generate" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{"model": "sd35", "params": {"prompt": "your prompt"}}'
3. Make Your First Request
Use our simple API to start generating content
API Request
curl -X POST "https://apiplateform.richdalelab.com/api/generate" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{"model": "sd35", "params": {"prompt": "A beautiful sunset over mountains"}}'
Simple & Fast
Just replace YOUR_API_KEY with your actual API key and you're ready to go!
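The same request can be issued from any HTTP client; a minimal Python sketch that assembles the URL, headers, and JSON body (the helper name is ours, not part of the API):

```python
import json

API_URL = "https://apiplateform.richdalelab.com/api/generate"

def build_generate_request(api_key: str, model: str, params: dict) -> dict:
    """Assemble the URL, headers, and JSON body for a POST to /api/generate."""
    return {
        "url": API_URL,
        "headers": {
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        "body": json.dumps({"model": model, "params": params}),
    }

req = build_generate_request("YOUR_API_KEY", "sd35",
                             {"prompt": "A beautiful sunset over mountains"})
# Send with any HTTP client, e.g.:
#   requests.post(req["url"], headers=req["headers"], data=req["body"])
```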
4. Quick Start Examples
Copy and paste these examples to get started immediately
Generate an Image
curl -X POST "https://apiplateform.richdalelab.com/api/generate" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "sd35",
    "params": {
      "prompt": "A beautiful sunset over mountains",
      "width": 1024,
      "height": 1024
    }
  }'
API Keys
Manage your API keys for secure access to LartistAI.
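Keys are best kept out of source code; a small Python sketch that reads the key from an environment variable (the variable name LARTISTAI_API_KEY is an assumption, use whatever your deployment defines):

```python
import os

def load_api_key(env_var: str = "LARTISTAI_API_KEY") -> str:
    """Read the API key from the environment instead of hard-coding it."""
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(f"Set {env_var} before making requests")
    return key
```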
Available Models
Choose from our comprehensive suite of AI models for any creative project.
Image Generation Models
Image Editing Models
Video Generation Models
Stable Video Diffusion 1.1
Generate videos from images with temporal consistency
SV4D2
Advanced 4D video generation for dynamic content creation (GIF/MP4 input only)
Lartist Text-to-Video
Generate high-quality videos from text prompts with Lartist
Lartist Image-to-Video
Transform static images into dynamic videos with Lartist
Audio Generation Models
Stable Diffusion 3.5 API Reference
Detailed documentation for the Stable Diffusion 3.5 model.
Stable Diffusion 3.5
Advanced text-to-image generation with enhanced artistic control
/api/generate
Parameters
model - Model name: "sd35"
params - Parameters object containing:
prompt - Text description of the image to generate
negative_prompt - What to avoid in the generated image
guidance_scale - How closely to follow the prompt (1-20)
denoising_steps - Number of denoising steps (10-50)
width - Image width in pixels
height - Image height in pixels
seed - Random seed for reproducibility
batch_size - Number of images to generate in a batch
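The documented ranges can be checked client-side before a request is sent; a Python sketch (helper name and defaults are ours) that builds an sd35 params object:

```python
def sd35_params(prompt: str, *, guidance_scale: float = 7,
                denoising_steps: int = 25, width: int = 1024,
                height: int = 1024, seed: int = 0, batch_size: int = 1,
                negative_prompt: str = "") -> dict:
    """Build an sd35 parameter dict, enforcing the documented ranges."""
    if not 1 <= guidance_scale <= 20:
        raise ValueError("guidance_scale must be in [1, 20]")
    if not 10 <= denoising_steps <= 50:
        raise ValueError("denoising_steps must be in [10, 50]")
    params = {"prompt": prompt, "guidance_scale": guidance_scale,
              "denoising_steps": denoising_steps, "width": width,
              "height": height, "seed": seed, "batch_size": batch_size}
    if negative_prompt:
        params["negative_prompt"] = negative_prompt
    return params
```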
Code Examples
curl -X POST "https://apiplateform.richdalelab.com/api/generate" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "sd35",
    "params": {
      "prompt": "A futuristic city skyline at sunset",
      "guidance_scale": 7,
      "denoising_steps": 25,
      "width": 1024,
      "height": 1024,
      "seed": 0,
      "batch_size": 1
    }
  }'
Response Format
{
  "success": true,
  "result": {
    "id": "job_123456789",
    "status": "completed",
    "output": {
      "processed_files": [
        {
          "filename": "generated_output.png/mp4/glb",
          "mimeType": "image/png | video/mp4 | model/gltf-binary | audio/wav",
          "size": 1024000,
          "base64": "[BASE64_ENCODED_DATA]"
        }
      ]
    },
    "metadata": {
      "model": "sd35",
      "parameters": {
        "prompt": "A beautiful sunset over mountains"
      },
      "processing_time": 2.5
    }
  }
}
Stable Video Diffusion 1.1 API Reference
Detailed documentation for the Stable Video Diffusion 1.1 model.
Stable Video Diffusion 1.1
Generate videos from images with temporal consistency
/api/generate
Parameters
model - Model name: "svd11"
params - Parameters object containing:
image_url - URL or base64 of the input image
steps - Number of denoising steps (1-50)
width - Video width in pixels
height - Video height in pixels
output_dir - Directory to save output files
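Since image_url accepts either a URL or base64 data, a small Python helper (ours, not part of the API) can normalize both cases:

```python
import base64

def image_url_value(source: str) -> str:
    """Return a value for image_url: pass URLs through, base64-encode local files."""
    if source.startswith(("http://", "https://")):
        return source
    with open(source, "rb") as f:
        return base64.b64encode(f.read()).decode("ascii")
```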
Code Examples
curl -X POST "https://apiplateform.richdalelab.com/api/generate" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "svd11",
    "params": {
      "image_url": "https://example.com/image.jpg",
      "steps": 25,
      "width": 1024,
      "height": 576,
      "output_dir": "test_output"
    }
  }'
Response Format
{
  "success": true,
  "result": {
    "id": "job_123456789",
    "status": "completed",
    "output": {
      "processed_files": [
        {
          "filename": "generated_output.png/mp4/glb",
          "mimeType": "image/png | video/mp4 | model/gltf-binary | audio/wav",
          "size": 1024000,
          "base64": "[BASE64_ENCODED_DATA]"
        }
      ]
    },
    "metadata": {
      "model": "svd11",
      "parameters": {
        "prompt": "A beautiful sunset over mountains"
      },
      "processing_time": 2.5
    }
  }
}
Stable Fast 3D API Reference
Detailed documentation for the Stable Fast 3D model.
Stable Fast 3D
Generate 3D models from single images with photorealistic textures (PNG/JPG only)
/api/generate
Parameters
model - Model name: "sf3d"
params - Parameters object containing:
file_base64 - Base64 encoded image file (PNG or JPG only)
file_name - Name of the uploaded file
remesh_option - Mesh reconstruction algorithm
vertex_count - Number of vertices in mesh (-1 for auto)
texture_size - Resolution of generated texture (256-2048)
foreground_ratio - Foreground to background ratio (0.0-1.0)
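file_base64 expects the raw image bytes base64-encoded; a Python sketch (helper name is ours, and accepting the .jpeg spelling alongside .jpg is an assumption) that prepares both file parameters:

```python
import base64
import os

ALLOWED_EXTENSIONS = {".png", ".jpg", ".jpeg"}

def sf3d_file_params(path: str) -> dict:
    """Base64-encode a PNG/JPG for sf3d's file_base64/file_name parameters."""
    ext = os.path.splitext(path)[1].lower()
    if ext not in ALLOWED_EXTENSIONS:
        raise ValueError("sf3d accepts PNG or JPG input only")
    with open(path, "rb") as f:
        data = base64.b64encode(f.read()).decode("ascii")
    return {"file_base64": data, "file_name": os.path.basename(path)}
```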
Code Examples
curl -X POST "https://apiplateform.richdalelab.com/api/generate" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "sf3d",
    "params": {
      "file_base64": "[BASE64_ENCODED_IMAGE]",
      "file_name": "input.png",
      "remesh_option": "none",
      "vertex_count": -1,
      "texture_size": 1024,
      "foreground_ratio": 0.85
    }
  }'
Response Format
{
  "success": true,
  "result": {
    "id": "job_123456789",
    "status": "completed",
    "output": {
      "processed_files": [
        {
          "filename": "generated_output.png/mp4/glb",
          "mimeType": "image/png | video/mp4 | model/gltf-binary | audio/wav",
          "size": 1024000,
          "base64": "[BASE64_ENCODED_DATA]"
        }
      ]
    },
    "metadata": {
      "model": "sf3d",
      "parameters": {
        "prompt": "A beautiful sunset over mountains"
      },
      "processing_time": 2.5
    }
  }
}
SV4D2 API Reference
Detailed documentation for the SV4D2 model.
SV4D2
Advanced 4D video generation for dynamic content creation (GIF/MP4 input only)
/api/generate
Parameters
model - Model name: "sv4d2"
params - Parameters object containing:
file_data - Base64 encoded input file (GIF or MP4 only)
model_path - Model variant to use
num_steps - Number of denoising steps
n_frames - Number of frames in output
Code Examples
curl -X POST "https://apiplateform.richdalelab.com/api/generate" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "sv4d2",
    "params": {
      "file_data": "[BASE64_ENCODED_FILE]",
      "model_path": "checkpoints/sv4d2.safetensors",
      "num_steps": 25,
      "n_frames": 13
    }
  }'
Response Format
{
  "success": true,
  "result": {
    "id": "job_123456789",
    "status": "completed",
    "output": {
      "processed_files": [
        {
          "filename": "generated_output.png/mp4/glb",
          "mimeType": "image/png | video/mp4 | model/gltf-binary | audio/wav",
          "size": 1024000,
          "base64": "[BASE64_ENCODED_DATA]"
        }
      ]
    },
    "metadata": {
      "model": "sv4d2",
      "parameters": {
        "prompt": "A beautiful sunset over mountains"
      },
      "processing_time": 2.5
    }
  }
}
Lartist Image Generation API Reference
Detailed documentation for the Lartist Image Generation model.
Lartist Image Generation
Generate high-quality images from text with the powerful Lartist model
/api/generate
Parameters
model - Model name: "Qwen/Qwen-image"
params - Parameters object containing:
prompt - Text prompt to generate the image
negative_prompt - What to avoid in the generated image
aspect_ratio - Aspect ratio of the generated image (e.g., 16:9, 1:1)
num_inference_steps - Number of denoising steps
guidance_scale - Scale for classifier-free guidance
seed - Random seed for reproducibility
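aspect_ratio is a width:height string resolved server-side; for local validation you might parse it before sending (a sketch, not part of the API):

```python
def parse_aspect_ratio(ratio: str) -> tuple:
    """Split an aspect-ratio string like "16:9" into positive integer parts."""
    w, h = ratio.split(":")
    w, h = int(w), int(h)
    if w <= 0 or h <= 0:
        raise ValueError(f"invalid aspect ratio: {ratio}")
    return (w, h)
```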
Code Examples
curl -X POST "https://apiplateform.richdalelab.com/api/generate" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "Qwen/Qwen-image",
    "params": {
      "prompt": "A futuristic city skyline at sunset",
      "negative_prompt": "blurry, low quality, distorted, ugly",
      "aspect_ratio": "16:9",
      "num_inference_steps": 20,
      "guidance_scale": 4
    }
  }'
Response Format
{
  "success": true,
  "result": {
    "id": "job_123456789",
    "status": "completed",
    "output": {
      "processed_files": [
        {
          "filename": "generated_output.png/mp4/glb",
          "mimeType": "image/png | video/mp4 | model/gltf-binary | audio/wav",
          "size": 1024000,
          "base64": "[BASE64_ENCODED_DATA]"
        }
      ]
    },
    "metadata": {
      "model": "qwen",
      "parameters": {
        "prompt": "A beautiful sunset over mountains"
      },
      "processing_time": 2.5
    }
  }
}
Lartist Image Editing API Reference
Detailed documentation for the Lartist Image Editing model.
Lartist Image Editing
Edit and enhance images with Lartist Image Editing model
/api/generate
Parameters
model - Model name: "Qwen/Qwen-Image-edit"
params - Parameters object containing:
prompt - Text prompt describing the desired edit
image_base64 - Base64 encoded input image
num_inference_steps - Number of denoising steps
guidance_scale - Scale for classifier-free guidance
width - Width of the output image (matches input by default)
height - Height of the output image (matches input by default)
Code Examples
curl -X POST "https://apiplateform.richdalelab.com/api/generate" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "Qwen/Qwen-Image-edit",
    "params": {
      "prompt": "A futuristic city skyline at sunset",
      "image_base64": "[BASE64_ENCODED_IMAGE]",
      "num_inference_steps": 20,
      "guidance_scale": 7.5
    }
  }'
Omit width and height to match the input image dimensions.
Response Format
{
  "success": true,
  "result": {
    "id": "job_123456789",
    "status": "completed",
    "output": {
      "processed_files": [
        {
          "filename": "generated_output.png/mp4/glb",
          "mimeType": "image/png | video/mp4 | model/gltf-binary | audio/wav",
          "size": 1024000,
          "base64": "[BASE64_ENCODED_DATA]"
        }
      ]
    },
    "metadata": {
      "model": "qwen-edit",
      "parameters": {
        "prompt": "A beautiful sunset over mountains"
      },
      "processing_time": 2.5
    }
  }
}
Lartist Text-to-Speech API Reference
Detailed documentation for the Lartist Text-to-Speech model.
Lartist Text-to-Speech
Generate expressive speech from text using reference audio with Lartist
/api/generate
Parameters
model - Model name: "FunAudioLLM/CosyVoice"
task - Task type: "tts"
prompt - Text to be converted to speech
reference_audio_b64 - Base64 encoded reference audio for voice cloning
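A Python sketch (helper name is ours) that assembles the tts request body from text and raw reference audio bytes:

```python
import base64

def tts_payload(text: str, reference_audio: bytes) -> dict:
    """Build the tts request body: text prompt plus base64 reference audio."""
    return {
        "model": "FunAudioLLM/CosyVoice",
        "task": "tts",
        "prompt": text,
        "reference_audio_b64": base64.b64encode(reference_audio).decode("ascii"),
    }
```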
Code Examples
curl -X POST "https://apiplateform.richdalelab.com/api/generate" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "FunAudioLLM/CosyVoice",
    "task": "tts",
    "prompt": "Welcome to LartistAI.",
    "reference_audio_b64": "[BASE64_ENCODED_AUDIO]"
  }'
Response Format
{
  "success": true,
  "result": {
    "id": "job_123456789",
    "status": "completed",
    "output": {
      "processed_files": [
        {
          "filename": "generated_output.png/mp4/glb",
          "mimeType": "image/png | video/mp4 | model/gltf-binary | audio/wav",
          "size": 1024000,
          "base64": "[BASE64_ENCODED_DATA]"
        }
      ]
    },
    "metadata": {
      "model": "cosyvoice",
      "parameters": {
        "prompt": "A beautiful sunset over mountains"
      },
      "processing_time": 2.5
    }
  }
}
Lartist Voice Conversion API Reference
Detailed documentation for the Lartist Voice Conversion model.
Lartist Voice Conversion
Convert your voice to another using a reference audio with Lartist
/api/generate
Parameters
model - Model name: "FunAudioLLM/CosyVoice"
task - Task type: "vc"
source_audio_b64 - Base64 encoded source audio to be converted
reference_audio_b64 - Base64 encoded reference audio for target voice
Code Examples
curl -X POST "https://apiplateform.richdalelab.com/api/generate" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "FunAudioLLM/CosyVoice",
    "task": "vc",
    "source_audio_b64": "[BASE64_ENCODED_AUDIO]",
    "reference_audio_b64": "[BASE64_ENCODED_AUDIO]"
  }'
Response Format
{
  "success": true,
  "result": {
    "id": "job_123456789",
    "status": "completed",
    "output": {
      "processed_files": [
        {
          "filename": "generated_output.png/mp4/glb",
          "mimeType": "image/png | video/mp4 | model/gltf-binary | audio/wav",
          "size": 1024000,
          "base64": "[BASE64_ENCODED_DATA]"
        }
      ]
    },
    "metadata": {
      "model": "cosyvoice-vc",
      "parameters": {
        "prompt": "A beautiful sunset over mountains"
      },
      "processing_time": 2.5
    }
  }
}
Lartist Text-to-Video API Reference
Detailed documentation for the Lartist Text-to-Video model.
Lartist Text-to-Video
Generate high-quality videos from text prompts with Lartist
/api/generate
Parameters
model - Model name: "Wan-AI/Wan2.2-T2V-A14B"
params - Parameters object containing:
prompt - Text prompt to generate the video
height - Height of the output video in pixels
width - Width of the output video in pixels
num_frames - Number of frames in the output video
num_inference_steps - Number of denoising steps
fps - Frames per second of the output video
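Output duration follows directly from num_frames and fps: 32 frames at 16 fps yields a 2-second clip. As a one-liner:

```python
def clip_seconds(num_frames: int, fps: int) -> float:
    """Duration in seconds of the output video."""
    return num_frames / fps

print(clip_seconds(32, 16))  # 2.0
```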
Code Examples
curl -X POST "https://apiplateform.richdalelab.com/api/generate" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "Wan-AI/Wan2.2-T2V-A14B",
    "params": {
      "prompt": "A futuristic city skyline at sunset",
      "height": 512,
      "width": 768,
      "num_frames": 32,
      "num_inference_steps": 40,
      "fps": 16
    }
  }'
Response Format
{
  "success": true,
  "result": {
    "id": "job_123456789",
    "status": "completed",
    "output": {
      "processed_files": [
        {
          "filename": "generated_output.png/mp4/glb",
          "mimeType": "image/png | video/mp4 | model/gltf-binary | audio/wav",
          "size": 1024000,
          "base64": "[BASE64_ENCODED_DATA]"
        }
      ]
    },
    "metadata": {
      "model": "wan22-t2v",
      "parameters": {
        "prompt": "A beautiful sunset over mountains"
      },
      "processing_time": 2.5
    }
  }
}
Lartist Image-to-Video API Reference
Detailed documentation for the Lartist Image-to-Video model.
Lartist Image-to-Video
Transform static images into dynamic videos with Lartist
/api/generate
Parameters
model - Model name: "Wan-AI/Wan2.2-I2V-A14B"
params - Parameters object containing:
prompt - Text prompt to guide video generation
image_path_b64 - Base64 encoded input image
negative_prompt - What to avoid in the generated video
height - Height of the output video in pixels
width - Width of the output video in pixels
num_frames - Number of frames in the output video
num_inference_steps - Number of denoising steps
guidance_scale - Scale for classifier-free guidance
fps - Frames per second of the output video
Code Examples
curl -X POST "https://apiplateform.richdalelab.com/api/generate" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "Wan-AI/Wan2.2-I2V-A14B",
    "params": {
      "prompt": "A futuristic city skyline at sunset",
      "image_path_b64": "[BASE64_ENCODED_IMAGE]",
      "negative_prompt": "overly saturated, overexposed, static, blurry details, subtitles, artistic style, paintings, still frames, gray overall, worst quality, low quality, JPEG artifacts, ugly, incomplete, extra fingers, poorly drawn hands, poorly drawn face, deformed, disfigured, malformed limbs, fused fingers, static scenes, cluttered background, three legs, crowded background, walking backwards",
      "height": 512,
      "width": 768,
      "num_frames": 32,
      "num_inference_steps": 40,
      "guidance_scale": 4,
      "fps": 16
    }
  }'
Response Format
{
  "success": true,
  "result": {
    "id": "job_123456789",
    "status": "completed",
    "output": {
      "processed_files": [
        {
          "filename": "generated_output.png/mp4/glb",
          "mimeType": "image/png | video/mp4 | model/gltf-binary | audio/wav",
          "size": 1024000,
          "base64": "[BASE64_ENCODED_DATA]"
        }
      ]
    },
    "metadata": {
      "model": "wan22-i2v",
      "parameters": {
        "prompt": "A beautiful sunset over mountains"
      },
      "processing_time": 2.5
    }
  }
}
CLI Documentation
Quickly integrate LartistAI models using our command-line interface.
Install the CLI
Get started by installing the LartistAI CLI globally.
npm install -g @richdaleai/cli
Generate Image with CLI
Generate an image using the Stable Diffusion 3.5 model.
npx richdaleai sd35 --prompt "A beautiful sunset over mountains" --width 1024 --height 1024
Error Handling
Common Error Responses
Understand and resolve issues with API requests.
401 Unauthorized
Invalid or missing API key. Make sure to include your API key in the Authorization header.
429 Rate Limited
Too many requests. Check your usage limits and try again later.
500 Server Error
Internal server error. Please try again or contact support if the issue persists.
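429 and 500 responses are usually transient, so clients commonly retry with exponential backoff; a Python sketch (the send callable stands in for your HTTP call and is an assumption):

```python
import time

def with_retries(send, max_attempts: int = 4, base_delay: float = 1.0):
    """Call send() until it succeeds, backing off exponentially on 429/500.

    send() must return a (status_code, body) tuple.
    """
    for attempt in range(max_attempts):
        status, body = send()
        if status == 200:
            return body
        if status in (429, 500) and attempt < max_attempts - 1:
            time.sleep(base_delay * 2 ** attempt)  # 1s, 2s, 4s, ...
            continue
        raise RuntimeError(f"request failed with HTTP {status}")
```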
Next Steps
Ready to start building? Here's what you can do next.
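A common first follow-up is decoding the base64 files a completed job returns; a Python sketch (helper name is ours) that writes each processed file to disk:

```python
import base64
import os

def save_outputs(response: dict, out_dir: str = ".") -> list:
    """Decode each processed file in a /api/generate response and write it to disk."""
    paths = []
    for f in response["result"]["output"]["processed_files"]:
        path = os.path.join(out_dir, f["filename"])
        with open(path, "wb") as fh:
            fh.write(base64.b64decode(f["base64"]))
        paths.append(path)
    return paths
```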