Overview
The videos endpoint is OpenAI Sora's video generation interface. It creates video generation tasks from a text prompt, optionally guided by a reference image.
After creating a task, use the query interface to check its generation status. Once the task completes, you can proceed with follow-up operations such as remixing and downloading.
For more details on the Sora generation interface, refer to the OpenAI official documentation.
Supported Models
Currently supported model IDs:
sora-2 (default)
sora-2-pro
Important Notes
Asynchronous Processing
Video generation takes considerable time and runs asynchronously. After creating a task, a task ID is returned immediately; use the query interface to get generation progress and results.
Content Policy
Generated video content must comply with OpenAI's usage policies. Content that is illegal, violent, pornographic, or infringes on copyrights is prohibited.
Resource Management
Download generated videos promptly to avoid resource expiration. Check the expires_at field in the response to know when the video will expire.
Auto-Generated Documentation
The request parameters and response format are generated automatically from the OpenAPI specification. All parameters, their types, descriptions, defaults, and examples are pulled directly from openapi.json. See the API reference below.
Quick Start
Basic Example: Text-to-Video
curl -X POST "https://wisdom-gate.juheapi.com/v1/video" \
-H "Authorization: Bearer $WISDOM_GATE_KEY" \
-F "prompt=A cat walking on the street" \
-F "model=sora-2" \
-F "seconds=4" \
-F "size=720x1280"
Example with Image Reference
curl -X POST "https://wisdom-gate.juheapi.com/v1/video" \
-H "Authorization: Bearer $WISDOM_GATE_KEY" \
-F "prompt=A serene landscape animation" \
-F "model=sora-2" \
-F "seconds=8" \
-F "size=1280x720" \
-F "input_reference=@reference_image.jpg"
Best Practices
1. Prompt Optimization
Use specific, detailed descriptions including scene, action, lighting and other details:
Good prompt:
A cinematic shot of a red sports car driving through a winding mountain road at sunset, with dramatic lighting and misty atmosphere, shot with a professional camera, 4K quality
Poor prompt (too vague, no scene, lighting, or style details):
A car driving
2. Duration Control
Choose appropriate duration based on content complexity; shorter durations typically yield better quality:
- 4 seconds: Best for simple scenes, fastest generation
- 8 seconds: Good for moderate complexity scenes
- 12 seconds: For complex scenes, longer generation time
3. Resolution Selection
Choose appropriate resolution based on use case, balancing quality and generation time:
720x1280 (default): Vertical format, good for mobile/social media
1280x720: Horizontal format, standard widescreen
1024x1792: High vertical resolution
1792x1024: High horizontal resolution
4. Image Preprocessing
When using image guidance, ensure input images are:
- Clear and well-composed
- Reasonable file size (not too large)
- In supported formats (JPEG, PNG)
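A small preprocessing step can enforce these points before upload. The sketch below uses Pillow (an assumption, any image library will do) to normalize the reference to an RGB JPEG and cap its longest side:
from PIL import Image

def prepare_reference(path, out_path="reference_prepared.jpg", max_side=1792):
    """Convert a reference image to RGB JPEG and cap its longest side.

    The 1792px cap is an arbitrary choice matching the largest video dimension.
    """
    img = Image.open(path).convert("RGB")   # drop alpha channel, normalize mode
    img.thumbnail((max_side, max_side))     # shrink in place, keeping aspect ratio
    img.save(out_path, "JPEG", quality=90)  # reasonable size/quality trade-off
    return out_path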
5. Error Handling
Implement comprehensive retry and error handling mechanisms:
import os
import random
import time

import requests


def create_video_with_retry(prompt, max_retries=3):
    """Create a video generation task, retrying with exponential backoff on failure."""
    url = "https://wisdom-gate.juheapi.com/v1/video"
    headers = {"Authorization": f"Bearer {os.environ['WISDOM_GATE_KEY']}"}
    data = {"prompt": prompt, "model": "sora-2"}

    for attempt in range(max_retries):
        try:
            response = requests.post(url, headers=headers, data=data)
            response.raise_for_status()
            return response.json()
        except requests.exceptions.RequestException:
            if attempt < max_retries - 1:
                # Exponential backoff with jitter before the next attempt
                wait_time = (2 ** attempt) + random.random()
                time.sleep(wait_time)
            else:
                raise
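A call then looks like this (assuming the function above; the id and status fields are described in the API reference below):
result = create_video_with_retry("A cat walking on the street")
print(result["id"], result["status"])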
Storyboard Mode
The API supports Storyboard functionality for creating sequential multi-shot videos. When using storyboard mode, the prompt must strictly follow this format:
Shot 1:
duration: 7.5sec
Scene: The plane takes off.
Shot 2:
duration: 7.5sec
Scene: The plane lands.
- Each shot starts with Shot N: (N is the shot number)
- Use duration: Xsec to specify shot duration
- Use Scene: to describe the shot content
- Separate shots with blank lines
Storyboard Example
curl -X POST "https://wisdom-gate.juheapi.com/v1/video" \
-H "Authorization: Bearer $WISDOM_GATE_KEY" \
-F "prompt=Shot 1:
duration: 7.5sec
Scene: The plane takes off.
Shot 2:
duration: 7.5sec
Scene: The plane lands." \
-F "model=sora-2" \
-F "size=1280x720"
FAQ
How long does video generation take?
Generation typically takes a few minutes to over ten minutes, depending on video duration, resolution, and server load. If a task remains unresponsive for a long time or fails, please contact customer service.
What resolutions are supported?
The default resolution is 720x1280. Supported resolutions:
720x1280 (default) - Vertical
1280x720 - Horizontal
1024x1792 - High vertical
1792x1024 - High horizontal
For specific supported resolutions, please refer to the model documentation.
How long can generated videos be?
Default is 4 seconds. Maximum duration depends on model limitations:
sora-2: Supports 4, 8, and 12 seconds
sora-2-pro: Supports 4, 8, and 12 seconds
Please refer to official documentation for the latest duration limits.
How to improve generation quality?
- Use detailed prompts: Include scene, action, lighting, camera angles, and style
- Choose appropriate duration: Shorter durations (4-8 seconds) typically yield better quality
- Provide high-quality reference images: Clear, well-composed images guide better results
- Use Storyboard mode: For complex multi-shot videos, use the storyboard format
How to check video generation status?
After creating a video, use the returned id to query status:
# "result" is the JSON returned when the task was created; "headers" holds the
# same Authorization header used in the earlier examples.
video_id = result['id']
status_url = f"https://wisdom-gate.juheapi.com/v1/video/{video_id}"
response = requests.get(status_url, headers=headers)
status = response.json()
print(f"Status: {status['status']}, Progress: {status['progress']}%")
How to download the generated video?
Once the status is completed, use the content endpoint:
content_url = f"https://wisdom-gate.juheapi.com/v1/video/{video_id}/content"
response = requests.get(content_url, headers=headers)
with open("generated_video.mp4", "wb") as f:
    f.write(response.content)
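For large videos, streaming the download avoids holding the whole file in memory. A variant using the standard requests streaming options:
# Streamed variant of the download above.
with requests.get(content_url, headers=headers, stream=True) as response:
    response.raise_for_status()
    with open("generated_video.mp4", "wb") as f:
        for chunk in response.iter_content(chunk_size=1024 * 1024):
            f.write(chunk)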
API Reference
Authentication
Bearer token authentication. Include your API key in the Authorization header as 'Bearer YOUR_API_KEY'.
Request Body
Request body for video generation using OpenAI Sora models. Uses the multipart/form-data format.

prompt
string
Text prompt that describes the video to generate. For Storyboard mode, use the specific format with Shot N:, duration: Xsec, and Scene: descriptions.
Example: "A cat walking on the street"

model
enum<string>
default: sora-2
The video generation model to use. Defaults to sora-2.
Available options: sora-2, sora-2-pro

seconds
enum<string>
default: 4
Clip duration in seconds. Defaults to 4 seconds.
Available options: 4, 8, 12

size
enum<string>
default: 720x1280
Output resolution formatted as width x height. Defaults to 720x1280.
Available options: 720x1280, 1280x720, 1024x1792, 1792x1024

input_reference
file
Optional image reference that guides generation. Upload as a file in multipart/form-data format.
Response
Video generation request accepted. Returns immediately with a task ID for asynchronous processing.

id
Unique identifier for the video generation request. Use this ID to query status and retrieve the video.
Example: "video_68e688d4950481918ec389280c2f78140fcb904657674466"

object
Object type, always 'video'.

created_at
Unix timestamp (in seconds) when the request was created.

status
Current status of the video generation. Use the query interface to check status updates.
Available options: queued, processing, completed, failed

completed_at
Unix timestamp (in seconds) when the video generation completed. Null if not yet completed.

error
Error information if generation failed. Null if no error.

expires_at
Unix timestamp (in seconds) when the video will expire. Download promptly to avoid expiration.

model
Model used for generation.

progress
Generation progress percentage (0-100). Required range: 0 <= x <= 100.

remixed_from_video_id
ID of the video this was remixed from, if applicable.

seconds
Video duration in seconds.

size
Video resolution (width x height).
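Putting the response fields together, a freshly created task returned by response.json() might look roughly like the dictionary below. The values are placeholders for illustration only (the id reuses the documented example); actual output may differ.
# Illustrative shape of a newly created task; all values are placeholders.
example_task = {
    "id": "video_68e688d4950481918ec389280c2f78140fcb904657674466",
    "object": "video",
    "created_at": 1730000000,
    "status": "queued",
    "completed_at": None,
    "error": None,
    "expires_at": None,
    "model": "sora-2",
    "progress": 0,
    "remixed_from_video_id": None,
    "seconds": "4",
    "size": "720x1280",
}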