Create and dispatch an AI-powered phone call. The call is queued and executed immediately.

Phone Number Format: Must be in E.164 format (e.g., +14155551234)

Simple Mode: Provide task (a simple prompt)
Advanced Mode: Provide instructions (a full system prompt)

Example request:

curl --request POST \
  --url https://api.topcalls.ai/v1/calls \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data @- <<EOF
{
  "phone_number": "+14155551234",
  "task": "Call to confirm John's appointment tomorrow at 3 PM"
}
EOF

Example response:

{
  "call_id": "564d4fd4-03bc-400a-abe0-05540fbeff88",
  "provider_call_id": "64e9bf0e-7c2f-4443-a759-7eb1731cd583",
  "status": "queued"
}
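The same request can be built from code. A minimal Python sketch using only the standard library (the endpoint, headers, and payload fields come from the curl example; nothing else is assumed):

```python
import json
import urllib.request

API_URL = "https://api.topcalls.ai/v1/calls"  # from the curl example

def build_call_request(token: str, payload: dict) -> urllib.request.Request:
    """Build the POST request shown in the curl example."""
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Simple mode: provide `task` (advanced mode would send `instructions` instead)
simple_payload = {
    "phone_number": "+14155551234",
    "task": "Call to confirm John's appointment tomorrow at 3 PM",
}

req = build_call_request("tc_live_xxxxx", simple_payload)
# urllib.request.urlopen(req) would dispatch the call (not executed here)
```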
Use Authorization: Bearer tc_live_xxxxx
Phone number in E.164 format (e.g., +14155551234).
Example: "+14155551234"
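An E.164 number is a + followed by up to 15 digits with no leading zero. A quick client-side check before submitting a call (the regex is a common approximation, not something this API specifies):

```python
import re

# '+', a non-zero first digit, then up to 14 more digits (E.164 shape)
E164_RE = re.compile(r"^\+[1-9]\d{1,14}$")

def is_e164(number: str) -> bool:
    """Rough E.164 validity check before submitting a call."""
    return bool(E164_RE.fullmatch(number))
```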
Simple prompt describing what the AI should do. Use this OR instructions (not both).
Example: "Call to confirm John's appointment tomorrow at 3 PM"
Caller ID in E.164 format (optional; falls back to the FROM_NUMBER env var).
Example: "+18005551234"
The AI's opening line.
Example: "Hi, this is Sarah from TopView Dental calling about your appointment."
Full system instructions for the AI. Use this OR task (not both).
Example: "You are Sarah, a friendly appointment coordinator..."
Conversation mode. One of: realtime, legacy.
- realtime: OpenAI Realtime API (speech-to-speech, low latency)
- legacy: Separate STT → LLM → TTS pipeline (custom voices, voice cloning)

Voice to use for AI responses.
- Realtime mode: alloy, echo, shimmer, ash, ballad, coral, sage, verse
- Legacy mode (ElevenLabs): rachel, domi, bella, antoni, elli, josh, arnold, sam, adam, nicole, matilda, or a custom voice ID such as 21m00Tcm4TlvDq8ikWAM (24-char alphanumeric)
- Legacy mode (Deepgram): aura-2-thalia-en, aura-2-orion-en, etc.
Example: "alloy"
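Because the set of valid voices depends on the selected mode, a lookup built from the lists above can validate a choice client-side. This is a sketch: the voice lists come from these docs, but the 24-char alphanumeric check for custom ElevenLabs IDs and the aura- prefix check are heuristics, not API rules:

```python
REALTIME_VOICES = {"alloy", "echo", "shimmer", "ash", "ballad", "coral", "sage", "verse"}
ELEVENLABS_NAMES = {"rachel", "domi", "bella", "antoni", "elli", "josh",
                    "arnold", "sam", "adam", "nicole", "matilda"}

def is_valid_voice(mode: str, voice: str) -> bool:
    """Heuristic check that a voice fits the selected conversation mode."""
    if mode == "realtime":
        return voice in REALTIME_VOICES
    if mode == "legacy":
        # Named ElevenLabs voices, a 24-char alphanumeric custom voice ID,
        # or a Deepgram Aura voice such as "aura-2-thalia-en"
        return (voice in ELEVENLABS_NAMES
                or (len(voice) == 24 and voice.isalnum())
                or voice.startswith("aura-"))
    return False
```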
AI model to use for the call. See GET /v1/models for the complete list of available models and their capabilities. If not specified, a default is selected automatically per mode.
Example: "gemini-2.5-flash"
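Model IDs such as "gemini-2.5-flash" can be discovered via GET /v1/models. A fetch sketch, assuming that endpoint uses the same bearer auth as call creation (not confirmed by these docs, and not executed here):

```python
import urllib.request

def models_request(token: str) -> urllib.request.Request:
    """GET /v1/models with the same bearer auth used for creating calls (assumed)."""
    return urllib.request.Request(
        "https://api.topcalls.ai/v1/models",
        headers={"Authorization": f"Bearer {token}"},
        method="GET",
    )

# urllib.request.urlopen(models_request("tc_live_xxxxx")) would list models
```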
LLM creativity/temperature (0-1). Higher values produce more creative responses.
Range: 0 <= x <= 1

STT provider (legacy mode only). One of: deepgram, gladia.
- deepgram: Deepgram (default, 36+ languages)
- gladia: Gladia (100+ languages, automatic language detection, multilingual support; set stt_language: "multi" for automatic multilingual detection)
Only used when mode=legacy.
Default: "deepgram"
STT model (legacy mode only). See GET /v1/models for the complete list of available STT models and their capabilities. Only used when mode=legacy.
Example: "nova-3"
STT language/dialect code (legacy mode only), e.g., en-US, en-GB, en-AU, es-ES, nl-BE. Use multi for automatic multilingual detection (supported by Gladia only). Only used when mode=legacy. For restricted multi-language detection, use the stt_languages array instead.
Example: "en-GB"
Array of language codes for restricted multi-language detection (Gladia only). When multiple languages are provided, code_switching mode is enabled automatically. This is preferred over stt_language: "multi" when you know which languages your callers will speak, as it narrows the detection space from 100+ languages to just the ones you specify (e.g., en, es, fr, de, ro).
Examples:
- ["en", "ro"] - Detect English and Romanian only
- ["en", "es", "fr"] - Detect English, Spanish, and French
Only used when mode=legacy and stt_provider=gladia.
Constraints: 1-10 elements
Example: ["en", "ro"]

Custom vocabulary for STT (Gladia only). Boosts recognition of domain-specific words and phrases in real time.
Formats supported:
- Plain strings: ["Capex", "TopCalls"]
- Objects with a language: [{"value": "Capex", "language": "en"}]
- Mixed: ["Capex", {"value": "مرحبا", "language": "ar"}]
Only used when mode=legacy and stt_provider=gladia.
Constraints: 1-100 elements
Example: ["Capex", { "value": "TopCalls", "language": "en" }]

STT endpoint sensitivity (Gladia only). Controls how long to wait after silence before considering speech complete.
Only used when mode=legacy and stt_provider=gladia.
Range: 0.01 <= x <= 2. Example: 0.01
STT interrupt/speech detection sensitivity (Gladia only). Controls the speech detection threshold for distinguishing speech from noise.
Only used when mode=legacy and stt_provider=gladia.
Range: 0 <= x <= 1. Example: 0.8
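The two Gladia sensitivity knobs above use different ranges (0.01-2 for endpoint sensitivity, 0-1 for the speech detection threshold), so a clamping helper keeps client-supplied values in bounds. The dict keys here are illustrative, not confirmed API field names; the ranges are from the constraints above:

```python
def clamp(value: float, lo: float, hi: float) -> float:
    """Clamp value into [lo, hi]."""
    return max(lo, min(hi, value))

def gladia_tuning(endpointing: float, speech_threshold: float) -> dict:
    """Clamp Gladia sensitivity values to their documented ranges.
    Keys are illustrative placeholders, not confirmed field names."""
    return {
        "endpointing": clamp(endpointing, 0.01, 2.0),
        "speech_threshold": clamp(speech_threshold, 0.0, 1.0),
    }
```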
Transcript correction vocabulary for LLM-based STT error correction (legacy mode only). Provides domain-specific terms that STT often mishears, allowing the LLM to use context to correct transcription errors.
Formats supported:
- Plain strings: ["Weaviate", "Kubernetes", "TopCalls"]
- Objects: [{ "correct": "Weaviate", "sounds_like": ["we activate", "web VT"] }, { "correct": "NVIDIA", "sounds_like": ["in video"], "context": "hardware" }]
- Mixed: ["TopCalls", { "correct": "Kubernetes", "sounds_like": ["cube net ease"] }]
Only used when mode=legacy.
Constraints: 1-100 elements
Example:
[
  "TopCalls",
  {
    "correct": "Weaviate",
    "sounds_like": ["we activate", "web VT"]
  },
  {
    "correct": "Kubernetes",
    "sounds_like": ["cube net ease", "cooper nettie"],
    "context": "technology"
  }
]

TTS provider (legacy mode only). See GET /v1/voices/builtin for available voices per provider. Only used when mode=legacy.
One of: deepgram, elevenlabs. Default: "deepgram"
TTS model (legacy mode only). See GET /v1/models for the complete list of available TTS models. Only used when mode=legacy.
Example: "eleven_flash_v2_5"
ElevenLabs voice stability (legacy mode, tts_provider=elevenlabs only). Controls the consistency of the voice output.
Only used when mode=legacy and tts_provider=elevenlabs. Range: 0 <= x <= 1. Example: 0.75

ElevenLabs voice similarity boost (legacy mode, tts_provider=elevenlabs only). Controls how closely the generated voice matches the original.
Only used when mode=legacy and tts_provider=elevenlabs. Range: 0 <= x <= 1. Example: 0.5

ElevenLabs speech speed (legacy mode, tts_provider=elevenlabs only). Controls the rate of speech.
Only used when mode=legacy and tts_provider=elevenlabs. Range: 0.7 <= x <= 1.2. Example: 0.78
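The three ElevenLabs knobs above do not share a range (stability and similarity boost are 0-1, speed is 0.7-1.2), which is easy to get wrong. A validation sketch with illustrative field names (the ranges are from the docs above; the key names are assumptions):

```python
# Field names are illustrative placeholders; ranges come from the docs above.
ELEVENLABS_RANGES = {
    "stability": (0.0, 1.0),
    "similarity_boost": (0.0, 1.0),
    "speed": (0.7, 1.2),
}

def check_elevenlabs_settings(settings: dict) -> None:
    """Raise if any ElevenLabs TTS setting is outside its documented range."""
    for key, value in settings.items():
        lo, hi = ELEVENLABS_RANGES[key]
        if not lo <= value <= hi:
            raise ValueError(f"{key}={value} outside [{lo}, {hi}]")
```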
Enable filler acknowledgments (legacy mode only). When enabled, the AI generates brief acknowledgments (e.g., "Got it...", "Sure...") before the main response to reduce perceived latency.
- false (default): No filler; the AI responds directly
- true: The AI generates contextual filler before the main response
Only used when mode=legacy.
Default: false
Block interruption mode (legacy mode only). When enabled, the AI continues speaking even if the user talks over it.
Only used when mode=legacy.
Default: false
Maximum call duration in minutes (enforced by the telephony provider).
Range: 1 <= x <= 60

Background audio preset to play during the call. Background audio plays continuously under the conversation and helps create a professional atmosphere.
- office: Office ambiance (default) - subtle office sounds
- none: No background audio
One of: office, none. Default: "office"
Volume level for background audio relative to speech.
- low: Subtle (-10 dB) - quieter background
- medium: Balanced (-4 dB) - noticeable but balanced (default)
- high: Full volume (0 dB) - background at the same level as speech
Only used when background_audio is not none.
One of: low, medium, high. Default: "medium"
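The volume levels above correspond to fixed gain offsets; expressed as a lookup (the dB values come from the descriptions above):

```python
# Gain applied to background audio relative to speech, in dB (from the docs above)
BACKGROUND_GAIN_DB = {"low": -10, "medium": -4, "high": 0}

def background_gain(level: str = "medium") -> int:
    """Return the background-audio gain in dB for a volume level."""
    return BACKGROUND_GAIN_DB[level]
```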
Webhook URL to receive call completion/failure notifications. The webhook is sent after the call finishes and includes recording_url and call_summary when available.
Example: "https://your-app.com/webhooks/call-complete"
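A receiver can pull the documented fields out of the webhook body. A parsing sketch — the payload shape is inferred from the fields these docs name (recording_url, call_summary, analysis, plus your custom metadata) and is not a confirmed schema:

```python
def summarize_webhook(payload: dict) -> dict:
    """Extract the fields these docs say a completion webhook may carry.
    The payload shape is inferred, not a confirmed schema."""
    return {
        "call_id": payload.get("call_id"),
        "status": payload.get("status"),
        "recording_url": payload.get("recording_url"),  # when available
        "call_summary": payload.get("call_summary"),    # when available
        "analysis": payload.get("analysis", {}),        # post-call extraction results
        "metadata": payload.get("metadata", {}),        # your custom metadata
    }
```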
Schema for post-call AI analysis. Defines what information to extract from the transcript. After the call, the AI analyzes the transcript and extracts structured data matching this schema. Results are included in the webhook payload under the analysis field.
Supported types:
- boolean: true/false values (e.g., "converted", "appointment_confirmed")
- string/text: Free-form text (e.g., "objections", "questions")
- number: Numeric values (e.g., "rating", "call_count")
- date: Date/time in ISO 8601 format (e.g., "appointment_time")
Simple format - just specify the type:
{ "converted": "boolean", "objections": "string" }
Rich format - include a description for better AI understanding:
{
  "converted": {
    "type": "boolean",
    "description": "Whether the lead agreed to schedule an appointment"
  },
  "appointment_time": {
    "type": "date",
    "description": "The scheduled appointment date/time if booked"
  }
}
Example:
{
  "converted": {
    "type": "boolean",
    "description": "Whether the lead agreed to schedule an appointment or expressed buying interest"
  },
  "objections": {
    "type": "string",
    "description": "Any concerns or objections the lead raised during the call"
  },
  "appointment_time": {
    "type": "date",
    "description": "The scheduled appointment date and time if one was booked"
  }
}

Custom metadata to include in the webhook payload. System fields (task, voice, model, etc.) are filtered out automatically.
Example:
{
  "patient_id": "pat_123",
  "source": "reminder_system"
}

Response: Call created successfully
call_id: Call UUID
Example: "564d4fd4-03bc-400a-abe0-05540fbeff88"

provider_call_id: Provider call ID (may be null if call creation failed)
Example: "64e9bf0e-7c2f-4443-a759-7eb1731cd583"

status: Current call status. One of: queued, pending, in_progress, completed, failed, cancelled
Example: "queued"
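When polling or handling webhooks, it helps to separate terminal statuses from in-flight ones. A sketch splitting the six statuses above (the grouping is an assumption based on the status names, not stated by these docs):

```python
# Grouping assumed from the status names in the docs above
TERMINAL_STATUSES = {"completed", "failed", "cancelled"}
IN_FLIGHT_STATUSES = {"queued", "pending", "in_progress"}

def is_terminal(status: str) -> bool:
    """True once a call can no longer change state."""
    if status not in TERMINAL_STATUSES | IN_FLIGHT_STATUSES:
        raise ValueError(f"unknown status: {status}")
    return status in TERMINAL_STATUSES
```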