Tag: developer guide

  • Tutorial – How to Use the Inbound Call Webhook & Dynamic Variables in Retell AI!

    In “Tutorial – How to Use the Inbound Call Webhook & Dynamic Variables in Retell AI!” Henryk Brzozowski shows how Retell AI now lets you pick which voice agent handles inbound calls so you can adapt behavior by time of day, CRM conditions, country code, state, and other factors. This walkthrough explains why that control matters and how it helps you tailor responses and routing for smoother automation.

    The video lays out each step with timestamps—from a brief overview and use-case demo to how the system works, securing the webhook, dynamic variables, and template setup—so you can jump to the segments that matter most to your use case. Follow the practical examples to configure agent selection and integrate the webhook into your workflows with confidence.

    Overview of the Inbound Call Webhook in Retell AI

    The inbound call webhook in Retell AI is the mechanism by which the platform notifies your systems the moment a call arrives and asks you how to handle it. You use this webhook to decide which voice agent should answer, what behavior that agent should exhibit, and whether to continue, transfer, or terminate the call. Think of it as the handoff point where Retell gives you full control to apply business logic and data-driven routing before the conversation begins.

    Purpose and role of the inbound call webhook in Retell AI

    The webhook’s purpose is to let you customize call routing and agent behavior dynamically. Instead of relying on a static configuration inside the Retell dashboard, you receive a payload describing the incoming call and any context (CRM metadata, channel, caller ID, etc.), and you respond with the agent choice and instructions. This enables complex, real-time decisions that reflect your business rules, CRM state, and contextual data.

    High-level flow from call arrival to agent selection

    When a call arrives, Retell invokes your configured webhook with a JSON payload that describes the call. Your endpoint processes that payload, applies your routing logic (time-of-day checks, CRM lookup, geographic rules, etc.), chooses an agent or fallback, and returns a response instructing Retell which voice agent to spin up and which dynamic variables or template to use. Retell then launches the selected agent and begins the voice interaction according to your returned configuration.
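
    The request/response handoff above can be sketched as a pure function (web framework omitted). The payload and response field names here (agent_id, dynamic_variables, action) are illustrative assumptions, not Retell's exact schema — check the Retell docs before relying on them:

```python
def handle_inbound_call(payload: dict) -> dict:
    """Map an inbound-call payload to a routing decision.

    Field names are illustrative -- consult the Retell documentation
    for the exact request and response schemas.
    """
    caller = payload.get("caller", {}).get("number", "unknown")
    metadata = payload.get("metadata", {})

    # Apply business logic: here, a simple campaign-based choice.
    if metadata.get("campaign") == "spring_launch":
        agent_id = "sales_agent"
    else:
        agent_id = "general_agent"

    return {
        "action": "continue",
        "agent_id": agent_id,
        "dynamic_variables": {"caller_number": caller},
    }

decision = handle_inbound_call({
    "call_id": "abc123",
    "caller": {"number": "+15551234567"},
    "metadata": {"campaign": "spring_launch"},
})
```

    In production this function would sit behind your HTTPS endpoint; the routing branch is where CRM lookups and time-of-day checks plug in.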

    How the webhook interacts with voice agents and the Retell platform

    Your webhook never has to host the voice agent itself — it simply tells Retell which agent to instantiate and what context to pass to it. The webhook can return agent ID, template ID, dynamic variables, and other metadata. Retell will merge your response with its internal routing logic, instantiate the chosen voice agent, and pass along the variables to shape prompts, tone, and behavior. If your webhook indicates termination or transfer, Retell will act accordingly (end the call, forward it, or hand it to a fallback).

    Key terminology: webhook, agent, dynamic variable, payload

    • Webhook: an HTTP endpoint you own that Retell calls to request routing instructions for an inbound call.
    • Agent: a Retell voice AI persona or model configuration that handles the conversation (prompts, voice, behavior).
    • Dynamic variable: a key/value that you pass to agents or templates to customize behavior (for example, greeting text, lead score, timezone).
    • Payload: the JSON data Retell sends to your webhook describing the incoming call and associated metadata.

    Use Cases and Demo Scenarios

    This section shows practical situations where the inbound call webhook and dynamic variables add value. You’ll see how to use real-time context and external data to route calls intelligently.

    Common business scenarios where inbound call webhook adds value

    You’ll find the webhook useful for support routing, sales qualification, appointment confirmation, fraud prevention, and localized greetings. For example, you can route high-value prospects to senior sales agents, send calls outside business hours to voicemail or an after-hours agent, or present a customized script based on CRM fields like opportunity stage or product interest.

    Time-of-day routing example and expected behavior

    If a call arrives outside your normal business hours, your webhook can detect the timestamp and return a response that routes the call to an after-hours agent, plays a recorded message, or schedules a callback. Expected behavior: during business hours the call goes to live sales agents; after hours, the caller hears a friendly voice agent that offers call-back options or collects contact info.
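
    One way to implement the time check, assuming timestamps arrive as ISO 8601 UTC strings; the timezone and opening hours below are illustrative placeholders for your own business rules:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

BUSINESS_TZ = ZoneInfo("America/New_York")  # illustrative timezone
OPEN_HOUR, CLOSE_HOUR = 9, 17               # 9am-5pm local, Mon-Fri

def is_business_hours(iso_timestamp: str) -> bool:
    """Convert the call's UTC timestamp to local time and check hours."""
    utc = datetime.fromisoformat(iso_timestamp.replace("Z", "+00:00"))
    local = utc.astimezone(BUSINESS_TZ)
    return local.weekday() < 5 and OPEN_HOUR <= local.hour < CLOSE_HOUR
```

    Your webhook would call this on the payload timestamp and branch to the live-agent or after-hours response accordingly.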

    CRM-driven routing example using contact and opportunity data

    When Retell sends the webhook payload, include or look up the caller’s phone number in your CRM. If the contact has an open opportunity with high value or “hot” status, your webhook can choose a senior or specialized agent and pass dynamic variables like lead score and account name. Expected behavior: high-value leads get premium handling and personalized scripts drawn from your CRM fields.

    Geographic routing example using country code and state

    You can use the caller’s country code or state to route to local-language agents, region-specific teams, or to apply compliance scripts. For instance, callers from a specific country can be routed to a local agent with the appropriate accent and legal disclosures. Expected behavior: localized greetings, time-sensitive offers, and region-specific compliance statements.

    Hybrid scenarios: combining business rules, CRM fields, and time

    Most real-world flows combine multiple factors. Your webhook can first check time-of-day, then consult CRM for lead score, and finally apply geographic rules. For example, during peak hours route VIP customers to a senior agent; outside those hours route VIPs to an on-call specialist or schedule a callback. The webhook lets you express these layered rules and return the appropriate agent and variables.

    How Retell AI Selects Agents

    Understanding agent selection helps you design clear, predictable routing rules.

    Agent types and capabilities in Retell AI

    Retell supports different kinds of agents: scripted assistants, generative conversational agents, language/localization variants, and specialized bots (support, sales, compliance). Each agent has capabilities like voice selection, prompt templates, memory, and access to dynamic variables. You select the right type based on expected conversation complexity and required integrations.

    Decision points that influence agent choice

    Key decision points include call context (caller ID, callee number), time-of-day, CRM status (lead score, opportunity stage), geography (country/state), language preference, and business priorities (VIP escalation). Your webhook evaluates these to pick the best agent.

    Priority, fallback, and conditional agent selection

    You’ll typically implement a priority sequence: try the preferred agent first, then a backup, and finally a fallback agent that handles unexpected cases. Conditionals let you route specific calls (e.g., high-priority clients go to Agent A unless Agent A is busy, then Agent B). In your webhook response you can specify primary and fallback agents and even instruct Retell to retry or route to voicemail.

    How dynamic variables feed into agent selection logic

    Dynamic variables carry the decision context: caller language, lead score, account tier, local time, etc. Your webhook either receives these variables in the inbound payload or computes/fetches them from external systems and returns them to Retell. The agent selection logic reads these variables and maps them to agent IDs, templates, and behavior modifiers.

    Anatomy of the Inbound Call Webhook Payload

    Familiarity with the payload fields ensures you know where to find crucial routing data.

    Typical JSON structure received by your webhook endpoint

    Retell sends a JSON object that usually includes call identifiers, timestamps, caller and callee info, and metadata. A simplified example looks like:

    {
      "call_id": "abc123",
      "timestamp": "2025-01-01T14:30:00Z",
      "caller": { "number": "+15551234567", "name": null },
      "callee": { "number": "+15557654321" },
      "metadata": { "crm_contact_id": "c_789", "campaign": "spring_launch" }
    }

    You’ll parse this payload to extract the fields you need for routing.

    Important fields to read: caller, callee, timestamp, metadata

    The caller.number is your primary key for CRM lookups and geolocation. The callee.number tells you which of your numbers was dialed if you own multiple lines. Timestamp is critical for time-based routing. Metadata often contains Retell-forwarded context, like the source campaign or previously stored dynamic variables.

    Where dynamic variables appear in the payload

    Retell includes dynamic variables under a metadata or dynamic_variables key (naming may vary). These are prepopulated by previous steps in your flow or by the dialing source. Your webhook should inspect these and may augment or override them before returning your response.

    Custom metadata and how Retell forwards it

    If your telephony provider or CRM adds custom tags, Retell will forward them in metadata. That allows you to carry contextual info — like salesperson ID or campaign tags — from the dialing source through to your routing logic. Use these tags for more nuanced agent selection.

    Configuring Your Webhook Endpoint

    Practical requirements and response expectations for your endpoint.

    Required endpoint characteristics (HTTPS, reachable public URL)

    Your endpoint must be a publicly reachable HTTPS URL with a valid certificate. Retell needs to POST data to it in real time, so it must be reachable from the public internet and respond promptly. Local testing can be done with tunneling tools, but production endpoints should be resilient and hosted with redundancy.

    Expected request headers and content types

    Retell will typically send application/json content with headers indicating signature or authentication metadata (for example X-Retell-Signature or X-Retell-Timestamp). Inspect headers for authentication and use standard JSON parsing to handle the body.

    How to respond to Retell to continue or terminate flow

    Your response instructs Retell what to do next. To continue the flow, return a JSON object that includes the selected agent_id, template_id, and any dynamic_variables you want applied. To terminate or transfer, return an action field indicating termination, voicemail, or transfer target. If you can’t decide, return a fallback agent or an explicit error. Retell expects clear action directives.
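
    As a sketch, the continue and terminate responses might be built like this; the action, agent_id, template_id, and reason field names are assumptions to verify against Retell's documented response schema:

```python
import json

def continue_response(agent_id, template_id=None, variables=None):
    # Field names (action, agent_id, template_id, dynamic_variables)
    # are illustrative -- check Retell's docs for the exact schema.
    body = {"action": "continue", "agent_id": agent_id}
    if template_id:
        body["template_id"] = template_id
    if variables:
        body["dynamic_variables"] = variables
    return json.dumps(body)

def terminate_response(reason="out_of_scope"):
    return json.dumps({"action": "terminate", "reason": reason})
```

    Returning a single explicit action field keeps the directive unambiguous, which matters when Retell has to decide between continuing, transferring, or falling back.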

    Recommended response patterns and status codes

    Return HTTP 200 with a well-formed JSON body for successful routing decisions. Use 4xx codes for client-side issues (bad request, unauthorized) and 5xx for server errors. If you return non-2xx, Retell may retry or fall back to default behavior; document and test how your configuration handles retries. Include an action field in the 200 response to avoid ambiguity.

    Local development options: tunneling with ngrok and similar tools

    For development, use ngrok or similar tunneling services to expose your local server to Retell. That lets you iterate quickly and inspect incoming payloads. Remember to secure your dev endpoint with temporary secrets and disable public tunnels after testing.

    Securing the Webhook

    Security is essential — you’re handling PII and controlling call routing.

    Authentication options: shared secret, HMAC signatures, IP allowlist

    Common options include a shared secret used to sign payloads (HMAC), a signature header you validate, and IP allowlists at your firewall to accept requests only from Retell IPs. Use a combination: validate HMAC signatures and maintain an IP allowlist for defense-in-depth.

    How to validate the signature and protect against replay attacks

    Retell can include a timestamp header and an HMAC signature computed over the body and timestamp. You should compute your own HMAC using the shared secret and compare in constant time. To avoid replay, accept signatures only if the timestamp is within an acceptable window (for example, 60 seconds) and maintain a short-lived nonce cache to detect duplicates.
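
    The validation described above can be sketched with Python's standard library. The signing scheme (HMAC-SHA256 over "timestamp.body") is an assumption — match it to whatever Retell actually documents for its signature headers:

```python
import hashlib
import hmac
import time

REPLAY_WINDOW_SECONDS = 60

def verify_signature(secret, body, timestamp, signature_hex, now=None):
    """Validate an HMAC-SHA256 signature over `timestamp.body`.

    The signing scheme is an assumption -- confirm the exact header
    names and signed material against Retell's documentation.
    """
    now = now if now is not None else time.time()
    # Reject stale timestamps to blunt replay attacks.
    if abs(now - float(timestamp)) > REPLAY_WINDOW_SECONDS:
        return False
    expected = hmac.new(secret, f"{timestamp}.".encode() + body,
                        hashlib.sha256).hexdigest()
    # Constant-time comparison prevents timing attacks.
    return hmac.compare_digest(expected, signature_hex)
```

    A nonce cache (e.g., call_id values seen within the window) can be layered on top to catch replays inside the accepted time window.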

    Transport security: TLS configuration and certificate recommendations

    Use strong TLS (currently TLS 1.2 or 1.3) with certificates from a trusted CA. Disable weak ciphers and ensure your server supports OCSP stapling and modern security headers. Regularly test your TLS configuration against best-practice checks.

    Rate-limiting, throttling, and handling abusive traffic

    Implement rate-limiting to avoid being overwhelmed by bursts or malicious traffic. Return a 429 status for client-side throttling and consider exponential backoff on retries. For abusive traffic, block offending IPs and alert your security team.

    Key rotation strategies and secure storage of secrets

    Rotate shared secrets on a schedule (for example quarterly) and keep a migration window to support both old and new keys during transition. Store secrets in secure vaults or environment managers rather than code or plaintext. Log and audit key usage where possible.

    Dynamic Variables: Concepts and Syntax

    Dynamic variables are the glue between your data and agent behavior.

    Definition and purpose of dynamic variables in Retell

    Dynamic variables are runtime key/value pairs that you pass into templates and agents to customize their prompts, behavior, and decisions. They let you personalize greetings, change script branches, and tailor agent tone without creating separate agent configurations.

    Supported variable types and data formats

    Retell supports strings, numbers, booleans, timestamps, and nested JSON-like objects for complex data. Use consistent formats (ISO 8601 for timestamps, E.164 for phone numbers) to avoid parsing errors in templates and agent logic.

    Variable naming conventions and scoping rules

    Use clear, lowercase names with underscores (for example lead_score, caller_country). Keep scope in mind: some variables are global to the call session, while others are template-scoped. Avoid collisions by prefixing custom variables (e.g., crm_lead_score) if Retell reserves common names.

    How to reference dynamic variables in templates and routing rules

    In templates and routing rules you reference variables using the platform’s placeholder syntax (for example {{variable_name}}). Use variables to customize spoken text, conditional branches, and agent selection logic. Ensure you escape or validate values before injecting them into prompts to avoid unexpected behavior.

    Precedence rules when multiple variables overlap

    When a variable is defined in multiple places (payload metadata, webhook response, template defaults), Retell typically applies a precedence order: explicit webhook-returned variables override payload-supplied variables, which override template defaults. Understand and test these precedence rules so you know which value wins.
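
    The precedence order described above (webhook response over payload over template defaults) can be expressed as a simple layered merge, sketched here as a sanity check to test against Retell's actual behavior:

```python
def resolve_variables(template_defaults, payload_variables, webhook_variables):
    """Later dicts win: webhook-returned values override payload values,
    which override template defaults (verify this order against Retell's
    documented behavior before relying on it)."""
    return {**template_defaults, **payload_variables, **webhook_variables}

merged = resolve_variables(
    {"greeting": "Hello", "lead_score": 0},
    {"lead_score": 40},
    {"lead_score": 95},
)
```

    Here lead_score ends up as 95 (the webhook's value) while greeting survives from the template defaults because nothing overrode it.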

    Using Dynamic Variables to Route Calls

    Concrete examples of variable-driven routing.

    Examples: routing by time of day using variables

    Compute local time from timestamp and caller timezone, then set a variable like business_hours = true/false. Use that variable to choose agent A (during hours) or agent B (after hours), and pass a greeting_time variable to the script so the agent can say “Good afternoon” or “Good evening.”

    Examples: routing by CRM status or lead score

    After receiving the call, do a CRM lookup based on the caller number and return variables such as lead_score and opportunity_stage. If lead_score > 80, return agent_id = "senior_sales" and dynamic_variables.crm_lead_score = 95; otherwise return agent_id = "standard_sales". This direct mapping gives you fine control over escalation.

    Examples: routing by caller country code or state

    Parse caller.number to extract the country code and set dynamic_variables.caller_country = "US" or dynamic_variables.caller_state = "CA". Route to a localized agent and pass a template variable to include region-specific compliance text or offers tailored to that geography.

    Combining multiple variables to create complex routing rules

    Create compound rules like: if business_hours AND lead_score > 70 AND caller_country == "US" route to senior_sales; else if business_hours AND lead_score > 70 route to standard_sales; else route to after_hours_handler. Your webhook evaluates these conditions and returns the corresponding agent and variables.
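
    The compound rule above maps directly onto ordered conditionals; the agent names are the same illustrative placeholders used in the text:

```python
def choose_agent(business_hours, lead_score, caller_country):
    """Evaluate layered routing rules in priority order.

    Agent names are illustrative placeholders for your Retell agent IDs.
    """
    if business_hours and lead_score > 70 and caller_country == "US":
        return "senior_sales"
    if business_hours and lead_score > 70:
        return "standard_sales"
    return "after_hours_handler"
```

    Keeping the rules in one ordered function makes precedence explicit and easy to unit-test before wiring it into the webhook.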

    Fallbacks and default variable values for robust routing

    Always provide defaults for critical variables (for example lead_score = 0, caller_country = “UNKNOWN”) so agents can handle missing data. Include fallback agents in your response to ensure calls aren’t dropped if downstream systems fail.

    Templates and Setup in Retell AI

    Templates translate variables and agent logic into conversational behavior.

    How templates use dynamic variables to customize agent behavior

    Templates contain prompts with placeholders that get filled by dynamic variables at runtime. For example, a template greeting might read “Hello {{first_name}}, this is {{agent_name}} calling about your {{product_interest}}.” Variables let one template serve many contexts without duplication.
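
    A minimal substitution routine, assuming double-brace {{name}} syntax (check Retell's template docs for the real placeholder syntax); unknown placeholders are left intact so missing data stays visible rather than rendering as blank text:

```python
import re

def render_template(template, variables):
    """Replace {{name}} placeholders with values from `variables`.

    Unknown placeholders are kept as-is so gaps are easy to spot.
    """
    def sub(match):
        key = match.group(1)
        return str(variables.get(key, match.group(0)))
    return re.sub(r"\{\{(\w+)\}\}", sub, template)

greeting = render_template(
    "Hello {{first_name}}, this is {{agent_name}} calling about your {{product}}.",
    {"first_name": "Ada", "agent_name": "Riley", "product": "demo request"},
)
```

    In practice Retell performs this rendering for you; a local version like this is still handy for testing templates and default values before deployment.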

    Creating reusable templates for common call flows

    Design templates for common flows like lead qualification, appointment confirmation, and support triage. Keep templates modular and parameterized so you can reuse them across agents and campaigns. This reduces duplication and accelerates iteration.

    Configuring agent behavior per template: prompts, voice, tone

    Each template can specify the agent prompt, voice selection, speech rate, and tone. Use variables to fine-tune the pitch and script content for different audiences: friendly or formal, sales or support, concise or verbose.

    Steps to deploy and test a template in Retell

    Create the template, assign it to a test agent, and use staging numbers or ngrok endpoints to simulate inbound calls. Test edge cases (missing variables, long names, unexpected characters) and verify how the agent renders the filled prompts. Iterate until you’re satisfied, then promote the template to production.

    Managing templates across environments (dev, staging, prod)

    Maintain separate templates or version branches per environment. Use naming conventions and version metadata so you know which template is live where. Automate promotion from staging to production with CI/CD practices when possible, and test rollback procedures.

    Conclusion

    A concise wrap-up and next steps to get you production-ready.

    Recap of key steps to implement inbound call webhook and dynamic variables

    To implement this system: expose a secure HTTPS webhook, parse the inbound payload, enrich with CRM and contextual data, evaluate your routing rules, return an agent selection and dynamic variables, and test thoroughly across scenarios. Secure the webhook with signatures and rate-limiting and plan for fallbacks.

    Final best practice checklist before going live

    Before going live, verify: HTTPS with strong TLS, signature verification implemented, replay protection enabled, fallback agent configured, template defaults set, CRM lookups performant, retry behavior tested, rate limits applied, and monitoring/alerting in place for errors and latency.

    Next steps for further customization and optimization

    After launch, iterate on prompts and routing logic based on call outcomes and analytics. Add more granular variables (customer lifetime value, product preferences). Introduce A/B testing of templates and collect agent performance metrics to optimize routing. Automate key rotation and integrate monitoring dashboards.

    Pointers to Retell AI documentation and community resources

    Consult the Retell AI documentation for exact payload formats, header names, and template syntax. Engage with the community and support channels provided by Retell to share patterns, get examples, and learn best practices from other users. These resources will speed your implementation and help you solve edge cases efficiently.


    You’re now equipped to design an inbound call webhook that uses dynamic variables to select agents intelligently and securely. Start with simple rules, test thoroughly, and iterate — you’ll be routing calls with precision and personalization in no time.

    If you want to implement Chat and Voice Agents into your business to reduce missed calls, book more appointments, save time, and make more revenue, book a discovery call here: https://brand.eliteaienterprises.com/widget/bookings/elite-ai-30-min-demo-call

  • How to use the GoHighLevel API v2 | Complete Tutorial

    Let’s walk through “How to use the GoHighLevel API v2 | Complete Tutorial”, a practical guide that highlights Version 2 features missing from platforms like Make.com and shows how to speed up API integration for businesses.

    Let’s outline what to expect: getting started, setting up a GHL app, Make.com authentication for subaccounts and agency accounts, a step-by-step build of voice AI agents that schedule meetings, and clear reasons to skip the Make.com GHL integration.

    Overview of GoHighLevel API v2 and What’s New

    We’ll start with a high-level view so we understand why v2 matters and how it changes our integrations. GoHighLevel API v2 is the platform’s modernized, versioned HTTP API designed to let agencies and developers build deeper, more reliable automations and integrations with CRM, scheduling, pipelines, and workflow capabilities. It expands the surface area of what we can control programmatically and aims to support agency-level patterns like multi-tenant (agency + subaccount) auth, richer scheduling endpoints, and more granular webhook and lifecycle events.

    Explain the purpose and scope of the API v2

    The purpose of API v2 is to provide a single, consistent, versioned interface for manipulating core GHL objects — contacts, appointments, opportunities, pipelines, tags, workflows, and more — while enabling secure agency-level integrations. The scope covers CRUD operations on those resources, scheduling and calendar availability, webhook subscriptions, OAuth app management, and programmatic control over many features that previously required console use. In short, v2 is meant for production-grade integrations for agencies, SaaS, and automation tooling.

    Highlight major differences between API v2 and previous versions

    Compared to earlier versions, v2 focuses on clearer versioning, more predictable schemas, improved pagination/filtering, and richer auth flows for agency/subaccount models. We see more granular scopes, better-defined webhook event sets, and endpoints tailored to scheduling and provider availability. Error responses and pagination are generally more consistent, and there’s an emphasis on agency impersonation patterns — letting an agency app act on behalf of subaccounts more cleanly.

    List features unique to API v2 that other platforms (like Make.com) lack

    API v2 exposes a few agency-centric features that many third-party automation platforms don’t support natively. These include agency-scoped OAuth flows that allow impersonation of subaccounts, detailed calendar and provider availability endpoints for scheduling logic, and certain pipeline/opportunity or conversation APIs that are not always surfaced by general-purpose integrators. v2’s webhook control and subscription model is often more flexible than what GUI-based connectors expose, enabling lower-latency, event-driven architectures.

    Describe common use cases for agencies and automation projects

    We commonly use v2 for automations like automated lead routing, appointment scheduling with real-time availability checks, two-way calendar sync, advanced opportunity management, voice AI scheduling, and custom dashboards that aggregate multiple subaccounts. Agencies build connectors to unify client data, create multi-tenant SaaS offerings, and embed scheduling or messaging experiences into client websites and call flows.

    Summarize limitations or known gaps in v2 to watch for

    While v2 is powerful, it still has gaps to watch: documentation sometimes lags behind feature rollout; certain UI-only features may not yet be exposed; rate limits and batch operations might be constrained; and some endpoints may require extra parameters (account IDs) to target subaccounts. Also expect evolving schemas and occasional breaking changes if you pin to a non-versioned path. We should monitor release notes and design our integration for graceful error handling and retries.

    Prerequisites and Account Requirements

    We’ll cover what account types, permissions, tools, and environment considerations we need before building integrations.

    Identify account types supported by API v2 (agency vs subaccount)

    API v2 supports multi-tenant scenarios: the agency (root) account and its subaccounts (individual client accounts). Agency-level tokens let us manage apps and perform agency-scoped tasks, while subaccount-level tokens (or OAuth authorizations) let us act on behalf of a single client. It’s essential to know which layer we need for each operation because some endpoints are agency-only and others must be executed in the context of a subaccount.

    Required permissions and roles in GoHighLevel to create apps and tokens

    To create apps and manage OAuth credentials we’ll need agency admin privileges or a role with developer/app-management permissions. For subaccount authorizations, the subaccount owner or an admin must consent to the scopes our app requests. We should verify that the roles in the GHL dashboard allow app creation, OAuth redirect registration, and token management before building.

    Needed developer tools: HTTP client, Postman, curl, or SDK

    For development and testing we’ll use a standard HTTP client like curl or Postman to exercise endpoints, debug requests, and inspect responses. For iterative work, Postman or Insomnia helps organize calls and manage environments. If an official SDK exists for v2 we’ll evaluate it, but most teams will build against the REST endpoints directly using whichever language/framework they prefer.

    Network and security considerations (IP allowlists, CORS, firewalls)

    Network-wise, we should run API calls from secure server-side environments — API secrets and client secrets must never be exposed to browsers. If our org uses IP allowlists, we must whitelist our integration IPs in the GoHighLevel dashboard if that feature is enabled. Since most API calls are server-to-server, CORS rarely comes into play; it only matters for browser-based clients, which should never handle client secrets directly. Firewalls and egress rules should allow outbound HTTPS to the API endpoints.

    Recommended environment setup for development (local vs staging)

    We recommend developing locally with environment variables and a staging subaccount to avoid polluting production data. Use a staging agency/subaccount pair to test multi-tenant flows and webhooks. For secrets, use a secret manager or environment variables; for deployment, use a separate staging environment that mirrors production to validate token refresh and webhook handling before going live.

    Registering and Setting Up a GoHighLevel App

    We’ll walk through creating an app in the agency dashboard and the critical app settings to configure.

    How to create a GHL app in the agency dashboard

    In the agency dashboard we’ll go to the developer or integrations area and create a new app. We provide the app name, a concise description, and choose whether it’s public or private. Creating the app registers a client_id and client_secret (or equivalent credentials) that we’ll use for OAuth flows and token exchange.

    Choosing app settings: name, logo, and public information

    Pick a clear, recognizable app name and brand assets (logo, short description) so subaccount admins know who is requesting access. Public-facing information should accurately describe what the app does and which data it will access — this helps speed consent during OAuth flows and builds trust with client admins.

    How to set and validate redirect URIs for OAuth flows

    When we configure OAuth, we must specify the exact redirect URI(s) that the authorization server will accept. These must match the URI(s) our app will actually use. During testing, set local URIs (like an ngrok forwarding URL) only if the dashboard allows them. Redirect URIs should use HTTPS in production and be as specific as possible to avoid open redirect vulnerabilities.

    Understanding OAuth client ID and client secret lifecycle

    The client_id is public; the client_secret is private and must be treated like a password. If the secret is leaked we must rotate it immediately via the app management UI. We should avoid embedding secrets in client-side code, and rotate secrets periodically as part of security hygiene. Some platforms support generating multiple secrets or rotating with zero-downtime — follow the dashboard procedures.

    How to configure scopes and permission requests for your app

    When registering the app, select the minimal set of scopes needed — least privilege. Examples include read:contacts, write:appointments, manage:webhooks, etc. Requesting too many scopes will reduce adoption and increase risk; requesting too few will cause permission errors at runtime. Be explicit in consent screens so admins approve access confidently.

    Authentication Methods: OAuth and API Keys

    We’ll compare the two common authentication patterns and explain steps and best practices for each.

    Overview of OAuth 2.0 vs direct API key usage in GHL v2

    OAuth 2.0 is the recommended method for agency-managed apps and multi-tenant flows because it provides delegated consent and token lifecycles. API keys (or direct tokens) are simpler for single-account server-to-server integrations and can be generated per subaccount in some setups. OAuth supports refresh token rotation and scope-based access, while API keys are typically long-lived and require careful secret handling.

    Step-by-step OAuth flow for agency-managed apps

    The OAuth flow goes like this:

    1. Our app directs an admin to the authorize URL with client_id, redirect_uri, and the requested scopes.
    2. The admin authenticates and consents.
    3. The authorization server returns an authorization code to our redirect URI.
    4. We exchange that code for an access token and refresh token using the client_secret.
    5. We use the access token in an Authorization: Bearer header for API calls.
    6. When the access token expires, we use the refresh token to obtain a new access token and refresh token pair.
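
    The code-for-token exchange can be sketched with the standard library. The token endpoint URL below is an assumption to confirm against the current GoHighLevel API docs; the parameter names follow standard OAuth 2.0:

```python
from urllib.parse import urlencode
from urllib.request import Request

# Assumed token endpoint -- verify against the current GHL API docs.
TOKEN_URL = "https://services.leadconnectorhq.com/oauth/token"

def build_token_request(client_id, client_secret, code, redirect_uri):
    """Build the authorization-code exchange request (step 4 above).

    Parameter names follow standard OAuth 2.0; the endpoint URL is an
    assumption to check against GoHighLevel's documentation.
    """
    body = urlencode({
        "grant_type": "authorization_code",
        "client_id": client_id,
        "client_secret": client_secret,
        "code": code,
        "redirect_uri": redirect_uri,
    }).encode()
    return Request(
        TOKEN_URL,
        data=body,
        headers={"Content-Type": "application/x-www-form-urlencoded"},
        method="POST",
    )
```

    Sending the built request with urlopen (server-side only, never from a browser) returns the access/refresh token pair to store securely.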

    Acquiring API keys or tokens for subaccounts when available

    For certain subaccount-only automations we can generate API keys or account-specific tokens in the subaccount settings. The exact UI varies, but typically an admin can produce a token that we store and use in the Authorization header. These tokens are useful for server-to-server integrations where OAuth consent UX is unnecessary, but they require secure storage and rotation policies.

    Refreshing access tokens: refresh token usage and rotation

    Refresh tokens let us request new access tokens without user interaction. We should implement automatic refresh logic before tokens expire and handle refresh failures gracefully by re-initiating the OAuth consent flow if needed. Where possible, follow refresh token rotation best practices: treat refresh tokens as sensitive, store them securely, and rotate them when they’re used (some providers issue a new refresh token per refresh).
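
    A small helper can encapsulate the refresh-before-expiry and rotation logic described above; the exchange callable stands in for the real HTTP call to the token endpoint and is an assumption of this sketch:

```python
import time

class TokenStore:
    """Refresh helper: swaps in a new access token shortly before expiry.

    `exchange` stands in for the real HTTP call to the token endpoint;
    it must return (access_token, refresh_token, expires_in_seconds).
    """
    def __init__(self, access, refresh, expires_at, exchange):
        self.access, self.refresh = access, refresh
        self.expires_at, self.exchange = expires_at, exchange

    def get_access_token(self, now=None, leeway=60):
        now = now if now is not None else time.time()
        if now >= self.expires_at - leeway:  # refresh slightly early
            access, refresh, ttl = self.exchange(self.refresh)
            # Rotation: keep the *new* refresh token if one is issued.
            self.access, self.refresh = access, refresh
            self.expires_at = now + ttl
        return self.access
```

    If the exchange fails (e.g., the refresh token was revoked), the caller should fall back to re-initiating the full OAuth consent flow.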

    Secure storage and handling of secrets in production

    In production we store client secrets, access tokens, and refresh tokens in a secrets manager or environment variables with restricted access. Never commit secrets to source control. Use role-based access to limit who can retrieve secrets and audit access. Encrypt tokens at rest and transmit them only over HTTPS.

    Authentication for Subaccounts vs Agency Accounts

    We’ll outline how auth differs when we act as an agency versus when we act within a subaccount.

    Differences in auth flows between subaccounts and agency accounts

    Agency auth typically uses OAuth client credentials tied to the agency app and supports impersonation patterns so we can operate across subaccounts. Subaccounts may use their own tokens or OAuth consent where the subaccount admin directly authorizes our app. The agency flow often requires additional headers or parameters to indicate which subaccount we’re targeting.

    How to authorize on behalf of a subaccount using OAuth or account linking

    To authorize on behalf of a subaccount we either obtain separate OAuth consent from that subaccount or use an agency-scoped consent that enables impersonation. Some flows involve account linking: the subaccount owner logs in and consents, linking their account to the agency app. After linking we receive tokens that include the subaccount context or an account identifier we include in API calls.

    Scoped access for agency-level integrations and impersonation patterns

    When we impersonate a subaccount, we limit actions to the specified scopes and subaccount context. Best practice is to request the smallest scope set and, where possible, request per-subaccount consent rather than broad agency-level scopes that grant access to all clients.

    Making calls to subaccount-specific endpoints and including the right headers

    Many endpoints require us to include either an account identifier in the URL or a header (for example, an accountId query param or a dedicated header) to indicate the target subaccount. We must consult endpoint docs to determine how to pass that context. Failing to include the account context commonly results in 403/404 errors or operations applied to the wrong tenant.
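    One way to attach the subaccount context is shown below; the `locationId` parameter name is an assumption to verify against each endpoint's docs, since some endpoints expect a header instead:

```python
def build_request(method, path, access_token, location_id=None, params=None):
    """Assemble a request description with subaccount (location) context attached."""
    headers = {
        "Authorization": f"Bearer {access_token}",
        "Accept": "application/json",
    }
    params = dict(params or {})
    if location_id:
        # Assumed query-param name; some endpoints use a dedicated header instead.
        params["locationId"] = location_id
    return {"method": method, "path": path, "headers": headers, "params": params}

req = build_request("GET", "/v2/contacts", "tok-123", location_id="loc-abc")
```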

    Common pitfalls and how to detect permission errors

    Common pitfalls include expired tokens, insufficient scopes, missing account context, or using an agency token where a subaccount token is required. Detect permission errors by inspecting 401/403 responses, checking error messages for missing scopes, and logging the request/response for debugging. Implement clear retry and re-auth flows so we can recover from auth failures.

    Core API Concepts and Common Endpoints

    We’ll cover basics like base URL, headers, core resources, request body patterns, and relationships.

    Explanation of base URL, versioning, and headers required for v2

    API v2 uses a versioned base path so we can rely on /v2 semantics. We’ll set the base URL in our client and include standard headers: Authorization: Bearer {access_token}, Content-Type: application/json, and Accept: application/json. Some endpoints require additional headers or an account id to target a subaccount. Always confirm the exact base path in the app settings or docs and pin the version to avoid unexpected breaking changes.
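    A thin client along these lines, using only the standard library; the base URL is a placeholder to be confirmed from the docs:

```python
import json
import urllib.request

class GhlClient:
    """Thin client sketch; base URL and paths are placeholders -- pin them from the docs."""
    def __init__(self, base_url, access_token):
        self.base_url = base_url.rstrip("/")
        self.access_token = access_token

    def _headers(self):
        return {
            "Authorization": f"Bearer {self.access_token}",
            "Content-Type": "application/json",
            "Accept": "application/json",
        }

    def build(self, method, path, body=None):
        # Returns a prepared Request; urllib.request.urlopen(req) would send it.
        return urllib.request.Request(
            url=f"{self.base_url}{path}",
            method=method,
            headers=self._headers(),
            data=json.dumps(body).encode() if body is not None else None,
        )

client = GhlClient("https://api.example.com/v2/", "tok-123")
req = client.build("POST", "/contacts", {"email": "a@b.co"})
```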

    Common resources: contacts, appointments, opportunities, pipelines, tags, workflows

    Core resources we’ll use daily are contacts (lead and customer records), appointments (scheduled meetings), opportunities and pipelines (sales pipeline management), tags for segmentation, and workflows for automation. Each resource typically supports CRUD operations and relationships between them (for example, a contact can have appointments and opportunities).

    How to construct request bodies for create, read, update, delete operations

    Create and update operations generally accept JSON payloads containing relevant fields: contact fields (name, email, phone), appointment details (start, end, timezone, provider_id), opportunity attributes (stage, value), and so on. For updates, include the resource ID in the path and send only changed fields if supported. Delete operations usually require the resource ID and respond with status confirmations.
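    A sketch of payload construction for create and update; field names are illustrative and should be matched to the actual v2 contact schema:

```python
# Fields the update endpoint is assumed to accept -- verify against the docs.
ALLOWED_UPDATE_FIELDS = frozenset({"name", "email", "phone", "tags"})

def contact_create_payload(name, email, phone, location_id):
    """Full payload for POST /contacts (illustrative field names)."""
    return {
        "locationId": location_id,
        "name": name,
        "email": email,
        "phone": phone,
    }

def contact_update_payload(changes):
    """For updates, send only changed fields the API accepts; drop everything else."""
    return {k: v for k, v in changes.items() if k in ALLOWED_UPDATE_FIELDS}

patch = contact_update_payload({"email": "new@x.co", "id": "c-1"})
```

    The `id` stays in the URL path for updates, which is why the filter drops it from the body.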

    Filtering, searching, and sorting resources using query parameters

    We’ll use query parameters for filtering, searching, and sorting: common patterns include ?page=, ?limit=, ?sort=, and search or filter params like ?email= or ?createdAfter=. Advanced endpoints often support flexible filter objects or search endpoints that accept complex queries. Use pagination to manage large result sets and avoid pulling everything in one call.
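    A generic pagination loop that keeps page sizes bounded; `fetch_page` stands in for a real GET with `?page=` and `?limit=` params:

```python
def fetch_all(fetch_page, limit=100):
    """Drain a paginated listing; fetch_page(page, limit) -> (items, has_more)."""
    page, items = 1, []
    while True:
        batch, has_more = fetch_page(page, limit)
        items.extend(batch)
        if not has_more:
            return items
        page += 1

# Stubbed pages standing in for e.g. GET /v2/contacts?page=N&limit=2
pages = {1: ([1, 2], True), 2: ([3], False)}
result = fetch_all(lambda page, limit: pages[page], limit=2)
```

    Cursor-based APIs follow the same shape, with the page number replaced by the cursor returned in each response.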

    Understanding relationships between objects (contacts -> appointments -> opportunities)

    Objects are linked: contacts are the primary entity and can be associated with appointments, opportunities, and workflows. When creating an appointment we should reference the contact ID and, where applicable, provider or calendar IDs. When updating an opportunity stage we may reference related contacts and pipeline IDs. Understanding these relationships helps us design consistent payloads and avoid orphaned records.

    Working with Appointments and Scheduling via API

    Scheduling is a common and nuanced area; we’ll cover endpoints, availability, timezone handling, and best practices.

    Endpoints and payloads related to appointments and calendar availability

    Appointments endpoints let us create, update, fetch, and cancel meetings. Payloads commonly include start and end timestamps, timezone, provider (staff) ID, location or meeting link, contact ID, and optional metadata. Availability endpoints allow us to query a provider’s free/busy windows or calendar openings, which is critical to avoid double bookings.

    How to check provider availability and timezones before creating meetings

    Before creating an appointment we query provider availability for the intended time range and convert times to the provider’s timezone. We must respect daylight saving and ensure timestamps are in ISO 8601 with timezone info. Many APIs offer helper endpoints to get available slots; otherwise, we query existing appointments and external calendar busy times to compute free slots.
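    Computing free slots from busy intervals can be sketched with timezone-aware datetimes as below; a fixed UTC-4 offset is used for brevity, whereas production code should use `zoneinfo` so daylight saving is handled correctly:

```python
from datetime import datetime, timedelta, timezone

def free_slots(busy, day_start, day_end, duration):
    """Return open (start, end) windows of at least `duration` between busy intervals."""
    slots, cursor = [], day_start
    for b_start, b_end in sorted(busy):
        if b_start - cursor >= duration:
            slots.append((cursor, b_start))
        cursor = max(cursor, b_end)
    if day_end - cursor >= duration:
        slots.append((cursor, day_end))
    return slots

# Fixed offset for the sketch; prefer zoneinfo.ZoneInfo("America/New_York") in practice.
tz = timezone(timedelta(hours=-4))
start = datetime(2024, 6, 3, 9, 0, tzinfo=tz)
end = datetime(2024, 6, 3, 12, 0, tzinfo=tz)
busy = [(datetime(2024, 6, 3, 10, 0, tzinfo=tz), datetime(2024, 6, 3, 11, 0, tzinfo=tz))]
slots = free_slots(busy, start, end, timedelta(minutes=30))
iso = slots[0][0].isoformat()  # ISO 8601 with offset, ready to send to the API
```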

    Creating, updating, and cancelling appointments programmatically

    To create an appointment we POST a payload with contact, provider, start/end, timezone, and reminders. To update, we PATCH the appointment ID with changed fields. Cancelling is usually a delete or a PATCH that sets status to cancelled and triggers notifications. Always return meaningful responses to calling systems and handle conflicts (e.g., 409) if a slot was taken concurrently.
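    A sketch of the booking payload and the conflict branch; the field names are assumptions to check against the appointments endpoint docs:

```python
def create_appointment_request(contact_id, provider_id, start_iso, end_iso, tz):
    """Illustrative body for POST /appointments -- confirm field names in the docs."""
    return {
        "contactId": contact_id,
        "providerId": provider_id,
        "startTime": start_iso,   # ISO 8601 with timezone offset
        "endTime": end_iso,
        "timezone": tz,
    }

def handle_booking_response(status_code):
    """409 means the slot was taken concurrently; the caller should re-query availability."""
    if status_code in (200, 201):
        return "booked"
    if status_code == 409:
        return "conflict"
    return "error"

payload = create_appointment_request(
    "c-1", "p-1", "2024-06-03T09:00:00-04:00", "2024-06-03T09:30:00-04:00",
    "America/New_York",
)
```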

    Best practices for handling reschedules and host notifications

    For reschedules, we should treat them as updates that preserve history: log the old time, send notifications to hosts and guests, and include a reason if provided. Use idempotency keys where supported to avoid duplicate bookings on retries. Send calendar invites or updates to linked external calendars and notify all attendees of changes.

    Integrating GHL scheduling with external calendar systems

    To sync with external calendars (Google, Outlook), we either leverage built-in calendar integrations or replicate events via APIs. We need to subscribe to external calendar webhooks or polling to detect external changes, reconcile conflicts, and mark GHL appointments as linked. Always store calendar event IDs so we can update/cancel the external event when the GHL appointment changes.

    Voice AI Agent Use Case: Automating Meeting Scheduling

    We’ll describe a practical architecture for using v2 with a voice AI scheduler that handles calls and books meetings.

    High-level architecture for a voice AI scheduler using GHL v2

    Our architecture includes the voice AI engine (speech-to-intent), a middleware server that orchestrates state and API calls to GHL v2, and calendar/webhook components. When a call arrives, the voice agent extracts intent and desired times, the middleware queries provider availability via the API, and then creates an appointment. We log the outcome and notify participants.

    Flow diagram: call -> intent recognition -> calendar query -> appointment creation

    Operationally:

    1. An incoming call triggers voice capture.
    2. The voice AI converts speech to text and identifies intent and slots (date, time, duration, provider).
    3. Middleware queries GHL for availability for the requested provider and time window.
    4. If a slot is available, middleware POSTs the appointment.
    5. Confirmation is returned to the voice agent and a confirmation message is delivered to the caller.
    6. A webhook or the API response triggers follow-up notifications.

    Handling availability conflicts and fallback strategies in conversation

    When conflicts arise, we fall back to offering alternative times: query the next-best slots, propose them in the conversation, or offer to send a booking link. We should implement quick retries, soft holds (if supported), and clear messaging when no slots are available. Always confirm before finalizing and surface human handoff options if the user prefers.

    Mapping voice agent outputs to API payloads and fields

    The voice agent will output structured data (start_time, end_time, timezone, contact info, provider_id, notes). We map those directly into the appointment creation payload fields expected by the API. Validate and normalize phone numbers, names, and timezones before sending, and log the mapped payload for troubleshooting.
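    A sketch of that mapping and normalization step; the target field names are illustrative, and the phone normalization is deliberately rough (a real system should use a dedicated phone-number library):

```python
import re

def normalize_phone(raw, default_country="+1"):
    """Very rough E.164-style normalization for the sketch only."""
    digits = re.sub(r"\D", "", raw)
    if raw.strip().startswith("+"):
        return "+" + digits
    return default_country + digits

def to_appointment_payload(slots):
    """Map the voice agent's parsed slots onto the API's field names (illustrative)."""
    return {
        "contact": {
            "name": slots["name"].strip().title(),
            "phone": normalize_phone(slots["phone"]),
        },
        "providerId": slots["provider_id"],
        "startTime": slots["start_time"],
        "endTime": slots["end_time"],
        "timezone": slots["timezone"],
    }

payload = to_appointment_payload({
    "name": "jane doe", "phone": "(555) 010-2030", "provider_id": "p-1",
    "start_time": "2024-06-03T09:00:00-04:00",
    "end_time": "2024-06-03T09:30:00-04:00",
    "timezone": "America/New_York",
})
```

    Logging this mapped payload alongside the raw parsed slots makes troubleshooting mismatches much faster.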

    Logging, auditing, and verifying booking success back to the voice agent

    After creating a booking, verify the API response and store the appointment ID and status. Send a confirmation message to the voice agent and store an audit trail that includes the original audio, parsed intent, API request/response, and final booking status. This telemetry helps diagnose disputes and improve the voice model.

    Webhooks: Subscribing and Handling Events

    Webhooks drive event-based systems; we’ll cover event selection, verification, and resilient handling.

    Available webhook events in API v2 and typical use cases

    v2 typically offers events for resource create/update/delete (contacts.created, appointments.updated, opportunities.stageChanged, workflows.executed). Typical use cases include syncing contact changes to CRMs, reacting to appointment confirmations/cancellations, and triggering downstream automations when opportunities move stages.

    Setting up webhook endpoints and validating payload signatures

    We’ll register webhook endpoints in the app dashboard and select the events we want. For security, enable signature verification where the API signs each payload with a secret; validate signatures on receipt to ensure authenticity. Use HTTPS, accept only POST, and respond quickly with 2xx to acknowledge.
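    Signature verification is usually an HMAC over the raw request body. The sketch below uses HMAC-SHA256 with a hex digest and a constant-time comparison; the exact header name, algorithm, and encoding vary by provider, so confirm them in the webhook docs:

```python
import hashlib
import hmac

def verify_signature(secret, body, signature_hex):
    """Compare the provider's signature against our own HMAC-SHA256 of the raw body."""
    expected = hmac.new(secret, body, hashlib.sha256).hexdigest()
    # compare_digest avoids timing side channels on the comparison.
    return hmac.compare_digest(expected, signature_hex)

secret = b"whsec_demo"  # the endpoint's signing secret from the dashboard
body = b'{"event":"appointments.updated","id":"evt_1"}'
good_sig = hmac.new(secret, body, hashlib.sha256).hexdigest()
```

    Verify against the raw bytes as received, before any JSON parsing or re-serialization, or the digests will not match.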

    Design patterns for idempotent webhook handlers

    Design handlers to be idempotent: persist an event ID and ignore repeats, use idempotency keys when making downstream calls, and make processing atomic where possible. Store state and make webhook handlers small — delegate longer-running work to background jobs.
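    The dedupe-by-event-ID pattern looks like this; an in-memory set stands in for the durable store (database or Redis) a production handler would need:

```python
processed = set()  # stand-in for a durable store keyed by event ID

def handle_event(event, work):
    """Process each event ID at most once; record it only after work succeeds."""
    event_id = event["id"]
    if event_id in processed:
        return "duplicate"
    work(event)               # delegate real processing (ideally to a background job)
    processed.add(event_id)
    return "processed"

calls = []
first = handle_event({"id": "evt_1"}, work=calls.append)
second = handle_event({"id": "evt_1"}, work=calls.append)  # replay is ignored
```

    Recording the ID only after the work succeeds means a crash mid-processing leaves the event eligible for the platform's retry.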

    Handling retry logic when receiving webhook replays

    Expect retries for transient errors. Ensure handlers return 200 only after successful processing; otherwise return a non-2xx so the platform retries. Build exponential backoff and dead-letter patterns for events that fail repeatedly.
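    For our own downstream retries, an exponential backoff schedule is simple to compute; in production we would also add random jitter to avoid thundering herds:

```python
def backoff_delays(base=1.0, factor=2.0, max_retries=5, cap=30.0):
    """Exponential backoff schedule in seconds, capped; add jitter in production."""
    return [min(cap, base * factor ** i) for i in range(max_retries)]

delays = backoff_delays()
```

    Events that exhaust the schedule go to a dead-letter queue for manual inspection rather than being dropped.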

    Tools to inspect and debug webhook deliveries during development

    During development we can use temporary forwarding tools to inspect payloads and test signature verification, and maintain logs with raw payloads (masked for sensitive data). Use staging webhooks for safe testing and ensure replay handling works before going live.

    Conclusion

    We’ll wrap up with key takeaways and next steps to get building quickly.

    Recap of essential steps to get started with GoHighLevel API v2

    To get started: create and configure an app in the agency dashboard, choose the right auth method (OAuth for multi-tenant, API keys for single-account), implement secure token storage and refresh, test core endpoints for contacts and appointments, and register webhooks for event-driven workflows. Use a staging environment and validate scheduling flows thoroughly.

    Key best practices to follow for security, reliability, and scaling

    Follow least-privilege scopes, store secrets in a secrets manager, implement refresh logic and rotation, design idempotent webhook handlers, and use pagination and batching to respect rate limits. Monitor telemetry and errors, and plan for horizontal scaling of middleware that handles real-time voice or webhook traffic.

    When to prefer direct API integration over third-party platforms

    Prefer direct API integration when we need agency-level impersonation, advanced scheduling and availability logic, lower latency, or features not exposed by third-party connectors. If we require fine-grained control over retry, idempotency, or custom business logic (like voice AI agents), direct integration gives us the flexibility we need.

    Next steps and resources to continue learning and implementing

    Next, we should prototype a small workflow: implement OAuth or API key auth, create a sample contact, query provider availability, and book an appointment. Iterate with telemetry and add webhooks to close the loop. Use Postman or a small script to exercise the end-to-end flow before integrating the voice agent.

    Encouragement to prototype a small workflow and iterate based on telemetry

    Our advice is to build a minimal, focused prototype, even a single flow that answers “can the voice agent book a meeting?”, and to iterate. Telemetry will guide improvements faster than guessing. With v2’s richer capabilities, we can quickly move from proof-of-concept to a resilient, production automation that brings real value to our agency and clients.

    If you want to implement Chat and Voice Agents into your business to reduce missed calls, book more appointments, save time, and make more revenue, book a discovery call here: https://brand.eliteaienterprises.com/widget/bookings/elite-ai-30-min-demo-call

  • Vapi AI Function Calling Explained | Complete tutorial

    Vapi AI Function Calling Explained | Complete tutorial

    This clear walkthrough of “Vapi AI Function Calling Explained | Complete Tutorial” shows how to enable a VAPI assistant to share live data during calls. We cover practical scenarios like scheduling meetings with available agents, along with a step-by-step process for creating and deploying custom functions on the VAPI platform.

    Beginning with environment setup and function schema design, the guide moves through implementation, testing, and deployment to make live integrations reliable. Along the way, we look at examples, troubleshooting tips, and best practices for production-ready AI automation.

    What is Vapi and Its Function Calling Capability

    We will introduce Vapi as the platform that powers conversational assistants with the ability to call external functions, enabling live, actionable responses rather than static text alone. In this section we outline why Vapi is useful and how function calling extends the capabilities of conversational AI to support real-world workflows.

    Definition of Vapi platform and its primary use cases

    Vapi is a platform for building voice and chat assistants that can both converse and perform tasks by invoking external functions. We commonly use it for customer support automation, scheduling and booking, data retrieval and updates, and any scenario where a conversation must trigger an external action or fetch live data.

    Overview of function calling concept in conversational AI

    Function calling means the assistant can decide, during a conversation, to invoke a predefined function with structured inputs and then use the function’s output to continue the dialogue. We view this as the bridge between natural language understanding and deterministic system behavior, where the assistant hands off specific tasks to code endpoints.

    How Vapi function calling differs from simple responses

    Unlike basic responses that are entirely generated from language models, function calling produces deterministic, verifiable outcomes by executing logic or accessing external systems. We can rely on function results for up-to-date information, actions that must be logged, or operations that must adhere to business rules, reducing hallucination and increasing reliability.

    Real-world scenarios enabled by function calling

    We enable scenarios such as scheduling meetings, checking inventory and placing orders, updating CRM records, retrieving personalized account details, and initiating transactions. Function calling lets us create assistants that not only inform users but also act on their behalf in real time.

    Benefits of integrating function calling into Vapi assistants

    By integrating function calling, we gain more accurate and actionable assistants, reduce manual handoffs, ensure tighter control over side effects, and improve user satisfaction with faster, context-aware task completion. We also get better observability and audit trails because function calls are explicit and structured.

    Prerequisites and Setup

    We will describe what accounts, tools, and environments are needed to start building and testing Vapi functions, helping teams avoid common setup pitfalls and choose suitable development approaches.

    Required accounts and access: Vapi account and API keys

    To get started we need a Vapi account and API keys that allow our applications to authenticate and call the Vapi assistant runtime or to register functions. We should ensure the keys have appropriate scopes and that we follow any organizational provisioning policies for production use.

    Recommended developer tools and environment

    We recommend a modern code editor, version control, an HTTP client for testing (like a CLI or GUI tool), and a terminal. We also prefer local containers or serverless emulation for testing. Monitoring, logging, and secret management tools are helpful as we move toward production.

    Languages and frameworks supported or commonly used

    Vapi functions can be implemented in languages commonly used for serverless or API services such as JavaScript/TypeScript (Node.js), Python, and Go. We often pair these with frameworks or runtimes that support HTTP endpoints, structured logging, and easy deployment to serverless platforms or containers.

    Setting up local development vs cloud development

    Locally we set up emulators or stubbed endpoints and mock credentials so we can iterate fast. For cloud development, we provision staging environments, deploy to managed serverless platforms or container hosts, and configure secure networking. We use CI/CD pipelines to move from local tests to cloud staging safely.

    Sample repositories, SDKs, and CLI tools to install

    We clone starter repositories and install Vapi SDKs or CLI tooling to register and test functions, scaffold handlers, and deploy from the command line. We also add language-specific SDKs for faster serialization and validation when building function interfaces.

    Vapi Architecture and Components Relevant to Function Calling

    We will map the architecture components that participate when the assistant triggers a function call so we can understand where to integrate security, logging, and error handling.

    Core Vapi service components involved in calls

    The core components include the assistant runtime that processes conversations, a function registry holding metadata, an execution engine that routes call requests, and observability layers for logs and metrics. We also rely on auth managers to validate and sign outbound requests.

    Assistant runtime and how it invokes functions

    The assistant runtime evaluates user intent and context to decide when to invoke a function. When it chooses to call a function, it builds a structured payload, references the registered function signature, and forwards the request to the function endpoint or to an execution queue, then waits for a response or handles async patterns.

    Function registry and metadata storage

    We maintain a function registry that stores definitions, parameter schemas, endpoint URLs, version info, and permissions metadata. This registry lets the runtime validate calls, present available functions to the model, and enforce policy and routing rules during invocation.

    Event and message flow during a call

    During a call we see a flow: user input → assistant understanding → function selection → payload assembly → function invocation → result return → assistant response generation. Each step emits events we can log for debugging, analytics, and auditing.

    Integration points for external services and webhooks

    Function calls often act as gateways to external services via APIs or webhooks. We integrate through authenticated HTTP endpoints, message queues, or middleware adapters, ensuring we transform and validate data at each integration point to maintain robustness.

    Designing Functions for Vapi

    We will cover design principles for functions so they map cleanly to conversational intents and remain maintainable, testable, and safe to run in production.

    Defining responsibilities and boundaries for functions

    We design functions with single responsibilities: query availability, create appointments, fetch customer records, and so on. By keeping functions focused we minimize coupling, simplify testing, and make it clearer when and why the assistant should call each function.

    Choosing synchronous vs asynchronous function behavior

    We decide synchronous behavior when immediate feedback is required and latency is low; we choose asynchronous behavior when operations are long-running or involve other systems that will callback later. We design conversational flows to let users know when they should expect immediate results versus a follow-up.

    Naming conventions and versioning strategies

    We adopt consistent naming such as noun-verb or domain-action patterns (e.g., meetings.create, agents.lookup) and include versioning in the registry (v1, v2) so we can evolve contracts without breaking existing flows. We keep names readable for both engineers and automated systems.

    Designing idempotent functions and side-effect handling

    We prefer idempotent functions for operations that might be retried, ensuring repeated calls do not create duplicates or inconsistent state. When side effects are unavoidable, we include unique request IDs and use checks or compensating transactions to handle retries safely.

    Structuring payloads for clarity and extensibility

    We structure inputs and outputs with clear fields, typed values, and optional extension sections for future data. We favor flat, human-readable keys for common fields and nested objects only when logically grouped, so the assistant and developers can extend contracts without breaking parsers.

    Function Schema and Interface Definitions

    We will explain how to formally declare the function interfaces so the assistant can validate inputs and outputs and developers can rely on clear contracts.

    Specifying input parameter schemas and types

    We define expected parameters, types (string, integer, datetime, object), required vs optional fields, and acceptable formats. Precise schemas help the assistant serialize user intent into accurate function calls and prevent runtime errors.

    Defining output schemas and expected responses

    We document expected response fields, success indicators, and standardized data shapes so the assistant can interpret results to continue the conversation or present actionable summaries to users. Predictable outputs reduce branching complexity in dialog logic.

    Using JSON Schema or OpenAPI for contract definition

    We use JSON Schema or OpenAPI to formally express parameter and response contracts. These formats let us validate payloads automatically, generate client stubs, and integrate with testing tools to ensure conformance between the assistant and the function endpoints.
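    The idea can be shown with a tiny hand-rolled validator over a JSON-Schema-like contract; in practice a real library (for example `jsonschema`) would enforce the full specification:

```python
# Hypothetical contract for a meetings.create function -- fields are illustrative.
MEETING_CREATE_SCHEMA = {
    "required": ["agentId", "startTime", "attendeeEmail"],
    "types": {"agentId": str, "startTime": str,
              "attendeeEmail": str, "durationMinutes": int},
}

def validate(payload, schema):
    """Return a list of human-readable errors; empty list means the payload conforms."""
    errors = [f"missing: {f}" for f in schema["required"] if f not in payload]
    for field, expected in schema["types"].items():
        if field in payload and not isinstance(payload[field], expected):
            errors.append(f"bad type: {field}")
    return errors

ok = validate({"agentId": "a1", "startTime": "2024-06-03T09:00:00Z",
               "attendeeEmail": "x@y.co", "durationMinutes": 30},
              MEETING_CREATE_SCHEMA)
bad = validate({"agentId": "a1", "durationMinutes": "30"}, MEETING_CREATE_SCHEMA)
```

    Returning a structured error list, rather than raising on the first problem, lets the assistant ask the user for all corrections at once.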

    Validation rules and error response formats

    We specify validation rules, error codes, and structured error responses so failures are machine-readable and human-friendly. By returning consistent error formats, we let the assistant decide whether to ask users for corrections, retry, or escalate to a human.

    Documenting example requests and responses

    We include example request payloads and typical responses in the function documentation to make onboarding and debugging faster. Examples help both developers and the assistant understand edge cases and expected conversational outcomes.

    Authentication and Authorization for Function Calls

    We will cover how to secure function endpoints, manage credentials, and enforce policies so function calls are safe and auditable.

    Options for securing function endpoints (API keys, OAuth, JWT)

    We secure endpoints using API keys for simple services, OAuth for delegated access, or JWTs for signed assertions. We select the method that aligns with our security posture and the requirements of the external systems we integrate.

    How to store and rotate credentials securely

    We store credentials in a secrets manager or environment variables with restricted access, and we implement automated rotation policies. We ensure credentials are never baked into code or logs and that rotation processes are tested to avoid downtime.

    Role-based access control for function invocation

    We apply RBAC so only authorized agents, service accounts, or assistant instances can invoke particular functions. We define roles for developers, staging, and production environments, minimizing accidental access across stages.

    Least-privilege principles for external integrations

    We give functions the minimum permissions needed to perform their tasks, limiting access to specific resources and scopes. This reduces blast radius in case of leaks and makes compliance and auditing simpler.

    Handling multi-tenant auth scenarios and agent accounts

    For multi-tenant apps we scope credentials per tenant and implement agent accounts that act on behalf of users. We map session tokens or tenant IDs to backend credentials securely and ensure data isolation across tenants.

    Connecting Vapi Functions to External Systems

    We will discuss reliability and transformation patterns when bridging the assistant with calendars, CRMs, databases, and messaging systems.

    Common integrations: calendars, CRMs, databases, messaging

    We commonly connect to calendar APIs for scheduling, CRMs for customer data, databases for persistence, and messaging platforms for notifications. Each integration has distinct latency and consistency considerations we account for in function design.

    Design patterns for reliable API calls (retries, timeouts)

    We implement retries with exponential backoff, sensible timeouts, and circuit breakers for flaky services. We surface transient errors to the assistant as retryable, while permanent errors trigger fallback flows or human escalation.

    Transforming and mapping external data to Vapi payloads

    We map external response shapes into our internal payloads, normalizing date formats, time zones, and enumerations. We centralize transformations in adapters so the assistant receives consistent, predictable data regardless of the upstream provider.

    Using middleware or adapters for third-party APIs

    We place middleware layers between Vapi and third-party APIs to handle authentication, rate limiting, data mapping, and common error handling. Adapters make it easier to swap providers and keep function handlers focused on business logic.

    Handling rate limits, batching, and pagination

    We respect provider rate limits by implementing throttling, batching requests when appropriate, and handling pagination with cursors. We design conversational flows to set user expectations when operations require multiple steps or delayed results.

    Step-by-Step Example: Scheduling Meetings with Available Agents

    We present a concrete example of a scheduling workflow so we can see how function calling works end-to-end and what design decisions matter for a practical use case.

    Overview of the scheduling use case and user story

    Our scheduling assistant helps users find and book meetings with available agents. The user asks for a meeting, the assistant checks agent availability, suggests slots, and confirms a booking. We aim for a smooth flow that handles conflicts, time zones, and rescheduling.

    Data model: agents, availability, time zones, and meetings

    We model agents with identifiers, working hours, time zone offsets, and availability rules. Availability data can be calendar-derived or from a scheduling service. Meetings contain participants, start/end times, location or virtual link, and a status field for confirmed or canceled events.

    Designing the scheduling function contract and responses

    We define functions such as agents.lookupAvailability and meetings.create with clear inputs: agentId, preferred windows, attendee info, and timezone. Responses include availableSlots, chosenSlot, meetingId, and conflict reasons. We include metadata for rescheduling and confirmation messages.

    Implementing availability lookup and conflict resolution

    Availability lookup aggregates calendar free/busy queries and business rules, then returns candidate slots. For conflicts we prefer deterministic resolution: propose next available slot or present alternatives. We use idempotent create operations combined with booking locks or optimistic checks to avoid double-booking.
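    The next-best-slot proposal can be sketched as ranking candidate openings by distance from the requested time, an assumption about the ranking policy rather than anything Vapi mandates:

```python
from datetime import datetime

def next_best_slots(requested, candidates, n=2):
    """Propose the n open slots closest to the requested start time."""
    return sorted(candidates,
                  key=lambda slot: abs((slot - requested).total_seconds()))[:n]

requested = datetime(2024, 6, 3, 10, 0)
candidates = [
    datetime(2024, 6, 3, 9, 0),
    datetime(2024, 6, 3, 10, 30),
    datetime(2024, 6, 3, 14, 0),
]
proposals = next_best_slots(requested, candidates)
```

    The assistant would read these proposals back to the user and only call the idempotent create function once one is confirmed.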

    Flow for confirming, rescheduling, and canceling meetings

    The flow starts with slot selection, function call to create the meeting, and confirmation returned to the user. For rescheduling we call meetings.update with the meetingId and new time; for canceling we call meetings.cancel. Each step verifies permissions, sends notifications, and updates downstream systems.

    Implementing Function Logic and Deployment

    We will explain implementation options, testing practices, and deployment strategies so we can reliably run functions in production and iterate safely.

    Choosing hosting: serverless functions vs containerized services

    We choose serverless functions for simple, event-driven handlers with low maintenance, and containerized services for complex stateful logic or higher throughput. Our choice balances cost, scalability, cold-start behavior, and operational control.

    Implementing the function handler, input parsing, and output

    We build handlers to validate inputs against the declared schema, perform business logic, call external APIs, and return structured outputs. We centralize parsing and error handling so the assistant can make clear decisions after the function returns.

    Unit testing functions locally with mocked inputs

    We write unit tests that run locally using mocked inputs and stubs for external services. Tests cover success, validation errors, transient failures, and edge cases. This gives us confidence before integration testing with the assistant runtime.

    Packaging and deploying functions to Vapi or external hosts

    We package functions into deployable artifacts—zip packages for serverless or container images for Kubernetes—and push them through CI/CD pipelines to staging and production. We register function metadata with Vapi so the assistant can discover and call them.

    Versioned deployments and rollback strategies

    We deploy with version tags, blue-green or canary strategies, and metadata indicating compatibility. We keep rollback plans and automated health checks so we can revert changes quickly if a new function version causes failures.
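The promote-or-rollback decision at the heart of a canary rollout can be reduced to a small sketch. The function name and probe interface are hypothetical; a real pipeline would run probes against the deployed endpoint and wire the result into the deployment tool:

```python
def promote_or_rollback(current, candidate, probe, attempts=5, max_failures=1):
    """Run repeated health probes against the candidate version; promote it
    only if failures stay within the threshold, otherwise keep (roll back
    to) the current version."""
    failures = sum(0 if probe(candidate) else 1 for _ in range(attempts))
    return candidate if failures <= max_failures else current
```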

    Conclusion

    We will summarize the main takeaways and suggest next steps to build, test, and iterate on Vapi function calling to unlock richer conversational experiences.

    Recap of the key concepts for Vapi function calling

    We covered what Vapi function calling is, the architecture that supports it, how to design and secure functions, and best practices for integration, testing, and deployment. The core idea is combining conversational intelligence with deterministic function execution for reliable actions.

    Practical next steps to implement and test your first function

    We recommend starting with a small, well-scoped function such as a simple availability lookup, defining clear schemas, implementing local tests, and then registering and invoking it from an assistant in a staging environment to observe behaviors and logs.

    How function calling unlocks richer, data-driven conversations

    By enabling the assistant to call functions, we turn conversations into transactions: live data retrieval, real-world actions, and context-aware decisions. This reduces ambiguity and enhances user satisfaction by bridging understanding and execution.

    Encouragement to iterate, monitor, and refine production flows

    We should iterate quickly, instrument for observability, and refine flows based on real user interactions. Monitoring, error reporting, and user feedback loops help us improve reliability and conversational quality over time.

    Pointers to where to get help and continue learning

    We will rely on internal documentation, team collaboration, and community examples to deepen our knowledge. Practicing with real scenarios, reviewing logs, and sharing patterns within our team accelerates learning and helps us build robust, production-grade Vapi assistants.

    If you want to implement Chat and Voice Agents into your business to reduce missed calls, book more appointments, save time, and make more revenue, book a discovery call here: https://brand.eliteaienterprises.com/widget/bookings/elite-ai-30-min-demo-call
