Blits.ai API Documentation
Integrate powerful conversational AI into your applications with our comprehensive REST and WebSocket APIs
API Overview
The Blits.ai API provides two primary integration methods: a REST API for simple request-response interactions and a WebSocket API for real-time, bidirectional communication with support for streaming audio and live transcription.
REST API
Simple HTTP endpoints for basic conversational interactions. Perfect for traditional request-response patterns.
- Easy to implement with any HTTP client
- Stateless communication
- JSON request/response format
WebSocket API
Real-time bidirectional communication with support for voice streaming, live transcription, and instant responses.
- Real-time streaming audio (TTS/STT)
- Persistent connection for multiple messages
- Live conversation state management
Web Widget
Add a fully featured Blits.ai chat experience to your website with a single script snippet. Configure behavior, branding, and security from the Blits Platform dashboard.
- Drop-in script snippet for any website or web app
- Centralized configuration of theme, behavior, and security in the dashboard
- Supports voice, files, multi-language, and advanced behavior controls
Base URLs
Use the general endpoint or choose a regional endpoint closest to your users for optimal performance.
General Endpoint
REST API: https://platform.blits.ai/api/external/
WebSocket API: wss://platform.blits.ai/
Regional endpoints are also available for specific geographic requirements.
Website Widget
Embed the Blits.ai chat widget on your website using the pre-built script snippet. Configure appearance and behavior from the Blits Platform dashboard.
Embed Chat Widget on Your Website
The easiest way to add Blits.ai to your website is using our pre-built chat widget. Simply copy and paste the code snippet into your HTML.
Quick Embed Code
Add this code snippet to your website's HTML, preferably just before the closing </body> tag:
<!-- Blits.ai Chat Widget -->
<script defer src="https://platform.blits.ai/assets/chat-widget.js"></script>
<script>
window.addEventListener('load', function () {
window.ChatWidget.mount({
token: "YOUR_API_TOKEN",
code: "YOUR_BUBBLE_CODE",
backendUrl: "https://platform.blits.ai",
websocketUrl: "wss://platform.blits.ai",
// Optional: provide your own stable userId instead of the auto-generated one
// userId: "YOUR_USER_ID",
debug: false
});
});
</script>
Chat Widget Configuration
The ChatWidget.mount() function accepts the following configuration options:
| Parameter | Type | Required | Description |
|---|---|---|---|
token | string | Yes | Your API JWT token obtained from Bot Settings |
code | string | Yes | Your chat bubble unique code from Bot Settings → Configuration → Chat Bubble |
backendUrl | string | Yes | Backend URL (use general endpoint or regional: eu, us, uae, au) |
websocketUrl | string | Yes | WebSocket URL (wss://platform.blits.ai or regional) |
userId | string | No | Optional stable user identifier. When set, this ID is used instead of the automatically generated userId for the chat session. |
debug | boolean | No | Enable debug mode for console logging (default: false) |
- Log in to platform.blits.ai
- Navigate to Bot Settings → Configuration → Chat Bubble
- Create a new chat bubble or select an existing one
- Copy the API token and bubble code from the settings
- Paste them into the embed code above
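Before shipping the embed snippet, it can help to sanity-check the configuration object you pass to ChatWidget.mount(). A minimal sketch (the validateWidgetConfig helper is illustrative, not part of the Blits API):

```javascript
// Illustrative helper (not part of the Blits API): checks that a
// ChatWidget.mount() configuration object has the required fields
// listed in the table above before you deploy the snippet.
function validateWidgetConfig(config) {
  const required = ['token', 'code', 'backendUrl', 'websocketUrl'];
  const missing = required.filter((key) => !config[key]);
  if (missing.length > 0) {
    throw new Error('Missing required widget options: ' + missing.join(', '));
  }
  if (!config.websocketUrl.startsWith('wss://')) {
    throw new Error('websocketUrl should use the wss:// scheme');
  }
  return true;
}
```

Run this once during development against the object you intend to pass to mount(); it catches the most common copy-paste mistakes (a missing token or an http:// WebSocket URL).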
Chat Widget Customization
Customize your chat widget's appearance and behavior through the Blits Platform dashboard. Available customization options include:
Appearance
- Custom bot avatar
- Chat bubble icon
- Main color theme (default Blits color or custom HEX)
- Text color customization
- Title and branding
- "Powered by Blits" label on/off
- Custom CSS for advanced styling overrides
Behavior
- Bubble position (top/bottom, left/right/center)
- Screen size (big/small)
- Microphone enable/disable
- File upload button
- Popup messages (enable/disable, timing and delay)
- Start chatbot automatically or only after button click
Security
- Authorized origins whitelist
- Domain restrictions
- CORS configuration
Features
- Voice interaction support
- Multi-language support
- File/image sharing
- Notification messages when the bot answers on initialization
- Button activation mode
Advanced Web Widget Integration (via code)
For advanced integrations you can also use the Blits JavaScript API (as documented in chat-bubble-via-code) to programmatically initialize and render chat instances, override the userId, start a specific subflow, or run multiple widgets on a single page — without using Direct Line.
Basic initialization and render
Load the bundle and initialize the chat widget in JavaScript. You can provide your own userId to reuse a stable identifier instead of the automatically generated one. You can also pass useStagingVersion and an optional flowName when you want to test a specific subflow in staging without impacting your production flow.
<!-- Blits advanced chat bubble integration -->
<script src="https://platform.blits.ai/assets/chat-widget.js"></script>
<script>
window.BlitsBotChat.init({
botId: "YOUR_BOT_ID",
code: "YOUR_BUBBLE_CODE",
// Optional: provide your own stable userId
userId: "YOUR_USER_ID",
// Optional: use the latest staging version of your bot instead of production
useStagingVersion: true,
// Optional: start the conversation from a specific subflow (for testing)
flowName: "YOUR_SUBFLOW_NAME"
});
// Render the widget into a specific element
window.BlitsBotChat.render(
{
botId: "YOUR_BOT_ID",
code: "YOUR_BUBBLE_CODE",
userId: "YOUR_USER_ID",
useStagingVersion: true,
flowName: "YOUR_SUBFLOW_NAME"
},
document.getElementById("blits-chat-container")
);
</script>
Multiple widget instances
You can also run multiple chat widgets on the same page by giving each instance a unique id.
<script>
// First instance
window.BlitsBotChat.init({
id: "first",
botId: "YOUR_BOT_ID",
code: "YOUR_BUBBLE_CODE"
});
// Second instance with its own id
window.BlitsBotChat.init({
id: "second",
botId: "YOUR_BOT_ID",
code: "YOUR_BUBBLE_CODE"
});
// Render a specific instance into a DOM element
window.BlitsBotChat.render(
{
id: "second",
botId: "YOUR_BOT_ID",
code: "YOUR_BUBBLE_CODE"
},
document.getElementById("second-chat-container")
);
</script>
REST API
The Blits REST API allows developers to integrate their own channels with the Blits platform. Use JWT tokens for authentication and interact with bots via voice and text.
https://platform.blits.ai/api/external/
All REST API endpoints use Bearer token authentication with your JWT token.
Introduction
The Blits Client (REST) API allows developers and teams to integrate their own channels with the Blits platform. This enables custom communication with websites, apps and other user channels to interact with the bot via voice and/or text.
The API uses a JWT token for authentication and identification of the bot and tenant. See the Authentication section for more information on how to get your JWT token.
You need an active account on the Blits Platform. Log in at platform.blits.ai/login or your custom endpoint.
Quick Start
- Obtain your JWT token from Bot Settings
- Initialize a new conversation to get a conversation ID
- Get the opening message from the bot (if configured)
- Send text, voice, or file messages using the conversation ID
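The Quick Start steps above can be sketched end to end with fetch (Node 18+). The endpoint paths and payloads come from the sections below; error handling is kept minimal, so treat this as a starting point rather than a production client:

```javascript
// Sketch of the Quick Start flow against the Blits REST API (Node 18+).
const BASE_URL = 'https://platform.blits.ai/api/external';

// Build the headers every REST call needs
function authHeaders(token) {
  return { Authorization: `Bearer ${token}`, 'Content-Type': 'application/json' };
}

async function quickStart(token) {
  // 1. Initialize a conversation and get a conversation ID
  const init = await fetch(`${BASE_URL}/initialize`, {
    method: 'POST',
    headers: authHeaders(token),
    body: JSON.stringify({ userId: 'user-123' }),
  }).then((r) => r.json());

  // 2. Request the opening message(s) with type: "NEW"
  const opening = await fetch(`${BASE_URL}/messages/${init.conversationId}/text`, {
    method: 'POST',
    headers: authHeaders(token),
    body: JSON.stringify({ type: 'NEW' }),
  }).then((r) => r.json());

  // 3. Send a regular text message using the same conversation ID
  const reply = await fetch(`${BASE_URL}/messages/${init.conversationId}/text`, {
    method: 'POST',
    headers: authHeaders(token),
    body: JSON.stringify({ message: 'Hello!' }),
  }).then((r) => r.json());

  return { opening: opening.response, reply: reply.response };
}
```

Call quickStart() with your JWT token from your backend; never embed the token in client-side code (see Security Best Practices).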
API Endpoints
Initialize New Conversation
POST /initialize
Creates a new conversation ID, which is used to interact with the chatbot. The conversation ID is required for all subsequent API calls and maintains conversation context, including prior blocks, following blocks, user messages, and bot responses.
Headers:
Authorization: Bearer {{authToken}}
Content-Type: application/json
Request Body (Optional):
{
"userId": "custom-user-id",
"conversationId": "existing-conversation-id",
"setCurrentLanguageTo": "en"
}
| Parameter | Type | Required | Description |
|---|---|---|---|
userId | string | No | Optional custom user identifier. Random ID generated if not provided |
conversationId | string | No | Optional existing conversation ID to reconnect |
setCurrentLanguageTo | string | No | Language code (e.g., "en", "ar") for multilingual agents |
Response:
{
"conversationId": "63525c75-7734-44b6-8a65-96d1d16ac06b",
"botId": "bot-12345",
"userId": "user-67890",
"currentLanguage": "en",
"availableLanguages": ["en", "ar", "es", "nl"],
"channel": "externalApi",
"chatHistory": []
}
Example:
curl --location --request POST 'https://platform.blits.ai/api/external/initialize' \
--header 'Authorization: Bearer YOUR_JWT_TOKEN' \
--header 'Content-Type: application/json' \
--data '{
"userId": "user-123",
"setCurrentLanguageTo": "en"
}'
Check Conversation is Active
POST /is-active/:conversationId
Check if a conversation is still active and the bot is ready to receive messages.
Headers:
Authorization: Bearer {{authToken}}
Path Parameters:
| Parameter | Description |
|---|---|
conversationId | The conversation ID obtained from the initialize endpoint |
Example:
curl --location --request POST 'https://platform.blits.ai/api/external/is-active/{{conversationId}}' \
--header 'Authorization: Bearer YOUR_JWT_TOKEN'
Requesting Opening Message
POST /messages/:conversationId/text
Use this POST request to retrieve the bot's opening message(s). Set the speech-response header to true if you want to receive audio instead of text.
Headers:
Authorization: Bearer {{authToken}}
Content-Type: application/json
speech-response: false
Request Body:
{
"type": "NEW",
"language": "EN"
}
| Parameter | Type | Required | Description |
|---|---|---|---|
type | string | Yes | Set to "NEW" to request the opening message(s) from the bot |
language | string | No | Optional language code (e.g., "EN", "AR") for the opening message |
Response:
{
"conversationToken": "63525c75-7734-44b6-8a65-96d1d16ac06b",
"expectUserResponse": true,
"response": ["Hi! This is the 'Welcome Message'"]
}
Example:
curl --location 'https://platform.blits.ai/api/external/messages/{{conversationId}}/text' \
--header 'Authorization: Bearer YOUR_JWT_TOKEN' \
--header 'Content-Type: application/json' \
--data '{ "type": "NEW" }'
Sending Text Message
POST /messages/:conversationId/text
Send a text message to the bot and receive a response.
Headers:
Authorization: Bearer {{authToken}}
Content-Type: application/json
speech-response: false
text-response: true
speech-response: true - Receive an audio response in addition to text
text-response: true - Receive a text response (default: true)
Request Body:
{
"message": "Hello, how are you?",
"userId": "user-123",
"language": "EN"
}
| Parameter | Type | Required | Description |
|---|---|---|---|
message | string | Yes | The text message to send to the bot |
userId | string | No | Optional user identifier for tracking |
language | string | No | Optional language code (e.g., "EN", "AR") |
Response:
{
"conversationToken": "63525c75-7734-44b6-8a65-96d1d16ac06b",
"expectUserResponse": true,
"response": ["I'm doing great! How can I help you today?"],
"language": "en"
}
Example:
curl --location 'https://platform.blits.ai/api/external/messages/{{conversationId}}/text' \
--header 'Authorization: Bearer YOUR_JWT_TOKEN' \
--header 'Content-Type: application/json' \
--data '{
"message": "Hello, how are you?",
"userId": "user-123"
}'
Sending Files or Images
POST /messages/:conversationId/text
Use this POST request to send a file or image to the bot. This endpoint only works if the bot is expecting a file or image.
Headers:
Authorization: Bearer {{authToken}}
Content-Type: multipart/form-data
Form Data:
| Field | Required | Description |
|---|---|---|
attachments | Yes | The file to upload (images, documents, etc.) |
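In JavaScript, the multipart body can be built with the standard FormData API (global in Node 18+ and in browsers). A sketch, with buildFileForm and sendFile as illustrative helper names:

```javascript
// Illustrative helpers (not part of the Blits API) for the file endpoint.
// The API expects the file under the "attachments" form field.
function buildFileForm(fileBuffer, filename) {
  const form = new FormData();
  form.append('attachments', new Blob([fileBuffer]), filename);
  return form;
}

async function sendFile(token, conversationId, fileBuffer, filename) {
  // Note: do NOT set Content-Type manually; fetch adds the
  // multipart/form-data header with the correct boundary itself.
  const res = await fetch(
    `https://platform.blits.ai/api/external/messages/${conversationId}/text`,
    {
      method: 'POST',
      headers: { Authorization: `Bearer ${token}` },
      body: buildFileForm(fileBuffer, filename),
    }
  );
  return res.json();
}
```

Read the file with fs.readFileSync (Node) or from a file input (browser) and pass the resulting buffer to sendFile.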
Example:
curl --location 'https://platform.blits.ai/api/external/messages/{{conversationId}}/text' \
--header 'Authorization: Bearer YOUR_JWT_TOKEN' \
--form 'attachments=@"/path/to/file"'
Sending Speech
POST /messages/:conversationId/speech
Send audio input to the bot and receive a text or audio response. Set the speech-response header to true to receive an audio response.
Headers:
Authorization: Bearer {{authToken}}
Content-Type: audio/wav
speech-response: true
Request Body:
Binary audio data (WAV format)
Response:
{
"conversationToken": "63525c75-7734-44b6-8a65-96d1d16ac06b",
"expectUserResponse": true,
"response": ["I understood your voice message!"],
"speechData": "base64-encoded-audio-response"
}
Example:
curl --location 'https://platform.blits.ai/api/external/messages/{{conversationId}}/speech' \
--header 'Authorization: Bearer YOUR_JWT_TOKEN' \
--header 'Content-Type: audio/wav' \
--header 'speech-response: true' \
--data-binary '@/path/to/audio.wav'
Analyze Message
GET /messages/:conversationId/:messageId
Retrieve detailed information about a specific message in the conversation.
Headers:
Authorization: Bearer {{authToken}}
Path Parameters:
| Parameter | Required | Description |
|---|---|---|
conversationId | Yes | The conversation ID |
messageId | Yes | The message ID to analyze |
Example:
curl --location 'https://platform.blits.ai/api/external/messages/{{conversationId}}/{{messageId}}' \
--header 'Authorization: Bearer YOUR_JWT_TOKEN'
Analyze Message (NLU)
GET /analyze-message
Analyze a message using Natural Language Understanding (NLU) to extract intents, entities, and other linguistic features without initiating a conversation.
Headers:
Authorization: Bearer {{authToken}}
Query Parameters:
| Parameter | Required | Description |
|---|---|---|
message | Yes | The message text to analyze (URL encoded) |
Example:
curl --location 'https://platform.blits.ai/api/external/analyze-message?message=Hello%20there' \
--header 'Authorization: Bearer YOUR_JWT_TOKEN'
Get External Settings
GET /settings/:code
Retrieve external configuration settings for the bot using a specific settings code.
Headers:
Authorization: Bearer {{authToken}}
Path Parameters:
| Parameter | Required | Description |
|---|---|---|
code | Yes | The settings code to retrieve |
Example:
curl --location 'https://platform.blits.ai/api/external/settings/{{code}}' \
--header 'Authorization: Bearer YOUR_JWT_TOKEN'
Response Formats
- When speech-response: false (default): returns JSON with a text response
- When speech-response: true and text-response: false: returns audio/mp3 binary
- When speech-response: true and text-response: true: returns multipart/mixed (both JSON and audio)
- Maximum file upload size: 10 MB
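These rules can be expressed as a small dispatch on the response Content-Type. A sketch (parseApiResponse is an illustrative helper; multipart/mixed parsing is left as a stub because it depends on your multipart parser of choice):

```javascript
// Illustrative helper: branch on the Content-Type returned by the REST API,
// following the response-format rules above.
function parseApiResponse(contentType, body) {
  if (contentType.includes('application/json')) {
    // Text-only response: body is a JSON string
    return { kind: 'text', data: JSON.parse(body) };
  }
  if (contentType.includes('audio/mp3') || contentType.includes('audio/mpeg')) {
    // Audio-only response: body is raw binary audio
    return { kind: 'audio', data: body };
  }
  if (contentType.includes('multipart/mixed')) {
    // Combined response: contains both a JSON part and an audio part;
    // feed the raw body to a multipart parser to split them.
    return { kind: 'mixed', data: body };
  }
  throw new Error('Unexpected Content-Type: ' + contentType);
}
```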
WebSocket API
The WebSocket API enables real-time, bidirectional communication between your application and Blits.ai. It's ideal for voice-enabled applications, live chat experiences, and streaming responses.
Connection
Connect to the WebSocket server at the general endpoint:
wss://platform.blits.ai/
Once connected, you'll receive a connection acknowledgment before you can send commands. Regional endpoints are also available; see the API Endpoints section for details.
Available Commands
initialize
Required First
Initialize a new conversation session. This must be the first command sent after connecting.
Request:
{
"command": "initialize",
"authToken": "{{authToken}}",
"conversationId": "optional-existing-conversation-id",
"userId": "optional-user-identifier",
"environment": "production",
"setCurrentLanguageTo": "en",
"tts": {
"streaming": true
},
"requestId": "optional-request-id"
}
Parameters:
| Parameter | Type | Required | Description |
|---|---|---|---|
command | string | Yes | Must be "initialize" |
authToken | string | Yes | Your API authentication token (JWT) |
conversationId | string | No | Reconnect to existing conversation. If omitted, a new conversation is created |
userId | string | No | Custom user identifier. Random hash generated if not provided |
environment | string | No | "test" or "production" (default: "production") |
setCurrentLanguageTo | string | No | Language code for speech recognition and TTS (e.g., "en", "ar", "nl") |
tts.streaming | boolean | No | Enable chunk encoding for streaming TTS responses |
requestId | string | No | Optional identifier to correlate requests with responses |
Response:
{
"command": "initialize",
"conversationId": "c2dc951f-9631-44ef-9826-abcd1234efgh",
"botId": "bot-12345",
"userId": "user-67890",
"channel": "API",
"currentLanguage": "en",
"availableLanguages": ["en", "ar", "es", "nl"],
"chatHistory": [],
"supportsSttStreaming": true,
"requestId": "optional-request-id"
}
message
Core Command
Send a text or voice message to the AI and receive a response.
Request (Text Message):
{
"command": "message",
"authToken": "{{authToken}}",
"conversationId": "{{conversationId}}",
"message": "Hello, how can you help me?",
"speechResponse": false,
"type": "NEW",
"requestId": "msg-001"
}
Request (Voice Message):
{
"command": "message",
"authToken": "{{authToken}}",
"conversationId": "{{conversationId}}",
"speechData": "base64-encoded-audio-data",
"speechResponse": true,
"requestId": "msg-002"
}
Key Parameters:
| Parameter | Type | Description |
|---|---|---|
message | string | Text message to send (required if speechData not provided) |
speechData | string | Base64-encoded audio file (WAV/MP3) - used instead of message |
speechResponse | boolean | Set to true to receive audio response (speechData in response) |
type | string | Set to "NEW" to trigger welcome dialog |
fileData | string | Base64-encoded file attachment |
fileExtension | string | File extension for fileData (pdf, jpg, png, etc.) |
location | array | GPS coordinates [latitude, longitude] |
llmAdditionalInput | string | Additional context for LLM processing |
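Since message requires either message or speechData but not both, a small payload builder can enforce that rule before anything goes over the socket. A sketch (buildMessagePayload is an illustrative helper, not part of the Blits API):

```javascript
// Illustrative builder for the "message" command: enforces that exactly
// one of text / base64 audio is provided, per the parameter table above.
function buildMessagePayload({
  authToken,
  conversationId,
  text,
  speechBase64,
  speechResponse = false,
  requestId,
}) {
  // Exactly one of the two inputs must be set
  if (!text === !speechBase64) {
    throw new Error('Provide exactly one of text or speechBase64');
  }
  const payload = { command: 'message', authToken, conversationId, speechResponse, requestId };
  if (text) {
    payload.message = text;
  } else {
    payload.speechData = speechBase64;
  }
  return payload;
}
```

Send the result with ws.send(JSON.stringify(payload)), as in the Integration Examples section.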
Response:
{
"command": "message",
"conversationId": "{{conversationId}}",
"response": "I can help you with...",
"language": "en-us",
"emotions": ["neutral"],
"speechData": "base64-encoded-audio-response",
"isPartOfMainContent": true,
"requestId": "msg-001"
}
streamUserSpeech
Streaming STT
Stream audio chunks for real-time speech-to-text transcription.
Request:
{
"command": "streamUserSpeech",
"authToken": "{{authToken}}",
"conversationId": "{{conversationId}}",
"chunk": "base64-encoded-audio-chunk",
"isFinal": false,
"requestId": "stream-001"
}
Response (Real-time Transcription):
{
"command": "streamUserSpeech",
"conversationId": "{{conversationId}}",
"transcript": "Hello, I would like...",
"isFinal": false
}Note: Set isFinal: true in the request to signal the end of the audio stream. The response will include the final transcription with isFinal: true.
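Splitting a recorded buffer into streamUserSpeech frames then looks like the sketch below. The chunk size is an assumption (tune it to your audio format and latency needs); makeSpeechFrames is an illustrative helper, not part of the Blits API:

```javascript
// Illustrative helper: split an audio buffer into base64-encoded
// streamUserSpeech frames, marking the last one with isFinal: true.
function makeSpeechFrames(authToken, conversationId, audioBuffer, chunkSize = 4096) {
  const frames = [];
  for (let offset = 0; offset < audioBuffer.length; offset += chunkSize) {
    const chunk = audioBuffer.subarray(offset, offset + chunkSize);
    frames.push({
      command: 'streamUserSpeech',
      authToken,
      conversationId,
      chunk: chunk.toString('base64'),
      // The last chunk signals the end of the audio stream
      isFinal: offset + chunkSize >= audioBuffer.length,
    });
  }
  return frames;
}
```

Each frame is then sent in order with ws.send(JSON.stringify(frame)); the server replies with interim transcripts and a final one once it sees isFinal: true.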
isActive
Check if the bot is actively processing a message or waiting for user input.
Request:
{
"command": "isActive",
"authToken": "{{authToken}}",
"conversationId": "{{conversationId}}",
"requestId": "active-check-001"
}
Response:
{
"command": "isActive",
"active": true,
"requestId": "active-check-001"
}
loadHistory
Retrieve conversation history for a specific user.
Request:
{
"command": "loadHistory",
"authToken": "{{authToken}}",
"conversationId": "{{conversationId}}",
"userId": "user-12345",
"requestId": "history-001"
}
Response:
{
"command": "loadHistory",
"conversationId": "{{conversationId}}",
"chatHistory": [
{
"type": "user",
"message": "Hello",
"timestamp": "2024-01-15T10:30:00Z"
},
{
"type": "bot",
"message": "Hi! How can I help you?",
"timestamp": "2024-01-15T10:30:01Z"
}
]
}
deleteHistory
Delete conversation history for a specific user and/or conversation.
Request:
{
"command": "deleteHistory",
"authToken": "{{authToken}}",
"conversationId": "{{conversationId}}",
"userId": "user-12345",
"requestId": "delete-001"
}
Response:
{
"command": "deleteHistory",
"conversationId": "{{conversationId}}",
"userId": "user-12345",
"requestId": "delete-001"
}
stateVariables
Update conversation state variables for context-aware interactions.
Request:
{
"command": "stateVariables",
"authToken": "{{authToken}}",
"conversationId": "{{conversationId}}",
"userId": "user-12345",
"variables": {
"customerName": "John Doe",
"accountNumber": "ACC-123456",
"preferredLanguage": "en"
},
"requestId": "vars-001"
}
Response:
{
"command": "stateVariables"
}
debug
Development
Retrieve detailed conversation state and bot settings for debugging purposes.
Request:
{
"command": "debug",
"authToken": "{{authToken}}",
"conversationId": "{{conversationId}}",
"userId": "user-12345",
"requestId": "debug-001"
}
Response:
{
"command": "debug",
"conversationId": "{{conversationId}}",
"state": {
"id": "{{conversationId}}",
"botId": "bot-12345",
"currentLanguage": "en",
"variables": {},
"_ts": 1705315200
},
"botSettings": {
"speechToTextStreaming": true,
"splitSpeechAndText": false
}
}
Special Response Events
recognizeSpeech
Automatically sent when voice input is recognized, showing the transcribed text before the AI generates a response.
Response:
{
"command": "recognizeSpeech",
"requestMessage": "What is the weather today?",
"requestId": "msg-002"
}
generateSpeech / generateSpeechDone
Used for streaming TTS responses (when tts.streaming: true in initialize).
Speech Chunk Response:
{
"command": "generateSpeech",
"speechDataChunk": "base64-audio-chunk",
"format": "mp3",
"responseId": "tts-123",
"requestId": "msg-001"
}
Completion Response:
{
"command": "generateSpeechDone"
}
statusUpdate
Provides real-time status updates during processing.
Response:
{
"command": "statusUpdate",
"response": {
"status": "processing",
"message": "Analyzing your request..."
}
}
Error Handling
Errors are returned as JSON objects with descriptive messages:
{
"error": "External API timed out",
"errorHint": "conversationId expired. Initialize a new conversation",
"active": false,
"conversationId": "{{conversationId}}"
}
API Endpoints
Blits.ai provides multiple endpoints to ensure optimal performance for users worldwide. Choose the endpoint closest to your users or use the general endpoint for automatic routing.
General Endpoint
The general endpoint automatically routes your requests to the optimal server based on your location. This is the recommended endpoint for most use cases.
REST API
https://platform.blits.ai/api/external/
WebSocket API
wss://platform.blits.ai/
Regional Endpoints
For specific geographic requirements or data residency compliance, you can use regional endpoints to ensure your data stays within a specific region.
🇪🇺 Europe
Hosted in EU data centers
https://eu.platform.blits.ai/api/external/
wss://eu.platform.blits.ai/
🇺🇸 United States
Hosted in US data centers
https://us.platform.blits.ai/api/external/
wss://us.platform.blits.ai/
🇦🇪 UAE / Middle East
Hosted in UAE data centers
https://uae.platform.blits.ai/api/external/
wss://uae.platform.blits.ai/
🇦🇺 Australia
Hosted in Australian data centers
https://au.platform.blits.ai/api/external/
wss://au.platform.blits.ai/
Performance Tips
- Use the general endpoint for automatic routing and best overall performance
- Choose a regional endpoint if you have specific data residency requirements
- Regional endpoints ensure data stays within the specified geographic region
- All endpoints provide the same functionality and API compatibility
- Latency is typically lowest when using the endpoint closest to your users
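Since all endpoints share the same URL shape, endpoint selection reduces to a one-line helper. A sketch built from the endpoint table above (blitsEndpoints is an illustrative name; 'general' maps to the automatically routed endpoint):

```javascript
// Illustrative helper: build REST and WebSocket URLs for a given region
// ('general', 'eu', 'us', 'uae', or 'au'), per the endpoint table above.
function blitsEndpoints(region = 'general') {
  const host = region === 'general' ? 'platform.blits.ai' : `${region}.platform.blits.ai`;
  return {
    rest: `https://${host}/api/external/`,
    ws: `wss://${host}/`,
  };
}
```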
Authentication
All API requests require authentication using JWT (JSON Web Token). Include your authentication token in every request.
Getting Your API Token
- Log in to your Blits.ai dashboard
- Navigate to Bot Settings → Configuration → API Channel
- Press the ... button and select "settings"
- Generate a new API key or copy an existing one
- Store your API key securely
Using Authentication
REST API
Include the token in the Authorization header:
Authorization: Bearer YOUR_JWT_TOKEN
WebSocket API
Include the token in the authToken field of every command:
{
"command": "initialize",
"authToken": "YOUR_JWT_TOKEN",
...
}
Security Best Practices
- Never expose your API token in client-side code
- Always make API calls from your backend server
- Rotate your API keys regularly
- Use environment variables to store sensitive credentials
- Monitor your API usage for suspicious activity
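For example, on your backend the token can come from an environment variable instead of source code. A minimal sketch (the BLITS_API_TOKEN variable name is illustrative, not prescribed by Blits):

```javascript
// Illustrative sketch: read the JWT from an environment variable so it
// never appears in source control or client-side bundles.
function getBlitsToken(env = process.env) {
  const token = env.BLITS_API_TOKEN;
  if (!token) {
    throw new Error('BLITS_API_TOKEN is not set');
  }
  return token;
}
```

Set the variable in your deployment environment (e.g. `export BLITS_API_TOKEN=...`) and fail fast at startup when it is missing, rather than discovering it on the first API call.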
Integration Examples
Code examples to help you get started with the Blits.ai API.
JavaScript / Node.js WebSocket Client
const WebSocket = require('ws');
const API_TOKEN = 'your-jwt-token-here';
// Use general endpoint (recommended) or choose a regional endpoint
const WS_URL = 'wss://platform.blits.ai/'; // or eu, us, uae, au
// Connect to WebSocket
const ws = new WebSocket(WS_URL);
ws.on('open', () => {
console.log('Connected to Blits.ai');
// Initialize conversation
ws.send(JSON.stringify({
command: 'initialize',
authToken: API_TOKEN,
userId: 'user-123',
setCurrentLanguageTo: 'en',
tts: { streaming: false }
}));
});
ws.on('message', (data) => {
const response = JSON.parse(data.toString());
console.log('Received:', response);
if (response.command === 'initialize') {
console.log('Conversation ID:', response.conversationId);
// Send a message
ws.send(JSON.stringify({
command: 'message',
authToken: API_TOKEN,
conversationId: response.conversationId,
message: 'Hello! How are you today?',
speechResponse: false,
requestId: 'msg-001'
}));
}
if (response.command === 'message') {
console.log('Bot response:', response.response);
}
});
ws.on('error', (error) => {
console.error('WebSocket error:', error);
});
ws.on('close', () => {
console.log('Disconnected from Blits.ai');
});
Python WebSocket Client
import asyncio
import websockets
import json
API_TOKEN = 'your-jwt-token-here'
# Use general endpoint (recommended) or choose a regional endpoint
WS_URL = 'wss://platform.blits.ai/' # or eu, us, uae, au
async def chat_with_blits():
async with websockets.connect(WS_URL) as ws:
print('Connected to Blits.ai')
# Wait for connection acknowledgment
connect_msg = await ws.recv()
print('Connection:', json.loads(connect_msg))
# Initialize conversation
await ws.send(json.dumps({
'command': 'initialize',
'authToken': API_TOKEN,
'userId': 'user-123',
'setCurrentLanguageTo': 'en'
}))
init_response = json.loads(await ws.recv())
print('Initialized:', init_response)
conversation_id = init_response['conversationId']
# Send a message
await ws.send(json.dumps({
'command': 'message',
'authToken': API_TOKEN,
'conversationId': conversation_id,
'message': 'Hello! How are you today?',
'speechResponse': False,
'requestId': 'msg-001'
}))
message_response = json.loads(await ws.recv())
print('Bot response:', message_response['response'])
# Run the async function
asyncio.run(chat_with_blits())
Browser JavaScript Example
class BlitsClient {
constructor(apiToken, endpoint = 'platform.blits.ai') {
this.apiToken = apiToken;
this.ws = null;
this.conversationId = null;
this.endpoint = endpoint;
this.messageHandlers = []; // registered by initialize() and sendMessage()
}
connect() {
return new Promise((resolve, reject) => {
// Use general endpoint or regional: eu.platform.blits.ai, us.platform.blits.ai, etc.
const wsUrl = `wss://${this.endpoint}/`;
this.ws = new WebSocket(wsUrl);
this.ws.onopen = () => {
console.log('Connected to Blits.ai');
resolve();
};
this.ws.onmessage = (event) => {
const response = JSON.parse(event.data);
this.handleMessage(response);
};
this.ws.onerror = (error) => {
console.error('WebSocket error:', error);
reject(error);
};
});
}
async initialize(userId = 'web-user') {
return new Promise((resolve) => {
const initHandler = (response) => {
if (response.command === 'initialize') {
this.conversationId = response.conversationId;
resolve(response);
}
};
this.messageHandlers.push(initHandler);
this.ws.send(JSON.stringify({
command: 'initialize',
authToken: this.apiToken,
userId: userId,
setCurrentLanguageTo: 'en'
}));
});
}
sendMessage(text, options = {}) {
return new Promise((resolve) => {
const messageHandler = (response) => {
if (response.command === 'message' &&
response.requestId === options.requestId) {
resolve(response);
}
};
this.messageHandlers.push(messageHandler);
this.ws.send(JSON.stringify({
command: 'message',
authToken: this.apiToken,
conversationId: this.conversationId,
message: text,
speechResponse: options.speechResponse || false,
requestId: options.requestId || Date.now().toString()
}));
});
}
handleMessage(response) {
console.log('Received:', response);
this.messageHandlers.forEach(handler => handler(response));
}
}
// Usage
// Use general endpoint (recommended) or specify regional endpoint
const client = new BlitsClient('your-jwt-token', 'platform.blits.ai');
// Or use regional: 'eu.platform.blits.ai', 'us.platform.blits.ai', etc.
await client.connect();
await client.initialize('user-123');
const response = await client.sendMessage('Hello!');
console.log('Bot said:', response.response);
Voice Message Example (Browser)
// Record audio and send to Blits.ai
async function recordAndSend() {
// Get microphone access
const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
const mediaRecorder = new MediaRecorder(stream);
const audioChunks = [];
mediaRecorder.ondataavailable = (event) => {
audioChunks.push(event.data);
};
mediaRecorder.onstop = async () => {
// Create blob from chunks (note: MediaRecorder typically records
// webm/ogg; transcode first if your bot expects real WAV audio)
const audioBlob = new Blob(audioChunks, { type: 'audio/wav' });
// Convert to base64
const reader = new FileReader();
reader.readAsDataURL(audioBlob);
reader.onloadend = () => {
const base64Audio = reader.result.split(',')[1];
// Send to Blits.ai
ws.send(JSON.stringify({
command: 'message',
authToken: API_TOKEN,
conversationId: conversationId,
speechData: base64Audio,
speechResponse: true,
requestId: 'voice-msg-001'
}));
};
};
// Start recording
mediaRecorder.start();
console.log('Recording...');
// Stop after 5 seconds (or when user stops)
setTimeout(() => {
mediaRecorder.stop();
console.log('Recording stopped');
stream.getTracks().forEach(track => track.stop());
}, 5000);
}
// Handle voice response
ws.onmessage = (event) => {
const response = JSON.parse(event.data);
if (response.command === 'message' && response.speechData) {
// Play audio response
const audio = new Audio('data:audio/mp3;base64,' + response.speechData);
audio.play();
// Also show text
console.log('Bot said:', response.response);
}
};
Ready to Get Started?
Sign up for a Blits.ai account to get your API credentials and start building.