Custom Ingestion API
Push your own customer & activity data into MadKudu in real time, then use it everywhere—Sales Copilot, AI Scoring, MadKudu MCP, etc.
The Ingestion API lets you send custom customer or activity data into MadKudu in real time. It's ideal for ingesting product usage events, marketing actions, or any proprietary or purchased signals, especially when native integrations (e.g. Salesforce, HubSpot) are not an option.
Whether the data comes from your app, from custom marketing systems, or from a third-party source, this API gives you a single, secure, schema-flexible way to get it into the MadKudu pipeline.

When to use the Ingestion API
Use this API when you need to send data to MadKudu:
that can't be synced through existing native integrations (see integration list)
where going through Snowflake, BigQuery, or S3 is a heavier lift
If your data is already piped into Segment or your warehouse, we recommend starting with those integrations.
Supported data objects
contact event: an action or behavior (e.g. “signed_up”) tied to a person
account event: an action or behavior (e.g. “new funding round”) tied to a company
contact: an update to person-level attributes
account: an update to company/account-level attributes
How the Ingestion API works
There are 2 ingestion modes to choose from:
Lightweight JSON file upload directly to the API (payloads up to 1 MB)
Heavyweight JSONL file upload to an S3 URL (gzipped file up to 2 GB)
Format
Each ingestion mode expects the same record format. Files must follow the schema defined in the Upload JSON Schema Reference or they will be rejected.
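To make the record format concrete, here is a minimal sketch (Python, standard library only) that takes the account record used in the examples below and serializes it for each mode: as part of a JSON payload for the lightweight upload, and as one gzipped JSONL line for the heavyweight upload. Field names come from the account example on this page; the Upload JSON Schema Reference remains the authoritative definition of each stream.

import gzip
import json

# One record following the "account" stream shape used in the examples below.
record = {
    "type": "account",
    "account_id": "acme-corp",
    "domain": "acme.com",
    "account_properties": {
        "name": "Acme Corporation",
        "industry": "Technology",
        "employees": 500,
    },
}

# Mode 1 (lightweight): records are sent as a JSON array under "data".
json_payload = json.dumps({"stream": "account", "data": [record]})

# Mode 2 (heavyweight): one JSON object per line (JSONL), gzipped before upload.
with gzip.open("accounts.jsonl.gz", "wt", encoding="utf-8") as f:
    f.write(json.dumps(record) + "\n")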
Ingestion mode 1: Lightweight JSON upload to API
Use this mode for small, frequent uploads (e.g. real-time or near real-time updates).
Payload limit: Up to 1 MB (uncompressed)
Content type: application/json
Upload method: POST https://madapi.madkudu.com/ingestion/upload-json
Recommended for:
Testing integrations
Event-based ingestion
Low-latency use cases
Upload JSON data directly through the API.
Request model for direct JSON upload:
stream: stream type for the data (one of the accepted streams for customer data)
data: data array; contents depend on stream type
POST /ingestion/upload-json HTTP/1.1
Host: madapi.madkudu.com
x-api-key: YOUR_API_KEY
Content-Type: application/json
Accept: */*
Content-Length: 182
{
  "stream": "account",
  "data": [
    {
      "type": "account",
      "account_id": "acme-corp",
      "domain": "acme.com",
      "account_properties": {
        "name": "Acme Corporation",
        "industry": "Technology",
        "employees": 500
      }
    }
  ]
}
Response: The request has succeeded.
{
  "status": "success",
  "message": "Data uploaded successfully",
  "file_key": "data/tenants/objects/contact_event/source_system=dataapi/tenant=3303/dt=2025-06-27-07-26/a84118bb-1557-42c4-819c-241c2887aa2d.jsonl"
}
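For reference, the same lightweight upload can be made from Python with the requests library; this is a sketch that mirrors the HTTP example above, with YOUR_API_KEY as a placeholder for your actual key.

import requests

API_KEY = "YOUR_API_KEY"  # placeholder: your MadKudu API key

payload = {
    "stream": "account",
    "data": [
        {
            "type": "account",
            "account_id": "acme-corp",
            "domain": "acme.com",
            "account_properties": {
                "name": "Acme Corporation",
                "industry": "Technology",
                "employees": 500,
            },
        }
    ],
}

# POST the payload to the lightweight ingestion endpoint
response = requests.post(
    "https://madapi.madkudu.com/ingestion/upload-json",
    headers={"x-api-key": API_KEY, "Content-Type": "application/json"},
    json=payload,
)
print(response.status_code, response.json())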
Ingestion mode 2: Heavyweight JSONL file upload to S3
Use this mode for large batch uploads (e.g. daily exports or backfills).
Your input file must be in JSONL format, then compressed using gzip.
Payload limit: Up to 2 GB (compressed as .jsonl.gz)
Upload method:
1. Request a pre-signed S3 URL from the API: POST /ingestion/generate-upload-url
2. Upload the .jsonl.gz file directly to the S3 URL with the command line, the AWS CLI, Postman, or another script. Example request:
curl -X PUT \
-T your-file.jsonl.gz \
"https://madkudu-ingestion.s3.amazonaws.com/tmp/your-upload-id.json.gz?...[signature]"
3. Get confirmation of the upload from the API: POST /ingestion/confirm-upload
Recommended for:
Large CRM exports
Historical event backfills
Low-frequency, high-volume ingestion
Generate a presigned URL for file upload.
Request model:
stream: stream type for the data (one of the accepted streams for customer data)
content_type: content type of the file to upload (one of the content types for customer data)
content_encoding: content encoding for the file (one of the content encodings for customer data)
POST /ingestion/generate-upload-url HTTP/1.1
Host: madapi.madkudu.com
x-api-key: YOUR_API_KEY
Content-Type: application/json
Accept: */*
Content-Length: 81
{
  "stream": "account",
  "content_type": "application/jsonl",
  "content_encoding": "gzip"
}
Response: The request has succeeded.
{
"upload_url": "https://partner-data-prod-data-madkudu.s3-us-west-2.amazonaws.com/data/tenants/objects/example-stream/source_system%3Ddataapi/tenant%3D3303/dt%3D2025-06-27-07-24/de27963b-2bfa-4701-9204-a21467748ce3.json?AWSAccessKeyId=ASIAS64Z634EVSBAIBTO&Signature=GJeGQdnIrVbR0sW9rZmE1m%2BOL1c%3D&content-type=application%2Fjson&x-amz-security-token=IQoJb3JpZ2luX2VjEHgaCXVzLXdlc3QtMiJHMEUCIQDFbBJX%2FIO6GhG%2BEHG837LNBJXIkvVCfZNMfwo7ypSCKwIgOCGI89cb99BLAz2ZH8wjWVTjmjXO5W6MJp2V7PFQo3kq5wMIcBAAGgwyMDM3OTY5NjMwODEiDLR5h0aBdKvuUFxYZSrEA9%2BwDQWmpiTEm8GRXFPap2nnpp4flSkMthrVr8rXif8w7YcqJW2ifzLu3zduPrCLCd%2BygSm7IVdMMEIDW2Ts08SIjIggf4tdAiMXDP69ACLletNEuBXIE0EUjqW43ZijoljM0n4gSVjgD%2BwUimU%2F1mtIiCnu5vXr1rZ07lwXYuwr9thjOpZbh0e7SvNDW2tgzxVA%2BNoaYyYdhL21G%2BK6BsKHchD2iFvRuThQP7e51Q3BMgCmrmsZeIfgRcKTyFN%2B%2FHRsVW6rZPElL0uA1RMLTcstQCm7LyaY4trYmjwx6U0CxA4ev%2Bp8eb2osNL1M0qXlfmKAAohiv7TMh6ukTAGEDsNzXwsK5sUVi8TAgchiT85ISa5jEgHOc%2Bsb6szhHqpm%2FM%2Ffh9tSkBHabFvFWnntcJVGqUZknO5j%2Bs%2FIO0aTjAIQzqBcnpfrU95BzRhtyykCAYNAtI7yz3LBFqTClIqJ2Vp3qrR%2BHt5lBF14xvAzk9vNh2qewRtDTX2YnezfrejxEb0i19RfLtWhtEYHIqF%2BuyYGPRL5%2FlawtPC1u5ESED2AT177NOz26ynLs%2FeJbAm8CklKLfo9kIHl4%2BsX%2B8coS9FQK0iMJCN%2BcIGOqUBq2qTYtLgMLzcCkS6Z06jwaXijcCtuWPt6v6xPmtfAO%2BgYYN1RDXJPs3U1kvr4cUK33RHXem8CyWZIoer5oUkfqOGulvfbo3NmRRFWK7v0bCGPdY22iBsZgoPJziiUbYxNeXNR401E1MWeanPxljhE5H79ZQcxN%2FLk997K8i%2Fr%2B7ckJitP7gRleDTIYK01wXSI8WhJlQ85JYSfBJjyx2lc4Hq2elT&Expires=1751012665",
"file_key": "data/tenants/objects/example-stream/source_system=dataapi/tenant=3303/dt=2025-06-27-07-24/de27963b-2bfa-4701-9204-a21467748ce3.json"
}
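As a sketch, the presigned-URL request above can also be made from Python; the endpoint, headers, and body fields mirror the HTTP example, and YOUR_API_KEY is a placeholder.

import requests

API_KEY = "YOUR_API_KEY"  # placeholder: your MadKudu API key

# Request a presigned S3 URL for a gzipped JSONL file on the "account" stream
response = requests.post(
    "https://madapi.madkudu.com/ingestion/generate-upload-url",
    headers={"x-api-key": API_KEY, "Content-Type": "application/json"},
    json={
        "stream": "account",
        "content_type": "application/jsonl",
        "content_encoding": "gzip",
    },
)
response.raise_for_status()
upload_url = response.json()["upload_url"]  # presigned S3 URL to PUT the file to
file_key = response.json()["file_key"]      # keep this for the confirmation step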
Gzip your file and upload it to the upload URL
Here is a Python script:
import requests
import gzip
import shutil

def upload_jsonl_to_s3_presigned_url(file_path, presigned_url):
    # PUT the gzipped JSONL file to the presigned S3 URL
    with open(file_path, "rb") as f:
        response = requests.put(
            presigned_url,
            data=f,
            headers={"Content-Type": "application/jsonl", "Content-Encoding": "gzip"},
        )
    if response.status_code == 200:
        print("Upload successful")
    else:
        print(f"Upload failed: {response.status_code} - {response.text}")

file_path = "valid.jsonl"

# Compress the file
compressed_file = file_path + ".gz"
with open(file_path, "rb") as f_in:
    with gzip.open(compressed_file, "wb") as f_out:
        shutil.copyfileobj(f_in, f_out)

# Presigned upload URL from the previous step (upload_url in the generate-upload-url response)
presigned_url = "https://..."

upload_jsonl_to_s3_presigned_url(compressed_file, presigned_url)
Confirm that a file has been uploaded.
Request model:
file_key: file key from the upload response
POST /ingestion/confirm-upload HTTP/1.1
Host: madapi.madkudu.com
x-api-key: YOUR_API_KEY
Content-Type: application/json
Accept: */*
Content-Length: 146
{
  "file_key": "data/tenants/objects/example-stream/source_system=dataapi/tenant=3303/dt=2025-06-27-07-24/de27963b-2bfa-4701-9204-a21467748ce3.json"
}
Response: The request has succeeded.
{
  "status": "success",
  "message": "File upload confirmed and processing started"
}
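To close the loop, here is a minimal Python sketch of the confirmation call; the file_key placeholder stands in for the value returned by the generate-upload-url step, and the other assumptions match the sketches above.

import requests

API_KEY = "YOUR_API_KEY"  # placeholder: your MadKudu API key
file_key = "data/tenants/objects/..."  # placeholder: the file_key from generate-upload-url

# Notify MadKudu that the file is in place so processing can start
response = requests.post(
    "https://madapi.madkudu.com/ingestion/confirm-upload",
    headers={"x-api-key": API_KEY, "Content-Type": "application/json"},
    json={"file_key": file_key},
)
print(response.status_code, response.json())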