POST /api/v1/chat
cURL
curl -X POST https://app.okrapdf.com/api/v1/chat \
  -H "Authorization: Bearer okra_YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{"job_id": "ocr-abc123", "message": "What is the total revenue?"}'
{
  "answer": "<string>",
  "job_id": "<string>"
}
One-shot Q&A against any completed extraction job. Ask a question, get a Markdown-formatted answer with citations from the original PDF.

The file search store is cached per job: the first query takes a few seconds to provision; subsequent queries are fast.

This powers the `okra chat` CLI command.

How it works

  1. You pass the job_id of a completed job and a message
  2. OkraPDF uploads the PDF to Gemini’s file search API (cached after first use)
  3. Gemini answers using the document as grounding context
  4. Response includes page references and quoted excerpts
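
The steps above reduce to one POST per question. A minimal sketch, assuming only the endpoint and fields documented on this page (the `ask` and `chat_payload` helper names are illustrative, not part of the API):

```python
import requests  # third-party: pip install requests

BASE = "https://app.okrapdf.com/api/v1"

def chat_payload(job_id: str, message: str) -> dict:
    """Request body for POST /chat (both fields are required)."""
    return {"job_id": job_id, "message": message}

def ask(api_key: str, job_id: str, message: str, timeout: float = 60.0) -> str:
    """Ask one question about a completed job; returns the Markdown answer."""
    resp = requests.post(
        f"{BASE}/chat",
        headers={"Authorization": f"Bearer {api_key}"},
        json=chat_payload(job_id, message),
        timeout=timeout,  # generous: the first query provisions the cache
    )
    resp.raise_for_status()
    return resp.json()["answer"]
```
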

Example: agent pipeline

import requests

API_KEY = "okra_YOUR_KEY"
BASE = "https://app.okrapdf.com/api/v1"
headers = {"Authorization": f"Bearer {API_KEY}"}

# 1. Extract a PDF
job = requests.post(
    f"{BASE}/extract/sync",
    headers=headers,
    json={"url": "https://example.com/annual-report.pdf"},
    timeout=90,
).json()

job_id = job["job_id"]

# 2. Ask questions about it
questions = [
    "What is the total revenue?",
    "List the top 5 risk factors",
    "Summarize the CEO letter in 3 bullets",
]

for q in questions:
    resp = requests.post(
        f"{BASE}/chat",
        headers=headers,
        json={"job_id": job_id, "message": q},
        timeout=60,  # first query per job provisions the file search store
    ).json()
    print(f"Q: {q}")
    print(f"A: {resp['answer']}\n")
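
Because the first query per job provisions the file search store, a transient timeout is possible. A hedged retry sketch (the `chat_with_retry` helper and its backoff schedule are illustrative, not part of the API):

```python
import time
import requests  # third-party: pip install requests

BASE = "https://app.okrapdf.com/api/v1"

def backoff_delays(retries: int = 3, base: float = 2.0) -> list[float]:
    """Exponential backoff schedule: base, 2*base, 4*base, ..."""
    return [base * (2 ** i) for i in range(retries)]

def chat_with_retry(headers: dict, job_id: str, message: str) -> dict:
    """POST /chat, retrying on timeouts while the store is provisioned."""
    last_exc: Exception | None = None
    for delay in backoff_delays():
        try:
            resp = requests.post(
                f"{BASE}/chat",
                headers=headers,
                json={"job_id": job_id, "message": message},
                timeout=30,
            )
            resp.raise_for_status()
            return resp.json()
        except requests.Timeout as exc:
            last_exc = exc
            time.sleep(delay)  # wait before retrying the same question
    raise last_exc
```
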

Authorizations

Authorization
string · header · required

API key as Bearer token: `Authorization: Bearer okra_xxx`

Body

application/json
job_id
string · required

OCR job ID (must be completed)

Example: `"ocr-abc123"`

message
string · required

Question to ask about the document

Example: `"What is the total revenue for Q4?"`

Response

Chat response

answer
string · required

Markdown-formatted answer with citations

job_id
string · required
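
Given the schema above, a client can defensively validate the two required fields before use. A pure-Python sketch with no extra assumptions about the payload (the `parse_chat_response` name is illustrative):

```python
def parse_chat_response(data: dict) -> tuple[str, str]:
    """Return (answer, job_id), failing loudly if either required string is missing."""
    for field in ("answer", "job_id"):
        value = data.get(field)
        if not isinstance(value, str):
            raise ValueError(f"missing or non-string field: {field}")
    return data["answer"], data["job_id"]
```
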