A mid-sized enterprise had a straightforward but powerful idea: use their locally-hosted AI model to automatically generate summaries of employee performance review data stored in their SQL Server database. The workflow seemed simple enough: pull the review data from the database, hand it to the model, and return the generated summary.
The reality? This "simple" integration touches on some of the thorniest problems in enterprise software: database security, API orchestration, authentication, timeout management, and reliable data transformation. Building this from scratch would require weeks of development. They needed a faster path.
Before exploring the solution, it's worth understanding why organizations don't simply give their AI models direct database access: without an intermediary there is no authentication layer, no role-based access control, no audit trail, and nothing to stop runaway queries or connection exhaustion.
The answer is an API layer that mediates between AI and data—providing security, governance, and reliability. DreamFactory is a secure, self-hosted enterprise data access platform that provides governed API access to any data source, connecting enterprise applications and on-prem LLMs with role-based access and identity passthrough. It serves exactly this role—sitting between your AI models and your production databases so you get the insights without the risk.
Here's the architecture that was implemented using DreamFactory:
┌─────────────────────────────────────────────────────────────────────────────────┐
│ AI-POWERED DATA SUMMARIZATION WORKFLOW │
└─────────────────────────────────────────────────────────────────────────────────┘
┌──────────────┐
│ Client │
│ (App/Web/ │
│ Mobile) │
└──────┬───────┘
│
│ GET /api/v2/ai-summary?EmployeeId=12345
│ Header: X-DreamFactory-Api-Key: [key]
▼
┌─────────────────────────────────────────────────────────────────────────────────┐
│ DREAMFACTORY PLATFORM │
│ │
│ ┌───────────────────────────────────────────────────────────────────────────┐ │
│ │ PYTHON SCRIPTED SERVICE │ │
│ │ │ │
│ │ 1. Validate input parameters │ │
│ │ 2. Call SQL Server stored procedure ──────────────────────────────┐ │ │
│ │ 3. Format data as AI prompt │ │ │
│ │ 4. Call local AI model ───────────────────────────────────────┐ │ │ │
│ │ 5. Return AI response │ │ │ │
│ │ │ │ │ │
│ └──────────────────────────────────────────────────────────────────┼───┼────┘ │
│ │ │ │
│ ┌──────────────────────────────────────┐ ┌─────────────────────┼───┘ │
│ │ SQL SERVER SERVICE │ │ HTTP/RWS SERVICE │ │
│ │ ("enterprise-db") │◄───┘ ("local-ai") │ │
│ │ │ │ │
│ │ • Stored procedure execution │ • Proxies to │ │
│ │ • Role-based access control │ local LLM │ │
│ │ • Connection pooling │ • Handles auth │ │
│ │ • Query logging │ • 5-min timeout │ │
│ └──────────────────┬───────────────────┘ └────────┬───────┘ │
│ │ │ │
└─────────────────────┼──────────────────────────────────────┼────────────────────┘
│ │
▼ ▼
┌───────────────────┐ ┌───────────────────┐
│ SQL SERVER │ │ LOCAL AI MODEL │
│ │ │ (WebUI/Ollama/ │
│ Stored Procedure │ │ LM Studio) │
│ Returns employee │ │ │
│ review data │ │ Generates │
│ │ │ summary │
└───────────────────┘ └───────────────────┘
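The request at the top of the diagram can be issued from any HTTP client. A minimal Python sketch of the client side (the host and API key below are placeholders, not values from the actual deployment):

```python
import urllib.parse
import urllib.request

# Hypothetical host and API key -- substitute your own deployment's values.
BASE_URL = 'https://dreamfactory.example.com/api/v2'
API_KEY = 'your-api-key'

# Build the GET request shown in the diagram.
params = urllib.parse.urlencode({'EmployeeId': '12345'})
req = urllib.request.Request(f'{BASE_URL}/ai-summary?{params}', method='GET')
req.add_header('X-DreamFactory-Api-Key', API_KEY)

# Uncomment to send; allow a generous timeout for LLM inference:
# with urllib.request.urlopen(req, timeout=300) as resp:
#     print(resp.read().decode())
```

The timeout matters: as the implementation notes below show, LLM inference can take minutes, so the client must be at least as patient as the gateway.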
DreamFactory provides three critical services in this architecture: a Python scripted service that orchestrates the workflow, a SQL Server service ("enterprise-db") that exposes the stored procedure with role-based access control, and an HTTP service ("local-ai") that proxies requests to the local LLM.
What looks clean in an architecture diagram rarely goes smoothly in practice. Here's what actually happened during implementation—and how each challenge was resolved.
The first attempt to proxy requests to the local AI model failed immediately. GET requests worked fine, but POST requests to the chat completions endpoint returned a 500 error with a cryptic stack trace pointing to curl_setopt_array().
Root Cause: A subtle bug in PHP's array handling. The code used array_merge() to combine cURL options, but cURL option constants are integers (like CURLOPT_HTTPHEADER = 10023). PHP's array_merge() re-indexes numeric keys, corrupting the options array.
Solution: Replace array_merge() with the array union operator (+) which preserves integer keys:
// Before (broken): numeric cURL option constants get re-indexed
$curlOptions = array_merge($curlOptions, $globalOptions);

// After (fixed): the union operator preserves integer keys
$curlOptions = $globalOptions + $curlOptions;
Once POST requests worked, a new problem emerged: the AI model took 2-3 minutes to generate summaries, but requests were timing out after 30 seconds.
Root Cause: The default cURL timeout was set to 30 seconds—reasonable for most APIs, but far too short for LLM inference.
Solution: Increase the default timeout to 300 seconds (5 minutes), with the ability to override per-service:
$_curlOptions = [
    CURLOPT_TIMEOUT => 300, // 5 minutes for AI workloads
    // ... other options
];
The most insidious bug appeared after timeouts were fixed. The AI would generate a complete response, but clients received truncated JSON—cut off mid-property name. Debugging revealed the full response was received by cURL but corrupted somewhere in the response pipeline.
Root Cause: Two issues combined. First, the code relied on CURLINFO_HEADER_SIZE to split headers from body, but this value was incorrect for multi-block responses (for example, when a "100 Continue" block precedes the final response headers). Second, the JSON body was being decoded and re-encoded in the response pipeline, which corrupted large payloads.

Solution: Implement smarter header/body splitting that finds the actual last header block, and pass raw JSON responses through without decode/re-encode transformation:
// Find the last HTTP response's header/body separator
$_searchPos = 0;
$_headerEnd = 0;
while (($_httpPos = strpos($_result, "HTTP/", $_searchPos)) !== false) {
    $_nextSeparator = strpos($_result, "\r\n\r\n", $_httpPos);
    if ($_nextSeparator !== false) {
        $_headerEnd = $_nextSeparator + 4;
        $_searchPos = $_headerEnd;
    } else {
        break;
    }
}
// Everything after the last header block is the response body
$_body = substr($_result, $_headerEnd);
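For illustration, the same last-header-block scan can be ported to Python (a sketch of the technique, not DreamFactory's actual implementation):

```python
def split_last_http_block(raw: str) -> tuple[str, str]:
    """Split a raw response (headers included) at the LAST header block.

    Multi-block responses, such as a '100 Continue' preceding the final
    response, make a single header-size offset unreliable, so we scan
    forward to the last 'HTTP/' marker's blank-line separator instead.
    """
    search_pos = 0
    header_end = 0
    while (http_pos := raw.find("HTTP/", search_pos)) != -1:
        sep = raw.find("\r\n\r\n", http_pos)
        if sep == -1:
            break
        header_end = sep + 4
        search_pos = header_end
    return raw[:header_end], raw[header_end:]

# A two-block response: interim 100 Continue, then the real reply.
raw = ("HTTP/1.1 100 Continue\r\n\r\n"
       "HTTP/1.1 200 OK\r\nContent-Type: application/json\r\n\r\n"
       '{"choices": []}')
headers, body = split_last_http_block(raw)
```

A naive split at the first blank line would have returned the empty body of the 100 Continue block; scanning to the last block recovers the actual JSON payload.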
An unexpected issue arose when the Python script tried to call other DreamFactory services internally. Requests to the SQL Server service would hang indefinitely.
Root Cause: DreamFactory's Python scripting engine makes internal API calls via HTTP back to the same server. With limited PHP-FPM workers, this created a deadlock: the script held one worker while waiting for a response that required another worker.
Solution: Use Python's urllib directly with explicit timeouts, bypassing the platform's internal API mechanism:
# Direct HTTP call with timeout
import json
import urllib.request

req = urllib.request.Request(url, method='GET')
req.add_header('X-DreamFactory-Api-Key', api_key)
with urllib.request.urlopen(req, timeout=120) as response:
    result = json.loads(response.read().decode())
After resolving all challenges, here's the production script that orchestrates the entire workflow:
import json
import urllib.error
import urllib.parse
import urllib.request
def summarize_employee_data(event, platform):
"""
Orchestrates employee data summarization:
1. Retrieves employee data from SQL Server stored procedure
2. Sends data to local AI model for summarization
3. Returns AI-generated summary
"""
# Extract and validate parameters
params = event['request']['parameters']
employee_id = params.get('EmployeeId') if isinstance(params, dict) else None
if not employee_id:
event['response']['status_code'] = 400
event['response']['content'] = {'error': 'EmployeeId is required'}
return event['response']
# Get API key for internal service calls
api_key = event['request']['headers'].get('X-DreamFactory-Api-Key', '')
base_url = 'http://127.0.0.1/api/v2'
try:
# ============================================
# STEP 1: Call stored procedure for employee data
# ============================================
encoded_id = urllib.parse.quote(str(employee_id))
proc_url = f'{base_url}/enterprise-db/_proc/GetEmployeeReviewData?EmployeeId={encoded_id}'
proc_req = urllib.request.Request(proc_url, method='GET')
proc_req.add_header('X-DreamFactory-Api-Key', api_key)
proc_req.add_header('Content-Type', 'application/json')
with urllib.request.urlopen(proc_req, timeout=120) as response:
employee_data = json.loads(response.read().decode())
# ============================================
# STEP 2: Build AI prompt with employee data
# ============================================
data_str = json.dumps(employee_data, ensure_ascii=True)
ai_payload = {
"model": "llama3:70b",
"messages": [
{
"role": "system",
"content": "You are an HR assistant that creates concise, professional summaries of employee performance reviews."
},
{
"role": "user",
"content": f"Please summarize the following employee performance data in 100-150 words, highlighting key achievements and areas for growth:\n\n{data_str}"
}
]
}
ai_data = json.dumps(ai_payload, ensure_ascii=True).encode('utf-8')
# ============================================
# STEP 3: Call local AI model
# ============================================
ai_url = f'{base_url}/local-ai/chat/completions'
ai_req = urllib.request.Request(ai_url, data=ai_data, method='POST')
ai_req.add_header('X-DreamFactory-Api-Key', api_key)
ai_req.add_header('Content-Type', 'application/json')
with urllib.request.urlopen(ai_req, timeout=300) as response:
ai_result = json.loads(response.read().decode('utf-8'))
# ============================================
# STEP 4: Return the AI-generated summary
# ============================================
event['response']['content'] = {
'employee_id': employee_id,
'summary': ai_result['choices'][0]['message']['content'],
'model': ai_result.get('model'),
'usage': ai_result.get('usage')
}
event['response']['content_type'] = 'application/json'
return event['response']
except urllib.error.HTTPError as e:
error_body = e.read().decode() if e.fp else str(e)
event['response']['status_code'] = e.code
event['response']['content'] = {'error': error_body}
return event['response']
except Exception as e:
event['response']['status_code'] = 500
event['response']['content'] = {'error': str(e)}
return event['response']
_event.script_result = summarize_employee_data(_event, _platform)
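The script's contract with the platform can be exercised locally with a mock event object (illustrative only; in production the `_event` and `_platform` objects are supplied by DreamFactory's scripting engine). Here the input-validation branch is reproduced standalone:

```python
def validate_employee_id(event):
    """Mirrors the script's input-validation branch (illustrative copy)."""
    params = event['request']['parameters']
    employee_id = params.get('EmployeeId') if isinstance(params, dict) else None
    if not employee_id:
        event['response']['status_code'] = 400
        event['response']['content'] = {'error': 'EmployeeId is required'}
    return employee_id

# Missing parameter: the script should answer 400 without calling anything.
bad_event = {'request': {'parameters': {}, 'headers': {}}, 'response': {}}
assert validate_employee_id(bad_event) is None
assert bad_event['response']['status_code'] == 400

# Valid parameter: the workflow proceeds to the stored procedure call.
good_event = {'request': {'parameters': {'EmployeeId': '12345'}, 'headers': {}},
              'response': {}}
assert validate_employee_id(good_event) == '12345'
```

Checking the 400 path this way is cheap insurance: it is the only branch that returns before any downstream service is touched.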
This integration could have been built several ways. Here's how DreamFactory compares to the alternatives:
Building from scratch requires implementing everything this integration touches yourself: database security, API orchestration, authentication, timeout management, and reliable data transformation.
Timeline: 4-8 weeks of development
Ongoing Cost: Continuous maintenance, security patches, feature additions
Verdict: Maximum flexibility, but slow and expensive
Microsoft's cloud-native approach uses Azure Functions for logic and API Management for the gateway.
Pros: Scalable, managed infrastructure, Azure AD integration
Cons: Azure lock-in, complex setup, per-request pricing adds up, requires Azure SQL or hybrid connectivity for on-prem databases
Verdict: Good for Azure-native organizations, but expensive and complex for hybrid scenarios
Amazon's equivalent uses Lambda functions behind API Gateway.
Pros: Serverless, scales to zero, pay-per-use
Cons: AWS lock-in, cold start latency (problematic for AI workloads), complex IAM configuration, VPC setup needed for database access
Verdict: Works for simple cases, but cold starts and VPC complexity hurt AI use cases
Hasura provides instant GraphQL APIs for databases.
Pros: Instant API generation, real-time subscriptions, excellent PostgreSQL support
Cons: GraphQL-focused (not REST), limited stored procedure support, no built-in HTTP proxy for external services, PostgreSQL-centric
Verdict: Great for PostgreSQL + GraphQL, but missing key features for this use case
Python frameworks purpose-built for LLM + data integration.
Pros: AI-native design, flexible chains, great for prototyping
Cons: Requires Python developers, no REST API out of the box, no built-in authentication or access control, not designed for production API serving
Verdict: Excellent for AI experimentation, but needs additional infrastructure for production APIs
DreamFactory occupies a unique position: enterprise-grade API management with the flexibility to handle custom AI workflows.
| Capability | Custom Code | Cloud (Azure/AWS) | Hasura | LangChain | DreamFactory |
|---|---|---|---|---|---|
| SQL Server stored procedures | Manual | Via Functions | Limited | Manual | Native |
| HTTP proxy to AI | Manual | Via Functions | No | Manual | Native (RWS) |
| Custom orchestration | Full control | Functions | Actions | Full control | Scripts |
| Role-based access | Manual | IAM/AD | Yes | No | Native |
| On-premises deployment | Yes | Hybrid only | Yes | Yes | Yes |
| Time to production | Weeks | Days-Weeks | Hours-Days | Days | Hours |
This implementation demonstrates DreamFactory's core value for AI + enterprise data scenarios:
The AI model never touches the database directly. Every request goes through DreamFactory's authentication, authorization, and logging. The stored procedure is exposed as an API endpoint with role-based access control—the AI integration inherits all existing security policies.
Despite the challenges encountered, the entire integration—from concept to working API—took hours, not weeks. The SQL Server connection was configured in minutes. The AI proxy was a service configuration. Only the orchestration logic required actual code.
Local AI models have unique requirements: long timeouts, large payloads, streaming responses. DreamFactory's HTTP service (RWS) handles these gracefully, with configurable timeouts up to 5+ minutes and proper handling of chunked responses.
The AI service is configured, not coded. Switching from a local Llama model to OpenAI, Anthropic Claude, or any other provider is a configuration change—update the base URL and authentication, and the same workflow continues working.
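Because the RWS service speaks plain HTTP, the swap amounts to pointing the service at a different OpenAI-compatible base URL. A sketch of the idea (both URLs below are illustrative assumptions, not the deployment's actual configuration):

```python
# Provider choice is configuration, not code: the chat-completions path and
# payload shape stay the same across OpenAI-compatible backends.
PROVIDERS = {
    'local-ollama': 'http://127.0.0.1:11434/v1',  # Ollama's OpenAI-compatible API
    'openai': 'https://api.openai.com/v1',
}

def chat_completions_url(provider: str) -> str:
    """Build the chat completions endpoint URL for a given provider."""
    return PROVIDERS[provider].rstrip('/') + '/chat/completions'
```

The workflow script never needs to change; only the base URL and credentials on the RWS service do.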
Every API call is logged. When the truncation bug appeared, the logs showed exactly what was received and sent at each step. This observability is built-in, not bolted on.
DreamFactory is the right choice when you need governed, on-premises API access to enterprise data (stored procedures included), custom orchestration across databases and AI models, and a path to production measured in hours rather than weeks, all without cloud lock-in.
1. What is DreamFactory? DreamFactory is a self-hosted enterprise data access platform that turns databases and HTTP services into secure, governed REST APIs with role-based access control and identity passthrough. It eliminates the need to build custom API infrastructure from scratch by giving you instant, secured REST APIs for your existing databases and AI models.
2. Can I use DreamFactory with any AI model or just local ones? DreamFactory works with any AI model that exposes a REST API. That includes locally-hosted models through Ollama, LM Studio, or vLLM, as well as cloud providers like OpenAI or Anthropic. Because the AI connection is a service configuration rather than hard-coded logic, switching providers is just a URL and credential change — no rewriting code.
3. How does DreamFactory keep my data secure when connecting AI to enterprise databases? The AI model never touches the database directly. Every request routes through DreamFactory's API layer, which enforces API key authentication, role-based access control, and identity passthrough. Every call is logged for audit compliance, connections are pooled to prevent exhaustion, and rate limiting stops runaway queries — all without writing custom security code.
To implement a similar AI + database integration: connect your database as a DreamFactory service, expose the stored procedure as a REST endpoint, configure an HTTP (RWS) service pointing at your AI model with an extended timeout, and add a scripted service to orchestrate the two.
The combination of instant database APIs, flexible HTTP proxying, and custom scripting makes DreamFactory uniquely suited for bridging enterprise data and modern AI—without the complexity of building from scratch or the lock-in of cloud-only solutions.