Complete API Observability: Building Production-Grade Analytics for DreamFactory with Logstash, Elasticsearch, and Grafana

Executive Summary

API observability is a critical operational requirement for production REST API platforms. DreamFactory, as an enterprise API generation and management platform, produces high-volume API traffic that demands robust logging, real-time analytics, and diagnostic capabilities. This guide demonstrates how to implement a complete observability stack using Logstash for log ingestion and processing, Elasticsearch for indexed storage and search, and Grafana for advanced visualization and alerting. This architecture provides engineering teams with the infrastructure needed for performance monitoring, security auditing, usage analytics, and rapid troubleshooting across all DreamFactory-managed APIs.


The API Observability Challenge

Modern API platforms like DreamFactory generate massive volumes of operational data. Every API request—whether against a MySQL database, MongoDB collection, AWS S3 bucket, or custom scripted endpoint—produces metadata that contains valuable insights:

  • Performance metrics: Response times, throughput, error rates
  • Security events: Authentication attempts, authorization failures, unusual access patterns
  • Usage analytics: Endpoint popularity, client behavior, service health
  • Diagnostic data: Stack traces, validation errors, database query performance

Without proper observability infrastructure, this data is ephemeral—written to rotating log files and lost within hours or days. Engineering teams lose visibility into production behavior, making it difficult to:

  • Detect performance degradation 
  • Investigate security incidents after they occur
  • Understand actual API usage patterns for capacity planning
  • Troubleshoot integration issues reported by API consumers

The Logstash + Elasticsearch + Grafana stack solves these challenges by creating a durable, searchable, visualizable record of all API activity.


Architecture Overview: The LEG Stack for API Observability

Logstash: Log Processing and Enrichment Engine

What It Is

Logstash is an open-source data processing pipeline developed by Elastic. It ingests data from multiple sources, transforms it through filter plugins, and outputs it to various destinations. For API observability, Logstash serves as the critical bridge between raw application logs and structured, indexed data.

Core Capabilities

  • Multi-source ingestion: File tails, HTTP inputs, message queues, database queries
  • Pattern-based parsing: Grok patterns extract structured fields from unstructured log lines
  • Data enrichment: Add GeoIP data, user lookups, rate calculations, or custom business logic
  • Output flexibility: Send processed logs to Elasticsearch, S3, databases, or monitoring systems
  • Reliability features: Persistent queues, dead letter queues, and retry logic

Role in DreamFactory Observability

Logstash consumes DreamFactory's Laravel-formatted application logs, parses them into structured JSON documents, enriches them with contextual metadata (user details, service information, rate limit status), and delivers them to Elasticsearch for indexing. This transformation converts raw log lines like:

[2024-01-15 14:23:45] production.INFO: API Request {"method":"GET","endpoint":"/api/v2/mysql/_table/users","user_id":42,"duration_ms":156}

Into fully structured, queryable records with normalized timestamps, extracted dimensions, and calculated metrics.
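
For illustration only, the indexed document produced from the line above might look like the following (field names match the mappings used later in this guide; the status fields assume a successful response, and the exact shape depends on your filter configuration):

{
  "@timestamp": "2024-01-15T14:23:45.000Z",
  "service_name": "mysql",
  "endpoint": "/api/v2/mysql/_table/users",
  "method": "GET",
  "status_code": 200,
  "status_class": "success",
  "duration_ms": 156,
  "user_id": 42
}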

Elasticsearch: Distributed Search and Analytics Engine

What It Is

Elasticsearch is a distributed, RESTful search and analytics engine built on Apache Lucene. It stores documents in JSON format across sharded indices, providing millisecond query performance even across billions of records.

Core Capabilities

  • Full-text search: Analyze and search text fields with relevance scoring
  • Structured queries: Filter, aggregate, and analyze numerical and categorical data
  • Time-series optimization: Index lifecycle management, rollover policies, and retention controls
  • Horizontal scalability: Add nodes to increase storage and query capacity
  • Near-real-time indexing: Documents become searchable within seconds of ingestion

Role in DreamFactory Observability

Elasticsearch stores every processed API log record in time-based indices (e.g., dreamfactory-api-2024.01.15, matching the index pattern used later in this guide). It enables:

  • Ad-hoc investigation: "Show me all failed authentication attempts from IP 203.0.113.45 in the last 4 hours"
  • Aggregation queries: "Calculate 95th percentile response time by service and endpoint"
  • Pattern detection: "Find API keys making more than 1000 requests per minute"
  • Historical analysis: "Compare last week's traffic to the same week last year"

The combination of structured indexing and powerful query DSL makes Elasticsearch the analytical foundation of the observability stack.
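
As a concrete sketch, the first of those ad-hoc questions translates into a single search request against the daily indices built later in this guide (the ip_address and status_code field names come from the index template in Phase 3; a 401 status is used here as a proxy for failed authentication):

curl -X GET "https://your-elasticsearch-host:9200/dreamfactory-api-*/_search" -H 'Content-Type: application/json' -d'
{
  "query": {
    "bool": {
      "filter": [
        {"term": {"ip_address": "203.0.113.45"}},
        {"term": {"status_code": 401}},
        {"range": {"@timestamp": {"gte": "now-4h"}}}
      ]
    }
  }
}
'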

Grafana: Visualization and Alerting Platform

What It Is

Grafana is an open-source observability platform that creates interactive dashboards, visualizations, and alerts from multiple data sources. While originally designed for time-series metrics, modern Grafana excels at log analytics, trace visualization, and hybrid observability workflows.

Core Capabilities

  • Multi-source querying: Connect to Elasticsearch, Prometheus, MySQL, CloudWatch, and 100+ other data sources
  • Rich visualization library: Time-series graphs, heatmaps, tables, pie charts, geographic maps, and custom panels
  • Templating engine: Create dynamic dashboards with variable filters and drill-down capabilities
  • Alerting framework: Define threshold-based and query-based alerts with flexible notification channels
  • Dashboard sharing: Embed visualizations, create public snapshots, or export to PDF

Role in DreamFactory Observability

Grafana transforms Elasticsearch query results into actionable insights through:

  • Real-time monitoring dashboards: Live views of API request rates, error percentages, and response time distributions
  • Service health scorecards: Per-service and per-endpoint performance metrics
  • Security monitoring panels: Authentication failures, rate limit violations, and anomalous access patterns
  • Usage analytics reports: Client distribution, feature adoption, and capacity utilization
  • Proactive alerting: Notifications when error rates spike, response times degrade, or suspicious patterns emerge

Strengths of the Logstash + Elasticsearch + Grafana Stack

1. Purpose-Built for Log Analytics at Scale

Unlike general-purpose databases, this stack is optimized for write-heavy, time-series workloads. Elasticsearch's inverted indices keep full-text searches and aggregations fast even across terabytes of log data, and Logstash's pipeline architecture (with persistent queues) absorbs throughput spikes without data loss.

2. Flexible Schema Evolution

As DreamFactory's logging evolves—adding new fields, changing formats, or introducing new service types—Elasticsearch's dynamic mapping adapts automatically. No schema migrations or downtime required.

3. Rich Ecosystem and Plugin Architecture

Logstash offers 200+ input, filter, and output plugins. Elasticsearch supports custom analyzers, scripting, and machine learning features. Grafana provides extensive visualization options and integrations with incident management platforms like PagerDuty and Slack.

4. Cost-Effective Open Source Foundation

The core stack is open source with no licensing costs. Organizations can run it on their own infrastructure, avoiding per-GB ingestion fees charged by commercial SaaS offerings. For teams already managing Kubernetes or VM infrastructure, this provides significant cost advantages at scale.

5. Query Language Power

Elasticsearch's Query DSL enables complex analytical queries that would be cumbersome in SQL:

{
  "aggs": {
    "error_rate_by_service": {
      "terms": {"field": "service_name"},
      "aggs": {
        "errors": {
          "filter": {"range": {"status_code": {"gte": 400}}}
        },
        "error_percentage": {
          "bucket_script": {
            "buckets_path": {
              "errors": "errors._count",
              "total": "_count"
            },
            "script": "params.errors / params.total * 100"
          }
        }
      }
    }
  }
}

This query calculates error rate by service in a single request—a pattern common in API observability.
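
If the aggregation above is saved to a file (here hypothetically named error_rate_by_service.json), it can be executed as a size=0 search so Elasticsearch returns only the aggregation buckets:

curl -X GET "https://your-elasticsearch-host:9200/dreamfactory-api-*/_search?size=0" \
  -H 'Content-Type: application/json' \
  -d @error_rate_by_service.json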

6. Unified Platform for Logs, Metrics, and Traces

Modern Elasticsearch deployments support not just logs but also Prometheus-style metrics and distributed traces. Grafana visualizes all three signal types in correlated dashboards, enabling true full-stack observability.


Weaknesses and Operational Considerations

1. Operational Complexity

Running Elasticsearch in production requires expertise in:

  • Cluster sizing: Balancing shard count, node resources, and query performance
  • Index lifecycle management: Automating rollover, retention, and tier migration
  • Heap tuning: Configuring JVM settings for stability under load
  • Monitoring the monitor: Using dedicated tooling to track cluster health

Small teams may find this burden significant compared to managed alternatives like Datadog or New Relic.

2. Resource Intensity

Elasticsearch is memory-hungry. A production cluster typically requires:

  • 8-16 GB of JVM heap per node (roughly half of the node's RAM)
  • Fast SSD storage for acceptable query performance
  • Network bandwidth for inter-node coordination and replication

High-volume DreamFactory deployments processing millions of API requests per day may need dedicated infrastructure just for observability.

3. Limited Out-of-the-Box Anomaly Detection

While Elasticsearch offers machine learning features, they require manual model training and tuning. Unlike AI-powered SaaS tools that automatically detect anomalies, the LEG stack requires explicit threshold definitions and alerting rules.

4. Grafana Query Complexity for Non-Technical Users

Building effective Elasticsearch queries in Grafana requires understanding both Lucene query syntax and Elasticsearch aggregations. Business users may struggle to self-serve analytics without pre-built dashboards.


Practical Implementation for DreamFactory API Observability

The following sections outline key configuration steps for implementing the LEG stack with DreamFactory. These configurations are platform-agnostic and can be deployed on-premises, in the cloud (AWS, Azure, GCP), using managed services (AWS OpenSearch, Elastic Cloud, Grafana Cloud), or in containerized environments (Docker, Kubernetes).

Phase 1: Configure DreamFactory Logstash Service

DreamFactory includes a built-in Logstash logging service that pushes API event data directly to Logstash via network protocols. This eliminates the need to access DreamFactory log files directly.

Create a Logstash Service in DreamFactory

  1. Navigate to Services in the DreamFactory admin interface
  2. Click Create and select Logstash as the service type
  3. Configure the service with the following settings:

Service Configuration:

  • Name: api-observability (or your preferred name)
  • Label: API Observability Logger
  • Description: Sends API events to Logstash for analysis
  • Active: Yes

Logstash Connection Settings:

  • Host: Hostname or IP address where Logstash is listening (e.g., logstash.yourcompany.com)
  • Port: Port number configured in Logstash input (e.g., 12201 for GELF, 5000 for HTTP)
  • Protocol/Format: Choose based on your Logstash input configuration:
    • GELF (UDP): Recommended for high-throughput scenarios, uses Graylog Extended Log Format
    • HTTP: Good for reliable delivery with retry logic
    • TCP: Persistent connection for streaming logs
    • UDP: Lightweight, fire-and-forget delivery

Log Context Configuration:

Select which data elements to capture with each log entry:

  • Request URI: The API endpoint being called
  • Request Method: HTTP method (GET, POST, PUT, DELETE, etc.)
  • Request Service: DreamFactory service name (mysql, mongodb, etc.)
  • Request Resource: The specific resource being accessed
  • Request Content: Request body/payload
  • Request Headers: HTTP headers
  • Request Parameters: Query string and path parameters
  • Response Status Code: HTTP status code (200, 404, 500, etc.)
  • Response Content: Response body (for events only)
  • Platform Session User: Authenticated user information
  • Platform Session API Key: The API key used for the request

Service Event Mapping:

Configure which DreamFactory events trigger log entries. Each event can have a custom log level and message. The event format is {service_name}.{event_type}.

Common event mappings:

Service: mysql.*
Log Level: INFO
Message: MySQL service activity

Service: mongodb.*
Log Level: INFO
Message: MongoDB service activity

Service: user.registered
Log Level: INFO
Message: New user registration

Service: system.admin.session.create
Log Level: WARNING
Message: Admin login detected

Event Pattern Examples:

  • Service-specific: mysql.post.post_process, mongodb.get.pre_process
  • Service wildcard: mysql.* (all events for MySQL service), user.* (all events for user service)
  • System-wide: Configure separate mappings for each service you want to monitor

For comprehensive API observability, create event mappings for every DreamFactory service you want to monitor (database services, file storage services, custom scripted services, etc.).

Phase 2: Configure Logstash Input

Configure Logstash to receive data from the DreamFactory Logstash service. The input configuration must match the protocol you selected in DreamFactory.

Configuration File Location

The location and method for applying Logstash configuration depends on your deployment:

Self-Hosted Logstash:

  • Create a configuration file: /etc/logstash/conf.d/dreamfactory.conf
  • After creating/editing the file, restart Logstash: sudo systemctl restart logstash
  • Verify configuration: sudo /usr/share/logstash/bin/logstash --config.test_and_exit -f /etc/logstash/conf.d/dreamfactory.conf

Docker/Docker Compose:

  • Mount configuration to: /usr/share/logstash/pipeline/dreamfactory.conf
  • Restart the container: docker restart logstash or docker compose restart logstash

Kubernetes:

  • Create a ConfigMap with the Logstash configuration
  • Mount the ConfigMap to the Logstash pod at /usr/share/logstash/pipeline/
  • Apply changes: kubectl rollout restart deployment/logstash

Managed Services (AWS OpenSearch, Elastic Cloud):

  • Use the service provider's web interface or API to configure Logstash pipelines
  • Refer to provider-specific documentation for pipeline management

For GELF (UDP) Protocol:

input {
  gelf {
    port => 12201
    type => "dreamfactory"
  }
}

filter {
  if [type] == "dreamfactory" {
    # Parse DreamFactory event context
    if [_platform][session][user] {
      mutate {
        add_field => {
          "user_id" => "%{[_platform][session][user][id]}"
          "user_email" => "%{[_platform][session][user][email]}"
        }
      }
    }

    # Extract request metadata
    if [_event][request] {
      mutate {
        add_field => {
          "endpoint" => "%{[_event][request][uri]}"
          "method" => "%{[_event][request][method]}"
          "service_name" => "%{[_event][request][service]}"
        }
      }
    }

    # Extract response metadata
    if [_event][response] {
      mutate {
        add_field => {
          "status_code" => "%{[_event][response][status_code]}"
        }
      }
    }

    # Classify response status
    if [status_code] {
      ruby {
        code => '
          status = event.get("status_code").to_i
          if status >= 200 && status < 300
            event.set("status_class", "success")
          elsif status >= 400 && status < 500
            event.set("status_class", "client_error")
          elsif status >= 500
            event.set("status_class", "server_error")
          end
        '
      }
    }

    # Calculate performance tier if duration is available
    # Note: You may need to calculate duration from timestamps

    # Clean up nested structures for simpler querying
    mutate {
      remove_field => ["_platform", "_event"]
    }
  }
}

output {
  elasticsearch {
    hosts => ["https://your-elasticsearch-host:9200"]
    index => "dreamfactory-api-%{+YYYY.MM.dd}"
  }
}

For HTTP Protocol:

input {
  http {
    port => 5000
    type => "dreamfactory"
    codec => json
  }
}

filter {
  # Use the same filter configuration as GELF above
}

output {
  elasticsearch {
    hosts => ["https://your-elasticsearch-host:9200"]
    index => "dreamfactory-api-%{+YYYY.MM.dd}"
  }
}

For TCP Protocol:

input {
  tcp {
    port => 5000
    type => "dreamfactory"
    codec => json_lines
  }
}

filter {
  # Use the same filter configuration as GELF above
}

output {
  elasticsearch {
    hosts => ["https://your-elasticsearch-host:9200"]
    index => "dreamfactory-api-%{+YYYY.MM.dd}"
  }
}

Key Pipeline Features:

  • Event-driven ingestion: DreamFactory pushes logs in real-time as API events occur
  • Rich context extraction: Captures user, session, request, and response metadata
  • Field normalization: Extracts nested DreamFactory event data into flat, queryable fields
  • Status classification: Categorizes responses as success, client error, or server error
  • Clean data structure: Removes nested objects for optimal Elasticsearch indexing
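
Before connecting DreamFactory, the pipeline can be verified end to end by posting a synthetic event to the HTTP input. The payload below is only an approximation of DreamFactory's event structure (it mirrors the fields the filter above expects); adjust the host and port to your deployment:

curl -X POST "http://your-logstash-host:5000" \
  -H 'Content-Type: application/json' \
  -d '{
    "_event": {
      "request": {"uri": "/api/v2/mysql/_table/users", "method": "GET", "service": "mysql"},
      "response": {"status_code": 200}
    },
    "_platform": {
      "session": {"user": {"id": 42, "email": "test@example.com"}}
    }
  }'

If the filter and output are working, a matching document should appear in the dreamfactory-api-* index within a few seconds.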

Phase 3: Elasticsearch Configuration

Once you have Elasticsearch deployed in your environment of choice, configure it to optimally store and index DreamFactory API logs.

Configure Index Template

Create an index template to optimize field mappings for DreamFactory API logs:

curl -X PUT "https://your-elasticsearch-host:9200/_index_template/dreamfactory-api-template" -H 'Content-Type: application/json' -d'
{
  "index_patterns": ["dreamfactory-api-*"],
  "template": {
    "settings": {
      "number_of_shards": 3,
      "number_of_replicas": 1,
      "index.codec": "best_compression"
    },
    "mappings": {
      "properties": {
        "@timestamp": {"type": "date"},
        "endpoint": {"type": "keyword"},
        "method": {"type": "keyword"},
        "service_name": {"type": "keyword"},
        "service_type": {"type": "keyword"},
        "service_category": {"type": "keyword"},
        "status_code": {"type": "short"},
        "status_class": {"type": "keyword"},
        "duration_ms": {"type": "float"},
        "performance_tier": {"type": "keyword"},
        "user_id": {"type": "integer"},
        "user_email": {"type": "keyword"},
        "ip_address": {"type": "ip"},
        "geo": {
          "properties": {
            "location": {"type": "geo_point"}
          }
        }
      }
    }
  }
}
'
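
To confirm that the template will apply to new daily indices, Elasticsearch's simulate index API resolves the settings and mappings a given index name would receive:

curl -X POST "https://your-elasticsearch-host:9200/_index_template/_simulate_index/dreamfactory-api-2024.01.15"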

Set Up Index Lifecycle Policy

Automate data retention and lifecycle management with an ILM policy:

curl -X PUT "https://your-elasticsearch-host:9200/_ilm/policy/dreamfactory-api-policy" -H 'Content-Type: application/json' -d'
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_size": "50GB",
            "max_age": "1d"
          }
        }
      },
      "warm": {
        "min_age": "7d",
        "actions": {
          "shrink": {"number_of_shards": 1},
          "forcemerge": {"max_num_segments": 1}
        }
      },
      "delete": {
        "min_age": "90d",
        "actions": {"delete": {}}
      }
    }
  }
}
'

This policy:

  • Keeps indices in "hot" tier for fast writes during the first 7 days
  • Moves older data to "warm" tier with optimized storage
  • Automatically deletes data after 90 days
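
Note that the policy only applies to indices that reference it. One way to wire this up (shown as a sketch; the dreamfactory-api alias name is an assumption) is to add the lifecycle settings to the index template from the previous step:

    "settings": {
      "number_of_shards": 3,
      "number_of_replicas": 1,
      "index.codec": "best_compression",
      "index.lifecycle.name": "dreamfactory-api-policy",
      "index.lifecycle.rollover_alias": "dreamfactory-api"
    }

Because the hot phase uses rollover, this variant also requires writing through the dreamfactory-api alias (for example, by pointing the Logstash elasticsearch output at the alias) rather than the dated index names shown earlier.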

Phase 4: Grafana Dashboard Creation

Once Grafana is deployed in your environment, connect it to your Elasticsearch instance and build dashboards for visualizing DreamFactory API metrics.

Configure Elasticsearch Data Source

In Grafana, navigate to Configuration → Data Sources → Add data source → Elasticsearch and configure:

Basic Settings:

  • Name: DreamFactory API Logs
  • URL: Your Elasticsearch endpoint (e.g., https://your-elasticsearch-host:9200)
  • Access: Server (default) - Grafana backend makes requests

Authentication:

Production Elasticsearch deployments should have authentication enabled. Choose the appropriate method:

Option 1: Basic Authentication (Username/Password)
  • Enable Basic Auth toggle
  • User: Your Elasticsearch username (e.g., elastic or custom user)
  • Password: Your Elasticsearch password
  • Example for self-managed Elasticsearch:
    # Set password for elastic user
    /usr/share/elasticsearch/bin/elasticsearch-reset-password -u elastic
    
Option 2: API Key Authentication (Recommended for Production)

Create an API key in Elasticsearch:

curl -X POST "https://your-elasticsearch-host:9200/_security/api_key" \
  -u elastic:your-password \
  -H "Content-Type: application/json" \
  -d '{
    "name": "grafana-dreamfactory-api-logs",
    "role_descriptors": {
      "grafana_reader": {
        "cluster": ["monitor"],
        "indices": [
          {
            "names": ["dreamfactory-api-*"],
            "privileges": ["read", "view_index_metadata"]
          }
        ]
      }
    }
  }'

  • In Grafana, enable Custom HTTP Headers
  • Add header: Authorization with value: ApiKey <base64-encoded-api-key>

Option 3: TLS Client Certificate (Enterprise)
  • Enable TLS Client Auth
  • Upload client certificate and key
  • Configure CA certificate if using self-signed certificates

Elasticsearch Settings:

  • Index name: dreamfactory-api-*
  • Pattern: Daily (matches the dreamfactory-api-YYYY.MM.dd index format)
  • Time field: @timestamp
  • Version: 8.0+ (or your Elasticsearch version)
  • Max concurrent Shard Requests: 5 (default)

Logs Settings:

  • Log message field: message
  • Log level field: level
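
With the data source saved, panels can be built from Lucene query strings plus Elasticsearch aggregations. As an illustrative sketch (the $service variable is a hypothetical dashboard template variable), a server-error-rate panel might use:

Query:    status_class:server_error AND service_name:$service
Metric:   Count
Group by: Date Histogram on @timestamp (auto interval), then Terms on service_name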

How DreamFactory Benefits from This Observability Architecture

Native Integration with DreamFactory's Architecture

DreamFactory's Laravel foundation makes it naturally compatible with this observability stack. The platform's service-oriented architecture—where each database, file storage, or external API is a distinct service—maps perfectly to Elasticsearch's structured indexing. Every DreamFactory service generates consistent, well-formatted logs that Logstash can process without complex parsing logic.

Visibility Across Auto-Generated APIs

One of DreamFactory's core value propositions is automatic API generation from existing data sources. This creates a challenge: how do you monitor APIs you didn't manually code? The LEG stack solves this by capturing every auto-generated endpoint's performance, regardless of whether it's a table operation, stored procedure call, or complex join query. Engineering teams gain the same observability for generated APIs as they would for hand-written code.

Multi-Tenant Observability

DreamFactory often powers multi-tenant SaaS applications where a single instance serves multiple customers. The LEG stack enables per-tenant performance monitoring by indexing user_id, api_key, or custom tenant identifiers. Operations teams can:

  • Compare performance across tenants to identify "noisy neighbors"
  • Provide per-customer SLA reports
  • Detect anomalous usage patterns indicating integration issues
  • Allocate infrastructure costs based on actual API consumption
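
As a sketch of what per-tenant analysis looks like in practice (field names follow the index template above; an api_key keyword field is assumed to be captured via the Platform Session API Key context), the following aggregation reports request volume and 95th-percentile latency per API key over the last 24 hours:

curl -X GET "https://your-elasticsearch-host:9200/dreamfactory-api-*/_search?size=0" -H 'Content-Type: application/json' -d'
{
  "query": {"range": {"@timestamp": {"gte": "now-24h"}}},
  "aggs": {
    "by_api_key": {
      "terms": {"field": "api_key", "size": 50},
      "aggs": {
        "p95_latency": {"percentiles": {"field": "duration_ms", "percents": [95]}}
      }
    }
  }
}
'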

Complementary to DreamFactory's Built-In Features

DreamFactory includes basic request logging and rate limiting, but lacks deep analytics and long-term trend analysis. The LEG stack extends these capabilities without replacing them:

  • DreamFactory's role-based access control determines which APIs users can call
  • The LEG stack records what they actually called and how it performed
  • DreamFactory's API key management authenticates requests
  • The LEG stack analyzes usage patterns per API key for billing or abuse detection

This separation of concerns keeps DreamFactory focused on API generation and management while delegating observability to specialized tools.

Accelerating Time-to-Resolution

When DreamFactory deployments encounter issues—slow queries, authentication failures, or integration breakages—the LEG stack dramatically reduces diagnostic time. Instead of SSH-ing into containers and grepping log files, engineers query Elasticsearch for precise time ranges, error patterns, and correlated events. A problem that might take hours to diagnose through manual log review can be isolated in minutes with targeted Grafana dashboards.

Supporting Compliance and Audit Requirements

Organizations in healthcare, finance, and government often face strict audit logging requirements. DreamFactory's API logs, when indexed in Elasticsearch with appropriate retention policies, provide:

  • Immutable audit trails: Who accessed what data, when, and from where
  • Compliance reporting: Automated generation of access reports for auditors
  • Anomaly detection: Identifying unusual data access patterns that might indicate breaches
  • Long-term archival: S3 integration for 7+ year retention at low cost

The LEG stack transforms DreamFactory from a simple API platform into a compliance-ready enterprise solution.


When to Choose the Logstash + Elasticsearch + Grafana Stack

This architecture is ideal when:

  • You need full control over your observability infrastructure and data retention
  • Your DreamFactory deployment processes more than 1 million API requests per day
  • You require complex, custom analytics beyond basic metrics
  • You're already operating Elasticsearch for other use cases (search, analytics)
  • Cost predictability matters more than operational simplicity
  • Your team has Elasticsearch/Grafana expertise or is willing to develop it
  • You need to correlate API logs with other operational data (Kubernetes metrics, application traces, business events)

Consider managed alternatives when:

  • Your team is small and lacks dedicated DevOps resources
  • You prefer a fully-managed SaaS solution over self-hosted infrastructure
  • Your API volume is under 100,000 requests per day
  • You need AI-powered anomaly detection without manual tuning
  • You want integrated incident management and on-call workflows

The LEG stack represents a middle path: more powerful than simple log aggregation, more cost-effective than enterprise APM platforms, and more controllable than fully-managed SaaS offerings. For organizations running DreamFactory at scale, it provides the observability foundation needed to operate confidently in production.


Frequently Asked Questions

Q1: Why use the DreamFactory Logstash service instead of accessing log files directly?

The DreamFactory Logstash service provides several advantages over file-based logging:

Real-time data: Events are pushed to Logstash as they occur, enabling immediate visibility into API activity rather than waiting for batch log processing.

Structured data by default: DreamFactory sends fully structured event data with user context, session information, and request/response metadata directly to Logstash. No complex parsing or regex patterns needed.

Works everywhere: Whether DreamFactory runs in Docker containers, Kubernetes, serverless functions, or traditional VMs, the network-based approach works identically. File-based logging struggles with ephemeral containers and distributed deployments.

Security: Network transmission supports encryption (TLS) and authentication. Log files on disk require careful permissions management and create potential security risks.

Q2: What performance impact does this have on DreamFactory?

The Logstash service is designed for minimal performance impact:

Asynchronous delivery: Log events are sent to Logstash after the API response is returned to the client. Users experience no latency from logging operations.

Efficient protocols: GELF (UDP) adds less than 1ms overhead per request. Even HTTP-based logging adds only 2-5ms on average.

Selective logging: Use service event mappings to log only what matters. For example, log data modification operations (mysql.post, mongodb.delete) but skip high-frequency read operations to reduce overhead.

In production deployments handling 10,000+ requests per minute, the Logstash service typically adds less than 0.5% CPU overhead and has no measurable impact on API throughput.

Q3: Can I use managed services instead of self-hosting?

Yes. The DreamFactory Logstash service works with both self-hosted and managed observability platforms:

Managed Elasticsearch options:

  • AWS OpenSearch Service
  • Elastic Cloud
  • Elastic on Azure (Elastic Cloud via the Azure Marketplace)

Managed Grafana options:

  • Grafana Cloud (includes a free tier)
  • AWS Managed Grafana
  • Azure Managed Grafana

Fully managed alternatives:
The DreamFactory Logstash service can send data to commercial observability platforms like Datadog, New Relic, or Splunk via their HTTP endpoints. Simply configure the service to use HTTP protocol and point it to the platform's ingestion API.

The choice between self-hosted and managed services depends on your team's operational expertise, compliance requirements, and preference for control versus convenience. The DreamFactory Logstash service configuration is identical regardless—just point it to your Logstash endpoint.


Conclusion

Implementing Logstash, Elasticsearch, and Grafana for DreamFactory API observability creates a production-grade monitoring system that scales with your platform's growth. This architecture transforms raw API logs into actionable insights, enabling engineering teams to detect issues proactively, investigate problems efficiently, and optimize performance systematically.

By following the implementation patterns outlined in this guide—structured logging in DreamFactory, enriched processing in Logstash, indexed storage in Elasticsearch, and visual analytics in Grafana—you build an observability stack that evolves with your needs. The flexibility of open-source tools combined with DreamFactory's extensible architecture provides a foundation for long-term operational excellence.

For DreamFactory users committed to enterprise-scale API management, the LEG stack is not just a monitoring solution—it's a strategic investment in reliability, security, and data-driven decision making.