Phoebe uses two integration approaches: Collector-level ingestion and vendor API access. We recommend using both - ingestion enables better evidence retrieval, while API access fetches additional context such as dashboards.
Default data retention is 30 days. Contact support to request a different retention period.

Before you begin: Collector-level ingestion

  • Ingestion key from Phoebe (provided during onboarding)
  • Outbound HTTPS (443) allowed to ingest.phoebe.ai
  • (Recommended) A secrets manager or environment variables - avoid hard-coding keys in configs

Choosing an integration

| Provider | Data Types |
| --- | --- |
| OpenTelemetry | Logs, Metrics, Traces |
| AWS CloudWatch | Logs, Metrics |
| Grafana | Logs, Metrics, Traces |
| Datadog | Logs, Metrics, Traces |
| Google Cloud Platform | Logs |
| New Relic | Logs, Metrics, Traces |
Using OpenTelemetry? If your stack supports OTel, we recommend it for Collector-level ingestion - it offers comprehensive coverage with the lowest operational overhead.
Even so, configure API access for your observability vendor so Phoebe can retrieve additional context.
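For reference, a minimal Collector exporter definition might look like the sketch below. The exporter name (otlphttp/phoebe), endpoint, and header match the values used in Troubleshooting further down; the environment variable name PHOEBE_API_KEY is just an example of keeping the key out of the config file.

```yaml
exporters:
  otlphttp/phoebe:
    endpoint: https://ingest.phoebe.ai/otel   # exact endpoint, no /v1 prefix
    headers:
      X-API-Key: ${env:PHOEBE_API_KEY}        # ingestion key injected from the environment
```

Remember to also add otlphttp/phoebe to the exporters list of every pipeline you want forwarded (see the pipeline sketch under Troubleshooting).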

Troubleshooting

Verifying ingestion

  1. Generate test data in the relevant system (log line, synthetic metric, or trace)
  2. Check sender logs (Collector/Firehose) for HTTP errors or backoffs
  3. Confirm with Phoebe - share the approximate timestamp and source so we can validate arrival
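If you send data through an OpenTelemetry Collector, one way to confirm step 1 produced anything is to temporarily wire the Collector's built-in debug exporter next to the Phoebe one, as in this sketch (merge into your existing config; receivers and processors omitted):

```yaml
exporters:
  debug:
    verbosity: detailed                       # print received telemetry to the Collector's own logs
service:
  pipelines:
    logs:
      exporters: [debug, otlphttp/phoebe]     # temporary; remove debug once data is confirmed
```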

Common issues & fixes

Authentication (401/403)
  • Double-check the exact header name/value:
    • OpenTelemetry: X-API-Key: <key>
    • Firehose HTTP destination: set the Access key to your Phoebe ingestion key
    • Promtail: X-API-Key: <key> in the headers section
  • Ensure no extra whitespace or quotes; prefer environment variables or secrets manager.
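For example, in a Collector config the safest shape is to take the header value straight from the environment (PHOEBE_API_KEY here is an example variable name):

```yaml
exporters:
  otlphttp/phoebe:
    headers:
      # Correct: value comes straight from the environment, no quotes or padding
      X-API-Key: ${env:PHOEBE_API_KEY}
      # Common 401/403 cause (don't do this): stray quotes or whitespace
      # X-API-Key: " <key> "
```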
Connectivity / timeouts
  • Allow outbound HTTPS (443) to ingest.phoebe.ai from the sending network (EKS/ECS nodes, VMs, etc.)
  • Enable compression to reduce payload size (gzip)
  • Tune buffering to keep chunks ≤ 5 MB
  • For Firehose, reduce buffer size/interval to lower latency
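On the OpenTelemetry side, compression, retries, and batching live on the exporter and the batch processor; the values below are illustrative starting points rather than required settings (the 5 MB guidance maps to Firehose buffer size and Promtail batchsize rather than to these fields):

```yaml
exporters:
  otlphttp/phoebe:
    endpoint: https://ingest.phoebe.ai/otel
    compression: gzip              # shrink payloads before they leave your network
    timeout: 30s
    retry_on_failure:
      enabled: true                # back off and retry transient network/5xx errors
    sending_queue:
      enabled: true                # buffer briefly during outages instead of dropping
processors:
  batch:
    send_batch_max_size: 8192      # caps items per batch (counted in records, not bytes)
```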
OpenTelemetry data not appearing
  • Verify the Phoebe exporter is listed in every relevant pipeline (logs, metrics, traces)
  • Ensure you added an additional exporter rather than replacing existing ones
  • Confirm endpoint is exactly https://ingest.phoebe.ai/otel (no /v1 prefix)
  • Check Collector logs for otlphttp/phoebe export errors and backoff messages
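A correctly wired service.pipelines block keeps whatever exporters you already have and appends the Phoebe one; otlphttp/existing below is a stand-in for your current exporter, and receivers/processors are omitted for brevity:

```yaml
service:
  pipelines:
    logs:
      exporters: [otlphttp/existing, otlphttp/phoebe]   # existing exporter kept, Phoebe added
    metrics:
      exporters: [otlphttp/existing, otlphttp/phoebe]
    traces:
      exporters: [otlphttp/existing, otlphttp/phoebe]
```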
Amazon Data Firehose delivery failures
  • Check the S3 failure bucket for recent objects; inspect payloads and error messages
  • Verify:
    • Firehose stream is Active
    • IAM roles & policies (both Firehose and CloudWatch Logs) are correct
    • CloudWatch subscription filters show Active
  • Keep buffer size under 5 MB to avoid large payload failures
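If the stream is managed with CloudFormation, the relevant knobs sit under HttpEndpointDestinationConfiguration. This is only a sketch: the endpoint URL is a placeholder for whatever Phoebe provides during onboarding, and PhoebeIngestionKey, FirehoseRole, and FailureBucket stand in for parameters and resources defined elsewhere in your template.

```yaml
PhoebeDeliveryStream:
  Type: AWS::KinesisFirehose::DeliveryStream
  Properties:
    DeliveryStreamType: DirectPut
    HttpEndpointDestinationConfiguration:
      EndpointConfiguration:
        Url: https://ingest.phoebe.ai/<firehose-path>   # placeholder; use the URL from onboarding
        Name: phoebe
        AccessKey: !Ref PhoebeIngestionKey              # your Phoebe ingestion key
      BufferingHints:
        SizeInMBs: 4                                    # keep under 5 MB to avoid large-payload failures
        IntervalInSeconds: 60                           # lower this to reduce delivery latency
      RoleARN: !GetAtt FirehoseRole.Arn
      S3BackupMode: FailedDataOnly                      # failed records land in the S3 failure bucket
      S3Configuration:
        BucketARN: !GetAtt FailureBucket.Arn
        RoleARN: !GetAtt FirehoseRole.Arn
```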
Promtail logs not appearing
  • Verify the Phoebe client is listed in the clients section with the correct URL (https://ingest.phoebe.ai/loki/api/v1/push).
  • Ensure X-API-Key header is set correctly with your Phoebe ingestion key.
  • If using multiple destinations, confirm Phoebe is listed last to avoid blocking your primary destination.
  • Check Promtail logs for connection errors or 4xx/5xx responses.
  • Verify outbound HTTPS (443) to ingest.phoebe.ai is allowed from the Promtail host.
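Putting those checks together, a Promtail clients block might look like the following; the first URL is a stand-in for your existing Loki endpoint, and ${PHOEBE_API_KEY} assumes Promtail runs with -config.expand-env=true:

```yaml
clients:
  # Existing primary destination stays first
  - url: https://loki.example.internal/loki/api/v1/push   # placeholder for your current Loki
  # Phoebe listed last so it cannot block the primary destination
  - url: https://ingest.phoebe.ai/loki/api/v1/push
    headers:
      X-API-Key: ${PHOEBE_API_KEY}
```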
Helpful HTTP responses (at sender)
  • 2xx — accepted by Phoebe
  • 4xx — fix configuration (keys/headers/endpoint/payload format)
  • 5xx — transient; sender should retry (ensure retries/backoff are enabled)

Getting help

When contacting support, include:
  • Integration method (OpenTelemetry, Firehose, Promtail, or Prometheus)
  • Sender logs with error snippets
  • Approximate timestamp(s) and source (service name, Log Group, etc.)
  • Confirmation that your ingestion key is configured and how it’s stored (env/secrets)

Security & operations best practices

  • Don’t hard-code secrets. Use environment variables or a secrets manager and restrict read access
  • TLS in transit. All endpoints use HTTPS. Ensure outbound proxies (if any) don’t downgrade TLS
  • Right-sized batching. Smaller batches reduce latency and make retries cheaper; larger batches are more efficient for high-throughput logs. Start with 1–5 MB buffers
  • Resource attributes matter. Ensure service.name, deployment.environment, and (optionally) service.version are present so investigations group data correctly (see the sketch after this list)
  • Backpressure & retries. Keep retries enabled with exponential backoff to ride out transient network hiccups
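On the resource-attributes point above: these are best set in the application SDK or agent, but if you need a stopgap at the Collector, its resource processor can upsert them. The values here are purely illustrative; add resource to the processors list of each relevant pipeline.

```yaml
processors:
  resource:
    attributes:
      - key: service.name
        value: checkout-service      # example value; set per service
        action: upsert
      - key: deployment.environment
        value: production
        action: upsert
      - key: service.version
        value: "1.4.2"               # optional
        action: upsert
```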