Supported data types: Logs • Metrics
Ingestion with AWS Data Firehose
Stream CloudWatch data to Phoebe using Amazon Data Firehose with an HTTP endpoint destination.
CloudWatch Logs
CloudWatch Metrics
Prerequisites
- Permissions to create IAM roles/policies, Firehose delivery streams, subscription filters, and (optionally) S3 buckets
- Phoebe ingestion key
- CloudWatch Log Groups to stream
Setup

Create an (optional) S3 bucket for failures

Use this bucket for failed deliveries to aid troubleshooting. Example name: phoebe-log-stream-failures
Create a minimal-scope IAM role for Firehose
Firehose can auto-create a role, but a custom role limits permissions and improves security.
Policy (replace the bucket name if different):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:AbortMultipartUpload",
        "s3:GetBucketLocation",
        "s3:GetObject",
        "s3:ListBucket",
        "s3:ListBucketMultipartUploads",
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::phoebe-log-stream-failures",
        "arn:aws:s3:::phoebe-log-stream-failures/*"
      ]
    }
  ]
}
```
Trust policy for Firehose:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "firehose.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```
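The console can create this role for you, but if you script it, a CLI sketch follows. The role name, policy name, and `file://` paths are assumptions; save the permission and trust policies above as `s3-policy.json` and `trust-firehose.json`.

```shell
# Create the role with the Firehose trust policy, then attach the S3 policy.
# Names and file paths are illustrative -- adjust to your conventions.
aws iam create-role \
  --role-name phoebe-firehose-s3-role \
  --assume-role-policy-document file://trust-firehose.json

aws iam put-role-policy \
  --role-name phoebe-firehose-s3-role \
  --policy-name phoebe-firehose-s3-access \
  --policy-document file://s3-policy.json
```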
Create a Firehose delivery stream (HTTP endpoint)
- Source: Direct PUT
- Destination: HTTP Endpoint
- HTTP endpoint URL:
https://ingest.phoebe.ai/aws/firehose/logs
- Access key: your Phoebe ingestion key
Prefer storing the key in AWS Secrets Manager; grant Firehose read access.
- Content encoding: GZIP
- Retry duration: ~300 seconds (tune as needed)
S3 backup:
- Mode: Failed data only
- Bucket: the one you created for failures
Buffering:
- Size: 1–5 MB (smaller = lower latency)
- Interval: 60–300 s
Service access:
- Choose the IAM role you created for Firehose.
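If you prefer scripting over the console, the settings above map onto `aws firehose create-delivery-stream` roughly as follows. The stream name, role name, and ACCOUNT_ID are placeholders; this is a sketch, not a definitive configuration.

```shell
# Direct PUT -> HTTP endpoint delivery stream with failed-data S3 backup.
# Placeholders: ACCOUNT_ID, role/bucket/stream names, and the access key.
aws firehose create-delivery-stream \
  --delivery-stream-name phoebe-firehose-stream \
  --delivery-stream-type DirectPut \
  --http-endpoint-destination-configuration '{
    "EndpointConfiguration": {
      "Url": "https://ingest.phoebe.ai/aws/firehose/logs",
      "Name": "Phoebe",
      "AccessKey": "YOUR_PHOEBE_INGESTION_KEY"
    },
    "RequestConfiguration": { "ContentEncoding": "GZIP" },
    "RetryOptions": { "DurationInSeconds": 300 },
    "BufferingHints": { "SizeInMBs": 5, "IntervalInSeconds": 60 },
    "RoleARN": "arn:aws:iam::ACCOUNT_ID:role/phoebe-firehose-s3-role",
    "S3BackupMode": "FailedDataOnly",
    "S3Configuration": {
      "RoleARN": "arn:aws:iam::ACCOUNT_ID:role/phoebe-firehose-s3-role",
      "BucketARN": "arn:aws:s3:::phoebe-log-stream-failures"
    }
  }'
```

Passing the access key inline is the simplest form; as noted above, storing it in AWS Secrets Manager is preferable in production.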
Allow CloudWatch Logs to put records into Firehose
Create an IAM role that allows logs.amazonaws.com to call firehose:PutRecord* on your stream.

Policy (replace REGION, ACCOUNT_ID, and the stream name):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["firehose:PutRecord", "firehose:PutRecordBatch"],
      "Resource": "arn:aws:firehose:REGION:ACCOUNT_ID:deliverystream/phoebe-firehose-stream"
    }
  ]
}
```
Trust policy (CloudWatch Logs):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "logs.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```
Create CloudWatch Logs subscription filter(s)
For each Log Group you want to stream:
- Destination: Amazon Data Firehose
- Delivery stream: your Firehose stream
- Role: the CloudWatch-to-Firehose role
- Filter pattern: empty for all logs (or set a pattern to restrict)
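As a CLI sketch (the filter name, role name, REGION, and ACCOUNT_ID are placeholders; the role is the CloudWatch-to-Firehose role from the previous step):

```shell
# Attach a subscription filter that forwards all events in the log group
# to the Firehose stream. An empty filter pattern matches everything.
aws logs put-subscription-filter \
  --log-group-name /your/log/group \
  --filter-name phoebe-firehose \
  --filter-pattern "" \
  --destination-arn arn:aws:firehose:REGION:ACCOUNT_ID:deliverystream/phoebe-firehose-stream \
  --role-arn arn:aws:iam::ACCOUNT_ID:role/cwlogs-to-firehose-role
```

Repeat for each Log Group you want to stream.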
Verify delivery
Generate a test event (create the log stream first if it does not already exist):

```shell
aws logs create-log-stream \
  --log-group-name /your/log/group \
  --log-stream-name test-stream

aws logs put-log-events \
  --log-group-name /your/log/group \
  --log-stream-name test-stream \
  --log-events timestamp=$(date +%s)000,message='{"test":"phoebe-firehose"}'
```
- In Firehose → your stream → Monitoring, verify successful deliveries increase and failures remain zero.
- Check the failure S3 bucket is empty (or inspect objects for errors).
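Delivery success can also be checked from the CLI, assuming the standard `DeliveryToHttpEndpoint.Success` Firehose metric and a placeholder stream name:

```shell
# Average delivery success over the last hour (1.0 = all deliveries OK).
# Note: "date -d" is GNU date; on macOS use "date -u -v-1H ..." instead.
aws cloudwatch get-metric-statistics \
  --namespace AWS/Firehose \
  --metric-name DeliveryToHttpEndpoint.Success \
  --dimensions Name=DeliveryStreamName,Value=phoebe-firehose-stream \
  --start-time "$(date -u -d '1 hour ago' +%Y-%m-%dT%H:%M:%SZ)" \
  --end-time "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
  --period 300 \
  --statistics Average
```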
Metric Streams provide near real-time CloudWatch metrics (2-3 minute latency) for services like EC2, RDS, Lambda, ECS, ALB, WAF, and 200+ other AWS services.
Setup

Create a Firehose delivery stream for metrics
Create a new Firehose delivery stream (separate from logs):
- Source: Direct PUT
- Destination: HTTP Endpoint
- HTTP endpoint URL:
https://ingest.phoebe.ai/aws/firehose/metrics
- Access key: your Phoebe ingestion key
- Content encoding: GZIP
- Retry duration: 300 seconds
S3 backup:
- Mode: Failed data only
- Bucket: your failure bucket (can reuse the one created for logs)
Buffering:
- Size: 1–5 MB
- Interval: 60 s (lower for near real-time)
Service access:
- Use the same IAM role created for logs, or create a dedicated one.
Create a CloudWatch Metric Stream
In CloudWatch → Metrics → Streams, create a new Metric Stream:
- Destination: Amazon Data Firehose (in same account)
- Firehose stream: select your metrics Firehose stream
- Output format: JSON
Use JSON format, not OpenTelemetry.
- Metrics to stream: Choose either:
- All metrics — streams all CloudWatch namespaces
- Selected namespaces — choose specific services (e.g.,
AWS/EC2, AWS/RDS, AWS/Lambda, AWS/ApplicationELB, AWS/WAFV2)
Start with specific namespaces to control costs, then expand as needed. High-cardinality namespaces like AWS/EC2 can generate significant volume.
- Additional statistics: Optionally include percentiles (p50, p90, p99) for supported metrics.
Configure IAM permissions
CloudWatch Metric Streams needs permission to write to Firehose. Create or update an IAM role.

Policy (replace REGION, ACCOUNT_ID, and the stream name):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["firehose:PutRecord", "firehose:PutRecordBatch"],
      "Resource": "arn:aws:firehose:REGION:ACCOUNT_ID:deliverystream/your-metrics-stream"
    }
  ]
}
```
Trust policy:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "streams.metrics.cloudwatch.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```
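With the role in place, the metric stream itself can also be created from the CLI. The stream and role names, REGION, ACCOUNT_ID, and the namespace list are placeholders; adjust them to your setup.

```shell
# Create a metric stream in JSON output format, limited to a few
# namespaces to control volume (expand --include-filters as needed).
aws cloudwatch put-metric-stream \
  --name phoebe-metric-stream \
  --firehose-arn arn:aws:firehose:REGION:ACCOUNT_ID:deliverystream/your-metrics-stream \
  --role-arn arn:aws:iam::ACCOUNT_ID:role/metric-streams-to-firehose-role \
  --output-format json \
  --include-filters Namespace=AWS/EC2 Namespace=AWS/RDS Namespace=AWS/Lambda
```

Omitting `--include-filters` streams all namespaces.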
Verify metrics delivery
After a few minutes:
- In CloudWatch → Metric Streams, verify the stream shows Running status
- Check the MetricUpdate count is increasing in the stream’s monitoring
- In Firehose → your stream → Monitoring, verify successful deliveries
- Check the failure S3 bucket is empty
Metrics typically appear within 2-3 minutes of being emitted by AWS services.
Available metrics

CloudWatch Metric Streams can send metrics from 200+ AWS services, including:

| Service | Example Metrics |
|---|---|
| EC2 | CPUUtilization, NetworkIn/Out, DiskReadOps |
| RDS | CPUUtilization, DatabaseConnections, FreeableMemory |
| Lambda | Invocations, Duration, Errors, Throttles |
| ECS | CPUUtilization, MemoryUtilization |
| ALB | RequestCount, TargetResponseTime, HTTPCode_* |
| WAF | AllowedRequests, BlockedRequests |
| ElastiCache | CacheHits, CacheMisses, CPUUtilization |
| SQS | NumberOfMessagesReceived, ApproximateAgeOfOldestMessage |
API access
- Create an IAM role/user with the CloudWatchReadOnlyAccess policy
- Generate access key and secret
- Enter credentials in the Integrations UI
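A possible CLI sketch for these credentials (the user name `phoebe-readonly` is an assumption; treat the returned secret as sensitive):

```shell
# Create a read-only user, attach the AWS-managed policy, and mint a key.
aws iam create-user --user-name phoebe-readonly

aws iam attach-user-policy \
  --user-name phoebe-readonly \
  --policy-arn arn:aws:iam::aws:policy/CloudWatchReadOnlyAccess

# Outputs the AccessKeyId and SecretAccessKey to paste into the UI.
aws iam create-access-key --user-name phoebe-readonly
```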