Key challenges
- Limited observability and alerting across services.
- Manual, error-prone Kubernetes deployment workflows.
- Fragmented infrastructure without standardized network primitives.
- Inefficient Jenkins flows with manual UI steps.
- Low reuse across Kubernetes service deployments.
Outcomes
- Centralized logging, metrics, and alerts for EKS workloads.
- Secure, scalable network aligned with AWS best practices.
- Reusable Helm charts that simplified multi-service delivery.
- Jenkins pipelines codified with clear environment separation.
- Reduced deployment errors by eliminating manual UI steps.
Architecture + observability
A high-level view of the observability stack used to standardize logging and alerts for EKS workloads.
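As an illustration of how such a stack can be wired together, here is a minimal Fluent Bit configuration sketch that tails container logs on EKS nodes and ships them to CloudWatch Logs. The log group name, stream prefix, and region are assumptions for illustration, not values from the actual project.

```ini
# Hypothetical Fluent Bit config: tail container logs, enrich with
# Kubernetes metadata, and forward to CloudWatch Logs.
[INPUT]
    Name              tail
    Path              /var/log/containers/*.log
    Tag               kube.*
    Parser            docker

[FILTER]
    Name              kubernetes
    Match             kube.*
    Merge_Log         On

[OUTPUT]
    Name              cloudwatch_logs
    Match             kube.*
    region            us-east-1
    log_group_name    /eks/app-logs      # assumed name
    log_stream_prefix pod-
    auto_create_group On
```

In a setup like this, CloudWatch metric filters and alarms can then be layered on the resulting log groups to drive the dashboards and alerts.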
My contributions
- Built the observability stack with CloudWatch Agent and Fluent Bit, plus custom dashboards and alerts.
- Designed the network layer (VPC, subnets, route tables, security groups).
- Migrated EKS deployments to pure YAML workflows, reducing reliance on kubectl.
- Implemented Helm to standardize and template multi-service deployments.
- Deployed a Jenkins Kubernetes Cloud that provisions dynamic build agents as pods.
- Refactored Jenkins pipelines into Jenkinsfiles with audit-ready separation.
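The pipeline refactor described above can be sketched as a declarative Jenkinsfile. This is a minimal illustration, not the project's actual pipeline: the pod spec path, chart layout, and values-file naming are assumptions.

```groovy
// Hypothetical Jenkinsfile: dynamic Kubernetes agents plus
// per-environment Helm values for auditable separation.
pipeline {
    agent {
        kubernetes {
            // Build agent pod spec kept in the repo (assumed path).
            yamlFile 'ci/build-pod.yaml'
        }
    }
    parameters {
        choice(name: 'ENVIRONMENT',
               choices: ['dev', 'staging', 'prod'],
               description: 'Target environment')
    }
    stages {
        stage('Build') {
            steps {
                sh 'docker build -t app:${GIT_COMMIT} .'
            }
        }
        stage('Deploy') {
            steps {
                // One values file per environment keeps configuration
                // differences explicit and reviewable in git.
                sh "helm upgrade --install app charts/app " +
                   "-f charts/app/values-${params.ENVIRONMENT}.yaml"
            }
        }
    }
}
```

Keeping the environment choice as a pipeline parameter, with each environment's settings in its own values file, is one common way to get the audit-ready separation mentioned above.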
Technologies used
- Infrastructure as code: Terraform (modular).
- CI/CD: Jenkins (Kubernetes Cloud, Jenkinsfiles), GitHub Actions.
- Cloud provider: AWS (EKS, RDS, ALB, CloudWatch, IAM, Secrets Manager).
- Kubernetes management: Helm, kubectl, YAML manifests.
- Monitoring: CloudWatch Logs, Metrics, Dashboards, Fluent Bit.
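To illustrate the modular Terraform approach listed above, a root configuration might compose network and cluster modules like this. The module paths, CIDR ranges, and cluster name are hypothetical placeholders.

```hcl
# Hypothetical modular layout: the network primitives (VPC, subnets)
# live in one module, the EKS cluster in another.
module "network" {
  source = "./modules/network" # assumed local module path

  vpc_cidr        = "10.0.0.0/16"
  azs             = ["us-east-1a", "us-east-1b"]
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24"]
  public_subnets  = ["10.0.101.0/24", "10.0.102.0/24"]
}

module "eks" {
  source = "./modules/eks" # assumed local module path

  cluster_name = "platform-eks"
  subnet_ids   = module.network.private_subnet_ids
}
```

Splitting the network layer into its own module is what lets the same primitives be reused across environments instead of being redefined per stack.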
