Cloud Management Dashboard Intern
Role Overview
At KBR, we are dedicated to advancing technology and providing innovative solutions to our clients. We are looking for a talented and motivated individual to join our team and help us lead the way in hybrid environment instrumentation at an enterprise scale.
As part of KBR, you will have the opportunity to develop and enhance the tools and dashboards that are crucial for the success of various teams, including DevOps, Security, and Systems Administration/Engineering. Your contributions will directly impact how we manage and monitor complex systems, ensuring efficiency and security across our operations.
This is a hands-on role: you'll shape how hybrid environments are instrumented at enterprise scale and contribute code and dashboards used by real teams (DevOps, Security, Systems Administration/Engineering).
*** Three years of continuous U.S. residency required ***
Responsibilities
- Data Ingestion & Integration
- Write scripts and REST/API calls to collect data from cloud accounts as well as on-premises sources such as Kubernetes and vCenter.
- Write scripts and REST/API calls to collect cost and security information to aggregate and process.
- Aggregation & Processing
- Normalize disparate data (metrics, events, billing, security findings) and publish it to a centralized data store for display.
- Implement basic ETL steps: shape, label, and enrich data to support drill‑downs and multi‑tenant views.
- Dashboards & Visualization
- Build dashboards that show high‑level health/capacity and enable drill‑down to specific services, clusters, nodes, or costs.
- Performance, Security, and Cost
- Track SLIs/SLOs (latency, availability, saturation), security posture (e.g., misconfigurations, vulnerabilities), and cloud/on-prem costs.
- Prototype recommendations/remediation hints (e.g., right‑sizing, idle resource cleanup, patch drift).
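To give a feel for the aggregation and processing work above, here is a minimal sketch of normalizing records from two different sources (a cloud billing line item and a security finding) into one common, labeled schema suitable for drill-down views. Every field name here is a hypothetical illustration, not an actual KBR or AWS schema.

```python
# Illustrative sketch: shape, label, and enrich disparate records into a
# common schema before publishing to a centralized store. All field names
# are hypothetical examples, not a real KBR or AWS schema.

def normalize_billing(record: dict) -> dict:
    """Shape an AWS-style cost line item into the common schema."""
    return {
        "source": "aws-billing",
        "kind": "cost",
        "tenant": record.get("account_id", "unknown"),
        "resource": record.get("service"),
        "value": float(record.get("unblended_cost", 0.0)),
        "labels": {"region": record.get("region", "n/a")},
    }

def normalize_finding(record: dict) -> dict:
    """Shape a security finding (e.g., a misconfiguration) into the common schema."""
    return {
        "source": "security-scanner",
        "kind": "finding",
        "tenant": record.get("account", "unknown"),
        "resource": record.get("resource_arn"),
        "value": 1.0,  # one finding counted; severity kept as a label for drill-down
        "labels": {"severity": record.get("severity", "INFO")},
    }

def aggregate(records):
    """Dispatch each (type, record) pair to its normalizer and collect rows."""
    normalizers = {"billing": normalize_billing, "finding": normalize_finding}
    return [normalizers[rtype](rec) for rtype, rec in records]

rows = aggregate([
    ("billing", {"account_id": "111122223333", "service": "AmazonEC2",
                 "unblended_cost": "12.34", "region": "us-east-1"}),
    ("finding", {"account": "111122223333", "resource_arn": "arn:aws:s3:::example",
                 "severity": "HIGH"}),
])
print(rows[0]["value"])  # 12.34
```

Because both sources share a `tenant` field and a `labels` dict, the same downstream dashboard can filter by account or severity without knowing which system produced the row.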
Minimum Qualifications
- Currently pursuing a degree in Computer Science, Information Systems, Data Science, or related field.
- Foundational knowledge of AWS (can identify common services/resources; understands CloudWatch/CloudTrail concepts).
- Comfortable writing scripts to call REST/JSON APIs or to scrape and process data.
- Basic exposure to Kubernetes (pods, services, namespaces) and Prometheus/Grafana (metrics, queries, panels).
- Familiarity with Linux, containers, and Git-based workflows.
- Strong problem-solving, curiosity, and willingness to learn enterprise tooling.
Equivalent education and/or experience will be considered.
Primary Tools & Technologies
- Metrics & Visualization
- Prometheus – Metrics collection and alerting (can integrate with AWS workloads)
- Grafana – Dashboarding and visualization (including Amazon Managed Grafana)
- Amazon CloudWatch – Metrics, logs, dashboards, and alarms
- AWS Services
- CloudWatch – Core observability (metrics, logs, dashboards)
- CloudTrail – API activity tracking
- AWS Config – Resource compliance and configuration history
- Cost Explorer / CUR (Cost & Usage Reports) – Cost monitoring and optimization
- AWS X-Ray – Distributed tracing for applications
- AWS OpenSearch Service – Log aggregation and search
- AWS Distro for OpenTelemetry (ADOT) – Instrumentation for metrics and traces
- Languages
- Python – Automation, data processing, API integration
- PowerShell – Windows-based automation (if applicable)
- Automation / IaC
- Terraform – Infrastructure as Code
- AWS CloudFormation – Native IaC
- GitHub / GitLab – Version control and CI/CD
- APIs / ETL
- REST / JSON – API integration
- AWS SDKs (boto3 for Python) – AWS automation and data extraction
- Lightweight ETL – For transforming CloudWatch/Cost data into dashboards
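As an example of the "lightweight ETL" item above, the sketch below flattens a Prometheus instant-query response (the vector result shape returned by the `/api/v1/query` HTTP endpoint) into flat rows a Grafana table panel could consume. The metric names, instances, and values in the sample payload are made up for illustration.

```python
import json

# Sample payload in the shape returned by Prometheus's /api/v1/query endpoint
# (resultType "vector"); the metric names and values here are made up.
payload = json.loads("""
{"status": "success",
 "data": {"resultType": "vector",
          "result": [
            {"metric": {"__name__": "node_load1", "instance": "web-1:9100"},
             "value": [1700000000, "0.42"]},
            {"metric": {"__name__": "node_load1", "instance": "web-2:9100"},
             "value": [1700000000, "1.87"]}]}}
""")

def flatten(resp: dict):
    """Turn a vector query result into flat rows for a dashboard table."""
    rows = []
    for series in resp["data"]["result"]:
        ts, val = series["value"]
        rows.append({
            "metric": series["metric"].get("__name__", ""),
            "instance": series["metric"].get("instance", ""),
            "timestamp": float(ts),
            "value": float(val),  # Prometheus encodes sample values as strings
        })
    return rows

for row in flatten(payload):
    print(row["instance"], row["value"])
```

The same shape-then-load pattern applies to CloudWatch `GetMetricData` or Cost Explorer output: parse the API's nested JSON once, emit uniform rows, and let the dashboard handle filtering and display.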