EKS to ROSA Migration Pain Points: Real-World Problems and Solutions

Version: 1.0
Last Updated: February 2026
Target Audience: Platform Engineers, OpenShift Migration Specialists, Red Hat Customers


Executive Summary

Migrating from Amazon EKS to Red Hat OpenShift Service on AWS (ROSA) is unique among Kubernetes migrations. Since both platforms run on AWS infrastructure, cloud-specific integrations (IAM, EBS, S3, VPC) remain largely compatible. However, OpenShift's enterprise-grade security model, built-in operators, and opinionated platform features create friction points that can cause application failures if not properly addressed.

This document catalogs real-world migration challenges when moving from vanilla Kubernetes (EKS) to OpenShift (ROSA), with detailed remediation strategies, code examples, and automated detection patterns for migration tooling like MTA/Konveyor.

Key Differences: EKS vs ROSA

Aspect          | EKS                        | ROSA
Kubernetes      | Vanilla K8s                | OpenShift 4.x (K8s + extensions)
Security        | Permissive defaults        | Restrictive SCCs
Ingress         | Ingress resources          | Routes (preferred)
Registry        | ECR (external)             | Internal + external
Monitoring      | CloudWatch (separate)      | Prometheus/Grafana (built-in)
Logging         | CloudWatch/FluentBit       | Elasticsearch/Fluentd (built-in)
GitOps          | ArgoCD (install yourself)  | OpenShift GitOps (built-in)
CI/CD           | CodePipeline (external)    | OpenShift Pipelines/Tekton (built-in)
Service Mesh    | Istio/App Mesh (install)   | OpenShift Service Mesh (built-in)
Operators       | Optional                   | First-class citizens
Multi-tenancy   | Namespaces                 | Projects + additional RBAC

Why This Migration is Different

Unlike EKS→AKS or EKS→GKE migrations:

  • βœ… AWS integrations work: IRSA, EBS, ALB, Secrets Manager, etc.
  • βœ… No cloud credential migration: IAM roles remain the same
  • βœ… Network stays the same: VPC, Security Groups, subnets
  • ❌ Security model changes dramatically: SCCs are stricter than PSPs
  • ❌ API differences: Routes vs Ingress, Projects vs Namespaces
  • ❌ Image compatibility: many common images assume root or fixed UIDs and fail under OpenShift's default SCCs

Table of Contents

  1. Security Context Constraints (SCCs)
  2. Routes vs Ingress
  3. Container Image Compatibility
  4. Service Accounts and RBAC
  5. Projects vs Namespaces
  6. Storage and Persistent Volumes
  7. Network Policies
  8. Monitoring and Observability
  9. Logging
  10. Service Mesh
  11. Operators and Operator Lifecycle
  12. GitOps Integration
  13. CI/CD Pipelines
  14. Internal Image Registry
  15. AWS Integration Continuity
  16. Detection Patterns for MTA/Konveyor
  17. Migration Strategies
  18. Quick Reference Tables

1. Security Context Constraints (SCCs)

Pain Point: Containers Running as Root

Severity: πŸ”΄ Critical - Application won't start
Frequency: Very Common (60-70% of EKS workloads)
Impact: Pods stuck in CreateContainerConfigError

The Problem

EKS allows containers to run as root by default. ROSA enforces Security Context Constraints (SCCs) that prevent running as root unless explicitly granted. This is OpenShift's most impactful difference from vanilla Kubernetes.

EKS Configuration (Works)

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-app
  namespace: production
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.21  # Official nginx image runs as root
        ports:
        - containerPort: 80
        volumeMounts:
        - name: config
          mountPath: /etc/nginx/nginx.conf
          subPath: nginx.conf
      volumes:
      - name: config
        configMap:
          name: nginx-config

This works fine on EKS - nginx binds to port 80, runs as root (UID 0).

After Migration to ROSA (Broken)

kubectl get pods -n production
# NAME                         READY   STATUS                       RESTARTS   AGE
# nginx-app-5d8f9c5b4-abc123   0/1     CreateContainerConfigError   0          2m

kubectl describe pod nginx-app-5d8f9c5b4-abc123 -n production
# Events:
#   Warning  Failed  Error: container has runAsNonRoot and image will run as root
#   Warning  Failed  Error: container's runAsUser breaks non-root policy

Root Cause Analysis

OpenShift assigns the restricted SCC by default, which:

  1. Blocks root (UID 0): runAsUser must be non-zero
  2. Randomizes UID: Assigns random UID from project's range (e.g., 1000660000-1000669999)
  3. Drops capabilities: CAP_NET_BIND_SERVICE not available (can't bind ports < 1024)
  4. No privilege escalation: allowPrivilegeEscalation: false
  5. Restricted filesystem writes: the assigned UID typically cannot write outside explicitly mounted volumes

Check assigned SCC:

# See what SCC is assigned to pod
oc get pod nginx-app-5d8f9c5b4-abc123 -n production -o yaml | grep openshift.io/scc
# openshift.io/scc: restricted

# List all SCCs
oc get scc
# NAME               PRIV    CAPS         SELINUX     RUNASUSER          FSGROUP     SUPGROUP    PRIORITY
# anyuid             false   <no value>   MustRunAs   RunAsAny           RunAsAny    RunAsAny    10
# hostaccess         false   <no value>   MustRunAs   MustRunAsRange     MustRunAs   RunAsAny    <no value>
# hostmount-anyuid   false   <no value>   MustRunAs   RunAsAny           RunAsAny    RunAsAny    <no value>
# hostnetwork        false   <no value>   MustRunAs   MustRunAsRange     MustRunAs   MustRunAs   <no value>
# node-exporter      false   <no value>   RunAsAny    RunAsAny           RunAsAny    RunAsAny    <no value>
# nonroot            false   <no value>   MustRunAs   MustRunAsNonRoot   RunAsAny    RunAsAny    <no value>
# privileged         true    *            RunAsAny    RunAsAny           RunAsAny    RunAsAny    <no value>
# restricted         false   <no value>   MustRunAs   MustRunAsRange     MustRunAs   RunAsAny    <no value>

Solution 1: Use Non-Root Compatible Image (Best Practice)

Recommended: Rebuild or use images designed for OpenShift.

# Dockerfile for non-root nginx
FROM nginx:1.21

# Create non-root user
RUN chgrp -R 0 /var/cache/nginx \
    /var/run \
    /var/log/nginx \
    /usr/share/nginx/html && \
    chmod -R g=u /var/cache/nginx \
    /var/run \
    /var/log/nginx \
    /usr/share/nginx/html

# Use port > 1024 (no root required)
RUN sed -i 's/listen\s*80;/listen 8080;/g' /etc/nginx/conf.d/default.conf
EXPOSE 8080

# Run as non-root user
USER 1001

ROSA Configuration:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-app
  namespace: production
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: your-registry/nginx-nonroot:1.21  # Updated image
        ports:
        - containerPort: 8080  # Changed from 80
        volumeMounts:
        - name: config
          mountPath: /etc/nginx/nginx.conf
          subPath: nginx.conf
      # No securityContext needed - works with 'restricted' SCC
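
To confirm the rebuilt image behaves under the restricted SCC, a quick check (a sketch; the deployment, label, and namespace names follow the example above) is to inspect the effective UID and the admitting SCC of a running pod:

# The pod should run with an arbitrary UID from the project's range, in GID 0
oc exec -n production deploy/nginx-app -- id
# e.g. uid=1000660000 gid=0(root) groups=0(root),1000660000

# Confirm which SCC admitted the pod
oc get pod -n production -l app=nginx \
  -o jsonpath='{.items[0].metadata.annotations.openshift\.io/scc}{"\n"}'
# restricted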

Solution 2: Grant 'anyuid' SCC (Less Secure)

Use Case: Legacy applications that can't be rebuilt, vendor-supplied images.

# Create service account
oc create serviceaccount nginx-sa -n production

# Grant anyuid SCC to service account
oc adm policy add-scc-to-user anyuid -z nginx-sa -n production

# Verify
oc get scc anyuid -o yaml | grep -A 5 users:
# users:
# - system:serviceaccount:production:nginx-sa

Updated Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-app
  namespace: production
spec:
  template:
    spec:
      serviceAccountName: nginx-sa  # Use service account with anyuid SCC
      containers:
      - name: nginx
        image: nginx:1.21  # Original image now works
        ports:
        - containerPort: 80

⚠️ Security Warning: anyuid SCC allows running as any UID including root. Only use when necessary and document the justification.
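
anyuid grants are easy to lose track of over time. Two quick audit commands (a sketch; they only cover users added directly to the SCC and the SCC recorded on running pods):

# Service accounts/users added directly to the anyuid SCC
oc get scc anyuid -o jsonpath='{range .users[*]}{@}{"\n"}{end}'

# Which SCC each running pod was admitted under
oc get pods -A -o jsonpath='{range .items[*]}{.metadata.namespace}{"\t"}{.metadata.name}{"\t"}{.metadata.annotations.openshift\.io/scc}{"\n"}{end}'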

Solution 3: Custom SCC (Balanced Approach)

Use Case: Need specific capabilities but not full anyuid.

apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  name: nginx-custom-scc
# Allow running as specific UID range
runAsUser:
  type: MustRunAsRange
  uidRangeMin: 1000
  uidRangeMax: 2000
# Allow binding to privileged ports
allowedCapabilities:
- NET_BIND_SERVICE
# Allow host ports
allowHostPorts: true
allowHostDirVolumePlugin: false
allowHostIPC: false
allowHostNetwork: false
allowHostPID: false
allowPrivilegeEscalation: false
allowPrivilegedContainer: false
fsGroup:
  type: RunAsAny
readOnlyRootFilesystem: false
requiredDropCapabilities:
- KILL
- MKNOD
- SETUID
- SETGID
seLinuxContext:
  type: MustRunAs
supplementalGroups:
  type: RunAsAny
volumes:
- configMap
- downwardAPI
- emptyDir
- persistentVolumeClaim
- projected
- secret
users: []
groups: []
priority: 10

# Create custom SCC
oc apply -f nginx-custom-scc.yaml

# Grant to service account
oc adm policy add-scc-to-user nginx-custom-scc -z nginx-sa -n production

Common SCC Issues and Solutions

Issue 1: Pod needs to write to filesystem

Error:

Error: failed to create containerd container: cannot set fsgroup to 0: operation not permitted

Solution: Use fsGroup in securityContext:

spec:
  securityContext:
    fsGroup: 1000
  containers:
  - name: app
    volumeMounts:
    - name: data
      mountPath: /app/data
Issue 2: Database containers failing

Example: PostgreSQL official image

Error:

initdb: error: could not create directory "/var/lib/postgresql/data": Permission denied

Solution: Use OpenShift-compatible PostgreSQL image:

spec:
  containers:
  - name: postgres
    image: registry.redhat.io/rhel8/postgresql-13:latest  # OpenShift-compatible
    # or
    image: bitnami/postgresql:13  # Community option with non-root support
Issue 3: Application writes logs to /var/log

Error:

Error: failed to write to /var/log/app.log: read-only file system

Solution: Log to stdout/stderr (12-factor app) or use volume:

spec:
  containers:
  - name: app
    volumeMounts:
    - name: logs
      mountPath: /var/log
  volumes:
  - name: logs
    emptyDir: {}

SCC Migration Checklist

  • Identify all containers running as root
  • Check which containers bind to ports < 1024
  • Inventory filesystem write requirements
  • Test with OpenShift-compatible images first
  • Document why anyuid is needed (if used)
  • Update CI/CD to build non-root images
  • Add SCC validation to deployment pipelines

Detection Patterns for MTA/Konveyor

# PATTERN 1: No securityContext specified
spec:
  containers:
  - name: app
    # Missing securityContext - likely runs as root

# PATTERN 2: Explicit root user
securityContext:
  runAsUser: 0

# PATTERN 3: Privileged container
securityContext:
  privileged: true

# PATTERN 4: Host path volumes
volumes:
- name: host-data
  hostPath:
    path: /data

# PATTERN 5: Binding to privileged ports
ports:
- containerPort: 80
- containerPort: 443

# ACTION: Flag for SCC review, suggest non-root alternatives
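
These patterns can also be checked mechanically against the live EKS cluster before migration. A rough sketch using kubectl and jq (both assumed to be installed; it only inspects Deployments and flags the first and fifth patterns):

#!/bin/bash
# Flag containers that do not opt out of running as root, or that use privileged ports
kubectl get deployments -A -o json | jq -r '
  .items[] | . as $d |
  .spec.template.spec.containers[] |
  select(((.securityContext.runAsNonRoot // false) == false)
         or ([.ports[]?.containerPort] | any(. < 1024))) |
  "\($d.metadata.namespace)/\($d.metadata.name): review container \(.name)"'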

2. Routes vs Ingress

Pain Point: Ingress Resources Don't Create Routes

Severity: 🟑 Medium - Traffic routing affected
Frequency: Very Common (80%+ of applications)
Impact: External access not configured, different feature set

The Problem

EKS uses standard Kubernetes Ingress resources (often with ALB Ingress Controller). OpenShift has its own routing layer called Routes, which predates Kubernetes Ingress and has different capabilities. While ROSA supports Ingress resources, Routes are the native and preferred mechanism.

EKS Configuration (Works)

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress
  namespace: production
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:123456789012:certificate/abc123
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
    alb.ingress.kubernetes.io/ssl-redirect: "443"
spec:
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 8080
      - path: /admin
        pathType: Prefix
        backend:
          service:
            name: admin-service
            port:
              number: 8081

After Migration to ROSA (Partial Functionality)

oc get ingress -n production
# NAME          CLASS    HOSTS              ADDRESS   PORTS   AGE
# api-ingress   <none>   api.example.com              80      5m

# Ingress exists but may not create load balancer automatically
# OpenShift router handles traffic differently

oc get routes -n production
# No resources found in production namespace.
# No Route was auto-created: this Ingress is annotated for the ALB ingress class,
# so the OpenShift router ignores it (see Solution 2 for Ingress resources it does handle)

Understanding OpenShift Routes

Routes provide:

  • HTTP/HTTPS/TLS termination
  • Path-based and host-based routing
  • Load balancing across pods
  • WebSocket support
  • Traffic splitting (A/B testing, canary)
  • Rate limiting (via annotations)

Routes vs Ingress differences:

Feature           | Ingress                  | OpenShift Route
Standard          | K8s standard             | OpenShift-specific
TLS termination   | Via Ingress controller   | Built-in to router
Path rewriting    | Controller-dependent     | Native support
Traffic splitting | Via annotations          | Native alternateBackends
WebSocket         | Controller-dependent     | Native support
Wildcard routing  | Controller-dependent     | Native support
mTLS              | Complex setup            | Native support

Solution 1: Migrate to Routes (Recommended)

Simple HTTP Route:

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: api-route
  namespace: production
spec:
  host: api.example.com  # Optional - auto-generated if omitted
  to:
    kind: Service
    name: api-service
    weight: 100
  port:
    targetPort: 8080
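
As a shortcut, oc expose can generate an equivalent Route from an existing Service (note the Route name defaults to the Service name rather than api-route):

# Generate a Route directly from the Service
oc expose service api-service -n production --hostname=api.example.com --port=8080
oc get route api-service -n production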

HTTPS Route with Edge Termination:

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: api-route-tls
  namespace: production
spec:
  host: api.example.com
  to:
    kind: Service
    name: api-service
  port:
    targetPort: 8080
  tls:
    termination: edge  # TLS terminates at router
    insecureEdgeTerminationPolicy: Redirect  # HTTP -> HTTPS redirect
    # Optional: Custom certificate (defaults to OpenShift wildcard cert)
    certificate: |
      -----BEGIN CERTIFICATE-----
      MIIDXTCCAkWgAwIBAgIJAKZ...
      -----END CERTIFICATE-----
    key: |
      -----BEGIN RSA PRIVATE KEY-----
      MIIEpAIBAAKCAQEA0Z...
      -----END RSA PRIVATE KEY-----
    caCertificate: |
      -----BEGIN CERTIFICATE-----
      MIIDXTCCAkWgAwIBAgIJAKZ...
      -----END CERTIFICATE-----

Route with Path-Based Routing:

# Note: OpenShift Routes don't support multiple paths in single Route
# Need separate Routes for different paths

# Route 1: /api -> api-service
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: api-route
  namespace: production
spec:
  host: api.example.com
  path: /api
  to:
    kind: Service
    name: api-service
  port:
    targetPort: 8080
  tls:
    termination: edge
    insecureEdgeTerminationPolicy: Redirect
---
# Route 2: /admin -> admin-service
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: admin-route
  namespace: production
spec:
  host: api.example.com
  path: /admin
  to:
    kind: Service
    name: admin-service
  port:
    targetPort: 8081
  tls:
    termination: edge
    insecureEdgeTerminationPolicy: Redirect

Advanced: Traffic Splitting (Canary Deployments)

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: api-canary
  namespace: production
spec:
  host: api.example.com
  to:
    kind: Service
    name: api-service-v1
    weight: 90  # 90% of traffic
  alternateBackends:
  - kind: Service
    name: api-service-v2
    weight: 10  # 10% of traffic (canary)
  port:
    targetPort: 8080
  tls:
    termination: edge
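
The weights can also be adjusted on an existing Route without editing YAML, which is handy while a canary ramps up (a sketch using oc set route-backends):

# Shift the split to 75/25, then promote v2 fully
oc set route-backends api-canary api-service-v1=75 api-service-v2=25 -n production
oc set route-backends api-canary api-service-v1=0 api-service-v2=100 -n production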

Advanced: mTLS (Re-encrypt Termination)

Use Case: End-to-end encryption, service mesh

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: api-mtls
  namespace: production
spec:
  host: api.example.com
  to:
    kind: Service
    name: api-service
  port:
    targetPort: 8443  # Backend uses HTTPS
  tls:
    termination: reencrypt  # Decrypt at router, re-encrypt to pod
    insecureEdgeTerminationPolicy: Redirect
    # Client-facing certificate
    certificate: |
      -----BEGIN CERTIFICATE-----
      ...
      -----END CERTIFICATE-----
    key: |
      -----BEGIN RSA PRIVATE KEY-----
      ...
      -----END RSA PRIVATE KEY-----
    # Backend (pod) certificate validation
    destinationCACertificate: |
      -----BEGIN CERTIFICATE-----
      ...
      -----END CERTIFICATE-----

Solution 2: Keep Using Ingress (with limitations)

OpenShift supports standard Ingress, but with caveats:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress
  namespace: production
  annotations:
    # OpenShift-specific annotations
    route.openshift.io/termination: "edge"
    route.openshift.io/insecure-policy: "Redirect"
spec:
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 8080
  tls:
  - hosts:
    - api.example.com
    secretName: api-tls-secret  # TLS certificate

What happens:

  • OpenShift creates a Route automatically from the Ingress
  • Limited feature set compared to native Routes
  • No traffic splitting, advanced routing features

Check auto-created Route:

oc get route -n production
# NAME                    HOST/PORT           PATH   SERVICES      PORT   TERMINATION   WILDCARD
# api-ingress-abcd123     api.example.com     /      api-service   8080   edge          None

Solution 3: Hybrid Approach

Use both for different use cases:

Routes for:

  • Internal services (microservices communication)
  • Advanced routing features
  • Traffic splitting
  • OpenShift-native applications

Ingress for:

  • Multi-cloud portability
  • Vendor-neutral manifests
  • GitOps with non-OpenShift clusters

Accessing Routes

# List all routes
oc get routes -n production

# Get route URL
oc get route api-route -n production -o jsonpath='{.spec.host}'
# Output: api.example.com

# Full URL with protocol
echo "https://$(oc get route api-route -n production -o jsonpath='{.spec.host}')"

# Test route
curl -k https://$(oc get route api-route -n production -o jsonpath='{.spec.host}')/health

Custom Domain with Route

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: custom-domain
  namespace: production
spec:
  host: myapp.company.com  # Custom domain
  to:
    kind: Service
    name: api-service
  tls:
    termination: edge
    certificate: |
      # Certificate for myapp.company.com
    key: |
      # Private key

DNS Configuration:

# Get router load balancer
oc get svc router-default -n openshift-ingress -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
# Output: a1b2c3d4e5f6g7h8.elb.us-east-1.amazonaws.com

# Create CNAME record in Route 53
# myapp.company.com -> a1b2c3d4e5f6g7h8.elb.us-east-1.amazonaws.com

Rate Limiting with Routes

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: api-rate-limited
  namespace: production
  annotations:
    haproxy.router.openshift.io/rate-limit-connections: "100"
    haproxy.router.openshift.io/rate-limit-connections.concurrent-tcp: "10"
    haproxy.router.openshift.io/rate-limit-connections.rate-http: "100"
    haproxy.router.openshift.io/rate-limit-connections.rate-tcp: "100"
spec:
  host: api.example.com
  to:
    kind: Service
    name: api-service
  tls:
    termination: edge

3. Container Image Compatibility

Pain Point: Images Requiring Root or Specific UIDs

Severity: πŸ”΄ High - Pods won't start
Frequency: Very Common
Impact: Need to rebuild images or find alternatives

The Problem

Many popular container images from Docker Hub assume root access or specific UIDs. OpenShift's random UID assignment breaks these images.

Common Problematic Images

Won't work on OpenShift without modification:

  1. nginx (official) - Runs as root, binds to port 80
  2. redis (official) - Expects UID 999
  3. mysql (official) - Expects UID 999, writes to /var/lib/mysql
  4. postgres (official) - Expects UID 999, writes to /var/lib/postgresql
  5. mongodb (official) - Expects UID 999
  6. elasticsearch (official) - Expects UID 1000, writes to multiple paths
  7. rabbitmq (official) - Expects UID 999
  8. jenkins (official) - Runs as root
  9. node (official) - Often runs as root in derived images

Solution 1: Use OpenShift-Compatible Images

Red Hat Certified Images:

# PostgreSQL
spec:
  containers:
  - name: postgres
    image: registry.redhat.io/rhel8/postgresql-13:latest
    # or from catalog
    image: registry.redhat.io/rhscl/postgresql-12-rhel7:latest

# Redis
spec:
  containers:
  - name: redis
    image: registry.redhat.io/rhel8/redis-6:latest

# MySQL
spec:
  containers:
  - name: mysql
    image: registry.redhat.io/rhel8/mysql-80:latest

# MongoDB
spec:
  containers:
  - name: mongodb
    image: registry.redhat.io/rhscl/mongodb-36-rhel7:latest

Community OpenShift-Compatible Images:

# Bitnami images (support arbitrary UIDs)
spec:
  containers:
  - name: postgres
    image: bitnami/postgresql:13
  - name: redis
    image: bitnami/redis:6
  - name: mysql
    image: bitnami/mysql:8.0
  - name: mongodb
    image: bitnami/mongodb:4.4

Solution 2: Rebuild Images for OpenShift

Dockerfile best practices for OpenShift:

FROM node:16

# Create app directory with group permissions
WORKDIR /app

# Copy package files
COPY package*.json ./

# Install dependencies
RUN npm ci --only=production

# Copy application code
COPY . .

# CRITICAL: Set permissions for arbitrary UID
# OpenShift runs as random UID in root group (GID 0)
RUN chgrp -R 0 /app && \
    chmod -R g=u /app

# Use non-root user (will be overridden by OpenShift anyway)
USER 1001

# Use unprivileged port
EXPOSE 8080

# Start application
CMD ["node", "server.js"]

Key principles:

  1. Group permissions: OpenShift runs as random UID but always in root group (GID 0)

    chgrp -R 0 /path
    chmod -R g=u /path  # Group permissions = User permissions
  2. Unprivileged ports: Use ports > 1024

    EXPOSE 8080  # Not 80
    EXPOSE 8443  # Not 443
  3. Writable directories: Any directory that needs writes must have group permissions

    RUN mkdir /app/logs && \
        chgrp -R 0 /app/logs && \
        chmod -R g=u /app/logs
  4. No explicit USER: OpenShift overrides USER directive, but good to set for local testing

    USER 1001  # Or any non-zero UID

Example: Nginx for OpenShift

FROM nginx:1.21

# The official image already ships an nginx user/group, so add a separate non-root UID
RUN useradd -u 1001 -g 0 -s /sbin/nologin -m nginx-nonroot

# Change nginx to listen on 8080
RUN sed -i 's/listen\s*80;/listen 8080;/g' /etc/nginx/conf.d/default.conf && \
    sed -i '/user  nginx;/d' /etc/nginx/nginx.conf

# Set permissions for nginx directories
RUN chgrp -R 0 /var/cache/nginx \
                /var/run \
                /var/log/nginx \
                /etc/nginx/conf.d \
                /usr/share/nginx/html && \
    chmod -R g=u /var/cache/nginx \
                 /var/run \
                 /var/log/nginx \
                 /etc/nginx/conf.d \
                 /usr/share/nginx/html

# OpenShift will run as random UID, but set for local testing
USER 1001

EXPOSE 8080

Example: PostgreSQL for OpenShift

FROM postgres:13

# Directories that need to be writable
ENV PGDATA=/var/lib/postgresql/data

# Set up directories with group permissions
RUN mkdir -p /var/lib/postgresql/data /var/run/postgresql && \
    chgrp -R 0 /var/lib/postgresql /var/run/postgresql && \
    chmod -R g=u /var/lib/postgresql /var/run/postgresql

USER 999  # postgres user, but OpenShift will override

EXPOSE 5432

Solution 3: Init Containers for Permission Fixes

Use Case: Can't rebuild image, need to fix permissions at runtime

apiVersion: apps/v1
kind: Deployment
metadata:
  name: legacy-app
spec:
  template:
    spec:
      # Init container runs first, sets up permissions
      initContainers:
      - name: fix-permissions
        image: busybox
        command:
        - sh
        - -c
        - |
          chgrp -R 0 /data
          chmod -R g=u /data
        volumeMounts:
        - name: data
          mountPath: /data
        securityContext:
          runAsUser: 0  # Init container CAN run as root with anyuid SCC
      
      # Main application container
      containers:
      - name: app
        image: legacy-app:1.0
        volumeMounts:
        - name: data
          mountPath: /data
      
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: app-data
      
      # Service account with anyuid for init container
      serviceAccountName: legacy-app-sa

Image Validation Script

#!/bin/bash
# Script to test if image works on OpenShift

IMAGE=$1

echo "Testing image: $IMAGE"

# Run with random UID (simulates OpenShift)
RANDOM_UID=$((1000000 + RANDOM % 100000))

docker run --rm --user ${RANDOM_UID}:0 $IMAGE sh -c "
  echo 'Testing with UID: ${RANDOM_UID}, GID: 0'
  echo 'Checking write access...'
  touch /tmp/test 2>&1 && echo 'βœ“ /tmp writable' || echo 'βœ— /tmp not writable'
  id
  ps aux
" || echo "Failed to run container"

Common Image Issues

Issue 1: Can't write to application directory
# Fix with emptyDir volume
spec:
  containers:
  - name: app
    volumeMounts:
    - name: app-tmp
      mountPath: /app/tmp
  volumes:
  - name: app-tmp
    emptyDir: {}
Issue 2: Application expects specific UID files
# Use fsGroup to ensure files are readable
spec:
  securityContext:
    fsGroup: 1001  # Files created with this GID
  containers:
  - name: app
    # Now can read files even with random UID
Issue 3: Image pulls fail
# ROSA does not block docker.io by default, but cluster admins can restrict pulls
# via the cluster image configuration (image.config.openshift.io/cluster).
# Check the current registry sources:

oc get image.config.openshift.io/cluster -o jsonpath='{.spec.registrySources}{"\n"}'

# If allowedRegistries is set, the registry you pull from (and the internal
# registry) must be included, for example:
oc patch image.config.openshift.io/cluster --type merge \
  -p '{"spec":{"registrySources":{"allowedRegistries":["docker.io","quay.io","registry.redhat.io","image-registry.openshift-image-registry.svc:5000"]}}}'

Building OpenShift-Compatible Images in CI/CD

GitHub Actions:

name: Build OpenShift Image
on: [push]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v3
    
    - name: Build with OpenShift compatibility
      run: |
        docker build \
          --build-arg USER_ID=1001 \
          --build-arg GROUP_ID=0 \
          -t myapp:${GITHUB_SHA} \
          -f Dockerfile.openshift \
          .
    
    - name: Test with random UID
      run: |
        docker run --rm --user $((1000000 + RANDOM % 100000)):0 \
          myapp:${GITHUB_SHA} \
          /bin/sh -c "id && ./healthcheck.sh"
    
    - name: Push to registry
      run: |
        docker push myapp:${GITHUB_SHA}

4. Service Accounts and RBAC

Pain Point: Automatic Token Mounting and RBAC Restrictions

Severity: 🟑 Medium
Frequency: Common
Impact: Applications can't access Kubernetes API, different default permissions

The Problem

OpenShift has stricter default RBAC and different service account behavior than vanilla Kubernetes. Service account tokens are mounted differently, and default permissions are more restrictive.

EKS Configuration (Works)

apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-sa
  namespace: production
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-server
spec:
  template:
    spec:
      serviceAccountName: app-sa
      containers:
      - name: api
        image: myapi:latest
        # Application uses in-cluster config to access K8s API
        # Works with default ServiceAccount permissions

Application code (works on EKS):

from kubernetes import client, config

# Load in-cluster config (uses mounted SA token)
config.load_incluster_config()

v1 = client.CoreV1Api()
# List pods in namespace - works with default permissions on EKS
pods = v1.list_namespaced_pod(namespace="production")

After Migration to ROSA (Fails)

# Application fails with RBAC error
kubectl logs api-server-xxx -n production
# kubernetes.client.exceptions.ApiException: (403)
# Reason: Forbidden
# "pods is forbidden: User "system:serviceaccount:production:app-sa" 
#  cannot list resource "pods" in API group "" in the namespace "production""

Root Cause

  1. More restrictive default RBAC: OpenShift denies most operations by default
  2. No implicit cluster-admin: Even in your own namespace
  3. Project-scoped permissions: Need explicit grants

Solution: Create Explicit RBAC

Grant permissions to list pods:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: production
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: app-sa-pod-reader
  namespace: production
subjects:
- kind: ServiceAccount
  name: app-sa
  namespace: production
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io

Verify permissions:

# Check what ServiceAccount can do
oc auth can-i list pods --as=system:serviceaccount:production:app-sa -n production
# yes

# Get all permissions for SA
oc describe rolebinding app-sa-pod-reader -n production

Common RBAC Patterns

Pattern 1: Application Needs to Read ConfigMaps/Secrets
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: config-reader
  namespace: production
rules:
- apiGroups: [""]
  resources: ["configmaps", "secrets"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: app-sa-config-reader
  namespace: production
subjects:
- kind: ServiceAccount
  name: app-sa
  namespace: production
roleRef:
  kind: Role
  name: config-reader
  apiGroup: rbac.authorization.k8s.io
Pattern 2: Operator Needs Cluster-Wide Access
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: myoperator-role
rules:
- apiGroups: [""]
  resources: ["pods", "services", "endpoints"]
  verbs: ["get", "list", "watch", "create", "update", "delete"]
- apiGroups: ["apps"]
  resources: ["deployments", "statefulsets"]
  verbs: ["get", "list", "watch", "create", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: myoperator-binding
subjects:
- kind: ServiceAccount
  name: operator-sa
  namespace: operators
roleRef:
  kind: ClusterRole
  name: myoperator-role
  apiGroup: rbac.authorization.k8s.io
Pattern 3: CI/CD Service Account
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployer
  namespace: production
rules:
- apiGroups: ["apps"]
  resources: ["deployments", "replicasets"]
  verbs: ["get", "list", "create", "update", "patch", "delete"]
- apiGroups: [""]
  resources: ["pods", "services", "configmaps"]
  verbs: ["get", "list", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cicd-deployer
  namespace: production
subjects:
- kind: ServiceAccount
  name: github-actions-sa
  namespace: production
roleRef:
  kind: Role
  name: deployer
  apiGroup: rbac.authorization.k8s.io

Service Account Token Mounting

EKS default: Tokens auto-mounted to /var/run/secrets/kubernetes.io/serviceaccount/token

ROSA: Same, but with additional security:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-sa
  namespace: production
# ROSA automatically adds:
# - Bound service account tokens (more secure)
# - Shorter token lifetimes
# - Automatic rotation
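
One practical consequence: newer clusters no longer create long-lived token Secrets for service accounts automatically. If something outside the cluster (for example a CI job) needs a token, request a bound, time-limited one explicitly (a sketch):

# Issue a short-lived bound token for the service account
oc create token app-sa -n production --duration=1h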

Disable auto-mounting if not needed:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: no-token-sa
automountServiceAccountToken: false
---
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      serviceAccountName: no-token-sa
      automountServiceAccountToken: false  # Can also set per-pod

OpenShift-Specific: SCCs and Service Accounts

# Grant SCC to service account
oc adm policy add-scc-to-user anyuid -z app-sa -n production

# View SCCs for service account
oc get scc -o json | jq '.items[] | select(.users[] | contains("system:serviceaccount:production:app-sa")) | .metadata.name'

5. Projects vs Namespaces

Pain Point: Additional Isolation and Multi-Tenancy

Severity: 🟒 Low - Mostly transparent
Frequency: Universal
Impact: Additional RBAC, network policies, quotas

The Problem

OpenShift wraps Namespaces with Projects, adding extra isolation, RBAC, and metadata. While Namespaces still exist and work, Projects provide additional enterprise features.

Understanding Projects

Project = Namespace + Annotations + RBAC + NetworkPolicies + ResourceQuotas

# Create project (recommended on OpenShift)
oc new-project production --display-name="Production Environment" --description="Production workloads"

# This creates:
# 1. Namespace named "production"
# 2. Project metadata
# 3. Default network policies
# 4. RBAC bindings for creator
# 5. Service accounts

# Create namespace (also works)
oc create namespace production
# Creates namespace but without Project metadata

Key Differences

1. Default RBAC

When creating Project:

oc new-project myapp

# Auto-creates RoleBindings:
oc get rolebindings -n myapp
# NAME                    ROLE                    USERS                 GROUPS  SERVICEACCOUNTS
# admin                   ClusterRole/admin       your-username         
# system:deployers        ClusterRole/system:deployer                   myapp/deployer
# system:image-builders   ClusterRole/system:image-builder              myapp/builder
# system:image-pullers    ClusterRole/system:image-puller               myapp/default

When creating Namespace:

oc create namespace myapp

# No auto-created RoleBindings - must create manually
2. Network Isolation

Projects get default network policies:

# Auto-created when using oc new-project
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-from-same-namespace
  namespace: production
spec:
  podSelector: {}
  ingress:
  - from:
    - podSelector: {}
3. Resource Quotas

Admins can set project-wide quotas:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: production-quota
  namespace: production
spec:
  hard:
    requests.cpu: "100"
    requests.memory: 200Gi
    limits.cpu: "200"
    limits.memory: 400Gi
    persistentvolumeclaims: "50"
    requests.storage: "1Ti"

Migration Impact

If using kubectl/namespaces on EKS:

kubectl create namespace production
kubectl apply -f deployment.yaml -n production

On ROSA, prefer oc/projects:

oc new-project production
oc apply -f deployment.yaml -n production
# or
kubectl apply -f deployment.yaml -n production  # Still works

Working with Projects

# List projects
oc projects

# Switch project
oc project production

# Get current project
oc project
# Using project "production" on server "https://api.rosa-cluster.xxxx.p1.openshiftapps.com:6443"

# Delete project (deletes namespace + metadata)
oc delete project production

Project Templates

Admins can create default templates for new projects:

apiVersion: template.openshift.io/v1
kind: Template
metadata:
  name: project-request
  namespace: openshift-config
objects:
- apiVersion: project.openshift.io/v1
  kind: Project
  metadata:
    name: ${PROJECT_NAME}
    annotations:
      openshift.io/description: ${PROJECT_DESCRIPTION}
      openshift.io/display-name: ${PROJECT_DISPLAYNAME}
- apiVersion: v1
  kind: ResourceQuota
  metadata:
    name: default-quota
    namespace: ${PROJECT_NAME}
  spec:
    hard:
      requests.cpu: "10"
      requests.memory: 20Gi
- apiVersion: v1
  kind: LimitRange
  metadata:
    name: default-limits
    namespace: ${PROJECT_NAME}
  spec:
    limits:
    - type: Container
      default:
        cpu: 500m
        memory: 512Mi
      defaultRequest:
        cpu: 100m
        memory: 128Mi
parameters:
- name: PROJECT_NAME
- name: PROJECT_DISPLAYNAME
- name: PROJECT_DESCRIPTION

6. Storage and Persistent Volumes

Pain Point: Storage Classes and Operators

Severity: 🟒 Low - Mostly compatible
Frequency: Universal
Impact: Different defaults, operators available

The Good News

ROSA runs on AWS, so EBS integration works!

# EKS StorageClass - works on ROSA too
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp3-encrypted
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
  encrypted: "true"
  iops: "3000"
  throughput: "125"
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer

OpenShift Enhancements

1. Default Storage Classes:

# ROSA comes with pre-configured storage classes
oc get sc
# NAME              PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION
# gp2               ebs.csi.aws.com         Delete          WaitForFirstConsumer   true
# gp2-csi           ebs.csi.aws.com         Delete          WaitForFirstConsumer   true
# gp3               ebs.csi.aws.com         Delete          WaitForFirstConsumer   true
# gp3-csi (default) ebs.csi.aws.com         Delete          WaitForFirstConsumer   true

2. OpenShift Data Foundation (ODF):

Optional: Enterprise storage with replication, snapshots, RGW (S3-compatible)

# Install ODF operator
oc create -f - <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-storage
EOF

# Install via OperatorHub
# Provides: Ceph, RBD, CephFS, RGW

3. Volume Snapshots:

OpenShift makes snapshots easier:

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: postgres-snapshot
  namespace: database
spec:
  volumeSnapshotClassName: csi-aws-vsc
  source:
    persistentVolumeClaimName: postgres-data
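
Restoring is just as declarative: a new PVC can reference the snapshot as its dataSource (a sketch; the names follow the example above, and the requested size must be at least the snapshot's source size):

cat <<EOF | oc apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-data-restored
  namespace: database
spec:
  storageClassName: gp3-csi
  dataSource:
    name: postgres-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
EOF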

fsGroup Handling

Important difference with random UIDs:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: gp3-csi
---
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      securityContext:
        fsGroup: 1000  # CRITICAL: Files created with this GID
      containers:
      - name: app
        volumeMounts:
        - name: data
          mountPath: /data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: app-data

Why fsGroup matters on OpenShift:

  • Pod runs as random UID (e.g., 1000660000)
  • Without fsGroup, files owned by root:root
  • With fsGroup, files owned by randomUID:1000
  • Group permissions allow write access
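
A quick way to see this in practice is to compare ownership from inside a running pod (a sketch; the deployment name app and the /data mount follow the example above):

# The effective UID is arbitrary, but GID 0 plus fsGroup make the volume writable
oc exec deploy/app -- id
oc exec deploy/app -- ls -ldn /data
# The volume directory should be group-owned by GID 1000 and group-writable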

7. Network Policies

Pain Point: More Restrictive Defaults

Severity: 🟑 Medium
Frequency: Common
Impact: Inter-pod communication blocked

The Problem

OpenShift projects may have default network policies that restrict traffic. Applications expecting open communication may fail.

EKS (No Network Policies by Default)

apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  namespace: production
spec:
  template:
    spec:
      containers:
      - name: web
        # Can talk to anything
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
  namespace: production
spec:
  template:
    spec:
      containers:
      - name: api
        # Can talk to anything
# All pods can communicate freely

ROSA (May Have Default Deny)

# Check for network policies
oc get networkpolicy -n production

# If default deny exists:
# NAME                              POD-SELECTOR   AGE
# allow-from-openshift-ingress      <none>         5m
# allow-from-same-namespace         <none>         5m

Default policy example:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: deny-by-default
  namespace: production
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress

Solution: Explicit Network Policies

Allow frontend β†’ backend:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-frontend-to-backend
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: backend
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080

Allow DNS:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-dns
  namespace: production
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: openshift-dns
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
    - protocol: UDP
      port: 5353  # OpenShift DNS pods listen on 5353
    - protocol: TCP
      port: 5353

Allow all egress (common need):

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-all-egress
  namespace: production
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - {}
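
After applying policies, it is worth verifying connectivity from an actual pod rather than assuming the rules behave as intended. A minimal sketch (assumes the frontend image ships curl and that the backend Service is named backend):

FRONTEND_POD=$(oc get pod -n production -l app=frontend -o jsonpath='{.items[0].metadata.name}')
oc exec -n production "$FRONTEND_POD" -- \
  curl -s -o /dev/null -w '%{http_code}\n' --max-time 5 http://backend:8080/health
# 200 = allowed; a timeout usually means a NetworkPolicy (or missing DNS egress) is blocking it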

8. Monitoring and Observability

Pain Point: CloudWatch vs Built-in Prometheus

Severity: 🟑 Medium - Operational change
Frequency: Universal
Impact: Different metrics, queries, dashboards

The Problem

EKS typically uses CloudWatch for metrics. ROSA includes built-in Prometheus/Grafana stack. While you can keep using CloudWatch, the built-in stack is more Kubernetes-native.

EKS Configuration (CloudWatch)

CloudWatch Container Insights:

# FluentBit DaemonSet
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: cloudwatch-agent
  namespace: amazon-cloudwatch
spec:
  template:
    spec:
      containers:
      - name: cloudwatch-agent
        image: amazon/cloudwatch-agent:latest
        # Sends metrics to CloudWatch

Querying CloudWatch:

import boto3
cloudwatch = boto3.client('cloudwatch')

response = cloudwatch.get_metric_statistics(
    Namespace='ContainerInsights',
    MetricName='pod_cpu_utilization',
    Dimensions=[
        {'Name': 'ClusterName', 'Value': 'production-eks'},
        {'Name': 'Namespace', 'Value': 'production'}
    ],
    StartTime=datetime.now() - timedelta(hours=1),
    EndTime=datetime.now(),
    Period=300,
    Statistics=['Average']
)

ROSA Built-in Monitoring

Automatic Prometheus stack:

# Prometheus already running
oc get pods -n openshift-monitoring
# NAME                                           READY   STATUS
# alertmanager-main-0                            6/6     Running
# alertmanager-main-1                            6/6     Running
# alertmanager-main-2                            6/6     Running
# cluster-monitoring-operator-xxx                2/2     Running
# grafana-xxx                                    3/3     Running
# prometheus-k8s-0                               6/6     Running
# prometheus-k8s-1                               6/6     Running
# prometheus-operator-xxx                        2/2     Running
# thanos-querier-xxx                             6/6     Running

Access Grafana:

# Get Grafana route
oc get route grafana -n openshift-monitoring
# NAME      HOST/PORT                                      
# grafana   grafana-openshift-monitoring.apps.rosa-xxx.com

# Access via browser
# Login with OpenShift credentials

Query metrics (PromQL):

# Via CLI
oc exec -n openshift-monitoring prometheus-k8s-0 -- \
  promtool query instant http://localhost:9090 \
  'sum(rate(container_cpu_usage_seconds_total{namespace="production"}[5m])) by (pod)'

# Via Console
# Navigate to: Observe β†’ Metrics

Application Metrics

Expose metrics in your app:

// Go example
package main

import (
    "net/http"
    "github.com/prometheus/client_golang/prometheus"
    "github.com/prometheus/client_golang/prometheus/promhttp"
)

var (
    requestCounter = prometheus.NewCounterVec(
        prometheus.CounterOpts{
            Name: "http_requests_total",
            Help: "Total HTTP requests",
        },
        []string{"path", "method", "status"},
    )
)

func init() {
    prometheus.MustRegister(requestCounter)
}

func main() {
    http.Handle("/metrics", promhttp.Handler())
    http.ListenAndServe(":8080", nil)
}

ServiceMonitor for scraping:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: api-metrics
  namespace: production
  labels:
    app: api-server
spec:
  selector:
    matchLabels:
      app: api-server
  endpoints:
  - port: metrics
    interval: 30s
    path: /metrics
---
apiVersion: v1
kind: Service
metadata:
  name: api-server
  namespace: production
  labels:
    app: api-server
spec:
  ports:
  - name: metrics
    port: 8080
    targetPort: 8080
  selector:
    app: api-server
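
Note that the default platform Prometheus only scrapes OpenShift's own namespaces. For ServiceMonitors in application namespaces like production to be picked up, monitoring for user-defined projects must be enabled first (skip this if your cluster already has it enabled):

cat <<EOF | oc apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    enableUserWorkload: true
EOF

# A dedicated Prometheus for user workloads starts in this namespace
oc get pods -n openshift-user-workload-monitoring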

Verify scraping:

# Check ServiceMonitor
oc get servicemonitor -n production

# Check targets in Prometheus
# Navigate to: Observe β†’ Targets
# Look for production/api-metrics

Custom Dashboards

apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-dashboard-api
  namespace: openshift-monitoring
  labels:
    grafana_dashboard: "true"
data:
  api-dashboard.json: |
    {
      "dashboard": {
        "title": "API Server Metrics",
        "panels": [
          {
            "title": "Request Rate",
            "targets": [
              {
                "expr": "sum(rate(http_requests_total{namespace=\"production\"}[5m])) by (path)"
              }
            ]
          }
        ]
      }
    }

Alerting

apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: api-alerts
  namespace: production
spec:
  groups:
  - name: api-server
    interval: 30s
    rules:
    - alert: HighErrorRate
      expr: |
        sum(rate(http_requests_total{status=~"5.."}[5m])) 
        / 
        sum(rate(http_requests_total[5m])) 
        > 0.05
      for: 5m
      labels:
        severity: warning
      annotations:
        summary: "High error rate on API server"
        description: "Error rate is {{ $value | humanizePercentage }}"

Migration Strategy

Option 1: Dual Shipping (transition period)

Ship metrics to both CloudWatch and Prometheus during migration:

# Keep CloudWatch agent
# + Add Prometheus ServiceMonitor
# Gradually migrate dashboards/alerts

Option 2: Full Migration

# 1. Export CloudWatch dashboards
# 2. Recreate in Grafana (PromQL)
# 3. Migrate alerts to PrometheusRules
# 4. Remove CloudWatch agent

9. Logging

Pain Point: CloudWatch Logs vs OpenShift Logging (EFK)

Severity: 🟑 Medium
Frequency: Universal
Impact: Different query language, aggregation

The Problem

EKS typically uses CloudWatch Logs or FluentBit β†’ CloudWatch. ROSA includes Elasticsearch/FluentD/Kibana (EFK) stack, though you can still use CloudWatch.

ROSA Logging Stack

Built-in options:

  1. OpenShift Logging (EFK) - Included, can be enabled
  2. CloudWatch Logs - Still works (AWS integration)
  3. External (Splunk, Datadog, etc.) - Via operators

Enable OpenShift Logging:

# Install Elasticsearch Operator
oc create -f - <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-operators-redhat
  annotations:
    openshift.io/node-selector: ""
  labels:
    openshift.io/cluster-monitoring: "true"
EOF

# Install via OperatorHub
# Then create ClusterLogging instance

ClusterLogging CR:

apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  managementState: Managed
  logStore:
    type: elasticsearch
    retentionPolicy:
      application:
        maxAge: 7d
      infra:
        maxAge: 7d
      audit:
        maxAge: 7d
    elasticsearch:
      nodeCount: 3
      storage:
        size: 200Gi
        storageClassName: gp3-csi
      redundancyPolicy: SingleRedundancy
  visualization:
    type: kibana
    kibana:
      replicas: 2
  collection:
    logs:
      type: fluentd
      fluentd: {}

Access Kibana:

# Get Kibana route
oc get route kibana -n openshift-logging
# Navigate to URL, login with OpenShift credentials

Querying Logs

CloudWatch Insights (EKS):

fields @timestamp, @message
| filter kubernetes.namespace_name = "production"
| filter kubernetes.labels.app = "api-server"
| filter @message like /ERROR/
| stats count() by bin(5m)

Kibana (ROSA):

kubernetes.namespace_name:"production" AND 
kubernetes.labels.app:"api-server" AND 
message:"ERROR"

Application Logging Best Practices

Structured logging:

import logging
import json_logging

json_logging.init_non_web(enable_json=True)
logger = logging.getLogger(__name__)

logger.info("User login", extra={
    "user_id": "12345",
    "ip_address": "192.168.1.1",
    "action": "login"
})

# Output (JSON):
# {"timestamp": "2024-02-16T10:00:00Z", "level": "INFO", "message": "User login", 
#  "user_id": "12345", "ip_address": "192.168.1.1", "action": "login"}

Log to stdout/stderr:

# Don't write logs to files inside container
# OpenShift captures stdout/stderr automatically
spec:
  containers:
  - name: app
    # Logs go to stdout
    command: ["./app"]
    # NOT: command: ["./app > /var/log/app.log"]
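
If teams still depend on CloudWatch dashboards during the transition, the logging operator can forward application logs there as well via a ClusterLogForwarder. A minimal sketch (the cw-secret Secret holding AWS credentials or a role ARN, the region, and the grouping choice are assumptions to adapt):

cat <<EOF | oc apply -f -
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
  - name: cloudwatch
    type: cloudwatch
    cloudwatch:
      groupBy: namespaceName
      region: us-east-1
    secret:
      name: cw-secret
  pipelines:
  - name: app-to-cloudwatch
    inputRefs:
    - application
    outputRefs:
    - cloudwatch
EOF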

10. Service Mesh

Pain Point: AWS App Mesh vs OpenShift Service Mesh

Severity: 🟑 Medium (if using service mesh)
Frequency: Uncommon (10-20% of workloads)
Impact: Complete reconfiguration required

The Problem

If using AWS App Mesh on EKS, you'll need to migrate to OpenShift Service Mesh (Istio-based) on ROSA.

OpenShift Service Mesh

Install Service Mesh Operator:

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: servicemeshoperator
  namespace: openshift-operators
spec:
  channel: stable
  name: servicemeshoperator
  source: redhat-operators
  sourceNamespace: openshift-marketplace

Create Service Mesh Control Plane:

apiVersion: maistra.io/v2
kind: ServiceMeshControlPlane
metadata:
  name: basic
  namespace: istio-system
spec:
  version: v2.3
  tracing:
    type: Jaeger
    sampling: 10000
  gateways:
    ingress:
      enabled: true
    egress:
      enabled: true
  policy:
    type: Istiod
  telemetry:
    type: Istiod
  addons:
    grafana:
      enabled: true
    jaeger:
      install:
        storage:
          type: Memory
    kiali:
      enabled: true
    prometheus:
      enabled: true

Add namespace to mesh:

apiVersion: maistra.io/v1
kind: ServiceMeshMemberRoll
metadata:
  name: default
  namespace: istio-system
spec:
  members:
  - production
  - staging

Enable sidecar injection (unlike upstream Istio, OpenShift Service Mesh ignores the istio-injection namespace label; annotate the pod template instead):

oc patch deployment api-server -n production --type merge \
  -p '{"spec":{"template":{"metadata":{"annotations":{"sidecar.istio.io/inject":"true"}}}}}'

11. Operators and Operator Lifecycle

Pain Point: Operator-First vs Manual Installation

Severity: 🟒 Low - Beneficial difference
Frequency: Common
Impact: New way of managing applications

The Opportunity

ROSA/OpenShift emphasizes Operator Lifecycle Manager (OLM) for managing applications. This is actually an improvement over EKS.

Installing Operators

OperatorHub (Web Console):

  1. Navigate to: Operators β†’ OperatorHub
  2. Search for operator (e.g., "PostgreSQL")
  3. Click Install
  4. Choose namespace, update channel
  5. Click Subscribe

Via CLI:

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: postgresql-operator
  namespace: operators
spec:
  channel: stable
  name: postgresql-operator-dev4devs-com
  source: operatorhubio-catalog
  sourceNamespace: olm
  installPlanApproval: Automatic
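
Once the Subscription exists, OLM resolves an InstallPlan and installs a ClusterServiceVersion (CSV); checking the CSV phase is the quickest way to see whether the operator actually came up:

oc get subscription,installplan -n operators
oc get csv -n operators -o custom-columns=NAME:.metadata.name,PHASE:.status.phase
# PHASE should reach Succeeded; Failed/Pending usually points at a missing catalog source or RBAC issue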

Using Operators

Example: PostgreSQL Operator

apiVersion: postgres-operator.crunchydata.com/v1beta1
kind: PostgresCluster
metadata:
  name: production-db
  namespace: database
spec:
  image: registry.developers.crunchydata.com/crunchydata/crunchy-postgres:ubi8-15.1-0
  postgresVersion: 15
  instances:
    - name: instance1
      replicas: 3
      dataVolumeClaimSpec:
        accessModes:
        - "ReadWriteOnce"
        resources:
          requests:
            storage: 100Gi
  backups:
    pgbackrest:
      image: registry.developers.crunchydata.com/crunchydata/crunchy-pgbackrest:ubi8-2.41-4
      repos:
      - name: repo1
        volume:
          volumeClaimSpec:
            accessModes:
            - "ReadWriteOnce"
            resources:
              requests:
                storage: 100Gi

A single custom resource like this replaces manually deploying a StatefulSet + Service + ConfigMap + Secrets.

Common Operators

  • Databases: PostgreSQL, MySQL, MongoDB, Redis
  • Message Queues: AMQ Streams (Kafka), AMQ Broker
  • Monitoring: Prometheus, Grafana
  • Service Mesh: Red Hat Service Mesh
  • Serverless: OpenShift Serverless (Knative)
  • Pipelines: OpenShift Pipelines (Tekton)

12. GitOps Integration

Pain Point: Self-Managed ArgoCD vs OpenShift GitOps

Severity: 🟒 Low - Improvement
Frequency: Common for mature teams
Impact: Better integration, built-in

The Good News

ROSA includes OpenShift GitOps (ArgoCD) operator. If you were already using ArgoCD on EKS, migration is straightforward.

Install OpenShift GitOps

# Via OperatorHub or:
oc apply -f - <<EOF
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-gitops-operator
  namespace: openshift-operators
spec:
  channel: latest
  name: openshift-gitops-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
EOF

Access ArgoCD:

# Get route
oc get route openshift-gitops-server -n openshift-gitops

# Get admin password
oc extract secret/openshift-gitops-cluster -n openshift-gitops --to=-

Migrate ArgoCD Applications

EKS ArgoCD Application:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: api-server
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/company/applications.git
    targetRevision: main
    path: api-server
  destination:
    server: https://kubernetes.default.svc
    namespace: production
  syncPolicy:
    automated:
      prune: true
      selfHeal: true

ROSA (minimal changes):

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: api-server
  namespace: openshift-gitops  # Different namespace
spec:
  project: default
  source:
    repoURL: https://github.com/company/applications.git
    targetRevision: main
    path: api-server/overlays/rosa  # May need OpenShift-specific overlay
  destination:
    server: https://kubernetes.default.svc
    namespace: production
  syncPolicy:
    automated:
      prune: true
      selfHeal: true

Kustomize Overlays for OpenShift

api-server/
β”œβ”€β”€ base/
β”‚   β”œβ”€β”€ deployment.yaml
β”‚   β”œβ”€β”€ service.yaml
β”‚   └── kustomization.yaml
└── overlays/
    β”œβ”€β”€ eks/
    β”‚   └── kustomization.yaml
    └── rosa/
        β”œβ”€β”€ kustomization.yaml
        β”œβ”€β”€ route.yaml          # Add Route
        └── patches/
            └── deployment.yaml  # Patch for SCC compatibility

rosa/kustomization.yaml:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

bases:
- ../../base

resources:
- route.yaml  # OpenShift Route

patches:
- path: patches/deployment.yaml
  target:
    kind: Deployment
    name: api-server

rosa/patches/deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-server
spec:
  template:
    spec:
      # Remove securityContext that sets runAsUser: 0
      # OpenShift will assign random UID
      securityContext: {}
      containers:
      - name: api
        # Change to non-root image
        image: company/api-server-openshift:latest
        ports:
        - containerPort: 8080  # Changed from 80
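
Before pointing Argo CD at the new overlay, it helps to render it and run a server-side dry run against the ROSA cluster (a sketch; oc kustomize is the built-in passthrough to kubectl kustomize):

# Render the overlay and validate it against the live API without changing anything
oc kustomize api-server/overlays/rosa | oc apply --dry-run=server -f -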

13. CI/CD Pipelines

Pain Point: CodePipeline/GitHub Actions vs OpenShift Pipelines

Severity: 🟒 Low - Optional migration
Frequency: Universal
Impact: Can keep existing CI/CD or migrate

Options

Option 1: Keep Existing CI/CD

GitHub Actions, CodePipeline, etc. can still deploy to ROSA:

# GitHub Actions
name: Deploy to ROSA
on: [push]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v3
    
    - name: Login to ROSA
      run: |
        oc login --token=${{ secrets.OPENSHIFT_TOKEN }} \
                 --server=https://api.rosa-cluster.xxxx.openshiftapps.com:6443
    
    - name: Deploy
      run: |
        oc apply -f k8s/
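
For the OPENSHIFT_TOKEN secret, prefer a dedicated service account over a personal login token. A sketch (the edit role and the one-year duration are assumptions to adjust; the cluster may cap the token lifetime):

oc create serviceaccount github-actions-sa -n production
oc policy add-role-to-user edit -z github-actions-sa -n production
oc create token github-actions-sa -n production --duration=8760h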

Option 2: OpenShift Pipelines (Tekton)

Built-in, Kubernetes-native CI/CD:

apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: build-and-deploy
  namespace: cicd
spec:
  params:
  - name: git-url
    type: string
  - name: image-name
    type: string
  workspaces:
  - name: shared-workspace
  tasks:
  - name: fetch-repository
    taskRef:
      name: git-clone
      kind: ClusterTask
    workspaces:
    - name: output
      workspace: shared-workspace
    params:
    - name: url
      value: $(params.git-url)
  
  - name: build-image
    taskRef:
      name: buildah
      kind: ClusterTask
    runAfter:
    - fetch-repository
    workspaces:
    - name: source
      workspace: shared-workspace
    params:
    - name: IMAGE
      value: $(params.image-name)
  
  - name: deploy
    taskRef:
      name: openshift-client
      kind: ClusterTask
    runAfter:
    - build-image
    params:
    - name: SCRIPT
      value: |
        oc apply -f k8s/deployment.yaml
        oc set image deployment/api-server api=$(params.image-name)

Trigger pipeline:

apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: build-and-deploy-run-1
  namespace: cicd
spec:
  pipelineRef:
    name: build-and-deploy
  params:
  - name: git-url
    value: https://github.com/company/api-server.git
  - name: image-name
    value: image-registry.openshift-image-registry.svc:5000/production/api-server:latest
  workspaces:
  - name: shared-workspace
    volumeClaimTemplate:
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
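
Create the run and follow its logs (tkn is the Tekton CLI shipped alongside OpenShift Pipelines; the filename is whatever the PipelineRun above was saved as):

# Assumes the PipelineRun above was saved as pipelinerun.yaml
oc create -f pipelinerun.yaml
tkn pipelinerun logs build-and-deploy-run-1 -f -n cicd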

14. Internal Image Registry

Pain Point: ECR External vs OpenShift Internal Registry

Severity: 🟒 Low - Additional option
Frequency: Common
Impact: Can use both ECR and internal registry

OpenShift Internal Registry

Enabled by default in ROSA:

# Check registry
oc get svc -n openshift-image-registry
# NAME                      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)
# image-registry            ClusterIP   172.30.xxx.xxx  <none>        5000/TCP

# Internal registry URL
image-registry.openshift-image-registry.svc:5000

Using Internal Registry

Build and push from within cluster:

# Create BuildConfig
oc new-build --binary --name=api-server -n production

# Start build from local directory
oc start-build api-server --from-dir=. --follow -n production

# Image automatically stored in internal registry:
# image-registry.openshift-image-registry.svc:5000/production/api-server:latest

Use in Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-server
  namespace: production
spec:
  template:
    spec:
      containers:
      - name: api
        image: image-registry.openshift-image-registry.svc:5000/production/api-server:latest
        imagePullPolicy: Always
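
External CI systems can also push to the internal registry by exposing its operator-managed default route. A sketch, assuming a locally built image; image names and tags below are examples:

# Expose the registry's default route (a documented image registry operator setting)
oc patch configs.imageregistry.operator.openshift.io/cluster \
  --type=merge --patch '{"spec":{"defaultRoute":true}}'

# Log in with your OpenShift token, then tag and push
HOST=$(oc get route default-route -n openshift-image-registry -o jsonpath='{.spec.host}')
podman login -u $(oc whoami) -p $(oc whoami -t) $HOST
podman tag localhost/api-server:latest $HOST/production/api-server:latest
podman push $HOST/production/api-server:latest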

Still Using ECR

ECR still works!

# Create pull secret for ECR
oc create secret docker-registry ecr-secret \
  --docker-server=123456789012.dkr.ecr.us-east-1.amazonaws.com \
  --docker-username=AWS \
  --docker-password=$(aws ecr get-login-password --region us-east-1) \
  -n production

# Link to service account
oc secrets link default ecr-secret --for=pull -n production

Use ECR image:

spec:
  containers:
  - name: api
    image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/api-server:v1.0
  imagePullSecrets:
  - name: ecr-secret
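
One caveat: tokens from aws ecr get-login-password expire after roughly 12 hours, so the pull secret needs periodic refreshing. A minimal refresh sketch (run from a scheduled job or your CI system; names match the example above):

# Recreate the ECR pull secret with a fresh token
oc delete secret ecr-secret -n production --ignore-not-found
oc create secret docker-registry ecr-secret \
  --docker-server=123456789012.dkr.ecr.us-east-1.amazonaws.com \
  --docker-username=AWS \
  --docker-password=$(aws ecr get-login-password --region us-east-1) \
  -n production
oc secrets link default ecr-secret --for=pull -n production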

Hybrid Approach

# External images from ECR
spec:
  containers:
  - name: api
    image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/api-server:v1.0
  
  # Init container from internal registry
  initContainers:
  - name: migrations
    image: image-registry.openshift-image-registry.svc:5000/production/migrations:latest

15. AWS Integration Continuity

Good News: AWS Integrations Still Work!

Severity: 🟒 Low - Mostly compatible
Frequency: Universal
Impact: Minimal changes needed

IAM Roles for Service Accounts (IRSA)

Still works on ROSA!

apiVersion: v1
kind: ServiceAccount
metadata:
  name: s3-reader
  namespace: production
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/rosa-s3-reader
---
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      serviceAccountName: s3-reader
      containers:
      - name: app
        # AWS SDK automatically uses IRSA credentials

Setup on ROSA:

# Same as EKS - create IAM role with OIDC provider
aws iam create-role \
  --role-name rosa-s3-reader \
  --assume-role-policy-document file://trust-policy.json

# Trust policy uses ROSA OIDC provider
# Get OIDC provider URL:
rosa describe cluster --cluster=my-cluster | grep "OIDC Endpoint"
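
The referenced trust-policy.json has the same shape as on EKS. A sketch with placeholder values; replace 123456789012 and OIDC_ENDPOINT with your account ID and the OIDC endpoint reported by the command above:

cat > trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {
      "Federated": "arn:aws:iam::123456789012:oidc-provider/OIDC_ENDPOINT"
    },
    "Action": "sts:AssumeRoleWithWebIdentity",
    "Condition": {
      "StringEquals": {
        "OIDC_ENDPOINT:sub": "system:serviceaccount:production:s3-reader"
      }
    }
  }]
}
EOF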

AWS Load Balancer Controller

Can still use ALB!

# Install AWS Load Balancer Controller on ROSA
helm repo add eks https://aws.github.io/eks-charts
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  -n kube-system \
  --set clusterName=rosa-cluster
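
Note that the controller pods themselves need AWS API access. One approach (an assumption here, reusing the IRSA pattern above; the role name is hypothetical) is to annotate the controller's service account with an IAM role created from the controller's published IAM policy, then restart it:

# rosa-alb-controller is a placeholder role carrying the controller's documented IAM policy
oc annotate serviceaccount aws-load-balancer-controller -n kube-system \
  eks.amazonaws.com/role-arn=arn:aws:iam::123456789012:role/rosa-alb-controller --overwrite
oc rollout restart deployment/aws-load-balancer-controller -n kube-system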

Ingress with ALB:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-alb
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
spec:
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 8080

Or use OpenShift Route with ALB:

Both can coexist: use Routes for straightforward HTTP(S) exposure, and ALB-backed Ingress when you need AWS-specific features such as WAF integration or fine-grained target-group control.

EBS CSI Driver

Works out of the box:

# Same StorageClass as EKS
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp3
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
  encrypted: "true"
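
Claims against it behave exactly as they did on EKS, for example:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: api-data        # example claim name
  namespace: production
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: gp3
  resources:
    requests:
      storage: 20Gi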

AWS Secrets Manager CSI

Still works:

apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: aws-secrets
  namespace: production
spec:
  provider: aws
  parameters:
    objects: |
      - objectName: "production/database/password"
        objectType: "secretsmanager"
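
Consuming it is also unchanged; a minimal pod-spec sketch, assuming the Secrets Store CSI driver and its AWS provider are installed on the cluster and the pod's service account can read the secret:

spec:
  serviceAccountName: s3-reader   # example SA; its IAM role must allow secretsmanager:GetSecretValue
  containers:
  - name: app
    image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/api-server:v1.0
    volumeMounts:
    - name: aws-secrets
      mountPath: /mnt/secrets
      readOnly: true
  volumes:
  - name: aws-secrets
    csi:
      driver: secrets-store.csi.k8s.io
      readOnly: true
      volumeAttributes:
        secretProviderClass: aws-secrets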

VPC, Security Groups, Subnets

All AWS networking constructs work the same:

  • ROSA nodes are EC2 instances in your VPC
  • Security Groups apply to nodes
  • VPC CNI networking available
  • PrivateLink endpoints work
  • Transit Gateway connections work

16. Detection Patterns for MTA/Konveyor

Automated Migration Analysis

MTA/Konveyor should flag these patterns when analyzing EKS β†’ ROSA migrations:

Pattern 1: Root Containers

# PATTERN: No securityContext or runAsUser: 0
spec:
  containers:
  - name: app
    image: nginx:latest
    # Missing securityContext

# OR
securityContext:
  runAsUser: 0

# ACTION: 
# - Suggest non-root image
# - Or recommend anyuid SCC with documentation
# - Link to image rebuilding guide

Pattern 2: Privileged Ports

# PATTERN: Ports < 1024
ports:
- containerPort: 80
- containerPort: 443

# ACTION:
# - Suggest changing to 8080, 8443
# - Check if image supports port configuration
# - Recommend rebuilding image

Pattern 3: Host Path Volumes

# PATTERN: hostPath volumes
volumes:
- name: data
  hostPath:
    path: /data

# ACTION:
# - Flag as incompatible with SCCs
# - Suggest PVC or emptyDir
# - Warn about security implications

Pattern 4: Ingress Resources

# PATTERN: Ingress with ALB annotations
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/*

# ACTION:
# - Suggest creating equivalent Route
# - Show Route conversion example
# - Note that Ingress still works but Routes preferred

Pattern 5: Explicit runAsUser

# PATTERN: Specific UID requirements
securityContext:
  runAsUser: 1000

# ACTION:
# - Warn that OpenShift assigns random UID
# - Suggest fsGroup for file permissions
# - Check if image truly requires specific UID

Pattern 6: CloudWatch Dependencies

# PATTERN: CloudWatch agent/FluentBit configs
kind: DaemonSet
metadata:
  name: cloudwatch-agent

# ACTION:
# - Suggest OpenShift Logging (EFK)
# - Or note CloudWatch still works
# - Provide migration guide

Pattern 7: Privileged Containers

# PATTERN: Privileged containers
securityContext:
  privileged: true

# ACTION:
# - Flag as requiring privileged SCC
# - Request justification
# - Suggest alternatives if possible

Pattern 8: Host Network

# PATTERN: Host network
spec:
  hostNetwork: true

# ACTION:
# - Flag as requiring hostnetwork SCC
# - Suggest Network Policies instead
# - Document security implications
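
Before codifying these as analyzer rules, a quick pre-scan of exported manifests can size the problem. An illustrative grep sketch (not MTA/Konveyor rule syntax; manifests/ is wherever the EKS YAML was exported):

# Workloads likely to hit SCC restrictions
grep -rlE 'runAsUser:[[:space:]]*0|privileged:[[:space:]]*true|hostNetwork:[[:space:]]*true|hostPath:' manifests/

# Containers binding privileged ports
grep -rlE 'containerPort:[[:space:]]*(80|443)$' manifests/

# Ingresses that will need Route equivalents
grep -rl 'alb.ingress.kubernetes.io' manifests/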

Konveyor Analysis Report Example

# EKS to ROSA Migration Analysis

## Critical Issues (Must Fix)

### 1. Root Containers (15 deployments)
- `nginx-app` in namespace `production`
  - Image: nginx:1.21 (runs as root)
  - Binds to port 80
  - **Action**: Use nginx-openshift image or rebuild with ports 8080
  - **Effort**: Medium (2-4 hours per deployment)

### 2. Host Path Volumes (3 deployments)
- `legacy-app` in namespace `legacy`
  - Uses hostPath: /var/lib/app
  - **Action**: Migrate to PVC with gp3 StorageClass
  - **Effort**: High (requires data migration)

## Medium Priority

### 3. Ingress Resources (23 ingresses)
- All ingresses use ALB controller
- **Action**: Create equivalent Routes (automated conversion available)
- **Effort**: Low (scripted conversion)

### 4. CloudWatch Metrics (cluster-wide)
- FluentBit DaemonSet shipping to CloudWatch
- **Action**: Enable OpenShift Logging or keep CloudWatch
- **Effort**: Medium (reconfigure dashboards)

## Low Priority

### 5. Service Account RBAC (45 service accounts)
- Many assume default cluster-admin permissions
- **Action**: Create explicit Roles/RoleBindings
- **Effort**: Low (mostly automated)

## Summary

- **Total Workloads**: 127
- **Critical Issues**: 18
- **Estimated Migration Time**: 2-3 weeks
- **Recommended Approach**: Incremental (stateless first)

17. Migration Strategies

Strategy 1: Direct Migration (Fastest)

Timeline: 1-2 weeks
Best for: Non-production, dev/test environments

# Week 1: Preparation
- Create ROSA cluster
- Set up AWS integrations (IRSA, EBS, etc.)
- Identify image compatibility issues
- Rebuild images for OpenShift

# Week 2: Migration
- Deploy applications to ROSA
- Fix SCC issues
- Test functionality
- Cutover DNS

Pros:

  • Fast
  • Simple

Cons:

  • High risk
  • Downtime required
  • All-or-nothing

Strategy 2: Phased Migration (Safest)

Timeline: 4-6 weeks
Best for: Production environments

Phase 1: Infrastructure (Week 1-2)

  • Create ROSA cluster
  • Configure AWS integrations
  • Set up monitoring, logging
  • Validate networking

Phase 2: Non-Production (Week 2-3)

  • Migrate dev/staging environments
  • Identify issues
  • Update runbooks
  • Train team

Phase 3: Stateless Production (Week 3-4)

  • Migrate stateless apps first
  • Run in parallel with EKS
  • Gradually shift traffic

Phase 4: Stateful Production (Week 4-6)

  • Database migrations (with replication)
  • Message queues
  • Final cutover

Pros:

  • Low risk
  • Learn as you go
  • Easy rollback

Cons:

  • Longer timeline
  • More complex coordination

Strategy 3: Blue-Green Cluster (Most Controlled)

Timeline: 6-8 weeks
Best for: Mission-critical applications

Phase 1: Build Green (ROSA) (Week 1-3)

  • Parallel infrastructure
  • All applications deployed
  • Full testing

Phase 2: Validation (Week 4-5)

  • Load testing
  • Security scanning
  • Chaos engineering

Phase 3: Traffic Shift (Week 6)

  • 10% traffic to ROSA
  • Monitor for 48 hours
  • Increase to 50%
  • Monitor for 48 hours
  • Full cutover

Phase 4: Cleanup (Week 7-8)

  • Keep EKS for 1-2 weeks
  • Final decommission

Pros:

  • Safest
  • Easy rollback
  • Thorough validation

Cons:

  • Highest cost (dual clusters)
  • Longest timeline
  • Complex traffic management

18. Quick Reference Tables

Critical Differences Summary

| Aspect | EKS | ROSA | Migration Effort |
|---|---|---|---|
| Security | Permissive | Restrictive SCCs | πŸ”΄ High |
| Ingress | Ingress (ALB) | Routes preferred | 🟑 Medium |
| Images | Any image | Non-root required | πŸ”΄ High |
| RBAC | Permissive defaults | Explicit grants | 🟑 Medium |
| Monitoring | CloudWatch | Prometheus built-in | 🟑 Medium |
| Logging | CloudWatch | EFK built-in | 🟑 Medium |
| GitOps | Self-managed ArgoCD | OpenShift GitOps | 🟒 Low |
| Registry | ECR only | ECR + Internal | 🟒 Low |
| AWS Integrations | Native | Still works! | 🟒 Low |

SCC Quick Reference

| SCC | Use Case | Risk Level | When to Use |
|---|---|---|---|
| restricted | Default, most secure | 🟒 Low | Always (if possible) |
| nonroot | Must run non-root | 🟒 Low | Non-root images |
| anyuid | Any UID (including root) | 🟑 Medium | Legacy apps |
| hostaccess | Access host resources | πŸ”΄ High | Rarely |
| privileged | Full privileges | πŸ”΄ Critical | Almost never |

Route vs Ingress Feature Matrix

| Feature | Route | Ingress | Notes |
|---|---|---|---|
| Path routing | βœ“ (one path per Route) | βœ“ | Ingress more flexible for many paths |
| TLS termination | βœ“ Native | βœ“ Via controller | Routes simpler |
| Traffic splitting | βœ“ Native | Only via controller-specific annotations | Routes better for canary |
| WebSocket | βœ“ Native | Depends on controller | Routes guaranteed |
| Wildcard | βœ“ | Depends on controller | Routes more flexible |
| mTLS | βœ“ Native | Complex | Routes easier |
| Multi-cloud | ❌ OpenShift-only | βœ“ Standard | Ingress more portable |

Image Compatibility Checklist

  • Runs as non-root user
  • Uses ports > 1024
  • Writable directories have group permissions (chmod g=u)
  • No hardcoded UID/GID requirements
  • Doesn't require privileged mode
  • Doesn't need host path volumes
  • Works with random UID assignment
  • Logs to stdout/stderr (not files)

Migration Readiness Checklist

Pre-Migration

  • ROSA cluster created and configured
  • AWS integrations validated (IRSA, EBS, VPC)
  • All images tested for OpenShift compatibility
  • SCCs documented and approved
  • Routes created for all Ingresses
  • RBAC roles defined
  • Monitoring/logging configured
  • Team trained on OpenShift

During Migration

  • Applications deployed to ROSA
  • SCC issues resolved
  • Persistent volumes migrated
  • Network policies tested
  • Service mesh configured (if needed)
  • Smoke tests passing
  • Performance validated

Post-Migration

  • DNS updated
  • CloudWatch/monitoring cutover
  • Alerts configured
  • Documentation updated
  • Runbooks updated
  • Team handoff complete
  • EKS cluster decommissioned

Appendix: Common Error Messages

"container has runAsNonRoot and image will run as root"

Cause: Image runs as root, restricted SCC blocks it
Fix: Use non-root image or grant anyuid SCC

"pods is forbidden: User cannot list resource"

Cause: Insufficient RBAC permissions
Fix: Create Role and RoleBinding

"failed to create containerd container: cannot set fsgroup to 0"

Cause: Trying to use GID 0
Fix: Set fsGroup to non-zero value

"Error: port 80: bind: permission denied"

Cause: Trying to bind to privileged port without capabilities
Fix: Use port > 1024 or rebuild image


Conclusion

Migrating from EKS to ROSA is fundamentally different from other Kubernetes migrations because both platforms run on AWS. The challenges are not about cloud provider differences, but about OpenShift's enterprise security model and opinionated platform features.

Top 3 Migration Challenges:

  1. Security Context Constraints (SCCs) - Root containers and privileged ports
  2. Image Compatibility - Need non-root images or rebuilding
  3. Routes vs Ingress - Learning OpenShift's routing model

Top 3 Migration Benefits:

  1. AWS Integration Continuity - IRSA, EBS, VPC, Security Groups all work
  2. Built-in Enterprise Features - Monitoring, logging, GitOps, service mesh
  3. Red Hat Support - Enterprise support for entire stack

Success Factors:

  • Thorough SCC analysis and planning
  • Image compatibility testing before migration
  • Phased approach (non-prod β†’ stateless β†’ stateful)
  • Team training on OpenShift-specific features
  • Leveraging MTA/Konveyor for automated analysis

Tools:

  • MTA/Konveyor - Automated migration analysis
  • OpenShift GitOps - Built-in ArgoCD
  • oc CLI - OpenShift command-line tool
  • Velero - Backup/restore for data migration

This migration is an opportunity to adopt a more opinionated, enterprise-grade Kubernetes platform while maintaining your existing AWS investments.
