Good set of questions. Let's break this down so you end up with robust, reliable automation.

1. Cron Scheduling with GitHub Actions

You can use the schedule trigger in GitHub Actions to run your workflows at specific times. You’ll need two workflows:

  • One for terraform apply at 8 a.m.
  • One for terraform destroy at 5 p.m.

Example schedule YAML:
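A minimal sketch of the apply workflow (the workflow name and Terraform steps are assumptions; note that GitHub's schedule trigger runs on UTC, so shift the cron hours for your time zone):

    name: scheduled-terraform-apply
    on:
      schedule:
        - cron: '0 8 * * *'   # 8 a.m. UTC every day
    jobs:
      apply:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - uses: hashicorp/setup-terraform@v3
          - run: terraform init
          - run: terraform apply -auto-approve

The destroy workflow mirrors this one, with cron: '0 17 * * *' as the schedule and terraform destroy -auto-approve as the final step.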

@GuillaumeFalourd
GuillaumeFalourd / rfc.md
Last active February 11, 2025 20:24
request for comment template

Cache DLQ Buffer

Goal:

This feature aims to save messages that cannot be sent while the destination component is offline and to resend them once it is back online.

For instance, if the queue is offline for some reason, the replicator should keep the messages it needs to send to the queue and resend them when the queue is back online.

Likewise, if the repository is offline for some reason, the replicator should keep the messages it needs to send to the repository and resend them when the repository is back online.

@GuillaumeFalourd
GuillaumeFalourd / aws.md
Last active January 3, 2025 11:41
AWS Concepts

Common AWS Services and their use cases

1. Amazon S3 (Simple Storage Service)

  • Purpose: Object storage service.
  • What it allows:
    • Store and retrieve any amount of data, such as files, images, backups, or logs.
    • Host static websites or serve static assets (e.g., HTML, CSS, JavaScript).
    • Integrate with other AWS services for data pipelines, analytics, or backups.
  • Example Use Case: Storing event data from Kafka topics for long-term retention or reprocessing (e.g., EventBus Sink to S3).
  • Comparison: Similar to Google Cloud Storage or Azure Blob Storage.
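As a small illustration of provisioning a bucket with infrastructure as code, a minimal CloudFormation sketch (the logical name and versioning setting are illustrative assumptions):

    AWSTemplateFormatVersion: '2010-09-09'
    Description: Minimal sketch of an S3 bucket definition
    Resources:
      EventArchiveBucket:            # hypothetical logical name
        Type: AWS::S3::Bucket
        Properties:
          VersioningConfiguration:
            Status: Enabled          # keeps object history, useful for reprocessing scenarios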
@GuillaumeFalourd
GuillaumeFalourd / datadog.md
Last active December 30, 2024 13:52
Datadog Concepts

Datadog Core Concepts

  • Metrics: Datadog collects metrics from your infrastructure, applications, and services. Metrics are time-series data points that help you monitor performance and health.
  • Logs: Datadog aggregates logs from your systems, applications, and cloud providers. Logs are essential for troubleshooting and understanding system behavior.
  • Traces (APM): Application Performance Monitoring (APM) traces help you track requests across distributed systems, providing insights into latency, errors, and bottlenecks.
  • Dashboards: Customizable visualizations of metrics, logs, and traces. Dashboards help you monitor key performance indicators (KPIs) in real-time.
  • Monitors: Alerts that notify you when metrics, logs, or traces deviate from expected thresholds or patterns.
  • Service Map: A real-time visualization of the relationships between your services and their dependencies.

Setting Up Datadog
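As a rough illustration of where these signals are switched on, a minimal Agent configuration sketch (assuming the host-based Datadog Agent; the key and site values are placeholders):

    # /etc/datadog-agent/datadog.yaml (minimal sketch)
    api_key: <YOUR_DATADOG_API_KEY>
    site: datadoghq.com      # or datadoghq.eu, depending on the account region
    logs_enabled: true       # turn on log collection
    apm_config:
      enabled: true          # accept traces from instrumented applications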

@GuillaumeFalourd
GuillaumeFalourd / kubernetes.md
Created December 26, 2024 13:11
Kubernetes concepts

Kubernetes Concepts

  1. Pod: A pod is the smallest deployable unit in Kubernetes, encapsulating one or more containers that share storage, network, and specifications. Pods ensure co-located containers work together as a single application.

  2. Node: A node is a physical or virtual machine in a Kubernetes cluster. It runs pods and includes essential components like kubelet, kube-proxy, and a container runtime to manage workloads.

  3. Service: A service provides a stable network endpoint to expose a set of pods. It enables communication between components or external clients, abstracting dynamic pod IPs with consistent DNS or IP.

  4. ConfigMap: A ConfigMap stores non-sensitive configuration data as key-value pairs. It decouples configuration from application code, allowing pods to consume settings dynamically via environment variables, command-line arguments, or mounted volumes.
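To tie a couple of these concepts together, a minimal sketch of a ConfigMap consumed by a Pod (the names and image are illustrative):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: app-config            # hypothetical name
    data:
      LOG_LEVEL: "info"
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: app-pod               # hypothetical name
    spec:
      containers:
        - name: app
          image: nginx:1.27       # any container image
          envFrom:
            - configMapRef:
                name: app-config  # injects LOG_LEVEL as an environment variable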

Seniority Levels

1. Junior Backend Cloud Developer

Technical Knowledge:

  • Programming Language: Can build basic RESTful APIs in a common programming language (e.g., Python, Java, Go).
  • Containers: Can write a simple Dockerfile and use docker-compose to run the application locally (a minimal sketch follows this list).
  • Data Persistence: Can integrate the application with a database (relational or NoSQL) and perform CRUD operations.
  • Infrastructure as Code (IaC): Has a basic grasp of Terraform and can follow examples to provision simple resources (e.g., a database instance or an S3 bucket).
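As an example of the kind of local setup expected at this level, a minimal docker-compose.yml sketch (service names, image, ports, and credentials are illustrative assumptions):

    services:
      api:
        build: .                  # built from the project's Dockerfile
        ports:
          - "8080:8080"
        environment:
          DATABASE_URL: postgres://app:app@db:5432/app   # illustrative connection string
        depends_on:
          - db
      db:
        image: postgres:16
        environment:
          POSTGRES_USER: app
          POSTGRES_PASSWORD: app
          POSTGRES_DB: app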

The issue you're experiencing with the odoo job being skipped could be due to several reasons. Here are a few potential causes and solutions:

  1. Conditional Check Failure: The odoo job has a conditional check:
    if: ${{ needs.versions.outputs.names_comma != '' }}

If the names_comma output from the versions job is an empty string, the odoo job will be skipped. Ensure that the versions job is producing the expected output.

  2. Output Propagation: Ensure that the outputs from the prev_git_tag_info and versions jobs are correctly propagated and used in the odoo job. Any issue in these jobs can cause the odoo job to be skipped.
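To make the wiring explicit, here is a stripped-down sketch of how the outputs propagation is expected to look (the step id and version values are illustrative, not taken from your workflow):

    jobs:
      versions:
        runs-on: ubuntu-latest
        outputs:
          names_comma: ${{ steps.collect.outputs.names_comma }}   # job output must map the step output
        steps:
          - id: collect
            run: echo "names_comma=16.0,17.0" >> "$GITHUB_OUTPUT"
      odoo:
        needs: versions
        if: ${{ needs.versions.outputs.names_comma != '' }}
        runs-on: ubuntu-latest
        steps:
          - run: echo "Building ${{ needs.versions.outputs.names_comma }}"

If names_comma is never written to $GITHUB_OUTPUT in the versions job, the expression evaluates to an empty string and odoo is skipped.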

Using GitHub App to Install Private Repos Across Organizations

Using GitHub Apps to manage access to private repositories across organizations is indeed a more secure and scalable approach compared to using service accounts and SSH keys. GitHub Apps offer granular permissions, better security controls, and automated workflows. Here's a step-by-step guide on how to achieve this:

Step 1: Create a GitHub App

  1. Create the GitHub App in Org1:
    • Go to the Settings of Org1.
    • Select Developer settings and then GitHub Apps.
    • Click New GitHub App.
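Once the app is created and installed on the target organization, a workflow can exchange the app credentials for an installation token and use it to access the private repositories. A sketch, assuming the actions/create-github-app-token action and APP_ID / APP_PRIVATE_KEY secrets (the organization and repository names are placeholders):

    steps:
      - uses: actions/create-github-app-token@v1
        id: app-token
        with:
          app-id: ${{ secrets.APP_ID }}
          private-key: ${{ secrets.APP_PRIVATE_KEY }}
          owner: org2                        # organization that installed the app
      - uses: actions/checkout@v4
        with:
          repository: org2/private-repo      # placeholder repository
          token: ${{ steps.app-token.outputs.token }}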

The "INFINITE_LOOP_DETECTED" error you're encountering typically happens when there is a repetitive request pattern that the system detects as a loop. Given the context of your GitHub Actions workflow and the information provided, here are a few potential reasons and solutions for this issue:

Potential Causes and Solutions

Multiple Parallel Runs

Since the workflow can be triggered in parallel by multiple events, and each run uses the same VERCEL_TOKEN, it’s possible that these concurrent requests are causing Vercel to detect an infinite loop.

Solution: Add a delay or rate limiting to the Get deployment details step to prevent too many simultaneous requests. You can introduce a short sleep before making the API call:
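For example (the step below is a sketch; the existing API call and the 30-second delay are placeholders to adapt):

    - name: Get deployment details
      env:
        VERCEL_TOKEN: ${{ secrets.VERCEL_TOKEN }}
      run: |
        sleep 30   # arbitrary back-off so parallel runs don't query Vercel at the same instant
        # ...existing Vercel API call goes here, using $VERCEL_TOKEN...

If the parallel runs themselves are the root cause, GitHub's concurrency setting can also serialize overlapping runs so only one queries the API at a time.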

The issue does indeed seem related to the system Keychain being locked on the GitHub Actions runner, which is preventing the [CP] Embed Pods Frameworks step from completing. The setup_ci command configures the CI environment, including the necessary Keychain settings, but it needs to be combined with Fastlane's match to properly manage code signing certificates and provisioning profiles.

Understanding setup_ci and match

  1. setup_ci:
    • This command is designed to perform various CI-specific setup tasks. It includes unlocking the Keychain and setting up other CI-related configurations.
  2. match:
    • This action manages the code signing certificates and provisioning profiles the build needs, so the signing step can complete once the Keychain is set up by setup_ci.