Security in GAP

This page describes the security controls that protect applications built and deployed on GAP. It covers the platform lifecycle from team onboarding and source code management, through container, frontend, and infrastructure deployment pipelines, to Kubernetes runtime and credential management.

Most controls are enforced automatically by the platform. Where teams need to configure something themselves or request a change through a pull request, this is noted explicitly.

Team onboarding

When a new team joins GAP, their Azure environment is provisioned through a PR-based Terraform workflow. The onboarding configuration is defined as code using shared modules, ensuring every team receives a standardised, hardened environment.

Azure resource provisioning

Each team receives a dedicated set of Azure resources with security controls applied by default:

  • Shared resource group — a team-level resource group for shared infrastructure, protected with a CanNotDelete management lock
  • Azure Key Vault — a shared Key Vault with separated access policies (read-only and read-write groups), network ACLs, and Event Grid subscriptions for audit logging of secret, key, and certificate lifecycle events
  • Terraform state storage — a dedicated storage account for Terraform state files with geo-redundant replication (RAGRS), versioning and change feed enabled, a deny-by-default firewall with IP allowlisting, and a management lock to prevent accidental deletion
  • Project resource groups — additional resource groups for specific projects, each with their own RBAC bindings inherited from the team hierarchy

Identity and access control

Access to team resources is managed through a tiered Entra ID group hierarchy, with each tier granting progressively broader permissions:

  • Readers — read-only access to team resources
  • Contributors — modify access to team resources
  • Contributors (PIM) — elevated contributor access through Privileged Identity Management. Activations are limited to 8 hours and require a written justification, providing time-limited elevated access with an audit trail
  • Owners — full control over team resources
  • Approvers — designated members who approve access requests through access packages

All groups are security-enabled with duplicate-name prevention. Membership is managed externally through access packages and automated provisioning — not manually in the Terraform configuration.

Self-service access packages

Team members request access through Entra ID Access Packages with built-in governance controls:

  • Approval required — all access requests require approval from the team's designated approvers, with a justification from the requester
  • Time-limited grants — access is granted for one year and must be renewed
  • Annual access reviews — access is reviewed annually. If not reviewed, access is automatically removed
  • Extendable — users can request an extension of their access without a new approval round

Source code security

Repository provisioning

All GitHub repositories in Gjensidige are created through the Platform GitHub Bot — the standard GitHub UI for creating repositories is disabled. This ensures repositories start with hardened security defaults:

  • Private visibility — repositories are created as private by default
  • Branch protection on main — requires a minimum of 2 approving reviews, enforces review from code owners, and requires signed commits
  • Read-only workflow permissions — GitHub Actions workflows default to read-only access, preventing supply chain attacks through compromised workflows
  • GitHub Advanced Security (GHAS) — code scanning, dependency scanning, and secret scanning with push protection are enabled at creation
  • Team-based ownership — every repository is assigned to a team with appropriate permission levels (admin for the owning team, read/write for collaborators)

These defaults are applied at repository creation. Repository administrators can modify settings afterward — drift from the baseline is tracked by the Security Score system.

GitHub Advanced Security

GHAS is enabled on every new repository by the provisioning bot and provides three layers of source code protection:

  • Code scanning — static analysis to detect vulnerabilities in application code
  • Dependency scanning — identifies known vulnerabilities in third-party dependencies
  • Secret scanning and push protection — prevents accidental exposure of credentials and tokens in commits

Security Score

GAP uses an in-house developed security scoring system called Security Score to continuously evaluate repository security posture. Each repository is scored on a 0–100 scale based on a weighted combination of security dimensions — including dependency vulnerability scanning, secret scanning, branch protection configuration, code scanning results, and GitHub Actions permission hygiene. The individual dimensions and their weights are tuned over time as the threat landscape evolves.

Scores are aggregated at team and department level, giving organisational visibility into repository and supply chain security over time. The goal is to motivate teams to continuously improve their security controls rather than to enforce hard thresholds.

Signed commits

All GitHub commits are cryptographically signed on the developer's machine and verified by GitHub. Commit signature verification is enforced through branch protection rules applied at repository creation, ensuring commit authenticity.

GitHub organisation governance

Beyond individual repository defaults, Gjensidige manages GitHub organisation-level security settings as code using Terraform. This ensures consistent governance across the entire organisation:

Security features enabled by default on all new repositories:

  • GHAS
  • Dependabot alerts and automatic security updates
  • Dependency graph
  • Secret scanning with push protection

Organisation rulesets enforce mandatory policies that cannot be bypassed by repository administrators:

  • Default branch data loss prevention — prevents deletion and force pushes to the default branch across all repositories
  • Signed commits — cryptographic commit signatures are required on the default branch across the organisation (see Signed commits)
  • Terraform and Kubernetes manifest protection — infrastructure repositories require code owner review, signed commits, and at least one approving review before changes can be merged

GitHub Actions allowlist — only explicitly approved actions can run in the organisation. Third-party actions must be reviewed and pinned to a specific version or commit SHA, preventing supply chain attacks through compromised actions.
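In a workflow file, pinning looks like this — the SHA shown is illustrative, not a real release:

```yaml
steps:
  # First-party actions may be pinned to a tag...
  - uses: actions/checkout@v4
  # ...but third-party actions must be pinned to a full commit SHA, so the
  # reviewed code is exactly what runs even if the tag is later moved.
  - uses: aquasecurity/trivy-action@0123456789abcdef0123456789abcdef01234567
```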

Repository creation is restricted — organisation members cannot create repositories directly. Combined with the Platform GitHub Bot, this ensures every repository is provisioned through a controlled process with enforced security defaults.

Container deployment security

Backend services in GAP follow a GitOps-based deployment path: code is committed to GitHub, a container image is built and pushed to Azure Container Registry (ACR), the Kubernetes manifest repository is updated with the new image tag, and Argo CD syncs the change to the cluster. Security controls are applied at each step.

Container build pipeline

All container images are built using the docker-build-scan-push-action — a standardised GitHub Actions composite action that handles building, scanning, signing, attesting, and pushing images to ACR:

  • Trivy vulnerability scanning — images are scanned with Aqua Security Trivy before being pushed to ACR. Images with CRITICAL vulnerabilities are blocked. Continuous runtime scanning detects new vulnerabilities after deployment
  • Image tag immutability — after a successful push, the image tag is locked in ACR by disabling write and delete operations, so the tested image is exactly what runs in production
  • Build provenance (SLSA) — SLSA build provenance attestations are generated using actions/attest-build-provenance, creating a signed record of the source repository, commit, runner environment, and build inputs
  • SBOM attestation — a CycloneDX Software Bill of Materials is generated and attached to each image using actions/attest-sbom, inventorying all packages and dependencies
  • Image signing with Sigstore Cosign — Sigstore Cosign creates a cryptographic signature for every image at build time. Signatures are verified when the image is pulled into Kubernetes — if the signature does not match, the deployment is denied
  • Image annotations and traceability — each image is annotated with the triggering actor, workflow run ID, and job name. Combined with digest-pinned image references (registry/repo:tag@sha256:...), this links a running container back to its source code and build

The action authenticates with ACR using OpenID Connect (OIDC) federated credentials — no long-lived secrets are stored in the CI pipeline.
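A minimal workflow using the action might look like the following sketch — the organisation prefix, version ref, and input names are assumptions, not the action's actual interface:

```yaml
name: build-and-push
on:
  push:
    branches: [main]

permissions:
  contents: read
  id-token: write      # OIDC federation with Azure — no stored secrets
  attestations: write  # SLSA provenance and SBOM attestations

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: gjensidige/docker-build-scan-push-action@v1  # hypothetical ref
        with:
          registry: gjensidige.azurecr.io
          image: my-team/my-app                            # hypothetical image name
```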

Manifest repository update

After the container image is built and pushed to ACR, the gap-workflow-dispatch-action triggers a reusable manifest update workflow in the Kubernetes manifest repository. The workflow creates a signed commit updating the image tag, opens a pull request with deploy metadata (image, environment, commit SHA, actor, workflow run link), and auto-merges it. The dispatch event carries the digest-pinned image reference, target environment, source commit SHA, and the GitHub Actions run ID — linking the manifest change back to the build that produced the image. Authentication uses short-lived GitHub tokens scoped to the target manifest repository.

Kubernetes security

Cluster hardening

GAP production clusters are hardened at the infrastructure level with a defence-in-depth approach. Cluster configuration is defined in a shared Terraform module:

  • Entra ID-only authentication — local Kubernetes accounts are disabled. All cluster access requires Entra ID authentication, ensuring centralised identity management and conditional access policies apply
  • API server VNet integration — the Kubernetes API server runs inside the cluster's virtual network with access restricted to authorised IP ranges (corporate network and CI/CD runners only)
  • AzureLinux with ephemeral OS disks — nodes run AzureLinux (Microsoft's container-optimised Linux distribution) and use ephemeral OS disks, meaning node state is not persisted. If a node is compromised, replacing it produces a clean instance
  • Automatic upgrades — Kubernetes patches are applied automatically, and node OS images are upgraded on a rolling basis through scheduled maintenance windows
  • Run command disabled — the AKS run command feature is disabled, preventing remote command execution on nodes through the Azure control plane
  • Image cleaner — unused container images are automatically removed from nodes every 48 hours, reducing the attack surface
  • Management locks — all clusters are protected with CanNotDelete locks, preventing accidental or unauthorised deletion of cluster resources
  • Dedicated system node pool — system components run on a dedicated node pool that rejects application workloads, ensuring platform stability and reducing the impact of application issues
  • Multi-zone availability — nodes are distributed across three availability zones for resilience against zone-level failures

GitOps deployment with Argo CD

All application deployments to Kubernetes are managed through Argo CD, following a GitOps model where the desired state of every application is declared in Git and continuously reconciled by the cluster.

Team isolation through AppProjects

Each team receives a dedicated Argo CD AppProject that enforces strict boundaries:

  • Source repository restriction — each project can only deploy from the team's own manifest repository, preventing cross-team interference
  • Destination namespace restriction — deployments are scoped to the team's assigned Kubernetes namespaces only
  • Cluster resources denied — teams cannot create or modify cluster-scoped resources (namespaces, ClusterRoles, CRDs). The cluster resource whitelist is empty by default
  • Namespace resources whitelisted — only approximately 24 specific resource types (Deployments, Services, ConfigMaps, CiliumNetworkPolicies, etc.) can be managed, limiting the impact of misconfiguration
  • RBAC via Entra ID groups — project-level permissions (sync, restart, view logs) are bound to Entra ID groups, following the same identity model as the rest of the platform
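As a sketch, such an AppProject might look like this — the team and repository names are hypothetical, and the resource whitelist is abbreviated:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: my-team
  namespace: argocd
spec:
  sourceRepos:
    - https://github.com/gjensidige/my-team-manifests  # only the team's own repo
  destinations:
    - server: https://kubernetes.default.svc
      namespace: my-team-*                             # only the team's namespaces
  clusterResourceWhitelist: []                         # no cluster-scoped resources
  namespaceResourceWhitelist:                          # abbreviated
    - group: apps
      kind: Deployment
    - group: ""
      kind: Service
    - group: cilium.io
      kind: CiliumNetworkPolicy
```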

Cluster authentication

Argo CD authenticates to managed clusters using Azure Workload Identity — the same OIDC-based mechanism used by applications. No static tokens or long-lived credentials are stored. The controller's cluster access follows a two-tier RBAC model:

  • Namespace admin — write access only to the specific namespaces each cluster is configured to manage (explicit allowlist)
  • Cluster-wide read-only — read access for syncing cluster-scoped resources, with targeted write permissions for specific CRDs required for GitOps operations
  • No cluster-admin — the Argo CD application controller does not hold cluster-admin privileges

GitOps governance

All Argo CD configuration — projects, applications, and RBAC — is managed as code in Git. Changes require pull request approval, and CODEOWNERS rules ensure each team's configuration is reviewed by the appropriate team members. This provides an audit trail of every deployment configuration change across the platform.

Gatekeeper Policy Controller

GAP uses the Gatekeeper Policy Controller to enforce security policies in Kubernetes. The policy controller is integrated with Azure Policy, so enforcement is also reflected in the Azure secure score for the overall Azure environment.

GAP enforces the Kubernetes Restricted pod security standard — the most restrictive baseline — along with built-in Azure policies and custom GAP policies. Policies are defined as code using Terraform and applied at the subscription level across all AKS clusters.
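For illustration, a Gatekeeper constraint enforcing the no-privileged-containers rule looks roughly like this — the constraint kind shown is from the community gatekeeper-library, while the platform's actual constraints are provisioned through Azure Policy:

```yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sPSPPrivilegedContainer
metadata:
  name: psp-privileged-container
spec:
  enforcementAction: deny   # reject violating pods at admission
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
```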

The enforced policies fall into the following categories:

Container security policies

  • No privileged containers — containers cannot run in privileged mode
  • Prevent privilege escalation — blocks processes from gaining more privileges than their parent
  • No CAP_SYS_ADMIN — prevents granting the powerful CAP_SYS_ADMIN capability
  • Allowed container capabilities — only explicitly approved Linux capabilities are permitted
  • Read-only root filesystem — containers must use a read-only root filesystem
  • Approved user and group IDs — pods and containers can only run with pre-approved user/group IDs
  • Disable automounting API credentials — prevents automatic mounting of service account tokens
  • Allowed seccomp profiles — only approved seccomp profiles can be used
  • Allowed AppArmor profiles — only approved AppArmor profiles can be used
  • Allowed SELinux options — only approved SELinux options can be used
  • Allowed ProcMountType — restricts the proc mount types available to containers
  • No forbidden sysctl interfaces — prevents use of dangerous sysctl kernel parameters

Image and registry policies

  • Allowed container images — only images from approved registries (e.g. gjensidige.azurecr.io) and a curated allowlist of third-party images are permitted
  • Allowed pull policy — containers must use an approved image pull policy

Network policies

  • HTTPS only — clusters must be accessible only over HTTPS
  • Internal load balancers — only internal load balancers are allowed
  • Allowed external IPs — services can only use approved external IPs
  • Block host network — pods can only use approved host network and port ranges

Pod and workload policies

  • No naked pods — pods must be managed by a controller (e.g. Deployment, StatefulSet)
  • No default namespace — workloads cannot be deployed to the default namespace
  • Required labels — pods must have specific labels for identification and ownership
  • Readiness and liveness probes — all containers must have health probes configured
  • Block host process ID and IPC — containers cannot share the host's process ID or IPC namespace

Volume policies

  • No host path volumes — host path volumes are restricted to approved paths only
  • Allowed volume types — only approved volume types can be used
  • Allowed FlexVolume drivers — only approved FlexVolume drivers are permitted

Cluster-level policies

  • RBAC required — role-based access control must be enabled on all clusters
  • Local authentication disabled — clusters must not allow local authentication methods
  • Non-vulnerable Kubernetes version — clusters must run a Kubernetes version without known vulnerabilities
  • Encrypted node disks — node temp disks and cache must be encrypted at host

Teams can request policy exclusions for specific workloads when justified. Exclusions are configured per-environment and managed through pull requests to the policy repository.

Runtime threat detection with Falco

GAP uses the Falco runtime security engine to detect potentially malicious events in Kubernetes clusters. Falco monitors for attempts to exploit common attack vectors in Kubernetes and in running container images. When a suspicious event is detected, alerts are created for investigation.

Application security defaults

All applications in GAP are deployed through the Gappynator controller — a custom Kubernetes operator that enforces security defaults on every application. When a team defines an Application resource, Gappynator automatically provisions the deployment with hardened settings that satisfy the Gatekeeper policies by default.

Pod security context

The following settings are enforced on every container:

  • Non-root user — containers run as UID 10001, never as root
  • Read-only root filesystem — the root filesystem is mounted read-only, preventing runtime tampering
  • No privilege escalation — allowPrivilegeEscalation is set to false
  • All capabilities dropped — Linux capabilities are dropped entirely (drop: ["ALL"])
  • Seccomp profile — the RuntimeDefault seccomp profile is applied, filtering dangerous system calls
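Taken together, these defaults correspond to a container-level securityContext like this:

```yaml
securityContext:
  runAsNonRoot: true
  runAsUser: 10001
  readOnlyRootFilesystem: true
  allowPrivilegeEscalation: false
  capabilities:
    drop: ["ALL"]
  seccompProfile:
    type: RuntimeDefault
```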

Resource limits

Every container receives default CPU and memory requests and limits, preventing resource exhaustion and ensuring fair scheduling across the cluster.

Service account token hardening

Service account tokens are not automatically mounted into pods. Applications that need a token must explicitly opt in, reducing the risk of credential exposure if a container is compromised.
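In the pod spec, this default is simply:

```yaml
spec:
  automountServiceAccountToken: false  # applications must opt in explicitly
```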

Deployment resilience

  • Rolling updates — deployments use a rolling update strategy to ensure zero-downtime releases
  • Topology spread — pods are distributed across both nodes and availability zones, limiting the impact of infrastructure failures
  • Pod disruption budgets — automatically created to prevent accidental eviction of running pods during cluster maintenance
  • Horizontal auto-scaling — applications default to a minimum of 2 replicas with auto-scaling based on CPU utilisation
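Topology spread and disruption budgets can be sketched as standard Kubernetes objects — names and thresholds are illustrative:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app
spec:
  minAvailable: 1          # keep at least one replica during maintenance
  selector:
    matchLabels:
      app: my-app
---
# Deployment pod-template fragment spreading replicas across zones
topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: ScheduleAnyway
    labelSelector:
      matchLabels:
        app: my-app
```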

Namespace isolation and RBAC

Every team in GAP receives dedicated namespaces in Kubernetes, providing logical isolation for their applications. Access is controlled through a tiered RBAC model where all roles are bound to Entra ID groups — there are no local Kubernetes users.

Cluster-level roles:

  • Cluster admin — full cluster access, restricted to the platform team and available only through Just-in-Time (JIT) access elevation
  • Cluster viewer — read-only access across all namespaces for monitoring and troubleshooting
  • Team cluster access — allows team members to list namespaces and custom resource definitions, providing visibility into the cluster structure
  • Security operator — grants access to vulnerability reports and pod-level operations for security incident response. Available only through JIT access elevation

Namespace-level roles (per team):

  • gap-view — read-only access within the team's namespace, including pods, deployments, logs, network policies, and security reports. This is the default access level
  • gap-edit — write access to workloads, secrets, deployments, network policies, and other namespace resources. Available through JIT access elevation when needed

Each namespace is also provisioned with resource quotas (CPU and memory limits) and limit ranges (per-container defaults) to prevent resource exhaustion and ensure fair distribution across teams.
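A quota and limit range pair for a team namespace might look like this — the values are illustrative, not the platform defaults:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota
spec:
  hard:
    requests.cpu: "20"
    requests.memory: 64Gi
    limits.cpu: "40"
    limits.memory: 128Gi
---
apiVersion: v1
kind: LimitRange
metadata:
  name: team-limits
spec:
  limits:
    - type: Container
      defaultRequest:      # applied when a container requests nothing
        cpu: 100m
        memory: 128Mi
      default:             # applied when a container sets no limits
        cpu: 500m
        memory: 512Mi
```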

Zero Trust networking

GAP enforces a Zero Trust networking model using Cilium as the network data plane and policy engine. Cilium uses eBPF for high-performance, kernel-level network filtering and observability.

Default Network Policies are deployed to every namespace with a deny-all baseline, defined in the team-namespace configuration. Only the following traffic is permitted by default:

  • DNS resolution — egress to kube-dns for service discovery
  • External traffic — egress to the internet (private IP ranges are blocked by default)
  • Service mesh — communication with the Linkerd control plane
  • Ingress controllers — inbound traffic from the platform ingress layer
  • Monitoring — inbound scraping from Prometheus

All other ingress and egress traffic between applications must be explicitly opened on a case-by-case basis through network policy changes in the manifest repository.

Applications define their network access requirements through an accessPolicy in the Application resource. The Gappynator controller translates these high-level rules into Cilium network policies with support for FQDN-based egress filtering (e.g., *.redis.cache.windows.net), CIDR-based rules, and cross-namespace communication — all without requiring teams to write low-level policy YAML.
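As a sketch, an accessPolicy might read like this — the field names are illustrative of the pattern, not the exact Application schema:

```yaml
spec:
  accessPolicy:
    inbound:
      rules:
        - application: frontend-app          # another app allowed to call this one
    outbound:
      external:
        - host: "*.redis.cache.windows.net"  # FQDN-based egress
      rules:
        - application: billing-api
          namespace: other-team              # cross-namespace egress
```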

Service mesh with Linkerd

All namespaces are enrolled in the Linkerd service mesh, which provides:

  • Mutual TLS (mTLS) — all pod-to-pod traffic is automatically encrypted and authenticated. The default inbound policy is all-authenticated, meaning only traffic with a valid mTLS identity is accepted
  • Traffic observability — access logs and traffic metrics for all service-to-service communication

Audit logging

GAP streams Kubernetes audit logs to a centralised logging infrastructure for security monitoring and incident investigation:

  • API server audit logs — all requests to the Kubernetes API are logged, providing a complete record of who did what and when
  • Guard logs — Entra ID sign-in and access decisions are logged for identity-related investigations
  • Cluster autoscaler logs — scaling decisions are logged for operational visibility
  • Centralised aggregation — logs are streamed to Azure Event Hub and Log Analytics for long-term retention, alerting, and correlation with other security signals

Frontend deployment security

Frontend applications follow a separate deployment path from backend services. Instead of being containerised and deployed to Kubernetes, frontends are built as static assets and uploaded to a public CDN backed by Azure Blob Storage.

CDN upload pipeline

All frontend deployments use the cdn-upload-action — a standardised GitHub Actions composite action that enforces security controls before assets reach the CDN:

  • Gitleaks secret scanning — the source directory is scanned for hardcoded secrets before upload. If secrets are detected, the deployment is blocked
  • Trivy vulnerability scanning — dependencies are scanned for CRITICAL vulnerabilities. The upload is blocked if critical CVEs are found
  • SBOM generation — a CycloneDX Software Bill of Materials is generated and stored in a separate private storage account for supply chain tracking
  • Build provenance attestations — SLSA attestations are created for uploaded assets, providing a cryptographically signed record of what was uploaded, when, and by whom
  • Source map separation — source maps are excluded from the public CDN and uploaded to a private storage account used by Grafana Faro for error tracking, preventing source code exposure
  • Metadata tagging — every upload is tagged with the source repository, commit SHA, Git ref, triggering actor, and job ID for traceability
  • Cache control — entry point files (such as index.html and manifest.json) bypass the CDN cache entirely, while hashed assets are cached with long-lived headers

The action authenticates to Azure using OIDC — no long-lived storage keys or service principal secrets are stored in the CI pipeline.
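A deployment job using the action might be sketched like this — the organisation prefix, version ref, and input name are assumptions:

```yaml
permissions:
  contents: read
  id-token: write        # OIDC to Azure — no storage keys in the pipeline
  attestations: write    # SLSA provenance for the uploaded assets

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci && npm run build
      - uses: gjensidige/cdn-upload-action@v1  # hypothetical ref
        with:
          source-directory: dist               # hypothetical input name
```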

CDN infrastructure

The public CDN storage is defined as code using Terraform. Each team receives a dedicated storage container with isolated RBAC:

  • Per-team container isolation — each team's assets are stored in a separate blob container. Write access is granted only to that team's Entra ID group via the Storage Blob Data Contributor role scoped to the individual container
  • OIDC credential provisioning — frontend repositories are registered in the credential configuration and receive OIDC federated credentials scoped to the CDN storage account. The same credential module used for infrastructure deployments provisions these identities
  • HTTPS only — HTTP requests to the CDN endpoint are blocked. The custom domain uses a TLS certificate managed in Azure Key Vault
  • Metadata header stripping — Azure internal headers (blob type, lease status, and upload metadata) are stripped from CDN responses to prevent information leakage

Terraform deployment security

All infrastructure in GAP is provisioned through Terraform, following a GitOps model where infrastructure state is declared in code, reviewed through pull requests, and applied through centralised reusable workflows.

Reusable Terraform workflows

All Terraform deployments in GAP use centralised reusable workflows that enforce a secure deployment pipeline:

  • OIDC authentication — workflows authenticate to Azure using OIDC federated credentials. No long-lived Azure secrets are stored in GitHub
  • Plan/apply separation — every deployment generates a Terraform plan that must be reviewed before apply. The apply job only runs if changes are detected and approval conditions are met
  • Environment approval gates — production applies can require manual approval through GitHub environment protection rules, providing a human checkpoint before infrastructure changes
  • Encrypted plan files — Terraform plan files are encrypted with AES-256 before being stored as workflow artifacts. Encryption keys are retrieved from Azure Key Vault at runtime, ensuring plan contents are never exposed in plaintext
  • IaC vulnerability scanning — Trivy scans Terraform code for infrastructure misconfigurations (e.g., unencrypted storage, overly permissive security groups) and reports results to the GitHub Security tab
  • Concurrency control — concurrent applies to the same infrastructure are prevented, avoiding state corruption and race conditions
  • Deployment tracking — all applies are registered to a central audit system, recording the environment, team, timestamp, and actor
  • Minimal workflow permissions — workflows run with read-only permissions, only requesting id-token: write for OIDC authentication
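A caller workflow might look like this sketch — the repository path and inputs are assumptions; only the reusable workflow filename is taken from the credential configuration:

```yaml
name: terraform
on:
  pull_request:          # plan only
  push:
    branches: [main]     # plan, then gated apply

permissions:
  contents: read
  id-token: write        # OIDC federation with Azure

jobs:
  infrastructure:
    uses: gjensidige/workflows/.github/workflows/terraform-plan-and-apply-azure.yml@main  # hypothetical path
    with:
      environment: prod  # hypothetical input
```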

Terraform credential provisioning

Each repository that deploys infrastructure receives its own Entra ID Service Principal with OIDC federated credentials, managed centrally through Terraform:

  • Per-repository identities — every repository gets a dedicated Service Principal, ensuring access can be audited and revoked independently
  • OIDC subject filtering — federated credentials are restricted by environment, Git ref, and even specific reusable workflows. A credential scoped to environment:prod and job_workflow_ref:terraform-plan-and-apply-azure.yml cannot be used from any other context
  • Least-privilege role assignments — each Service Principal is assigned only the Azure roles needed for that repository's infrastructure, scoped to specific resources or resource groups
  • No static secrets — credentials are never stored or rotated manually. Each workflow run exchanges a short-lived GitHub OIDC token for a temporary Azure access token that expires after 1 hour

For accessing private Terraform modules hosted in GitHub, workflows use short-lived GitHub tokens rather than personal access tokens.

Credential management

Access across GAP follows the Principle of Least Privilege: developers receive read access by default and can elevate their permissions through JIT access when needed. All credentials in the platform are short-lived and scoped to the minimum required permissions.

Short-lived credentials

Credentials are needed at every step of the deployment process — from pushing container images to dispatching workflows to accessing Azure resources. GAP eliminates long-lived secrets from the pipeline entirely:

  • Azure Container Registry — GitHub Actions authenticate using OIDC federated credentials. No static Azure credentials are stored in GitHub
  • Cross-repository access — workflows that need to access other repositories use short-lived GitHub tokens scoped to the minimum required repositories and permissions
  • Azure resources at runtime — applications authenticate using Workload Identity (see below), exchanging short-lived Kubernetes tokens for Azure access tokens

This approach eliminates the need to manage and rotate long-lived credentials, removing the risk of credential leakage.

Short-lived GitHub tokens

GitHub Actions workflows across Gjensidige use short-lived GitHub App installation tokens for cross-repository access and GitHub API operations. These tokens replace static personal access tokens entirely:

  • Short-lived — each token is valid for 1 hour and automatically rotated every 30 minutes
  • Repository-scoped — tokens only grant access to explicitly listed repositories, never the entire organisation
  • Permission-scoped — each token carries only the permissions required for its use case (e.g., contents: write for manifest updates, pull_requests: write for automated PRs)
  • Centrally managed — all token configurations are defined as code in a central repository, providing a single audit point for all cross-repository credentials

Sensitive permissions (such as workflows, secrets, or administration) require explicit allowlisting per repository, preventing accidental over-provisioning. Pre-built token templates are available for common use cases such as Kubernetes manifest updates, data registry access, and Terraform module downloads.

Workload Identity

Applications in GAP connect to Azure resources using Workload Identity — a mechanism that federates Kubernetes service accounts with Azure managed identities using OIDC. This eliminates the need for static credentials, connection strings, or secrets in application configuration.

How it works

  1. When an Application resource is created, the Gappynator controller automatically provisions an Azure managed identity for that application — ensuring each application has its own unique identity
  2. Gappynator creates a federated credential that links the application's Kubernetes service account to its Azure identity through the cluster's OIDC issuer
  3. Gappynator creates a Kubernetes service account annotated with the Azure client ID
  4. At runtime, the application exchanges a short-lived Kubernetes token for an Azure access token — no secrets are involved

The identity is also used to authenticate with Azure Key Vault. Gappynator automatically creates a SecretProviderClass that mounts Key Vault secrets directly into pods using the application's own identity, with automatic rotation and pod restart when secrets change.
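The provisioning in steps 1–3 amounts to objects like the following — the names and client ID are placeholders:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-app
  namespace: my-team
  annotations:
    azure.workload.identity/client-id: 00000000-0000-0000-0000-000000000000
---
# Pod template fragment: this label opts the pod into token projection
metadata:
  labels:
    azure.workload.identity/use: "true"
```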

Centrally managed access

While Gappynator handles identity creation, the permissions each identity receives are managed as code in a central Terraform configuration and applied through pull requests. This provides:

  • Namespace-level defaults — every pod in a namespace inherits a baseline set of permissions (e.g., read access to the team's shared Key Vault)
  • App-specific overrides — individual applications can be granted additional, scoped permissions for the specific resources they need (e.g., contributor access to a specific database)
  • Audit trail — every access change is reviewed and recorded in Git history