A patch policy for all artifacts (e.g. components in images) is defined. It answers, among other things: how often is an image rebuilt?
Risk: Vulnerabilities in running artifacts remain unpatched for long periods and might get exploited.
Fast patching of third-party components is needed. The DevOps way is to have an automated pull request for new component versions. This includes:
* Applications
* Virtualized operating system components (e.g. container images)
* Operating systems
* Infrastructure as Code/GitOps (e.g. Argo CD based on a git repository, or Terraform)
Risk: Components with known (or unknown) vulnerabilities might remain in use for a long time and get exploited, even when a patch is available.
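The core of such automation is a comparison of pinned component versions against the latest available ones; every outdated entry becomes a pull request. A minimal sketch of that check, with made-up component names and versions:

```python
# Sketch of the version check behind an automated dependency-update PR.
# Component names, versions and the input dictionaries are illustrative.

def find_outdated(pinned: dict[str, str],
                  latest: dict[str, str]) -> dict[str, tuple[str, str]]:
    """Return {component: (pinned, latest)} for every outdated component."""
    outdated = {}
    for name, current in pinned.items():
        newest = latest.get(name, current)
        # naive numeric comparison of dotted version strings
        if tuple(map(int, current.split("."))) < tuple(map(int, newest.split("."))):
            outdated[name] = (current, newest)
    return outdated

pinned = {"spring-boot": "3.2.1", "alpine": "3.18.0"}
latest = {"spring-boot": "3.2.4", "alpine": "3.19.1"}
print(find_outdated(pinned, latest))
```

In practice tools like Renovate or Dependabot perform this check against registries and open the pull requests for you; the sketch only illustrates the decision they automate.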
Automatically created PRs for outdated dependencies are merged automatically. A good practice is to merge trusted dependencies (e.g. Spring Boot) after a grace period such as one week. Often, patches, fixes and minor updates are merged automatically. Be aware that automated merging requires high automated test coverage. Merging of pull requests is enforced after the grace period.
Risk: Vulnerabilities in running artifacts remain for too long and might get exploited.
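The auto-merge decision described above can be sketched as a single predicate. The trust list, grace period and update types below are assumptions for illustration, not fixed recommendations:

```python
from datetime import datetime, timedelta

TRUSTED = {"spring-boot", "jackson-databind"}   # assumed trust list
GRACE_PERIOD = timedelta(days=7)                # example grace period
AUTO_MERGE_TYPES = {"patch", "minor"}           # majors stay manual

def may_auto_merge(dependency: str, update_type: str,
                   pr_opened: datetime, now: datetime,
                   tests_green: bool) -> bool:
    """Merge automatically only for trusted dependencies, small updates,
    a passing test suite, and an elapsed grace period."""
    return (dependency in TRUSTED
            and update_type in AUTO_MERGE_TYPES
            and tests_green
            and now - pr_opened >= GRACE_PERIOD)
```

Tools such as Renovate offer this behavior natively (automerge plus a minimum release age); the function only makes the conditions explicit.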
A base image is a pre-built image that serves as a starting point for building new images or containers. These base images usually include an operating system, necessary dependencies, libraries, and other components that are required to run a specific application or service. Nightly builds of custom base images refer to an automated process that runs daily or on a schedule, usually during nighttime or off-peak hours, to create updated versions of these images.
Risk: Vulnerabilities in running containers remain for too long and might get exploited.
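The scheduling logic behind a nightly rebuild is simple: rebuild once the last build is older than the cadence, and stamp each build with a unique, date-based tag. A small sketch, with the 24-hour interval and image name as assumptions:

```python
from datetime import datetime, timedelta

REBUILD_INTERVAL = timedelta(hours=24)  # nightly cadence (assumption)

def rebuild_due(last_build: datetime, now: datetime) -> bool:
    """True when the scheduled rebuild of the custom base image is due."""
    return now - last_build >= REBUILD_INTERVAL

def nightly_tag(image: str, now: datetime) -> str:
    """Date-stamped tag so every nightly build is uniquely addressable."""
    return f"{image}:{now:%Y-%m-%d}"
```

In a real pipeline the schedule lives in CI (e.g. a cron trigger) rather than in application code; date-stamped tags make it easy to roll back to a known-good base image.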
Distroless images are minimal, stripped-down base images that contain only the essential components required to run your application. They do not include package managers, shells, or any other tools that are commonly found in standard Linux distributions. Using distroless images can help reduce the attack surface and overall size of your container images.
Risk: Components, dependencies, files, or file access rights might have vulnerabilities even though they are not needed.
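A common pattern is a multi-stage build: compile in a full-featured image, then copy only the binary into a distroless runtime image. A sketch of such a Dockerfile, where the Go version and the application path are assumptions; the distroless image names follow Google's distroless project:

```dockerfile
# Build stage: full toolchain, never shipped to production
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app ./cmd/server   # hypothetical entry point

# Runtime stage: no shell, no package manager, non-root by default
FROM gcr.io/distroless/static-debian12:nonroot
COPY --from=build /app /app
USER nonroot
ENTRYPOINT ["/app"]
```

Note that debugging distroless containers requires different techniques (e.g. ephemeral debug containers), since there is no shell to exec into.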
The maximum lifetime for a Docker container refers to the duration a container should be allowed to run before it is considered outdated, stale, or insecure. There is not a fixed, universally applicable maximum lifetime for a Docker container, as it varies depending on the specific use case, application requirements, and security needs. As a best practice, it is essential to define a reasonable maximum lifetime for containers to ensure that you consistently deploy the most recent, patched, and secure versions of both your custom base images and third-party images.
Risk: Vulnerabilities in the images of running containers remain for too long and might get exploited. Long-running containers can accumulate memory leaks. Restarting a container can also evict an attacker from a compromised container (e.g. in case the attacker has not reached the persistence layer).
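Enforcing such a policy boils down to comparing a container's start time against the allowed maximum and recycling it when the limit is exceeded. A sketch, with the seven-day limit as an example value rather than a recommendation:

```python
from datetime import datetime, timedelta, timezone

MAX_LIFETIME = timedelta(days=7)   # example policy, not a universal value

def should_recycle(started_at: datetime, now: datetime) -> bool:
    """True when a container has run past the allowed lifetime and should
    be stopped and recreated from a freshly built, patched image."""
    return now - started_at > MAX_LIFETIME
```

In orchestrated environments this is typically enforced declaratively (e.g. by redeploying on every image rebuild) rather than by a custom script; the function only makes the policy explicit.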
Automatically created PRs for outdated dependencies are merged, and the resulting artifacts are deployed.
Risk: Even if automatically created dependency PRs are merged, the resulting artifacts might not be deployed. As a result, vulnerabilities in running artifacts remain for too long and might get exploited.