flexibility in using Helm chart: args, env, probes & better security and namespaces usage #126

Merged
merged 5 commits into bakito:main on Mar 27, 2023

Conversation

laimison
Contributor

Hi,

I have added more ways to use the chart. They are all optional and don't change how the templates are rendered today, with one exception: the block below is added to the Deployment by default, which improves security:

          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
              - ALL
            privileged: false
            runAsGroup: 1000
            runAsUser: 1000

The missing probe parameters have also been added.


Let me know if the PR looks good or needs adjustments before it can be agreed on, e.g. not forcing the stricter security defaults in values.yaml at deployment.securityContext.

I have tested this on a cluster.

I would like to add these improvements so that I can use this chart and depend on it, and I hope the added flexibility helps other people too. It would be great to have a new version of the Helm chart released.

Let me know your thoughts and anything I should be aware of.

Thanks

@bakito
Owner

bakito commented Mar 21, 2023

@laimison thank you very much for the PR.

I agree with your request for more flexibility, but I have a simpler approach than mapping every single property:

for example, defining the probes and the security context in the values and importing the whole YAML into the deployment.
This gives even more flexibility than mapping each value separately:

          {{- with .Values.deployment.livenessProbe }}
          livenessProbe:
            {{- toYaml . | nindent 12 }}
          {{- end }}
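With this pattern the whole probe map lives in the values and toYaml renders it verbatim into the Deployment, so new probe fields never require template changes. The values side could then look roughly like this (a sketch; the concrete defaults are up to the chart):

    deployment:
      livenessProbe:
        httpGet:
          path: /_health
          port: http
        failureThreshold: 3
        periodSeconds: 5
        successThreshold: 1
        timeoutSeconds: 10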

@@ -1,3 +1,4 @@
{{ if .Values.deployment.create }}
Owner

why would you not want to create the deployment when you install this chart?

Contributor Author

Thanks for the suggestions, these are great. I will amend the PR once I have a moment.

As for this item, I only added it in case other people find something not flexible enough, like I did. They could then use their own Deployment and submit a PR here, so there is no gap while waiting for the chart to offer what they need.
If you want, I can exclude this change; by default the Deployment is created based on values.yaml.

Owner

I'd rather have this condition removed, as I do not see the use case.

But anyway thank you very much for your initiative. I'm looking forward to the update.

Don't forget to run make docs before pushing so the README gets updated.

Contributor Author

Changes have been submitted and the PR is ready for review. Please check whether you want to keep limit-rpm (a per-IP request limit on the Ingress) and whether the version increase is okay. Nothing else to add.

    ports:
      - name: http
        containerPort: 8080
        protocol: TCP
    livenessProbe:
    readinessProbe:
      failureThreshold: {{ .Values.deployment.readinessProbe.failureThreshold }}
Owner

I prefer having the whole probe config in the values and including the whole YAML, as mentioned in the comment above.

    httpGet:
      path: /_health
      port: http
    readinessProbe:
    livenessProbe:
      failureThreshold: {{ .Values.deployment.livenessProbe.failureThreshold }}
Owner

same here

    httpGet:
      path: /_health
      port: http
    {{- if .Values.deployment.securityContext }}
    securityContext:
      {{- if .Values.deployment.securityContext.allowPrivilegeEscalation }}
Owner

and here

    # - path: /ssw(/|$)(.*)
    #   pathType: Prefix
    hosts:
      - host: example.internal
Owner

I agree with changing the example, but would not set a default host in the values.

    {{- end }}
    {{- if .Values.deployment.env }}
    env:
      {{- if .Values.deployment.env.SEALEDSECRETSCONTROLLERNAMESPACE }}
Owner

Use more readable camelCase keys for the env variables in the values; only the variable name itself needs to be uppercase, not the Helm values key.
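For illustration, that could look roughly like this (the camelCase key and the uppercase variable name below are assumptions for the example, not necessarily the chart's final names):

    # values.yaml (sketch)
    deployment:
      env:
        sealedSecretsControllerNamespace: sealed-secrets

    # templates/deployment.yaml (sketch)
          env:
            {{- with .Values.deployment.env.sealedSecretsControllerNamespace }}
            - name: SEALED_SECRETS_CONTROLLER_NAMESPACE  # assumed name; use whatever the application expects
              value: {{ . | quote }}
            {{- end }}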

…metic changes & limit-rpm added to an ingress
    # nginx.ingress.kubernetes.io/ssl-redirect: 'true'
    annotations:
      # -- Specifies number of requests accepted from a given IP each minute
      nginx.ingress.kubernetes.io/limit-rpm: "180"
Owner (@bakito, Mar 24, 2023)

Could you leave this empty by default?
You can keep the rpm annotation commented out.

Contributor Author

addressed, thanks

      - ALL
    privileged: false
    runAsGroup: 1000
    runAsUser: 1000
Owner

The Dockerfile runs with user 1001: https://github.com/bakito/sealed-secrets-web/blob/main/Dockerfile#L32
Can you change this to the same value as well?
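For reference, the securityContext block from the PR description with the adjusted IDs would then be (a sketch of the intended default, not the merged file):

    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
          - ALL
      privileged: false
      runAsGroup: 1001
      runAsUser: 1001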

Contributor Author

addressed, thanks

failureThreshold: 3
successThreshold: 1
timeoutSeconds: 10
initialDelaySeconds: 15
Owner

same here

periodSeconds: 5
successThreshold: 1
timeoutSeconds: 10
initialDelaySeconds: 30
Owner

can you remove this part?

    failureThreshold: 3
    periodSeconds: 5
    successThreshold: 1
    timeoutSeconds: 10
    initialDelaySeconds: 30

30 seconds is much too high for this application, as it starts fast.

The Kubernetes defaults should be enough:
https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes

If someone needs it changed, it can be overridden in the values.
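Such an override is just a values entry, e.g. (a sketch; assuming the chart's default values already define the probe's httpGet handler, Helm merges this over the defaults, so only the changed key needs to be set):

    deployment:
      livenessProbe:
        initialDelaySeconds: 30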

Copy link
Contributor Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Addressed, and the PR is ready for review now. Many thanks.

bakito merged commit 6d3a6cc into bakito:main on Mar 27, 2023
@bakito
Owner

bakito commented Mar 27, 2023

@laimison thank you very much for your contribution

@laimison
Contributor Author

@bakito thanks for your time and the merge. Let me know when/if there are plans to release a new version of this chart, so it will be listed by the command helm search repo bakito/sealed-secrets-web. Cheers.

@bakito
Owner

bakito commented Mar 28, 2023

@laimison
Contributor Author

@bakito perfect, thank you!
