
keactrl binary please #1

Open
vavallee opened this issue Oct 7, 2022 · 11 comments
@vavallee
Contributor

vavallee commented Oct 7, 2022

Hi!
First of all, thank you for all the work you did on this. You saved me hours of work. I am running two of your Docker containers in Kubernetes and it's excellent.
I am looking to add liveness and readiness probes and realized that the keactrl binary is not included in your Docker image. Could you add it?

Now to get stork up and running.

@JonasAlfredsson
Owner

Hi vavallee,

Glad you found the containers useful :)
I have actually not yet been able to do a "proper" deployment of the containers myself, so if you run into any issues I am very interested in hearing about them!

In regards to keactrl, I am not entirely sure what you mean. I have a separate container with the kea-ctrl-agent, which you can set up like this, and then communicate with the DHCP services as described in this section.

If I am missing the use case for including the binary inside the same container as the DHCP binary, you will need to explain how and why that is needed.

@JonasAlfredsson
Owner

Did the kea-ctrl-agent container solve your problem or is there something else we need to do?

@courtland

courtland commented Oct 17, 2022

Perhaps he is trying to use an exec livenessProbe (https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-liveness-command) where keactrl must exist within the container being probed?

I haven't gotten to this yet, but I am also using the images with great success so far. I wonder what the best way is to do a liveness/health check for the individual containers? The kea-ctrl-agent container is good for interacting with an entire Kea cluster, but I don't know whether workload orchestrators (Docker/k8s/etc.) support probing another URL/container as a health check. There's probably a simpler way?
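For reference, a Kubernetes exec probe runs its command inside the very container being probed, so whatever check ends up being used has to exist in that image. A minimal sketch of the wiring (the `pidof kea-dhcp4` command is only an assumption about what the image contains, purely for illustration):

```yaml
livenessProbe:
  exec:
    # Illustrative command only: replace with whatever check the image
    # actually supports (keactrl, a PID-file test, a socat call, ...).
    command: ["/bin/sh", "-c", "pidof kea-dhcp4"]
  initialDelaySeconds: 10
  periodSeconds: 30
```

The fields follow the standard Kubernetes probe schema; the timing values are arbitrary examples.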

@JonasAlfredsson
Owner

JonasAlfredsson commented Oct 17, 2022

Interesting problem.
I am not sure of the best way here, since I have not even thought about deploying this in a Kubernetes setting, so I apologize in advance for suggestions that might not even be applicable.

My first thought is that I am unsure if just including the keactrl binary in the image would help, since you would then need two services running side by side, which is not optimal (and not something I have designed for).

My second thought is that it is possible to monitor the health of the pod: make the pod contain both the kea-dhcp and the kea-ctrl-agent containers, and have it all be restarted in case the ctrl container doesn't answer as you expect?

The third thought is whether we can do something with this comment from the documentation:

> During startup, the server attempts to create a PID file of the form: [runstatedir]/kea/[conf name].kea-dhcp4.pid, where:
> runstatedir: The value as passed into the build configure script; it defaults to /usr/local/var/run. Note that this value may be overridden at runtime by setting the environment variable KEA_PIDFILE_DIR, although this is intended primarily for testing purposes.

Would it perhaps be enough to check if this file exists?
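If that turns out to be reliable, the probe command could be as small as a file-existence check. A sketch, assuming the default runstatedir from the quote applies (the `kea_alive` helper name is made up for illustration):

```shell
# Liveness sketch: exit 0 only if a kea-dhcp4 PID file exists.
# The default directory is an assumption based on the documentation quote;
# KEA_PIDFILE_DIR overrides it, matching the documented environment variable.
kea_alive() {
    dir="${KEA_PIDFILE_DIR:-/usr/local/var/run/kea}"
    # The glob stays unexpanded when nothing matches, so ls fails then.
    ls "$dir"/*.kea-dhcp4.pid >/dev/null 2>&1
}
```

The main weakness of this approach is that a PID file only proves the process started; a stale file left behind by a crash would still pass the check.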

@JonasAlfredsson
Owner

If you are also trying to deploy to k8s I would like to ask you to look at this: #3
And if possible I would really like feedback and improvement suggestions to this, since it is not something I have the ability to test at the moment.

@vavallee
Contributor Author

vavallee commented Nov 1, 2022

Yes, this is for the readiness/liveness probes. I am just looking for something to ensure that the pod is still responsive, nothing super fancy like a smoke test. Open to other suggestions.

@JonasAlfredsson
Owner

Alright, and the suggestion of putting both the dhcp and the ctrl-agent containers inside the same pod, and using the ctrl-agent one as the health indicator for the entire pod, is not viable (I actually don't know if this is possible)?

If you define a socket on the dhcp service you can use socat to communicate with it (the ctrl-agent only exposes this as an HTTP interface), though that is a more advanced solution. But if you don't want to enable such communication, I don't know if Kea has any good "are you alive" indicators (except that it is PID 1 in the container, so if it exits the container dies).

If you need a less socket-dependent way of checking liveness, I think it might be time to open an issue on the official Kea repo: https://gitlab.isc.org/isc-projects/kea/
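To make the socat route concrete, a probe along those lines could look like the sketch below. The socket path is purely an assumption and has to match the "control-socket" entry in the kea-dhcp4 configuration; `status-get` is one of Kea's documented management commands; the `kea_socket_probe` helper name is made up:

```shell
# Hypothetical probe: send status-get over the DHCP4 UNIX control socket.
# Exits non-zero if socat cannot connect, so it can double as a health check.
kea_socket_probe() {
    sock="${1:-/kea/sockets/kea-dhcp4.socket}"  # assumed path, adjust to your config
    printf '%s' '{ "command": "status-get" }' | socat - "UNIX-CONNECT:$sock"
}
```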

@JonasAlfredsson
Owner

OK, I am going to poke this issue again.

A suggestion I have is to include socat in the base image, so it is possible to send commands directly to the exposed socket of the target service. I think this is a cleaner solution than starting and running the ctrl-agent inside the same container.

But depending on how you set up your pod deployment, it could be of interest to run the ctrl-agent as a sidecar container to the main process and have the liveness probe go through it.

@drizzd

drizzd commented Nov 18, 2023

I don't think you can run vanilla Kubernetes liveness probes in a sidecar. The liveness probe is a command executed in the same container with docker exec. For REST services it is usually curl or wget. If the dhcp container exposes an HTTP endpoint, adding curl to the image may suffice.

Using keactrl instead of curl could make sense if we want the health check to probe deeper on application level, or if the HTTP endpoints change more frequently than the keactrl CLI.
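For the curl variant, the check would go through the ctrl-agent's HTTP interface rather than keactrl. A sketch (the address and port are assumptions that depend on how the ctrl-agent is configured; `status-get` and the `service` field come from Kea's management API; the `kea_http_probe` helper name is made up):

```shell
# Hypothetical exec-probe body: ask the ctrl-agent for the dhcp4 status.
# curl -f makes HTTP errors produce a non-zero exit, as a probe needs.
kea_http_probe() {
    url="${1:-http://localhost:8000/}"  # assumed kea-ctrl-agent address
    curl -sf -X POST -H 'Content-Type: application/json' \
        -d '{ "command": "status-get", "service": [ "dhcp4" ] }' "$url"
}
```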

@tbotnz

tbotnz commented Feb 7, 2024

@vavallee when you deployed this with k8s, did you need to initialize the DB schema using kea admin?

@vavallee
Contributor Author

> @vavallee when you deployed this with k8s, did you need to initialize the DB schema using kea admin?

Not that I remember, though this was ages ago.
