Software Development Kit
When you (a ChallMaker) want to deploy a single container per source, you do not want to learn how to deploy it to a specific provider: your technical expertise does not make you a Cloud expert, and it should not have to. Writing a 500-line scenario that fits the API only to deploy a container is a tedious job you do not want to do more than once: create the Deployment, the Service, possibly the Ingress, and handle configuration and Secrets…
For this reason, we built a Software Development Kit to ease your use of chall-manager. It exposes all the features of chall-manager without burdening you with API compliance.
Additionally, we prepared factories for common use cases to help you focus on your CTF, not the infrastructure:
The community is free to create new pre-made recipes, and we welcome contributions to add new official ones. Please open an issue as a Request For Comments, and a Pull Request if possible to propose an implementation.
Build scenarios
Fitting the chall-manager scenario API implies handling inputs and outputs.
Although this is not complex, it still requires work, and new functionalities or evolutions (offline compatibility with an OCI registry, pre-configured providers, etc.) do not guarantee easy maintenance.
Indeed, if you are dealing with a chall-manager deployed in a Kubernetes cluster, the `...pulumi.ResourceOption`
contains a pre-configured provider, so every Kubernetes resource the scenario creates is deployed in the proper namespace.
Inputs
Those are fetchable from the Pulumi configuration.
| Name | Required | Description |
|---|---|---|
| `identity` | ✅ | the identity of the Challenge on Demand request |
Outputs
Those should be exported from the Pulumi context.
| Name | Required | Description |
|---|---|---|
| `connection_info` | ✅ | the connection information, as a string (e.g. `curl http://a4...d6.my-ctf.lan`) |
| `flag` | ❌ | the identity-specific flag the CTF platform should only validate for the given source |
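For illustration, a raw scenario that does not use the SDK could fetch the input and export the outputs directly with Pulumi. This is a minimal sketch, not an official recipe: the `my-ctf.lan` hostname and the flag format are made up, and the actual resources to deploy are elided.

```go
package main

import (
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi/config"
)

func main() {
	pulumi.Run(func(ctx *pulumi.Context) error {
		// Input: chall-manager passes the identity through the Pulumi configuration.
		conf := config.New(ctx, "")
		identity := conf.Require("identity")

		// ... deploy the identity-specific resources here ...

		// Outputs: export the connection information (required) and the flag (optional).
		ctx.Export("connection_info", pulumi.Sprintf("curl http://%s.my-ctf.lan", identity))
		ctx.Export("flag", pulumi.Sprintf("FLAG{%s}", identity)) // hypothetical flag format
		return nil
	})
}
```

The SDK's `sdk.Run` wraps exactly this boilerplate so you never have to write it yourself.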
Kubernetes ExposedMonopod
When you want to deploy a challenge composed of a single container on a Kubernetes cluster, you want it to be fast and easy.
The Kubernetes ExposedMonopod
fits this need: you configure the container you are looking for and deploy it to production within seconds.
The following shows how easy it is to write a scenario that creates a Deployment with a single replica of a container, exposes a port through a Service, builds the identity-specific Ingress, and finally provides the connection information as a `curl`
command.
main.go

```go
package main

import (
	"github.com/ctfer-io/chall-manager/sdk"
	"github.com/ctfer-io/chall-manager/sdk/kubernetes"

	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
)

func main() {
	sdk.Run(func(req *sdk.Request, resp *sdk.Response, opts ...pulumi.ResourceOption) error {
		// Deploy the container, expose its port, and build the identity-specific ingress.
		cm, err := kubernetes.NewExposedMonopod(req.Ctx, &kubernetes.ExposedMonopodArgs{
			Image:      "myprofile/my-challenge:latest",
			Port:       8080,
			ExposeType: kubernetes.ExposeIngress,
			Hostname:   "brefctf.ctfer.io",
			Identity:   req.Config.Identity,
		}, opts...)
		if err != nil {
			return err
		}

		// Provide the connection information the players will use.
		resp.ConnectionInfo = pulumi.Sprintf("curl -v https://%s", cm.URL)
		return nil
	})
}
```
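With `ExposeIngress`, the URL is identity-specific: as in the `connection_info` example above, the identity prefixes the configured hostname. The following standalone sketch mirrors the `pulumi.Sprintf` call on a plain string, with a made-up identity, to show the shape of the exported value:

```go
package main

import "fmt"

// connectionInfo mirrors the pulumi.Sprintf call of the scenario above,
// applied to a plain string instead of a Pulumi output.
func connectionInfo(url string) string {
	return fmt.Sprintf("curl -v https://%s", url)
}

func main() {
	// "a4b6c6d6" is a made-up identity; the actual URL is computed by the ExposedMonopod.
	fmt.Println(connectionInfo("a4b6c6d6.brefctf.ctfer.io"))
	// → curl -v https://a4b6c6d6.brefctf.ctfer.io
}
```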
Requirements
To use ingresses, make sure your Kubernetes cluster can deal with them: it has an ingress controller (e.g. Traefik), and DNS resolution points to the Kubernetes cluster.