ChallMaker Guides

A collection of guides made for ChallMakers.

1 - Create a Scenario

Create a Chall-Manager Scenario from scratch.

You are a ChallMaker, or simply curious? You want to understand how chall-manager can spin up challenge instances on demand? Then you are in the right place.

This tutorial is split into the following parts.

Design your Pulumi factory

We call a “Pulumi factory” Go code or a binary that fits the chall-manager scenario API. For details on this API, refer to the SDK documentation.

The requirements are:

  • have go installed.
  • have pulumi installed.

Create a directory and start working in it.

mkdir my-challenge
cd $_

go mod init my-challenge

First of all, you’ll configure your Pulumi factory. The example below is the minimal required configuration, but you can add more if necessary.

Pulumi.yaml

name: my-challenge
runtime: go
description: Some description that enables others to understand my challenge scenario.

Then create your entrypoint base.

main.go

package main

import (
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
)

func main() {
	pulumi.Run(func(ctx *pulumi.Context) error {
		// Scenario will go here

		return nil
	})
}

You will need to add github.com/pulumi/pulumi/sdk/v3 to your dependencies: execute go mod tidy.

Starting from here, you can get configurations, add your resources and use various providers.

For this tutorial, we will create a challenge that consumes the identity from the configuration and creates an Amazon S3 bucket. At the end, we will export the connection_info to match the SDK API.

main.go

package main

import (
    "github.com/pulumi/pulumi-aws/sdk/v6/go/aws/s3"
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi/config"
)

func main() {
	pulumi.Run(func(ctx *pulumi.Context) error {
		// 1. Load config
		cfg := config.New(ctx, "my-challenge")
		config := map[string]string{
			"identity": cfg.Get("identity"),
		}

		// 2. Create resources
		_, err := s3.NewBucketV2(ctx, "example", &s3.BucketV2Args{
			Bucket: pulumi.String(config["identity"]),
			Tags: pulumi.StringMap{
				"Name":     pulumi.String("My Challenge Bucket"),
				"Identity": pulumi.String(config["identity"]),
			},
		})
		if err != nil {
			return err
		}

		// 3. Export outputs
		// This is mockup connection info; please provide something meaningful and executable
		ctx.Export("connection_info", pulumi.String("..."))
		return nil
	})
}

Don’t forget to run go mod tidy to add the required Go modules. Additionally, make sure the chall-manager pods have access to your AWS configuration through environment variables, and add a provider configuration in your code if necessary.
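For instance, if the ambient environment variables are not enough, a minimal sketch with an explicit provider could look like the following (the region and resource names are illustrative assumptions, not part of this tutorial’s requirements).

package main

import (
	"github.com/pulumi/pulumi-aws/sdk/v6/go/aws"
	"github.com/pulumi/pulumi-aws/sdk/v6/go/aws/s3"
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
)

func main() {
	pulumi.Run(func(ctx *pulumi.Context) error {
		// Explicitly configure the AWS provider rather than relying only
		// on ambient credentials (the region is an example value).
		provider, err := aws.NewProvider(ctx, "aws", &aws.ProviderArgs{
			Region: pulumi.String("eu-west-3"),
		})
		if err != nil {
			return err
		}

		// Attach the provider to the resources it should manage.
		_, err = s3.NewBucketV2(ctx, "example", &s3.BucketV2Args{}, pulumi.Provider(provider))
		return err
	})
}

The same pulumi.Provider(...) resource option can be passed to the bucket created earlier in this tutorial.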

You can test it using the Pulumi CLI, for instance as follows. If needed, set the identity configuration first (e.g. pulumi config set my-challenge:identity test).

pulumi stack init # answer the questions
pulumi up         # preview and deploy

Make it ready for chall-manager

Now that your scenario is designed and coded according to your artistic direction, you have to prepare it for the chall-manager to receive it. Make sure to remove all unnecessary files, then zip the directory it is contained in.

cd ..
zip -r my-challenge.zip ./my-challenge/*

And you’re done. Yes, it was that easy :)

But it could be even easier using the SDK!

2 - Software Development Kit

Sometimes, you don’t need big things. The SDK makes sure you don’t need to be a DevOps.

When you (a ChallMaker) want to deploy a single container specific to each source, you don’t want to have to understand how to deploy it on a specific provider. In fact, your technical expertise does not imply you are a Cloud expert… and it is not expected to! Writing a 500-line scenario that fits the API only to deploy a container is a tedious job you don’t want to do more than once: create a deployment, the service, possibly the ingress, handle configuration and secrets…

For this reason, we built a Software Development Kit to ease your use of chall-manager. It contains all the features of chall-manager without burdening you with API compliance.

Additionally, we prepared some common use-case factories (such as the Kubernetes ExposedMonopod below) to help you focus on your CTF, not the infrastructure.

The community is free to create new pre-made recipes, and we welcome contributions to add new official ones. Please open an issue as a Request For Comments, and a Pull Request if possible to propose an implementation.

Build scenarios

Fitting the chall-manager scenario API implies inputs and outputs.

Despite not being complex, it still requires work, and new functionalities or evolutions do not guarantee easy maintenance: offline compatibility with an OCI registry, pre-configured providers, etc.

Indeed, if you are dealing with a chall-manager deployed in a Kubernetes cluster, the ...pulumi.ResourceOption contains a pre-configured provider such that every Kubernetes resource the scenario creates will be deployed in the proper namespace. A minimal skeleton is sketched after the tables below.

Inputs

Those are fetchable from the Pulumi configuration.

Name     | Required | Description
identity | ✓        | the identity of the Challenge on Demand request

Outputs

Those should be exported from the Pulumi context.

Name            | Required | Description
connection_info | ✓        | the connection information, as a string (e.g. curl http://a4...d6.my-ctf.lan)
flag            |          | the identity-specific flag the CTF platform should only validate for the given source
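
Putting these together, a minimal SDK scenario sketch could look like the following (the connection information and flag values are illustrative placeholders, and no real resources are created).

package main

import (
	"github.com/ctfer-io/chall-manager/sdk"
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
)

func main() {
	sdk.Run(func(req *sdk.Request, resp *sdk.Response, opts ...pulumi.ResourceOption) error {
		// The "identity" input is available through req.Config.Identity.
		// Create your resources here, forwarding opts... so that any
		// pre-configured provider (e.g. the Kubernetes namespace) applies.

		// Export the outputs expected by the scenario API.
		resp.ConnectionInfo = pulumi.Sprintf("curl http://%s.my-ctf.lan", req.Config.Identity)
		// Optional: export a flag (see the flag variation guide to make it identity-specific).
		resp.Flag = pulumi.String("BREFCTF{placeholder}").ToStringOutput()
		return nil
	})
}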

Kubernetes ExposedMonopod

When you want to deploy a challenge composed of a single container on a Kubernetes cluster, you want it to be fast and easy.

Then the Kubernetes ExposedMonopod fits your needs! You can easily configure the container you are looking for and deploy it to production within seconds. The following shows how easy it is to write a scenario that creates a Deployment with a single replica of a container, exposes a port through a Service, builds the Ingress specific to the identity, and finally provides the connection information as a curl command.

main.go

package main

import (
	"github.com/ctfer-io/chall-manager/sdk"
	"github.com/ctfer-io/chall-manager/sdk/kubernetes"
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
)

func main() {
	sdk.Run(func(req *sdk.Request, resp *sdk.Response, opts ...pulumi.ResourceOption) error {
		cm, err := kubernetes.NewExposedMonopod(req.Ctx, &kubernetes.ExposedMonopodArgs{
			Image:      "myprofile/my-challenge:latest",
			Port:       8080,
			ExposeType: kubernetes.ExposeIngress,
			Hostname:   "brefctf.ctfer.io",
			Identity:   req.Config.Identity,
		}, opts...)
		if err != nil {
			return err
		}

		resp.ConnectionInfo = pulumi.Sprintf("curl -v https://%s", cm.URL)
		return nil
	})
}

The Kubernetes ExposedMonopod architecture for deployed resources.

3 - Update in production

How to update a challenge scenario once it is in production (instances are deployed)?

So you have a challenge that made its way to production, but it contains a bug or an unexpected solve? Yes, we understand your pain: you would like to patch it but fear service interruptions… It is not a problem anymore!

A common workflow of a challenge fix happening in production.

We adopted the reasoning of The Update Framework to provide infrastructure update mechanisms with different properties.

What to do

You will have to update the scenario, of course. Once it is fixed and validated, archive the new version.

Then, you’ll have to pick up an Update Strategy.

Update Strategy | Require Robustness¹ | Time efficiency | Cost efficiency | Availability | TL;DR
Update in place |                     |                 |                 |              | Efficient in time & cost; requires high maturity
Blue-Green      |                     |                 |                 |              | Efficient in time; costly
Recreate        |                     |                 |                 |              | Efficient in cost; time consuming

¹ Robustness of both the provider and resources updates.

More information on the selection of those models and how they work internally is available in the design documentation.

You’ll only have to update the challenge, specifying the Update Strategy of your choice. Chall-Manager will temporarily block operations on this challenge and update all existing instances. This makes the process predictable and reproducible, so you can test it in a pre-production environment before production. It also avoids human errors during the fix and lowers the burden at scale.

4 - Use the flag variation engine

Use the flag variation engine to block shareflag, as a native feature of the Chall-Manager SDK.

Shareflag is considered by some as the worst part of competitions, leading to unfair events, while others consider it a strategy. We consider it a problem we can solve.

Context

In “standard” CTFs as we most often see them, it is impossible to solve this problem: if everyone has the same binary to reverse-engineer, how can you differentiate the flag for each team and thus avoid shareflag?

For this, you have to variate the flag for each source. One simple solution is to use the SDK.

Use the SDK

The SDK can variate a given input with human-readable equivalent characters in the extended ASCII charset, making it handleable by CTF platforms (at least we expect so). If a character is outside this charset, it is left untouched.

To import this part of the SDK, execute the following.

go get github.com/ctfer-io/chall-manager/sdk

Then, in your scenario, you can create a constant that contains the “base flag” (i.e. the unvariated flag).

const flag = "my-supper-flag"

Finally, you can export the variated flag. With the SDK, it looks like the following.

package main

import (
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"

	"github.com/ctfer-io/chall-manager/sdk"
)

const flag = "my-supper-flag"

func main() {
	sdk.Run(func(req *sdk.Request, resp *sdk.Response, opts ...pulumi.ResourceOption) error {
		// ...

		resp.ConnectionInfo = pulumi.String("...").ToStringOutput()
		resp.Flag = pulumi.Sprintf("BREFCTF{%s}", sdk.VariateFlag(req.Config.Identity, flag))
		return nil
	})
}
The same can be achieved without the SDK wrapper, using raw Pulumi:

package main

import (
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi/config"

    "github.com/ctfer-io/chall-manager/sdk"
)

const flag = "my-supper-flag"

func main() {
	pulumi.Run(func(ctx *pulumi.Context) error {
		// 1. Load config
		cfg := config.New(ctx, "no-sdk")
		config := map[string]string{
			"identity": cfg.Get("identity"),
		}

		// 2. Create resources
		// ...

		// 3. Export outputs
		ctx.Export("connection_info", pulumi.String("..."))
        ctx.Export("flag", pulumi.Sprintf("BREFCTF{%s}", sdk.VariateFlag(config["identity"], flag)))
		return nil
	})
}

If you want to use a decorator around the flag (e.g. BREFCTF{}), don’t put it in the flag constant, or it will be variated too.