Request for Product: Typescript Docker Construct
Dockerfiles are a constant source of frustration in the container ecosystem. First, they are difficult to write (well). Second, they can't express all types of build graphs – only linear graphs are easily expressible, limiting the amount of parallelism you can get (i.e., how fast your builds can be). Finally, they are difficult to integrate natively into build systems – passing well-known build arguments, environment variables, and other build-time variables is awkward.
But what if we could easily define Docker builds in code? What follows is a high-level description of the solution, then a blueprint for how it could be built.
Solution: Typescript Docker Construct
Define a DAG (directed acyclic graph) using the same construct pattern that AWS CDK, Pulumi, and Terraform use for infrastructure. Serialize the synthesized construct to a Buildkit frontend that transparently executes the operations using docker build, without any other plugins or arguments (see: An Alternative to the Dockerfile).
export class Build extends Construct {
  constructor(props = {}) {
    super(undefined as any, '');

    const buildImg = new Image(this, 'buildImage', {
      from: 'ubuntu:latest',
      buildArgs: {
        'http.proxy': 'http://proxy.example.com:8080',
        'https.proxy': 'https://proxy.example.com:8080',
      },
    });

    const appArtifacts = new AppBuilder(this, 'appBuild', {
      image: buildImg,
      source: new Source(this, 'gitSrc', {
        url: 'git://github.com/moby/buildkit.git',
      }),
    });

    new MergeOp(this, 'merge', {
      inputs: [
        new SourceOp(this, 'src1', {
          source: appArtifacts.outputs.image,
          exec: {
            path: './bin/app',
            args: ['--arg1', '--arg2'],
          },
        }),
      ],
    });

    const runtimeImage = new Image(this, 'runtimeImage', {
      from: buildImg,
      buildArgs: {
        'NODE_ENV': 'production',
      },
    });

    runtimeImage.copy(this, 'copy', {
      source: appArtifacts.outputs.image,
      destination: '/app',
    });
  }
}
Why?
Typescript has replaced YAML for infrastructure configuration (see why). It's easy to use, is a complete programming language, and has an extensive code-sharing module/import system. It also has basic type safety that enhances API discoverability.
Buildkit, the engine that powers Docker, can build, cache, and represent arbitrarily complex builds. Unfortunately, the Dockerfile can't express all of these builds. Attempts at solving this have not been fruitful (best-practices configurations like Buildpacks don't move the needle).
There's a missing link between building artifacts and deploying infrastructure. The two have different tools (Docker and Pulumi, say) and no trivial way to work together. Tagging and piping your artifacts through to the IaC (infrastructure) tools in the wrong way can trigger long, unnecessary rebuilds. Connecting your build and deployment pipelines takes time.
Using a similar pattern as IaC paves the path for integrating the two. Your image builds can be part of your infrastructure deployment. The deployment will only need to know about your source code.
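As a sketch of what that integration could look like, the same construct pattern could host both the image build and the infrastructure that consumes it, so the deployment only references source code. Every type here (Construct, DockerImage, ContainerService) and the registry URL are hypothetical stand-ins, not real CDK or Pulumi APIs:

```typescript
// Hypothetical sketch: one construct tree holds the image build and the
// infrastructure consuming it. None of these types are real CDK/Pulumi APIs.
class Construct {
  constructor(readonly scope: Construct | undefined, readonly id: string) {}
}

// An image build defined in code; synth would compile this to Buildkit.
class DockerImage extends Construct {
  constructor(scope: Construct, id: string, readonly props: { context: string }) {
    super(scope, id);
  }
  // A content-addressed reference the deployment can depend on, so
  // unchanged sources never trigger a rebuild or redeploy. The digest
  // placeholder stands in for hashing the build context.
  get ref(): string {
    return `registry.example.com/${this.id}@sha256:<digest-of-${this.props.context}>`;
  }
}

// Infrastructure that consumes the image by reference only.
class ContainerService extends Construct {
  constructor(scope: Construct, id: string, readonly props: { image: string }) {
    super(scope, id);
  }
}

// The deployment only needs to know about source code: the image ref
// flows from the build construct into the service construct.
const app = new Construct(undefined, 'app');
const image = new DockerImage(app, 'api', { context: './services/api' });
const service = new ContainerService(app, 'api-svc', { image: image.ref });
console.log(service.props.image);
```

Because the service depends on a content-addressed reference rather than a mutable tag, the IaC layer can decide for itself whether anything actually changed.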
How?
Buildkit accepts alternative frontends (see my 2019 post on An Alternative to the Dockerfile for an example). In your configuration file that replaces a Dockerfile, you use a directive
#syntax=repo/image
to indicate that a custom builder image should be pulled. That image is run with the context provided and has access to a special gRPC server that runs a service called the LLB (low-level build language) Bridge. You can see the protobuf definition here. The custom builder defines a SolveRequest with a build graph and calls Solve on the service. Buildkit does the rest.

Use the aws/constructs library to define the graph. The constructs library is a lightweight way of representing a composable configuration model through code. The synth step should compile the graph into the Buildkit SolveRequest.

Ideally, the Typescript client would make the request directly. Unfortunately, the gRPC service runs over stdio, and the Node gRPC runtime does not support HTTP/2 over stdio (issue). You'd also need to copy over all the protobufs in the Buildkit repository and compile them to a Typescript client (using ts-proto), which is a royal pain.

It's unclear how best to connect the two. A few ideas: write the protobuf requests to disk and then load them using a client written in Go that can be vendored rather than generated (as I did in Mockerfile). Or compile the SolveRequest to an intermediate format that can be loaded more easily than raw protobuf requests. Or run a Go gRPC server over TCP that forwards requests to the Buildkit server (still connecting with the Node gRPC client). Maybe the maintainers of Buildkit would support running the gRPC server on TCP as a default?
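To make the intermediate-format idea concrete, here is a minimal, self-contained sketch of the construct pattern plus a synth step that compiles the tree into a JSON graph, which a vendored Go loader could then turn into a real SolveRequest. The Construct, Op, and synth names are illustrative stand-ins, not the real aws/constructs or Buildkit APIs:

```typescript
// Minimal stand-in for the constructs pattern: each node registers
// itself with its parent, and synth() walks the tree into a flat graph.
class Construct {
  readonly children: Construct[] = [];
  constructor(readonly scope: Construct | undefined, readonly id: string) {
    scope?.children.push(this);
  }
}

// A build operation with explicit dependencies, mirroring an LLB vertex.
class Op extends Construct {
  constructor(
    scope: Construct,
    id: string,
    readonly attrs: Record<string, string>,
    readonly deps: Op[] = [],
  ) {
    super(scope, id);
  }
}

type GraphNode = { id: string; attrs: Record<string, string>; deps: string[] };

// Compile the construct tree into a JSON-serializable graph -- the
// intermediate format a vendored Go client could load and translate
// into a Buildkit SolveRequest.
function synth(root: Construct): { nodes: GraphNode[] } {
  const nodes: GraphNode[] = [];
  const visit = (c: Construct) => {
    if (c instanceof Op) {
      nodes.push({ id: c.id, attrs: c.attrs, deps: c.deps.map((d) => d.id) });
    }
    c.children.forEach(visit);
  };
  visit(root);
  return { nodes };
}

// Usage: a two-stage build where the compile step depends on the base image.
const root = new Construct(undefined, 'build');
const base = new Op(root, 'base', { from: 'ubuntu:latest' });
const compile = new Op(root, 'compile', { run: 'make all' }, [base]);
const graph = synth(root);
console.log(JSON.stringify(graph, null, 2));
```

Because dependencies are explicit edges rather than an implied linear order, independent ops in the graph are free to build in parallel – exactly what the Dockerfile struggles to express.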
If you're interested in working on this, let me know. I'm happy to provide more guidance or answer any questions on the specifics of how it would work.