What if a package manager built packages on demand? What if docker registries built images as they were requested?
Today, there are a few manual steps between a developer writing code and other developers being able to use that code as a package. Some package managers allow developers to reference code by git references (e.g., a checksum or tag), but not all code is usable simply by pulling the source files. Instead, there’s usually a bundling or compilation step.
What if the package manager could bundle software on demand? If a user requests a Docker image example:v3 that hasn’t been uploaded to the registry, the image registry could still satisfy the request by pulling the code, building the image, and serving the artifact. The end user gets the image they wanted, and the maintainer doesn’t have to worry about building, tagging, and pushing every time they make a change.
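The pull-or-build flow can be sketched in a few lines. This is a toy model, not a real registry API: the Registry class and build_from_source function are hypothetical stand-ins for a real image store and a real clone-and-build step.

```python
# A minimal sketch of "build on demand": try to serve the requested tag,
# and fall back to building from source on a cache miss.
# Registry and build_from_source are hypothetical, not a real API.

def build_from_source(tag: str) -> bytes:
    """Stand-in for: resolve the tag to a repo ref, clone it,
    run `docker build`, and return the resulting artifact."""
    return f"image built from source for {tag}".encode()

class Registry:
    def __init__(self):
        self._store = {}  # tag -> artifact bytes

    def pull(self, tag: str) -> bytes:
        artifact = self._store.get(tag)
        if artifact is None:
            # Cache miss: build the image, cache it, then serve it, so
            # the next requester gets the artifact without a rebuild.
            artifact = build_from_source(tag)
            self._store[tag] = artifact
        return artifact

registry = Registry()
img = registry.pull("example:v3")   # not uploaded yet -> built on demand
img2 = registry.pull("example:v3")  # now cached -> served directly
```

The key property is that the first request pays the build cost and every later request is an ordinary pull.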
It’s not just Docker images. The foundation is being built for cross-language packages — C++ code compiled with emscripten and exposed to JavaScript via embind, or WebAssembly modules exporting functions to different runtimes. Today, the process looks something like this — fork a repo, create a project_bindings.cpp file that exposes a few methods, compile it to JavaScript (or another language’s) bindings, and push it to the appropriate package manager. What if all of this could happen automatically? What if you could find (most) code on GitHub and just import it, regardless of language?
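The manual steps above are mechanical enough to express as a pipeline. Here is a dry-run sketch in Python: the repo URL and file names are made up, and the commands are illustrative (a real implementation would execute them with subprocess and handle failures):

```python
# Sketch of automating the manual binding workflow as one pipeline.
# In dry-run mode it only reports the commands it would execute.

def binding_pipeline(repo_url: str, dry_run: bool = True) -> list[str]:
    steps = [
        f"git clone {repo_url} work",                           # fork/clone the source
        "write work/project_bindings.cpp",                      # expose methods via embind
        "emcc --bind work/project_bindings.cpp -o project.js",  # compile to JS bindings
        "npm publish",                                          # push to the package manager
    ]
    if dry_run:
        return steps  # just report the plan
    raise NotImplementedError("real execution is out of scope for this sketch")

plan = binding_pipeline("https://github.com/example/project")
```

Each step is deterministic given the repo and the bindings file, which is what makes the "just import it" experience plausible to automate.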
There’s some hand-waving here. A repository and a Dockerfile aren’t sufficient to figure out how to build the project into a Docker image (although it’s sufficient most of the time). WebAssembly or other bindings to languages aren’t always straightforward to figure out (although the process is getting easier all the time). A basic version of this is what I described as GitHub’s missing package manager, but there is a lot more that can be built.
You have basically described Nix :)
* Packages in Nix are defined as functions (so-called derivations)
* Package dependencies are function parameters
* You can override dependencies by simply passing different arguments
* Nix derivations are fully overridable:
* You can override the source - tell Nix to get the source from a local directory, ssh+git repo, hosted VCS like GitHub/GitLab/..., package registry like NPM, Cargo, etc.
* You can apply patches on top of the source
* You override compiler flags, build system options, or the entire build recipe
* Force-enable or disable the tests for all packages, or a specific package, in your dependency graph (so-called closure in Nix)
* Nix will automatically figure out if it needs to build the package from source or download a binary artifact. This is so-called substitution, via a binary cache - you can disable a particular cache or add additional ones. Sort of like pulling different layers of a docker image from different docker registries. The difference is that Docker only keeps the image in the registry, while Nix always has the build recipe, so you can reproduce it at will.
* Since Nix knows your complete dependency graph, you can bundle your app (or entire OS) in a variety of formats, including Docker: https://github.com/nix-community/nixos-generators#supported-formats
* Nix is a killer tool for building polyglot apps - e.g. combining (building and linking in Wasm modules) Rust, C++ and Nim code
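To make the "packages are functions, and everything is overridable" point concrete, here is an illustrative Nix snippet. The owner, repo, rev, and hash values are placeholders, not a real project:

```nix
# Illustrative only: take the stock `hello` derivation and override
# where its source comes from, its build flags, and its tests.
myHello = pkgs.hello.overrideAttrs (old: {
  src = pkgs.fetchFromGitHub {
    owner = "example";            # placeholder
    repo = "hello";               # placeholder
    rev = "v2.12";                # placeholder
    sha256 = pkgs.lib.fakeSha256; # replace with the real hash
  };
  configureFlags = (old.configureFlags or []) ++ [ "--disable-nls" ];
  doCheck = false;  # skip this package's test suite
});
```

The same mechanism works anywhere in the dependency graph, which is what the bullets above are describing.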
The GitHub Actions marketplace should allow different channels to add the last steps for publication to different registries.
Actions general enough to cover every tool might be a challenge, but if you already build your package through code or configuration, you just need a compatible action.
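As a sketch of what "a compatible action" could look like today, here is a workflow wired to marketplace actions. The actions named (actions/checkout, docker/login-action, docker/build-push-action) are real, but the image name ghcr.io/example/project is a placeholder and the exact inputs are illustrative:

```yaml
# Sketch: marketplace actions handle the "last mile" of publication.
name: publish
on:
  push:
    tags: ['v*']
jobs:
  image:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - uses: docker/build-push-action@v5
        with:
          push: true
          tags: ghcr.io/example/project:${{ github.ref_name }}
```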
Bazel might be general enough to compile any language in an incremental and effective way. This could happen on the requester's machine, on bigger infrastructure, or just using GitHub's build resources.
So, recapping: ideally, as a dev, one adds a Docker image and an automated build process, goes into the marketplace, gets a general build plugin, selects distribution channels (or accepts the default), and voilà. They're left with development-only flows.
Meanwhile, the first requesters on any distribution channel contribute to the registry and end up verifying and QAing the final builds (this shouldn't be a problem, as everything is dockerized and uses reliable build systems).