I highly recommend Nix. We used to use apt-get, pypi, rubygems, npm, make, and others, and now it's all replaced with a single tool: nix. It's dramatically simplified builds and dependency management for us at MixRank.
Benefits:
* It's declarative, so no one has to run anything to get to the correct state when a dependency has changed. Open a new nix-shell and automatically get everything. It's impossible to get in a bad state because it doesn't touch your system.
* We know we're all running the same software because the build will fail if hashes don't match.
* Portable across most Unixes, and Windows support is in progress.
* Patching other people's code is as easy as referencing a .patch file from a nix expression. You don't have to wait for your patch to be accepted in the main project or fork the project to build a custom release.
* There's no virtualization of IO like in containers. It effectively has no overhead.
* We can easily use different versions of the same piece of software in parallel; they don't collide because the hashes don't match.
* It has a very active community. The package repository, nixpkgs [1], has 73k commits, 650 contributors, and median pull request acceptance of 18 hours.
* Our software won't silently break because one of our package managers replaced a package. We maintain a fork of nixpkgs which we rebase on our own schedule.
If you've ever been frustrated with your package manager or build tools, give nix a shot!
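To make the declarative and patching points above concrete, here's a minimal sketch of a `shell.nix` (the file name, package choices, and `fix-parser.patch` are hypothetical): opening `nix-shell` in a directory with this file builds or fetches exactly these dependencies, nothing touches the system, and a local patch is applied to an upstream package without forking it.

```nix
# shell.nix -- hypothetical sketch of a declarative dev environment.
{ pkgs ? import <nixpkgs> {} }:

let
  # Apply a local patch to an upstream package; the result gets its own
  # store hash, so it never collides with the unpatched version.
  patchedLibxml2 = pkgs.libxml2.overrideAttrs (old: {
    patches = (old.patches or []) ++ [ ./fix-parser.patch ];
  });
in
pkgs.mkShell {
  buildInputs = [
    pkgs.python3
    pkgs.postgresql
    patchedLibxml2
  ];
}
```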
All of the Scheme your heart could want. I run Arch Linux + guix and it's a magical happy land. I'm sure the same could be accomplished with nix, but I enjoy writing Scheme too much!
I've watched a few talks on guix, and there's lots to like.
I recall there's supposed to be compatibility between nix and guix, but can't remember the details. Are they going to share package expressions, have a transpiler, or something like this?
They both use the same daemon (guix-builder is a fork of nix-builder), so I think they could share packages, provided the hashing is the same. That hypothetical hasn't been implemented yet; I'd love it.
There is an importer[0], but it's basic and just grabs the package metadata; it's still up to the packager to implement the package. Hopefully this is further automated in the future.
A question, now that I've got my guix zealotry out of the way:
Do you run your own hydra? I really want to but it seems like setting one up is a pain.
How do you handle /nix/store sizes? I've found that gc gets away from me. For one user with very little installed, my store (including its hardlink dedup) is 11 GB, and I have 59 packages installed.
I considered running hydra briefly, but shied away from the extra complexity and operational overhead. Most things are already available in the community hydra, and building the unique things is hardly an inconvenience since it happens once per machine. Instead of using channels, we maintain a fork of nixpkgs and occasionally squash it into a git subtree of our main repo. Our git history reflects everything you could want to know to replicate a working build of our software.
I've never done a gc on my /nix/store, and have recklessly added things to it while experimenting. It's 25GB and has 44k (!) derivations/packages in it.
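For reference, the usual commands for reclaiming and inspecting store space on a standard Nix install look something like this:

```shell
# Delete store paths not reachable from any GC root (profiles, build results):
nix-collect-garbage

# Also delete old profile generations first, so more becomes collectable:
nix-collect-garbage -d

# Hardlink identical files across the store to save space:
nix-store --optimise

# Inspect what a given store path actually pulls in:
nix-store -q --requisites /nix/store/<hash>-<name>
```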
How do you maintain derivations for your own software with respect to nixpkgs?
Do you have your own function that composes things and imports <nixpkgs>, along with overrides for things that you find deficient in the derivations found in nixpkgs? Do you include derivations for your own software as part of your fork of nixpkgs that doesn't make its way back upstream?
The simplest approach is to just use the default nix channel. For a team and production environment, this isn't ideal because you don't control when changes are made to your dependencies. So, we fork nixpkgs and rebase on our own schedule. This is also nice because it gives us an easy avenue to contribute upstream. Changes that may have community value go in this fork, nothing private.
The fork is then squashed as a git subtree into our main repo. That way it's part of our history like everything else. You can even git-bisect with confidence in the face of many dependency changes!
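A sketch of that workflow with git-subtree (the remote name, URL, and `vendor/nixpkgs` prefix here are hypothetical):

```shell
# One-time: vendor the nixpkgs fork into the main repo as a subtree.
git remote add nixpkgs-fork git@example.com:ourorg/nixpkgs.git
git subtree add --prefix=vendor/nixpkgs nixpkgs-fork master --squash

# On our own schedule: pull in the rebased fork, squashed to one commit.
git subtree pull --prefix=vendor/nixpkgs nixpkgs-fork master --squash
```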
In addition, we have a tree of our own nix expressions for dependencies that are unique to us or that we haven't bothered to contribute back to nixpkgs yet. Sometimes this includes copying an expression for a dependency into our tree with a patch until the bug is fixed in an upstream project.
The final result is one or more nix files that reference our private nix expressions and the nixpkgs subtree, intended to be used with nix-shell.
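Concretely, such a file might look something like this (the `vendor/nixpkgs` and `nix/private.nix` paths and the attribute names are hypothetical):

```nix
# shell.nix -- sketch: build against the vendored nixpkgs subtree
# plus our private expressions, for use with nix-shell.
let
  pkgs = import ./vendor/nixpkgs {};                     # the squashed subtree
  private = import ./nix/private.nix { inherit pkgs; };  # our own derivations
in
pkgs.mkShell {
  buildInputs = [ pkgs.python3 private.ourPatchedDep ];
}
```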
I'm curious how this handles changes within nixpkgs itself. For example, if developer A has nixpkgs version 111, and developer B has nixpkgs version 222, and they both attempt to build a Haskell project depending on "libxml", they may compile against different versions of libxml (or worse, an older version of libxml might be incompatible with the Haskell code).
Nixpkgs solves this itself by "owning" the entire dependency tree. That is, if there's a definition in nixpkgs that requires libxml, you can bet it works with the libxml derivation defined in that same nixpkgs. If not, there might be libxml1, libxml2 derivations defined.
In practice, using the ~latest version of everything, or whatever nixpkgs happens to have, usually works well. I started moving toward building my own nixpkgs distribution, which included a vendored copy of nixpkgs at a specific version (rather than whatever version was available at the time you downloaded it), combined with a few customizations for specific (older) versions of, say, libxml. This ensured multiple developers could download things months apart and still expect everything to work. (nixpkgs changes somewhat rapidly.)
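A common way to do that pinning (the `<rev>` and `<sha256>` below are placeholders to fill in) is to import nixpkgs from a fixed tarball instead of a channel, so every developer evaluates against the same revision no matter when they fetch it:

```nix
# pinned.nix -- sketch: everyone gets the same nixpkgs regardless of
# when they download it, because the tarball is fixed by rev and hash.
let
  pinnedNixpkgs = builtins.fetchTarball {
    url = "https://github.com/NixOS/nixpkgs/archive/<rev>.tar.gz";
    sha256 = "<sha256>";
  };
in
import pinnedNixpkgs {}
```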
Probably the most succinct comparison of Nix and Docker I've seen.
So this works by having Stack (a Haskell build tool?) emit a nix file to build a Haskell project? Very cool.
The more natural approach for this seems to be to use your language's package manager spec as a base to build Nix packages. Since package manager spec files already contain build instructions, this frees you up from any dependency on a specific build system. It would then be straightforward to generate Nix packages from a repository and serve them through a Nix channel.
All that being said, I should implement this for OCaml and OPAM. Nix is amazing, but it is "all-in." Doing native Nix builds the right way means all your dependencies are Nix packages. (Nix builds that shell out to a language's package manager are impure and would likely require hand-holding with external deps.) This article seems like a nice step towards solving that, but I think the real solution would be tooling around the package manager itself.
> The more natural approach for this seems to be to use your language's package manager spec as a base to build Nix packages. Since package manager spec files already contain build instructions, this frees you up from any dependency on a specific build system. It would then be straightforward to generate Nix packages from a repository and serve them through a Nix channel.
Yeah, `stack.yaml` is weird because it is both manually edited and generated. I'd rather see a split where `Cabal-file + overrides/extra-info => build-plan` (note that no central package db is consulted). Then either Nix, or the built-in default, can interpret that build plan (and just the build plan) to actually build the thing. We can add a feedback loop `Cabal-file + overrides/extra-info + Maybe(solved-versions) => build-plan + solved-versions`, analogous to Cargo's and Bundler's lockfiles, too.
Basically the goal is to separate the less deterministic aspects of querying versions and solving constraints from the entirely deterministic, simple-stupid steps of actually building things.
Ideally, then, nixpkgs would need no knowledge of upstream packages, just the ability to interpret build plans per language package manager. Caching would follow from making sure the build plan didn't vary in stupid ways (of course the Nix interpretation of the build plan should be deterministic).
For system packages relying on a language package manager (xmonad...), use a "dynamic import" (import the result of a derivation) to avoid the need to cache these steps in the nixpkgs source.
I have never used stack: I went from cabal-install hell straight to NixOS.
My current approach is: write the usual .cabal file, run
    cabal2nix --shell . > default.nix
then build and install the package (with `nix-build` or `nix-env -if.`) or open a nix-shell (just `nix-shell` in the project root) where I can run cabal-install to do things like uploading to hackage, building haddock docs, using cabal repl.
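The full round trip described above, as a sketch (assuming cabal2nix and Nix are installed and you're in the project root):

```shell
# Regenerate the nix expression whenever the .cabal file changes:
cabal2nix --shell . > default.nix

# Build the package, or install it into the user profile:
nix-build
nix-env -if.

# Or drop into a dev environment with GHC and all dependencies,
# where cabal repl, haddock builds, and hackage uploads all work:
nix-shell
```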
The nix expression is deterministically generated from the .cabal file, so unless you update the nix channel it won't change, and you can safely delete and regenerate it at any time. stack.yaml, by contrast, is not generated, or not entirely, if I understand correctly.
Are there other advantages over cabal2nix besides parallelization and incremental recompilation? I rarely handle two packages at once, so it's not really an issue for me.
Incremental recompilation is a big one, but beyond that, out-of-the-box handling of multi-package projects without the hassle of custom scripting or setting up .nix files just right. If all your projects are single package, `cabal repl` should work pretty well. Where stack really shines, and where it's interesting to simply reuse that, is for multi-package repos, which tend to be very common in private company Haskell developments.
[1] https://github.com/nixos/nixpkgs