As I mentioned in a previous post, I'm exploring new ways of handling dependency problems in the experimental aptitude branch. This (very long) post explains the basic idea and some of its ramifications. Hopefully, I've also told the blog to just put this first paragraph into the RSS feed, so I can ramble at length without making Planet readers want to burn me in effigy.
I started seriously thinking about writing a new resolver back around New Year's, when I had a minor insight. I realized that a core behavior of aptitude, apt-get, and even dselect, a behavior that I've just accepted for years without much thought, is actually a UI deficiency. You see, all of these programs -- indeed, all dependency resolvers that I've used (although I'd love to learn of software that doesn't behave this way!) -- work in a simple way: they find a solution to the dependency problem that you hand them, using some simple heuristics to generate a "reasonable" one, and ask you if it's OK. If not, well, you get to solve the whole thing yourself.
The problem here is that there's no middle ground between fully automatic operation and fully manual operation. But a fully automatic solver will always produce some results you don't want -- this is demonstrated by the fact that two different people, confronted with the same dependency problem, might prefer different resolutions to it. My idea was that the process of resolving dependencies should really be much more interactive, with the user able to instruct the program to go look for a "better" solution. In this model, it might even be acceptable for the initial solution to be "poorer" than apt's solution, as long as it was easy to get to the user's favorite way of handling the situation. If you think of the space of all possible solutions, the program becomes a tool for interactively examining this space (ignoring non-solutions).
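To make that interaction model concrete, here's a rough C++ sketch of the loop I have in mind; the Resolver and Solution types are purely illustrative stand-ins, not aptitude's real API:

```cpp
// Illustrative only: a made-up Resolver that can be asked for the "next best"
// solution until the user accepts one or the space is exhausted.
#include <iostream>
#include <optional>
#include <string>

struct Solution { std::string description; };

struct Resolver {
  int handed_out = 0;
  // Hand back the next-best solution, or nothing if we've run out.
  std::optional<Solution> next_solution() {
    if (handed_out >= 3) return std::nullopt;
    return Solution{"candidate #" + std::to_string(++handed_out)};
  }
};

int main() {
  Resolver r;
  while (auto s = r.next_solution()) {
    std::cout << s->description << " -- accept? [y/N] ";
    std::string answer;
    std::getline(std::cin, answer);
    if (answer == "y") {
      std::cout << "applying " << s->description << "\n";
      return 0;
    }
    // Otherwise loop around and ask for a "better" solution.
  }
  std::cout << "no more candidates to offer\n";
  return 1;
}
```

The point is simply that rejecting a proposal doesn't dump you into fully manual mode; it just asks the resolver for the next candidate.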
It happens that there are a lot of algorithms designed for this sort of situation. My first thought was that best-first search might be a good starting point, as it's simple and it makes it easy to get the "next best" solution. Now, one danger with this algorithm is that it can take exponential time to find a solution -- in fact, any algorithm that is guaranteed to find a solution if one exists will have this problem (assuming P!=NP). However, I was relying (and my experience so far seems to prove me right) on the fact that the Debian archive only produces "easy" instances of this problem. To put it another way, we don't have an archive full of crazy dependency conflicts, and so almost anything halfway sensible that you do will solve a dependency problem -- the difficulty is not in finding solutions, but in generating solutions that are "reasonable".
However, at the time I had to finish a master's thesis. As I was wrapping up work on that, I started in on the dependency resolver again. Now, best-first search works by keeping a priority queue of search nodes (these are 'partial solutions'; in my case, collections of actions that might resolve the dependencies). At each step, the algorithm pulls the "best" node off the queue and, if it's not a solution (doesn't resolve the dependencies), enqueues its successors -- these are the search nodes that can be reached in "one step" from the current node. Thus, there are two main things that have to be defined before the algorithm can be used: the ordering of search nodes (which ones are "better"?) and the generation of successors.
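In skeleton form, the loop looks roughly like this (everything inside SearchNode is a placeholder for the real resolver's data structures):

```cpp
// A bare-bones best-first search loop; the SearchNode members are stand-ins.
#include <optional>
#include <queue>
#include <vector>

struct SearchNode {
  std::vector<int> actions;       // the partial solution so far
  std::vector<int> broken_deps;   // dependencies this partial solution leaves broken
  int score = 0;                  // higher is "better"

  bool is_solution() const { return broken_deps.empty(); }
  std::vector<SearchNode> successors() const { return {}; }  // successor generation, discussed below
};

struct ByScore {
  bool operator()(const SearchNode &a, const SearchNode &b) const {
    return a.score < b.score;     // priority_queue pops the largest score first
  }
};

std::optional<SearchNode> best_first_search(const SearchNode &root) {
  std::priority_queue<SearchNode, std::vector<SearchNode>, ByScore> open;
  open.push(root);
  while (!open.empty()) {
    SearchNode best = open.top();
    open.pop();
    if (best.is_solution())
      return best;                          // offer this candidate to the user
    for (const SearchNode &succ : best.successors())
      open.push(succ);                      // everything reachable in "one step"
  }
  return std::nullopt;                      // search space exhausted
}
```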
Ordering search nodes has two main goals: you want to find a solution fast and you want to produce a good solution (note that these goals may be opposed to each other). Figuring out the general shape of the ordering was easy -- it would penalize broken dependencies and lots of changes (to guide the search towards solutions), and it would weigh the "desirability" of various solutions (for instance, the removal of an installed package should be penalized relative to the installation of a non-installed package). It takes effort to find the right values for the various factors that are involved, but I deferred this problem for later.
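Purely to make the shape of the ordering concrete (all of the numbers here are invented for illustration, not the values aptitude actually uses), a node's score might be assembled from a few weighted counts:

```cpp
// Invented weights, just to show how the ordering penalizes broken
// dependencies, large change sets, and undesirable actions such as removals.
struct ScoreWeights {
  int broken_dep_penalty = -100;  // push the search toward real solutions
  int action_penalty     = -10;   // prefer small sets of changes
  int removal_penalty    = -300;  // removing an installed package is bad
  int new_install_bonus  = 0;     // installing a not-installed package is cheap
};

int score_node(int broken_deps, int total_actions, int removals, int new_installs,
               const ScoreWeights &w = {}) {
  return broken_deps   * w.broken_dep_penalty +
         total_actions * w.action_penalty +
         removals      * w.removal_penalty +
         new_installs  * w.new_install_bonus;
}
```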
The successor generation was not so obvious, although it seems obvious in hindsight, and this was where I stumbled on the second simple-yet-cool idea in the new resolver. Now, it's obvious that you could generate successors by just blindly performing every possible installation, removal, and upgrade. However, this would be completely unmanageable: the number of successor nodes at each step would be so large that the algorithm would get bogged down almost immediately. A better approach is to do a dependency-directed search. Find an unsatisfied dependency and enqueue all the different ways of satisfying it; these are the successors of your search node.
But what are the ways of satisfying a dependency? For a straight dependency it's fairly simple: install any package version that fulfills the dependency, or remove the source package (the package declaring the dependency), or install a different version of the source package. But for a Conflict, you have to remove every version of any package that is conflicted against. Plus, you have to deal with provided packages and other "interesting" special cases.
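For the straight-dependency case, successor generation comes down to something like the following sketch; the types and the other_versions() helper are hypothetical, and Conflicts, Provides, and the other special cases are ignored here:

```cpp
// Hypothetical types: every way of satisfying one broken dependency becomes
// one successor of the current search node.
#include <vector>

struct Version { int package_id; int version_id; };

struct Dependency {
  Version source;                  // the version that declares the dependency
  std::vector<Version> targets;    // the ORed set of versions that satisfy it
};

struct PartialSolution {
  std::vector<Version> choices;
  PartialSolution with(Version v) const {
    PartialSolution copy = *this;
    copy.choices.push_back(v);
    return copy;
  }
};

// Stand-in: the real code would enumerate the source package's other
// versions, with removal counted as one of the options.
std::vector<Version> other_versions(const Version &) { return {}; }

std::vector<PartialSolution> successors_for(const PartialSolution &node,
                                            const Dependency &broken) {
  std::vector<PartialSolution> result;
  for (const Version &t : broken.targets)           // install something the dependency wants...
    result.push_back(node.with(t));
  for (const Version &s : other_versions(broken.source))
    result.push_back(node.with(s));                 // ...or move the source out of the way
  return result;
}
```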
At this point I did what anyone with too much academic training would do: I started looking for a simple model of the problem that I could solve without much trouble. Of course, the satisfaction of Boolean expressions is one option, but this is kind of unsatisfying, since you end up with a totally different problem than you started with. It might be possible to solve the problem this way, but as a tool for reasoning about it, SAT is unilluminating. I wanted a model that was closer to the world of package dependencies.
What I discovered is that all you need to represent dependencies is, well, dependencies. You do not need Conflicts; each dependency just targets an ORed set of individual package versions; and (perhaps even more useful for eliminating annoying special cases) you do not need to distinguish between "installed" and "not installed" packages. The elimination of virtual and not-fully-versioned dependencies (leaving you with ORed dependencies and conflicts that target individual package versions) is pretty obvious, but the elimination of Conflicts and removals might be less so. As a CS weenie, I think this is a rather cute little trick; skip the next paragraph if you don't.
Ok, now that the non-CS-weenies are all gone, I'll continue. Look at it this way: the core property of package versions is that they are mutually exclusive; you can't have versions 1 and 2 of "foo" installed at the same time. So to eliminate removals, I just introduced a new version of every package (call it, say, UNINST) that represents the removal of that package. This models removals perfectly, since a package can't be removed at the same time that it's installed. Now it should be obvious how Conflicts are modeled: if "foo" has versions 1, 2, 3, and 4, then a conflict on versions 2 and 3 is equivalent to a dependency on version 1, version 4, or foo's UNINST version; this follows because, with the elimination of the ability to remove packages, we know that exactly one version of each package is installed.
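Here's a toy version of that encoding; the names and version numbers are invented, but the complement trick is exactly the one described above:

```cpp
// Removal is modeled as an extra UNINST pseudo-version, so a Conflicts on
// some versions of "foo" becomes a dependency on all of foo's other versions.
#include <algorithm>
#include <string>
#include <vector>

constexpr int UNINST = -1;  // pseudo-version meaning "this package is removed"

struct Package {
  std::string name;
  std::vector<int> versions;  // real versions plus UNINST
};

std::vector<int> conflict_as_dependency(const Package &pkg,
                                        const std::vector<int> &conflicting) {
  std::vector<int> allowed;
  for (int v : pkg.versions)
    if (std::find(conflicting.begin(), conflicting.end(), v) == conflicting.end())
      allowed.push_back(v);
  return allowed;  // "depend on any of these" == "conflict with the others"
}

int main() {
  Package foo{"foo", {1, 2, 3, 4, UNINST}};
  // A Conflicts on foo versions 2 and 3 becomes a dependency on 1, 4, or UNINST.
  std::vector<int> dep = conflict_as_dependency(foo, {2, 3});
  return dep.size() == 3 ? 0 : 1;
}
```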
Now, the really interesting thing here, aside from the fact that I was able to prove cool theorems about my algorithm using this model, is that the model is so close to the problem at hand that it is possible to perform transformations between the two on the fly using thin wrapper classes. If you've used libapt, this is your cue to start drooling :-). If you haven't, I should explain that because libapt gives you a very low-level view of the package cache, finding answers to apparently simple questions -- such as "what actions will satisfy this dependency?" or "what packages depend on this one?" -- requires complex and error-prone iteration constructs.
Explicitly instantiating the abstract model for APT, and writing the algorithm against this model, effectively factors the code of the resolver into two parts: a module of wrapper objects that define the "physics" of APT's world, and the core resolver algorithm. The nasty iteration constructs are isolated in one place and can be verified without more than a small amount of pain, while the algorithm itself is free of them. The magic of C++ templates is used to bind the two pieces together at compile-time (although I suspect that the iterators aren't, or shouldn't be, fully inlined, since some of them are actually quite complicated; this is an optimization to look at later).
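The overall shape of that factoring is roughly the following; the class and method names are illustrative, not the real ones:

```cpp
// The resolver is a template over a "universe" type that supplies the physics
// of the package world; a wrapper object adapts libapt's cache to that
// interface.  All names below are made up for illustration.
#include <vector>

struct AptUniverse {
  struct Version {};
  struct Dependency {};
  struct Solution {};

  // The hairy libapt iteration (Provides, versioned deps, and so on) is
  // hidden behind simple accessors like these.
  std::vector<Version> versions_satisfying(const Dependency &) const { return {}; }
  std::vector<Dependency> broken_dependencies() const { return {}; }
};

template <typename Universe>
class GenericResolver {
public:
  explicit GenericResolver(const Universe &u) : universe(u) {}

  // The algorithm only ever talks to the world through Universe's types and
  // accessors; the binding happens at compile time.
  typename Universe::Solution find_next_solution() {
    return typename Universe::Solution{};  // placeholder for the real search
  }

private:
  const Universe &universe;
};

// Usage would look something like: GenericResolver<AptUniverse> resolver(u);
```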
With this model in place, generating successors becomes almost trivial. The technique that's actually used is a bit more sophisticated than what I described above; you can find the details in my more formal LaTeX writeup. The most interesting thing (IMO) is the various tricks that are used to restrict successor generation; they have the double effect of avoiding "silly" solutions (like "remove A, install B, install A, install C, remove A, install D, ..."), while also decreasing the number of successors generated at each step (hence making the algorithm quite a bit faster).
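To give the flavor of such a restriction (this is just an illustration, not the exact rules from the writeup): a successor that re-touches a package the node has already made a decision about can simply be dropped, which rules out the "remove A, ..., install A, ..., remove A" silliness in one stroke.

```cpp
// Illustrative only: refuse to generate a successor that changes a package
// the partial solution has already chosen a version for.
#include <vector>

struct Choice { int package_id; int version_id; };

struct Node {
  std::vector<Choice> choices;

  bool already_decided(int package_id) const {
    for (const Choice &c : choices)
      if (c.package_id == package_id)
        return true;
    return false;
  }
};

// Only keep candidate actions on packages the node hasn't touched yet.
std::vector<Choice> prune_candidates(const Node &node,
                                     const std::vector<Choice> &candidates) {
  std::vector<Choice> kept;
  for (const Choice &c : candidates)
    if (!node.already_decided(c.package_id))
      kept.push_back(c);
  return kept;
}
```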
In case it's not obvious from the preceding comments, I should point out that one major benefit of this approach is that all solutions can be generated (technically this is not true, but the caveat is not important for our purposes -- see the paper). In particular, this kills the problem of "apt doesn't know how to install from experimental" once and for all. It should also kill off the problem of ambiguous virtual dependencies: when several packages could satisfy a virtual dependency, aptitude will score the solutions that install the one with the highest Priority above all the alternatives.
Some work still needs to be done. Recommends are currently ignored by the resolver; I have a plan for handling them (described in the writeup). With complicated problems, it's sometimes hard to find the solution that you want because there are just so many to wade through; I have a plan for this too (it involves more back-and-forth, so you can guide the resolver more explicitly towards your solution). Oh, and with really complicated problems, aptitude might just give up -- as a hedge against blowup, it only tests 5000 nodes before aborting. However, if you have such a complicated situation, it is likely that apt will also fail to find a solution, and you always have the option of asking aptitude to "try harder".
Hopefully that explains a bit more what I've been muttering and talking people's ears off about lately. :-)