SAT Competition 2016
Many applications of SAT are based on solving a sequence of incrementally generated SAT instances. In such scenarios the SAT solver is invoked multiple times on the same formula under different assumptions, and the formula is extended by adding new clauses and variables between the invocations. Such problems can be solved by handling each incrementally generated instance independently; however, this can be very inefficient compared to using an incremental SAT solver, which can reuse knowledge acquired while solving the previous instances (for example, the learned clauses).
Several SAT solvers support incremental SAT solving, but each has its own interface that differs from the others, which makes comparing them difficult. Our goal is to have a single interface implemented by many different solvers. Users who want to employ incremental SAT solvers in their applications would also benefit from such an interface: applications could be written without selecting the concrete incremental SAT solver in advance, and migrating to a different solver would be just a matter of updating linking parameters.
The Incremental Interface - IPASIR
The name IPASIR is the reversed acronym for "Re-entrant Incremental Satisfiability Application Program Interface". With an additional space and question mark it can also serve as an expression for offering a brewed beverage to a gentleman.
The interface was designed to have the following properties:
- Easy to implement, so that SAT solver developers would be willing to support it.
- Easy to use, so that anyone can easily build SAT based applications (can be used for educational purposes).
- Universal and powerful, so that it can be used for a broad range of industrial and research applications.
The interface consists of only nine functions, inspired by the incremental interfaces of PicoSAT and Lingeling. It is defined in the form of a C language header file, ipasir.h. Please refer to the comments in the header file for the full specification of the interface.
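For convenience, the declarations of the nine functions are reproduced below; the comments in ipasir.h remain the authoritative specification of their semantics.

```c
const char * ipasir_signature ();                 /* name and version of the solver   */
void * ipasir_init ();                            /* create a new solver instance     */
void ipasir_release (void * solver);              /* destroy it and free resources    */
void ipasir_add (void * solver, int lit_or_zero); /* add a literal; 0 ends the clause */
void ipasir_assume (void * solver, int lit);      /* assumption for the next solve    */
int ipasir_solve (void * solver);                 /* 10 = SAT, 20 = UNSAT, 0 = interrupted */
int ipasir_val (void * solver, int lit);          /* truth value of lit in the model  */
int ipasir_failed (void * solver, int lit);       /* was this assumption used to prove UNSAT? */
void ipasir_set_terminate (void * solver, void * state,
                           int (* terminate) (void * state)); /* abort callback       */
```

Note that assumptions are valid only for the next call to ipasir_solve; they are not added to the formula and must be passed again if needed for subsequent calls.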
We have prepared a package (ipasir.zip) containing three SAT solvers adapted to IPASIR and a few simple IPASIR based applications. One of the adapted SAT solvers in the package is Sat4j, which demonstrates that solvers not written in C/C++ can also implement IPASIR. The package also contains a simple Java binding and a Java application to show how any IPASIR compatible SAT solver can be used in a Java application.
In order to participate in the Incremental Library Track please follow these steps:
- Implement all the functions defined in the ipasir.h header file.
- Create a directory with your code and makefile following the style of the three example solvers in the ipasir.zip package. Placing your directory in the "ipasir/sat/" directory and issuing "make" in the "ipasir/" directory should result in a successful build of all the example applications with your solver (and the original example solvers). See the README files located in the package for more information.
- Add the system description of your solver (see general rules) into the directory.
- Pack your directory into a zip file and send it to firstname.lastname@example.org
The benchmarks for the Incremental Library Track are command line applications that solve a problem specified by an input file given as a parameter to the application. Please take a look at the example applications in the ipasir.zip package. To create your own benchmark applications follow the style of the directories in the "ipasir/app/" directory.
Prepare your application directory in such a way that placing it in the "ipasir/app/" directory and issuing "make" in the "ipasir/" directory results in a successful build of your application (along with the original example applications) with all the solvers in the package. Follow the instructions in the README files found in the ipasir.zip package. As the last step, add a short text document describing your application to your directory, pack the directory into a zip file, and send it to email@example.com.
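A benchmark application is, in essence, an ordinary program that builds and solves formulas through the IPASIR functions. The following is a rough sketch only, not one of the example applications: the file name toysat.c is invented, the DIMACS parsing is intentionally simplistic (it assumes comments appear on their own lines), and real applications should follow the style of the "ipasir/app/" examples. It reads a CNF file named on the command line and reports the result; it is meant to be linked against any IPASIR solver.

```c
/* toysat.c -- hypothetical minimal IPASIR application sketch */
#include <stdio.h>
#include "ipasir.h"

int main(int argc, char **argv) {
    if (argc != 2) { fprintf(stderr, "usage: %s input.cnf\n", argv[0]); return 1; }
    FILE *f = fopen(argv[1], "r");
    if (!f) { perror(argv[1]); return 1; }
    void *solver = ipasir_init();
    int lit, c;
    while ((c = fgetc(f)) != EOF) {
        if (c == 'c' || c == 'p') {                 /* skip comment/header line */
            while ((c = fgetc(f)) != EOF && c != '\n') ;
        } else {
            ungetc(c, f);
            if (fscanf(f, "%d", &lit) != 1) break;  /* no more literals */
            ipasir_add(solver, lit);                /* 0 terminates a clause */
        }
    }
    fclose(f);
    int res = ipasir_solve(solver);                 /* 10 = SAT, 20 = UNSAT */
    printf("%s\n", res == 10 ? "s SATISFIABLE" : "s UNSATISFIABLE");
    ipasir_release(solver);
    return res;
}
```

Because the application never names a concrete solver, the same source can be rebuilt against every solver in the package simply by changing what it is linked with.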
Benchmark Selection and Evaluation of Solvers
The benchmarks used for the competition will be selected by the organizers. The solvers will be evaluated by the number of problem instances (application + input pairs) solved within a given time limit.