SAT-Race 2015

Rules & Instructions

These are the official rules and submission instructions for SAT-Race 2015.

Table of Contents:

  1. General Rules
  2. Main Track Rules
  3. Parallel Track Rules
  4. Incremental Library Track Rules

General Rules

Participants

Each participant of the SAT-Race 2015 must submit a short (1-2 page PDF file) description of their system(s). This should include a list of all authors of the system and their present institutional affiliations. It should also describe any algorithms or data structures that are not standard in such systems. See the system descriptions from previous competitions for examples. System descriptions will be posted on the SAT-Race 2015 website.

Submitters are encouraged to be physically present at the SAT'15 conference. This is, however, not a requirement to participate or win.

There will be three tracks (see below); participants can submit their solvers to each track independently.

Important Dates

The following dates apply for each track.

Solver/Benchmark Submission from: Already Available
Solver Registration* Deadline: 15th July 2015
Solver Submission Deadline (final version): 6th August 2015
Benchmark Submission Deadline: 1st July 2015
Announcement of Results: At SAT'15 Conference

*By registration we mean creating a StarExec account and sending an email (to satrace2015@googlegroups.com) with the list of tracks you want to participate in.

Benchmark Selection

The benchmark problems for the Main and Parallel tracks will be selected from problems used in previous competitions and newly submitted benchmarks (see the next section for more information on submitting benchmarks). The following selection method will be applied:

  1. We randomly select 3 out of the top 7 solvers from the previous competition (the Sequential, Application SAT+UNSAT track of SAT Competition 2014).
  2. We run the selected solvers on all the benchmarks with a 1-hour time limit.
  3. We define the hardness of an instance as the total time the three solvers spent on it, counting each timed-out run as 1 hour (so the maximum is 3 hours, reached when none of the solvers solves the instance within the time limit).
  4. Using the hardness (in hours), we randomly select 300 instances based on a normal distribution with mean 1.5 and standard deviation 1; a sketch of this step follows the list.
  5. Those 300 instances will be used for the Main Track.
  6. The 100 hardest instances out of the 300 will be used for the Parallel Track.
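
In C, the hardness measure and one plausible reading of the sampling step look as follows. This is a sketch only: the rules above do not specify the exact sampling mechanism, so the weighting function is an assumption.

  #include <math.h>

  /* Hardness of one instance: total time the three selected solvers spent
     on it, each run capped at the 1-hour limit, so the result lies in the
     interval [0, 3] hours. */
  double hardness(const double runtime_hours[3]) {
      double h = 0.0;
      for (int i = 0; i < 3; i++)
          h += runtime_hours[i] < 1.0 ? runtime_hours[i] : 1.0;
      return h;
  }

  /* Assumed selection weight: the density of a normal distribution with
     mean 1.5 hours and standard deviation 1 hour, evaluated at the
     instance's hardness. The 300 instances would then be drawn without
     replacement with probability proportional to this weight. */
  double selection_weight(double hardness_hours) {
      double d = hardness_hours - 1.5;
      return exp(-0.5 * d * d) / sqrt(2.0 * acos(-1.0));
  }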

The benchmarks for the Incremental Library Track are applications using a SAT solver in an incremental fashion. See the Incremental Library Track Rules section for more information.

Evaluation and Prizes

The solvers will be compared based on the number of problem instances they can solve within the time limit of one hour. For each track there will be four prizes.

Portfolios

Portfolio SAT solvers (consisting of different core SAT solvers) may participate in the Race. Before including a SAT solver in your portfolio, please consult the license of the given SAT solver (or its authors) to make sure such usage is allowed.

Restrictions

We reserve the right to accept no more than two solvers per submitter per track.


Main Track Rules

Submission and Execution Environment

The Main track will be run on the StarExec cluster. In order to participate you must:

  1. Register for an account in the SAT community (unless you are already registered).
  2. Wait until one of the SAT community leaders approves your request (should not take more than 24h).
  3. Upload your solver binary into the subspace called "SatRace2015" (root/SAT/SatRace2015).
    • The uploaded file must have a special format (see the StarExec User Guide for details; an example package using MiniSat is available). A sketch of the expected layout follows this list.
    • The binaries should be statically linked in order to avoid problems with missing libraries.
    • The binaries must be compatible with the StarExec machine specifications.
    • We recommend selecting "run test job = yes" with "setting profile = SAT" when uploading a solver. This creates a test job that runs the newly uploaded solver on a random 3-SAT formula with 100 variables (10-second time limit). If the test job fails, your solver binary is probably not compatible with StarExec.
    • A virtual machine image that matches the StarExec environment is available and can be useful for compiling a compatible binary.
  4. Send an email to satrace2015@googlegroups.com with your system description (see general rules) and (optionally) your source code.
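
For orientation, a minimal upload archive might be organized as follows. The layout is the one described in the StarExec User Guide; the names "mysolver" and "default" are only placeholders (run scripts must be named starexec_run_<configuration name>).

  mysolver.zip
    bin/
      starexec_run_default    executable script; StarExec invokes it with the
                              benchmark file as its first argument, and it
                              should run the solver binary on that file
      mysolver                the statically linked solver binary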

Input and Output Format

The input and output format requirements are the same as those used for the SAT Competitions.
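
For illustration, here is a small DIMACS CNF input together with output a solver could produce for it under the SAT Competition conventions (status on the "s" line, model on "v" lines terminated by 0, exit code 10 for SATISFIABLE and 20 for UNSATISFIABLE):

  c toy example: 3 variables, 2 clauses
  p cnf 3 2
  1 -3 0
  2 3 -1 0

  s SATISFIABLE
  v 1 2 -3 0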

Benchmark Submission

The benchmarks are submitted using the StarExec system. The instructions for submitting a benchmark set are:

  1. Register for an account in the SAT community (unless you are already registered).
  2. Wait until one of the SAT community leaders approves your request (should not take more than 24h).
  3. Create a new subspace in the subspace called "SatRace2015Benchmarks" (root/SAT/SatRace2015Benchmarks).
  4. Upload your benchmarks into the subspace you created.
    • Upload a zipped file containing cnf files (that are not zipped separately); an example archive layout is sketched after this list.
    • Select "place all benchmarks in ..." as the "upload method".
    • Select "cnf" as the "benchmark type".
    • For more information about uploading benchmarks see the StarExec manual.
  5. Send an email to satrace2015@googlegroups.com to inform us about your submission; please include a short description of the submitted benchmark set(s).
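
For example, an archive of the following shape (the file names are only illustrative):

  mybenchmarks.zip
    family1-instance01.cnf
    family1-instance02.cnf
    family2-instance01.cnf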

Parallel Track Rules

Execution Environment

For the Parallel Track we have two stages: the first stage uses 8-core computers and the second stage uses 64-core computers. Due to resource constraints, only the best 8 solvers from the first stage (those with the highest number of solved instances) will participate in the second stage. The winners (all four prizes) will be selected from the solvers participating in the second stage.

The participating solvers will receive a second command line argument indicating the number of available cores (8 or 64); the first argument is the CNF input file, as usual.

Submission

In order to participate in the Parallel Track please send a zip archive to satrace2015@googlegroups.com containing your solver binary and its system description (see general rules).

Input and Output Format

The input and output format requirements are the same as those used for the SAT Competitions, except for the additional second argument indicating the number of available cores.
Example: "./solver dimacs.cnf 64" runs a solver named "solver" on a CNF file named "dimacs.cnf" on a machine with 64 cores.


Incremental Library Track Rules

Motivation

Many applications of SAT are based on solving a sequence of incrementally generated SAT instances. In such scenarios the SAT solver is invoked multiple times on the same formula under different assumptions, and the formula is extended by adding new clauses and variables between the invocations. It is possible to solve such problems by solving the incrementally generated instances independently; however, this can be very inefficient compared to an incremental SAT solver, which can reuse knowledge acquired while solving the previous instances (for example, the learned clauses).

There are several SAT solvers that support incremental SAT solving; however, each has its own interface, which differs from the others, and that makes comparing them difficult. Our goal is to have a single interface implemented by many different solvers. Users who want to use incremental SAT solvers in their applications would also benefit from such an interface: applications could be written without selecting a concrete incremental SAT solver in advance, and migrating to a different solver would be just a matter of updating linker parameters.

The Proposed Interface - IPASIR

The name of the proposed interface is IPASIR, which is the reversed acronym for "Re-entrant Incremental Satisfiability Application Program Interface". With an additional space and question mark it can also serve as an expression for offering a brewed beverage to a gentleman.

The interface was designed to be as simple as possible while still supporting efficient incremental solving: it consists of only 9 functions, inspired by the incremental interfaces of PicoSAT and Lingeling. It is defined in the form of a C language header file, ipasir.h. Please refer to the comments in the header file for more information about the specification of the interface. A minimal usage sketch follows.
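
As an illustration (not part of the official package), here is a minimal IPASIR session in C using the functions declared in ipasir.h: the solver is queried twice on a growing formula, so the second call can reuse what was learned during the first.

  #include <stdio.h>
  #include "ipasir.h"

  int main(void) {
      void *solver = ipasir_init();
      printf("c using %s\n", ipasir_signature());

      /* Add the clause (1 or 2); literals are DIMACS-style, 0 terminates. */
      ipasir_add(solver, 1); ipasir_add(solver, 2); ipasir_add(solver, 0);

      /* Solve under the assumption that variable 1 is false. */
      ipasir_assume(solver, -1);
      int res = ipasir_solve(solver); /* 10 = SAT, 20 = UNSAT, 0 = interrupted */
      if (res == 10)                  /* the assumption forces variable 2      */
          printf("c value of 2: %d\n", ipasir_val(solver, 2));

      /* Extend the formula with the unit clause (-2) and solve again under
         the same assumption; the solver keeps its learned clauses. */
      ipasir_add(solver, -2); ipasir_add(solver, 0);
      ipasir_assume(solver, -1);
      res = ipasir_solve(solver);     /* now UNSAT under the assumption */
      if (res == 20 && ipasir_failed(solver, -1))
          printf("c assumption -1 was needed for the refutation\n");

      ipasir_release(solver);
      return 0;
  }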

We have prepared a package (ipasir.zip) containing three SAT solvers adapted to IPASIR and a few simple IPASIR-based applications. One of the adapted SAT solvers in the package is Sat4j, which demonstrates that solvers written in languages other than C/C++ can also implement IPASIR. The package also contains a simple Java binding and a Java application to show how any IPASIR-compatible SAT solver can be used in a Java application.

Solver Submission

In order to participate in the Incremental Library Track please follow these steps:

  1. Implement all the functions defined in the ipasir.h header file (a hypothetical skeleton is sketched after this list).
  2. Create a directory with your code and makefile following the style of the three example solvers in the ipasir.zip package. Placing your directory in the "ipasir/sat/" directory and issuing "make" in the "ipasir/" directory should result in a successful build of all the example applications with your solver (and the original example solvers). See the README files located in the package for more information.
  3. Add the system description of your solver (see general rules) into the directory.
  4. Pack your directory into a zip file and send it to satrace2015@googlegroups.com.
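
A hypothetical skeleton for step 1 (the my_solver_* names are placeholders for your solver's own API, not part of IPASIR): each IPASIR entry point simply forwards to the underlying solver.

  #include "ipasir.h"

  struct my_solver;                                           /* placeholder */
  extern struct my_solver *my_solver_new(void);               /* placeholder */
  extern void my_solver_free(struct my_solver *);             /* placeholder */
  extern void my_solver_add_literal(struct my_solver *, int); /* placeholder */

  const char *ipasir_signature(void) { return "mysolver-1.0"; }
  void *ipasir_init(void) { return my_solver_new(); }
  void ipasir_release(void *s) { my_solver_free((struct my_solver *) s); }
  void ipasir_add(void *s, int lit) {
      my_solver_add_literal((struct my_solver *) s, lit);
  }
  /* ipasir_assume, ipasir_solve, ipasir_val, ipasir_failed and
     ipasir_set_terminate are implemented with the same forwarding pattern. */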

Benchmark Submission

The benchmarks for the Incremental Library Track are command line applications that solve a problem specified by an input file given as a parameter to the application. Please take a look at the example applications in the ipasir.zip package. To create your own benchmark applications follow the style of the directories in the "ipasir/app/" directory.

Prepare your application directory in such a way that placing it in the "ipasir/app/" directory and issuing "make" in the "ipasir/" directory results in a successful build of your application (along with the original example applications) with all the solvers in the package. Follow the instructions in the README files found in the ipasir.zip package. As the last step, add a short text document that describes your application into your directory, pack the directory into a zip file, and send it to satrace2015@googlegroups.com. A minimal sketch of such an application follows below.
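
A toy benchmark application of this shape, assuming only the IPASIR entry points (the input encoding is deliberately trivial and purely illustrative: the input file contains a single integer n, and the application incrementally checks a chain of implications of length n):

  #include <stdio.h>
  #include "ipasir.h"

  int main(int argc, char **argv) {
      if (argc != 2) return 1;
      FILE *f = fopen(argv[1], "r");
      int n;
      if (!f || fscanf(f, "%d", &n) != 1) return 1;
      fclose(f);

      void *s = ipasir_init();
      for (int i = 1; i < n; i++) {
          /* Add the implication x_i -> x_{i+1} as the clause (-i, i+1). */
          ipasir_add(s, -i); ipasir_add(s, i + 1); ipasir_add(s, 0);
          /* Assuming x_1 true and x_{i+1} false must now be contradictory. */
          ipasir_assume(s, 1); ipasir_assume(s, -(i + 1));
          printf("c step %d: %s\n", i,
                 ipasir_solve(s) == 20 ? "UNSAT as expected" : "unexpected");
      }
      ipasir_release(s);
      return 0;
  }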

Benchmark Selection and Evaluation of Solvers

The benchmarks used for the competition will be selected by the organizers. The solvers will be evaluated by the number of problem instances (application + input pairs) solved within a given time limit.