
Function: benchmark

benchmark(m, params): Promise<BenchmarkResult[]>

Benchmark a given model or a set of models.

Parameters

Name     Type                  Description
m        Model | Model[]       Model or an array of models to benchmark.
params   BenchmarkParameters   Benchmark parameters to use (includes solver Parameters).

Returns

Promise<BenchmarkResult[]>

An array of results, one for each run.

Remarks

In the basic usage, when the input is a single model, the function is similar to solve: it runs the solver and returns the result. However, there are several additional features:

  • The function can solve each model multiple times with different random seeds (using parameter BenchmarkParameters.nbSeeds). This is useful to get more reliable statistics.
  • Multiple models can be solved in parallel to speed up the computation (using parameter nbParallelRuns). In this case, it is useful to limit the number of threads for each solve using parameter Parameters.nbWorkers.
  • If multiple models are solved (or one model with multiple seeds), this function suppresses the normal output and instead prints a table with statistics of the runs. The table is printed on the standard output.
  • The function can also output the results in CSV or JSON formats or export the models into JSON, JavaScript or text formats.
  • In case of an error, the function does not throw an exception but returns ErrorBenchmarkResult for the given run.

See BenchmarkParameters for more details.
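
For illustration, the following sketch benchmarks a single model over several seeds and counts the failed runs. The parameters used (timeLimit, nbSeeds, nbParallelRuns, nbWorkers) are the ones mentioned above; the check for failed runs assumes that an ErrorBenchmarkResult carries an error field, which should be verified against its documentation.

import * as CP from '@scheduleopt/optalcp';

// Benchmark one model with several random seeds and count the failed runs.
async function runSeeds(model: CP.Model): Promise<void> {
  const params: CP.BenchmarkParameters = {
    timeLimit: 10,     // solver parameter, see Parameters
    nbSeeds: 5,        // solve the model 5 times, each with a different seed
    nbParallelRuns: 2, // run at most two solves at the same time
    nbWorkers: 2,      // limit each solve to two worker threads
  };
  const results = await CP.benchmark(model, params);
  // Assumption: a failed run is reported as ErrorBenchmarkResult with an `error` field:
  const failed = results.filter(r => 'error' in r);
  console.log(`${results.length} runs finished, ${failed.length} failed.`);
}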

Example

Let's suppose that we have a function createModel that takes a filename as a parameter and returns a Model. For example, the function can model a jobshop problem and read its data from a file.

We are going to build a command-line application around createModel that can solve multiple models, use multiple random seeds, run benchmarks in parallel, store results in files, etc.

import * as CP from '@scheduleopt/optalcp';

function createModel(filename: string): CP.Model {
...
}

// What to print when --help is specified (assuming that the program name is mybenchmark.js):
let usage = "Usage: node mybenchmark.js [options] <datafile1> [<datafile2> ...]";

// Unless specified differently on the command line, time limit will be 60s:
let params: CP.BenchmarkParameters = { timeLimit: 60 };
// Parse command line arguments.
// Unrecognized arguments are assumed to be file names:
let filenames = CP.parseSomeBenchmarkParameters(params, usage);

// Check that at least one data file was specified:
if (filenames.length === 0) {
  console.error("No data files specified.");
  process.exit(1);
}

// From array of file names, create an array of models using createModel:
let models = filenames.map(f => createModel(f));
// And run the benchmark:
CP.benchmark(models, params);

The resulting program can be used for example as follows:

node mybenchmark.js --nbParallelRuns 2 --nbWorkers 2 --worker0.noOverlapPropagationLevel 4 \
--output results.json --summary summary.csv --log "logs/{name}.txt" \
data/*.txt

In this case, the program will solve all the models from the directory data, running two solves in parallel, each with two workers (threads). The first worker will use propagation level 4 for the noOverlap constraint. The results will be stored in the JSON file results.json (an array of BenchmarkResult objects), a summary will be stored in the CSV file summary.csv, and the log files of the individual runs will be stored in the directory logs (one file per run, named after the model; see Model.setName).
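
The generated results.json file can then be post-processed with ordinary Node.js tools. The sketch below relies only on the file being a JSON array of BenchmarkResult objects, as described above; the error and modelName fields it reads are assumptions and should be checked against the BenchmarkResult documentation.

import * as fs from 'node:fs';

// Read the array of BenchmarkResult objects written by --output results.json:
const results = JSON.parse(fs.readFileSync('results.json', 'utf8'));
console.log(`Loaded ${results.length} benchmark results.`);
for (const r of results) {
  // Assumption: failed runs carry an `error` message and each result is
  // named after its model via a `modelName` field:
  if (r.error !== undefined)
    console.log(`Run ${r.modelName ?? '(unnamed)'} failed: ${r.error}`);
}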