Type alias: BenchmarkParameters

BenchmarkParameters: Parameters & {
  dontOutputSolutions?: boolean;
  exportDomains?: string;
  exportJS?: string;
  exportJSON?: string;
  exportTxt?: string;
  log?: string;
  maxObjective?: number;
  minObjective?: number;
  nbParallelRuns?: number;
  nbSeeds?: number;
  output?: string;
  result?: string;
  solve?: boolean;
  summary?: string;
}

Extension of Parameters that can be used to parameterize function benchmark.

The parameters are the same as in Parameters, with some additions that control the behavior of function benchmark. In particular, there are parameters that make it possible to store the output in file(s), run the model multiple times with different seeds, etc.

Parameters can also be parsed from command-line arguments using function parseBenchmarkParameters or parseSomeBenchmarkParameters.
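
For example, a small driver script could look like the following sketch. The import path and the no-argument call of parseBenchmarkParameters are assumptions that are not documented on this page; consult the documentation of those functions for their exact signatures.

  import * as CP from '@scheduleopt/optalcp';

  // Parse options such as --nbSeeds 5 or --summary results.csv from the
  // command line (the no-argument call is an assumed convenience form):
  const params: CP.BenchmarkParameters = CP.parseBenchmarkParameters();

  // The parsed parameters are then passed to function benchmark together
  // with the model(s) to run; see the documentation of benchmark for its
  // exact signature.
  console.log("Seeds per model:", params.nbSeeds ?? 1);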

Filename patterns

Some parameters specify a filename pattern. Patterns make it possible to generate a unique file name for each benchmark run (see the example after the list below). The pattern is a string that can contain the following placeholders:

  • {name} - the name of the model (see Model.setName). If the model name is not set (it is undefined) then a unique name is generated.
  • {seed} - the seed used for the run. Especially useful in combination with the nbSeeds parameter.
  • {flat_name} - the name of the model with all characters '/' replaced by '_'.
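
For example, the following sketch writes one log file and one result file per run. The field names come from this page; the import path and the concrete file names are illustrative assumptions.

  import * as CP from '@scheduleopt/optalcp';

  const params: CP.BenchmarkParameters = {
    nbSeeds: 5,
    // For a model named "jobshop/ft10" and seed 3 this becomes
    // "logs/jobshop_ft10_3.txt" ({flat_name} replaces '/' by '_'):
    log: "logs/{flat_name}_{seed}.txt",
    // One JSON result per run, e.g. "results/jobshop/ft10_3.json":
    result: "results/{name}_{seed}.json",
  };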

Type declaration

dontOutputSolutions?

optional dontOutputSolutions: boolean

When set to true, solutions are not included in the file specified by the output parameter. This can save a lot of space.

exportDomains?

optional exportDomains: string

Filename pattern for exporting domains after propagation. See patterns in BenchmarkParameters. The file is in text format.

exportJS?

optional exportJS: string

Filename pattern for exporting the model into JavaScript. See patterns in BenchmarkParameters. The models are exported using function problem2js. The problem is solved before exporting (unless parameter solve is set to false). If a solution was found then it is included in the exported code as a warm start.

exportJSON?

optional exportJSON: string

Filename pattern for exporting the problem into JSON format. See patterns in BenchmarkParameters. The problems are exported using function problem2json. The problem is solved before exporting (unless parameter solve is set to false). If a solution was found then it is included in export as a warm start.

exportTxt?

optional exportTxt: string

Filename pattern for exporting the problem into text format. See patterns in BenchmarkParameters. The models are exported using function problem2txt. The problem is solved before exporting (unless parameter solve is set to false). If a solution was found then it is included in the exported text.

log?

optional log: string

Filename pattern for log files. Every benchmark run is logged into a separate file. See patterns in BenchmarkParameters.

maxObjective?

optional maxObjective: number

Constrain the objective to be less than or equal to the given value.

minObjective?

optional minObjective: number

Constrain the objective to be greater than or equal to the given value.

nbParallelRuns?

optional nbParallelRuns: number

Run up to the specified number of solves in parallel.

Make sure to limit the number of workers in every model using the parameter Parameters.nbWorkers, so that parallel runs do not oversubscribe the available CPU cores.
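
For example, the following sketch runs four benchmarks at a time, each limited to two workers, so at most eight workers are active in total. The field names come from this page and from Parameters; the import path and the values are illustrative.

  import * as CP from '@scheduleopt/optalcp';

  const params: CP.BenchmarkParameters = {
    nbParallelRuns: 4,  // up to 4 solves at the same time
    nbWorkers: 2,       // each solve uses at most 2 workers (Parameters.nbWorkers)
  };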

nbSeeds?

optional nbSeeds: number

Run each model multiple times with a different random seed.

output?

optional output: string

Filename for the detailed results of all benchmark runs. The file is in JSON format and contains an array of BenchmarkResult.

result?

optional result: string

Filename pattern for the result of an individual benchmark run. The result is stored in JSON format as a BenchmarkResult.

solve?

optional solve: boolean

Whether to solve the model(s). The value can be true, false, or undefined; solving is skipped only when the value is false (so an undefined value means the model is solved).

Not calling solve can be useful in combination with exportJSON, exportTxt, or exportJS.
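
For example, the following sketch only exports the models to JSON without solving them. The field names come from this page; the import path and the file name are illustrative assumptions.

  import * as CP from '@scheduleopt/optalcp';

  const params: CP.BenchmarkParameters = {
    solve: false,                           // do not call solve
    exportJSON: "export/{flat_name}.json",  // one JSON file per model
  };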

summary?

optional summary: string

Filename for a summary of all benchmarks. The summary is in CSV format, with one line for each benchmark run.
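
Putting it together, a typical configuration could look like the following sketch. The field names come from this page; the import path and the file names are illustrative assumptions.

  import * as CP from '@scheduleopt/optalcp';

  const params: CP.BenchmarkParameters = {
    nbSeeds: 3,
    result: "results/{name}_{seed}.json",  // one BenchmarkResult per run
    output: "results/all.json",            // array of all BenchmarkResults
    summary: "results/summary.csv",        // one CSV line per run
    dontOutputSolutions: true,             // keep all.json small
  };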