# Resource consumption

Scheduling problems often use resources like machines, tools, a certain number of workers, raw or intermediate materials, etc.

## Cumulatives

A cumulative expression counts the amount of resources used per time period.

In this example each task represents a worker and the cumulative expression shows the count of workers per time.

The corresponding model is

- OPtaL
- TypeScript

```
range workers = 1 .. 4

dvar interval task[workers] length 5

dexpr count = sum (i in workers) pulse(task[i], 1)
```

```
const model = new CP.Model
const task = new Array<CP.IntervalVar>(4)
const cumul_term = new Array<CP.CumulExpr>(4)
for (let i = 0; i < 4; i++) task[i] = model.intervalVar({ length: 5 })
for (let i = 0; i < 4; i++) cumul_term[i] = pulse(task[i], 1)
const count = model.sum(cumul_term)
```

Like many general-purpose programming languages, TypeScript is not very good at manipulating
algebraic expressions, hence the need for the intermediate `CumulExpr` terms.

We could have made them implicit by writing something like:

```
const model = new CP.Model
const task = new Array<CP.IntervalVar>(4)
for (let i = 0; i < 4; i++) task[i] = model.intervalVar({ length: 5 })
const count = model.sum(task.map(t => pulse(t, 1)))
```

A cumulative expression is completely defined by the value (start and length) of the interval variables it is composed of.
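To make this concrete, here is a plain-TypeScript sketch, independent of any solver API, that evaluates a sum of unit pulses for intervals whose starts and lengths are already fixed. The names `FixedInterval` and `pulseCount` are this sketch's own, not OptalCP's:

```typescript
// Once each interval's start and length are fixed, the cumulative
// profile is just a function of time.
interface FixedInterval { start: number; length: number }

// Height of sum(pulse(task, 1)) at time t: count the intervals
// where start <= t < start + length.
function pulseCount(tasks: FixedInterval[], t: number): number {
  return tasks.filter(task => task.start <= t && t < task.start + task.length).length;
}

// Four workers, each busy 5 time units, with staggered starts.
const tasks: FixedInterval[] = [0, 2, 4, 6].map(start => ({ start, length: 5 }));
console.log(pulseCount(tasks, 4)); // 3: the tasks started at 0, 2 and 4 are active
console.log(pulseCount(tasks, 5)); // 2: the first task ends exactly at 5
```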

The following functions are provided to transform an interval into a term for the temporal sum:

`stepAtStart(interval, value)`
: at the start time of the interval, an amount of `value` resources is used

`stepAtEnd(interval, value)`
: at the end time of the interval, an amount of `value` resources is used (typically released)

`pulse(interval, value)`
: is the same as combining `stepAtStart(interval, value)` with `stepAtEnd(interval, -value)`

And finally `stepAt(time, value)` specifies that at time `time` the amount `value` of resources is used.

In this example we have two *producer* tasks with a `stepAtEnd` and two *consumer* tasks
with a `stepAtStart`. Notice how the cumulative becomes negative when the producer
tasks are after the consumer tasks.
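The producer / consumer profile can be sketched the same way in plain TypeScript, by accumulating step events over time. The helper `profileAt` and the sample times are illustrative only:

```typescript
// A step event: at `time`, the resource level changes by `value`.
type Step = { time: number; value: number };

// Level of the cumulative at time t: sum of all steps that happened at or before t.
function profileAt(steps: Step[], t: number): number {
  return steps.filter(s => s.time <= t).reduce((acc, s) => acc + s.value, 0);
}

// Two producers ending at 5 and 8 (stepAtEnd, +1 each),
// two consumers starting at 2 and 3 (stepAtStart, -1 each):
const steps: Step[] = [
  { time: 5, value: +1 }, { time: 8, value: +1 },
  { time: 2, value: -1 }, { time: 3, value: -1 },
];
console.log(profileAt(steps, 4)); // -2: both consumers started before any producer finished
console.log(profileAt(steps, 9)); //  0: everything produced has been consumed
```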

**Decision variables** are fixed by the engine which chooses a possible value in their
**domain** and checks whether the constraints are still valid.

The value of **decision expressions**, on the other hand, is completely defined
by the values of the decision variables they are composed of.
The engine doesn't need to make any choice, it just **propagates the expression bounds**. As such, adding
expressions to a problem doesn't increase its dimension.

For instance the decision expression `x + y` is completely defined by the values
taken by the decision variables `x` and `y`. Once the variables `x` and `y` are fixed,
the value of `x + y` can be immediately deduced. Also, if we know for instance that
`0 <= x + y <= 1` (with `x` and `y` non-negative), we can deduce the bounds `x <= 1` and `y <= 1`.
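A minimal sketch of this kind of bound deduction, using plain interval arithmetic on the constraint `lo <= x + y <= hi`. This is an illustration of the idea, not OptalCP's actual propagation code:

```typescript
interface Bounds { min: number; max: number }

// From lo <= x + y <= hi deduce: x >= lo - y.max and x <= hi - y.min,
// then tighten y against the already-tightened x.
function propagateSum(x: Bounds, y: Bounds, lo: number, hi: number): [Bounds, Bounds] {
  const nx = { min: Math.max(x.min, lo - y.max), max: Math.min(x.max, hi - y.min) };
  const ny = { min: Math.max(y.min, lo - nx.max), max: Math.min(y.max, hi - nx.min) };
  return [nx, ny];
}

// x, y in [0, 10] and 0 <= x + y <= 1:
const [bx, by] = propagateSum({ min: 0, max: 10 }, { min: 0, max: 10 }, 0, 1);
console.log(bx, by); // both tightened to { min: 0, max: 1 }
```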

### Bounds on cumulatives

Expressions need to be given **bounds** in order to **propagate**. Otherwise the expression
doesn't do anything.

```
range workers = 1 .. 10

dvar interval task[workers] length 5

dexpr count = sum (i in workers) pulse(task[i], 1)

constraints {
  count <= 2 // at most 2 workers at any point in time
}
```

Our gantt chart doesn't enforce the cumulative bounds; instead it signals in color the places where the cumulative limits are violated, for the user to fix.

Bounds for a cumulative could in principle be any function of time. However, OptalCP doesn't yet support non-constant functions (like step functions or piecewise linear functions) as bounds of a cumulative expression.
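As a sketch of the check such a chart could perform, here is a plain-TypeScript function (a hypothetical helper, not the widget's actual code) that reports the event times where a profile of unit pulses exceeds a constant limit:

```typescript
interface GanttTask { start: number; length: number }

// Return the event times (starts and ends) at which the number of
// simultaneously active tasks exceeds `limit`.
function violationTimes(tasks: GanttTask[], limit: number): number[] {
  const times = tasks.flatMap(t => [t.start, t.start + t.length]).sort((a, b) => a - b);
  return times.filter(t =>
    tasks.filter(x => x.start <= t && t < x.start + x.length).length > limit);
}

// Three workers overlapping at time 2, with a limit of 2 workers:
console.log(violationTimes(
  [{ start: 0, length: 5 }, { start: 1, length: 5 }, { start: 2, length: 5 }], 2)); // [2]
```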

### Cumulatives and precedences

The purpose of adding bounds to a cumulative is to force the engine to compute precedences.

In the producer / consumer example, we could have added precedences between the producer / consumer pairs, which would have prevented the total inventory of product from becoming negative.

The issue with this form of modeling is that we are required to decide which activity will come before which other activity.
In the model we decided that `end producer[0] <= start consumer[0]` and `end producer[1] <= start consumer[1]` instead of the opposite.

```
range items = 1 .. 2

dvar interval producer[items] length 5

dvar interval consumer[items] length 5

dexpr inventory = // useless without bounds
    sum (i in items) stepAtEnd (producer[i], 1)
  - sum (i in items) stepAtStart (consumer[i], 1)

constraints {
  forall (i in items) end producer[i] <= start consumer[i]
}
```

In fully symmetric problems it doesn't matter, but in more general cases, by adding precedences without knowing which one is the right precedence to add, we could miss the optimal solution by artificially limiting the problem's solutions.

We will see an example with the Millau viaduct problem modeled with cumulatives.

### Propagation of the cumulative expressions

Propagation of cumulative expressions is NP-complete. Hence there is not a single propagation algorithm; instead each engine uses its own. OptalCP uses Timetable Edge Finding (TTEF) [Vilim 2011].

![Widget showing cumul before propagation]

![Widget showing cumul after propagation]

### Typical uses of cumulatives

There are two very frequent uses of cumulative expressions.

#### Counting resources or inventory

Cumulatives are used to count a resource per time like

- the number of cranes simultaneously in use on a construction site
- the number of workers at any point in time in a supermarket
- the number of containers waiting on a yard in a port
- the number of customers staying overnight in a hotel

Associate to each "item" you want to count an interval whose start time
is the beginning of the "usage" of that resource and whose end time is its "end of usage",
and build a cumulative using `pulse`:

```
range r = 1 .. 10

int party_size [r] = 1 + random(4)

int stay_start [r] = 1 + random(74)

int stay_length [r] = 1 + random(14)

dvar interval stay[i in r] length stay_length[i]

dexpr people = sum (i in r) pulse(stay[i], party_size[i])
```

With that information, other decisions can be modelled, like the number of people required in the restaurant, etc.
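The count above can be sketched in plain TypeScript for fixed stays (illustrative names, no solver involved): each reservation contributes `pulse(stay, party_size)`, and the profile at a given night sums the active parties.

```typescript
interface Stay { start: number; length: number; partySize: number }

// Number of guests present on a given night: sum the party sizes of
// the stays whose interval covers that night.
function guestsAt(stays: Stay[], night: number): number {
  return stays
    .filter(s => s.start <= night && night < s.start + s.length)
    .reduce((total, s) => total + s.partySize, 0);
}

const stays: Stay[] = [
  { start: 1, length: 3, partySize: 2 },
  { start: 2, length: 4, partySize: 3 },
  { start: 5, length: 2, partySize: 4 },
];
console.log(guestsAt(stays, 2)); // 5: the first two parties overlap on night 2
console.log(guestsAt(stays, 5)); // 7: the second and third parties overlap on night 5
```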

#### Work in progress inventory

Cumulatives are also used in manufacturing environments to account for
inventory, in particular work-in-progress inventory with **recipes**:

- producer tasks consume product R (raw materials) and produce W (work in progress)
- consumer tasks consume product W and produce F (final product)

In particular, there are (at least) 3 cumulatives to consider: R, W and F.

And there is no reason for the consumers and producers to be related (like the same number, duration or quantities produced / consumed) as they are usually completely different equipment.

In our example

- producer tasks : 3 R -> 2 W
- consumer tasks : 4 W -> 1 F

There could just as well have been multiple raw materials, multiple work-in-progress products and multiple final products.

```
range p = 1 .. 5

range c = 1 .. 4

dvar interval producer[p] length 10

dvar interval consumer[c] length 5

dexpr raw =
    stepAt(0, initial_inventory)
  - sum (i in p) stepAtStart (producer[i], 3)

dexpr wip =
    sum (i in p) stepAtEnd (producer[i], 2)
  - sum (i in c) stepAtStart (consumer[i], 4)

dexpr final =
    sum (i in c) stepAtEnd (consumer[i], 1)
```

FIXME: initial inventory, separation of charts, and charts display in negative
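For fixed producer and consumer intervals, the three inventories can be sketched in plain TypeScript using the same step events as above; `initialInventory` and the sample schedule are assumptions of this sketch, not part of the model:

```typescript
type Task = { start: number; end: number };

// Level of an inventory at time t: sum of all steps at or before t.
function inventoryAt(steps: { time: number; value: number }[], t: number): number {
  return steps.filter(s => s.time <= t).reduce((a, s) => a + s.value, 0);
}

// Recipe: producers consume 3 R and produce 2 W; consumers consume 4 W and produce 1 F.
function recipeSteps(producers: Task[], consumers: Task[], initialInventory: number) {
  return {
    raw:  [{ time: 0, value: initialInventory },
           ...producers.map(p => ({ time: p.start, value: -3 }))], // stepAtStart(producer, -3)
    wip:  [...producers.map(p => ({ time: p.end,   value: +2 })),  // stepAtEnd(producer, 2)
           ...consumers.map(c => ({ time: c.start, value: -4 }))], // stepAtStart(consumer, -4)
    final: consumers.map(c => ({ time: c.end, value: +1 })),       // stepAtEnd(consumer, 1)
  };
}

const s = recipeSteps([{ start: 0, end: 10 }, { start: 10, end: 20 }],
                      [{ start: 20, end: 25 }], 6);
console.log(inventoryAt(s.raw, 10));   // 0: both producers took 3 R each from the initial 6
console.log(inventoryAt(s.wip, 20));   // 0: 4 W produced, 4 W consumed by time 20
console.log(inventoryAt(s.final, 25)); // 1: one unit of final product
```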

## Resource Constrained Project Scheduling Problem (RCPSP)

The combination of the project scheduling problem with cumulatives is called the ***resource-constrained project scheduling problem (RCPSP)***. It is a classic problem in scheduling, with many publications and benchmarks.

### Data format

A typical data file is organized as follows

```
m n
c_1 .. c_n
0 0 0 ... 0 m-1 2 3 4 ... m
length cr_1 cr_2 ... cr_n k s_1 s_2 ... s_k
...
0 0 0 ... 0 0
```

There are

- `m` tasks indexed by `1 .. m`
- `n` resources indexed by `1 .. n`

The second line gives the capacity of the resources `c_1 ... c_n`. Each subsequent line describes one task and lists

- the length of the task `length`
- the consumption on each resource `cr_1 ... cr_n`
- the number of successors `k`
- the list of successors `s_1 ... s_k`

The first task is a dummy task of length 0 with all other tasks as successors, and the last task is a dummy task of length 0 with all other tasks as predecessors.

This instance is DC/DC1/mv1 (sometimes the data creators remove redundant successors of the first dummy task)

```
12 4
3 9 9 9
0 0 0 0 0 4 2 3 4 5
2 0 1 0 0 1 6
4 0 0 0 3 1 10
6 0 0 2 0 1 9
9 0 0 7 0 1 11
8 0 0 5 0 1 7
10 3 0 0 0 2 8 11
4 0 0 0 7 1 12
3 3 0 0 0 1 12
5 0 0 6 0 1 12
1 0 9 0 0 1 12
0 0 0 0 0 0
```
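A hedged sketch of a parser for this format in TypeScript. The field names (`RcpspTask`, `parseRcpsp`) are this sketch's own, and no error handling is attempted:

```typescript
interface RcpspTask { length: number; consumption: number[]; successors: number[] }

// Parse the whitespace-separated integer format described above.
function parseRcpsp(text: string): { capacity: number[]; tasks: RcpspTask[] } {
  const nums = text.trim().split(/\s+/).map(Number);
  let pos = 0;
  const next = () => nums[pos++];
  const m = next(), n = next();                      // m tasks, n resources
  const capacity = Array.from({ length: n }, next);  // c_1 .. c_n
  const tasks: RcpspTask[] = [];
  for (let t = 0; t < m; t++) {
    const length = next();
    const consumption = Array.from({ length: n }, next);
    const k = next();                                // number of successors
    const successors = Array.from({ length: k }, next);
    tasks.push({ length, consumption, successors });
  }
  return { capacity, tasks };
}

// Tiny made-up instance: 3 tasks, 2 resources, dummy first and last tasks.
const data = `3 2
4 5
0 0 0 2 2 3
5 1 2 1 3
0 0 0 0`;
const { capacity, tasks } = parseRcpsp(data);
console.log(capacity);            // [4, 5]
console.log(tasks[1].successors); // [3]
```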

### Model

The model is the following (without the dummy tasks)

- OPtaL
- TypeScript

```
// Data
range tasks = 2 .. 11
range resources = 1 .. 4
int capacity [resources] = [3 9 9 9]
int len [tasks] = [2 4 6 9 8 10 4 3 5 1]
int consumption [tasks][resources] = [
  [0 1 0 0]
  [0 0 0 3]
  [0 0 2 0]
  [0 0 7 0]
  [0 0 5 0]
  [3 0 0 0]
  [0 0 0 7]
  [3 0 0 0]
  [0 0 6 0]
  [0 9 0 0]
]
{int} succ [tasks] = [{6} {10} {9} {11} {7} {8 11} {} {} {} {}]

// Model
dvar interval task[t in tasks] length len[t]

dexpr usage [r in resources] =
  sum (t in tasks : consumption[t][r] > 0)
    pulse (task[t], consumption[t][r])

minimize max (t in tasks) end task[t]

constraints {
  // Precedences
  forall (i in tasks, j in succ[i]) end task[i] <= start task[j]
  // Cumulative capacity
  forall (r in resources) usage[r] <= capacity[r]
}
```

```
const capacity = [3, 9, 9, 9]
const len = [2, 4, 6, 9, 8, 10, 4, 3, 5, 1]
const consumption = [
  [0, 1, 0, 0],
  [0, 0, 0, 3],
  [0, 0, 2, 0],
  [0, 0, 7, 0],
  [0, 0, 5, 0],
  [3, 0, 0, 0],
  [0, 0, 0, 7],
  [3, 0, 0, 0],
  [0, 0, 6, 0],
  [0, 9, 0, 0],
]
const succ = [[6], [10], [9], [11], [7], [8, 11], [], [], [], []]
```

### Exercise

Solve instance mv1 on the gantt chart

FIXME : replace forall succ adjust precedence with forall succ, forall pred[succ], adjust precedence

### Benchmarks

FIXME

There are many variants of the project scheduling problem. The benchmark section of the website explains the context in which each variant appears, and provides benchmark data as well as the current OptalCP results.

## No overlap

The special case in which the cumulative expression is constrained to be in `[0 .. 1]` is so common
and important that it has a different name, and even a different API.
It is called the **noOverlap** constraint.

- OPtaL
- TypeScript

```
range workers = 1 .. 10

dvar interval task[workers] length 4

constraints {
  noOverlap { task[w] | w in workers }
}
```

```
const model = new CP.Model
const task = new Array<CP.IntervalVar>(10)
for (let i = 0; i < 10; i++) task[i] = model.intervalVar({ length: 4 })
model.noOverlap(task)
```

The `noOverlap` constraint takes as argument the interval variables that shouldn't overlap.

The algorithms used to propagate

- a `noOverlap`
- a cumulative using only `pulse`
- a cumulative using combinations of `stepAt`, `stepAtStart`, `stepAtEnd` and `pulse`

are different, as each level is more general than the previous one. The more specific the constraint, the more precise the algorithm.

### Setup / transition times

As we have seen, the purpose of the `noOverlap` constraint is to decide precedences
between tasks on a (unitary) resource. We can also see the `noOverlap` constraint
as computing a **rank function** such that

`rank[i] < rank[j] => end task[i] <= start task[j]`

We may want an extra distance between the end of task $i$ and the beginning of task $j$.
If that distance only depends on $i$, then just making `task[i]` longer solves the problem.
However, if the extra distance depends on both $i$ and $j$, we need to introduce the
concept of **setup times**, also named **transition times**.

In order to reduce the size of the $s_{ij}$ matrix, tasks can be assigned types $(t_i)$ and we have

$\forall i, j \quad \mathrm{rank}_i < \mathrm{rank}_j \Rightarrow \mathrm{end}(\mathrm{task}_i) + s_{t_i t_j} \leq \mathrm{start}(\mathrm{task}_j)$

Setup times will only be available in the next version of OptalCP.
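For a fixed sequence on one machine, the setup-time condition can be sketched in plain TypeScript. The check below only compares consecutive tasks, which is sufficient when the setup matrix satisfies the triangle inequality; names and the sample matrix are this sketch's assumptions:

```typescript
interface SeqTask { start: number; end: number; type: number }

// setup[a][b] = minimal delay between a task of type a and a following task of type b.
// Checks end(prev) + setup[type(prev)][type(next)] <= start(next) for consecutive tasks.
function respectsSetupTimes(sequence: SeqTask[], setup: number[][]): boolean {
  for (let i = 0; i + 1 < sequence.length; i++) {
    const prev = sequence[i], next = sequence[i + 1];
    if (prev.end + setup[prev.type][next.type] > next.start) return false;
  }
  return true;
}

const setup = [[0, 2], [3, 0]]; // e.g. 2 time units to switch from type 0 to type 1
const ok = [{ start: 0, end: 4, type: 0 }, { start: 6, end: 9, type: 1 }];
console.log(respectsSetupTimes(ok, setup)); // true: 4 + 2 <= 6
const late = [{ start: 0, end: 5, type: 0 }, { start: 6, end: 9, type: 1 }];
console.log(respectsSetupTimes(late, setup)); // false: 5 + 2 > 6
```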

## Jobshop Scheduling problem (JSP)

The jobshop problem is a very simplified form of manufacturing problem that captures the core concepts in scheduling with multiple-resources (precedences and no-overlap). Many job-shop variants extend this core with other dimensions (cost functions, resource types, inventories, workers, etc).

The problem considers `m` machines and `n` jobs. A job is a sequence
of tasks that need to be done in a specific order, each one on a specific machine.

Example: job 1 consists, in order, of

- a task on machine 1 of duration 2
- a task on machine 3 of duration 3
- a task on machine 2 of duration 1

A simple jobshop with 3 machines and 3 jobs (try to find the optimal solution)

```
job 1 : m0 x 2 -> m2 x 3 -> m1 x 1
job 2 : m0 x 4 -> m1 x 7 -> m2 x 2
job 3 : m0 x 5 -> m1 x 2 -> m2 x 3
```

The order of machines for each job may or may not be different (in our example jobs 2 and 3 have the same order), and task durations may be 0, which captures a process that only needs to be done on a subset of the machines.

There are only two constraints

- precedences within tasks of the same job
- no-overlap of tasks on the same machine

### Data format

A typical format for jobshop data files is

```
m n
m_1 d_1 m_2 d_2 ...
...
```

where

- $m$ is the number of machines
- $n$ is the number of jobs
- each line $j$ represents a job with $m$ tasks $t^r_j$, where $m_r$ is the machine and $d_r$ the duration of task $t^r_j$

This small instance is known as ft06 (from
**H. Fisher and G. L. Thompson**, *Probabilistic learning combinations of local job-shop scheduling rules*,
in: Industrial Scheduling, 1963, pp. 225–251).

```
6 6
2 1 0 3 1 6 3 7 5 3 4 6
1 8 2 5 4 10 5 10 0 10 3 4
2 5 3 4 5 8 0 9 1 1 4 7
1 5 0 5 2 5 3 3 4 8 5 9
2 9 1 3 4 5 5 4 0 3 3 1
1 3 3 3 5 9 0 10 4 4 2 1
```
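A hedged sketch of a parser for this format in plain TypeScript, following the `m n` header convention described above (each job line alternates machine and duration for its $m$ tasks):

```typescript
interface JobTask { machine: number; duration: number }

// Parse "m n" header then n job lines of m (machine, duration) pairs.
function parseJobshop(text: string): JobTask[][] {
  const nums = text.trim().split(/\s+/).map(Number);
  const m = nums[0], n = nums[1];
  const jobs: JobTask[][] = [];
  let pos = 2;
  for (let j = 0; j < n; j++) {
    const job: JobTask[] = [];
    for (let r = 0; r < m; r++) {
      job.push({ machine: nums[pos], duration: nums[pos + 1] });
      pos += 2;
    }
    jobs.push(job);
  }
  return jobs;
}

// The ft06 instance from above:
const jobs = parseJobshop(`6 6
2 1 0 3 1 6 3 7 5 3 4 6
1 8 2 5 4 10 5 10 0 10 3 4
2 5 3 4 5 8 0 9 1 1 4 7
1 5 0 5 2 5 3 3 4 8 5 9
2 9 1 3 4 5 5 4 0 3 3 1
1 3 3 3 5 9 0 10 4 4 2 1`);
console.log(jobs[0][0]); // { machine: 2, duration: 1 }
```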

### Model

- OPtaL
- TypeScript

Each task is uniquely identified by its job $j$ and its rank $r$ in the sequence of tasks that constitutes the job. We create two separate indices, one for machines, one for ranks, even if numerically they are identical.

```
int m = ...
int n = ...
range machines = 0 .. m - 1
range ranks = 0 .. m - 1
range jobs = 1 .. n
int machine [jobs][ranks] = ...
int duration [jobs][ranks] = ...

dvar interval task [j in jobs][r in ranks] length duration[j][r]

minimize max (j in jobs) end task[j][m - 1]

constraints {
  // Precedences between ranks in a job
  forall (j in jobs, r in ranks : r + 1 in ranks)
    end task[j][r] <= start task[j][r + 1]
  // No overlap between tasks on each machine
  forall (m in machines)
    noOverlap { task[j][r] | j in jobs, r in ranks : machine[j][r] == m }
}
```

The construction `{ task[j][r] | j in jobs, r in ranks : machine[j][r] == m }`
is called a **set comprehension**. It builds the set of tasks for which the
specified condition is true, in this case `machine[j][r] == m`.

FIXME

### Exercise

Solve instance ft06 on the gantt chart

In order to help a little with this manual task we provide a "resolve overlaps" button that pushes the tasks on each machine, keeping the order in which they appear:

$\forall i,j \quad m_i = m_j \wedge \mathrm{start}_i \lt \mathrm{start}_j \Rightarrow \mathrm{end}_i \leq \mathrm{start}_j$

In other words, the button transforms any *start-after-start* into a **start-after-end**
*(ties are resolved using the job rank and then the duration)*.

TODO: add an "enforce no-overlap" button and push precedences to the left
TODO: add display of the makespan value
TODO: add optimal makespan as reference
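One possible implementation of such a button, sketched in plain TypeScript (the actual widget's code is not shown in this document): tasks on one machine are taken in their current left-to-right order, and each is pushed right past the previous one's end.

```typescript
interface GanttTask { start: number; length: number }

// Keep the current order of the tasks and push each one right
// (never left) so that it starts no earlier than the previous end.
function resolveOverlaps(tasks: GanttTask[]): GanttTask[] {
  const ordered = [...tasks].sort((a, b) => a.start - b.start);
  let earliest = -Infinity;
  return ordered.map(t => {
    const start = Math.max(t.start, earliest);
    earliest = start + t.length;
    return { start, length: t.length };
  });
}

console.log(resolveOverlaps([{ start: 0, length: 4 }, { start: 2, length: 3 }]));
// [{ start: 0, length: 4 }, { start: 4, length: 3 }]
```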

### Benchmarks

The jobshop problem is an *extreme simplification* of a manufacturing plant,
one that has kept only the most core constraints

- precedences $\quad \forall j,r \enspace \mathrm{end}(\mathrm{task}^r_j) \leq \mathrm{start}(\mathrm{task}^{r+1}_j)$
- no overlap $\quad \forall m \enspace \mathrm{noOverlap}(\lbrace \mathrm{task}^r_j \mid \mathrm{machine}(j,r) = m \rbrace)$

So much so that a manufacturing professional probably wouldn't recognize the plant they work in.

The jobshop is used as a basis on which more complex constraints are added

- limited capacity buffer zones
- subsets of machines, complex process graphs and recipes
- minimum / maximum time between tasks
- reentrance
- tasks within each job are processed on the same sequence of machines
- machine maintenance
- workers

The benchmark section of the website explains the context in which each variant appears, and provides benchmark data as well as the current OptalCP results.