
Summer Research Fellowship Programme of India's Science Academies

Exact algorithms for pattern minimization in cutting stock problems

Kalin Krishna

Integrated BS-MS Student, Department of Mathematics, Indian Institute of Science Education and Research Thiruvananthapuram, Maruthamala P O, Vithura 695551

Dr. Ashutosh Mahajan

Assistant Professor, Industrial Engineering and Operations Research, Indian Institute of Technology Bombay, Powai, Mumbai 400076

Abstract

The cutting stock problem is one of the classic optimization problems in Operations Research. It arises wherever large stock material must be divided and distributed to satisfy customer needs, as in the steel, paper, and glass industries. Given rods or rolls of a fixed stock length and a set of demanded lengths for which orders can be placed, the main objective is to cut the demanded lengths from the standard stock rolls while minimizing the waste generated. Such problems are standard examples of integer optimization. An auxiliary problem to the cutting stock problem is that of reducing the number of distinct patterns (feasible combinations of cuts) used in the solution, called the pattern minimization problem. This problem is of great practical importance because the time and resources required to change from one cutting pattern to another can be significant, so both the waste and the number of distinct patterns used need to be optimized. The pattern minimization problem is computationally harder and can be approached either lexicographically, that is, first minimizing the waste and then minimizing the number of patterns used in the solution, or by minimizing both simultaneously, exploring the trade-off between pattern minimization and waste reduction. Several heuristic approaches have been described in the literature for solving the pattern minimization problem; however, only a few exact algorithms and methods are known as of now. The formulation of pattern minimization we study is nonlinear. It can be reformulated, using Dantzig-Wolfe decomposition, into a linear optimization problem with a large number of binary variables, which is solved by a branch-and-price (column generation) method. A class of superadditive inequalities is used at the nodes of the branching tree to improve the LP relaxation bound.
Branching is done on carefully selected subsets of the feasible solutions to prove optimality and yield integer solutions. A rounding heuristic is applied after column generation to improve the incumbent solution. In addition, another branching method based on an arc-flow formulation of the pattern minimization problem is also described.

Keywords: column generation, integer programming, branch and price method

Abbreviations

A list of abbreviations frequently used in this report is given below:

CSP: Cutting Stock Problem
PMP: Pattern Minimization Problem
LP: Linear Programming
IP: Integer Programming

INTRODUCTION

In paper mills and steel industries, end products are usually large stock rolls of a fixed length. For further industrial uses and consumer needs, these large stocks must be cut into smaller lengths. While cutting from the large stock, a residual length may be left over that cannot be used further; this length becomes waste. The main objective of the cutting stock problem is to minimize the waste generated while cutting the required number of small lengths from the large stock material.

The one-dimensional cutting stock problem with an unlimited supply of stock rolls can be stated as follows: given a set of lengths $\{l_1,\dots,l_m\}$ and their demands $\{d_1,\dots,d_m\}$, for every $i=1,\dots,m$, $d_i$ pieces of length $l_i$ must be cut from stock rolls of length $L$, while minimizing the number of stock rolls used.

In the literature, there are various methods and approaches for solving the cutting stock problem. Our primary attention in this report is on the pattern minimization problem, an auxiliary problem to waste minimization. Switching from one cutting pattern to another may require changing the knife positions, which leads to significant setup time, and may also require trial runs to validate the new pattern. Both aspects generate a loss for practitioners. So in the pattern minimization problem, the goal is to minimize the number of distinct patterns used. The pattern minimization problem is significantly more complex than waste minimization. Here we decompose the formulation of pattern minimization using Dantzig-Wolfe decomposition, and the new formulation is solved using a branch-and-bound-and-price method.

LITERATURE REVIEW

Cutting Stock Problem Formulations.

We will mainly discuss three formulations of the cutting stock problem: one using binary variables [1], one using cutting patterns and their frequencies [2], and one using a graph and arc flows [3]. Each formulation has its benefits and drawbacks and illustrates a specific feature of the problem.

Assignment Formulation [1]

Consider stock rolls of length $L$, and suppose orders can be placed for $m$ lengths $\{l_1,\dots,l_m\}$ with demands $\{d_1,\dots,d_m\}$. Let $n$ be a known upper bound on the total number of stock rolls needed (usually the sum of the demands, corresponding to cutting only one piece from each roll). Let $y_j$ denote the decision variable for choosing a particular stock roll, that is, $y_j=1$ if the $j$th roll is used and $y_j=0$ otherwise, for $j=1,\dots,n$, and let $x_{ij}$ denote the number of times length $l_i$ is cut from roll $j$. Then the formulation is:

\min \sum_{j=1}^{n} y_j

subject to

\sum_{j=1}^{n} x_{ij} \geq d_i \qquad \forall i \in \{1,\dots,m\} \qquad \text{(2a)}
\sum_{i=1}^{m} l_i x_{ij} \leq L y_j \qquad \forall j \in \{1,\dots,n\} \qquad \text{(2b)}
y_j \in \{0,1\} \qquad \forall j \in \{1,\dots,n\}
x_{ij} \in \Z_{+} \qquad \forall j \in \{1,\dots,n\},\ \forall i \in \{1,\dots,m\}

Inequality (2a) gives the demand constraints, while inequality (2b) ensures the feasibility of each cutting pattern. This formulation includes binary as well as general integer variables. A lower bound on the optimum can be obtained from the LP relaxation of the problem, that is, by relaxing the integrality constraints on $y_j$ and $x_{ij}$. The main drawback of this formulation is that the bound obtained from the LP relaxation is weak [4].

Pattern-Oriented Formulation [2]

Let $L$, $l_i$, and $d_i$ be defined as in Sec. 2.1 for all $i=1,\dots,m$. A cutting pattern is a vector $\mathbf{s}\in\Z_{+}^{m}$ in which $s_i$ represents the number of times length $l_i$ appears in $\mathbf{s}$. The set of all feasible patterns is

S=\{\mathbf{s}\in \Z_{+}^{m} \ :\ \sum_{i=1}^{m} s_i l_i \leq L\}.

Assume pattern $\mathbf{s}$ is used $x_s$ times. Then the CSP can be formulated as:

\min \sum_{\mathbf{s}\in S} x_s

subject to

\sum_{\mathbf{s}\in S} s_i x_s \geq d_i \qquad i=1,\dots,m \qquad \text{(4a)}
x_s \in \Z_{+} \qquad \forall \mathbf{s}\in S \qquad \text{(4b)}

Inequality (4a) gives the demand constraints, and (4b) is the integrality condition. Even though this formulation provides a better LP relaxation bound than the assignment formulation, the difficulty lies in dealing with the large number of patterns, which can grow exponentially with the number of demanded lengths and their demands.

The pattern-oriented formulation is usually solved by a column generation approach. In this method, rather than keeping all the variables in the simplex tableau, we begin with a simple basic solution and then generate columns (cutting patterns) that have the potential to improve the objective; these are added to the tableau. For this minimization problem, the column with the most negative reduced cost is preferred to enter the basis. The subproblem of finding a column with negative reduced cost is a knapsack problem.

Arc-Flow Formulation [3]

Let $L$ be the stock roll length and, as in Sec. 2.1, let $l_i$, $d_i$ be the lengths and demands for $i=1,\dots,m$. Assume the $l_i$ are indexed in order of decreasing size, so that $l_1$ is greater than $l_i$ for all $i=2,\dots,m$. Consider the acyclic directed graph $G=(V,A)$ with $V=\{0,1,\dots,L\}$ and $A=D\cup T$, where

D=\{(d,e) : 0\leq d < e\leq L,\ e-d \in \{l_1,\dots,l_m\}\}

is the set of arcs corresponding to demanded lengths, and

T=\{(d,d+1) : 0\leq d\leq L-1\}

is the set of arcs representing trim loss in a pattern. The cutting stock problem can be modelled as:

\min z

subject to

\sum_{(d,j)\in A} x_{dj} - \sum_{(j,f)\in A} x_{jf} = \begin{cases} -z, & \text{if } j=0\\ 0, & \text{if } j=1,\dots,L-1\\ z, & \text{if } j=L \end{cases} \qquad \forall j \in V
\sum_{(d,d+l_i)\in A} x_{d,d+l_i} \geq d_i \qquad i=1,\dots,m
x_{de} \in \Z_{+} \qquad \forall (d,e)\in A
z \in \Z_{+}

where $x_{de}$ denotes the number of items of size $e-d$ cut, at a distance $d$ from the beginning, from any stock roll, and $z$ denotes the total number of stock rolls used. It can be shown that the linear relaxation of this formulation gives the same bound as the pattern-oriented approach [3].
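As a concrete illustration, the graph $G=(V,A)$ can be built directly from its definition. The sketch below is not from the report and omits the arc-reduction criteria discussed in [3]; it simply constructs $V$, the demand arcs $D$, and the trim arcs $T$ for a toy instance:

```python
def build_arcflow_graph(L, lengths):
    """Build V = {0,...,L} and A = D ∪ T for the arc-flow formulation.

    D holds the arcs (d, d + l) for each demand length l (an item placed
    at position d); T holds the unit trim-loss arcs (d, d + 1).
    """
    V = list(range(L + 1))
    D = {(d, d + l) for l in lengths for d in range(L - l + 1)}
    T = {(d, d + 1) for d in range(L)}
    return V, sorted(D | T)

# Toy instance: rolls of length 5, demand lengths 2 and 3.
V, A = build_arcflow_graph(5, [2, 3])
print(len(V), len(A))  # 6 nodes; 7 demand arcs plus 5 trim arcs = 12 arcs
```

Every path from node $0$ to node $L$ in this graph then corresponds to one cutting pattern, with flow on the $T$ arcs giving the trim loss.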

Pattern Minimization Problem.

One of the main factors besides waste minimization that affect the profitability of a cutting procedure is the number of times one has to switch between cutting patterns in order to satisfy the demands. This is of major importance when the cutting equipment has to be set up anew for each distinct cutting pattern in a time-consuming procedure during which cutting is halted; a further factor is the waste generated by the trial runs made to validate each new cutting setup. Thus minimizing the number of distinct patterns used is also an objective in the generation of cutting patterns. Minimizing the number of stock rolls used and minimizing the number of distinct cutting patterns are conflicting goals: reducing the number of patterns while satisfying the demands means using each pattern more times, which tends to increase the number of stock rolls used.

The pattern minimization problem is computationally more complex than the cutting stock problem. It can be approached in two main ways. In the first, the CSP is solved first and the number of patterns in the solution is then minimized while ensuring that no additional stock rolls are used, so the waste remains minimal. The second approach relaxes waste minimality, because the solution with the minimum number of patterns may not correspond to minimum waste; it considers the trade-off between minimizing waste and minimizing the number of patterns used.

Even though several heuristic approaches for solving the PMP are present in the literature, only a few exact methods have been put forward. We will further explore the branch-and-price method developed by Vanderbeck [5] and will also explain the modifications made by Carvalho.

Compact Formulation for PMP [5]

A compact formulation for PMP as given by Vanderbeck is as follows:

\min \sum_{k=1}^{K} y_k

subject to

\sum_{k=1}^{K} z_k x_{ik} = d_i \qquad \forall i \in \{1,\dots,m\} \qquad \text{(8a)}
\sum_{k=1}^{K} z_k \leq K \qquad \text{(8b)}
z_k \leq K y_k \qquad \forall k \in \{1,\dots,K\} \qquad \text{(8c)}
\sum_{i=1}^{m} l_i x_{ik} \leq L y_k \qquad \forall k \in \{1,\dots,K\} \qquad \text{(8d)}
x_{ik}, z_k \in \Z_{+} \qquad \forall k \in \{1,\dots,K\},\ \forall i \in \{1,\dots,m\} \qquad \text{(8e)}
y_k \in \{0,1\} \qquad \forall k \in \{1,\dots,K\}

In this formulation, $K$ denotes an upper bound on the number of stock rolls required, usually the value of the CSP solution. $L$, $l_i$, and $d_i$ follow the same definitions as in the CSP formulations. Let $k=1,\dots,K$ index the patterns. $z_k$ and $x_{ik}$ denote the number of times pattern $k$ is used and the number of times length $l_i$ appears in pattern $k$, respectively. Let $y_k$ be the binary variable associated with choosing pattern $k$, that is

yk={1,if pattern k is used0,if k is not usedy_k=\begin{cases} 1, & \text{if pattern k is used}\\ 0, & \text{if k is not used}\end{cases}

Constraint (8a) ensures that the demand criteria are met; note that it is nonlinear. Constraint (8b) bounds the total number of stock rolls that can be used. Constraints (8c) and (8d) ensure the proper definition of the variable $y_k$ and of the cutting pattern, respectively.

PMP Reformulation

The compact formulation given above is nonlinear and is symmetric in $k$. It is therefore reformulated using a Dantzig-Wolfe decomposition. Let $Q$ be the set of all feasible solutions to constraints (8c), (8d), and (8e). That is,

Q=\{q=(q_0,q_1,\dots,q_m)\in \Z_{+}^{m+1} : \sum_{i=1}^{m} l_i q_i \leq L,\ \ q_0 q_i \leq d_i\ \forall i\}

Define the decision variable

\lambda_q=\begin{cases} 1, & \text{if } (q_1,\dots,q_m) \text{ is used } q_0 \text{ times}\\ 0, & \text{otherwise} \end{cases}

Then the PMP can be formulated as :

\min \sum_{q\in Q}\lambda_q

subject to

\sum_{q\in Q} q_o q_i \lambda_q = d_i \qquad i=1,\dots,m \qquad \text{(10a)}
\sum_{q\in Q} q_o \lambda_q \leq K \qquad \text{(10b)}
\lambda_q \in \{0,1\} \qquad \forall q\in Q

This formulation has a large number of variables and is an IP that can be solved by a branch-and-price algorithm. The nonlinearity present in the compact formulation is implicit here in the column definition.
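To see how large $Q$ can get, one can enumerate it explicitly for a toy instance. The following sketch (illustrative only, with made-up data) lists every column $q=(q_0,q_1,\dots,q_m)$ satisfying the knapsack and multiplicity constraints defining $Q$, skipping empty patterns:

```python
from itertools import product

def enumerate_Q(L, lengths, demands, q0_max):
    """All q = (q0, q1, ..., qm) with sum_i l_i*q_i <= L and q0*q_i <= d_i."""
    m = len(lengths)
    cols = []
    for q0 in range(1, q0_max + 1):
        # item bounds implied by q0*q_i <= d_i and l_i*q_i <= L
        ub = [min(demands[i] // q0, L // lengths[i]) for i in range(m)]
        for q in product(*(range(b + 1) for b in ub)):
            if any(q) and sum(l * x for l, x in zip(lengths, q)) <= L:
                cols.append((q0,) + q)
    return cols

# Toy instance: L = 10, lengths 3 and 7, demands 2 and 1, multiplicity up to 2.
cols = enumerate_Q(10, [3, 7], [2, 1], 2)
print(cols)
```

Even this tiny instance yields 5 columns; realistic instances make explicit enumeration hopeless, which is why the columns are generated on demand.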

Arc-flow Formulation [6]

Similar to the CSP, the PMP can also be formulated using graphs. Let $G=(V,A)$ be an acyclic directed graph whose flow variables carry three indices. The set of arcs $A$ is subdivided into $n^{max}$ subsets $A^n$, one for each multiplicity. An arc $(i,j)\in A^n$ represents an item of width $j-i$ placed at position $i$ in a pattern with multiplicity $n$. The formulation is as follows:

\min \sum_{n=1}^{n^{max}} z^n

subject to

\sum_{(s,t)\in A^n} x^n_{st} - \sum_{(r,s)\in A^n} x^n_{rs} = \begin{cases} z^n, & \text{if } s=0\\ -z^n, & \text{if } s=L\\ 0, & \text{otherwise} \end{cases} \qquad \forall n
\sum_{n=1}^{n^{max}} \sum_{(r,r+l_i)\in A^n} n\, x^n_{r,r+l_i} = d_i \qquad i=1,\dots,m
x^n_{rs} \in \Z_{+} \qquad \forall (r,s)\in A^n,\ n=1,\dots,n^{max}
z^n \in \Z_{+} \qquad n=1,\dots,n^{max}

where $z^n$ is the number of patterns with multiplicity $n$ that are used, and $x^n_{rs}$ denotes the number of arcs $(r,s)$, corresponding to an item of length $s-r$, cut in patterns with multiplicity $n$. Here $n^{max}$ denotes the largest multiplicity any pattern can have.

METHODOLOGY

In this section, the methods and approaches used to solve the cutting stock problem and the pattern minimization problem are discussed.

Solving CSP

The CSP is usually solved using a column generation scheme: instead of considering all feasible patterns in the simplex tableau, new patterns that improve the objective function are generated on the fly. Initially we consider only a trivial basic solution in the simplex method, then solve a subproblem to find which pattern can enter the basis. Since the CSP is a minimization problem, a pattern with the most negative reduced cost is added to the tableau. The subproblem is

\min\ 1 - \sum_{i=1}^{m} \pi_i x_i

subject to the constraint $\sum_{i=1}^{m} l_i x_i \leq L$, where $\pi_i$ are the dual values associated with inequality (4a) for each $i$ and $(x_1,\dots,x_m)$ denotes the new pattern. The algorithm stops when the optimal value of the subproblem is nonnegative, that is, when no new pattern can enter the basis.
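This pricing step is an unbounded knapsack, which can be solved by dynamic programming over the roll length. A minimal sketch (the dual values below are made up for illustration):

```python
def price_pattern(L, lengths, duals):
    """Maximize sum_i duals[i]*x_i s.t. sum_i lengths[i]*x_i <= L, x_i >= 0 integer.

    Returns the optimal value and the pattern; the column's reduced cost
    is 1 minus that value.
    """
    best = [0.0] * (L + 1)    # best[c]: max dual value within capacity c
    choice = [-1] * (L + 1)   # item added at capacity c (-1: none)
    for c in range(1, L + 1):
        best[c] = best[c - 1]
        for i, (l, p) in enumerate(zip(lengths, duals)):
            if l <= c and best[c - l] + p > best[c]:
                best[c], choice[c] = best[c - l] + p, i
    pattern, c = [0] * len(lengths), L
    while c > 0:              # walk back through the recorded choices
        if choice[c] == -1:
            c -= 1
        else:
            pattern[choice[c]] += 1
            c -= lengths[choice[c]]
    return best[L], pattern

value, pattern = price_pattern(9, [4, 5], [0.5, 0.55])
print(value, pattern, 1 - value)  # reduced cost is negative, so the pattern enters
```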

Solving PMP (Vanderbeck's exact algorithm)

Solving the PMP is more complicated than solving the CSP. Here we consider the exact algorithm devised by Vanderbeck [5], which works on the column generation reformulation of the compact formulation. For a new column to enter, we must solve the subproblem

\min\{1 - q_o(\sum_{i=1}^{m} \pi_i q_i - \sigma) : q\in Q\}

where $\pi_i$ is the dual value of inequality (10a) for each $i$, and $\sigma$ is the dual value of inequality (10b). The subproblem can be reformulated as:

\max\{q_o(\sum_{i=1}^{m} \pi_i q_i - \sigma) : q\in Q\}

subject to

\sum_{i=1}^{m} l_i q_i \leq L
q_o q_i \leq d_i \qquad \forall i \in \{1,\dots,m\}
q_o, q_i \in \Z_{+} \qquad \forall i \in \{1,\dots,m\}

This subproblem is nonlinear, but for a fixed $q_o$ it becomes a bounded integer knapsack problem. The possible values of $q_o$ are bounded above by the trivial upper bound $q_o^{max}=\min\{K-\underline{Z}+1,\ \max_i d_i\}$, where $\underline{Z}$ is any trivial lower bound on the number of patterns to be used. So one brute-force method is to enumerate $q_o=1,\dots,q_o^{max}$ and solve the corresponding bounded integer knapsack problem

\max\{\sum_{i=1}^{m} \pi_i x_i : \sum_{i=1}^{m} l_i x_i \leq L,\ x_i \leq u_i(q_o),\ x_i \in \Z_{+}\ \forall i\}

where $u_i(q_o)=\min\{\lfloor d_i/q_o\rfloor, \lfloor L/l_i\rfloor\}$ is the item upper bound. However, only a subset of the multiplicity values $q_o$ needs to be considered, because if $x^*\in\Z_{+}^{m}$ is a solution of the knapsack problem for $q_o=1$, it remains optimal for all $q_o$ up to $m^*=\min_{i : x_i^*>0}\lfloor d_i/x_i^*\rfloor$. The procedure to solve the subproblem is:

Let $q_o=1$, $c=1$.
While $q_o \leq q_o^{max}$ do:
    Compute $u_i(q_o)$ for all $i$.
    Let $\nu^*=c/q_o^{max}+\sigma$ be the incumbent value.
    Solve the subproblem with incumbent $\nu^*$; let $\nu^*$ and $x^*$ be the optimal value and solution.
    Compute the multiplicity $m^*$.
    If $m^*(\nu^*-\sigma)>c$, let $c=m^*(\nu^*-\sigma)$ and $q=(m^*,x^*)$.
    Let $q_o=m^*+1$.
End while.
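A simplified version of this procedure can be sketched as follows. This is illustrative only: the data are made up, the incumbent cutoff $\nu^*$ is dropped, and a plain DP bounded-knapsack solver stands in for the report's solver.

```python
def bounded_knapsack(L, lengths, profits, ub):
    """Maximize sum_i profits[i]*x_i with sum_i lengths[i]*x_i <= L, 0 <= x_i <= ub[i]."""
    m = len(lengths)
    dp = [[0.0] * (L + 1) for _ in range(m + 1)]   # dp[i][c]: items 0..i-1, capacity c
    take = [[0] * (L + 1) for _ in range(m)]       # copies of item i taken at capacity c
    for i in range(m):
        for c in range(L + 1):
            dp[i + 1][c] = dp[i][c]
            for k in range(1, ub[i] + 1):
                if k * lengths[i] > c:
                    break
                v = dp[i][c - k * lengths[i]] + k * profits[i]
                if v > dp[i + 1][c]:
                    dp[i + 1][c], take[i][c] = v, k
    x, c = [0] * m, L
    for i in range(m - 1, -1, -1):                 # recover the solution
        x[i] = take[i][c]
        c -= x[i] * lengths[i]
    return dp[m][L], x

def pmp_pricing(L, lengths, demands, duals, sigma, q0_max):
    """Enumerate multiplicities with the jumps q_o -> m* + 1 described above."""
    best_val, best_col, q0 = 0.0, None, 1
    while q0 <= q0_max:
        ub = [min(d // q0, L // l) for d, l in zip(demands, lengths)]
        val, x = bounded_knapsack(L, lengths, duals, ub)
        if not any(x):
            break
        mstar = min(q0_max, min(d // xi for d, xi in zip(demands, x) if xi > 0))
        if mstar * (val - sigma) > best_val:
            best_val, best_col = mstar * (val - sigma), (mstar, tuple(x))
        q0 = mstar + 1
    return best_val, best_col

val, col = pmp_pricing(10, [3, 7], [6, 2], [0.3, 0.7], 0.0, 6)
print(val, col)  # the returned column (q_o, (q_1, q_2)) prices out when 1 - val < 0
```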

Cutting Planes

The bound obtained from the LP relaxation is not strong, hence we follow a cutting plane procedure to strengthen it. We add general-purpose cuts that can dominate any of the rank-1 Chvátal-Gomory cuts. Combining general-purpose cuts with column generation may drastically alter the structure of the subproblem, because the new inequalities modify the reduced costs. To limit these modifications, we only consider a specific class of inequalities, each derived from an inequality of the problem formulation. It can be shown that the inequalities

\sum_{q\in Q} \left(\left\lceil \frac{\gamma q_o}{K}\right\rceil - 1\right)\lambda_q \leq \gamma-1 \qquad \forall \gamma\in\{2,\dots,K\} \qquad \text{(15a)}
\sum_{q\in Q : q_i>0} \left(\left\lceil \frac{\gamma q_o q_i}{d_i}\right\rceil - 1\right)\lambda_q \leq \gamma-1 \qquad \forall \gamma\in\{2,\dots,K\},\ \forall i \qquad \text{(15b)}

derived from (10b) and (10a), respectively, are valid for $Q$ and dominate every rank-1 Chvátal-Gomory cut [5][7]. Among these cuts, the most violated ones are selected in preference, as they have the most potential to improve the LP bound.
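For example, the coefficient of a column with multiplicity $q_o$ in cut (15a) is $\lceil\gamma q_o/K\rceil - 1$, and the cut is violated when the weighted sum of the current LP values $\lambda_q$ exceeds $\gamma-1$. A small separation check, using hypothetical LP data:

```python
def cut_15a_violation(columns, gamma, K):
    """LHS minus RHS of cut (15a); a positive result means the cut is violated.

    `columns` is a list of (q0, lambda) pairs from the current LP solution.
    """
    coeff = lambda q0: (gamma * q0 + K - 1) // K - 1   # ceil(gamma*q0/K) - 1
    return sum(coeff(q0) * lam for q0, lam in columns) - (gamma - 1)

# Hypothetical fractional LP solution with K = 10:
# two columns with multiplicities 10 and 6, each at lambda = 0.6.
v = cut_15a_violation([(10, 0.6), (6, 0.6)], gamma=2, K=10)
print(v)  # positive: the gamma = 2 cut is violated and would be added
```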

Branching

If the LP bound does not allow us to prove the optimality of the current incumbent solution, we must resort to branching. An appropriate branching scheme is to enforce $\sum_{q\in\overline{Q}} \lambda_q \in \N$ for a carefully selected subset $\overline{Q}$. That is, if the master LP solution $\lambda$ is fractional, a subset $\overline{Q}$ is chosen such that $\sum_{q\in\overline{Q}} \lambda_q = \alpha \notin \N$, and two branch-and-bound nodes are defined by imposing

\sum_{q\in \overline{Q}} \lambda_q \leq \lfloor \alpha\rfloor \quad \text{or} \quad \sum_{q\in \overline{Q}} \lambda_q \geq \lceil \alpha\rceil

The subset $\overline{Q}$ is chosen so that it is easy to detect whether a solution of the subproblem lies in $\overline{Q}$, and hence whether the dual price associated with the branching constraint must be included in its reduced cost; it is also chosen so that the branching results in a balanced branch-and-bound tree.

Following the above logic, we branch on the first subset $\overline{Q}$ in the following list that yields a fractional sum for $\lambda$. (The notation $q_i \leftrightarrow b$ means that $b$ is the binary vector associated with the logarithmic decomposition of $q_i$, i.e. $q_i=\sum_{k=0}^{\lfloor \log_2 q_i^{max}\rfloor} 2^k b_k$.)

Fig 1. Branching rules

Branching rules 1 and 2 amount to fixing the number of columns with a particular multiplicity; rule 3 fixes the number of columns that include length $l_i$; rules 4 and 5 fix the number of columns that include a specific number of pieces of length $l_i$; rule 6 fixes the number of columns having length $l_i$ and multiplicity $m$. The last rule fixes the number of columns satisfying restrictions on both the multiplicity and the number of items of length $l_i$. These rules by themselves may not suffice in practice to find integer solutions, so we resort to a rounding heuristic in the end.

    Subproblem modifications

Adding the cutting planes (15a) and (15b) and the branching constraints modifies the objective of the subproblem through the entry of new dual values. Cut (15a) together with branching constraints 1 and 2 contributes to the column reduced cost as a function of $q_o$, say $G(q_o)$, while branching constraints 3-5 show up in the objective as a function of $q_i$, say $F^i(q_i)$. Cut (15b) together with rules 6-7 contributes a function of both the multiplicity $q_o$ and the number of pieces $q_i$, say $V^i(q_o,q_i)$. So a generic form of the modified subproblem is:

\max\ G(q_o)+\sum_{i=1}^{m} V^i(q_o,q_i)+\sum_{i=1}^{m} F^i(q_i)

    subject to

\sum_{i=1}^{m} l_i q_i \leq L
q_o q_i \leq d_i \qquad \forall i
q_o, q_i \in \Z_{+}

For a fixed $q_o$, say $q_o=k$, the modified subproblem becomes a bounded integer knapsack problem

\max \sum_{i=1}^{m} V^i(k,q_i)+\sum_{i=1}^{m} F^i(q_i)

    subject to

\sum_{i=1}^{m} l_i q_i \leq L
q_i \leq u_i(k) \qquad \forall i
q_i \in \Z_{+}

However, the objective function can be nonlinear. If we can compute the cost $c_{ip}$ associated with each value $p$ that $q_i$ can take, the problem becomes a multiple-choice knapsack problem:

\max \sum_{i=1}^{m}\sum_{p=0}^{u_i(k)} c_{ip} x_{ip}

    subject to

\sum_{i=1}^{m}\sum_{p=0}^{u_i(k)} l_i\, p\, x_{ip} \leq L
\sum_{p=0}^{u_i(k)} x_{ip}=1 \qquad \forall i
x_{ip}\in\{0,1\} \qquad \forall i,p

The bounded knapsack problem can be converted into a simpler form if we do not use branching rules 3 and 5 and consider a weaker version of (14b). Here we use the change of variables $q_i=\sum_{k=0}^{\lfloor\log_2 q_i^{max}\rfloor} 2^k b^i_k$, where $b^i_k\in\{0,1\}$ for all $i,k$. The resulting binary form is easier to solve than the multiple-choice bounded knapsack.
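The change of variables is a plain binary expansion. A minimal sketch:

```python
def binary_decompose(q, q_max):
    """Bits b_k with q = sum_k 2^k * b_k, for k = 0 .. floor(log2(q_max))."""
    n_bits = q_max.bit_length()      # floor(log2(q_max)) + 1 bits for q_max >= 1
    return [(q >> k) & 1 for k in range(n_bits)]

bits = binary_decompose(5, 6)        # q_i = 5 with q_i^max = 6
print(bits, sum(b << k for k, b in enumerate(bits)))  # the bits reconstruct q_i
```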

The modified subproblem (16) can be solved by a brute-force approach, that is, by enumerating all values of $q_o$ and solving the corresponding bounded integer knapsack problems. This approach is used only as a last resort, since in several cases an implicit enumeration of the $q_o$ values is possible.

    CASE 1

This case assumes $G(q_o)$ is a linear function, $V^i(q_o,q_i)=q_o U^i(q_i)$ for some function $U^i$ for all $i$, and $F^i(q_i)=0$. In this case the algorithm (14) can be applied directly.

    CASE 2

When $G(q_o)$ is a nonlinear function but, as in Case 1, $V^i(q_o,q_i)=q_o U^i(q_i)$ for some function $U^i$ for every $i$ and $F^i(q_i)=0$ for all $i$, an amended version of algorithm (14) can be used. For a given knapsack solution $x^*$ with associated multiplicity $m^*$, enumerate $q_o=1,\dots,m^*$ to identify the optimal multiplicity $q_o^*$ associated with $x^*$. The next knapsack is solved with the item upper bounds set to $u_i(m^*+1)$ instead of $u_i(q_o^*)$. This special case occurs when one only uses cuts (15a) and branching constraints 1-2. These cuts and constraints, which fix the pattern multiplicities to integer values, have the most significant impact on the lower bound on the minimum number of distinct patterns.

    CASE 3

The only assumption in this case is that $V^i(q_o,q_i)=q_o U^i(q_i)$, which holds as long as we do not use cut (15b) and branching constraints 6-7. This case can be solved by a branch-and-bound procedure with an implicit enumeration of $q_o$. At each node we restrict $q_o\in[a,b]$ (at the root node $q_o\in[0,q_o^{max}]$). For $q_o\in[a,b]$ the subproblem is

SP(a,b)=\max\{G(q_o)+q_o U(q)+F(q) : a\leq q_o\leq b,\ \sum_{i} l_i q_i\leq L,\ q_o q_i\leq d_i\ \forall i\}

where $U(q)=\sum_{i=1}^{m} U^i(q_i)$ and $F(q)=\sum_{i=1}^{m} F^i(q_i)$. To obtain an upper bound on the subproblem we relax the constraints $q_o q_i\leq d_i$ to $a q_i\leq d_i$. The problem then decomposes into

\max_{a\leq q_o\leq b} G(q_o) \quad + \quad \max\{q_o U(q)+F(q) : a\leq q_o\leq b,\ \sum_{i} l_i q_i\leq L,\ a q_i\leq d_i\ \forall i\}

The first part can be solved by simple iteration. For the second part, assume $q^*=(q_o^*,q_1^*,\dots,q_m^*)$ is an optimal solution. Then $U(q^*)>0 \implies q_o^*=b$ and, similarly, $U(q^*)\leq 0 \implies q_o^*=a$. So it is enough to solve two knapsack problems of the form $\nu^k=\max\ k\,U(q)+F(q)$ for $k=a$ and $k=b$, with the constraints $\sum_i l_i q_i\leq L$ and $q_i\leq\lfloor d_i/a\rfloor$ for all $i$. An upper bound for the problem is therefore

UB=\max_{a\leq q_o\leq b} G(q_o)+\max\{\nu^a,\nu^b\}

Let $x^a$ and $x^b$ denote the solutions of the two knapsack problems, and let $m^a$ and $m^b$ be their respective multiplicities. We derive lower bounds by optimizing over $q_o$ for fixed $x^a$ and $x^b$:

LB^k=F(x^k)+\max_{q\leq m^k}\left(G(q)+q\,U(x^k)\right) \qquad \text{for } k=a \text{ and } k=b

$LB^a$ is a valid upper bound on $SP(a,a)$; hence, if the current node cannot be pruned, we only need to subdivide the problem $SP(a+1,b)$. Moreover, if $m^b\geq b$, then $LB^b$ is a valid upper bound on $SP(b,b)$, so we only have to consider $SP(a+1,b-1)$. For branching, we partition the remaining interval of $q_o$ values so that the current upper bound is no longer valid: if $\nu^a\leq\nu^b$, we divide $SP(a+1,b)$ into $SP(a+1,m^b)$ and $SP(m^b+1,b)$.

Implementation details

First, we solve the CSP associated with the given PMP and set K (defined in the PMP column generation reformulation) to the CSP solution value. We also compute the L2 lower bound [4] for the corresponding bin packing problem (setting all demands $d_i=1$), which serves as an initial lower bound for the PMP. At the root node, we start with a single column (with entries equal to the right-hand sides of the constraints) whose cost equals the cost of the CSP solution (the incumbent). This column is treated as an artificial variable. After each iteration of the column generation process, a valid lower bound on the LP can be obtained from the optimal reduced cost value. One of the best lower bounds is due to Farley [8], defined for a branch-and-bound node $u$ as $\lceil Z_u^R/\nu_u\rceil$, where $Z_u^R$ is the current value of the restricted master problem and $\nu_u$ is the value of the solution of the column generation subproblem. Column generation is interrupted when no more columns can enter, when the master LP gap is closed, that is $Z_u^R\le LB^u$ where $LB^u$ is the current lower bound at node $u$, or when the current lower bound is worse than an aspiration level set to LB+1, where LB is the current best lower bound for the PMP.
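The stopping tests for the column generation loop might be sketched as follows; tolerances are omitted, and the interpretation of "worse than the aspiration level" as a strict comparison is an assumption of this sketch:

```python
import math

def stop_column_generation(Z_R, nu_u, LB_u, LB_best, has_entering_column):
    """Stopping tests for column generation at a node u, as described in
    the text (a sketch). Farley's bound ceil(Z_R / nu_u) [8] supplies the
    node lower bound compared against the aspiration level LB_best + 1."""
    if not has_entering_column:          # no column can enter
        return True
    if Z_R <= LB_u:                      # master LP gap is closed
        return True
    node_lb = max(LB_u, math.ceil(Z_R / nu_u))   # Farley's bound
    if node_lb > LB_best + 1:            # worse than the aspiration level
        return True
    return False
```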

When a column generation process terminates, we check whether the artificial variable is still in the basic solution. If it is, we increase its cost and run column generation again. If not, we look for the most violated cut among (15a) and (15b), add it to the master problem, and redo the column generation procedure. If no violated cut can be found and the node cannot be pruned, we branch using the constraint, chosen by the rule in ​Fig 1​, that is violated by the current LP solution. The node with the best bound is chosen as the next node; if multiple nodes have the same lower bound, the deepest one is selected. After the column generation procedure at the root node completes, we apply a rounding heuristic ​[5]​. The heuristic consists of recursively rounding a fractional master variable up to one. The variables fixed this way are removed from the optimization problem, and the right-hand sides of the constraints are adjusted to account for the fixed variables (taking into account their demands, their contributions to cuts and branching rules, etc.). We then return to the column generation procedure for the reduced master problem.
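A single round of the rounding heuristic can be sketched as follows; only the demand rows are adjusted in this illustration (cut and branching rows would be handled analogously), and all names are hypothetical:

```python
def round_up_one(columns, x_lp, rhs, eps=1e-6):
    """One round of the rounding heuristic [5] (sketch): pick the largest
    fractional master variable, round it up to one, and subtract its
    column from the right-hand sides. The caller then re-runs column
    generation on the reduced master and repeats."""
    frac = [j for j, v in enumerate(x_lp) if eps < v < 1 - eps]
    if not frac:
        return None, rhs                  # nothing left to round
    j = max(frac, key=lambda v: x_lp[v])  # most fractional-valued column
    new_rhs = [max(0, r - a) for r, a in zip(rhs, columns[j])]
    return j, new_rhs
```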

    Improving Vanderbeck's algorithm

Even though Vanderbeck's algorithm provides bounds for the compact formulation of the PMP, the lower bound it yields remains weak. This is because columns with large multiplicity can take fractional values in the LP solution. Another drawback of his approach is the seven-rule branching scheme: these complex rules do not actually ensure the integrality of the solution, so the integrality gap may not be closed at the end of the algorithm. Another exact algorithm for solving the PMP was devised by Belov in his Ph.D. thesis ​[9]​; it can be made as strong as Vanderbeck's algorithm, but on practical instances it has a weak LP bound and could only solve 8 out of 16 instances to integrality, where Vanderbeck could solve 12.

Vanderbeck's model can be improved by introducing a bound on the total waste generated. Since the maximum number of rolls that can be used is fixed to K (the CSP solution), the waste in any solution cannot exceed $W=K\cdot L-\sum_{i=1}^m l_id_i$, where $L$, $l_i$, and $d_i$ are the roll length, item lengths, and item demands, respectively. A column with multiplicity $q_0$ and total waste greater than $W$ cannot appear in any optimal integer solution. This implies that any generated column $(q_1,\dots,q_m)$ with multiplicity $q_0$ must satisfy

$$L-\sum_{i=1}^m l_iq_i\le\Big\lfloor\frac{W}{q_0}\Big\rfloor$$

So we add this constraint to the pricing subproblem. In implementing Vanderbeck's algorithm, we fix $q_0$ and solve the corresponding bounded knapsack problem; adding this inequality decreases the number of columns generated compared to the unmodified algorithm. The LP bound of the master problem becomes stronger, and the constraint also helps reduce the value of $q_0^{\max}$, so that fewer knapsack problems need to be solved. This reduction in multiplicity follows from the fact that if the pricing subproblem with bounded waste is infeasible for a value $q_0=q'$, it is infeasible for all $q_0$ greater than $q'$. Hence the maximum multiplicity is $q_0^{\max}=\min\{K-\underline{Z}+1,\ \max_i d_i,\ q'\}$, where $\underline{Z}$ is any trivial lower bound on the number of patterns to be used and $q'$ is the largest value of $q_0$ for which the pricing subproblem with bounded waste is feasible. Similarly, for a pattern $x$, the maximum multiplicity $m^\ast$ defined in Sec. 4.2 becomes $m^\ast=\min\{\min_i\lfloor d_i/x_i\rfloor,\ \lfloor W/(L-\sum_i l_ix_i)\rfloor\}$.
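The two multiplicity limits can be computed directly. This sketch assumes $q'$ has already been found by probing the waste-bounded pricing subproblem, and that the pattern has positive waste where the waste term applies:

```python
def q0_max(K, Z_lb, demands, q_prime):
    """Maximum multiplicity to iterate over in the pricing step:
    min{K - Z + 1, max_i d_i, q'}, with q' the largest q0 for which
    the waste-bounded pricing subproblem is still feasible."""
    return min(K - Z_lb + 1, max(demands), q_prime)

def max_pattern_multiplicity(x, lengths, demands, L, W):
    """m* for a generated pattern x, limited by the demands and by the
    waste budget W (each copy of the pattern repeats its waste)."""
    waste = L - sum(l * q for l, q in zip(lengths, x))
    m = min(d // q for d, q in zip(demands, x) if q > 0)
    if waste > 0:
        m = min(m, W // waste)            # waste bound only binds if waste > 0
    return m
```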

Alves and Carvalho ​[6]​ further improve Vanderbeck's approach by adding general cuts derived from dual feasible functions taken from Fekete and Schepers ​[10]​. Another main modification concerns the branching rules. An ideal branching scheme should guarantee that all fractional solutions are eliminated; as we have seen, Vanderbeck's scheme does not ensure that, so the problem is converted to its arc flow formulation and branching is done on the arc flow variables. This branching scheme does not make any intractable changes to the structure of the pricing subproblem and also guarantees that the algorithm finds an optimal integer solution after a finite number of steps. The branch-and-price-and-cut procedure of Alves and Carvalho can be summarised as follows:

The first restricted master LP is initialized with an artificial variable of high cost. At each node of the branching tree, the corresponding LP problem is solved with the bound on the maximum waste. To improve the LP bound, the cutting plane procedure with the new class of cuts derived from dual feasible functions is invoked. The resulting LP solution is converted to its arc flow formulation. If the transformation yields integer arc flow variables, an integer solution to the LP with the same cost can be recovered. If there is at least one fractional arc flow variable $x^n_{r,s}$, the one with the largest multiplicity corresponding to the leftmost arc (large $n$, small $r$) is selected for branching. The branching constraints

$$x^n_{r,s}\le\lfloor\bar{x}^n_{r,s}\rfloor\qquad\text{and}\qquad x^n_{r,s}\ge\lceil\bar{x}^n_{r,s}\rceil,$$ where $\bar{x}^n_{r,s}$ is the current fractional value, are enforced in the LP and converted into constraints in the PMP reformulation. The algorithm continues until an optimal integer solution is found.

    RESULTS AND DISCUSSION

For solving the CSP, the example provided with IBM ILOG CPLEX Optimization Studio was used. IBM ILOG CPLEX (informally CPLEX) is an optimization software package that can solve integer programming problems, large linear programming problems using a primal or dual variant of the simplex method, convex and non-convex quadratic programming problems, and convex quadratically constrained problems. The example uses only a column generation approach and does not branch on the master variables. From the initial basic solution (added as a tuple by the user) and the constraints, the dual values are computed, and new columns with negative reduced cost are added to the tuple. Once no new column can enter the tuple, that is, the column generation subproblem has no column of negative reduced cost, integer optimization is done over the columns in the tuple. The cutting stock example provided with the algebraic modeling language AMPL (A Mathematical Programming Language) uses a similar approach. This is a suboptimal strategy, since adding branching constraints on the master variables would modify the reduced cost values, and new columns might then be able to enter the tuple.

The CSP was solved to optimality using GCG (Generic Column Generation). GCG ​[11]​ is a generic branch-and-cut-and-price solver for mixed-integer programs. It is an add-on for SCIP (Solving Constraint Integer Programs), one of the most efficient non-commercial solvers for mixed integer programs (MIPs) and mixed integer nonlinear programs; SCIP also acts as a framework for constraint integer programming and branch-and-cut-and-price algorithms. GCG exploits the structure of a MIP and solves it with a branch-and-cut-and-price algorithm after decomposing it via Dantzig-Wolfe decomposition. The decomposition can be detected automatically, or the user can provide a decomposition structure. The assignment formulation of the CSP was used for GCG; the model was written in ZIMPL (Zuse Institut Mathematical Programming Language) and converted into a .lp file. In this .lp file, all the variables and constraints are defined explicitly, and GCG decomposes the problem, assigns values to the variables, and solves it to optimality.

Sometimes, finding a suboptimal solution quickly is more useful than taking a long time to find the optimal one. A suboptimal algorithm was developed to solve the PMP using CPLEX. The PMP compact formulation has a non-convex quadratic constraint (8a) which CPLEX cannot handle. Hence, the PMP reformulation is used, and for solving the subproblem, explicit iteration is done over the multiplicity values $q_0$, with the corresponding knapsack problem solved by the algorithm mentioned in (14). We start the algorithm by supplying the CSP solution as a tuple, which acts as a basic solution for column generation, and the upper bound on the number of rolls used is set to the optimal value obtained from the CSP. The dual values associated with each constraint are obtained and used to define the pricing subproblem. Columns with negative reduced cost enter the tuple. After column generation stops (that is, no new column with negative reduced cost is found), integer optimization is done over the columns currently in the tuple. As in the CSP algorithm in CPLEX, we do not branch on the master variables, so a better solution may exist in which a new column could enter because of the modification of the dual values by the branching constraints (if applied).
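The explicit iteration over multiplicities can be sketched as follows. The knapsack step is replaced by brute-force enumeration here for clarity, and the dual vector is assumed to cover only the demand constraints; the true reduced cost also involves the duals of the cut and convexity rows:

```python
from itertools import product

def price(duals, lengths, demands, L, W, q0_limit):
    """Sketch of the pricing loop: fix each multiplicity q0, enumerate the
    feasible patterns (a bounded knapsack in the real implementation),
    and keep the most profitable column. A column with cost 1 enters the
    master if its profit `value` exceeds 1."""
    best = None
    for q0 in range(1, q0_limit + 1):
        # per-item bounds from the demands (given q0) and the roll length
        ubs = [min(d // q0, L // l) for d, l in zip(demands, lengths)]
        for q in product(*(range(u + 1) for u in ubs)):
            used = sum(l * qi for l, qi in zip(lengths, q))
            if used > L or L - used > W // q0:    # length and waste bounds
                continue
            value = q0 * sum(p * qi for p, qi in zip(duals, q))
            if best is None or value > best[0]:
                best = (value, q0, q)
    return best
```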

    CONCLUSION AND RECOMMENDATIONS

The problem of finding an optimal cutting plan that minimizes both the waste generated and the number of distinct patterns used is crucial for industrial applications. We have briefly explained different formulations of the cutting stock problem and the pattern minimization problem, and have mentioned some advantages and disadvantages of using one formulation over another. An exact algorithm for solving the PMP devised by Vanderbeck, and the further modifications to improve it made by Alves and Carvalho, were also explained. Some quick and simple algorithms which solve the CSP and PMP suboptimally have been modeled and tried in CPLEX.

The exact algorithm of Vanderbeck could not be implemented entirely in CPLEX. As future work, one could try implementing Vanderbeck's algorithm together with the modifications suggested by Alves and Carvalho. Even though GCG performs Dantzig-Wolfe decomposition intrinsically, we could not implement the PMP formulation in GCG because of the quadratic non-convex constraint. However, by fixing the multiplicity iteratively, GCG might be able to handle the model and produce an optimal solution to the PMP. This, too, can be taken up as future work.

The one-dimensional cutting stock problem with divisible items is a new problem in the cutting stock literature, in which each item can be divided into smaller pieces and the smaller pieces can be recombined into a larger piece by welding or gluing. To the best of our knowledge, no exact algorithm has been devised for this problem. As future work, one might check the feasibility of applying Vanderbeck's approach to it or devise a new algorithm for it.

    ACKNOWLEDGEMENTS

I want to thank my guide, Dr. Ashutosh Mahajan, for his valuable advice and feedback. I would also like to show my gratitude to the Indian Academy of Sciences for selecting me as a Summer Research Fellow. I want to thank IIT Bombay for providing me with accommodation during the period of this project. Finally, I would like to thank all my friends at IIT Bombay for their valuable support and for providing a vibrant atmosphere to work in.

    References

    • L. V. Kantorovich, 1960, Mathematical Methods of Organizing and Planning Production, Management Science, vol. 6, no. 4, pp. 366-422

    • P. C. Gilmore, R. E. Gomory, 1961, A Linear Programming Approach to the Cutting-Stock Problem, Operations Research, vol. 9, no. 6, pp. 849-859

• J. M. Valério de Carvalho, 1999, Exact solution of bin-packing problems using column generation and branch-and-bound, Annals of Operations Research, vol. 86, p. 629

    • Silvano Martello, Paolo Toth, 1990, Lower bounds and reduction procedures for the bin packing problem, Discrete Applied Mathematics, vol. 28, no. 1, pp. 59-70

    • François Vanderbeck, 2000, Exact Algorithm for Minimising the Number of Setups in the One-Dimensional Cutting Stock Problem, Operations Research, vol. 48, no. 6, pp. 915-926

    • Cláudio Alves, J.M. Valério de Carvalho, 2008, A branch-and-price-and-cut algorithm for the pattern minimization problem, RAIRO - Operations Research, vol. 42, no. 4, pp. 435-453

    • George Nemhauser, Laurence Wolsey, 2014, Strong Valid Inequalities and Facets for Structured Integer Programs, Integer and Combinatorial Optimization, pp. 259-295

    • A. A. Farley, 1990, A Note on Bounding a Class of Linear Programming Problems, Including Cutting Stock Problems, Operations Research, vol. 38, no. 5, pp. 922-923

• G. Belov, 2003, Problems, models and algorithms in one- and two-dimensional cutting, Ph.D. thesis, Institute of Numerical Mathematics, Technischen Universität Dresden, Dresden, Germany

    • Sándor P. Fekete, Jörg Schepers, 2001, New classes of fast lower bounds for bin packing problems, Mathematical Programming, vol. 91, no. 1, pp. 11-31

• Gerald Gamrath, Marco E. Lübbecke, 2010, Experiments with a Generic Dantzig-Wolfe Decomposition for Integer Programs, Experimental Algorithms, Lecture Notes in Computer Science, pp. 239-252
