Exact algorithms for pattern minimization in cutting stock problems
The cutting stock problem is one of the classic optimization problems in Operations Research. It arises wherever large stock material must be divided and distributed to satisfy customer needs, as in the steel, paper, and glass industries. Given rods or rolls of a fixed stock length and demand lengths for which orders can be placed, the objective is to cut the demand lengths from the standard stock rolls while minimizing the waste generated. These problems are a standard example of integer optimization. An auxiliary problem to the cutting stock problem is that of reducing the number of distinct patterns (sets of feasible cuts) used in the solution, called the pattern minimization problem. This problem is of great practical importance because the time or resources required to change from one cutting pattern to another can be significant, so there is a need to optimize the waste as well as the number of distinct patterns used. The pattern minimization problem is computationally harder and can be approached either lexicographically, first minimizing the waste and then minimizing the number of patterns used in the solution, or by minimizing both simultaneously, where the trade-off between pattern minimization and waste reduction can be explored. Several heuristic approaches have been described in the literature for the pattern minimization problem, but only a few exact algorithms and methods are known so far. The formulation of pattern minimization we study is nonlinear. It can be reformulated using Dantzig-Wolfe decomposition into a linear optimization problem with a large number of binary variables, which is then solved by a branch-and-price method, with column generation used to solve the pricing subproblem. A class of superadditive inequalities is used at nodes of the branching tree to improve the LP relaxation bound.
Branching is done on a carefully selected subset of all feasible solutions to prove optimality and yield integer solutions. A rounding heuristic is applied after column generation to improve the incumbent solution. In addition, another branching method based on an arc-flow formulation of the pattern minimization problem is also described.
A list of abbreviations frequently used in this report is given below:
CSP: Cutting Stock Problem
PMP: Pattern Minimization Problem
In paper mills and steel industries, the end products are usually large stock rolls of a fixed length. For further industrial use and consumer needs, these large stocks must be cut into smaller lengths. While cutting from the large stock, a residual length may be obtained that cannot be used further; this length turns into waste. The main objective of the cutting stock problem is to minimize the waste generated while cutting the required number of small lengths from the large stock material.
The one-dimensional cutting stock problem with an infinite supply of stock rolls can be stated as follows: given lengths l1, ..., lm and their demands d1, ..., dm, for every i, di pieces of length li should be cut from an infinite number of stock rolls of length L while minimizing the number of stock rolls used.
In the literature, there are various methods and approaches for solving the cutting stock problem. Our primary focus in this report is the pattern minimization problem, an auxiliary problem to waste minimization. Switching from one cutting pattern to another may require changing the knife positions, which leads to significant setup time, and it may also require trial runs to validate the new pattern. Both aspects generate a loss for practitioners, so in the pattern minimization problem the goal is to minimize the number of distinct patterns used. The pattern minimization problem is significantly more complex than waste minimization. Here we decompose the formulation of pattern minimization using Dantzig-Wolfe decomposition, and the new formulation is solved using a branch-and-bound-and-price method.
Cutting Stock Problem Formulations.
We will mainly discuss three formulations of the cutting stock problem: one that uses binary variables, one that uses cutting patterns and their frequencies, and one based on a graph and arc flows. Each formulation has its benefits and drawbacks and illustrates a specific feature of the problem.
Consider stock rolls of length L, suppose orders can be placed for m lengths l1, ..., lm, and let their demands be d1, ..., dm. Let n be a known upper bound on the total number of stock rolls needed (usually the sum of demands, assuming that only one piece is cut from each stock roll). Let yj denote the decision variable of choosing a particular stock roll, that is, yj = 1 if the jth roll is used and 0 otherwise, and let xij denote the number of times length li is cut from roll j. Then the formulation is:
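In the notation just introduced, the assignment formulation can be sketched as follows; this is a reconstruction of the standard Kantorovich model, with labels (2a) and (2b) matching the constraint references in the discussion.

```latex
\begin{alignat}{2}
\min \quad & \sum_{j=1}^{n} y_j \nonumber \\
\text{s.t.} \quad & \sum_{j=1}^{n} x_{ij} \ge d_i, \qquad && i = 1,\dots,m, \tag{2a}\\
& \sum_{i=1}^{m} l_i\, x_{ij} \le L\, y_j, \qquad && j = 1,\dots,n, \tag{2b}\\
& y_j \in \{0,1\}, \quad x_{ij} \in \mathbb{Z}_{+}, \qquad && i = 1,\dots,m,\; j = 1,\dots,n. \nonumber
\end{alignat}
```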
The inequalities (2a) are the demand constraints, while the inequalities (2b) ensure the feasibility of each cutting pattern. This formulation includes binary as well as integer variables. A lower bound on the optimum can be obtained from the LP relaxation of the problem, that is, by relaxing the integrality constraints on yj and xij. The main drawback of this formulation is that the lower bound obtained from the LP relaxation is weak.
Let L, li, and di be defined as in Sec. 2.1 for all i = 1, ..., m. Let n denote the total number of patterns, where a pattern is a feasible combination of cuts. Let s = (s1, ..., sm) be a cutting pattern, where si represents the number of times length li appears in s, and let S denote the set of all feasible patterns. Assume pattern s is used xs times. Then the CSP can be formulated as:
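A sketch of the pattern-oriented (Gilmore-Gomory) formulation consistent with the definitions above, where S = {s ∈ Z₊ᵐ : Σᵢ lᵢ sᵢ ≤ L} is the set of feasible patterns and xₛ the frequency of pattern s (labels match the constraint references in the discussion):

```latex
\begin{alignat}{2}
\min \quad & \sum_{s \in S} x_s \nonumber \\
\text{s.t.} \quad & \sum_{s \in S} s_i\, x_s \ge d_i, \qquad && i = 1,\dots,m, \tag{4a}\\
& x_s \in \mathbb{Z}_{+}, \qquad && s \in S. \tag{4b}
\end{alignat}
```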
The inequalities (4a) are the demand constraints, and (4b) is the integrality condition. Even though this formulation provides a better LP relaxation bound than the assignment formulation, the difficulty lies in dealing with a large number of patterns, since the number of feasible patterns may grow exponentially as the number of demand lengths and their demands increase.
The pattern-oriented formulation is usually solved by a column generation approach. In this method, rather than keeping all the variables in the simplex tableau, we begin with a simple basic solution and then generate columns (cutting patterns) that have the potential to improve the objective; these are added to the tableau. For this minimization problem, the column with the most negative reduced cost is preferred to enter the basis. The subproblem of finding a column with negative reduced cost is a knapsack problem.
Arc-Flow Formulation 
Let L be the stock roll length and, as in Sec. 2.1, let li, di be the lengths and demands for i = 1, ..., m. Assume the li are indexed in order of decreasing size, that is, l1 is greater than li for all i = 2, ..., m. Consider the acyclic directed graph G = (V, A) with node set V = {0, 1, ..., L}, where arcs (d, e) with e − d equal to some demand length li correspond to cutting an item of that length at distance d from the beginning of the roll, and unit arcs (d, d + 1) represent the trim loss occurring in a pattern. The cutting stock problem can be modelled as:
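A sketch of the arc-flow model in this notation, following Valério de Carvalho's formulation: flow conservation sends z units of flow from node 0 to node L, and each demand length must be covered by enough item arcs.

```latex
\begin{alignat}{2}
\min \quad & z \nonumber \\
\text{s.t.} \quad & \sum_{(e,f) \in A} x_{ef} - \sum_{(d,e) \in A} x_{de} =
  \begin{cases} z, & e = 0,\\ -z, & e = L,\\ 0, & \text{otherwise}, \end{cases} \qquad && e \in V, \nonumber\\
& \sum_{(d,\, d+l_i) \in A} x_{d,\, d+l_i} \ge d_i, \qquad && i = 1,\dots,m, \nonumber\\
& x_{de} \in \mathbb{Z}_{+}, \quad z \in \mathbb{Z}_{+}. \nonumber
\end{alignat}
```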
where xde denotes the number of items of size e − d cut from any stock roll at a distance d from the beginning, and z denotes the total number of stock rolls used. It can be shown that the linear relaxation of this formulation gives the same bound as the pattern-oriented approach.
Pattern Minimization Problem.
One of the main factors besides waste minimization that affect the profitability of the cutting procedure is the number of times one has to switch between cutting patterns in order to satisfy the demands. This is of major importance when the cutting equipment has to be set up anew for each distinct cutting pattern in a time-consuming procedure during which cutting is halted; another factor is the waste generated by the trial runs made to validate the new cutting setup. Thus minimizing the number of distinct patterns used is also an objective in cutting pattern generation. Minimizing the number of stock rolls used and minimizing the number of distinct cutting patterns are conflicting goals: reducing the number of patterns while satisfying demands means increasing the number of times each pattern is used, which in turn increases the number of stock rolls used.
The pattern minimization problem is computationally more complex than the cutting stock problem. It can be approached in two main ways. In the first, the CSP is solved first and then the number of patterns in the solution is minimized while ensuring that no additional stock rolls are used, so that the waste remains minimal. The second approach relaxes the waste minimization requirement, because the solution with the minimum number of patterns may not correspond to the minimum waste; it considers the trade-off between minimizing waste and the number of patterns used.
Even though several heuristic approaches are present in the literature for solving the PMP, only a handful of exact methods have been put forward. We will further explore the branch-and-price method developed by Vanderbeck and will also explain the modifications made by Alves and Valério de Carvalho.
Compact Formulation for PMP
A compact formulation for PMP as given by Vanderbeck is as follows:
In this formulation, K denotes an upper bound on the number of stock rolls required, usually the optimal value of the CSP. L, li, and di follow the same definitions as in the CSP formulations. Let k = 1, ..., K index the patterns; xk denotes the number of times pattern k is used, and aik the number of times length li appears in pattern k. Let yk be the binary variable associated with choosing pattern k, that is, yk = 1 if pattern k is used and 0 otherwise.
Constraint (8a) ensures that the demand criteria are met; note that this constraint is nonlinear. Constraint (8b) bounds the total number of stock rolls that can be used. Constraints (8c) and (8d) ensure the proper definition of the pattern-selection variables and of the cutting patterns, respectively.
The compact formulation given above is nonlinear, and there is symmetry in k. The formulation is therefore reformulated using Dantzig-Wolfe decomposition. Let Q be the set of all feasible solutions to constraints (8c), (8d), and (8e).
Define the decision variable λq for each q ∈ Q, with λq = 1 if solution q is selected and 0 otherwise.
Then the PMP can be formulated as :
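A sketch of the resulting master problem, where each q ∈ Q corresponds to a feasible pair (x^q, s^q) of pattern multiplicity and pattern, and the labels (10a) and (10b) match the dual values referenced later in the text:

```latex
\begin{alignat}{2}
\min \quad & \sum_{q \in Q} \lambda_q \nonumber \\
\text{s.t.} \quad & \sum_{q \in Q} x^{q} s_i^{q}\, \lambda_q \ge d_i, \qquad && i = 1,\dots,m, \tag{10a}\\
& \sum_{q \in Q} x^{q} \lambda_q \le K, \tag{10b}\\
& \lambda_q \in \{0,1\}, \qquad && q \in Q. \nonumber
\end{alignat}
```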
This formulation has a large number of variables and is an integer program that can be solved by a branch-and-price algorithm. The nonlinearity present in the compact formulation is implicit here in the column definition.
Arc-flow Formulation 
Similar to the CSP, the PMP can also be formulated using graphs. Let G = (V, A) be an acyclic directed graph with flow variables carrying three indices. The set of arcs A is subdivided into subsets, one for each multiplicity. An arc in the subset for multiplicity n represents an item placed at a given position in a pattern with multiplicity n. The formulation is as follows:
where zn is the number of patterns with multiplicity n that are used, and the flow variable on an arc (s, r) in the subset for multiplicity n denotes the number of items of length r − s cut from any pattern with multiplicity n. The largest multiplicity any pattern can have bounds the number of arc subsets.
In this section, the methods and approaches used to solve the cutting stock problem and the pattern minimization problem are discussed.
The CSP is usually solved using a column generation scheme: instead of considering all feasible patterns in the simplex tableau, new patterns that improve the objective function are generated as needed. Initially, we consider only a trivial basic solution in the simplex method and then solve a subproblem to find which pattern can enter the basis. Since the CSP objective is minimized, the pattern with the most negative reduced cost is added to the tableau. The subproblem is given as
subject to the constraint that the pattern is feasible, i.e., the chosen piece counts fit within the roll length L, where the dual values associated with inequality (4a) for each i define the objective and the decision variables describe the new pattern. The algorithm stops when the objective value of the subproblem is non-negative, that is, when no new pattern can enter the basis.
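To make the pricing step concrete, a small Python sketch of the dynamic program over the roll capacity is given below. This is an illustrative standalone implementation (not the CPLEX model used later in the report); the function name and argument layout are our own, and `duals` plays the role of the dual values of the demand constraints (4a).

```python
def price_pattern(L, lengths, duals):
    """CSP pricing subproblem: maximize sum(pi_i * s_i) over integers s_i >= 0
    subject to sum(l_i * s_i) <= L. A new column (pattern) improves the
    master LP when its reduced cost 1 - best_value is negative."""
    m = len(lengths)
    best = [0.0] * (L + 1)      # best[c]: max dual value using capacity c
    choice = [None] * (L + 1)   # back-pointer: item added to reach best[c]
    for c in range(1, L + 1):
        for i in range(m):
            if lengths[i] <= c and best[c - lengths[i]] + duals[i] > best[c]:
                best[c] = best[c - lengths[i]] + duals[i]
                choice[c] = i
    pattern = [0] * m           # recover the pattern from the back-pointers
    c = L
    while c > 0 and choice[c] is not None:
        i = choice[c]
        pattern[i] += 1
        c -= lengths[i]
    return best[L], pattern

# Example: rolls of length 10, pieces of length 5 and 3 with duals 0.5 and 0.25.
# Cutting two pieces of length 5 gives total dual value 1.0 (reduced cost 0).
value, pattern = price_pattern(10, [5, 3], [0.5, 0.25])
```

Any LP solver's duals can be fed into this routine at each column generation iteration; the loop stops once the returned value no longer exceeds 1.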
Solving PMP (Vanderbeck's exact algorithm)
Solving the PMP is more complicated than solving the CSP. Here we consider the exact algorithm devised by Vanderbeck. The algorithm works on the column generation reformulation of the compact formulation. For a new column to enter, we must solve the subproblem
where the dual values correspond to inequalities (10a) for each i, together with the dual value arising from inequality (10b). The subproblem can be reformulated as:
This subproblem is nonlinear, but for a fixed multiplicity the problem becomes a bounded integer knapsack problem. The possible multiplicity values are bounded above by a trivial upper bound obtained from K and any trivial lower bound on the number of patterns to be used. So one brute-force method is to enumerate all the multiplicity values and solve the corresponding bounded integer knapsack problem
where the item upper bounds limit how many times each length can appear in the pattern. However, only a subset of multiplicity values needs to be considered when solving the subproblem, because if a pattern is an optimal solution of the knapsack problem for some multiplicity, it remains optimal for all multiplicities up to the point where one of the item bounds changes. The procedure to solve the subproblem is:
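The brute-force enumeration over multiplicities can be sketched as follows. This is a simplified illustration under an assumed reduced-cost shape 1 − x·(Σᵢ πᵢ sᵢ − μ), where x is the multiplicity, πᵢ the duals of (10a) and μ comes from (10b); the exact objective in Vanderbeck's subproblem carries additional cut and branching terms. The item bound ⌊dᵢ/x⌋ reflects that a pattern used x times must not overproduce length i.

```python
def bounded_knapsack(L, lengths, values, bounds):
    """Bounded integer knapsack: max sum(values[i]*s[i]) with
    sum(lengths[i]*s[i]) <= L and 0 <= s[i] <= bounds[i]."""
    m = len(lengths)
    dp = [[0.0] * (L + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for c in range(L + 1):
            dp[i][c] = dp[i - 1][c]
            for k in range(1, bounds[i - 1] + 1):
                if k * lengths[i - 1] > c:
                    break
                cand = dp[i - 1][c - k * lengths[i - 1]] + k * values[i - 1]
                if cand > dp[i][c]:
                    dp[i][c] = cand
    s, c = [0] * m, L   # recover one optimal solution item by item
    for i in range(m, 0, -1):
        for k in range(bounds[i - 1] + 1):
            rest = c - k * lengths[i - 1]
            if rest >= 0 and abs(dp[i][c] - (dp[i - 1][rest] + k * values[i - 1])) < 1e-9:
                s[i - 1], c = k, rest
                break
    return dp[m][L], s

def price_pmp_column(L, lengths, demands, duals, mu, x_max):
    """Enumerate multiplicities x = 1..x_max; for each, solve the bounded
    knapsack and keep the (multiplicity, pattern) pair of least reduced cost."""
    best = (float("inf"), None, None)
    for x in range(1, x_max + 1):
        bounds = [d // x for d in demands]        # item bound floor(d_i / x)
        value, s = bounded_knapsack(L, lengths, duals, bounds)
        reduced_cost = 1 - x * (value - mu)       # assumed reduced-cost shape
        if reduced_cost < best[0]:
            best = (reduced_cost, x, s)
    return best
```

In the refinement described above, the loop over x would skip multiplicities for which no item bound changes, since the knapsack solution carries over unchanged.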
The bound obtained by the LP relaxation is not strong, hence we follow a cutting plane procedure to strengthen it. Here we add general purpose cuts that can dominate any of the rank-one Chvátal-Gomory cuts. Combining general purpose cuts with column generation may drastically alter the structure of the subproblem, because the new inequalities modify the reduced costs. To limit these modifications, we only consider a specific class of inequalities, each derived from inequalities of the formulation of the problem. It is shown that the inequalities
derived from (10b) and (10a) are valid and dominate every rank-one Chvátal-Gomory cut. Among these cuts, the most violated ones are selected first, because they have the most potential to improve the LP bound.
If the LP bound does not allow us to prove the optimality of the current incumbent solution, we must resort to branching. An appropriate branching scheme enforces integrality of the total value of the master variables over a carefully selected subset of columns. That is, if the sum of master variables over such a subset is fractional in the master LP solution, two branch-and-bound nodes are defined by imposing
The subset is chosen in such a way that it is easy to detect whether a solution to the subproblem belongs to it, and hence whether it must include the dual price associated with the branching constraint in its reduced cost; it is also chosen so that branching results in a balanced branch-and-bound tree.
Following the above logic, we branch on the first subset in the following list that yields a fractional sum. (Here b denotes the binary vector associated with the logarithmic decomposition of the pattern multiplicity.)
Branching rules 1 and 2 amount to fixing the number of columns with a particular multiplicity, and rule 3 fixes the number of columns that include length li. Rules 4 and 5 fix the number of columns that include a specific number of pieces of length li. Rule 6 amounts to fixing the number of columns having length li and multiplicity m. The last rule fixes the number of columns satisfying restrictions both on the multiplicity and on the number of items of length li. These rules by themselves may not suffice in practice to find integer solutions, so we resort to a rounding heuristic in the end.
Adding the cutting planes (15a) and (15b) and the branching constraints modifies the objective of the subproblem through the entry of new dual values. Cut (15a) together with branching constraints 1 and 2 contributes to the column reduced cost as a function of the multiplicity alone, while branching constraints 3-5 show up in the objective as a function of the pattern entries. Cut (15b) together with rules 6-7 contributes a term that is a function of both the multiplicity and the number of pieces of each length. A generic form of the modified subproblem is:
For a fixed multiplicity, the modified subproblem becomes a bounded integer knapsack problem.
However, the objective function can be nonlinear. If we can compute the cost associated with each possible value of the number of pieces of each length, the problem becomes a multiple-choice knapsack problem:
The main bounded knapsack problem can be converted into a simpler form if we do not use branching rules 3 and 5 and consider a weaker version of (14b). Here we use a change of variables based on the binary (logarithmic) decomposition of each item count. This binary form is easier to solve than the multiple-choice bounded knapsack.
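The change of variables is the standard binary-splitting trick: an item that may appear up to b times is replaced by 0/1 copies bundling 1, 2, 4, ... originals, whose subsets realize every count from 0 to b. A small sketch (the tuple layout is our own, for illustration):

```python
def binary_split(length, value, bound):
    """Replace a bounded item (up to `bound` copies) by O(log bound) binary
    items (length, value, copies) whose subset sums cover every count 0..bound."""
    items, k = [], 1
    while bound > 0:
        take = min(k, bound)                       # last bundle absorbs the remainder
        items.append((take * length, take * value, take))
        bound -= take
        k *= 2
    return items

# An item of length 3 with at most 5 copies becomes bundles of 1, 2 and 2 copies.
pieces = binary_split(3, 1.0, 5)
```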
The modified subproblem (16) can be solved by a brute-force approach, that is, by enumerating all the multiplicity values and solving the corresponding bounded integer knapsack problems. This approach is used only as a last resort, since we can exploit cases in which the multiplicity values are enumerated implicitly.
This case assumes the multiplicity-dependent cost is a linear function of the multiplicity. In this case, algorithm (14) can be applied.
When the cost is a nonlinear function of the multiplicity but, as in case 1, separable over the items, an amended version of algorithm (14) can be used. For a given knapsack solution and its associated multiplicity, we enumerate the multiplicities to identify the optimal one for that solution. The next knapsack is then solved with the item upper bounds tightened accordingly. This special case occurs when one only uses cuts (15a) and branching constraints 1-2. These cuts and constraints, which fix the pattern multiplicities to integer values, have the most significant impact on the lower bound on the minimum number of distinct patterns used.
The only assumption in this case is that the cost term depending jointly on the multiplicity and the item counts is absent, which is the case as long as we do not use cut (15b) and branching constraints 6-7. This case can be solved by a branch-and-bound procedure through an implicit enumeration of the multiplicity. At each node we restrict the multiplicity to an interval (at the root node, the full range). The subproblem at a node is
Here, to obtain an upper bound on the subproblem, we relax the constraint coupling the multiplicity with the knapsack variables. The problem then decomposes into
The first part can be solved by simple iteration. For the second part, assume an optimal solution is at hand; it then suffices to solve two knapsack problems of the given format, one for each endpoint of the multiplicity interval, under the corresponding item-bound constraints. An upper bound for the problem is then
Let the two knapsack solutions and their respective multiplicities be denoted accordingly. We derive lower bounds by optimizing the multiplicity for each fixed knapsack solution.
This gives a valid upper bound on the node. Hence, if the current node cannot be pruned, we have to divide the problem further. Moreover, the bound remains valid on part of the multiplicity interval, so only the remaining interval needs to be considered. For branching, we partition the remaining interval of multiplicity values so that the current upper bound is no longer valid on the resulting subintervals.
First, we solve the CSP associated with the given PMP and set K (defined in the PMP column generation reformulation) to the CSP solution value. We also compute the L2 lower bound for the corresponding bin packing problem (setting all demands to 1), which is used as an initial lower bound for the PMP. At the root node, we start with a single column (with entries equal to the right-hand sides of the constraints) whose cost equals the cost of the CSP solution (the incumbent). This column is treated as an artificial variable. After each iteration of the column generation process, a valid lower bound on the LP can be obtained from the optimal reduced cost value; one of the best such bounds is due to Farley, defined for a branch-and-bound node u in terms of the current value of the restricted master problem and the value of the column generation subproblem. Column generation is interrupted when no more columns can enter, when the master LP gap is closed at node u, or when the current lower bound is worse than an aspiration level set to LB + 1, where LB is the current best lower bound for the PMP.
When the column generation process terminates, we check whether the artificial variable is still in the basic solution. If it is, we increase the cost of the artificial variable and run column generation again. If not, we search for the most violated cut among (15a) and (15b), add it to the master problem, and redo the column generation procedure. If no violated cut can be found and the node cannot be pruned, we resort to branching on the constraint, following the rules mentioned in Fig. 1, that is violated by the current LP solution. The node with the best bound is chosen as the next node; if multiple nodes have the same lower bound, the deepest node is selected. After completing the column generation procedure at the root node, we apply a rounding heuristic, which consists of recursively rounding a fractional master variable up to one. The variables fixed this way are removed from the optimization problem, and the right-hand sides of the constraints are adjusted to account for the fixed variables (taking into account their demands, contributions to cuts and branching rules, etc.). We then return to the column generation procedure for the reduced-size master problem.
Improving Vanderbeck's algorithm
Even though Vanderbeck's algorithm provides bounds for the compact formulation of the PMP, the lower bound it yields remains weak. This is because columns with large multiplicity can take fractional values in the LP solution. Another demerit of his approach is the seven-rule branching scheme: these complex rules do not actually ensure the integrality of the solution, so the integrality gap may not be closed at the end of the algorithm. There is another exact algorithm for solving the PMP, devised by Belov in his Ph.D. thesis, which can be made as strong as Vanderbeck's algorithm; however, on practical instances it has a weak LP bound and could only solve 8 out of 16 instances to integrality where Vanderbeck could solve 12.
Vanderbeck's model can be improved by introducing a bound on the total waste generated. Since the maximum number of rolls that can be used is fixed to K (the CSP solution), the waste in any solution cannot exceed the total slack determined by K, the roll length, and the item lengths and demands. A column whose multiplicity times its per-roll waste exceeds this bound cannot be in any optimal integer solution. This implies that any column that is generated, say with a given multiplicity, should satisfy
So we add this constraint to the pricing subproblem. In the implementation of Vanderbeck's algorithm, we fix the multiplicity and solve the corresponding bounded knapsack problem; adding this inequality decreases the number of columns compared to the original algorithm. The LP bound of the master problem becomes stronger, and the constraint also helps reduce the maximum multiplicity, so that fewer knapsack problems need to be solved. This reduction follows from the fact that if the pricing subproblem with bounded waste is infeasible for some multiplicity value, it is infeasible for all larger values; the maximum multiplicity is therefore the largest value for which the pricing subproblem with bounded waste remains feasible (and similarly for the per-pattern maximum multiplicity defined in Sec. 4.2).
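Assuming the total-waste bound takes the form W = K·L − Σᵢ dᵢ·lᵢ (the slack left when all demands are cut from K rolls), the waste filter on a candidate column can be sketched as follows; the function and argument names are our own.

```python
def waste_ok(L, lengths, demands, K, pattern, mult):
    """Accept a column only if its accumulated trim loss alone would not
    exceed the total waste available in any K-roll solution."""
    total_waste = K * L - sum(d * l for d, l in zip(demands, lengths))
    per_roll_loss = L - sum(s * l for s, l in zip(pattern, lengths))
    return mult * per_roll_loss <= total_waste
```

For example, with L = 10, lengths (6, 3), demands (2, 2) and K = 2, the total waste is 2, so the pattern (1, 1) may be used twice, while using the pattern (1, 0) twice would already waste 8 and is rejected.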
Alves and Valério de Carvalho further improve Vanderbeck's approach by adding general cuts derived from the dual feasible functions of Fekete and Schepers. Another main modification concerns the branching rules. An ideal branching scheme should guarantee that all fractional solutions are eliminated; as we have seen, Vanderbeck's scheme does not ensure that. The problem is therefore converted to its arc-flow formulation, and branching is done on the arc-flow variables. This branching scheme does not make any intractable changes to the structure of the pricing subproblem and also guarantees that the algorithm finds an optimal integer solution after a finite number of steps. The branch-and-price-and-cut procedure of Alves and Valério de Carvalho can be summarised as follows:
The first restricted master LP is initialized with an artificial variable of high cost. At each node of the branching tree, the corresponding LP is solved with the bound on maximum waste. To improve the LP bound, the cutting plane procedure with the new class of cuts derived from dual feasible functions is invoked. The resulting LP solution is converted to its arc-flow formulation. If the transformation yields integer arc-flow variables, an integer solution to the LP with the same cost can be recovered. If at least one arc-flow variable is fractional, the one with the largest multiplicity that corresponds to the leftmost arc (large n, small r) is selected for branching. The branching constraints
are enforced in the LP and converted into constraints in the PMP reformulation. The algorithm continues until an optimal integer solution is found.
RESULTS AND DISCUSSION
For solving the CSP, the example provided with IBM ILOG CPLEX Optimization Studio was used. IBM ILOG CPLEX (informally CPLEX) is an optimization software package that can solve integer programming problems, large linear programming problems using primal or dual variants of the simplex method, convex and non-convex quadratic programming problems, and convex quadratically constrained problems. The example uses only a column generation approach and does not branch on the master variables. From the initial basic solution (added as a tuple by the user) and the constraints, the dual values are computed, and new columns with negative reduced cost are added to the tuple. Once no new column can enter the tuple, that is, when the column generation subproblem no longer yields a negative reduced cost, integer optimization is done over the columns in the tuple. The cutting stock example provided with the algebraic modeling language AMPL (A Mathematical Programming Language) uses a similar approach. This is only a suboptimal strategy, since adding branching constraints on the master variables would modify the reduced cost values, and new columns might then be able to enter the tuple.
Sometimes, finding a suboptimal solution quickly is more useful than taking a long time to find the optimal one. A suboptimal algorithm was developed to solve the PMP using CPLEX. The PMP compact formulation has a non-convex quadratic constraint (8a) that CPLEX cannot handle directly; hence the PMP reformulation is used, and for the subproblem we iterate explicitly over the multiplicity values and solve the corresponding knapsack problems following the algorithm mentioned in (14). We start by giving the CSP solution as a tuple, which acts as a basic solution for column generation, and the upper bound on the number of rolls used is set to the optimal CSP value. The dual values associated with each constraint are obtained and used to define the pricing subproblem, and columns with negative reduced cost enter the tuple. After column generation stops (that is, when no new column with negative reduced cost is found), integer optimization is done among the columns currently in the tuple. As in the CSP algorithm in CPLEX, we do not branch on the master variables, so a better solution might be possible in which a new column enters because of the modification of the dual values by branching constraints.
CONCLUSION AND RECOMMENDATIONS
The problem of finding an optimal cutting plan that minimizes both the waste generated and the number of distinct patterns used is crucial for industrial applications. We have briefly explained different formulations of the cutting stock problem and the pattern minimization problem and mentioned some advantages and disadvantages of each. The exact algorithm for solving the PMP devised by Vanderbeck, and the further modifications made by Alves and Valério de Carvalho, were also explained. Some quick algorithms that solve the CSP and PMP suboptimally have been modeled and tried in CPLEX.
The exact algorithm of Vanderbeck could not be implemented entirely in CPLEX. As future work, one could try implementing Vanderbeck's algorithm together with the modifications suggested by Alves and Valério de Carvalho. Even though GCG performs Dantzig-Wolfe decomposition intrinsically, we could not implement the PMP formulation in GCG because of the presence of the quadratic non-convex constraint. However, by fixing the multiplicity iteratively, GCG might be able to compile the model and produce an optimal solution to the PMP. This can be taken up as future work.
The one-dimensional cutting stock problem with divisible items is a new problem in the cutting stock literature, in which each item can be divided into smaller pieces, and smaller pieces can be recombined into a larger piece by welding or gluing. To the best of our knowledge, no exact algorithm has been devised for this problem. As future work, one might check the feasibility of applying Vanderbeck's approach to this problem or try to devise a new algorithm for it.
I want to thank my guide, Dr. Ashutosh Mahajan, for his valuable advice and feedback. I would also like to show my gratitude to the Indian Academy of Sciences for selecting me as a Summer Research Fellow. I want to thank IIT Bombay for providing me with accommodation during the period of this project. Finally, I would like to thank all my friends at IIT Bombay for their valuable support and for providing a vibrant atmosphere to work in.
L. V. Kantorovich, 1960, Mathematical Methods of Organizing and Planning Production, Management Science, vol. 6, no. 4, pp. 366-422
P. C. Gilmore, R. E. Gomory, 1961, A Linear Programming Approach to the Cutting-Stock Problem, Operations Research, vol. 9, no. 6, pp. 849-859
J. M. Valério de Carvalho, 1999, Exact solution of bin-packing problems using column generation and branch-and-bound, Annals of Operations Research, vol. 86, p. 629
Silvano Martello, Paolo Toth, 1990, Lower bounds and reduction procedures for the bin packing problem, Discrete Applied Mathematics, vol. 28, no. 1, pp. 59-70
François Vanderbeck, 2000, Exact Algorithm for Minimising the Number of Setups in the One-Dimensional Cutting Stock Problem, Operations Research, vol. 48, no. 6, pp. 915-926
Cláudio Alves, J. M. Valério de Carvalho, 2008, A branch-and-price-and-cut algorithm for the pattern minimization problem, RAIRO - Operations Research, vol. 42, no. 4, pp. 435-453
George Nemhauser, Laurence Wolsey, 2014, Strong Valid Inequalities and Facets for Structured Integer Programs, Integer and Combinatorial Optimization, pp. 259-295
A. A. Farley, 1990, A Note on Bounding a Class of Linear Programming Problems, Including Cutting Stock Problems, Operations Research, vol. 38, no. 5, pp. 922-923
G. Belov, 2003, Problems, models and algorithms in one- and two-dimensional cutting, Ph.D. thesis, Institute of Numerical Mathematics, Technische Universität Dresden, Dresden, Germany
Sándor P. Fekete, Jörg Schepers, 2001, New classes of fast lower bounds for bin packing problems, Mathematical Programming, vol. 91, no. 1, pp. 11-31
Gerald Gamrath, Marco E. Lübbecke, 2010, Experiments with a Generic Dantzig-Wolfe Decomposition for Integer Programs, Experimental Algorithms, Lecture Notes in Computer Science, pp. 239-252