2015-11-28T02:15:32Z
http://oai.repec.openlib.org/oai.php
oai:RePEc:eee:ejores:v:214:y:2011:i:2:p:317-330
2015-03-26
RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:2:p:317-330
article
A cooperative location game based on the 1-center location problem
In this paper we introduce and analyze new classes of cooperative games related to facility location models defined on general metric spaces. The players are the customers (demand points) in the location problem and the characteristic value of a coalition is the cost of serving its members. Specifically, the cost in our games is the service radius of the coalition. We call these games the Minimum Radius Location Games (MRLG). We study the existence of core allocations and the existence of polynomial representations of the cores of these games, focusing on network spaces, i.e., finite metric spaces induced by undirected graphs and positive edge lengths, and on the lp metric spaces defined over .
Cooperative combinatorial games; Core solutions; Radius; Diameter;
http://www.sciencedirect.com/science/article/pii/S037722171100364X
Puerto, Justo
Tamir, Arie
Perea, Federico
oai:RePEc:eee:ejores:v:244:y:2015:i:2:p:498-510
2015-03-26
RePEc:eee:ejores
RePEc:eee:ejores:v:244:y:2015:i:2:p:498-510
article
Last time buy and repair decisions for spare parts
Original Equipment Manufacturers (OEMs) of advanced capital goods often offer service contracts for system support to their customers, for which spare parts are needed. Due to technological changes, suppliers of spare parts may stop production at some point in time. As a reaction to that decision, an OEM may place a so-called Last Time Buy (LTB) order to cover demand for spare parts during the remaining service period, which may last for many years. The fact that there might be other alternative sources of supply in later periods complicates the LTB decision. In this paper, we develop a heuristic method to find the near-optimal LTB quantity in the presence of an imperfect repair option for failed parts that can be returned from the field. Comparison of our method to simulation shows high approximation accuracy. Numerical experiments reveal that repair is an excellent alternative sourcing option, even if it is more expensive than buying a new part, because of the option to postpone repair until the parts are needed. In addition, we show the impact of other key parameters on costs and the LTB quantity.
Inventory; Stochastic processes; Spare parts; Last time buy; Repair;
http://www.sciencedirect.com/science/article/pii/S037722171500082X
Behfard, S.
van der Heijden, M.C.
Al Hanbali, A.
Zijm, W.H.M.
oai:RePEc:eee:ejores:v:244:y:2015:i:1:p:110-116
2015-03-26
RePEc:eee:ejores
RePEc:eee:ejores:v:244:y:2015:i:1:p:110-116
article
An exact algorithm for the reliability redundancy allocation problem
The redundancy allocation problem is the problem of finding an optimal allocation of redundant components subject to a set of resource constraints. The problem studied in this paper refers to a series-parallel system configuration and allows for component mixing. We propose a new modeling/solution approach, in which the problem is transformed into a multiple choice knapsack problem and solved to optimality via a branch and cut algorithm. The algorithm is tested on well-known sets of benchmark instances. All instances have been solved to optimality in milliseconds or, at most, a few seconds on a standard workstation.
Redundancy allocation problem; Multiple choice knapsack problem; Branch and cut;
http://www.sciencedirect.com/science/article/pii/S0377221715000284
Caserta, Marco
Voß, Stefan
oai:RePEc:eee:ejores:v:243:y:2015:i:2:p:514-522
2015-03-26
RePEc:eee:ejores
RePEc:eee:ejores:v:243:y:2015:i:2:p:514-522
article
Scheduling identical parallel machines with fixed delivery dates to minimize total tardiness
This paper addresses the problem of minimizing the total tardiness of a set of jobs to be scheduled on identical parallel machines where jobs can only be delivered at certain fixed delivery dates. Scheduling problems with fixed delivery dates are frequent in industry, for example when a manufacturer has to rely on the timetable of a logistics provider to ship its products to customers. We develop and empirically evaluate both optimal and heuristic solution procedures to solve the problem. As the problem is NP-hard, only relatively small instances can be optimally solved in reasonable computational time using either an efficient mathematical programming formulation or a branch-and-bound algorithm. Consequently, we develop a tabu search and a hybrid genetic algorithm to quickly find good approximate solutions for larger instances.
Scheduling; Assignment problems; Branch and bound; Metaheuristics; Fixed delivery dates;
http://www.sciencedirect.com/science/article/pii/S0377221714009849
Mensendiek, Arne
Gupta, Jatinder N.D.
Herrmann, Jan
oai:RePEc:eee:ejores:v:244:y:2015:i:1:p:100-109
2015-03-26
RePEc:eee:ejores
RePEc:eee:ejores:v:244:y:2015:i:1:p:100-109
article
An approach to the asymmetric multi-depot capacitated arc routing problem
Despite the fact that Capacitated Arc Routing Problems (CARPs) have received substantial attention in the literature, most of the research concentrates on the symmetric, single-depot version of the problem. In this paper, we fill this gap by proposing an approach to solving a more general version of the problem and analysing its properties. We present an MILP formulation that accommodates the asymmetric multi-depot case and consider valid inequalities that may be used to tighten its LP relaxation. A symmetry breaking scheme for the single-depot case is also proposed. An extensive numerical study is carried out to investigate the properties of the problem and the proposed solution approach.
Arc routing; Valid inequalities; Branch-and-cut;
http://www.sciencedirect.com/science/article/pii/S0377221715000065
Krushinsky, Dmitry
Van Woensel, Tom
oai:RePEc:eee:ejores:v:244:y:2015:i:2:p:369-378
2015-03-26
RePEc:eee:ejores
RePEc:eee:ejores:v:244:y:2015:i:2:p:369-378
article
A bi-objective column generation algorithm for the multi-commodity minimum cost flow problem
We present a column generation algorithm for solving the bi-objective multi-commodity minimum cost flow problem. This method is based on the bi-objective simplex method and Dantzig–Wolfe decomposition. The method is initialised by optimising the problem with respect to the first objective, a single objective multi-commodity flow problem, which is solved using Dantzig–Wolfe decomposition. Then, similar to the bi-objective simplex method, our algorithm iteratively moves from one non-dominated extreme point to the next one by finding entering variables with the maximum ratio of improvement of the second objective over deterioration of the first objective. Our method reformulates the problem into a bi-objective master problem over a set of capacity constraints and several single objective linear fractional sub-problems each over a set of network flow conservation constraints. The master problem iteratively updates cost coefficients for the fractional sub-problems. Based on these cost coefficients an optimal solution of each sub-problem is obtained. The solution with the best ratio objective value out of all sub-problems represents the entering variable for the master basis. The algorithm terminates when there is no entering variable which can improve the second objective by deteriorating the first objective. This implies that all non-dominated extreme points of the original problem are obtained. We report on the performance of the algorithm on several directed bi-objective network instances with different characteristics and different numbers of commodities.
Network flows; Bi-objective multi-commodity minimum cost flow problem; Dantzig–Wolfe decomposition; Column generation; Bi-objective simplex method;
http://www.sciencedirect.com/science/article/pii/S0377221715000417
Moradi, Siamak
Raith, Andrea
Ehrgott, Matthias
oai:RePEc:eee:ejores:v:243:y:2015:i:3:p:944-955
2015-03-26
RePEc:eee:ejores
RePEc:eee:ejores:v:243:y:2015:i:3:p:944-955
article
A discontinuous mispricing model under asymmetric information
We study a discontinuous mispricing model of a risky asset under asymmetric information where jumps in the asset price and mispricing are modelled by Lévy processes. By contracting the filtration of the informed investor, we obtain optimal portfolios and maximum expected utilities for the informed and uninformed investors. We also discuss their asymptotic properties, which can be estimated using the instantaneous centralized moments of return. We find that optimal and asymptotic utilities are increased due to jumps in mispricing for the uninformed investor but the informed investor still has excess utility, provided there is not too little or too much mispricing.
Mispricing; Lévy jumps; Asymmetric information; Optimal portfolio; Expected utility;
http://www.sciencedirect.com/science/article/pii/S0377221714010637
Buckley, Winston S.
Long, Hongwei
oai:RePEc:eee:ejores:v:243:y:2015:i:3:p:774-788
2015-03-26
RePEc:eee:ejores
RePEc:eee:ejores:v:243:y:2015:i:3:p:774-788
article
Mathematical programming techniques in water network optimization
In this article we survey mathematical programming approaches to problems in the field of drinking water distribution network optimization. Among the predominant topics treated in the literature, we focus on two different, but related problem classes. One can be described by the notion of network design, while the other is more aptly termed network operation. The basic underlying model in both cases is a nonlinear network flow model, and we give an overview of the more specific modeling aspects in each case. The overall mathematical model is a Mixed Integer Nonlinear Program having a common structure with respect to how water dynamics in pipes are described. Finally, we survey algorithmic approaches for solving the proposed problems and discuss computational experience on various types of water networks.
Networks; Mixed Integer Nonlinear Programming; Combinatorial optimization; Global optimization;
http://www.sciencedirect.com/science/article/pii/S0377221714010571
D’Ambrosio, Claudia
Lodi, Andrea
Wiese, Sven
Bragalli, Cristiana
oai:RePEc:eee:ejores:v:243:y:2015:i:3:p:763-773
2015-03-26
RePEc:eee:ejores
RePEc:eee:ejores:v:243:y:2015:i:3:p:763-773
article
Inverse chromatic number problems in interval and permutation graphs
Given a graph G and a positive integer K, the inverse chromatic number problem consists in modifying the graph as little as possible so that it admits a chromatic number not greater than K. In this paper, we focus on the inverse chromatic number problem for certain classes of graphs. First, we discuss diverse possible versions and then focus on two application frameworks which motivate this problem in interval and permutation graphs: the inverse booking problem and the inverse track assignment problem. The inverse booking problem is closely related to some previously known scheduling problems; we propose new hardness results and polynomial cases. The inverse track assignment problem motivates our study of the inverse chromatic number problem in permutation graphs; we show how to solve in polynomial time a generalization of the problem with a bounded number of colors.
Inverse combinatorial optimization; Graph coloring; Interval graphs; Permutation graphs;
http://www.sciencedirect.com/science/article/pii/S0377221714010467
Chung, Yerim
Culus, Jean-François
Demange, Marc
oai:RePEc:eee:ejores:v:214:y:2011:i:2:p:380-392
2015-03-26
RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:2:p:380-392
article
An experimental study of the effect of uncertainty representation on decision making
This paper presents the results of an experiment investigating the effects of using different formats for representing uncertain attribute evaluations on decision making. Study participants make a series of hypothetical choices using six uncertainty formats - probability distributions, expected values, standard deviations, three-point (minimum-median-maximum) approximations, quantiles, and scenarios - and effects on decision making are tracked in terms of the quality of the final choice, the specific characteristics of the selected alternatives, and the difficulty experienced in making a decision. The results provide insights into how subjects make single- and multi-criteria choices in the presence of uncertainty (and some format for representing uncertainty) but in the absence of any real facilitation. The use of probability distributions appeared to overload subjects with information, leading to poorer and more difficult choices than if some intermediate level of summary was used - in particular three-point approximations or quantiles.
Multi-criteria analysis; Decision analysis; Decision support systems; Uncertainty modelling; Psychology;
http://www.sciencedirect.com/science/article/pii/S0377221711003651
Durbach, Ian N.
Stewart, Theodor J.
oai:RePEc:eee:ejores:v:243:y:2015:i:2:p:647-657
2015-03-26
RePEc:eee:ejores
RePEc:eee:ejores:v:243:y:2015:i:2:p:647-657
article
Adding flexibility in a natural gas transportation network using interruptible transportation services
We present a modeling framework for analyzing whether the use of interruptible transportation services can improve capacity utilization in a natural gas transportation network. The network consists of two decision makers: the transmission system operator (TSO) and a shipper of natural gas. The TSO is responsible for the routing of gas in the network and allocates capacity to the shipper to ensure that the security of supply in the network is within given bounds. The TSO can offer two different types of transportation services: firm and interruptible. Only firm services have a security of supply measure, while the interruptible services can freely be interrupted whenever the available capacity in the transportation network is not sufficiently large. We apply our modeling framework to a case study with realistic data from the Norwegian Continental Shelf. The results indicate substantially increased throughput and profits with the introduction of interruptible services.
OR in energy; Interruptible transportation service; Natural gas; Security of supply; Stochastic programming;
http://www.sciencedirect.com/science/article/pii/S037722171400993X
Fodstad, Marte
Midthun, Kjetil T.
Tomasgard, Asgeir
oai:RePEc:eee:ejores:v:214:y:2011:i:2:p:216-222
2015-03-26
RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:2:p:216-222
article
The performance evaluation of a multi-stage JIT production system with stochastic demand and production capacities
This paper discusses a single-item, multi-stage, serial Just-in-Time (JIT) production system with stochastic demand and production capacities. The JIT production system is modeled as a discrete-time, M/G/1-type Markov chain. A necessary and sufficient condition, or a stability condition, under which the system has a steady-state distribution is derived. A performance evaluation algorithm is then developed using the matrix analytic methods. In numerical examples, the optimal numbers of kanbans are determined by the proposed algorithm. The optimal numbers of kanbans are robust to variations in the production capacity distribution and the demand distribution.
Production; Multi-stage JIT production system; M/G/1-type Markov chain; Stability condition; Matrix analytic methods; Numerical results;
http://www.sciencedirect.com/science/article/pii/S0377221711003584
Iwase, Masaharu
Ohno, Katsuhisa
oai:RePEc:eee:ejores:v:214:y:2011:i:2:p:246-255
2015-03-26
RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:2:p:246-255
article
Marketing-driven channel coordination with revenue-sharing contracts under price promotion to end-customers
This paper explores the equilibrium behavior of a basic supplier-retailer distribution channel with and without revenue-sharing contracts under price promotion to end-customers. Three types of promotional demand patterns characterized by different features of dynamic price sensitivity are considered to rationalize price promotional effects on end-customer demands. Under such a retail price promotion scheme, this work develops a basic model to investigate decentralized channel members' equilibrium decisions in pricing and logistics operations using a two-stage Stackelberg game approach. Extending from the basic model, this work further derives the equilibrium solutions of the dyadic members under channel coordination with revenue-sharing contracts. Analytical results show that under certain conditions both the supplier and retailer can gain more profits through revenue-sharing contracts by means of appropriate promotional pricing strategies. Moreover, the supplier should provide additional economic incentives to the retailer. Furthermore, a counter-profit revenue-sharing chain effect is found in the illustrative examples. This phenomenon implies that the more the retailer requests to share from a unit of sale, the more it may lose under the revenue-sharing supply chain coordination scheme.
Supply chain management; Channel coordination; Promotional effect; Revenue sharing;
http://www.sciencedirect.com/science/article/pii/S0377221711003754
Sheu, Jiuh-Biing
oai:RePEc:eee:ejores:v:244:y:2015:i:1:p:86-99
2015-03-26
RePEc:eee:ejores
RePEc:eee:ejores:v:244:y:2015:i:1:p:86-99
article
Single machine scheduling with two competing agents and equal job processing times
We study various two-agent scheduling problems on a single machine with equal job processing times. The equal processing time assumption enables us to design new polynomial-time or faster-than-known optimization algorithms for many problems. We prove, however, that there exists a subset of problems for which the computational complexity remains NP-hard. The set of hard problems includes different variations where the objective functions of the two agents are either minimizing the weighted sum of completion times or the weighted number of tardy jobs. For these problems, we present pseudo-polynomial time algorithms.
Single machine scheduling; Two competing agents; Equal processing times; Complexity;
http://www.sciencedirect.com/science/article/pii/S0377221715000041
Oron, Daniel
Shabtay, Dvir
Steiner, George
oai:RePEc:eee:ejores:v:244:y:2015:i:2:p:565-575
2015-03-26
RePEc:eee:ejores
RePEc:eee:ejores:v:244:y:2015:i:2:p:565-575
article
A Lagrangian approach to the winner determination problem in iterative combinatorial reverse auctions
Combinatorial auctions allow allocation of bundles of items to the bidders who value them the most. The NP-hardness of the winner determination problem (WDP) has imposed serious computational challenges when designing efficient solution algorithms. This paper analytically studies the Lagrangian relaxation of the WDP and expounds a novel technique for efficiently solving the relaxation problem. Moreover, we introduce a heuristic algorithm that adjusts any infeasibilities in the Lagrangian optimal solution to reach an optimal or near optimal solution. Extensive numerical experiments illustrate the class of problems on which application of this technique provides near optimal solutions in much less time, as little as a thousandth of the time required by the CPLEX solver.
Reverse multi-unit combinatorial auction; Lagrangian relaxation; Iterative auctions;
http://www.sciencedirect.com/science/article/pii/S0377221715000739
Mansouri, Bahareh
Hassini, Elkafi
oai:RePEc:eee:ejores:v:214:y:2011:i:2:p:308-316
2015-03-26
RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:2:p:308-316
article
Proactive policies for the stochastic resource-constrained project scheduling problem
The resource-constrained project scheduling problem involves the determination of a schedule of the project activities, satisfying the precedence and resource constraints while minimizing the project duration. In practice, activity durations may be subject to variability. We propose a stochastic methodology for the determination of a project execution policy and a vector of predictive activity starting times with the objective of minimizing a cost function that consists of the weighted expected activity starting time deviations and the penalties or bonuses associated with late or early project completion. In a computational experiment, we show that our procedure greatly outperforms existing algorithms described in the literature.
Project scheduling; Proactive scheduling; Execution policies; Stochastic RCPSP;
http://www.sciencedirect.com/science/article/pii/S0377221711003638
Deblaere, Filip
Demeulemeester, Erik
Herroelen, Willy
oai:RePEc:eee:ejores:v:243:y:2015:i:2:p:555-565
2015-03-26
RePEc:eee:ejores
RePEc:eee:ejores:v:243:y:2015:i:2:p:555-565
article
Influence of order acceptance policies on optimal capacity investment with stochastic customer required lead times
This paper discusses the influence of queue-state-dependent order acceptance policies, in which the acceptance decision rests with either the customer or the manufacturer, on optimal capacity investment. To this end, three order acceptance policies are developed: either the customer has a certain service level threshold for each order or the manufacturer has an overall service level threshold. The third policy, modeling queue-state-independent order acceptance, is used to identify the performance gains of including queue state knowledge in this decision. Equations for state probabilities, order acceptance rate, work-in-process, finished-goods inventory, backorders and service level are developed, using queuing methodology, for a system with stochastic customer-required lead times. An optimization problem minimizing capacity, work-in-process, finished-goods inventory, backorder and lost sales costs (for rejected orders) in a single-stage MTO production system is presented. The system is modeled as an M/M/1 queue with input rates depending on queue length and random customer-required lead times. For the optimization problem, which cannot be solved explicitly, a solution heuristic is developed and a broad numerical study is conducted. The numerical study shows that allowing the customer to know the expected production lead time and, based on this knowledge, decide whether or not to place an order can have a positive or negative influence on overall costs, depending on the customer's service level target. Furthermore, the study shows that a high cost reduction potential exists for simultaneously optimizing capacity investment and order acceptance policy if the production system can decide whether or not to accept an order.
Order acceptance; Stochastic customer-required lead time; Queuing theory; Service level; Tardiness; Operations management;
http://www.sciencedirect.com/science/article/pii/S0377221714009850
Altendorfer, Klaus
Minner, Stefan
oai:RePEc:eee:ejores:v:244:y:2015:i:1:p:277-288
2015-03-26
RePEc:eee:ejores
RePEc:eee:ejores:v:244:y:2015:i:1:p:277-288
article
Generalized analytic network process
The analytic network process (ANP) is a methodology for multi-criteria decision making used to derive priorities of the compared elements in a network hierarchy, where the dependences and feedback within and between the elements can be considered. However, the ANP accepts input preferences only as crisp judgments, which is often unfavorable in practical applications. As an extension of the ANP, a generalized analytic network process (G-ANP) is developed to allow multiple forms of preferences, such as crisp (fuzzy) judgments, interval (interval fuzzy) judgments, hesitant (hesitant fuzzy) judgments and stochastic (stochastic fuzzy) judgments. In the G-ANP, a concept of complex comparison matrices (CCMs) is developed to collect decision makers’ preferences in the multiple forms. From a stochastic point of view, we develop an eigenvector method based stochastic preference method (EVM-SPM) to derive priorities from CCMs. The main steps of the G-ANP are summarized, and the implementation of the G-ANP in Matlab and Excel environments is described in detail, which also serves as a prototype for a decision support system. A real-life example of piracy risk assessment for the energy channels of China is presented to demonstrate the G-ANP.
Decision support systems; Decision analysis; Distribution; Simulation;
http://www.sciencedirect.com/science/article/pii/S0377221715000314
Zhu, Bin
Xu, Zeshui
Zhang, Ren
Hong, Mei
oai:RePEc:eee:ejores:v:244:y:2015:i:2:p:457-470
2015-03-26
RePEc:eee:ejores
RePEc:eee:ejores:v:244:y:2015:i:2:p:457-470
article
Metaheuristics for the risk-constrained cash-in-transit vehicle routing problem
This paper proposes a variant of the well-known capacitated vehicle routing problem that models the routing of vehicles in the cash-in-transit industry by introducing a risk constraint. In the Risk-constrained Cash-in-Transit Vehicle Routing Problem (RCTVRP), the risk of being robbed, which is assumed to be proportional both to the amount of cash being carried and the time or the distance covered by the vehicle carrying the cash, is limited by a risk threshold. A library containing two sets of instances for the RCTVRP, some with known optimal solution, is generated. A mathematical formulation is developed and small instances of the problem are solved by using IBM CPLEX. Four constructive heuristics as well as a local search block composed of six local search operators are developed and combined using two different metaheuristic structures: a multistart heuristic and a perturb-and-improve structure. In a statistical experiment, the best parameter settings for each component are determined, and the resulting heuristic configurations are compared in their best possible setting. The resulting metaheuristics are able to obtain solutions of excellent quality in very limited computing times.
Metaheuristics; Vehicle routing; Risk management; Cash-in-transit; Combinatorial optimization;
http://www.sciencedirect.com/science/article/pii/S0377221715000600
Talarico, Luca
Sörensen, Kenneth
Springael, Johan
oai:RePEc:eee:ejores:v:214:y:2011:i:1:p:67-77
2015-03-26
RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:1:p:67-77
article
Compact bidding languages and supplier selection for markets with economies of scale and scope
Combinatorial auctions have been used in procurement markets with economies of scope. Preference elicitation is already a problem in single-unit combinatorial auctions, but it becomes prohibitive even for small instances of multi-unit combinatorial auctions, as suppliers cannot be expected to enumerate a sufficient number of bids that would allow an auctioneer to find the efficient allocation. Auction design for markets with economies of scale and scope is much less well understood. Such markets require more compact and yet expressive bidding languages, and the supplier selection typically is a hard computational problem. In this paper, we propose a compact bidding language to express the characteristics of a supplier's cost function in markets with economies of scale and scope. Bidders in these auctions can specify various discounts and markups on overall spend on all items or selected item sets, and specify complex conditions for these pricing rules. We propose an optimization formulation to solve the resulting supplier selection problem and provide an extensive experimental evaluation. We also discuss the impact of different language features on the computational effort, on total spend, and on the knowledge representation of the bids. Interestingly, while in most settings volume discount bids can lead to significant cost savings, some types of volume discount bids can be worse than split-award auctions in simple settings.
Decision support systems; Auctions/bidding; E-commerce;
http://www.sciencedirect.com/science/article/pii/S0377221711003109
Bichler, Martin
Schneider, Stefan
Guler, Kemal
Sayal, Mehmet
oai:RePEc:eee:ejores:v:243:y:2015:i:3:p:1004-1015
2015-03-26
RePEc:eee:ejores
RePEc:eee:ejores:v:243:y:2015:i:3:p:1004-1015
article
Analytical debiasing of corporate cash flow forecasts
We propose and empirically test statistical approaches to debiasing judgmental corporate cash flow forecasts. Accuracy of cash flow forecasts plays a pivotal role in corporate planning as liquidity and foreign exchange risk management are based on such forecasts. Surprisingly, to our knowledge there is no previous empirical work on the identification, statistical correction, and interpretation of prediction biases in large enterprise financial forecast data in general, and cash flow forecasting in particular. Employing a unique set of empirical forecasts delivered by 34 legal entities of a multinational corporation over a multi-year period, we compare different forecast correction techniques such as Theil’s method and approaches employing robust regression, both with various discount factors. Our findings indicate that rectifiable mean as well as regression biases exist for all business divisions of the company and that statistical correction increases forecast accuracy significantly. We show that the parameters estimated by the models for different business divisions can also be related to the characteristics of the business environment and provide valuable insights for corporate financial controllers to better understand, quantify, and feedback the biases to the forecasters aiming to systematically improve predictive accuracy over time.
Analytics; Judgmental forecasting; Forecast bias correction; Cash flow forecasting;
http://www.sciencedirect.com/science/article/pii/S0377221714010534
Blanc, Sebastian M.
Setzer, Thomas
oai:RePEc:eee:ejores:v:244:y:2015:i:2:p:490-497
2015-03-26
RePEc:eee:ejores
RePEc:eee:ejores:v:244:y:2015:i:2:p:490-497
article
An approximate policy for a dual-sourcing inventory model with positive lead times and binomial yield
This paper studies the inventory system of a retailer who orders his products from two supply sources, a local one that is responsive and reliable, but expensive, and a global one that is low-cost but less reliable. The deliveries from the global source only partially satisfy the quality requirements. We model this situation with a dual-sourcing inventory model with positive lead times and random yield. We propose a dual-index order-up-to policy (DOP) based on approximating the inventory model with an unreliable supplier by a sequence of dual-sourcing models with reliable suppliers and suitably modified demand distributions. Numerical results show that the performance of this heuristic is close to that of the optimal DOP. Moreover, we extend the heuristic to models with advance yield information and study its impact on the total inventory costs.
Inventory; Supply chain management; Applied probability; Yield uncertainty; Dual-index order-up-to policy;
http://www.sciencedirect.com/science/article/pii/S0377221715000727
Ju, Wanrong
Gabor, Adriana F.
van Ommeren, J.C.W.
oai:RePEc:eee:ejores:v:243:y:2015:i:2:p:540-546
2015-03-26
RePEc:eee:ejores
RePEc:eee:ejores:v:243:y:2015:i:2:p:540-546
article
Pattern-set generation algorithm for the one-dimensional cutting stock problem with setup cost
The primary objective in the one-dimensional cutting stock problem is to minimize material cost. In real applications it is often necessary to consider auxiliary objectives, one of which is to reduce the number of different cutting patterns (setups). This paper first presents an integer linear programming model to minimize the sum of material and setup costs over a given pattern set, and then describes a sequential grouping procedure to generate the patterns in the set. Two sets of benchmark instances are used in the computational test. The results indicate that the approach is efficient in improving the solution quality.
Cutting; Cutting stock; One-dimensional cutting; Pattern reduction; Setup cost;
http://www.sciencedirect.com/science/article/pii/S0377221714010169
Cui, Yaodong
Zhong, Cheng
Yao, Yi
oai:RePEc:eee:ejores:v:244:y:2015:i:2:p:624-636
2015-03-26
RePEc:eee:ejores
RePEc:eee:ejores:v:244:y:2015:i:2:p:624-636
article
Modeling the dynamics of a multi-product manufacturing system: A real case application
In this paper, a continuous multi-product model is developed to represent the shop floor dynamics of a job shop, based on dynamic modeling and on analogies to electrical components. This approach allows the mathematical formulation of the model (state representation) and the analysis of its dynamic response via simulation. A real case application in the textile industry is presented. Thus, this research contributes in the following ways: first, proposing a model that is suitable for multi-product systems with intricate job shop configuration and that is generalizable to various manufacturing systems; second, presenting a real case application of the proposed model. As practical implications, it provides production managers and practitioners with a prescriptive decision model that considers the dynamics of the production systems and the interdependencies of the decisions made on the shop floor. From the academic perspective, it contributes to the existing literature by presenting the application of an alternative modeling methodology, and by extending this methodology to manufacturing systems with multiple products, instead of single-product systems. Continuous models such as the one proposed can benefit from a wide range of tools for system analysis and control design that come from control theory. Although these tools have been extensively applied to model the supply chain, applications devoted to the plant level seem to have been neglected in recent years. This model also aims to contribute in this direction.
Control; Manufacturing; Dynamic modeling; Simulation; Job shop;
http://www.sciencedirect.com/science/article/pii/S0377221715000375
Sagawa, Juliana Keiko
Nagano, Marcelo Seido
oai:RePEc:eee:ejores:v:213:y:2011:i:3:p:551-5582015-03-26RePEc:eee:ejores
RePEc:eee:ejores:v:213:y:2011:i:3:p:551-558
article
A trading mechanism contingent on several indices
We introduce a trading mechanism where the execution of an order on a security can be made contingent on the relation between the clearing price of the security and the clearing price of one or several indices. A mechanism similar to ours, but limited to only one index, was implemented on the Tel Aviv Stock Exchange. We argue that it is in some cases crucial to make the execution of an order contingent on several indices. Our mechanism consists of a particular implementation of a double-sided multi-unit combinatorial auction with substitutes (or DMCS auction), which we introduced in an earlier article.
Trading systems; Limit orders; Market microstructure;
http://www.sciencedirect.com/science/article/pii/S0377221711002682
Schellhorn, Henry
oai:RePEc:eee:ejores:v:243:y:2015:i:3:p:865-8732015-03-26RePEc:eee:ejores
RePEc:eee:ejores:v:243:y:2015:i:3:p:865-873
article
Start-up demonstration tests with sparse connection
Based on the concept of sparse connection, three start-up demonstration tests with sparse connection are introduced, called CSTF with sparse d1, TSCF with sparse d2, and CSCF with sparse d3 and d4. The traditional start-up demonstration tests such as CSTF, TSCF and CSCF are special cases of these new tests. Furthermore, the new tests exhibit obvious improvement in test efficiency. In this paper, by using the finite Markov chain imbedding approach, several probabilistic indexes are given for these new start-up demonstration tests under the assumption that the tests are i.i.d. The analyses are also extended to the independent but non-identical and the Markov-dependent cases. In addition, procedures are provided in order to determine the optimal parameters needed in a demonstration test for selecting the products to meet the reliability requirement. Three comparison analyses are finally presented in order to illustrate the high efficiency of these new start-up demonstration tests and the effectiveness of this method.
Reliability; Start-up demonstration tests; Sparse connection; Finite Markov chain imbedding approach;
http://www.sciencedirect.com/science/article/pii/S037722171500003X
Zhao, Xian
Wang, Xiaoyue
Sun, Ge
oai:RePEc:eee:ejores:v:244:y:2015:i:2:p:637-6472015-03-26RePEc:eee:ejores
RePEc:eee:ejores:v:244:y:2015:i:2:p:637-647
article
Revenue management for Cloud computing providers: Decision models for service admission control under non-probabilistic uncertainty
Cloud computing promises the flexible delivery of computing services in a pay-as-you-go manner. It allows customers to easily scale their infrastructure and save on the overall cost of operation. However, Cloud service offerings can only thrive if customers are satisfied with service performance. Allowing instantaneous access and flexible scaling while maintaining the service levels and offering competitive prices poses a significant challenge to Cloud computing providers. Furthermore, services will remain available in the long run only if this business generates a stable revenue stream. To address these challenges, we introduce novel policy-based service admission control models that aim at maximizing the revenue of Cloud providers while taking informational uncertainty regarding resource requirements into account. Our evaluation shows that policy-based approaches statistically significantly outperform first-come-first-served approaches, which are still state of the art. Furthermore, the results give insights into how and to what extent uncertainty has a negative impact on revenue.
Admission control; Informational uncertainty; Revenue management; Cloud computing;
http://www.sciencedirect.com/science/article/pii/S0377221715000478
Püschel, Tim
Schryen, Guido
Hristova, Diana
Neumann, Dirk
oai:RePEc:eee:ejores:v:244:y:2015:i:1:p:3-122015-03-26RePEc:eee:ejores
RePEc:eee:ejores:v:244:y:2015:i:1:p:3-12
article
A Lagrangian heuristic for an integrated lot-sizing and fixed scheduling problem
This paper presents a novel approach for solving an integrated production planning and scheduling problem. In theory as well as in practice, because of their complexity, these two decision levels are most of the time treated sequentially. Scheduling largely depends on the production quantities (lot sizes) computed at the production planning level, and ignoring scheduling constraints in planning leads to inconsistent decisions. Integrating production planning and scheduling is therefore important for efficiently managing operations. An integrated model and an iterative solution procedure were proposed in earlier research papers: The approach has limitations, in particular when solving the planning problem. In this paper, a new formulation is proposed to determine a feasible optimal production plan, i.e., lot sizes, for a fixed sequence of operations on the machines when setup costs and times are taken into account. Capacity constraints correspond to paths of the conjunctive graph associated with the sequence. An original Lagrangian relaxation approach is proposed to solve this NP-hard problem. A lower bound is derived and an upper bound is calculated using a novel constructive heuristic. The quality of the approach is tested on numerous problem instances.
Production planning; Lot sizing; Scheduling; Lagrangian relaxation; Heuristic;
http://www.sciencedirect.com/science/article/pii/S0377221715000545
Wolosewicz, Cathy
Dauzère-Pérès, Stéphane
Aggoune, Riad
oai:RePEc:eee:ejores:v:244:y:2015:i:2:p:555-5642015-03-26RePEc:eee:ejores
RePEc:eee:ejores:v:244:y:2015:i:2:p:555-564
article
PartiSim: A multi-methodology framework to support facilitated simulation modelling in healthcare
Discrete event simulation (DES) studies in healthcare are thought to benefit from stakeholder participation during the study lifecycle. This paper reports on a multi-methodology framework, called PartiSim, which is intended to support participative simulation studies. PartiSim combines DES, a traditionally hard OR approach, with soft systems methodology (SSM) in order to incorporate stakeholder involvement in the study lifecycle. The framework consists of a number of prescribed activities and outputs as part of the stages involved in the simulation lifecycle, which include study initiation, finding out about the problem, defining a conceptual model, model coding, experimentation and implementation. In PartiSim, four of these stages involve facilitated workshops with a group of stakeholders. We explain the organisation of workshops, the key roles assigned to analysts and stakeholders, and how facilitation is embedded in the framework. We discuss our experience of using the framework, provide guidance on when to use it and conclude with future research directions.
Problem structuring; Facilitated modelling; Simulation; Multi-methodology framework; Healthcare;
http://www.sciencedirect.com/science/article/pii/S0377221715000661
Tako, Antuela A.
Kotiadis, Kathy
oai:RePEc:eee:ejores:v:244:y:2015:i:2:p:471-4892015-03-26RePEc:eee:ejores
RePEc:eee:ejores:v:244:y:2015:i:2:p:471-489
article
Channel and pricing decisions in a supply chain with advance selling of gift cards
Many service providers, such as restaurants, are selling their gift cards through independent retailers. We analyze a supply chain of a service provider who sells products and gift cards at face value at its locations. The service provider also sells its gift cards through a retailer. Consumers may buy gift cards from the service provider or the retailer for their own use and/or to use as gifts. Consumers may be customers of both the service provider and retailer (Dual), only the service provider (SP-only), or only the retailer (Retailer-only). We find that, under a large enough gift card redemption rate and no gift-givers, it is sub-optimal for a service provider to sell gift cards through a retailer. When there are some Retailer-only gift-givers, it is optimal for the service provider to sell gift cards through a retailer. We identify threshold redemption rates at which it is optimal for a service provider to sell gift cards through an independent retailer to different consumer segments. We also find that the SP may not always prefer a low redemption rate and, for some service providers with large additional spending rates above the redeemed gift cards’ value, profit may increase with the redemption rate. Also, centralization in the SP–retailer supply chain in this paper may lead to only a small increase in profits. Numerical analyses indicate that the redemption rate needed to make it optimal for the service provider to sell gift cards to all consumers through a retailer is unlikely to occur in practice.
Pricing; Analytical modeling; Channel management;
http://www.sciencedirect.com/science/article/pii/S037722171500065X
Khouja, Moutaz
Zhou, Jing
oai:RePEc:eee:ejores:v:243:y:2015:i:3:p:815-8252015-03-26RePEc:eee:ejores
RePEc:eee:ejores:v:243:y:2015:i:3:p:815-825
article
Analysis of the bullwhip effect in two parallel supply chains with interacting price-sensitive demands
This paper offers insights into how the bullwhip effect in two parallel supply chains with interacting price-sensitive demands is affected in contrast to the situation of a single product in a serial supply chain. In particular, this research studies two parallel supply chains, each consisting of a manufacturer and a retailer, and the external demand for a single product depends on its price and the other's price in a situation in which each price follows a first-order autoregressive process. In this paper, we propose an analytical framework that incorporates two parallel supply chains, and we explore their interactions to determine the bullwhip effect. We identify the conditions under which the bullwhip effect is amplified or lessened with interacting price-sensitive demands relative to the situation without interaction.
Supply chain management; Bullwhip effect; Multiple supply chains; Interaction; Price-sensitive;
http://www.sciencedirect.com/science/article/pii/S0377221714010613
Ma, Yungao
Wang, Nengmin
He, Zhengwen
Lu, Jizhou
Liang, Huigang
oai:RePEc:eee:ejores:v:244:y:2015:i:1:p:309-3212015-03-26RePEc:eee:ejores
RePEc:eee:ejores:v:244:y:2015:i:1:p:309-321
article
Interdependent network restoration: On the value of information-sharing
We consider restoring multiple interdependent infrastructure networks after a disaster damages components in them and disrupts the services provided by them. Our particular focus is on interdependent infrastructure restoration (IIR) where both the operations and the restoration of the infrastructures are linked across systems. We provide new mathematical formulations of restoration interdependencies in order to incorporate them into an interdependent integrated network design and scheduling (IINDS) problem. The IIR efforts resulting from solving this IINDS problem model a centralized decision-making environment where a single decision-maker controls the resources of all infrastructures. In reality, individual infrastructures often determine their restoration efforts in an independent, decentralized manner with little communication among them. We provide algorithms to model various levels of decentralization in IIR efforts. These algorithms are applied to realistic damage scenarios for interdependent infrastructure systems in order to determine the loss in restoration effectiveness resulting from decentralized decision-making. Our computational tests demonstrate that this loss can be greatly mitigated by having infrastructures share information about their planned restoration efforts.
OR in societal problem analysis; OR in disaster relief; Interdependent infrastructure restoration;
http://www.sciencedirect.com/science/article/pii/S0377221714010698
Sharkey, Thomas C.
Cavdaroglu, Burak
Nguyen, Huy
Holman, Jonathan
Mitchell, John E.
Wallace, William A.
oai:RePEc:eee:ejores:v:243:y:2015:i:2:p:423-4412015-03-26RePEc:eee:ejores
RePEc:eee:ejores:v:243:y:2015:i:2:p:423-441
article
Preference-inspired co-evolutionary algorithms using weight vectors
Decomposition based algorithms perform well when a suitable set of weights is provided; however, determining a good set of weights a priori for real-world problems is usually not straightforward due to a lack of knowledge about the geometry of the problem. This study proposes a novel algorithm called preference-inspired co-evolutionary algorithm using weights (PICEA-w) in which weights are co-evolved with candidate solutions during the search process. The co-evolution enables suitable weights to be constructed adaptively during the optimisation process, thus guiding candidate solutions towards the Pareto optimal front effectively. The benefits of co-evolution are demonstrated by comparing PICEA-w against other leading decomposition based algorithms that use random, evenly distributed and adaptive weights on a set of problems encompassing the range of problem geometries likely to be seen in practice, including simultaneous optimisation of up to seven conflicting objectives. Experimental results show that PICEA-w outperforms the comparison algorithms for most of the problems and is less sensitive to the problem geometry.
Evolutionary algorithms; Multi-objective optimisation; Many-objective; Co-evolution; Weights;
http://www.sciencedirect.com/science/article/pii/S0377221714004263
Wang, Rui
Purshouse, Robin C.
Fleming, Peter J.
oai:RePEc:eee:ejores:v:243:y:2015:i:2:p:345-3462015-03-26RePEc:eee:ejores
RePEc:eee:ejores:v:243:y:2015:i:2:p:345-346
article
Feature Cluster on “Evolutionary multiobjective optimization”
http://www.sciencedirect.com/science/article/pii/S0377221714010170
Brockhoff, Dimo
Derbel, Bilel
Liefooghe, Arnaud
Verel, Sébastien
oai:RePEc:eee:ejores:v:244:y:2015:i:2:p:339-3592015-03-26RePEc:eee:ejores
RePEc:eee:ejores:v:244:y:2015:i:2:p:339-359
article
Airplane boarding
The time required to board an airplane directly influences an airplane’s turn-around time, i.e., the time that the airplane requires at the gate between two flights. Thus, the turn-around time can be reduced by using efficient boarding methods, and such actions may also result in cost savings. The main contribution of this paper is fourfold. First, we provide a general problem description including partly established and partly new definitions of relevant terms. Next, we survey boarding methods known from theory and practice and provide a corresponding classification scheme. Third, we present a broad overview on the current literature in this field, describe the 12 most relevant papers in detail, and juxtapose their results. Fourth, we summarize the state-of-the-art of research in this field, showing, e.g., that the commonly used back-to-front strategy generally requires more time than other easy-to-implement strategies such as random boarding. Further concepts and approaches that can help speed up the boarding process are also presented and these can be studied in future research.
Aviation; Aircraft boarding; Aircraft turnaround; Sequencing;
http://www.sciencedirect.com/science/article/pii/S0377221714009904
Jaehn, Florian
Neumann, Simone
oai:RePEc:eee:ejores:v:214:y:2011:i:1:p:136-1462015-03-26RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:1:p:136-146
article
Trade credit for supply chain coordination
Trade-credit is a seller's short-term loan to the buyer, allowing the buyer to delay payment of an invoice. It has been the largest source of working capital for a majority of business-to-business firms in the United States. Numerous theories have been proposed to explain trade-credit, mainly from finance perspectives. It has also been an important issue in supply chain management. Surprisingly, most literature in supply chain management has examined the retailer's stocking policies given a supplier's trade-credit. This paper attempts to shed light on trade-credit from a supplier's perspective, and presents it as a tool for supply chain coordination. Specifically, we explicitly assume firms' financial needs for inventory. Following a Newsvendor framework, we assume that the supplier grants trade-credit and markdown allowance. Given the supplier's offer, the retailer determines order quantity and the financing option for the inventory, either trade-credit or direct financing from a financial institution. Our result shows that the supplier's markdown allowance alone cannot fully coordinate the supply chain if the retailer employs direct financing. Positive financing costs call for trade-credit in order to subsidize the retailer's costs of inventory financing. Using trade-credit in addition to markdown allowance, the supplier fully coordinates the retailer's decisions for the largest joint profit, and extracts a greater portion of the maximized joint profit.
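The newsvendor logic underlying this framework can be illustrated with the standard critical-fractile rule (a generic sketch with hypothetical cost parameters, not the authors' trade-credit model):

```python
from statistics import NormalDist

def newsvendor_quantity(price, cost, salvage, mu, sigma):
    """Classic newsvendor quantity for Normal(mu, sigma) demand.

    Underage cost = price - cost; overage cost = cost - salvage.
    The optimal order hits the critical fractile of the demand CDF.
    """
    critical_fractile = (price - cost) / (price - salvage)
    return mu + sigma * NormalDist().inv_cdf(critical_fractile)

# Hypothetical numbers: retail price 10, wholesale cost 6, salvage 2,
# demand ~ Normal(100, 20); the fractile is 0.5, so the order is the mean.
q = newsvendor_quantity(10, 6, 2, 100, 20)
```

Trade credit and markdown allowances shift the retailer's effective underage and overage costs, which is the channel through which the supplier can steer the retailer's order quantity toward the coordinated one.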
Finance; Trade-credit; Inventory financing; Supply chain coordination; Newsvendor framework;
http://www.sciencedirect.com/science/article/pii/S0377221711003171
Lee, Chang Hwan
Rhee, Byong-Duk
oai:RePEc:eee:ejores:v:214:y:2011:i:2:p:331-3392015-03-26RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:2:p:331-339
article
Optimal Bayesian fault prediction scheme for a partially observable system subject to random failure
A new method for predicting failures of a partially observable system is presented. System deterioration is modeled as a hidden, 3-state continuous time homogeneous Markov process. States 0 and 1, which are not observable, represent good and warning conditions, respectively. Only the failure state 2 is assumed to be observable. The system is subject to condition monitoring at equidistant, discrete time epochs. The vector observation process is stochastically related to the system state. The objective is to develop a method for optimally predicting impending system failures. Model parameters are estimated using the EM algorithm, and a cost-optimal Bayesian fault prediction scheme is proposed. The method is illustrated using real data obtained from spectrometric analysis of oil samples collected at regular time epochs from transmission units of heavy hauler trucks used in the mining industry. A comparison with other methods is given, which illustrates the effectiveness of our approach.
Maintenance; Stochastic optimization; Failure prediction; Hidden Markov modeling; Multivariate Bayesian control;
http://www.sciencedirect.com/science/article/pii/S0377221711003675
Kim, Michael Jong
Jiang, Rui
Makis, Viliam
Lee, Chi-Guhn
oai:RePEc:eee:ejores:v:243:y:2015:i:2:p:607-6172015-03-26RePEc:eee:ejores
RePEc:eee:ejores:v:243:y:2015:i:2:p:607-617
article
Controlling the workload of M/G/1 queues via the q-policy
We consider a single-server queueing system with Poisson arrivals and generally distributed service times. To systematically control the workload of the queue, we define for each busy period an associated timer process, {R(t), t ≥ 0}, where R(t) represents the time remaining before the system is closed to potential arrivals. The process {R(t), t ≥ 0} is similar to the well-known workload process, in that it decreases at unit rate and consists of up-jumps at the arrival instants of admitted customers. However, if X represents the service requirement of an admitted customer, then the magnitude of the up-jump for the timer process occurring at the arrival instant of this customer is (1 − q)X for a fixed q ∈ [0, 1]. Consequently, there will be an instant in time within the busy period when the timer process hits level zero, at which point the system immediately closes and will remain closed until the end of the current busy period. We refer to this particular blocking policy as the q-policy. In this paper, we employ a level crossing analysis to derive the Laplace–Stieltjes transform (LST) of the steady-state waiting time distribution of serviceable customers. We conclude the paper with a numerical example which shows that controlling arrivals in this fashion can be beneficial.
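The blocking rule described above can be seen in action with a rough discrete-event sketch for exponential service times (an illustration of the q-policy as stated in the abstract, not the paper's level crossing analysis; the assumption that the timer starts each busy period at zero and jumps with the first admitted arrival is ours):

```python
import random

def simulate_q_policy(lam, mu, q, horizon, seed=7):
    """Sketch of an M/M/1 queue under the q-policy: the timer R(t)
    drains at unit rate and jumps by (1 - q) * X at each admitted
    arrival; once it hits zero, arrivals are blocked for the rest
    of the current busy period."""
    rng = random.Random(seed)
    t = workload = timer = 0.0
    admitted = blocked = 0
    while t < horizon:
        gap = rng.expovariate(lam)
        t += gap
        drained = min(gap, workload)   # the server works only while busy
        workload -= drained
        timer = max(0.0, timer - drained)
        x = rng.expovariate(mu)        # service requirement of this arrival
        if workload == 0.0 or timer > 0.0:
            workload += x              # admitted: join the queue
            timer += (1.0 - q) * x
            admitted += 1
        else:
            blocked += 1               # system closed until the busy period ends
    return admitted, blocked
```

The two extremes behave as the definition suggests: with q = 0 the timer tracks the workload exactly and nothing is ever blocked, while with q = 1 only the customer who opens each busy period is served.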
Queueing; Customer blocking; Level crossing analysis; M/G/1 queue with accumulating priority;
http://www.sciencedirect.com/science/article/pii/S0377221714010546
Fajardo, Val Andrei
Drekic, Steve
oai:RePEc:eee:ejores:v:244:y:2015:i:1:p:164-1752015-03-26RePEc:eee:ejores
RePEc:eee:ejores:v:244:y:2015:i:1:p:164-175
article
Product contamination in a multi-stage food supply chain
Food product contamination has potentially devastating effects on companies and supply chains. However, the impact of contamination has still not been thoroughly studied from a supply chain planning perspective. This paper models a contamination event in a generic food supply chain consisting of suppliers, processing centers, and retailers. Contamination is detected through either company or government agency sampling tests or through reports of a food borne illness. In this research, we analyze the impact of origin and choice of sampling strategies, and product and supply chain attributes on a contamination event. We also simulate a real-world tomato contamination case to gain further insights.
Supply chain management; Contamination; Food supply network;
http://www.sciencedirect.com/science/article/pii/S0377221715000363
Chebolu-Subramanian, Vijaya
Gaukler, Gary M.
oai:RePEc:eee:ejores:v:243:y:2015:i:3:p:798-8142015-03-26RePEc:eee:ejores
RePEc:eee:ejores:v:243:y:2015:i:3:p:798-814
article
A variable neighborhood search for the capacitated vehicle routing problem with two-dimensional loading constraints
This paper addresses the capacitated vehicle routing problem with two-dimensional loading constraints (2L-CVRP), which is a generalized capacitated vehicle routing problem in which customer demand is a set of two-dimensional, rectangular, weighted items. The objective is to design the route set of minimum cost for a homogenous fleet of vehicles, starting and terminating at a central depot, to serve all the customers. All the items packed in one vehicle must satisfy the two-dimensional orthogonal packing constraints. A variable neighborhood search is proposed to address the routing aspect, and a skyline heuristic is adapted to examine the loading constraints. To speed up the search process, an efficient data structure (Trie) is utilized not only to record the loading feasibility information of routes, but also to control the computational effort the skyline heuristic spends on the same route. The effectiveness of our approach is verified through experiments on widely used benchmark instances involving two distinct versions of loading constraints (unrestricted and sequential versions). Numerical experiments show that the proposed method outperforms all existing methods and improves or matches the majority of best known solutions for both problem versions.
Routing; Packing; Variable neighborhood search; Skyline heuristic; 2L-CVRP;
http://www.sciencedirect.com/science/article/pii/S0377221714010662
Wei, Lijun
Zhang, Zhenzhen
Zhang, Defu
Lim, Andrew
oai:RePEc:eee:ejores:v:214:y:2011:i:2:p:284-2972015-03-26RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:2:p:284-297
article
Equilibrium and optimal strategies to join a queue with partial information on service times
In this paper, we study customer equilibrium as well as socially optimal strategies to join a queue with only partial information on the service time distribution, such as its moments and range. Based on such partial information, customers adopt the entropy-maximization principle to estimate their expected waiting cost and decide whether to join or balk. We find that more information encourages customers to join the queue. Moreover, it is beneficial for decision makers to convey partial information to customers when maximizing welfare, but to reveal full information when maximizing profit.
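The entropy-maximization step can be made concrete. Given only the range [0, b] and the mean of the service time, the maximum-entropy density is a tilted exponential f(x) ∝ exp(θx) on [0, b], with θ chosen to match the mean. A minimal sketch of that calibration (bisection on the monotone mean map; an illustration of the principle, not the paper's model):

```python
import math

def max_entropy_tilt(b, mean, tol=1e-10):
    """Return theta such that f(x) = C * exp(theta * x) on [0, b]
    has the given mean; the mean is strictly increasing in theta."""
    def mean_of(theta):
        if abs(theta) < 1e-9:            # near-zero tilt: uniform density
            return b / 2.0
        # E[X] = b / (1 - exp(-theta*b)) - 1/theta for the tilted density
        return b / (1.0 - math.exp(-theta * b)) - 1.0 / theta
    lo, hi = -50.0 / b, 50.0 / b         # brackets means from ~0.02b to ~0.98b
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean_of(mid) < mean:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

A customer can then evaluate the expected waiting cost against this least-biased density; each additional piece of information (another moment, a tighter range) refines the estimate, which is the sense in which "more information" changes the joining decision.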
Queueing; Partial information; Equilibrium; Joining/balking behavior; Entropy maximization;
http://www.sciencedirect.com/science/article/pii/S0377221711003523
Guo, Pengfei
Sun, Wei
Wang, Yulan
oai:RePEc:eee:ejores:v:244:y:2015:i:1:p:13-252015-03-26RePEc:eee:ejores
RePEc:eee:ejores:v:244:y:2015:i:1:p:13-25
article
Serial batching scheduling of deteriorating jobs in a two-stage supply chain to minimize the makespan
This paper investigates the coordinated scheduling problem of production and transportation in a two-stage supply chain, where the actual job processing time is a linear function of its starting time. During the production stage, the jobs are first processed in serial batches on a bounded serial batching machine at the manufacturer's site. Then, the batches are delivered to a customer by a single vehicle with limited capacity during the transportation stage, and the vehicle can only deliver one batch at a time. The objective of this proposed scheduling problem is to make decisions on job batching and batch sequencing so as to minimize the makespan. Moreover, we consider two different models. With regard to the scheduling model with a buffer for storing the processed batches before transportation, we develop an optimal algorithm to solve it. For the scheduling model without a buffer, we present some useful properties and develop a heuristic H for solving it. Then a novel lower bound is derived and two optimal algorithms are designed for solving two special cases. Furthermore, computational experiments with random instances of different sizes are conducted to evaluate the proposed heuristic H, and the results show that our proposed algorithm is superior to the other four approaches in the literature. Moreover, heuristic H can effectively and efficiently solve both small-size and large-size problems in a reasonable time.
Batch scheduling; Supply chain; Deterioration; Transportation; Heuristic;
http://www.sciencedirect.com/science/article/pii/S0377221714009576
Pei, Jun
Pardalos, Panos M.
Liu, Xinbao
Fan, Wenjuan
Yang, Shanlin
oai:RePEc:eee:ejores:v:213:y:2011:i:2:p:405-4132015-03-26RePEc:eee:ejores
RePEc:eee:ejores:v:213:y:2011:i:2:p:405-413
article
A new approach to multi-criteria sorting based on fuzzy outranking relations: The THESEUS method
In this paper, we propose the THESEUS method, a new approach to multi-criteria sorting problems based on fuzzy outranking relations. Compared with other outranking-based methods, THESEUS is inspired by a different view of multi-criteria classification problems. It utilizes a new way of evaluating the assignment of an object to an element of a set of previously defined ordered categories. This evaluation is based on comparing every possible assignment with the information from various preference relations that are derived from a fuzzy outranking relation defined on the universe of objects. The appropriate assignment is determined by solving a simple selection problem. The capacity of a reference set for making appropriate assignments is related to a good characterization of the categories: a single reference action characterizing a category may be insufficient to achieve well-determined assignments. In this paper, the capacity of the reference set to perform appropriate assignments is characterized by some new concepts, and it may be increased when more objects are added to the reference set. THESEUS is thus a method for handling the preference information contained in such larger reference sets.
Multiple criteria analysis; Sorting; Outranking methods;
http://www.sciencedirect.com/science/article/pii/S0377221711002736
Fernandez, Eduardo
Navarro, Jorge
oai:RePEc:eee:ejores:v:213:y:2011:i:3:p:498-5082015-03-26RePEc:eee:ejores
RePEc:eee:ejores:v:213:y:2011:i:3:p:498-508
article
Approximate analysis of non-stationary loss queues and networks of loss queues with general service time distributions
A Fixed Point Approximation (FPA) method has recently been suggested for non-stationary analysis of loss queues and networks of loss queues with Exponential service times. Deriving exact equations relating time-dependent mean numbers of busy servers to blocking probabilities, we generalize the FPA method to loss systems with general service time distributions. These equations are combined with associated formulae for stationary analysis of loss systems in steady state through a carried load to offered load transformation. The accuracy and speed of the generalized methods are illustrated through a wide set of examples.
Queueing; Erlang loss model; Time-dependent arrival rate; Carried load;
http://www.sciencedirect.com/science/article/pii/S0377221711002402
Izady, N.
Worthington, D.
oai:RePEc:eee:ejores:v:244:y:2015:i:2:p:379-3912015-03-26RePEc:eee:ejores
RePEc:eee:ejores:v:244:y:2015:i:2:p:379-391
article
The discrete time window assignment vehicle routing problem
In this paper we introduce the discrete time window assignment vehicle routing problem (DTWAVRP), which can be viewed as a two-stage stochastic optimization problem. Given a set of customers that must be visited on the same day regularly within some period of time, the first-stage decisions are to assign to each customer a time window from a set of candidate time windows before demand is known. In the second stage, when demand is revealed for each day of the time period, vehicle routes satisfying vehicle capacity and the assigned time windows are constructed. The objective of the DTWAVRP is to minimize the expected total transportation cost. To solve this problem, we develop an exact branch-price-and-cut algorithm and derive from it five column generation heuristics that can solve larger instances than the exact algorithm. We illustrate the performance of these algorithms by means of computational experiments performed on randomly generated instances.
Vehicle routing; Time window assignment; Column generation; Uncertain demand;
http://www.sciencedirect.com/science/article/pii/S0377221715000405
Spliet, Remy
Desaulniers, Guy
oai:RePEc:eee:ejores:v:214:y:2011:i:1:p:168-1782015-03-26RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:1:p:168-178
article
An optimization approach to gene stacking
We present a multi-objective integer programming model for the gene stacking problem, which is to bring desirable alleles found in multiple inbred lines to a single target genotype. Pareto optimal solutions from the model provide strategic stacking schemes to maximize the likelihood of successfully creating the target genotypes and to minimize the number of generations associated with a stacking strategy. A consideration of genetic diversity is also incorporated in the models to preserve all desirable allelic variants in the target population. Although the gene stacking problem is proved to be NP-hard, we have been able to obtain Pareto frontiers for smaller-sized instances within one minute using state-of-the-art commercial solvers in our computational experiments.
Gene stacking; Multi-objective optimization; Pareto frontier; Integer programming;
http://www.sciencedirect.com/science/article/pii/S0377221711003559
Xu, Pan
Wang, Lizhi
Beavis, William D.
oai:RePEc:eee:ejores:v:244:y:2015:i:2:p:611-623 2015-03-26 RePEc:eee:ejores
RePEc:eee:ejores:v:244:y:2015:i:2:p:611-623
article
Genetic algorithms for condition-based maintenance optimization under uncertainty
This paper proposes and compares different techniques for maintenance optimization based on Genetic Algorithms (GAs), when the parameters of the maintenance model are affected by uncertainty and the fitness values are represented by Cumulative Distribution Functions (CDFs). The main issues addressed to tackle this problem are the development of a method to rank the uncertain fitness values, and the definition of a novel Pareto dominance concept. The GA-based methods are applied to a practical case study concerning the setting of a condition-based maintenance policy on the degrading nozzles of a gas turbine operated in an energy production plant.
Maintenance optimization; Genetic algorithms; Uncertain fitness; Ranking; Pareto dominance;
http://www.sciencedirect.com/science/article/pii/S0377221715000776
Compare, M.
Martini, F.
Zio, E.
oai:RePEc:eee:ejores:v:243:y:2015:i:2:p:576-587 2015-03-26 RePEc:eee:ejores
RePEc:eee:ejores:v:243:y:2015:i:2:p:576-587
article
Inventory sharing and coordination among n independent retailers
Inventory sharing among decentralized retailers has been widely used in practice to improve profitability and reduce risks at the same time. We study the coordination of a decentralized inventory sharing system with n (n > 2) retailers who non-cooperatively determine their order quantities but cooperatively share their inventory. There has been very limited research on coordinating such a system due to the many unique challenges involved, e.g., incomplete residual sharing and the formation of subcoalitions for inventory sharing. In this paper, we develop a coordination mechanism (nRCM) that simultaneously possesses several important properties: it leads to the formation of only the grand coalition, induces complete residual sharing, and ensures that each retailer obtains a higher profit as the system size increases. We also consider the impact of asymmetric demand distribution parameter information on the coordination mechanisms when the retailers privately hold such information. We show that although true coordination requires complete information sharing, under any n-retailer inventory sharing coordination mechanism, retailers may not have incentives to share information with all other retailers and, even if they do, will not share true information. In this regard, nRCM possesses another important property: it can be implemented under asymmetric information, and retailers can obtain profits very close to their first-best profits even if they do not share demand information. These nice properties of nRCM also hold when retailers have correlated demands. This paper is the first to study a coordination mechanism for an n-retailer (n > 2) inventory sharing system under asymmetric information.
Supply chain management; Inventory sharing; Coordination; Asymmetric information; Coalition;
http://www.sciencedirect.com/science/article/pii/S0377221714010510
Yan, Xinghao
Zhao, Hui
oai:RePEc:eee:ejores:v:243:y:2015:i:3:p:697-702 2015-03-26 RePEc:eee:ejores
RePEc:eee:ejores:v:243:y:2015:i:3:p:697-702
article
Dynamic scaling on the limited memory BFGS method
This paper describes a limited-memory quasi-Newton method in which the initial inverse Hessian approximation is constructed based on the concept of equilibration of the inverse Hessian matrix. Curvature information about the objective function is stored in the form of a diagonal matrix, which plays the dual role of providing an initial matrix and of equilibrating the limited memory BFGS (LBFGS) iterations. Extensive numerical testing shows that the proposed diagonal scaling strategy is very effective.
Large scale optimization; Nonlinear programming; Limited memory quasi-Newton methods; Column scaling; Equilibrated matrix;
http://www.sciencedirect.com/science/article/pii/S0377221714010686
Biglari, Fahimeh
oai:RePEc:eee:ejores:v:244:y:2015:i:2:p:662-673 2015-03-26 RePEc:eee:ejores
RePEc:eee:ejores:v:244:y:2015:i:2:p:662-673
article
Integration of RFID and business analytics for trade show exhibitors
Drastic changes in consumer markets over the last decades have increased the pressure and challenges for the trade exhibition industry. Exhibiting organizations demand higher levels of justification for involvement and expect returns on trade show investments. This study proposes an RFID-enabled track and traceability framework to improve information visibility at the trade site. The identification information can potentially create detailed, accurate, and complete visibility of attendees’ movements and purchasing behaviors and consequently lead to considerable analytical benefits. Leveraging the wealth of information made available by RFID is challenging; thus, the objective of this study is to outline how to incorporate RFID data into existing enterprise data to deliver analytical solutions to the trade show and exhibition industry. The results show that the exhibitor can use RFID to gather visitor intelligence and the key findings of this study provide valuable feedback to business analysts to promote follow-up marketing strategies.
RFID; Analytics; Trade show; Exhibition; Traceability;
http://www.sciencedirect.com/science/article/pii/S0377221715000740
Chongwatpol, Jongsawas
oai:RePEc:eee:ejores:v:244:y:2015:i:1:p:261-276 2015-03-26 RePEc:eee:ejores
RePEc:eee:ejores:v:244:y:2015:i:1:p:261-276
article
Assessing and hedging the cost of unseasonal weather: Case of the apparel sector
Retail activities are increasingly exposed to unseasonal weather causing lost sales and profits, as climate change is aggravating climate variability. Although research has provided insights into the role of weather on consumption, little is known about the precise relationship between weather and sales for strategic and financial decision-making. Using apparel as an illustration, for all seasons, we estimate the impact on sales caused by unexpected deviations of daily temperature from seasonal patterns. We apply Seasonal Trend decomposition using Loess to isolate changes in sales volumes. We use a linear regression to find the relationship between temperature and sales anomalies and construct the historical distribution to determine sales-at-risk due to unseasonal weather. We show how to use weather derivatives to offset the potential loss. Our contribution is twofold. We provide a new general method for managers to understand how their performance is weather-related. We lay out a blueprint for tailor-made weather derivatives to mitigate this risk.
Weather sensitivity; Weather risk management; Decision making; Statistical model; Retail sales;
http://www.sciencedirect.com/science/article/pii/S0377221715000326
Bertrand, Jean-Louis
Brusset, Xavier
Fortin, Maxime
oai:RePEc:eee:ejores:v:214:y:2011:i:2:p:358-364 2015-03-26 RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:2:p:358-364
article
Integrated design and operation of remnant inventory supply chains under uncertainty
We consider the simultaneous design and operation of remnant inventory supply chains. Remnant inventory is generated when demand for various lengths of a product may be satisfied by existing inventory, or by cutting a large piece into smaller pieces. We formulate our problem as a two-stage stochastic mixed-integer program. In solving our stochastic program, we enhance the standard L-shaped method in two ways. Our computational experiments demonstrate that these enhancements are effective, dramatically reducing the solution time for large instances.
Remnant inventory; Stochastic programming; Mixed integer programming;
http://www.sciencedirect.com/science/article/pii/S0377221711003833
Rajgopal, Jayant
Wang, Zhouyan
Schaefer, Andrew J.
Prokopyev, Oleg A.
oai:RePEc:eee:ejores:v:243:y:2015:i:3:p:745-751 2015-03-26 RePEc:eee:ejores
RePEc:eee:ejores:v:243:y:2015:i:3:p:745-751
article
Characterization of the split closure via geometric lifting
We analyze split cuts from the perspective of cut generating functions via geometric lifting. We show that α-cuts, a natural higher-dimensional generalization of the k-cuts of Cornuéjols et al., give all the split cuts for the mixed-integer corner relaxation. As an immediate consequence we obtain that the k-cuts are equivalent to split cuts for the 1-row mixed-integer relaxation. Further, we show that split cuts for finite-dimensional corner relaxations are restrictions of split cuts for the infinite-dimensional relaxation. In a final application of this equivalence, we exhibit a family of pure-integer programs whose split closure has arbitrarily bad integrality gap. This complements the mixed-integer example provided by Basu et al. (2011).
Integer programming; Cutting-plane; Split cut; Cut-generating function; Geometric lifting;
http://www.sciencedirect.com/science/article/pii/S0377221714010194
Basu, Amitabh
Molinaro, Marco
oai:RePEc:eee:ejores:v:243:y:2015:i:3:p:995-1003 2015-03-26 RePEc:eee:ejores
RePEc:eee:ejores:v:243:y:2015:i:3:p:995-1003
article
Adoption of an emerging infrastructure with uncertain technological learning and spatial reconfiguration
This paper develops a stylized (or conceptual) system optimization model to analyze the adoption of an emerging infrastructure associated with uncertain technological learning and spatial reconfigurations. The model first assumes that the emerging infrastructure will be implemented for the entire system when it is adopted. With the model, this paper explores (1) how the emerging infrastructure's initial investment cost, technological learning and its uncertainty, market size, and efficiency influence the adoption of the emerging infrastructure and (2) how the efficiency and investment cost of the associated technology (which will be located in a different place with the adoption of the emerging infrastructure) influence the adoption of the emerging infrastructure. Then, this paper extends the model and explores whether it is a better solution to implement the emerging infrastructure for only part of the distance from the resource site to the demand site if its efficiency is a function of the implemented distance. With optimizations under three types of efficiency dynamics, this paper finds that whether the emerging infrastructure should be implemented partly or entirely is determined not by the value of its efficiency but by the dynamics of its efficiency.
Technology adoption; Technological learning; Uncertainties; Spatial reconfiguration;
http://www.sciencedirect.com/science/article/pii/S0377221714010443
Ma, Tieju
Chen, Huayi
oai:RePEc:eee:ejores:v:243:y:2015:i:3:p:852-864 2015-03-26 RePEc:eee:ejores
RePEc:eee:ejores:v:243:y:2015:i:3:p:852-864
article
Clustering financial time series: New insights from an extended hidden Markov model
In recent years, large amounts of financial data have become available for analysis. We propose exploring returns from 21 European stock markets by model-based clustering of regime switching models. These econometric models identify clusters of time series with similar dynamic patterns and moreover allow relaxing assumptions of existing approaches, such as the assumption of conditional Gaussian returns. The proposed model handles simultaneously the heterogeneity across stock markets and over time, i.e., time-constant and time-varying discrete latent variables capture unobserved heterogeneity between and within stock markets, respectively. The results show a clear distinction between two groups of stock markets, each one characterized by different regime switching dynamics that correspond to different expected return-risk patterns. We identify three regimes: the so-called bull and bear regimes, as well as a stable regime with returns close to 0, which turns out to be the most frequently occurring regime. This is consistent with stylized facts in financial econometrics.
Data mining; Hidden Markov model; Stock indexes; Latent class model; Regime-switching model;
http://www.sciencedirect.com/science/article/pii/S0377221714010595
Dias, José G.
Vermunt, Jeroen K.
Ramos, Sofia
oai:RePEc:eee:ejores:v:244:y:2015:i:1:p:300-308 2015-03-26 RePEc:eee:ejores
RePEc:eee:ejores:v:244:y:2015:i:1:p:300-308
article
Directional monotonicity of fusion functions
In this paper we deal with fusion functions, i.e., mappings from [0, 1]^n into [0, 1]. As a generalization of the standard monotonicity and the recently introduced weak monotonicity, we introduce and study the directional monotonicity of fusion functions. For distinguished fusion functions, the sets of all directions in which they are increasing are determined. Moreover, the directional monotonicity of piecewise linear fusion functions is completely characterized. These results cover, among others, weighted arithmetic means, OWA operators, and the Choquet, Sugeno and Shilkret integrals.
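To make the central definition concrete, here is a small Python sketch (not from the paper) that numerically probes whether a fusion function is r-increasing, i.e. whether F(x + c·r) ≥ F(x) whenever the shifted point stays in [0, 1]^n; the example fusion functions, directions, and tolerance are illustrative assumptions.

```python
import random

def is_r_increasing(F, r, n, trials=2000, seed=0):
    """Sampling check (not a proof) of r-increasingness of a fusion
    function F: [0,1]^n -> [0,1]: test F(x + c*r) >= F(x) for random
    x in [0,1]^n and c > 0 such that x + c*r stays in the unit cube."""
    rng = random.Random(seed)
    for _ in range(trials):
        x = [rng.random() for _ in range(n)]
        c = rng.random()
        y = [xi + c * ri for xi, ri in zip(x, r)]
        if all(0.0 <= yi <= 1.0 for yi in y):
            if F(y) < F(x) - 1e-12:
                return False          # found a decreasing step along r
    return True

mean = lambda x: sum(x) / len(x)            # arithmetic mean
wmean = lambda x: 0.9 * x[0] + 0.1 * x[1]   # weighted arithmetic mean

# A weighted mean with weight vector w is r-increasing exactly when
# the inner product w . r is non-negative: here 0.9 - 0.1 = 0.8 >= 0,
# but -0.9 + 0.1 = -0.8 < 0.
ok_up   = is_r_increasing(mean,  (1.0, 1.0),  2)   # True
ok_diag = is_r_increasing(wmean, (1.0, -1.0), 2)   # True
ok_rev  = is_r_increasing(wmean, (-1.0, 1.0), 2)   # False
```

Standard monotonicity corresponds to being r-increasing in every non-negative direction r; directional monotonicity relaxes this to selected directions, which is what the check above samples.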
Multiple criteria analysis; Aggregation function; Fusion function; Directional monotonicity; Piecewise linear function;
http://www.sciencedirect.com/science/article/pii/S0377221715000387
Bustince, H.
Fernandez, J.
Kolesárová, A.
Mesiar, R.
oai:RePEc:eee:ejores:v:214:y:2011:i:1:p:53-66 2015-03-26 RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:1:p:53-66
article
Pareto analysis of supply chain contracts under satisficing objectives
Supply chain coordination has become critical to firms as increased pressure is placed on them to improve performance. We evaluate the performance of Push, Pull, and Advance-purchase discount (APD) contracts in a manufacturer-retailer supply chain where one or both firms have a satisficing objective of maximizing the probability of achieving a target profit. We identify the resulting operational modes of the supply chain and potential conflicts over the preferred contracts under the Push, Pull, and APD contracts. When both firms are satisficing, conflict over the preferred contract arises when the manufacturer has an ambitious profit target or the retailer has a low profit target. We show that the Push contract can result in a large decrease in the expected profit of a risk-neutral manufacturer when the retailer maximizes the probability of achieving her maximum expected profit. We find that modified buy-back and profit-guarantee contracts can provide significant Pareto improvement over Push or APD contracts when the manufacturer is risk-neutral and the retailer is satisficing, while revenue-sharing contracts cannot. In contrast, revenue-sharing and modified buy-back contracts are Pareto dominant under certain conditions when the manufacturer is satisficing and the retailer is risk-neutral.
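As an illustration of the satisficing objective in a newsvendor setting, the following Python sketch (with assumed price, wholesale cost, target profit, and demand distribution, none taken from the paper) picks the order quantity that maximizes the probability of reaching a target profit over simulated demand scenarios.

```python
import random

random.seed(7)
p, w, T = 10.0, 4.0, 300.0   # price, wholesale cost, target profit (assumed)
demand = [random.randint(60, 140) for _ in range(10000)]  # demand scenarios

def prob_hit_target(q):
    """Fraction of scenarios in which the newsvendor profit
    p*min(D, q) - w*q reaches the target T."""
    return sum(p * min(d, q) - w * q >= T for d in demand) / len(demand)

# Satisficing order quantity: maximize the probability of reaching T.
# Any q < T/(p - w) = 50 makes the target unreachable, so q_sat >= 50.
q_sat = max(range(201), key=prob_hit_target)   # q_sat == 50 here
```

Note how this objective differs from maximizing expected profit: the satisficing retailer stops increasing the order as soon as extra units only add downside risk relative to the target.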
Supply chain contracts; Newsvendor model; Pareto improvements;
http://www.sciencedirect.com/science/article/pii/S0377221711003092
He, Xiuli
Khouja, Moutaz
oai:RePEc:eee:ejores:v:214:y:2011:i:1:p:118-135 2015-03-26 RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:1:p:118-135
article
ELECTRE^GKMS: Robust ordinal regression for outranking methods
We present a new method, called ELECTRE^GKMS, which employs robust ordinal regression to construct a set of outranking models compatible with preference information. The preference information supplied by the decision maker (DM) is composed of pairwise comparisons stating the truth or falsity of the outranking relation for some real or fictitious reference alternatives. Moreover, the DM specifies some ranges of variation of comparison thresholds on the considered pseudo-criteria. Using robust ordinal regression, the method builds a set of values of concordance indices, concordance thresholds, and indifference, preference, and veto thresholds for which all specified pairwise comparisons can be restored. Such sets are called compatible outranking models. Using these models, two outranking relations are defined: necessary and possible. Whether an ordered pair of alternatives is in the necessary or the possible outranking relation depends on the truth of the outranking relation for all or for at least one compatible model, respectively. By distinguishing the most certain recommendation worked out by the necessary outranking from a possible recommendation worked out by the possible outranking, ELECTRE^GKMS answers questions of robustness concern. The method is intended to be used interactively, with incremental specification of pairwise comparisons, possibly with decreasing confidence levels. In this way, the necessary and possible outranking relations can be, respectively, enriched or impoverished as the number of pairwise comparisons grows. Furthermore, the method is able to identify troublesome pieces of preference information which are responsible for incompatibility. The necessary and possible outranking relations are to be exploited as usual outranking relations to work out recommendations in choice or ranking problems. The introduced approach is illustrated by a didactic example showing how ELECTRE^GKMS can support real-world decision problems.
Robust ordinal regression; Outranking relation; Multiple criteria ranking and choice; ELECTRE-like method;
http://www.sciencedirect.com/science/article/pii/S0377221711003079
Greco, Salvatore
Kadzinski, Milosz
Mousseau, Vincent
Slowinski, Roman
oai:RePEc:eee:ejores:v:244:y:2015:i:1:p:210-218 2015-03-26 RePEc:eee:ejores
RePEc:eee:ejores:v:244:y:2015:i:1:p:210-218
article
Feature selection for support vector machines using Generalized Benders Decomposition
We propose an exact method, based on Generalized Benders Decomposition, to select the best M features during induction. We provide details of the method and highlight some interesting parallels between the technique proposed here and some of those published in the literature. We also propose a relaxation of the problem where selecting too many features is penalized. The original method performs well on a variety of data sets. The relaxation, though competitive, is sensitive to the penalty parameter.
Data mining; Feature selection; Support vector machines (SVM); Generalized Benders Decomposition;
http://www.sciencedirect.com/science/article/pii/S0377221715000077
Aytug, Haldun
oai:RePEc:eee:ejores:v:243:y:2015:i:3:p:985-994 2015-03-26 RePEc:eee:ejores
RePEc:eee:ejores:v:243:y:2015:i:3:p:985-994
article
Hybrid search for the optimal PMU placement problem on a power grid
With increasing global concerns regarding energy management, the smart grid has become a particularly important interdisciplinary research topic. In order to continually monitor a power utility system and efficiently observe all of the states of electric nodes and branches on a smart grid, PMUs (phasor measurement units) placed at selected nodes can monitor the operating conditions of the entire power grid. This study investigates methods for minimizing the high installation costs of PMUs while monitoring the entire system with a set of PMUs according to the power observation rules. Notably, this problem of monitoring a power grid can be transformed into the OPP (optimal PMU placement) problem. The objective is to simultaneously minimize the number of PMUs and ensure the complete observability of the whole power grid. This combinatorial optimization problem has been shown to be NP-complete. In this paper, we propose a hybrid two-phase algorithm for this problem. The first phase quickly identifies a set of candidate PMU locations based on a graph-theoretic decomposition approach for the power domination problem in tree-type graphs. We then use a local search heuristic to derive the minimum number of PMUs in the second phase. In addition to the practical model, this study also considers the ideal model, in which all load nodes are assumed to be zero injection. Numerical studies on various IEEE power test systems demonstrate the superior performance of the proposed algorithm in both models with regard to computational time and solution quality. In particular, in the ideal model, the number of PMUs required for the test systems can be significantly reduced. We also provide theoretical lower bounds on the number of installed PMUs in the ideal model and show that the derived solution achieves this bound on the test systems.
Combinatorial optimization; Smart grid; Optimal PMU placement; Power domination; Power system;
http://www.sciencedirect.com/science/article/pii/S0377221714010650
Liao, Chung-Shou
Hsieh, Tsung-Jung
Guo, Xian-Chang
Liu, Jian-Hong
Chu, Chia-Chi
oai:RePEc:eee:ejores:v:214:y:2011:i:2:p:262-272 2015-03-26 RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:2:p:262-272
article
The stochastic transportation problem with single sourcing
We propose a branch-and-price algorithm for solving a class of stochastic transportation problems with single-sourcing constraints. Our approach allows for general demand distributions, nonlinear cost structures, and capacity expansion opportunities. The pricing problem is a knapsack problem with variable item sizes and concave costs that is interesting in its own right. We perform an extensive set of computational experiments illustrating the efficacy of our approach. In addition, we study the cost of the single-sourcing constraints.
Transportation problem; Random demands; Nonlinear costs;
http://www.sciencedirect.com/science/article/pii/S0377221711003845
Romeijn, H. Edwin
Sargut, F. Zeynep
oai:RePEc:eee:ejores:v:243:y:2015:i:2:p:547-554 2015-03-26 RePEc:eee:ejores
RePEc:eee:ejores:v:243:y:2015:i:2:p:547-554
article
Total completion time minimization on multiple machines subject to machine availability and makespan constraints
This paper studies preemptive bi-criteria scheduling on m parallel machines with machine unavailable intervals. The goal is to minimize the total completion time subject to the constraint that the makespan is at most a constant T. We study the unavailability model in which the number of available machines cannot decrease by two within any time interval of length p_max, where p_max is the maximum processing time among all jobs. We show that the problem admits an optimal polynomial-time algorithm.
Scheduling; Parallel machine; Bi-criteria; Limited machine availability; Polynomial time algorithm;
http://www.sciencedirect.com/science/article/pii/S0377221714010133
Huo, Yumei
Zhao, Hairong
oai:RePEc:eee:ejores:v:243:y:2015:i:2:p:599-606 2015-03-26 RePEc:eee:ejores
RePEc:eee:ejores:v:243:y:2015:i:2:p:599-606
article
A fast calibrating volatility model for option pricing
In this paper, we propose a new random volatility model, where the volatility has a deterministic term structure modified by a scalar random variable. A closed-form approximation of the European option price is derived using higher-order Greeks with respect to volatility. We show that the calibration of our model is often more than two orders of magnitude faster than the calibration of commonly used stochastic volatility models, such as the Heston model or the Bates model. On 15 different index option data sets, we show that our model achieves accuracy comparable with the aforementioned models, at a much lower computational cost for calibration. Further, our model yields prices for certain exotic options in the same range as these two models. Lastly, the model yields delta and gamma values for options in the same range as the other commonly used models over most of the data sets considered. Our model has significant potential for use in high frequency derivative trading.
Stochastic volatility models; Option pricing;
http://www.sciencedirect.com/science/article/pii/S0377221714010492
Date, Paresh
Islyaev, Suren
oai:RePEc:eee:ejores:v:214:y:2011:i:2:p:393-402 2015-03-26 RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:2:p:393-402
article
Optimal production strategy under demand fluctuations: Technology versus capacity
This paper provides a comparative analysis of five possible production strategies for two kinds of flexibility investment, namely flexible technology and flexible capacity, under demand fluctuations. Each strategy is underpinned by a set of operations decisions on technology level, capacity amount, production quantity, and pricing. By evaluating each strategy, we show how market uncertainty, production cost structure, operations timing, and investment costing environment affect a firm's strategic decisions. The results show that there is no sequential effect of the two flexibility investments. We also illustrate the different ways in which flexible technology and flexible capacity affect a firm's profit under demand fluctuations. The results reveal that compared to no flexibility investment, flexible technology investment earns the same or a higher profit for a firm, whereas flexible capacity investment can be beneficial or harmful to a firm's profit. Moreover, we prove that higher flexibility does not guarantee more profit. Depending on the situation, the optimal strategy can be any one of the five possible strategies. We also provide the optimality conditions for each strategy.
Production; Investment analysis; Flexible manufacturing systems; Manufacturing;
http://www.sciencedirect.com/science/article/pii/S0377221711003729
Yang, L.
Ng, C.T.
Cheng, T.C.E.
oai:RePEc:eee:ejores:v:243:y:2015:i:3:p:1016-1027 2015-03-26 RePEc:eee:ejores
RePEc:eee:ejores:v:243:y:2015:i:3:p:1016-1027
article
A decomposition of profit loss under output price uncertainty
In this paper, firm profit loss is decomposed as the sum of two terms related to the output price uncertainty (price expectation error and risk preference), plus one extra term expressing technical inefficiency. We then describe the implementation of our theoretical model in a robust data envelopment analysis (DEA) framework, which allows an effective and separate estimation of each term of the decomposition. In addition, we offer an operational tool to reveal producers’ risk preferences. A 2009 database of French fattening pig farms is used as an illustration. Our results indicate that risk preference and technical inefficiency are the main sources of profit loss.
Profit loss; Risk preference; Technical inefficiency; Data envelopment analysis; Fattening pig farms;
http://www.sciencedirect.com/science/article/pii/S0377221714010625
Boussemart, Jean-Philippe
Crainich, David
Leleu, Hervé
oai:RePEc:eee:ejores:v:243:y:2015:i:2:p:658-664 2015-03-26 RePEc:eee:ejores
RePEc:eee:ejores:v:243:y:2015:i:2:p:658-664
article
Optimal selection of project portfolios using reinvestment strategy within a flexible time horizon
In this paper, we address the optimal selection of a portfolio of projects using a reinvestment strategy within a flexible time horizon. We assume that an investor intends to invest his/her initial capital in the implementation of some projects over a flexible time horizon. The investor's motivation for considering a flexible time horizon is to maximize his/her gain by determining the optimal time horizon for investing in the selected portfolio of projects. Projects have different durations, and their potential rates of return also differ. The profit yielded by the completed projects can be reinvested in the implementation of other projects. The implementation costs of projects can be allocated at equally spaced intervals with equal amounts during their life cycles, or can be assigned to each project according to its estimated S-curve. We assume that the profit yielded by each project is accrued after the investment for the project ends. Therefore, in order to maximize gains, the investor needs to optimize three issues: the combination of projects, the schedule of the selected projects, and the time horizon. An integer program is presented and discussed to address these issues.
Project management; Project portfolio selection; Project scheduling; Integer programming;
http://www.sciencedirect.com/science/article/pii/S0377221714010145
Jafarzadeh, M.
Tareghian, H.R.
Rahbarnia, F.
Ghanbari, R.
oai:RePEc:eee:ejores:v:243:y:2015:i:3:p:897-911 2015-03-26 RePEc:eee:ejores
RePEc:eee:ejores:v:243:y:2015:i:3:p:897-911
article
Optimal technology adoption when the arrival rate of new technologies changes
Our paper contributes to the literature on technology adoption. Most models in this literature assume that the intensity rate of new arrivals is constant. We extend this approach by assuming that after the last technology jump the intensity of a new arrival can change. Right after the arrival of a new technology the intensity equals a specific value, which switches if no new technology arrival has taken place within a certain period after the last technology arrival. We look at different scenarios, depending on whether the firm is threatened by a drop in the arrival rate after a certain time period or expects the rate to rise. We analyze the effect of a mean-preserving spread of the time between two consecutive arrivals on the optimal investment timing and show that larger variance can accelerate investment when the arrival rate rises, while it can decelerate investment when the arrival rate drops. We find that firms often adopt a new technology some time after its introduction, a phenomenon frequently observed in practice. Regarding a firm's technology releasing strategy, we explain why additional uncertainty can stimulate customers' buying behavior. The optimal adoption timing changes significantly depending on whether the arrival rate is assumed to change or to be constant over time. Adding uncertainty about the length of the time period after which the arrival intensity changes, we find that increasing uncertainty accelerates investment, a result opposite to standard real options theory.
Innovation; Capital budgets; Optimal control models;
http://www.sciencedirect.com/science/article/pii/S037722171401025X
Hagspiel, Verena
Huisman, Kuno J.M.
Nunes, Clàudia
oai:RePEc:eee:ejores:v:244:y:2015:i:1:p:26-46 2015-03-26 RePEc:eee:ejores
RePEc:eee:ejores:v:244:y:2015:i:1:p:26-46
article
DC approximation approaches for sparse optimization
Sparse optimization refers to an optimization problem involving the zero-norm in the objective or constraints. In this paper, nonconvex approximation approaches for sparse optimization are studied from a unifying point of view within the DC (Difference of Convex functions) programming framework. Considering a common DC approximation of the zero-norm that includes all standard sparsity-inducing penalty functions, we study the consistency between global minimizers (resp. local minimizers) of the approximate and original problems. We show that, in several cases, some global minimizers (resp. local minimizers) of the approximate problem are also minimizers of the original problem. Using exact penalty techniques in DC programming, we prove stronger results for some particular approximations: the approximate problem, with suitable parameters, is equivalent to the original problem. The efficiency of several sparsity-inducing penalty functions is fully analyzed. Four DCA (DC Algorithm) schemes are developed that cover all standard algorithms in nonconvex sparse approximation approaches as special versions; they can be viewed as an ℓ1-perturbed algorithm, a reweighted-ℓ1 algorithm, or a reweighted-ℓ2 algorithm. We offer a unifying nonconvex approximation approach, with solid theoretical tools as well as efficient algorithms based on DC programming and DCA, to tackle the zero-norm and sparse optimization. As an application, we implement our methods for feature selection in SVMs (Support Vector Machines) and perform comparative numerical experiments with the proposed algorithms and various approximation functions.
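A minimal Python sketch of one of the scheme families named above, a reweighted-ℓ1 iteration for a synthetic sparse recovery problem; the proximal-gradient inner solver, the weight rule w_i = 1/(|x_i| + eps), and all parameters and data are illustrative assumptions, not the paper's DCA implementation.

```python
import numpy as np

def ista_weighted_l1(A, b, lam, w, x0, n_iter=500):
    """Proximal gradient (ISTA) for min 0.5*||Ax - b||^2 + lam*sum(w_i*|x_i|)."""
    L = np.linalg.norm(A, 2) ** 2       # Lipschitz constant of the smooth part
    x = x0.copy()
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - b) / L                              # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam * w / L, 0.0)  # soft-threshold
    return x

def reweighted_l1(A, b, lam=0.05, eps=0.1, outer=4):
    """Reweighted-l1 scheme: weights w_i = 1/(|x_i| + eps) act as a
    surrogate for the zero-norm; each outer step solves a weighted
    l1 subproblem warm-started from the previous solution."""
    x = np.zeros(A.shape[1])
    for _ in range(outer):
        w = 1.0 / (np.abs(x) + eps)
        x = ista_weighted_l1(A, b, lam, w, x)
    return x

# Synthetic noiseless sparse recovery instance (illustrative).
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[3, 30, 77]] = [1.5, -2.0, 1.0]
b = A @ x_true
x_hat = reweighted_l1(A, b)   # should recover a vector close to x_true
```

Reweighting shrinks small coefficients harder than large ones, which is the mechanism by which these nonconvex surrogates approximate the zero-norm better than a plain ℓ1 penalty.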
Global optimization; Sparse optimization; DC approximation function; DC programming and DCA; Feature selection in SVM;
http://www.sciencedirect.com/science/article/pii/S0377221714009540
Le Thi, H.A.
Pham Dinh, T.
Le, H.M.
Vo, X.T.
oai:RePEc:eee:ejores:v:243:y:2015:i:3:p:912-9202015-03-26RePEc:eee:ejores
RePEc:eee:ejores:v:243:y:2015:i:3:p:912-920
article
A solution concept for network games: The role of multilateral interactions
We propose an allocation rule that takes into account the importance of both players and their links, and characterize it for a fixed network. Our characterization is along the lines of the characterization of the Position value for network games by van den Nouweland and Slikker (2012). The allocation rule so defined admits multilateral interactions among the players through their links, which distinguishes it from other existing rules. Next, we extend our allocation rule to flexible networks à la Jackson (2005).
Network games; Allocation rules; Cooperative games;
http://www.sciencedirect.com/science/article/pii/S0377221714010455
Borkotokey, Surajit
Kumar, Rajnish
Sarangi, Sudipta
oai:RePEc:eee:ejores:v:244:y:2015:i:1:p:289-2992015-03-26RePEc:eee:ejores
RePEc:eee:ejores:v:244:y:2015:i:1:p:289-299
article
Global minimum variance portfolio optimisation under some model risk: A robust regression-based approach
The global minimum variance portfolio computed using the sample covariance matrix is known to be negatively affected by parameter uncertainty, an important component of model risk. Using a robust approach, we introduce a portfolio rule for investors who wish to invest in the global minimum variance portfolio due to its strong historical track record, but seek a rule that is robust to parameter uncertainty. Our robust portfolio corresponds theoretically to the global minimum variance portfolio in the worst-case scenario, with respect to a set of plausible alternative estimators of the covariance matrix, in the neighbourhood of the sample covariance matrix. Hence, it provides protection against errors in the reference sample covariance matrix. Monte Carlo simulations illustrate the dominance of the robust portfolio over its non-robust counterpart, in terms of portfolio stability, variance and risk-adjusted returns. Empirically, we compare the out-of-sample performance of the robust portfolio to various competing minimum variance portfolio rules in the literature. We observe that the robust portfolio often has lower turnover and variance and higher Sharpe ratios than the competing minimum variance portfolios.
Global minimum variance portfolio; Model risk; Parameter uncertainty; Robust least squares; Robust portfolio;
http://www.sciencedirect.com/science/article/pii/S0377221715000302
Maillet, Bertrand
Tokpavi, Sessi
Vaucher, Benoit
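The global minimum variance portfolio referenced in the abstract above has a standard closed form, w = Σ⁻¹1 / (1'Σ⁻¹1), which for two assets reduces to w1 = (σ22 − σ12)/(σ11 + σ22 − 2σ12). The sketch below illustrates only this textbook sample-covariance benchmark, not the paper's robust rule; the covariance numbers are made up for illustration.

```python
def gmv_weights_2assets(s11, s22, s12):
    """Two-asset global minimum variance weights from covariance entries.

    Closed form of w = inv(S) 1 / (1' inv(S) 1) for a 2x2 covariance matrix:
    w1 = (s22 - s12) / (s11 + s22 - 2 * s12), w2 = 1 - w1.
    """
    w1 = (s22 - s12) / (s11 + s22 - 2.0 * s12)
    return w1, 1.0 - w1

# Hypothetical covariances: variances 0.04 and 0.09, covariance 0.01.
w1, w2 = gmv_weights_2assets(0.04, 0.09, 0.01)
```

With these numbers the less volatile asset receives the larger weight (w1 = 0.08/0.11 ≈ 0.727), which is exactly the behavior the minimum variance rule is meant to deliver.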
oai:RePEc:eee:ejores:v:243:y:2015:i:3:p:789-7972015-03-26RePEc:eee:ejores
RePEc:eee:ejores:v:243:y:2015:i:3:p:789-797
article
Robust bunker management for liner shipping networks
This paper examines the sailing speed of containerships and refueling of bunker in a liner shipping network while considering that the real speed may deviate from the planned one. It develops a mixed-integer nonlinear optimization model to minimize the total cost consisting of ship cost, bunker cost, and inventory cost, under the worst-case bunker consumption scenario. A closed-form expression for the worst-case bunker consumption is derived and three linearization techniques are proposed to transform the nonlinear model to a mixed-integer linear programming formulation. A case study based on the Asia–Europe–Oceania network of a global liner shipping company demonstrates the applicability of the proposed model and interesting managerial insights are obtained.
Transportation; Container liner shipping; Bunker; Sailing speed; Refill port;
http://www.sciencedirect.com/science/article/pii/S0377221714010674
Wang, Shuaian
Meng, Qiang
oai:RePEc:eee:ejores:v:244:y:2015:i:1:p:322-3302015-03-26RePEc:eee:ejores
RePEc:eee:ejores:v:244:y:2015:i:1:p:322-330
article
Tying mechanism for airlines’ air cargo capacity allocation
Airlines commonly experience the problem that the sum of freight forwarders' orders exceeds the airline's fixed capacity on hot-selling routes, while orders typically fill less than 50 percent of capacity on underutilized routes. Airlines cannot dynamically change flights to address the imbalance, since they have to serve passenger traffic when carrying cargo in the belly space of passenger flights. The imbalance problem is likely to become even more severe as the number of wide-body passenger aircraft increases in the near future, as expected by the International Air Transport Association (IATA). Motivated by a joint project with a large airline, we propose a tying mechanism for capacity allocation that integrates hot-selling routes and underutilized routes. The strategic foreclosure theory is adopted in the proposed mechanism. Some forwarders are selected as partners to whom more capacity on hot-selling routes is allocated, on the condition that they order more capacity on underutilized routes; the other “excluded” forwarders temporarily operate only underutilized routes. By observing the cost structure information of forwarders, we design the tying mechanism for air cargo capacity allocation and derive the closed-form optimal solution. Using data from the airline, we demonstrate that the proposed tying capacity allocation mechanism is very effective.
OR in airlines; Air cargo; Capacity allocation; Tying mechanism;
http://www.sciencedirect.com/science/article/pii/S037722171500034X
Feng, Bo
Li, Yanzhi
Shen, Huaxiao
oai:RePEc:eee:ejores:v:214:y:2011:i:2:p:232-2452015-03-26RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:2:p:232-245
article
Single firm product diffusion model for single-function and fusion products
The prosperity of multifunction products (also referred to as fusion products) has changed the landscape of the marketplace for several electronics products. To illustrate, as fusion products gain popularity in cellular phones and office machines, we observe that single-function products (e.g., stand-alone PDAs and stand-alone scanners) gradually disappear from the market as they are supplanted by fusion products. This paper presents a product diffusion model that captures the diffusion transition from two distinct single-function products into one fusion product. We investigate the optimal launch time of the fusion product under various conditions and conduct a numerical analysis to demonstrate the dynamics among the three products. Similar to previous multi-generation single product diffusion models, we find that the planning horizon, the products' relative profit margin, and substitution effects are important to the launch time decision. However, there are several unique factors that warrant special consideration when a firm introduces a fusion product to the market: the firm's competitive role, buyer consolidation of purchases to a multi-function product, the fusion technology and the age of current single-function products.
Manufacturing; Marketing; Technology management; Multifunction products; Fusion products; Product diffusion;
http://www.sciencedirect.com/science/article/pii/S0377221711003742
Chen, Yuwen
Carrillo, Janice E.
oai:RePEc:eee:ejores:v:244:y:2015:i:1:p:240-2472015-03-26RePEc:eee:ejores
RePEc:eee:ejores:v:244:y:2015:i:1:p:240-247
article
Optimal deleveraging with nonlinear temporary price impact
In this paper, we first propose a portfolio management model whose objective is to balance equity and liability. The asset price dynamics include both permanent and temporary price impact, where the permanent impact is a linear function of the cumulative trading amount and the temporary impact is a kth-order (0 < k < 1) power function of the instantaneous trading rate. We construct efficient frontiers to visualize the tradeoff between equity and liability and obtain analytical properties of the optimal trading strategies. In the second part, we further consider an optimal deleveraging problem with leverage constraints, which reduces to a non-convex polynomial optimization program with polynomial and box constraints. A Lagrangian method for solving the problem is presented and the quality of the solution is studied.
Portfolio deleveraging; Equity and liability; Polynomial optimization; Nonlinear temporary price impact; Lagrangian method;
http://www.sciencedirect.com/science/article/pii/S0377221714010522
Chen, Jingnan
Feng, Liming
Peng, Jiming
oai:RePEc:eee:ejores:v:243:y:2015:i:3:p:964-9732015-03-26RePEc:eee:ejores
RePEc:eee:ejores:v:243:y:2015:i:3:p:964-973
article
Performance evaluation of participating nations at the 2012 London Summer Olympics by a two-stage data envelopment analysis
This study measures the performance of participating nations at the Olympics, considering the quest for medals as a two-stage Olympic process. The first stage is characterized as athlete preparation (AP) and the second stage as athlete competition (AC). We extend the relational model from the constant returns to scale framework to the variable returns to scale version. The efficiency of each participating nation in the entire two-stage Olympic process is calculated as a product of the efficiencies of both stages, and a heuristic search is applied to the extended relational model. The efficiency of each stage can be obtained and directions for improving the performance of participating nations in the two-stage Olympic process can be identified. An empirical study of the 2012 London Summer Olympic Games reveals that the efficiency of the AP stage is higher than that of the AC stage for the majority of participants. In addition, a plot of the relationship between these three efficiencies shows that the efficiency of the entire two-stage Olympic process is more significantly related to that of the AC stage than that of the AP stage.
Data envelopment analysis; Two-stage process; Performance evaluation; Heuristic search procedure;
http://www.sciencedirect.com/science/article/pii/S0377221714010509
Li, Yongjun
Lei, Xiyang
Dai, Qianzhi
Liang, Liang
oai:RePEc:eee:ejores:v:243:y:2015:i:3:p:921-9312015-03-26RePEc:eee:ejores
RePEc:eee:ejores:v:243:y:2015:i:3:p:921-931
article
Dynamic portfolio optimization with transaction costs and state-dependent drift
The problem of dynamic portfolio choice with transaction costs is often addressed by constructing a Markov Chain approximation of the continuous time price processes. Using this approximation, we present an efficient numerical method to determine optimal portfolio strategies under time- and state-dependent drift and proportional transaction costs. This scenario arises when investors have behavioral biases or the actual drift is unknown and needs to be estimated. Our numerical method solves dynamic optimal portfolio problems with an exponential utility function for time-horizons of up to 40 years. It is applied to measure the value of information and the loss from transaction costs using the indifference principle.
Dynamic programming; Numerical methods; State-dependent drift; Transaction costs; Markov Chain approximation;
http://www.sciencedirect.com/science/article/pii/S0377221714010583
Palczewski, Jan
Poulsen, Rolf
Schenk-Hoppé, Klaus Reiner
Wang, Huamao
oai:RePEc:eee:ejores:v:243:y:2015:i:3:p:883-8962015-03-26RePEc:eee:ejores
RePEc:eee:ejores:v:243:y:2015:i:3:p:883-896
article
The implication of missing the optimal-exercise time of an American option
The optimal-exercise policy of an American option dictates when the option should be exercised. In this paper, we consider the implications of missing the optimal exercise time of an American option. For the put option, this means holding the option until it is deeper in-the-money when the optimal decision would have been to exercise instead. We derive an upper bound on the maximum possible loss incurred by such an option holder. This upper bound requires no knowledge of the optimal-exercise policy or true price function. This upper bound is a function of only the option-holder’s exercise strategy and the intrinsic value of the option. We show that this result holds true for both put and call options under a variety of market models ranging from the simple Black–Scholes model to complex stochastic-volatility jump-diffusion models. Numerical illustrations of this result are provided. We then use this result to study numerically how the cost of delaying exercise varies across market models and call and put options. We also use this result as a tool to numerically investigate the relation between an option-holder’s risk-preference levels and the maximum possible loss he may incur when adopting a target-payoff policy that is a function of his risk-preference level.
Finance; American options; Delaying exercise; Suboptimal exercise policy; Free boundary problem;
http://www.sciencedirect.com/science/article/pii/S0377221714010121
Chockalingam, Arun
Feng, Haolin
oai:RePEc:eee:ejores:v:244:y:2015:i:2:p:404-4162015-03-26RePEc:eee:ejores
RePEc:eee:ejores:v:244:y:2015:i:2:p:404-416
article
Resource loading with time windows
Resource loading appears in many variants in tactical (mid-term) capacity planning in multi-project environments. It develops a rough sketch of the resource usage and timing of the work packages of a portfolio of orders. The orders need to be executed within a time horizon organized into periods, each of which has a known number of workers available. Each order has a time window during which it must be executed, as well as an upper and lower bound on the number of workers that can work on this order in a period. The duration of the order is not fixed beforehand, but depends on the number of workers (intensity) with which it is executed. In this article we define three fundamental variants of resource loading and study six special cases that are common to the three variants. We present algorithms for those cases that can be solved either in polynomial time or in pseudo-polynomial time. The remaining cases are proven to be NP-complete in the strong sense, and we discuss the existence of approximation algorithms for some of these cases. Finally, we comment on the validity of our results when orders must be executed without preemption. Although inspired by a number of practical applications, this work focuses on the properties of the underlying generic combinatorial problems. Our findings contribute to a better understanding of these problems and may also serve as a reference work for authors looking to design efficient algorithms for similar problems.
Resource loading; Complexity theory; Manpower planning; Preemption;
http://www.sciencedirect.com/science/article/pii/S0377221715000569
Talla Nobibon, Fabrice
Leus, Roel
Nip, Kameng
Wang, Zhenbo
oai:RePEc:eee:ejores:v:244:y:2015:i:2:p:519-5242015-03-26RePEc:eee:ejores
RePEc:eee:ejores:v:244:y:2015:i:2:p:519-524
article
Compromise programming: Non-interactive calibration of utility-based metrics
Utility functions have been used widely to support multi-objective decision-making. Expansion of a general additive utility function around the ideal results in a composite linear-quadratic metric of a compromise programming problem. Determining the unknown parameters of the composite linear-quadratic metric requires substantial interaction with the decision maker, who might not always be available or capable of participating in such a process. We propose a non-interactive method that uses information on observed attribute levels to obtain the unknown parameters of the composite linear-quadratic metric and enables forecasting and scenario analysis. The method is illustrated with a small-scale numerical example.
Utility optimization; Goal programming; Compromise programming; Preferential weights; Model calibration;
http://www.sciencedirect.com/science/article/pii/S037722171500051X
Kanellopoulos, A.
Gerdessen, J.C.
Claassen, G.D.H.
oai:RePEc:eee:ejores:v:244:y:2015:i:1:p:1-22015-03-26RePEc:eee:ejores
RePEc:eee:ejores:v:244:y:2015:i:1:p:1-2
article
Guest Editorial to the Feature Cluster "EURO/INFORMS 2013 Conference"
http://www.sciencedirect.com/science/article/pii/S0377221715001356
Sevaux, Marc
Sörensen, Kenneth
Bourreau, Eric
oai:RePEc:eee:ejores:v:244:y:2015:i:2:p:592-6002015-03-26RePEc:eee:ejores
RePEc:eee:ejores:v:244:y:2015:i:2:p:592-600
article
Cost sharing solutions defined by non-negative eigenvectors
The problem of sharing a cost M among n individuals, each identified by some characteristic c_i ∈ R_+, appears in many real situations. Two important proposals on how to share the cost are the egalitarian and the proportional solutions. In different situations a combination of both distributions provides an interesting approach to the cost sharing problem. In this paper we obtain a family of (compromise) solutions associated to the Perron eigenvectors of Levinger's transformations of a characteristics matrix A. This family includes both the egalitarian and proportional solutions, as well as a set of suitable intermediate proposals, which we analyze in some specific contexts, such as claims problems and inventory cost games.
Cost sharing; Egalitarian; Proportional; Perron’s eigenvector; Compromise solution;
http://www.sciencedirect.com/science/article/pii/S0377221715000752
Subiza, Begoña
Silva-Reus, José A.
Peris, Josep E.
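The egalitarian and proportional endpoints of the family described above are easy to state concretely. The sketch below shows only these two textbook rules plus a naive convex combination between them; it is not the Perron-eigenvector family constructed in the paper, and the cost and characteristics used are illustrative.

```python
def egalitarian(M, c):
    """Split cost M equally among the len(c) individuals."""
    n = len(c)
    return [M / n] * n

def proportional(M, c):
    """Split cost M in proportion to the characteristics c_i."""
    total = sum(c)
    return [M * ci / total for ci in c]

def naive_compromise(M, c, lam):
    """Convex combination: lam = 0 gives egalitarian, lam = 1 proportional."""
    e, p = egalitarian(M, c), proportional(M, c)
    return [(1 - lam) * ei + lam * pi for ei, pi in zip(e, p)]

# Hypothetical instance: cost 120 shared by agents with characteristics 1, 2, 3.
shares = naive_compromise(120.0, [1.0, 2.0, 3.0], 0.5)
```

Halfway between the equal split (40, 40, 40) and the proportional split (20, 40, 60), this yields (30, 40, 50); any such compromise still distributes exactly the total cost M.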
oai:RePEc:eee:ejores:v:214:y:2011:i:2:p:403-4102015-03-26RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:2:p:403-410
article
Quality investment and price decision in a risk-averse supply chain
In this paper, we investigate quality investment and price decisions of a make-to-order (MTO) supply chain with uncertain demand in international trade. Due to volatility of orders from buyers, the supplier and the manufacturer in the supply chain are subject to financial risk. In contrast to the general assumption that players in a supply chain are risk neutral in quality investment and price decisions, we consider the risk-averse behavior of the players in three different supply chain strategies: Vertical Integration (VI), Manufacturer's Stackelberg (MS) and Supplier's Stackelberg (SS). The study shows that both supply chain strategy and risk-averse behavior have significant impacts on quality investment and pricing. Compared to a risk-neutral supply chain, a risk-averse supply chain has lower, equal, and higher product quality under VI, MS and SS, respectively. Also, we derive the conditions under which the supply chain strategy is implemented in a decentralized setting. A numerical study is used to illustrate some related issues.
Quality investment; Supply chain strategy; Preference theory; Make-to-order; Risk tolerance;
http://www.sciencedirect.com/science/article/pii/S0377221711003808
Xie, Gang
Yue, Wuyi
Wang, Shouyang
Lai, Kin Keung
oai:RePEc:eee:ejores:v:243:y:2015:i:2:p:362-3682015-03-26RePEc:eee:ejores
RePEc:eee:ejores:v:243:y:2015:i:2:p:362-368
article
Cross entropy for multiobjective combinatorial optimization problems with linear relaxations
While the cross entropy methodology has been applied to a fair number of combinatorial optimization problems with a single objective, its adaptation to multiobjective optimization has been sporadic. We develop a multiobjective optimization cross entropy (MOCE) procedure for combinatorial optimization problems for which there is a linear relaxation (obtained by ignoring the integrality restrictions) that can be solved in polynomial time. The presence of a relaxation that can be solved with modest computational time is an important characteristic of the problems under consideration because our procedure is designed to exploit relaxed solutions. This is done with a strategy that divides the objective function space into areas and a mechanism that seeds these areas with relaxed solutions. Our main interest is to tackle problems whose solutions are represented by binary variables and whose relaxation is a linear program. Our tests with multiobjective knapsack problems and multiobjective assignment problems show the merit of the proposed procedure.
Multiobjective combinatorial optimization; Cross entropy; EMO; Linear relaxation;
http://www.sciencedirect.com/science/article/pii/S0377221714006122
Caballero, Rafael
Hernández-Díaz, Alfredo G.
Laguna, Manuel
Molina, Julián
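The cross entropy loop that the procedure above builds on can be sketched generically for binary-encoded problems: sample candidates from independent Bernoulli parameters, rank them, and refit the parameters to the elite set. This is a plain single-objective CE skeleton on a toy one-max objective, not the authors' MOCE procedure with relaxation-based seeding; all parameter values are illustrative.

```python
import random

def cross_entropy_maximize(score, n_bits, pop=100, elite=10, iters=30, seed=0):
    """Generic cross entropy method for maximizing score over binary strings."""
    rng = random.Random(seed)
    p = [0.5] * n_bits                      # Bernoulli sampling parameters
    best, best_val = None, float("-inf")
    for _ in range(iters):
        samples = [[1 if rng.random() < pi else 0 for pi in p]
                   for _ in range(pop)]
        samples.sort(key=score, reverse=True)
        top = samples[:elite]               # elite set drives the update
        p = [sum(s[i] for s in top) / elite for i in range(n_bits)]
        if score(samples[0]) > best_val:
            best, best_val = samples[0], score(samples[0])
    return best, best_val

# Toy one-max objective: maximize the number of ones in a 10-bit string.
best, best_val = cross_entropy_maximize(sum, 10)
```

The parameter vector p concentrates on the elite samples from iteration to iteration, which is the defining mechanism of the cross entropy method.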
oai:RePEc:eee:ejores:v:214:y:2011:i:2:p:428-4412015-03-26RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:2:p:428-441
article
An optimization framework for solving capacitated multi-level lot-sizing problems with backlogging
This paper proposes two new mixed integer programming models for capacitated multi-level lot-sizing problems with backlogging, whose linear programming relaxations provide good lower bounds on the optimal solution value. We show that both of these strong formulations yield the same lower bounds. In addition to these theoretical results, we propose a new, effective optimization framework that achieves high quality solutions in reasonable computational time. Computational results show that the proposed optimization framework is superior to other well-known approaches on several important performance dimensions.
Capacitated multi-level lot-sizing; Backlogging; Lower and upper bound guided nested partitions;
http://www.sciencedirect.com/science/article/pii/S0377221711003730
Wu, Tao
Shi, Leyuan
Geunes, Joseph
AkartunalI, Kerem
oai:RePEc:eee:ejores:v:214:y:2011:i:1:p:160-1672015-03-26RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:1:p:160-167
article
Estimation methods for choice-based conjoint analysis of consumer preferences
Conjoint analysis, a preference measurement method typical in marketing research, has gradually expanded to other disciplines. Choice-based conjoint analysis (CBC) is currently the most popular type. Very few alternative estimation approaches have been suggested since the introduction of the Hierarchical Bayes (HB) method for estimating CBC utility functions, and studies that compare the performance of more than one of the proposed approaches with HB are almost non-existent. We compare the performance of four published optimization-based procedures and additionally introduce a new one, called CP, an estimation approach based on convex penalty minimization. Using HB as the benchmark, we evaluate the procedures on eight field data sets, basing the performance comparisons on holdout validation, i.e. predictive performance. Among the optimization-based procedures, CP performs best. We also run simulations to compare the extent to which CP and HB can recover the true utilities. On the field data, the CP and HB results are on average equally good; however, depending on the problem characteristics, one may perform better than the other. In terms of average performance, the other four methods were inferior to CP and HB.
Utility function; Optimization; Conjoint analysis; Marketing research; Preference estimation;
http://www.sciencedirect.com/science/article/pii/S0377221711003110
Halme, Merja
Kallio, Markku
oai:RePEc:eee:ejores:v:243:y:2015:i:2:p:395-4042015-03-26RePEc:eee:ejores
RePEc:eee:ejores:v:243:y:2015:i:2:p:395-404
article
Multi-objectivization Via Decomposition: An analysis of helper-objectives and complete decomposition
Multi-objectivization has been used to solve several single objective problems with improved results over traditional genetically inspired optimization methods. Multi-objectivization reformulates the single objective problem into a multiple objective problem. The reformulated problem is then solved with a multiple objective method to obtain a resulting solution to the original problem. Multi-objectivization Via Decomposition (MVD) and the addition of novel objectives are the two major approaches used in multi-objectivization. This paper focuses on analysis of two major MVD methods: helper-objectives and complete decomposition. Helper-objectives decomposition methods identify one or more decomposed objectives that are used simultaneously with the main objective to focus attention on components of the decomposed objectives. Complete decomposition, unlike helper-objectives, does not explicitly use the main objective and instead uses decomposed objectives that exhaustively cover all portions of the main objective. This work examines the relationship between helper-objective decompositions and complete decomposition using both an analytic and experimental methodology. Pareto dominance relationships are examined analytically to clarify the relationship between dominant solutions in both types of decompositions. These results more clearly characterize how solutions from the two approaches rank in Pareto-frontier based fitness algorithms such as NSGA-II. An empirical study on job shop scheduling problems shows how fitness signal and fitness noise are affected by the balance of decomposition size. Additionally we provide evidence that, for the settings and instances studied, complete decompositions have a better on-average performance when compared to analogous helper-objective decompositions. Lastly we examine the underlying forces that determine effective decomposition size. We argue that it is advantageous to use less balanced decompositions as within-decomposition conflict increases and as heuristic strength increases.
Multi-objectivization; Multi-Objectivization via Segmentation (MOS); Helper-objectives; Complete decomposition; Job shop scheduling problem (JSSP);
http://www.sciencedirect.com/science/article/pii/S0377221714009916
Lochtefeld, Darrell F.
Ciarallo, Frank W.
oai:RePEc:eee:ejores:v:244:y:2015:i:1:p:176-1862015-03-26RePEc:eee:ejores
RePEc:eee:ejores:v:244:y:2015:i:1:p:176-186
article
The expected value of the traceability information
Recent regulations on agri-food traceability prescribe traceability throughout the entire supply chain, in order to ensure consumer safety and product quality. This has led producers and retailers to consider the opportunity to improve the firm's reputation and consumer confidence through the implementation of traceability systems designed not only to satisfy the legal requirements, but also to track the quality of the products through the supply chain for optimization purposes. However, the actual implementation of such systems depends on the possibility of gathering specific information related to product quality. Nowadays, innovative and non-invasive technologies such as Radio Frequency Identification (RFID) allow the automatic real-time collection of data, thus enabling the development of effective traceability systems. In this context, the expected value of traceability is a fundamental issue concerning the economic analysis of the costs involved in such an investment and the optimal granularity level of implementation. This paper aims at evaluating the expected value of implementing traceability systems for perishable products like fruits and vegetables, and the resulting profit. The study presents a stochastic mathematical approach for optimizing the supply chain profit and establishing the optimal granularity level (namely the Economic Traceability Lot) when an RFID solution is adopted. In particular, the supply chain profit in the presence of an RFID traceability system is calculated and compared with the expected profit in the absence of such a system, and the results confirm the importance of the specific characteristics of the supply chain in determining the optimal configuration of the traceability system.
Traceability systems; Quality control; Supply chain optimization;
http://www.sciencedirect.com/science/article/pii/S037722171500048X
Aiello, Giuseppe
Enea, Mario
Muriana, Cinzia
oai:RePEc:eee:ejores:v:214:y:2011:i:2:p:223-2312015-03-26RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:2:p:223-231
article
A reduced variable neighborhood search algorithm for uncapacitated multilevel lot-sizing problems
Multilevel lot-sizing (MLLS) problems, which involve complicated product structures with interdependence among the items, play an important role in the material requirement planning (MRP) system of modern manufacturing/assembling lines. In this paper, we present a reduced variable neighborhood search (RVNS) algorithm and several implemental techniques for solving uncapacitated MLLS problems. Computational experiments are carried out on three classes of benchmark instances under different scales (small, medium, and large). Compared with the existing literature, RVNS shows good performance and robustness on a total of 176 tested instances. For the 96 small-sized instances, the RVNS algorithm can find 100% of the optimal solutions in less computational time; for the 40 medium-sized and the 40 large-sized instances, the RVNS algorithm is competitive against other methods, enjoying good effectiveness as well as high computational efficiency. In the calculations, RVNS updated 7 (17.5%) best known solutions for the medium-sized instances and 16 (40%) best known solutions for the large-sized instances.
Meta-heuristics; Uncapacitated multilevel lot-sizing (MLLS) problem; Material requirement planning (MRP); Reduced variable neighborhood search (RVNS) algorithm; Production planning;
http://www.sciencedirect.com/science/article/pii/S0377221711003596
Xiao, Yiyong
Kaku, Ikou
Zhao, Qiuhong
Zhang, Renqian
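A reduced VNS of the kind described above differs from plain VNS in that it uses shaking and acceptance only, with no local search phase. The skeleton below is a generic RVNS loop demonstrated on a toy bit-string objective, not the authors' MLLS-specific algorithm; the shake operator and all parameter values are illustrative.

```python
import random

def rvns(x0, cost, shake, kmax, iters, seed=0):
    """Reduced VNS: random shaking in growing neighborhoods, accept improvements only."""
    rng = random.Random(seed)
    x, fx = x0, cost(x0)
    for _ in range(iters):
        k = 1
        while k <= kmax:
            y = shake(x, k, rng)          # random point in the kth neighborhood
            fy = cost(y)
            if fy < fx:
                x, fx, k = y, fy, 1       # improvement: restart at first neighborhood
            else:
                k += 1                    # no improvement: widen the neighborhood
    return x, fx

def flip_k(x, k, rng):
    """Shake operator: flip k randomly chosen bits."""
    y = list(x)
    for i in rng.sample(range(len(y)), k):
        y[i] = 1 - y[i]
    return y

# Toy minimization: drive the number of ones in a 12-bit string toward zero.
x, fx = rvns([1] * 12, sum, flip_k, kmax=3, iters=50)
```

Because only improving moves are accepted, the returned cost can never exceed the cost of the starting point, and on this toy objective the very first 1-bit shake already improves it.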
oai:RePEc:eee:ejores:v:213:y:2011:i:2:p:361-3742015-03-26RePEc:eee:ejores
RePEc:eee:ejores:v:213:y:2011:i:2:p:361-374
article
The newsvendor problem: Review and directions for future research
In this paper, we review the contributions to date for analyzing the newsvendor problem. Our focus is on examining the specific extensions for analyzing this problem in the context of modeling customer demand, supplier costs, and the buyer risk profile. More specifically, we analyze the impact of market price, marketing effort, and stocking quantity on customer demand; how supplier prices can serve as a coordination mechanism in a supply chain setting; integrating alternative supplier pricing policies within the newsvendor framework; and how the buyer's risk profile moderates the newsvendor order quantity decision. For each of these areas, we summarize the current literature and develop extensions. Finally, we also propose directions for future research.
Inventory; Logistics; Supply chain management;
http://www.sciencedirect.com/science/article/pii/S0377221710008040
Qin, Yan
Wang, Ruoxuan
Vakharia, Asoo J.
Chen, Yuwen
Seref, Michelle M.H.
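The classical newsvendor order quantity that this review builds on is the critical fractile solution q* = F⁻¹(c_u / (c_u + c_o)), where c_u and c_o are unit underage and overage costs and F is the demand distribution. The sketch below assumes normally distributed demand and uses only the Python standard library; it illustrates the textbook baseline, not any of the review's extensions, and the cost and demand numbers are made up.

```python
from statistics import NormalDist

def newsvendor_quantity(mu, sigma, c_under, c_over):
    """Critical fractile order quantity for Normal(mu, sigma) demand."""
    beta = c_under / (c_under + c_over)    # optimal service level (fractile)
    return NormalDist(mu, sigma).inv_cdf(beta)

# Hypothetical instance: mean demand 100, std 20, underage cost 9, overage cost 1.
q = newsvendor_quantity(100.0, 20.0, 9.0, 1.0)
```

With underage nine times as costly as overage the fractile is 0.9, so the order quantity sits above mean demand; with symmetric costs the rule orders exactly the mean.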
oai:RePEc:eee:ejores:v:243:y:2015:i:3:p:723-7302015-03-26RePEc:eee:ejores
RePEc:eee:ejores:v:243:y:2015:i:3:p:723-730
article
A practicable branch and bound algorithm for sum of linear ratios problem
This article presents a practicable algorithm for globally solving the sum of linear ratios problem (SLR). The algorithm works by globally solving a bilinear programming problem (EQ) that is equivalent to problem (SLR). In the algorithm, by utilizing the convex and concave envelopes of the bilinear function, the initial nonconvex programming problem is reduced to a sequence of linear relaxation programming problems. To improve the computational efficiency of the algorithm, a new accelerating technique is introduced, which makes it possible to delete a large part of the investigated region in which no global optimal solution of (EQ) exists. By combining this innovative technique with branch and bound operations, a global optimization algorithm is designed for solving problem (SLR). Finally, numerical experimental results show the feasibility and efficiency of the proposed algorithm.
Global optimization; Sum of linear ratios; Branch and bound; Accelerating technique;
http://www.sciencedirect.com/science/article/pii/S0377221715000594
Jiao, Hong-Wei
Liu, San-Yang
oai:RePEc:eee:ejores:v:243:y:2015:i:2:p:465-4792015-03-26RePEc:eee:ejores
RePEc:eee:ejores:v:243:y:2015:i:2:p:465-479
article
Efficient optimization of many objectives by approximation-guided evolution
Multi-objective optimization problems arise frequently in applications, but can often only be solved approximately by heuristic approaches. Evolutionary algorithms have been widely used to tackle multi-objective problems. These algorithms use different measures to ensure diversity in the objective space but are not guided by a formal notion of approximation. We present a framework for evolutionary multi-objective optimization that allows one to work with a formal notion of approximation. This approximation-guided evolutionary algorithm (AGE) has a worst-case runtime linear in the number of objectives and works with an archive that is an approximation of the non-dominated objective vectors seen during the run of the algorithm. Our experimental results show that AGE finds competitive or better solutions not only regarding the achieved approximation, but also regarding the total hypervolume. For all considered test problems, even for many (i.e., more than ten) dimensions, AGE discovers a good approximation of the Pareto front. This is not the case for established algorithms such as NSGA-II, SPEA2, and SMS-EMOA. In this paper we compare AGE with two additional algorithms that use very fast hypervolume approximations to guide their search. This significantly speeds up the runtime of the hypervolume-based algorithms, which now allows a comparison of the underlying selection schemes.
Multi-objective optimization; Approximation; Comparative study;
http://www.sciencedirect.com/science/article/pii/S0377221714009552
Wagner, Markus
Bringmann, Karl
Friedrich, Tobias
Neumann, Frank
oai:RePEc:eee:ejores:v:243:y:2015:i:2:p:369-3852015-03-26RePEc:eee:ejores
RePEc:eee:ejores:v:243:y:2015:i:2:p:369-385
article
Anytime Pareto local search
Pareto Local Search (PLS) is a simple and effective local search method for tackling multi-objective combinatorial optimization problems. It is also a crucial component of many state-of-the-art algorithms for such problems. However, PLS may not be very effective when terminated before completion; in other words, PLS has poor anytime behavior. In this paper, we study the effect that various PLS algorithmic components have on its anytime behavior. We show that the anytime behavior of PLS can be greatly improved by using alternative algorithmic components. We also propose Dynagrid, a dynamic discretization of the objective space that helps PLS to converge faster to a good approximation of the Pareto front and continue to improve it if more time is available. We perform a detailed empirical evaluation of the new proposals on the bi-objective traveling salesman problem and the bi-objective quadratic assignment problem. Our results demonstrate that the new PLS variants not only have significantly better anytime behavior than the original PLS, but also may obtain better results for longer computation time or upon completion.
Pareto local search; Anytime optimization; Multi-objective optimization; Traveling salesman problem; Quadratic assignment problem;
http://www.sciencedirect.com/science/article/pii/S0377221714009011
Dubois-Lacoste, Jérémie
López-Ibáñez, Manuel
Stützle, Thomas
oai:RePEc:eee:ejores:v:244:y:2015:i:2:p:392-4032015-03-26RePEc:eee:ejores
RePEc:eee:ejores:v:244:y:2015:i:2:p:392-403
article
The data transfer problem in a system of systems
Systems of systems are collections of independent systems which interact and share information to provide services. To communicate, systems can opportunistically make use of contacts that occur when two entities are close enough to each other. In this paper, it is assumed that reliable predictions can be made about the sequence of such contacts for each system. An information item (a datum) is split into several datum units which are to be delivered to recipient systems. During a contact between two systems, a sending system can transfer one stored datum unit to a receiving system. Source systems initially store some of the datum units. The data transfer problem consists in searching for a valid transfer plan, i.e. a transfer plan allowing the datum units to be transmitted from their source systems to the recipient systems. The dissemination problem consists in searching for a valid transfer plan which minimizes the dissemination length, i.e. the number of contacts which are necessary to deliver all the datum units to the recipient nodes. To our knowledge, there is no previous work attempting to determine the theoretical complexity of these problems. The aim of this paper is to determine the frontier between easy and hard problems. We show that the problems are strongly NP-Hard when the number of recipients is equal to 2 or more (while the number of datum units is unbounded) or the number of datum units is equal to 2 or more (while the number of recipients is unbounded). We also show that these problems are polynomially solvable when the number of datum units or the number of recipient nodes is equal to 1, or when these parameters are all upper bounded by given positive numbers. The complexity of two related problems is also studied. It is shown that deciding whether there exist k mutually arc-disjoint branchings in an evolving graph, or k arc-disjoint Steiner trees in a directed graph without circuits, is strongly NP-Complete.
Combinatorial optimization; Data transfer; Scheduling; Routing;
http://www.sciencedirect.com/science/article/pii/S0377221715000636
Bocquillon, Ronan
Jouglet, Antoine
Carlier, Jacques
oai:RePEc:eee:ejores:v:243:y:2015:i:3:p:839-8512015-03-26RePEc:eee:ejores
RePEc:eee:ejores:v:243:y:2015:i:3:p:839-851
article
Modeling and optimization control of a demand-driven, conveyor-serviced production station
This study investigates the look-ahead control of a conveyor-serviced production station (CSPS), viewed as a production center, which is connected to a sales center. The production station is equipped with a buffer to temporarily store the parts that will flow into the product bank of the sales center after processing. The whole two-center system is characterized by random parts arrival, random customer demand, random processing time and limited buffer or bank capacities. Thus, the decision-making on the look-ahead range of such a demand-driven CSPS is subject to the constraints of production and sales levels. In this paper, we focus on modeling the stochastic control problem and providing solutions for finding the optimal look-ahead control policy under either average- or discounted-cost criteria. We first establish a detailed semi-Markov decision process for the look-ahead control of the demand-driven CSPS by combining the vacancies of both the buffer and the bank into one state, which can be solved by policy iteration or value iteration if the system parameters are known precisely. Then, to avoid the curse of dimensionality and the modeling burden of the numerical optimization methods, we also propose a Q-learning algorithm combined with a simulated annealing technique to derive approximate solutions. Simulation results are finally presented to show that, with our established model and proposed optimization methods, the system can achieve an optimal or suboptimal look-ahead control policy once the capacities of both the buffer and the bank are designed appropriately.
Production; Random customer demands; Look-ahead control; Semi-Markov decision process; Q-learning;
http://www.sciencedirect.com/science/article/pii/S0377221715000296
Tang, Hao
Xu, Lingling
Sun, Jing
Chen, Yingjun
Zhou, Lei
oai:RePEc:eee:ejores:v:243:y:2015:i:2:p:523-5392015-03-26RePEc:eee:ejores
RePEc:eee:ejores:v:243:y:2015:i:2:p:523-539
article
A matheuristic approach for the Pollution-Routing Problem
This paper deals with the Pollution-Routing Problem (PRP), a Vehicle Routing Problem (VRP) with environmental considerations, recently introduced in the literature by Bektaş and Laporte (2011) [Transportation Research Part B: Methodological 45 (8), 1232–1250]. The objective is to minimize operational and environmental costs while respecting capacity constraints and service time windows. Costs are based on driver wages and fuel consumption, which depends on many factors, such as travel distance and vehicle load. The vehicle speeds are considered as decision variables. They complement routing decisions, impacting the total cost, the travel time between locations, and thus the set of feasible routes. We propose a method which combines a local search-based metaheuristic with an integer programming approach over a set covering formulation and a recursive speed-optimization algorithm. This hybridization enables route and speed decisions to be integrated more tightly. Moreover, two other “green” VRP variants, the Fuel Consumption VRP (FCVRP) and the Energy Minimizing VRP (EMVRP), are addressed, as well as the VRP with time windows (VRPTW) with distance minimization. The proposed method compares very favorably with previous algorithms from the literature, and new improved solutions are reported for all considered problems.
Combinatorial optimization; Green logistics; Vehicle routing; Time windows; Speed optimization;
http://www.sciencedirect.com/science/article/pii/S0377221714009928
Kramer, Raphael
Subramanian, Anand
Vidal, Thibaut
Cabral, Lucídio dos Anjos F.
oai:RePEc:eee:ejores:v:213:y:2011:i:2:p:395-4042015-03-26RePEc:eee:ejores
RePEc:eee:ejores:v:213:y:2011:i:2:p:395-404
article
Optimization problems in statistical learning: Duality and optimality conditions
Regularization methods are techniques for learning functions from given data. We consider regularization problems whose objective function consists of a cost function and a regularization term, with the aim of selecting a prediction function f with a finite representation which minimizes the error of prediction. Here the role of the regularizer is to avoid overfitting. In general these are convex optimization problems with not necessarily differentiable objective functions, so in order to provide optimality conditions for this class of problems one needs to appeal to specific techniques from convex analysis. In this paper we provide a general approach for deriving necessary and sufficient optimality conditions for the regularized problem via the so-called conjugate duality theory. Afterwards we apply the obtained results to the Support Vector Machines problem and the Support Vector Regression problem formulated for different cost functions.
Machine learning; Tikhonov regularization; Convex duality theory; Optimality conditions;
http://www.sciencedirect.com/science/article/pii/S0377221711002323
Bot, Radu Ioan
Lorenz, Nicole
oai:RePEc:eee:ejores:v:243:y:2015:i:3:p:826-8382015-03-26RePEc:eee:ejores
RePEc:eee:ejores:v:243:y:2015:i:3:p:826-838
article
Blood platelet inventory management with protection levels
We consider a discrete-time inventory system for a perishable product where demand exists for product of different ages; an example of such a product is blood platelets. In addition to the classical costs for inventory holding, outdating, and shortage, our model includes substitution (mismatch) costs incurred when a demand for a certain-aged item is satisfied by a different-aged item. We propose a simple inventory replenishment and allocation heuristic to minimize the expected total cost over an infinite time horizon. In our heuristic, inventory of the newest items is replenished in fixed quantities and the newest items are protected for future use by limiting some substitutions when making allocation decisions according to a critical-level policy. We model our problem as a Markov Decision Process (MDP), derive the costs of our heuristic policy, and computationally compare this policy to extant “near optimal” policies in the literature. Our extensive computational study shows that our policy leads to superior performance compared to existing heuristics in the literature, particularly when supplies are limited.
OR in health services; Inventory management; Perishable goods; Heuristics; Critical level policy;
http://www.sciencedirect.com/science/article/pii/S0377221715000430
Civelek, Ismail
Karaesmen, Itir
Scheller-Wolf, Alan
oai:RePEc:eee:ejores:v:213:y:2011:i:2:p:375-3832015-03-26RePEc:eee:ejores
RePEc:eee:ejores:v:213:y:2011:i:2:p:375-383
article
Admission policies for the customized stochastic lot scheduling problem with strict due-dates
This paper considers admission control and scheduling of customer orders in a production system that produces different items on a single machine. Customer orders drive the production, belong to product families, and have family-dependent due-date, size, and reward. When production changes from one family to another, a setup time is incurred. Moreover, if an order cannot be accepted, it is considered lost upon arrival. The problem is to find a policy that accepts/rejects and schedules orders such that long run profit is maximized. This problem finds its motivation in batch industries in which suppliers have to realize high machine utilization while delivery times should be short and reliable and the production environment is subject to long setup times. We model the joint admission control/scheduling problem as a Markov decision process (MDP) to gain insight into the optimal control of the production system and use the MDP to benchmark the performance of a simple heuristic acceptance/scheduling policy. Numerical results show that the heuristic performs very well compared with the optimal policy for a wide range of parameter settings, including product family asymmetries in arrival rate, order size, and order reward.
Scheduling; Order acceptance; Make-to-order production; Stochastic lot scheduling problem; Due-dates;
http://www.sciencedirect.com/science/article/pii/S0377221711002311
Germs, Remco
Van Foreest, Nicky D.
oai:RePEc:eee:ejores:v:243:y:2015:i:3:p:932-9432015-03-26RePEc:eee:ejores
RePEc:eee:ejores:v:243:y:2015:i:3:p:932-943
article
Two exact algorithms for the traveling umpire problem
In this paper, we study the traveling umpire problem (TUP), a difficult combinatorial optimization problem that is formulated based on the key issues of Major League Baseball. We introduce an arc-flow model and a set partition model to formulate the problem. Based on these two models, we propose a branch-and-bound algorithm and a branch-and-price-and-cut algorithm. The branch-and-bound algorithm relies on lower bounds provided by a Lagrangian relaxation of the arc-flow model, while the branch-and-price-and-cut algorithm exploits lower bounds from the linear programming relaxation of the set partition model strengthened by a family of valid inequalities. In the branch-and-price-and-cut algorithm, we design an efficient label-setting algorithm to solve the pricing problem, and a branching strategy that combines three different branching rules. The two algorithms are tested on a set of well-known benchmark instances. The two exact algorithms are both able to rapidly solve instances with 10 teams or less, while the branch-and-price-and-cut algorithm can solve two instances with 14 teams. This is the first time that instances with 14 teams have been solved to optimality.
Umpire scheduling; Lagrangian relaxation; Column generation; OR in sports; Mathematical model;
http://www.sciencedirect.com/science/article/pii/S037722171401056X
Xue, Li
Luo, Zhixing
Lim, Andrew
oai:RePEc:eee:ejores:v:244:y:2015:i:1:p:153-1632015-03-26RePEc:eee:ejores
RePEc:eee:ejores:v:244:y:2015:i:1:p:153-163
article
Multi-class dynamic inventory rationing with stochastic demands and backordering
Dynamic inventory rationing is considered for systems with multiple demand classes, stationary stochastic demands, and backordering. In the literature, dynamic programming has been often applied to address this type of problems. However, due to the curse of dimensionality, computation is a critical challenge for dynamic programming. In this paper, an innovative two-step approach is proposed based on an idea similar to the certainty equivalence principle. First the deterministic inventory rationing problem is studied, where the future demands are set to be the expectation of the stochastic demand processes. The important properties obtained from solving the problem with the KKT conditions are then used to develop effective dynamic rationing policies for stochastic demands, which gives closed-form expressions for dynamic rationing thresholds. These expressions are easy to calculate and are applicable to any number of demand classes. Numerical results show that the expressions are close to and provide a lower bound for the optimal dynamic thresholds. They also shed light on important managerial insights, for example, the relation between different parameters and the rationing thresholds.
Dynamic inventory rationing; Multi-class stochastic demands; Backordering; Closed-form expressions;
http://www.sciencedirect.com/science/article/pii/S0377221715000429
Liu, Shudong
Song, Miao
Tan, Kok Choon
Zhang, Changyong
oai:RePEc:eee:ejores:v:214:y:2011:i:2:p:199-2152015-03-26RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:2:p:199-215
article
A convex optimisation framework for the unequal-areas facility layout problem
The unequal-areas facility layout problem is concerned with finding the optimal arrangement of a given number of non-overlapping indivisible departments with unequal area requirements within a facility. We present a convex-optimisation-based framework for efficiently finding competitive solutions for this problem. The framework is based on the combination of two mathematical programming models. The first model is a convex relaxation of the layout problem that establishes the relative position of the departments within the facility, and the second model uses semidefinite optimisation to determine the final layout. Aspect ratio constraints, frequently used in facility layout methods to restrict the occurrence of overly long and narrow departments in the computed layouts, are taken into account by both models. We present computational results showing that the proposed framework consistently produces competitive, and often improved, layouts for well-known large instances when compared with other approaches in the literature.
Facility layout; Semidefinite programming; Convex programming; Global optimisation;
http://www.sciencedirect.com/science/article/pii/S0377221711003560
Jankovits, Ibolya
Luo, Chaomin
Anjos, Miguel F.
Vannelli, Anthony
oai:RePEc:eee:ejores:v:243:y:2015:i:3:p:956-9632015-03-26RePEc:eee:ejores
RePEc:eee:ejores:v:243:y:2015:i:3:p:956-963
article
Range contracts: Risk sharing and beyond
We introduce and study the range contract, which allows a buyer to procure from a supplier at a prescribed price any amount within a specified range. In return, the supplier is compensated up front for the width of the range with a range fee. This fee can be viewed as the buyer trading monetary value for reduced uncertainty. The range contract generalizes and unifies many common contracts, such as fixed-price, JIT, option, and quantity-flexibility contracts. The parameters that maximize the expected profit of the centralized supply chain are derived here and are shown to crucially depend on production flexibility. We also study here the buyer’s expected profit-maximizing range endpoints as a function of the pricing parameters of the contract. Using the buyer’s optimal range, we demonstrate how the supplier can set the contract’s pricing parameters so as to maximize the supplier’s expected profit for a uniform distribution of demand. We provide computational evidence, for uniformly distributed demand, that the range contract allows the optimal decentralized supply chain to attain significant reductions in standard deviation of profit in exchange for moderate reductions in expected value of profit. We further demonstrate computationally that both the buyer and supplier can benefit simultaneously, attaining higher risk-adjusted profits than the centralized supply chain.
Risk management; Supply chain management; Contracts;
http://www.sciencedirect.com/science/article/pii/S0377221714010601
Hochbaum, Dorit S.
Wagner, Michael R.
oai:RePEc:eee:ejores:v:244:y:2015:i:1:p:117-1282015-03-26RePEc:eee:ejores
RePEc:eee:ejores:v:244:y:2015:i:1:p:117-128
article
Job-shop production scheduling with reverse flows
In this paper, we conduct a study of the job-shop scheduling problem with reverse flows. This NP-hard problem is characterized by two flows of jobs that cover the same machines in opposite directions. The objective is to minimize the maximal completion time of the jobs (i.e., the makespan).
Scheduling; Job-shop; Reverse flows; Heuristics; Linear programming;
http://www.sciencedirect.com/science/article/pii/S0377221715000338
Abdeljaouad, Mohamed Amine
Bahroun, Zied
Omrane, Anissa
Fondrevelle, Julien
oai:RePEc:eee:ejores:v:244:y:2015:i:2:p:648-6612015-03-26RePEc:eee:ejores
RePEc:eee:ejores:v:244:y:2015:i:2:p:648-661
article
Optimal wholesale facilities location within the fruit and vegetables supply chain with bimodal transportation options: An LP-MIP heuristic approach
Population growth creates a challenge to food availability and access. To balance supply with growing demand, more food has to move from production to consumption sites. Moreover, demand for locally-grown food is increasing and the U.S. Department of Agriculture (USDA) seeks to develop and strengthen regional and local food systems. This article examines wholesale facility (hub) locations in food supply chain systems on a national scale to facilitate the efficient transfer of food from production regions to consumption locations. It designs an optimal national wholesale or hub location network to serve food consumption markets through efficient connections with production sites. The mathematical formulation is a mixed integer linear programming (MILP) problem that minimizes total network costs, which include the costs of transporting goods and locating facilities. A scenario study is used to examine the model's sensitivity to parameter changes, including travel distance, hub capacity, transportation cost, etc. An application is made to the U.S. fruit and vegetable industry. We demonstrate how parameter changes affect the optimal locations and number of wholesale facilities.
Optimal hub location; Bimodal transportation; Operations research; Supply chain;
http://www.sciencedirect.com/science/article/pii/S0377221715000648
Etemadnia, Hamideh
Goetz, Stephan J.
Canning, Patrick
Tavallali, Mohammad Sadegh
oai:RePEc:eee:ejores:v:244:y:2015:i:1:p:129-1402015-03-26RePEc:eee:ejores
RePEc:eee:ejores:v:244:y:2015:i:1:p:129-140
article
The k-dissimilar vehicle routing problem
In this paper we define a new problem, the aim of which is to find a set of k dissimilar solutions for a vehicle routing problem (VRP) on a single instance. This problem has several practical applications in the cash-in-transit sector and in the transportation of hazardous materials. A min–max mathematical formulation is proposed which requires a maximum similarity threshold between VRP solutions, and the number k of dissimilar VRP solutions that need to be generated. An index to measure similarities between VRP solutions is defined based on the edges shared between pairs of alternative solutions. An iterative metaheuristic to generate k dissimilar alternative solutions is also presented. The solution approach is tested using large and medium size benchmark instances for the capacitated vehicle routing problem.
Decision support systems; Combinatorial optimization; Metaheuristic; Logistics; Risk management;
http://www.sciencedirect.com/science/article/pii/S0377221715000399
Talarico, L.
Sörensen, K.
Springael, J.
oai:RePEc:eee:ejores:v:213:y:2011:i:3:p:470-4772015-03-26RePEc:eee:ejores
RePEc:eee:ejores:v:213:y:2011:i:3:p:470-477
article
Branch and price for the vehicle routing problem with discrete split deliveries and time windows
The Discrete Split Delivery Vehicle Routing Problem with Time Windows (DSDVRPTW) consists of designing the optimal set of routes to serve, at least cost, a given set of customers while respecting constraints on vehicles' capacity and customer time windows. Each customer can be visited by more than one vehicle since each customer's demand, discretized in items, can be split in orders, i.e., feasible combinations of items. In this work, we model the DSDVRPTW assuming that all feasible orders are known in advance. Remarkably, service time at customer's location depends on the delivered combination of items, which is a modeling feature rarely found in literature. We present a flow-based mixed integer program for the DSDVRPTW, we reformulate it via Dantzig-Wolfe and we apply column generation. The proposed branch-and-price algorithm largely outperforms a commercial solver, as shown by computational experiments on Solomon-based instances. A comparison in terms of complexity between constant service time vs delivery-dependent service time is presented and potential savings are discussed.
Vehicle routing; Split delivery; Column generation; Dantzig-Wolfe decomposition; Branch and price;
http://www.sciencedirect.com/science/article/pii/S0377221711002347
Salani, Matteo
Vacca, Ilaria
oai:RePEc:eee:ejores:v:243:y:2015:i:2:p:637-6462015-03-26RePEc:eee:ejores
RePEc:eee:ejores:v:243:y:2015:i:2:p:637-646
article
Stochastic linear programming games with concave preferences
We study stochastic linear programming games: a class of stochastic cooperative games whose payoffs under any realization of uncertainty are determined by a specially structured linear program. These games can model a variety of settings, including inventory centralization and cooperative network fortification. We focus on the core of these games under an allocation scheme that determines how payoffs are distributed before the uncertainty is realized, and allows for arbitrarily different distributions for each realization of the uncertainty. Assuming that each player’s preferences over random payoffs are represented by a concave monetary utility functional, we prove that these games have a nonempty core. Furthermore, by establishing a connection between stochastic linear programming games, linear programming games and linear semi-infinite programming games, we show that an allocation in the core can be computed efficiently under some circumstances.
Game theory; Stochastic cooperative game;
http://www.sciencedirect.com/science/article/pii/S0377221714010261
Uhan, Nelson A.
oai:RePEc:eee:ejores:v:244:y:2015:i:2:p:434-4442015-03-26RePEc:eee:ejores
RePEc:eee:ejores:v:244:y:2015:i:2:p:434-444
article
An object-coding genetic algorithm for integrated process planning and scheduling
Process planning and jobshop scheduling problems are both crucial functions in manufacturing. In reality, dynamic disruptions such as machine breakdown or rush order will affect the feasibility and optimality of the sequentially-generated process plans and machining schedules. With the approach of integrated process planning and scheduling (IPPS), the actual process plan and the schedule are determined dynamically in accordance with the order details and the status of the manufacturing system. In this paper, an object-coding genetic algorithm (OCGA) is proposed to resolve the IPPS problems in a jobshop type of flexible manufacturing systems. An effective object-coding representation and its corresponding genetic operations are suggested, where real objects like machining operations are directly used to represent genes. Based on the object-coding representation, customized methods are proposed to fulfill the genetic operations. An unusual selection and a replacement strategy are integrated systematically for the population evolution, aiming to achieve near-optimal solutions through gradually improving the overall quality of the population, instead of exploring neighborhoods of good individuals. Experiments show that the proposed genetic algorithm can generate outstanding outcomes for complex IPPS instances.
Process planning; Jobshop scheduling; Genetic algorithm; Object-coding;
http://www.sciencedirect.com/science/article/pii/S0377221715000521
Zhang, Luping
Wong, T.N.
oai:RePEc:eee:ejores:v:214:y:2011:i:2:p:273-2832015-03-26RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:2:p:273-283
article
Strategic capability investments and competition for supply contracts
Suppliers often make proactive investments to strategically position themselves to win contracts with a large buyer. Such investments reduce the suppliers' variable costs of serving the buyer's demand. We show that an auction mechanism does not always benefit the buyer, the supply chain, or the society. We identify scenarios where the buyer can implement the supply chain and socially optimal solution by committing to a bilateral relationship with fair reimbursement, and forgoing the benefits of competition altogether. We explore the role of commitment by the buyer (to a procurement mechanism) and by the suppliers (to an investment level) by analyzing different timing games under symmetric and asymmetric information about suppliers' types. We show that it never benefits anyone for the suppliers to commit first. Equilibrium investments and cost structures depend upon the buyer's bargaining power (opportunity cost). However, the winning supplier's investments are almost always below the supply chain optimal level.
Auctions/bidding; Capability investments; Supply contracts; Type-conscious agents; First mover advantages;
http://www.sciencedirect.com/science/article/pii/S0377221711004085
Li, Ying
Gupta, Sudheer
oai:RePEc:eee:ejores:v:213:y:2011:i:2:p:414-4212015-03-26RePEc:eee:ejores
RePEc:eee:ejores:v:213:y:2011:i:2:p:414-421
article
Stochastic convergence of random search methods to fixed size Pareto front approximations
In this paper we investigate to what extent random search methods, equipped with an archive of bounded size to store a limited amount of solutions and other data, are able to obtain good Pareto front approximations. We propose and analyze two archiving schemes that allow for maintaining a sequence of solution sets of given cardinality that converge with probability one to an ε-Pareto set of a certain quality, under very mild assumptions on the process used to sample new solutions. The first algorithm uses a hierarchical grid to define a family of approximate dominance relations to compare solutions and solution sets. Acceptance of a new solution is based on a potential function that counts the number of occupied boxes (on various levels) and thus maintains a strictly monotonous progress to a limit set that covers the Pareto front with non-overlapping boxes at the finest resolution possible. The second algorithm uses an adaptation scheme to modify the current value of ε based on the information gathered during the run. This way it is possible to achieve convergence to the best (smallest) ε value, and to a corresponding solution set of k solutions that ε-dominate all other solutions, which is probably the best possible result regarding the limit behavior of random search methods or metaheuristics for obtaining Pareto front approximations.
Multiple criteria analysis; Multiobjective optimization; Metaheuristics; Search theory;
http://www.sciencedirect.com/science/article/pii/S0377221711002761
Laumanns, Marco
Zenklusen, Rico
oai:RePEc:eee:ejores:v:244:y:2015:i:2:p:540-5542015-03-26RePEc:eee:ejores
RePEc:eee:ejores:v:244:y:2015:i:2:p:540-554
article
The financing of innovative SMEs: A multicriteria credit rating model
Small and Medium-sized Enterprises (SMEs) face many obstacles when they try to access the credit market. These obstacles increase if the SMEs are innovative. In this case, financial data are insufficient or even unreliable. Thus, building a judgmental rating model, mainly based on qualitative criteria (soft information), is very important to finance SMEs’ activities. Until now, there has been no multicriteria credit risk model based on soft information for innovative SMEs. In this paper, we try to fill this gap by presenting a multicriteria credit risk model named ELECTRE-TRI. A SMAA-TRI analysis is also implemented to obtain robust SMEs’ assignments to the risk classes. SMAA-TRI incorporates ELECTRE-TRI by considering different sets of preference parameters and uncertainty in the data via Monte Carlo simulations. Finally, we carry out a real case study with the aim of illustrating the multicriteria credit risk model.
Multiple Criteria Decision Aiding; Qualitative criteria; SMAA-TRI; Innovative SMEs; Judgmental credit rating model;
http://www.sciencedirect.com/science/article/pii/S0377221715000533
Angilella, Silvia
Mazzù, Sebastiano
oai:RePEc:eee:ejores:v:214:y:2011:i:1:p:109-1172015-03-26RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:1:p:109-117
article
Capital renewal as a real option
We consider the timing of replacement of obsolete subsystems within an extensive, complex infrastructure. Such replacement action, known as capital renewal, must balance uncertainty about future profitability against uncertainty about future renewal costs. Treating renewal investments as real options, we derive an optimal solution to the infinite horizon version of this problem and determine the total present value of an institution's capital renewal options. We investigate the sensitivity of the infinite horizon solution to variations in key problem parameters and highlight the system scenarios in which timely renewal activity is most profitable. For finite horizon renewal planning, we show that our solution performs better than a policy of constant periodic renewals if more than two renewal cycles are completed.
Investment analysis Maintenance Replacement Facilities planning and design Simulation
http://www.sciencedirect.com/science/article/pii/S0377221711002797
Reindorp, Matthew J.
Fu, Michael C.
oai:RePEc:eee:ejores:v:243:y:2015:i:3:p:703-722 2015-03-26 RePEc:eee:ejores
RePEc:eee:ejores:v:243:y:2015:i:3:p:703-722
article
Algorithms for the continuous nonlinear resource allocation problem—New implementations and numerical studies
Patriksson (2008) provided a then up-to-date survey on the continuous, separable, differentiable and convex resource allocation problem with a single resource constraint. Since the publication of that paper the interest in the problem has grown: several new applications have arisen where the problem at hand constitutes a subproblem, and several new algorithms have been developed for its efficient solution. This paper therefore serves three purposes. First, it provides an up-to-date extension of the survey of the literature of the field, complementing the survey in Patriksson (2008) with more than 20 books and articles. Second, it contributes improvements to some of these algorithms, in particular an improvement of the pegging (that is, variable fixing) process in the relaxation algorithm, and an improved means to evaluate subsolutions. Third, it numerically evaluates several relaxation (primal) and breakpoint (dual) algorithms, incorporating a variety of pegging strategies, as well as a quasi-Newton method. Our conclusion is that our modification of the relaxation algorithm performs the best. At least for problem sizes up to 30 million variables, the practical time complexity of the breakpoint and relaxation algorithms is linear.
Resource allocation; Convex optimization; Lagrangian duality; Applications; Numerical analysis;
http://www.sciencedirect.com/science/article/pii/S0377221715000491
Patriksson, Michael
Strömberg, Christoffer
oai:RePEc:eee:ejores:v:213:y:2011:i:3:p:538-550 2015-03-26 RePEc:eee:ejores
RePEc:eee:ejores:v:213:y:2011:i:3:p:538-550
article
Heuristic algorithms for the cardinality constrained efficient frontier
This paper examines the application of genetic algorithm, tabu search and simulated annealing metaheuristic approaches to finding the cardinality constrained efficient frontier that arises in financial portfolio optimisation. We consider the mean-variance model of Markowitz as extended to include the discrete restrictions of buy-in thresholds and cardinality constraints. Computational results are reported for publicly available data sets drawn from seven major market indices involving up to 1318 assets. Our results are compared with previous results given in the literature illustrating the effectiveness of the proposed metaheuristics in terms of solution quality and computation time.
Efficient frontier Genetic algorithm Portfolio optimisation Simulated annealing Tabu search
http://www.sciencedirect.com/science/article/pii/S0377221711002670
Woodside-Oriakhi, M.
Lucas, C.
Beasley, J.E.
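As context for the mean-variance model referenced in the abstract above, here is a minimal sketch of the unconstrained Markowitz trade-off that the paper extends with buy-in thresholds and cardinality constraints. The asset data are made up for illustration and are not taken from the paper's benchmark sets.

```python
import numpy as np

# Illustrative asset data (assumed, not from the paper's data sets).
mu = np.array([0.10, 0.12, 0.07])            # expected returns
cov = np.array([[0.09, 0.01, 0.00],
                [0.01, 0.16, 0.02],
                [0.00, 0.02, 0.04]])         # return covariance matrix

def portfolio_stats(w):
    """Expected return and variance of a portfolio weight vector w."""
    return float(w @ mu), float(w @ cov @ w)

# Equal-weight portfolio as a baseline point (generally off the frontier).
ret, var = portfolio_stats(np.ones(3) / 3)
```

The cardinality-constrained version restricts how many weights may be nonzero, which makes the feasible set non-convex and motivates the metaheuristics studied in the paper.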
oai:RePEc:eee:ejores:v:244:y:2015:i:1:p:219-226 2015-03-26 RePEc:eee:ejores
RePEc:eee:ejores:v:244:y:2015:i:1:p:219-226
article
Decline and repair, and covariate effects
The failure processes of repairable systems may be impacted by operational and environmental stress factors. To accommodate such factors, reliability can be modelled using a multiplicative intensity function. In the proportional intensity model, the failure intensity is the product of the failure intensity function of the baseline system that quantifies intrinsic factors and a function of covariates that quantify extrinsic factors. The existing literature has extensively studied the failure processes of repairable systems using general repair concepts such as age-reduction when no covariate effects are considered. This paper investigates different approaches for modelling the failure and repair process of repairable systems in the presence of time-dependent covariates. We derive probabilistic properties of the failure processes for such systems.
Repair; Proportional intensity model; Virtual age; Maintenance;
http://www.sciencedirect.com/science/article/pii/S0377221715000612
Wu, Shaomin
Scarf, Philip
oai:RePEc:eee:ejores:v:243:y:2015:i:3:p:974-984 2015-03-26 RePEc:eee:ejores
RePEc:eee:ejores:v:243:y:2015:i:3:p:974-984
article
Historical evolution and benefit–cost explanation of periodical fluctuation in coal mine safety supervision: An evolutionary game analysis framework
The phenomenon of periodical fluctuation appears in coal mining and other fields of government safety supervision. This paper provides a theoretical explanation by building an evolutionary game model between the coal mine industry and governmental supervision institutions. Moreover, the paper provides a numerical example to demonstrate how the initial state and the costs (or gains) influence the fluctuation amplitude and the equilibrium position. We find that the initial state and the payoffs of the different strategies are the two main determinants of the periodical fluctuation phenomenon. The successful experience of coal mine safety supervision in China shows the importance of highly efficient government safety governance in developing countries.
OR in societal problem analysis; Evolutionary game; Government safety supervision; Periodical fluctuation; Coal mine accidents;
http://www.sciencedirect.com/science/article/pii/S0377221714010649
Liu, Dehai
Xiao, Xingzhi
Li, Hongyi
Wang, Weiguo
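The cyclic behavior the abstract above attributes to the firm–regulator interaction can be illustrated with two-population replicator dynamics for a 2x2 bimatrix game. The payoff values below are an assumed inspection-game-like structure, not the paper's model.

```python
def replicator_step(x, y, A, B, dt=0.01):
    # One Euler step of two-population replicator dynamics:
    # x = share of firms playing strategy 1 (e.g. "comply"),
    # y = share of regulators playing strategy 1 (e.g. "inspect").
    # A holds the row player's payoffs, B the column player's.
    f1 = A[0][0] * y + A[0][1] * (1 - y)     # payoff to row strategy 1
    f2 = A[1][0] * y + A[1][1] * (1 - y)     # payoff to row strategy 2
    g1 = B[0][0] * x + B[1][0] * (1 - x)     # payoff to column strategy 1
    g2 = B[0][1] * x + B[1][1] * (1 - x)     # payoff to column strategy 2
    x_next = x + dt * x * (1 - x) * (f1 - f2)
    y_next = y + dt * y * (1 - y) * (g1 - g2)
    return x_next, y_next

# Opposed-interest payoffs (illustrative, not the paper's values): with
# such payoffs the trajectory cycles around the interior mixed equilibrium.
A = [[1, -1], [-1, 1]]
B = [[-1, 1], [1, -1]]
x, y = 0.3, 0.3
for _ in range(1000):
    x, y = replicator_step(x, y, A, B)
```

The shares remain strictly inside the unit square, and for opposed-interest payoffs they orbit the mixed equilibrium rather than converging, which is one simple mechanism behind periodical fluctuation.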
oai:RePEc:eee:ejores:v:244:y:2015:i:1:p:201-209 2015-03-26 RePEc:eee:ejores
RePEc:eee:ejores:v:244:y:2015:i:1:p:201-209
article
Consistent weight restrictions in data envelopment analysis
It has recently been shown that the incorporation of weight restrictions in models of data envelopment analysis (DEA) may induce free or unlimited production of output vectors in the underlying production technology, which is expressly disallowed by standard production assumptions. This effect may either result in an infeasible multiplier model with weight restrictions or remain undetected by normal efficiency computations. The latter is potentially troubling because even if the efficiency scores appear unproblematic, they may still be assessed in an erroneous model of production technology. Two approaches to testing the existence of free and unlimited production have recently been developed: computational and analytical. While the latter is more straightforward than the former, its application is limited to unlinked weight restrictions. In this paper we develop several new analytical conditions for a larger class of unlinked and linked weight restrictions.
Data envelopment analysis; Weight restrictions; Production trade-offs;
http://www.sciencedirect.com/science/article/pii/S0377221715000570
Podinovski, Victor V.
Bouzdine-Chameeva, Tatiana
oai:RePEc:eee:ejores:v:213:y:2011:i:3:p:459-469 2015-03-26 RePEc:eee:ejores
RePEc:eee:ejores:v:213:y:2011:i:3:p:459-469
article
Experiments on forecasting behavior with several sources of information - A review of the literature
Decision makers frequently have to forecast the future values of a time series (e.g. the price of a commodity, sales figures) given several sources of information (e.g. leading indicators, forecasts of advisors). As a subdomain of decision theory the explanation and the improvement of human forecasting behavior are interdisciplinary issues and have been subject to extensive empirical field and laboratory research. We here review the relevant experimental literature, demonstrate the significance of these results for decision science in general, and summarize the implications for practical forecasting applications.
Experimental economics Heuristics Expectation formation Judgmental forecasting Adjustment of forecasts Revision of forecasts
http://www.sciencedirect.com/science/article/pii/S0377221711000099
Leitner, Johannes
Leopold-Wildburger, Ulrike
oai:RePEc:eee:ejores:v:244:y:2015:i:1:p:55-65 2015-03-26 RePEc:eee:ejores
RePEc:eee:ejores:v:244:y:2015:i:1:p:55-65
article
Models, solutions and enabling technologies in humanitarian logistics
We present a survey that focuses on the response and recovery planning phases of the disaster lifecycle. Related mathematical models developed in this area of research are classified in terms of vehicle/network representation structures and their functionality. The relationships between these characteristics and model size are discussed. The review provides details on goals, constraints, and structures of available mathematical models as well as solution methods. In this review, information systems applications in humanitarian logistics are also surveyed, since humanitarian logistics models and their solutions need to be integrated with information technology to enable their use in practice.
Humanitarian logistics; Modeling approaches; Information systems;
http://www.sciencedirect.com/science/article/pii/S0377221714009539
Özdamar, Linet
Ertem, Mustafa Alp
oai:RePEc:eee:ejores:v:213:y:2011:i:2:p:384-387 2015-03-26 RePEc:eee:ejores
RePEc:eee:ejores:v:213:y:2011:i:2:p:384-387
article
Unbounded knapsack problems with arithmetic weight sequences
We investigate a special case of the unbounded knapsack problem in which the item weights form an arithmetic sequence. We derive a polynomial time algorithm for this special case with running time O(n^8), where n denotes the number of distinct items in the instance. Furthermore, we extend our approach to a slightly more general class of knapsack instances.
Combinatorial optimization Computational complexity Dynamic programming Polynomially solvable special case
http://www.sciencedirect.com/science/article/pii/S0377221711002396
Deineko, Vladimir G.
Woeginger, Gerhard J.
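For context, the problem the abstract above specializes is the unbounded knapsack problem, usually solved by the classic pseudo-polynomial dynamic program sketched below. The paper's contribution is a polynomial O(n^8) algorithm exploiting the arithmetic weight structure, which this sketch does not implement; the item data here are illustrative.

```python
def unbounded_knapsack(capacity, weights, values):
    # Classic O(capacity * n) dynamic program for the unbounded knapsack:
    # best[c] holds the maximum value achievable with total weight <= c,
    # where each item may be taken any number of times.
    best = [0] * (capacity + 1)
    for c in range(1, capacity + 1):
        for w, v in zip(weights, values):
            if w <= c:
                best[c] = max(best[c], best[c - w] + v)
    return best[capacity]

# Items whose weights form an arithmetic sequence 3, 5, 7 (first term 3,
# common difference 2), with arbitrary illustrative values.
best_value = unbounded_knapsack(10, [3, 5, 7], [4, 7, 10])
```

Note that this DP runs in time polynomial in the capacity, which is pseudo-polynomial in the input size; the arithmetic structure is what allows the paper to avoid that dependence.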
oai:RePEc:eee:ejores:v:243:y:2015:i:2:p:405-422 2015-03-26 RePEc:eee:ejores
RePEc:eee:ejores:v:243:y:2015:i:2:p:405-422
article
Multi-objectivization, fitness landscape transformation and search performance: A case study on the HP model for protein structure prediction
Multi-objectivization represents a current and promising research direction which has led to the development of more competitive search mechanisms. This concept involves the restatement of a single-objective problem in an alternative multi-objective form, which can facilitate the process of finding a solution to the original problem. Recently, this transformation was applied with success to the HP model, a simplified yet challenging representation of the protein structure prediction problem. The use of alternative multi-objective formulations, based on the decomposition of the original objective function of the problem, has significantly increased the performance of search algorithms. The present study goes further on this topic. With the primary aim of understanding and quantifying the potential effects of multi-objectivization, a detailed analysis is first conducted to evaluate the extent to which this problem transformation impacts on an important characteristic of the fitness landscape, neutrality. To the authors’ knowledge, the effects of multi-objectivization have not been previously investigated by explicitly sampling and evaluating the neutrality of the fitness landscape. Although focused on the HP model, most of the findings of such an analysis can be extrapolated to other problem domains, contributing thus to the general understanding of multi-objectivization. Finally, this study presents a comparative analysis where the advantages of multi-objectivization are evaluated in terms of the performance of a basic evolutionary algorithm. Both the two- and three-dimensional variants of the HP model (based on the square and cubic lattices, respectively) are considered.
Multi-objectivization; Fitness landscape analysis; Protein structure prediction; Hydrophobic-polar model; Multi-objective evolutionary algorithms;
http://www.sciencedirect.com/science/article/pii/S0377221714004925
Garza-Fabre, Mario
Toscano-Pulido, Gregorio
Rodriguez-Tello, Eduardo
oai:RePEc:eee:ejores:v:213:y:2011:i:3:p:517-525 2015-03-26 RePEc:eee:ejores
RePEc:eee:ejores:v:213:y:2011:i:3:p:517-525
article
Group and individual heterogeneity in a stochastic frontier model: Container terminal operators
Container ports are a major component of international trade and the global supply chain. Hence, the improvement of port efficiency can have a significant impact on the wider maritime economy. This paper deconstructs a representation in the existing literature that neglects the heterogeneity of individual and group-specific terminal operators. In its place, we present a hierarchical model that connects efficiency to terminal operator group characteristics. The paper develops a stochastic frontier model that controls for not only individual heterogeneity but also group-specific variations. The model decomposes the total stochastic deviation from the frontier into inefficiency, individual heterogeneity, group-specific variation, and noise components, with the estimation being performed using Markov chain Monte Carlo simulations. The validity of the model is tested with a panel of container terminal operator data from 1997 to 2004. Our findings show that terminal operator groups are important in promoting terminal efficiency at the global level, and that operators with stevedore backgrounds show higher efficiency than carriers.
Stochastic processes Stochastic production frontier Markov processes Container terminal operators Port globalisation Group-specific
http://www.sciencedirect.com/science/article/pii/S0377221711002773
Yip, Tsz Leung
Sun, Xin Yu
Liu, John J.
oai:RePEc:eee:ejores:v:243:y:2015:i:2:p:566-575 2015-03-26 RePEc:eee:ejores
RePEc:eee:ejores:v:243:y:2015:i:2:p:566-575
article
Inventory and credit decisions for time-varying deteriorating items with up-stream and down-stream trade credit financing by discounted cash flow analysis
In today's competitive markets, most firms in the United Kingdom and the United States offer their products on trade credit to stimulate sales and reduce inventory. Trade credit is calculated based on the time value of money of the purchase cost (i.e., discounted cash flow analysis). Recently, many researchers have applied discounted cash flow analysis only to the purchase cost but not to the revenue (which is significantly larger than the purchase cost) or the other costs. For a sound and rigorous analysis, discounted cash flow analysis should be applied to both revenue and costs. In addition, the expiration date of a deteriorating item (e.g., bread, milk, and meat) is an important factor in a consumer's purchase decision. However, little attention has been paid to the effect of expiration dates. Hence, in this paper, we establish a supplier–retailer–customer supply chain model in which: (a) the retailer receives an up-stream trade credit from the supplier while granting a down-stream trade credit to customers, (b) the deterioration rate is non-decreasing over time and approaches 100 percent close to the expiration date, and (c) discounted cash flow analysis is adopted for calculating all relevant factors: revenue and costs. The proposed model extends more than 20 previous papers. We then demonstrate that the retailer's optimal credit period and cycle time not only exist but are also unique, so the search for the optimal solution reduces to a local search. Finally, we run several numerical examples to illustrate the problem and gain managerial insights.
Supply chain management; Expiration dates; Trade credit; Discounted cash flow;
http://www.sciencedirect.com/science/article/pii/S0377221714009898
Chen, Sheng-Chih
Teng, Jinn-Tsair
oai:RePEc:eee:ejores:v:213:y:2011:i:2:p:422-429 2015-03-26 RePEc:eee:ejores
RePEc:eee:ejores:v:213:y:2011:i:2:p:422-429
article
Incentive based energy market design
Current energy market designs and pricing schemes fail to give investors the appropriate market signals. In particular, energy prices are not high enough to attract investors to build new or maintain existing power capacity. In this paper we propose a method to compute second-best Pareto optimal equilibrium prices for any market exhibiting non-convexities and, based on this result, an energy market design able to restore the correct energy price signals for supply investors.
Economics Energy market design Equilibria
http://www.sciencedirect.com/science/article/pii/S037722171100213X
Muratore, Gabriella
oai:RePEc:eee:ejores:v:244:y:2015:i:2:p:417-433 2015-03-26 RePEc:eee:ejores
RePEc:eee:ejores:v:244:y:2015:i:2:p:417-433
article
Surrogate upper bound sets for bi-objective bi-dimensional binary knapsack problems
The paper deals with the definition and computation of surrogate upper bound sets for the bi-objective bi-dimensional binary knapsack problem. It introduces the Optimal Convex Surrogate Upper Bound set, which is the tightest possible definition based on the convex relaxation of the surrogate relaxation. Two exact algorithms are proposed: an enumerative algorithm and an improved version of it. The improved algorithm results from an accurate analysis of the surrogate multipliers and the dominance relations between bound sets. Based on the improved exact algorithm, an approximate version is derived. The proposed algorithms are benchmarked on a dataset composed of three groups of numerical instances. Performance is assessed through a comparative analysis in which the exact algorithms are compared with each other and the approximate algorithm is compared with an algorithm introduced in a recent research work.
Combinatorial optimization; Multiple objective programming; Bi-dimensional binary knapsack problem; Surrogate relaxation; Bound sets;
http://www.sciencedirect.com/science/article/pii/S0377221715000557
Cerqueus, Audrey
Przybylski, Anthony
Gandibleux, Xavier
oai:RePEc:eee:ejores:v:243:y:2015:i:2:p:628-636 2015-03-26 RePEc:eee:ejores
RePEc:eee:ejores:v:243:y:2015:i:2:p:628-636
article
The center of a convex set and capital allocation
We suggest a capital allocation scheme for a company that has a random total profit Y and uses a coherent risk measure ρ. The scheme returns a unique real number Λρ*(X,Y), which determines the capital that should be allocated to the company’s subsidiary with random profit X. The resulting capital allocation is linear and diversifying as defined by Kalkbrener (2005). The problem is reduced to selecting the “center” of a non-empty convex weakly compact subset of a Banach space, for which we use the solution proposed by Lim (1981). Our scheme can also be applied to selecting the unique Pareto optimal allocation in a wide class of optimal risk sharing problems.
Capital allocation; Risk contribution; Coherent risk measures; Risk sharing;
http://www.sciencedirect.com/science/article/pii/S0377221714009862
Grechuk, Bogdan
oai:RePEc:eee:ejores:v:214:y:2011:i:2:p:340-347 2015-03-26 RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:2:p:340-347
article
The nested consideration model: Investigating dynamic store consideration sets and store competition
The nested logit model has been widely used to study nested choice. A typical example of such nested choice is store patronage and brand choice. An important limitation of the nested logit model is that it assumes that all alternatives at both levels of the nest are available in the choice set of the consumer. While there is a wide literature on incorporating restricted choice sets into the logit model, its logical extension to nested restricted choice sets has not been pursued. In this study we develop a model that integrates store choice and brand choice, incorporating the formation of dynamic restricted choice sets of both stores and brands; we term it the nested consideration model and derive the related probabilities in a manner analogous to the well-known nested logit model. In an empirical illustration, we find that the nested consideration model predicts better than nested logit models with the same explanatory variables. We also find that it is important to account for dynamic store consideration sets rather than static sets or store loyalty measures, and we conduct simulations to demonstrate the importance of the model for correct pricing and store location decisions of business managers.
Marketing Nested consideration Store competition Consideration sets Nested logit
http://www.sciencedirect.com/science/article/pii/S0377221711003717
Pancras, Joseph
oai:RePEc:eee:ejores:v:244:y:2015:i:2:p:360-368 2015-03-26 RePEc:eee:ejores
RePEc:eee:ejores:v:244:y:2015:i:2:p:360-368
article
An effective branch-and-price algorithm for the Preemptive Resource Constrained Project Scheduling Problem based on minimal Interval Order Enumeration
In this paper we address the Preemptive Resource Constrained Project Scheduling Problem (PRCPSP). PRCPSP requires a partially ordered set of activities to be scheduled using limited renewable resources such that any activity can be interrupted and later resumed without penalty. The objective is to minimize the project duration. This paper proposes an effective branch-and-price algorithm for solving PRCPSP based upon minimal Interval Order Enumeration involving column generation as well as constraint propagation. Experiments conducted on various types of instances have given very satisfactory results. Our algorithm is able to solve to optimality the entire set of J30, BL and Pack instances while satisfying the preemptive requirement. Furthermore, this algorithm provides improved best-known lower bounds for some of the J60, J90 and J120 instances in the non-preemptive case (RCPSP).
Project scheduling; Interval order; Column generation; Constraint propagation; Antichain;
http://www.sciencedirect.com/science/article/pii/S0377221714010558
Moukrim, Aziz
Quilliot, Alain
Toussaint, Hélène
oai:RePEc:eee:ejores:v:244:y:2015:i:2:p:525-539 2015-03-26 RePEc:eee:ejores
RePEc:eee:ejores:v:244:y:2015:i:2:p:525-539
article
Multi-objective portfolio optimization considering the dependence structure of asset returns
The portfolio optimization literature has shed only little light on the dependence structure among financial returns and the fat-tailed distributions associated with them. This study addresses this shortcoming by exploiting stable distributions as the marginal distributions together with a dependence structure based on a copula function. We formulate the portfolio optimization problem as a multi-objective mixed integer program. Value-at-Risk (VaR) is specified as the risk measure due to its intuitive appeal and importance in financial regulations. To enhance the model's applicability, we take cardinality and quantity constraints into account. Imposing such practical constraints results in a non-continuous feasible region. Hence, we propose two variants of multi-objective particle swarm optimization (MOPSO) algorithms to tackle this issue. Finally, a comparative study among the proposed MOPSOs and the NSGA-II and SPEA2 algorithms is made to determine which algorithm performs best. The empirical results reveal that one of the proposed MOPSOs is superior to the other algorithms in terms of the performance metrics.
Metaheuristics; Portfolio optimization; Stable distribution; Copula function; Multi-objective particle swarm optimization;
http://www.sciencedirect.com/science/article/pii/S0377221715000454
Babaei, Sadra
Sepehri, Mohammad Mehdi
Babaei, Edris
oai:RePEc:eee:ejores:v:244:y:2015:i:1:p:187-200 2015-03-26 RePEc:eee:ejores
RePEc:eee:ejores:v:244:y:2015:i:1:p:187-200
article
Sustainable trade credit and replenishment decisions with credit-linked demand under carbon emission constraints
In this paper, we consider issues of sustainability in the context of joint trade credit and inventory management in which the demand depends on the length of the credit period offered by the retailer to its customers. We quantify the impacts of the credit period and environmental regulations on the inventory model. Starting with some mild assumptions, we first analyze the model with generalized demand and default risk rates under the Carbon Cap-and-Trade policy, and then we make some extensions to the model with the Carbon Offset policy. We further analytically examine the effects of carbon emission parameters on the retailer’s trade credit and replenishment strategies. Finally, a couple of numerical examples and sensitivity analysis are given to illustrate the features of the proposed model, which is followed by concluding remarks.
Environmental regulation; Inventory; Trade credit; Default risk; Carbon emissions;
http://www.sciencedirect.com/science/article/pii/S0377221715000466
Dye, Chung-Yuan
Yang, Chih-Te
oai:RePEc:eee:ejores:v:243:y:2015:i:2:p:497-513 2015-03-26 RePEc:eee:ejores
RePEc:eee:ejores:v:243:y:2015:i:2:p:497-513
article
Multiobjective optimization: When objectives exhibit non-uniform latencies
Building on recent work by the authors, we consider the problem of performing multiobjective optimization when the objective functions of a problem have differing evaluation times (or latencies). This has general relevance to applications since objective functions do vary greatly in their latency, and there is no reason to expect equal latencies for the objectives in a single problem. To deal with this issue, we provide a general problem definition and suitable notation for describing algorithm schemes that can use different evaluation budgets for each objective. We propose three schemes for the bi-objective version of the problem, including methods that interleave the evaluations of different objectives. All of these can be instantiated with existing multiobjective evolutionary algorithms (MOEAs). In an empirical study we use an indicator-based evolutionary algorithm (IBEA) as the MOEA platform to study performance on several benchmark test functions. Our findings generally show that the default approach of going at the rate of the slow objective is not competitive with our more advanced ones (interleaving evaluations) for most scenarios.
Multiobjective optimization; Evolutionary computation; Delayed objective functions; Closed-loop optimization; Budgeted optimization;
http://www.sciencedirect.com/science/article/pii/S0377221714007723
Allmendinger, Richard
Handl, Julia
Knowles, Joshua
oai:RePEc:eee:ejores:v:243:y:2015:i:2:p:665-677 2015-03-26 RePEc:eee:ejores
RePEc:eee:ejores:v:243:y:2015:i:2:p:665-677
article
A probability tree model of audit quality
There remains little consensus about how to define and formulate audit quality. It has no consistent definition and operationalization across studies, and this has troubled theorists for many years. This study contributes to the discussion by introducing a probability tree model of audit quality. The model is built from characteristics associated with the four sets of audit quality indicators: inputs, process, context, and outcomes (Knechel, W. R., Krishnan, G. V., Pevzner, M., Shefchik, L. B., & Velury, U. (2012). Audit quality: Insights from the academic literature. Auditing: A Journal of Practice & Theory 32(1), 385–421). The purpose is to show how these indicators interplay in the context of audit quality. The model describes the audit program of an audit engagement as a random tree based on a stochastic process. Following Simon's (Simon, H. A. (1956). Rational choice and the structure of the environment. Psychological Review 63(2), 129–138; Simon, H. A. (1957). Models of man. Social and rational. New York: John Wiley & Sons) description of adaptive behavior, the model describes an audit program as an organic procedure in which an auditor does not maximize but searches for material misstatements (inadvertent errors) in a random environment. If the auditor, under a budget constraint, does not (in spite of positive inherent risk) detect any misstatement, the audit program will erroneously end with an unqualified report (a false negative outcome). In this context, we measure subjective audit quality as the probability of the complement of this event (the probability of detecting one or more misstatements). We also introduce the concept of a perfect auditor with optimal characteristics. Finally, we measure objective audit quality as the ratio of the complement-event probabilities between the auditor and the perfect auditor. The analytical results are demonstrated by numerical examples.
Applied probability; Probabilistic model; Random tree; Audit quality; Detection probability;
http://www.sciencedirect.com/science/article/pii/S0377221714010224
Laitinen, Erkki K.
Laitinen, Teija
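The notion of subjective audit quality as the probability of detecting at least one misstatement, described in the abstract above, can be illustrated under a simplifying independence assumption. The independence assumption is ours for illustration; the paper's random-tree model is richer.

```python
def detection_probability(p, n):
    # Probability of detecting at least one of n misstatements when each
    # is detected independently with probability p. This is the complement
    # of the false-negative event (detecting none). The independence
    # assumption is illustrative only, not the paper's model.
    return 1.0 - (1.0 - p) ** n

# Two misstatements, each detected with probability 0.5.
q = detection_probability(0.5, 2)
```

In the paper's terms, objective audit quality would then compare this quantity with the corresponding probability for a perfect auditor with optimal characteristics.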
oai:RePEc:eee:ejores:v:243:y:2015:i:2:p:386-394 2015-03-26 RePEc:eee:ejores
RePEc:eee:ejores:v:243:y:2015:i:2:p:386-394
article
Quantifying uncertainty on Pareto fronts with Gaussian process conditional simulations
Multi-objective optimization algorithms aim at finding Pareto-optimal solutions. Recovering Pareto fronts or Pareto sets from a limited number of function evaluations is a challenging problem. A popular approach in the case of expensive-to-evaluate functions is to appeal to metamodels. Kriging has been shown to be efficient as a basis for sequential multi-objective optimization, notably through infill sampling criteria balancing exploitation and exploration, such as the Expected Hypervolume Improvement. Here we consider Kriging metamodels not only for selecting new points, but as a tool for estimating the whole Pareto front and quantifying how much uncertainty remains on it at any stage of Kriging-based multi-objective optimization algorithms. Our approach relies on the Gaussian process interpretation of Kriging and builds upon conditional simulations. Using concepts from random set theory, we propose to adapt the Vorob’ev expectation and deviation to capture the variability of the set of non-dominated points. Numerical experiments illustrate the potential of the proposed workflow, and it is shown on examples how Gaussian process simulations and the estimated Vorob’ev deviation can be used to monitor the ability of Kriging-based multi-objective optimization algorithms to accurately learn the Pareto front.
Multi-objective optimization; Attainment function; Vorob’ev expectation; Expected Hypervolume Improvement; Kriging;
http://www.sciencedirect.com/science/article/pii/S0377221714005980
Binois, M.
Ginsbourger, D.
Roustant, O.
oai:RePEc:eee:ejores:v:244:y:2015:i:2:p:511-518 2015-03-26 RePEc:eee:ejores
RePEc:eee:ejores:v:244:y:2015:i:2:p:511-518
article
Testing the accuracy of DEA estimates under endogeneity through a Monte Carlo simulation
Endogeneity, and the distortions it causes in the estimation of economic models, is a common problem in the econometrics literature. Although non-parametric methods like Data Envelopment Analysis (DEA) are among the most used techniques for measuring technical efficiency, the effects of this problem on efficiency estimates have received little attention. The aim of this paper is to alert DEA practitioners to the accuracy of their estimates in the presence of endogeneity. To this end, we first illustrate the endogeneity problem and its causes in production processes, and its implications for efficiency measurement, from a conceptual perspective. Second, using synthetic data generated in a Monte Carlo experiment, we evaluate how different levels of positive and negative endogeneity can impair DEA estimates. We conclude that, although DEA is robust to negative endogeneity, a high level of positive endogeneity, i.e., a high positive correlation between one input and the true efficiency level, might severely bias DEA estimates.
Data envelopment analysis; Endogeneity; Monte Carlo experiments;
http://www.sciencedirect.com/science/article/pii/S0377221715000351
Cordero, José Manuel
Santín, Daniel
Sicilia, Gabriela
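In the special case of a single input and a single output under constant returns to scale, the DEA efficiency score discussed in the abstract above reduces to each unit's output/input ratio normalised by the best observed ratio. The toy function below is our simplification for illustration, not the paper's Monte Carlo design; the general multi-input case requires solving a linear program per unit.

```python
def dea_ratio_efficiency(inputs, outputs):
    # Single-input, single-output CRS efficiency: each unit's output/input
    # ratio divided by the best observed ratio, so the frontier unit scores
    # 1.0. Endogeneity would appear here as correlation between an input
    # and the (unobserved) true efficiency generating the data.
    ratios = [y / x for x, y in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# Three illustrative decision-making units (made-up data).
scores = dea_ratio_efficiency([1.0, 2.0, 4.0], [1.0, 2.0, 2.0])
```

Because the frontier is estimated from the observed data themselves, any correlation between inputs and true efficiency propagates into the estimated scores, which is the distortion the paper quantifies.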
oai:RePEc:eee:ejores:v:244:y:2015:i:1:p:227-239 2015-03-26 RePEc:eee:ejores
RePEc:eee:ejores:v:244:y:2015:i:1:p:227-239
article
Algorithm for computing the queue length distribution at various time epochs in DMAP/G(1, a, b)/1/N queue with batch-size-dependent service time
This paper presents a discrete-time single-server finite-buffer queue with a Markovian arrival process and generally distributed batch-size-dependent service times. Given that infinite service time is not commonly encountered in practical situations, we suppose that the distribution of the service time has finite support. Recently, a similar continuous-time system with a Poisson input process was discussed by Banerjee and Gupta (2012). Unfortunately, their method is hard to apply to the discrete-time case with a versatile Markovian point process, because the difference equation governing the boundary state probabilities is more complex than its continuous counterpart. Following their ideas, one eventually finds that some important joint queue length distributions cannot be computed and thus some key performance measures cannot be derived. In this paper, replacing the finite-support renewal distribution with an appropriate phase-type distribution, the joint state probabilities at various time epochs (arbitrary, pre-arrival and departure) are obtained using the matrix analytic method and the embedded Markov chain technique. Furthermore, the UL-type RG-factorization is employed in the numerical computation of block-structured Markov chains with finitely many levels. Some numerical examples are presented to demonstrate the feasibility of the proposed algorithm for several service time distributions. Moreover, the impact of the correlation factor on the loss probability and mean sojourn time is also investigated.
Queueing; Batch-size-dependent service; Markovian arrival process; Phase-type distribution; Finite-buffer;
http://www.sciencedirect.com/science/article/pii/S0377221715000764
Yu, Miaomiao
Alfa, Attahiru Sule
oai:RePEc:eee:ejores:v:244:y:2015:i:1:p:77-85 2015-03-26 RePEc:eee:ejores
RePEc:eee:ejores:v:244:y:2015:i:1:p:77-85
article
A parallelised distributed implementation of a Branch and Fix Coordination algorithm
Branch and Fix Coordination is an algorithm intended to solve large scale multi-stage stochastic mixed integer problems, based on the particular structure of such problems, so that they can be broken down into smaller subproblems. With this in mind, it is possible to use distributed computation techniques to solve the several subproblems in a parallel way, almost independently. To guarantee non-anticipativity in the global solution, the values of the integer variables in the subproblems are coordinated by a master thread. Scenario ‘clusters’ lend themselves particularly well to parallelisation, allowing us to solve some problems noticeably faster. Thanks to the decomposition into smaller subproblems, we can also attempt to solve otherwise intractable instances. In this work, we present details on the computational implementation of the Branch and Fix Coordination algorithm.
Stochastic mixed-integer problems; Branch and fix coordination algorithm; Parallel programming;
http://www.sciencedirect.com/science/article/pii/S0377221715000053
Pagès-Bernaus, Adela
Pérez-Valdés, Gerardo
Tomasgard, Asgeir
oai:RePEc:eee:ejores:v:214:y:2011:i:1:p:1-14 2015-03-26 RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:1:p:1-14
article
Interest rate term structure modelling
This article surveys approaches to modelling the term structure of interest rates. Over the last few decades several frameworks have been developed, which are actively used in banks for the pricing and risk management of interest rate related products. There seems to be a need for an introductory overview of modelling approaches aimed at readers with a quantitative background who are not yet familiar with the field.
Finance; Interest rate; Term structure; Arbitrage pricing theory;
http://www.sciencedirect.com/science/article/pii/S0377221711000877
Schmidt, Wolfgang M.
oai:RePEc:eee:ejores:v:243:y:2015:i:2:p:480-496 2015-03-26 RePEc:eee:ejores
RePEc:eee:ejores:v:243:y:2015:i:2:p:480-496
article
Generalized higher-level automated innovization with application to inventory management
This paper generalizes the automated innovization framework using genetic programming in the context of higher-level innovization. Automated innovization is an unsupervised machine learning technique that can automatically extract significant mathematical relationships from Pareto-optimal solution sets. These resulting relationships describe the conditions for Pareto-optimality for the multi-objective problem under consideration and can be used by scientists and practitioners as rules of thumb to understand the problem better and to innovate new problem solving techniques; hence the name innovization (innovation through optimization). Higher-level innovization involves performing automated innovization on multiple Pareto-optimal solution sets obtained by varying one or more problem parameters. The automated innovization framework was recently updated using genetic programming. We extend this generalization to perform higher-level automated innovization and demonstrate the methodology on a standard two-bar bi-objective truss design problem. The procedure is then applied to a classic case of inventory management with multi-objective optimization performed at both system and process levels. The applicability of automated innovization to this area should motivate its use in other avenues of operational research.
Automated innovization; Higher-level innovization; Genetic programming; Inventory management; Knowledge discovery;
http://www.sciencedirect.com/science/article/pii/S0377221714009199
Bandaru, Sunith
Aslam, Tehseen
Ng, Amos H.C.
Deb, Kalyanmoy
oai:RePEc:eee:ejores:v:214:y:2011:i:2:p:365-379 2015-03-26 RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:2:p:365-379
article
Credit risk model with contagious default dependencies affected by macro-economic condition
We consider a credit risk model with two industrial sectors, where defaults of corporations would be influenced by two factors. The first factor represents the macro economic condition which would affect the default intensities of the two industrial sectors differently. The second factor reflects the influences of the past defaults of corporations against other active corporations, where such influences would affect the two industrial sectors differently. A two-layer Markov chain model is developed, where the macro economic condition is described as a birth-death process, while another Markov chain represents the stochastic characteristics of defaults with default intensities dependent on the state of the birth-death process and the number of defaults in two sectors. Although the state space of the two-layer Markov chain is huge, the fundamental absorbing process with a reasonable state space size could capture the first passage time structure of the two-layer Markov chain, thereby enabling one to evaluate the joint probability of the number of defaults in two sectors via the uniformization procedure of Keilson. This in turn enables one to value a variety of derivatives defined on the underlying credit portfolios. In this paper, we focus on a financial product called CDO, and a related option.
Finance; Pricing; Risk analysis; Systems dynamics;
http://www.sciencedirect.com/science/article/pii/S0377221711003857
Takada, Hideyuki
Sumita, Ushio
oai:RePEc:eee:ejores:v:243:y:2015:i:2:p:678-681 2015-03-26 RePEc:eee:ejores
RePEc:eee:ejores:v:243:y:2015:i:2:p:678-681
article
New results on high-order risk changes
This note extends the results on the first four derivatives of the utility function by Menegatti (Eur. J. Oper. Res. 232 (2014) 613–617) to the case of high-order derivatives. We show that, under usual assumptions, if the generic derivative of the utility function of order n is sign invariant then all the derivatives from order n to order 2 alternate in sign. We then focus on the case where the derivative of the utility function of order n is either positive when n is odd or negative when n is even, and we show the implications of this result for high-order risk changes and for saving decisions.
Utility theory; Risk; nth-Order risk change; nth-Order derivatives;
http://www.sciencedirect.com/science/article/pii/S0377221714010248
Menegatti, Mario
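A concrete instance of the alternating-sign pattern discussed in this note: for log utility the n-th derivative has the closed form (-1)^(n-1) (n-1)!/x^n, so derivatives are positive for odd n and negative for even n. A quick numerical check (illustrative example, not taken from the paper):

```python
from math import factorial

def log_utility_deriv(n, x):
    """n-th derivative of u(x) = ln(x): (-1)**(n-1) * (n-1)! / x**n."""
    return (-1) ** (n - 1) * factorial(n - 1) / x ** n

# Derivatives alternate in sign: positive for odd n, negative for even n,
# the pattern the note links to high-order risk attitudes.
signs = [log_utility_deriv(n, 2.0) > 0 for n in range(1, 7)]
print(signs)  # [True, False, True, False, True, False]
```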
oai:RePEc:eee:ejores:v:243:y:2015:i:3:p:731-744 2015-03-26 RePEc:eee:ejores
RePEc:eee:ejores:v:243:y:2015:i:3:p:731-744
article
On service consistency in multi-period vehicle routing
In this paper, we investigate a new variant of the vehicle routing problem (VRP), termed the multi-period vehicle routing problem with time windows and limited visiting quota (MVRPTW-LVQ), which requires that any customer can be served by at most a certain number of different vehicles over the planning horizon. We first formulate this problem as a mixed integer programming model and then devise a three-stage heuristic approach to solve the problem. Extensive computational experiments demonstrate the effectiveness of our approach. Moreover, we empirically analyze the impacts of varying the levels of service consistency and demand fluctuation on the operational cost. The analysis results show that when demand fluctuation is relatively small compared to vehicle capacity, enforcing consistent service can increase customer satisfaction with only a slight increase in the operational cost. However, when a vehicle can only serve a small number of customers due to its capacity limit, relaxing the service consistency requirement by increasing the value of the visiting quota could be considered.
Heuristics; Vehicle routing; Service consistency; Demand fluctuation;
http://www.sciencedirect.com/science/article/pii/S0377221714010200
Luo, Zhixing
Qin, Hu
Che, ChanHou
Lim, Andrew
oai:RePEc:eee:ejores:v:243:y:2015:i:2:p:442-453 2015-03-26 RePEc:eee:ejores
RePEc:eee:ejores:v:243:y:2015:i:2:p:442-453
article
The iPICEA-g: a new hybrid evolutionary multi-criteria decision making approach using the brushing technique
Various preference-based multi-objective evolutionary algorithms have been developed to help a decision-maker search for his/her preferred solutions to multi-objective problems. In most of the existing approaches the decision-maker's preferences are formulated either by mathematical expressions such as a utility function or simply by numerical values such as aspiration levels and weights. However, in some sense a decision-maker may find it easier to specify preferences visually by drawing rather than using numbers. This paper introduces such a method, namely, the brushing technique. Using this technique the decision-maker can specify his/her preferences easily by drawing in the objective space. Combining the brushing technique with the existing algorithm PICEA-g, we present a novel approach named iPICEA-g for interactive decision-making. The performance of iPICEA-g is tested on a set of benchmark problems and is shown to be good.
Preference articulation; Decision making; Multi-objective optimization; Evolutionary algorithms;
http://www.sciencedirect.com/science/article/pii/S0377221714008789
Wang, Rui
Purshouse, Robin C.
Giagkiozis, Ioannis
Fleming, Peter J.
oai:RePEc:eee:ejores:v:214:y:2011:i:2:p:418-427 2015-03-26 RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:2:p:418-427
article
Forecasting the value effect of seasoned equity offering announcements
Seasoned Equity Offers (SEOs) by publicly listed firms generally result in unexpected negative share price returns, often perceived as a signal of overvalued share prices and information asymmetries. Hence, forecasting the value effect of such announcements is of crucial importance for issuers, who wish to avoid share price dilution, but also for professional fund managers and individual investors alike. This study adopts the OR forecasting paradigm, where the latest part of the data is used as a holdout on which a competition is performed to unveil the most effective forecasting techniques for the matter in question. We employ data from a European market raising in total €8 billion through 149 SEOs. We compare economic and econometric models to forecasting techniques mostly applied in the OR literature, such as nearest-neighbour approaches and artificial neural networks, as well as human judgment. Evaluation in terms of statistical accuracy metrics indicates the superiority of the econometric models, while economic evaluation based on trading strategies and simulated profits identifies expert judgment and nearest-neighbour approaches as the top performers.
Financial forecasting; Forecasting competitions; Econometric models; Artificial neural networks; Judgment;
http://www.sciencedirect.com/science/article/pii/S0377221711003481
Bozos, Konstantinos
Nikolopoulos, Konstantinos
oai:RePEc:eee:ejores:v:244:y:2015:i:2:p:576-591 2015-03-26 RePEc:eee:ejores
RePEc:eee:ejores:v:244:y:2015:i:2:p:576-591
article
Generic constraints handling techniques in constrained multi-criteria optimization and its application
This paper investigates constraints handling techniques (CHTs) in algorithms for the constrained multi-criteria optimization problem (CMOP). The CHT is an important research topic in constrained multi-criteria optimization (MO). In this paper, two simple and practicable CHTs are proposed: one is a nonequivalent relaxation approach well suited to the constrained multi-criteria discrete optimization problem (MDOP), and the other is an equivalent relaxation approach for the general CMOP. Using these CHTs, a CMOP (i.e., the primal problem) can be transformed into an unconstrained multi-criteria optimization problem (MOP) (i.e., the relaxation problem). Based on the first CHT, it is theoretically proven that the efficient set of the primal CMOP is a subset of the strictly efficient set E¯ of the relaxation problem and can be extracted from E¯ by simply checking the dominance relation between the solutions in E¯. Following these theoretical results, a three-phase idea is given to effectively utilize existing algorithms for the unconstrained MDOP to solve the constrained MDOP. In the second CHT, the primal CMOP is equivalently transformed into an unconstrained MOP by a special relaxation approach. Based on this CHT, it is proven that the primal problem and its relaxation problem have the same efficient set and, therefore, general CMOPs can be solved by utilizing any of the existing algorithms for unconstrained MOPs. The implementation of the second CHT, a two-phase idea, is illustrated by embedding it in a known MOEA. Finally, the two-phase idea is applied to some of the early MOEAs and the resulting performance is comprehensively tested on several CMOP benchmarks.
Constraint programming; Multi-criteria optimization (MO); Multi-criteria optimization evolutionary algorithm (MOEA); Evolutionary algorithm (EA); Constraints handling technique (CHT);
http://www.sciencedirect.com/science/article/pii/S0377221715000715
Liu, Linzhong
Mu, Haibo
Yang, Juhua
oai:RePEc:eee:ejores:v:214:y:2011:i:1:p:15-26 2015-03-26 RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:1:p:15-26
article
Pivot-and-reduce cuts: An approach for improving Gomory mixed-integer cuts
Gomory mixed-integer cuts are of crucial importance in solving large-scale mixed-integer linear programs. Recently, there has been considerable research particularly on the strengthening of these cuts. We present a new heuristic algorithm which aims at improving Gomory mixed-integer cuts. Our approach is related to the reduce-and-split cuts. These cuts are based on an algorithm which tries to reduce the coefficients of the continuous variables by forming integer linear combinations of simplex tableau rows. Our algorithm is designed to achieve the same result by performing pivots on the simplex tableau. We give a detailed description of the algorithm and its implementation. Finally, we report on computational results with our approach and analyze its performance. The results indicate that our algorithm can enhance the performance of the Gomory mixed-integer cuts.
Integer programming; Cutting planes; Gomory mixed-integer cuts;
http://www.sciencedirect.com/science/article/pii/S037722171100350X
Wesselmann, Franz
Koberstein, Achim
Suhl, Uwe H.
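For reference, the standard Gomory mixed-integer cut that this paper strengthens is derived from a single simplex row with fractional right-hand side. A sketch of the textbook coefficient formulas (illustrative only; this is the classical cut, not the pivot-and-reduce variant proposed in the paper):

```python
import math

def gmi_cut(row, rhs, is_integer):
    """Coefficients c_j of the Gomory mixed-integer cut  sum_j c_j x_j >= 1,
    derived from a simplex row  x_B + sum_j row[j] * x_j = rhs  whose
    right-hand side is fractional. is_integer[j] flags integer variables."""
    f0 = rhs - math.floor(rhs)  # fractional part of the right-hand side
    coeffs = []
    for a, integral in zip(row, is_integer):
        if integral:
            f = a - math.floor(a)  # fractional part of the coefficient
            coeffs.append(f / f0 if f <= f0 else (1 - f) / (1 - f0))
        else:
            coeffs.append(a / f0 if a >= 0 else -a / (1 - f0))
    return coeffs

# Row x_B + 0.3*x_1 - 0.2*x_2 = 2.5, with x_1 integer and x_2 continuous:
print(gmi_cut([0.3, -0.2], 2.5, [True, False]))  # approximately [0.6, 0.4]
```

The reduce-and-split idea the authors relate to is precisely about making the continuous-variable coefficients `a` small before applying these formulas, which tightens the resulting cut.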
oai:RePEc:eee:ejores:v:214:y:2011:i:1:p:99-108 2015-03-26 RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:1:p:99-108
article
Interaction indices for games on combinatorial structures with forbidden coalitions
The notion of interaction among a set of players has been defined on the Boolean lattice and Cartesian products of lattices. The aim of this paper is to extend this concept to combinatorial structures with forbidden coalitions. The set of feasible coalitions is supposed to fulfil some general conditions. This general representation encompasses convex geometries, antimatroids, augmenting systems and distributive lattices. Two axiomatic characterizations are obtained. They both assume that the Shapley value is already defined on the combinatorial structures. The first one is restricted to pairs of players and is based on a generalization of a recursivity axiom that uniquely specifies the interaction index from the Shapley value when all coalitions are permitted. This unique correspondence cannot be maintained when some coalitions are forbidden. From this, a weak recursivity axiom is defined. We show that this axiom together with linearity and dummy player are sufficient to specify the interaction index. The second axiomatic characterization is obtained from the linearity, dummy player and partnership axioms. An interpretation of the interaction index in the context of surplus sharing is also proposed. Finally, our interaction index is instantiated to the case of games under precedence constraints.
Game theory; Cooperative games; Interaction index; Combinatorial structure; Shapley value;
http://www.sciencedirect.com/science/article/pii/S0377221711003493
Labreuche, Christophe
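The interaction indices above are axiomatized relative to a Shapley value already defined on the structure. On the unrestricted Boolean lattice (all coalitions permitted), the Shapley value can be computed by averaging marginal contributions over player orderings; a brute-force sketch on a toy glove game (illustrative, not the paper's restricted setting):

```python
from itertools import permutations

def shapley_value(players, v):
    """Shapley value of characteristic function v (maps frozenset -> value),
    averaged over all orderings of the players (exponential; toy sizes only)."""
    phi = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            # Marginal contribution of p to the coalition formed so far.
            phi[p] += v(coalition | {p}) - v(coalition)
            coalition = coalition | {p}
    return {p: phi[p] / len(orders) for p in players}

# Glove game: player 1 owns a left glove, players 2 and 3 each own a right
# glove; a matched pair is worth 1.
def v(S):
    return 1.0 if 1 in S and (2 in S or 3 in S) else 0.0

print(shapley_value([1, 2, 3], v))  # player 1 gets 2/3, players 2 and 3 get 1/6
```

Forbidding some coalitions breaks exactly this permutation argument, which is why the paper needs the weaker recursivity axiom.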
oai:RePEc:eee:ejores:v:244:y:2015:i:1:p:66-76 2015-03-26 RePEc:eee:ejores
RePEc:eee:ejores:v:244:y:2015:i:1:p:66-76
article
Decomposition based hybrid metaheuristics
Difficult combinatorial optimization problems coming from practice are nowadays often approached by hybrid metaheuristics that combine principles of classical metaheuristic techniques with advanced methods from fields like mathematical programming, dynamic programming, and constraint programming. If designed appropriately, such hybrids frequently outperform simpler “pure” approaches as they are able to exploit the underlying methods’ individual advantages and benefit from synergy. This article starts with a general review of design patterns for hybrid approaches that have been successful on many occasions. More complex practical problems frequently have some special structure that might be exploited. In the field of mixed integer linear programming, three decomposition techniques are particularly well known for taking advantage of special structures: Lagrangian decomposition, Dantzig–Wolfe decomposition (column generation), and Benders’ decomposition. It has been recognized that these concepts may also provide a very fruitful basis for effective hybrid metaheuristics. We review the basic principles of these decomposition techniques and discuss for each promising possibilities for combinations with metaheuristics. The approaches are illustrated with successful examples from literature.
Combinatorial optimization; Metaheuristics; Mixed integer programming; Hybrid optimization approaches; Decomposition techniques;
http://www.sciencedirect.com/science/article/pii/S0377221714009874
Raidl, Günther R.
oai:RePEc:eee:ejores:v:244:y:2015:i:1:p:248-260 2015-03-26 RePEc:eee:ejores
RePEc:eee:ejores:v:244:y:2015:i:1:p:248-260
article
CRM in social media: Predicting increases in Facebook usage frequency
The purpose of this study is to (1) assess the feasibility of predicting increases in Facebook usage frequency, (2) evaluate which algorithms perform best, and (3) determine which predictors are most important. We benchmark the performance of Logistic Regression, Random Forest, Stochastic Adaptive Boosting, Kernel Factory, Neural Networks and Support Vector Machines using five times twofold cross-validation. The results indicate that it is feasible to create models with high predictive performance. The top performing algorithm was Stochastic Adaptive Boosting, with a cross-validated AUC of 0.66 and accuracy of 0.74. The most important predictors include deviation from regular usage patterns, frequencies of likes of specific categories and group memberships, average photo album privacy settings, and recency of comments. Facebook and other social networks could use predictions of increases in usage frequency to customize their services, such as pacing the rate of advertisements and friend recommendations, or adapting News Feed content altogether. The main contribution of this study is that it is the first to assess the prediction of increases in usage frequency in a social network.
Decision support systems; Social media; Data mining; Predictive analytics; Facebook;
http://www.sciencedirect.com/science/article/pii/S0377221715000028
Ballings, Michel
Van den Poel, Dirk
oai:RePEc:eee:ejores:v:243:y:2015:i:2:p:347-361 2015-03-26 RePEc:eee:ejores
RePEc:eee:ejores:v:243:y:2015:i:2:p:347-361
article
GP-DEMO: Differential Evolution for Multiobjective Optimization based on Gaussian Process models
This paper proposes a novel surrogate-model-based multiobjective evolutionary algorithm called Differential Evolution for Multiobjective Optimization based on Gaussian Process models (GP-DEMO). The algorithm is based on the newly defined relations for comparing solutions under uncertainty. These relations minimize the possibility of wrongly performed comparisons of solutions due to inaccurate surrogate model approximations. The GP-DEMO algorithm was tested on several benchmark problems and two computationally expensive real-world problems. To be able to assess the results we compared them with another surrogate-model-based algorithm called Generational Evolution Control (GEC) and with the Differential Evolution for Multiobjective Optimization (DEMO). The quality of the results obtained with GP-DEMO was similar to the results obtained with DEMO, but with significantly fewer exactly evaluated solutions during the optimization process. The quality of the results obtained with GEC was lower compared to the quality gained with GP-DEMO and DEMO, mainly due to wrongly performed comparisons of the inaccurately approximated solutions.
Multiple objective programming; Evolutionary algorithms; Surrogate models; Gaussian Process modeling; Probable Pareto dominance;
http://www.sciencedirect.com/science/article/pii/S0377221714003208
Mlakar, Miha
Petelin, Dejan
Tušar, Tea
Filipič, Bogdan
oai:RePEc:eee:ejores:v:243:y:2015:i:2:p:588-598 2015-03-26 RePEc:eee:ejores
RePEc:eee:ejores:v:243:y:2015:i:2:p:588-598
article
A distribution-free TSP tour length estimation model for random graphs
Traveling Salesman Problem (TSP) tour length estimations can be used when it is not necessary to know an exact tour, e.g., when using certain heuristics to solve location-routing problems. The best estimation models in the TSP literature focus on random instances where the node dispersion is known; those that do not require knowledge of node dispersion are either less accurate or slower. In this paper, we develop a new regression-based tour length estimation model that is distribution-free, accurate, and fast, with a small standard deviation of the estimation errors. When the distribution of the node coordinates is known, it provides a close estimate of the well-known asymptotic tour length estimation formula of Beardwood et al. (1959); more importantly, when the distribution is unknown or non-integrable so Beardwood et al.’s estimation cannot be used, our model still provides good, fast tour length estimates.
TSP; Tour length estimation;
http://www.sciencedirect.com/science/article/pii/S0377221714010212
Çavdar, Bahar
Sokol, Joel
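The Beardwood et al. (1959) result referenced here predicts an optimal tour length of roughly β√(nA) for n uniform points in a region of area A, with β ≈ 0.7124 a commonly used empirical estimate of the constant. A rough check against a nearest-neighbour tour (a sketch; both the constant and the heuristic are illustrative, and NN tours typically run some 25 percent above optimal):

```python
import math
import random

def nn_tour_length(points):
    """Length of a closed nearest-neighbour TSP tour through `points`."""
    unvisited = points[1:]
    cur, length = points[0], 0.0
    while unvisited:
        nxt = min(unvisited, key=lambda p: math.dist(cur, p))
        length += math.dist(cur, nxt)
        unvisited.remove(nxt)
        cur = nxt
    return length + math.dist(cur, points[0])  # return to the start

random.seed(0)
n, A = 500, 1.0  # n uniform points in the unit square (area A = 1)
pts = [(random.random(), random.random()) for _ in range(n)]
tour = nn_tour_length(pts)
bhh = 0.7124 * math.sqrt(n * A)  # asymptotic optimal-tour estimate
print(round(tour, 2), round(bhh, 2))
```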
oai:RePEc:eee:ejores:v:213:y:2011:i:3:p:509-516 2015-03-26 RePEc:eee:ejores
RePEc:eee:ejores:v:213:y:2011:i:3:p:509-516
article
Subexponential asymptotics of the stationary distributions of M/G/1-type Markov chains
This paper studies the subexponential asymptotics of the stationary distribution of an M/G/1-type Markov chain. We provide a sufficient condition for the subexponentiality of the stationary distribution. The sufficient condition requires only the subexponential integrated tail of level increments. On the other hand, the previous studies assume the subexponentiality of level increments themselves and/or the aperiodicity of the G-matrix. Therefore, our sufficient condition is weaker than the existing ones. We also mention some errors in the literature.
Queueing; Subexponential asymptotics; M/G/1-type Markov chain; Periodicity; G-matrix; BMAP;
http://www.sciencedirect.com/science/article/pii/S037722171100275X
Masuyama, Hiroyuki
oai:RePEc:eee:ejores:v:243:y:2015:i:2:p:618-627 2015-03-26 RePEc:eee:ejores
RePEc:eee:ejores:v:243:y:2015:i:2:p:618-627
article
An analytical framework for supply network risk propagation: A Bayesian network approach
There are numerous examples of supply chain disruptions that have occurred which have had devastating impacts not only on a single firm but also on various other firms in the supply network. We utilize a Bayesian Network (BN) approach and develop a model of risk propagation in a supply network. The model takes into account the inter-dependencies among different risks, as well as the idiosyncrasies of a supply chain network structure. Specific risk measures are derived from this model and a simulation study is utilized to illustrate how these measures can be used in a supply chain setting.
Risk analysis; Risk management; Supply chain management; Networks; Uncertainty modeling;
http://www.sciencedirect.com/science/article/pii/S037722171400856X
Garvey, Myles D.
Carnovale, Steven
Yeniyurt, Sengun
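As a toy illustration of forward and diagnostic propagation in such a network, consider a single supplier-to-manufacturer arc (all probabilities below are made-up illustrations, not values from the paper):

```python
# Hypothetical two-node supply network: supplier disruption S can trigger
# a manufacturer disruption M.
p_s = 0.10              # P(S): supplier disrupted
p_m_given_s = 0.80      # P(M | S)
p_m_given_not_s = 0.05  # P(M | not S)

# Forward propagation: marginal disruption risk at the manufacturer.
p_m = p_s * p_m_given_s + (1 - p_s) * p_m_given_not_s
# Diagnostic reasoning via Bayes' rule: given the manufacturer is
# disrupted, how likely is the supplier the cause?
p_s_given_m = p_s * p_m_given_s / p_m

print(round(p_m, 3), round(p_s_given_m, 3))  # 0.125 0.64
```

Larger networks chain the same two computations along the arcs of the directed graph, which is what makes the BN formulation attractive for risk measures on whole supply networks.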
oai:RePEc:eee:ejores:v:243:y:2015:i:3:p:874-882 2015-03-26 RePEc:eee:ejores
RePEc:eee:ejores:v:243:y:2015:i:3:p:874-882
article
Cost analysis for multi-component system with failure interaction under renewing free-replacement warranty
In a multi-component system, the assumption of failure independence among components is seldom valid, especially for complex systems with complicated failure mechanisms. For such systems, warranty cost depends on factors including the system configuration, the quality of each component, and the extent of failure dependence among components. In this paper, a model is developed for a renewing free-replacement warranty that accounts for failure interaction among components. It is assumed that whenever a component (subsystem) fails, it can induce the failure of one or more of the remaining components (subsystems). Cost models for series and parallel system configurations are presented, followed by numerical examples with sensitivity analysis. The results show that, compared with series systems, warranty cost for parallel systems is more sensitive to failure interaction.
Multi-component system; Failure interaction; Renewing free-replacement warranty;
http://www.sciencedirect.com/science/article/pii/S0377221715000508
Liu, Bin
Wu, Jun
Xie, Min
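For context, the single-component baseline that this multi-component model extends has a simple closed form: under a renewing free-replacement warranty of length w, each in-warranty failure (probability F(w)) triggers a free replacement that restarts the warranty, so the number of replacements is geometric and the expected seller cost is c·F(w)/(1−F(w)). A sketch with exponential lifetimes, verified by simulation (illustrative, not the paper's model):

```python
import math
import random

def expected_frw_cost(c, w, rate):
    """Closed-form expected seller cost under a renewing free-replacement
    warranty of length w, exponential lifetimes with the given rate."""
    fw = 1 - math.exp(-rate * w)  # P(failure within warranty)
    return c * fw / (1 - fw)      # geometric number of free replacements

def simulate_frw_cost(c, w, rate, runs=200_000, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(runs):
        while rng.expovariate(rate) < w:  # failure inside warranty:
            total += c                    # free replacement, warranty renews
    return total / runs

print(expected_frw_cost(c=100, w=1.0, rate=0.5))
print(simulate_frw_cost(c=100, w=1.0, rate=0.5))  # close to the closed form
```

Failure interaction breaks the independence behind the geometric argument, which is why the paper resorts to explicit series/parallel cost models.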
oai:RePEc:eee:ejores:v:244:y:2015:i:2:p:445-456 2015-03-26 RePEc:eee:ejores
RePEc:eee:ejores:v:244:y:2015:i:2:p:445-456
article
The mixed capacitated arc routing problem with non-overlapping routes
Real world applications for vehicle collection or delivery along streets usually lead to arc routing problems with additional, complicating constraints. In this paper we focus on arc routing with an additional constraint that limits the number of nodes shared by vehicle service routes, i.e. vehicle service routes with a limited number of intersections. This constraint leads to solutions that are better shaped for real application purposes. We propose a new problem, the bounded overlapping MCARP (BCARP), which is defined as the mixed capacitated arc routing problem (MCARP) with an additional constraint imposing an upper bound on the number of nodes that are common to different routes. The best feasible upper bound is obtained from a modified MCARP in which the minimization criterion is the overlapping of the routes. We show how to compute this bound by solving a simpler problem. To obtain feasible solutions for the larger instances of the BCARP, heuristics are also proposed. Computational results taken from two well known instance sets show that, with only a small increase in total time traveled, the BCARP model produces solutions that are more attractive to implement in practice than those produced by the MCARP model.
Routing; Integer linear programming; Heuristics; District design; Capacitated arc routing;
http://www.sciencedirect.com/science/article/pii/S0377221715000624
Constantino, Miguel
Gouveia, Luís
Mourão, Maria Cândida
Nunes, Ana Catarina
oai:RePEc:eee:ejores:v:213:y:2011:i:2:p:430-441 2015-03-26 RePEc:eee:ejores
RePEc:eee:ejores:v:213:y:2011:i:2:p:430-441
article
A fuzzy goal programming approach for mid-term assortment planning in supermarkets
We develop a fuzzy mixed integer non-linear goal programming model for the mid-term assortment planning of supermarkets in which three conflicting objectives, namely profitability, customer service, and space utilization, are incorporated. The items and brands in a supermarket compete to obtain more space and better shelf level. This model offers different service levels to loyal and disloyal customers, applies a joint replenishment policy, and accounts for the holding time limitation of perishable items. We propose a fuzzy approach due to the imprecise nature of the goals' target levels and priorities, as well as of critical data. A heuristic method inspired by problem-specific rules is developed to solve this complex model approximately within a reasonable time. Finally, the proposed approach is validated through several numerical examples and the results are reported.
Assortment planning; Retailing; Fuzzy goal programming;
http://www.sciencedirect.com/science/article/pii/S0377221711003067
Lotfi, M.M.
Torabi, S.A.
oai:RePEc:eee:ejores:v:243:y:2015:i:3:p:752-762 2015-03-26 RePEc:eee:ejores
RePEc:eee:ejores:v:243:y:2015:i:3:p:752-762
article
The multiple vehicle pickup and delivery problem with LIFO constraints
This paper addresses a pickup and delivery problem with multiple vehicles in which LIFO conditions are imposed when performing loading and unloading operations and the route durations cannot exceed a given limit. We propose two mixed integer formulations of this problem and a heuristic procedure that uses tabu search in a multi-start framework. The first formulation is a compact one, that is, the number of variables and constraints is polynomial in the number of requests, while the second one contains an exponential number of constraints and is used as the basis of a branch-and-cut algorithm. The performances of the proposed solution methods are evaluated through an extensive computational study using instances of different types that were created by adapting existing benchmark instances. The proposed exact methods are able to optimally solve instances with up to 60 nodes.
Vehicle routing; Pickup and delivery; LIFO constraints; Mixed-integer programming; Branch and cut;
http://www.sciencedirect.com/science/article/pii/S0377221714010479
Benavent, Enrique
Landete, Mercedes
Mota, Enrique
Tirado, Gregorio
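The LIFO condition on a single route can be checked with a stack in linear time, since a delivery is feasible only for the most recently loaded request. A sketch (the action/request encoding is illustrative, not the paper's formulation):

```python
def is_lifo_feasible(route):
    """Check that a single-vehicle route respects LIFO loading.
    `route` is a sequence of ('P', r) pickups and ('D', r) deliveries;
    a delivery is only allowed for the most recently loaded request."""
    stack = []
    for action, request in route:
        if action == 'P':
            stack.append(request)
        elif not stack or stack.pop() != request:
            return False        # delivered item was not on top of the stack
    return not stack            # everything picked up must be delivered

print(is_lifo_feasible([('P', 1), ('P', 2), ('D', 2), ('D', 1)]))  # True
print(is_lifo_feasible([('P', 1), ('P', 2), ('D', 1), ('D', 2)]))  # False
```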
oai:RePEc:eee:ejores:v:214:y:2011:i:1:p:39-52 2015-03-26 RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:1:p:39-52
article
Modeling and analysis of the multiperiod effects of social relationship on supply chain networks
In this paper, we analyze the effects of levels of social relationship on a multiperiod supply chain network with multiple decision-makers (suppliers, manufacturers, and retailers) associated at different tiers. The model incorporates the individual attitudes towards disruption and opportunism risks and allows us to investigate the interplay of the heterogeneous decision-makers and to compute the resultant network equilibrium pattern of production, transactions, prices, and levels of social relationship over the multiperiod planning horizon. In our analysis, we focus on the following questions: (1) how do the evolving relationships affect the profitability and risks of supply chain firms, as well as the prices and demands of the product in the market? (2) how do the relationships with upstream supply chain firms affect the relationships with downstream firms, and how do these relationships influence the profitability and risks of the supply chain firms? (3) how do supply disruption risks interact with opportunism risks through supply chain relationships, and how do these risks influence the profitability of the firms? The results show that high levels of relationship can lead to lower supply chain overall cost, lower risk, lower prices, higher product transactions and therefore higher profit.
Supply chain management; Social relationship; Risk management; Network equilibrium; Pricing; Multicriteria decision-making;
http://www.sciencedirect.com/science/article/pii/S0377221711002815
Cruz, Jose M.
Liu, Zugang
oai:RePEc:eee:ejores:v:243:y:2015:i:2:p:454-464 2015-03-26 RePEc:eee:ejores
RePEc:eee:ejores:v:243:y:2015:i:2:p:454-464
article
Refined ranking relations for selection of solutions in multi objective metaheuristics
Two methods for ranking solutions of multi-objective optimization problems are proposed in this paper. The methods can be used, e.g., by metaheuristics to select good solutions from a set of non-dominated solutions. They are suitable for population-based metaheuristics to limit the size of the population. It is shown theoretically that the ranking methods possess some interesting properties for such applications. In particular, it is shown that both methods form a total preorder and are refinements of the Pareto dominance relation. An experimental investigation for a multi-objective flow shop problem shows that the use of the new ranking methods in a Population-based Ant Colony Optimization algorithm and in a genetic algorithm leads to good results when compared to other methods.
Multi objective optimization; Ranking relations; Ant colony optimization; Genetic algorithms; Flow shop scheduling problem;
http://www.sciencedirect.com/science/article/pii/S0377221714008662
Moritz, Ruby L.V.
Reich, Enrico
Schwarz, Maik
Bernt, Matthias
Middendorf, Martin
oai:RePEc:eee:ejores:v:244:y:2015:i:1:p:47-54 2015-03-26 RePEc:eee:ejores
RePEc:eee:ejores:v:244:y:2015:i:1:p:47-54
article
TSP Race: Minimizing completion time in time-sensitive applications
In this paper, we present an approach that parallelizes computation and implementation for problems where the objective is to complete the solution as soon as possible after receiving the problem instance. We demonstrate the approach on the TSP. We define the TSP race problem, present a computation-implementation parallelized (CIP) approach for solving it, and demonstrate CIP’s effectiveness on TSP Race instances. We also demonstrate a method for determining a priori when CIP will be effective. Although in this paper we focus on the TSP, our general CIP approach can be effective on other problems and applications with similar time sensitivity.
Computation-implementation parallelization; Heuristic; TSP; TSP race;
http://www.sciencedirect.com/science/article/pii/S0377221714010236
Çavdar, Bahar
Sokol, Joel
oai:RePEc:eee:ejores:v:213:y:2011:i:3:p:478-488 2015-03-26 RePEc:eee:ejores
RePEc:eee:ejores:v:213:y:2011:i:3:p:478-488
article
The impact of sharing customer returns information in a supply chain with and without a buyback policy
In this paper, we examine a single-period problem in a supply chain in which a Stackelberg manufacturer supplies a product to a retailer who faces customer returns and demand uncertainty. We show that the manufacturer incurs a significant profit loss, with and without a buyback policy, if it fails to account for customer returns in the wholesale price decision. Under the assumption that the retailer is better informed than the manufacturer about customer returns, we show that without a buyback policy, the retailer prefers not to share customer returns information if the manufacturer overestimates it, but prefers to share if the manufacturer underestimates it. If the manufacturer offers a buyback policy, the results are reversed. We also discuss incentives to share customer returns information and some of the issues raised in sharing it.
Customer returns; Information sharing; Customer returns policies; Buyback policies;
http://www.sciencedirect.com/science/article/pii/S0377221711002384
Chen, Jing
oai:RePEc:eee:ejores:v:243:y:2015:i:3:p:683-696 2015-03-26 RePEc:eee:ejores
RePEc:eee:ejores:v:243:y:2015:i:3:p:683-696
article
Operational Research in education
Operational Research (OR) techniques have been applied, from the early stages of the discipline, to a wide variety of issues in education. At the government level, these include questions of what resources should be allocated to education as a whole and how these should be divided amongst the individual sectors of education and the institutions within the sectors. Another pertinent issue concerns the efficient operation of institutions, how to measure it, and whether resource allocation can be used to incentivise efficiency savings. Local governments, as well as being concerned with issues of resource allocation, may also need to make decisions regarding, for example, the creation and location of new institutions or the closure of existing ones, as well as the day-to-day logistics of getting pupils to schools. Issues of concern for managers within schools and colleges include allocating budgets, scheduling lessons and assigning students to courses. This survey provides an overview of the diverse problems faced by government, managers and consumers of education, and the OR techniques which have typically been applied in an effort to improve operations and provide solutions.
Markov processes; Optimal control theory; Data envelopment analysis; Stochastic frontier analysis; Scheduling and timetabling;
http://www.sciencedirect.com/science/article/pii/S0377221714008650
Johnes, Jill
oai:RePEc:eee:ejores:v:214:y:2011:i:2:p:256-261 2015-03-26 RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:2:p:256-261
article
A study of repairable parts inventory system operating under performance-based contract
Performance-Based Logistics (PBL) is becoming a dominant logistics support strategy, especially in the defense industry. PBL contracts are designed to serve the customer's key performance measures, while traditional contracts for after-sales services, such as Fixed-price (FP) and Cost-plus (C+), provide only insurance or incentives. In this research, we develop an inventory model for a repairable parts system operating under a PBL contract. We model the closed-loop inventory system as an M/M/m queue in which component failures are Poisson distributed and repair times at the service facility are exponential. Our model provides the supplier and the customer increased flexibility in achieving target availability. Analysis of key parameters suggests that, to improve the availability of a system with repairable spare parts, the supplier should improve component reliability and the efficiency of the repair facility rather than increase the base stock level, which has minimal impact on system availability.
Logistics; Base stock; Component reliability; Repair facility;
http://www.sciencedirect.com/science/article/pii/S0377221711003791
Mirzahosseinian, H.
Piplani, R.
oai:RePEc:eee:ejores:v:214:y:2011:i:1:p:78-84 2015-03-26 RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:1:p:78-84
article
A multi-product risk-averse newsvendor with exponential utility function
We consider a multi-product newsvendor using an exponential utility function. We first establish a few basic properties for the newsvendor regarding the convexity of the model and the monotonicity of the impact of risk aversion on the solution. When the product demands are independent and the ratio of the degree of risk aversion to the number of products is sufficiently small, we obtain closed-form approximations of the optimal order quantities. The approximations are as easy to compute as the risk-neutral solution. We prove that when this ratio approaches zero, the risk-averse solution converges to the corresponding risk-neutral solution. When the product demands are positively (negatively) correlated, we show that risk aversion leads to lower (higher) optimal order quantities than the solution with independent demands. Using a numerical study, we examine convergence rates of the approximations and thoroughly study the interplay of demand correlation and risk aversion. The numerical study confirms our analytical results and further shows that increased risk aversion does not always lead to lower order quantities when demands are strongly negatively correlated.
Supply chain management; Risk analysis; Expected utility theory;
http://www.sciencedirect.com/science/article/pii/S0377221711003183
Choi, Sungyong
Ruszczynski, Andrzej
oai:RePEc:eee:ejores:v:213:y:2011:i:3:p:489-497 2015-03-26 RePEc:eee:ejores
RePEc:eee:ejores:v:213:y:2011:i:3:p:489-497
article
Two-stage production scheduling with an outsourcing option
This paper considers a two-stage production scheduling problem in which each activity requires two operations to be processed in stages 1 and 2, respectively. There are two options for processing each operation: the first is to produce it by utilizing in-house resources, while the second is to outsource it to a subcontractor. For in-house operations, a schedule is constructed and its performance is measured by the makespan, that is, the latest completion time of operations processed in-house. Operations by subcontractors are instantaneous but incur an outsourcing cost. The objective is to minimize the weighted sum of the makespan and the total outsourcing cost. This paper analyzes how the model's computational complexity changes according to the unit outsourcing costs in both stages and describes the boundary between NP-hard and polynomially solvable cases. Finally, this paper presents an approximation algorithm for one NP-hard case.
Scheduling; Outsourcing; Flow shop scheduling; NP-completeness; Approximation algorithm;
http://www.sciencedirect.com/science/article/pii/S0377221711002748
Lee, Kangbok
Choi, Byung-Cheon
oai:RePEc:eee:ejores:v:244:y:2015:i:2:p:601-610 2015-03-26 RePEc:eee:ejores
RePEc:eee:ejores:v:244:y:2015:i:2:p:601-610
article
A multi-objective interactive system for adaptive traffic control
In this paper, we consider the problem of adaptive traffic control at single junctions with the following three objectives to be minimized: the total waiting time and the number of stops for private vehicles, and a public transport criterion. Modeling this problem as a multi-objective mixed integer linear program, we provide an interactive system based on an adaptive reference point approach. This system adapts, in real time, the priorities given to the different criteria according to the traffic situation. Formal guarantees are provided on the behavior of our system. A comparison with a standard semi-adaptive system on simulated traffic shows that our system provides significantly better solutions.
Traffic control; Multi-objective optimization; Interactive procedures; Mixed integer linear programming; Reference point;
http://www.sciencedirect.com/science/article/pii/S037722171500079X
Dujardin, Yann
Vanderpooten, Daniel
Boillot, Florence
oai:RePEc:eee:ejores:v:244:y:2015:i:1:p:331-338 2015-03-26 RePEc:eee:ejores
RePEc:eee:ejores:v:244:y:2015:i:1:p:331-338
article
Use of queue modelling in the analysis of elective patient treatment governed by a maximum waiting time policy
Many public healthcare systems struggle with excessive waiting lists for elective patient treatment. Different countries address this problem in different ways, and one interesting method entails a maximum waiting time guarantee. Introduced in Denmark in 2002, it entitles patients to treatment at a private hospital in Denmark or at a hospital abroad if the public healthcare system is unable to provide treatment within the stated maximum waiting time guarantee. Although clearly very attractive in some respects, many stakeholders have been very concerned about the negative consequences of the policy for the utilization of public hospital resources. This paper illustrates the use of a queue modelling approach in the analysis of elective patient treatment governed by the maximum waiting time policy. Drawing upon the combined strengths of analytic and simulation approaches, we develop both Continuous-Time Markov Chain and Discrete Event Simulation models to provide an insightful analysis of public hospital performance under the policy rules. The aim of this paper is to support the enhancement of the quality of elective patient care, to be brought about by better understanding of the policy implications by hospital planners and strategic decision makers.
Queueing; Simulation; Waiting lists; Waiting time guarantee;
http://www.sciencedirect.com/science/article/pii/S0377221715000442
Kozlowski, Dawid
Worthington, Dave
oai:RePEc:eee:ejores:v:244:y:2015:i:1:p:141-152 2015-03-26 RePEc:eee:ejores
RePEc:eee:ejores:v:244:y:2015:i:1:p:141-152
article
Dedicated vs product flexible production technology: Strategic capacity investment choice
This paper studies the optimal investment strategies of an incumbent and a potential entrant that can both choose between a product flexible and a dedicated technology, in a two-product market characterized by uncertain demand. The product flexible production technology has certain advantages, especially when the economic environment is uncertain. On the other hand, the dedicated production technology allows a firm to commit to production quantities. This gives strategic advantages, which can outweigh the ‘value of flexibility’. It turns out that both firms prefer the dedicated production technology in some scenarios. However, we find that in a game with sequential technology choices, both firms investing in the dedicated technology is not an equilibrium. Especially when the economic environment is more uncertain, the incumbent overinvests in product flexible capacity to force the entrant to choose the dedicated technology. The incumbent is then the only firm with the product flexible production technology, which results in a high payoff.
Flexible manufacturing systems; Strategic capacity investment; Commitment value; Demand uncertainty;
http://www.sciencedirect.com/science/article/pii/S0377221715000272
Boonman, H.J.
Hagspiel, V.
Kort, P.M.
oai:RePEc:eee:ejores:v:241:y:2015:i:3:p:727-738 2015-01-15 RePEc:eee:ejores
RePEc:eee:ejores:v:241:y:2015:i:3:p:727-738
article
Lead time considerations for the multi-level capacitated lot-sizing problem
The classical multi-level capacitated lot-sizing problem formulation is often not suitable to correctly capture resource requirements and precedence relations. Depending on lead time assumptions, either the model provides infeasible production plans or plans with costly needless inventory. We tackle this issue by explicitly modeling these two aspects and the synchronization of batches of products in the multi-level lot-sizing and scheduling formulation. Two models are presented; one considering batch production and the other one allowing lot-streaming. Comparisons with traditional models demonstrate the capability of the new approach in delivering more realistic results. The generated production plans are always feasible and cost savings of 30–40 percent compared to classical models are observed.
Production; Lot-sizing; Scheduling; Mixed integer programming; Synchronization;
http://www.sciencedirect.com/science/article/pii/S0377221714007693
Almeder, Christian
Klabjan, Diego
Traxler, Renate
Almada-Lobo, Bernardo
oai:RePEc:eee:ejores:v:242:y:2015:i:1:p:149-160 2015-01-15 RePEc:eee:ejores
RePEc:eee:ejores:v:242:y:2015:i:1:p:149-160
article
City streets parking enforcement inspection decisions: The Chinese postman’s perspective
We view the administrative activity of issuing parking tickets in a dense city street setting, like downtown Philadelphia or NYC, as a revenue collection activity. The task of designing parking permit inspection routes is modeled as a revenue-collecting Chinese Postman Problem. After demonstrating that our design of inspection routes maximizes the expected revenue, we investigate decision rules that allow officers to adjust their inspection routes online in response to the observed parking permit times. A simple simulation study tests the sensitivity of expected revenues with respect to the problem’s parameters and underscores the main conclusion that allowing an officer to selectively wait by parked cars for the expiration of the cars’ permits increases the expected revenue by between 10 and 69 percent.
Chinese postman; Routing; Simulation;
http://www.sciencedirect.com/science/article/pii/S0377221714008613
Summerfield, Nichalin S.
Dror, Moshe
Cohen, Morris A.
oai:RePEc:eee:ejores:v:242:y:2015:i:1:p:316-331 2015-01-15 RePEc:eee:ejores
RePEc:eee:ejores:v:242:y:2015:i:1:p:316-331
article
Helping business schools engage with real problems: The contribution of critical realism and systems thinking
The world faces major problems, not least climate change and the financial crisis, and business schools have been criticised for their failure to help address these issues and, in the case of the financial meltdown, for being causally implicated in it. In this paper we begin by describing the extent of what has been called the rigour/relevance debate. We then diagnose the nature of the problem in terms of the historical, structural and contextual mechanisms that initiated and now sustain business schools' inability to engage with real-world issues. We then propose a combination of mutually reinforcing measures that are necessary to break this vicious circle: critical realism as an underpinning philosophy that supports and embodies the following points; holism and transdisciplinarity; multimethodology (mixed-methods research); and a critical and ethically committed stance. OR and management science have much to contribute in terms of both powerful analytical methods and problem structuring methods.
Education; Ethics; Societal problems; Soft OR; Critical management;
http://www.sciencedirect.com/science/article/pii/S0377221714008972
Mingers, John
oai:RePEc:eee:ejores:v:241:y:2015:i:3:p:815-829 2015-01-15 RePEc:eee:ejores
RePEc:eee:ejores:v:241:y:2015:i:3:p:815-829
article
A group decision-making approach to uncertain quality function deployment based on fuzzy preference relation and fuzzy majority
Quality function deployment (QFD) is one of the most effective customer-driven quality system tools, typically applied to fulfill customer needs or requirements (CRs). A crucial step in QFD is deriving the prioritization of design requirements (DRs) from the CRs for a product. However, effective prioritization of DRs is seriously challenged by two types of uncertainty: human subjective perception and customer heterogeneity. This paper proposes a novel two-stage group decision-making approach to simultaneously address the two types of uncertainty underlying QFD. The first stage determines the fuzzy preference relations of different DRs with respect to each customer, based on the order-based semantics of linguistic information. The second stage determines the prioritization of DRs by synthesizing all customers’ fuzzy preference relations into an overall one by fuzzy majority. Two examples, a Chinese restaurant and a flexible manufacturing system, are used to illustrate the proposed approach. The restaurant example is also used for comparison with three existing approaches. Implementation results show that the proposed approach can eliminate the burden of quantifying qualitative concepts and can model customer heterogeneity and the design team’s preference. Owing to its simplicity, our approach can reduce the cognitive burden of the QFD planning team and offers practical convenience in QFD planning. Extensions to the proposed approach are also given to address application contexts involving a wider set of HOQ elements.
Quality management; Uncertain QFD; Group decision-making approach; Fuzzy preference relation; Fuzzy majority;
http://www.sciencedirect.com/science/article/pii/S0377221714007383
Yan, Hong-Bin
Ma, Tieju
oai:RePEc:eee:ejores:v:241:y:2015:i:3:p:917-926 2015-01-15 RePEc:eee:ejores
RePEc:eee:ejores:v:241:y:2015:i:3:p:917-926
article
A cost-efficient method to optimize package size in emerging markets
Packaging links the entire supply chain and coordinates all participants in the process to give a flexible and effective response to customer needs in order to maximize satisfaction at optimal cost. This research proposes an optimization model to define the minimum total cost combination of outer packs in various distribution channels with the least opening ratio (the percentage of total orders requiring the opening of an outer pack to exactly meet the demand). A simple routine to define a feasible start point is proposed to reduce the complexity caused by the number of possible combinations. A Fast-Moving Consumer Goods company in an emerging economy (Colombia) is analyzed to test the proposed methodology. The main findings are useful for emerging markets in that they provide significant savings in the whole supply chain and insights into the packaging problem.
Innovative application in OR; Integer programming; Packing; Supply chain management; Emerging markets;
http://www.sciencedirect.com/science/article/pii/S0377221714007590
Gámez Albán, Harol Mauricio
Soto Cardona, Osman Camilo
Mejía Argueta, Christopher
Sarmiento, Alfonso Tullio
oai:RePEc:eee:ejores:v:241:y:2015:i:3:p:888-906 2015-01-15 RePEc:eee:ejores
RePEc:eee:ejores:v:241:y:2015:i:3:p:888-906
article
A column generation approach for a multi-attribute vehicle routing problem
In this paper, we consider a multi-attribute vehicle routing problem derived from a real-life milk collection system. This problem is characterized by the presence of a heterogeneous fleet of vehicles, multiple depots, and several resource constraints. A branch-and-price methodology is proposed to tackle the problem. In this methodology, different branching strategies, adapted to the special structure of the problem, are implemented and compared. The computational results show that the branch-and-price algorithm performs well in terms of solution quality and computational efficiency.
Multi-attribute vehicle routing problem; Heterogeneous fleet; Multiple depots; Branch-and-price; Dairy transportation problem;
http://www.sciencedirect.com/science/article/pii/S037722171400736X
Dayarian, Iman
Crainic, Teodor Gabriel
Gendreau, Michel
Rei, Walter
oai:RePEc:eee:ejores:v:242:y:2015:i:1:p:212-221 2015-01-15 RePEc:eee:ejores
RePEc:eee:ejores:v:242:y:2015:i:1:p:212-221
article
Critical infrastructure protection using secrecy – A discrete simultaneous game
In this research, critical infrastructure protection against intentional attacks is modeled as a discrete simultaneous game between the protector and the attacker, capturing the situation in which both players keep the information about their resource allocation secret. We prove that keeping information about protection strategies secret can achieve better critical infrastructure protection than truthfully disclosing it. Solving a game-theoretic problem, even in the two-player case, is known to be intractable. To deal with this complexity, after proving that no pure-strategy Nash equilibrium exists for the proposed simultaneous game, a new approach is proposed to identify its mixed-strategy Nash equilibrium solution.
Critical infrastructure protection; Simultaneous game; Intentional attack; Information; Secrecy;
http://www.sciencedirect.com/science/article/pii/S0377221714008054
Zhang, Chi
Ramirez-Marquez, José Emmanuel
Wang, Jianhui
oai:RePEc:eee:ejores:v:242:y:2015:i:1:p:232-242 2015-01-15 RePEc:eee:ejores
RePEc:eee:ejores:v:242:y:2015:i:1:p:232-242
article
The impact of voluntary disclosure on a firm’s investment policy
In this paper we provide a model which describes how voluntary disclosure impacts on the timing of a firm’s investment decisions. A manager chooses a time to invest in a project and a time to disclose the investment return in order to maximise his monetary payoff. We assume that this payoff is linked to the level of the firm’s stock price. Prior to investing, the profitability of the project and the market reaction to the disclosure of the investment return are uncertain, but the manager receives signals at random points in time which assist in resolving some of this uncertainty. We find that a manager whose objective can only be achieved through voluntarily disclosing the return is motivated to invest at a time that would be sub-optimal for an identical manager with a profit maximising objective.
Economics; Real options; Voluntary disclosure; Sub-optimal investment;
http://www.sciencedirect.com/science/article/pii/S0377221714007863
Delaney, Laura
Thijssen, Jacco J.J.
oai:RePEc:eee:ejores:v:242:y:2015:i:1:p:172-181 2015-01-15 RePEc:eee:ejores
RePEc:eee:ejores:v:242:y:2015:i:1:p:172-181
article
Induction of ordinal classification rules from decision tables with unknown monotonicity
We consider the induction of ordinal classification rules, which assign objects to preference-ordered decision classes, within the dominance-based rough set approach. In order to extract such rules, it is necessary to define dominance inconsistencies with respect to a set of condition attributes containing at least one ordinal condition attribute. Furthermore, it is also assumed that we know whether there exist increasing or decreasing monotonicity relationships between the values of ordinal condition and decision attributes. Very often, however, this information is unknown a priori. One solution to this issue is to transform the ordinal condition attributes with unknown directions of preference into pairs of attributes with supposed inverse monotonic relationships. Both local and global monotonicity relationships can be represented by decision rules induced from transformed decision tables. However, in some cases, transforming a decision table in this way is overly complex. In this paper, we propose inconsistency rates based on dominance and fuzzy preference relations that can discover monotonic relationships directly from data rather than from induced decision rules. Moreover, we propose a refined transformation method that introduces an additional monotonicity check using these inconsistency rates to determine whether an ordinal condition attribute should be cloned. Experiments are provided to evaluate the usefulness of the refined transformation method.
Rough sets; Inconsistency rates; Multiple criteria decision aiding; Dominance relations;
http://www.sciencedirect.com/science/article/pii/S0377221714007735
Wang, Hailiang
Zhou, Mingtian
She, Kun
oai:RePEc:eee:ejores:v:241:y:2015:i:3:p:927-930 2015-01-15 RePEc:eee:ejores
RePEc:eee:ejores:v:241:y:2015:i:3:p:927-930
article
Two faster algorithms for coordination of production and batch delivery: A note
This note suggests faster algorithms for two integrated production/distribution problems studied earlier, improving their complexities from O(n^(2V+4)) and O(n^2 (L + V)^2) to O(n) and O(n + V·min{V, n}), respectively, where n is the number of products to be delivered, V is the number of vehicles and L is the number of vehicle departure times.
Supply chain scheduling; Batch delivery; Dynamic programming;
http://www.sciencedirect.com/science/article/pii/S0377221714008091
Agnetis, Alessandro
Aloulou, Mohamed Ali
Fu, Liang-Liang
Kovalyov, Mikhail Y.
oai:RePEc:eee:ejores:v:242:y:2015:i:1:p:121-133 2015-01-15 RePEc:eee:ejores
RePEc:eee:ejores:v:242:y:2015:i:1:p:121-133
article
Average-cost efficiency and optimal scale sizes in non-parametric analysis
Under fairly general assumptions requiring neither a differentiable frontier nor a constant-returns-to-scale technology, this paper introduces a new definition of an optimal scale size based on the minimization of unit costs. The corresponding measure, average-cost efficiency, combines scale and allocative efficiency, and generalizes the measurement of scale economies in efficiency analysis while providing a performance criterion which is stricter than both cost efficiency and scale efficiency measurement. The average-cost efficiency is not reliant upon the uniformity of the firms’ input-price vector, and we supply procedures to compute it in both convex and non-convex production technologies. Empirical illustration of the theoretical results is given with reference to large sets of production units.
Scale efficiency; Returns to scale; Scale economies; FDH; DEA;
http://www.sciencedirect.com/science/article/pii/S0377221714008017
Cesaroni, Giovanni
Giovannola, Daniele
oai:RePEc:eee:ejores:v:242:y:2015:i:1:p:222-231 2015-01-15 RePEc:eee:ejores
RePEc:eee:ejores:v:242:y:2015:i:1:p:222-231
article
Multi-objective microzone-based vehicle routing for courier companies: From tactical to operational planning
Distribution companies that serve a very large number of customers, courier companies for example, often partition the geographical region served by a depot into zones. Each zone is assigned to a single vehicle and each vehicle serves a single zone. An alternative approach is to partition the distribution region into smaller microzones that are assigned to a preferred vehicle in a so-called tactical plan. The moment the workload in each microzone is known, the microzones can be reassigned to vehicles in such a way that the total distance traveled is minimized, the workload of the different vehicles is balanced, and as many microzones as possible are assigned to their preferred vehicle. In this paper we model the resulting microzone-based vehicle routing problem as a multi-objective optimization problem and develop a simple yet effective algorithm to solve it. We analyze this algorithm and discuss the results it obtains.
Metaheuristics; Variable neighborhood tabu search; Workload balancing; Multi-objective optimization; Vehicle routing; Courier companies;
http://www.sciencedirect.com/science/article/pii/S0377221714007656
Janssens, Jochen
Van den Bergh, Joos
Sörensen, Kenneth
Cattrysse, Dirk
oai:RePEc:eee:ejores:v:242:y:2015:i:1:p:286-303 2015-01-15 RePEc:eee:ejores
RePEc:eee:ejores:v:242:y:2015:i:1:p:286-303
article
Bankruptcy prediction using terminal failure processes
Traditional bankruptcy prediction models, designed using classification or regression techniques, achieve short-term performances (1 year) that are fairly good, but that often worsen when the prediction horizon exceeds 1 year. We show how to improve the performance of such models beyond 1 year using models that take into account the evolution of firm’s financial health over a short period of time. For this purpose, we design models that fit the underlying failure process of different groups of firms. Our results demonstrate that such models lead to better prediction accuracy at a 3-year horizon than that achieved with common models.
Forecasting; Finance; Bankruptcy prediction; Failure processes;
http://www.sciencedirect.com/science/article/pii/S037722171400798X
du Jardin, Philippe
oai:RePEc:eee:ejores:v:241:y:2015:i:3:p:749-762 2015-01-15 RePEc:eee:ejores
RePEc:eee:ejores:v:241:y:2015:i:3:p:749-762
article
Asymmetries in stock markets
This paper analyzes three major asymmetries in stock markets, namely, asymmetry in return reversals, asymmetry in return persistency and asymmetry in return volatilities. It argues for a case of return persistency, as stock returns do not always reverse, in theory and in practice. Patterns in return-volatility asymmetries are conjectured and investigated jointly under different stock market conditions. Results from modeling the world's major stock return indexes support the propositions of the paper. Return reversal asymmetry is illusory, arising from ambiguous parameter estimation and misleading interpretation of parameter signs. Asymmetry in return persistency, though still weak, is more prevalent.
Asymmetry; Volatility; Return reversals; Return persistency;
http://www.sciencedirect.com/science/article/pii/S0377221714007681
Wang, Peijie
Zhang, Bing
Zhou, Yun
oai:RePEc:eee:ejores:v:241:y:2015:i:3:p:872-879 2015-01-15 RePEc:eee:ejores
RePEc:eee:ejores:v:241:y:2015:i:3:p:872-879
article
A branch-and-price-and-cut approach for sustainable crop rotation planning
In this paper, we study a multi-periodic production planning problem in agriculture. This problem belongs to the class of crop rotation planning problems, which have received considerable attention in the literature in recent years. Crop cultivation and fallow periods must be scheduled on land plots over a given time horizon so as to minimize the total surface area of land used, while satisfying crop demands every period. This problem is proven strongly NP-hard. We propose a 0-1 linear programming formulation based on crop-sequence graphs. An extended formulation is then provided with a polynomial-time pricing problem, and a Branch-and-Price-and-Cut (BPC) algorithm is presented with adapted branching rules and cutting planes. The numerical experiments on instances varying the number of crops, periods and plots show the effectiveness of the BPC for the extended formulation compared to solving the compact formulation, even though these two formulations have the same linear relaxation bound.
OR in agriculture; Production planning; Integer programming; Decomposition; Column generation;
http://www.sciencedirect.com/science/article/pii/S0377221714008558
Alfandari, Laurent
Plateau, Agnès
Schepler, Xavier
oai:RePEc:eee:ejores:v:242:y:2015:i:1:p:243-2602015-01-15RePEc:eee:ejores
RePEc:eee:ejores:v:242:y:2015:i:1:p:243-260
article
Tackling uncertainty in multi-criteria decision analysis – An application to water supply infrastructure planning
We present a novel approach for practically tackling uncertainty in preference elicitation and predictive modeling to support complex multi-criteria decisions based on multi-attribute utility theory (MAUT). A simplified two-step elicitation procedure consisting of an online survey and face-to-face interviews is followed by an extensive uncertainty analysis. This covers uncertainty of the preference components (marginal value and utility functions, hierarchical aggregation functions, aggregation parameters) and the attribute predictions. Context uncertainties about future socio-economic developments are captured by combining MAUT with scenario planning. We perform a global sensitivity analysis (GSA) to assess the contribution of single uncertain preference parameters to the uncertainty of the ranking of alternatives. This is exemplified for sustainable water infrastructure planning in a case study in Switzerland. We compare 11 water supply alternatives ranging from conventional water supply systems to novel technologies and management schemes regarding 44 objectives. Their performance is assessed for four future scenarios and 10 stakeholders from different backgrounds and decision-making levels. Despite uncertainty in the ranking of alternatives, potential best and worst solutions could be identified. We demonstrate that a priori assumptions such as linear value functions or additive aggregation can result in misleading recommendations, unless thoroughly checked during preference elicitation and modeling. We suggest GSA to focus elicitation on most sensitive preference parameters. Our GSA results indicate that output uncertainty can be considerably reduced by additional elicitation of few parameters, e.g. the overall risk attitude and aggregation functions at higher-level nodes. Here, rough value function elicitation was sufficient, thereby substantially reducing elicitation time.
Decision analysis; Uncertainty modeling and global sensitivity analysis; Multi-attribute utility theory; Preference elicitation; Water infrastructure planning;
http://www.sciencedirect.com/science/article/pii/S0377221714007838
Scholten, Lisa
Schuwirth, Nele
Reichert, Peter
Lienert, Judit
oai:RePEc:eee:ejores:v:241:y:2015:i:3:p:662-6732015-01-15RePEc:eee:ejores
RePEc:eee:ejores:v:241:y:2015:i:3:p:662-673
article
Constraint programming for LNG ship scheduling and inventory management
We propose a constraint programming approach for the optimization of inventory routing in the liquefied natural gas industry. We present two constraint programming models that rely on a disjunctive scheduling representation of the problem. We also propose an iterative search heuristic to generate good feasible solutions for these models. Computational results on a set of large-scale test instances demonstrate that our approach can find better solutions than existing approaches based on mixed integer programming, while being 4–10 times faster on average.
Routing; Inventory; Constraint programming;
http://www.sciencedirect.com/science/article/pii/S0377221714007875
Goel, V.
Slusky, M.
van Hoeve, W.-J.
Furman, K.C.
Shao, Y.
oai:RePEc:eee:ejores:v:241:y:2015:i:3:p:642-6522015-01-15RePEc:eee:ejores
RePEc:eee:ejores:v:241:y:2015:i:3:p:642-652
article
A family of composite discrete bivariate distributions with uniform marginals for simulating realistic and challenging optimization-problem instances
We consider a family of composite bivariate distributions, or probability mass functions (pmfs), with uniform marginals for simulating optimization-problem instances. For every possible population correlation, except the extreme values, there are an infinite number of valid joint distributions in this family. We quantify the entropy for all member distributions, including the special cases under independence and both extreme correlations. Greater variety is expected across optimization-problem instances simulated based on a high-entropy pmf. We present a closed-form solution to the problem of finding the joint pmf that maximizes entropy for a specified population correlation, and we show that this entropy-maximizing pmf belongs to our family of pmfs. We introduce the entropy range as a secondary indicator of the variety of instances that may be generated for a particular correlation. Finally, we discuss how to systematically control entropy and correlation to simulate a set of synthetic problem instances that includes challenging examples and examples with realistic characteristics.
Entropy; Heuristics; Simulation; Correlation; Knapsack Problem;
http://www.sciencedirect.com/science/article/pii/S0377221714007760
Reilly, Charles H.
Sapkota, Nabin
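As a hedged illustration of the quantities discussed in the abstract above (entropy and correlation of a discrete bivariate pmf with uniform marginals), the following sketch builds two example pmfs of our own on {0,1,2} x {0,1,2}; the specific pmfs and function names are illustrative assumptions, not taken from the paper, which gives a closed-form entropy-maximizing construction.

```python
import numpy as np

def entropy(pmf):
    """Shannon entropy (in bits) of a joint pmf given as a 2-D array."""
    p = pmf[pmf > 0]
    return float(-(p * np.log2(p)).sum())

def correlation(pmf):
    """Pearson correlation of (X, Y) for a joint pmf on {0..n-1} x {0..n-1}."""
    n = pmf.shape[0]
    vals = np.arange(n)
    px, py = pmf.sum(axis=1), pmf.sum(axis=0)   # marginals of X and Y
    ex, ey = vals @ px, vals @ py
    exy = vals @ pmf @ vals                     # E[XY]
    vx = vals**2 @ px - ex**2
    vy = vals**2 @ py - ey**2
    return float((exy - ex * ey) / np.sqrt(vx * vy))

# Independence: uniform marginals, zero correlation, maximal entropy log2(9).
independent = np.full((3, 3), 1 / 9)

# A positively correlated member with mass shifted toward the diagonal; each
# row and column still sums to 1/3, so both marginals remain uniform.
diag_biased = np.array([[2, 1, 0], [1, 1, 1], [0, 1, 2]]) / 9.0
```

Here entropy(independent) equals log2(9) bits, while diag_biased has correlation 2/3 and strictly lower entropy, giving a concrete feel for the entropy-versus-correlation trade-off the paper quantifies.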
oai:RePEc:eee:ejores:v:242:y:2015:i:1:p:45-502015-01-15RePEc:eee:ejores
RePEc:eee:ejores:v:242:y:2015:i:1:p:45-50
article
Scheduling to minimize the maximum total completion time per machine
In this paper, we study the problem of minimizing the maximum total completion time per machine on m parallel and identical machines. We prove that the problem is strongly NP-hard if m is part of the input. When m is a given number, a pseudo-polynomial time dynamic programming algorithm is proposed. We also show that the worst-case ratio of SPT is at most 2.608 and at least 2.5366 when m is sufficiently large. We further present another algorithm which has a worst-case ratio of 2.
Scheduling; Parallel machine; Worst-case ratio;
http://www.sciencedirect.com/science/article/pii/S0377221714008029
Wan, Long
Ding, Zhihao
Li, Yunpeng
Chen, Qianqian
Tan, Zhiyi
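The objective in the abstract above (the maximum, over machines, of the total completion time on that machine) and an SPT-style list-scheduling heuristic can be sketched as follows. The placement rule used here (put each job, taken shortest-first, on the machine whose current total completion time is smallest) is an assumption for illustration and not necessarily the paper's exact SPT variant.

```python
def total_completion_time(jobs):
    """Sum of completion times when jobs run in the given order on one machine."""
    elapsed, total = 0, 0
    for p in jobs:
        elapsed += p          # completion time of this job
        total += elapsed
    return total

def spt_schedule(processing_times, m):
    """SPT-style list scheduling on m machines; returns the objective value,
    i.e. the maximum total completion time over the machines."""
    machines = [[] for _ in range(m)]
    for p in sorted(processing_times):
        best = min(machines, key=total_completion_time)
        best.append(p)
    return max(total_completion_time(jobs) for jobs in machines)
```

For example, jobs [1, 2, 3, 4] on 2 machines yield per-machine job lists [1, 3] and [2, 4] under this rule, with per-machine totals 5 and 8, so the objective value is 8.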
oai:RePEc:eee:ejores:v:242:y:2015:i:1:p:10-202015-01-15RePEc:eee:ejores
RePEc:eee:ejores:v:242:y:2015:i:1:p:10-20
article
Reconfiguration of satellite orbit for cooperative observation using variable-size multi-objective differential evolution
A novel self-adaptive variable-size multi-objective differential evolution algorithm is presented to find the best reconfiguration of existing on-orbit satellites for particular ground targets when an emergent requirement arises at short notice. The main contribution of this study is that three coverage metrics are designed to assess the performance of the reconfiguration. The proposed algorithm utilizes a fixed-length chromosome encoding scheme combined with an expression vector, together with modified initialization, mutation, crossover and selection operators, to search for the optimal reconfiguration structure. Multi-subpopulation diversity initialization is adopted first; then a mutation based on an estimation of distribution algorithm and adaptive crossover operators are defined to manipulate variable-length chromosomes; finally, a new selection mechanism is employed to generate well-distributed individuals for the next generation. The proposed algorithm is applied to three characteristically different case studies, with the objective of improving performance with respect to specified targets by minimizing fuel consumption and maneuver time. The results show that the algorithm can effectively find approximate Pareto solutions under different topological structures. A comparative analysis demonstrates that the proposed algorithm outperforms two other related multi-objective evolutionary optimization algorithms in terms of quality, convergence and diversity metrics.
Satellite orbit reconfiguration; Variable-size optimization; Multi-objective differential evolution; Evolutionary computations; Estimation of Distribution Algorithm;
http://www.sciencedirect.com/science/article/pii/S0377221714007644
Chen, Yingguo
Mahalec, Vladimir
Chen, Yingwu
Liu, Xiaolu
He, Renjie
Sun, Kai
oai:RePEc:eee:ejores:v:241:y:2015:i:3:p:771-7822015-01-15RePEc:eee:ejores
RePEc:eee:ejores:v:241:y:2015:i:3:p:771-782
article
Restricted risk measures and robust optimization
In this paper we consider characterizations of the robust uncertainty sets associated with coherent and distortion risk measures. In this context we show that if we are willing to enforce the coherent or distortion axioms only on random variables that are affine or linear functions of the vector of random parameters, we may consider some new variants of the uncertainty sets determined by the classical characterizations. We also show that in the finite probability case these variants are simple transformations of the classical sets. Finally we present results of computational experiments that suggest that the risk measures associated with these new uncertainty sets can help mitigate estimation errors of the Conditional Value-at-Risk.
Risk management; Stochastic programming; Uncertainty modeling;
http://www.sciencedirect.com/science/article/pii/S0377221714007632
Lagos, Guido
Espinoza, Daniel
Moreno, Eduardo
Vielma, Juan Pablo
oai:RePEc:eee:ejores:v:241:y:2015:i:3:p:697-7072015-01-15RePEc:eee:ejores
RePEc:eee:ejores:v:241:y:2015:i:3:p:697-707
article
Integrating tabu search and VLSN search to develop enhanced algorithms: A case study using bipartite boolean quadratic programs
The bipartite boolean quadratic programming problem (BBQP) is a generalization of the well studied boolean quadratic programming problem. The model has a variety of real life applications; however, empirical studies of the model are not available in the literature, except in a few isolated instances. In this paper, we develop efficient heuristic algorithms based on tabu search, very large scale neighborhood (VLSN) search, and a hybrid algorithm that integrates the two. The computational study establishes that effective integration of simple tabu search with VLSN search results in superior outcomes, and suggests the value of such an integration in other settings. Complexity analysis and implementation details are provided along with conclusions drawn from experimental analysis. In addition, we obtain solutions better than the best previously known for almost all medium and large size benchmark instances.
Quadratic programming; Boolean variables; Metaheuristics; Tabu search; Worst-case analysis;
http://www.sciencedirect.com/science/article/pii/S0377221714007759
Glover, Fred
Ye, Tao
Punnen, Abraham P.
Kochenberger, Gary
oai:RePEc:eee:ejores:v:241:y:2015:i:3:p:880-8872015-01-15RePEc:eee:ejores
RePEc:eee:ejores:v:241:y:2015:i:3:p:880-887
article
Information weighted sampling for detecting rare items in finite populations with a focus on security
Frequently one has to search within a finite population for a single particular individual or item with a rare characteristic. Whether an item possesses the characteristic can only be determined by close inspection. The availability of additional information about the items in the population opens the way to a more effective search strategy than just random sampling or complete inspection of the population. We will assume that the available information allows a prior probability of possessing the rare characteristic to be assigned to every item in the population. This is consistent with the practice of using profiling to select high-risk items for inspection. The objective is to find the specific item with the minimum number of inspections. We will determine the optimal search strategies for several models according to the average number of inspections needed to find the specific item. Using these respective optimal strategies we show that we can partially order the numbers of inspections needed for the different models with respect to the usual stochastic ordering. This also entails a partial ordering of the averages of the number of inspections. Finally, we present the use of these results, together with some discussion, extensions, examples, and conclusions.
Applied probability; Probability sampling; Rare events; Profiling;
http://www.sciencedirect.com/science/article/pii/S0377221714007747
Hoogstrate, André J.
Klaassen, Chris A.J.
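The search setting in the abstract above can be illustrated with a minimal sketch, under two simplifying assumptions made here for illustration only: exactly one item in the population possesses the characteristic, and inspections are error-free. Under these assumptions, inspecting in decreasing order of prior probability minimizes the expected number of inspections (a standard exchange argument).

```python
from itertools import permutations

def expected_inspections(priors):
    """E[number of inspections] when items are inspected in the given order
    and the k-th inspected item (1-indexed) is the target w.p. priors[k-1]."""
    return sum(k * p for k, p in enumerate(priors, start=1))

def best_order(priors):
    """Inspect in decreasing prior probability (optimal by exchange argument)."""
    return sorted(priors, reverse=True)

priors = [0.1, 0.5, 0.15, 0.25]
# Brute force over all 24 orders confirms the descending order is optimal here.
brute = min(expected_inspections(list(order)) for order in permutations(priors))
assert abs(brute - expected_inspections(best_order(priors))) < 1e-12
```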
oai:RePEc:eee:ejores:v:241:y:2015:i:3:p:842-8502015-01-15RePEc:eee:ejores
RePEc:eee:ejores:v:241:y:2015:i:3:p:842-850
article
Why and how to differentiate in claims problems? An axiomatic approach
In a bankruptcy situation individuals are not equally affected since each one has its own specific characteristics. These aspects cannot be ignored and may justify an allocation bias in favor of or against some individuals. This paper develops a theory of differentiation in claims problems that considers not only the vector of claims, but also some justified differentiating criteria based on other characteristics (wealth, net-income, GDP, etc.). Accordingly, we propose some progressive transfers from richer to poorer claimants with the purpose of distributing the damage as evenly as possible. Finally, we characterize our solution by means of the Lorenz criterion. Endogenous convex combinations between solutions are also considered.
Claims problems; Differentiation; Compensation; Redistribution;
http://www.sciencedirect.com/science/article/pii/S0377221714007620
Giménez-Gómez, José-Manuel
Osório, Antonio
oai:RePEc:eee:ejores:v:242:y:2015:i:1:p:21-332015-01-15RePEc:eee:ejores
RePEc:eee:ejores:v:242:y:2015:i:1:p:21-33
article
Construction and improvement algorithms for dispersion problems
Given a set N, a pairwise distance function d and an integer number m, the Dispersion Problems (DPs) require extracting from N a subset M of cardinality m, so as to optimize a suitable function of the distances between the elements in M. Different functions give rise to a whole family of combinatorial optimization problems. In particular, the max-sum DP and the max-min DP have received strong attention in the literature. Other problems (e.g., the max-minsum DP and the min-diffsum DP) have been recently proposed with the aim of modeling the optimization of equity requirements, as opposed to more classical efficiency requirements. Building on the main ideas which underlie some state-of-the-art methods for the max-sum DP and the max-min DP, this work proposes some constructive procedures and a Tabu Search algorithm for the new problems. In particular, we investigate the extension to the new context of key features such as initialization, tenure management and diversification mechanisms. The computational experiments show that the algorithms applying these ideas perform effectively on the publicly available benchmarks, but also that there are some interesting differences with respect to the DPs more studied in the literature. As a result of this investigation, we also provide optimal results and bounds as a useful reference for further studies.
Combinatorial optimization; Dispersion problems; Binary quadratic programming; Tabu Search;
http://www.sciencedirect.com/science/article/pii/S0377221714007978
Aringhieri, Roberto
Cordone, Roberto
Grosso, Andrea
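As a baseline illustration of the max-min dispersion objective mentioned above, a standard greedy construction (a textbook baseline, not the paper's constructive procedures or Tabu Search algorithm) might look like the following sketch; the function names and the example points are our own.

```python
import math

def greedy_maxmin(points, m, dist):
    """Choose m points: seed with the farthest pair, then repeatedly add the
    point whose minimum distance to the already chosen set is largest."""
    a, b = max(((p, q) for p in points for q in points if p != q),
               key=lambda pq: dist(*pq))
    chosen = [a, b]
    while len(chosen) < m:
        candidates = [p for p in points if p not in chosen]
        chosen.append(max(candidates,
                          key=lambda p: min(dist(p, c) for c in chosen)))
    return chosen

def euclidean(p, q):
    return math.dist(p, q)
```

On the points (0,0), (1,0), (0,1), (10,10), (5,5) with m = 3, this selects (0,0), (10,10) and (5,5): the seed is the farthest pair and (5,5) has the largest minimum distance to it.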
oai:RePEc:eee:ejores:v:241:y:2015:i:3:p:763-7702015-01-15RePEc:eee:ejores
RePEc:eee:ejores:v:241:y:2015:i:3:p:763-770
article
Using discrete event simulation cellular automata models to determine multi-mode travel times and routes of terrestrial suppression resources to wildland fires
Forest fires can impose substantial social, environmental and economic burdens on the communities they affect. Well managed and timely fire suppression can demonstrably reduce the area burnt and minimise consequent losses. In order to effectively coordinate emergency vehicles for fire suppression, it is important to have an understanding of the time that elapses between vehicle dispatch and arrival at a fire. Forest fires can occur in remote locations that are not necessarily directly accessible by road. Consequently, estimates of vehicular travel time may need to consider both on- and off-road travel. We introduce and demonstrate a novel framework for estimating travel times and determining optimal travel routes for vehicles travelling from bases to forest fires where both on- and off-road travel may be necessary. A grid-based, cost-distance approach was utilised, where a travel time surface was computed indicating travel time from the reported fire location. Times were calculated using a discrete event simulation cellular automata (CA) model, with the CA progressing outwards from the fire location. Optimal (fastest) travel paths were computed by recognising chains of parent–child relationships. Our method achieved results comparable to traditional network analysis techniques when considering travel along roads; however, it was also demonstrated to be effective in estimating travel times and optimal routes in complex terrain.
Transport; Network; OR in environment and climate change; Routing; Simulation;
http://www.sciencedirect.com/science/article/pii/S0377221714007401
Duff, Thomas J.
Chong, Derek M.
Tolhurst, Kevin G.
oai:RePEc:eee:ejores:v:242:y:2015:i:1:p:332-3422015-01-15RePEc:eee:ejores
RePEc:eee:ejores:v:242:y:2015:i:1:p:332-342
article
Frontier-based vs. traditional mutual fund ratings: A first backtesting analysis
We explore the potential benefits of a series of existing and new non-parametric convex and non-convex frontier-based fund rating models to summarize the information contained in the moments of the mutual fund price series. Limiting ourselves to the traditional mean-variance portfolio setting, we test in a simple backtesting setup whether these efficiency measures fare any better than more traditional financial performance measures in selecting promising investment opportunities. The evidence points to a remarkable superior performance of these frontier models compared to most, but not all traditional financial performance measures.
Mutual fund rating; DEA; FDH; Shortage function; Mean-variance portfolio frontier;
http://www.sciencedirect.com/science/article/pii/S037722171400914X
Brandouy, Olivier
Kerstens, Kristiaan
Van de Woestyne, Ignace
oai:RePEc:eee:ejores:v:242:y:2015:i:1:p:343-3462015-01-15RePEc:eee:ejores
RePEc:eee:ejores:v:242:y:2015:i:1:p:343-346
article
On the MILP model for the U-shaped assembly line balancing problems
U-shaped assembly lines are an important configuration of modern manufacturing systems due to their flexibility to adapt to varying market demands. In U-shaped lines, tasks are assigned after their predecessors or successors. Some MILP models have been proposed to formulate the U-shaped assembly line balancing problem using either–or constraints to express precedence relationships. We show that this modeling approach reported in the literature may often find optimal solutions that are infeasible and verify this on a large set of benchmark problems. We present a revision to this model to accurately express the precedence relationships without introducing additional variables or constraints. We also illustrate on the same benchmark problems that our revision always reports solutions that are feasible.
Integer programming; Facilities planning and design; U-shaped assembly lines;
http://www.sciencedirect.com/science/article/pii/S0377221714008583
Fattahi, Ali
Turkay, Metin
oai:RePEc:eee:ejores:v:241:y:2015:i:3:p:631-6412015-01-15RePEc:eee:ejores
RePEc:eee:ejores:v:241:y:2015:i:3:p:631-641
article
A mixed-integer linear programming model to optimize the vertical alignment considering blocks and side-slopes in road construction
In the vertical alignment phase of road design, one minimizes the cost of moving material between different sections of the road while maintaining safety and building code constraints. Existing vertical alignment models consider neither the side-slopes of the road nor the natural blocks like rivers, mountains, etc., in the construction area. The calculated cost without the side-slopes can have significant errors (more than 20 percent), and the earthwork schedule without considering the blocks is unrealistic. In this study, we present a novel mixed integer linear programming model for the vertical alignment problem that considers both of these issues. The numerical results show that the approximation of the side-slopes can generate solutions within an acceptable error margin specified by the user without increasing the time complexity significantly.
Combinatorial optimization; Mixed integer linear program; OR in road design (natural resources); Earthwork optimization; Vertical alignment optimization;
http://www.sciencedirect.com/science/article/pii/S0377221714007127
Hare, Warren
Lucet, Yves
Rahman, Faisal
oai:RePEc:eee:ejores:v:242:y:2015:i:1:p:274-2852015-01-15RePEc:eee:ejores
RePEc:eee:ejores:v:242:y:2015:i:1:p:274-285
article
Incorporating priorities for waiting customers in the hypercube queuing model with application to an emergency medical service system in Brazil
Emergency medical services (EMS) assist different classes of patients according to their medical seriousness. In this study, we extended the well-known hypercube model, based on the theory of spatially distributed queues, to analyze systems with multiple priority classes and a queue for waiting customers. Then, we analyzed the computational results obtained when applying this approach to a case study from an urban EMS in the city of Ribeirão Preto, Brazil. We also investigated some scenarios for this system studying different periods of the day and the impact of increasing the demands of the patient classes. The results showed that relevant performance measures can be obtained to analyze such a system by using the analytical model extended to deal with queuing priority. In particular, it can accurately evaluate the average response time for each class of emergency calls individually, paying particular attention to high priority calls.
OR in health services; Hypercube model; Queuing priority; Emergency medical services; SAMU;
http://www.sciencedirect.com/science/article/pii/S0377221714007954
de Souza, Regiane Máximo
Morabito, Reinaldo
Chiyoshi, Fernando Y.
Iannoni, Ana Paula
oai:RePEc:eee:ejores:v:242:y:2015:i:1:p:88-992015-01-15RePEc:eee:ejores
RePEc:eee:ejores:v:242:y:2015:i:1:p:88-99
article
Consumer returns policies with endogenous deadline and supply chain coordination
This paper considers returns policies under which consumers’ valuation depends on the refund amount they receive and the length of time they must wait after the item is returned. Consumers face an uncertain valuation before purchase, and the realization of that purchase's value occurs only after the return deadline has passed. Depending on the product lifecycle length and magnitude of return rate, a retailer decides on strategies for that product's return deadline, including return prohibition, life-cycle return, and fixed return deadline. In addition, the influence of the return deadline on consumers’ behavior and the pricing and inventory policies of the retailer are systematically investigated. Moreover, based on the analysis of consumer return behavior on a traditional buy-back contract, we present a new differentiated buy-back contract, contingent on return deadline, to coordinate a supply chain consisting of an upstream manufacturer and a downstream retailer. Finally, extensions on some specific behavioral factors such as moral hazard, inertia return, and external effect are investigated.
Supply chain management; Consumer behavior; Product returns; Return deadline; Buy-back contract;
http://www.sciencedirect.com/science/article/pii/S0377221714007887
Xu, Lei
Li, Yongjian
Govindan, Kannan
Xu, Xiaolin
oai:RePEc:eee:ejores:v:241:y:2015:i:3:p:583-5952015-01-15RePEc:eee:ejores
RePEc:eee:ejores:v:241:y:2015:i:3:p:583-595
article
Operational research from Taylorism to Terabytes: A research agenda for the analytics age
The growing attention and prominence afforded to analytics presents a genuine challenge for the operational research community. Many in the community have recognised this growth and sought to align themselves with analytics. For instance, the US operational research society INFORMS now offers analytics-related conferences, certification and a magazine. However, as shown in this research, the volume of analytics-orientated studies in journals associated with operational research is comparatively low. This paper addresses this paradox by seeking to better understand what analytics is and how operational research relates to it. To do so, literature from a range of academic disciplines is analysed, in what is conceived as concurrent histories in the shared tradition of a management paradigm spread over the last 100 years. The findings of this analysis reveal new insights as to how operational research exists within an ecosystem shared with several other disciplines, and how interactions and ripple effects diffuse knowledge and ideas between them. Whilst this ecosystem is developed and evolved through interdisciplinary collaborations, individual disciplines are cast into competition for the attention of the same business users. These findings are further explored by discussing the implications for operational research, as well as considering what directions future research may take to maximise the potential value of these relationships.
Analytics; Big data; Data visualisation; History of OR; History of computing;
http://www.sciencedirect.com/science/article/pii/S037722171400664X
Mortenson, Michael J.
Doherty, Neil F.
Robinson, Stewart
oai:RePEc:eee:ejores:v:242:y:2015:i:1:p:161-1712015-01-15RePEc:eee:ejores
RePEc:eee:ejores:v:242:y:2015:i:1:p:161-171
article
Risk-sensitive dividend problems
We consider a discrete-time version of the popular optimal dividend payout problem in risk theory. The novel aspect of our approach is that we allow for a risk-averse insurer, i.e., instead of maximising the expected discounted dividends until ruin we maximise the expected utility of discounted dividends until ruin. This task was proposed as an open problem in Gerber and Shiu (2004). The model in a continuous-time Brownian motion setting with the exponential utility function was analysed in Grandits et al. (2007); nevertheless, a complete solution has not been provided. In this work, we instead solve the problem in a discrete-time setup for the exponential and the power utility functions and give the structure of optimal history-dependent dividend policies. We make use of certain ideas studied earlier in Bäuerle and Rieder (2011), where Markov decision processes with general utility functions were treated. Our analysis, however, includes new aspects, since the reward functions in this case are not bounded.
Markov decision process; Dividend payout; Risk aversion; History-dependent policy; Fixed point problem;
http://www.sciencedirect.com/science/article/pii/S0377221714008686
Bäuerle, Nicole
Jaśkiewicz, Anna
oai:RePEc:eee:ejores:v:242:y:2015:i:1:p:100-1062015-01-15RePEc:eee:ejores
RePEc:eee:ejores:v:242:y:2015:i:1:p:100-106
article
The modular tool switching problem
This article analyzes the complexity of the modular tool switching problem arising in flexible manufacturing environments. A single, numerically controlled placement machine is equipped with an online tool magazine consisting of several changeable tool feeder modules. The modules can hold a number of tools necessary for the jobs. In addition to the online modules, there is a set of offline modules which can be mounted on the machine during a job change. A number of jobs are processed by the machine, each job requiring a certain set of tools. Tools between jobs can be switched individually, or a whole module containing multiple tools can be replaced. We consider the complexity of the problem of arranging tools into the modules so that the work for module and tool loading is minimized. Tools are of uniform size and have unit loading costs. We show that the general problem is NP-hard, and that in the case of a fixed number of modules and fixed module capacity the problem is solvable in polynomial time.
Complexity theory; Flexible Manufacturing Systems; Tool loading; Set-up optimization; Printed circuit board;
http://www.sciencedirect.com/science/article/pii/S0377221714007917
Raduly-Baka, Csaba
Nevalainen, Olli S.
oai:RePEc:eee:ejores:v:242:y:2015:i:1:p:201-2112015-01-15RePEc:eee:ejores
RePEc:eee:ejores:v:242:y:2015:i:1:p:201-211
article
On the relations between ELECTRE TRI-B and ELECTRE TRI-C and on a new variant of ELECTRE TRI-B
ELECTRE TRI is a set of methods designed to sort alternatives evaluated on several criteria into ordered categories. The original method uses limiting profiles. A recently introduced method uses central profiles. We study the relations between these two methods. We do so by investigating whether an ordered partition obtained with one method can also be obtained with the other method, after a suitable redefinition of the profiles. We also investigate a number of situations in which the original method using limiting profiles gives results that do not fit our intuition well. This leads us to propose a variant of ELECTRE TRI that uses limiting profiles. We show that this variant may have some advantages over the original method.
Decision with multiple attributes; Sorting models; ELECTRE TRI-B; ELECTRE TRI-C;
http://www.sciencedirect.com/science/article/pii/S0377221714007966
Bouyssou, Denis
Marchant, Thierry
oai:RePEc:eee:ejores:v:241:y:2015:i:3:p:596-6052015-01-15RePEc:eee:ejores
RePEc:eee:ejores:v:241:y:2015:i:3:p:596-605
article
Using SVM to combine global heuristics for the Standard Quadratic Problem
The Standard Quadratic Problem (StQP) is an NP-hard problem with many local minimizers (stationary points). In the literature, heuristics based on unconstrained continuous non-convex formulations have been proposed (Bomze & Palagi, 2005; Bomze, Grippo, & Palagi, 2012), but none dominates the others in terms of the best value found. Following Cassioli, DiLorenzo, Locatelli, Schoen, and Sciandrone (2012), we propose to use Support Vector Machines (SVMs) to define a multistart global strategy which selects the “best” heuristic. We test our method on StQPs arising from the Maximum Clique Problem on a graph, which is a challenging combinatorial problem. We use as benchmarks the clique problems from the DIMACS challenge.
Quadratic programming; Nonlinear programming; Data mining; Maximum Clique Problem; Global optimization;
http://www.sciencedirect.com/science/article/pii/S0377221714007930
Dellepiane, Umberto
Palagi, Laura
oai:RePEc:eee:ejores:v:241:y:2015:i:3:p:606-6212015-01-15RePEc:eee:ejores
RePEc:eee:ejores:v:241:y:2015:i:3:p:606-621
article
The continuous p-centre problem: An investigation into variable neighbourhood search with memory
A VNS-based heuristic using both a facility as well as a customer type neighbourhood structure is proposed to solve the p-centre problem in the continuous space. Simple but effective enhancements to the original Elzinga–Hearn algorithm as well as a powerful ‘locate–allocate’ local search used within VNS are proposed. In addition, efficient implementations in both neighbourhood structures are presented. A learning scheme is also embedded into the search to produce a new variant of VNS that uses memory. The effect of incorporating strong intensification within the local search via a VND type structure is also explored with interesting results. Empirical results, based on several existing data sets (TSP-Lib) with various values of p, show that the proposed VNS implementations outperform both a multi-start heuristic and the discrete-based optimal approach that use the same local search.
p-Centre problem; Continuous space; Variable neighbourhood search with memory; Adaptive search; Elzinga–Hearn algorithm;
http://www.sciencedirect.com/science/article/pii/S0377221714008108
Elshaikh, Abdalla
Salhi, Said
Nagy, Gábor
oai:RePEc:eee:ejores:v:241:y:2015:i:3:p:622-6302015-01-15RePEc:eee:ejores
RePEc:eee:ejores:v:241:y:2015:i:3:p:622-630
article
On the unified dispersion problem: Efficient formulations and exact algorithms
Facility dispersion problems involve placing a number of facilities as far apart from each other as possible. Four different criteria of facility dispersal have been proposed in the literature (Erkut & Neuman, 1991). Despite their formal differences, these four classic dispersion objectives can be expressed in a unified model called the partial-sum dispersion model (Lei & Church, 2013). In this paper, we focus on the unweighted partial sum dispersion problem and introduce an efficient formulation for this generalized dispersion problem based on a construct by Ogryczak and Tamir (2003). We also present a fast branch-and-bound based exact algorithm.
Location; Facility dispersion;
http://www.sciencedirect.com/science/article/pii/S0377221714008418
Lei, Ting L.
Church, Richard L.
oai:RePEc:eee:ejores:v:242:y:2015:i:1:p:63-712015-01-15RePEc:eee:ejores
RePEc:eee:ejores:v:242:y:2015:i:1:p:63-71
article
A bidirectional building approach for the 2D constrained guillotine knapsack packing problem
This paper investigates the 2D guillotine knapsack packing problem, in which the objective is to select and cut a set of rectangles from a sheet of fixed size so as to maximize the total profit of the selected rectangles. The orientation of the rectangles is fixed, and guillotine cuts are required: each cut must be parallel to the sides of the sheet and divide it into two completely separate sheets. Two well-known methods, namely the top-down and bottom-up approaches, are combined into a coherent algorithm to address this problem. Computational experiments on benchmark test sets show that the approach finds the optimal solution for almost all moderately sized instances and outperforms all existing approaches for larger instances.
Cutting and packing; Guillotine-cut; 2D knapsack; Block-building;
http://www.sciencedirect.com/science/article/pii/S037722171400808X
Wei, Lijun
Lim, Andrew
oai:RePEc:eee:ejores:v:242:y:2015:i:1:p:188-2002015-01-15RePEc:eee:ejores
RePEc:eee:ejores:v:242:y:2015:i:1:p:188-200
article
A stochastic dynamic pricing model for the multiclass problems in the airline industry
In the airline industry, the ticket price set for each flight directly affects the number of people who will later try to buy a ticket. Depending on the willingness-to-pay of the customers, the flight might take off with empty seats or with seats sold at a lower price. Therefore, based on the behavior of the customers, a price must be fixed for each type of product in each period. We propose a stochastic dynamic pricing model to solve this problem, applying phase-type distributions and renewal processes to model the inter-arrival time between two customers who book a ticket and the probability that a customer buys a ticket. We test this model on a real-world case where, as a result, revenue is increased by 31 percent on average.
Revenue management; Phase-type distributions; Stochastic dynamic programming; Dynamic pricing; OR in airlines;
http://www.sciencedirect.com/science/article/pii/S0377221714007772
Otero, Daniel F.
Akhavan-Tabatabaei, Raha
oai:RePEc:eee:ejores:v:241:y:2015:i:3:p:686-6962015-01-15RePEc:eee:ejores
RePEc:eee:ejores:v:241:y:2015:i:3:p:686-696
article
The linear ordering problem revisited
The Linear Ordering Problem is a popular combinatorial optimisation problem which has been extensively addressed in the literature. However, in spite of its popularity, little is known about the characteristics of this problem. This paper studies a procedure to extract static information from an instance of the problem, and proposes a method to incorporate the obtained knowledge in order to improve the performance of local search-based algorithms. The procedure introduced identifies the positions where the indexes cannot generate local optima for the insert neighbourhood, and thus cannot generate globally optimal solutions. This information is then used to propose a restricted insert neighbourhood that discards the insert operations which move indexes to positions where optimal solutions are not generated. In order to measure the efficiency of the proposed restricted insert neighbourhood, two state-of-the-art algorithms for the LOP that include local search procedures have been modified. The conducted experiments confirm that the restricted versions of the algorithms systematically outperform the classical designs when a maximum number of function evaluations is used as the stopping criterion. The statistical tests included in the experimentation report significant differences in all cases, which validates the efficiency of our proposal. Moreover, additional experiments comparing execution times reveal that the restricted approaches are faster than their counterparts for most of the instances.
Combinatorial optimisation; Linear ordering problem; Local optima; Insert neighbourhood;
http://www.sciencedirect.com/science/article/pii/S0377221714007802
Ceberio, Josu
Mendiburu, Alexander
Lozano, Jose A.
oai:RePEc:eee:ejores:v:242:y:2015:i:1:p:72-872015-01-15RePEc:eee:ejores
RePEc:eee:ejores:v:242:y:2015:i:1:p:72-87
article
Queuing models to analyze dwell-point and cross-aisle location in autonomous vehicle-based warehouse systems
Technological innovations in warehouse automation systems, such as the Autonomous Vehicle based Storage and Retrieval System (AVS/RS), are geared towards achieving the greater operational efficiency and flexibility that will be necessary in warehouses of the future. AVS/RS relies on autonomous vehicles and lifts for horizontal and vertical transfer of unit-loads, respectively. To implement a new technology such as AVS/RS, the choice of a design variable setting, the interactions among the design variables, and the design trade-offs need to be well understood. In particular, design decisions such as the choice of vehicle dwell-point and the location of cross-aisles could significantly affect the performance of an AVS/RS. Hence, we investigate the effect of these design decisions using customized analytical models based on multi-class semi-open queuing network theory. Numerical studies suggest that the average percentage reduction in storage and retrieval transactions with an appropriate choice of dwell-point is about 8 percent and 4 percent, respectively. While an end-of-aisle location of the cross-aisle is commonly used in practice, our model suggests that there exists a better cross-aisle location within a tier (about 15 percent from the end of the aisle); however, the cycle time benefits of choosing the optimal cross-aisle location over the end-of-aisle location are marginal. Detailed simulations also indicate that the analytical model yields fairly accurate results.
Logistics; Facilities planning and design; Queuing; Simulation;
http://www.sciencedirect.com/science/article/pii/S0377221714007796
Roy, Debjit
Krishnamurthy, Ananth
Heragu, Sunderesh
Malmborg, Charles
oai:RePEc:eee:ejores:v:241:y:2015:i:3:p:806-8142015-01-15RePEc:eee:ejores
RePEc:eee:ejores:v:241:y:2015:i:3:p:806-814
article
Penalty functions based upon a general class of restricted dissimilarity functions
In this paper the notion of restricted dissimilarity function is discussed and some general results are shown. The relation between the concepts of restricted dissimilarity function and penalty function is presented. A specific model for constructing penalty functions by means of a wide class of restricted dissimilarity functions based upon automorphisms of the unit interval is studied. A characterization theorem of the automorphisms which give rise to two-dimensional penalty functions is proposed. A generalization of the previous theorem to any dimension n > 2 is also provided. Finally, a non-convex example of a generator of penalty functions of arbitrary dimension is illustrated.
Restricted dissimilarity function; Penalty function; Quasi-convexity;
http://www.sciencedirect.com/science/article/pii/S0377221714007218
Ricci, Roberto Ghiselli
oai:RePEc:eee:ejores:v:241:y:2015:i:3:p:851-8622015-01-15RePEc:eee:ejores
RePEc:eee:ejores:v:241:y:2015:i:3:p:851-862
article
Combined scheduling and capacity planning of electricity-based ammonia production to integrate renewable energies
Economic assessment of energy-related processes needs to adapt to the development of large-scale integration of renewable energies into the energy system. Flexible electrochemical processes, such as the electrolysis of water to produce hydrogen, are foreseen as cornerstones of renewable energy systems. These types of technologies require the current methods of energy storage scheduling and capacity planning to incorporate their distinct non-linear characteristics in order to fully assess their economic impact. A combined scheduling and capacity planning model for an innovative, flexible electricity-to-hydrogen-to-ammonia plant is derived in this paper. A heuristic is presented which translates the depicted non-convex, mixed-integer problem into a set of convex and continuous non-linear problems. These can be solved with commercially available solvers. The global optimum of the original problem is encircled by the heuristic and, as the numerical illustration with German electricity market data of 2013 shows, can be narrowed down and approximated very well. The results show that it is not only meaningful but also feasible to solve a combined scheduling and capacity problem on a convex non-linear basis for this and similar new process concepts. Application to other hydrogen-based concepts is straightforward, and application to other non-linear chemical processes is generally possible.
OR in energy; Non-linear programming; Heuristics; Hydrogen storage; Ammonia;
http://www.sciencedirect.com/science/article/pii/S0377221714007164
Schulte Beerbühl, S.
Fröhling, M.
Schultmann, F.
oai:RePEc:eee:ejores:v:241:y:2015:i:3:p:863-8712015-01-15RePEc:eee:ejores
RePEc:eee:ejores:v:241:y:2015:i:3:p:863-871
article
Entrepreneurial finance with equity-for-guarantee swap and idiosyncratic risk
We consider a risk-averse entrepreneur who invests in a project with idiosyncratic risk. In contrast to the literature, we assume the entrepreneur is unable to get a loan from a bank directly because of the low creditability of the entrepreneur and so an innovative financial contract, named equity-for-guarantee swap, is signed among a bank, an insurer, and the entrepreneur. It is shown that the new swap leads to higher leverage, which brings more diversification and tax benefits. The new swap not only solves the problems of financing constraints, but also significantly improves the welfare level of the entrepreneur. The growth of welfare level increases dramatically with risk aversion index and the volatility of idiosyncratic risk.
Finance; Borrowing constraints; Equity-for-guarantee swap; Capital structure; Cash-out option;
http://www.sciencedirect.com/science/article/pii/S0377221714007346
Wang, Huamao
Yang, Zhaojun
Zhang, Hai
oai:RePEc:eee:ejores:v:241:y:2015:i:3:p:830-8412015-01-15RePEc:eee:ejores
RePEc:eee:ejores:v:241:y:2015:i:3:p:830-841
article
Modeling assignment-based pairwise comparisons within integrated framework for value-driven multiple criteria sorting
We introduce new preference disaggregation modeling formulations for multiple criteria sorting with a set of additive value functions. The preference information supplied by the Decision Maker (DM) is composed of: (1) possibly imprecise assignment examples, (2) desired class cardinalities, and (3) assignment-based pairwise comparisons. The latter have the form of imprecise statements referring to the desired assignments for pairs of alternatives, but without specifying any concrete class. Additionally, we account for preferences concerning the shape of the marginal value functions and desired comprehensive values of alternatives assigned to a given class or class range. The exploitation of all value functions compatible with these preferences results in three types of results: (1) necessary and possible assignments, (2) extreme class cardinalities, and (3) necessary and possible assignment-based preference relations. These outputs correspond to different types of admitted preference information. By exhibiting different outcomes, we encourage the DM in various ways to enrich her/his preference information interactively. The applicability of the framework is demonstrated on data involving the classification of cities into liveability classes.
Multiple criteria analysis; Sorting; Multi-Attribute Value Theory; Preference disaggregation; Robust Ordinal Regression;
http://www.sciencedirect.com/science/article/pii/S0377221714007899
Kadziński, Miłosz
Ciomek, Krzysztof
Słowiński, Roman
oai:RePEc:eee:ejores:v:241:y:2015:i:3:p:931-9352015-01-15RePEc:eee:ejores
RePEc:eee:ejores:v:241:y:2015:i:3:p:931-935
article
Common mistakes in computing the nucleolus
Although linear programming and duality have been correctly incorporated in algorithms to compute the nucleolus, we have found mistakes in how these have been used in a broad range of applications. Overlooking the fact that a linear program can have multiple optimal solutions and neglecting the relevance of duality appear to be crucial sources of mistakes in computing the nucleolus. We discuss these issues and illustrate them in five mistaken examples from this and other journals. The purpose of this note is to prevent these mistakes from propagating further by clarifying how linear programming and duality can be correctly used for computing the nucleolus.
Game theory; Nucleolus; Cost allocation; Linear programming; Duality;
http://www.sciencedirect.com/science/article/pii/S0377221714008595
Guajardo, Mario
Jörnsten, Kurt
oai:RePEc:eee:ejores:v:242:y:2015:i:1:p:304-3152015-01-15RePEc:eee:ejores
RePEc:eee:ejores:v:242:y:2015:i:1:p:304-315
article
Pricing, market coverage and capacity: Can green and brown products co-exist?
Environmental strains are causing consumers to trade up to greener alternatives, and many brown products are losing market coverage to premium-priced green rivals. In order to tackle this threat, many companies currently offering only brown products are contemplating the launch of a green product to complement their product portfolio. This paper provides strategic insights into and tactical ramifications of expanding a brown product line with a new green product. Our analysis explicitly incorporates a segmented consumer market where individual consumers may value the same product differently, the economies of scale and the learning effects associated with new green products, and capacity constraints for the current production system. It is shown that a single pricing scheme for the new green product limits a firm’s ability to appropriate the value different customers will relinquish in a segmented market and/or to avoid cannibalization. A two-level pricing structure can diminish and even completely avoid the salience of cannibalization. However, when resources are scarce, a firm can never protect its products from the threat of cannibalization just by revising the pricing structure, which can spell the end of its brown product’s presence in the market or preclude the firm from launching the green product. At this point, the degree of cannibalization is higher for the brown product when the green product offers a sufficiently differentiated proposition to green segment consumers.
OR in marketing; Sustainable operations; Product portfolio; Capacity; Market segmentation;
http://www.sciencedirect.com/science/article/pii/S0377221714007784
Yenipazarli, A.
Vakharia, A.
oai:RePEc:eee:ejores:v:242:y:2015:i:1:p:1-92015-01-15RePEc:eee:ejores
RePEc:eee:ejores:v:242:y:2015:i:1:p:1-9
article
Multi-attribute online reverse auctions: Recent research trends
This paper provides an updated overview of the rapidly developing research field of multi-attribute online reverse auctions. Our focus is on academic research, although we briefly comment on the state-of-the-art in practice. The role that Operational Research plays in such auctions is highlighted. We review decision- and game-theoretic research, experimental studies, information disclosure policies, and research on integrating and comparing negotiations and auctions. We conclude by discussing implementation issues regarding online procurement auctions in practice.
Multiattribute auctions; Procurement; Bidding;
http://www.sciencedirect.com/science/article/pii/S0377221714007292
Pham, Long
Teich, Jeffrey
Wallenius, Hannele
Wallenius, Jyrki
oai:RePEc:eee:ejores:v:241:y:2015:i:3:p:739-7482015-01-15RePEc:eee:ejores
RePEc:eee:ejores:v:241:y:2015:i:3:p:739-748
article
Optimal delivery strategies considering carbon emissions, time-dependent demands and demand–supply interactions
The study addresses a distributor’s delivery strategy problem with consideration of carbon emissions, retailers’ time-dependent demands and demand–supply interactions. A mathematical programming model is formulated for solving for the optimal number and time-windows of service cycles. A case study is presented. The results show that the distributor would adopt a frequent delivery strategy if carbon taxes are insufficiently high. Providing the retailers with price discounts does not work out, since retailers would value the delays in receiving their ordered products more than the compensation for less frequent delivery offered by the distributor in response to carbon taxes. Models with demand–supply interactions can result in larger profit and market share than those without demand–supply interactions. This study sheds new light for distributors, retailers, and regulators on greening logistics transport.
Carbon tax; Time-dependent demand; Supply–demand interaction; Delivery strategy; Green logistics transport;
http://www.sciencedirect.com/science/article/pii/S0377221714007619
Li, Hui-Chieh
oai:RePEc:eee:ejores:v:242:y:2015:i:1:p:107-1202015-01-15RePEc:eee:ejores
RePEc:eee:ejores:v:242:y:2015:i:1:p:107-120
article
Part logistics in the automotive industry: Decision problems, literature review and research agenda
With the ongoing trend of mass-customization and an increasing product variety, just-in-time part logistics is increasingly becoming one of the greatest challenges in today’s automobile production. Thousands of parts and suppliers, a multitude of different equipment types, and hundreds of logistics workers need to be coordinated, so that the final assembly lines never run out of parts. This paper describes the elementary process steps of part logistics in the automotive industry, starting with the initial call order and ending with the return of empty part containers. In addition to a detailed process description, important decision problems are specified, existing literature is surveyed, and open research challenges are identified.
Automotive industry; Just-in-time; Part logistics; Survey;
http://www.sciencedirect.com/science/article/pii/S0377221714008042
Boysen, Nils
Emde, Simon
Hoeck, Michael
Kauderer, Markus
oai:RePEc:eee:ejores:v:242:y:2015:i:1:p:34-442015-01-15RePEc:eee:ejores
RePEc:eee:ejores:v:242:y:2015:i:1:p:34-44
article
Complexity results for flow shop problems with synchronous movement
In this paper we present complexity results for flow shop problems with synchronous movement, which are a variant of a non-preemptive permutation flow shop. Jobs have to be moved from one machine to the next by an unpaced synchronous transportation system, which implies that the processing is organized in synchronized cycles. This means that in each cycle the current jobs start at the same time on the corresponding machines and, after processing, have to wait until the last job is finished. Afterwards, all jobs are moved to the next machine simultaneously. Besides the general situation, we also investigate special cases involving machine dominance, which means that the processing times of all jobs on a dominating machine are at least as large as the processing times of all jobs on the other machines. In particular, we study flow shops with synchronous movement for a small number of dominating machines (one or two) and different objective functions.
Flow shop; Synchronous movement; Complexity; Dominating machines;
http://www.sciencedirect.com/science/article/pii/S0377221714007929
Waldherr, Stefan
Knust, Sigrid
oai:RePEc:eee:ejores:v:241:y:2015:i:3:p:708-7182015-01-15RePEc:eee:ejores
RePEc:eee:ejores:v:241:y:2015:i:3:p:708-718
article
Investigating work-related ill health effects in optimizing the performance of manufacturing systems
The working environment affects human health and performance. Human Factors (HF) scholars aim to elaborate this effect. However, HF studies mostly focus on employee occupational health and safety elements and their consequences for employee health conditions. They do not take into account Work-related Ill Health (WIH) risk factor effects at the system level. In contrast, operations research studies usually assume that operators involved in a system have identical performance and rarely consider WIH risk factor effects in optimizing system performance. This paper proposes a 2-state Markov chain model to quantify WIH risk factor effects and thereby estimate their economic impacts in optimizing a serial assembly line’s performance. Results of this research demonstrate an increase of between 0.52 percent and 8 percent in the total cost of the system as WIH risk levels change. This paper opens a new window to understanding the economic consequences of WIH effects, and to enhancing system performance by investigating working conditions.
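The 2-state Markov chain idea above can be sketched as follows. This is a minimal illustration only: the state names (healthy/ill), transition probabilities, and the cost weighting are assumptions for exposition, not the paper's calibration.

```python
def stationary_two_state(p_fall_ill, p_recover):
    """Stationary distribution of a 2-state (healthy/ill) Markov chain.

    p_fall_ill: P(healthy -> ill) per period; p_recover: P(ill -> healthy).
    Solving pi = pi * P for the transition matrix P gives
    pi_ill = p_fall_ill / (p_fall_ill + p_recover).
    """
    pi_ill = p_fall_ill / (p_fall_ill + p_recover)
    return 1.0 - pi_ill, pi_ill


def expected_operator_cost(base_cost, ill_cost_factor, p_fall_ill, p_recover):
    # Long-run per-period cost when working while ill inflates the
    # operator's cost by a given factor (illustrative weighting).
    pi_healthy, pi_ill = stationary_two_state(p_fall_ill, p_recover)
    return base_cost * (pi_healthy + ill_cost_factor * pi_ill)
```

For example, with a 0.1 chance of falling ill and a 0.4 chance of recovering per period, the operator is in the ill state 20 percent of the time in the long run, and the per-period cost scales accordingly.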
Operations research; Optimization; Serial assembly line; Workplace risk factors; Work-related health problems;
http://www.sciencedirect.com/science/article/pii/S0377221714007711
Sobhani, A.
Wahab, M.I.M.
Neumann, W.P.
oai:RePEc:eee:ejores:v:241:y:2015:i:3:p:674-6852015-01-15RePEc:eee:ejores
RePEc:eee:ejores:v:241:y:2015:i:3:p:674-685
article
Dynamic reduction heuristics for the rectangle packing area minimization problem
The rectangle packing area minimization problem is a key sub-problem of floorplanning in VLSI design. This problem places a set of axis-aligned two-dimensional rectangular items of given sizes onto a rectangular plane such that no two items overlap and the area of the enveloping rectangle is minimized. This paper presents a dynamic reduction algorithm that transforms an instance of the original problem into a series of instances of the rectangle packing problem by dynamically determining the dimensions of the enveloping rectangle. We define an injury degree to evaluate the possible negative impact of candidate placements, and we propose a least injury first approach for solving the rectangle packing problem. Next, we incorporate a compacting approach to compact the resulting layout by alternately moving the items left and down toward a bottom-left corner such that we may obtain a smaller enveloping rectangle. We also show the feasibility, compactness, non-inferiority, and halting properties of the compacting approach. Comprehensive experiments were conducted on 11 MCNC and GSRC benchmarks and 28 instances reported in the literature. The experimental results show the high efficiency and effectiveness of the proposed dynamic reduction algorithm, especially on large-scale instances with hundreds of items.
Packing; Floorplanning; Layout optimization; Area minimization; Open dimension problem;
http://www.sciencedirect.com/science/article/pii/S0377221714007814
He, Kun
Ji, Pengli
Li, Chumin
oai:RePEc:eee:ejores:v:242:y:2015:i:1:p:51-622015-01-15RePEc:eee:ejores
RePEc:eee:ejores:v:242:y:2015:i:1:p:51-62
article
Incremental network design with maximum flows
We study an incremental network design problem, where in each time period of the planning horizon an arc can be added to the network and a maximum flow problem is solved, and where the objective is to maximize the cumulative flow over the entire planning horizon. After presenting two mixed integer programming (MIP) formulations for this NP-complete problem, we describe several heuristics and prove performance bounds for some special cases. In a series of computational experiments, we compare the performance of the MIP formulations as well as the heuristics.
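One natural greedy baseline for the incremental design problem above is to add, in each period, the candidate arc that most increases the current max flow, and accumulate the per-period flows. This sketch is illustrative (it is not necessarily one of the paper's heuristics); Edmonds–Karp is used for the max-flow subproblem, and the graph data are hypothetical.

```python
from collections import defaultdict, deque


def max_flow(cap, s, t):
    """Edmonds-Karp max flow; cap is a dict {(u, v): capacity}."""
    residual = defaultdict(int)
    adj = defaultdict(set)
    for (u, v), c in cap.items():
        residual[(u, v)] += c
        adj[u].add(v)
        adj[v].add(u)  # reverse arcs for residual updates
    flow = 0
    while True:
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:     # BFS for a shortest augmenting path
            u = q.popleft()
            for v in adj[u]:
                if v not in parent and residual[(u, v)] > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        path, v = [], t                  # recover the path and its bottleneck
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        b = min(residual[e] for e in path)
        for u, v in path:
            residual[(u, v)] -= b
            residual[(v, u)] += b
        flow += b


def greedy_incremental(existing, candidates, s, t, periods):
    """Each period, add the candidate arc maximizing the current max flow;
    return the cumulative flow over the planning horizon."""
    cap = dict(existing)
    pool = dict(candidates)
    cumulative = 0
    for _ in range(periods):
        if pool:
            best = max(pool, key=lambda a: max_flow({**cap, a: pool[a]}, s, t))
            cap[best] = pool.pop(best)
        cumulative += max_flow(cap, s, t)
    return cumulative
```

A myopic rule like this can be suboptimal (an arc useless now may enable a larger flow later), which is exactly the gap the MIP formulations and performance bounds in the paper address.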
Network design; Approximation algorithms; Scheduling;
http://www.sciencedirect.com/science/article/pii/S0377221714008078
Kalinowski, Thomas
Matsypura, Dmytro
Savelsbergh, Martin W.P.
oai:RePEc:eee:ejores:v:242:y:2015:i:1:p:182-1872015-01-15RePEc:eee:ejores
RePEc:eee:ejores:v:242:y:2015:i:1:p:182-187
article
A new approach to estimating value–income ratios with income growth and time-varying yields
Value–income ratios, such as dividend yields in finance and price–rent ratios in housing and real estate markets, impact society in a variety of ways. This paper proposes a new type of the present value model that features income growth with time-varying yields. It offers a new risk perspective, which may alleviate timid investor behavior in market downturns while cooling down the market in seemingly booming times. A binding relationship, the value–income ratio adjusted by yields of the asset and growth in income, is revealed. This has notable implications for empirical research, which examines value–income ratios time and again. Incorrectly perceived market behavior distorts the formation of investor behavior, and vice versa, which has serious consequences to the functioning of the market and beyond.
Value–income ratio; Time-varying yield; Income growth; Present value model;
http://www.sciencedirect.com/science/article/pii/S0377221714007851
Wang, Peijie
Brand, Steven
oai:RePEc:eee:ejores:v:241:y:2015:i:3:p:796-8052015-01-15RePEc:eee:ejores
RePEc:eee:ejores:v:241:y:2015:i:3:p:796-805
article
A cost Malmquist productivity index capturing group performance
This paper develops an index for comparing the productivity of groups of operating units in cost terms when input prices are available. In that sense it represents an extension of a similar index available in the literature for comparing groups of units in terms of technical productivity in the absence of input prices. The index is decomposed to reveal the origins of differences in performance of the groups of units in terms of both technical and cost productivity. The index and its decomposition are of value in contexts where there is a need to compare units which perform the same function but can be grouped by virtue of operating in different contexts, as might for example arise in comparisons of water or gas transmission companies operating in different countries.
Data envelopment analysis; Malmquist index; Productivity; Cost efficiency;
http://www.sciencedirect.com/science/article/pii/S037722171400719X
Thanassoulis, Emmanuel
Shiraz, Rashed Khanjani
Maniadakis, Nikolaos
oai:RePEc:eee:ejores:v:241:y:2015:i:3:p:783-7952015-01-15RePEc:eee:ejores
RePEc:eee:ejores:v:241:y:2015:i:3:p:783-795
article
Dividend policy, managerial ownership and debt financing: A non-parametric perspective
This paper examines the relation between dividend policy, managerial ownership and debt-financing for a large sample of firms listed on NYSE, AMEX and NASDAQ. In addition to standard parametric estimation methods, we use a semi-parametric approach, which helps capture more effectively non-linearities in the data. In line with the alignment effect of managerial ownership, our results support a negative relationship between managerial ownership and dividends when managerial ownership is at relatively low levels. However, this negative relationship turns into a positive one at very high levels of managerial ownership. We also find that the nature of the relationship between managerial ownership and dividends may be more complex than it has been previously thought, and it also differs significantly across firms with different levels of debt/financial constraints. The results are consistent with the view that agency theory provides useful insights but cannot fully explain how firms determine their dividend policy.
Dividends; Managerial ownership; Semi-parametric approach; Non-linearity; Capital structure;
http://www.sciencedirect.com/science/article/pii/S0377221714006870
Florackis, Chris
Kanas, Angelos
Kostakis, Alexandros
oai:RePEc:eee:ejores:v:241:y:2015:i:3:p:653-6612015-01-15RePEc:eee:ejores
RePEc:eee:ejores:v:241:y:2015:i:3:p:653-661
article
An improved heuristic for parallel machine scheduling with rejection
In this paper we study a classical parallel machine scheduling model with m machines and n jobs, where each job is either accepted and then processed by one of the machines, or rejected, in which case a rejection penalty is paid. The objective is to minimize the completion time of the last accepted job plus the total penalty of all rejected jobs. The scheduling problem is known to be NP-hard in the strong sense. We find some new optimal properties and develop an O(n log n + n/ε) heuristic to solve the problem with a worst-case bound of 1.5 + ε, where ε > 0 can be any small given constant. This improves upon the worst-case bound of 2 − 1/m of the heuristic presented by Bartal et al. (Bartal, Y., Leonardi, S., Marchetti-Spaccamela, A., Sgall, J., & Stougie, L. (2000). Multiprocessor scheduling with rejection. SIAM Journal on Discrete Mathematics, 13, 64–78) in the scheduling literature.
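The objective above (makespan of accepted jobs plus total rejection penalty) can be illustrated with a minimal greedy baseline. This is not the paper's heuristic: the accept/reject threshold used below is an illustrative assumption, chosen only to show the structure of the model.

```python
import heapq


def schedule_with_rejection(jobs, m):
    """Greedy baseline for parallel-machine scheduling with rejection.

    jobs: list of (processing_time, rejection_penalty) pairs.
    m: number of identical machines.
    Illustrative rule: reject a job when its penalty is below its
    per-machine share of load (p / m); otherwise list-schedule it on
    the currently least-loaded machine, in LPT order.
    Returns (makespan + total rejection penalty, accepted, rejected).
    """
    loads = [0.0] * m            # min-heap of machine loads
    heapq.heapify(loads)
    accepted, rejected = [], []
    for p, w in sorted(jobs, reverse=True):   # longest processing time first
        if w < p / m:
            rejected.append((p, w))
        else:
            least = heapq.heappop(loads)
            heapq.heappush(loads, least + p)
            accepted.append((p, w))
    makespan = max(loads)
    return makespan + sum(w for _, w in rejected), accepted, rejected
```

A simple rule like this carries no approximation guarantee; the point of the paper is precisely to replace such ad hoc accept/reject decisions with a heuristic whose worst-case bound of 1.5 + ε is provable.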
Scheduling; Rejection; Heuristic; Worst-case bound;
http://www.sciencedirect.com/science/article/pii/S037722171400767X
Ou, Jinwen
Zhong, Xueling
Wang, Guoqing
oai:RePEc:eee:ejores:v:242:y:2015:i:1:p:134-1482015-01-15RePEc:eee:ejores
RePEc:eee:ejores:v:242:y:2015:i:1:p:134-148
article
On finite-time ruin probabilities in a generalized dual risk model with dependence
In this paper, we study the finite-time ruin probability in a reasonably generalized dual risk model, where we assume any non-negative non-decreasing cumulative operational cost function and arbitrary capital gains arrival process. Establishing an enlightening link between this dual risk model and its corresponding insurance risk model, explicit expressions for the finite-time survival probability in the dual risk model are obtained under various general assumptions for the distribution of the capital gains. In order to make the model more realistic and general, different dependence structures among capital gains and inter-arrival times and between both are also introduced and corresponding ruin probability expressions are also given. The concept of alarm time, as introduced in Das and Kratz (2012), is applied to the dual risk model within the context of risk capital allocation. Extensive numerical illustrations are provided.
Dual (dependent) risk model; Finite-time ruin probability; Capital allocation; Alarm time; (Exponential) classical Appell polynomials;
http://www.sciencedirect.com/science/article/pii/S037722171400811X
Dimitrova, Dimitrina S.
Kaishev, Vladimir K.
Zhao, Shouqi
oai:RePEc:eee:ejores:v:241:y:2015:i:3:p:719-7262015-01-15RePEc:eee:ejores
RePEc:eee:ejores:v:241:y:2015:i:3:p:719-726
article
Cooperation on capacitated inventory situations with fixed holding costs
In this paper we analyze a situation in which several firms deal with inventory problems concerning the same type of product. We consider that each firm uses its limited capacity warehouse for storing purposes and that it faces an economic order quantity model where storage costs are irrelevant (and assumed to be zero) and shortages are allowed. In this setting, we show that firms can save costs by placing joint orders and obtain an optimal order policy for the firms. Besides, we identify an associated class of costs games which we show to be concave. Finally, we introduce and study a rule to share the costs among the firms which provides core allocations and can be easily computed.
Inventory systems; Cooperation; Core allocations;
http://www.sciencedirect.com/science/article/pii/S0377221714007371
Fiestras-Janeiro, M.G.
García-Jurado, I.
Meca, A.
Mosquera, M.A.
oai:RePEc:eee:ejores:v:242:y:2015:i:1:p:261-2732015-01-15RePEc:eee:ejores
RePEc:eee:ejores:v:242:y:2015:i:1:p:261-273
article
Integrated business continuity and disaster recovery planning: Towards organizational resilience
Businesses are increasingly subject to disruptions. It is almost impossible to predict their nature, time and extent. Therefore, organizations need a proactive approach equipped with a decision support framework to protect themselves against the outcomes of disruptive events. In this paper, a novel framework is proposed for integrated business continuity and disaster recovery planning, for the efficient and effective resumption and recovery of critical operations after a disruption. The proposed model addresses decision problems at the strategic, tactical and operational levels. At the strategic level, the context of the organization is first explored and the main features of organizational resilience are identified. Then, a new multi-objective mixed integer linear programming model is formulated to allocate internal and external resources to both resumption and recovery plans simultaneously. The model aims to control the loss of resilience by maximizing the recovery point and minimizing the recovery time objectives. Finally, at the operational level, hypothetical disruptive events are examined to evaluate the applicability of the plans. We also develop a novel interactive augmented ε-constraint method to find the final preferred compromise solution. The proposed model and solution method are finally validated through a real case study.
Risk management; Organizational resilience; Disaster operations management; Business continuity planning; Disaster recovery planning; Multi-objective mixed integer programming;
http://www.sciencedirect.com/science/article/pii/S0377221714007942
Sahebjamnia, N.
Torabi, S.A.
Mansouri, S.A.
oai:RePEc:eee:ejores:v:229:y:2013:i:1:p:239-2512013-06-13RePEc:eee:ejores
RePEc:eee:ejores:v:229:y:2013:i:1:p:239-251
article
Design and dimensioning of hydrogen transmission pipeline networks
This work considers the problem of the optimal design of a hydrogen transmission network. This design problem includes determining the topology of the network and dimensioning the pipelines. We define a local search method that simultaneously looks for the least-cost topology of the network and for the optimal diameter of each pipe. In recent years, these two problems have generally been solved separately. The approach is applied to the development of future hydrogen pipeline networks in France at the local, regional and national levels. We compare the proposed approach with an alternative based on a Tabu search heuristic.
Hydrogen; Energy economics; Optimal design; Optimal dimensioning;
http://www.sciencedirect.com/science/article/pii/S0377221713001690
André, Jean
Auray, Stéphane
Brac, Jean
De Wolf, Daniel
Maisonnier, Guy
Ould-Sidi, Mohamed-Mahmoud
Simonnet, Antoine
oai:RePEc:eee:ejores:v:229:y:2013:i:1:p:114-1232013-06-13RePEc:eee:ejores
RePEc:eee:ejores:v:229:y:2013:i:1:p:114-123
article
Run length not required: Optimal-mse dynamic batch means estimators for steady-state simulations
This paper addresses the estimation of the variance of the sample mean from steady-state simulations without requiring the knowledge of simulation run length a priori. Dynamic batch means is a new and useful approach to implementing the traditional batch means in limited memory without the knowledge of the simulation run length. However, existing dynamic batch means estimators do not allow one to control the value of batch size, which is the performance parameter of the batch means estimators. In this work, an algorithm is proposed based on two dynamic batch means estimators to dynamically estimate the optimal batch size as the simulation runs. The simulation results show that the proposed algorithm requires reasonable computation time and possesses good statistical properties such as small mean-squared-error (mse).
Variance of the sample mean; Simulation; Batch means estimator; Optimal batch size; Mean-squared-error;
http://www.sciencedirect.com/science/article/pii/S0377221712007655
Song, Wheyming Tina
Chih, Mingchang
oai:RePEc:eee:ejores:v:229:y:2013:i:2:p:453-4612013-06-13RePEc:eee:ejores
RePEc:eee:ejores:v:229:y:2013:i:2:p:453-461
article
A dynamic programming approach to constrained portfolios
This paper studies constrained portfolio problems that may involve constraints on the probability or the expected size of a shortfall of wealth or consumption. Our first contribution is that we solve the problems by dynamic programming, which is in contrast to the existing literature that applies the martingale method. More precisely, we construct the non-separable value function by formalizing the optimal constrained terminal wealth to be a (conjectured) contingent claim on the optimal non-constrained terminal wealth. This is relevant by itself, but also opens up the opportunity to derive new solutions to constrained problems. As a second contribution, we thus derive new results for non-strict constraints on the shortfall of intermediate wealth and/or consumption.
Finance; Markov processes; Consumption–investment problems; Utility maximization; Bellman equations;
http://www.sciencedirect.com/science/article/pii/S0377221713001720
Kraft, Holger
Steffensen, Mogens
oai:RePEc:eee:ejores:v:229:y:2013:i:2:p:364-3742013-06-13RePEc:eee:ejores
RePEc:eee:ejores:v:229:y:2013:i:2:p:364-374
article
A simulation optimization approach for a two-echelon inventory system with service level constraints
In this paper, we present a simulation optimization algorithm for solving the two-echelon constrained inventory problem. The goal is to determine the optimal setting of stocking levels to minimize the total inventory investment costs while satisfying the expected response time targets for each field depot. The proposed algorithm is more adaptive than ordinary optimization algorithms, and can be applied to any multi-item multi-echelon inventory system whose cost structure and service level function resemble those assumed here. Empirical studies are performed to compare the efficiency of the proposed algorithms with that of other existing simulation algorithms.
Inventory; Multi-echelon system; Service-level constraints; Feasibility check; Stochastic optimization; Simulation;
http://www.sciencedirect.com/science/article/pii/S037722171300218X
Tsai, Shing Chih
Zheng, Ya-Xin
oai:RePEc:eee:ejores:v:229:y:2013:i:1:p:165-1782013-06-13RePEc:eee:ejores
RePEc:eee:ejores:v:229:y:2013:i:1:p:165-178
article
Strategic bidding of offer curves: An agent-based approach to exploring supply curve equilibria
We model a market in which suppliers bid step-function offer curves using agent-based modeling. Our model is an abstraction of electricity markets where step-function offer curves are given to an independent system operator that manages the auctions in electricity markets. Positing an elementary and computationally accessible learning model, Probe and Adjust, we present analytic results that characterize both the behavior of the learning model and the properties of step-function equilibria. Thus, we have developed a framework for validating agent-based models prior to using them in situations that are too complicated to be analyzed using traditional economic theory. In addition, we demonstrate computationally that, by using alternative policies, even simple agents can achieve monopoly rewards for themselves by pursuing more industry-oriented strategies. This raises the issue of how participants in oligopolistic markets actually behave.
Economics; Electricity; Energy; Oligopoly; Agent-based model; Equilibrium;
http://www.sciencedirect.com/science/article/pii/S0377221713001240
Kimbrough, Steven O.
Murphy, Frederic H.
oai:RePEc:eee:ejores:v:229:y:2013:i:3:p:695-7062013-06-13RePEc:eee:ejores
RePEc:eee:ejores:v:229:y:2013:i:3:p:695-706
article
Multi-criteria robust design of a JIT-based cross-docking distribution center for an auto parts supply chain
We present a solution framework based on discrete-event simulation and enhanced robust design technique to address a multi-response optimization problem inherent in logistics management. The objective is to design a robust configuration for a cross-docking distribution center so that the system is insensitive to the disturbances of supply uncertainty, and provides steady parts supply to downstream assembly plants. In the proposed approach, we first construct a simulation model using factorial design and central composite design (CCD), and then identify the models that best describe the relationship between the simulation responses and system factors. We employ the response surface methodology (RSM) to identify factor levels that would maximize system potential.
Cross docking; Supply chain simulation; Robust optimization; Response surface methodology; Latin hypercube sampling; Bootstrapping;
http://www.sciencedirect.com/science/article/pii/S037722171300221X
Shi, Wen
Liu, Zhixue
Shang, Jennifer
Cui, Yujia
oai:RePEc:eee:ejores:v:229:y:2013:i:3:p:645-6532013-06-13RePEc:eee:ejores
RePEc:eee:ejores:v:229:y:2013:i:3:p:645-653
article
Cone contraction and reference point methods for multi-criteria mixed integer optimization
Interactive approaches employing cone contraction for multi-criteria mixed integer optimization are introduced. In each iteration, the decision maker (DM) is asked to give a reference point (new aspiration levels). The subsequent Pareto optimal point is the reference point projected on the set of admissible objective vectors using a suitable scalarizing function. Thereby, the procedures solve a sequence of optimization problems with integer variables. In such a process, the DM provides additional preference information via pair-wise comparisons of Pareto optimal points identified. Using such preference information and assuming a quasiconcave and non-decreasing value function of the DM we restrict the set of admissible objective vectors by excluding subsets, which cannot improve over the solutions already found. The procedures terminate if all Pareto optimal solutions have been either generated or excluded. In this case, the best Pareto point found is an optimal solution. Such convergence is expected in the special case of pure integer optimization; indeed, numerical simulation tests with multi-criteria facility location models and knapsack problems indicate reasonably fast convergence, in particular, under a linear value function. We also propose a procedure to test whether or not a solution is a supported Pareto point (optimal under some linear value function).
Multi-criteria decision making; Multi-criteria optimization; Cone contraction; Reference point method; Integer programming;
http://www.sciencedirect.com/science/article/pii/S0377221713002142
Kallio, Markku
Halme, Merja
oai:RePEc:eee:ejores:v:229:y:2013:i:1:p:223-2292013-06-13RePEc:eee:ejores
RePEc:eee:ejores:v:229:y:2013:i:1:p:223-229
article
A continuous-time dynamic pricing model knowing the competitor’s pricing strategy
In this paper we consider a dynamic pricing model for a firm knowing that a competitor adopts a static pricing strategy. We establish a continuous time model to analyze the effect of dynamic pricing on the improvement in expected revenue in the duopoly. We assume that customers arrive to purchase tickets in accordance with a geometric Brownian motion. We derive an explicit closed-form expression for an optimal pricing policy to maximize the expected revenue. It is shown that when the competitor adopts a static pricing policy, dynamic pricing is not always effective in terms of maximizing expected revenue compared to a fixed pricing strategy. Moreover, we show that the size of the reduction in the expected revenue depends on the competitor’s pricing strategy. Numerical results are presented to illustrate the dynamic pricing policy.
Revenue management; Dynamic pricing; Transport competition;
http://www.sciencedirect.com/science/article/pii/S0377221713001550
Sato, Kimitoshi
Sawaki, Katsushige
oai:RePEc:eee:ejores:v:230:y:2013:i:1:p:170-1802013-06-13RePEc:eee:ejores
RePEc:eee:ejores:v:230:y:2013:i:1:p:170-180
article
Modeling special-day effects for forecasting intraday electricity demand
We propose and apply a novel approach for modeling special-day effects to predict electricity demand in Korea. Notably, we model special-day effects on an hourly rather than a daily basis. Hourly specified predictor variables are implemented in the regression model with a seasonal autoregressive moving average (SARMA) type error structure in order to efficiently reflect the special-day effects. The interaction terms between the hour-of-day effects and the hourly based special-day effects are also included to capture the unique intraday patterns of special days more accurately. The multiplicative SARMA mechanism is employed in order to identify the double seasonal cycles, namely, the intraday effect and the intraweek effect. The forecast results of the suggested model are evaluated by comparing them with those of various benchmark models for the following year. The empirical results indicate that the suggested model outperforms the benchmark models for both special- and non-special day predictions.
Forecasting; Double SARMA model; Intraday electricity demand; Special-day effect;
http://www.sciencedirect.com/science/article/pii/S0377221713002725
Kim, Myung Suk
oai:RePEc:eee:ejores:v:229:y:2013:i:3:p:673-6822013-06-13RePEc:eee:ejores
RePEc:eee:ejores:v:229:y:2013:i:3:p:673-682
article
An adaptive ejection pool with toggle-rule diversification approach for the capacitated team orienteering problem
In the capacitated team orienteering problem (CTOP), we are given a set of homogeneous vehicles and a set of customers each with a service demand value and a profit value. A vehicle can get the profit of a customer by satisfying its demand, but the total demand of all customers in its route cannot exceed the vehicle capacity and the length of the route must be within a specified maximum. The problem is to design a set of routes that maximizes the total profit collected by the vehicles. In this article, we propose a new heuristic algorithm for the CTOP using the ejection pool framework with an adaptive strategy and a diversification mechanism based on toggling between two priority rules. Experimental results show that our algorithm can match or improve all the best known results on the standard CTOP benchmark instances proposed by Archetti et al. (2008).
Routing; Capacitated team orienteering problem; Ejection pool; Local search;
http://www.sciencedirect.com/science/article/pii/S037722171200968X
Luo, Zhixing
Cheang, Brenda
Lim, Andrew
Zhu, Wenbin
oai:RePEc:eee:ejores:v:229:y:2013:i:3:p:626-6362013-06-13RePEc:eee:ejores
RePEc:eee:ejores:v:229:y:2013:i:3:p:626-636
article
Operation and preventive maintenance scheduling for containerships: Mathematical model and solution algorithm
This paper considers the problem of determining operation and maintenance schedules for a containership equipped with various subsystems during its sailing according to a pre-determined navigation schedule. The operation schedule, which specifies the working time of each subsystem, determines the due-date of each maintenance activity, and the maintenance schedule specifies the actual start time of each maintenance activity. The main constraints are subsystem requirements, workforce availability, working time limitation, and inter-maintenance time. To represent the problem mathematically, a mixed integer programming model is developed. Then, due to the complexity of the problem, we suggest a heuristic algorithm that minimizes the sum of earliness and tardiness between the due-date and the actual start time for each maintenance activity. Computational experiments were done on various test instances and the results are reported. In particular, a case study was done on a real instance and a significant improvement is reported over the experience-based conventional method.
Containerships; Operation and maintenance scheduling; Integer programming; Heuristic;
http://www.sciencedirect.com/science/article/pii/S0377221713003032
Go, Hun
Kim, Ji-Su
Lee, Dong-Ho
oai:RePEc:eee:ejores:v:229:y:2013:i:3:p:683-6942013-06-13RePEc:eee:ejores
RePEc:eee:ejores:v:229:y:2013:i:3:p:683-694
article
Integrated projects planning in IS departments: A multi-period multi-project selection and assignment approach with a computerized implementation
This paper highlights the subject of integrated projects planning (IPP) in contemporary IS departments, and presents a multi-period, multi-project selection and assignment approach (MPPA) to assist the departments in handling continuous project-based IS requests. The MPPA features a model to optimize the selection and assignment of IS projects. In the scope of multi-project, multi-period planning, the model innovatively considers the losses due to (1) the accumulated postponement of a previously unselected IS request and (2) the expected delay of ongoing projects when inserting a new project request. The MPPA also features an event-based decisional process for cumulative selection and assignment on a multi-period basis. Due to the complex and contextual nature of the data, a computerized system is implemented for aiding the execution of the model and the process. The paper reports on an industrial case for a demonstration of the proposed work. Finally, the paper compares the MPPA with related work to summarize the value and role it may play in the IPP context.
Project management; Information system (IS) development projects; Projects selection and assignment; Multi-project multi-period; Mixed integer programming;
http://www.sciencedirect.com/science/article/pii/S0377221713002063
Chen, Chung-Yang
Liu, Heng-An
Song, Je-Yi
oai:RePEc:eee:ejores:v:230:y:2013:i:1:p:143-1562013-06-13RePEc:eee:ejores
RePEc:eee:ejores:v:230:y:2013:i:1:p:143-156
article
Dynamic sequencing and cut consolidation for the parallel hybrid-cut nested L-shaped method
The nested L-shaped method is used to solve two- and multi-stage linear stochastic programs with recourse, which can have integer variables on the first stage. In this paper we present and evaluate a cut consolidation technique and a dynamic sequencing protocol to accelerate the solution process. Furthermore, we present a parallelized implementation of the algorithm, which is developed within the COIN-OR framework. We show, on a test set of 51 two-stage and 42 multi-stage problems, that both of the developed techniques lead to significant speed-ups in computation time.
Stochastic programming; Nested L-shaped method; Sequencing protocols; Cut consolidation;
http://www.sciencedirect.com/science/article/pii/S0377221713003159
Wolf, Christian
Koberstein, Achim
oai:RePEc:eee:ejores:v:229:y:2013:i:3:p:637-6442013-06-13RePEc:eee:ejores
RePEc:eee:ejores:v:229:y:2013:i:3:p:637-644
article
On the equivalence of quadratic optimization problems commonly used in portfolio theory
In the paper, we consider three quadratic optimization problems which are frequently applied in portfolio theory, i.e., the Markowitz mean–variance problem as well as the problems based on the mean–variance utility function and the quadratic utility. Conditions are derived under which the solutions of these three optimization procedures coincide and lie on the efficient frontier, the set of mean–variance optimal portfolios. It is shown that the solutions of the Markowitz optimization problem and the quadratic utility problem are not always mean–variance efficient.
Investment analysis; Mean–variance analysis; Parameter uncertainty; Interval estimation; Test theory;
http://www.sciencedirect.com/science/article/pii/S0377221713002105
Bodnar, Taras
Parolya, Nestor
Schmid, Wolfgang
oai:RePEc:eee:ejores:v:229:y:2013:i:2:p:391-4032013-06-13RePEc:eee:ejores
RePEc:eee:ejores:v:229:y:2013:i:2:p:391-403
article
Separable solutions for Markov processes in random environments
In this paper we address the problem of efficiently deriving the steady-state distribution for a continuous time Markov chain (CTMC) S evolving in a random environment E. The process underlying E is also a CTMC. S is then called a Markov modulated process. Markov modulated processes have been widely studied in the literature since they are applicable when an environment influences the behaviour of a system. For instance, this is the case of a wireless link, whose quality may depend on the state of some random factors such as the intensity of the noise in the environment. In this paper we study the class of Markov modulated processes which exhibit a separable, product-form stationary distribution. We show that several models that have been proposed in the literature can be studied by applying the Extended Reversed Compound Agent Theorem (ERCAT), and new product-forms are also derived. We also address the question of the necessity of ERCAT for product-forms and show a meaningful example of a product-form not derivable via ERCAT.
Stochastic processes; Product-form; Markov modulated models;
http://www.sciencedirect.com/science/article/pii/S0377221713002191
Balsamo, Simonetta
Marin, Andrea
oai:RePEc:eee:ejores:v:229:y:2013:i:1:p:21-282013-06-13RePEc:eee:ejores
RePEc:eee:ejores:v:229:y:2013:i:1:p:21-28
article
Detecting copositivity of a symmetric matrix by an adaptive ellipsoid-based approximation scheme
It is co-NP-complete to decide whether a given matrix is copositive or not. In this paper, this decision problem is transformed into a quadratic programming problem, which can be approximated by solving a sequence of linear conic programming problems defined on the dual cone of the cone of nonnegative quadratic functions over the union of a collection of ellipsoids. Using linear matrix inequalities (LMI) representations, each corresponding problem in the sequence can be solved via semidefinite programming. In order to speed up the convergence of the approximation sequence and to relieve the computational effort of solving linear conic programming problems, an adaptive approximation scheme is adopted to refine the union of ellipsoids. The lower and upper bounds of the transformed quadratic programming problem are used to determine the copositivity of the given matrix.
Conic programming; Copositive; Cone of nonnegative quadratic functions; Adaptive approximation scheme;
http://www.sciencedirect.com/science/article/pii/S0377221713001641
Deng, Zhibin
Fang, Shu-Cherng
Jin, Qingwei
Xing, Wenxun
oai:RePEc:eee:ejores:v:230:y:2013:i:1:p:53-622013-06-13RePEc:eee:ejores
RePEc:eee:ejores:v:230:y:2013:i:1:p:53-62
article
A periodic-review inventory control policy for a two-level supply chain with multiple retailers and stochastic demand
We consider a time-based inventory control policy for a two-level supply chain with one warehouse and multiple retailers in this paper. Let the warehouse order in a fixed base replenishment interval. The retailers are required to order in intervals that are integer-ratio multiples of the base replenishment interval at the warehouse. The warehouse and the retailers each adopt an order-up-to policy, i.e. order the needed stock at a review point to raise the inventory position to a fixed order-up-to level. It is assumed that the retailers face independent Poisson demand processes and no transshipments between them are allowed. The contribution of the study is threefold. First, we assume that when facing a shortage the warehouse allocates the remaining stock to the retailers optimally to minimize system cost in the last minute before delivery and provide an approach to evaluate the exact system cost. Second, we characterize the structural properties and develop an exact optimal solution for the inventory control system. Finally, we demonstrate that the last minute optimal warehouse stock allocation rule we adopt dominates the virtual allocation rule in which warehouse stock is allocated to meet retailer demand on a first-come first-served basis with significant cost benefits. Moreover, the proposed time-based inventory control policy can perform equally well or better than the commonly used stock-based batch-ordering policy for distribution systems with multiple retailers.
Warehouse system; Time-based inventory control policy; Coordinated replenishment; Risk/stock pooling;
http://www.sciencedirect.com/science/article/pii/S0377221713003020
Wang, Qinan
oai:RePEc:eee:ejores:v:229:y:2013:i:3:p:654-6622013-06-13RePEc:eee:ejores
RePEc:eee:ejores:v:229:y:2013:i:3:p:654-662
article
Numerical scales generated individually for analytic hierarchy process
Because individual interpretations of the analytic hierarchy process (AHP) linguistic scale vary for each user, this study proposes a novel framework that AHP decision makers can use to generate numerical scales individually, based on the 2-tuple linguistic modeling of AHP scale problems. By using the concept of transitive calibration, individual characteristics in understanding the AHP linguistic scale are first defined. An algorithm is then proposed for detecting the individual characteristics from the linguistic pairwise comparison data that is associated with each of the AHP individual decision makers. Finally, a nonlinear programming model is proposed to generate individual numerical scales that optimally match the obtained individual characteristics. Two well-known numerical examples are re-examined using the proposed framework to demonstrate its validity.
Decision analysis; AHP; 2-Tuple linguistic representation model; Numerical scale;
http://www.sciencedirect.com/science/article/pii/S0377221713002476
Dong, Yucheng
Hong, Wei-Chiang
Xu, Yinfeng
Yu, Shui
oai:RePEc:eee:ejores:v:230:y:2013:i:1:p:15-252013-06-13RePEc:eee:ejores
RePEc:eee:ejores:v:230:y:2013:i:1:p:15-25
article
Don’t forget your supplier when remanufacturing
A popular assumption in the current literature on remanufacturing is that the whole new product is produced by an integrated manufacturer, which is inconsistent with most industries. In this paper, we model a decentralised closed-loop supply chain consisting of a key component supplier and a non-integrated manufacturer, and demonstrate that the interaction between these players significantly impacts the economic and environmental implications of remanufacturing. In our model, the non-integrated manufacturer can purchase new components from the supplier to produce new products, and remanufacture used components to produce remanufactured products. Thus, the non-integrated manufacturer is not only a buyer but also a rival to the supplier. In a steady state period, we analyse the performances of an integrated manufacturer and the decentralised supply chain. We find that, although the integrated manufacturer always benefits from remanufacturing, the remanufacturing opportunity may constitute a lose–lose situation for the supplier and the non-integrated manufacturer, making their profits lower than in an identical supply chain without remanufacturing. In addition, the non-integrated manufacturer may be worse off with a lower remanufacturing cost or a larger return rate of used products due to the interaction with the supplier. We further demonstrate that government-subsidised remanufacturing in the non-integrated (integrated) manufacturer is detrimental (beneficial) to the environment.
Supply chain management; Closed-loop supply chain; Remanufacturing; Environmental impact; Government subsidy;
http://www.sciencedirect.com/science/article/pii/S0377221713002671
Xiong, Yu
Zhou, Yu
Li, Gendao
Chan, Hing-Kai
Xiong, Zhongkai
oai:RePEc:eee:ejores:v:229:y:2013:i:2:p:552-5592013-06-13RePEc:eee:ejores
RePEc:eee:ejores:v:229:y:2013:i:2:p:552-559
article
A competitive magnet-based genetic algorithm for solving the resource-constrained project scheduling problem
This paper presents a genetic algorithm for solving the resource-constrained project scheduling problem. The innovative component of the algorithm is the use of a magnet-based crossover operator that can preserve up to two contiguous parts from the receiver and one contiguous part from the donator genotype. For this purpose, a number of genes in the receiver genotype absorb one another to have the same order and contiguity they have in the donator genotype. The ability of maintaining up to three contiguous parts from two parents distinguishes this crossover operator from the powerful and famous two-point crossover operator, which can maintain only two contiguous parts, both from the same parent. Comparing the performance of the new procedure with that of other procedures indicates its effectiveness and competence.
Genetic algorithms; Resource-constrained; Project scheduling; Precedence-based permutation; Crossover operators;
http://www.sciencedirect.com/science/article/pii/S0377221713002130
Zamani, Reza
oai:RePEc:eee:ejores:v:229:y:2013:i:1:p:261-2752013-06-13RePEc:eee:ejores
RePEc:eee:ejores:v:229:y:2013:i:1:p:261-275
article
A multi-agent optimization formulation of earthquake disaster prevention and management
Natural earthquake disasters are unprecedented incidents that take many lives and cause major damage to lifeline infrastructures. Various agencies in a country are responsible for reducing such adverse impacts within specific budgets. These responsibilities range from before to after the incident, targeting one of the main phases of disaster management (mitigation, preparedness, and response). The use of OR in disaster management and in coordinating its phases has been mostly neglected, and earlier reviews have strongly recommended it. This paper presents a formulation to coordinate three main agencies and proposes a heuristic approach to solve the different sub-problems introduced. The results show an improvement of 7.5–24% when the agencies are coordinated.
OR in societal problem analysis; Disaster management; Multi-agent optimization; Emergency response;
http://www.sciencedirect.com/science/article/pii/S0377221713002166
Edrissi, Ali
Poorzahedy, Hossain
Nassiri, Habibollah
Nourinejad, Mehdi
oai:RePEc:eee:ejores:v:229:y:2013:i:2:p:411-4212013-06-13RePEc:eee:ejores
RePEc:eee:ejores:v:229:y:2013:i:2:p:411-421
article
Optimal average sample number of the SPRT chart for monitoring fraction nonconforming
The Sequential Probability Ratio Test (SPRT) control chart is a powerful tool for monitoring manufacturing processes. It is highly suitable for the applications where testing is destructive or very expensive, such as the automobile airbags test. This article studies the effect of the Average Sample Number (ASN) (i.e., the average sample size) on the chart’s performance. A design algorithm is proposed to develop the optimal SPRT chart for monitoring the fraction nonconforming p of Bernoulli processes. By optimizing the ASN and other charting parameters, the average detection speed of the SPRT chart is almost doubled. It is also found that the optimal SPRT chart significantly outperforms the optimal np and binomial CUSUM charts, in terms of Average Number of Defectives (AND), under different combinations of the design specifications. It is observed that the SPRT chart using a relatively smaller ASN and a shorter sampling interval (h) has a higher overall detection effectiveness.
Sequential Probability Ratio Test (SPRT); Control chart; Average Sample Number (ASN); Average Number of Defectives (AND); Sampling interval;
http://www.sciencedirect.com/science/article/pii/S0377221713002543
Haridy, Salah
Wu, Zhang
Lee, Ka Man
Bhuiyan, Nadia
oai:RePEc:eee:ejores:v:229:y:2013:i:3:p:732-7372013-06-13RePEc:eee:ejores
RePEc:eee:ejores:v:229:y:2013:i:3:p:732-737
article
About negative efficiencies in Cross Evaluation BCC input oriented models
It will be shown in this paper that the input oriented DEA BCC model can generate negative efficiencies that are usually hidden in the model. The impact of these negative efficiencies becomes obvious when using input oriented Cross Evaluation models. With the help of an example with one input and one output, the conditions for the possible occurrence of negative efficiencies will be shown. Furthermore, we will show that a small intuitive change in the BCC multipliers model, previously presented in other papers, corrects this situation. We show why this change is used and compare it with an alternative formulation that avoids negative efficiencies, namely the Non-Decreasing Returns to Scale (NDRS) model. We also show that the formulation studied in this paper is less restrictive than the NDRS model. The study of this variation in the DEA BCC model is complemented with the formulation of the dual envelope model. This model changes the original frontier. Using the concept of non-observed DMUs, those variations can be graphically analyzed. We have also carried out some algebraic studies concerning benchmarks, multipliers and returns to scale.
Data envelopment analysis; Negative efficiencies; BCC model; Cross Evaluation;
http://www.sciencedirect.com/science/article/pii/S0377221713001380
Soares de Mello, João Carlos C.B.
Angulo Meza, Lidia
da Silveira, Juliana Quintanilha
Gomes, Eliane Gonçalves
oai:RePEc:eee:ejores:v:229:y:2013:i:2:p:332-3442013-06-13RePEc:eee:ejores
RePEc:eee:ejores:v:229:y:2013:i:2:p:332-344
article
An effective PSO-inspired algorithm for the team orienteering problem
The Team Orienteering Problem (TOP) is a particular vehicle routing problem in which the aim is to maximize the profit gained from visiting customers without exceeding a travel cost/time limit. This paper proposes a new and fast evaluation process for TOP based on an interval graph model and a Particle Swarm Optimization inspired Algorithm (PSOiA) to solve the problem. Experiments conducted on the standard benchmark of TOP clearly show that our algorithm outperforms the existing solving methods. PSOiA reached a relative error of 0.0005% whereas the best known relative error in the literature is 0.0394%. Our algorithm detects all but one of the best known solutions. Moreover, a strict improvement was found for one instance of the benchmark and a new set of larger instances was introduced.
Vehicle routing; Knapsack problem; Interval graph; Optimal split; Swarm intelligence;
http://www.sciencedirect.com/science/article/pii/S0377221713001987
Dang, Duc-Cuong
Guibadj, Rym Nesrine
Moukrim, Aziz
oai:RePEc:eee:ejores:v:230:y:2013:i:1:p:190-1992013-06-13RePEc:eee:ejores
RePEc:eee:ejores:v:230:y:2013:i:1:p:190-199
article
Image collection planning for KOrea Multi-Purpose SATellite-2
This paper studies an image collection planning problem for a Korean satellite, KOMPSAT-2 (KOrea Multi-Purpose SATellite-2). KOMPSAT-2 has the mission goal of maximizing image acquisition in time and quality requested by customers and operates under several complicating conditions. One of the characteristics in KOMPSAT-2 is its strip mode operation, in which segments of continuous-observation areas with known sizes are captured one at a time. In this paper, we regard the segment as a group of adjoining geographical square regions (scenes), whose size must also be determined. Thus, the problem involves the determination of proper segment lengths as well as an image collection schedule. We present a binary integer programming model for this problem in a multi-orbit long-term planning environment and provide a heuristic solution approach based on the Lagrangian relaxation and subgradient methods. We also present the results of our computational experiment based on randomly generated data.
Image collection planning problem; Large scale optimization; Decomposition; Lagrangian relaxation; Subgradient method; Scheduling;
http://www.sciencedirect.com/science/article/pii/S037722171300307X
Jang, Jinbong
Choi, Jiwoong
Bae, Hee-Jin
Choi, In-Chan
oai:RePEc:eee:ejores:v:230:y:2013:i:1:p:157-1692013-06-13RePEc:eee:ejores
RePEc:eee:ejores:v:230:y:2013:i:1:p:157-169
article
Branch-and-price for staff rostering: An efficient implementation using generic programming and nested column generation
We present a novel generic programming implementation of a column-generation algorithm for the generalized staff rostering problem. The problem is represented as a generalized set partitioning model, which is able to capture commonly occurring problem characteristics given in the literature. Columns of the set partitioning problem are generated dynamically by solving a pricing subproblem, and constraint branching in a branch-and-bound framework is used to enforce integrality. The pricing problem is formulated as a novel three-stage nested shortest path problem with resource constraints that exploits the inherent problem structure. A very efficient implementation of this pricing problem is achieved by using generic programming principles in which careful use of the C++ pre-processor allows the generator to be customized for the target problem at compile-time. As well as decreasing run times, this new approach creates a more flexible modeling framework that is well suited to handling the variety of problems found in staff rostering. Comparison with a more standard run-time customization approach shows that speedups of around a factor of 20 are achieved using our new approach. The adaptation to a new problem is simple and the implementation is automatically adjusted internally according to the new definition. We present results for three practical rostering problems. The approach captures all features of each problem and is able to provide high-quality solutions in less than 15 minutes. In two of the three instances, the optimal solution is found within this time frame.
OR in manpower planning; Generic programming; Branch-and-price; Scheduling; Staff rostering;
http://www.sciencedirect.com/science/article/pii/S0377221713002464
Dohn, Anders
Mason, Andrew
oai:RePEc:eee:ejores:v:229:y:2013:i:2:p:505-5172013-06-13RePEc:eee:ejores
RePEc:eee:ejores:v:229:y:2013:i:2:p:505-517
article
Lexicographical dynamic goal programming approach to a robust design optimization within the pharmaceutical environment
The primary objective of this paper is to develop a new robust design (RD) optimization procedure based on a lexicographical dynamic goal programming (LDGP) approach for implementing time-series based multi-responses, whereas conventional experimental design formats and frameworks typically implement static responses. First, a parameter estimation method for time-dependent pharmaceutical responses (i.e., drug release and gelation kinetics) is proposed using the dual response estimation concept, which separately estimates the response functions of the mean and variance, as part of a response surface method. Second, a multi-objective RD optimization model using the estimated response functions of both the process mean and variance is proposed by incorporating time-series components within a dynamic modeling environment. Finally, a pharmaceutical case study associated with a generic drug development process is conducted for verification purposes. Based on the case study results, we conclude that the proposed LDGP approach effectively provides the optimal drug formulations with significantly smaller biases and MSE values, compared to other models.
Robust design; Response surface methodology (RSM); Time series response; Lexicographical dynamic goal programming;
http://www.sciencedirect.com/science/article/pii/S0377221713001355
Nha, Vo Thanh
Shin, Sangmun
Jeong, Seong Hoon
oai:RePEc:eee:ejores:v:229:y:2013:i:3:p:743-7502013-06-13RePEc:eee:ejores
RePEc:eee:ejores:v:229:y:2013:i:3:p:743-750
article
On deciding how to decide: Designing participatory budget processes
Participatory budgets are becoming increasingly popular in many municipalities all around the world. The underlying idea is to allow citizens to participate in the allocation of a municipal budget. Many advantages have been suggested for such experiences, including legitimization and more informed and transparent decisions. There are many conceivable variants of such processes. However, in most cases both its design and implementation are carried out in an informal way. In this paper we propose a methodology to design a participatory budget process based on a multicriteria decision making model.
Group decision support; Public budgeting; Participatory budgeting; Participation; Multicriteria decision making; Multiattribute value function;
http://www.sciencedirect.com/science/article/pii/S0377221713002683
Gomez, J.
Insua, D. Rios
Lavin, J.M.
Alfaro, C.
oai:RePEc:eee:ejores:v:229:y:2013:i:3:p:613-6252013-06-13RePEc:eee:ejores
RePEc:eee:ejores:v:229:y:2013:i:3:p:613-625
article
Univariate parameterization for global optimization of mixed-integer polynomial problems
This paper presents a new relaxation technique to globally optimize mixed-integer polynomial programming problems that arise in many engineering and management contexts. Using a bilinear term as the basic building block, the underlying idea involves the discretization of one of the variables up to a chosen accuracy level (Teles, J.P., Castro, P.M., Matos, H.A. (2013). Multiparametric disaggregation technique for global optimization of polynomial programming problems. J. Glob. Optim. 55, 227–251), by means of a radix-based numeric representation system, coupled with a residual variable to effectively make its domain continuous. Binary variables are added to the formulation to choose the appropriate digit for each position together with new sets of continuous variables and constraints leading to the transformation of the original mixed-integer non-linear problem into a larger one of the mixed-integer linear programming type. The new underestimation approach can be made as tight as desired and is shown capable of providing considerably better lower bounds than a widely used global optimization solver for a specific class of design problems involving bilinear terms.
Global optimization; Non-linear programming; Integer programming; Mixed-integer non-linear programming;
http://www.sciencedirect.com/science/article/pii/S0377221713002750
Teles, João P.
Castro, Pedro M.
Matos, Henrique A.
oai:RePEc:eee:ejores:v:229:y:2013:i:2:p:482-4862013-06-13RePEc:eee:ejores
RePEc:eee:ejores:v:229:y:2013:i:2:p:482-486
article
Non-dominance and potential optimality for partial preference relations
In this paper we obtain new theoretical results relating the notions of potential optimality and non-dominance without an assumption that a value function exists. In particular, we investigate a decision problem involving the choice of single or multiple best objects. Our results show that the notions of potential optimality and non-dominance are equivalent in a special case of preferences of the decision maker expressed by partial quasi-orders.
Binary preference relations; Non-dominated objects; Potentially optimal objects; Choosing l best objects; l-Non-dominated objects; Potentially l-optimal objects;
http://www.sciencedirect.com/science/article/pii/S037722171300194X
Podinovski, Vladislav V.
oai:RePEc:eee:ejores:v:229:y:2013:i:1:p:29-362013-06-13RePEc:eee:ejores
RePEc:eee:ejores:v:229:y:2013:i:1:p:29-36
article
Robust aspects of solutions in deterministic multiple objective linear programming
We study questions of robustness of linear multiple objective problems in the sense of post-optimal analysis, that is, we study conditions under which a given efficient solution remains efficient when the criteria/objective matrix undergoes some alterations. We consider addition or removal of certain criteria, convex combination with another criteria matrix, or small perturbations of its entries. We provide a necessary and sufficient condition for robustness in a verifiable form and give two formulae to compute the radius of robustness.
Multiple objective linear problem; Robust efficient solutions; Radius of robustness;
http://www.sciencedirect.com/science/article/pii/S0377221713001707
Georgiev, Pando Gr.
Luc, Dinh The
Pardalos, Panos M.
oai:RePEc:eee:ejores:v:229:y:2013:i:2:p:353-3632013-06-13RePEc:eee:ejores
RePEc:eee:ejores:v:229:y:2013:i:2:p:353-363
article
The single item uncapacitated lot-sizing problem with time-dependent batch sizes: NP-hard and polynomial cases
This paper considers the uncapacitated lot sizing problem with batch delivery, focusing on the general case of time-dependent batch sizes. We study the complexity of the problem, depending on the other cost parameters, namely the setup cost, the fixed cost per batch, the unit procurement cost and the unit holding cost. We establish that if any one of the cost parameters is allowed to be time-dependent, the problem is NP-hard. On the contrary, if all the cost parameters are stationary, and assuming no unit holding cost, we show that the problem is polynomially solvable in time O(T³), where T denotes the number of periods of the horizon. We also show that, in the case of divisible batch sizes, the problem with time varying setup costs, a stationary fixed cost per batch and no unit procurement nor holding cost can be solved in time O(T³ log T).
Inventory; Uncapacitated lot sizing; Batch delivery; Stepwise cost; Polynomial time algorithm; Complexity;
http://www.sciencedirect.com/science/article/pii/S0377221713002014
Akbalik, Ayse
Rapine, Christophe
oai:RePEc:eee:ejores:v:230:y:2013:i:1:p:88-962013-06-13RePEc:eee:ejores
RePEc:eee:ejores:v:230:y:2013:i:1:p:88-96
article
Testing over-representation of observations in subsets of a DEA technology
This paper proposes a test for whether data are over-represented in a given production zone, i.e. a subset of a production possibility set which has been estimated using the non-parametric Data Envelopment Analysis (DEA) approach. A binomial test is used that relates the number of observations inside such a zone to a discrete probability weighted relative volume of that zone. A Monte Carlo simulation illustrates the performance of the proposed test statistic and provides good estimation of both facet probabilities and the assumed common inefficiency distribution in a three dimensional input space. Potential applications include tests for whether benchmark units dominate more (or less) observations than expected.
Data Envelopment Analysis (DEA); Over-representation; Data density; Binomial test; Benchmarks;
http://www.sciencedirect.com/science/article/pii/S0377221713002713
Asmild, Mette
Hougaard, Jens Leth
Olesen, Ole B.
oai:RePEc:eee:ejores:v:229:y:2013:i:2:p:433-4432013-06-13RePEc:eee:ejores
RePEc:eee:ejores:v:229:y:2013:i:2:p:433-443
article
Optimal advertising and pricing in a class of general new-product adoption models
In [21], Sethi et al. introduced a particular new-product adoption model. They determine optimal advertising and pricing policies of an associated deterministic infinite horizon discounted control problem. Their analysis is based on the fact that the corresponding Hamilton–Jacobi–Bellman (HJB) equation is an ordinary non-linear differential equation which has an analytical solution. In this paper, generalizations of their model are considered. We take arbitrary adoption and saturation effects into account, and solve finite and infinite horizon discounted variations of associated control problems. If the horizon is finite, the HJB-equation is a 1st order non-linear partial differential equation with specific boundary conditions. For a fairly general class of models we show that these partial differential equations have analytical solutions. Explicit formulas of the value function and the optimal policies are derived. The controlled Bass model with isoelastic demand is a special example of the class of controlled adoption models to be examined and will be analyzed in some detail.
Optimal control; Dynamic programming; OR in marketing; Pricing; Sethi/Bass model;
http://www.sciencedirect.com/science/article/pii/S0377221713001689
Helmes, Kurt
Schlosser, Rainer
Weber, Martin
oai:RePEc:eee:ejores:v:230:y:2013:i:1:p:76-872013-06-13RePEc:eee:ejores
RePEc:eee:ejores:v:230:y:2013:i:1:p:76-87
article
Strategic joining in M/M/1 retrial queues
The equilibrium and socially optimal balking strategies are investigated for unobservable and observable single-server classical retrial queues. There is no waiting space in front of the server. If an arriving customer finds the server idle, he occupies the server immediately and leaves the system after service. Otherwise, if the server is found busy, the customer decides whether or not to enter a retrial pool with infinite capacity and becomes a repeated customer, based on observation of the system and the reward–cost structure imposed on the system. Accordingly, two cases with respect to different levels of information are studied and the corresponding Nash equilibrium and social optimization balking strategies for all customers are derived. Finally, we compare the equilibrium and optimal behavior regarding these two information levels through numerical examples.
Queueing; M/M/1 queue; Balking; Equilibrium strategies; Threshold strategies; Retrials;
http://www.sciencedirect.com/science/article/pii/S0377221713002580
Wang, Jinting
Zhang, Feng
oai:RePEc:eee:ejores:v:229:y:2013:i:1:p:199-2112013-06-13RePEc:eee:ejores
RePEc:eee:ejores:v:229:y:2013:i:1:p:199-211
article
Communication network formation with link specificity and value transferability
We model strategic communication network formation with (i) link specificity: link maintenance lowers specific attention and thus value (negative externality previously ignored for communication) and (ii) value transferability via indirect links for informational but not for social value (positive externality modeled uniformly before). Assuming only social value, the pairwise stable set includes many nonstandard networks under high and particular combinations of complete components under low link specificity. Allowing for social and informational value reduces this set to certain fragmented networks under high and the complete network under low link specificity. These extremes are efficient, whereas intermediate link specificity generates inefficiency.
Bilateral communication links; Link specificity; Value transferability; Social vs. informational value; Strategic network formation;
http://www.sciencedirect.com/science/article/pii/S0377221713001616
Harmsen-van Hout, Marjolein J.W.
Herings, P. Jean-Jacques
Dellaert, Benedict G.C.
oai:RePEc:eee:ejores:v:229:y:2013:i:1:p:155-1642013-06-13RePEc:eee:ejores
RePEc:eee:ejores:v:229:y:2013:i:1:p:155-164
article
The effect of tax depreciation on the stochastic replacement policy
The optimal replacement policy for an asset subject to a stochastic deteriorating operating cost is determined for three different tax depreciation schedules and a known re-investment cost, as the solution to a two-factor model using a quasi-analytical method. We find that tax depreciation exerts a critical influence over the replacement policy by lowering the operating cost thresholds. Although typically a decline in the corporate tax rate, increase in any initial capital allowance, or decrease in the depreciation lifetime (increase in depreciation rate) results in a lower operating cost threshold which justifies replacing older equipment, these results are not universal, and indeed for younger age assets the result may be the opposite. An accelerating depreciation schedule may incentivize early replacement in a deterministic context, but not necessarily for an environment of uncertainty.
Uncertainty modelling; Equipment replacement; Capital budgeting; Quasi-analytical solution; Tax depreciation;
http://www.sciencedirect.com/science/article/pii/S0377221713000970
Adkins, Roger
Paxson, Dean
oai:RePEc:eee:ejores:v:229:y:2013:i:2:p:303-3172013-06-13RePEc:eee:ejores
RePEc:eee:ejores:v:229:y:2013:i:2:p:303-317
article
Designing vehicle routes for a mix of different request types, under time windows and loading constraints
This article introduces and solves a new rich routing problem integrated with practical operational constraints. The problem examined calls for the determination of the optimal routes for a vehicle fleet to satisfy a mix of two different request types. Firstly, vehicles must transport three-dimensional, rectangular and stackable boxes from a depot to a set of predetermined customers. In addition, vehicles must also transfer products between pairs of pick-up and delivery locations. Service of both request types is subject to hard time window constraints. In addition, feasible palletization patterns must be identified for the transported products. A practical application of the problem arises in the transportation systems of chain stores, where vehicles replenish the retail points by delivering products stored at a central depot, while they are also responsible for transferring stock between pairs of the retailer network. To solve this very complex combinatorial optimization problem, our major objective was to develop an efficient methodology whose required computational effort is kept within reasonable limits. To this end, we propose a local search-based framework for optimizing vehicle routes, in which feasible loading arrangements are identified via a simple-structured packing heuristic. The algorithmic framework is enhanced with various memory components which store and retrieve useful information gathered through the search process, in order to avoid any duplicate unnecessary calculations. The proposed solution approach is assessed on newly introduced benchmark instances.
Vehicle routing; Pick-up and delivery; Time windows; Pallet loading; Local search;
http://www.sciencedirect.com/science/article/pii/S0377221713002051
Zachariadis, Emmanouil E.
Tarantilis, Christos D.
Kiranoudis, Chris T.
oai:RePEc:eee:ejores:v:229:y:2013:i:3:p:595-5982013-06-13RePEc:eee:ejores
RePEc:eee:ejores:v:229:y:2013:i:3:p:595-598
article
A note on the separation of subtour elimination constraints in elementary shortest path problems
This note proposes an alternative procedure for identifying violated subtour elimination constraints (SECs) in branch-and-cut algorithms for elementary shortest path problems. The procedure is also applicable to other routing problems, such as variants of travelling salesman or shortest Hamiltonian path problems, on directed graphs. The proposed procedure is based on computing the strong components of the support graph. The procedure possesses a better worst-case time complexity than the standard way of separating SECs, which uses maximum flow algorithms, and is easier to implement.
Integer programming; Branch-and-cut; Separation; Subtour elimination constraints; Strong components;
http://www.sciencedirect.com/science/article/pii/S0377221713002178
Drexl, Michael
oai:RePEc:eee:ejores:v:229:y:2013:i:3:p:573-5842013-06-13RePEc:eee:ejores
RePEc:eee:ejores:v:229:y:2013:i:3:p:573-584
article
A memetic algorithm for the multiperiod vehicle routing problem with profit
In this paper, we extend upon current research in the vehicle routing problem whereby labour regulations affect planning horizons, and therefore, profitability. We call this extension the multiperiod vehicle routing problem with profit (mVRPP). The goal is to determine routes for a set of vehicles that maximizes profitability from visited locations, based on the conditions that vehicles can only travel during stipulated working hours within each period in a given planning horizon and that the vehicles are only required to return to the depot at the end of the last period. We propose an effective memetic algorithm with a giant-tour representation to solve the mVRPP. To efficiently evaluate a chromosome, we develop a greedy procedure to partition a given giant-tour into individual routes, and prove that the resultant partition is optimal. We evaluate the effectiveness of our memetic algorithm with extensive experiments based on a set of modified benchmark instances. The results indicate that our approach generates high-quality solutions that are reasonably close to the best known solutions or proven optima, and significantly better than the solutions obtained using heuristics employed by professional schedulers.
Memetic algorithm; Metaheuristics; Giant-tour; Multiperiod; Periodic vehicle routing;
http://www.sciencedirect.com/science/article/pii/S0377221712009137
Zhang, Zizhen
Che, Oscar
Cheang, Brenda
Lim, Andrew
Qin, Hu
oai:RePEc:eee:ejores:v:229:y:2013:i:3:p:718-7312013-06-13RePEc:eee:ejores
RePEc:eee:ejores:v:229:y:2013:i:3:p:718-731
article
A decomposition approach for the General Lotsizing and Scheduling Problem for Parallel production Lines
This paper presents a novel solution heuristic to the General Lotsizing and Scheduling Problem for Parallel production Lines (GLSPPL). The GLSPPL addresses the problem of simultaneously deciding about the sizes and schedules of production lots on parallel, heterogeneous production lines with respect to scarce capacity, sequence-dependent setup times and deterministic, dynamic demand of multiple products. Its objective is to minimize inventory holding, sequence-dependent setup and production costs. The new heuristic iteratively decomposes the multi-line problem into a series of single-line problems, which are easier to solve. Different approaches for decomposition and for the iteration between a modified multi-line master problem and the single-line subproblems are proposed. They are compared with an existing solution method for the GLSPPL by means of medium-sized and large practical problem instances from different types of industries. The new methods prove to be superior with respect to both solution quality and computation time.
Scheduling; Heuristics; Simultaneous Lotsizing and Scheduling; Production;
http://www.sciencedirect.com/science/article/pii/S0377221713002695
Meyr, Herbert
Mann, Matthias
oai:RePEc:eee:ejores:v:229:y:2013:i:2:p:318-3312013-06-13RePEc:eee:ejores
RePEc:eee:ejores:v:229:y:2013:i:2:p:318-331
article
A decomposition approach for the integrated vehicle-crew-roster problem with days-off pattern
The integrated vehicle-crew-roster problem with days-off pattern aims to simultaneously determine minimum cost vehicle and daily crew schedules that cover all timetabled trips and a minimum cost roster covering all daily crew duties according to a pre-defined days-off pattern. This problem is formulated as a new integer linear programming model and is solved by a heuristic approach based on Benders decomposition that iterates between the solution of an integrated vehicle-crew scheduling problem and the solution of a rostering problem. Computational experience with data from two bus companies in Portugal and data from benchmark vehicle scheduling instances shows the ability of the approach for producing a variety of solutions within reasonable computing times as well as the advantages of integrating the three problems.
Transportation; Vehicle and crew scheduling; Driver rostering; Benders decomposition;
http://www.sciencedirect.com/science/article/pii/S037722171300204X
Mesquita, Marta
Moz, Margarida
Paias, Ana
Pato, Margarida
oai:RePEc:eee:ejores:v:229:y:2013:i:2:p:375-3812013-06-13RePEc:eee:ejores
RePEc:eee:ejores:v:229:y:2013:i:2:p:375-381
article
Coordination of supply chains with bidirectional option contracts
In this paper we develop a supply contract for a two-echelon manufacturer–retailer supply chain with a bidirectional option, which may be exercised as either a call option or a put option. Under the bidirectional option contract, we derive closed-form expressions for the retailer’s optimal order strategies, including the initial order strategy and the option purchasing strategy, with a general demand distribution. We also analytically examine the feedback effects of the bidirectional option on the retailer’s initial order strategy. In addition, taking a chain-wide perspective, we explore how the bidirectional option contract should be set to attain supply chain coordination.
Supply chain management; Bidirectional option contract; Structural property; Channel coordination;
http://www.sciencedirect.com/science/article/pii/S0377221713002488
Zhao, Yingxue
Ma, Lijun
Xie, Gang
Cheng, T.C.E.
oai:RePEc:eee:ejores:v:229:y:2013:i:3:p:585-5942013-06-13RePEc:eee:ejores
RePEc:eee:ejores:v:229:y:2013:i:3:p:585-594
article
The discrete forward–reserve problem – Allocating space, selecting products, and area sizing in forward order picking
To reduce labor-intensive and costly order picking activities, many distribution centers are subdivided into a forward area and a reserve (or bulk) area. The former is a small area where the most popular stock keeping units (SKUs) can conveniently be picked, and the latter is used for replenishing the forward area and storing SKUs that are not assigned to the forward area at all. Clearly, reducing the SKUs stored in the forward area enables a more compact forward area (with reduced picking effort) but requires more frequent replenishment. To tackle this basic trade-off, different versions of forward–reserve problems determine the SKUs to be stored in the forward area, the space allocated to each SKU, and the overall size of the forward area. As previous research mainly focuses on simplified problem versions (denoted as fluid models), where the forward area can continuously be subdivided, we investigate discrete forward–reserve problems. Important subproblems are defined and their computational complexity is investigated. Furthermore, we experimentally analyze the model gaps between the different fluid models and their discrete counterparts.
Assignment; Logistics; Warehousing; Order picking; Forward–reserve problem;
http://www.sciencedirect.com/science/article/pii/S0377221713001963
Walter, Rico
Boysen, Nils
Scholl, Armin
oai:RePEc:eee:ejores:v:229:y:2013:i:3:p:663-6722013-06-13RePEc:eee:ejores
RePEc:eee:ejores:v:229:y:2013:i:3:p:663-672
article
Two-player simultaneous location game: Preferential rights and overbidding
Competitive location problems can be characterized by the fact that the decisions made by others will affect our own payoffs. In this paper, we address a discrete competitive location game in which two decision-makers have to decide simultaneously where to locate their services without knowing the decisions of one another. This problem arises in a franchising environment in which the decision-makers are the franchisees and the franchiser defines the potential sites for locating services and the rules of the game. At most one service can be located at each site, and one of the franchisees has preferential rights over the other. This means that if both franchisees are interested in opening the service in the same site, only the one that has preferential rights will open it. We consider that both franchisees have budget constraints, but the franchisee without preferential rights is allowed to show interest in more sites than the ones she can afford. We are interested in studying the influence of the existence of preferential rights and overbidding on the outcomes for both franchisees and franchiser. A model is presented and an algorithmic approach is developed for the calculation of Nash equilibria. Several computational experiments are defined and their results are analysed, showing that preferential rights give its holder a relative advantage over the other competitor. The possibility of overbidding seems to be advantageous for the franchiser, as well as the inclusion of some level of asymmetry between the two decision-makers.
Location; Game theory; Competitive location problems; Simultaneous decisions; Nash-equilibria;
http://www.sciencedirect.com/science/article/pii/S0377221713002737
Godinho, Pedro
Dias, Joana
oai:RePEc:eee:ejores:v:230:y:2013:i:1:p:1-142013-06-13RePEc:eee:ejores
RePEc:eee:ejores:v:230:y:2013:i:1:p:1-14
article
Quasi-stationary distributions for discrete-state models
This paper contains a survey of results related to quasi-stationary distributions, which arise in the setting of stochastic dynamical systems that eventually evanesce, and which may be useful in describing the long-term behaviour of such systems before evanescence. We are concerned mainly with continuous-time Markov chains over a finite or countably infinite state space, since these processes most often arise in applications, but will make reference to results for other processes where appropriate. In addition to giving a historical account of the subject, we review the most important results on the existence and identification of quasi-stationary distributions for general Markov chains, and give special attention to birth–death processes and related models. The question of under what circumstances a quasi-stationary distribution, given its existence, is indeed a good descriptor of the long-term behaviour of a system before evanescence is addressed as well. We conclude with a discussion of computational aspects, with more details given in a web appendix accompanying this paper.
Applied probability; Markov processes; Quasi-stationarity;
http://www.sciencedirect.com/science/article/pii/S0377221713000799
van Doorn, Erik A.
Pollett, Philip K.
oai:RePEc:eee:ejores:v:230:y:2013:i:1:p:26-412013-06-13RePEc:eee:ejores
RePEc:eee:ejores:v:230:y:2013:i:1:p:26-41
article
A stochastic aggregate production planning model in a green supply chain: Considering flexible lead times, nonlinear purchase and shortage cost functions
In this paper we develop a stochastic programming approach to solve a multi-period multi-product multi-site aggregate production planning problem in a green supply chain for a medium-term planning horizon under the assumption of demand uncertainty. The proposed model has the following features: (i) the majority of supply chain cost parameters are considered; (ii) quantity discounts to encourage the producer to order more from the suppliers in one period, instead of splitting the order into periodical small quantities, are considered; (iii) the interrelationship between lead time and transportation cost is considered, as well as that between lead time and greenhouse gas emission level; (iv) demand uncertainty is assumed to follow a pre-specified distribution function; (v) shortages are penalized by a general multiple breakpoint function, to persuade producers to reduce backorders as much as possible; (vi) some indicators of a green supply chain, such as greenhouse gas emissions and waste management, are also incorporated into the model. The proposed model is initially a nonlinear mixed integer program, which is converted into a linear one by applying some theoretical and numerical techniques. Due to the convexity of the resulting model, the local solution obtained from linear programming solvers is also the global solution. Finally, a numerical example is presented to demonstrate the validity of the proposed model.
Supply chain management; Aggregate production planning; Green principles; Quantity discount; Nonlinear shortage cost; Demand uncertainty;
http://www.sciencedirect.com/science/article/pii/S037722171300266X
Mirzapour Al-e-hashem, S.M.J.
Baboli, A.
Sazvar, Z.
oai:RePEc:eee:ejores:v:229:y:2013:i:1:p:48-582013-06-13RePEc:eee:ejores
RePEc:eee:ejores:v:229:y:2013:i:1:p:48-58
article
Minimising the number of gap-zeros in binary matrices
We study a problem of minimising the total number of zeros in the gaps between blocks of consecutive ones in the columns of a binary matrix by permuting its rows. The problem is referred to as the Consecutive Ones Matrix Augmentation Problem, and is known to be NP-hard. An analysis of the structure of an optimal solution allows us to focus on a restricted solution space, and to use an implicit representation for searching the space. We develop an exact solution algorithm, which is linear-time in the number of rows if the number of columns is constant, and two constructive heuristics to tackle instances with an arbitrary number of columns. The heuristics use a novel solution representation based upon row sequencing. In our computational study, all heuristic solutions are either optimal or close to an optimum. One of the heuristics is particularly effective, especially for problems with a large number of rows.
Combinatorial optimisation; Binary matrices; Consecutive ones property; Scheduling; Heuristics;
http://www.sciencedirect.com/science/article/pii/S0377221713000751
Chakhlevitch, Konstantin
Glass, Celia A.
Shakhlevich, Natalia V.
oai:RePEc:eee:ejores:v:230:y:2013:i:1:p:122-1322013-06-13RePEc:eee:ejores
RePEc:eee:ejores:v:230:y:2013:i:1:p:122-132
article
Selfish routing in public services
It is well observed that individual behaviour can have an effect on the efficiency of queueing systems. The impact of this behaviour on the economic efficiency of public services is considered in this paper, where we present results concerning the congestion-related implications of decisions made by individuals when choosing between facilities. The work presented has important managerial implications at a public policy level when considering the effect of allowing individuals to choose between providers. We show that in general the introduction of choice in an already inefficient system will not have a negative effect. By contrast, introducing choice in a system that copes with demand will have a negative effect.
Game theory; Queueing theory; Health care; OR in health services;
http://www.sciencedirect.com/science/article/pii/S0377221713003019
Knight, Vincent A.
Harper, Paul R.
oai:RePEc:eee:ejores:v:229:y:2013:i:2:p:422-4322013-06-13RePEc:eee:ejores
RePEc:eee:ejores:v:229:y:2013:i:2:p:422-432
article
Pricing and effort investment for a newsvendor-type product
We investigate a dominant retailer’s optimal joint strategy of pricing and timing of effort investment and analyze how it influences the decision of the manufacturer, the total supply chain profit, and the consumers’ payoff. We consider two pricing schemes of the retailer, namely, dollar markup and percentage markup, and two effort-investment sequences, namely, ex-ante and ex-post. A combination of four cases is analyzed. Our results show that: (1) under the same effort-decision sequence, a percentage-markup pricing scheme leads to higher expected profit for the retailer and the whole supply chain, but a lower expected profit for the manufacturer and a higher retail price for the consumers; (2) under the same markup-pricing strategy, the dominant retailer always prefers to postpone her effort decision until the manufacturer makes a commitment to wholesale price, since it can result in a Pareto-improvement for all the supply chain members. That is, the retailer’s and manufacturer’s expected profits are higher and the consumers pay a lower retail price; and (3) among the four joint strategies, the dominant retailer always prefers the joint strategy of percentage-markup plus ex-post effort decision. However, the dominated manufacturer always prefers the joint strategy of dollar-markup plus ex-post effort decision, which is also beneficial to the end consumers.
Supply chain management; Markup pricing; Timing of effort-investment; Game theory;
http://www.sciencedirect.com/science/article/pii/S0377221712009071
Wang, Yao-Yu
Wang, Jian-Cai
Shou, Biying
oai:RePEc:eee:ejores:v:229:y:2013:i:2:p:470-4812013-06-13RePEc:eee:ejores
RePEc:eee:ejores:v:229:y:2013:i:2:p:470-481
article
Multiattribute preference models with reference points
In the context of multiple attribute decision making, preference models making use of reference points in an ordinal way have recently been introduced in the literature. This text proposes an axiomatic analysis of such models, with a particular emphasis on the case in which there is only one reference point. Our analysis uses a general conjoint measurement model resting on the study of traces induced on attributes by the preference relation and using conditions guaranteeing that these traces are complete. Models using reference points are shown to be a particular case of this general model. The number of reference points is linked to the number of equivalence classes distinguished by the traces. When there is only one reference point, the induced traces are quite rough, distinguishing at most two distinct equivalence classes. We study the relation between the model using a single reference point and other preference models proposed in the literature, most notably models based on concordance and models based on a discrete Sugeno integral.
Multiple criteria decision making; Reference point; Conjoint measurement;
http://www.sciencedirect.com/science/article/pii/S0377221713001951
Bouyssou, Denis
Marchant, Thierry
oai:RePEc:eee:ejores:v:229:y:2013:i:2:p:496-5042013-06-13RePEc:eee:ejores
RePEc:eee:ejores:v:229:y:2013:i:2:p:496-504
article
Incorporating the learning effect into data envelopment analysis to measure MSW recycling performance
The effect of organizational learning, which results in continuous improvement of organizational performance over time, has been widely discussed. The cumulative learning effect may form as a source of intellectual capital. Thus far, the static data envelopment analysis (DEA) model has not been used to examine the longitudinal learning effect. Therefore, a two-stage approach is developed together with the estimation of a latent learning effect using time-series data; the estimated learning effect is then used as an input in the DEA Slacks-Based Measure (SBM) model. The proposed DEA SBM model can be used to investigate the efficiency of the organizational learning effect of Municipal Solid Waste (MSW) recycling systems.
Data envelopment analysis; Learning effect; MSW recycling;
http://www.sciencedirect.com/science/article/pii/S0377221713000738
Chang, Dong-Shang
Liu, Wenrong
Yeh, Li-Ting
oai:RePEc:eee:ejores:v:229:y:2013:i:1:p:85-942013-06-13RePEc:eee:ejores
RePEc:eee:ejores:v:229:y:2013:i:1:p:85-94
article
A framework for performance measurement during production ramp-up of assembly stations
Production ramp-up is an important phase in the lifecycle of a manufacturing system which still offers significant potential for improvement and thereby for reducing the time-to-market of new and updated products. Production systems today are mostly one-of-a-kind complex, engineered-to-order systems. Their ramp-up is a complex sequence of physical and logical adjustments characterised by trial-and-error decision making, resulting in frequent reiterations and unnecessary repetitions. Studies have shown that clear goal setting and feedback can significantly improve the effectiveness of decision-making in predominantly human decision processes such as ramp-up. However, few measurement-driven decision aids have been reported which focus on ramp-up improvement, and no systematic approach for ramp-up time reduction has yet been defined. In this paper, a framework for measuring performance during ramp-up is proposed in order to support decision making by providing clear metrics based on the measurable and observable status of the technical system. This work proposes a systematic framework for data preparation, ramp-up formalisation, and performance measurement. A model for defining the ramp-up state of a system has been developed in order to formalise and capture its condition. Functionality-, quality- and performance-based metrics have been identified to formalise a clear ramp-up index as a measurement to guide and support human decision making. For the validation of the proposed framework, two ramp-up processes of an assembly station were emulated and their comparison was used to evaluate this work.
Manufacturing; Production; Ramp-up; Performance measures; Decision support;
http://www.sciencedirect.com/science/article/pii/S0377221713002002
Doltsinis, Stefanos C.
Ratchev, Svetan
Lohse, Niels
oai:RePEc:eee:ejores:v:229:y:2013:i:1:p:143-1542013-06-13RePEc:eee:ejores
RePEc:eee:ejores:v:229:y:2013:i:1:p:143-154
article
Towards a new framework for evaluating systemic problem structuring methods
Operational researchers and social scientists often make significant claims for the value of systemic problem structuring and other participative methods. However, when they present evidence to support these claims, it is usually based on single case studies of intervention. There have been very few attempts at evaluating across methods and across interventions undertaken by different people. This is because, in any local intervention, contextual factors, the skills of the researcher and the purposes being pursued by stakeholders affect the perceived success or failure of a method. The use of standard criteria for comparing methods is therefore made problematic by the need to consider what is unique in each intervention. So, is it possible to develop a single evaluation approach that can support both locally meaningful evaluations and longer-term comparisons between methods? This paper outlines a methodological framework for the evaluation of systemic problem structuring methods that seeks to do just this.
Problem structuring methods; Soft operational research; Evaluation of methods; Participative methods; Systems methodology; Systems thinking;
http://www.sciencedirect.com/science/article/pii/S0377221713000945
Midgley, Gerald
Cavana, Robert Y.
Brocklesby, John
Foote, Jeff L.
Wood, David R.R.
Ahuriri-Driscoll, Annabel
oai:RePEc:eee:ejores:v:229:y:2013:i:3:p:561-5722013-06-13RePEc:eee:ejores
RePEc:eee:ejores:v:229:y:2013:i:3:p:561-572
article
Maintenance models in warranty: A literature review
Along with increasing warranty periods for complex systems, reducing warranty servicing costs has become an issue of great importance to manufacturers. One possible way to reduce the expected warranty servicing cost is by making sound decisions on product warranty and maintenance strategies. Therefore, warranties (basic warranty and extended warranty) and maintenance (corrective and preventive) are strongly interlinked and of great interest to both manufacturers and customers. This paper is the first identifiable academic literature review to deal with warranty and maintenance. It provides a classification scheme for the articles linking warranty and maintenance published between 2001 and 2011, covering 44 journals, and proposes a taxonomy scheme to classify these articles. Nine hundred articles were identified for their relevance to warranty and were carefully reviewed. One hundred and twenty-two articles were subsequently selected for their relevance to maintenance and included in the classification.
Warranty; Corrective maintenance (CM); Preventive maintenance (PM);
http://www.sciencedirect.com/science/article/pii/S0377221713000441
Shafiee, Mahmood
Chukova, Stefanka
oai:RePEc:eee:ejores:v:229:y:2013:i:1:p:75-842013-06-13RePEc:eee:ejores
RePEc:eee:ejores:v:229:y:2013:i:1:p:75-84
article
Using greedy clustering method to solve capacitated location-routing problem with fuzzy demands
In this paper, the capacitated location-routing problem with fuzzy demands (CLRP-FD) is considered. In the CLRP-FD, the facility location problem (FLP) and the vehicle routing problem (VRP) are considered simultaneously. Indeed, the vehicles and the depots have a predefined capacity to serve the customers, whose demands are fuzzy. To model this problem, a fuzzy chance-constrained programming model is designed based on fuzzy credibility theory. To solve this problem, a greedy clustering method (GCM) incorporating stochastic simulation is proposed. To obtain the best value of the dispatcher preference index of the model and to analyze its influence on the final solution, numerical experiments are carried out. Finally, to show the performance of the greedy clustering method, the associated results are compared with a lower bound on the solutions.
Capacitated location-routing problem; Fuzzy demand; Credibility theory; Ant colony system; Stochastic simulation;
http://www.sciencedirect.com/science/article/pii/S0377221713001318
Zare Mehrjerdi, Yahia
Nadizadeh, Ali
oai:RePEc:eee:ejores:v:229:y:2013:i:1:p:252-2602013-06-13RePEc:eee:ejores
RePEc:eee:ejores:v:229:y:2013:i:1:p:252-260
article
Estimating freeway traffic measures from mobile phone location data
The worldwide propagation of mobile phones and the rapid development of location technologies have provided the chance to monitor freeway traffic conditions without requiring extra infrastructure investment. Over the past decade, a number of research studies and operational tests have attempted to investigate methods to estimate traffic measures using information from mobile phones. However, most of these works ignored the fact that each vehicle may carry more than one phone due to the widespread adoption of mobile phones. This paper considers the multi-phone circumstance and proposes a relatively simple clustering technique to identify whether phones travel in the same vehicle. By using this technique, mobile phone data can be used to determine not only speed, but also vehicle counts by type, and therefore density. A comprehensive simulation covering different traffic conditions and mobile phone location accuracies has been developed to evaluate the proposed approach. Simulation results indicate that the location accuracy of mobile phones is a crucial factor in estimating accurate traffic measures for a given location frequency and number of continuous location data points. In addition, traffic demand and the clustering method have a certain effect on the accuracy of traffic measures.
Traffic; Traffic measures estimation; Mobile phone; Clustering analysis; Freeway;
http://www.sciencedirect.com/science/article/pii/S0377221713001938
Gao, Hongyan
Liu, Fasheng
oai:RePEc:eee:ejores:v:230:y:2013:i:1:p:42-522013-06-13RePEc:eee:ejores
RePEc:eee:ejores:v:230:y:2013:i:1:p:42-52
article
A Semi-Markov decision problem for proactive and reactive transshipments between multiple warehouses
Lateral transshipments are an effective strategy to pool inventories. We present a Semi-Markov decision problem formulation for proactive and reactive transshipments in a multi-location continuous review distribution inventory system with Poisson demand and one-for-one replenishment policy. For a two-location model we state the monotonicity of an optimal policy. In a numerical study, we compare the benefits of proactive and different reactive transshipment rules. The benefits of proactive transshipments are the largest for networks with intermediate opportunities of demand pooling and the difference between alternative reactive transshipment rules is negligible.
Lateral transshipments; Inventory control; Multiple locations; Markov decision problem;
http://www.sciencedirect.com/science/article/pii/S0377221713002749
Seidscher, Arkadi
Minner, Stefan
oai:RePEc:eee:ejores:v:229:y:2013:i:2:p:345-3522013-06-13RePEc:eee:ejores
RePEc:eee:ejores:v:229:y:2013:i:2:p:345-352
article
An exact algorithm for the precedence-constrained single-machine scheduling problem
This study proposes an efficient exact algorithm for the precedence-constrained single-machine scheduling problem to minimize total job completion cost where machine idle time is forbidden. The proposed algorithm is based on the SSDP (Successive Sublimation Dynamic Programming) method and is an extension of the authors’ previous algorithms for the problem without precedence constraints. In this method, a lower bound is computed by solving a Lagrangian relaxation of the original problem via dynamic programming and then it is improved successively by adding constraints to the relaxation until the gap between the lower and upper bounds vanishes. Numerical experiments will show that the algorithm can solve all instances with up to 50 jobs of the precedence-constrained total weighted tardiness and total weighted earliness–tardiness problems, and most instances with 100 jobs of the former problem.
Scheduling; Single-machine; Precedence constraints; Exact algorithm; Lagrangian relaxation; Dynamic programming;
http://www.sciencedirect.com/science/article/pii/S0377221713001975
Tanaka, Shunji
Sato, Shun
oai:RePEc:eee:ejores:v:229:y:2013:i:1:p:276-2782013-06-13RePEc:eee:ejores
RePEc:eee:ejores:v:229:y:2013:i:1:p:276-278
article
Integrated data envelopment analysis: Global vs. local optimum
Chiou et al. (2010) (A joint measurement of efficiency and effectiveness for non-storable commodities: integrated data envelopment analysis approaches. European Journal of Operational Research 201, 477–489) propose an integrated data envelopment analysis model for measuring decision making units (DMUs) that have a two-stage internal network structure with multiple inputs, outputs, and consumptions. They claim that any optimal solutions determined by their DEA model are a global optimum, not a local optimum. We show that such a conclusion is a false statement due to their misuse of the Hessian matrix in examining the concavity of the objective function, and that their DEA model is actually a non-convex optimization problem. As a result, their DEA model is unusable in practice due to the lack of an efficient algorithm for this particular non-convex DEA model. We further show that Chiou et al.'s (2010) model is a special case of a well-known two-stage network DEA model, and it can be transformed into a parametric linear program for which an approximate global optimal solution can be obtained by solving a sequence of linear programs in combination with a simple search algorithm.
Integrated data envelopment analysis; Efficiency; Two-stage; Local optimum; Global optimum;
http://www.sciencedirect.com/science/article/pii/S0377221713001562
Lim, Sungmook
Zhu, Joe
oai:RePEc:eee:ejores:v:230:y:2013:i:1:p:181-1892013-06-13RePEc:eee:ejores
RePEc:eee:ejores:v:230:y:2013:i:1:p:181-189
article
An integrated model for screening cargo containers
This paper focuses on detecting nuclear weapons in cargo containers using port security screening methods, where the nuclear weapons would presumably be used to attack a target within the United States. This paper provides a linear programming model that simultaneously identifies optimal primary and secondary screening policies in a prescreening-based paradigm, where incoming cargo containers are classified according to their perceived risk. The proposed linear programming model determines how to utilize primary and secondary screening resources in a cargo container screening system given a screening budget, prescreening classifications, and different device costs. Structural properties of the model are examined to shed light on the optimal screening policies. The model is illustrated with a computational example. Sensitivity analysis is performed on the accuracy of the prescreening classifications and on secondary screening costs. Results reveal that there are fewer practical differences between the screening policies of the prescreening groups when prescreening is inaccurate. Moreover, devices that can better detect shielded nuclear material have the potential to substantially improve the system's detection capabilities.
Cargo container security; Port security; Linear programming; Multiple-choice knapsack problems; OR in government;
http://www.sciencedirect.com/science/article/pii/S0377221713002993
Dreiding, Rebecca A.
McLay, Laura A.
oai:RePEc:eee:ejores:v:229:y:2013:i:1:p:230-2382013-06-13RePEc:eee:ejores
RePEc:eee:ejores:v:229:y:2013:i:1:p:230-238
article
Cooperative game theoretic centrality analysis of terrorist networks: The cases of Jemaah Islamiyah and Al Qaeda
The identification of key players in a terrorist organization aids in preventing attacks, the efficient allocation of surveillance measures, and the destabilization of the corresponding network. In this paper, we introduce a game theoretic approach to identify key players in terrorist networks. In particular we use the Shapley value as a measure of importance in cooperative games that are specifically designed to reflect the context of the terrorist organization at hand. The advantage of this approach is that both the structure of the terrorist network, which usually reflects a communication and interaction structure, and non-network features, i.e., individual-based parameters such as financial means or bomb building skills, can be taken into account. Applying our methodology yields rankings of the terrorists in the network. We illustrate our methodology through two case studies, Jemaah Islamiyah's Bali bombing and Al Qaeda's 9/11 attack, which lead to new insights into the operational networks responsible for these attacks.
Terrorism; Network analysis; Centrality; Cooperative game theory;
http://www.sciencedirect.com/science/article/pii/S0377221713001653
Lindelauf, R.H.A.
Hamers, H.J.M.
Husslage, B.G.M.
oai:RePEc:eee:ejores:v:230:y:2013:i:1:p:133-1422013-06-13RePEc:eee:ejores
RePEc:eee:ejores:v:230:y:2013:i:1:p:133-142
article
New product introduction and capacity investment by incumbents: Effects of size on strategy
We analyze a duopoly where capacity-constrained firms offer an established product and have the option to offer an additional new and differentiated product. We show that the firm with the smaller capacity on the established market has a higher incentive to innovate and reaches a larger market share on the market for the new product. An increase in capacity of the larger firm can prevent its competitor from innovating, whereas an increase in capacity of the smaller firm cannot prevent innovation of its larger competitor. In equilibrium the firm with smaller capacity on the established market might outperform the larger firm with respect to total payoffs.
Game theory; Innovation incentives; Capacity choice; Multi-product oligopoly; Subgame-perfect-equilibrium;
http://www.sciencedirect.com/science/article/pii/S0377221713003007
Dawid, Herbert
Kopel, Michael
Kort, Peter M.
oai:RePEc:eee:ejores:v:229:y:2013:i:1:p:41-472013-06-13RePEc:eee:ejores
RePEc:eee:ejores:v:229:y:2013:i:1:p:41-47
article
Due-window assignment with identical jobs on parallel uniform machines
A scheduling problem with a common due-window, earliness and tardiness costs, and identical processing time jobs is studied. We focus on the setting of both (i) job-dependent earliness/tardiness job weights and (ii) parallel uniform machines. The objective is to find the job allocation to the machines and the job schedule, such that the total weighted earliness and tardiness cost is minimized. We study both cases of a non-restrictive (i.e. sufficiently late), and a restrictive due-window. For a given number of machines, the solutions of the problems studied here are obtained in polynomial time in the number of jobs.
Scheduling; Parallel uniform machines; Earliness–tardiness; Assignment problem; Due-window;
http://www.sciencedirect.com/science/article/pii/S0377221713000490
Gerstl, Enrique
Mosheiov, Gur
oai:RePEc:eee:ejores:v:229:y:2013:i:3:p:738-7422013-06-13RePEc:eee:ejores
RePEc:eee:ejores:v:229:y:2013:i:3:p:738-742
article
On the inconsistency of the Malmquist–Luenberger index
Apart from the well-known weaknesses of the standard Malmquist productivity index related to infeasibility and not accounting for slacks, already addressed in the literature, we identify a new and significant drawback of the Malmquist–Luenberger index decomposition that questions its validity as an empirical tool for environmental productivity measurement associated with the production of bad outputs. In particular, we show that the usual interpretation of the technical change component in terms of production frontier shifts can be inconsistent with its numerical value, thereby resulting in an erroneous interpretation of this component that passes on to the index itself. We illustrate this issue with a simple numerical example. Finally, we propose a solution for this inconsistency issue based on incorporating a new postulate for the technology related to the production of bad outputs.
Data Envelopment Analysis; Malmquist–Luenberger productivity index; Technological change; Efficiency change; Directional distance function;
http://www.sciencedirect.com/science/article/pii/S0377221713002592
Aparicio, Juan
Pastor, Jesus T.
Zofio, Jose L.
oai:RePEc:eee:ejores:v:230:y:2013:i:1:p:113-1212013-06-13RePEc:eee:ejores
RePEc:eee:ejores:v:230:y:2013:i:1:p:113-121
article
Robust weighted vertex p-center model considering uncertain data: An application to emergency management
This paper presents a generalized weighted vertex p-center (WVPC) model that represents uncertain nodal weights and edge lengths using prescribed intervals or ranges. The objective of the robust WVPC (RWVPC) model is to locate p facilities on a given set of candidate sites so as to minimize worst-case deviation in maximum weighted distance from the optimal solution. The RWVPC model is well-suited for locating urgent relief distribution centers (URDCs) in an emergency logistics system responding to quick-onset natural disasters in which precise estimates of relief demands from affected areas and travel times between URDCs and affected areas are not available. To reduce the computational complexity of solving the model, this work proposes a theorem that facilitates identification of the worst-case scenario for a given set of facility locations. Since the problem is NP-hard, a heuristic framework is developed to efficiently obtain robust solutions. Then, a specific implementation of the framework, based on simulated annealing, is developed to conduct numerical experiments. Experimental results show that the proposed heuristic is effective and efficient in obtaining robust solutions. We also examine the impact of the degree of data uncertainty on the selected performance measures and the tradeoff between solution quality and robustness. Additionally, this work applies the proposed RWVPC model to a real-world instance based on a massive earthquake that hit central Taiwan on September 21, 1999.
Uncertainty modeling; Emergency logistics; p-Center model; Robust optimization;
http://www.sciencedirect.com/science/article/pii/S0377221713002567
Lu, Chung-Cheng
oai:RePEc:eee:ejores:v:229:y:2013:i:2:p:462-4692013-06-13RePEc:eee:ejores
RePEc:eee:ejores:v:229:y:2013:i:2:p:462-469
article
Aggregation of utility-based individual preferences for group decision-making
Multi-attribute utility theory (MAUT) elicits an individual decision maker's preferences for single attributes and develops a utility function by mathematical formulation to add up the preferences over the entire set of attributes when assessing alternatives. A common aggregation method of MAUT for group decisions is the simple additive weighting (SAW) method, which does not consider the different preferential levels and preferential ranks in individual decision makers' assessments of alternatives in a decision group, and thus seems too simplistic to achieve consensus and commitment in group decision aggregation. In this paper, the preferential differences, denoting the preference degrees among different alternatives, and the preferential priorities, denoting each decision maker's favored ranking of the alternatives, are both considered and aggregated to construct utility discriminative values for assessing alternatives in a decision group. A comparative analysis is performed to compare the proposed approach to the SAW model, and a satisfaction index is used to investigate the satisfaction levels of the two resulting group decisions. In addition, a feedback interview is conducted to understand the subjective perceptions of decision makers while examining the results obtained from these two approaches for the second practical case. Both investigation results show that the proposed approach is able to achieve a more satisfying and agreeable group decision than the SAW method.
Multi-attribute utility theory; Group decision; Preference aggregation;
http://www.sciencedirect.com/science/article/pii/S0377221713001926
Huang, Yeu-Shiang
Chang, Wei-Chen
Li, Wei-Hao
Lin, Zu-Liang
oai:RePEc:eee:ejores:v:231:y:2013:i:1:p:109-1192013-08-01RePEc:eee:ejores
RePEc:eee:ejores:v:231:y:2013:i:1:p:109-119
article
An integrated guaranteed- and stochastic-service approach to inventory optimization in supply chains
Multi-echelon inventory optimization literature distinguishes stochastic- (SS) and guaranteed-service (GS) approaches as mutually exclusive frameworks. While the GS approach considers flexibility measures at the stages to deal with stockouts, the SS approach relies only on safety stock. Within a supply chain, flexibility levels might differ between stages, rendering them appropriate candidates for one approach or the other. The existing approaches, however, require the selection of a single framework for the entire supply chain instead of a stage-wise choice. We develop an integrated hybrid-service (HS) approach which endogenously determines the overall cost-optimal approach for each stage and computes the required inventory levels. We present a dynamic programming optimization algorithm for serial supply chains that partitions the entire system into subchains of different types. From a numerical study we find that, besides implicitly choosing the better of the two pure frameworks, whose cost differences can be considerable, the HS approach enables additional pipeline and on-hand stock cost savings. We further identify drivers for the preferability of the HS approach.
Inventory; Multi-echelon; Guaranteed service; Stochastic service; Partitioning;
http://www.sciencedirect.com/science/article/pii/S0377221713004311
Klosterhalfen, Steffen T.
Dittmar, Daniel
Minner, Stefan
oai:RePEc:eee:ejores:v:231:y:2013:i:1:p:162-1702013-08-01RePEc:eee:ejores
RePEc:eee:ejores:v:231:y:2013:i:1:p:162-170
article
Information granulation and uncertainty measures in interval-valued intuitionistic fuzzy information systems
Information granulation and entropy are the main approaches to investigating the uncertainty of information systems, and have been widely employed in many practical domains. In this paper, information granulation and uncertainty measures for interval-valued intuitionistic fuzzy binary granular structures are addressed. First, we propose a representation of interval-valued intuitionistic fuzzy information granules and examine some operations on interval-valued intuitionistic fuzzy granular structures. Second, the interval-valued intuitionistic fuzzy information granularity is introduced to depict the discriminating ability of an interval-valued intuitionistic fuzzy granular structure (IIFGS), as a natural extension of fuzzy information granularity. Third, we discuss how to measure the uncertainty of an IIFGS using an extended information entropy, and the uncertainty among interval-valued intuitionistic fuzzy granular structures using an expanded mutual information derived from the presented intuitionistic fuzzy information entropy. Fourth, we establish the relationship between the developed interval-valued intuitionistic fuzzy information entropy and the intuitionistic fuzzy information granularity presented in this paper.
Uncertainty modeling; Granular computing; Uncertainty measure; Granularity; Entropy;
http://www.sciencedirect.com/science/article/pii/S0377221713003901
Huang, Bing
Zhuang, Yu-liang
Li, Hua-xiong
oai:RePEc:eee:ejores:v:231:y:2013:i:1:p:190-2012013-08-01RePEc:eee:ejores
RePEc:eee:ejores:v:231:y:2013:i:1:p:190-201
article
A fuzzy set-based approach to origin–destination matrix estimation in urban traffic networks with imprecise data
An important issue in the management of urban traffic networks is the estimation of origin–destination (O–D) matrices whose entries represent the travel demands of network users. We discuss the challenges of O–D matrix estimation with incomplete, imprecise data. We propose a fuzzy set-based approach that utilises successive linear approximation. The fuzzy sets used have triangular membership functions that are easy to interpret and enable straightforward calibration of the parameters that weight the discrepancy between observed data and those predicted by the proposed approach. The method is potentially useful when prior O–D matrix entry estimates are unavailable or scarce, requiring trip generation information on origin departures and/or destination arrivals, leading to multiple modelling alternatives. The method may also be useful when there is no O–D matrix that can be user-optimally assigned to the network to reproduce observed link counts exactly. The method has been tested on some numerical examples from the literature and the results compare favourably with the results of earlier methods. It has also been successfully used to estimate O–D matrices for a practical urban traffic network in Brazil.
Traffic; O–D matrix estimation; Successive linear approximation; Linear programming; Fuzzy sets;
http://www.sciencedirect.com/science/article/pii/S0377221713004116
Foulds, Les R.
do Nascimento, Hugo A.D.
Calixto, Iacer C.A.C.
Hall, Bryon R.
Longo, Humberto
oai:RePEc:eee:ejores:v:231:y:2013:i:1:p:171-1812013-08-01RePEc:eee:ejores
RePEc:eee:ejores:v:231:y:2013:i:1:p:171-181
article
Multi-criteria semantic dominance: A linguistic decision aiding technique based on incomplete preference information
A linguistic decision aiding technique for multi-criteria decision making is presented. We define a relation between alternatives called multi-criteria semantic dominance (MCSD). It adopts an idea similar to stochastic dominance, utilizing partial information about the decision maker’s preferences, which may be only ordinal or partially cardinal. MCSD rules based on three typical types of semanteme functions are introduced and proven. Using these rules, the alternatives under consideration are divided into two mutually exclusive sets, an efficient set and an inefficient set. A decision maker with such a semanteme function will never choose an alternative from the corresponding inefficient set as the optimal one. In this way, when linguistic decision information is analyzed, the inherent fuzziness of preference can be handled and several controversial operations on linguistic terms can be avoided. An example is provided to illustrate the procedure of the proposed method.
Linguistic modeling; Semantic dominance; Decision analysis; Multi-criteria; Multi-attribute;
http://www.sciencedirect.com/science/article/pii/S0377221713003895
Yang, Wu-E
Wang, Jian-Qiang
oai:RePEc:eee:ejores:v:231:y:2013:i:2:p:507-5112013-08-01RePEc:eee:ejores
RePEc:eee:ejores:v:231:y:2013:i:2:p:507-511
article
Short communication: DEA based auctions simulations
In this paper we use simulations to numerically evaluate the Hybrid DEA – Second Score Auction. In a procurement setting, the winner of the Hybrid auction by design receives a payment at most equal to that of the Second Score auction. It is therefore superior to the traditional Second Score scheme from the point of view of a principal interested in acquiring an item at the minimum price without losing quality. For a set of parameters we quantify the size of the improvements and show that the improvement depends intimately on the regularity imposed on the underlying cost function. In the least structured case of a variable returns to scale technology, the hybrid auction improves the outcome in only a small percentage of cases. For other technologies with constant returns to scale, the gains are considerably higher and payments are lowered in a large percentage of cases. We also show that the number of participating agents, the concavity of the principal’s value function, and the number of quality dimensions affect the expected payment.
Multi-dimensional auctions; Data envelopment analysis; Second score auction; Yardstick competition; Hybrid auction;
http://www.sciencedirect.com/science/article/pii/S0377221713004773
Papakonstantinou, Athanasios
Bogetoft, Peter
oai:RePEc:eee:ejores:v:231:y:2013:i:2:p:299-3132013-08-01RePEc:eee:ejores
RePEc:eee:ejores:v:231:y:2013:i:2:p:299-313
article
A reduction dynamic programming algorithm for the bi-objective integer knapsack problem
This paper presents a backward state reduction dynamic programming algorithm for generating the exact Pareto frontier for the bi-objective integer knapsack problem. The algorithm is developed addressing a reduced problem built after applying variable fixing techniques based on the core concept. First, an approximate core is obtained by eliminating dominated items. Second, the items included in the approximate core are subject to the reduction of the upper bounds by applying a set of weighted-sum functions associated with the efficient extreme solutions of the linear relaxation of the multi-objective integer knapsack problem. Third, the items are classified according to the values of their upper bounds; items with zero upper bounds can be eliminated. Finally, the remaining items are used to form a mixed network with different upper bounds. The numerical results obtained from different types of bi-objective instances show the effectiveness of the mixed network and associated dynamic programming algorithm.
Multi-objective programming; Integer knapsack problem; Dynamic programming; Dominance relation; Core concept; State reduction;
http://www.sciencedirect.com/science/article/pii/S0377221713004621
Rong, Aiying
Figueira, José Rui
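The exact-Pareto-frontier idea in the abstract above can be illustrated in a much simplified form. The sketch below (an illustration only, not the paper's core-based state-reduction algorithm, and for the 0/1 rather than the integer variant) builds dynamic-programming states item by item and filters out dominated states:

```python
def pareto_knapsack(items, capacity):
    """items: list of (weight, profit1, profit2) tuples.
    Returns the sorted nondominated (profit1, profit2) vectors of the
    bi-objective 0/1 knapsack, via DP states filtered by dominance."""
    states = {(0, 0, 0)}  # (weight used, profit1, profit2)
    for w, a, b in items:
        new = set(states)
        for wu, p1, p2 in states:
            if wu + w <= capacity:
                new.add((wu + w, p1 + a, p2 + b))
        # keep only states not dominated in weight and both profits
        states = {s for s in new
                  if not any(t != s and t[0] <= s[0] and
                             t[1] >= s[1] and t[2] >= s[2] for t in new)}
    front = {(p1, p2) for _, p1, p2 in states}
    # final dominance filter in objective space
    return sorted(v for v in front
                  if not any(u != v and u[0] >= v[0] and u[1] >= v[1]
                             for u in front))
```

Intermediate dominance filtering is what keeps the state space tractable; the paper's contribution lies in shrinking it much further via the core concept and bound reduction.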
oai:RePEc:eee:ejores:v:231:y:2013:i:2:p:337-3482013-08-01RePEc:eee:ejores
RePEc:eee:ejores:v:231:y:2013:i:2:p:337-348
article
Joint control of production, remanufacturing, and disposal activities in a hybrid manufacturing–remanufacturing system
To generate insights into how production of new items and remanufacturing and disposal of returned products can be effectively coordinated, we develop a model of a hybrid manufacturing–remanufacturing system. Formulating the model as a Markov decision process, we investigate the structure of the optimal policy that jointly controls production, remanufacturing, and disposal decisions. Considering the average profit maximization criterion, we show that the joint optimal policy can be characterized by three monotone switching curves. Moreover, we show that there exist serviceable (i.e., as-new) and remanufacturing (i.e., returned) inventory thresholds beyond which production cannot be optimal but disposal is always optimal. We also identify conditions under which idling and disposal actions are always optimal when the system is empty. Using numerical comparisons between models with and without remanufacturing and disposal options, we generate insights into the benefit of utilizing these options. To effectively coordinate production, remanufacturing, and disposal activities, we propose a simple, implementable, and yet effective heuristic policy. Our extensive numerical results suggest that the proposed heuristic can greatly help firms to effectively coordinate their production, remanufacturing, and disposal activities and thereby reduce their operational costs.
Remanufacturing; Return disposal; Production; Inventory; Markov decision process; Heuristic;
http://www.sciencedirect.com/science/article/pii/S0377221713004712
Kim, Eungab
Saghafian, Soroush
Van Oyen, Mark P.
oai:RePEc:eee:ejores:v:231:y:2013:i:2:p:349-3612013-08-01RePEc:eee:ejores
RePEc:eee:ejores:v:231:y:2013:i:2:p:349-361
article
Discrete Malliavin calculus and computations of greeks in the binomial tree
This paper proposes new methods for the computation of greeks using the binomial tree and the discrete Malliavin calculus. In the last decade, the Malliavin calculus has come to be considered one of the main tools in financial mathematics. It is particularly important in the computation of greeks using Monte Carlo simulations. In previous studies, greeks were usually represented by expectation formulas derived from the Malliavin calculus, and these expectations were computed using Monte Carlo simulations. On the other hand, the binomial tree approach can also be used to compute these expectations. In this article, we employ the discrete Malliavin calculus to obtain expectation formulas for greeks by the binomial tree method. All the results are obtained in an elementary manner.
Discrete Malliavin calculus; European options; Greeks; Binomial tree;
http://www.sciencedirect.com/science/article/pii/S0377221713004372
Muroi, Yoshifumi
Suda, Shintaro
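To make the binomial-tree route to greeks concrete, here is a minimal sketch (a standard CRR construction, not the paper's discrete-Malliavin expectation formulas): price a European call on a recombining tree and read off delta from the two first-step nodes. The parameter values in the usage are arbitrary.

```python
import math

def crr_call(S, K, r, sigma, T, n):
    """Price a European call on an n-step Cox-Ross-Rubinstein tree
    (n >= 2) and estimate delta from the first-step node values."""
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    p = (math.exp(r * dt) - d) / (u - d)   # risk-neutral up probability
    disc = math.exp(-r * dt)
    # terminal payoffs, indexed by number of up moves j
    values = [max(S * u**j * d**(n - j) - K, 0.0) for j in range(n + 1)]
    v_up = v_down = 0.0
    # backward induction, capturing the time-dt node values for delta
    for step in range(n, 0, -1):
        values = [disc * (p * values[j + 1] + (1 - p) * values[j])
                  for j in range(step)]
        if step == 2:
            v_down, v_up = values[0], values[1]
    price = values[0]
    delta = (v_up - v_down) / (S * u - S * d)  # first-step finite difference
    return price, delta
```

With S=100, K=100, r=0.05, sigma=0.2, T=1 and a few hundred steps, the price and delta approach the Black–Scholes values (roughly 10.45 and 0.637).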
oai:RePEc:eee:ejores:v:231:y:2013:i:2:p:263-2732013-08-01RePEc:eee:ejores
RePEc:eee:ejores:v:231:y:2013:i:2:p:263-273
article
Improving an interior-point approach for large block-angular problems by hybrid preconditioners
The computational time required by interior-point methods is often dominated by the solution of linear systems of equations. An efficient specialized interior-point algorithm for primal block-angular problems has been used to solve these systems by combining Cholesky factorizations for the block constraints and a conjugate gradient based on a power series preconditioner for the linking constraints. In some problems this power series preconditioner turned out to be inefficient on the last interior-point iterations, when the systems become ill-conditioned. In this work that approach is combined with a splitting preconditioner based on LU factorization, which works well for the last interior-point iterations. Computational results are provided for three classes of problems: multicommodity flows (oriented and nonoriented), minimum-distance controlled tabular adjustment for statistical data protection, and the minimum congestion problem. The results show that, in most cases, the hybrid preconditioner improves the performance and robustness of the interior-point solver. In particular, for some block-angular problems the solution time is reduced by a factor of 10.
Interior-point methods; Large-scale optimization; Preconditioned conjugate gradient; Structured problems;
http://www.sciencedirect.com/science/article/pii/S0377221713003056
Bocanegra, Silvana
Castro, Jordi
Oliveira, Aurelio R.L.
oai:RePEc:eee:ejores:v:231:y:2013:i:2:p:288-2982013-08-01RePEc:eee:ejores
RePEc:eee:ejores:v:231:y:2013:i:2:p:288-298
article
Heuristic for the rectangular two-dimensional single stock size cutting stock problem with two-staged patterns
Two-staged patterns are often used in manufacturing industries to divide stock plates into rectangular items. A heuristic algorithm is presented to solve the rectangular two-dimensional single stock size cutting stock problem with two-staged patterns. It uses the column-generation method to solve the residual problems repeatedly, until the demands of all items are satisfied. Each pattern is generated using a procedure for the constrained single large object placement problem to guarantee the convergence of the algorithm. The computational results of benchmark and practical instances indicate the following: (1) the algorithm can solve most instances to optimality, with the gap to optimality being at most one plate for those solutions whose optimality is not proven and (2) for the instances tested, the algorithm is more efficient (on average) in reducing the number of plates used than a published algorithm and a commercial stock cutting software package.
Cutting; Two-dimensional cutting stock; Two-staged patterns; Column generation;
http://www.sciencedirect.com/science/article/pii/S0377221713004591
Cui, Yaodong
Zhao, Zhigang
oai:RePEc:eee:ejores:v:231:y:2013:i:2:p:465-4732013-08-01RePEc:eee:ejores
RePEc:eee:ejores:v:231:y:2013:i:2:p:465-473
article
Specification and estimation of multiple output technologies: A primal approach
This paper addresses the specification and estimation of a multiple-output, multiple-input production technology in the presence of technical inefficiency. The primary focus is on primal formulations. Several competing specifications, such as the production function, the input (output) distance function, and the input requirement function, are considered. We show that all these specifications come from the same transformation function and are algebraically identical. We also show that: (i) unless the transformation function is separable (i.e., outputs are separable from inputs), the input (output) ratios in the input (output) distance function cannot be treated as exogenous (uncorrelated with technical inefficiency), resulting in inconsistent estimates of the input (output) distance function parameters; and (ii) even if input (output) ratios are exogenous, estimation of the input (output) distance function will result in inconsistent parameter estimates if outputs (inputs) are endogenous. We address endogeneity and instrumental variable issues in detail in the context of flexible (translog) functional forms. Estimation of several specifications using both single-equation and system approaches is discussed using Norwegian dairy farming data.
Production function; Distance function; Input requirement function; Transformation function;
http://www.sciencedirect.com/science/article/pii/S0377221713004189
Kumbhakar, Subal C.
oai:RePEc:eee:ejores:v:231:y:2013:i:2:p:481-4912013-08-01RePEc:eee:ejores
RePEc:eee:ejores:v:231:y:2013:i:2:p:481-491
article
Stochastic non-convex envelopment of data: Applying isotonic regression to frontier estimation
Isotonic nonparametric least squares (INLS) is a regression method for estimating a monotonic function by fitting a step function to data. In the literature of frontier estimation, the free disposal hull (FDH) method is similarly based on the minimal assumption of monotonicity. In this paper, we link these two separately developed nonparametric methods by showing that FDH is a sign-constrained variant of INLS. We also discuss the connections to related methods such as data envelopment analysis (DEA) and convex nonparametric least squares (CNLS). Further, we examine alternative ways of applying isotonic regression to frontier estimation, analogous to corrected and modified ordinary least squares (COLS/MOLS) methods known in the parametric stream of frontier literature. We find that INLS is a useful extension to the toolbox of frontier estimation both in the deterministic and stochastic settings. In the absence of noise, the corrected INLS (CINLS) has a higher discriminating power than FDH. In the case of noisy data, we propose to apply the method of non-convex stochastic envelopment of data (non-convex StoNED), which disentangles inefficiency from noise based on the skewness of the INLS residuals. The proposed methods are illustrated by means of simulated examples.
Data envelopment analysis; Free disposal hull; Nonparametric regression; Productive efficiency analysis; Stochastic frontier analysis;
http://www.sciencedirect.com/science/article/pii/S0377221713004748
Keshvari, Abolfazl
Kuosmanen, Timo
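The monotone least-squares building block referenced above can be sketched with the classical pool-adjacent-violators algorithm, which solves the isotonic regression problem for ordered data (an illustration of INLS's core fitting step only, not the paper's CINLS or non-convex StoNED estimators):

```python
def pava(y):
    """Pool Adjacent Violators: least-squares fit of a non-decreasing
    step function to the sequence y (unit weights)."""
    blocks = []  # each block is [sum of values, count]
    for v in y:
        blocks.append([v, 1])
        # merge backwards while block means violate monotonicity
        while (len(blocks) > 1 and
               blocks[-2][0] / blocks[-2][1] > blocks[-1][0] / blocks[-1][1]):
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    fit = []
    for s, c in blocks:
        fit.extend([s / c] * c)  # each block contributes its mean
    return fit
```

For example, `pava([1, 3, 2, 4])` pools the violating pair (3, 2) into their mean 2.5, yielding a non-decreasing fit.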
oai:RePEc:eee:ejores:v:231:y:2013:i:1:p:79-872013-08-01RePEc:eee:ejores
RePEc:eee:ejores:v:231:y:2013:i:1:p:79-87
article
Order sequencing on a unidirectional cyclical picking line
A real-life order-picking configuration that requires multiple pickers to cyclically move around fixed locations in a single direction is considered. This configuration is not the same as, but shows similarities to, the unidirectional carousel systems described in the literature. The problem of minimising the pickers’ travel distance to pick all orders on this system is a variant of the clustered travelling salesman problem. An integer programming (IP) formulation of this problem cannot be solved in a realistic time frame for real-life instances of the problem. A relaxation of this IP formulation is proposed that can be used to determine a lower bound on an optimal solution. It is shown that the solution obtained from this relaxation can always be transformed into a feasible solution for the IP formulation that is, at most, within one pick cycle of the lower bound. The computational results and performance of the proposed methods, as well as adapted order sequencing approaches for bidirectional carousel systems from the literature, are compared by means of real-life historical data instances obtained from a retail distribution centre.
Distribution; Order-picking; Sequencing; Combinatorial optimization; Unidirectional carousel;
http://www.sciencedirect.com/science/article/pii/S0377221713004104
Matthews, Jason
Visagie, Stephan
oai:RePEc:eee:ejores:v:231:y:2013:i:1:p:229-2412013-08-01RePEc:eee:ejores
RePEc:eee:ejores:v:231:y:2013:i:1:p:229-241
article
Lifetime maximization in wireless directional sensor network
This paper addresses two versions of a lifetime maximization problem for target coverage with wireless directional sensor networks. The sensors used in these networks have a maximum sensing range and a limited sensing angle. In the first problem version, predefined sensing directions are assumed to be given, whereas sensing directions can be freely devised in the second problem version. In that case, a polynomial-time algorithm is provided for building sensing directions that allow the network lifetime to be maximized. A column generation algorithm is proposed for both problem versions, the subproblem being addressed with a hybrid approach based on a genetic algorithm and an integer linear programming formulation. Numerical results show that addressing the second problem version allows for significant improvements in terms of network lifetime, while the computational effort is comparable for both problem versions.
Wireless sensor networks; Directional sensors; Column generation; Heuristics;
http://www.sciencedirect.com/science/article/pii/S0377221713004323
Rossi, André
Singh, Alok
Sevaux, Marc
oai:RePEc:eee:ejores:v:231:y:2013:i:1:p:151-1612013-08-01RePEc:eee:ejores
RePEc:eee:ejores:v:231:y:2013:i:1:p:151-161
article
The uses of qualitative data in multimethodology: Developing causal loop diagrams during the coding process
In this research note we describe a method for exploring the creation of causal loop diagrams (CLDs) from the coding trees developed through a grounded theory approach and using computer aided qualitative data analysis software (CAQDAS). The theoretical background to the approach is multimethodology, in line with Mingers’ description of paradigm crossing, and it is appropriately situated within the Appreciate and Analyse phases of PSM intervention. The practical use of this method has been explored and three case studies are presented from the domains of organisational change and entrepreneurial studies. The value of this method is twofold: (i) it has the potential to improve dynamic sensibility in the process of qualitative data analysis, and (ii) it can provide a more rigorous approach to developing CLDs in the formation stage of system dynamics modelling. We propose that the further development of this method requires its implementation within CAQDAS packages so that CLD creation, as a precursor to full system dynamics modelling, is contemporaneous with coding and consistent with a bridging strategy of paradigm crossing.
Multimethodology; Paradigm crossing; Qualitative data analysis; Causal loop diagrams (CLDs); Computer aided qualitative data analysis software (CAQDAS); Problem structuring methods (PSMs);
http://www.sciencedirect.com/science/article/pii/S037722171300386X
Yearworth, Mike
White, Leroy
oai:RePEc:eee:ejores:v:231:y:2013:i:1:p:182-1892013-08-01RePEc:eee:ejores
RePEc:eee:ejores:v:231:y:2013:i:1:p:182-189
article
A real options approach to labour shifts planning under different service level targets
Firms that experience uncertainty in demand as well as challenging service levels face, among other things, the problem of managing employee shift numbers. Decisions regarding shift numbers often involve significant expansions or reductions in capacity, in response to changes in demand. In this paper, we quantify the impact of treating shifts in workforce expansion as investments, while considering required service level improvements. The decision to increase shifts, whether by employing temporary workers or hiring permanent employees, is one that involves significant risks. Traditional theories typically consider reversible investments, and thus do not capture the idiosyncrasies involved in shift management, in which costs are not fully reversible. In our study, by using real options theory, we quantify managers’ ability to consider this irreversibility, aiming to enable them to make shift decisions under conditions of uncertainty with the maximum level of flexibility. Our model aims to help managers make more accurate decisions with regard to shift expansion under service level targets, and to defer commitment until future uncertainties can be at least partially resolved. Overall, our investigation contributes to studies on the time required to introduce labour shift changes, while keeping the value of service level improvements in mind.
OR in manpower planning; Uncertainty modelling; Investment analysis; Timing options;
http://www.sciencedirect.com/science/article/pii/S0377221713003925
Fernandes, Rui
Gouveia, Borges
Pinho, Carlos
oai:RePEc:eee:ejores:v:231:y:2013:i:2:p:362-3702013-08-01RePEc:eee:ejores
RePEc:eee:ejores:v:231:y:2013:i:2:p:362-370
article
A computationally efficient state-space partitioning approach to pricing high-dimensional American options via dimension reduction
This paper studies the problem of pricing high-dimensional American options. We propose a method based on the state-space partitioning algorithm developed by Jin et al. (2007) and a dimension-reduction approach introduced by Li and Wu (2006). By applying the approach in the present paper, the computational efficiency of pricing high-dimensional American options is significantly improved, compared to the extant approaches in the literature, without sacrificing estimation precision. Various numerical examples are provided to illustrate the accuracy and efficiency of the proposed method. Pseudocode for an implementation of the proposed approach is also included.
High dimensional American-style option; Dimension reduction; Stochastic dynamic programming;
http://www.sciencedirect.com/science/article/pii/S0377221713004347
Jin, Xing
Li, Xun
Tan, Hwee Huat
Wu, Zhenyu
oai:RePEc:eee:ejores:v:231:y:2013:i:2:p:257-2622013-08-01RePEc:eee:ejores
RePEc:eee:ejores:v:231:y:2013:i:2:p:257-262
article
Surrogate duality for robust optimization
Robust optimization problems, which have uncertain data, are considered. We prove surrogate duality theorems for robust quasiconvex optimization problems and surrogate min–max duality theorems for robust convex optimization problems. We give necessary and sufficient constraint qualifications for surrogate duality and surrogate min–max duality, and show some examples at which such duality results are used effectively. Moreover, we obtain a surrogate duality theorem and a surrogate min–max duality theorem for semi-definite optimization problems in the face of data uncertainty.
Nonlinear programming; Quasiconvex programming; Robust optimization;
http://www.sciencedirect.com/science/article/pii/S0377221713001999
Suzuki, Satoshi
Kuroiwa, Daishi
Lee, Gue Myung
oai:RePEc:eee:ejores:v:231:y:2013:i:2:p:492-5022013-08-01RePEc:eee:ejores
RePEc:eee:ejores:v:231:y:2013:i:2:p:492-502
article
Pricing and advertisement in a manufacturer–retailer supply chain
We use a game theoretical approach to study pricing and advertisement decisions in a manufacturer–retailer supply chain when price discounts are offered by both the manufacturer and retailer. When the manufacturer is the leader of the game, we obtain the Stackelberg equilibrium with the manufacturer’s local allowance, national brand name investment, manufacturer’s preferred price discount, retailer’s price discount, and local advertising expense. For the special case of a two-stage equilibrium when the manufacturer’s price discount is exogenous, we find that the retailer is willing to increase local advertising expense if the manufacturer increases the local advertising allowance and provides a deeper price discount, or if the manufacturer decreases its brand name investment. When both the manufacturer and retailer have power, the Nash equilibrium in a competition game is obtained. The comparison between the Nash equilibrium and the Stackelberg equilibrium shows that the manufacturer always prefers the Stackelberg equilibrium, but there is no definitive conclusion for the retailer. Bargaining power can be used to determine the profit sharing between the manufacturer and the retailer. Once the profit sharing is determined, we suggest a simple contract to help the manufacturer and retailer obtain their desired profit sharing.
Supply chain; Advertising; Price discount; Game theory; Contract;
http://www.sciencedirect.com/science/article/pii/S0377221713004761
Yue, Jinfeng
Austin, Jill
Huang, Zhimin
Chen, Bintong
oai:RePEc:eee:ejores:v:231:y:2013:i:1:p:131-1402013-08-01RePEc:eee:ejores
RePEc:eee:ejores:v:231:y:2013:i:1:p:131-140
article
Decision tree analysis for a risk averse decision maker: CVaR Criterion
Risk aversion is a prevalent phenomenon when sufficiently large amounts are at risk. In this paper, we introduce a new prescriptive approach for coping with risk in sequential decision problems with discrete scenario space. We use Conditional Value-at-Risk (CVaR) risk measure as optimization criterion and prove that there is an explicit linear representation of the proposed model for the problem.
Discrete scenario space; Sequential decision making; Conditional Value-at-Risk; Risk aversion;
http://www.sciencedirect.com/science/article/pii/S0377221713003470
Eskandarzadeh, Saman
Eshghi, Kourosh
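The CVaR criterion on a discrete scenario space can be made concrete with a generic sketch (an illustration of the risk measure itself, not the paper's explicit linear representation for sequential problems): CVaR at level alpha is the expected loss in the worst 1 − alpha probability tail.

```python
def cvar(losses, probs, alpha):
    """CVaR_alpha of a discrete loss distribution: the expected loss
    conditional on being in the worst (1 - alpha) probability tail."""
    tail = 1.0 - alpha
    acc, remaining = 0.0, tail
    # walk scenarios from the largest loss down, filling the tail mass
    for loss, p in sorted(zip(losses, probs), key=lambda lp: -lp[0]):
        take = min(p, remaining)
        acc += take * loss
        remaining -= take
        if remaining <= 1e-12:
            break
    return acc / tail
```

For four scenarios with losses (100, 50, 10, 0) and probabilities (0.05, 0.05, 0.45, 0.45), CVaR at alpha = 0.9 averages the two worst scenarios: (0.05·100 + 0.05·50)/0.1 = 75.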
oai:RePEc:eee:ejores:v:231:y:2013:i:1:p:69-782013-08-01RePEc:eee:ejores
RePEc:eee:ejores:v:231:y:2013:i:1:p:69-78
article
The two-machine flowshop scheduling problem with sequence-independent setup times: New lower bounding strategies
The two-machine flowshop environment with sequence-independent setup times has been intensely investigated from both theoretical and practical perspectives in the scheduling literature. Nevertheless, very scant attention has been devoted to deriving effective lower bounding strategies. In this paper, we propose new lower bounds for the total completion time minimization criterion. These bounds are based on three relaxation schemes, namely the waiting time-based relaxation scheme, the single machine-based relaxation scheme, and the Lagrangian relaxation scheme. An extensive computational study carried out on instances with up to 500 jobs reveals that embedding the waiting time-based bounding strategy within the Lagrangian relaxation framework yields the best performance while requiring negligible CPU time.
Scheduling; Two-machine flowshop; Sequence-independent setup times; Total completion time; Lower bounds; Lagrangian relaxation;
http://www.sciencedirect.com/science/article/pii/S037722171300430X
Gharbi, Anis
Ladhari, Talel
Msakni, Mohamed Kais
Serairi, Mehdi
oai:RePEc:eee:ejores:v:231:y:2013:i:1:p:57-682013-08-01RePEc:eee:ejores
RePEc:eee:ejores:v:231:y:2013:i:1:p:57-68
article
New insights on integer-programming models for the kidney exchange problem
In recent years several countries have set up policies that allow exchange of kidneys between two or more incompatible patient–donor pairs. These policies lead to what is commonly known as kidney exchange programs.
Kidney exchange program; Integer programming; Combinatorial optimization; Healthcare;
http://www.sciencedirect.com/science/article/pii/S0377221713004244
Constantino, Miguel
Klimentova, Xenia
Viana, Ana
Rais, Abdur
oai:RePEc:eee:ejores:v:231:y:2013:i:2:p:443-4512013-08-01RePEc:eee:ejores
RePEc:eee:ejores:v:231:y:2013:i:2:p:443-451
article
Accounting for slacks to measure and decompose revenue efficiency in the Spanish Designation of Origin wines with DEA
In this paper, we show how Data Envelopment Analysis (DEA) may be used to measure and decompose revenue inefficiency, taking into account all sources of technical waste, in the context of an application assessing the Spanish quality wine sector, in particular Designation of Origin (DO) wines. We try to go beyond the standard approaches, which use Shephard distance functions or directional distance functions, to provide a decomposition that incorporates slacks as a source of technical inefficiency. To accomplish this, we base our analysis on a recent approach introduced in Cooper et al. (2011a). In particular, we show how an output-oriented version of the Weighted Additive model can be used to properly identify revenue, technical, and allocative inefficiencies in Spanish DOs. In the application, we conclude that the main source of revenue inefficiency in this sector is technical waste, and that Cava can be highlighted as the DO that serves as a benchmark for the largest number of units.
Data Envelopment Analysis; Spanish wine sector; Revenue inefficiency; Slacks;
http://www.sciencedirect.com/science/article/pii/S0377221713004645
Aparicio, Juan
Borras, Fernando
Pastor, Jesus T.
Vidal, Fernando
oai:RePEc:eee:ejores:v:231:y:2013:i:2:p:274-2812013-08-01RePEc:eee:ejores
RePEc:eee:ejores:v:231:y:2013:i:2:p:274-281
article
On the convergence of inexact block coordinate descent methods for constrained optimization
We consider the problem of minimizing a smooth function over a feasible set defined as the Cartesian product of convex compact sets. We assume that the dimension of each factor set is huge, so we are interested in studying inexact block coordinate descent methods (possibly combined with column generation strategies). We define a general decomposition framework where different line search based methods can be embedded, and we state global convergence results. Specific decomposition methods based on gradient projection and Frank–Wolfe algorithms are derived from the proposed framework. The numerical results of computational experiments performed on network assignment problems are reported.
Nonlinear programming; Block coordinate descent methods; Inexact decomposition methods; Gradient projection; Frank–Wolfe;
http://www.sciencedirect.com/science/article/pii/S0377221713004669
Cassioli, A.
Di Lorenzo, D.
Sciandrone, M.
oai:RePEc:eee:ejores:v:231:y:2013:i:1:p:43-562013-08-01RePEc:eee:ejores
RePEc:eee:ejores:v:231:y:2013:i:1:p:43-56
article
A tabu search for Time-dependent Multi-zone Multi-trip Vehicle Routing Problem with Time Windows
We propose a tabu search meta-heuristic for the Time-dependent Multi-zone Multi-trip Vehicle Routing Problem with Time Windows. Two types of neighborhoods, corresponding to the two sets of decisions of the problem, together with a strategy controlling the selection of the neighborhood type for particular phases of the search, provide the means to set up and combine exploration and exploitation capabilities for the search. A diversification strategy, guided by an elite solution set and a frequency-based memory, is also used to drive the search to potentially unexplored good regions and, hopefully, enhance the solution quality. Extensive numerical experiments and comparisons with the literature show that the proposed tabu search yields very high quality solutions, improving those currently published.
Multi-trip vehicle Routing with Time Windows; Synchronization; Time-dependent demand; Tabu search;
http://www.sciencedirect.com/science/article/pii/S0377221713004256
Nguyen, Phuong Khanh
Crainic, Teodor Gabriel
Toulouse, Michel
oai:RePEc:eee:ejores:v:231:y:2013:i:2:p:381-3922013-08-01RePEc:eee:ejores
RePEc:eee:ejores:v:231:y:2013:i:2:p:381-392
article
Stochastic competitive entries and dynamic pricing
How should firms price new products when they know neither the timing nor the nature of the next competitive entry? To guide managers’ pricing decisions in such contexts, we propose a dynamic pricing model with two types of randomly timed entry, i.e. imitative and innovative. The characterization of the equilibrium strategies reveals how optimal prices vary with the manager’s knowledge about the timing of future competitive entries. We show that price skimming is not always optimal when entry dates are unknown to managers. Everything else equal, we demonstrate that the randomness of competitive entries makes forward-looking managers choose constant prices, even though the characteristics of the market would have justified skimming the demand in the normal course. Moreover, we show that the constant pricing policy remains optimal even when the incumbent’s optimal pricing strategy influences the probability of facing a competitive entry. Finally, we find that uncertainty does not necessarily hurt firms’ profits.
Optimal control; Pricing; Competitive entry; Stochastic differential games;
http://www.sciencedirect.com/science/article/pii/S037722171300413X
Rubel, Olivier
oai:RePEc:eee:ejores:v:231:y:2013:i:1:p:242-2432013-08-01RePEc:eee:ejores
RePEc:eee:ejores:v:231:y:2013:i:1:p:242-243
article
On ratio-based RTS determination: An extension
This technical note extends the results of our recent paper [Korhonen, Soleimani-damaneh, Wallenius, EJOR 215 (2011) 431–438], for determining the RTS status of Decision Making Units in Weight-Restricted DEA models.
Data Envelopment Analysis (DEA); Returns to Scale (RTS); Weight restrictions;
http://www.sciencedirect.com/science/article/pii/S0377221713004190
Korhonen, Pekka J.
Soleimani-damaneh, Majid
Wallenius, Jyrki
oai:RePEc:eee:ejores:v:231:y:2013:i:2:p:503-5062013-08-01RePEc:eee:ejores
RePEc:eee:ejores:v:231:y:2013:i:2:p:503-506
article
Endogenous production capacity investment in natural gas market equilibrium models
The large-scale natural gas equilibrium model applied in Egging, 2013 combines long-term market equilibria and investments in infrastructure while accounting for market power by certain suppliers. Such models are widely used to simulate market outcomes given different scenarios of demand and supply development, environmental regulations and investment options in natural gas and other resource markets.
Natural gas; Equilibrium model; Endogenous investment; Capacity expansion; Logarithmic cost function;
http://www.sciencedirect.com/science/article/pii/S0377221713004657
Huppmann, Daniel
oai:RePEc:eee:ejores:v:231:y:2013:i:2:p:474-4802013-08-01RePEc:eee:ejores
RePEc:eee:ejores:v:231:y:2013:i:2:p:474-480
article
Shadow pricing of undesirable outputs in nonparametric analysis
For three decades a growing interest in the modeling of desirable and undesirable outputs has led to a theoretical and methodological debate in the nonparametric literature on production technology and efficiency. The first main discussion is about the way of modeling ‘bads/undesirables’ as inputs or outputs, or by transformation functions. The second debate concerns the implications of the weak disposability assumption in the modeling of bad outputs, in particular the possibility of assigning unexpected signs to shadow prices of bad outputs. In addition, we point out an error in the current modeling of weak disposability under a variable returns to scale technology. In this paper we introduce a hybrid model to ensure the economically meaningful jointness of good and bad outputs while constraining shadow prices of bad outputs to their expected sign. We argue that it is a sound compromise to model undesirable outputs with a meaningful primal/dual economic interpretation. Finally we propose an extension to define shadow prices for undesirable outputs following the Law of One Price (LoOP) rule.
Data Envelopment Analysis; Undesirable outputs; Weak disposability; Shadow prices;
http://www.sciencedirect.com/science/article/pii/S037722171300427X
Leleu, Hervé
oai:RePEc:eee:ejores:v:231:y:2013:i:2:p:452-4642013-08-01RePEc:eee:ejores
RePEc:eee:ejores:v:231:y:2013:i:2:p:452-464
article
A hybrid metaheuristic method for the Maximum Diversity Problem
The Maximum Diversity Problem (MDP) consists in selecting a subset of m elements from a given set of n elements (n>m) in such a way that the sum of the pairwise distances between the m chosen elements is maximized. We present a hybrid metaheuristic algorithm (denoted by MAMDP) for MDP. The algorithm uses a dedicated crossover operator to generate new solutions and a constrained neighborhood tabu search procedure for local optimization. MAMDP also applies a distance-and-quality based replacement strategy to maintain population diversity. Extensive evaluations on a large set of 120 benchmark instances show that the proposed approach competes very favorably with the current state-of-the-art methods for MDP. In particular, it consistently and easily attains all the best known lower bounds and yields improved lower bounds for 6 large MDP instances. The key components of MAMDP are analyzed to shed light on their influence on the performance of the algorithm.
Maximum Diversity Problem; Solution combination; Local search; Constrained neighborhood; Population diversity;
http://www.sciencedirect.com/science/article/pii/S0377221713004682
Wu, Qinghua
Hao, Jin-Kao
oai:RePEc:eee:ejores:v:231:y:2013:i:1:p:98-1082013-08-01RePEc:eee:ejores
RePEc:eee:ejores:v:231:y:2013:i:1:p:98-108
article
Joint-optimization of inventory policies on a multi-product multi-echelon pharmaceutical system with batching and ordering constraints
This paper presents a methodology to find near-optimal joint inventory control policies for the real case of a one-warehouse, n-retailer distribution system of infusion solutions at a University Medical Center in France. We consider stochastic demand, batching and order-up-to level policies as well as aspects particular to the healthcare setting such as emergency deliveries, required service level rates and a new constraint on the ordering policy that fits best the hospital’s interests instead of abstract ordering costs. The system is modeled as a Markov chain with an objective to minimize the stock-on-hand value for the overall system. We provide the analytical structure of the model to show that the optimal reorder point of the policy at both echelons is easily derived from a simple probability calculation. We also show that the optimal policy at the care units is to set the order-up-to level one unit higher than the reorder point. We further demonstrate that optimizing the care units in isolation is optimal for the joint multi-echelon, n-retailer problem. A heuristic algorithm is presented to find the near-optimal order-up-to level of the policy of each product at the central pharmacy; all other policy parameters are guaranteed optimal via the structure provided by the model. Comparison of our methodology versus that currently in place at the hospital showed a reduction of approximately 45% in the stock-on-hand value while still respecting the service level requirements.
Multi-echelon; Joint optimization; Health care logistics; Markov processes; Inventory/distribution problem; OR in health services;
http://www.sciencedirect.com/science/article/pii/S0377221713004293
Guerrero, W.J.
Yeung, T.G.
Guéret, C.
oai:RePEc:eee:ejores:v:231:y:2013:i:2:p:282-2872013-08-01RePEc:eee:ejores
RePEc:eee:ejores:v:231:y:2013:i:2:p:282-287
article
New complexity results for parallel identical machine scheduling problems with preemption, release dates and regular criteria
In this paper, we are interested in parallel identical machine scheduling problems with preemption and release dates in case of a regular criterion to be minimized. We show that solutions having a permutation flow shop structure are dominant if there exists an optimal solution with completion times scheduled in the same order as the release dates, or if there is no release date. We also prove that, for a subclass of these problems, the completion times of all jobs can be ordered in an optimal solution. Using these two results, we provide new results on polynomially solvable problems and hence refine the boundary between P and NP for these problems.
Scheduling; Identical machines; Preemptive problems; Dominant structure; Agreeability; Common due date;
http://www.sciencedirect.com/science/article/pii/S037722171300458X
Prot, D.
Bellenguez-Morineau, O.
Lahlou, C.
oai:RePEc:eee:ejores:v:231:y:2013:i:1:p:22-332013-08-01RePEc:eee:ejores
RePEc:eee:ejores:v:231:y:2013:i:1:p:22-33
article
An efficient evolutionary algorithm for the ring star problem
This paper addresses the ring star problem (RSP). The goal is to locate a cycle through a subset of nodes of a network aiming to minimize the sum of the cost of installing facilities on the nodes on the cycle, the cost of connecting them and the cost of assigning the nodes not on the cycle to their closest node on the cycle. A fast and efficient evolutionary algorithm is developed which is based on a new formulation of the RSP as a bilevel programming problem with one leader and two independent followers. The leader decides which nodes to include in the ring, one follower decides about the connections of the cycle and the other follower decides about the assignment of the nodes not on the cycle. The bilevel approach leads to a new form of chromosome encoding in which genes are associated to values of the upper level variables. The quality of each chromosome is evaluated by its fitness, by means of the objective function of the RSP. Hence, in order to compute the value of the lower level variables, two optimization problems are solved for each chromosome. The computational results show the efficiency of the algorithm in terms of the quality of the solutions yielded and the computing time. A study to select the best configuration of the algorithm is presented. The algorithm is tested on a set of benchmark problems providing very accurate solutions within short computing times. Moreover, for one of the problems a new best solution is found.
Ring star problem; Median cycle problem; Evolutionary algorithm; Bilevel programming;
http://www.sciencedirect.com/science/article/pii/S0377221713004128
Calvete, Herminia I.
Galé, Carmen
Iranzo, José A.
oai:RePEc:eee:ejores:v:231:y:2013:i:2:p:414-4272013-08-01RePEc:eee:ejores
RePEc:eee:ejores:v:231:y:2013:i:2:p:414-427
article
Operational issues and network effects in vaccine markets
One of the most important concerns for managing public health is the prevention of infectious diseases. Although vaccines provide the most effective means for preventing infectious diseases, there are two main reasons why it is often difficult to reach a socially optimal level of vaccine coverage: (i) the emergence of operational issues (such as yield uncertainty) on the supply side, and (ii) the existence of negative network effects on the consumption side. In particular, uncertainties about production yield and vaccine imperfections often make manufacturing some vaccines a risky process and may lead the manufacturer to produce below the socially optimal level. At the same time, negative network effects provide incentives to potential consumers to free ride off the immunity of the vaccinated population. In this research, we consider how a central policy-maker can induce a socially optimal vaccine coverage through the use of incentives to both consumers and the vaccine manufacturer. We consider a monopoly market for an imperfect vaccine; we show that a fixed two-part subsidy is unable to coordinate the market, but derive a two-part menu of subsidies that leads to a socially efficient level of coverage.
Vaccine coverage; Negative network effect; Random yield; Vaccine pricing; Vaccine subsidy;
http://www.sciencedirect.com/science/article/pii/S0377221713004335
Adida, Elodie
Dey, Debabrata
Mamani, Hamed
oai:RePEc:eee:ejores:v:231:y:2013:i:2:p:314-3272013-08-01RePEc:eee:ejores
RePEc:eee:ejores:v:231:y:2013:i:2:p:314-327
article
Strategic response to pollution taxes in supply chain networks: Dynamic, spatial, and organizational dimensions
This paper presents a model of the strategic behavior of firms operating in a spatial supply chain network. The manufacturing and retailing firms engage in an oligopolistic, noncooperative game by sharing customer demand such that a firm’s decisions impact the product prices, which in turn result in changes in all other firms’ decisions. Each firm’s payoff is to maximize its own profit and we show that, in response to such changes in prices and to exogenous environmental taxes, the manufacturing firms may strategically alter a variety of choices such as ’make-buy’ decisions with respect to intermediate inputs, spatial distribution of production, product shipment patterns and inventory management, environmental tax payment vs recycling decisions, and the timing of all such choices to sustainably manage the profit and the environmental regulations. An important implication is that the effects of a tax depend on the oligopolistic game structure. With respect to methods, we show that this dynamic game can be represented as a set of differential variational inequalities (DVIs) that motivate a computationally efficient nonlinear complementarity (NCP) approach that enables the full exploitation of the above-mentioned salient features. We also provide a numerical example that confirms the utility of our proposed framework and shows that substantial strategic reaction can be expected to a tax on pollution stocks.
Sustainable supply chain; Dynamic game; Network oligopolies; Differential variational inequalities; Nonlinear complementarity problem;
http://www.sciencedirect.com/science/article/pii/S0377221713004359
Chung, Sung H.
Weaver, Robert D.
Friesz, Terry L.
oai:RePEc:eee:ejores:v:231:y:2013:i:2:p:328-3362013-08-01RePEc:eee:ejores
RePEc:eee:ejores:v:231:y:2013:i:2:p:328-336
article
The risk-averse newsvendor problem with random capacity
We study the effect of capacity uncertainty on the inventory decisions of a risk-averse newsvendor. We consider two well-known risk criteria, namely Value-at-Risk (VaR) included as a constraint and Conditional Value-at-Risk (CVaR). For the risk-neutral newsvendor, we find that the optimal order quantity is not affected by the capacity uncertainty. However, this result does not hold for the risk-averse newsvendor problem. Specifically, we find that capacity uncertainty decreases the order quantity under the CVaR criterion. Under the VaR constraint, capacity uncertainty leads to an order decrease for low confidence levels, but to an order increase for high confidence levels. This implies that the risk criterion should be carefully selected as it has an important effect on inventory decisions. This is shown for the newsvendor problem, but is also likely to hold for other inventory control problems that future research can address.
Conditional Value-at-Risk; Value-at-Risk; Newsvendor problem; Capacity uncertainty;
http://www.sciencedirect.com/science/article/pii/S037722171300461X
Wu, Meng
Zhu, Stuart X.
Teunter, Ruud H.
oai:RePEc:eee:ejores:v:231:y:2013:i:2:p:245-2562013-08-01RePEc:eee:ejores
RePEc:eee:ejores:v:231:y:2013:i:2:p:245-256
article
A review of trade credit literature: Opportunities for research in operations
Trade credit arises when a buyer delays payment for purchased goods or services. Its nature has predominantly been an area of inquiry for researchers from the disciplines of finance, marketing, and economics but it has received relatively little attention in other domains. In our article, we provide an integrative review of the existing literature and discuss conflicting study outcomes. We organize the relevant literature into seven areas of inquiry and analyze four in detail: trade credit motives, order quantity decisions, credit term decisions, and settlement period decisions. Additionally, we derive a detailed agenda for future research in these areas.
Trade credit; Permissible delay in payment; Interface operations and finance;
http://www.sciencedirect.com/science/article/pii/S0377221713002245
Seifert, Daniel
Seifert, Ralf W.
Protopappa-Sieke, Margarita
oai:RePEc:eee:ejores:v:231:y:2013:i:1:p:141-1502013-08-01RePEc:eee:ejores
RePEc:eee:ejores:v:231:y:2013:i:1:p:141-150
article
Preference Programming with incomplete ordinal information
This paper extends possibilities for analyzing incomplete ordinal information about the parameters of an additive value function. Such information is modeled through preference statements which associate sets of alternatives or attributes with corresponding sets of rankings. These preference statements can be particularly helpful in developing a joint preference representation for a group of decision-makers who may find difficulties in agreeing on numerical parameter values. Because these statements can lead to a non-convex set of feasible parameters, a mixed integer linear formulation is developed to establish a linear model for the computation of decision recommendations. This makes it possible to complete incomplete ordinal information with other forms of incomplete information.
Multiple criteria analysis; Value tree analysis; Incomplete information; Ordinal information;
http://www.sciencedirect.com/science/article/pii/S0377221713003871
Punkka, Antti
Salo, Ahti
oai:RePEc:eee:ejores:v:231:y:2013:i:1:p:202-2092013-08-01RePEc:eee:ejores
RePEc:eee:ejores:v:231:y:2013:i:1:p:202-209
article
A new method to solve the fully connected Reserve Network Design Problem
In selecting sites for conservation purposes connectivity of habitat is important for allowing species to move freely within a protected area. The aim of the Reserve Network Design Problem is to choose a network of contiguous sites which maximises some conservation objective subject to various constraints. The problem has been solved using both heuristic and exact methods. Heuristic methods can handle much larger problems than exact methods but cannot guarantee an optimal solution. Improvements in both computer power and optimisation algorithms have increased the attractiveness of exact methods. The aim of this work is to formulate an improved algorithm for solving the Reserve Network Design Problem.
Reserve Network Design Problem; Site selection; Mixed integer programming; Contiguity; Compactness; Spatial optimisation;
http://www.sciencedirect.com/science/article/pii/S0377221713004141
Jafari, Nahid
Hearne, John
oai:RePEc:eee:ejores:v:231:y:2013:i:1:p:210-2282013-08-01RePEc:eee:ejores
RePEc:eee:ejores:v:231:y:2013:i:1:p:210-228
article
Workforce routing and scheduling for electricity network maintenance with downtime minimization
We investigate a combined routing and scheduling problem for the maintenance of electricity networks. In electricity networks power lines must be regularly maintained to ensure a high quality of service. For safety reasons a power line must be physically disconnected from the network before maintenance work can be performed. After completing maintenance work the power line must be reconnected. Each maintenance job therefore consists of multiple tasks which must be performed at different locations in the network. The goal is to assign each task to a worker and to determine a schedule such that the downtimes of power lines and the travel effort of workers are minimized. For solving this problem, we combine a Large Neighborhood Search meta-heuristic with mathematical programming techniques. The method is evaluated on a large set of test instances which are derived from network data of a German electricity provider.
OR in energy; Maintenance; Downtime minimization; Routing; Scheduling;
http://www.sciencedirect.com/science/article/pii/S0377221713004207
Goel, Asvin
Meisel, Frank
oai:RePEc:eee:ejores:v:231:y:2013:i:2:p:405-4132013-08-01RePEc:eee:ejores
RePEc:eee:ejores:v:231:y:2013:i:2:p:405-413
article
DEA production games
In this paper, linear production games are extended so that instead of assuming a linear production technology with fixed technological coefficients, the more general, non-parametric, DEA production technology is considered. Different organizations are assumed to possess their own technology and the cooperative game arises from the possibility of pooling their available inputs, collectively processing them and sharing the revenues. Two possibilities are considered: using a joint production technology that results from merging their respective technologies or each cooperating organization keeping its own technology. This gives rise to two different DEA production games, both of which are totally balanced and have a non-empty core. A simple way of computing a stable solution, using the optimal dual solution for the grand coalition, is presented. The full cooperation scenario clearly produces more benefits for the organizations involved although the implied technology sharing is not always possible. Examples of applications of the proposed approach are given.
Linear production games; Data Envelopment Analysis; Resource pooling; Technology sharing; Cooperative games;
http://www.sciencedirect.com/science/article/pii/S0377221713004736
Lozano, S.
oai:RePEc:eee:ejores:v:231:y:2013:i:1:p:88-972013-08-01RePEc:eee:ejores
RePEc:eee:ejores:v:231:y:2013:i:1:p:88-97
article
The role of store brand positioning for appropriating supply chain profit under shelf space allocation
We consider a retailer’s decision of developing a store brand (SB) version of a national brand (NB) and the role that its positioning strategy plays in appropriating the supply chain profit. Since the business of the retailer can be regarded as selling to NB manufacturers the shelf space at its disposal, we formulate a game-theoretical model of a single-retailer, single-manufacturer supply chain, where the retailer can decide whether to launch its own SB product and sells scarce shelf space to a competing NB in a consumer good category. As a result, the most likely equilibrium outcome is that the available selling amount of each brand is constrained by the shelf space available for its products and both brands coexist in the category. In this paper, we conceptualize SB positioning as involving both product quality and product features. Our analysis shows that when the NB cross-price effect is not too large, the retailer should position its SB’s quality closer to the NB, place more emphasis on its SB’s differences in features when facing a weaker NB, and less emphasis on those differences when facing a stronger NB. Our results stress the importance of SB positioning under shelf-space allocation, in order to maximize the retailer’s value appropriation across the supply chain.
Private label positioning; Shelf space management; Supply chain management; Marketing-operations interface; Competitive strategy; Game theory;
http://www.sciencedirect.com/science/article/pii/S0377221713004177
Kuo, Chia-Wei
Yang, Shu-Jung Sunny
oai:RePEc:eee:ejores:v:224:y:2013:i:3:p:542-5512012-11-29RePEc:eee:ejores
RePEc:eee:ejores:v:224:y:2013:i:3:p:542-551
article
Pricing policies for substitutable products in a supply chain with Internet and traditional channels
This study considers pricing policies in a supply chain with one manufacturer, who sells a product to an independent retailer and directly to consumers through an Internet channel. In addition to the manufacturer’s product, the retailer sells a substitute product produced by another manufacturer. Given the wholesale prices of the two substitute products, the manufacturer decides the retail price of the Internet channel, and the retailer decides the retail prices of the two substitute products. Both the manufacturer and the retailer choose their own decision variables to maximize their respective profits. This work formulates the price competition, using the settings of Nash and Stackelberg games, and derives the corresponding existence and uniqueness conditions for equilibrium solutions. A sensitivity analysis of an equilibrium solution is then conducted for the model parameters, and the profits are compared for the two game settings. The findings show that improving brand loyalty is profitable for both the manufacturer and the retailer, and that an increased service value may alleviate the threat of the Internet channel for the retailer and increase the manufacturer’s profit. The study also derives some conditions under which the manufacturer and the retailer mutually prefer the Stackelberg game. Based on these results, this study proposes an appropriate cooperation strategy for the manufacturer and retailer.
Pricing; Game theory; Supply chain management; Channel competition;
http://www.sciencedirect.com/science/article/pii/S0377221712006662
Chen, Yun Chu
Fang, Shu-Cherng
Wen, Ue-Pyng
oai:RePEc:eee:ejores:v:224:y:2013:i:2:p:293-3012012-11-29RePEc:eee:ejores
RePEc:eee:ejores:v:224:y:2013:i:2:p:293-301
article
Concepts for safety stock determination under stochastic demand and different types of random production yield
We consider a manufacturer’s stochastic production/inventory problem under periodic review and present methods for safety stock determination to cope with uncertainties that are caused by stochastic demand and different types of yield randomness. Following well-proven inventory control concepts for this problem type, we focus on a critical stock policy with a linear order release rule. A central parameter of this type of policy is given by the safety stock value. When non-zero manufacturing lead times are taken into account in the random yield context, it turns out that safety stocks have to be determined that vary from period to period. We present a simple approach for calculating these dynamic safety stocks for different yield models. Additionally, we suggest approaches for determining appropriate static safety stocks that are easier to apply in practice. In a simulation study we investigate the performance of the proposed safety stock variants.
Stochastic demand; Random yield; Yield models; Safety stocks; Linear control rule; Inventory;
http://www.sciencedirect.com/science/article/pii/S0377221712005978
Inderfurth, Karl
Vogelgesang, Stephanie
oai:RePEc:eee:ejores:v:224:y:2013:i:3:p:486-4962012-11-29RePEc:eee:ejores
RePEc:eee:ejores:v:224:y:2013:i:3:p:486-496
article
A reduction approach for solving the rectangle packing area minimization problem
In the rectangle packing area minimization problem (RPAMP) we are given a set of rectangles with known dimensions. We have to determine an arrangement of all rectangles, without overlapping, inside an enveloping rectangle of minimum area. The paper presents a generic approach for solving the RPAMP that is based on two algorithms, one for the 2D Knapsack Problem (KP), and the other for the 2D Strip Packing Problem (SPP). In this way, solving an instance of the RPAMP is reduced to solving multiple SPP and KP instances. A fast constructive heuristic is used as the SPP algorithm, while the KP algorithm is instantiated by a tree search method and a genetic algorithm alternatively. All these SPP and KP methods have been published previously. Finally, the best variants of the resulting RPAMP heuristics are combined within one procedure. The guillotine cutting condition is always observed as an additional constraint. The approach was tested on 15 well-known RPAMP instances (above all MCNC and GSRC instances) and new best solutions were obtained for 10 instances. The computational effort remains acceptable. Moreover, 24 new benchmark instances are introduced and promising results are reported.
Packing; Rectangle packing area minimization problem; Floor planning; Open dimension problem; MCNC; GSRC;
http://www.sciencedirect.com/science/article/pii/S0377221712006066
Bortfeldt, Andreas
oai:RePEc:eee:ejores:v:224:y:2013:i:2:p:240-2602012-11-29RePEc:eee:ejores
RePEc:eee:ejores:v:224:y:2013:i:2:p:240-260
article
Planning of capacities and orders in build-to-order automobile production: A review
In an attempt to achieve a competitive edge, automotive companies operate global production networks to offer an ever increasing product variety, shorter and reliable lead times as well as competitively priced products. Cars are no longer exclusively produced based on standardized product configurations and stable sales plans but are increasingly built to order to match the needs of individual customers. Operations Research (OR) may contribute towards successful build-to-order operations. This is likewise reflected by the appreciable number of published papers on industry-specific OR applications. To provide readers with an overview of these OR models and applications we identify current and future research issues based on the review of 49 works. We focus on two important planning objects which have not been considered in prior reviews: the planning of capacities and orders. To bridge the gap between conceptual works on the one hand and quantitative contributions on the other, we provide a framework for the structuring of planning tasks. Existing models are classified according to this framework and open issues that should be addressed in OR are discussed.
Production; Automotive industry; Planning framework; Capacity planning; Order-driven planning; Operations research;
http://www.sciencedirect.com/science/article/pii/S0377221712005759
Volling, Thomas
Matzke, Andreas
Grunewald, Martin
Spengler, Thomas S.
oai:RePEc:eee:ejores:v:224:y:2013:i:3:p:530-541 2012-11-29 RePEc:eee:ejores
RePEc:eee:ejores:v:224:y:2013:i:3:p:530-541
article
Project scheduling in optimizing integrated supply chain operations
A Supply Chain (SC) requires undertaking a considerable number of activities covering the flow of information and goods among multiple production and distribution cells over several tiers. The successful implementation of an SC hinges on the optimum integration and synchronization of these activities.
Supply chain management; Mixed integer program; Project networks; Activity crashing; Bill of material; Shipping modes;
http://www.sciencedirect.com/science/article/pii/S0377221712006704
Elimam, A.A.
Dodin, B.
oai:RePEc:eee:ejores:v:224:y:2013:i:3:p:614-624 2012-11-29 RePEc:eee:ejores
RePEc:eee:ejores:v:224:y:2013:i:3:p:614-624
article
Metaheuristic hybridizations for the regenerator placement and dimensioning problem in sub-wavelength switching optical networks
Physical layer impairments severely limit the reach and capacity of optical systems, thereby hampering the deployment of transparent optical networks (i.e., no electrical signal regenerators are required). Besides, the high cost and power consumption of regeneration devices make it unaffordable for network operators to consider the opaque architecture (i.e., regeneration is available at every network node). In this context, translucent architectures (i.e., regeneration is only available at selected nodes) have emerged as the most promising short-term solution to decrease costs and energy consumption in optical backbone networks. Concurrently, the coarse granularity and inflexibility of legacy optical technologies have re-fostered great interest in sub-wavelength switching optical networks, which introduce optical switching in the time domain so as to further improve resource utilization. In these networks, the complex regenerator placement and dimensioning problem emerges. In short, this problem aims at minimizing the number of electrical regenerators deployed in the network. To tackle it, in this paper both a greedy randomized adaptive search procedure and a biased random-key genetic algorithm are developed. Further, we enhance their performance by introducing both path-relinking and variable neighborhood descent as effective intensification procedures. The resulting hybridizations are compared among each other as well as against results from optimal and heuristic mixed integer linear programming formulations. Illustrative results over a broad range of network scenarios show that the biased random-key genetic algorithm working in conjunction with these two intensification mechanisms represents a compelling network planning algorithm for the design of future sub-wavelength optical networks.
OR in telecommunications; Metaheuristics; Sub-wavelength; Regenerator;
http://www.sciencedirect.com/science/article/pii/S037722171200611X
Pedrola, Oscar
Careglio, Davide
Klinkowski, Miroslaw
Velasco, Luis
Bergman, Keren
Solé-Pareta, Josep
oai:RePEc:eee:ejores:v:224:y:2013:i:2:p:362-374 2012-11-29 RePEc:eee:ejores
RePEc:eee:ejores:v:224:y:2013:i:2:p:362-374
article
The Vessel Schedule Recovery Problem (VSRP) – A MIP model for handling disruptions in liner shipping
Containerized transport by liner shipping companies is a multi-billion-dollar industry carrying a major part of the world trade between suppliers and customers. The liner shipping industry has come under stress in the last few years due to the economic crisis, increasing fuel costs, and capacity outgrowing demand. The push to reduce CO2 emissions and costs has increasingly committed liner shipping to slow-steaming policies. This increased focus on fuel consumption has illuminated the huge impacts of operational disruptions in liner shipping on both costs and delayed cargo. Disruptions can occur due to adverse weather conditions, port contingencies, and many other issues. A common scenario for recovering a schedule is either to increase the speed at the cost of a significant increase in fuel consumption or to delay cargo. Advanced recovery options might exist by swapping two port calls or even omitting one. We present the Vessel Schedule Recovery Problem (VSRP) to evaluate a given disruption scenario and to select a recovery action balancing the trade-off between increased bunker consumption and the impact on cargo in the remaining network and the customer service level. It is proven that the VSRP is NP-hard. The model is applied to four real-life cases from Maersk Line and results are achieved in less than 5 seconds, with solutions comparable to or superior to those chosen by operations managers in real life. Cost savings of up to 58% may be achieved by the suggested solutions compared to the realized recoveries of the real-life cases.
Disruption management; Liner shipping; Mathematical programming; Recovery;
http://www.sciencedirect.com/science/article/pii/S037722171200639X
Brouer, Berit D.
Dirksen, Jakob
Pisinger, David
Plum, Christian E.M.
Vaaben, Bo
oai:RePEc:eee:ejores:v:224:y:2013:i:3:p:435-448 2012-11-29 RePEc:eee:ejores
RePEc:eee:ejores:v:224:y:2013:i:3:p:435-448
article
Rich routing problems arising in supply chain management
The purpose of this paper is to provide basic models for highly relevant extensions of the classical vehicle routing problem in the context of supply chain management. The classical vehicle routing problem is extended in various ways. We especially focus on extensions with respect to lot-sizing, scheduling, packing, batching, inventory, and intermodality. The proposed models allow for a more efficient use of resources, while explicitly taking into account interdependencies among the subproblems. The contribution of this survey is twofold: (i) it provides an overview of recent and suitable literature for the interested scholar, and (ii) it presents six integrative models for the above-mentioned extensions.
Routing; Modeling; Survey; Supply chain management;
http://www.sciencedirect.com/science/article/pii/S0377221712006376
Schmid, Verena
Doerner, Karl F.
Laporte, Gilbert
oai:RePEc:eee:ejores:v:224:y:2013:i:2:p:273-282 2012-11-29 RePEc:eee:ejores
RePEc:eee:ejores:v:224:y:2013:i:2:p:273-282
article
Competitive food supply chain networks with application to fresh produce
In this paper, we develop a network-based food supply chain model under oligopolistic competition and perishability, with a focus on fresh produce. The model incorporates food deterioration through the introduction of arc multipliers, with the inclusion of the discarding costs associated with the disposal of the spoiled food products. We allow for product differentiation due to product freshness and food safety concerns, as well as the evaluation of alternative technologies associated with various supply chain activities. We then propose an algorithm with elegant features for computation. A case study focused on the cantaloupe market is investigated within this modeling and computational framework, in which we analyze different scenarios prior to, during, and after a foodborne disease outbreak.
Food supply chains; Fresh produce; Oligopolistic competition; Food deterioration; Product differentiation; Supply chain management;
http://www.sciencedirect.com/science/article/pii/S0377221712005747
Yu, Min
Nagurney, Anna
oai:RePEc:eee:ejores:v:224:y:2013:i:2:p:313-323 2012-11-29 RePEc:eee:ejores
RePEc:eee:ejores:v:224:y:2013:i:2:p:313-323
article
Control and system-theoretic identification of the supply chain dynamics domain for planning, analysis and adaptation of performance under uncertainty
The analysis of how to achieve planned economic performance in a real-time, uncertain and perturbed execution environment is a vital and up-to-date issue in many supply chains. Although it is intuitive that uncertainty is likely to have impacts on performance, the research on systematic terminology and quantitative analysis in this domain is rather limited as compared with the well-established domain of supply chain optimal planning. This study is among the first to address the operative perspective of the supply chain dynamics domain. The methodology of this conceptual paper is based on the business and technical literature analysis and fundamentals of control and systems theory. In contributing to the existing studies in this domain, the paper proposes a possible systemization and classification of related terminology from different theoretical perspectives, and important practical problems. For the supply chain dynamics domain, the paper identifies and groups possible problem classes of research, corresponding quantitative methods, and describes the general mathematical formulations. The results of this study may be of interest to both academics and practitioners.
Supply chain; Control; System dynamics; Robustness; Resilience; Adaptation;
http://www.sciencedirect.com/science/article/pii/S0377221712006443
Ivanov, Dmitry
Sokolov, Boris
oai:RePEc:eee:ejores:v:224:y:2013:i:2:p:404-413 2012-11-29 RePEc:eee:ejores
RePEc:eee:ejores:v:224:y:2013:i:2:p:404-413
article
Contingent payment auction mechanism in multidimensional procurement auctions
This paper considers the auctioning of an indivisible project among several suppliers who hold private information about their own efficiency type. Both quality and price need to be determined. Differing from scoring auctions, we present a new method, the contingent payment auction mechanism (CPAM), which can effectively deal with the optimal procurement strategy in multidimensional procurement auctions. CPAM implements the optimal mechanism for the buyer and is thus optimal among all possible procurement strategies. Under CPAM, the buyer first designs and announces a contingent payment function that specifies a payment for each possible quality level before the bidding begins. Compared to scoring auctions, CPAM has some advantages. It does not require a special form of scoring rule and can be generalized to a broader range of auction formats. Furthermore, it can help solve the ex post moral hazard problem. We consider two kinds of CPAM. Because CPAM I is sensitive to different auction formats, we propose CPAM II, which improves on the performance of CPAM I. Broadly speaking, CPAM integrates the idea of dimension reduction from scoring auctions with that of incentive contract design from contract theory to solve the problem of ex post moral hazard.
Multidimensional auction; Procurement auction; Incentive contract; Contingent payment;
http://www.sciencedirect.com/science/article/pii/S0377221712006029
Wang, Hong
oai:RePEc:eee:ejores:v:224:y:2013:i:3:p:572-582 2012-11-29 RePEc:eee:ejores
RePEc:eee:ejores:v:224:y:2013:i:3:p:572-582
article
Step by step. The benefits of stage-based R&D licensing contracts
We examine how a licensor can optimally design licensing contracts for multi-phase R&D projects when he does not know the licensee’s project valuation, leading to adverse selection, and cannot enforce the licensee’s effort level, resulting in moral hazard. We focus on the effect of the phased nature typical of such projects, and compare single-phase and multi-phase contracts. We determine the optimal values for the upfront payment, milestone payments and royalties, and the optimal timing for outlicensing. Including multiple milestones and accompanying payments can be an effective way of discriminating between licensees holding different valuations, without having to manipulate the royalty rate, which induces licensees to invest less, resulting in lower project values and socially suboptimal solutions. Interestingly, we also find that multiple milestone payments are beneficial even when the licensor is risk-averse, contrary to standard contract theory results, which recommend that only an upfront payment should be used. In terms of licensing timing, we show that the optimal time depends on the licensor’s risk aversion, the characteristics of the licensee and the project value.
Research and development; Innovation; Contract design; Asymmetric information; Industries; Pharmaceutical;
http://www.sciencedirect.com/science/article/pii/S0377221712006789
Crama, Pascale
De Reyck, Bert
Degraeve, Zeger
oai:RePEc:eee:ejores:v:224:y:2013:i:2:p:392-403 2012-11-29 RePEc:eee:ejores
RePEc:eee:ejores:v:224:y:2013:i:2:p:392-403
article
Supply chains in the presence of store brands
Increased competition from store brands is forcing manufacturers to re-evaluate their strategies in regard to pricing and contracting with trade intermediaries. We analyze a supply chain in which a retailer accepts (with the appropriate contractual agreements) a national brand for resale and then determines whether to introduce a store brand, how to price the store brand, and what quantities of the product(s) to order. We show that when the national brand’s cost per unit quality (CPUQ) is larger than the store brand’s CPUQ, then the retailer seeks to introduce the store brand (SB) and the national brand (NB) manufacturer/supplier is unable to deter him from doing so. We find that the efficiency loss in the decentralized supply chain becomes smaller when a store brand is introduced. Recognizing the inadequacy of standard contracts in coordinating this supply chain, we propose a simple minimum order quantity contract that can coordinate this supply chain.
Supply chain management; Private labels; Supply chain coordination;
http://www.sciencedirect.com/science/article/pii/S0377221712005966
Fang, Xiang
Gavirneni, Srinagesh
Rao, Vithala R.
oai:RePEc:eee:ejores:v:224:y:2013:i:2:p:414-424 2012-11-29 RePEc:eee:ejores
RePEc:eee:ejores:v:224:y:2013:i:2:p:414-424
article
Inferring the incidence of industry inefficiency from DEA estimates
Data envelopment analysis (DEA) is among the most popular empirical tools for measuring cost and productive efficiency within an industry. Because DEA is a linear programming technique, establishing formal statistical properties for outcomes is difficult. We model the incidence of inefficiency within a population of decision making units (DMUs) as a latent variable, with DEA outcomes providing only noisy and generally inaccurate sample-based categorizations of inefficiency. We then use a Bayesian approach to infer an appropriate posterior distribution for the incidence of inefficiency within an industry based on a random sample of DEA outcomes and a prior distribution on that incidence. The approach applies to the empirically relevant case of a finite number of firms, and to sampling DMUs without replacement. It also accounts for potential mismeasurement in the DEA characterization of inefficiency within a coherent Bayesian approach to the problem. Using three different types of specialty physician practices, we provide an empirical illustration demonstrating that this approach provides appropriately adjusted inferences regarding the incidence of inefficiency within an industry.
Data envelopment analysis; Applied probability; Bayesian inference;
http://www.sciencedirect.com/science/article/pii/S0377221712006030
Friesner, Daniel
Mittelhammer, Ron
Rosenman, Robert
oai:RePEc:eee:ejores:v:224:y:2013:i:2:p:333-339 2012-11-29 RePEc:eee:ejores
RePEc:eee:ejores:v:224:y:2013:i:2:p:333-339
article
A basic formula for performance gradient estimation of semi-Markov decision processes
This paper presents a basic formula for performance gradient estimation of semi-Markov decision processes (SMDPs) under average-reward criterion. This formula directly follows from a sensitivity equation in perturbation analysis. With this formula, we develop three sample-path-based gradient estimation algorithms by using a single sample path. These algorithms naturally extend many gradient estimation algorithms for discrete-time Markov systems to continuous time semi-Markov models. In particular, they require less storage than the algorithm in the literature.
Markov processes; Semi-Markov decision processes; Sample-path-based gradient estimation; Perturbation analysis;
http://www.sciencedirect.com/science/article/pii/S0377221712006108
Li, Yanjie
Cao, Fang
oai:RePEc:eee:ejores:v:224:y:2013:i:2:p:324-332 2012-11-29 RePEc:eee:ejores
RePEc:eee:ejores:v:224:y:2013:i:2:p:324-332
article
A semi-preemptive priority scheduling discipline: Performance analysis
In this paper, we present an in-depth analytical study of a semi-preemptive priority scheduling discipline. This discipline eliminates the deficits of both the full- and non-preemptive versions. Under the non-preemptive category, in particular, higher-priority customers may have to wait even when the service of a lower-priority customer has just started, while under the full-preemptive discipline, the almost completed service of a lower-priority customer may be interrupted due to the arrival of higher-priority customers, possibly causing a large extra delay. For fixed low-priority service times, the semi-preemptive priority scheduling discipline shows a performance gain of up to 6% compared to the full- and non-preemptive versions.
Priority scheduling; (Non-)preemptive; Performance analysis; Cost function;
http://www.sciencedirect.com/science/article/pii/S037722171200608X
Walraevens, Joris
Maertens, Tom
Bruneel, Herwig
oai:RePEc:eee:ejores:v:224:y:2013:i:2:p:302-312 2012-11-29 RePEc:eee:ejores
RePEc:eee:ejores:v:224:y:2013:i:2:p:302-312
article
Last time buy decisions for products sold under warranty
Manufacturers supplying products under warranty need a strategy to deal with failures during the warranty period: repair the product or replace it by a new one, depending on e.g. age and/or usage of the failed product. An (implicit) assumption in virtually all models is that new products to replace the failed ones are immediately available at given replacement costs. Because of the short life cycles of many products, manufacturing may be discontinued before the end of the warranty period. At that point in time, the supplier has to decide how many products to put on the shelf to replace failed products under warranty that will be returned from the field (the last time buy decision). This is a trade-off between product availability for replacement and costs of product obsolescence. In this paper, we consider the joint optimization of repair-replacement decisions and the last time buy quantity for products sold under warranty. We develop approximations to estimate the total relevant costs and service levels for this problem, and show that we can easily find near-optimal last time buy quantities using a numerical search. Comparison to discrete event simulation results shows an excellent performance of our methods.
Reliability; Inventory; Maintenance; Warranty; Spare parts; Last time buy;
http://www.sciencedirect.com/science/article/pii/S037722171200598X
van der Heijden, Matthieu
Iskandar, Bermawi P.
oai:RePEc:eee:ejores:v:224:y:2013:i:3:p:507-519 2012-11-29 RePEc:eee:ejores
RePEc:eee:ejores:v:224:y:2013:i:3:p:507-519
article
Pricing decisions for complementary products with firms’ different market powers
This article reports the results of a study that explores the pricing problems with regard to two complementary products in a supply chain with two manufacturers and one common retailer. The authors establish five pricing models under decentralized decision cases, including the MS-Bertrand, MS-Stackelberg, RS-Bertrand, RS-Stackelberg, and NG models, with consideration of different market power structures among channel members. By applying a game-theoretical approach, corresponding analytic solutions are obtained. Then, by comparing the maximum profits and optimal pricing decisions obtained in different decision cases, interesting and valuable managerial insights are established.
Pricing; Complementary products; Market power; Stackelberg game;
http://www.sciencedirect.com/science/article/pii/S0377221712006741
Wei, Jie
Zhao, Jing
Li, Yongjian
oai:RePEc:eee:ejores:v:224:y:2013:i:2:p:425-434 2012-11-29 RePEc:eee:ejores
RePEc:eee:ejores:v:224:y:2013:i:2:p:425-434
article
Developing a measure of risk adjusted revenue (RAR) in credit cards market: Implications for customer relationship management
Current models of customer lifetime value (CLV) consider the discounted value of profits that a customer generates over an expected lifetime of relationship with the firm. This practice can be misleading in the financial services markets because it ignores the risk posed by the customer (such as delinquency and default). Specifically, in the credit card market, the correlation between revenue and risk is positive. Therefore, firms need to adjust a customer’s profits for the associated risk before developing a measure of customer lifetime value. We propose a new measure, risk adjusted revenue (RAR), that can incorporate multiple sources of risk and demonstrate the usefulness of the proposed measure in correctly assessing the value of a customer in the credit card market. The model can be extended to compute risk adjusted lifetime value (RALTV). We use the RAR metric to understand the effectiveness of different modes of acquisition, and of retention strategies such as affinity cards and reward cards. We find that both reward- and affinity-cardholders generate higher RAR than non-reward and non-affinity cardholders respectively. The ordering of different modes of acquisition with respect to RAR (in decreasing order) is as follows: Internet, direct mail, telesales, and direct selling.
Customer relationship management; Data envelopment analysis; Credit cards; Acquisition strategies; Retention strategies;
http://www.sciencedirect.com/science/article/pii/S0377221712006078
Singh, Shweta
Murthi, B.P.S.
Steffes, Erin
oai:RePEc:eee:ejores:v:224:y:2013:i:3:p:603-613 2012-11-29 RePEc:eee:ejores
RePEc:eee:ejores:v:224:y:2013:i:3:p:603-613
article
Using a multi-criteria decision aid methodology to implement sustainable development principles within an organization
The implementation of Sustainable Development (SD) within an organization is a difficult task, because it requires dealing with conflicting and incommensurable aspects such as the environmental, economic, and social dimensions. In this paper we have used a Multi-Criteria Decision Aid (MCDA) methodology to cope with these difficulties. MCDA methodology offers the opportunity to avoid monetary valuation of the different dimensions of SD. These dimensions are not substitutable for one another and all have a role to play. There is an abundance of possible aggregation procedures in MCDA methodology. In this paper we propose an innovative method to choose a suitable aggregation procedure for SD problems. Real-life case studies of the implementation of an outranking approach (i.e., ELECTRE) and of a mono-criterion synthesis approach (i.e., MAUT approaches based on the Choquet integral) were conducted to rank, respectively, 22 SD strategic actions within an expertise institute and 20 practical operational actions to control the energy consumption of the institute’s buildings.
Sustainable development indicators; Sustainable development action plan; Multi-criteria decision aid; ELECTRE and Choquet integral;
http://www.sciencedirect.com/science/article/pii/S037722171200642X
Merad, Myriam
Dechy, Nicolas
Serir, Lisa
Grabisch, Michel
Marcel, Frédéric
oai:RePEc:eee:ejores:v:224:y:2013:i:3:p:592-602 2012-11-29 RePEc:eee:ejores
RePEc:eee:ejores:v:224:y:2013:i:3:p:592-602
article
Citizen coproduction and efficient public good provision: Theory and evidence from local public libraries
In both public administration and economics, efficiency is brought forward as an important criterion for evaluating administrative actions. Clearly, its value as an assessment principle depends on our ability to adequately measure efficiency. This article argues that citizens’ coproduction of public services requires a careful reassessment of how we approach the measurement of productive efficiency in public service delivery. Theoretically, we illustrate that using observable outcomes (e.g., library circulation, school results, health outcomes, fires extinguished, and crimes solved) as output indicators is inappropriate and leads to biased estimates of public service providers’ productive efficiency. This bias arises because citizens co-determine final outputs, leaving them at least partly beyond the service providers’ control. Empirically, we find supportive evidence of both the existence and importance of such ‘demand-induced’ bias.
Citizen coproduction; Public service provision; Technical efficiency; Local government; Libraries;
http://www.sciencedirect.com/science/article/pii/S0377221712006650
De Witte, Kristof
Geys, Benny
oai:RePEc:eee:ejores:v:224:y:2013:i:2:p:283-292 2012-11-29 RePEc:eee:ejores
RePEc:eee:ejores:v:224:y:2013:i:2:p:283-292
article
Optimal policies for inventory systems with two types of product sharing common hardware platforms: Single period and finite horizon
In this paper, we consider an inventory system whose products share a common hardware platform but are differentiated by two types of software. The choice of software results in different installation costs and different selling prices for the whole product. Products with different software also face different customer demands. We investigate the optimal proportion of an order to be installed with software 1 or 2 that maximizes expected profit in the single- and multiple-period scenarios, respectively. The optimal policy is analytically obtained and proved to be an order-up-to policy in each scenario. Our investigation reveals that whether to replenish, and how much to replenish, each product depends not only on its own initial inventory level and system parameters, but also on the initial inventory level of the other product. We perform numerical experiments using the optimal policies derived in the paper.
Inventory; Dynamic programming; Optimal policies; Software;
http://www.sciencedirect.com/science/article/pii/S0377221712005954
Chou, Mabel
Sim, Chee-Khian
Yuan, Xue-Ming
oai:RePEc:eee:ejores:v:224:y:2013:i:3:p:560-565 2012-11-29 RePEc:eee:ejores
RePEc:eee:ejores:v:224:y:2013:i:3:p:560-565
article
An incremental least squares algorithm for large scale linear classification
In this work we consider the problem of training a linear classifier by assuming that the number of data points is huge (in particular, the data may be larger than the memory capacity). We propose to adopt a linear least-squares formulation of the problem and an incremental recursive algorithm which requires storing a square matrix (whose dimension is equal to the number of features of the data). The algorithm (very simple to implement) converges to the solution using each training data point once, so that it effectively handles possible memory issues and is a viable method for linear large-scale classification and for real-time applications, provided that the number of features of the data is not too large (say, of the order of thousands). Extensive computational experiments show that the proposed algorithm is at least competitive with state-of-the-art algorithms for large-scale linear classification.
Large scale optimization; Machine learning; Linear classification; Incremental algorithms;
http://www.sciencedirect.com/science/article/pii/S0377221712006674
Cassioli, A.
Chiavaioli, A.
Manes, C.
Sciandrone, M.
oai:RePEc:eee:ejores:v:224:y:2013:i:3:p:466-476 2012-11-29 RePEc:eee:ejores
RePEc:eee:ejores:v:224:y:2013:i:3:p:466-476
article
The rollon–rolloff waste collection vehicle routing problem with time windows
This study introduces a rollon–rolloff waste collection vehicle routing problem involving large containers that accumulate huge amounts of garbage at construction sites and shopping districts. In this problem, tractors move one container at a time between customer locations, a depot, disposal facilities, and container storage yards. The complicated constraints discussed in this study arise from having multiple disposal facilities, multiple container storage yards, seven service types of customer demands, different time windows for customer demands and facilities, various types and sizes of containers, and the lunch break of tractor drivers. In addition, real-world issues, such as changing service types, multiple demands at a customer’s location, and tractors with different work schedules, are dealt with. This study proposes a large neighborhood search based iterative heuristic approach consisting of several algorithms for the problem. The effectiveness of the proposed methods is demonstrated by computational experiments using benchmark data, some instances of which are derived from real-world problems.
Routing; Rollon–rolloff; Waste collection; Skip collection problem; Large neighborhood search; Heuristics;
http://www.sciencedirect.com/science/article/pii/S0377221712006649
Wy, Juyoung
Kim, Byung-In
Kim, Seongbae
oai:RePEc:eee:ejores:v:224:y:2013:i:2:p:375-391 2012-11-29 RePEc:eee:ejores
RePEc:eee:ejores:v:224:y:2013:i:2:p:375-391
article
Risk neutral and risk averse Stochastic Dual Dynamic Programming method
In this paper we discuss risk neutral and risk averse approaches to multistage (linear) stochastic programming problems based on the Stochastic Dual Dynamic Programming (SDDP) method. We give a general description of the algorithm and present computational studies related to planning of the Brazilian interconnected power system.
Multistage stochastic programming; Dynamic equations; Stochastic Dual Dynamic Programming; Sample average approximation; Risk averse; Average Value-at-Risk;
http://www.sciencedirect.com/science/article/pii/S0377221712006455
Shapiro, Alexander
Tekaya, Wajdi
da Costa, Joari Paulo
Soares, Murilo Pereira
oai:RePEc:eee:ejores:v:224:y:2013:i:3:p:583-591 2012-11-29 RePEc:eee:ejores
RePEc:eee:ejores:v:224:y:2013:i:3:p:583-591
article
A dual bin-packing approach to scheduling surgical cases at a publicly-funded hospital
Publicly-funded hospitals are typically allocated an annual budget by the government based on the number of enrollees in the region. Given tight budget constraints, the capacity of resources is fairly fixed. Such hospitals strive to maximize the utilization of their resources through continuous improvement and optimization techniques. We address a surgical case scheduling problem experienced at a publicly-funded hospital and conceptualize this multi-period, multi-resource, priority-based case scheduling problem as an unequal-sized, multi-bin, multi-dimensional dual bin-packing problem. A mixed integer programming model and a heuristic based on the first-fit-decreasing algorithm are presented. Resource availability, case priorities, and variation in surgery times are key features included in our model. Our proposed approach led to substantial savings, a 20% reduction in the number of days and up to a 20% increase in operating room utilization, when compared to real schedules obtained from the surgical department at a publicly-funded hospital.
Healthcare; Hospitals; Surgical case scheduling; Bin-packing; Optimization; Heuristic;
http://www.sciencedirect.com/science/article/pii/S037722171200673X
Vijayakumar, Bharathwaj
Parikh, Pratik J.
Scott, Rosalyn
Barnes, April
Gallimore, Jennie
oai:RePEc:eee:ejores:v:224:y:2013:i:3:p:566-571 2012-11-29 RePEc:eee:ejores
RePEc:eee:ejores:v:224:y:2013:i:3:p:566-571
article
Consistency for the additive efficient normalization of semivalues
This paper contributes to consistency for the additive efficient normalization of semivalues. Motivated by the fact that the additive efficient normalization of a semivalue is a B-revision of the Shapley value, we introduce the B-reduced game, which is an extension of Sobolev’s reduced game. Then the additive efficient normalization of a semivalue is axiomatized as the unique value satisfying covariance, symmetry, and B-consistency. Furthermore, by means of path-independent linear consistency together with standardness for two-person games, the additive efficient normalization of semivalues is also characterized. In addition, the additive efficient normalization of semivalues is directly verified to coincide with the linear consistent least square values (see Ruiz et al., 1998).
Game theory; Additive efficient normalization of semivalues; Shapley value; B-consistency; Linear consistency;
http://www.sciencedirect.com/science/article/pii/S0377221712006418
Xu, Genjiu
Driessen, Theo S.H.
Sun, Hao
Su, Jun
oai:RePEc:eee:ejores:v:233:y:2014:i:1:p:114-1242013-11-21RePEc:eee:ejores
RePEc:eee:ejores:v:233:y:2014:i:1:p:114-124
article
Product variety and channel structure strategy for a retailer-Stackelberg supply chain
Motivated by the observations that the direct sales channel is increasingly used for customized products and that retailers wield channel leadership, we develop in this paper a retailer-Stackelberg pricing model to investigate the product variety and channel structure strategies of the manufacturer in a circular spatial market. To avoid channel conflict, we consider the commonly observed case where the indirect channel sells standard products whereas the direct channel offers custom products. Our analytical results indicate that if the reservation price in the indirect channel is sufficiently low, adding the direct channel raises the unit wholesale price and retail price in the indirect channel due to customization in the direct channel. Although dual channels for the retailer may dominate the single indirect channel, we find that the manufacturer’s motivation to use dual channels decreases with the unit production cost, while it increases with (i) the marginal cost of variety, (ii) the retailer’s marginal selling cost, and (iii) the customer’s fit cost. Interestingly, our equilibrium analysis demonstrates that the manufacturer is more likely to use dual channels under the retailer-Stackelberg channel leadership scenario than under the manufacturer-Stackelberg scenario if offering a greater variety is very expensive. When offering a greater variety is inexpensive, the decentralization of the indirect channel may invert the manufacturer’s channel structure decision. Furthermore, endogenizing product variety will also invert the channel structure decision if the standard product’s reservation price is sufficiently low.
Supply chain management; Product variety; Customization; Dual channels; Game theory;
http://www.sciencedirect.com/science/article/pii/S0377221713007224
Xiao, Tiaojun
Choi, Tsan-Ming
Cheng, T.C.E.
oai:RePEc:eee:ejores:v:233:y:2014:i:1:p:64-742013-11-21RePEc:eee:ejores
RePEc:eee:ejores:v:233:y:2014:i:1:p:64-74
article
The single machine serial batch scheduling problem with rejection to minimize total completion time and total rejection cost
We study a scheduling problem with rejection on a single serial batching machine, where the objectives are to minimize the total completion time and the total rejection cost. We consider four different problem variations. The first is to minimize the sum of the two objectives. The second and the third are to minimize one objective, given an upper bound on the value of the other objective and the last is to find a Pareto-optimal solution for each Pareto-optimal point. We provide a polynomial time procedure to solve the first variation and show that the three other variations are NP-hard. For solving the three NP-hard problems, we construct a pseudo-polynomial time algorithm. Finally, for one of the NP-hard variants of the problem we propose an FPTAS, provided some conditions hold.
Batch scheduling; Bicriteria scheduling; Rejection; Total completion time;
http://www.sciencedirect.com/science/article/pii/S0377221713006784
Shabtay, Dvir
oai:RePEc:eee:ejores:v:233:y:2014:i:1:p:234-2452013-11-21RePEc:eee:ejores
RePEc:eee:ejores:v:233:y:2014:i:1:p:234-245
article
Improvements to a large neighborhood search heuristic for an integrated aircraft and passenger recovery problem
Because most commercial passenger airlines operate on a hub-and-spoke network, small disturbances can cause major disruptions in their planned schedules and have a significant impact on their operational costs and performance. When a disturbance occurs, the airline often applies a recovery policy in order to quickly resume normal operations. We present in this paper a large neighborhood search heuristic to solve an integrated aircraft and passenger recovery problem. The problem consists of creating new aircraft routes and passenger itineraries to produce a feasible schedule during the recovery period. The method is based on an existing heuristic, developed in the context of the 2009 ROADEF Challenge, which alternates between three phases: construction, repair and improvement. We introduce a number of refinements in each phase so as to perform a more thorough search of the solution space. The resulting heuristic performs very well on the instances introduced for the challenge, obtaining the best known solution for 17 out of 22 instances within five minutes of computing time and for 21 out of 22 instances within 10 minutes of computing time.
Airline recovery; Fleet assignment; Aircraft routing; Passenger itineraries; Large neighborhood search;
http://www.sciencedirect.com/science/article/pii/S0377221713007182
Sinclair, Karine
Cordeau, Jean-François
Laporte, Gilbert
oai:RePEc:eee:ejores:v:233:y:2014:i:1:p:1-152013-11-21RePEc:eee:ejores
RePEc:eee:ejores:v:233:y:2014:i:1:p:1-15
article
Multimodal freight transportation planning: A literature review
Multimodal transportation offers an advanced platform for more efficient, reliable, flexible, and sustainable freight transportation. Planning such a complicated system provides interesting areas in Operations Research. This paper presents a structured overview of the multimodal transportation literature from 2005 onward. We focus on the traditional strategic, tactical, and operational levels of planning, where we present the relevant models and their developed solution techniques. We conclude our review paper with an outlook to future research directions.
Freight transportation planning; Multimodal; Intermodal; Co-modal; Synchromodal; Review;
http://www.sciencedirect.com/science/article/pii/S0377221713005638
SteadieSeifi, M.
Dellaert, N.P.
Nuijten, W.
Van Woensel, T.
Raoufi, R.
oai:RePEc:eee:ejores:v:233:y:2014:i:1:p:184-1922013-11-21RePEc:eee:ejores
RePEc:eee:ejores:v:233:y:2014:i:1:p:184-192
article
Portfolio optimization in a regime-switching market with derivatives
We consider the optimal asset allocation problem in a continuous-time regime-switching market. The problem is to maximize the expected utility of the terminal wealth of a portfolio that contains an option, an underlying stock and a risk-free bond. The difficulty that arises in our setting is finding a way to represent the return of the option by the returns of the stock and the risk-free bond in an incomplete regime-switching market. To overcome this difficulty, we introduce a functional operator to generate a sequence of value functions, and then show that the optimal value function is the limit of this sequence. The explicit form of each function in the sequence can be obtained by solving an auxiliary portfolio optimization problem in a single-regime market; the original optimal value function can then be approximated by taking the limit. Additionally, we show that the optimal value function is a solution to a dynamic programming equation, which leads to explicit forms for the optimal value function and the optimal portfolio process. Furthermore, we demonstrate that, as long as the current state of the Markov chain is given, it remains optimal for an investor in a multiple-regime market to simply allocate his/her wealth in the same way as in a single-regime market.
Functional operator; Elasticity approach; Portfolio optimization; Regime switching; Dynamic programming principle;
http://www.sciencedirect.com/science/article/pii/S0377221713007170
Fu, Jun
Wei, Jiaqin
Yang, Hailiang
oai:RePEc:eee:ejores:v:233:y:2014:i:1:p:159-1702013-11-21RePEc:eee:ejores
RePEc:eee:ejores:v:233:y:2014:i:1:p:159-170
article
A scenario-based stochastic model for supplier selection in global context with multiple buyers, currency fluctuation uncertainties, and price discounts
Supplier networks in the global context, with price discounts and uncertain fluctuations of currency exchange rates, have become critical in today’s world economy. We study the problem of supplier selection in the presence of uncertain fluctuations of currency exchange rates and price discounts. We specifically consider a buyer with multiple sites sourcing a product from heterogeneous suppliers and address both the supplier selection and purchased quantity decisions. Suppliers are located worldwide and pricing is offered in the suppliers’ local currencies. Exchange rates from the local currencies of the suppliers to the standard currency of the buyer are subject to uncertain fluctuations over time. In addition, suppliers offer discounts as a function of the total quantity bought by the different customer sites over the time horizon, irrespective of the quantity purchased by each site.
Supplier selection; Currency fluctuation uncertainty; Multiple buyers; Price discounts; Global purchasing;
http://www.sciencedirect.com/science/article/pii/S0377221713006851
Hammami, Ramzi
Temponi, Cecilia
Frein, Yannick
oai:RePEc:eee:ejores:v:233:y:2014:i:1:p:23-332013-11-21RePEc:eee:ejores
RePEc:eee:ejores:v:233:y:2014:i:1:p:23-33
article
On distributional robust probability functions and their computations
Consider a random vector, and assume that a set of its moments information is known. Among all possible distributions obeying the given moments constraints, the envelope of the probability distribution functions is introduced in this paper as distributional robust probability function. We show that such a function is computable in the bi-variate case under some conditions. Connections to the existing results in the literature and its applications in risk management are discussed as well.
Risk management; Distributional robust; Moment bounds; Semidefinite programming (SDP); Conic programming;
http://www.sciencedirect.com/science/article/pii/S0377221713007285
Wong, Man Hong
Zhang, Shuzhong
oai:RePEc:eee:ejores:v:233:y:2014:i:1:p:145-1582013-11-21RePEc:eee:ejores
RePEc:eee:ejores:v:233:y:2014:i:1:p:145-158
article
Issues Mapping: A problem structuring method for addressing science and technology conflicts
There are new opportunities for the application of problem structuring methods to address science and technology risk conflicts through stakeholder dialogue. Most previous approaches to addressing risk conflicts have been developed from a traditional risk communication perspective, which tends to construct engagement between stakeholders based on the assumption that scientists evaluate technologies using facts, and lay participants do so based on their values. ‘Understanding the facts’ is generally privileged, so the value framings of experts often remain unexposed, and the perspectives of lay participants are marginalized. When this happens, risk communication methodologies fail to achieve authentic dialogue and can exacerbate conflict. This paper introduces ‘Issues Mapping’, a problem structuring method that enables dialogue by using visual modelling techniques to clarify issues and develop mutual understanding between stakeholders. A case study of the first application of Issues Mapping is presented, which engaged science and community protagonists in the genetic engineering debate in New Zealand. Participant and researcher evaluations suggest that Issues Mapping helped to break down stereotypes of both scientists and environmental activists; increased mutual understanding; reduced conflict; identified common ground; started building trust; and supported the emergence of policy options that all stakeholders in the room could live with. The paper ends with some reflections and priorities for further research.
Problem structuring methods; Issues mapping; Science and technology conflicts; Risk communication; Dialogue; Genetic engineering;
http://www.sciencedirect.com/science/article/pii/S0377221713006772
Cronin, Karen
Midgley, Gerald
Jackson, Laurie Skuba
oai:RePEc:eee:ejores:v:233:y:2014:i:1:p:263-2722013-11-21RePEc:eee:ejores
RePEc:eee:ejores:v:233:y:2014:i:1:p:263-272
article
Pricing and market segmentation using opaque selling mechanisms
In opaque selling certain characteristics of the product or service are hidden from the consumer until after purchase, transforming a differentiated good into somewhat of a commodity. Opaque selling has become popular in service pricing as it allows firms to sell their differentiated products at higher prices to regular brand loyal customers while simultaneously selling to non-loyal customers at discounted prices. We develop a stylized model of consumer choice that illustrates the role of opaque selling in market segmentation. We model a firm selling a product via three selling channels: a regular full information channel, an opaque posted price channel and an opaque bidding channel where consumers specify the price they are willing to pay. We illustrate the segmentation created by opaque selling as well as compare optimal revenues and prices for sellers using regular full information channels with those using opaque selling mechanisms in conjunction with regular channels. We also study the segmentation and policy changes induced by capacity constraints.
Revenue management; Marketing: pricing; Segmentation; Auctions; Buyer behavior;
http://www.sciencedirect.com/science/article/pii/S0377221713006838
Anderson, Chris K.
Xie, Xiaoqing
oai:RePEc:eee:ejores:v:233:y:2014:i:1:p:193-2072013-11-21RePEc:eee:ejores
RePEc:eee:ejores:v:233:y:2014:i:1:p:193-207
article
Customer acceptance mechanisms for home deliveries in metropolitan areas
Efficient and reliable home delivery is crucial for the economic success of online retailers. This is especially challenging for attended home deliveries in metropolitan areas where logistics service providers face congested traffic networks and customers expect deliveries in tight delivery time windows. Our goal is to develop and compare strategies that maximize the profits of a logistics service provider by accepting as many delivery requests as possible, while assessing the potential impact of a request on the service quality of a delivery tour. Several acceptance mechanisms are introduced, differing in the amount of travel time information that is considered in the decision of whether a delivery request can be accommodated or not. A real-world inspired simulation framework is used for comparison of acceptance mechanisms with regard to profits and service quality. Computational experiments utilizing this simulation framework investigate the effectiveness of acceptance mechanisms and help identify when more advanced travel time information may be worth the additional data collection and computational efforts.
Routing; Home delivery; Feasibility check; Congestion; City logistics;
http://www.sciencedirect.com/science/article/pii/S0377221713006930
Ehmke, Jan Fabian
Campbell, Ann Melissa
oai:RePEc:eee:ejores:v:233:y:2014:i:1:p:208-2192013-11-21RePEc:eee:ejores
RePEc:eee:ejores:v:233:y:2014:i:1:p:208-219
article
Optimal two-phase vaccine allocation to geographically different regions under uncertainty
In this article, we consider a decision process in which vaccination is performed in two phases to contain the outbreak of an infectious disease in a set of geographic regions. In the first phase, a limited number of vaccine doses are allocated to each region; in the second phase, additional doses may be allocated to regions in which the epidemic has not been contained. We develop a simulation model to capture the epidemic dynamics in each region for different vaccination levels. We formulate the vaccine allocation problem as a two-stage stochastic linear program (2-SLP) and use the special problem structure to reduce it to a linear program with a similar size to that of the first stage problem. We also present a Newsvendor model formulation of the problem which provides a closed form solution for the optimal allocation. We construct test cases motivated by vaccine planning for seasonal influenza in the state of North Carolina. Using the 2-SLP formulation, we estimate the value of the stochastic solution and the expected value of perfect information. We also propose and test an easy to implement heuristic for vaccine allocation. We show that our proposed two-phase vaccination policy potentially results in a lower attack rate and a considerable saving in vaccine production and administration cost.
OR in health services; Epidemic control; Two-phase vaccine allocation; Stochastic linear program; Newsvendor model; Value of stochastic solution;
http://www.sciencedirect.com/science/article/pii/S0377221713006929
Yarmand, Hamed
Ivy, Julie S.
Denton, Brian
Lloyd, Alun L.
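The abstract above notes that a Newsvendor formulation yields a closed-form solution for the optimal allocation. As a generic illustration of the standard newsvendor critical-fractile rule (toy scenario data, not the paper's vaccine model), a minimal Python sketch:

```python
def newsvendor_quantity(demand_scenarios, underage_cost, overage_cost):
    """Smallest order quantity at which the empirical demand CDF reaches
    the critical fractile c_u / (c_u + c_o) -- the classic newsvendor rule
    for equally likely discrete demand scenarios."""
    fractile = underage_cost / (underage_cost + overage_cost)
    demands = sorted(demand_scenarios)
    n = len(demands)
    for i, d in enumerate(demands, start=1):
        if i / n >= fractile:  # empirical CDF at d is i/n
            return d
    return demands[-1]

# hypothetical dose-demand scenarios; underage (shortage) is 4x as costly
q = newsvendor_quantity([80, 100, 120, 140, 160], underage_cost=4, overage_cost=1)
# fractile = 0.8, so q = 140, the 80th-percentile scenario
```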
oai:RePEc:eee:ejores:v:233:y:2014:i:1:p:246-2622013-11-21RePEc:eee:ejores
RePEc:eee:ejores:v:233:y:2014:i:1:p:246-262
article
Stochastic models for strategic resource allocation in nonprofit foreclosed housing acquisitions
Increased rates of mortgage foreclosures in the U.S. have had devastating social and economic impacts during and after the 2008 financial crisis. As part of the response to this problem, nonprofit organizations such as community development corporations (CDCs) have been trying to mitigate the negative impacts of mortgage foreclosures by acquiring and redeveloping foreclosed properties. We consider the strategic resource allocation decisions for these organizations which involve budget allocations to different neighborhoods under cost and return uncertainty. Based on interactions with a CDC, we develop stochastic integer programming based frameworks for this decision problem, and assess the practical value of the models by using real-world data. Both policy-related and computational analyses are performed, and several insights such as the trade-offs between different objectives, and the efficiency of different solution approaches are presented.
OR in societal problem analysis; OR in strategic planning; Foreclosures; Stochastic programming; Resource allocation;
http://www.sciencedirect.com/science/article/pii/S0377221713007248
Bayram, Armagan
Solak, Senay
Johnson, Michael
oai:RePEc:eee:ejores:v:233:y:2014:i:1:p:75-832013-11-21RePEc:eee:ejores
RePEc:eee:ejores:v:233:y:2014:i:1:p:75-83
article
Optimal single machine scheduling of products with components and changeover cost
We consider the problem of scheduling products with components on a single machine, where changeovers incur fixed costs. The objective is to minimize the weighted sum of total flow time and changeover cost. We provide properties of optimal solutions and develop an explicit characterization of optimal sequences, while showing that this characterization has recurrent properties. Our structural results have interesting implications for practitioners, primarily that the structure of optimal sequences is robust to changes in demand.
Scheduling; Single machine; Components; Flow time; Changeover cost;
http://www.sciencedirect.com/science/article/pii/S0377221713006814
Zhou, Feng
Blocher, James D.
Hu, Xinxin
Sebastian Heese, H.
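The objective in the abstract above is a weighted sum of total flow time and changeover cost. A minimal Python sketch that merely evaluates this objective for a given sequence (illustrative data; the weighting parameter `alpha` and the convention of charging the initial setup as a changeover are assumptions, and the paper itself characterizes optimal sequences rather than evaluating fixed ones):

```python
def schedule_cost(sequence, proc_time, changeover_cost, alpha=1.0):
    """Total flow time plus weighted changeover cost for a product
    sequence on one machine; a changeover is charged whenever the
    current product family differs from the previous job's family
    (including the very first job, a modeling choice)."""
    t, flow, changeovers = 0.0, 0.0, 0
    prev_family = None
    for family in sequence:
        if family != prev_family:
            changeovers += 1
            prev_family = family
        t += proc_time[family]
        flow += t  # completion (flow) time of this job
    return flow + alpha * changeover_cost * changeovers

cost = schedule_cost(["A", "A", "B"], {"A": 2.0, "B": 3.0}, changeover_cost=5.0)
# flow times 2 + 4 + 7 = 13; two changeovers: 13 + 2 * 5 = 23.0
```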
oai:RePEc:eee:ejores:v:233:y:2014:i:1:p:84-932013-11-21RePEc:eee:ejores
RePEc:eee:ejores:v:233:y:2014:i:1:p:84-93
article
Branch-and-price algorithm for the Resilient Multi-level Hop-constrained Network Design
In this work, we investigate the Resilient Multi-level Hop-constrained Network Design (RMHND) problem, which consists of designing hierarchical telecommunication networks while ensuring resilience against random failures and maximum delay guarantees in the communication. Three mathematical formulations are proposed and algorithms based on them are evaluated. A Branch-and-price algorithm, based on a delayed column generation approach within a Branch-and-bound framework, is shown to work well, finding optimal solutions for practical telecommunication scenarios within reasonable time. Computational results show that algorithms based on the compact formulations are able to prove optimality for instances of limited size in the scenarios of interest, while the proposed Branch-and-price algorithm exhibits much better performance.
Integer programming; Branch-and-price; Multi-level; Resilience; Hop-constrained;
http://www.sciencedirect.com/science/article/pii/S0377221713006899
Souza, Fernanda S.H.
Gendreau, Michel
Mateus, Geraldo R.
oai:RePEc:eee:ejores:v:233:y:2014:i:1:p:34-422013-11-21RePEc:eee:ejores
RePEc:eee:ejores:v:233:y:2014:i:1:p:34-42
article
A geometric characterisation of the quadratic min-power centre
For a given set of nodes in the plane the min-power centre is a point such that the cost of the star centred at this point and spanning all nodes is minimised. The cost of the star is defined as the sum of the costs of its nodes, where the cost of a node is an increasing function of the length of its longest incident edge. The min-power centre problem provides a model for optimally locating a cluster-head amongst a set of radio transmitters, however, the problem can also be formulated within a bicriteria location model involving the 1-centre and a generalised Fermat-Weber point, making it suitable for a variety of facility location problems. We use farthest point Voronoi diagrams and Delaunay triangulations to provide a complete geometric description of the min-power centre of a finite set of nodes in the Euclidean plane when cost is a quadratic function. This leads to a new linear-time algorithm for its construction when the convex hull of the nodes is given. We also provide an upper bound for the performance of the centroid as an approximation to the quadratic min-power centre. Finally, we briefly describe the relationship between solutions under quadratic cost and solutions under more general cost functions.
Networks; Power efficient range assignment; Wireless ad hoc networks; Generalised Fermat–Weber problem; Farthest point Voronoi diagrams;
http://www.sciencedirect.com/science/article/pii/S0377221713007406
Brazil, M.
Ras, C.J.
Thomas, D.A.
oai:RePEc:eee:ejores:v:233:y:2014:i:1:p:131-1442013-11-21RePEc:eee:ejores
RePEc:eee:ejores:v:233:y:2014:i:1:p:131-144
article
Revenue deficiency under second-price auctions in a supply-chain setting
Consider a firm, called the buyer, that satisfies its demand over two periods by assigning both demands to a supplier via a second-price procurement auction; call this the Standard auction. In the hope of lowering its purchase cost, the firm is considering an alternative procedure in which it will also allow bids on each period individually, where there can be either one or two winners covering the two demands; call this the Multiple Winner auction. Choosing the Multiple Winner auction over the Standard auction can in fact result in a higher cost to the buyer. We provide a bound on how much greater the buyer’s cost can be in the Multiple Winner auction and show that this bound is tight. We then sharpen this bound for two scenarios that can arise when the buyer announces his demands close to the beginning of the demand horizon. Under a monotonicity condition, we achieve a further sharpening of the bound in one of the scenarios. Finally, this monotonicity condition allows us to generalize this bound to the T-period case in which bids are allowed on any subset of period demands.
Procurement; Supply chain; Second-price auction; VCG mechanism; Revenue deficiency;
http://www.sciencedirect.com/science/article/pii/S0377221713006437
Romero Morales, Dolores
Steinberg, Richard
oai:RePEc:eee:ejores:v:233:y:2014:i:1:p:16-222013-11-21RePEc:eee:ejores
RePEc:eee:ejores:v:233:y:2014:i:1:p:16-22
article
Two-stage stochastic linear programs with incomplete information on uncertainty
Two-stage stochastic linear programming is a classical model in operations research. The usual approach to this model requires detailed information on distribution of the random variables involved. In this paper, we only assume the availability of the first and second moments information of the random variables. By using duality of semi-infinite programming and adopting a linear decision rule, we show that a deterministic equivalence of the two-stage problem can be reformulated as a second-order cone optimization problem. Preliminary numerical experiments are presented to demonstrate the computational advantage of this approach.
Stochastic programming; Linear decision rule; Second order cone optimization;
http://www.sciencedirect.com/science/article/pii/S0377221713006413
Ang, James
Meng, Fanwen
Sun, Jie
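The abstract above builds on the classical two-stage stochastic linear program, in which a first-stage decision is judged by its immediate cost plus the expected cost of second-stage recourse. A minimal scenario-based Python sketch of that evaluation (toy data; the paper's actual contribution, a moment-based second-order cone reformulation under a linear decision rule, is not reproduced here):

```python
def expected_total_cost(x, c, scenarios):
    """First-stage cost c*x plus expected recourse cost: in each
    scenario, demand not covered by x is met at a higher per-unit
    recourse price (a simple-recourse second stage)."""
    recourse = 0.0
    for prob, demand, price in scenarios:
        shortfall = max(demand - x, 0)
        recourse += prob * price * shortfall
    return c * x + recourse

# two equally likely hypothetical scenarios: (probability, demand, recourse price)
scenarios = [(0.5, 8, 3.0), (0.5, 12, 3.0)]
cost_at_10 = expected_total_cost(10, c=2.0, scenarios=scenarios)
# 2*10 + 0.5*3*max(8-10,0) + 0.5*3*max(12-10,0) = 23.0
```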
oai:RePEc:eee:ejores:v:233:y:2014:i:1:p:125-1302013-11-21RePEc:eee:ejores
RePEc:eee:ejores:v:233:y:2014:i:1:p:125-130
article
A multi-objective approach to supply chain visibility and risk
This paper investigates the twin effects of supply chain visibility (SCV) and supply chain risk (SCR) on supply chain performance. Operationally, SCV has been linked to the capability of sharing timely and accurate information on exogenous demand, quantity and location of inventory, transport-related cost, and other logistics activities throughout an entire supply chain. Similarly, SCR can be viewed as the likelihood that an adverse event occurs during a certain epoch within a supply chain, together with the associated consequences of that event for supply chain performance. Given the multi-faceted attributes of the decision-making process, which involves many stages, objectives, and stakeholders, this aspect of the supply chain calls for a fuzzy multi-objective decision-making approach to model SCV and SCR from an operational perspective. Hence, our model incorporates the objectives of SCV maximization, SCR minimization, and cost minimization under the constraints of budget, customer demand, production capacity, and supply availability. A numerical example is used to demonstrate the applicability of the model. Our results suggest that decision makers tend to mitigate SCR first and then enhance SCV.
Supply chain management; Multiple objective programming; Supply chain visibility; Supply chain risk;
http://www.sciencedirect.com/science/article/pii/S0377221713007212
Yu, Min-Chun
Goh, Mark
oai:RePEc:eee:ejores:v:233:y:2014:i:1:p:220-2332013-11-21RePEc:eee:ejores
RePEc:eee:ejores:v:233:y:2014:i:1:p:220-233
article
Optimal relay node placement in delay constrained wireless sensor network design
The Delay Constrained Relay Node Placement Problem (DCRNPP) frequently arises in the Wireless Sensor Network (WSN) design. In WSN, Sensor Nodes are placed across a target geographical region to detect relevant signals. These signals are communicated to a central location, known as the Base Station, for further processing. The DCRNPP aims to place the minimum number of additional Relay Nodes at a subset of Candidate Relay Node locations in such a manner that signals from various Sensor Nodes can be communicated to the Base Station within a pre-specified delay bound. In this paper, we study the structure of the projection polyhedron of the problem and develop valid inequalities in form of the node-cut inequalities. We also derive conditions under which these inequalities are facet defining for the projection polyhedron. We formulate a branch-and-cut algorithm, based upon the projection formulation, to solve DCRNPP optimally. A Lagrangian relaxation based heuristic is used to generate a good initial solution for the problem that is used as an initial incumbent solution in the branch-and-cut approach. Computational results are reported on several randomly generated instances to demonstrate the efficacy of the proposed algorithm.
Relay node placement; Cutting plane/facet; Polyhedral theory; Projection; Branch and cut; Lagrangian-relaxation;
http://www.sciencedirect.com/science/article/pii/S0377221713006966
Nigam, Ashutosh
Agarwal, Yogesh K.
oai:RePEc:eee:ejores:v:233:y:2014:i:1:p:281-2912013-11-21RePEc:eee:ejores
RePEc:eee:ejores:v:233:y:2014:i:1:p:281-291
article
Should retail stores also RFID-tag ‘cheap’ items?
Despite their implementations in a wide variety of applications, there are very few instances where every item sold at a retail store is RFID-tagged. While the business case for expensive items to be RFID tagged may be somewhat clear, we claim that even ‘cheap’ items (i.e., those that cost less than an RFID tag) should be RFID tagged for retailers to benefit from efficiencies associated with item-level visibility. We study the relative price premiums a retailer with RFID tagged items can command as well as the retailer’s profit to illustrate the significance of item-level RFID-tagging both cheap and expensive items at a retail store. Our results indicate that, under certain conditions, item-level RFID tagging of items that cost less than an RFID tag has the potential to generate significant benefits to the retailer. The retailer is also better off tagging all items regardless of their relative price with respect to that of an RFID tag compared to the case where only the expensive item is RFID-tagged.
RFID; Partial and complete tagging; Retailing;
http://www.sciencedirect.com/science/article/pii/S0377221713007352
Piramuthu, Selwyn
Wochner, Sina
Grunow, Martin
oai:RePEc:eee:ejores:v:233:y:2014:i:1:p:105-1132013-11-21RePEc:eee:ejores
RePEc:eee:ejores:v:233:y:2014:i:1:p:105-113
article
Competition for cores in remanufacturing
We study competition between an original equipment manufacturer (OEM) and an independently operating remanufacturer (IO). Unlike the existing literature, the OEM and IO compete not only in selling their products but also in collecting returned products (cores) through their acquisition prices. We consider a two-period model with manufacturing by the OEM in the first period, and manufacturing as well as remanufacturing in the second period. We find the optimal policies for both players by establishing a Nash equilibrium in the second period, and then determine the optimal manufacturing decision for the OEM in the first period. This leads to a number of managerial insights. One interesting result is that the acquisition price of the OEM depends only on its own cost structure, and not on the acquisition price of the IO. Further insights are obtained from a numerical investigation. We find that when the cost benefits of remanufacturing diminish and the IO has a greater chance to collect the available cores, the OEM manufactures less in the first period to protect its market share as the market in the second period grows. Finally, we consider the case where consumers have a lower willingness to pay for remanufactured products and find that remanufacturing then becomes less profitable overall.
Inventory; Remanufacturing; Competition; Nash equilibrium;
http://www.sciencedirect.com/science/article/pii/S0377221713006905
Bulmus, Serra Caner
Zhu, Stuart X.
Teunter, Ruud
oai:RePEc:eee:ejores:v:215:y:2011:i:2:p:481-4972014-11-20RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:2:p:481-497
article
A multi-round generalization of the traveling tournament problem and its application to Japanese baseball
In a double round-robin tournament involving n teams, every team plays 2(n - 1) games, with one home game and one away game against each of the other n - 1 teams. Given a symmetric n by n matrix representing the distances between each pair of home cities, the traveling tournament problem (TTP) seeks to construct an optimal schedule that minimizes the sum total of distances traveled by the n teams as they move from city to city, subject to several natural constraints to ensure balance and fairness. In the TTP, the number of rounds is set at r = 2. In this paper, we generalize the TTP to multiple rounds (r = 2k, for any k ≥ 1) and present an algorithm that converts the problem to finding the shortest path in a directed graph, enabling us to apply Dijkstra's Algorithm to generate the optimal multi-round schedule. We apply our shortest-path algorithm to optimize the league schedules for Nippon Professional Baseball (NPB) in Japan, where two leagues of n = 6 teams play 40 sets of three intra-league games over r = 8 rounds. Our optimal schedules for the Pacific and Central Leagues achieve a 25% reduction in total traveling distance compared to the 2010 NPB schedule, implying the potential for considerable savings in terms of time, money, and greenhouse gas emissions.
Scheduling Timetabling Graph theory Perfect matchings One-factorization Traveling tournament problem
http://www.sciencedirect.com/science/article/pii/S0377221711005431
Hoshino, Richard
Kawarabayashi, Ken-ichi
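The record above reduces multi-round schedule construction to a shortest-path problem solved with Dijkstra's algorithm. The paper's graph construction is not reproduced here; as a minimal generic sketch, Dijkstra's algorithm on a weighted digraph (adjacency-dict representation is an illustrative choice) can be written as:

```python
import heapq

def dijkstra(graph, source):
    # graph: {node: [(neighbor, weight), ...]} with non-negative weights.
    # Returns shortest distances from source to every reachable node.
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry, already improved
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist
```

In the paper's setting each path through the constructed graph encodes a feasible multi-round schedule, so the shortest path corresponds to the minimum total travel distance.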
oai:RePEc:eee:ejores:v:200:y:2010:i:3:p:629-6382014-11-20RePEc:eee:ejores
RePEc:eee:ejores:v:200:y:2010:i:3:p:629-638
article
Robustness in operational research and decision aiding: A multi-faceted issue
In the Introduction, I explain the meaning I give to the qualifier "robust" and justify my preference for the expression robustness concern over robustness analysis, which I feel is likely to be interpreted too narrowly. In Section 2, I discuss this concern in more detail and try to clarify its numerous raisons d'être. As a means of examining the multiple facets of the robustness concern more comprehensively, I explore the existing research about robustness, attempting to highlight what I see as the three different territories covered by these studies (Section 3). In Section 4, I refer to these territories to illustrate how responses to the robustness concern could be even more varied than they currently are. In this perspective, I propose in Section 5 three new measures of robustness. In the last section, I identify several aspects of the problem that should be examined more closely because they could lead to new avenues of research, which could in turn yield new and innovative responses.
Decision support Robustness Multiple criteria Uncertainty Optimization
http://www.sciencedirect.com/science/article/B6VCT-4VJ06GW-1/2/21c4eadd5f4aa90cba9294ffd07eff34
Roy, Bernard
oai:RePEc:eee:ejores:v:196:y:2009:i:1:p:140-1542014-11-20RePEc:eee:ejores
RePEc:eee:ejores:v:196:y:2009:i:1:p:140-154
article
A network flow-based method to solve performance cost and makespan open-shop scheduling problems with time-windows
This paper deals with several bicriteria open-shop scheduling problems where jobs are pre-emptable and their corresponding time-windows must be strictly respected. The criteria are a performance cost and the makespan. Network flow approaches are used in a lexmin procedure with a bounded makespan and the considered bicriteria problems are solved. Finally, the computational complexity of the algorithm and a numerical example are reported.
90B35 90B10 90C27 Pre-emptive open-shop scheduling Bicriteria optimization Network flow approach Time-windows Max-flow parametrical algorithm
http://www.sciencedirect.com/science/article/B6VCT-4S0YXMX-8/2/8e4a9309556c968bfbc8419211fb5179
Sedeño-Noda, A.
de Pablo, D. Alcaide López
González-Martín, C.
oai:RePEc:eee:ejores:v:196:y:2009:i:3:p:1207-12132014-11-20RePEc:eee:ejores
RePEc:eee:ejores:v:196:y:2009:i:3:p:1207-1213
article
Average shadow price and equilibrium price: A case study of tradable pollution permit markets
Shadow prices indicate implicit values of limited resources at the margin and provide important information in decision making for resource management. In continuous economic models, shadow prices associated with demand-supply balance constraints represent consumers' willingness to pay and producers' marginal cost, hence they correspond to market equilibrium prices. As is well known, however, marginal analysis fails in the case of discrete optimization, such as mixed integer programming. An alternative concept has been introduced in the literature to measure the value of an extra unit of a limited resource in such cases. This concept is based on average rather than marginal values, thus called the average shadow price, and interpreted in the same way as conventional shadow prices. Whether average shadow prices in a discrete economic model can serve as market equilibrium prices has not been addressed in the related literature. The present paper addresses this issue in an empirical setting. Using a tradable pollution permit market as an example, where firms' YES/NO type technology adoption decisions are represented by binary variables, we show that the average shadow price of tradable permits can be interpreted as the equilibrium price only when certain conditions related to the cost structure and emission levels hold. On the other hand, we show that an iterative procedure based on individual firms' cost minimizing behavior presents a better approach for finding a price that can eliminate or reduce the gap between demand and supply of permits in the market.
Average shadow price Tradable pollution permit Mixed-integer programming
http://www.sciencedirect.com/science/article/B6VCT-4SF3042-4/2/4b5b5f3b53c6c803235675deb3b06223
Liao, Chao-ning
Önal, Hayri
Chen, Ming-Hsiang
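The record above rests on the average shadow price of a resource in a discrete optimization problem. One common definition, for a maximization problem, is the largest average gain per extra unit of the resource over integer increments; the sketch below computes it by brute force on a toy 0-1 knapsack (the instance, the increment bound, and the brute-force subproblem solver are illustrative assumptions, not the paper's permit-market model):

```python
from itertools import product

def z(values, weights, capacity):
    # Optimal value of a 0-1 knapsack with the given capacity (brute force).
    best = 0
    for x in product((0, 1), repeat=len(values)):
        if sum(w * xi for w, xi in zip(weights, x)) <= capacity:
            best = max(best, sum(v * xi for v, xi in zip(values, x)))
    return best

def average_shadow_price(values, weights, b0, max_increment=10):
    # Best average gain per extra unit of capacity over integer increments k.
    base = z(values, weights, b0)
    return max((z(values, weights, b0 + k) - base) / k
               for k in range(1, max_increment + 1))
```

Unlike a marginal shadow price, this quantity is well defined even when the optimal value function is a step function of the right-hand side.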
oai:RePEc:eee:ejores:v:237:y:2014:i:3:p:887-8972014-11-20RePEc:eee:ejores
RePEc:eee:ejores:v:237:y:2014:i:3:p:887-897
article
Horizontal coordinating contracts in the semiconductor industry
Integrated device manufacturers (IDMs) and foundries are two types of manufacturers in the semiconductor industry. IDMs integrate both design and manufacturing functions whereas foundries solely focus on manufacturing. Since foundries often have cost advantage over IDMs due to their specialization and economies of scale, IDMs have incentives to source from foundries for the purpose of avoiding excessive capacity investment risk. As the IDM is also a potential capacity source, the IDM and foundry are in a horizontal setting rather than a purely vertical setting. In the absence of sophisticated contracts, the benchmark contract for the IDM and foundry is a wholesale price contract. We define “coordinating” contracts as those that improve both the IDM’s and foundry’s expected profits over the benchmark wholesale price contract and also lead to the maximum system profit. This paper examines if there exist coordinating capacity reservation contracts. It is found that wholesale price contracts in the horizontal setting cannot achieve the maximum system profit due to either double marginalization effect, or “misalignment of capacity-usage-priority”. In contrast, if the IDM’s capacity investment risk is not too low, there always exist coordinating capacity reservation contracts. Furthermore, under coordinating contracts, the IDM’s sourcing structure, either sole sourcing from the foundry or dual sourcing, is contingent on the firms’ cost structures.
Supply chain management; Horizontal capacity coordination; Reservation contract; Wholesale price contract in horizontal setting; Sourcing structure;
http://www.sciencedirect.com/science/article/pii/S0377221714001854
Wu, Xiaole
Kouvelis, Panos
Matsuo, Hirofumi
Sano, Hiroki
oai:RePEc:eee:ejores:v:219:y:2012:i:1:p:156-1662014-11-20RePEc:eee:ejores
RePEc:eee:ejores:v:219:y:2012:i:1:p:156-166
article
On air traffic flow management with rerouting. Part I: Deterministic case
In this paper a deterministic mixed 0–1 model for the air traffic flow management problem is presented. The model allows for flight cancelation and rerouting, if necessary. It considers several types of objective functions to minimize, namely, the number of flights exceeding a given time delay (which can be zero), separable and non-separable ground holding and air delay costs, penalization of alternative routes to the one scheduled for each flight, time unit delay cost to arrive at the nodes (i.e., air sectors and airports) and penalization for arriving at the nodes ahead of schedule. The arrival and departure capacity at the airports is considered, as well as the capacity of the different sectors in the airspace, both allowed to vary along the time horizon. The model thus aims to support better decision-making regarding the ground holding and air delays imposed on flights in an air network, as a short-term policy for a given time horizon. The formulation is so tight that, in many cases of the testbeds with which we have experimented, the optimal solution is obtained without appending additional cuts or executing the branch-and-bound phase. In the other cases, the cut-identification and heuristic schemes of the state-of-the-art optimization engine of choice are required to obtain the solution, although the branch-and-bound phase is still not needed. An extensive computational experience is reported for large-scale instances, some taken from the literature and others coming from industry.
Air traffic flow management; Ground holding and air delays; Mixed 0–1 optimization model;
http://www.sciencedirect.com/science/article/pii/S0377221711010988
Agustín, A.
Alonso-Ayuso, A.
Escudero, L.F.
Pizarro, C.
oai:RePEc:eee:ejores:v:211:y:2011:i:3:p:525-5322014-11-20RePEc:eee:ejores
RePEc:eee:ejores:v:211:y:2011:i:3:p:525-532
article
Design of progressively censored group sampling plans for Weibull distributions: An optimization problem
Optimization algorithms provide efficient solutions to many statistical problems. Essentially, the design of sampling plans for lot acceptance purposes is an optimization problem with several constraints, usually related to the quality levels required by the producer and the consumer. An optimal acceptance sampling plan is developed in this paper for the Weibull distribution with unknown scale parameter. The proposed plan combines grouping of items, sudden death testing in each group and progressive group removals, and its decision criterion is based on the uniformly most powerful life test. A mixed integer programming problem is first solved to determine the minimum number of failures required and the corresponding acceptance constant. The optimal number of groups is then obtained by minimizing a balanced estimation of the expected test cost. Excellent approximately optimal solutions are also provided in closed form. The sampling plan is considerably flexible and allows experimental time and cost to be saved. In general, our methodology achieves solutions that are quite robust to small variations in the Weibull shape parameter. A numerical example about a manufacturing process of gyroscopes is included for illustration.
Constrained optimization Minimal expected cost Mixed integer programming Operating characteristic function Producer's and consumer's risks Quality control
http://www.sciencedirect.com/science/article/B6VCT-51N228J-1/2/6fa9ba419e406f67b27b447fe9dd4f10
Fernández, Arturo J.
Pérez-González, Carlos J.
Aslam, Muhammad
Jun, Chi-Hyuck
oai:RePEc:eee:ejores:v:215:y:2011:i:1:p:89-962014-11-20RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:1:p:89-96
article
Discrete-time, economic lot scheduling problem on multiple, non-identical production lines
This paper deals with a general discrete time dynamic demand model to solve real time resource allocation and lot-sizing problems in a multimachine environment. In particular, the problem of apportioning item production to distinct manufacturing lines with different costs (production, setup and inventory) and capabilities is considered. Three models with different cost definitions are introduced, and a set of algorithms able to handle all the problems is developed. The computational results show that the best of the developed approaches is able to handle problems with up to 10,000 binary variables, outperforming general-purpose solvers and other randomized approaches. The gap between the lower and upper bound procedures is within 1.0% after about 500 seconds of CPU time on a 2.66 GHz Intel Core 2 PC.
Lot-sizing Scheduling Beam search Matheuristics Resource allocation
http://www.sciencedirect.com/science/article/pii/S0377221711005017
Bollapragada, Ramesh
Croce, Federico Della
Ghirardi, Marco
oai:RePEc:eee:ejores:v:191:y:2008:i:2:p:416-4362014-11-20RePEc:eee:ejores
RePEc:eee:ejores:v:191:y:2008:i:2:p:416-436
article
Ordinal regression revisited: Multiple criteria ranking using a set of additive value functions
We present a new method, called UTAGMS, for multiple criteria ranking of alternatives from set A using a set of additive value functions which result from an ordinal regression. The preference information provided by the decision maker is a set of pairwise comparisons on a subset of alternatives A^R ⊆ A, called reference alternatives. The preference model built via ordinal regression is the set of all additive value functions compatible with the preference information. Using this model, one can define two relations in the set A: the necessary weak preference relation, which holds for any two alternatives a, b from set A if and only if a is preferred to b for all compatible value functions, and the possible weak preference relation, which holds for this pair if and only if a is preferred to b for at least one compatible value function. These relations establish a necessary and a possible ranking of alternatives from A, being, respectively, a partial preorder and a strongly complete relation. The UTAGMS method is intended to be used interactively, with an increasing subset A^R and a progressive statement of pairwise comparisons. When no preference information is provided, the necessary weak preference relation is a weak dominance relation, and the possible weak preference relation is a complete relation. Every new pairwise comparison of reference alternatives, for which the dominance relation does not hold, enriches the necessary relation and impoverishes the possible relation, so that they converge with the growth of the preference information. Distinguishing necessary and possible consequences of preference information on the complete set of actions, UTAGMS answers questions of robustness analysis. Moreover, the method can support the decision maker when his/her preference statements cannot be represented in terms of an additive value function. The method is illustrated by an example solved using the UTAGMS software.
Some extensions of the method are also presented.
http://www.sciencedirect.com/science/article/B6VCT-4PHJTNK-1/1/aa018056e06d415b61f39b65b0935562
Greco, Salvatore
Mousseau, Vincent
Slowinski, Roman
oai:RePEc:eee:ejores:v:217:y:2012:i:2:p:270-2772014-11-20RePEc:eee:ejores
RePEc:eee:ejores:v:217:y:2012:i:2:p:270-277
article
Cost-sharing mechanisms for scheduling under general demand settings
We investigate cost-sharing mechanisms for scheduling cost-sharing games. We assume that the demand is general—that is, each player can be allocated one of several levels of service. We show how to design mechanisms for these games that are weakly group strategyproof, approximately budget-balanced, and approximately efficient, using approximation algorithms for the underlying scheduling problems. We consider scheduling cost-sharing games in single machine, parallel machine, and concurrent open shop environments.
Game theory; Cost sharing; Scheduling; Combinatorial optimization;
http://www.sciencedirect.com/science/article/pii/S0377221711008587
Balireddi, Sindhura
Uhan, Nelson A.
oai:RePEc:eee:ejores:v:233:y:2014:i:3:p:595-6032014-11-20RePEc:eee:ejores
RePEc:eee:ejores:v:233:y:2014:i:3:p:595-603
article
A compound control chart for monitoring and controlling high quality processes
In the present article, we propose a new control chart for monitoring high quality processes. More specifically, we suggest declaring the monitored process out of control by exploiting a compound rule based on the number of conforming units observed between the (i−1)th and the ith nonconforming item and the number of conforming items observed between the (i−2)th and the ith nonconforming item. Our numerical experimentation demonstrates that the proposed control chart, in most of the cases, exhibits performance better than (or at least equivalent to) that of its competitors.
Quality control; Applied probability; High quality processes; Run length distribution; Markov chain embedding; Conforming run length;
http://www.sciencedirect.com/science/article/pii/S0377221713006826
Bersimis, Sotiris
Koutras, Markos V.
Maravelakis, Petros E.
oai:RePEc:eee:ejores:v:207:y:2010:i:1:p:391-4002014-11-20RePEc:eee:ejores
RePEc:eee:ejores:v:207:y:2010:i:1:p:391-400
article
Callable risky perpetual debt with protection period
Issuances in the USD 260 Bn global market of perpetual risky debt are often motivated by capital requirements for financial institutions. We analyze callable risky perpetual debt emphasizing an initial protection ('grace') period before the debt may be called. The total market value of debt including the call option is expressed as a portfolio of perpetual debt and barrier options with a time dependent barrier. We also analyze how an issuer's optimal bankruptcy decision is affected by the existence of the call option by using closed-form approximations. The model quantifies the increased coupon and the decreased initial bankruptcy level caused by the embedded option. Examples indicate that our closed form model produces reasonably precise coupon rates compared to numerical solutions. The credit-spread produced by our model is in a realistic order of magnitude compared to market data.
Callable perpetual debt Embedded options Barrier options Optimal bankruptcy
http://www.sciencedirect.com/science/article/B6VCT-4YX7KDD-2/2/9ff5a25e9afdabb9525b556269185b90
Mjøs, Aksel
Persson, Svein-Arne
oai:RePEc:eee:ejores:v:208:y:2011:i:3:p:206-2202014-11-20RePEc:eee:ejores
RePEc:eee:ejores:v:208:y:2011:i:3:p:206-220
article
Competitive facility location problem with attractiveness adjustment of the follower: A bilevel programming model and its solution
We are concerned with a problem in which a firm or franchise enters a market by locating new facilities where there are existing facilities belonging to a competitor. The firm aims at finding the location and attractiveness of each facility to be opened so as to maximize its profit. The competitor, on the other hand, can react by adjusting the attractiveness of its existing facilities with the objective of maximizing its own profit. The demand is assumed to be aggregated at certain points in the plane and the facilities of the firm can be located at predetermined candidate sites. We employ Huff's gravity-based rule in modeling the behavior of the customers, where the fraction of customers at a demand point that visit a certain facility is proportional to the facility attractiveness and inversely proportional to the distance between the facility site and demand point. We formulate a bilevel mixed-integer nonlinear programming model where the firm entering the market is the leader and the competitor is the follower. In order to find the optimal solution of this model, we convert it into an equivalent one-level mixed-integer nonlinear program so that it can be solved by global optimization methods. Apart from reporting computational results obtained on a set of randomly generated instances, we also compute the benefit the leader firm derives from anticipating the competitor's reaction of adjusting the attractiveness levels of its facilities. The results on the test instances indicate that the benefit is 58.33% on average.
Competitive facility location Bilevel programming Mixed-integer nonlinear programming Variable facility attractiveness
http://www.sciencedirect.com/science/article/B6VCT-50T9WWR-2/2/1fbae072ee3e9f5fe1b579f60e66e010
Küçükaydin, Hande
Aras, Necati
Kuban AltInel, I.
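The record above models customer behavior with Huff's gravity-based rule: the fraction of a demand point's customers visiting a facility is proportional to the facility's attractiveness and inversely proportional to its distance. A minimal sketch of that share computation follows (the distance exponent `lam` is a common generalization of the rule, defaulting to 1 as in the abstract; the paper's bilevel model is not reproduced):

```python
def huff_shares(attractiveness, distances, lam=1.0):
    # For one demand point: utility_j = a_j / d_j**lam,
    # share_j = utility_j / sum over all facilities of utilities.
    utils = [a / d ** lam for a, d in zip(attractiveness, distances)]
    total = sum(utils)
    return [u / total for u in utils]
```

For example, a facility twice as attractive as its rival at equal distance captures two-thirds of the demand point.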
oai:RePEc:eee:ejores:v:205:y:2010:i:3:p:650-6602014-11-20RePEc:eee:ejores
RePEc:eee:ejores:v:205:y:2010:i:3:p:650-660
article
Optimal dynamic price selection under attraction choice models
We present a single-resource finite-horizon Markov decision process approach for a firm that seeks to maximize expected revenues by dynamically adjusting the menu of offered products and their prices to be selected from a finite set of alternative values predetermined as a matter of policy. Consumers choose among available products according to an attraction choice model, a special but widely applied class of discrete choice models. The fractional substructure inherent to attraction choice models enables the derivation of structural properties that significantly simplify the computation of an optimal policy. We show that the more capacity (or less time) is remaining, the lower are the optimal prices. We develop an exact solution procedure for preprocessing and recursively solving price selection problems that clearly outperforms current revenue management approaches with respect to computational complexity. Furthermore, we demonstrate that the potential revenue improvements from dynamic compared to fixed pricing can be significant, ranging from 4.5% up to 149% for selected examples from the literature. Extensions to multiple resources and to related discrete choice models are discussed.
Pricing Revenue management Stochastic multi-product dynamic pricing Discrete choice behavior Fractional programming
http://www.sciencedirect.com/science/article/B6VCT-4Y70C5Y-2/2/b98dfcd74829dbb203172f9c172ce3be
Schön, Cornelia
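The record above relies on attraction choice models, in which the probability of choosing a product is its attraction value divided by the total attraction of all options including not purchasing. A minimal sketch of that fractional substructure (the no-purchase attraction `no_purchase` is an assumed parameter; the paper's dynamic program is not reproduced):

```python
def attraction_choice_probs(attractions, no_purchase=1.0):
    # P(choose product j) = v_j / (v_0 + sum_k v_k), where v_0 is the
    # attraction of the no-purchase alternative.
    denom = no_purchase + sum(attractions)
    return [v / denom for v in attractions]
```

The remaining probability mass, 1 minus the sum of the returned values, is the no-purchase probability.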
oai:RePEc:eee:ejores:v:201:y:2010:i:1:p:336-3362014-11-20RePEc:eee:ejores
RePEc:eee:ejores:v:201:y:2010:i:1:p:336-336
article
Corrigendum to "Balancing and scheduling tasks in assembly lines with sequence-dependent setup" [European Journal of Operational Research 187 (2008) 1212-1223]
http://www.sciencedirect.com/science/article/B6VCT-4VW4V8C-1/2/f72262b781e2cedb356a98c0ae74e1d8
Pastor, Rafael
Andrés, Carlos
Miralles, Cristóbal
oai:RePEc:eee:ejores:v:239:y:2014:i:3:p:731-7432014-11-20RePEc:eee:ejores
RePEc:eee:ejores:v:239:y:2014:i:3:p:731-743
article
Distributed localized bi-objective search
We propose a new distributed heuristic for approximating the Pareto set of bi-objective optimization problems. Our approach is at the crossroads of parallel cooperative computation, objective space decomposition, and adaptive search. Given a number of computing nodes, we self-coordinate them locally, in order to cooperatively search different regions of the Pareto front. This offers a trade-off between a fully independent approach, where each node would operate independently of the others, and a fully centralized approach, where a global knowledge of the entire population is required at every step. More specifically, the population of solutions is structured and mapped into computing nodes. As local information, every node uses only the positions of its neighbors in the objective space and evolves its local solution based on what we term a ‘localized fitness function’. This has the effect of making the distributed search evolve, over all nodes, to a high quality approximation set, with minimum communications. We deploy our distributed algorithm using a computer cluster of hundreds of cores and study its properties and performance on ρMNK-landscapes. Through extensive large-scale experiments, our approach is shown to be very effective in terms of approximation quality, computational time and scalability.
Multiple objective programming; Combinatorial optimization; Parallel and distributed computing; Evolutionary computation;
http://www.sciencedirect.com/science/article/pii/S0377221714004639
Derbel, Bilel
Humeau, Jérémie
Liefooghe, Arnaud
Verel, Sébastien
oai:RePEc:eee:ejores:v:225:y:2013:i:3:p:528-5402014-11-20RePEc:eee:ejores
RePEc:eee:ejores:v:225:y:2013:i:3:p:528-540
article
Forecasting foreign exchange rates with adaptive neural networks using radial-basis functions and Particle Swarm Optimization
The motivation for this paper is to introduce a hybrid neural network architecture of Particle Swarm Optimization and Adaptive Radial Basis Function (ARBF–PSO), a time varying leverage trading strategy based on Glosten, Jagannathan and Runkle (GJR) volatility forecasts and a neural network fitness function for financial forecasting purposes. This is done by benchmarking the ARBF–PSO results with those of three different neural network architectures, a Nearest Neighbors algorithm (k-NN), an autoregressive moving average model (ARMA), a moving average convergence/divergence model (MACD), plus a naïve strategy. More specifically, the trading and statistical performance of all models is investigated in a forecast simulation of the EUR/USD, EUR/GBP and EUR/JPY ECB exchange rate fixing time series over the period January 1999–March 2011, using the last two years for out-of-sample testing.
Adaptive Radial Basis Function; Particle Swarm Optimization; Forecasting; Quantitative trading strategies;
http://www.sciencedirect.com/science/article/pii/S0377221712007667
Sermpinis, Georgios
Theofilatos, Konstantinos
Karathanasopoulos, Andreas
Georgopoulos, Efstratios F.
Dunis, Christian
oai:RePEc:eee:ejores:v:234:y:2014:i:3:p:898-9092014-11-20RePEc:eee:ejores
RePEc:eee:ejores:v:234:y:2014:i:3:p:898-909
article
Introducing competition in healthcare services: The role of private care and increased patient mobility
We study the operational implications from competition in the provision of healthcare services, in the context of national public healthcare systems in Europe. Specifically, we study the potential impact of two alternative ways through which policy makers have introduced such competition: (i) via the introduction of private hospitals to operate alongside public hospitals and (ii) via the introduction of increased patient choice to grant European patients the freedom to choose the country they receive treatment at. We use a game-theoretic framework with a queueing component to capture the interactions among the patients, the hospitals and the healthcare funders. Specifically, we analyze two different sequential games and obtain closed form expressions for the patients’ waiting time and the funders’ reimbursement cost in equilibrium. We show that the presence of a private provider can be beneficial to the public system: the patients’ waiting time will decrease and the funders’ cost can decrease under certain conditions. Also, we show that the cross-border healthcare policy, which increases patient mobility, can also be beneficial to the public systems: when welfare requirements across countries are sufficiently close, all funders can reduce their costs without increasing the patients’ waiting time. Our analysis implies that in border regions, where the cost of crossing the border is low, “outsourcing” the high-cost country’s elective care services to the low-cost country is a viable strategy from which both countries’ systems can benefit.
OR in government; Health policy; Game theory;
http://www.sciencedirect.com/science/article/pii/S0377221713009399
Andritsos, Dimitrios A.
Tang, Christopher S.
oai:RePEc:eee:ejores:v:224:y:2013:i:2:p:340-3522014-11-20RePEc:eee:ejores
RePEc:eee:ejores:v:224:y:2013:i:2:p:340-352
article
Efficient comparison of constrained systems using dormancy
We consider the problem of finding the best simulated system under a primary performance measure, while also satisfying stochastic constraints on secondary performance measures. We improve upon existing constrained selection procedures by allowing certain systems to become dormant, halting sampling for those systems as the procedure continues. A system goes dormant when it is found inferior to another system whose feasibility has not been determined, and returns to contention only if its superior system is eliminated. If found feasible, the superior system will eliminate the dormant system. By making systems dormant, we avoid collecting unnecessary observations from inferior systems. The paper also proposes other modifications, and studies the impact and benefits of our approaches (compared to similar constrained selection procedures) through experimental results and asymptotic approximations. Additionally, we discuss the difficulties associated with procedures that use sample means of unequal, random sample sizes, which commonly occurs within constrained selection and optimization-via-simulation.
Simulation; Ranking and selection; Stochastic constraints; Fully sequential algorithms; Multiple performance measures; Constrained selection;
http://www.sciencedirect.com/science/article/pii/S0377221712006352
Healey, Christopher M.
Andradóttir, Sigrún
Kim, Seong-Hee
oai:RePEc:eee:ejores:v:236:y:2014:i:1:p:351-3602014-11-20RePEc:eee:ejores
RePEc:eee:ejores:v:236:y:2014:i:1:p:351-360
article
Profit criteria involving risk in price setting of virtual products
This work deals with pricing of “virtual” products, i.e., products that a retailer can supply after demand has been realized. Such products allow the retailer to avoid holding costs and ensure timely fulfillment of demand with no risk of shortage. Demand is commonly price-dependent and uncertain, and we seek to maximize each of three criteria: expected profit, the likelihood of achieving a profit target, and the profit for a given percentile. Simultaneous multiple criteria are also explored. Two forms of demand uncertainty are considered in the analysis: the multiplicative form, where, due to stochastic dominance, all the investigated profit criteria—and, in fact, any utility function of the profit—can be optimized simultaneously; and the additive form, where stochastic dominance cannot occur. Under the multiplicative form of demand, the property of stochastic dominance is shown to hold in a two-echelon supply chain (comprising both the supplier and the retailer) and in a centralized system.
Pricing; Multiple criteria; Risk; Supply chain management; Decision analysis;
http://www.sciencedirect.com/science/article/pii/S037722171300979X
Chernonog, Tatyana
Avinadav, Tal
oai:RePEc:eee:ejores:v:220:y:2012:i:2:p:588-5902014-11-20RePEc:eee:ejores
RePEc:eee:ejores:v:220:y:2012:i:2:p:588-590
article
A comment on “cost efficiency in data envelopment analysis with data uncertainty”
In a recent paper by Mostafaee and Saljooghi [Mostafaee, A., Saljooghi, F.H., 2010. Cost efficiency in data envelopment analysis with data uncertainty. European Journal of Operational Research, 202, 595–603], the authors extend the classical cost efficiency model to address data uncertainty. They claim that the upper bound of the cost efficiency can be obtained at extreme points when the input prices appear in the form of ranges. In this paper, we present our counterexamples and comments on the contention by Mostafaee and Saljooghi.
Data envelopment analysis; Cost efficiency; Data uncertainty;
http://www.sciencedirect.com/science/article/pii/S0377221712000914
Fang, Lei
Li, Hecheng
oai:RePEc:eee:ejores:v:199:y:2009:i:1:p:303-3102014-11-20RePEc:eee:ejores
RePEc:eee:ejores:v:199:y:2009:i:1:p:303-310
article
How to generate regularly behaved production data? A Monte Carlo experimentation on DEA scale efficiency measurement
Monte Carlo experimentation is a well-known approach used to test the performance of alternative methodologies under different hypotheses. In the frontier analysis framework, whatever parametric or non-parametric method is tested, experiments to date have been developed assuming single-output multi-input production functions. The data generated have mostly assumed a Cobb-Douglas technology. Among other drawbacks, this simple framework does not allow the evaluation of DEA performance on scale efficiency measurement. The aim of this paper is twofold. On the one hand, we show how reliable two-output two-input production data can be generated using a parametric output distance function approach. A variable returns to scale translog technology satisfying regularity conditions is used for this purpose. On the other hand, we evaluate the accuracy of DEA technical and scale efficiency measurement when sample size and output ratios vary. Our Monte Carlo experiment shows that the correlation between true and estimated scale efficiency is dramatically low when DEA analysis is performed with small samples and wide output ratio variations.
Parametric distance function DEA Technical efficiency Scale efficiency Monte Carlo experiments
http://www.sciencedirect.com/science/article/B6VCT-4V0TD2N-5/2/77d22693784288287f77f66fd3e07b1b
Perelman, Sergio
Santín, Daniel
oai:RePEc:eee:ejores:v:199:y:2009:i:1:p:25-352014-11-20RePEc:eee:ejores
RePEc:eee:ejores:v:199:y:2009:i:1:p:25-35
article
Multi-objective integer programming: A general approach for generating all non-dominated solutions
In this paper we develop a general approach to generate all non-dominated solutions of the multi-objective integer programming (MOIP) problem. Our approach, which is based on the identification of objective efficiency ranges, is an improvement over the classical ε-constraint method. Objective efficiency ranges are identified by solving simpler MOIP problems with fewer objectives. For the sake of completeness, we first present the classical ε-constraint method for the bi-objective integer programming problem and comment on its efficiency. We then present our method for the tri-objective integer programming problem and extend it to the general MOIP problem with k objectives. A numerical example considering the tri-objective assignment problem is also provided.
Multiple objective programming Integer programming
http://www.sciencedirect.com/science/article/B6VCT-4TVJ3KV-3/2/552854ec23e4298fa77a8e138adbb50c
Özlen, Melih
Azizoglu, Meral
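The classical ε-constraint loop that the paper improves upon can be sketched in a few lines: minimise one objective subject to a bound on the other, then tighten the bound past the value just obtained. The objective vectors below are invented for illustration, and plain enumeration stands in for the IP solver of each subproblem.

```python
# Objective vectors (f1, f2) of the feasible integer solutions of a toy
# bi-objective problem (illustrative numbers, not from the paper).
points = [(2, 9), (3, 7), (4, 8), (5, 4), (7, 3), (8, 5), (9, 2)]

def epsilon_constraint(points):
    """Classical eps-constraint: repeatedly minimise f1 subject to f2 <= eps,
    then tighten eps below the f2-value just obtained. Lexicographic
    tie-breaking on f2 avoids returning weakly dominated points."""
    frontier = []
    eps = max(f2 for _, f2 in points)   # start with a non-binding bound
    while True:
        feasible = [p for p in points if p[1] <= eps]
        if not feasible:
            break
        f1, f2 = min(feasible)          # tuple order = lexicographic (f1, then f2)
        frontier.append((f1, f2))
        eps = f2 - 1                    # integer data: force strict improvement in f2
    return frontier
```

Each iteration solves one single-objective subproblem, so the number of solves equals the number of non-dominated points plus one.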
oai:RePEc:eee:ejores:v:193:y:2009:i:3:p:857-864 2014-11-20 RePEc:eee:ejores
RePEc:eee:ejores:v:193:y:2009:i:3:p:857-864
article
Apportionment methods and the Liu-Layland problem
The Liu-Layland periodic scheduling problem can be solved by the house monotone quota methods of apportionment. This paper shows that staying within the quota is necessary for any apportionment divisor method to solve this problem. As a consequence no divisor method, or equivalently no population monotone method, solves the Liu-Layland problem.
Scheduling Just-in-time scheduling Apportionment theory Divisor methods Hard real-time systems
http://www.sciencedirect.com/science/article/B6VCT-4R3357R-9/2/18812702d4b90f62c612a31f01256961
Józefowska, Joanna
Józefowski, Lukasz
Kubiak, Wieslaw
oai:RePEc:eee:ejores:v:233:y:2014:i:1:p:273-280 2014-11-20 RePEc:eee:ejores
RePEc:eee:ejores:v:233:y:2014:i:1:p:273-280
article
Two-stage financial risk tolerance assessment using data envelopment analysis
Typical questionnaires administered by financial advisors to assess financial risk tolerance mostly contain stereotypes of people, have seemingly unscientific scoring approaches and often treat risk as a one-dimensional concept. In this work, a mathematical tool was developed to assess relative risk tolerance using Data Envelopment Analysis (DEA). At its core, it is a novel questionnaire that characterizes risk by its four distinct elements: propensity, attitude, capacity, and knowledge. Over 180 individuals were surveyed and their responses were analyzed using the Slacks-based measure type of DEA efficiency model. Results show that the multidimensionality of risk must be considered for complete assessment of risk tolerance. This approach also provides insight into the relationship between risk, its elements and other variables. Specifically, the perception of risk varies by gender as men are generally less risk averse than women. In fact, risk attitude and knowledge scores are consistently lower for women, while there is no statistical difference in their risk capacity and propensity compared to men. The tool can also serve as a “risk calculator” for an appropriate and defensible method to meet legal compliance requirements, known as the “Know Your Client” rule, that exist for Canadian financial institutions and their advisors.
Data envelopment analysis; Investment analysis; Risk management; OR in banking; Finance;
http://www.sciencedirect.com/science/article/pii/S0377221713006954
Cooper, W.W.
Kingyens, Angela T.
Paradi, Joseph C.
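For intuition about how DEA assigns the relative efficiency scores used in such studies: in the single-input, single-output constant-returns (CCR) special case, the efficiency linear program collapses to a productivity ratio benchmarked against the best unit. A minimal sketch with invented data (the paper itself uses a multi-dimensional slacks-based model, which requires solving an LP per decision-making unit):

```python
# Single-input, single-output DEA efficiency (CCR, constant returns to scale):
# with one input and one output the LP collapses to a productivity ratio
# benchmarked against the best-performing unit. Illustrative data only.
units = {"A": (2.0, 4.0), "B": (3.0, 9.0), "C": (4.0, 6.0)}  # (input, output)

def ccr_efficiency(units):
    """Efficiency of each unit = its output/input ratio divided by the
    best ratio in the sample; efficient units score exactly 1."""
    best = max(out / inp for inp, out in units.values())
    return {name: (out / inp) / best for name, (inp, out) in units.items()}
```

With multiple inputs and outputs this shortcut no longer applies and one LP per unit must be solved, which is what DEA software does.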
oai:RePEc:eee:ejores:v:224:y:2013:i:1:p:167-179 2014-11-20 RePEc:eee:ejores
RePEc:eee:ejores:v:224:y:2013:i:1:p:167-179
article
Share functions for cooperative games with levels structure of cooperation
In a standard TU-game it is assumed that every subset of the player set N can form a coalition and earn its worth. One of the first models in which restrictions on cooperation are considered is that of games with coalition structure of Aumann and Drèze (1974). They assumed that the player set is partitioned into unions and that players can only cooperate within their own union. Owen (1977) introduced a value for games with coalition structure under the assumption that the unions can also cooperate among themselves. Winter (1989) extended this value to games with levels structure of cooperation, which consists of a game and a finite sequence of partitions defined on the player set, each of them being coarser than the previous one.
Cooperative game; Shapley value; Coalition structure; Share functions; Levels structure of cooperation;
http://www.sciencedirect.com/science/article/pii/S0377221712005723
Álvarez-Mozos, M.
van den Brink, R.
van der Laan, G.
Tejada, O.
oai:RePEc:eee:ejores:v:230:y:2013:i:2:p:356-363 2014-11-20 RePEc:eee:ejores
RePEc:eee:ejores:v:230:y:2013:i:2:p:356-363
article
Variable neighborhood search for minimum sum-of-squares clustering on networks
Euclidean Minimum Sum-of-Squares Clustering amounts to finding p prototypes by minimizing the sum of the squared Euclidean distances from a set of points to their closest prototype. In recent years related clustering problems have been extensively analyzed under the assumption that the space is a network rather than the Euclidean space. This allows one to properly address community detection problems, of significant relevance in diverse phenomena in biological, technological and social systems. However, the problem of minimizing the sum of squared distances on networks has not yet been addressed. Two versions of the problem are possible: either the p prototypes are sought among the set of nodes of the network, or points along edges are also taken into account as possible prototypes. While the first problem is transformed into a classical discrete p-median problem, the latter is new in the literature, and is solved in this paper with the Variable Neighborhood Search heuristic. The solutions of the two problems are compared in a series of test examples.
Minimum sum-of-squares clustering; Location on networks; Variable neighborhood search;
http://www.sciencedirect.com/science/article/pii/S037722171300341X
Carrizosa, Emilio
Mladenović, Nenad
Todosijević, Raca
oai:RePEc:eee:ejores:v:219:y:2012:i:2:p:214-223 2014-11-20 RePEc:eee:ejores
RePEc:eee:ejores:v:219:y:2012:i:2:p:214-223
article
Divide-and-price: A decomposition algorithm for solving large railway crew scheduling problems
The railway crew scheduling problem consists of generating crew duties to operate trains at minimal cost, while meeting all work regulations and operational requirements. Typically, a railway operation involves tens of thousands of train movements (trips) and requires thousands of crew members to be assigned to these trips. Despite the large size of the problem, crew schedules need to be generated in a short time, because large parts of the train schedule are not finalized until a few days before operation.
Large scale optimization; Crew scheduling; Combinatorial optimization; Pricing; Parallel computing;
http://www.sciencedirect.com/science/article/pii/S0377221711011325
Jütte, Silke
Thonemann, Ulrich W.
oai:RePEc:eee:ejores:v:196:y:2009:i:2:p:737-743 2014-11-20 RePEc:eee:ejores
RePEc:eee:ejores:v:196:y:2009:i:2:p:737-743
article
An integrated approach to the one-dimensional cutting stock problem in coronary stent manufacturing
This paper presents a two-stage approach for pattern generation and cutting plan determination of the one-dimensional cutting stock problem. Calculation of the total number of patterns that will be cut and generation of the cutting patterns are performed in the first stage. On the other hand, the second stage determines the cutting plan. The proposed approach makes use of two separate integer linear programming models. One of these models is employed by the first stage to generate the cutting patterns through a heuristic procedure with the objective of minimizing trim loss. The cutting patterns obtained from Stage 1 are then fed into the second stage. In this stage, another integer linear programming model is solved to form a cutting plan. The objective of this model is to minimize a generalized total cost function consisting of material inputs, number of setups, labor hours and overdue time; subject to demand requirements, material availability, regular and overtime availability, and due date constraints. The study also demonstrates an implementation of the proposed approach in a coronary stent manufacturer. The case study focuses on the cutting phase of the manufacturing process followed by manual cleaning and quality control activities. The experiments show that the proposed approach is suitable to the conditions and requirements of the company.
OR in health services Production One-dimensional cutting stock problem
http://www.sciencedirect.com/science/article/B6VCT-4S7SV3H-2/2/308d22c906aaab7640589c268a26f97d
Aktin, Tülin
Özdemir, Rıfat Gürcan
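The two-stage structure — generate cutting patterns first, then decide how often to cut each — can be illustrated on a toy instance. Stage 1 enumerates maximal patterns; in Stage 2, brute-force search over pattern multiplicities stands in for the integer linear program (the paper's second-stage cost function is richer, covering setups, labor hours and due dates).

```python
from itertools import product

STOCK = 10                      # stock length (illustrative)
sizes = [3, 4, 5]               # item lengths to be cut
demand = [4, 2, 2]              # required pieces of each length

def patterns(stock, sizes):
    """Stage 1 (sketch): enumerate all maximal cutting patterns, i.e. piece
    counts that fit the stock and leave no room for another piece."""
    bounds = [stock // s + 1 for s in sizes]
    pats = []
    for counts in product(*(range(b) for b in bounds)):
        used = sum(c * s for c, s in zip(counts, sizes))
        if used <= stock and all(used + s > stock for s in sizes):
            pats.append(counts)
    return pats

def cutting_plan(stock, sizes, demand, max_mult=4):
    """Stage 2 (sketch): choose multiplicities for each pattern so that
    demand is met with the fewest stock rolls; exhaustive search over
    bounded multiplicities stands in for the integer linear program."""
    pats = patterns(stock, sizes)
    best = None
    for mult in product(range(max_mult + 1), repeat=len(pats)):
        rolls = sum(mult)
        if best is not None and rolls >= best[0]:
            continue                            # cannot improve, skip
        supply = [sum(m * p[i] for m, p in zip(mult, pats))
                  for i in range(len(sizes))]
        if all(s >= q for s, q in zip(supply, demand)):
            best = (rolls, dict(zip(pats, mult)))
    return best
```

For this instance the total demanded length is 30, so three rolls of length 10 is a lower bound, and the search finds a plan that attains it.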
oai:RePEc:eee:ejores:v:204:y:2010:i:1:p:117-124 2014-11-20 RePEc:eee:ejores
RePEc:eee:ejores:v:204:y:2010:i:1:p:117-124
article
Highway games on weakly cyclic graphs
A highway problem is determined by a connected graph which provides all potential entry and exit vertices and all possible edges that can be constructed between vertices, a cost function on the edges of the graph, and a set of players, each needing to construct a connection between a specific entry and exit vertex. Mosquera (2007) introduced highway problems and the corresponding cooperative cost games, called highway games, to address the problem of fairly allocating the construction costs when the underlying graph is a tree. In this paper, we study the concavity and the balancedness of highway games on weakly cyclic graphs. A graph G is called highway-game concave if, for each highway problem in which G is the underlying graph, the corresponding highway game is concave. We show that a graph is highway-game concave if and only if it is weakly triangular. Moreover, we prove that highway games on weakly cyclic graphs are balanced.
C71 Cooperative games Highway games Cost sharing
http://www.sciencedirect.com/science/article/B6VCT-4XBX73W-1/2/0a8a601bc20a5d70daa0036ce9c2a397
Çiftçi, Barış
Borm, Peter
Hamers, Herbert
oai:RePEc:eee:ejores:v:218:y:2012:i:3:p:667-675 2014-11-20 RePEc:eee:ejores
RePEc:eee:ejores:v:218:y:2012:i:3:p:667-675
article
Exploring the trade-off between generalization and empirical errors in a one-norm SVM
We propose a one-norm support vector machine (SVM) formulation as an alternative to the well-known formulation that uses parameter C in order to balance the two inherent objective functions of the problem. Our formulation is motivated by the ϵ-constraint approach that is used in bicriteria optimization and we propose expressing the objective of minimizing total empirical error as a constraint with a parametric right-hand-side. Using dual variables we show equivalence of this formulation to the one with the trade-off parameter. We propose an algorithm that enumerates the entire efficient frontier by systematically changing the right-hand-side parameter. We discuss the results of a detailed computational analysis that portrays the structure of the efficient frontier as well as the computational burden associated with finding it. Our results indicate that the computational effort for obtaining the efficient frontier grows linearly in problem size, and the benefit in terms of classifier performance is almost always substantial when compared to a single run of the corresponding SVM. In addition, both the run time and accuracy compare favorably to other methods that search part or all of the regularization path of SVM.
Data mining; Multiple objective programming; Support vector machines; One-norm; Regularization path;
http://www.sciencedirect.com/science/article/pii/S0377221711010460
Aytug, Haldun
Sayın, Serpil
oai:RePEc:eee:ejores:v:220:y:2012:i:2:p:443-451 2014-11-20 RePEc:eee:ejores
RePEc:eee:ejores:v:220:y:2012:i:2:p:443-451
article
An alternative for robust estimation in Project Management
Recently, Hahn (2008) proposed the mixture of the uniform and the beta distributions as an alternative to the beta distribution in the PERT methodology, which allows for varying amounts of dispersion and a greater likelihood of more extreme tail-area events. However, this mixture lacks a closed-form cumulative distribution function and its parameters are difficult to interpret. In addition, the kurtosis of the beta distribution is bounded by 3. Due to their higher kurtosis and easier elicitation, we consider the Two-Sided Power and the Generalized Biparabolic distributions good candidates for applying Hahn’s (2008) idea and for building the Uniform-Two-Sided Power (U-TSP) and the Uniform-Generalized Biparabolic (U-GBP) distributions. Using the same example as Hahn (2008), we demonstrate that we can obtain greater accuracy and a greater likelihood of more extreme tail-area events. These distributions could be applied to other heavy-tailed phenomena, which are common in business contexts.
Project Management; Uncertainty modeling; Distribution; Simulation;
http://www.sciencedirect.com/science/article/pii/S0377221712000963
López Martín, M.M.
García García, C.B.
García Pérez, J.
Sánchez Granero, M.A.
oai:RePEc:eee:ejores:v:199:y:2009:i:2:p:323-333 2014-11-20 RePEc:eee:ejores
RePEc:eee:ejores:v:199:y:2009:i:2:p:323-333
article
Enhanced-interval linear programming
An enhanced-interval linear programming (EILP) model and its solution algorithm have been developed that incorporate enhanced-interval uncertainty (e.g., A±, B± and C±) in a linear optimization framework. As a new extension of linear programming, the EILP model has the following advantages. Its solution space is absolutely feasible compared to that of interval linear programming (ILP), which helps to achieve insight into the expected-value-oriented trade-off between system benefits and risks of constraint violations. The degree of uncertainty of its enhanced-interval objective function (EIOF) is lower than that of the ILP model when the solution space is absolutely feasible, and the EIOF's expected value can be used as a criterion for generating appropriate alternatives, which helps decision-makers obtain non-extreme decisions. Moreover, because it can be decomposed into two submodels, EILP's computational requirement is lower than that of stochastic and fuzzy LP models. The results of a numerical example further indicate the feasibility and effectiveness of the EILP model. In addition, EI nonlinear programming models, hybrid stochastic or fuzzy EILP models, as well as risk-based trade-off analyses for EI uncertainty within the decision process, can be further developed to improve applicability.
Linear programming Optimization Uncertainty Feasibility Solution algorithm Risk-based decision making
http://www.sciencedirect.com/science/article/B6VCT-4V7622K-3/2/8d649d118822a485e6d6adc766b0b4d5
Zhou, Feng
Huang, Gordon H.
Chen, Guo-Xian
Guo, Huai-Cheng
oai:RePEc:eee:ejores:v:222:y:2012:i:3:p:663-672 2014-11-20 RePEc:eee:ejores
RePEc:eee:ejores:v:222:y:2012:i:3:p:663-672
article
Robust risk management
Estimating the probabilities by which different events might occur is usually a delicate task, subject to many sources of inaccuracy. Moreover, these probabilities can change over time, making it very difficult to evaluate the risk induced by any particular decision. Given a set of probability measures and a set of nominal risk measures, we define in this paper the concept of a robust risk measure as the worst of our nominal risks when any of our probability measures may occur. We study how some properties of this new object, such as convexity or coherence, can be related to those of our nominal risk measures. We introduce a robust version of the Conditional Value-at-Risk (CVaR) and of entropy-based risk measures. We show how to compute and optimize the Robust CVaR using convex duality methods and illustrate its behavior using data from the New York Stock Exchange and the NASDAQ between 2005 and 2010.
Convex programming; Robust optimization; Risk management;
http://www.sciencedirect.com/science/article/pii/S0377221712002457
Fertis, Apostolos
Baes, Michel
Lüthi, Hans-Jakob
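The robust CVaR idea is easy to state for discrete scenarios: compute CVaR under each probability measure via the Rockafellar–Uryasev representation and take the worst case over the measure set. The losses and measures below are invented, not the paper's market data.

```python
# Discrete loss scenarios and two candidate probability measures
# (illustrative numbers, not the paper's NYSE/NASDAQ data).
losses = [-2.0, -1.0, 0.0, 3.0, 8.0]
P1 = [0.3, 0.3, 0.2, 0.15, 0.05]
P2 = [0.1, 0.2, 0.2, 0.30, 0.20]    # a more pessimistic measure

def cvar(losses, probs, alpha):
    """Rockafellar-Uryasev representation:
    CVaR_a = min_t  t + E[(L - t)+] / (1 - a).
    For a discrete distribution the minimum is attained at one of the
    scenario losses, so scanning the scenarios suffices."""
    def obj(t):
        tail = sum(p * max(l - t, 0.0) for l, p in zip(losses, probs))
        return t + tail / (1 - alpha)
    return min(obj(t) for t in losses)

def robust_cvar(losses, measures, alpha):
    """Worst-case CVaR over the given set of probability measures."""
    return max(cvar(losses, p, alpha) for p in measures)
```

The paper optimizes this worst case over portfolio decisions via convex duality; the sketch only evaluates it for a fixed loss profile.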
oai:RePEc:eee:ejores:v:198:y:2009:i:2:p:387-391 2014-11-20 RePEc:eee:ejores
RePEc:eee:ejores:v:198:y:2009:i:2:p:387-391
article
The tricriterion shortest path problem with at least two bottleneck objective functions
The focus of this paper is on the tricriterion shortest path problem where two objective functions are of the bottleneck type, for example MinMax or MaxMin. The third objective function may be of the same kind, or we may consider, for example, MinSum or MaxProd. Let p(n) be the complexity of a classical single-objective algorithm for this third function, where n is the number of nodes and m the number of arcs of the graph. An O(m^2 p(n)) algorithm is presented that can generate the minimal complete set of Pareto-optimal solutions. Finding the maximal complete set is also possible. Optimality proofs are given and extensions for several special cases are presented. Computational experience for a set of randomly generated problems is reported.
Multicriteria shortest path problem Pareto-optimal solution
http://www.sciencedirect.com/science/article/B6VCT-4TN0KW2-1/2/1d2189b7d90350eb944c45df0aaea195
de Lima Pinto, Leizer
Bornstein, Cláudio Thomás
Maculan, Nelson
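The single-objective bottleneck (MinMax) subproblem underlying such algorithms can be solved by a Dijkstra variant in which a node's label is the smallest achievable maximum arc weight on any path reaching it. A sketch on an invented graph:

```python
import heapq

# Bottleneck (MinMax) shortest path: minimise the maximum arc weight along
# the path. Illustrative graph given as adjacency lists of (node, weight).
graph = {
    "s": [("a", 4), ("b", 7)],
    "a": [("t", 6)],
    "b": [("t", 3)],
    "t": [],
}

def minmax_path_value(graph, source, target):
    """Dijkstra variant: the label of a node is the smallest possible
    maximum arc weight over all paths from the source to that node."""
    best = {source: 0}
    heap = [(0, source)]
    while heap:
        val, u = heapq.heappop(heap)
        if u == target:
            return val
        if val > best.get(u, float("inf")):
            continue                    # stale heap entry
        for v, w in graph[u]:
            cand = max(val, w)          # bottleneck value via u
            if cand < best.get(v, float("inf")):
                best[v] = cand
                heapq.heappush(heap, (cand, v))
    return None                         # target unreachable
```

Here the path s-a-t has bottleneck max(4, 6) = 6 while s-b-t has max(7, 3) = 7, so the algorithm returns 6.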
oai:RePEc:eee:ejores:v:197:y:2009:i:3:p:897-911 2014-11-20 RePEc:eee:ejores
RePEc:eee:ejores:v:197:y:2009:i:3:p:897-911
article
Bundle pricing of inventories with stochastic demand
We consider a retailer selling a fixed inventory of two perishable products over a finite horizon. Assuming Poisson arrivals and a bivariate reservation price distribution, we determine the optimal product and bundle prices that maximize the expected revenue. Our results indicate that the performances of mixed bundling, pure bundling and unbundled sales strategies heavily depend on the parameters of the demand process and the initial inventory levels. Bundling appears to be most effective with negatively correlated reservation prices and high starting inventory levels. When the starting inventory levels are equal and in excess of average demand, most of the benefits of bundling can be achieved through pure bundling. However, the mixed bundling strategy dominates the other two when the starting inventory levels are not equal. We also observe that an incorrect modeling of the reservation prices may lead to significant losses. The model is extended to allow for price changes during the selling horizon. It is shown that offering price bundles mid-season may be more effective than changing individual product prices.
Pricing Revenue management Bundling
http://www.sciencedirect.com/science/article/B6VCT-4RWH899-D/2/2699755bd4257fc37779d6a4cf82c37b
Bulut, Zümbül
Gürler, Ülkü
Sen, Alper
oai:RePEc:eee:ejores:v:218:y:2012:i:1:p:239-250 2014-11-20 RePEc:eee:ejores
RePEc:eee:ejores:v:218:y:2012:i:1:p:239-250
article
Approximate dynamic programming for capacity allocation in the service industry
We consider a problem where different classes of customers can book different types of service in advance and the service company has to respond immediately to the booking request, confirming or rejecting it. The objective of the service company is to maximize profit, made up of class-type-specific revenues, refunds for cancellations or no-shows, and the cost of overtime. For the calculation of the latter, information on the underlying appointment schedule is required. In contrast to most models in the literature, we assume that the service time of clients is stochastic and that clients might be unpunctual. Throughout the paper we relate the problem to capacity allocation in radiology services. The problem is modeled as a continuous-time Markov decision process and solved using simulation-based approximate dynamic programming (ADP) combined with a discrete event simulation of the service period. We employ an adapted heuristic ADP algorithm from the literature and investigate the benefits of applying ADP to this type of problem. First, we study a simplified problem with deterministic service times and punctual arrival of clients and compare the solution from the ADP algorithm to the optimal solution. We find that the heuristic ADP algorithm performs very well in terms of objective function value, solution time, and memory requirements. Second, we study the problem with stochastic service times and unpunctuality. It is then shown that the resulting policy constitutes a large improvement over an “optimal” policy that is deduced using restrictive, simplifying assumptions.
Capacity allocation; Services; Health care operations; Approximate dynamic programming; Reinforcement learning; Semi-Markov decision process;
http://www.sciencedirect.com/science/article/pii/S0377221711008101
Schütz, Hans-Jörg
Kolisch, Rainer
oai:RePEc:eee:ejores:v:218:y:2012:i:1:p:38-47 2014-11-20 RePEc:eee:ejores
RePEc:eee:ejores:v:218:y:2012:i:1:p:38-47
article
A hierarchy of relaxations for nonlinear convex generalized disjunctive programming
We propose a framework to generate alternative mixed-integer nonlinear programming formulations for disjunctive convex programs that lead to stronger relaxations. We extend the concept of “basic steps” defined for disjunctive linear programs to the nonlinear case. A basic step is an operation that takes a disjunctive set to another with a smaller number of conjuncts. We show that the strength of the relaxations increases as the number of conjuncts decreases, leading to a hierarchy of relaxations. We prove that the tightest of these relaxations allows, in theory, the solution of the disjunctive convex program as a nonlinear programming problem. We present a methodology to guide the generation of strong relaxations without incurring an exponential increase in the size of the reformulated mixed-integer program. Finally, we apply the theory developed to improve the computational efficiency of solution methods for nonlinear convex generalized disjunctive programs (GDPs). This methodology is validated through a set of numerical examples.
Combinatorial optimization; Convex programming; Disjunctive programming; Generalized disjunctive programming; Tight relaxations;
http://www.sciencedirect.com/science/article/pii/S037722171100899X
Ruiz, Juan P.
Grossmann, Ignacio E.
oai:RePEc:eee:ejores:v:225:y:2013:i:3:p:552-557 2014-11-20 RePEc:eee:ejores
RePEc:eee:ejores:v:225:y:2013:i:3:p:552-557
article
The 1-median and 1-highway problem
In this paper we study a facility location problem in the plane in which a single point (median) and a rapid transit line (highway) are simultaneously located in order to minimize the total travel time of the clients to the facility, using the L1 or Manhattan metric. The highway is an alternative transportation system that the clients can use to reduce their travel time to the facility. We represent the highway by a line segment with fixed length and arbitrary orientation. This problem was introduced in [Computers & Operations Research 38(2) (2011) 525–538], where both a characterization of the optimal solutions and an algorithm running in O(n^3 log n) time were given, n being the number of clients. In this paper we show that the previous characterization does not hold in general. Moreover, we provide a complete characterization of the solutions and give an algorithm solving the problem in O(n^3) time.
Location; Geometric optimization; Transportation; Time distance;
http://www.sciencedirect.com/science/article/pii/S0377221712006923
Díaz-Báñez, J.M.
Korman, M.
Pérez-Lantero, P.
Ventura, I.
oai:RePEc:eee:ejores:v:235:y:2014:i:1:p:287-299 2014-11-20 RePEc:eee:ejores
RePEc:eee:ejores:v:235:y:2014:i:1:p:287-299
article
Simulation modelling for contracting hospital emergency services at the regional level
Hospital emergency services are closely connected to demographic issues and population changes. The methodology presented here helps to assess the effects of the forecasted demand changes on the next-year emergency unit workloads. The objective of the study is to estimate the expected volume of emergency hospital services, as measured by the number and costs of medical procedures provided to patients, to be contracted by the Polish National Health Fund (NFZ) branch at the regional level to cover the forecasted demand. A discrete-event simulation model was developed to elaborate the credible forecasts of the function components, the fundamental elements of the contract values granted by the NFZ for emergency departments for the following year. Emergency department-level data were drawn from the NFZ regional branch registry to perform a statistical analysis of emergency services provided to patients in 17 admission units and emergency wards in 2010. The model results indicate that the predicted increase in two age groups, i.e., the youngest children and the older population, will have different effects on the number and value of hospital emergency services to be considered in the contracting policy. There is potential for a discrete-event simulation to support strategic health policy decision making at the regional level. The value of this approach lies in providing estimates for the what-if scenarios related to the prognosis of changing acute demand.
OR in health services; Simulation; Hospitals; Emergency services; Contracting;
http://www.sciencedirect.com/science/article/pii/S0377221713008904
Mielczarek, Bożena
oai:RePEc:eee:ejores:v:237:y:2014:i:3:p:1067-1082 2014-11-20 RePEc:eee:ejores
RePEc:eee:ejores:v:237:y:2014:i:3:p:1067-1082
article
Export diversification through resource-based industrialization: The case of natural gas
For small resource-rich developing economies, specialization in raw exports is usually considered to be detrimental to growth and Resource-Based Industrialization (RBI) is often advocated to promote export diversification. This paper develops a new methodology to assess the performance of these RBI policies. We first formulate an adapted mean-variance portfolio model that explicitly takes into consideration: (i) a technology-based representation of the set of feasible export combinations and (ii) the cost structure of the resource processing industries. Second, we provide a computationally tractable reformulation of the resulting mixed-integer nonlinear optimization problem. Finally, we present an application to the case of natural gas, comparing current and efficient export-oriented industrialization strategies of nine gas-rich developing countries.
OR in developing countries; Mean-variance portfolio model; Efficiency measure; Development planning; Resource-based industrialization; Export earnings volatility;
http://www.sciencedirect.com/science/article/pii/S0377221714001787
Massol, Olivier
Banal-Estañol, Albert
oai:RePEc:eee:ejores:v:225:y:2013:i:3:p:383-392 2014-11-20 RePEc:eee:ejores
RePEc:eee:ejores:v:225:y:2013:i:3:p:383-392
article
Production and availability policies through the Markov Decision Process and myopic methods for contractual and selective orders
In this paper, we consider a supply chain with one manufacturer, one retailer, and online customers. In addition to supplying the retailer, the manufacturer may selectively take orders from individuals online. Through the Markov Decision Process, we explore the optimal production and availability policy for a manufacturer to determine whether to produce one more unit of product and whether to indicate “in stock” or “out of stock” on the website. We measure the benefits and influences of adding online customers with and without the sharing of the retailer’s inventory information. We also simulate the production and availability policy via a myopic method, which can be implemented easily in the real world. Prediction of simple switching functions for production and availability is proposed. We find that information sharing, production capacity and the unit profit from online orders are the primary factors influencing manufacturer profits and the optimal policy. The manufacturer might reserve 50% of production capacity for contractual orders from the retailer and devote the remaining capacity to selective orders from spontaneous online customers.
Supply chain management; Information-sharing; Markov Decision Process; Production/availability policy;
http://www.sciencedirect.com/science/article/pii/S0377221712007321
Chao, Gary H.
oai:RePEc:eee:ejores:v:239:y:2014:i:3:p:711-730 2014-11-20 RePEc:eee:ejores
RePEc:eee:ejores:v:239:y:2014:i:3:p:711-730
article
Robust ordinal regression for value functions handling interacting criteria
We present a new method called UTAGMS–INT for ranking a finite set of alternatives evaluated on multiple criteria. It belongs to the family of Robust Ordinal Regression (ROR) methods which build a set of preference models compatible with preference information elicited by the Decision Maker (DM). The preference model used by UTAGMS–INT is a general additive value function augmented by two types of components corresponding to “bonus” or “penalty” values for positively or negatively interacting pairs of criteria, respectively. When calculating value of a particular alternative, a bonus is added to the additive component of the value function if a given pair of criteria is in a positive synergy for performances of this alternative on the two criteria. Similarly, a penalty is subtracted from the additive component of the value function if a given pair of criteria is in a negative synergy for performances of the considered alternative on the two criteria. The preference information elicited by the DM is composed of pairwise comparisons of some reference alternatives, as well as of comparisons of some pairs of reference alternatives with respect to intensity of preference, either comprehensively or on a particular criterion. In UTAGMS–INT, ROR starts with identification of pairs of interacting criteria for given preference information by solving a mixed-integer linear program. Once the interacting pairs are validated by the DM, ROR continues calculations with the whole set of compatible value functions handling the interacting criteria, to get necessary and possible preference relations in the considered set of alternatives. A single representative value function can be calculated to attribute specific scores to alternatives. It also gives values to bonuses and penalties. UTAGMS–INT handles quite general interactions among criteria and provides an interesting alternative to the Choquet integral.
Multiple criteria decision aiding; Value function; Interacting criteria; Robust ordinal regression;
http://www.sciencedirect.com/science/article/pii/S0377221714004457
Greco, Salvatore
Mousseau, Vincent
Słowiński, Roman
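The shape of the preference model — an additive value function augmented with bonus/penalty terms for interacting criterion pairs — can be sketched as follows. The marginal value functions, synergy coefficients and the min-based scaling of the interaction terms are illustrative assumptions, not the paper's exact formulation.

```python
# Additive value function plus pairwise interaction terms, the model shape
# used by UTA-like methods with interacting criteria. All numbers invented.
marginal = {  # criterion -> marginal value function (hypothetical shapes)
    "cost": lambda x: 1 - x / 100.0,
    "quality": lambda x: x / 10.0,
    "speed": lambda x: x / 5.0,
}
synergy = {("quality", "speed"): 0.1,    # bonus: positive interaction
           ("cost", "speed"): -0.05}     # penalty: negative interaction

def value(alt):
    """Additive part plus interaction terms; each bonus/penalty is scaled by
    the weaker of the two marginal values (a min-aggregation assumption)."""
    v = sum(f(alt[c]) for c, f in marginal.items())
    for (c1, c2), s in synergy.items():
        v += s * min(marginal[c1](alt[c1]), marginal[c2](alt[c2]))
    return v
```

In the paper the marginal functions and synergy coefficients are not fixed in advance but inferred from the decision maker's pairwise comparisons via mixed-integer linear programming.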
oai:RePEc:eee:ejores:v:219:y:2012:i:1:p:178-187 2014-11-20 RePEc:eee:ejores
RePEc:eee:ejores:v:219:y:2012:i:1:p:178-187
article
Identifying managerial groups in a large Canadian bank branch network with a DEA approach
This paper develops a new grouping approach that uses data envelopment analysis as a framework to identify management groups and group performance leaders. The management group relies on the fact that branches grouped together present similar managerial preferences over performance goals and resource deployments due to internal or external market forces. This grouping approach can help a firm create continuous improvement opportunities by effectively promoting the best managerial practices within groups, given similar operating characteristics. The approach’s grouping power and rationality are examined in the context of a large Canadian bank with about 1000 branches. The advantages of this new grouping approach are further verified through comparisons with the results obtained from a traditional clustering algorithm and the collaborating bank’s “community type and population size” grouping criteria.
Data envelopment analysis; Clustering methods; DMU grouping; Branch management; Bank branch;
http://www.sciencedirect.com/science/article/pii/S037722171101099X
Paradi, Joseph C.
Zhu, Haiyan
Edelstein, Barak
oai:RePEc:eee:ejores:v:223:y:2012:i:2:p:304-3112014-11-20RePEc:eee:ejores
RePEc:eee:ejores:v:223:y:2012:i:2:p:304-311
article
Improvement sets and vector optimization
In this paper we focus on minimal points in linear spaces and minimal solutions of vector optimization problems, where the preference relation is defined via an improvement set E. To be precise, we extend the notion of E-optimal point due to Chicco et al. in [4] to a general (not necessarily Pareto) quasi-ordered linear space and we study its properties. In particular, we relate the notion of improvement set to other similar concepts in the literature and we characterize it by means of sublevel sets of scalar functions. Moreover, we obtain necessary and sufficient conditions for E-optimal solutions of vector optimization problems through scalarization processes, under convexity assumptions and also in the general (nonconvex) case. By applying the obtained results to certain improvement sets, we generalize well-known results in the literature on efficient, weakly efficient and approximately efficient solutions of vector optimization problems.
Improvement set; Minimal point; Vector optimization; ε-Efficiency; Scalarization;
http://www.sciencedirect.com/science/article/pii/S0377221712004559
Gutiérrez, C.
Jiménez, B.
Novo, V.
oai:RePEc:eee:ejores:v:236:y:2014:i:1:p:37-502014-11-20RePEc:eee:ejores
RePEc:eee:ejores:v:236:y:2014:i:1:p:37-50
article
Scheduling with few changes
In this work we consider scheduling problems where a sequence of assignments from products to machines – or from tasks to operators, or from workers to resources – has to be determined, with the goal of minimizing the costs (money, manpower, and/or time) incurred by the interplay between those assignments. To account for different practical requirements (e.g. few changes between different products/tasks on the same machine/operator, few production disruptions, or few changes of the same worker between different resources), we employ different objective functions that are all based on elementary combinatorial properties of the schedule matrix. We propose simple and efficient algorithms to solve the corresponding optimization problems, and provide hardness results for the variants where such algorithms most likely do not exist.
Assignment problem; Changeover cost; Algorithm;
http://www.sciencedirect.com/science/article/pii/S0377221713009107
Mütze, Torsten
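One elementary combinatorial quantity behind such objectives, the number of changes between consecutive periods in the schedule matrix, can be computed directly (a minimal sketch; the layout, with periods as rows and machines as columns, is an assumption):

```python
def changeovers(schedule):
    """Total number of changes across all machines, where schedule[t][m]
    is the product assigned to machine m in period t (assumed layout).
    A change is counted whenever a machine's product differs between two
    consecutive periods."""
    return sum(schedule[t][m] != schedule[t - 1][m]
               for t in range(1, len(schedule))
               for m in range(len(schedule[0])))
```

For instance, the schedule [[1, 2], [1, 3], [2, 3]] has two changeovers: machine 0 switches from product 1 to 2 in the last period, and machine 1 switches from 2 to 3 in the second.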
oai:RePEc:eee:ejores:v:208:y:2011:i:1:p:12-182014-11-20RePEc:eee:ejores
RePEc:eee:ejores:v:208:y:2011:i:1:p:12-18
article
Inverse variational inequalities with projection-based solution methods
An inverse variational inequality seeks a vector whose image under a given mapping F satisfies a variational inequality condition. If an inverse function u = F⁻¹(x) exists, the inverse variational inequality can be transformed into a regular variational inequality. In reality, however, it is not uncommon that the inverse function F⁻¹(x) has no explicit form, even though its functional values can be observed. Existing line search algorithms cannot be applied directly to solve such inverse variational inequalities. In this paper, we propose two projection-based methods that exploit the co-coercivity of the mapping F. A self-adaptive strategy is developed to determine the step sizes efficiently when the co-coercivity modulus is unknown. The convergence of the proposed methods is proved rigorously.
Inverse variational inequality Co-coercivity Projection method Self-adaptive strategy
http://www.sciencedirect.com/science/article/B6VCT-50W80TK-3/2/5fe0a0582dca59b45f4941290ae8aa0f
He, Xiaozheng
Liu, Henry X.
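For a regular variational inequality with a co-coercive mapping, the basic projection iteration underlying such methods can be sketched as follows (a one-dimensional illustration with a fixed step size; the paper's methods operate on the inverse problem and adapt the step size, so this shows only the shared building block):

```python
def project_interval(x, lo, hi):
    """Euclidean projection onto the interval [lo, hi]."""
    return min(max(x, lo), hi)

def vi_projection_method(F, x0, lo, hi, tau=0.5, tol=1e-12, max_iter=100000):
    """Fixed-point projection iteration x <- P(x - tau * F(x)) for a
    variational inequality over [lo, hi]. For a co-coercive mapping F the
    iteration converges once tau is small enough; a self-adaptive strategy
    (as in the paper) would instead tune tau when the co-coercivity
    modulus is unknown."""
    x = float(x0)
    x_new = x
    for _ in range(max_iter):
        x_new = project_interval(x - tau * F(x), lo, hi)
        if abs(x_new - x) < tol:
            break  # fixed point reached: x solves the VI
        x = x_new
    return x_new
```

For F(x) = x - 1 on [0, 2] the iteration converges to the solution x* = 1; for F(x) = x + 1 (positive on the whole interval) it converges to the boundary solution x* = 0.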
oai:RePEc:eee:ejores:v:236:y:2014:i:1:p:27-362014-11-20RePEc:eee:ejores
RePEc:eee:ejores:v:236:y:2014:i:1:p:27-36
article
Frequency optimization in public transportation systems: Formulation and metaheuristic approach
We study the transit frequency optimization problem, which aims to determine the time interval between subsequent buses for a set of public transportation lines given by their itineraries, i.e., sequences of stops and street sections. The solution should satisfy a given origin–destination demand and a constraint on the available fleet of buses. We propose a new mixed integer linear programming (MILP) formulation for an existing model, originally formulated as a nonlinear bilevel one. The proposed formulation is able to solve real small-sized instances of the problem to optimality using MILP techniques. For larger instances we propose a metaheuristic whose accuracy is estimated by comparison against exact results (when possible). Both the exact and the approximate approaches are tested on existing cases, including a real one concerning a small city whose public transportation system comprises 13 lines. The magnitude of the improvement of that system obtained by applying the proposed methodologies is comparable with the improvements reported in the literature for other real systems. We also investigate the applicability of the metaheuristic to a larger real case comprising more than 130 lines.
Transportation; Bus frequency optimization; Mixed integer linear programming; Tabu Search;
http://www.sciencedirect.com/science/article/pii/S0377221713009065
Martínez, Héctor
Mauttone, Antonio
Urquhart, María E.
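One standard way a fleet constraint links headways to the number of available buses is through line cycle times (this specific relation is an assumption for illustration; the paper embeds its fleet constraint in the MILP formulation):

```python
import math

def fleet_size(cycle_times, headways):
    """Vehicles required to operate each line at its chosen headway.
    A line whose round trip takes cycle_times[i] minutes and whose buses
    depart every headways[i] minutes needs ceil(cycle / headway) buses
    (a standard transit planning relation, assumed here for illustration)."""
    return sum(math.ceil(c / h) for c, h in zip(cycle_times, headways))
```

For example, two lines with 60- and 45-minute cycles run at 10- and 15-minute headways require 6 + 3 = 9 buses; a frequency optimizer must keep this total within the available fleet.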
oai:RePEc:eee:ejores:v:209:y:2011:i:2:p:122-1282014-11-20RePEc:eee:ejores
RePEc:eee:ejores:v:209:y:2011:i:2:p:122-128
article
The Shapley-Shubik index for multi-criteria simple games
In this paper we address multi-criteria simple games which constitute an extension of the basic framework of voting systems and related social-choice situations. For these games, we propose the extended Shapley-Shubik index as the natural generalization of the Shapley-Shubik index in conventional simple games, and establish an axiomatic characterization of this power index.
Multiple criteria analysis Group decision-making Multi-criteria simple games Shapley-Shubik index Voting systems
http://www.sciencedirect.com/science/article/B6VCT-50SXC4Y-3/2/52429d258369f7790523c8ccca5867e5
Monroy, Luisa
Fernández, Francisco R.
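In the conventional simple-game setting that the extended index generalizes, the Shapley-Shubik index can be computed by counting pivotal players over all orderings (a brute-force sketch, practical only for small weighted voting games):

```python
import math
from itertools import permutations

def shapley_shubik(weights, quota):
    """Shapley-Shubik power index of a weighted voting game.
    A player is pivotal in an ordering if their vote first pushes the
    running weight total to the quota; the index is each player's share
    of the n! orderings in which they are pivotal."""
    n = len(weights)
    pivots = [0] * n
    for order in permutations(range(n)):
        total = 0
        for player in order:
            total += weights[player]
            if total >= quota:
                pivots[player] += 1  # this player tips the coalition
                break
    m = math.factorial(n)
    return [p / m for p in pivots]
```

For weights (2, 1, 1) and quota 3, the first player is pivotal in 4 of the 6 orderings, giving the index (2/3, 1/6, 1/6).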
oai:RePEc:eee:ejores:v:210:y:2011:i:1:p:57-672014-11-20RePEc:eee:ejores
RePEc:eee:ejores:v:210:y:2011:i:1:p:57-67
article
A comprehensive method for comparing mental models of dynamic systems
Mental models are the basis on which managers make decisions, even when external decision support systems provide help. Research has demonstrated that more comprehensive and dynamic mental models seem to be the foundation for improved policies and decisions. Eliciting and comparing such models can systematically explicate key variables and their main underlying structures. In addition, superior dynamic mental models can be identified. This paper reviews existing studies which measure and compare mental models. It shows that the methods used to compare such models fail to account for relevant aspects of dynamic systems, such as time delays in causal links, feedback structures, and the polarities of feedback loops. Mental models without those properties are mostly static models. To overcome these limitations, we enhance the widely used distance ratio approach (Markóczy and Goldberg, 1995) so that it captures these dynamic characteristics and detects differences among mental models at three levels: the level of elements, the level of individual feedback loops, and the level of the complete model. Our contribution lies in a new method to compare explicated mental models, not to elicit such models. An application of the method shows that this previously unavailable information is essential for understanding differences between managers' mental models of dynamic systems. Thereby, a further path is created to critically analyze and elaborate the models managers use in real-world decision making. We discuss the benefits and limitations of our approach for research on mental models and decision making, and conclude by identifying directions for further research for operational researchers.
Problem structuring Mental models Dynamic systems Feedback
http://www.sciencedirect.com/science/article/B6VCT-5100HN4-1/2/0ac3cc157c2d7deb44b2bfc59ed941dd
Schaffernicht, Martin
Groesser, Stefan N.
oai:RePEc:eee:ejores:v:220:y:2012:i:2:p:370-3772014-11-20RePEc:eee:ejores
RePEc:eee:ejores:v:220:y:2012:i:2:p:370-377
article
Pooling through lateral transshipments in service parts systems
We study the inventory management problem of a service center operating in a decentralized service parts network. The service centers collaborate through inventory and service pooling, and through sharing information on the inventory status. Upon demand arrival, a service center may request a part from the other center, in which case a payment is made. Under this competitive and collaborative environment, we first characterize the optimal operating policy of an individual service center. Through computational analysis we identify the conditions under which pooling is most beneficial to the service center, and make an assessment of different pooling strategies which are commonly adopted in practice and in the literature. Finally, we analyze the effect of interaction between the centers on the benefit of pooling.
Inventory; Lateral transshipment; Service parts; Optimal control of queueing networks;
http://www.sciencedirect.com/science/article/pii/S0377221712001269
Satır, Benhür
Savasaneril, Secil
Serin, Yasemin
oai:RePEc:eee:ejores:v:229:y:2013:i:2:p:529-5392014-11-20RePEc:eee:ejores
RePEc:eee:ejores:v:229:y:2013:i:2:p:529-539
article
Unoriented two-stage DEA: The case of the oscillating intermediate products
Data envelopment analysis (DEA) allows us to evaluate the relative efficiency of each of a set of decision-making units (DMUs). However, the methodology does not permit us to identify specific sources of inefficiency because DEA views the DMU as a “black box” that consumes a mix of inputs and produces a mix of outputs. Thus, DEA does not provide a DMU manager with insight regarding the internal source of the organization’s inefficiency.
Data envelopment analysis; Two-stage; Unoriented; Major League Baseball;
http://www.sciencedirect.com/science/article/pii/S0377221713002075
Lewis, Herbert F.
Mallikarjun, Sreekanth
Sexton, Thomas R.
oai:RePEc:eee:ejores:v:228:y:2013:i:1:p:190-2002014-11-20RePEc:eee:ejores
RePEc:eee:ejores:v:228:y:2013:i:1:p:190-200
article
A maximum entropy approach to the newsvendor problem with partial information
In this paper, we consider the newsvendor model under partial information, i.e., where the demand distribution D is partly unknown. We focus on the classical case where the retailer only knows the expectation and variance of D. The standard approach is then to determine the order quantity using conservative rules such as minimax regret or Scarf’s rule. We compute instead the most likely demand distribution in the sense of maximum entropy. We then compare the performance of the maximum entropy approach with minimax regret and Scarf’s rule on large samples of randomly drawn demand distributions. We show that the average performance of the maximum entropy approach is considerably better than either alternative, and more surprisingly, that it is in most cases a better hedge against bad results.
Newsvendor model; Entropy; Partial information;
http://www.sciencedirect.com/science/article/pii/S0377221713000787
Andersson, Jonas
Jörnsten, Kurt
Nonås, Sigrid Lise
Sandal, Leif
Ubøe, Jan
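The two distribution-free benchmarks compared in the paper can be sketched for the mean-variance setting (Scarf's rule in its common closed form; for the maximum-entropy approach, note that with only the mean and variance known on the whole real line the maximum-entropy distribution is normal, so the order quantity is its critical fractile; whether the paper restricts demand to be nonnegative is not stated here):

```python
import math
from statistics import NormalDist

def scarf_order_quantity(mu, sigma, cu, co):
    """Scarf's distribution-free (min-max) order quantity for a newsvendor
    who knows only the mean mu and standard deviation sigma of demand.
    cu = underage (lost margin) cost per unit; co = overage cost per unit."""
    return mu + (sigma / 2.0) * (math.sqrt(cu / co) - math.sqrt(co / cu))

def max_entropy_order_quantity(mu, sigma, cu, co):
    """Order quantity under the maximum-entropy demand distribution.
    With only mean and variance fixed (on the real line), the max-entropy
    distribution is Normal(mu, sigma), and the optimal order is its
    critical fractile cu / (cu + co)."""
    return NormalDist(mu, sigma).inv_cdf(cu / (cu + co))
```

With symmetric costs (cu = co), both rules order exactly the mean; with cu = 4, co = 1, mu = 100, sigma = 20, Scarf's rule orders 100 + 10·(2 − 0.5) = 115.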
oai:RePEc:eee:ejores:v:201:y:2010:i:2:p:593-6002014-11-20RePEc:eee:ejores
RePEc:eee:ejores:v:201:y:2010:i:2:p:593-600
article
An interactive GRAMPS algorithm for the heterogeneous fixed fleet vehicle routing problem with and without backhauls
In this article, a visual interactive approach based on a new greedy randomised adaptive memory programming search (GRAMPS) algorithm is proposed to solve the heterogeneous fixed fleet vehicle routing problem (HFFVRP) and a new extension of it, the heterogeneous fixed fleet vehicle routing problem with backhauls (HFFVRPB). This problem involves two different sets of customers: backhaul customers are pickup points and linehaul customers are delivery points, all serviced from a single depot by a heterogeneous fixed fleet of vehicles, each restricted in the capacity it can carry and with different variable travelling costs. The proposed approach is implemented within a visual decision support system, developed to allow users to produce and judge alternative decisions using their knowledge and experience of the requirements of the HFFVRP. Computational results are provided on classical HFFVRP problem instances, and a new best-known solution is reported. A new set of problem instances for the HFFVRPB is proposed. The results show that the proposed approach finds high-quality solutions in a very short time and that the system is able to create alternative solutions to satisfy the user's expectations.
Routing Metaheuristics GRAMPS Heterogeneous fixed fleet Backhauls
http://www.sciencedirect.com/science/article/B6VCT-4W0SK0T-2/2/cb1a5525150a96936fadef64edf424ac
Tütüncü, G. Yazgı
oai:RePEc:eee:ejores:v:223:y:2012:i:1:p:226-2332014-11-20RePEc:eee:ejores
RePEc:eee:ejores:v:223:y:2012:i:1:p:226-233
article
Developing a step-by-step effectiveness assessment model for customer-oriented service organizations
Effectiveness involves more than simple efficiency, which is limited to assessing the production process of peer operational units. Effectiveness incorporates variables that are both controllable (i.e. efficiency) and non-controllable (i.e. perceived quality) by the operational units. It is a fundamental driver of success for either a for-profit or a not-for-profit unit in a competitive environment that is customer/citizen- and goal-oriented. Taking into account the short-run production constraints, i.e. the resources available to and controllable by the operational units, as well as their legal status, we go beyond traditional effectiveness assessment techniques by developing a Modified or “rational” Quality-driven Efficiency-adjusted Data Envelopment Analysis (MQE-DEA) model. This model provides, in the short run, a feasible effectiveness attainment path for every disqualified unit so that it can meet high perceived-quality and high-efficiency standards. By applying the MQE-DEA model, a new production equilibrium is determined that differs from the equilibrium suggested by mainstream microeconomic theory in that it takes into account not only the need for operational efficiency but also customer-driven market dynamics.
OR in service industries; Efficiency; Perceived quality; Production equilibrium; Data Envelopment Analysis (DEA); Context-dependent DEA;
http://www.sciencedirect.com/science/article/pii/S0377221712004584
Brissimis, Sophocles N.
Zervopoulos, Panagiotis D.
oai:RePEc:eee:ejores:v:214:y:2011:i:1:p:91-982014-11-20RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:1:p:91-98
article
Finding all pure strategy Nash equilibria in a planar location game
In this paper, we deal with a planar location-price game where firms first select their locations and then set delivered prices in order to maximize their profits. If firms set the equilibrium prices in the second stage, the game is reduced to a location game for which pure strategy Nash equilibria are studied assuming that the marginal delivered cost is proportional to the distance between the customer and the facility from which it is served. We present characterizations of local and global Nash equilibria. Then an algorithm is shown in order to find all possible Nash equilibrium pairs of locations. The minimization of the social cost leads to a Nash equilibrium. An example shows that there may exist multiple Nash equilibria which are not minimizers of the social cost.
Location Game theory Nash equilibrium
http://www.sciencedirect.com/science/article/pii/S0377221711003134
Díaz-Báñez, J.M.
Heredia, M.
Pelegrín, B.
Pérez-Lantero, P.
Ventura, I.
oai:RePEc:eee:ejores:v:196:y:2009:i:1:p:155-1612014-11-20RePEc:eee:ejores
RePEc:eee:ejores:v:196:y:2009:i:1:p:155-161
article
Multiobjective traveling salesperson problem on Halin graphs
In this paper, we study traveling salesperson (TSP) and bottleneck traveling salesperson (BTSP) problems on special graphs called Halin graphs. Although both problems are NP-Hard on general graphs, they are polynomially solvable on Halin graphs. We address the multiobjective versions of these problems. We show computational complexities of finding a single nondominated point as well as finding all nondominated points for different objective function combinations. We develop algorithms for the polynomially solvable combinations.
Traveling salesperson problem Bottleneck traveling salesperson problem Multiple objectives Solvable cases Halin graphs Computational complexity
http://www.sciencedirect.com/science/article/B6VCT-4S8TB73-2/2/8a9cd198c25a251b20b281cca4ab4dff
Özpeynirci, Özgür
Köksalan, Murat
oai:RePEc:eee:ejores:v:227:y:2013:i:2:p:367-3842014-11-20RePEc:eee:ejores
RePEc:eee:ejores:v:227:y:2013:i:2:p:367-384
article
A Tabu Search heuristic procedure in Markov chain bootstrapping
Markov chain theory is proving to be a powerful approach to bootstrapping finite-state processes, especially where the time dependence is nonlinear. In this work we extend the approach to bootstrap discrete-time continuous-valued processes. To this end, we solve a minimization problem that partitions the state space of a continuous-valued process into a finite number of intervals or unions of intervals (i.e. its states) and identifies the time lags which provide “memory” to the process. A distance is used as the objective function to encourage the clustering of states having similar transition probabilities. The problem of the exploding number of alternative partitions in the solution space (which grows with the number of states and the order of the Markov chain) is addressed through a Tabu Search algorithm. The method is applied to bootstrap the series of German and Spanish electricity prices. The analysis of the results confirms the good consistency properties of the proposed method.
Markov chains; Bootstrapping; Heuristic algorithms; Tabu Search; Electricity prices;
http://www.sciencedirect.com/science/article/pii/S0377221712008338
Cerqueti, Roy
Falbo, Paolo
Guastaroba, Gianfranco
Pelizzari, Cristian
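Once the state space has been partitioned, the bootstrap step itself amounts to estimating transition probabilities from the discretized series and resampling paths from the fitted chain (a first-order sketch with hypothetical function names; the paper works with higher-order chains and optimizes the partition via Tabu Search):

```python
import random
from collections import defaultdict

def estimate_transitions(states):
    """Empirical first-order transition probabilities of a discretized series:
    P(s -> t) = (# of s followed by t) / (# of transitions out of s)."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(states, states[1:]):
        counts[a][b] += 1
    return {s: {t: c / sum(nxt.values()) for t, c in nxt.items()}
            for s, nxt in counts.items()}

def bootstrap_path(trans, start, length, seed=0):
    """One bootstrap replicate: resample a path of the given length from
    the estimated chain, starting in state `start`."""
    rng = random.Random(seed)
    path = [start]
    while len(path) < length:
        nxt = trans[path[-1]]
        path.append(rng.choices(list(nxt), weights=list(nxt.values()))[0])
    return path
```

For the toy series 0, 0, 1, 0, 0, 1 the estimate gives P(1 → 0) = 1 and P(0 → 0) = P(0 → 1) = 0.5, and resampled paths reproduce those frequencies in the long run.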
oai:RePEc:eee:ejores:v:196:y:2009:i:3:p:1008-10142014-11-20RePEc:eee:ejores
RePEc:eee:ejores:v:196:y:2009:i:3:p:1008-1014
article
Axiomatizations of the Shapley value for games on augmenting systems
This paper deals with cooperative games in which only certain coalitions are allowed to form. There have been previous models developed to confront the problem of unallowable coalitions. Games restricted by a communication graph were introduced by Myerson and Owen. In their model, the feasible coalitions are those that induce connected subgraphs. Another type of model is introduced in Gilles, Owen and van den Brink. In their model, the possibilities of coalition formation are determined by the positions of the players in a so-called permission structure. Faigle proposed another model for cooperative games defined on lattice structures. We introduce a combinatorial structure called augmenting system which is a generalization of the antimatroid structure and the system of connected subgraphs of a graph. In this framework, the Shapley value of games on augmenting systems is introduced and two axiomatizations of this value are showed.
Augmenting system Shapley value
http://www.sciencedirect.com/science/article/B6VCT-4SF3042-2/2/d6726c2700f6a35ff63922df6c6df7cf
Bilbao, J.M.
Ordóñez, M.
oai:RePEc:eee:ejores:v:217:y:2012:i:3:p:673-6782014-11-20RePEc:eee:ejores
RePEc:eee:ejores:v:217:y:2012:i:3:p:673-678
article
Monte Carlo analysis of estimation methods for the prediction of customer response patterns in direct marketing
In direct marketing, customers are usually asked to take a specific action, and their responses are recorded over time and stored in a database. Based on the response data, we can estimate the number of customers who will ultimately respond, the number of responses anticipated to receive by a certain period of time, and the like. The goal of this article is to derive and propose several estimation methods and compare their performances in a Monte Carlo simulation. The response patterns can be described by a simple geometric function, which relates the number of responses to elapsed time. The “maximum likelihood” estimator appears to be the most effective method of estimating the parameters of this function. As we have more sample observations, the maximum likelihood estimates also converge to the true parameter values rapidly.
Response pattern; Direct marketing; Probabilistic modeling;
http://www.sciencedirect.com/science/article/pii/S0377221711009076
Chun, Young H.
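A simple version of the geometric response model and its maximum-likelihood estimate can be sketched as follows (a complete-data simplification with illustrative function names; the paper's estimators also handle customers who have not yet responded by the observation cutoff):

```python
def geometric_mle(response_times):
    """ML estimate of the per-period response probability p when individual
    response times are geometric on {1, 2, ...}: p_hat = n / sum(t_i)."""
    return len(response_times) / sum(response_times)

def expected_cumulative_fraction(p, t):
    """Expected fraction of eventual responders heard from by period t
    under the geometric model: 1 - (1 - p)^t."""
    return 1 - (1 - p) ** t
```

For example, observed response times (1, 1, 2, 4) give p_hat = 4/8 = 0.5, so 75 percent of eventual responders are expected within the first two periods.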
oai:RePEc:eee:ejores:v:203:y:2010:i:1:p:59-692014-11-20RePEc:eee:ejores
RePEc:eee:ejores:v:203:y:2010:i:1:p:59-69
article
The Attractive Traveling Salesman Problem
In the Attractive Traveling Salesman Problem the vertex set is partitioned into facility vertices and customer vertices. A maximum profit tour must be constructed on a subset of the facility vertices. Profit is computed through an attraction function: every visited facility vertex attracts a portion of the profit from the customer vertices based on the distance between the facility and customer vertices, and the attractiveness of the facility vertex. A gravity model is used for computing the profit attraction. The problem is formulated as an integer non-linear program. A linearization is proposed and strengthened through the introduction of valid inequalities, and a branch-and-cut algorithm is developed. A tabu search algorithm is also implemented. Computational results are reported.
Traveling Salesman Problem Demand attraction Demand allocation Linearization Branch-and-cut Tabu search
http://www.sciencedirect.com/science/article/B6VCT-4WNXTY1-2/2/9e01096f88abd14faf894acf2215ce4a
Erdogan, Günes
Cordeau, Jean-François
Laporte, Gilbert
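The gravity-model profit attraction described above can be illustrated as follows (the inverse-square distance decay is an assumed functional form for this sketch; any decreasing function of distance fits the same pattern):

```python
def gravity_shares(attractiveness, distances):
    """Gravity-model split of one customer's profit among visited facilities:
    facility i captures a share proportional to a_i / d_i**2, where a_i is
    its attractiveness and d_i its distance to the customer."""
    scores = [a / d ** 2 for a, d in zip(attractiveness, distances)]
    total = sum(scores)
    return [s / total for s in scores]
```

With two equally attractive facilities at distances 1 and 2, the nearer one captures 80 percent of the customer's profit, illustrating why the tour's profit depends jointly on which facilities are visited and where they lie.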
oai:RePEc:eee:ejores:v:222:y:2012:i:1:p:149-1562014-11-20RePEc:eee:ejores
RePEc:eee:ejores:v:222:y:2012:i:1:p:149-156
article
A pseudo-polynomial algorithm for optimal capacitor placement on electric power distribution networks
The allocation of shunt capacitor banks on radial electric power distribution networks allows the reduction of energy losses and brings aggregated benefits. Four decades ago, Durán proposed the use of dynamic programming to find optimal capacitor placements on these networks, but under the restricting assumption of single-ended networks, which precluded its application to real capacitor allocation problems. Subsequently, heuristic methods prevailed in the capacitor allocation literature. Here, the Extended Dynamic Programming approach (EDP) lifts Durán’s restricting assumption; a richer definition of state and the projection of multidimensional information into equivalent one-dimensional representations are the supporting concepts. In addition to allowing the consideration of multi-ended networks, EDP deals with other requirements of capacitor allocation studies, including the use of both fixed and switched capacitors and the representation of voltage drops along the networks. When switched capacitors are considered, the optimization procedure also solves the capacitor control problem, obtaining the best tap adjustments for them. Case studies with real-scale distribution networks put into perspective the benefits of the methodology; EDP has the appeal of providing globally optimal solutions with pseudo-polynomial computational complexity in the worst case, and with linear complexity for practical applications.
OR in energy; Dynamic programming; Power distribution networks; Optimal capacitor placement; Reduction of energy losses;
http://www.sciencedirect.com/science/article/pii/S0377221712002524
Vizcaino González, José Federico
Lyra, Christiano
Usberti, Fábio Luiz
oai:RePEc:eee:ejores:v:227:y:2013:i:1:p:182-1892014-11-20RePEc:eee:ejores
RePEc:eee:ejores:v:227:y:2013:i:1:p:182-189
article
A conditional directional distance function approach for measuring regional environmental efficiency: Evidence from UK regions
This paper, by using conditional directional distance functions as introduced by Simar and Vanhems [J. Econometrics 166 (2012) 342–354], modifies the model of Färe and Grosskopf [Eur. J. Operat. Res. 157 (2004) 242–245] and examines the link between regional environmental efficiency and economic growth. The proposed model, based on conditional directional distance functions, incorporates the effect of regional economic growth on regions’ environmental efficiency levels. The results from UK regional data reveal a negative relationship between regions’ GDP per capita and environmental inefficiency up to a certain GDP per capita level, after which the relationship becomes positive. Overall, the regional environmental inefficiency-GDP per capita relationship appears to have a ‘U’ shape.
Regional environmental efficiency; Directional distance function; Conditional measures; UK regions;
http://www.sciencedirect.com/science/article/pii/S037722171200940X
Halkos, George E.
Tzeremes, Nickolaos G.
oai:RePEc:eee:ejores:v:215:y:2011:i:1:p:161-1682014-11-20RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:1:p:161-168
article
Weighted bankruptcy rules and the museum pass problem
In this paper we introduce and characterize some allocation rules for weighted bankruptcy problems. We illustrate the relevancy of weighted bankruptcy by applying it to analyse the museum pass problem, introduced by Ginsburgh and Zang (2003). This application is completed with the analysis of real data for the "Card Musei" of the Municipality of Genova.
Game theory Bankruptcy Weighted rule Axiomatic characterization Museum pass problem
http://www.sciencedirect.com/science/article/pii/S0377221711004589
Casas-Méndez, Balbina
Fragnelli, Vito
García-Jurado, Ignacio
oai:RePEc:eee:ejores:v:195:y:2009:i:2:p:497-5182014-11-20RePEc:eee:ejores
RePEc:eee:ejores:v:195:y:2009:i:2:p:497-518
article
Evaluating environmental regulation in Spain using process control and preventive techniques
Because environmental regulations may be adjusted over time to reflect updated understanding or new circumstances, there may be scope for strategic behaviour too. Regulations affect not only current emission levels, but also investment in R&D or in new plant and equipment and, consequently, competitive priorities. Although most of the literature on environmental regulation highlights the effects that legislation has on the adoption of decisions that treat the environment as a competitive opportunity, there is actually no strong empirical evidence that supports this argument. This is why the present paper aims to identify (1) how regulation concerning the natural environment differs across sectors and (2) how it can influence managerial perceptions of the role to be played by the natural environment as a competitive opportunity. The research was carried out in two phases. The first phase involved comparative case studies of eight Spanish firms; during the second, the propositions emerging from the first phase were tested through a structural equation model of 239 hotels and 208 firms affected by the IPPC law in Spain. This paper contributes to the existing research literature through the examination of the similarities and differences in managerial decision-making with respect to natural environment regulations. Moreover, a contact point between the Porter hypothesis and its criticism is offered. Regarding practical implications, updated information is presented about the European, national and community environmental legislation affecting firms from eight sectors. In this context, legal environmental requirements are identified so as to facilitate managerial decisions which guarantee compliance with the law and the avoidance of fines.
Environment Government Pollution control and prevention Competitiveness Multivariate statistics
http://www.sciencedirect.com/science/article/B6VCT-4RSJDTH-5/2/6f5db664316e12d68eb3526791891082
López-Gamero, María D.
Claver-Cortés, Enrique
Molina-Azorín, José F.
oai:RePEc:eee:ejores:v:214:y:2011:i:2:p:411-4172014-11-20RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:2:p:411-417
article
Semi-Lagrangean approach for price discovery in markets with non-convexities
From standard economic theory, the market clearing price for a commodity is set where the demand and supply curves intersect. Convexity is a property that economic models require for a competitive equilibrium, which is efficient and well-behaved and provides equilibrium prices. However, some markets present non-convexities, due to their cost structure or to operational constraints that need to be addressed. This is the case for electricity markets, where producers incur costs for shutting down a generating unit and then bringing it back on. Non-convex cost structures can be a challenge for the price discovery process, since the supply and demand curves may not intersect, or, if they do intersect, the price found may not be high enough to cover the total cost of production. We apply a Semi-Lagrangean approach to find a price that can be applied in electricity pool markets, where a central system operator decides who produces and how much they should produce. By applying the model to an example from the literature, we found prices that are high enough to cover the producers' total costs and that follow the optimal solution for achieving minimum production cost. The prices are an alternative solution to the price discovery problem in economies with non-convexities; in addition, they provide nonnegative profits to all the generators without the use of side-payments or up-lifts, and they close the integrality gap.
OR in energy Mathematical programming Non-convexities Lagrangean relaxation
http://www.sciencedirect.com/science/article/pii/S0377221711004140
Araoz, Veronica
Jörnsten, Kurt
oai:RePEc:eee:ejores:v:205:y:2010:i:2:p:486-4872014-11-20RePEc:eee:ejores
RePEc:eee:ejores:v:205:y:2010:i:2:p:486-487
article
Metaheuristics: From Design to Implementation, El-Ghazali Talbi. John Wiley & Sons Inc. (2009). XXI + 593 pp., ISBN 978-0-470-27858-1.
http://www.sciencedirect.com/science/article/B6VCT-4Y4XCPD-1/2/79583a1aebd011181fc3789109a587b0
Teghem, Jacques
oai:RePEc:eee:ejores:v:201:y:2010:i:2:p:505-5192014-11-20RePEc:eee:ejores
RePEc:eee:ejores:v:201:y:2010:i:2:p:505-519
article
Residual income and value creation: An investigation into the lost-capital paradigm
This paper presents a new way of measuring residual income, originally introduced by Magni (2000a,b,c, 2001a,b, 2003). Contrary to the standard residual income, the capital charge is equal to the capital lost by investors multiplied by the cost of capital. The lost capital may be viewed as (a) the foregone capital, (b) the capital implicitly infused into the business, (c) the outstanding capital of a shadow project, (d) the claimholders' credit. Relations of the lost capital with book values and market values are studied, as well as relations of the lost capital residual income with the classical standard paradigm; many appealing properties are derived, among which an aggregation property. Different concepts and results, provided by different authors in such different fields as economic theory, management accounting and corporate finance, are considered: O'Hanlon and Peasnell's (2002) unrecovered capital and Excess Value Created; Ohlson's (2005) Abnormal Earnings Growth; O'Byrne's (1997) EVA improvement; Miller and Modigliani's (1961) investment opportunities approach to valuation; Young and O'Byrne's (2001) Adjusted EVA; Keynes's (1936) user cost; Drukarczyk and Schueler's (2000) Net Economic Income; Fernández's (2002) Created Shareholder Value; Anthony's (1975) profit. They are all conveniently reinterpreted within the theoretical domain of the lost-capital paradigm and conjoined in a unified view. The results found make this new theoretical approach a good candidate for firm valuation, capital budgeting decision-making, managerial incentives and control.
Accounting Corporate finance Residual income Value creation Management Incentive compensation Lost-capital Net Present Value Book value Market value
http://www.sciencedirect.com/science/article/B6VCT-4VWB17S-4/2/ac7fe3d59ca88c5bc4977a298c23983d
Magni, Carlo Alberto
oai:RePEc:eee:ejores:v:207:y:2010:i:1:p:284-2892014-11-20RePEc:eee:ejores
RePEc:eee:ejores:v:207:y:2010:i:1:p:284-289
article
Merging experts' opinions: A Bayesian hierarchical model with mixture of prior distributions
In this paper, a general approach is proposed to address a full Bayesian analysis for the class of quadratic natural exponential families in the presence of several expert sources of prior information. By expressing the opinion of each expert as a conjugate prior distribution, a mixture model is used by the decision maker to arrive at a consensus of the sources. A hyperprior distribution on the mixing parameters is considered and a procedure based on the expected Kullback-Leibler divergence is proposed to analytically calculate the hyperparameter values. Next, the experts' prior beliefs are calibrated with respect to the combined posterior belief over the quantity of interest by using expected Kullback-Leibler divergences, which are estimated with a computationally low-cost method. Finally, the proposed approach can be easily applied in practice, as is shown with an application.
Bayesian analysis Conjugate prior distributions Exponential families Prior mixtures Kullback-Leibler divergence
http://www.sciencedirect.com/science/article/B6VCT-4YV82KB-3/2/9925d9856b9aebdc7764aea5c7f31c6d
Rufo, M.J.
Pérez, C.J.
Martín, J.
oai:RePEc:eee:ejores:v:240:y:2015:i:3:p:791-8062014-11-20RePEc:eee:ejores
RePEc:eee:ejores:v:240:y:2015:i:3:p:791-806
article
Location–allocation approaches for hospital network planning under uncertainty
This study proposes two location–allocation models for handling uncertainty in the strategic planning of hospital networks. The models aim to inform how the hospital networking system may be (re)organized when the decision maker seeks to improve geographical access while minimizing costs. Key features relevant in the design of hospital networks, such as hospitals being multiservice providers operating within a hierarchical structure, are modelled throughout a planning horizon in which network changes may occur. The models hold different assumptions regarding decisions that have to be taken without full information on uncertain parameters and on the recourse decisions which will be made once uncertainty is disclosed. While the first model is in line with previous literature and considers location as first-stage decisions, the second model considers location and allocation as first-stage decisions. Uncertainty associated with demand is modelled through a set of discrete scenarios that illustrate future possible realizations. Both models are applied to a case study based on the Portuguese National Health Service. The results illustrate the information that can be obtained with each model, how models can assist health care planners, and what the consequences of different choices are for decisions taken without complete information. The second model has been shown to be advantageous on the grounds that location–allocation decisions are not scenario dependent, and it appears to be more flexible in handling the planning problem at hand.
OR in health services; Uncertainty modelling; Stochastic planning; Location; Public facility planning;
http://www.sciencedirect.com/science/article/pii/S0377221714005906
Mestre, Ana Maria
Oliveira, Mónica Duarte
Barbosa-Póvoa, Ana Paula
oai:RePEc:eee:ejores:v:213:y:2011:i:1:p:119-1232014-11-20RePEc:eee:ejores
RePEc:eee:ejores:v:213:y:2011:i:1:p:119-123
article
The impossibility of convex constant returns-to-scale production technologies with exogenously fixed factors
The extensions to the variable (VRS) and the constant (CRS) returns-to-scale models developed by Banker and Morey are considered among the main approaches to the incorporation of exogenously fixed factors in models of data envelopment analysis (DEA). Recently, Syrjänen showed that the Banker and Morey CRS technology is not convex. Taking into account that its subset VRS technology is explicitly assumed convex, this observation leads to difficulties with explaining the fundamental production assumptions of the CRS extension. Motivated by the example of Syrjänen, the contribution of this paper is twofold. First, we show that the nonconvex Banker and Morey CRS technology is nevertheless a suitable reference technology for the assessment of scale efficiency. Second, we ask if a convex technology could be constructed that would "correct" the nonconvexity of the CRS technology of Banker and Morey. The answer to this is negative: one consequence of assuming both convexity and ray unboundedness with fixed exogenous factors is that we can always "mix-and-match" discretionary and nondiscretionary factors taken from different units, arriving at totally unrealistic production plans. This demonstrates that generally there exists no meaningful convex CRS technology with exogenously fixed factors that can be used in its own right, apart from its use as a reference technology in the measurement of scale efficiency.
Data envelopment analysis Exogenous factors Discretionary factors Returns to scale
http://www.sciencedirect.com/science/article/B6VCT-52B116T-2/2/575c489e90b059cb0261b1139edb523b
Podinovski, Victor V.
Bouzdine-Chameeva, Tatiana
oai:RePEc:eee:ejores:v:232:y:2014:i:2:p:287-2972014-11-20RePEc:eee:ejores
RePEc:eee:ejores:v:232:y:2014:i:2:p:287-297
article
Adaptive linear combination of heuristic orderings in constructing examination timetables
In this paper, we investigate adaptive linear combinations of graph coloring heuristics with a heuristic modifier to address the examination timetabling problem. We invoke a normalisation strategy for each parameter in order to generalise the specific problem data. Two graph coloring heuristics were used in this study (largest degree and saturation degree). A score for the difficulty of assigning each examination was obtained from an adaptive linear combination of these two heuristics and examinations in the list were ordered based on this value. The examinations with the score value representing the highest difficulty were chosen for scheduling based on two strategies. We tested single and multiple heuristics with and without a heuristic modifier with different combinations of weight values for each parameter on the Toronto and ITC2007 benchmark data sets. We observed that the combination of multiple heuristics with a heuristic modifier offers an effective way to obtain good solution quality. Experimental results demonstrate that our approach delivers promising results. We conclude that this adaptive linear combination of heuristics is a highly effective method and simple to implement.
Examination timetabling; Constructive heuristic; Linear combinations; Graph colouring;
http://www.sciencedirect.com/science/article/pii/S0377221713005596
Abdul Rahman, Syariza
Bargiela, Andrzej
Burke, Edmund K.
Özcan, Ender
McCollum, Barry
McMullan, Paul
oai:RePEc:eee:ejores:v:225:y:2013:i:2:p:236-2432014-11-20RePEc:eee:ejores
RePEc:eee:ejores:v:225:y:2013:i:2:p:236-243
article
Iterated tabu search for the circular open dimension problem
This paper investigates the circular open dimension problem (CODP), which consists of packing a set of circles of known radii into a strip of fixed width and unlimited length without overlapping. The objective is to minimize the length of the strip. In this paper, CODP is solved by a series of sub-problems, each corresponding to a fixed strip length. For each sub-problem, an iterated tabu search approach, named ITS, is proposed. ITS starts from a randomly generated solution and attempts to gain improvements by a tabu search procedure. After that, if the obtained solution is not feasible, a perturbation operator is subsequently employed to reconstruct the incumbent solution and an acceptance criterion is implemented to determine whether or not to accept the perturbed solution. As a supplementary method, the length of the strip is determined in a monotonically decreasing way, with the aid of some post-processing techniques. The search terminates and returns the best found solution once the allowed computation time has elapsed. Computational experiments based on numerous well-known benchmark instances show that ITS produces quite competitive results, with respect to the best known results, while the computational time remains reasonable for each instance.
Packing; Cutting; Tabu search; Perturbation operator; Acceptance criterion;
http://www.sciencedirect.com/science/article/pii/S0377221712007680
Fu, Zhanghua
Huang, Wenqi
Lü, Zhipeng
oai:RePEc:eee:ejores:v:205:y:2010:i:1:p:42-462014-11-20RePEc:eee:ejores
RePEc:eee:ejores:v:205:y:2010:i:1:p:42-46
article
Generalized linear fractional programming under interval uncertainty
Data in many real-life engineering and economical problems suffer from inexactness. Herein we assume that we are given some intervals in which the data can simultaneously and independently vary. We consider a generalized linear fractional programming problem with interval data and present an efficient method for computing the range of optimal values. The method reduces the problem to solving from two to four real-valued generalized linear fractional programs, which can be computed in polynomial time using an appropriate interior point method solver. We also consider the inverse problem: how much can the data of a real generalized linear fractional program vary such that the optimal values do not exceed some prescribed bounds? We propose a method for calculating (often the largest possible) ranges of admissible variations; it needs to solve only two real-valued generalized linear fractional programs. We illustrate the approach on a simple von Neumann economic growth model.
Generalized linear fractional programming Interval analysis Tolerance analysis Sensitivity analysis Economic growth model
http://www.sciencedirect.com/science/article/B6VCT-4Y5GXXG-2/2/3ef3221a2282212d84226d9b028ad75b
Hladík, Milan
oai:RePEc:eee:ejores:v:220:y:2012:i:1:p:251-2612014-11-20RePEc:eee:ejores
RePEc:eee:ejores:v:220:y:2012:i:1:p:251-261
article
A Constraint Programming model for fast optimal stowage of container vessel bays
Container vessel stowage planning is a hard combinatorial optimization problem with both high economic and environmental impact. We have developed an approach that is often able to generate near-optimal plans for large container vessels within a few minutes. It decomposes the problem into a master planning phase that distributes the containers to bay sections and a slot planning phase that assigns containers of each bay section to slots. In this paper, we focus on the slot planning phase of this approach and present a Constraint Programming and Integer Programming model for stowing a set of containers in a single bay section. This so-called slot planning problem is NP-hard and often involves stowing several hundred containers. Using state-of-the-art constraint solvers and modeling techniques, however, we were able to solve 90% of 236 real instances from our industrial collaborator to optimality within 1 second. Thus, somewhat to our surprise, it is possible to solve most of these problems optimally within the time required for practical application.
Container vessel stowage planning; Slot planning; Constraint Programming; Integer Programming;
http://www.sciencedirect.com/science/article/pii/S0377221712000483
Delgado, Alberto
Jensen, Rune Møller
Janstrup, Kira
Rose, Trine Høyer
Andersen, Kent Høj
oai:RePEc:eee:ejores:v:204:y:2010:i:1:p:105-1162014-11-20RePEc:eee:ejores
RePEc:eee:ejores:v:204:y:2010:i:1:p:105-116
article
An exact algorithm for solving large-scale two-stage stochastic mixed-integer problems: Some theoretical and experimental aspects
We present an algorithmic framework, called BFC-TSMIP, for solving two-stage stochastic mixed 0-1 problems. The constraints in the Deterministic Equivalent Model have 0-1 variables and continuous variables at any stage. The approach uses the Twin Node Family (TNF) concept within an adaptation of the algorithmic framework known as Branch-and-Fix Coordination for satisfying the nonanticipativity constraints for the first stage 0-1 variables. Jointly, we solve the mixed 0-1 submodels defined at each TNF integer set for satisfying the nonanticipativity constraints for the first stage continuous variables. In these submodels the only integer variables are the second stage 0-1 variables. A numerical example and some theoretical and computational results are presented to show the performance of the proposed approach.
Stochastic integer programming Nonanticipativity constraints Splitting variables Twin Node Family Branch-and-Fix Coordination
http://www.sciencedirect.com/science/article/B6VCT-4XBG195-1/2/b2415592b957192a1092a84ec628f443
Escudero, L.F.
Garín, M.A.
Merino, M.
Pérez, G.
oai:RePEc:eee:ejores:v:216:y:2012:i:2:p:270-2772014-11-20RePEc:eee:ejores
RePEc:eee:ejores:v:216:y:2012:i:2:p:270-277
article
Metaheuristics for the linear ordering problem with cumulative costs
The linear ordering problem with cumulative costs (LOPCC) is a variant of the well-known linear ordering problem, in which a cumulative propagation makes the objective function highly non-linear. The LOPCC has been recently introduced in the context of mobile-phone telecommunications. In this paper we propose two metaheuristic methods for this NP-hard problem. The first one is based on the GRASP methodology, while the second one implements an Iterated Greedy-Strategic Oscillation procedure. We also propose a post-processing step based on Path Relinking to obtain improved outcomes. We compare our methods with the state-of-the-art procedures on a set of 218 previously reported instances. The comparison favors the Iterated Greedy-Strategic Oscillation with the Path Relinking post-processing, which is able to identify 87 new best objective function values.
Combinatorial optimization; Metaheuristics; Linear ordering problem;
http://www.sciencedirect.com/science/article/pii/S0377221711006680
Duarte, Abraham
Martí, Rafael
Álvarez, Ada
Ángel-Bello, Francisco
oai:RePEc:eee:ejores:v:215:y:2011:i:3:p:713-7202014-11-20RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:3:p:713-720
article
Elective course planning
Efficient planning is increasingly becoming an indispensable tool for the management of both companies and public organizations. This is also the case for high school management in Denmark, because the growing individual freedom of the students to choose courses makes planning much more complex. Due to reforms, elective courses are today an important part of the curriculum, and elective courses are a good way to make high school education more attractive for the students. In this article, the problem of planning the elective courses is modeled using integer programming and three different solution approaches are suggested, including a Branch-and-Price framework using partial Dantzig-Wolfe decomposition. Explicit Constraint Branching is used to enhance the solution process, both on the original IP model and in the Branch-and-Price algorithm. To the best of our knowledge, no exact algorithm for the Elective Course Planning Problem has been described in the literature before. The proposed algorithms are tested on data sets from 98 of the 150 high schools in Denmark. The tests show that for the majority of the problems, the optimal solution can be obtained within the one-hour time bound. Furthermore, the suggested algorithms achieve better results than the currently applied meta-heuristic.
Elective course planning High school planning Integer programming Dantzig-Wolfe decomposition Branch and Price Explicit Constraint Branching
http://www.sciencedirect.com/science/article/pii/S0377221711005686
Kristiansen, Simon
Sørensen, Matias
Stidsen, Thomas R.
oai:RePEc:eee:ejores:v:220:y:2012:i:2:p:414-4222014-11-20RePEc:eee:ejores
RePEc:eee:ejores:v:220:y:2012:i:2:p:414-422
article
Bayesian variable selection in generalized linear models using a combination of stochastic optimization methods
In this paper, the use of a stochastic optimization algorithm as a model search tool is proposed for the Bayesian variable selection problem in generalized linear models. Combining aspects of three well known stochastic optimization algorithms, namely, simulated annealing, genetic algorithm and tabu search, a powerful model search algorithm is produced. After choosing suitable priors, the posterior model probability is used as a criterion function for the algorithm; in cases when it is not analytically tractable, a Laplace approximation is used. The proposed algorithm is illustrated on normal linear and logistic regression models, for simulated and real-life examples, and it is shown that, with a very low computational cost, it achieves improved performance when compared with popular MCMC algorithms, such as the MCMC model composition, as well as with “vanilla” versions of simulated annealing, genetic algorithm and tabu search.
Bayesian variable selection; Genetic algorithm; Laplace approximation; Simulated annealing; Stochastic optimization; Tabu search;
http://www.sciencedirect.com/science/article/pii/S0377221712000781
Fouskakis, D.
oai:RePEc:eee:ejores:v:205:y:2010:i:1:p:1-182014-11-20RePEc:eee:ejores
RePEc:eee:ejores:v:205:y:2010:i:1:p:1-18
article
The hybrid flow shop scheduling problem
The scheduling of flow shops with multiple parallel machines per stage, usually referred to as the hybrid flow shop (HFS), is a complex combinatorial problem encountered in many real world applications. Given its importance and complexity, the HFS problem has been intensively studied. This paper presents a literature review on exact, heuristic and metaheuristic methods that have been proposed for its solution. The paper briefly discusses and reviews several variants of the HFS problem, each in turn considering different assumptions, constraints and objective functions. Research opportunities in HFS are also discussed.
Scheduling Hybrid flow shop Review
http://www.sciencedirect.com/science/article/B6VCT-4XFFJN7-1/2/d6a44e8a526f549d622bfa4d500b22e5
Ruiz, Rubén
Vázquez-Rodríguez, José Antonio
oai:RePEc:eee:ejores:v:201:y:2010:i:2:p:337-3482014-11-20RePEc:eee:ejores
RePEc:eee:ejores:v:201:y:2010:i:2:p:337-348
article
Stock interest rate risk and inflation shocks
In this paper we estimate a measure of the flow-through capability of the firms listed on the Spanish Stock Exchange. The flow-through capability is defined as the ability of firms to transmit inflation shocks to the prices of the products and services sold by the company. According to a strand of the literature, this flow-through capability can to some extent explain the so-called "stock duration paradox", which is the difference between the theoretical stock duration derived from the DDM model and its empirical estimates. The line of reasoning suggests that if a company can pass on inflation shocks to the prices of its own outputs and then to profits and dividends, nominal interest rate changes due to variations in the expected inflation will have a limited impact on stock prices. So in this paper we first estimate the flow-through capability for different industries and find great differences among them. Then we analyse the link between flow-through capability and stock duration and find a significant negative relationship between them, as claimed by part of the literature.
Flow-through capability Stock duration Sectorial analysis
http://www.sciencedirect.com/science/article/B6VCT-4VXMPNK-3/2/b9fa7b79874e9724576db2218ea2a8e6
Jareño, Francisco
Navarro, Eliseo
oai:RePEc:eee:ejores:v:220:y:2012:i:1:p:182-1862014-11-20RePEc:eee:ejores
RePEc:eee:ejores:v:220:y:2012:i:1:p:182-186
article
Linear values for interior operator games
Interior operator games were introduced by Bilbao et al. (2005) as additive games restricted by antimatroids. In that paper several interesting cooperative games were shown as examples of interior operator games. The antimatroid is a known combinatorial structure which represents, in the game theory context, a dependence system among the players. The aim of this paper is to study a family of values which are linear functions and satisfy reasonable conditions for interior operator games. Two classes of these values are considered assuming particular properties.
Cooperative games; Antimatroids; Core; Shapley value;
http://www.sciencedirect.com/science/article/pii/S0377221712000458
Jiménez-Losada, A.
Chacón, C.
Lebrón, E.
oai:RePEc:eee:ejores:v:234:y:2014:i:1:p:25-362014-11-20RePEc:eee:ejores
RePEc:eee:ejores:v:234:y:2014:i:1:p:25-36
article
Solving a bi-objective Transportation Location Routing Problem by metaheuristic algorithms
In this work we consider a Transportation Location Routing Problem (TLRP) that can be seen as an extension of the two stage Location Routing Problem, in which the first stage corresponds to a transportation problem with truck capacity. Two objectives are considered in this research, reduction of distribution cost and balance of workloads for drivers in the routing stage. Here, we present a mathematical formulation for the bi-objective TLRP and propose a new representation for the TLRP based on priorities. This representation lets us manage the problem easily and reduces the computational effort, and it is suitable for use with both local search based and evolutionary approaches. In order to demonstrate its efficiency, it was implemented in two metaheuristic solution algorithms based on the Scatter Tabu Search Procedure for Non-Linear Multiobjective Optimization (SSPMO) and on the Non-dominated Sorting Genetic Algorithm II (NSGA-II) strategies. Computational experiments showed efficient results in solution quality and computing time.
Multiple objective programming; Transportation Location Routing Problem; Logistics; Metaheuristics;
http://www.sciencedirect.com/science/article/pii/S0377221713007443
Martínez-Salazar, Iris Abril
Molina, Julian
Ángel-Bello, Francisco
Gómez, Trinidad
Caballero, Rafael
oai:RePEc:eee:ejores:v:222:y:2012:i:3:p:571-5822014-11-20RePEc:eee:ejores
RePEc:eee:ejores:v:222:y:2012:i:3:p:571-582
article
Patrolling a perimeter
In this paper we study the problem of patrolling a perimeter. The general situation considered here can correspond to different tactical problems and it is studied from the point of view of game theory. To put the ideas in a context we describe it as follows. An intruder seeks to carry out a sabotage on the perimeter of a protected zone. He has to perform the action over n consecutive days and has to position himself each day at one of m strategic points placed on this border. The first day he can take his place at any of the m points, but on successive days he can move only to adjacent points. Furthermore, the perimeter is protected by a patroller, who will select each day one of the m points to inspect. The strategic situation is modeled as a two-person zero-sum game, which is developed on a cyclic set of m points over n time units. We prove some interesting properties of the strategies, solve the game in closed form under certain constraints and obtain bounds for the value of the game in several unsolved cases.
Game theory; Search theory; Search games; Two-person zero-sum game;
http://www.sciencedirect.com/science/article/pii/S0377221712003888
Zoroa, N.
Fernández-Sáez, M.J.
Zoroa, P.
oai:RePEc:eee:ejores:v:213:y:2011:i:1:p:260-2692014-11-20RePEc:eee:ejores
RePEc:eee:ejores:v:213:y:2011:i:1:p:260-269
article
Detecting relevant variables and interactions in supervised classification
The widely used Support Vector Machine (SVM) method has been shown to yield good results in Supervised Classification problems. When interpretability is an important issue, classification methods such as Classification and Regression Trees (CART) might be more attractive, since they are designed to detect the important predictor variables and, for each predictor variable, the critical values which are most relevant for classification. However, when interactions between variables strongly affect the class membership, CART may yield misleading information. Extending previous work of the authors, in this paper an SVM-based method is introduced. The numerical experiments reported show that our method is competitive against SVM and CART in terms of misclassification rates, and, at the same time, is able to detect critical values and variable interactions which are relevant for classification.
Supervised classification Interactions Support vector machines Binarization
http://www.sciencedirect.com/science/article/B6VCT-4YMPX6Y-1/2/9dbf90272ee797d1452171aea43cc4db
Carrizosa, Emilio
Martín-Barragán, Belén
Morales, Dolores Romero
oai:RePEc:eee:ejores:v:212:y:2011:i:2:p:345-3512014-11-20RePEc:eee:ejores
RePEc:eee:ejores:v:212:y:2011:i:2:p:345-351
article
Gradient-based simulation optimization under probability constraints
We study optimization problems subject to possible fatal failures. The probability of failure should not exceed a given confidence level. The distribution of the failure event is assumed unknown, but it can be generated via simulation or observation of historical data. Gradient-based simulation-optimization methods pose the difficulty of the estimation of the gradient of the probability constraint under no knowledge of the distribution. In this work we provide two single-path estimators with bias: a convolution method and a finite-difference method, and we provide a full analysis of convergence of the Arrow-Hurwicz algorithm, which we use as our solver for optimization. Convergence results are used to tune the parameters of the numerical algorithms in order to achieve the best convergence rates, and numerical results are included via an example of application in finance.
Probability constraints Stochastic gradient algorithm Stochastic approximation
http://www.sciencedirect.com/science/article/B6VCT-523M8CC-2/2/45472e2ce333340dd297a212774a9956
Andrieu, Laetitia
Cohen, Guy
Vázquez-Abad, Felisa J.
oai:RePEc:eee:ejores:v:211:y:2011:i:1:p:66-752014-11-20RePEc:eee:ejores
RePEc:eee:ejores:v:211:y:2011:i:1:p:66-75
article
Using intermediate infeasible solutions to approach vehicle routing problems with precedence and loading constraints
Logistics and transportation issues have been receiving increasing attention in recent decades and their requirements have gradually changed, making it necessary to take into account new situations and conditions. The Double Traveling Salesman Problem with Multiple Stacks (DTSPMS) is a pickup and delivery problem in which some additional precedence and loading constraints are imposed on the vehicle to be used. In this paper we approach the problem using intermediate infeasible solutions to diversify the search process and we develop some fixing procedures and infeasibility measures to deal with this kind of solution and take advantage of its potential.
Traveling Salesman Heuristics Infeasible solutions
http://www.sciencedirect.com/science/article/B6VCT-51H1DJM-1/2/88a73e0a09fb722ca1c5fb70b5cfd8c1
Felipe, Angel
Teresa Ortuño, M.
Tirado, Gregorio
oai:RePEc:eee:ejores:v:237:y:2014:i:3:p:1119-11322014-11-20RePEc:eee:ejores
RePEc:eee:ejores:v:237:y:2014:i:3:p:1119-1132
article
Strategic and operational decisions in restaurant revenue management
The paper addresses restaurant revenue management from both a strategic and an operational point of view. Strategic decisions in restaurants are mainly related to defining the most profitable combination of tables that will constitute the restaurant. We propose new formulations of the so-called “Tables Mix Problem” by taking into account several features of the real setting. We compare the proposed models in a computational study showing that restaurants, with the capacity of managing tables as renewable resources and of combining different-sized tables, can improve expected revenue performances. Operational decisions are mainly concerned with the most profitable assignment of tables to customers. Indeed, the “Parties Mix Problem” consists of deciding on accepting or denying a booking request from different groups of customers, with the aim of maximizing the total expected revenue. A dynamic formulation of the “Parties Mix Problem” is presented together with a linear programming approximation, whose solutions can be used to define capacity control policies based on booking limits and bid prices. Computational results compare the proposed policies and show that they lead to higher revenues than the traditional strategies used to support decision makers.
Restaurant revenue management; Dynamic programming; Strategic and operational planning;
http://www.sciencedirect.com/science/article/pii/S0377221714001830
Guerriero, Francesca
Miglionico, Giovanna
Olivito, Filomena
oai:RePEc:eee:ejores:v:189:y:2008:i:3:p:609-6232014-11-20RePEc:eee:ejores
RePEc:eee:ejores:v:189:y:2008:i:3:p:609-623
article
Exact solution procedures for the balanced unidirectional cyclic layout problem
In this paper, we consider the balanced unidirectional cyclic layout problem (BUCLP) arising in the determination of workstation locations around a closed loop conveyor system, in the allocation of cutting tools on the sites around a turret, and in the positioning of stations around a unidirectional single loop AGV path. BUCLP is known to be NP-complete. One important property of this problem is the balanced material flow assumption where the material flow is conserved at every workstation. We first develop a branch-and-bound procedure by using the special material flow property of the problem. Then, we propose a dynamic programming algorithm, which, due to memory limitations, provides optimum solutions for instances with up to 20 workstations. The branch-and-bound procedure can solve problems with up to 50 workstations.
http://www.sciencedirect.com/science/article/B6VCT-4MWGYKB-1/1/5f4a5921e655e04b45d6097a37ebb8cd
Öncan, Temel
Altınel, İ. Kuban
oai:RePEc:eee:ejores:v:218:y:2012:i:1:p:21-272014-11-20RePEc:eee:ejores
RePEc:eee:ejores:v:218:y:2012:i:1:p:21-27
article
A canonical dual approach for solving linearly constrained quadratic programs
This paper provides a canonical dual approach for minimizing a general quadratic function over a set of linear constraints. We first perturb the feasible domain by a quadratic constraint, and then solve a “restricted” canonical dual program of the perturbed problem at each iteration to generate a sequence of feasible solutions of the original problem. The generated sequence is proven to be convergent to a Karush–Kuhn–Tucker point with a strictly decreasing objective value. Some numerical results are provided to illustrate the proposed approach.
Quadratic programming; Global optimization; Canonical duality theory;
http://www.sciencedirect.com/science/article/pii/S0377221711008186
Xing, Wenxun
Fang, Shu-Cherng
Sheu, Ruey-Lin
Wang, Ziteng
oai:RePEc:eee:ejores:v:219:y:2012:i:1:p:59-722014-11-20RePEc:eee:ejores
RePEc:eee:ejores:v:219:y:2012:i:1:p:59-72
article
Time and work generalised precedence relationships in project scheduling with pre-emption: An application to the management of Service Centres
In this paper we present an application of project scheduling concepts and solution procedures for the solution of a complex problem that comes up in the daily management of many company Service Centres. The real problem has been modelled as a multi-mode resource-constrained project scheduling problem with pre-emption, time and work generalised precedence relationships with minimal and maximal time lags between the tasks and due dates. We present a complete study of work GPRs which includes proper definitions, a new notation and all possible conversions amongst them. Computational results that show the efficiency of the proposed hybrid genetic algorithm and the advantages of allowing pre-emption are also presented.
Project scheduling; Pre-emption; Work generalised precedence relationships; Multi-mode; Genetic algorithms; Service Centres;
http://www.sciencedirect.com/science/article/pii/S0377221711010952
Quintanilla, Sacramento
Pérez, Ángeles
Lino, Pilar
Valls, Vicente
oai:RePEc:eee:ejores:v:211:y:2011:i:3:p:630-6412014-11-20RePEc:eee:ejores
RePEc:eee:ejores:v:211:y:2011:i:3:p:630-641
article
Bank productivity and performance groups: A decomposition approach based upon the Luenberger productivity indicator
The purpose of this paper is twofold. First, in the framework of the strategic groups' literature, it analyzes changes in productivity and efficiency of Spanish private and savings banks over an eight-year period (1998-2006). Second, by adapting the decomposition of the Malmquist productivity indices suggested by Färe et al. (1994), it proposes similar components decomposing the Luenberger productivity indicator. Initially, productivity is decomposed into technological and efficiency changes. Thereafter, this efficiency change is decomposed into pure efficiency, scale and congestion changes. Empirical results demonstrate that productivity improvements are partially due to technological innovation. Furthermore, it is shown how the competition between private and savings banks develops in terms of the analyzed productivity and efficiency components. Consequently, the Luenberger components are used as cluster analysis inputs. Thus, economic interpretations of the resulting performance groups are made via key differences in productivity components. Finally, following the strategic groups' literature, supplementary insights are gained by linking these performance groups with banking ratios.
Luenberger decomposition; Productivity; Efficiency; Spanish banking sector; Performance groups; Banking ratios;
http://www.sciencedirect.com/science/article/B6VCT-521NWB4-3/2/ae036942b1b7d3467d8d8ac3ebf93d29
Epure, Mircea
Kerstens, Kristiaan
Prior, Diego
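The first-level decomposition referenced in the abstract above can be sketched in terms of directional distance functions $D^s(x^t, y^t)$ evaluated with period-$s$ technology; this is the standard Chambers-style form the paper adapts, not a reproduction of its derivation:

```latex
% Luenberger productivity indicator between periods t and t+1
L = \tfrac{1}{2}\Big[\big(D^{t}(x^{t},y^{t}) - D^{t}(x^{t+1},y^{t+1})\big)
  + \big(D^{t+1}(x^{t},y^{t}) - D^{t+1}(x^{t+1},y^{t+1})\big)\Big]
% which splits additively into efficiency change and technical change:
EC = D^{t}(x^{t},y^{t}) - D^{t+1}(x^{t+1},y^{t+1})
TC = \tfrac{1}{2}\Big[\big(D^{t+1}(x^{t+1},y^{t+1}) - D^{t}(x^{t+1},y^{t+1})\big)
   + \big(D^{t+1}(x^{t},y^{t}) - D^{t}(x^{t},y^{t})\big)\Big]
% so that  L = EC + TC
```

Expanding $EC + TC$ and cancelling terms recovers $L$, which is the additive analogue of the multiplicative Malmquist decomposition of Färe et al. (1994) cited in the abstract; the paper further splits $EC$ into pure efficiency, scale, and congestion components.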
oai:RePEc:eee:ejores:v:224:y:2013:i:1:p:141-1532014-11-20RePEc:eee:ejores
RePEc:eee:ejores:v:224:y:2013:i:1:p:141-153
article
A technological, organisational, and environmental analysis of decision making methodologies and satisfaction in the context of IT induced business transformations
Although Operational Research (OR) has successfully provided many methodologies to address complex decision problems, in particular those based on the rationality principle, there has been too little discussion regarding their limited consideration in IT evaluation practice and the associated decision making satisfaction levels in an organisational context. The aim of this paper is to address these issues by providing a current account of the diffusion and infusion of OR methodologies in IT decision making practice, and by analysing factors affecting decision making satisfaction within a Technological, Organisational, and Environmental (TOE) framework in the context of IT induced business transformations. We developed a structural equation model and conducted an empirical survey, which supported four of the five research hypotheses. Our results show that while Decision Support Systems (DSSs), holistic IT evaluation methods, and management support seem to positively affect individual satisfaction, legislative regulation has an adverse effect. Results also revealed a persistent methodology diffusion and infusion gap. The paper discusses implications in each of these aspects and presents opportunities for future work.
Decision analysis; Decision satisfaction; TOE framework; Decision making methods; Decision Support Systems; Structural equation modelling;
http://www.sciencedirect.com/science/article/pii/S0377221712005668
Bernroider, Edward W.N.
Schmöllerl, Patrick
oai:RePEc:eee:ejores:v:226:y:2013:i:1:p:163-1722014-11-20RePEc:eee:ejores
RePEc:eee:ejores:v:226:y:2013:i:1:p:163-172
article
Optimal keyword bidding in search-based advertising with target exposure levels
Search-based advertising has become very popular because it offers advertisers the ability to attract potential customers with measurable returns. In this type of advertising, advertisers bid on keywords to influence their ad's placement, which in turn affects the response from potential customers. An advertiser must choose the right keywords and then bid correctly for each keyword in order to maximize the expected revenue or attain a certain level of exposure while keeping daily costs in mind. In response to the increasing need for analytical models that provide guidance to advertisers, we construct and examine deterministic optimization models that minimize total expected advertising costs while satisfying a desired level of exposure. We investigate th