2014-09-16T17:28:05Z
http://oai.repec.openlib.org/oai.php
oai:RePEc:eee:ejores:v:215:y:2011:i:3:p:688-696 2011-09-08 RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:3:p:688-696
article
Interactions between investment timing and management effort under asymmetric information: Costs and benefits of privatized firms
In this paper, we examine the interactions between investment timing and management effort in the presence of asymmetric information between the owner and the manager, where the manager has an informational advantage. We find that investment timing is later under asymmetric information than under full information, implying a decrease in the value of the equity option. However, in order to minimize the distortion caused by underinvestment, management effort is greater under asymmetric information than under full information. We show that there are trade-offs in the efficiencies of investment timing and management effort under asymmetric information. These results fit well with the findings of past empirical studies concerning the costs and benefits of privatized firms.
Investment timing Agency Incentives Privatization
http://www.sciencedirect.com/science/article/pii/S0377221711005546
Shibata, Takashi
Nishihara, Michi
oai:RePEc:eee:ejores:v:215:y:2011:i:2:p:404-410 2011-09-08 RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:2:p:404-410
article
Comparison of combined stochastic risk processes and its applications
In this article we compare combined stochastic risk processes and consider their applications to various fields of relevance. Initially, the problem is formulated in terms of optimal transportation under fatal risks that may cause the failure of the transportation. Various transportation policies are considered, and the problem of determining the optimal policy maximizing the probability of successful transportation is proposed. The problem is then reformulated in the context of reliability modelling under more general settings, and the main results are derived. Applications of the results to many different areas are discussed.
Reliability Applied probability Optimal transportation Shock model Policy ordering
http://www.sciencedirect.com/science/article/pii/S0377221711005066
Cha, Ji Hwan
oai:RePEc:eee:ejores:v:215:y:2011:i:3:p:503-511 2011-09-08 RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:3:p:503-511
article
Mixing OR methods in practice: Past, present and future directions
Although the mixing of OR methods is an area of increasing interest to the OR community, there has been little discussion of the generic lessons that can be learnt from mixing methods in practice. The aim of this paper is to carry out such an analysis by considering generic lessons that may be associated with mixing methods, regardless of the methods chosen. To identify these lessons, published case studies on how OR methods have been mixed are analysed, giving rise to a number of themes that reveal the lessons. These themes include: the implications of using different facilitators/modellers, how methods have been mixed together, the nature of the modelling interventions, the client value, and the rationale given for mixing methods. The paper discusses the lessons learnt in each of these themes and presents opportunities for future work.
Mixing methods Practice of OR Problem structuring
http://www.sciencedirect.com/science/article/pii/S0377221711002244
Howick, Susan
Ackermann, Fran
oai:RePEc:eee:ejores:v:215:y:2011:i:2:p:309-318 2011-09-08 RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:2:p:309-318
article
Recent developments in Dual Resource Constrained (DRC) system research
Real world manufacturing systems are usually constrained by both machine and human resources. Human operators are often the constraining resource and transfer between workstations to process jobs when required. This kind of system is known as a Dual Resource Constrained (DRC) system and presents additional technical challenges which must be considered during planning and scheduling. These technical challenges can be categorised into the five main dimensions of job release mechanisms, job dispatching, worker flexibility, worker assignment and transfer costs. This paper aims to provide an overview of recent developments in DRC research concerned with each of these areas and also discusses some possible approaches to solving the resource scheduling problem in a DRC system. The focus is on materials published after 1995 and up to 2009. Previous reviews on DRC systems are commented on and followed by a review of recent works associated with each of the five dimensions of DRC system research. Advancements made and new methodologies proposed are discussed and future research directions are identified.
Human resources Cross training Workforce scheduling Dispatching Dual Resource Constrained (DRC) systems
http://www.sciencedirect.com/science/article/pii/S0377221711002153
Xu, J.
Xu, X.
Xie, S.Q.
oai:RePEc:eee:ejores:v:215:y:2011:i:2:p:374-382 2011-09-08 RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:2:p:374-382
article
The incremental bullwhip effect of operational deviations in an arborescent supply chain with requirements planning
Lee et al. (1997) advocated the idea of sharing demand and order information among different supply chain entities to mitigate the bullwhip effect. Even with full supply chain visibility afforded by IT systems with requirements planning and with no information distortion, we identify a "core" bullwhip effect inherent to any supply chain because of the underlying demand characteristics and replenishment lead times. In addition, we quantify an incremental bullwhip effect as various operational deviations (inaccurate order placements, batching, lag in sharing demand forecast) contribute incrementally to the variance of the order quantity not only at the node where the deviation is taking place but also at all upstream supply chain nodes. We discuss some managerial implications of our results in the context of a UK manufacturer.
Bullwhip effect Operational deviations Requirements planning MRP Arborescent supply chain Batch size
http://www.sciencedirect.com/science/article/pii/S0377221711005480
Sodhi, ManMohan S.
Tang, Christopher S.
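The mechanics behind such variance amplification can be sketched with a small stdlib-only simulation. This is a generic serial chain with order-up-to policies and moving-average forecasts (a standard textbook setup, not the authors' requirements-planning model); the function name and parameter values are illustrative assumptions.

```python
import random
import statistics

def simulate_bullwhip(periods=2000, stages=3, lead_time=2, window=5, seed=1):
    """Simulate a serial supply chain in which each stage follows an
    order-up-to policy with a moving-average forecast of the demand stream
    it observes.  Returns the variance of each order stream; index 0 is
    the end-customer demand itself, higher indices are upstream nodes."""
    rng = random.Random(seed)
    streams = [[rng.gauss(100, 10) for _ in range(periods)]]
    for _ in range(stages):
        incoming = streams[-1]
        history = list(incoming[:window])
        prev_target = statistics.fmean(history) * (lead_time + 1)
        orders = []
        for d in incoming[window:]:
            history.append(d)
            forecast = statistics.fmean(history[-window:])
            target = forecast * (lead_time + 1)         # order-up-to level
            order = max(0.0, d + target - prev_target)  # replenish + adjust
            prev_target = target
            orders.append(order)
        streams.append(orders)
    return [statistics.variance(s) for s in streams]
```

Running it shows the order variance growing at every upstream stage even with no information distortion, which is the flavor of the "core" effect discussed above.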
oai:RePEc:eee:ejores:v:215:y:2011:i:2:p:459-469 2011-09-08 RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:2:p:459-469
article
An option-based revenue management procedure for strategic airline alliances
An airline has to decide whether to accept an incoming customer request for a seat in the airplane or to reject it in the hope that another customer will request the seat later at a higher price. Capacity control, as one of the instruments of revenue management, provides a solution to this decision problem. In the presence of strategic alliances, capacity control changes. For the case of two airlines in an alliance and a single flight leg, we propose an option-based capacity control process. The booking limits for capacity control are determined with real options. A simulation model is introduced to evaluate the booking process of the partner airlines within the strategic alliance, considering the option-based procedure. In an iterative process, the booking limits are improved with simulation-based optimization. The results of the option-based procedure are compared with the results of the simulation-based optimization, the results of a first-come-first-served (FCFS) approach, and ex post optimal solutions.
Revenue management Strategic alliances Capacity control Real options Simulation-based optimization Stochastic approximation
http://www.sciencedirect.com/science/article/pii/S0377221711005388
Graf, M.
Kimms, A.
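For context, the classical single-leg, two-fare building block of capacity control is Littlewood's rule. A minimal sketch is below; this is the textbook baseline, not the option-based procedure proposed in the paper, and the function name and normal-demand assumption are illustrative.

```python
from math import erf, sqrt

def littlewood_protection(ph, pl, mu, sigma):
    """Littlewood's rule: protect y seats for the high fare class, where
    P(D_high > y) = pl / ph and high-fare demand D_high ~ Normal(mu, sigma).
    Solves for y by bisection on the survival function (stdlib only)."""
    target = pl / ph                       # critical ratio
    def survival(y):                       # P(D_high > y) for a normal
        return 0.5 * (1 - erf((y - mu) / (sigma * sqrt(2))))
    lo, hi = mu - 10 * sigma, mu + 10 * sigma
    for _ in range(100):
        mid = (lo + hi) / 2
        if survival(mid) > target:         # still too likely to exceed mid
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

With ph = 2·pl the critical ratio is 0.5, so the protection level equals the mean of high-fare demand; cheaper low fares push the protection level up, more expensive ones push it down.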
oai:RePEc:eee:ejores:v:215:y:2011:i:2:p:347-357 2011-09-08 RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:2:p:347-357
article
Ordering policy and coordination of a supply chain with two-period demand uncertainty
We develop a two-period game model of a one-manufacturer, one-retailer supply chain to investigate the optimal decisions of the players, where stock-out and holding costs are incorporated into the model. Demand in each period is stochastic, and the price drops sharply in mid-life. We assume the retailer has a single order opportunity and decides how much inventory to keep in the middle of the selling season. We show that both the price-protection, mid-life and end-of-life returns (PME) scheme and the mid-life and end-of-life returns only (ME) scheme may achieve channel coordination and a 'win-win' situation under some conditions. The larger the lowest expected profit of the retailer, the lower the possibility of a 'win-win' situation. Combined with the analysis of feasible regions for coordination policies, we find that the PME scheme is not always better than the ME scheme from the perspective of implementable mechanisms. Finally, we find that adopting the dispose-down-to (DDT) policy can bring a larger improvement in the expected channel profit in the centralized setting; interestingly, under the DDT policy, double marginalization occurs only in Period 1 and does not plague the retailer in Period 2.
Supply chain management Channel coordination Game theory Buyback contract
http://www.sciencedirect.com/science/article/pii/S0377221711005443
Chen, Kebing
Xiao, Tiaojun
oai:RePEc:eee:ejores:v:215:y:2011:i:3:p:750-762 2011-09-08 RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:3:p:750-762
article
Alternative measures of environmental technology structure in DEA: An application
The nonparametric data envelopment analysis (DEA) literature on environmental efficiency (EE) considers handling undesirable outputs in two alternative ways: either in their original forms with the assumption that these are weakly disposable, or in various translated forms with the assumption that these are strongly disposable. Choosing a particular approach implies adopting a particular, distinct treatment of undesirable outputs, and hence yields a distinct set of EE estimates. To explore the effects of the interplay between the choice of EE measure and the specific treatment of undesirable outputs, this paper attempts to generate all possible output-oriented EE measures based on these two alternative approaches. Furthermore, guided by the argument that slacks are important in properly identifying the efficiency behavior of firms, it proposes two new alternative, slacks-based formulations of EE: one based on the range directional model, and the other on the generalized proportional distance function model. Using a constructed data set of ten firms and a real-life data set of 22 OECD countries, our empirical analysis reveals that: first, EE scores are influenced not only by the choice of disposability assumption for undesirable outputs but also by the way these are treated in various translated forms; second, the choice of any particular treatment of undesirable outputs plays no role in influencing the rankings of firms; and third, our two new alternative EE formulations are, at the least, viable alternatives to existing EE measures in ranking firms according to their eco-efficiency behavior.
Data envelopment analysis Environmental technology Environmental efficiency Undesirable outputs Weak disposability
http://www.sciencedirect.com/science/article/pii/S0377221711006199
Sahoo, Biresh K.
Luptacik, Mikulas
Mahlberg, Bernhard
oai:RePEc:eee:ejores:v:215:y:2011:i:3:p:730-739 2011-09-08 RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:3:p:730-739
article
Optimizing referral reward programs under impression management considerations
We examine referral reward programs (RRP) that are intended for a service firm to encourage its current customers (inductors) to entice their friends (inductees) to purchase the firm's service. By considering the interplay among the firm, the inductor, and the inductee, we solve a "nested" Stackelberg game so as to determine the optimal RRP in equilibrium. We determine the conditions under which it is optimal for the firm to reward the inductor only, reward the inductee only, or reward both. Also, our results suggest that RRP dominates direct marketing when the firm's current market penetration or the inductor's referral effectiveness is sufficiently high. We then extend our model to incorporate certain key impression management factors: the inductor's intrinsic reward of making a positive impression by being seen as helping a friend, the inductor's concerns about creating a negative impression when making an incentivized referral, and the inductee's impression of the inductor's credibility when an incentive is involved. In the presence of these impression management factors, we show that the firm should reward the inductee more and the inductor less. Under certain conditions, it is optimal for the firm to reward neither the inductor nor the inductee so that the optimal RRP relies purely on unincentivized word of mouth.
Referral reward programs Stackelberg game Impression management
http://www.sciencedirect.com/science/article/pii/S0377221711004905
Xiao, Ping
Tang, Christopher S.
Wirtz, Jochen
oai:RePEc:eee:ejores:v:215:y:2011:i:3:p:532-538 2011-09-08 RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:3:p:532-538
article
Natural gas bilevel cash-out problem: Convergence of a penalty function method
This paper studies a special bi-level programming problem that arises from the dealings of a Natural Gas Shipping Company and the Pipeline Operator, with facilities of the latter used by the former. Because of the business relationships between these two actors, the timing and objectives of their decision-making processes are different and sometimes even opposed. To model this, previous works have traditionally used bi-level programming. Later, the problem was expanded and theoretically studied to facilitate its solution; this included extension of the upper-level objective function, linear reformulation, heuristic approaches, and branch-and-bound techniques. In this paper, we present a linear programming reformulation of the latest version of the model, which is significantly faster to solve when implemented computationally. More importantly, this new formulation makes it easier to analyze the problem theoretically, allowing us to draw some conclusions about the nature of the solution of the modified problem. Numerical results concerning running time, convergence, and optimal values are presented and compared to previous reports, showing a significant improvement in speed without sacrificing solution quality.
OR in energy Bi-level programming Linearization Penalty method
http://www.sciencedirect.com/science/article/pii/S0377221711006059
Dempe, Stephan
Kalashnikov, Vyacheslav V.
Pérez-Valdés, Gerardo A.
Kalashnykova, Nataliya I.
oai:RePEc:eee:ejores:v:215:y:2011:i:3:p:713-720 2011-09-08 RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:3:p:713-720
article
Elective course planning
Efficient planning is increasingly becoming an indispensable tool for the management of both companies and public organizations. This is also the case for high school management in Denmark, where the growing freedom of students to choose courses makes planning much more complex. Due to reforms, elective courses are today an important part of the curriculum, and they are a good way to make high school education more attractive to students. In this article, the problem of planning elective courses is modeled using integer programming, and three different solution approaches are suggested, including a Branch-and-Price framework using partial Dantzig-Wolfe decomposition. Explicit Constraint Branching is used to enhance the solution process, both on the original IP model and in the Branch-and-Price algorithm. To the best of our knowledge, no exact algorithm for the Elective Course Planning Problem has been described in the literature before. The proposed algorithms are tested on data sets from 98 of the 150 high schools in Denmark. The tests show that for the majority of the problems, the optimal solution can be obtained within the one-hour time bound. Furthermore, the suggested algorithms achieve better results than the currently applied meta-heuristic.
Elective course planning High school planning Integer programming Dantzig-Wolfe decomposition Branch and Price Explicit Constraint Branching
http://www.sciencedirect.com/science/article/pii/S0377221711005686
Kristiansen, Simon
Sørensen, Matias
Stidsen, Thomas R.
oai:RePEc:eee:ejores:v:215:y:2011:i:2:p:481-497 2011-09-08 RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:2:p:481-497
article
A multi-round generalization of the traveling tournament problem and its application to Japanese baseball
In a double round-robin tournament involving n teams, every team plays 2(n - 1) games, with one home game and one away game against each of the other n - 1 teams. Given a symmetric n by n matrix representing the distances between each pair of home cities, the traveling tournament problem (TTP) seeks to construct an optimal schedule that minimizes the sum total of distances traveled by the n teams as they move from city to city, subject to several natural constraints to ensure balance and fairness. In the TTP, the number of rounds is set at r = 2. In this paper, we generalize the TTP to multiple rounds (r = 2k, for any k ≥ 1) and present an algorithm that converts the problem to finding the shortest path in a directed graph, enabling us to apply Dijkstra's Algorithm to generate the optimal multi-round schedule. We apply our shortest-path algorithm to optimize the league schedules for Nippon Professional Baseball (NPB) in Japan, where two leagues of n = 6 teams play 40 sets of three intra-league games over r = 8 rounds. Our optimal schedules for the Pacific and Central Leagues achieve a 25% reduction in total traveling distance compared to the 2010 NPB schedule, implying the potential for considerable savings in terms of time, money, and greenhouse gas emissions.
Scheduling Timetabling Graph theory Perfect matchings One-factorization Traveling tournament problem
http://www.sciencedirect.com/science/article/pii/S0377221711005431
Hoshino, Richard
Kawarabayashi, Ken-ichi
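The shortest-path subroutine that such a conversion relies on can be sketched in a few lines. This is plain Dijkstra over an adjacency dictionary with non-negative weights, not the authors' specific graph construction; the toy graph in the usage note is an illustrative assumption.

```python
import heapq

def dijkstra(graph, source):
    """Dijkstra's algorithm over an adjacency dict {u: [(v, w), ...]}
    with non-negative edge weights w.  Returns a dict mapping each
    reachable node to its shortest distance from `source`."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                        # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist
```

For example, on the small digraph {"s": [("a", 1), ("b", 4)], "a": [("b", 2), ("t", 6)], "b": [("t", 3)]}, the shortest s-to-t distance is 6 via s → a → b → t.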
oai:RePEc:eee:ejores:v:215:y:2011:i:3:p:697-704 2011-09-08 RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:3:p:697-704
article
An aggregate stochastic programming model for air traffic flow management
In this paper, we present an aggregate mathematical model for air traffic flow management (ATFM), a problem of great concern both in Europe and in the United States. The model extends previous approaches by simultaneously taking into account three important issues: (i) the model explicitly incorporates uncertainty in the airport capacities; (ii) it also considers the trade-off between airport arrivals and departures, which is a crucial issue in any hub airport; and (iii) it takes into account the interactions between different hubs. The level of aggregation proposed for the mathematical model allows us to solve realistic size instances with a commercial solver on a PC. Moreover it allows us to compute solutions which are perfectly consistent with the Collaborative Decision-Making (CDM) procedure in ATFM, widely adopted in the USA and which is currently receiving a lot of attention in Europe. In fact, the proposed model suggests the number of flights that should be delayed, a decision that belongs to the ATFM Authority, rather than assigning delays to individual aircraft.
ATFM model Hub and spoke operations Stochastic programming Strategic flow management Decision analysis
http://www.sciencedirect.com/science/article/pii/S0377221711005571
Andreatta, Giovanni
Dell'Olmo, Paolo
Lulli, Guglielmo
oai:RePEc:eee:ejores:v:215:y:2011:i:3:p:512-523 2011-09-08 RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:3:p:512-523
article
A hybrid single and dual population search procedure for the job shop scheduling problem
This paper presents a genetic algorithm and a scatter search procedure to solve the well-known job shop scheduling problem. In contrast to the single-population search performed by the genetic algorithm, the scatter search algorithm splits the population of solutions into a diverse set and a high-quality set to exchange information between individuals in a controlled way. The extension from a single to a dual population, by taking problem-specific characteristics into account, can be seen as a stimulator to add diversity to the search process. This has a positive influence on the important balance between intensification and diversification. Computational experiments verify the benefit of this diversity on the effectiveness of the meta-heuristic search process. Various algorithmic parameters from the literature are embedded in both procedures and a detailed comparison is made. A set of standard instances is used to compare the different approaches, and the best obtained results are benchmarked against heuristic solutions found in the literature.
Job shop scheduling Genetic algorithms Scatter search
http://www.sciencedirect.com/science/article/pii/S0377221711005601
Sels, Veronique
Craeymeersch, Kjeld
Vanhoucke, Mario
oai:RePEc:eee:ejores:v:215:y:2011:i:3:p:539-550 2011-09-08 RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:3:p:539-550
article
A mathematical programming approach to the computation of the omega invariant of a numerical semigroup
In this paper we present a mathematical programming formulation for the ω-invariant of a numerical semigroup at each of its minimal generators, a useful index in commutative algebra (in particular in factorization theory) for analyzing the primality of the elements of the semigroup. The model consists of optimizing a linear function over the efficient set of a multiobjective linear integer program. We offer a methodology to solve this problem and provide some computational experiments to show the efficiency of the proposed algorithm.
Integer programming Multiobjective optimization Optimization over an efficient set Numerical semigroups Factorization theory
http://www.sciencedirect.com/science/article/pii/S0377221711006060
Blanco, Víctor
oai:RePEc:eee:ejores:v:215:y:2011:i:3:p:670-678 2011-09-08 RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:3:p:670-678
article
Resource allocation in dynamic PERT networks with finite capacity
This article models the resource allocation problem in dynamic PERT networks with a finite capacity of concurrent projects (COnstant Number of Projects In Process, CONPIP), where activity durations are independent random variables with exponential distributions and new projects are generated according to a Poisson process. The system is represented as a queuing network with finitely many concurrent projects, where each activity of a project is performed at a dedicated service station with one server located at a node of the network. For modeling dynamic PERT networks with CONPIP, we first convert the network of queues into a stochastic network. Then, by constructing a proper finite-state continuous-time Markov model, a system of differential equations is created to find the completion time distribution for any particular project. Finally, we propose a multi-objective model with three conflicting objectives to optimally control the resources allocated to the servers, and apply the goal attainment method to solve a discrete-time approximation of the original multi-objective problem.
Project management Markov processes Multiple objective programming
http://www.sciencedirect.com/science/article/pii/S0377221711006084
Yaghoubi, Saeed
Noori, Siamak
Azaron, Amir
Tavakkoli-Moghaddam, Reza
oai:RePEc:eee:ejores:v:215:y:2011:i:3:p:563-571 2011-09-08 RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:3:p:563-571
article
An efficient computational method for a stochastic dynamic lot-sizing problem under service-level constraints
We provide an efficient computational approach to solve the mixed integer programming (MIP) model developed by Tarim and Kingsman [8] for solving a stochastic lot-sizing problem with service level constraints under the static-dynamic uncertainty strategy. The effectiveness of the proposed method hinges on three novelties: (i) the proposed relaxation is computationally efficient and provides an optimal solution most of the time, (ii) if the relaxation produces an infeasible solution, then this solution yields a tight lower bound for the optimal cost, and (iii) it can be modified easily to obtain a feasible solution, which yields an upper bound. In case of infeasibility, the relaxation approach is implemented at each node of the search tree in a branch-and-bound procedure to efficiently search for an optimal solution. Extensive numerical tests show that our method dominates the MIP solution approach and can handle real-life size problems in trivial time.
Inventory Relaxation Stochastic non-stationary demand Mixed integer programming Service level Static-dynamic uncertainty
http://www.sciencedirect.com/science/article/pii/S0377221711005637
Tarim, S. Armagan
Dogru, Mustafa K.
Özen, Ulas
Rossi, Roberto
oai:RePEc:eee:ejores:v:215:y:2011:i:2:p:319-324 2011-09-08 RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:2:p:319-324
article
Tight bounds for periodicity theorems on the unbounded Knapsack problem
Three new bounds for periodicity theorems on the unbounded Knapsack problem are developed. Periodicity theorems specify when it is optimal to pack one unit of the best item (the one with the highest profit-to-weight ratio). Successive applications of periodicity theorems can drastically reduce the size of the Knapsack problem under analysis, whether theoretical or empirical. We prove that each new bound is tight in the sense that no smaller bound exists under the given condition.
Combinatorial optimization Integer programming Knapsack problem Number theory Periodicity
http://www.sciencedirect.com/science/article/pii/S037722171100539X
Huang, Ping H.
Lawley, Mark
Morin, Thomas
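For context, the unbounded Knapsack problem itself admits a simple dynamic program, and the periodicity property — that beyond some capacity the optimum grows by exactly one copy of the best-ratio item — can be observed numerically. A minimal sketch on an illustrative instance (not from the paper):

```python
def unbounded_knapsack(capacity, items):
    """Standard DP for the unbounded Knapsack problem.  `items` is a list
    of (profit, weight) pairs with positive integer weights; dp[c] holds
    the best total profit achievable with capacity c, items reusable."""
    dp = [0] * (capacity + 1)
    for c in range(1, capacity + 1):
        for profit, weight in items:
            if weight <= c:
                dp[c] = max(dp[c], dp[c - weight] + profit)
    return dp[capacity]
```

On items [(60, 10), (45, 8), (20, 5)] the best ratio is 6 (the item (60, 10)), and for large capacities the optimum increases by 60 whenever capacity increases by 10 — the periodic behavior the bounds above localize.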
oai:RePEc:eee:ejores:v:215:y:2011:i:3:p:616-628 2011-09-08 RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:3:p:616-628
article
R&D pipeline management: Task interdependencies and risk management
Maintaining a rich research and development (R&D) pipeline is the key to remaining competitive in many industrial sectors. Due to their nature, R&D activities are subject to multiple sources of uncertainty, the modeling of which is compounded by the ability of the decision maker to alter the underlying process. In this paper, we present a multi-stage stochastic programming framework for R&D pipeline management, which demonstrates how essential considerations can be modeled in an efficient manner, including: (i) the selection and scheduling of R&D tasks with general precedence constraints under pass/fail uncertainty, and (ii) resource planning decisions (expansion/contraction and outsourcing) for multiple resource types. Furthermore, we study interdependencies between tasks in terms of probability of success, resource usage, and market impact. Finally, we explore risk management approaches, including novel formulations for value at risk and conditional value at risk.
Stochastic programming Research and development pipeline Project scheduling Resource planning
http://www.sciencedirect.com/science/article/pii/S0377221711005522
Colvin, Matthew
Maravelias, Christos T.
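The two risk measures named at the end can be sketched empirically from scenario outcomes. This is the generic scenario-based definition of VaR/CVaR, not the paper's stochastic-programming formulation, and the quantile convention used is one common choice among several.

```python
def var_cvar(losses, alpha=0.95):
    """Empirical value-at-risk and conditional value-at-risk from a list
    of equiprobable scenario losses: VaR is taken as the alpha-quantile
    of the sorted losses, and CVaR is the mean of the tail at or beyond
    that quantile."""
    s = sorted(losses)
    n = len(s)
    k = min(int(alpha * n), n - 1)  # 0-based index of the alpha-quantile
    tail = s[k:]
    return s[k], sum(tail) / len(tail)
```

For 100 equiprobable scenario losses of 1, 2, …, 100, the 95% VaR under this convention is 96 and the CVaR (mean of 96 through 100) is 98.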
oai:RePEc:eee:ejores:v:215:y:2011:i:2:p:498-501 2011-09-08 RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:2:p:498-501
article
Some clarifications on the DEA clustering approach
This paper clarifies the role of alternative optimal solutions in the clustering of multidimensional observations using data envelopment analysis (DEA). The paper shows that alternative optimal solutions corresponding to several units produce different groups with different sizes and different decision making units (DMUs) at each class. This implies that a specific DMU may be grouped into different clusters when the corresponding DEA model has multiple optimal solutions.
Data envelopment analysis Clustering Alternative optimal solutions
http://www.sciencedirect.com/science/article/pii/S0377221711005996
Amin, Gholam R.
Emrouznejad, Ali
Rezaei, S.
oai:RePEc:eee:ejores:v:215:y:2011:i:2:p:422-430 2011-09-08 RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:2:p:422-430
article
Determining all Nash equilibria in a (bi-linear) inspection game
This paper addresses a "game" between an inspection agency and multiple inspectees that are subject to random inspections by that agency. We provide explicit (easily computable) expressions for all possible Nash equilibria and verify that none is left out. In particular, our results characterize situations when there exists a unique Nash equilibrium. We also explore special features of the Nash equilibria and the solution of the problem the inspection agency faces in a non-strategic environment.
Game theory Nash equilibria Resource allocation Inspection games
http://www.sciencedirect.com/science/article/pii/S0377221711005029
Deutsch, Yael
Golany, Boaz
Rothblum, Uriel G.
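A minimal instance of the indifference logic behind such equilibria is the textbook 2x2 inspection game with a single inspectee; the payoff parameterization below is an illustrative assumption, not the multi-inspectee model of the paper.

```python
def inspection_equilibrium(gain, fine, cost, loss):
    """Mixed Nash equilibrium of a textbook 2x2 inspection game.
    Inspectee: violating yields `gain` if uninspected, -`fine` if caught,
    and 0 when complying.  Inspector: inspecting costs `cost`; an
    undetected violation costs `loss` (assume 0 < cost < loss).
    Each player mixes so as to make the other indifferent:
      P(inspect)  p* = gain / (gain + fine)
      P(violate)  q* = cost / loss
    Returns (p*, q*)."""
    return gain / (gain + fine), cost / loss
```

The characteristic feature — the inspector's probability depends only on the inspectee's payoffs and vice versa — is what makes closed-form expressions like those in the paper possible.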
oai:RePEc:eee:ejores:v:215:y:2011:i:2:p:383-392 2011-09-08 RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:2:p:383-392
article
Discrete and continuous time representations and mathematical models for large production scheduling problems: A case study from the pharmaceutical industry
The underlying time framework used is one of the major differences in the basic structure of mathematical programming formulations used for production scheduling problems. The models are either based on continuous or discrete time representations. In the literature there is no general agreement on which is better or more suitable for different types of production or business environments. In this paper we study a large real-world scheduling problem from a pharmaceutical company. The problem is at least NP-hard and cannot be solved with standard solution methods. We therefore decompose the problem into two parts and compare discrete and continuous time representations for solving the individual parts. Our results show pros and cons of each model. The continuous formulation can be used to solve larger test cases and it is also more accurate for the problem under consideration.
Production Scheduling Mixed integer linear programming Time representations
http://www.sciencedirect.com/science/article/pii/S0377221711005509
Stefansson, Hlynur
Sigmarsdottir, Sigrun
Jensson, Pall
Shah, Nilay
oai:RePEc:eee:ejores:v:215:y:2011:i:2:p:411-421 2011-09-08 RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:2:p:411-421
article
The Banzhaf index in complete and incomplete shareholding structures: A new algorithm
In this global world, many firms have complex shareholding structures with indirect participation, so it may become difficult to identify a firm's controllers. Furthermore, if there are numerous dominant shareholders, control can be shared among them, and determining who has the most influence is often a difficult task. To measure this influence, game theory allows the modeling of voting games and the computation of the Banzhaf index. This paper first offers a new algorithm to compute this index in all such structures and then suggests some models of the floating shareholder. Our model is then applied to a real case study: the French group Lafarge. This exemplary case demonstrates how the float's structure and hidden coalitions can affect the power relationship between dominant shareholders.
Control Game theory Graph theory Ownership structure Banzhaf index
http://www.sciencedirect.com/science/article/pii/S0377221711005364
Levy, Marc
oai:RePEc:eee:ejores:v:215:y:2011:i:2:p:337-346 2011-09-08 RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:2:p:337-346
article
A skyline heuristic for the 2D rectangular packing and strip packing problems
In this paper, we propose a greedy heuristic for the 2D rectangular packing problem (2DRP) that represents packings using a skyline; the use of this heuristic in a simple tabu search approach outperforms the best existing approach for the 2DRP on benchmark test cases. We then make use of this 2DRP approach as a subroutine in an "iterative doubling" binary search on the height of the packing to solve the 2D rectangular strip packing problem (2DSP). This approach outperforms all existing approaches on standard benchmark test cases for the 2DSP.
Cutting and packing Heuristics Tabu search
http://www.sciencedirect.com/science/article/pii/S0377221711005510
Wei, Lijun
Oon, Wee-Chong
Zhu, Wenbin
Lim, Andrew
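The "iterative doubling" binary search on the packing height can be sketched independently of the skyline heuristic itself. Here `can_pack` is a hypothetical stand-in for the tabu-search 2DRP feasibility subroutine, and the area-based check used in the demonstration is only a toy test, not a real packing routine.

```python
def min_strip_height(rectangles, width, can_pack):
    """Minimize strip height by 'iterative doubling' binary search.

    rectangles: list of (w, h) pairs; width: fixed strip width;
    can_pack(rectangles, width, height) -> bool is treated as a
    black-box 2D rectangular packing feasibility routine.
    """
    # Phase 1: double the candidate height until packing succeeds.
    low = max(h for _, h in rectangles)   # trivial lower bound
    high = low
    while not can_pack(rectangles, width, high):
        low, high = high + 1, high * 2
    # Phase 2: ordinary binary search between the last infeasible
    # height and the first feasible one.
    while low < high:
        mid = (low + high) // 2
        if can_pack(rectangles, width, mid):
            high = mid
        else:
            low = mid + 1
    return high

# Toy feasibility test (real use would call the 2DRP heuristic):
area_ok = lambda rects, w, h: sum(rw * rh for rw, rh in rects) <= w * h
print(min_strip_height([(2, 3), (2, 2), (4, 1)], 4, area_ok))  # → 4
```

Doubling first keeps the number of feasibility calls logarithmic in the optimal height even when no good upper bound is known in advance.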
oai:RePEc:eee:ejores:v:215:y:2011:i:3:p:639-6502011-09-08RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:3:p:639-650
article
Strategic investment under uncertainty: A synthesis
Investment is a central theme in economics, finance, and operational research. Traditionally, the focus of analysis has been either on assessing the value of flexibility (investment under uncertainty) or on describing commitment effects in competitive settings (industrial organization). Research contributions addressing the intersection of investment under uncertainty and industrial organization have become numerous in recent years. In this paper, we provide an overview aimed at categorizing and relating these research streams. We highlight managerial insights concerning the nature of competitive advantage (first- versus second-mover advantage), the manner in which information is revealed, firm heterogeneity, capital increment size, and the number of competing firms.
Finance Investment analysis Real options Strategic investment Option games
http://www.sciencedirect.com/science/article/pii/S0377221711004863
Chevalier-Roignant, Benoît
Flath, Christoph M.
Huchzermeier, Arnd
Trigeorgis, Lenos
oai:RePEc:eee:ejores:v:215:y:2011:i:3:p:524-5312011-09-08RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:3:p:524-531
article
Single machine batch scheduling with two competing agents to minimize total flowtime
We study a single machine scheduling problem in which two agents compete for the use of a single processor. Each agent needs to process a set of jobs in order to optimize his objective function. We focus on a two-agent problem in the context of batch scheduling. We assume identical jobs and identical (agent-dependent) setup times. The objective is to minimize the flowtime of one agent subject to an upper bound on the flowtime of the second agent. As in many real-life applications, we restrict ourselves to settings where the batches of the second agent must be processed continuously. Thus, the batch sequence is partitioned into three parts, starting with a sequence of the first agent, followed by a sequence of the second agent, and ending with another sequence of the first agent. In an optimal schedule, all three are shown to be decreasing arithmetic sequences. We introduce an efficient solution algorithm (where n is the total number of jobs).
Two-agent scheduling Batch scheduling Single machine Flowtime
http://www.sciencedirect.com/science/article/pii/S0377221711005662
Mor, Baruch
Mosheiov, Gur
oai:RePEc:eee:ejores:v:215:y:2011:i:3:p:679-6872011-09-08RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:3:p:679-687
article
Generalized Markov models of infectious disease spread: A novel framework for developing dynamic health policies
We propose a class of mathematical models for the transmission of infectious diseases in large populations. This class of models, which generalizes the existing discrete-time Markov chain models of infectious diseases, is compatible with efficient dynamic optimization techniques to assist real-time selection and modification of public health interventions in response to evolving epidemiological situations and changing availability of information and medical resources. While retaining the strength of existing classes of mathematical models in their ability to represent the within-host natural history of disease and between-host transmission dynamics, the proposed models possess two advantages over previous models: (1) these models can be used to generate optimal dynamic health policies for controlling spreads of infectious diseases, and (2) these models are able to approximate the spread of the disease in relatively large populations with a limited state space size and computation time.
Infectious disease models Dynamic health policy Discrete-time Markov chain Dynamic programming Epidemiology
http://www.sciencedirect.com/science/article/pii/S0377221711006187
Yaesoubi, Reza
Cohen, Ted
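The class of models being generalized can be illustrated with a minimal discrete-time stochastic SIR chain; this is the kind of existing Markov chain model the paper extends, not the generalized framework itself, and all parameter values are illustrative.

```python
import random

def sir_chain_step(s, i, r, beta, gamma, rng):
    """One step of a simple discrete-time stochastic SIR chain.

    Each susceptible independently becomes infected with probability
    1 - (1 - beta)**i (beta = per-contact transmission probability);
    each infected recovers with probability gamma per step.
    """
    p_inf = 1.0 - (1.0 - beta) ** i
    new_inf = sum(rng.random() < p_inf for _ in range(s))
    new_rec = sum(rng.random() < gamma for _ in range(i))
    return s - new_inf, i + new_inf - new_rec, r + new_rec

rng = random.Random(0)                  # seeded for reproducibility
state = (95, 5, 0)                      # susceptible, infected, recovered
for _ in range(50):
    state = sir_chain_step(*state, beta=0.01, gamma=0.2, rng=rng)
# Total population is conserved at every step.
```

Tracking only the aggregate counts (s, i, r) keeps the state space small, which is the property the paper exploits for dynamic optimization in large populations.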
oai:RePEc:eee:ejores:v:215:y:2011:i:3:p:721-7292011-09-08RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:3:p:721-729
article
Revenue-maximizing Dutch auctions with discrete bid levels
This paper is concerned with setting a predetermined number of bid levels in a Dutch auction to maximize the auctioneer's expected revenue. As a departure from the traditional methods used by applied economists and game-theorists, a novel approach is taken in this study to tackle the problem by formulating the auctioning process as a constrained nonlinear program and applying standard optimization techniques to solve it. Aside from proposing respective closed-form formulae for computing the optimal bid levels and the auctioneer's maximum expected revenue, we also show that the bid decrements should be increasing if there are two or more bidders in the Dutch auction. Additionally, the auctioneer's maximum expected revenue increases with the number of bidders as well as the number of bid levels. Finally, managerial implications of the key findings as well as limitations of this research work are discussed.
Auctions/bidding Dutch auction Discrete bid Revenue maximization Nonlinear programming
http://www.sciencedirect.com/science/article/pii/S0377221711004875
Li, Zhen
Kuo, Ching-Chung
oai:RePEc:eee:ejores:v:215:y:2011:i:3:p:705-7122011-09-08RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:3:p:705-712
article
Selection of entrepreneurs in the venture capital industry: An asymptotic analysis
We study a model of entrepreneurs who compete in an auction-like setting for venture capital (VC) funding, where limited capital dictates that the VC can finance only the best entrepreneurs. With asymmetric information, VCs can assess entrepreneurs only by the progress of development, which, in equilibrium, reveals the quality of the new technology. Using an asymptotic analysis, we prove that, in attractive industries, having a large number of entrepreneurs competing for VC funding can lead to underinvestment in technology, as the effort exerted by losing entrepreneurs is wasted. The study then characterizes the conditions under which a greater number of competing entrepreneurs is better. The model also demonstrates that VCs may increase their payoff by concentrating on a single industry. In addition, the study provides some insights on the effects of multiple investments by VCs and of competition among VCs on the same investments.
Asymptotic methods Auctions Contests Entrepreneurs Venture capital
http://www.sciencedirect.com/science/article/pii/S0377221711005583
Elitzur, Ramy
Gavious, Arieh
oai:RePEc:eee:ejores:v:215:y:2011:i:3:p:662-6692011-09-08RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:3:p:662-669
article
Guidelines for using variable selection techniques in data envelopment analysis
Model misspecification has significant impacts on data envelopment analysis (DEA) efficiency estimates. This paper discusses the four most widely-used approaches to guide variable specification in DEA. We analyze efficiency contribution measure (ECM), principal component analysis (PCA-DEA), a regression-based test, and bootstrapping for variable selection via Monte Carlo simulations to determine each approach's advantages and disadvantages. For a three input, one output production process, we find that: PCA-DEA performs well with highly correlated inputs (greater than 0.8) and even for small data sets (less than 300 observations); both the regression and ECM approaches perform well under low correlation (less than 0.2) and relatively larger data sets (at least 300 observations); and bootstrapping performs relatively poorly. Bootstrapping requires hours of computational time whereas the three other methods require minutes. Based on the results, we offer guidelines for effectively choosing among the four selection methods.
Data envelopment analysis Model specification Efficiency estimation
http://www.sciencedirect.com/science/article/pii/S0377221711006011
Nataraja, Niranjan R.
Johnson, Andrew L.
oai:RePEc:eee:ejores:v:215:y:2011:i:2:p:393-4032011-09-08RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:2:p:393-403
article
Sudden changes in variance and time varying hedge ratios
This paper analyzes the influence of sudden changes in the unconditional volatility on the estimation and forecast of volatility and its impact on futures hedging strategies. We employ several multivariate GARCH models to estimate the optimal hedge ratios for the Spanish stock market including in each one some well-known patterns that may affect volatility forecasts (asymmetry and sudden changes). The main empirical results show that more complex models including sudden changes in volatility outperform the simpler models in hedging effectiveness both with in-sample and out-of-sample analysis. However, the evidence is stronger when the loss distribution tail is used as a measure for the effectiveness (Value at Risk (VaR) and Expected Shortfall (ES)) suggesting that traditional measures based on the variance of the hedged portfolio should be used with caution.
Finance Hedging effectiveness GARCH Sudden changes in variance
http://www.sciencedirect.com/science/article/pii/S0377221711005030
Aragó, Vicent
Salvador, Enrique
oai:RePEc:eee:ejores:v:215:y:2011:i:3:p:651-6612011-09-08RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:3:p:651-661
article
Short sales in Log-robust portfolio management
This paper extends the Log-robust portfolio management approach to the case with short sales, i.e., the case where the manager can sell shares he does not yet own. We model the continuously compounded rates of return, which have been established in the literature as the true drivers of uncertainty, as uncertain parameters belonging to polyhedral uncertainty sets, and maximize the worst-case portfolio wealth over that set in a one-period setting. The degree of the manager's aversion to ambiguity is incorporated through a single, intuitive parameter, which determines the size of the uncertainty set. The presence of short-selling requires the development of problem-specific techniques, because the optimization problem is not convex. In the case where assets are independent, we show that the robust optimization problem can be solved exactly as a series of linear programming problems; as a result, the approach remains tractable for large numbers of assets. We also provide insights into the structure of the optimal solution. In the case of correlated assets, we develop and test a heuristic where correlation is maintained only between assets invested in. In computational experiments, the proposed approach exhibits superior performance to that of the traditional robust approach.
Robust optimization Nonlinear optimization Portfolio management
http://www.sciencedirect.com/science/article/pii/S0377221711005716
Kawas, Ban
Thiele, Aurélie
oai:RePEc:eee:ejores:v:215:y:2011:i:2:p:470-4802011-09-08RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:2:p:470-480
article
Models and algorithms to improve earthwork operations in road design using mixed integer linear programming
In road construction, earthwork operations account for about 25% of the construction costs. Existing linear programming models for earthwork optimization are designed to minimize the hauling costs and to balance the earth across the construction site. However, these models do not consider the removal of physical blocks that may influence the earthwork process. As such, current models may result in inaccurate estimates of optimal earthwork costs, leading to poor choices in road design. In this research, we extend the classical linear program model of earthwork operations to a mixed integer linear program model that accounts for blocks. We examine the economic impact of incorporating blocks via mixed integer linear programming, and find significant savings for most road designs in our test-set. However, the resulting model is considerably harder to solve than the original linear program. Based on structural observations, we introduce a set of algorithms that theoretically reduce the solving time of the model. We confirm this reduction in solve time with numerical experiments.
Combinatorial optimization Mixed integer linear program OR in road design (natural resources) Earthwork optimization
http://www.sciencedirect.com/science/article/pii/S0377221711005406
Hare, Warren L.
Koch, Valentin R.
Lucet, Yves
oai:RePEc:eee:ejores:v:215:y:2011:i:3:p:604-6152011-09-08RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:3:p:604-615
article
Optimal coalition formation and surplus distribution: Two sides of one coin
A fuzzy coalitional game represents a situation in which players can vary the intensity at which they participate in the coalitions accessible to them, as opposed to the treatment as a binary choice in the non-fuzzy (crisp) game. Building on the property - not made use of so far in the literature of fuzzy games - that a fuzzy game can be represented as a convex program, this paper shows that the optimum of such a program determines the optimal coalitions as well as the optimal rewards for the players, two sides of one coin. Furthermore, this program is seen to provide a unifying framework for representing the core, the least core, and the (fuzzy) nucleolus, among others. Next, we derive conditions for uniqueness of core rewards and to deal with non-uniqueness we introduce a family of parametric perturbations of the convex program that encompasses a large number of well-known concepts for selection from the core, including the Dutta-Ray solution (Dutta and Ray, 1989), the equal sacrifice solution (Yu, 1973), the equal division solution (Selten, 1972) and the tau-value (Tijs, 1981). We also generalize the concept of the Grand Coalition of contracting players by allowing for multiple technologies, and we specify the conditions for this allocation to be unique and Egalitarian. Finally, we show that our formulation offers a natural extension to existing models of production economies with threats and division rules for common surplus.
Game theory Parametric convex optimization Fuzzy core Coalition formation Uniqueness of equilibrium Computation
http://www.sciencedirect.com/science/article/pii/S0377221711005595
Keyzer, Michiel
van Wesenbeeck, Cornelia
oai:RePEc:eee:ejores:v:215:y:2011:i:3:p:629-6382011-09-08RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:3:p:629-638
article
Generating and improving orthogonal designs by using mixed integer programming
Analysts faced with conducting experiments involving quantitative factors have a variety of potential designs in their portfolio. However, in many experimental settings involving discrete-valued factors (particularly if the factors do not all have the same number of levels), none of these designs are suitable. In this paper, we present a mixed integer programming (MIP) method that is suitable for constructing orthogonal designs, or improving existing orthogonal arrays, for experiments involving quantitative factors with limited numbers of levels of interest. Our formulation makes use of a novel linearization of the correlation calculation. The orthogonal designs we construct do not satisfy the definition of an orthogonal array, so we do not advocate their use for qualitative factors. However, they do allow analysts to study, without sacrificing balance or orthogonality, a greater number of quantitative factors than it is possible to do with orthogonal arrays which have the same number of runs.
Orthogonal design creation Design of experiments Statistics
http://www.sciencedirect.com/science/article/pii/S0377221711006072
Vieira Jr., Hélcio
Sanchez, Susan
Kienitz, Karl Heinz
Belderrain, Mischel Carmen Neyra
oai:RePEc:eee:ejores:v:215:y:2011:i:3:p:740-7492011-09-08RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:3:p:740-749
article
On some semilattice structures for production technologies
Tracing back to Charnes et al. [9], many approaches have been proposed to extend the DEA production model to non-convex technologies. The FDH method was introduced by Deprins et al. [13] and assumes only free disposability of the technology. This paper continues earlier work by Briec and Horvath [7]. Among other things, a new class of semilattice production technologies is introduced, and duality results as well as computational issues are presented.
Non-parametric production technology Semilattice -convex sets Inverse -convexity DEA FDH
http://www.sciencedirect.com/science/article/pii/S0377221711005418
Briec, Walter
Liang, Qi Bin
oai:RePEc:eee:ejores:v:215:y:2011:i:3:p:551-5622011-09-08RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:3:p:551-562
article
Managing business dynamics with adaptive supply chain portfolios
Practitioners and scholars readily agree that firms need to frequently adapt their supply chain portfolios to respond to today's rapidly evolving business dynamics. Adapting a well-established supply chain portfolio, however, may involve high costs and expose a firm to unforeseeable risks. In this paper we address this issue. We differentiate business dynamics into product portfolio dynamics and global business dynamics and classify supply chain adaptation as high, medium or low. Building on these classifications, we develop mathematical models to analyze how much supply chain adaptation a firm actually requires to respond to the business dynamics it faces. Our results indicate that supply chain adaptation may indeed be crucial for a firm to retain its competitiveness. The need for it, however, differs widely across firms. For example, a firm faced with product portfolio commoditization may be required to adapt its entire manufacturing footprint, while a firm with a high product turnover rate may not need to adapt its supply chain portfolio at all. Furthermore, the need for supply chain adaptation is not only determined by the business context a firm operates in but can be manipulated by the firm's product portfolio decisions. Finally, we argue that, to exhaust the attainable benefits, a firm should carefully align its supply chain portfolio with the employed supply chain adaptation strategy.
Supply chain management Network design Supply chain portfolio Quantitative model Dynamics
http://www.sciencedirect.com/science/article/pii/S0377221711005558
Seifert, Ralf W.
Langenberg, Kerstin U.
oai:RePEc:eee:ejores:v:215:y:2011:i:2:p:358-3662011-09-08RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:2:p:358-366
article
A study on lead-time discount coordination for deteriorating products
Goods flowing through supply chains usually deteriorate: they may spoil, volatilize, or degenerate over time, causing a decline in their value or quantity. This study focuses on lead-time coordination for supply chains with deteriorating products, which facilitates member cooperation and long-term relationships and thus increases profit for the entire supply chain. A two-level supply chain with a single supplier and a single retailer is considered, in which the product deteriorates in the same manner for both the supplier and the retailer, and shortages are allowed. A lead-time discount coordination strategy is used to maximize the profit of the entire supply chain by appropriately determining the optimal order quantity and lead-time. A numerical example is given, and sensitivity analyses are performed to analyze the influence of various parameters on the overall profit. The results can help managers establish long-term cooperative relationships in supply chains.
Supply chain coordination Deteriorating products Lead-time discount
http://www.sciencedirect.com/science/article/pii/S0377221711005455
Huang, Yeu-Shiang
Su, Wei-Jun
Lin, Zu-Liang
oai:RePEc:eee:ejores:v:215:y:2011:i:3:p:590-6032011-09-08RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:3:p:590-603
article
Collaborative production planning of supply chain under price and demand uncertainty
This research is motivated by an automobile manufacturing supply chain network. It involves a multi-echelon production system with material supply, component fabrication, manufacturing, and final product distribution activities. We address the production planning issue by considering bills of materials and the trade-offs between inventories, production costs and customer service level. Due to its complexity, an integrated solution framework that combines a scatter evolutionary algorithm, fuzzy programming and stochastic chance-constrained programming is developed to jointly address the issue. We conduct a computational study to evaluate the model. Numerical results using the proposed algorithm confirm the advantage of the integrated planning approach: the supply chain profits from the proposed approach consistently exceed those of other solution methodologies, in some cases by up to 13%. The impacts of uncertainty in demand, material price, and other parameters on the performance of the supply chain are studied through sensitivity analysis. We find that the proposed model is effective in developing robust production plans under various market conditions.
Supply chain management Uncertainty modeling Production planning Fuzzy sets Stochastic chance-constrained programming Evolutionary computations
http://www.sciencedirect.com/science/article/pii/S0377221711006096
Zhang, Guoquan
Shang, Jennifer
Li, Wenli
oai:RePEc:eee:ejores:v:215:y:2011:i:3:p:581-5892011-09-08RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:3:p:581-589
article
Price markdown scheme in a multi-echelon supply chain in a high-tech industry
This paper studies the price markdown scheme in a supply chain that consists of a supplier, a contract manufacturer (CM), and a buyer (retailer). The buyer subcontracts the production of the final product to the CM. The CM buys the components from the supplier and charges the buyer a service fee for the final product produced. The price markdown is made possible by the supplier with the development of new manufacturing technologies that reduce the production cost for the sourced component. Consequently, the buyer adjusts the retail price in order to possibly stimulate stronger demand that may benefit both the supplier and the buyer. Under this scenario, we identify the optimal discount pricing strategies, capacity reservation, and the stocking policies for the supplier and the buyer. We also investigate the optimal inventory decision for the CM to cope with the price discount by considering both demand and delivery uncertainties. Our results suggest that higher production cost accelerates the effects of higher price sensitivity on lowering the optimal capacity and stocking policies in the supply chain. The effect of mean demand error on the optimal prices is relatively marginal compared with that from price sensitivity. We also found that increasing the standard deviation of the random demand does not necessarily increase the stocking level as one would predict. The results show that delivery uncertainty plays an important role in the inventory carried beyond the price break. We discuss potential extensions for future research.
Three-echelon supply chain Price markdown Game theory Lead-time demand
http://www.sciencedirect.com/science/article/pii/S0377221711006047
Chung, Wenming
Talluri, Srinivas
Narasimhan, Ram
oai:RePEc:eee:ejores:v:215:y:2011:i:3:p:572-5802011-09-08RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:3:p:572-580
article
A multi-commodity, capacitated pickup and delivery problem: The single and two-vehicle cases
We explore dynamic programming solutions for a multi-commodity, capacitated pickup and delivery problem. Cargo flows are given by an origin/destination matrix which is not necessarily symmetric. This problem is a generalization of several known pickup and delivery problems, as regards both problem structure and objective function. Solution approaches are developed for the single-vehicle and two-vehicle cases. The fact that for each cargo that goes from a node i to another node j there may be a cargo going in the opposite direction provides the motivation for the two-vehicle case, because one may conceivably consider solutions where no cargoes that travel in opposite directions between node pairs are carried by the same vehicle. Yet, it is shown that such scenarios are generally sub-optimal. As expected, the computational effort of the single-vehicle algorithm is exponential in the number of cargoes. For the two-vehicle case, said effort is of an order of magnitude that is not higher than that of the single-vehicle case. Some rudimentary examples are presented for both the single-vehicle and two-vehicle cases so as to better illustrate the method.
Routing Scheduling Pickup and delivery
http://www.sciencedirect.com/science/article/pii/S0377221711005674
Psaraftis, Harilaos N.
oai:RePEc:eee:ejores:v:215:y:2011:i:2:p:439-4452011-09-08RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:2:p:439-445
article
Multi-choice goal programming with utility functions
Goal programming (GP) has been, and still is, the most widely used technique for solving multiple-criteria decision problems and multiple-objective decision problems by finding a set of satisfying solutions. However, the major limitation of goal programming is that it can only use aspiration levels with scalar values for solving multiple objective problems. To overcome this limitation, multi-choice goal programming (MCGP) was proposed by Chang (2007a). Following the idea of MCGP, this study proposes a new concept of level achieving in the utility functions to replace the scalar-valued aspiration levels of classical GP and MCGP for multiple objective problems. With this idea, MCGP with utility functions can be used to solve multi-objective problems. The major contribution of the utility functions in MCGP is that they can serve as measuring instruments that help decision makers select the best or most appropriate policy corresponding to their goals, with the highest level of utility achieved. In addition, these properties can improve the practical utility of MCGP in solving more real-world decision/management problems.
Goal programming Multiple objective decision making Utility function Normalization
http://www.sciencedirect.com/science/article/pii/S0377221711005704
Chang, Ching-Ter
oai:RePEc:eee:ejores:v:215:y:2011:i:2:p:431-4382011-09-08RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:2:p:431-438
article
Ratio-based RTS determination in weight-restricted DEA models
This paper provides computationally efficient approaches for determining to which returns to scale (RTS) class a unit belongs in weight-restricted Data Envelopment Analysis (DEA) models. A non-traditional computational algorithm is introduced. The suggested approach is based on the calculation of certain ratios within the data set and offers obvious computational advantages over the traditional approaches involving the solution of standard DEA models. Some theorems and algorithms are given. Computational advantages of the provided results are discussed and one of the algorithms is illustrated using real world data.
Data Envelopment Analysis (DEA) Returns to scale (RTS) Weight restrictions TIMSS study
http://www.sciencedirect.com/science/article/pii/S0377221711005467
Korhonen, Pekka J.
Soleimani-damaneh, Majid
Wallenius, Jyrki
oai:RePEc:eee:ejores:v:215:y:2011:i:2:p:367-3732011-09-08RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:2:p:367-373
article
Inventory competition for newsvendors under the objective of profit satisficing
Inventory competition for newsvendors (NVs) has been studied extensively under the objective of expected profit maximization which is based on risk neutrality. In this paper, we study this classic problem under the objective of profit satisficing which is based on downside-risk aversion. Consistent with prior literature, we consider two possible scenarios. In the first scenario, each NV's demand depends on the stocking levels of all NVs other than herself. In this scenario, we show that there is a unique Nash equilibrium where all NVs optimally order as if they were independent. In the second scenario, each NV's demand depends on the stocking levels of all NVs including herself. We prove the existence of Nash equilibrium for both additive and multiplicative forms of demands. As a special case, we also study symmetrical NVs under the proportional allocation model. We show that at equilibrium, if the number of NVs exceeds a threshold, the market becomes highly competitive.
Inventory competition Risk aversion Newsvendor Satisficing Game theory
http://www.sciencedirect.com/science/article/pii/S0377221711005479
Shi, Chunming (Victor)
Yang, Shilei
Xia, Yu
Zhao, Xuan
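Under profit satisficing, a newsvendor maximizes the probability of reaching a profit target rather than expected profit. The sketch below is a minimal single-agent illustration, not the competitive game analyzed in the paper; the price, cost, target, and exponential demand distribution are all assumptions made for the example.

```python
import math

def prob_reach_target(q, price, cost, target, demand_cdf):
    """P(profit >= target) for a newsvendor ordering q units.

    profit = price * min(q, D) - cost * q, with no salvage value.
    Profit is nondecreasing in demand D, so the event {profit >=
    target} is {D >= d_min}; d_min <= q exactly when the target is
    reachable at this order quantity.
    """
    if (price - cost) * q < target:
        return 0.0                       # target unreachable at this q
    d_min = (target + cost * q) / price  # demand needed to hit target
    return 1.0 - demand_cdf(d_min)

# Exponential demand with mean 100. The probability of reaching the
# target falls as q grows past target/(price - cost), so the
# satisficing-optimal order is the smallest q that can reach it.
cdf = lambda d: 1.0 - math.exp(-d / 100.0)
probs = {q: prob_reach_target(q, price=10, cost=6, target=200, demand_cdf=cdf)
         for q in (40, 50, 80, 120)}
```

This downside-risk objective explains why, in the paper's first scenario, equilibrium orders can look quite different from the expected-profit-maximizing critical-fractile quantities.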
oai:RePEc:eee:ejores:v:215:y:2011:i:2:p:446-4582011-09-08RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:2:p:446-458
article
Semiconductor final test scheduling with the Sarsa(λ, k) algorithm
The semiconductor test scheduling problem is a variant of the reentrant unrelated parallel machine problem with multiple resource constraints, intricate {product, tester, kit, enabler assembly} eligibility constraints, sequence-dependent setup times, etc. A multi-step reinforcement learning (RL) algorithm called Sarsa(λ, k) is proposed and applied to this scheduling problem with a throughput-related objective. By allowing enabler reconfiguration, the production capacity of the test facility is expanded and scheduling optimization is performed at the bottom level. Two forms of Sarsa(λ, k), i.e. forward-view Sarsa(λ, k) and backward-view Sarsa(λ, k), are constructed and proved equivalent in off-line updating. The upper bound of the error of the action-value function in tabular Sarsa(λ, k) is provided when solving deterministic problems. In order to apply Sarsa(λ, k), the scheduling problem is transformed into an RL problem by representing states and constructing actions, the reward function, and the function approximator. Sarsa(λ, k) achieves smaller mean scheduling objective values than the Industrial Method (IM) by 68.59% and 76.89%, respectively, for real industrial problems and randomly generated test problems. Computational experiments show that Sarsa(λ, k) outperforms IM and any individual action constructed with heuristics derived from existing heuristics or scheduling rules.
Scheduling Semiconductor Reinforcement learning
http://www.sciencedirect.com/science/article/pii/S0377221711005005
Zhang, Zhicong
Zheng, Li
Hou, Forest
Li, Na
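The classical backward-view tabular Sarsa(λ) update with eligibility traces, which the Sarsa(λ, k) family extends with k-step backups, can be sketched as follows; the extension itself is not reproduced here, and the learning-rate, discount, and trace-decay values are illustrative.

```python
def sarsa_lambda_step(Q, E, s, a, r, s2, a2, alpha=0.1, gamma=0.95, lam=0.8):
    """One backward-view tabular Sarsa(lambda) update.

    Q: dict mapping (state, action) -> value estimate.
    E: dict of eligibility traces over the same keys.
    (s, a, r, s2, a2): observed transition, with a2 the action
    actually taken in s2 (on-policy).
    """
    # TD error based on the action actually taken next.
    delta = r + gamma * Q.get((s2, a2), 0.0) - Q.get((s, a), 0.0)
    E[(s, a)] = E.get((s, a), 0.0) + 1.0   # accumulating trace
    # Credit every recently visited pair in proportion to its trace.
    for key in list(E):
        Q[key] = Q.get(key, 0.0) + alpha * delta * E[key]
        E[key] *= gamma * lam              # decay all traces
    return Q, E

# A single update from empty tables: only (s=0, a=0) gains credit.
Q, E = sarsa_lambda_step({}, {}, s=0, a=0, r=1.0, s2=1, a2=0)
```

The traces are what let one TD error update many state-action pairs at once, which is the mechanism the paper's forward/backward-view equivalence result concerns.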
oai:RePEc:eee:ejores:v:215:y:2011:i:2:p:325-3362011-09-08RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:2:p:325-336
article
A lookahead partitioning heuristic for a new assignment and scheduling problem in a distribution system
We introduce a new assignment and scheduling problem in a distribution system, which we refer to as the ASTV problem: Assigning and Scheduling transportation Tasks to Vehicles. In this problem, commodities need to be delivered directly from their origins to their destinations within specified time windows, using a fleet of homogeneous capacitated vehicles. A set of routes, each of which performs one or several direct deliveries, needs to be constructed such that the operational costs, including vehicle fixed cost, variable traveling and variable waiting costs, are minimized. The problem arises, for example, when delivering food products from several factories, where they are manufactured, to several distribution centers, from which they are delivered to the final customers. We define the problem and describe its relationship to existing problems studied in the literature, in particular pickup and delivery, assignment and scheduling problems. Subsequently, we develop a solution method based on decomposing (partitioning) the ASTV problem into two interdependent sub-problems. The first consists of the Assignment of Tasks to origin-destination full-load Trips (ATT), while the second determines the assignment and Scheduling of these Trips to Vehicle routes (STV). We use a bi-criterion objective function in the first problem, whose purpose is to connect the two problems by looking ahead to the rest of the decisions, determined in the second problem. Thus, the solution method is referred to as lookahead partitioning. In this way, decisions of the first problem determine a favorable input for the second problem, which is solved last. An extensive numerical study was conducted to evaluate the performance of the overall heuristic method. The results indicate that our heuristic method is quite efficient.
Heuristics Distribution systems Assignment Routing Scheduling
http://www.sciencedirect.com/science/article/pii/S037722171100542X
Tzur, Michal
Drezner, Ehud
oai:RePEc:eee:ejores:v:214:y:2011:i:1:p:85-902011-08-25RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:1:p:85-90
article
An effective Markov based approach for calculating the Limit Matrix in the analytic network process
The analytic network process (ANP) is a multiple criteria decision analysis (MCDA) method that helps decision makers choose among a number of possible alternatives or prioritize the criteria for a decision by importance. It handles both qualitative and quantitative criteria, which are compared in pairs, in order to forge a best-compromise answer according to the different criteria and influences involved. The method has been widely applied, and the literature review reveals a rising trend of ANP-related articles. The 'power' matrix method, a procedure necessary for the stability of the decision system, is one of the critical calculations in the mathematical part of the method. The present study proposes an alternative mathematical approach based on Markov chain processes and the well-known Gauss-Jordan elimination. The new approach obtains practically the same results as the power matrix method, requires slightly less time and fewer calculations, and effectively handles cyclic supermatrices, thus optimizing the whole procedure.
Markov processes Multiple criteria decision analysis Analytic network process 'Power' matrix method
http://www.sciencedirect.com/science/article/pii/S0377221711002803
Kirytopoulos, Konstantinos
Voulgaridou, Dimitra
Platis, Agapios
Leopoulos, Vrassidas
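The two computational routes contrasted in this abstract can be sketched in miniature (an illustration only, not the authors' implementation): repeatedly powering a column-stochastic supermatrix, versus solving the stationary equations of the associated Markov chain by Gauss-Jordan elimination.

```python
def limit_matrix(W, iters=100):
    """Approximate the ANP limit matrix by repeated powering of the
    column-stochastic supermatrix W (the classical 'power' method)."""
    n = len(W)

    def matmul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
                for i in range(n)]

    P = W
    for _ in range(iters):
        P = matmul(P, W)
    return P


def stationary(W):
    """Markov-chain alternative: solve (W - I) pi = 0 with sum(pi) = 1
    by Gauss-Jordan elimination (small dense sketch; one redundant row
    of W - I is replaced by the normalization equation)."""
    n = len(W)
    A = [[W[i][j] - (1.0 if i == j else 0.0) for j in range(n)] + [0.0]
         for i in range(n)]
    A[-1] = [1.0] * n + [1.0]          # normalization: sum(pi) = 1
    for col in range(n):               # Gauss-Jordan with partial pivoting
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        p = A[col][col]
        A[col] = [x / p for x in A[col]]
        for r in range(n):
            if r != col:
                f = A[r][col]
                A[r] = [x - f * y for x, y in zip(A[r], A[col])]
    return [A[i][n] for i in range(n)]
```

For an irreducible, aperiodic chain every column of the powered matrix converges to the stationary vector, so the two routines agree; the elimination route avoids the iteration entirely, which is the efficiency point made in the abstract.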
oai:RePEc:eee:ejores:v:215:y:2011:i:1:p:89-962011-08-25RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:1:p:89-96
article
Discrete-time, economic lot scheduling problem on multiple, non-identical production lines
This paper deals with a general discrete-time dynamic demand model to solve real-time resource allocation and lot-sizing problems in a multimachine environment. In particular, the problem of apportioning item production to distinct manufacturing lines with different costs (production, setup and inventory) and capabilities is considered. Three models with different cost definitions are introduced, and a set of algorithms able to handle all the problems is developed. The computational results show that the best of the developed approaches is able to handle problems with up to 10,000 binary variables, outperforming general-purpose solvers and other randomized approaches. The gap between the lower and upper bound procedures is within 1.0% after about 500 seconds of CPU time on a 2.66 GHz Intel Core2 PC.
Lot-sizing Scheduling Beam search Matheuristics Resource allocation
http://www.sciencedirect.com/science/article/pii/S0377221711005017
Bollapragada, Ramesh
Croce, Federico Della
Ghirardi, Marco
oai:RePEc:eee:ejores:v:214:y:2011:i:3:p:546-5582011-08-25RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:3:p:546-558
article
Solving a continuous local access network design problem with a stabilized central column generation approach
In this paper, we focus on a variant of the multi-source Weber problem. In the multi-source Weber problem, the location of a fixed number of concentrators, and the allocation of terminals to them, must be chosen to minimize the total cost of links between terminals and concentrators. In our variant, we have a third hierarchical level, two categories of link costs, and the number of concentrators is unknown. To solve this difficult problem, we propose several heuristics, and use a new stabilized column generation approach, based on a central cutting plane method, to provide lower bounds.
Location Combinatorial optimization Column generation Central cutting plane Multi-source Weber problem
http://www.sciencedirect.com/science/article/pii/S0377221711004462
Trampont, M.
Destré, C.
Faye, A.
oai:RePEc:eee:ejores:v:215:y:2011:i:1:p:188-1932011-08-25RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:1:p:188-193
article
Do investors like to diversify? A study of Markowitz preferences
We study rankings of completely and partially diversified portfolios, and also of specialized assets, when investors follow so-called Markowitz preferences. It turns out that diversification strategies for Markowitz investors are more complex than for risk-averse and risk-inclined investors, whose investment strategies have been extensively investigated in the literature. In particular, we observe that for Markowitz investors, preferences toward risk vary depending on their sensitivities toward gains and losses. For example, unlike risk-averse and risk-inclined investors, Markowitz investors might prefer to invest their entire wealth in just one asset. This finding helps us to better understand some financial anomalies and puzzles, such as the well-known diversification puzzle, which notes that some investors tend to concentrate on investing in only a few assets instead of choosing the seemingly more attractive complete diversification.
Portfolio selection Diversified portfolio Markowitz preferences Utility theory Risk aversion
http://www.sciencedirect.com/science/article/pii/S0377221711004590
Egozcue, Martín
García, Luis Fuentes
Wong, Wing-Keung
Zitikis, Ricardas
oai:RePEc:eee:ejores:v:215:y:2011:i:1:p:218-2262011-08-25RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:1:p:218-226
article
A sequential perspective on searching for static targets
We present a sequential approach to detect static targets with imperfect sensors, which range from tower-mounted cameras to satellites. The scenario is operationally relevant to many military, homeland security, search and rescue, environmental engineering, counter-narcotics, and law enforcement applications. The idea is to stop the search as soon as there is enough probabilistic evidence about the targets' locations, given an operator-prescribed error tolerance, knowledge of the sensors' parameters, and a sequence of detection signals from the sensors. By stopping the search as soon as possible, we promote efficiency by freeing up sensors and operators to perform other tasks. The model we develop has the added benefits of decreasing operator workload and providing negative information as a search progresses.
Applied probability OR in military Sequential analysis
http://www.sciencedirect.com/science/article/pii/S0377221711004930
Wilson, Kurt E.
Szechtman, Roberto
Atkinson, Michael P.
oai:RePEc:eee:ejores:v:214:y:2011:i:3:p:559-5672011-08-25RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:3:p:559-567
article
Transportation pricing of a truckload carrier
Freight transportation is a major component of logistical operations. Due to the increase in global trade, fierce competition among shippers and rising concerns about energy, companies are putting more emphasis on the effective management and usage of transportation services. This paper studies the transportation pricing problem of a truckload carrier in a setting that consists of a retailer, a truckload carrier and a less-than-truckload carrier. In this setting, the truckload carrier makes his/her pricing decision based on prior knowledge of the less-than-truckload carrier's price schedule and the retailer's ordering behavior. The retailer then determines his/her order quantity through an integrated model that explicitly considers the transportation alternatives and the related costs (i.e., bimodal transportation costs) and capacities. In the paper, the retailer's replenishment problem and the truckload carrier's pricing problem are modeled and solved based on a detailed analysis. Numerical evidence shows that the truckload carrier may increase his/her earnings significantly through better pricing, and that there are further savings opportunities if the truckload carrier and the retailer coordinate their decisions.
Transportation pricing Supply chain management Truckload carrier Less than truckload carrier Integrated inventory/transportation
http://www.sciencedirect.com/science/article/pii/S0377221711004103
Toptal, Aysegül
Bingöl, Safa Onur
oai:RePEc:eee:ejores:v:214:y:2011:i:3:p:780-7952011-08-25RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:3:p:780-795
article
Sequential clinical scheduling with service criteria
This study investigates sequential appointment scheduling with service criteria. It uses a constraint-based approach with service criteria bounded in a constraint set in contrast to the more typical weighted linear objective function. Properties are derived and a sequential scheduling algorithm is developed. Fairness properties of generated schedules are considered in detail, where fairness is the uniformity of performance across patients. New unfairness measures are proposed and used to capture the inequity among patients assigned to different slots. Other criteria such as expectation and variance of patient waiting time, queue length, and overtime are also considered. The fairness/revenue tradeoff is investigated as is the flexibility of the constraint-based approach in handling unavailable time periods.
Multiple criteria clinical scheduling Moment-based constraints Overbooking Pareto optimal set
http://www.sciencedirect.com/science/article/pii/S0377221711004486
Turkcan, Ayten
Zeng, Bo
Muthuraman, Kumar
Lawley, Mark
oai:RePEc:eee:ejores:v:215:y:2011:i:1:p:39-442011-08-25RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:1:p:39-44
article
Two-agent scheduling to minimize the total cost
Two agents, each having his own set of jobs, compete to perform their own jobs on a common processing resource. Each job of the agents has a weight that specifies its importance. The cost of the first agent is the maximum weighted completion time of his jobs while the cost of the second agent is the total weighted completion time of his jobs. We consider the scheduling problem of determining the sequence of the jobs such that the total cost of the two agents is minimized. We provide a 2-approximation algorithm for the problem, show that the case where the number of jobs of the first agent is fixed is NP-hard, and devise a polynomial time approximation scheme for this case.
Scheduling Multi-agent Approximation algorithm Polynomial time approximation scheme
http://www.sciencedirect.com/science/article/pii/S0377221711004899
Nong, Q.Q.
Cheng, T.C.E.
Ng, C.T.
oai:RePEc:eee:ejores:v:215:y:2011:i:1:p:115-1252011-08-25RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:1:p:115-125
article
Holding costs under push or pull conditions - The impact of the Anchor Point
Holding costs are traditionally determined from the investment in physical stock during a cycle. An alternative approach instead derives holding costs from Net Present Value (NPV) functions. It is known that applying both frameworks to the same system can lead to different holding cost valuations, but little explanation has been offered. By introducing the Anchor Point in a model, this paper shows, for four different systems, that traditional holding cost models (implicitly) assume pull conditions, while current NPV approaches model push conditions. This explains in part the differences between the methods. It is shown that the Anchor Point concept allows the construction of NPV models under pull conditions, giving results in better correspondence with traditional models. The traditional framework is restricted to pull conditions and important considerations could be easily overlooked, leading to wrong valuations of holding costs. NPV seems superior as such considerations are automatically incorporated. The application to multi-echelon inventory systems provides interesting insights on the roles of echelon stocks and lead-times, and offers potential for future research.
Inventory Production Net Present Value Average profit models Multi-echelon inventory theory
http://www.sciencedirect.com/science/article/pii/S037722171100511X
Beullens, Patrick
Janssens, Gerrit K.
oai:RePEc:eee:ejores:v:209:y:2011:i:1:p:73-822011-08-25RePEc:eee:ejores
RePEc:eee:ejores:v:209:y:2011:i:1:p:73-82
article
Strategic disclosure of random variables
We consider a game Gn played by two players. There are n independent random variables Z1, ... , Zn, each of which is uniformly distributed on [0,1]. Both players know n, the independence and the distribution of these random variables, but only player 1 knows the vector of realizations z := (z1, ... , zn) of them. Player 1 begins by choosing an order zk1,...,zkn of the realizations. Player 2, who does not know the realizations, faces a stopping problem. At period 1, player 2 learns zk1. If player 2 accepts, then player 1 pays zk1 euros to player 2 and play ends. Otherwise, if player 2 rejects, play continues similarly at period 2 with player 1 offering zk2 euros to player 2. Play continues until player 2 accepts an offer. If player 2 has rejected n - 1 times, player 2 has to accept the last offer at period n. This model extends Moser's (1956) problem, which assumes a non-strategic player 1. We examine different types of strategies for the players and determine their guarantee-levels. Although we do not find the exact max-min and min-max values of the game Gn in general, we provide an interval In = [an, bn] containing these such that the length of In is at most 0.07 and converges to 0 as n tends to infinity. We also point out strategies, with a relatively simple structure, which guarantee that player 1 has to pay at most bn and player 2 receives at least an. In addition, we completely solve the special case G2 where there are only two random variables. We mention a number of intriguing open questions and conjectures, which may initiate further research on this subject.
Secretary problem Moser's problem Incomplete information Lack of information on one side Optimal strategies
http://www.sciencedirect.com/science/article/B6VCT-50XCY23-1/2/82a724fd6ba491a358b99960350bf087
Flesch, János
Perea, Andrés
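For context, the non-strategic benchmark extended here, Moser's (1956) stopping problem, has a simple threshold solution; the sketch below (illustrative only, not taken from the paper) computes the values V[m] via V[0] = 0, V[m] = (1 + V[m-1]^2)/2, where the searcher accepts an offer with m draws still to come iff it exceeds V[m].

```python
def moser_thresholds(n):
    """Optimal-stopping values for n i.i.d. Uniform[0,1] offers when the
    order is random rather than chosen adversarially: V[0] = 0 and
    V[m] = (1 + V[m-1]**2) / 2.  Accept the current offer with m offers
    remaining iff it exceeds V[m]; V[n] is the optimal expected payoff."""
    V = [0.0]
    for _ in range(n):
        V.append((1.0 + V[-1] ** 2) / 2.0)
    return V
```

With n = 2 this gives V = [0, 0.5, 0.625]: accept the first offer iff it exceeds 0.5, for an expected payoff of 0.625. In the game Gn, where player 1 orders the realizations adversarially, player 2 cannot guarantee more than this non-strategic value.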
oai:RePEc:eee:ejores:v:214:y:2011:i:3:p:512-5252011-08-25RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:3:p:512-525
article
A heuristic for the circle packing problem with a variety of containers
In this paper we present a heuristic algorithm based on the formulation space search method to solve the circle packing problem. The circle packing problem is the problem of finding the maximum radius of a specified number of identical circles that can be fitted, without overlaps, into a two-dimensional container of fixed size. In this paper we consider a variety of containers: the unit circle, unit square, rectangle, isosceles right-angled triangle and semicircle. The problem is formulated as a nonlinear optimization problem involving both Cartesian and polar coordinate systems. Formulation space search consists of switching between different formulations of the same problem, each formulation potentially having different properties in terms of nonlinear optimization. As a component of our heuristic we solve a nonlinear optimization problem using the solver SNOPT. Our heuristic improves on previous results based on formulation space search presented in the literature. For a number of the containers we improve on the best result previously known. Our heuristic is also a computationally effective approach (when balancing quality of result obtained against computation time required) when compared with other work presented in the literature.
Circle packing Formulation space search Metaheuristic
http://www.sciencedirect.com/science/article/pii/S0377221711003687
López, C.O.
Beasley, J.E.
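As a point of reference for the abstract above, the Cartesian-coordinate nonlinear program for packing n identical circles of radius r into the unit square (a standard textbook formulation; the polar-coordinate variants that formulation space search switches between differ) reads:

```latex
\begin{aligned}
\max_{r,\,x,\,y}\quad & r \\
\text{s.t.}\quad & (x_i - x_j)^2 + (y_i - y_j)^2 \ge 4r^2, && 1 \le i < j \le n,\\
& r \le x_i \le 1 - r,\quad r \le y_i \le 1 - r, && 1 \le i \le n.
\end{aligned}
```

Each formulation switch restarts the NLP solver (SNOPT in the paper) on an equivalent but differently conditioned problem, each formulation potentially having different properties for the local solver.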
oai:RePEc:eee:ejores:v:215:y:2011:i:1:p:126-1352011-08-25RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:1:p:126-135
article
An artificial bee colony algorithm for the capacitated vehicle routing problem
This paper introduces an artificial bee colony heuristic for solving the capacitated vehicle routing problem. The artificial bee colony heuristic is a swarm-based heuristic, which mimics the foraging behavior of a honey bee swarm. An enhanced version of the artificial bee colony heuristic is also proposed to improve the solution quality of the original version. The performance of the enhanced heuristic is evaluated on two sets of standard benchmark instances and compared with the original artificial bee colony heuristic. The computational results show that the enhanced heuristic outperforms the original one and can produce good solutions when compared with the existing heuristics. These results seem to indicate that the enhanced heuristic is a viable alternative for solving the capacitated vehicle routing problem.
Routing Artificial bee colony Metaheuristic
http://www.sciencedirect.com/science/article/pii/S0377221711005121
Szeto, W.Y.
Wu, Yongzhong
Ho, Sin C.
oai:RePEc:eee:ejores:v:215:y:2011:i:1:p:80-882011-08-25RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:1:p:80-88
article
An optimal solution technique to the single-vendor multi-buyer integrated inventory supply chain by incorporating some realistic factors
This paper develops two generalized integrated inventory models to deliver a single product from a vendor to multiple buyers. To minimize the total cost of set up, ordering, inventory holding and transportation, the production flow is synchronized by transferring the lot with equal and/or unequal (either all are equal or all are unequal or a combination of equal and unequal) sized batches (sub-lots), each of which incurs a transportation cost. For easy implementation of the models, we relax some unrealistic assumptions in the existing models such as unlimited capacities of the transport equipment and buyers' storage, insignificant set up and transportation times, unlimited lead time and batch sizes. A common optimal solution technique to the models is derived and their performances are analyzed. Potential significances of the solution method are highlighted with solutions of some numerical problems. The importance of the relaxed factors and limitation of the models are discussed.
Synchronization Integrated inventory Constraint Optimal solution
http://www.sciencedirect.com/science/article/pii/S0377221711004619
Hoque, M.A.
oai:RePEc:eee:ejores:v:214:y:2011:i:3:p:627-6432011-08-25RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:3:p:627-643
article
QoS commitment between vertically integrated autonomous systems
Vertically integrated autonomous systems bargain to provide quality of service (QoS) guarantees and revenue sharing. Depending on the perceived quality of service and the access price, consumers determine whether to subscribe to the access provider's service. Four types of contracts are compared: (i) best effort, (ii) bilateral bargaining, (iii) cascade negotiations and (iv) grand coalition cooperation; the impact of the consumers' QoS sensitivity parameter and of the power relation is tested for each contract. Assuming that the consumers' quality of service sensitivity parameter is unknown and might evolve dynamically due to judgement errors, word-of-mouth effects or competitive pressure, a learning algorithm is detailed and implemented by each integrated autonomous system under asymmetric information. Its convergence and the influence of bias introduced by the most informed autonomous system are analyzed.
Bilateral bargaining Supply chain Shapley value Learning
http://www.sciencedirect.com/science/article/pii/S0377221711003870
Le Cadre, Hélène
Barth, Dominique
Pouyllau, Hélia
oai:RePEc:eee:ejores:v:214:y:2011:i:3:p:814-8172011-08-25RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:3:p:814-817
article
Cost evaluation in M/G/1 queue with T-policy revisited, technical note
This note clarifies the concept of the regeneration cycle used in evaluating the average operating cost of the M/G/1 queue with T-policy studied in the 1970s. Two ways of defining the regeneration cycle are compared, and the advantages and disadvantages of each are pointed out. In addition, we establish the convexity of the cost function based on the service cycle.
M/G/1 queue T-policy N-policy Busy cycle Service cycle Convexity
http://www.sciencedirect.com/science/article/pii/S0377221711004917
Zhang, Zhe George
Tadj, Lotfi
Bounkhel, Messaoud
oai:RePEc:eee:ejores:v:214:y:2011:i:3:p:805-8132011-08-25RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:3:p:805-813
article
Enhancing credit default swap valuation with meshfree methods
In this paper, we apply the meshfree radial basis function (RBF) interpolation to numerically approximate zero-coupon bond prices and survival probabilities in order to price credit default swap (CDS) contracts. We assume that the interest rate follows a Cox-Ingersoll-Ross process while the default intensity is described by the Exponential-Vasicek model. Several numerical experiments are conducted to evaluate the approximations by the RBF interpolation for one- and two-factor models. The results are compared with those estimated by the finite difference method (FDM). We find that the RBF interpolation achieves more accurate and computationally efficient results than the FDM. Our results also suggest that the correlation between factors does not have a significant impact on CDS spreads.
Radial basis function interpolation Finite difference methods Default intensity
http://www.sciencedirect.com/science/article/pii/S0377221711004942
Guarin, Alexander
Liu, Xiaoquan
Ng, Wing Lon
oai:RePEc:eee:ejores:v:215:y:2011:i:1:p:70-792011-08-25RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:1:p:70-79
article
A real-time decision rule for an inventory system with committed service time and emergency orders
In this paper, we study the inventory system of an online retailer with compound Poisson demand. The retailer normally replenishes its inventory according to a continuous review (nQ, R) policy with a constant lead time. Usually demands that cannot be satisfied immediately are backordered. We also assume that the customers will accept a reasonable waiting time after they have placed their orders because of the purchasing convenience of the online system. This means that a sufficiently short waiting time incurs no shortage costs. We call this allowed waiting time "committed service time". After this committed service time, if the retailer is still in shortage, the customer demand must either be satisfied with an emergency supply that takes no time (which is financially equivalent to a lost sale) or continue to be backordered with a time-dependent backorder cost. The committed service time gives an online retailer a buffer period to handle excess demands. Based on real-time information concerning the outstanding orders of an online retailer and the waiting times of its customers, we provide a decision rule for emergency orders that minimizes the expected costs under the assumption that no further emergency orders will occur. This decision rule is then used repeatedly as a heuristic. Numerical examples are presented to illustrate the model, together with a discussion of the conditions under which the real-time decision rule provides considerable cost savings compared to traditional systems.
Inventory Partial backordering Emergency orders Committed service time
http://www.sciencedirect.com/science/article/pii/S0377221711004541
Huang, Shuo
Axsäter, Sven
Dou, Yifan
Chen, Jian
oai:RePEc:eee:ejores:v:214:y:2011:i:3:p:656-6642011-08-25RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:3:p:656-664
article
Statistical properties and economic implications of jump-diffusion processes with shot-noise effects
The shot-noise jump-diffusion (SNJD) model aims to reflect how economic variables respond to the arrival of sudden information. This paper analyzes the SNJD model, providing its statistical distribution and closed-form expressions for the characteristic function and moments. We also analyze the dynamics of the model, concluding that the degree of serial autocorrelation is related to the occurrence and magnitude of abnormal information. In addition, we provide some useful approximations in a particular case that considers exponential-type decay. Empirically, we propose a GMM approach to estimate the parameters of the model and present an empirical application for the stocks included in the Dow Jones Industrial Average index. Our findings seem to confirm the presence of shot-noise effects in 73% of the stocks and a strong relationship between the shot-noise process and the autocorrelation pattern embedded in the data.
Finance Shot-noise Characteristic function Generalized method of moments
http://www.sciencedirect.com/science/article/pii/S0377221711004176
Moreno, Manuel
Serrano, Pedro
Stute, Winfried
oai:RePEc:eee:ejores:v:214:y:2011:i:3:p:536-5452011-08-25RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:3:p:536-545
article
Solving the vehicle routing problem with time windows and multiple routes exactly using a pseudo-polynomial model
In this paper, we address a variant of the vehicle routing problem called the vehicle routing problem with time windows and multiple routes. It considers that a given vehicle can be assigned to more than one route per planning period. We propose a new exact algorithm for this problem. Our algorithm is iterative and it relies on a pseudo-polynomial network flow model whose nodes represent time instants, and whose arcs represent feasible vehicle routes. This algorithm was tested on a set of benchmark instances from the literature. The computational results show that our method is able to solve more instances than the only other exact method described so far in the literature, and it clearly outperforms this method in terms of computing time.
Integer programming Combinatorial optimization Routing Network flows
http://www.sciencedirect.com/science/article/pii/S037722171100381X
Macedo, Rita
Alves, Cláudio
Valério de Carvalho, J.M.
Clautiaux, François
Hanafi, Saïd
oai:RePEc:eee:ejores:v:215:y:2011:i:1:p:194-2032011-08-25RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:1:p:194-203
article
A knowledge-based multi-role decision support system for ore blending cost optimization of blast furnaces
The literature illustrates the difficulty of obtaining the lowest-cost solution to the ore blending problem for blast furnaces with the traditional trial-and-error method used in iron and steel enterprises. To solve this problem, we developed a cost optimization model which we have implemented in a multi-role-based decision support system (DSS). On the basis of an analysis of the business flow and working process of ore blending, we propose a DSS architecture built on multiple roles. This DSS pre-processes the data for materials and elements, builds a general database, abstracts the related operations research optimization models and introduces the reasoning mechanism of an expert system. A non-linear model of ore blending for blast furnaces and its solutions are provided. A database, a model base and a knowledge base are integrated into the expert-system-based multi-role DSS to meet the different demands for data, information and decision-making knowledge of the various user roles. A comparison of the results of the DSS and the trial-and-error method is provided. The system has produced excellent economic benefits since it was implemented at the Xiangtan Iron & Steel Group Co. Ltd., China.
Decision support systems Cost optimization Expert system Ore blending
http://www.sciencedirect.com/science/article/pii/S0377221711004620
Zhang, Ruijun
Lu, Jie
Zhang, Guangquan
oai:RePEc:eee:ejores:v:215:y:2011:i:1:p:57-692011-08-25RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:1:p:57-69
article
A two-stage intelligent search algorithm for the two-dimensional strip packing problem
This paper presents a two-stage intelligent search algorithm for the two-dimensional strip packing problem without the guillotine constraint. In the first stage, a heuristic algorithm is proposed, based on a simple scoring rule that selects one rectangle from all rectangles to be packed for a given space. In the second stage, a local search and a simulated annealing algorithm are combined to improve solutions of the problem. In particular, a multi-start strategy is designed to enhance the search capability of the simulated annealing algorithm. Extensive computational experiments on a wide range of benchmark problems, from zero-waste to non-zero-waste instances, are carried out. Computational results obtained in less than 60 seconds of computation time show that the proposed algorithm outperforms recently reported high-performing algorithms on average, and performs particularly well on large instances.
Packing problem Heuristic search Simulated annealing
http://www.sciencedirect.com/science/article/pii/S037722171100508X
Leung, Stephen C.H.
Zhang, Defu
Sim, Kwang Mong
oai:RePEc:eee:ejores:v:214:y:2011:i:3:p:473-4842011-08-25RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:3:p:473-484
article
Full Nesterov-Todd step infeasible interior-point method for symmetric optimization
Euclidean Jordan algebras were proved more than a decade ago to be an indispensable tool in the unified study of interior-point methods. Using this tool, we generalize the full-Newton step infeasible interior-point method for linear optimization of Roos [Roos, C., 2006. A full-Newton step O(n) infeasible interior-point algorithm for linear optimization. SIAM Journal on Optimization 16 (4), 1110-1136 (electronic)] to symmetric optimization. This unifies the analysis for linear, second-order cone and semidefinite optimization.
Interior point methods Conic programming Nesterov-Todd step
http://www.sciencedirect.com/science/article/pii/S0377221711001779
Gu, G.
Zangiabadi, M.
Roos, C.
oai:RePEc:eee:ejores:v:214:y:2011:i:3:p:796-8042011-08-25RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:3:p:796-804
article
Stable solutions for optimal reinsurance problems involving risk measures
The optimal reinsurance problem is a classic topic in actuarial mathematics. Recent approaches consider a coherent or expectation bounded risk measure and minimize the global risk of the ceding company under adequate constraints. However, there is no consensus about the risk measure that the insurer must use, since every risk measure presents advantages and shortcomings when compared with others. This paper deals with a discrete probability space and analyzes the stability of the optimal reinsurance with respect to the risk measure that the insurer uses. We will demonstrate that there is a "stable optimal retention" that will show no sensitivity, insofar as it will solve the optimal reinsurance problem for many risk measures, thus providing a very robust reinsurance plan. This stable optimal retention is a stop-loss contract, and it is easy to compute in practice. A fast linear time algorithm will be given and a numerical example presented.
Optimal reinsurance Risk measure Sensitivity Stable optimal retention Stop-loss reinsurance
http://www.sciencedirect.com/science/article/pii/S0377221711004607
Balbás, Alejandro
Balbás, Beatriz
Heras, Antonio
oai:RePEc:eee:ejores:v:215:y:2011:i:1:p:25-382011-08-25RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:1:p:25-38
article
A polynomial arc-search interior-point algorithm for convex quadratic programming
Arc-search was developed for linear programming in [24] and [25]. The algorithms search for optimizers along an ellipse that approximates the central path. In this paper, the arc-search method is applied to a primal-dual path-following interior-point method for convex quadratic programming. A simple algorithm with polynomial iteration complexity is devised. Several improvements to the simple algorithm, which improve computational efficiency, increase step length, and further reduce the duality gap in every iteration, are then proposed and implemented. It is intuitively clear that iterations with these improvements will reduce the duality gap more than iterations of the simple algorithm without them, though it is hard to show how much these improvements reduce the complexity bound. The proposed algorithm is implemented in MATLAB and tested on quadratic programming problems originating from [13]. The results are compared to those obtained by LOQO [22]. The proposed algorithm uses fewer iterations on all these problems, and the total number of iterations is 27% lower than that of LOQO. This preliminary result shows that the proposed algorithm is promising.
Arc-search Convex quadratic programming Interior-point method Polynomial algorithm
http://www.sciencedirect.com/science/article/pii/S0377221711005492
Yang, Yaguang
oai:RePEc:eee:ejores:v:214:y:2011:i:3:p:674-682 2011-08-25 RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:3:p:674-682
article
The impact of transportation delays on repairshop capacity pooling and spare part inventories
In this paper, we study whether pooling repair shop capacity to serve different fleets of machines at different locations is more cost-effective than having a dedicated on-site repair shop for each fleet. When modeling the pooled alternative, we take transportation delays and related costs into account and represent the system as a closed queueing network. This allows us to include on-site spare-part inventories that operate according to a continuous-review base-stock policy. We obtain the steady-state distribution of components at each location and the cost of the system with a pooled repair shop by applying the Mean-Value Analysis technique. Our numerical findings indicate that when transportation costs are reasonable, repair shop pooling is the better alternative.
Spare parts Multiple finite-population queueing systems Transportation delays Mean-Value Analysis
http://www.sciencedirect.com/science/article/pii/S0377221711004474
Sahba, Pedram
Balcıoğlu, Barış
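Mean-Value Analysis, the technique applied in the abstract, can be sketched in its textbook single-class form. The paper's multi-class finite-population model is more involved; what follows is only the classic exact MVA recursion based on the arrival theorem, with hypothetical service demands:

```python
def mva(demands, n_customers):
    """Exact Mean-Value Analysis for a closed, single-class network of
    FCFS single-server queues. demands[k] is the mean service demand at
    queue k. Returns system throughput and mean queue lengths."""
    q = [0.0] * len(demands)  # mean queue lengths with 0 customers
    throughput = 0.0
    for n in range(1, n_customers + 1):
        # Arrival theorem: an arriving customer in a network with n
        # customers sees the queue lengths of the network with n - 1.
        resid = [d * (1.0 + l) for d, l in zip(demands, q)]
        throughput = n / sum(resid)       # Little's law on the cycle
        q = [throughput * r for r in resid]
    return throughput, q

# Illustrative run: two balanced queues, two circulating customers.
x, lengths = mva([0.5, 0.5], 2)
```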
oai:RePEc:eee:ejores:v:215:y:2011:i:1:p:244-256 2011-08-25 RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:1:p:244-256
article
Centralized and decentralized management of groundwater with multiple users
In this work, we investigate two groundwater inventory management schemes with multiple users in a dynamic game-theoretic setting: (i) under the centralized management scheme, users pump water from a common aquifer under the supervision of a social planner, and (ii) under the decentralized management scheme, each user pumps water from the common aquifer, making usage decisions individually in a non-cooperative fashion. This work is motivated by Saak and Peterson [14], who consider a model with two identical users sharing a common aquifer over a two-period planning horizon. We generalize their model and results in several directions. First, we extend their work to the case of n non-identical users distributed over a common aquifer region. Furthermore, we consider two different geometric configurations of users overlying the aquifer, namely the strip and the ring configurations. For each configuration, general analytical results on the optimal groundwater usage are obtained and numerical examples are discussed for both the centralized and decentralized problems.
OR in natural resources Game theory Water resources management Darcy's Law
http://www.sciencedirect.com/science/article/pii/S0377221711004966
Saleh, Yahya
Gürler, Ülkü
Berk, Emre
oai:RePEc:eee:ejores:v:214:y:2011:i:3:p:768-779 2011-08-25 RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:3:p:768-779
article
An integrated fuzzy simulation-fuzzy data envelopment analysis algorithm for job-shop layout optimization: The case of injection process with ambiguous data
This paper puts forward an integrated fuzzy simulation-fuzzy data envelopment analysis (FSFDEA) algorithm to cope with a special case of the single-row facility layout problem (SRFLP). Discrete-event simulation, a powerful tool for analyzing complex and stochastic systems, is employed for modeling different layout formations. Afterwards, a range-adjusted measure (RAM) is used as the data envelopment analysis (DEA) model for ranking the simulation results and finding the optimal layout design. Due to the ambiguity of the processing times, fuzzy set theory is incorporated into the simulation model. Since the simulation results take the form of possibility distributions, the DEA model is treated on a fuzzy basis; a recent possibilistic programming approach is therefore used to convert the fuzzy DEA model into an equivalent crisp one. The proposed FSFDEA algorithm is capable of modeling and optimizing small-sized SRFLPs in stochastic, uncertain, and non-linear environments. The solution quality is inspected through a real case study in a refrigerator manufacturing company.
Single-row facility layout problem Discrete-event-simulation Data envelopment analysis Fuzzy sets Possibilistic programming
http://www.sciencedirect.com/science/article/pii/S0377221711004401
Azadeh, A.
Moghaddam, M.
Asadzadeh, S.M.
Negahban, A.
oai:RePEc:eee:ejores:v:212:y:2011:i:2:p:374-385 2011-08-25 RePEc:eee:ejores
RePEc:eee:ejores:v:212:y:2011:i:2:p:374-385
article
Evaluating pharmaceutical R&D under technical and economic uncertainty
This study sets up a compound option approach for evaluating pharmaceutical R&D investment projects in the presence of technical and economic uncertainties. Technical uncertainty is modeled as a Poisson jump that allows for failure and thus abandonment of the drug development. Economic uncertainty is modeled as a standard diffusion process which incorporates both up- and downward shocks. Practical application of this method is emphasized through a case analysis. We show that both uncertainties have a positive impact on the R&D option value. Moreover, from the sensitivity analysis, we find that the sensitivity of the option with respect to economic uncertainty and market introduction cost decreases when technical uncertainty increases.
Compound option Jump-diffusion process R&D Pharmaceutical industry
http://www.sciencedirect.com/science/article/B6VCT-523V3CN-1/2/8169aa81d41aff5bf0120235669d79dc
Pennings, Enrico
Sereno, Luigi
oai:RePEc:eee:ejores:v:213:y:2011:i:2:p:442-454 2011-08-25 RePEc:eee:ejores
RePEc:eee:ejores:v:213:y:2011:i:2:p:442-454
article
Climate change and optimal energy technology R&D policy
Public policy response to global climate change presents a classic problem of decision making under uncertainty. Theoretical work has shown that explicitly accounting for uncertainty and learning in climate change can have a large impact on optimal policy, especially technology policy. However, theory also shows that the specific impacts of uncertainty are ambiguous. In this paper, we provide a framework that combines economics and decision analysis to incorporate probabilistic data into energy technology research and development (R&D) policy in response to global climate change. We find that, given a budget constraint, the composition of the optimal R&D portfolio is highly diversified and robust to risk in climate damages. The overall optimal investment in technical change, however, does depend (in a non-monotonic way) on the risk in climate damages. Finally, we show that in order to properly value R&D, abatement must be included as a recourse decision.
R&D portfolio Energy technology Climate change Stochastic programming Public policy
http://www.sciencedirect.com/science/article/pii/S0377221711003080
Baker, Erin
Solak, Senay
oai:RePEc:eee:ejores:v:214:y:2011:i:3:p:588-594 2011-08-25 RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:3:p:588-594
article
Freight transportation in railway networks with automated terminals: A mathematical model and MIP heuristic approaches
In this paper we propose a planning procedure for serving freight transportation requests in a railway network with fast transfer equipment at terminals. We consider a transportation system in which different customers make requests (orders) for moving boxes, i.e., either containers or swap bodies, between different origins and destinations, with specific requirements on delivery times. The decisions to be taken concern the route (and the corresponding sequence of trains) that each box follows in the network and the assignment of boxes to train wagons, taking into account that boxes may change trains more than once and that train timetables are fixed. The planning procedure includes a pre-analysis step to determine all the possible sequences of trains for serving each order, followed by the solution of a 0-1 linear programming problem to find the optimal assignment of each box to a train sequence and to a specific wagon on each train in the sequence. The latter is a generalized assignment problem, which is NP-hard. Hence, in order to find good solutions in acceptable computation times, two MIP heuristic approaches are proposed and tested through an experimental analysis on realistic problem instances.
Freight transportation Optimal planning Mathematical programming MIP heuristics
http://www.sciencedirect.com/science/article/pii/S0377221711004383
Anghinolfi, D.
Paolucci, M.
Sacone, S.
Siri, S.
oai:RePEc:eee:ejores:v:214:y:2011:i:3:p:501-511 2011-08-25 RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:3:p:501-511
article
The multiple container loading cost minimization problem
In the shipping and transportation industry, there are several types of standard containers with different dimensions and different associated costs. In this paper, we examine the multiple container loading cost minimization problem (MCLCMP), whose objective is to load products of various types into containers of various sizes so as to minimize the total cost. We transform the MCLCMP into an extended set cover problem, formulate it as a linear integer program, and solve it with a heuristic that generates columns. Experiments on standard bin-packing instances show our approach is superior to prior approaches. Additionally, since the optimal solutions for existing test data are unknown, we propose a technique to generate test data with known optimal solutions for the MCLCMP.
Packing Heuristics Container loading Integer programming Design of experiments
http://www.sciencedirect.com/science/article/pii/S0377221711003614
Che, Chan Hou
Huang, Weili
Lim, Andrew
Zhu, Wenbin
oai:RePEc:eee:ejores:v:215:y:2011:i:1:p:1-13 2011-08-25 RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:1:p:1-13
article
Lost-sales inventory theory: A review
In classic inventory models it is common to assume that excess demand is backordered. However, studies analyzing customer behavior in practice show that in many retail environments most unfulfilled demand is lost, or an alternative item or location is sought. Inventory systems that include this lost-sales characteristic appear to be more difficult to analyze and to solve. Furthermore, lost-sales inventory systems require different replenishment policies than backorder systems to minimize costs. In this paper, we classify the models in the literature based on the characteristics of the inventory system and review the proposed replenishment policies. For each class and type of replenishment policy we discuss the available models and their performance. Furthermore, directions for future research are proposed.
Inventory Lost sales Replenishment policies Classification of literature
http://www.sciencedirect.com/science/article/pii/S0377221711001354
Bijvank, Marco
Vis, Iris F.A.
oai:RePEc:eee:ejores:v:214:y:2011:i:3:p:749-758 2011-08-25 RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:3:p:749-758
article
A mixed integer linear programming model for optimal sovereign debt issuance
Governments borrow funds to finance the excess of cash payments or interest payments over receipts, usually by issuing fixed income debt and index-linked debt. The goal of this work is to propose a stochastic optimization-based approach to determine the composition of the portfolio issued over a series of government auctions for the fixed income debt, so as to minimize the cost of servicing debt while controlling risk and maintaining market liquidity. We show that this debt issuance problem can be modeled as a mixed integer linear programming problem with a receding horizon. The stochastic model for the interest rates is calibrated using a Kalman filter and the future interest rates are represented using a recombining trinomial lattice for the purpose of scenario-based optimization. The use of a latent factor interest rate model and a recombining lattice provides us with a realistic, yet very tractable scenario generator and allows us to carry out a multi-stage stochastic optimization involving integer variables on an ordinary desktop in a matter of seconds. This, in turn, facilitates frequent re-calibration of the interest rate model and re-optimization of the issuance throughout the budgetary year, allowing us to respond to changes in the interest rate environment. We successfully demonstrate the utility of our approach by out-of-sample back-testing on UK debt issuance data.
Multistage stochastic programming Public debt management OR in government Finance
http://www.sciencedirect.com/science/article/pii/S037722171100378X
Date, P.
Canepa, A.
Abdel-Jawad, M.
oai:RePEc:eee:ejores:v:215:y:2011:i:1:p:227-243 2011-08-25 RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:1:p:227-243
article
SAPI: Statistical Analysis of Propagation of Incidents. A new approach for rescheduling trains after disruptions
In this paper, we present a new approach to the railway rescheduling problem: repairing a disrupted railway timetable after incidents so as to minimize the difference between the original plan and the new provisional plan. We use a mixed integer linear programming (MIP) formulation that models this problem correctly. However, the large number of variables and constraints makes it impractical to solve the problem efficiently with a standard MIP solver. A new approach called SAPI (Statistical Analysis of Propagation of Incidents) has been developed to tackle the problem. The key idea of SAPI is to estimate the probability that an event, one step of the itinerary of a train, is affected by a set of incidents. Using these probabilities, the search space is reduced, yielding very good solutions in a short time. The method has been tested on two different networks, located in France and Chile. The numerical results show that the procedure is viable in practice.
Timetabling Transportation Integer programming Logistic regression Disruption management Railways
http://www.sciencedirect.com/science/article/pii/S0377221711004954
Acuna-Agost, Rodrigo
Michelon, Philippe
Feillet, Dominique
Gueye, Serigne
oai:RePEc:eee:ejores:v:215:y:2011:i:1:p:14-20 2011-08-25 RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:1:p:14-20
article
A global optimization procedure for the location of a median line in the three-dimensional space
A global optimization procedure is proposed to find a line in the Euclidean three-dimensional space which minimizes the sum of distances to a given finite set of three-dimensional data points. Although we use techniques similar to those for location problems in two dimensions, the problem is shown to become much harder to solve. However, a problem parameterization as well as lower bounds are suggested, whereby we succeed in solving medium-size instances in a reasonable amount of computing time.
Global optimization Geometric branch-and-bound methods Line location
http://www.sciencedirect.com/science/article/pii/S0377221711004553
Blanquero, Rafael
Carrizosa, Emilio
Schöbel, Anita
Scholz, Daniel
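The objective this procedure minimizes, the sum of point-to-line distances, is straightforward to evaluate once the line is parameterized. A sketch (the anchor-point/direction parameterization and the function names are chosen here for illustration, not taken from the paper):

```python
import math

def point_line_distance(p, a, d):
    """Euclidean distance from point p to the line {a + t*d : t real},
    computed as ||(p - a) x d|| / ||d||."""
    v = [p[i] - a[i] for i in range(3)]
    cross = (v[1] * d[2] - v[2] * d[1],
             v[2] * d[0] - v[0] * d[2],
             v[0] * d[1] - v[1] * d[0])
    return math.sqrt(sum(c * c for c in cross)) / math.sqrt(sum(x * x for x in d))

def median_line_objective(points, a, d):
    """Sum of distances from the data points to the line: the quantity
    minimized by the median line."""
    return sum(point_line_distance(p, a, d) for p in points)
```

The global optimization difficulty lies in searching over all lines (a, d), not in evaluating this objective.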
oai:RePEc:eee:ejores:v:211:y:2011:i:3:p:525-532 2011-08-25 RePEc:eee:ejores
RePEc:eee:ejores:v:211:y:2011:i:3:p:525-532
article
Design of progressively censored group sampling plans for Weibull distributions: An optimization problem
Optimization algorithms provide efficient solutions to many statistical problems. Essentially, the design of sampling plans for lot acceptance purposes is an optimization problem with several constraints, usually related to the quality levels required by the producer and the consumer. An optimal acceptance sampling plan is developed in this paper for the Weibull distribution with unknown scale parameter. The proposed plan combines grouping of items, sudden death testing in each group and progressive group removals, and its decision criterion is based on the uniformly most powerful life test. A mixed integer programming problem is first solved to determine the minimum number of failures required and the corresponding acceptance constant. The optimal number of groups is then obtained by minimizing a balanced estimate of the expected test cost. Excellent approximately optimal solutions are also provided in closed form. The sampling plan is considerably flexible and saves experimental time and cost. In general, our methodology yields solutions that are quite robust to small variations in the Weibull shape parameter. A numerical example involving a manufacturing process for gyroscopes is included for illustration.
Constrained optimization Minimal expected cost Mixed integer programming Operating characteristic function Producer's and consumer's risks Quality control
http://www.sciencedirect.com/science/article/B6VCT-51N228J-1/2/6fa9ba419e406f67b27b447fe9dd4f10
Fernández, Arturo J.
Pérez-González, Carlos J.
Aslam, Muhammad
Jun, Chi-Hyuck
oai:RePEc:eee:ejores:v:215:y:2011:i:1:p:45-56 2011-08-25 RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:1:p:45-56
article
Scheduling inspired models for two-dimensional packing problems
We propose two exact algorithms for two-dimensional orthogonal packing problems whose main components are simple mixed-integer linear programming models. Based on the different forms of time representation in scheduling formulations, we extend the concept of multiple time grids into a second dimension and propose a hybrid discrete/continuous-space formulation. By relying on events to continuously locate the rectangles along the strip height, we aim to reduce the size of the resulting mathematical problem when compared to a pure discrete-space model, with hopes of achieving a better computational performance. Through the solution of a set of 29 test instances from the literature, we show that this was mostly accomplished, primarily because the associated search strategy can quickly find good feasible solutions prior to the optimum, which may be very important in real industrial environments. We also provide a comprehensive comparison to seven other conceptually different approaches that have solved the same strip packing problems.
Optimization Integer programming Strip packing Resource-Task Network Spatial grids
http://www.sciencedirect.com/science/article/pii/S0377221711005078
Castro, Pedro M.
Oliveira, José F.
oai:RePEc:eee:ejores:v:214:y:2011:i:3:p:568-578 2011-08-25 RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:3:p:568-578
article
Equilibrium analysis of supply chain structures under power imbalance
This paper investigates the implications of channel power on supply chain stability in a setting where multiple suppliers sell substitutable products through a common retailer. Such supply chains have been traditionally analyzed as one- or two-stage Stackelberg non-cooperative games with all suppliers sharing balanced (equal) decision-making power. In this paper, we relax this assumption and formulate game-theoretic models to examine scenarios where one supplier can act as the Stackelberg leader. Consequently, we analyze new supply chain structures and introduce the notion of structure dominance, a novel approach to analyzing the performance of supply chains that has practical implications. Thus, a decision maker can employ the concepts of structure dominance to determine whether there exist supply chain scenarios that are more stable than others, i.e., less prone to power reconfigurations, at both the agent and the group level. We find that power imbalance causes significant declines in supply chain profits and that, when demand is linear, the more balanced the agents are, the higher their profits, regardless of product competition. It turns out that neither the Manufacturer Stackelberg nor the Retailer Stackelberg supply chain is a stable structure in our generalized setting, but structures where power is equally split between agents provide the best stability and performance.
Power dominance Supply chain stability Game theory
http://www.sciencedirect.com/science/article/pii/S0377221711004139
Edirisinghe, N.C.P.
Bichescu, B.
Shi, X.
oai:RePEc:eee:ejores:v:214:y:2011:i:3:p:595-605 2011-08-25 RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:3:p:595-605
article
Fix-and-optimize heuristics for capacitated lot-sizing with sequence-dependent setups and substitutions
In this paper, we consider a capacitated single-level dynamic lot-sizing problem with sequence-dependent setup costs and times that includes product substitution options. The model is motivated from a real-world production planning problem of a manufacturer of plastic sheets used as an interlayer in car windshields. We develop a mixed-integer programming (MIP) formulation of the problem and devise MIP-based Relax&Fix and Fix&Optimize heuristics. Unlike existing literature, we combine Fix&Optimize with a time decomposition. Also, we develop a specialized substitute decomposition and devise a computation budget allocation scheme for ensuring a uniform, efficient usage of computation time by decompositions and their subproblems. Computational experiments were performed on generated instances whose structure follows that of the considered practical application and which have rather tight production capacities. We found that a Fix&Optimize algorithm with an overlapping time decomposition yielded the best solutions. It outperformed the state-of-the-art approach Relax&Fix and all other tested algorithm variants on the considered class of instances, and returned feasible solutions with neither overtime nor backlogging for all instances. It returned solutions that were on average only 5% worse than those returned by a standard MIP solver after 4 hours and 19% better than those of Relax&Fix.
Production Heuristics Integer programming Capacitated lot-sizing and scheduling Sequence-dependent setups Substitutions
http://www.sciencedirect.com/science/article/pii/S0377221711004395
Lang, Jan Christian
Shen, Zuo-Jun Max
oai:RePEc:eee:ejores:v:215:y:2011:i:1:p:289-300 2011-08-25 RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:1:p:289-300
article
Telecom service provider portal: Revenue sharing and outsourcing
Early mobile phones only provided voice transmission, for a fee. They have now evolved into voice and online data portals for providing additional services through 3rd party vendors. These service providers (vendors) are given access to a customer base "owned" by the mobile phone companies, for a fee. Typically customers make two payments: to the mobile phone company for phone services and to the 3rd party vendors for specific services bought from them. Variations to the above business model may involve outsourcing the online portal and/or acquiring customers from other independent portals. For these scenarios, we study how the fees for phone service and customer access are established and how they may relate to the prices of vendor services, and which services should be located on the portal - all in a game-theoretic context. Our results prove that it is possible to reorganize revenue flows through an invoicing process that may benefit the mobile network operator more than the other parties. In addition, we establish optimality in terms of the number of vendors on the portal, and determine a rank-ordering of vendors for their inclusion into the portal.
E-commerce Non-cooperative games Pricing Telecommunications
http://www.sciencedirect.com/science/article/pii/S0377221711003869
Chakravarty, Amiya K.
Werner, Adrian S.
oai:RePEc:eee:ejores:v:215:y:2011:i:1:p:257-267 2011-08-25 RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:1:p:257-267
article
Dynamic lot-sizing in sequential online retail auctions
Retailers often conduct non-overlapping sequential online auctions as a revenue generation and inventory clearing tool. We build a stochastic dynamic programming model for the seller's lot-size decision problem in these auctions. The model incorporates a random number of participating bidders in each auction, allows for any bid distribution, and is not restricted to any specific price-determination mechanism. Using stochastic monotonicity/stochastic concavity and supermodularity arguments, we present a complete structural characterization of optimal lot-sizing policies under a second order condition on the single-auction expected revenue function. We show that a monotone staircase with unit jumps policy is optimal and provide a simple inequality to determine the locations of these staircase jumps. Our analytical examples demonstrate that the second order condition is met in common online auction mechanisms. We also present numerical experiments and sensitivity analyses using real online auction data.
Auctions/bidding Dynamic programming e-Commerce
http://www.sciencedirect.com/science/article/pii/S0377221711004991
Chen, Xi
Ghate, Archis
Tripathi, Arvind
oai:RePEc:eee:ejores:v:214:y:2011:i:3:p:759-767 2011-08-25 RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:3:p:759-767
article
Portfolio symmetry and momentum
This paper presents a novel theoretical framework to model the evolution of a dynamic portfolio (i.e., a portfolio whose weights vary over time) under a given investment policy. The framework is based on graph theory and quantum probability. Embedding the dynamics of a portfolio into a graph, with each node of the graph representing a plausible portfolio, we provide the probabilities for a dynamic portfolio to lie on different nodes of the graph, characterizing its optimality in terms of returns. The framework embeds cross-sectional phenomena, such as the momentum effect, in stochastic processes, using portfolios instead of individual stocks. We apply our methodology to an investment policy similar to the momentum strategy of Jegadeesh and Titman (1993). We find that the strategy's symmetry is a source of momentum.
(P) Finance Graph theory Momentum Quantum probability Spectral analysis
http://www.sciencedirect.com/science/article/pii/S0377221711004188
Billio, Monica
Calès, Ludovic
Guégan, Dominique
oai:RePEc:eee:ejores:v:204:y:2010:i:3:p:496-504 2011-08-25 RePEc:eee:ejores
RePEc:eee:ejores:v:204:y:2010:i:3:p:496-504
article
Inequalities for the ruin probability in a controlled discrete-time risk process
Ruin probabilities are studied in a controlled discrete-time risk process with interest rates driven by a Markov chain. To reduce the risk of ruin, there is the possibility to reinsure a part or the whole of the reserve. Recursive and integral equations for the ruin probabilities are given. Generalized Lundberg inequalities for the ruin probabilities are derived under a given constant stationary policy. The relationships between these inequalities are discussed. Some numerical examples are included to illustrate these results.
Risk process Ruin probability Proportional reinsurance Lundberg's inequality
http://www.sciencedirect.com/science/article/B6VCT-4XSJVN5-1/2/f91c6504f68fe0606cae1dfbc4c676ea
Diasparra, M.
Romera, R.
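The classical Lundberg bound that the paper generalizes can be stated as follows (a textbook sketch in standard notation, not necessarily the paper's symbols):

```latex
% Classical discrete-time Lundberg inequality: for initial reserve $u \ge 0$,
% the ruin probability satisfies
\psi(u) \le e^{-R u},
% where the adjustment coefficient $R > 0$ is the positive root of
\mathbb{E}\!\left[ e^{R (Z - c)} \right] = 1,
% with $Z$ the aggregate claim per period and $c$ the premium income per period.
```

The paper's generalized inequalities extend this exponential bound to the setting with Markov-chain interest rates and reinsurance control.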
oai:RePEc:eee:ejores:v:214:y:2011:i:3:p:722-731 2011-08-25 RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:3:p:722-731
article
Analyzing online B2B exchange markets: Asymmetric cost and incomplete information
This research applies the discriminating auction to analyze an online B2B exchange market in which a single buyer requests multiple items and several suppliers with equal capacity and asymmetric costs submit bids to compete for the buyer's demand. In the present model, we examine the impact of asymmetric cost and incomplete information on the participants in the market. Given complete cost information, each supplier randomizes its price, and the lower bound of the price range is determined by the highest marginal cost. In addition, the supplier with a lower marginal cost has a larger considered pricing space but ultimately a smaller equilibrium one than suppliers with higher marginal costs. When each supplier's marginal cost is private information, the lowest possible price is determined by the number of suppliers and the buyer's reservation price. Comparing these two market settings, we find that whether IT is beneficial to buyers or suppliers depends on the scale of the bid process and the highest marginal cost. When the number of suppliers and the difference between the highest marginal cost and the buyer's reservation price are sufficiently large, each supplier can gain a higher profit if the marginal costs are private information. On the contrary, when the highest marginal cost approaches the buyer's reservation price, complete cost information benefits the suppliers.
Auctions Economics Game theory Online exchanges Supplier competition
http://www.sciencedirect.com/science/article/pii/S0377221711004504
Li, Yung-Ming
Jhang-Li, Jhih-Hua
oai:RePEc:eee:ejores:v:215:y:2011:i:1:p:204-217 2011-08-25 RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:1:p:204-217
article
Reverse programming the optimal process mean problem to identify a factor space profile
For the manufacturer that intends to reduce processing costs without sacrificing product quality, the identification of the optimal process mean is a frequently revisited problem. The traditional method of solving this problem involves making assumptions on the process parameter values and then determining the ideal location of the mean based upon various constraints, such as cost or the degree of quality loss when a product characteristic deviates from its desired target value. The optimal process mean, however, is affected not only by these settings but also by any shift in the variability of a process, making it extremely difficult to predict with any accuracy. In contrast, this paper proposes the use of a reverse programming scheme to determine the relationship between the optimal process mean and the settings within an experimental factor space. By doing so, one may gain increased awareness of the sensitivity and robustness of a process, as well as greater predictive capability in setting the optimal process mean. Non-linear optimization programming routines are used from both a univariate and a multivariate perspective to illustrate the proposed methodology.
Quality management Optimization Response surface methodology Desirability function Optimal process mean
http://www.sciencedirect.com/science/article/pii/S0377221711005108
Goethals, Paul L.
Cho, Byung Rae
oai:RePEc:eee:ejores:v:214:y:2011:i:3:p:606-615 2011-08-25 RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:3:p:606-615
article
Intermittent demand: Linking forecasting to inventory obsolescence
The standard method to forecast intermittent demand is Croston's. This method is available in ERP-type solutions such as SAP and in specialised forecasting software packages (e.g. Forecast Pro), and is often applied in practice. It uses exponential smoothing to separately update the estimated demand size and demand interval whenever a positive demand occurs, and their ratio provides the forecast of demand per period. The Croston method has two important disadvantages. First and foremost, not updating after (many) periods with zero demand renders the method unsuitable for dealing with obsolescence issues. Second, the method is positively biased, and this holds both for all points in time (i.e. considering the forecasts made at an arbitrary time period) and for issue points only (i.e. considering only the forecasts following a positive demand occurrence). The second issue has been addressed in the literature by the proposal of an estimator (the Syntetos-Boylan Approximation, SBA) that is approximately unbiased. In this paper, we propose a new method that overcomes both these shortcomings without adding complexity. Different from the Croston method, the new method is unbiased (for all points in time) and it updates the demand probability instead of the demand interval, doing so in every period. The comparative merits of the new estimator are assessed by means of an extensive simulation experiment. The results indicate its superior performance and enable insights to be gained into the linkage between demand forecasting and obsolescence.
Forecasting Intermittent demand Inventory control Obsolescence
http://www.sciencedirect.com/science/article/pii/S0377221711004437
Teunter, Ruud H.
Syntetos, Aris A.
Zied Babai, M.
oai:RePEc:eee:ejores:v:215:y:2011:i:1:p:161-1682011-08-25RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:1:p:161-168
article
Weighted bankruptcy rules and the museum pass problem
In this paper we introduce and characterize some allocation rules for weighted bankruptcy problems. We illustrate the relevance of weighted bankruptcy by applying it to analyse the museum pass problem introduced by Ginsburgh and Zang (2003). This application is completed with an analysis of real data for the "Card Musei" of the Municipality of Genova.
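As a minimal illustration of a weighted division rule (a sketch only, not necessarily one of the rules axiomatized in the paper), a weighted proportional rule divides the estate in proportion to weight times claim:

```python
def weighted_proportional(estate, claims, weights):
    """Illustrative weighted proportional rule: each claimant receives
    a share of the estate proportional to weight * claim."""
    scores = [w * c for w, c in zip(weights, claims)]
    total = sum(scores)
    return [estate * s / total for s in scores]

# Museum-pass flavour: pass revenue of 100 split between two museums
# with equal claims but weights 1 and 3.
print(weighted_proportional(100, [50, 50], [1, 3]))  # [25.0, 75.0]
```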
Game theory Bankruptcy Weighted rule Axiomatic characterization Museum pass problem
http://www.sciencedirect.com/science/article/pii/S0377221711004589
Casas-Méndez, Balbina
Fragnelli, Vito
García-Jurado, Ignacio
oai:RePEc:eee:ejores:v:214:y:2011:i:3:p:716-7212011-08-25RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:3:p:716-721
article
Generalizing cross redundancy in data envelopment analysis
Lee and Choi (2010) proved that a cross redundant output in a CCR or BCC DEA study is unnecessary and can be eliminated from the model without affecting the results of the study. A cross redundant output, as characterized by Lee and Choi, can be expressed as a specially constrained linear combination of both some outputs and some inputs. This article extends the contributions of Lee and Choi (2010) in at least three ways: (i) by adding precision and clarity to some of their definitions; (ii) by introducing specific definitions that complement the ones in their paper; and (iii) by conducting some additional analysis on the impact of the presence of other types of linear dependencies among the inputs and outputs of a DEA model. One reason that it is important to identify and remove cross redundant inputs or outputs from DEA models is that the computational burden of the DEA study is decreased, especially in large applications.
Data envelopment analysis DEA Cross redundancy
http://www.sciencedirect.com/science/article/pii/S0377221711004450
López, Francisco J.
oai:RePEc:eee:ejores:v:201:y:2010:i:3:p:791-7982011-08-25RePEc:eee:ejores
RePEc:eee:ejores:v:201:y:2010:i:3:p:791-798
article
On the link between Markovian trees and tree-structured Markov chains
In this paper, we describe a link between Markovian binary trees (MBT) and tree-like quasi-birth-and-death processes (TLQBD) by associating a specific TLQBD to each MBT. The algorithms to compute the matrices Gk in the TLQBD then correspond to the algorithms calculating the extinction probability vector of the MBT. This parallelism leads to a new quadratic algorithm, based on the Newton iteration method, which converges to the extinction probability of an MBT. We also present a one-to-one correspondence between a general Markovian tree (GMT) and a scalar tree-structured M/G/1-type Markov chain. This allows us to prove the equivalence between the main result on the positive recurrence, null recurrence or transience of a scalar tree-structured M/G/1-type Markov chain and the criticality of a GMT.
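A scalar analogue of the Newton-based extinction-probability computation mentioned above: for a single-type branching process, the extinction probability is the minimal fixed point q = f(q) of the offspring probability generating function f, and Newton's iteration started from zero increases monotonically to it. The matrix iteration for Markovian trees follows the same pattern; the offspring distribution below is illustrative.

```python
def extinction_prob(pgf, dpgf, tol=1e-12):
    """Newton's iteration for the minimal root of f(s) = s, where f is
    an offspring probability generating function and dpgf its
    derivative. Starting from 0, the steps stay positive and shrink,
    so the iterates converge to the minimal (extinction) root."""
    s = 0.0
    while True:
        step = (pgf(s) - s) / (1.0 - dpgf(s))  # Newton step for f(s) - s = 0
        if step < tol:
            return s + step
        s += step

# Offspring distribution P(0)=0.25, P(1)=0.25, P(2)=0.5 (mean 1.25,
# supercritical); its extinction probability is the root 0.5.
f  = lambda s: 0.25 + 0.25 * s + 0.5 * s * s
df = lambda s: 0.25 + s
```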
Stochastic processes Markovian multi-type branching processes Markovian trees Tree-like quasi-birth-and-death processes Extinction probability Newton's iteration
http://www.sciencedirect.com/science/article/B6VCT-4W1SRPD-1/2/f9e60fe85929b348860489e589725dc8
Hautphenne, Sophie
Houdt, Benny Van
oai:RePEc:eee:ejores:v:215:y:2011:i:1:p:136-1482011-08-25RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:1:p:136-148
article
Consignment contracts with retail competition
Consignment contracts have been widely employed in many industries. Under such contracts, items are sold at a retailer's, but the supplier retains full ownership of the inventory until it is purchased by consumers; the supplier collects payment from the retailer based on actual units sold. We investigate how competition among retailers influences supply chain decisions and profits under different consignment arrangements, namely a consignment price contract and a consignment contract with revenue share. First, we investigate how these two consignment contracts and a price-only contract compare from the perspective of each supply chain partner. We find that the retailers benefit more from a consignment price contract than from a consignment contract with revenue share or a price-only contract, regardless of the level of retailer differentiation. The supplier's most beneficial contract, however, critically depends upon the level of retailer differentiation: a consignment contract with revenue share is preferable for the supplier if retailer differentiation is strong; otherwise a consignment price contract is preferable. Second, we study how retailer differentiation affects the profits of all supply chain partners. We find that less retailer differentiation improves the supplier's profit for both types of consignment contract. Moreover, less retailer differentiation improves the profits of the retailers in a consignment price contract, but not necessarily in a consignment contract with revenue share.
Supply chain Consignment Retail competition
http://www.sciencedirect.com/science/article/pii/S0377221711005133
Adida, Elodie
Ratisoontorn, Nantaporn
oai:RePEc:eee:ejores:v:215:y:2011:i:1:p:21-242011-08-25RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:1:p:21-24
article
Locating a competitive facility in the plane with a robustness criterion
A new model for locating a competitive facility in the plane in a robust way is presented and embedded in the literature on robustness in facility location. Its mathematical properties are investigated and new sharp bounds for a deterministic method that guarantees the global optimum are derived and evaluated.
Robustness Facility location Robust solutions Competitive location Huff model DC programming
http://www.sciencedirect.com/science/article/pii/S0377221711004887
Blanquero, R.
Carrizosa, E.
Hendrix, E.M.T.
oai:RePEc:eee:ejores:v:214:y:2011:i:3:p:457-4722011-08-25RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:3:p:457-472
article
A taxonomy and review of the fuzzy data envelopment analysis literature: Two decades in the making
Data envelopment analysis (DEA) is a methodology for measuring the relative efficiencies of a set of decision making units (DMUs) that use multiple inputs to produce multiple outputs. Crisp input and output data are fundamentally indispensable in conventional DEA. However, the observed values of the input and output data in real-world problems are sometimes imprecise or vague. Many researchers have proposed various fuzzy methods for dealing with the imprecise and ambiguous data in DEA. In this study, we provide a taxonomy and review of the fuzzy DEA methods. We present a classification scheme with four primary categories, namely, the tolerance approach, the α-level based approach, the fuzzy ranking approach and the possibility approach. We discuss each classification scheme and group the fuzzy DEA papers published in the literature over the past 20 years. To the best of our knowledge, this paper appears to be the only review and complete source of references on fuzzy DEA.
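The relative-efficiency computation underlying crisp DEA can be sketched as the input-oriented CCR model in envelopment form: minimise the radial contraction factor theta such that a composite of the observed DMUs uses at most theta times DMU k's inputs while producing at least its outputs. The two-DMU data in the usage note are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, k):
    """Input-oriented CCR efficiency of DMU k (envelopment form).
    X: (n_dmus, n_inputs), Y: (n_dmus, n_outputs).
    Variables are [theta, lam_1, ..., lam_n]."""
    n = X.shape[0]
    c = np.r_[1.0, np.zeros(n)]                     # objective: theta
    A_in = np.c_[-X[k][:, None], X.T]               # sum_j lam_j x_ij <= theta x_ik
    A_out = np.c_[np.zeros((Y.shape[1], 1)), -Y.T]  # sum_j lam_j y_rj >= y_rk
    b = np.r_[np.zeros(X.shape[1]), -Y[k]]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=b,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun
```

With one input and one output this reduces to each DMU's productivity ratio relative to the best observed ratio, which makes small examples easy to check by hand.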
Data envelopment analysis Fuzzy sets Tolerance approach α-Level based approach Fuzzy ranking approach Possibility approach
http://www.sciencedirect.com/science/article/pii/S0377221711001329
Hatami-Marbini, Adel
Emrouznejad, Ali
Tavana, Madjid
oai:RePEc:eee:ejores:v:214:y:2011:i:3:p:665-6732011-08-25RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:3:p:665-673
article
The signal model: A model for competing risks of opportunistic maintenance
This paper presents a competing risks reliability model for a system that releases signals each time its condition deteriorates. The released signals are used to inform opportunistic maintenance. The model provides a framework for the determination of the underlying system lifetime from right-censored data, without requiring explicit assumptions about the type of censoring to be made. The parameters of the model are estimated from observational data by using maximum likelihood estimation. We illustrate the estimation process through a simulation study. The proposed signal model can be used to support decision-making in optimising preventive maintenance: at a component level, estimates of the underlying failure distribution can be used to identify the critical signal that would trigger maintenance of the individual component; at a multi-component system level, accurate estimates of the component underlying lifetimes are important when making general maintenance decisions. The benefit of good estimation from censored data, when adequate knowledge about the dependence structure is not available, may justify the additional data collection cost in cases where full signal data is not available.
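As a minimal illustration of maximum-likelihood estimation from right-censored data (a standard building block, far simpler than the paper's competing-risks signal model), the MLE of an exponential failure rate is the number of observed failures divided by the total time on test:

```python
def exp_mle_censored(times, observed):
    """MLE of an exponential failure rate under right censoring.
    times:    observation times (failure time if observed, censoring
              time otherwise)
    observed: 1 if the unit failed, 0 if it was censored."""
    failures = sum(observed)
    total_time_on_test = sum(times)
    return failures / total_time_on_test

# Two failures (at 2 and 3) and one unit censored at 5:
print(exp_mle_censored([2, 3, 5], [1, 1, 0]))  # 0.2
```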
Reliability Maintenance Competing risks Statistical inference
http://www.sciencedirect.com/science/article/pii/S0377221711004413
Bedford, Tim
Dewan, Isha
Meilijson, Isaac
Zitrou, Athena
oai:RePEc:eee:ejores:v:214:y:2011:i:3:p:485-4922011-08-25RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:3:p:485-492
article
An interior proximal method in vector optimization
This paper studies the vector optimization problem (VOP) of finding weakly efficient points for maps from R^n to R^m, with respect to the partial order induced by a closed, convex, and pointed cone C with nonempty interior. We develop for this problem an extension of the proximal point method for scalar-valued convex optimization, with a modified convergence-sensing condition that allows us to construct an interior proximal method for solving the VOP on nonpolyhedral sets.
Interior point methods Vector optimization C-convex Positively lower semicontinuous
http://www.sciencedirect.com/science/article/pii/S0377221711004115
Villacorta, Kely D.V.
Oliveira, P. Roberto
oai:RePEc:eee:ejores:v:207:y:2010:i:1:p:420-4332011-08-25RePEc:eee:ejores
RePEc:eee:ejores:v:207:y:2010:i:1:p:420-433
article
Optimization of R&D project portfolios under endogenous uncertainty
Project portfolio management deals with the dynamic selection of research and development (R&D) projects and determination of resource allocations to these projects over a planning period. Given the uncertainties and resource limitations over the planning period, the objective is to maximize the expected total discounted return or the expectation of some other function for all projects over a long time horizon. We develop a detailed formal description of this problem and the corresponding decision process, and then model it as a multistage stochastic integer program with endogenous uncertainty. Accounting for this endogeneity, we propose an efficient solution approach for the resulting model, which involves the development of a formulation technique that is amenable to scenario decomposition. The proposed solution algorithm also includes an application of the sample average approximation method, where the sample problems are solved through Lagrangian relaxation and a new lower bounding heuristic. The performance of the overall solution procedure is demonstrated using several implementations of the proposed approach.
OR in research and development Project portfolio Technology management R&D Multistage stochastic programming Endogenous uncertainty
http://www.sciencedirect.com/science/article/B6VCT-504127Y-1/2/e3d8fbc74177b8efd6bbad180f34317b
Solak, Senay
Clarke, John-Paul B.
Johnson, Ellis L.
Barnes, Earl R.
oai:RePEc:eee:ejores:v:215:y:2011:i:1:p:105-1142011-08-25RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:1:p:105-114
article
Optimal operating strategy for a long-haul liner service route
This paper proposes an optimal operating strategy problem arising in the liner shipping industry that aims to determine service frequency, containership fleet deployment plan, and sailing speed for a long-haul liner service route. The problem is formulated as a mixed-integer nonlinear programming model that cannot be solved efficiently by existing solution algorithms. In view of some unique characteristics of liner shipping operations, this paper proposes an efficient and exact branch-and-bound based ε-optimal algorithm. In particular, a mixed-integer nonlinear model is first developed for a given service frequency and ship type; two linearization techniques are subsequently presented to approximate this model with a mixed-integer linear program; and the branch-and-bound approach controls the approximation error below a specified tolerance. This paper further demonstrates that the branch-and-bound based ε-optimal algorithm obtains a globally optimal solution with the predetermined relative optimality tolerance ε in a finite number of iterations. The case study based on an existing long-haul liner service route shows the effectiveness and efficiency of the proposed solution method.
Logistics Liner shipping Sailing speed Branch-and-bound
http://www.sciencedirect.com/science/article/pii/S0377221711005054
Meng, Qiang
Wang, Shuaian
oai:RePEc:eee:ejores:v:215:y:2011:i:1:p:268-2802011-08-25RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:1:p:268-280
article
Customer rebates and retailer incentives in the presence of competition and price discrimination
Promotions are important tools for matching supply and demand in many industries. In the United States automotive industry, promotions are frequently offered, and may be given directly to customers (rebates) or to dealers (incentives) to stimulate demand. We analyze the performance of customer rebate and retailer incentive promotions under competition. We study a setting with two manufacturers making simultaneous pricing and promotion decisions, and with two price-discriminating retailers as Stackelberg followers making simultaneous order quantity decisions. In the benchmark case with no promotions, we characterize the equilibria in closed form. We find that retailer incentives can be used by manufacturers to simultaneously improve each of their profits but can potentially lead to lower retailer profits. When manufacturers use customer rebates, we show that a manufacturer is able to decrease the profit of her competitor while increasing her own profit, although she also risks her competitor using rebates in a similar fashion. Unlike the monopoly case, where the manufacturers are always better off with retailer incentives, customer rebates can be more profitable in some cases in the presence of competition. Using numerical examples we generate insights on the manufacturers' preference of promotions in different market settings.
Retailer incentives Customer rebates Competition Automotive industry First-degree price discrimination
http://www.sciencedirect.com/science/article/pii/S0377221711003195
Demirag, Ozgun Caliskan
Keskinocak, Pinar
Swann, Julie
oai:RePEc:eee:ejores:v:214:y:2011:i:3:p:616-6262011-08-25RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:3:p:616-626
article
Manufacturing performance measurement and target setting: A data envelopment analysis approach
Manufacturing decision makers have to deal with a large number of reports and metrics for evaluating the performance of manufacturing systems. Since the metrics provide different and at times conflicting assessments, it is hard for the manufacturing decision makers to track and improve overall manufacturing system performance. This research presents a data envelopment analysis (DEA) based approach for performance measurement and target setting of manufacturing systems. The approach is applied to two different manufacturing environments. The performance peer groups identified using DEA are utilized to set performance targets and to guide performance improvement efforts. The DEA scores are checked against past process modifications that led to identified performance changes. Limitations of the DEA based approach are presented when considering measures that are influenced by factors outside of the control of the manufacturing decision makers. The potential of a DEA based generic performance measurement approach for manufacturing systems is provided.
Manufacturing Performance measurement Target setting Data envelopment analysis
http://www.sciencedirect.com/science/article/pii/S037722171100453X
Jain, Sanjay
Triantis, Konstantinos P.
Liu, Shiyong
oai:RePEc:eee:ejores:v:214:y:2011:i:3:p:703-7152011-08-25RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:3:p:703-715
article
An innovative orders-of-magnitude approach to AHP-based multi-criteria decision making: Prioritizing divergent intangible humane acts
An innovative Analytic Hierarchy Process-based structure is developed to capture the relationship between various levels of activities contributed by people to society. Physical objects have widespread extension and degrees of importance that often differ by many orders of magnitude. Similarly, mental thoughts and criteria occur in widely heterogeneous entities that have to be sorted and arranged into homogeneous groups of few elements in each group so that one can evaluate the relationships among them accurately, from the smallest to the largest. It is through such a framework for organizing factors with smooth transition that it is possible to derive reliable priorities from expert judgments. The proposed model enables one to make decisions and allocate resources in as detailed and fine a way as possible. In addition to the traditional approach of structuring criteria into multiple clusters, the alternatives of a decision are also organized into the lowest multiple levels of that hierarchy. This arrangement and evaluation of alternatives differs from one criterion to another, which adds to the complexity of the undertaking when the alternatives are heterogeneous. The coherent approach to structuring complex decisions with the Analytic Hierarchy Process enables one to transcend the complexity of dealing in a scientific way with the problem of widespread orders of magnitude of criteria and alternatives in a complex decision. When the magnitudes are actually very small or very large, the accuracy of rating alternatives one at a time instead of comparing them in pairs involves much guessing, and can lead to a questionable outcome. Alternatively, comparisons, which are necessary for the measurement of intangibles, have greater and better justified accuracy.
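The pairwise-comparison step that the above structure builds on can be sketched with the standard AHP derivation of priorities: a power iteration toward the principal eigenvector of a positive reciprocal comparison matrix. The 2x2 matrix below is an illustrative, perfectly consistent example.

```python
import numpy as np

def ahp_priorities(A, iters=100):
    """Priority vector of a positive reciprocal pairwise-comparison
    matrix, computed by power iteration toward the (normalised)
    principal eigenvector -- the standard AHP derivation."""
    w = np.full(A.shape[0], 1.0 / A.shape[0])
    for _ in range(iters):
        w = A @ w
        w /= w.sum()   # keep priorities summing to 1
    return w

# "The first element is twice as important as the second" gives the
# consistent matrix below, whose priorities are (2/3, 1/3).
A = np.array([[1.0, 2.0],
              [0.5, 1.0]])
```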
Decision support structure Multivariate relative measurement Intangibles AHP Monetary value
http://www.sciencedirect.com/science/article/pii/S0377221711004449
Saaty, Thomas L.
Shang, Jennifer S.
oai:RePEc:eee:ejores:v:195:y:2009:i:1:p:75-882011-08-25RePEc:eee:ejores
RePEc:eee:ejores:v:195:y:2009:i:1:p:75-88
article
The dynamic frequency assignment problem
In this paper, we consider a frequency assignment problem occurring in a military context. The main originality of the problem pertains to its dynamic dimension: new communications requiring frequency assignments need to be established throughout a battlefield deployment. The problem resolution framework decomposes into three phases: assignment of an initial kernel of communications, dynamic assignment of new communication links and a repair process when no assignment is possible. Different solution methods are proposed and extensive computational experiments are carried out on realistic instances.
Frequency assignment Dynamic problem Heuristics Tabu search and consistent neighborhood Branch&Bound
http://www.sciencedirect.com/science/article/B6VCT-4RR8YR5-5/2/50579c934e2146544df4a3a91a3ec5ea
Dupont, Audrey
Linhares, Andréa Carneiro
Artigues, Christian
Feillet, Dominique
Michelon, Philippe
Vasquez, Michel
oai:RePEc:eee:ejores:v:214:y:2011:i:3:p:493-5002011-08-25RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:3:p:493-500
article
Solving discrete systems of nonlinear equations
We study the existence problem of a zero point of a function defined on a finite set of elements of the integer lattice of n-dimensional Euclidean space. It is assumed that the set is integrally convex, which implies that the convex hull of the set can be subdivided into simplices such that every vertex is an element of the set and each simplex of the triangulation lies in an n-dimensional cube of size one. With respect to this triangulation we assume that the function satisfies a property that replaces continuity. Under this property and some boundary condition, the function has a zero point. To prove this we use a simplicial algorithm that terminates with a zero point within a finite number of iterations. The standard technique of applying a fixed point theorem to a piecewise linear approximation cannot be applied, because the 'continuity property' is too weak to assure that a zero point of the piecewise linear approximation induces a zero point of the function itself. We apply the main existence result to prove the existence of a pure Cournot-Nash equilibrium in a Cournot oligopoly model. We further obtain a discrete analogue of the well-known Borsuk-Ulam theorem and a theorem for the existence of a solution to the discrete nonlinear complementarity problem.
Discrete system of equations Triangulation Simplicial algorithm Fixed point Zero point
http://www.sciencedirect.com/science/article/pii/S0377221711004498
van der Laan, Gerard
Talman, Dolf
Yang, Zaifu
oai:RePEc:eee:ejores:v:215:y:2011:i:1:p:149-1602011-08-25RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:1:p:149-160
article
Optimizing yard assignment in an automotive transshipment terminal
This paper studies a yard management problem in an automotive transshipment terminal. Groups of cars arrive to and depart from the terminal in a given planning period. These groups must be assigned to parking rows under some constraints resulting from managerial rules. The main objective is the minimization of the total handling time. Model extensions to handle application specific issues such as a rolling horizon and a manpower leveling objective are also discussed. The main features of the problem are modeled as an integer linear program. However, solving this formulation by a state-of-the-art solver is impractical. In view of this, we develop a metaheuristic algorithm based on the adaptive large neighborhood search framework. Computational results on real-life data show the efficacy of the proposed metaheuristic algorithm.
Logistics Yard management Automotive transshipment terminal Adaptive large neighborhood search
http://www.sciencedirect.com/science/article/pii/S0377221711005376
Cordeau, Jean-François
Laporte, Gilbert
Moccia, Luigi
Sorrentino, Gregorio
oai:RePEc:eee:ejores:v:215:y:2011:i:1:p:169-1802011-08-25RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:1:p:169-180
article
A Bayesian approach to the triage problem with imperfect classification
A collection of jobs (or customers, or patients) wait impatiently for service. Each has a random lifetime during which it is available for service. Should this lifetime expire before its service starts then it leaves unserved. Limited resources mean that it is only possible to serve one job at a time. We wish to schedule the jobs for service to maximise the total number served. In support of this objective all jobs are subject to an initial triage, namely an assessment of both their urgency and of their service requirement. This assessment is subject to error. We take a Bayesian approach to the uncertainty generated by error prone triage and discuss the design of heuristic policies for scheduling jobs for service to maximise the Bayes' return (mean number of jobs served). We identify problem features for which a high price is paid in number of services lost for poor initial triage and for which improvements in initial job assessment yield significant improvements in service outcomes. An analytical upper bound for the cost of imperfect classification is developed for exponentially distributed lifetime cases.
Dynamic programming Bayes sequential decision problem Imperfect classification Stochastic scheduling Optimal service policy
http://www.sciencedirect.com/science/article/pii/S0377221711004929
Li, Dong
Glazebrook, Kevin D.
oai:RePEc:eee:ejores:v:214:y:2011:i:3:p:644-6552011-08-25RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:3:p:644-655
article
Simulation metamodeling with dynamic Bayesian networks
This paper presents a novel approach to simulation metamodeling using dynamic Bayesian networks (DBNs) in the context of discrete event simulation. A DBN is a probabilistic model that represents the joint distribution of a sequence of random variables and enables the efficient calculation of their marginal and conditional distributions. In this paper, the construction of a DBN based on simulation data and its utilization in simulation analyses are presented. The DBN metamodel allows the study of the time evolution of simulation by tracking the probability distribution of the simulation state over the duration of the simulation. This feature is unprecedented among existing simulation metamodels. The DBN metamodel also enables effective what-if analysis which reveals the conditional evolution of the simulation. In such an analysis, the simulation state at a given time is fixed and the probability distributions representing the state at other time instants are updated. Simulation parameters can be included in the DBN metamodel as external random variables. Then, the DBN offers a way to study the effects of parameter values and their uncertainty on the evolution of the simulation. The accuracy of the analyses allowed by DBNs is studied by constructing appropriate confidence intervals. These analyses could be conducted based on raw simulation data but the use of DBNs reduces the duration of repetitive analyses and is expedited by available Bayesian network software. The construction and analysis capabilities of DBN metamodels are illustrated with two example simulation studies.
Simulation Dynamic Bayesian networks Discrete event simulation Simulation metamodeling
http://www.sciencedirect.com/science/article/pii/S0377221711004127
Poropudas, Jirka
Virtanen, Kai
oai:RePEc:eee:ejores:v:214:y:2011:i:3:p:732-7382011-08-25RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:3:p:732-738
article
A probability-mapping algorithm for calibrating the posterior probabilities: A direct marketing application
Calibration refers to the adjustment of the posterior probabilities output by a classification algorithm towards the true prior probability distribution of the target classes. This adjustment is necessary to account for the difference in prior distributions between the training set and the test set. This article proposes a new calibration method, called the probability-mapping approach. Two types of mapping are proposed: linear and non-linear probability mapping. These new calibration techniques are applied to nine real-life direct marketing datasets. The newly proposed techniques are compared with the original, non-calibrated posterior probabilities and with the adjusted posterior probabilities obtained using the rescaling algorithm of Saerens et al. (2002). The results indicate that marketing researchers should calibrate the posterior probabilities obtained from the classifier. Moreover, a 'simple' rescaling algorithm is shown to be insufficient: the results suggest applying the newly proposed non-linear probability-mapping approach for the best calibration performance.
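The core of the rescaling idea of Saerens et al. (2002) referenced above is a prior-shift correction: reweight each class posterior by the ratio of deployment prior to training prior, then renormalise. (The paper's probability-mapping approach instead fits linear or non-linear maps to the scores.) A sketch of the prior-shift step, with illustrative numbers:

```python
def adjust_posteriors(posteriors, train_priors, new_priors):
    """Prior-shift correction of class posteriors: multiply each
    posterior p(c|x) by new_prior_c / train_prior_c, then renormalise
    so the adjusted posteriors sum to one."""
    adjusted = [p * nw / tr for p, tr, nw in
                zip(posteriors, train_priors, new_priors)]
    total = sum(adjusted)
    return [a / total for a in adjusted]

# A classifier trained on balanced classes outputs (0.5, 0.5); if the
# deployment priors are really (0.2, 0.8), the corrected posteriors
# follow the deployment priors.
print(adjust_posteriors([0.5, 0.5], [0.5, 0.5], [0.2, 0.8]))
```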
Data mining Decision support systems Direct marketing Response modeling Calibration
http://www.sciencedirect.com/science/article/pii/S0377221711004528
Coussement, Kristof
Buckinx, Wouter
oai:RePEc:eee:ejores:v:214:y:2011:i:3:p:683-6962011-08-25RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:3:p:683-696
article
Efficient space-filling and non-collapsing sequential design strategies for simulation-based modeling
Simulated computer experiments have become a viable cost-effective alternative for controlled real-life experiments. However, the simulation of complex systems with multiple input and output parameters can be a very time-consuming process. Many of these high-fidelity simulators need minutes, hours or even days to perform one simulation. The goal of global surrogate modeling is to create an approximation model that mimics the original simulator, based on a limited number of expensive simulations, but can be evaluated much faster. The set of simulations performed to create this model is called the experimental design. Traditionally, one-shot designs such as the Latin hypercube and factorial design are used, and all simulations are performed before the first model is built. In order to reduce the number of simulations needed to achieve the desired accuracy, sequential design methods can be employed. These methods generate the samples for the experimental design one by one, without knowing the total number of samples in advance. In this paper, the authors perform an extensive study of new and state-of-the-art space-filling sequential design methods. It is shown that the new sequential methods proposed in this paper produce results comparable to the best one-shot experimental designs available right now.
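One simple space-filling sequential strategy of the kind compared in the paper is greedy maximin: each new sample is the random candidate whose minimum distance to the points already chosen is largest, so points are generated one by one without fixing the design size in advance. The candidate count and seed below are illustrative assumptions.

```python
import math
import random

def sequential_maximin(n_points, dim, n_candidates=200, seed=0):
    """Greedy maximin sequential design on the unit hypercube: starting
    from one random point, repeatedly add the candidate point that
    maximises the minimum distance to the current design."""
    rng = random.Random(seed)
    design = [[rng.random() for _ in range(dim)]]
    for _ in range(n_points - 1):
        best, best_d = None, -1.0
        for _ in range(n_candidates):
            cand = [rng.random() for _ in range(dim)]
            d = min(math.dist(cand, p) for p in design)
            if d > best_d:
                best, best_d = cand, d
        design.append(best)
    return design
```

Because the design is built incrementally, sampling can stop as soon as the surrogate model reaches the desired accuracy, which is the motivation given above for sequential over one-shot designs.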
Regression Design of computer experiments Experimental design Sequential design Space-filling
http://www.sciencedirect.com/science/article/pii/S0377221711004577
Crombecq, K.
Laermans, E.
Dhaene, T.
oai:RePEc:eee:ejores:v:215:y:2011:i:1:p:281-2882011-08-25RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:1:p:281-288
article
The decision to raise firm value through a sports-business exchange: How much are Real Madrid's goals worth to its president's company's goals?
Global brands emerging from the world of sports are becoming commonplace, and firms invest in the realm of sports, usually through sponsorship initiatives, to link themselves with these global brands. Over and above a mere business link, what if a company makes a personal commitment to get into the core of a renowned, celebrated sports team? This article provides managers with a procedure to analyze, on a weekly basis, how valuable this type of decision is. A conceptual model shows that the personal involvement of a firm's figurehead in a first-class sports club can have a positive impact on firm value if the person is doing well in the task s/he is entrusted with by the club. The empirical application to the soccer club Real Madrid, over 1,409 days and 215 matches, finds that the club's performance on the field has a significant impact on the economic returns of its president's company, with asymmetric effects on firm value in a "loss aversion" pattern; that is, lost matches have a greater effect on firm value than games won.
Decision analysis Sports-business exchange Brand equity Firm value Loss aversion
http://www.sciencedirect.com/science/article/pii/S0377221711003766
Nicolau, Juan L.
oai:RePEc:eee:ejores:v:214:y:2011:i:3:p:579-5872011-08-25RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:3:p:579-587
article
Joint logistics and financial services by a 3PL firm
Integrated logistics and financial services have been practiced by third party logistics (3PL) firms for years; however, the literature has been silent on the value of 3PL firms as credit providers in budget-constrained supply chains. This paper investigates an extended supply chain model with a supplier, a budget-constrained retailer, a bank, and a 3PL firm, in which the retailer has insufficient initial budget and may borrow or obtain trade credit from either a bank (traditional role) or a 3PL firm (control role). Our analysis indicates that the control role model yields higher profits not only for the 3PL firm but also for the supplier, the retailer, and the entire supply chain. In comparison with a supplier credit model where the supplier provides the trade credit, the control role model yields a better performance for the supply chain as long as the 3PL firm's marginal profit is greater than that of the supplier. We further demonstrate that, for all players, both the control role and supplier credit models can outperform the classic newsvendor model without budget constraint.
Third party logistics (3PL) Budget-constrained retailer Supply chain Financial service
http://www.sciencedirect.com/science/article/pii/S0377221711004164
Chen, Xiangfeng
Cai, Gangshu (George)
oai:RePEc:eee:ejores:v:214:y:2011:i:3:p:526-5352011-08-25RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:3:p:526-535
article
A tree search method for the container loading problem with shipment priority
This paper addresses a special kind of container loading problem with shipment priority. We present a tree search method, which is based on a greedy heuristic. In the greedy heuristic, blocks made up of identical items with the same orientation are selected for packing into a container. Five evaluation functions are proposed for block selection, and the different blocks selected by each evaluation function constitute the branches of the search tree. A method of space splitting and merging is also embedded in the algorithm to facilitate efficient use of the container space. In addition, the proposed algorithm covers an important constraint called shipment priority to solve practical problems. The validity of the proposed algorithm is examined by comparing the present results with those of published algorithms using the same data.
Packing Container loading problem Heuristic Tree search Shipment priority
http://www.sciencedirect.com/science/article/pii/S0377221711003699
Ren, Jidong
Tian, Yajie
Sawaragi, Tetsuo
oai:RePEc:eee:ejores:v:215:y:2011:i:1:p:301-3082011-08-25RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:1:p:301-308
article
Pro-efficiency: Data speak more than technical efficiency
In this study, we demonstrate a new method of addressing efficiency in situations in which only the input and output data are available, while evaluating efficiency more accurately than is possible via the ordinary data envelopment analysis (DEA). Technical efficiency is important, but management always desires information regarding the profit aspects of performance. In practice, however, the precise price data are frequently unavailable. Is it possible to approximate profit efficiency in the absence of price information? We develop a simple and usable approach, a linear programming model, for the evaluation of profit efficiency. Our approach implies technical efficiency in DEA and gives rise to the upper bound of profit efficiency, referred to as pro-efficiency. We also report a successful application of our method to a securities company, in which a comparison of the actual profit data and the pro-efficiency measures of the company's branches demonstrates a significant correlation.
Efficiency measurement DEA Profit efficiency Securities company
http://www.sciencedirect.com/science/article/pii/S0377221711004097
Sam Park, K.
Cho, Jin-Wan
oai:RePEc:eee:ejores:v:215:y:2011:i:1:p:181-1872011-08-25RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:1:p:181-187
article
Should European gamblers play lotto in the USA?
Any jackpot-building game is designed to have a negative expected return for the gambler, but it can be profitable under certain circumstances. Previous studies have shown that the purchase of a single ticket of US-American state lotteries is sometimes a gamble with a positive expected value. Lottery winnings are not taxed in Europe, which suggests that the profitability of European games may be even higher. We present an exact formula for the calculation of the expected value of a single lotto ticket and find European lottery drawings to be far less profitable for the gambler than those in the US-American lotto market. Those US lotteries that generate profitable drawings are not characterized by higher redistribution rates or by their specific rules, but by the purchasing behavior of the gamblers. These gamblers buy far fewer tickets (per capita) and they barely react to increasing jackpots, even though the jackpots are large enough to cause positive expected winnings.
Applied probability Gambling Lottery
http://www.sciencedirect.com/science/article/pii/S0377221711004978
Hofer, Vera
Leitner, Johannes
oai:RePEc:eee:ejores:v:214:y:2011:i:3:p:739-7482011-08-25RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:3:p:739-748
article
An adaptive evaluation mechanism for online traders
Economic agents in electronic markets generally consider reputation to be a significant factor in selecting trading partners. Most traditional online businesses publish reputation profiles for traders that reflect the average of the ratings received in previous transactions. Because of the importance of these ratings, there is an incentive for traders to engage in strategic behavior (for example, shilling) to artificially inflate their ratings. It is therefore important for an online business to be able to provide a robust estimate of a trader's reputation that is not easily affected by strategic behavior or noisy ratings. This paper proposes such an adaptive ratings-based reputation model. The model is based on a trader's transaction history, witness testimony, and other weighting factors. Learning is integrated to make the ratings model adaptive and robust in a dynamic environment. To validate the proposed model and to demonstrate the significance of its constructs, a multi-agent system is built to simulate the interactions among buyers and sellers in an electronic marketplace. The performance of the proposed model is compared to that of the reputation model used in most online marketplaces such as Amazon, and to Huynh's model proposed in the literature.
E-commerce Reputation mechanism Online ratings Simulation Multi-agent system
http://www.sciencedirect.com/science/article/pii/S0377221711004565
You, Liangjun
Sikora, Riyaz
oai:RePEc:eee:ejores:v:214:y:2011:i:3:p:697-7022011-08-25RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:3:p:697-702
article
On the set of imputations induced by the k-additive core
An extension to the classical notion of core is the notion of k-additive core, that is, the set of k-additive games which dominate a given game, where a k-additive game has its Möbius transform (or Harsanyi dividends) vanishing for subsets of more than k elements. Therefore, the 1-additive core coincides with the classical core. The advantages of the k-additive core are that it is never empty once k ≥ 2, and that it preserves the idea of coalitional rationality. However, it produces k-imputations, that is, imputations on individuals and coalitions of at most k individuals, instead of a classical imputation. Therefore one needs to derive a classical imputation from a k-order imputation by a so-called sharing rule. The paper investigates what set of imputations the k-additive core can produce from a given sharing rule.
Game theory Core k-Additive game Selectope
http://www.sciencedirect.com/science/article/pii/S0377221711004152
Grabisch, Michel
Li, Tong
oai:RePEc:eee:ejores:v:215:y:2011:i:1:p:97-1042011-08-25RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:1:p:97-104
article
A comprehensive extension of optimal ordering policy for stock-dependent demand under progressive payment scheme
In a recent paper, Soni and Shah (2008) presented an inventory model with a stock-dependent demand under a progressive payment scheme, assuming zero ending-inventory and adopting a cost-minimization objective. However, with a stock-dependent demand a non-zero ending stock may increase profits resulting from the increased demand. This work is motivated by Soni and Shah's (2008) paper, extending their model to allow for: (1) a non-zero ending-inventory, (2) a profit-maximization objective, (3) a limited inventory capacity and (4) deteriorating items with a constant deterioration rate. For the resulting model, sufficient conditions for the existence and uniqueness of the optimal solution are provided. Finally, several economic interpretations of the theoretical results are also given.
Inventory Stock-dependent demand Progressive payment scheme Deteriorating items
http://www.sciencedirect.com/science/article/pii/S0377221711005042
Teng, Jinn-Tsair
Krommyda, Iris-Pandora
Skouri, Konstantina
Lou, Kuo-Ren
oai:RePEc:eee:ejores:v:213:y:2011:i:2:p:395-4042011-07-14RePEc:eee:ejores
RePEc:eee:ejores:v:213:y:2011:i:2:p:395-404
article
Optimization problems in statistical learning: Duality and optimality conditions
Regularization methods are techniques for learning functions from given data. We consider regularization problems whose objective function consists of a cost function and a regularization term, with the aim of selecting a prediction function f with a finite representation which minimizes the error of prediction. Here the role of the regularizer is to avoid overfitting. In general these are convex optimization problems with not necessarily differentiable objective functions. Thus, in order to provide optimality conditions for this class of problems, one needs to appeal to specific techniques from convex analysis. In this paper we provide a general approach for deriving necessary and sufficient optimality conditions for the regularized problem via the so-called conjugate duality theory. Afterwards we apply the obtained results to the Support Vector Machines problem and the Support Vector Regression problem formulated for different cost functions.
Machine learning Tikhonov regularization Convex duality theory Optimality conditions
http://www.sciencedirect.com/science/article/pii/S0377221711002323
Bot, Radu Ioan
Lorenz, Nicole
oai:RePEc:eee:ejores:v:214:y:2011:i:2:p:340-3472011-07-14RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:2:p:340-347
article
The nested consideration model: Investigating dynamic store consideration sets and store competition
The nested logit model has been widely used to study nested choice. A typical example of such nested choice is store patronage and brand choice. An important limitation of the nested logit model is that it assumes that all alternatives at both levels of the nest are available in the choice set of the consumer. While there is a wide literature on the incorporation of restricted choice sets into the logit model, the logical extension of this analysis to nested restricted choice sets has not been pursued in the literature. In this study we develop a nested consideration model that integrates store choice and brand choice incorporating the formation of dynamic restricted choice sets of both stores and brands. We term the model the nested consideration model and derive the related probabilities in a manner analogous to the well-known nested logit model. In an empirical illustration, we find that the nested consideration model shows better prediction than nested logit models (with the same explanatory variables). We find that it is important to account for dynamic store consideration sets rather than static sets or store loyalty measures. We also conduct simulations to demonstrate the importance of the nested consideration set model for correct pricing and store location decisions of business managers.
Marketing Nested consideration Store competition Consideration sets Nested logit
http://www.sciencedirect.com/science/article/pii/S0377221711003717
Pancras, Joseph
oai:RePEc:eee:ejores:v:214:y:2011:i:2:p:380-3922011-07-14RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:2:p:380-392
article
An experimental study of the effect of uncertainty representation on decision making
This paper presents the results of an experiment investigating the effects of using different formats for representing uncertain attribute evaluations on decision making. Study participants make a series of hypothetical choices using six uncertainty formats - probability distributions, expected values, standard deviations, three-point (minimum-median-maximum) approximations, quantiles, and scenarios - and effects on decision making are tracked in terms of the quality of the final choice, the specific characteristics of the selected alternatives, and the difficulty experienced in making a decision. The results provide insights into how subjects make single- and multi-criteria choices in the presence of uncertainty (and some format for representing uncertainty) but in the absence of any real facilitation. The use of probability distributions appeared to overload subjects with information, leading to poorer and more difficult choices than if some intermediate level of summary was used - in particular three-point approximations or quantiles.
Multi-criteria analysis Decision analysis Decision support systems Uncertainty modelling Psychology
http://www.sciencedirect.com/science/article/pii/S0377221711003651
Durbach, Ian N.
Stewart, Theodor J.
oai:RePEc:eee:ejores:v:214:y:2011:i:1:p:147-1592011-07-14RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:1:p:147-159
article
Behavioural simulations in spot electricity markets
We study the consistency of behavioural simulation methods used to model the operations of wholesale electricity markets. We include different supply and demand representations and propose the Experience-Weighted Attractions method (Camerer and Ho, 1999) to encompass several behavioural paradigms. We compare the results across assumptions and to standard economic theory predictions. The match is good under flat and upward-sloping supply bidding, and also for plausible demand elasticity assumptions. Learning is influenced by the number of bids per plant and the initial conditions. The simulations perform best under reinforcement learning, less well under best-response and especially poorly under fictitious play. The overall conclusion is that simulation assumptions are far from innocuous. We link their performance to underlying features, and identify those that are better suited to model liberalised electricity markets.
Behavioural simulations Electricity auctions Market design Experience-weighted attraction (EWA) Learning
http://www.sciencedirect.com/science/article/pii/S0377221711002785
Banal-Estañol, Albert
Rupérez Micola, Augusto
oai:RePEc:eee:ejores:v:214:y:2011:i:2:p:331-3392011-07-14RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:2:p:331-339
article
Optimal Bayesian fault prediction scheme for a partially observable system subject to random failure
A new method for predicting failures of a partially observable system is presented. System deterioration is modeled as a hidden, 3-state continuous time homogeneous Markov process. States 0 and 1, which are not observable, represent good and warning conditions, respectively. Only the failure state 2 is assumed to be observable. The system is subject to condition monitoring at equidistant, discrete time epochs. The vector observation process is stochastically related to the system state. The objective is to develop a method for optimally predicting impending system failures. Model parameters are estimated using the EM algorithm, and a cost-optimal Bayesian fault prediction scheme is proposed. The method is illustrated using real data obtained from spectrometric analysis of oil samples collected at regular time epochs from transmission units of heavy hauler trucks used in the mining industry. A comparison with other methods is given, which illustrates the effectiveness of our approach.
Maintenance Stochastic optimization Failure prediction Hidden Markov modeling Multivariate Bayesian control
http://www.sciencedirect.com/science/article/pii/S0377221711003675
Kim, Michael Jong
Jiang, Rui
Makis, Viliam
Lee, Chi-Guhn
oai:RePEc:eee:ejores:v:214:y:2011:i:2:p:418-4272011-07-14RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:2:p:418-427
article
Forecasting the value effect of seasoned equity offering announcements
Seasoned Equity Offers (SEOs) by publicly listed firms generally result in unexpected negative share price returns, as they are often perceived as a signal of overvalued share prices and information asymmetries. Hence, forecasting the value effect of such announcements is of crucial importance for issuers, who wish to avoid share price dilution, but also for professional fund managers and individual investors alike. This study adopts the OR forecasting paradigm, where the latest part of the data is used as a holdout, on which a competition is performed unveiling the most effective forecasting techniques for the matter in question. We employ data from a European market raising in total €8 billion through 149 SEOs. We compare economic and econometric models to forecasting techniques mostly applied in the OR literature, such as nearest-neighbour approaches and artificial neural networks, as well as human judgment. Evaluation in terms of statistical accuracy metrics indicates the superiority of the econometric models, while economic evaluation based on trading strategies and simulated profits shows expert judgement and nearest-neighbour approaches to be the top performers.
Financial forecasting Forecasting competitions Econometric models Artificial neural networks Judgment
http://www.sciencedirect.com/science/article/pii/S0377221711003481
Bozos, Konstantinos
Nikolopoulos, Konstantinos
oai:RePEc:eee:ejores:v:214:y:2011:i:1:p:91-982011-07-14RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:1:p:91-98
article
Finding all pure strategy Nash equilibria in a planar location game
In this paper, we deal with a planar location-price game where firms first select their locations and then set delivered prices in order to maximize their profits. If firms set the equilibrium prices in the second stage, the game is reduced to a location game for which pure strategy Nash equilibria are studied assuming that the marginal delivered cost is proportional to the distance between the customer and the facility from which it is served. We present characterizations of local and global Nash equilibria. Then an algorithm is shown in order to find all possible Nash equilibrium pairs of locations. The minimization of the social cost leads to a Nash equilibrium. An example shows that there may exist multiple Nash equilibria which are not minimizers of the social cost.
Location Game theory Nash equilibrium
http://www.sciencedirect.com/science/article/pii/S0377221711003134
Díaz-Báñez, J.M.
Heredia, M.
Pelegrín, B.
Pérez-Lantero, P.
Ventura, I.
oai:RePEc:eee:ejores:v:214:y:2011:i:1:p:136-1462011-07-14RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:1:p:136-146
article
Trade credit for supply chain coordination
Trade-credit is a seller's short-term loan to the buyer, allowing the buyer to delay payment of an invoice. It has been the largest source of working capital for a majority of business-to-business firms in the United States. Numerous theories have been proposed to explain trade-credit, mainly from finance perspectives. It has also been an important issue in supply chain management. Surprisingly, most literature in supply chain management has examined the retailer's stocking policies given a supplier's trade-credit. This paper attempts to shed light on trade-credit from a supplier's perspective, and presents it as a tool for supply chain coordination. Specifically, we explicitly assume firms' financial needs for inventory. Following a Newsvendor framework, we assume that the supplier grants trade-credit and markdown allowance. Given the supplier's offer, the retailer determines order quantity and the financing option for the inventory, either trade-credit or direct financing from a financial institution. Our result shows that the supplier's markdown allowance alone cannot fully coordinate the supply chain if the retailer employs direct financing. Positive financing costs call for trade-credit in order to subsidize the retailer's costs of inventory financing. Using trade-credit in addition to markdown allowance, the supplier fully coordinates the retailer's decisions for the largest joint profit, and extracts a greater portion of the maximized joint profit.
Finance Trade-credit Inventory financing Supply chain coordination Newsvendor framework
http://www.sciencedirect.com/science/article/pii/S0377221711003171
Lee, Chang Hwan
Rhee, Byong-Duk
oai:RePEc:eee:ejores:v:214:y:2011:i:1:p:160-1672011-07-14RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:1:p:160-167
article
Estimation methods for choice-based conjoint analysis of consumer preferences
Conjoint analysis, a preference measurement method typical in marketing research, has gradually expanded to other disciplines. Choice-based conjoint analysis (CBC) is currently the most popular type. Very few alternative estimation approaches have been suggested since the introduction of the Hierarchical Bayes (HB) method for estimating CBC utility functions. Studies that compare the performance of more than one of the proposed approaches and the HB are almost non-existent. We compare the performance of four published optimization-based procedures and additionally introduce a new one called CP. CP is an estimation approach based on convex penalty minimization. With HB as the benchmark, we use eight field data sets. We base the performance comparisons on holdout validation, i.e. predictive performance. Among the optimization-based procedures, CP performs best. We run simulations to compare the extent to which CP and HB can recover the true utilities. With the field data, on average, the CP and HB results are equally good. However, depending on the problem characteristics, one may perform better than the other. In terms of average performance, the other four methods were inferior to CP and HB.
Utility function Optimization Conjoint analysis Marketing research Preference estimation
http://www.sciencedirect.com/science/article/pii/S0377221711003110
Halme, Merja
Kallio, Markku
oai:RePEc:eee:ejores:v:213:y:2011:i:3:p:489-4972011-07-14RePEc:eee:ejores
RePEc:eee:ejores:v:213:y:2011:i:3:p:489-497
article
Two-stage production scheduling with an outsourcing option
This paper considers a two-stage production scheduling problem in which each activity requires two operations to be processed in stages 1 and 2, respectively. There are two options for processing each operation: the first is to produce it by utilizing in-house resources, while the second is to outsource it to a subcontractor. For in-house operations, a schedule is constructed and its performance is measured by the makespan, that is, the latest completion time of operations processed in-house. Operations by subcontractors are instantaneous but require outsourcing cost. The objective is to minimize the weighted sum of the makespan and the total outsourcing cost. This paper analyzes how the model's computational complexity changes according to unit outsourcing costs in both stages and describes the boundary between NP-hard and polynomially solvable cases. Finally, this paper presents an approximation algorithm for one NP-hard case.
Scheduling Outsourcing Flow shop scheduling NP-completeness Approximation algorithm
http://www.sciencedirect.com/science/article/pii/S0377221711002748
Lee, Kangbok
Choi, Byung-Cheon
oai:RePEc:eee:ejores:v:214:y:2011:i:1:p:118-1352011-07-14RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:1:p:118-135
article
ELECTREGKMS: Robust ordinal regression for outranking methods
We present a new method, called ELECTREGKMS, which employs robust ordinal regression to construct a set of outranking models compatible with preference information. The preference information supplied by the decision maker (DM) is composed of pairwise comparisons stating the truth or falsity of the outranking relation for some real or fictitious reference alternatives. Moreover, the DM specifies some ranges of variation of comparison thresholds on considered pseudo-criteria. Using robust ordinal regression, the method builds a set of values of concordance indices, concordance thresholds, indifference, preference, and veto thresholds, for which all specified pairwise comparisons can be restored. Such sets are called compatible outranking models. Using these models, two outranking relations are defined, necessary and possible. Whether for an ordered pair of alternatives there is necessary or possible outranking depends on the truth of outranking relation for all or at least one compatible model, respectively. Distinguishing the most certain recommendation worked out by the necessary outranking, and a possible recommendation worked out by the possible outranking, ELECTREGKMS answers questions of robustness concern. The method is intended to be used interactively with incremental specification of pairwise comparisons, possibly with decreasing confidence levels. In this way, the necessary and possible outranking relations can be, respectively, enriched or impoverished with the growth of the number of pairwise comparisons. Furthermore, the method is able to identify troublesome pieces of preference information which are responsible for incompatibility. The necessary and possible outranking relations are to be exploited as usual outranking relations to work out recommendation in choice or ranking problems. The introduced approach is illustrated by a didactic example showing how ELECTREGKMS can support real-world decision problems.
Robust ordinal regression Outranking relation Multiple criteria ranking and choice ELECTRE-like method
http://www.sciencedirect.com/science/article/pii/S0377221711003079
Greco, Salvatore
Kadzinski, Milosz
Mousseau, Vincent
Slowinski, Roman
oai:RePEc:eee:ejores:v:214:y:2011:i:1:p:67-772011-07-14RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:1:p:67-77
article
Compact bidding languages and supplier selection for markets with economies of scale and scope
Combinatorial auctions have been used in procurement markets with economies of scope. Preference elicitation is already a problem in single-unit combinatorial auctions, but it becomes prohibitive even for small instances of multi-unit combinatorial auctions, as suppliers cannot be expected to enumerate a sufficient number of bids that would allow an auctioneer to find the efficient allocation. Auction designs for markets with economies of scale and scope are much less well understood. They require more compact yet expressive bidding languages, and the supplier selection problem is typically computationally hard. In this paper, we propose a compact bidding language to express the characteristics of a supplier's cost function in markets with economies of scale and scope. Bidders in these auctions can specify various discounts and markups on overall spend on all items or selected item sets, and specify complex conditions for these pricing rules. We propose an optimization formulation to solve the resulting supplier selection problem and provide an extensive experimental evaluation. We also discuss the impact of different language features on the computational effort, on total spend, and on the knowledge representation of the bids. Interestingly, while in most settings volume discount bids can lead to significant cost savings, some types of volume discount bids can be worse than split-award auctions in simple settings.
Decision support systems Auctions/bidding E-commerce
http://www.sciencedirect.com/science/article/pii/S0377221711003109
Bichler, Martin
Schneider, Stefan
Guler, Kemal
Sayal, Mehmet
oai:RePEc:eee:ejores:v:214:y:2011:i:2:p:358-3642011-07-14RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:2:p:358-364
article
Integrated design and operation of remnant inventory supply chains under uncertainty
We consider the simultaneous design and operation of remnant inventory supply chains. Remnant inventory is generated when demand for various lengths of a product may be satisfied by existing inventory, or by cutting a large piece into smaller pieces. We formulate our problem as a two-stage stochastic mixed-integer program. In solving our stochastic program, we enhance the standard L-shaped method in two ways. Our computational experiments demonstrate that these enhancements are effective, dramatically reducing the solution time for large instances.
Remnant inventory Stochastic programming Mixed integer programming
http://www.sciencedirect.com/science/article/pii/S0377221711003833
Rajgopal, Jayant
Wang, Zhouyan
Schaefer, Andrew J.
Prokopyev, Oleg A.
oai:RePEc:eee:ejores:v:214:y:2011:i:1:p:31-382011-07-14RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:1:p:31-38
article
Iterated greedy for the maximum diversity problem
The problem of choosing a subset of elements with maximum diversity from a given set is known as the maximum diversity problem. Many algorithms and methods have been proposed for this hard combinatorial problem, including several highly sophisticated procedures. By contrast, in this paper we present a simple iterated greedy metaheuristic that generates a sequence of solutions by iterating over a greedy construction heuristic using destruction and construction phases. Extensive computational experiments reveal that the proposed algorithm is highly effective as compared to the best-so-far metaheuristics for the problem under consideration.
Maximum diversity problem Iterated greedy Local search Acceptance criterion
http://www.sciencedirect.com/science/article/pii/S0377221711003626
Lozano, M.
Molina, D.
García-Martínez, C.
oai:RePEc:eee:ejores:v:214:y:2011:i:1:p:15-262011-07-14RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:1:p:15-26
article
Pivot-and-reduce cuts: An approach for improving Gomory mixed-integer cuts
Gomory mixed-integer cuts are of crucial importance in solving large-scale mixed-integer linear programs. Recently, there has been considerable research particularly on the strengthening of these cuts. We present a new heuristic algorithm which aims at improving Gomory mixed-integer cuts. Our approach is related to the reduce-and-split cuts. These cuts are based on an algorithm which tries to reduce the coefficients of the continuous variables by forming integer linear combinations of simplex tableau rows. Our algorithm is designed to achieve the same result by performing pivots on the simplex tableau. We give a detailed description of the algorithm and its implementation. Finally, we report on computational results with our approach and analyze its performance. The results indicate that our algorithm can enhance the performance of the Gomory mixed-integer cuts.
Integer programming Cutting planes Gomory mixed-integer cuts
http://www.sciencedirect.com/science/article/pii/S037722171100350X
Wesselmann, Franz
Koberstein, Achim
Suhl, Uwe H.
oai:RePEc:eee:ejores:v:213:y:2011:i:2:p:375-3832011-07-14RePEc:eee:ejores
RePEc:eee:ejores:v:213:y:2011:i:2:p:375-383
article
Admission policies for the customized stochastic lot scheduling problem with strict due-dates
This paper considers admission control and scheduling of customer orders in a production system that produces different items on a single machine. Customer orders drive production, belong to product families, and have a family-dependent due-date, size, and reward. When production changes from one family to another, a setup time is incurred. Moreover, if an order cannot be accepted, it is considered lost upon arrival. The problem is to find a policy that accepts/rejects and schedules orders such that long-run profit is maximized. This problem finds its motivation in batch industries in which suppliers have to realize high machine utilization while delivery times should be short and reliable and the production environment is subject to long setup times. We model the joint admission control/scheduling problem as a Markov decision process (MDP) to gain insight into the optimal control of the production system, and use the MDP to benchmark the performance of a simple heuristic acceptance/scheduling policy. Numerical results show that the heuristic performs very well compared with the optimal policy for a wide range of parameter settings, including product family asymmetries in arrival rate, order size, and order reward.
Scheduling Order acceptance Make-to-order production Stochastic lot scheduling problem Due-dates
http://www.sciencedirect.com/science/article/pii/S0377221711002311
Germs, Remco
Van Foreest, Nicky D.
oai:RePEc:eee:ejores:v:213:y:2011:i:3:p:478-4882011-07-14RePEc:eee:ejores
RePEc:eee:ejores:v:213:y:2011:i:3:p:478-488
article
The impact of sharing customer returns information in a supply chain with and without a buyback policy
In this paper, we examine a single-period problem in a supply chain in which a Stackelberg manufacturer supplies a product to a retailer who faces customer returns and demand uncertainty. We show that the manufacturer incurs a significant profit loss, with and without a buyback policy, if it fails to account for customer returns in the wholesale price decision. Under the assumption that the retailer is better informed than the manufacturer about customer returns, we show that without a buyback policy the retailer prefers not to share this information if the manufacturer overestimates customer returns but prefers to share it if the manufacturer underestimates them. If the manufacturer offers a buyback policy, the results are reversed. We also discuss incentives to share customer returns information and some of the issues raised in sharing this information.
Customer returns Information sharing Customer returns policies Buyback policies
http://www.sciencedirect.com/science/article/pii/S0377221711002384
Chen, Jing
oai:RePEc:eee:ejores:v:214:y:2011:i:1:p:53-662011-07-14RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:1:p:53-66
article
Pareto analysis of supply chain contracts under satisficing objectives
Supply chain coordination has become critical to firms as increased pressure is placed on them to improve performance. We evaluate the performance of Push, Pull, and Advance-purchase discount (APD) contracts in a manufacturer-retailer supply chain where one or both firms have a satisficing objective of maximizing the probability of achieving a target profit. We identify the resulting operational modes of the supply chain and potential conflicts over the preferred contract under the Push, Pull, and APD contracts. When both firms are satisficing, conflict over the preferred contract arises when the manufacturer has an ambitious profit target or the retailer has a low profit target. We show that the Push contract can result in a large decrease in the expected profit of a risk-neutral manufacturer when the retailer maximizes the probability of achieving her maximum expected profit. We find that modified buy-back and profit-guarantee contracts can provide significant Pareto improvement over Push or APD contracts when the manufacturer is risk-neutral and the retailer is satisficing, while revenue-sharing contracts cannot. In contrast, revenue-sharing and modified buy-back contracts are Pareto dominant under certain conditions when the manufacturer is satisficing and the retailer is risk-neutral.
Supply chain contracts Newsvendor model Pareto improvements
http://www.sciencedirect.com/science/article/pii/S0377221711003092
He, Xiuli
Khouja, Moutaz
oai:RePEc:eee:ejores:v:213:y:2011:i:3:p:526-5372011-07-14RePEc:eee:ejores
RePEc:eee:ejores:v:213:y:2011:i:3:p:526-537
article
Strategic interactions in traditional franchise systems: Are franchisors always better off?
We investigate the effects of price competition and advertising spillover on franchisees' decision to cooperate and on the franchisor's contractual preferences. We show that the franchisees' decision to cooperate or not depends on the type of franchise contract. Under exclusive territory contracts, any mode of play between franchisees gives the same profits to the franchisees and the franchisor. Contracts that allow price competition and well-targeted local advertising offer good ground for horizontal cooperation, which may or may not benefit the franchisor depending on whether prices are strategic substitutes or strategic complements. Contracts in which price competition is allowed and the burden of advertising decisions is transferred entirely to the franchisor lead to cooperation between franchisees at the expense of the franchisor. Franchisees do not cooperate, to the benefit of the franchisor, if local advertising is predatory and price competition is not allowed in the contract, but franchisees are given the responsibility to undertake local advertising. Also, the franchisor endorses cooperation between franchisees when local advertising has a public-good nature, but such cooperation may never occur when the impact of local advertising on demand is significant. Finally, we show that while some contracts always dominate others, the choice of a franchise contract may also depend on local competition and/or the franchise goodwill.
Franchising Horizontal cooperation Differential games
http://www.sciencedirect.com/science/article/pii/S0377221711002268
Martín-Herrán, Guiomar
Sigué, Simon Pierre
Zaccour, Georges
oai:RePEc:eee:ejores:v:214:y:2011:i:1:p:27-302011-07-14RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:1:p:27-30
article
Probability of unique integer solution to a system of linear equations
We consider a system of m linear equations in n variables Ax = d and give necessary and sufficient conditions for the existence of a unique solution to the system that is integer: x ∈ {-1, 1}^n. We achieve this by reformulating the problem as a linear program and deriving necessary and sufficient conditions for the integer solution to be the unique primal optimal solution. We show that as long as m is larger than n/2, the linear programming reformulation succeeds for most instances, but if m is less than n/2, the reformulation fails on most instances. We also demonstrate that these predictions match the empirical performance of the linear programming formulation to very high accuracy.
Unique integer solution Linear equations Linear programming
http://www.sciencedirect.com/science/article/pii/S0377221711003511
Mangasarian, O.L.
Recht, Benjamin
oai:RePEc:eee:ejores:v:214:y:2011:i:1:p:99-1082011-07-14RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:1:p:99-108
article
Interaction indices for games on combinatorial structures with forbidden coalitions
The notion of interaction among a set of players has been defined on the Boolean lattice and Cartesian products of lattices. The aim of this paper is to extend this concept to combinatorial structures with forbidden coalitions. The set of feasible coalitions is supposed to fulfil some general conditions. This general representation encompasses convex geometries, antimatroids, augmenting systems and distributive lattices. Two axiomatic characterizations are obtained. They both assume that the Shapley value is already defined on the combinatorial structures. The first one is restricted to pairs of players and is based on a generalization of a recursivity axiom that uniquely specifies the interaction index from the Shapley value when all coalitions are permitted. This unique correspondence cannot be maintained when some coalitions are forbidden. From this, a weak recursivity axiom is defined. We show that this axiom together with linearity and dummy player are sufficient to specify the interaction index. The second axiomatic characterization is obtained from the linearity, dummy player and partnership axioms. An interpretation of the interaction index in the context of surplus sharing is also proposed. Finally, our interaction index is instantiated to the case of games under precedence constraints.
Game theory Cooperative games Interaction index Combinatorial structure Shapley value
http://www.sciencedirect.com/science/article/pii/S0377221711003493
Labreuche, Christophe
oai:RePEc:eee:ejores:v:214:y:2011:i:2:p:393-4022011-07-14RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:2:p:393-402
article
Optimal production strategy under demand fluctuations: Technology versus capacity
This paper provides a comparative analysis of five possible production strategies for two kinds of flexibility investment, namely flexible technology and flexible capacity, under demand fluctuations. Each strategy is underpinned by a set of operations decisions on technology level, capacity amount, production quantity, and pricing. By evaluating each strategy, we show how market uncertainty, production cost structure, operations timing, and investment costing environment affect a firm's strategic decisions. The results show that there is no sequential effect of the two flexibility investments. We also illustrate the different ways in which flexible technology and flexible capacity affect a firm's profit under demand fluctuations. The results reveal that compared to no flexibility investment, flexible technology investment earns the same or a higher profit for a firm, whereas flexible capacity investment can be beneficial or harmful to a firm's profit. Moreover, we prove that higher flexibility does not guarantee more profit. Depending on the situation, the optimal strategy can be any one of the five possible strategies. We also provide the optimality conditions for each strategy.
Production Investment analysis Flexible manufacturing systems Manufacturing
http://www.sciencedirect.com/science/article/pii/S0377221711003729
Yang, L.
Ng, C.T.
Cheng, T.C.E.
oai:RePEc:eee:ejores:v:213:y:2011:i:2:p:384-3872011-07-14RePEc:eee:ejores
RePEc:eee:ejores:v:213:y:2011:i:2:p:384-387
article
Unbounded knapsack problems with arithmetic weight sequences
We investigate a special case of the unbounded knapsack problem in which the item weights form an arithmetic sequence. We derive a polynomial time algorithm for this special case with running time O(n^8), where n denotes the number of distinct items in the instance. Furthermore, we extend our approach to a slightly more general class of knapsack instances.
Combinatorial optimization Computational complexity Dynamic programming Polynomially solvable special case
http://www.sciencedirect.com/science/article/pii/S0377221711002396
Deineko, Vladimir G.
Woeginger, Gerhard J.
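For context on what the arithmetic-weight structure buys, the generic unbounded knapsack problem admits only a pseudo-polynomial dynamic program in the capacity. A minimal sketch of that standard textbook recursion (the baseline, not the authors' polynomial algorithm):

```python
def unbounded_knapsack(weights, profits, capacity):
    """Pseudo-polynomial DP for the unbounded knapsack problem.

    best[c] = maximum profit achievable with total weight <= c,
    with each item usable arbitrarily many times.  Runs in
    O(n * capacity) time, which is exponential in the input size.
    """
    best = [0] * (capacity + 1)
    for c in range(1, capacity + 1):
        best[c] = best[c - 1]  # any solution for capacity c-1 is feasible
        for w, p in zip(weights, profits):
            if w <= c:
                best[c] = max(best[c], best[c - w] + p)
    return best[capacity]
```

For weights (2, 3), profits (3, 5), and capacity 7, the optimum packs one item of weight 3 and two of weight 2 for profit 11. The paper's contribution is that when the weights form an arithmetic sequence, the capacity can be eliminated from the running time altogether.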
oai:RePEc:eee:ejores:v:214:y:2011:i:1:p:39-522011-07-14RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:1:p:39-52
article
Modeling and analysis of the multiperiod effects of social relationship on supply chain networks
In this paper, we analyze the effects of levels of social relationship on a multiperiod supply chain network with multiple decision-makers (suppliers, manufacturers, and retailers) associated at different tiers. The model incorporates the individual attitudes towards disruption and opportunism risks and allows us to investigate the interplay of the heterogeneous decision-makers and to compute the resultant network equilibrium pattern of production, transactions, prices, and levels of social relationship over the multiperiod planning horizon. In our analysis, we focus on the following questions: (1) how do the evolving relationships affect the profitability and risks of supply chain firms as well as the prices and demands of the product in the market? (2) how do the relationships with the upstream supply chain firms affect the relationships with the downstream firms, and how do these relationships influence the profitability and risks of the supply chain firms? (3) how do the supply disruption risks interact with the opportunism risks through supply chain relationships, and how do these risks influence the profitability of the firms? The results show that high levels of relationship can lead to lower overall supply chain cost, lower risk, lower prices, higher product transactions, and therefore higher profit.
Supply chain management Social relationship Risk management Network equilibrium Pricing Multicriteria decision-making
http://www.sciencedirect.com/science/article/pii/S0377221711002815
Cruz, Jose M.
Liu, Zugang
oai:RePEc:eee:ejores:v:213:y:2011:i:3:p:498-5082011-07-14RePEc:eee:ejores
RePEc:eee:ejores:v:213:y:2011:i:3:p:498-508
article
Approximate analysis of non-stationary loss queues and networks of loss queues with general service time distributions
A Fixed Point Approximation (FPA) method has recently been suggested for non-stationary analysis of loss queues and networks of loss queues with Exponential service times. Deriving exact equations relating time-dependent mean numbers of busy servers to blocking probabilities, we generalize the FPA method to loss systems with general service time distributions. These equations are combined with associated formulae for stationary analysis of loss systems in steady state through a carried load to offered load transformation. The accuracy and speed of the generalized methods are illustrated through a wide set of examples.
Queueing Erlang loss model Time-dependent arrival rate Carried load
http://www.sciencedirect.com/science/article/pii/S0377221711002402
Izady, N.
Worthington, D.
oai:RePEc:eee:ejores:v:213:y:2011:i:2:p:405-4132011-07-14RePEc:eee:ejores
RePEc:eee:ejores:v:213:y:2011:i:2:p:405-413
article
A new approach to multi-criteria sorting based on fuzzy outranking relations: The THESEUS method
In this paper, we propose THESEUS, a new method for multi-criteria sorting problems based on fuzzy outranking relations. Compared with other outranking-based methods, THESEUS is inspired by a different view of multi-criteria classification problems. It evaluates the assignment of an object to one of a set of previously defined ordered categories by comparing every possible assignment with the information from various preference relations derived from a fuzzy outranking relation defined on the universe of objects. The appropriate assignment is determined by solving a simple selection problem. The capacity of a reference set to make appropriate assignments is related to a good characterization of the categories: a single reference action characterizing a category may be insufficient to achieve well-determined assignments. In this paper, this capacity is characterized by some new concepts, and it may be increased when more objects are added to the reference set. THESEUS is a method for handling the preference information contained in such larger reference sets.
Multiple criteria analysis Sorting Outranking methods
http://www.sciencedirect.com/science/article/pii/S0377221711002736
Fernandez, Eduardo
Navarro, Jorge
oai:RePEc:eee:ejores:v:213:y:2011:i:2:p:455-4572011-07-14RePEc:eee:ejores
RePEc:eee:ejores:v:213:y:2011:i:2:p:455-457
article
Note on "An efficient approach for solving the lot-sizing problem with time-varying storage capacities"
In a recent paper, Gutiérrez et al. [1] show that the lot-sizing problem with inventory bounds can be solved in time. In this note we show that their algorithm does not lead to an optimal solution in general.
Inventory Lot-sizing Inventory bounds
http://www.sciencedirect.com/science/article/pii/S0377221711003122
van den Heuvel, Wilco
Gutiérrez, José Miguel
Hwang, Hark-Chin
oai:RePEc:eee:ejores:v:213:y:2011:i:3:p:538-5502011-07-14RePEc:eee:ejores
RePEc:eee:ejores:v:213:y:2011:i:3:p:538-550
article
Heuristic algorithms for the cardinality constrained efficient frontier
This paper examines the application of genetic algorithm, tabu search and simulated annealing metaheuristic approaches to finding the cardinality constrained efficient frontier that arises in financial portfolio optimisation. We consider the mean-variance model of Markowitz as extended to include the discrete restrictions of buy-in thresholds and cardinality constraints. Computational results are reported for publicly available data sets drawn from seven major market indices involving up to 1318 assets. Our results are compared with previous results given in the literature illustrating the effectiveness of the proposed metaheuristics in terms of solution quality and computation time.
Efficient frontier Genetic algorithm Portfolio optimisation Simulated annealing Tabu search
http://www.sciencedirect.com/science/article/pii/S0377221711002670
Woodside-Oriakhi, M.
Lucas, C.
Beasley, J.E.
oai:RePEc:eee:ejores:v:214:y:2011:i:2:p:223-2312011-07-14RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:2:p:223-231
article
A reduced variable neighborhood search algorithm for uncapacitated multilevel lot-sizing problems
Multilevel lot-sizing (MLLS) problems, which involve complicated product structures with interdependence among the items, play an important role in the material requirement planning (MRP) systems of modern manufacturing/assembly lines. In this paper, we present a reduced variable neighborhood search (RVNS) algorithm and several implementation techniques for solving uncapacitated MLLS problems. Computational experiments are carried out on three classes of benchmark instances of different scales (small, medium, and large). Compared with the existing literature, RVNS shows good performance and robustness on a total of 176 tested instances. For the 96 small-sized instances, the RVNS algorithm finds 100% of the optimal solutions in less computational time than existing methods; for the 40 medium-sized and the 40 large-sized instances, it is competitive against other methods, combining good effectiveness with high computational efficiency. In our calculations, RVNS improved the best known solutions for 7 (17.5%) of the medium-sized instances and 16 (40%) of the large-sized instances.
Meta-heuristics Uncapacitated multilevel lot-sizing (MLLS) problem Material requirement planning (MRP) Reduced variable neighborhood search (RVNS) algorithm Production planning
http://www.sciencedirect.com/science/article/pii/S0377221711003596
Xiao, Yiyong
Kaku, Ikou
Zhao, Qiuhong
Zhang, Renqian
oai:RePEc:eee:ejores:v:214:y:2011:i:2:p:273-2832011-07-14RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:2:p:273-283
article
Strategic capability investments and competition for supply contracts
Suppliers often make proactive investments to strategically position themselves to win contracts with a large buyer. Such investments reduce the suppliers' variable costs of serving the buyer's demand. We show that an auction mechanism does not always benefit the buyer, the supply chain, or the society. We identify scenarios where the buyer can implement the supply chain and socially optimal solution by committing to a bilateral relationship with fair reimbursement, and forgoing the benefits of competition altogether. We explore the role of commitment by the buyer (to a procurement mechanism) and by the suppliers (to an investment level) by analyzing different timing games under symmetric and asymmetric information about suppliers' types. We show that it never benefits anyone for the suppliers to commit first. Equilibrium investments and cost structures depend upon the buyer's bargaining power (opportunity cost). However, the winning supplier's investments are almost always below the supply chain optimal level.
Auctions/bidding Capability investments Supply contracts Type-conscious agents First mover advantages
http://www.sciencedirect.com/science/article/pii/S0377221711004085
Li, Ying
Gupta, Sudheer
oai:RePEc:eee:ejores:v:214:y:2011:i:2:p:308-3162011-07-14RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:2:p:308-316
article
Proactive policies for the stochastic resource-constrained project scheduling problem
The resource-constrained project scheduling problem involves the determination of a schedule of the project activities, satisfying the precedence and resource constraints while minimizing the project duration. In practice, activity durations may be subject to variability. We propose a stochastic methodology for the determination of a project execution policy and a vector of predictive activity starting times with the objective of minimizing a cost function that consists of the weighted expected activity starting time deviations and the penalties or bonuses associated with late or early project completion. In a computational experiment, we show that our procedure greatly outperforms existing algorithms described in the literature.
Project scheduling Proactive scheduling Execution policies Stochastic RCPSP
http://www.sciencedirect.com/science/article/pii/S0377221711003638
Deblaere, Filip
Demeulemeester, Erik
Herroelen, Willy
oai:RePEc:eee:ejores:v:214:y:2011:i:2:p:298-3072011-07-14RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:2:p:298-307
article
Two shock and wear systems under repair standing a finite number of shocks
A shock and wear system standing a finite number of shocks and subject to two types of repairs is considered. The failure of the system can be due to wear or to a fatal shock. Associated with these failures are two repair types: normal and severe. Repairs are as good as new. The shocks arrive following a Markovian arrival process, and the lifetime of the system follows a continuous phase-type distribution. The repair times follow different continuous phase-type distributions, depending on the type of failure. Under these assumptions, two systems are studied, depending on the finite number of shocks that the system can stand before a fatal failure, which can be random or fixed. In the first case, the number of shocks is governed by a discrete phase-type distribution. After a finite (random or fixed) number of non-fatal shocks, the system undergoes a severe repair; the repair due to wear is a normal repair. For these systems, general Markov models are constructed and the following elements are studied: the stationary probability vector; the transient rate of occurrence of failures; and the renewal process associated with the repairs, including the distribution of the period between replacements and the number of non-fatal shocks in this period. Special cases of the model with a random number of shocks are presented, and an application illustrating the numerical calculations is given. The systems are studied in such a way that several particular cases can be deduced from the general ones straightaway. We apply matrix-analytic methods to these models, showing their versatility.
Shock and wear model Repair Replacement Markovian arrival process Phase-type distribution
http://www.sciencedirect.com/science/article/pii/S0377221711003602
Montoro-Cazorla, Delia
Pérez-Ocón, Rafael
oai:RePEc:eee:ejores:v:214:y:2011:i:2:p:442-4522011-07-14RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:2:p:442-452
article
A tabu search heuristic for the dynamic transportation of patients between care units
The problem studied in this paper stems from a real application: the transportation of patients in the Hospital Complex of Tours (France). The ambulance central station of the Hospital Complex has to plan the transportation demands between care units that require a vehicle. Some demands are known in advance and the others arise dynamically. Each demand requires a specific type of vehicle, and a vehicle can transport only one person at a time. Demands can be subcontracted to a private company, which implies a high cost. Moreover, transportation is subject to particular constraints, among them priority of urgent demands, disinfection of a vehicle after transporting a patient with a contagious disease, and respect of the type of vehicle needed. These characteristics require a distinction between vehicles and crews during the modeling phase. We propose a model for this difficult problem and a tabu search algorithm inspired by Gendreau et al. (1999). The method combines an adaptive memory with a tabu search procedure. Computational experiments on a real-life instance and on randomly generated instances show that the method provides high-quality solutions for this dynamic problem within a short computation time.
Transportation Real-time Health care Tabu search Vehicle routing
http://www.sciencedirect.com/science/article/pii/S0377221711003778
Kergosien, Y.
Lenté, Ch.
Piton, D.
Billaut, J.-C.
oai:RePEc:eee:ejores:v:214:y:2011:i:2:p:284-2972011-07-14RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:2:p:284-297
article
Equilibrium and optimal strategies to join a queue with partial information on service times
In this paper, we study customer equilibrium as well as socially optimal strategies for joining a queue with only partial information on the service time distribution, such as its moments and range. Based on such partial information, customers adopt the entropy-maximization principle to estimate their expected waiting cost and decide whether to join or balk. We find that more information encourages customers to join the queue. Moreover, it is beneficial for decision makers to convey partial information to customers when maximizing welfare but to reveal full information when maximizing profit.
Queueing Partial information Equilibrium Joining/balking behavior Entropy maximization
http://www.sciencedirect.com/science/article/pii/S0377221711003523
Guo, Pengfei
Sun, Wei
Wang, Yulan
oai:RePEc:eee:ejores:v:213:y:2011:i:3:p:509-5162011-07-14RePEc:eee:ejores
RePEc:eee:ejores:v:213:y:2011:i:3:p:509-516
article
Subexponential asymptotics of the stationary distributions of M/G/1-type Markov chains
This paper studies the subexponential asymptotics of the stationary distribution of an M/G/1-type Markov chain. We provide a sufficient condition for the subexponentiality of the stationary distribution. The sufficient condition requires only the subexponential integrated tail of level increments. On the other hand, the previous studies assume the subexponentiality of level increments themselves and/or the aperiodicity of the G-matrix. Therefore, our sufficient condition is weaker than the existing ones. We also mention some errors in the literature.
Queueing Subexponential asymptotics M/G/1-type Markov chain Periodicity G-matrix BMAP
http://www.sciencedirect.com/science/article/pii/S037722171100275X
Masuyama, Hiroyuki
oai:RePEc:eee:ejores:v:214:y:2011:i:2:p:232-2452011-07-14RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:2:p:232-245
article
Single firm product diffusion model for single-function and fusion products
The prosperity of multifunction products (also referred to as fusion products) has changed the landscape of the marketplace for several electronics products. To illustrate, as fusion products gain popularity in cellular phones and office machines, we observe that single-function products (e.g., stand-alone PDAs and stand-alone scanners) gradually disappear from the market as they are supplanted by fusion products. This paper presents a product diffusion model that captures the diffusion transition from two distinct single-function products into one fusion product. We investigate the optimal launch time of the fusion product under various conditions and conduct a numerical analysis to demonstrate the dynamics among the three products. As in previous multi-generation single-product diffusion models, we find that the planning horizon, the products' relative profit margins, and substitution effects are important to the launch-time decision. However, several unique factors warrant special consideration when a firm introduces a fusion product to the market: the firm's competitive role, buyer consolidation of purchases into a multifunction product, the fusion technology, and the age of current single-function products.
Manufacturing Marketing Technology management Multifunction products Fusion products Product diffusion
http://www.sciencedirect.com/science/article/pii/S0377221711003742
Chen, Yuwen
Carrillo, Janice E.
oai:RePEc:eee:ejores:v:214:y:2011:i:1:p:1-142011-07-14RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:1:p:1-14
article
Interest rate term structure modelling
This article surveys approaches to modelling the term structure of interest rates. Over the last few decades, several frameworks have been developed that are actively used in banks for the pricing and risk management of interest rate related products. This survey aims to provide an introductory overview of these modelling approaches for readers with a quantitative background who are not yet familiar with the field.
Finance Interest rate Term structure Arbitrage pricing theory
http://www.sciencedirect.com/science/article/pii/S0377221711000877
Schmidt, Wolfgang M.
oai:RePEc:eee:ejores:v:214:y:2011:i:1:p:109-1172011-07-14RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:1:p:109-117
article
Capital renewal as a real option
We consider the timing of replacement of obsolete subsystems within an extensive, complex infrastructure. Such replacement action, known as capital renewal, must balance uncertainty about future profitability against uncertainty about future renewal costs. Treating renewal investments as real options, we derive an optimal solution to the infinite horizon version of this problem and determine the total present value of an institution's capital renewal options. We investigate the sensitivity of the infinite horizon solution to variations in key problem parameters and highlight the system scenarios in which timely renewal activity is most profitable. For finite horizon renewal planning, we show that our solution performs better than a policy of constant periodic renewals if more than two renewal cycles are completed.
Investment analysis Maintenance Replacement Facilities planning and design Simulation
http://www.sciencedirect.com/science/article/pii/S0377221711002797
Reindorp, Matthew J.
Fu, Michael C.
oai:RePEc:eee:ejores:v:214:y:2011:i:1:p:168-1782011-07-14RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:1:p:168-178
article
An optimization approach to gene stacking
We present a multi-objective integer programming model for the gene stacking problem, which is to bring desirable alleles found in multiple inbred lines into a single target genotype. Pareto optimal solutions from the model provide strategic stacking schemes that maximize the likelihood of successfully creating the target genotypes and minimize the number of generations associated with a stacking strategy. A consideration of genetic diversity is also incorporated in the models to preserve all desirable allelic variants in the target population. Although the gene stacking problem is proved to be NP-hard, in our computational experiments we were able to obtain Pareto frontiers for smaller instances within one minute using state-of-the-art commercial solvers.
Gene stacking Multi-objective optimization Pareto frontier Integer programming
http://www.sciencedirect.com/science/article/pii/S0377221711003559
Xu, Pan
Wang, Lizhi
Beavis, William D.
oai:RePEc:eee:ejores:v:214:y:2011:i:2:p:199-2152011-07-14RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:2:p:199-215
article
A convex optimisation framework for the unequal-areas facility layout problem
The unequal-areas facility layout problem is concerned with finding the optimal arrangement of a given number of non-overlapping indivisible departments with unequal area requirements within a facility. We present a convex-optimisation-based framework for efficiently finding competitive solutions for this problem. The framework is based on the combination of two mathematical programming models. The first model is a convex relaxation of the layout problem that establishes the relative position of the departments within the facility, and the second model uses semidefinite optimisation to determine the final layout. Aspect ratio constraints, frequently used in facility layout methods to restrict the occurrence of overly long and narrow departments in the computed layouts, are taken into account by both models. We present computational results showing that the proposed framework consistently produces competitive, and often improved, layouts for well-known large instances when compared with other approaches in the literature.
Facility layout Semidefinite programming Convex programming Global optimisation
http://www.sciencedirect.com/science/article/pii/S0377221711003560
Jankovits, Ibolya
Luo, Chaomin
Anjos, Miguel F.
Vannelli, Anthony
oai:RePEc:eee:ejores:v:213:y:2011:i:2:p:361-3742011-07-14RePEc:eee:ejores
RePEc:eee:ejores:v:213:y:2011:i:2:p:361-374
article
The newsvendor problem: Review and directions for future research
In this paper, we review the contributions to date for analyzing the newsvendor problem. Our focus is on examining the specific extensions for analyzing this problem in the context of modeling customer demand, supplier costs, and the buyer risk profile. More specifically, we analyze the impact of market price, marketing effort, and stocking quantity on customer demand; how supplier prices can serve as a coordination mechanism in a supply chain setting; integrating alternative supplier pricing policies within the newsvendor framework; and how the buyer's risk profile moderates the newsvendor order quantity decision. For each of these areas, we summarize the current literature and develop extensions. Finally, we also propose directions for future research.
Inventory Logistics Supply chain management
http://www.sciencedirect.com/science/article/pii/S0377221710008040
Qin, Yan
Wang, Ruoxuan
Vakharia, Asoo J.
Chen, Yuwen
Seref, Michelle M.H.
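The review above surveys extensions of the classic newsvendor model. As a minimal illustrative sketch (not from the article), the risk-neutral base case with normally distributed demand reduces to the critical-fractile rule, which balances the underage cost against the overage cost:

```python
from statistics import NormalDist

def newsvendor_quantity(price, cost, salvage, mu, sigma):
    """Classic risk-neutral newsvendor order quantity for N(mu, sigma) demand.

    The optimal quantity is the demand quantile at the critical fractile
    (price - cost) / (price - salvage), i.e. underage / (underage + overage).
    """
    ratio = (price - cost) / (price - salvage)
    return mu + sigma * NormalDist().inv_cdf(ratio)

# Example with a critical fractile of 0.5, so q* equals mean demand.
q = newsvendor_quantity(price=10.0, cost=6.0, salvage=2.0, mu=100.0, sigma=20.0)
```

The extensions reviewed in the article (price-dependent demand, supplier pricing, risk aversion) modify either the demand distribution or the cost terms entering this fractile.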
oai:RePEc:eee:ejores:v:214:y:2011:i:2:p:256-2612011-07-14RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:2:p:256-261
article
A study of repairable parts inventory system operating under performance-based contract
Performance-Based Logistics (PBL) is becoming a dominant logistics support strategy, especially in the defense industry. PBL contracts are designed to serve the customer's key performance measures, while traditional contracts for after-sales services, such as Fixed-price (FP) and Cost-plus (C+), only provide insurance or incentives. In this research, we develop an inventory model for a repairable parts system operating under a PBL contract. We model the closed-loop inventory system as an M/M/m queue in which component failures are Poisson distributed and the repair times at the service facility are exponential. Our model provides the supplier and the customer increased flexibility in achieving target availability. Analysis of key parameters suggests that, to improve the availability of a system with repairable spare parts, the supplier should work to improve component reliability and the efficiency of the repair facility, rather than the base stock level, which has minimal impact on system availability.
Logistics Base stock Component reliability Repair facility
http://www.sciencedirect.com/science/article/pii/S0377221711003791
Mirzahosseinian, H.
Piplani, R.
oai:RePEc:eee:ejores:v:213:y:2011:i:2:p:388-3942011-07-14RePEc:eee:ejores
RePEc:eee:ejores:v:213:y:2011:i:2:p:388-394
article
Single row facility layout problem using a permutation-based genetic algorithm
In this paper, a permutation-based genetic algorithm (GA) is applied to the NP-hard problem of arranging a number of facilities on a line with minimum cost, known as the single row facility layout problem (SRFLP). The GA individuals are obtained by using rule-based as well as random permutations of the facilities, which are then improved towards the optimum by means of specially designed crossover and mutation operators. These schemes allow the GA to handle the SRFLP as an unconstrained optimization problem. In computational experiments carried out with large instances of sizes from 60 to 80, available in the literature, the proposed GA improved several previously known best solutions.
Single row facility layout problem Genetic algorithm Combinatorial optimization
http://www.sciencedirect.com/science/article/pii/S0377221711002712
Datta, Dilip
Amaral, André R.S.
Figueira, José Rui
oai:RePEc:eee:ejores:v:214:y:2011:i:2:p:428-4412011-07-14RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:2:p:428-441
article
An optimization framework for solving capacitated multi-level lot-sizing problems with backlogging
This paper proposes two new mixed integer programming models for capacitated multi-level lot-sizing problems with backlogging, whose linear programming relaxations provide good lower bounds on the optimal solution value. We show that both of these strong formulations yield the same lower bounds. In addition to these theoretical results, we propose a new, effective optimization framework that achieves high quality solutions in reasonable computational time. Computational results show that the proposed optimization framework is superior to other well-known approaches on several important performance dimensions.
Capacitated Multi-level Lot-sizing Backlogging Lower and upper bound guided nested partitions
http://www.sciencedirect.com/science/article/pii/S0377221711003730
Wu, Tao
Shi, Leyuan
Geunes, Joseph
Akartunalı, Kerem
oai:RePEc:eee:ejores:v:213:y:2011:i:3:p:459-4692011-07-14RePEc:eee:ejores
RePEc:eee:ejores:v:213:y:2011:i:3:p:459-469
article
Experiments on forecasting behavior with several sources of information - A review of the literature
Decision makers frequently have to forecast the future values of a time series (e.g. the price of a commodity, sales figures) given several sources of information (e.g. leading indicators, forecasts of advisors). As a subdomain of decision theory, the explanation and improvement of human forecasting behavior are interdisciplinary issues and have been subject to extensive empirical field and laboratory research. We here review the relevant experimental literature, demonstrate the significance of these results for decision science in general, and summarize the implications for practical forecasting applications.
Experimental economics Heuristics Expectation formation Judgmental forecasting Adjustment of forecasts Revision of forecasts
http://www.sciencedirect.com/science/article/pii/S0377221711000099
Leitner, Johannes
Leopold-Wildburger, Ulrike
oai:RePEc:eee:ejores:v:214:y:2011:i:2:p:246-2552011-07-14RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:2:p:246-255
article
Marketing-driven channel coordination with revenue-sharing contracts under price promotion to end-customers
This paper explores the equilibrium behavior of a basic supplier-retailer distribution channel with and without revenue-sharing contracts under price promotion to end-customers. Three types of promotional demand patterns characterized by different features of dynamic price sensitivity are considered to rationalize price promotional effects on end-customer demands. Under such a retail price promotion scheme, this work develops a basic model to investigate decentralized channel members' equilibrium decisions in pricing and logistics operations using a two-stage Stackelberg game approach. Extending the basic model, this work further derives the equilibrium solutions of the dyadic members under channel coordination with revenue-sharing contracts. Analytical results show that under certain conditions both the supplier and retailer can gain more profits through revenue-sharing contracts by means of appropriate promotional pricing strategies. Moreover, the supplier should provide additional economic incentives to the retailer. Furthermore, a counter-profit revenue-sharing chain effect is found in the illustrative examples. This phenomenon implies that the more the retailer requests to share from a unit of sale, the more it may lose under the revenue-sharing supply chain coordination scheme.
Supply chain management Channel coordination Promotional effect Revenue sharing
http://www.sciencedirect.com/science/article/pii/S0377221711003754
Sheu, Jiuh-Biing
oai:RePEc:eee:ejores:v:213:y:2011:i:2:p:430-4412011-07-14RePEc:eee:ejores
RePEc:eee:ejores:v:213:y:2011:i:2:p:430-441
article
A fuzzy goal programming approach for mid-term assortment planning in supermarkets
We develop a fuzzy mixed integer non-linear goal programming model for the mid-term assortment planning of supermarkets in which three conflicting objectives, namely profitability, customer service, and space utilization, are incorporated. The items and brands in a supermarket compete to obtain more space and better shelf level. This model offers different service levels to loyal and disloyal customers, applies a joint replenishment policy, and accounts for the holding time limitation of perishable items. We propose a fuzzy approach due to the imprecise nature of the goals' target levels and priorities as well as critical data. A heuristic method inspired by problem-specific rules is developed to solve this complex model approximately within a reasonable time. Finally, the proposed approach is validated through several numerical examples and results are reported.
Assortment planning Retailing Fuzzy goal programming
http://www.sciencedirect.com/science/article/pii/S0377221711003067
Lotfi, M.M.
Torabi, S.A.
oai:RePEc:eee:ejores:v:214:y:2011:i:2:p:216-2222011-07-14RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:2:p:216-222
article
The performance evaluation of a multi-stage JIT production system with stochastic demand and production capacities
This paper discusses a single-item, multi-stage, serial Just-in-Time (JIT) production system with stochastic demand and production capacities. The JIT production system is modeled as a discrete-time, M/G/1-type Markov chain. A necessary and sufficient condition, or a stability condition, under which the system has a steady-state distribution is derived. A performance evaluation algorithm is then developed using the matrix analytic methods. In numerical examples, the optimal numbers of kanbans are determined by the proposed algorithm. The optimal numbers of kanbans are robust to variations in the production capacity distribution and the demand distribution.
Production Multi-stage JIT production system M/G/1-type Markov chain Stability condition Matrix analytic methods Numerical results
http://www.sciencedirect.com/science/article/pii/S0377221711003584
Iwase, Masaharu
Ohno, Katsuhisa
oai:RePEc:eee:ejores:v:214:y:2011:i:1:p:78-842011-07-14RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:1:p:78-84
article
A multi-product risk-averse newsvendor with exponential utility function
We consider a multi-product newsvendor using an exponential utility function. We first establish a few basic properties for the newsvendor regarding the convexity of the model and monotonicity of the impact of risk aversion on the solution. When the product demands are independent and the ratio of the degree of risk aversion to the number of products is sufficiently small, we obtain closed-form approximations of the optimal order quantities. The approximations are as easy to compute as the risk-neutral solution. We prove that when this ratio approaches zero, the risk-averse solution converges to the corresponding risk-neutral solution. When the product demands are positively (negatively) correlated, we show that risk aversion leads to lower (higher) optimal order quantities than the solution with independent demands. Using a numerical study, we examine convergence rates of the approximations and thoroughly study the interplay of demand correlation and risk aversion. The numerical study confirms our analytical results and further shows that an increased risk aversion does not always lead to lower order quantities, when demands are strongly negatively correlated.
Supply chain management Risk analysis Expected utility theory
http://www.sciencedirect.com/science/article/pii/S0377221711003183
Choi, Sungyong
Ruszczynski, Andrzej
oai:RePEc:eee:ejores:v:213:y:2011:i:3:p:551-5582011-07-14RePEc:eee:ejores
RePEc:eee:ejores:v:213:y:2011:i:3:p:551-558
article
A trading mechanism contingent on several indices
We introduce a trading mechanism where the execution of an order on a security can be made contingent on the relation between the clearing price of the security and the clearing price of one or several indices. A mechanism similar to ours, but limited to only one index, was implemented on the Tel Aviv Stock Exchange. We argue that it is in some cases crucial to make the execution of an order contingent on several indices. Our mechanism consists of a particular implementation of a double-sided multi-unit combinatorial auction with substitutes (or DMCS auction), which we introduced in an earlier article.
Trading systems Limit orders Market microstructure
http://www.sciencedirect.com/science/article/pii/S0377221711002682
Schellhorn, Henry
oai:RePEc:eee:ejores:v:214:y:2011:i:2:p:365-3792011-07-14RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:2:p:365-379
article
Credit risk model with contagious default dependencies affected by macro-economic condition
We consider a credit risk model with two industrial sectors, where defaults of corporations would be influenced by two factors. The first factor represents the macro economic condition which would affect the default intensities of the two industrial sectors differently. The second factor reflects the influences of the past defaults of corporations against other active corporations, where such influences would affect the two industrial sectors differently. A two-layer Markov chain model is developed, where the macro economic condition is described as a birth-death process, while another Markov chain represents the stochastic characteristics of defaults with default intensities dependent on the state of the birth-death process and the number of defaults in two sectors. Although the state space of the two-layer Markov chain is huge, the fundamental absorbing process with a reasonable state space size could capture the first passage time structure of the two-layer Markov chain, thereby enabling one to evaluate the joint probability of the number of defaults in two sectors via the uniformization procedure of Keilson. This in turn enables one to value a variety of derivatives defined on the underlying credit portfolios. In this paper, we focus on a financial product called CDO, and a related option.
Finance Pricing Risk analysis Systems dynamics
http://www.sciencedirect.com/science/article/pii/S0377221711003857
Takada, Hideyuki
Sumita, Ushio
oai:RePEc:eee:ejores:v:214:y:2011:i:2:p:453-4562011-07-14RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:2:p:453-456
article
Review of "Experimental Methods for the Analysis of Optimization Algorithms", Thomas Bartz-Beielstein, Marco Chiarandini, Luís Paquete, Mike Preuss. Springer, 2010.
http://www.sciencedirect.com/science/article/pii/S0377221711004425
Ruiz, Rubén
oai:RePEc:eee:ejores:v:213:y:2011:i:2:p:414-4212011-07-14RePEc:eee:ejores
RePEc:eee:ejores:v:213:y:2011:i:2:p:414-421
article
Stochastic convergence of random search methods to fixed size Pareto front approximations
In this paper we investigate to what extent random search methods, equipped with an archive of bounded size to store a limited amount of solutions and other data, are able to obtain good Pareto front approximations. We propose and analyze two archiving schemes that allow for maintaining a sequence of solution sets of given cardinality that converge with probability one to an ε-Pareto set of a certain quality, under very mild assumptions on the process used to sample new solutions. The first algorithm uses a hierarchical grid to define a family of approximate dominance relations to compare solutions and solution sets. Acceptance of a new solution is based on a potential function that counts the number of occupied boxes (on various levels) and thus maintains strictly monotonic progress to a limit set that covers the Pareto front with non-overlapping boxes at the finest resolution possible. The second algorithm uses an adaptation scheme to modify the current value of ε based on the information gathered during the run. This way it is possible to achieve convergence to the best (smallest) ε value, and to a corresponding solution set of k solutions that ε-dominate all other solutions, which is probably the best possible result regarding the limit behavior of random search methods or metaheuristics for obtaining Pareto front approximations.
Multiple criteria analysis Multiobjective optimization Metaheuristics Search theory
http://www.sciencedirect.com/science/article/pii/S0377221711002761
Laumanns, Marco
Zenklusen, Rico
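The archiving record above relies on a hierarchical grid and ε-dominance comparisons. As an illustrative sketch (names and resolution values are my own, not from the article), the two core primitives can be written as:

```python
def box_index(point, eps):
    """Map a point to its grid box at resolution eps (minimization setting).

    Points sharing a box are considered equivalent at this resolution;
    coarser levels of the hierarchical grid use larger eps values.
    """
    return tuple(int(v // eps) for v in point)

def eps_dominates(a, b, eps):
    """True if a epsilon-dominates b: a is at most eps worse in every objective."""
    return all(x <= y + eps for x, y in zip(a, b))

# Two points falling in distinct boxes at resolution 0.25:
b1 = box_index((0.5, 1.0), 0.25)
b2 = box_index((1.0, 0.5), 0.25)
```

An archiver of the kind analyzed in the paper would accept a new solution only if it occupies a box not already ε-dominated by the archive, which bounds the archive size while guaranteeing the coverage property.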
oai:RePEc:eee:ejores:v:213:y:2011:i:2:p:422-4292011-07-14RePEc:eee:ejores
RePEc:eee:ejores:v:213:y:2011:i:2:p:422-429
article
Incentive based energy market design
Current energy market designs and pricing schemes fail to give investors the appropriate market signals. In particular, energy prices are not high enough to attract investors to build new or maintain existing power capacity. In this paper we propose a method to compute second-best Pareto optimal equilibrium prices for any market exhibiting non-convexities and, based on this result, an energy market design able to restore the correct energy price signals for supply investors.
Economics Energy market design Equilibria
http://www.sciencedirect.com/science/article/pii/S037722171100213X
Muratore, Gabriella
oai:RePEc:eee:ejores:v:214:y:2011:i:2:p:317-3302011-07-14RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:2:p:317-330
article
A cooperative location game based on the 1-center location problem
In this paper we introduce and analyze new classes of cooperative games related to facility location models defined on general metric spaces. The players are the customers (demand points) in the location problem and the characteristic value of a coalition is the cost of serving its members. Specifically, the cost in our games is the service radius of the coalition. We call these games the Minimum Radius Location Games (MRLG). We study the existence of core allocations and the existence of polynomial representations of the cores of these games, focusing on network spaces, i.e., finite metric spaces induced by undirected graphs and positive edge lengths, and on the ℓp metric spaces defined over .
Cooperative combinatorial games Core solutions Radius Diameter
http://www.sciencedirect.com/science/article/pii/S037722171100364X
Puerto, Justo
Tamir, Arie
Perea, Federico
oai:RePEc:eee:ejores:v:213:y:2011:i:3:p:517-5252011-07-14RePEc:eee:ejores
RePEc:eee:ejores:v:213:y:2011:i:3:p:517-525
article
Group and individual heterogeneity in a stochastic frontier model: Container terminal operators
Container ports are a major component of international trade and the global supply chain. Hence, the improvement of port efficiency can have a significant impact on the wider maritime economy. This paper deconstructs a representation in the existing literature that neglects the heterogeneity of individual and group-specific terminal operators. In its place, we present a hierarchical model to make a connection between efficiency and terminal operator group characteristics. The paper develops a stochastic frontier model that controls for not only individual heterogeneity but also group-specific variations. The model decomposes the total stochastic deviation from the frontier into inefficiency, individual heterogeneity, group-specific variations, and noise components, with the estimation being performed using Markov chain Monte Carlo simulations. The validity of the model is tested with a panel of container terminal operator data from 1997-2004. Our findings show that terminal operator groups are important in promoting terminal efficiency at the global level, and that operators with stevedore backgrounds show a higher efficiency than carriers.
Stochastic processes Stochastic production frontier Markov processes Container terminal operators Port globalisation Group-specific
http://www.sciencedirect.com/science/article/pii/S0377221711002773
Yip, Tsz Leung
Sun, Xin Yu
Liu, John J.
oai:RePEc:eee:ejores:v:213:y:2011:i:3:p:470-4772011-07-14RePEc:eee:ejores
RePEc:eee:ejores:v:213:y:2011:i:3:p:470-477
article
Branch and price for the vehicle routing problem with discrete split deliveries and time windows
The Discrete Split Delivery Vehicle Routing Problem with Time Windows (DSDVRPTW) consists of designing the optimal set of routes to serve, at minimum cost, a given set of customers while respecting constraints on vehicle capacity and customer time windows. Each customer can be visited by more than one vehicle since each customer's demand, discretized in items, can be split into orders, i.e., feasible combinations of items. In this work, we model the DSDVRPTW assuming that all feasible orders are known in advance. Remarkably, service time at a customer's location depends on the delivered combination of items, a modeling feature rarely found in the literature. We present a flow-based mixed integer program for the DSDVRPTW, reformulate it via Dantzig-Wolfe, and apply column generation. The proposed branch-and-price algorithm largely outperforms a commercial solver, as shown by computational experiments on Solomon-based instances. A comparison in terms of complexity between constant service time and delivery-dependent service time is presented and potential savings are discussed.
Vehicle routing Split delivery Column generation Dantzig-Wolfe decomposition Branch and price
http://www.sciencedirect.com/science/article/pii/S0377221711002347
Salani, Matteo
Vacca, Ilaria
oai:RePEc:eee:ejores:v:214:y:2011:i:2:p:403-4102011-07-14RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:2:p:403-410
article
Quality investment and price decision in a risk-averse supply chain
In this paper, we investigate quality investment and price decisions of a make-to-order (MTO) supply chain with uncertain demand in international trade. Due to the volatility of orders from buyers, the supplier and the manufacturer in the supply chain are subject to financial risk. In contrast to the general assumption that players in a supply chain are risk neutral in quality investment and price decisions, we consider the risk-averse behavior of the players in three different supply chain strategies: Vertical Integration (VI), Manufacturer's Stackelberg (MS) and Supplier's Stackelberg (SS). The study shows that both supply chain strategy and risk-averse behavior have significant impacts on quality investment and pricing. Compared to a risk-neutral supply chain, a risk-averse supply chain has lower, equal, and higher product quality in VI, MS and SS, respectively. Also, we derive the conditions under which each supply chain strategy is implemented in a decentralized setting. A numerical study is used to illustrate some related issues.
Quality investment Supply chain strategy Preference theory Make-to-order Risk tolerance
http://www.sciencedirect.com/science/article/pii/S0377221711003808
Xie, Gang
Yue, Wuyi
Wang, Shouyang
Lai, Kin Keung
oai:RePEc:eee:ejores:v:214:y:2011:i:2:p:179-1982011-07-14RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:2:p:179-198
article
A survey of deterministic models for the EOQ and EPQ with partial backordering
Models for the basic deterministic EOQ or EPQ problem with partial backordering or backlogging make all the assumptions of the classic EOQ or EPQ model with full backordering except that only a fraction of the demand during the stockout period is backordered. In this survey we review deterministic models that have been developed over the past 40 years that address the basic models and extensions that add other considerations, such as pricing, perishable or deteriorating inventory, time-varying or stock-dependent demand, quantity discounts, or multiple warehouses.
Inventory Partial backordering Lot sizing
http://www.sciencedirect.com/science/article/pii/S0377221711001020
Pentico, David W.
Drake, Matthew J.
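The survey above builds on the classic EOQ model. As a minimal sketch of the full-backordering baseline the surveyed extensions relax (parameter values are illustrative, not from the article), the optimal lot size is:

```python
import math

def eoq(demand_rate, order_cost, holding_cost):
    """Classic EOQ lot size: Q* = sqrt(2 * D * K / h).

    D: demand per period, K: fixed cost per order, h: holding cost
    per unit per period. Partial-backordering models add a backorder
    cost and a fill fraction on top of these parameters.
    """
    return math.sqrt(2.0 * demand_rate * order_cost / holding_cost)

q = eoq(demand_rate=1200.0, order_cost=50.0, holding_cost=6.0)
```

The partial-backordering variants reviewed in the survey modify this square-root formula by splitting each cycle into an in-stock phase and a stockout phase in which only a fraction of demand is backordered.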
oai:RePEc:eee:ejores:v:214:y:2011:i:2:p:262-2722011-07-14RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:2:p:262-272
article
The stochastic transportation problem with single sourcing
We propose a branch-and-price algorithm for solving a class of stochastic transportation problems with single-sourcing constraints. Our approach allows for general demand distributions, nonlinear cost structures, and capacity expansion opportunities. The pricing problem is a knapsack problem with variable item sizes and concave costs that is interesting in its own right. We perform an extensive set of computational experiments illustrating the efficacy of our approach. In addition, we study the cost of the single-sourcing constraints.
Transportation problem Random demands Nonlinear costs
http://www.sciencedirect.com/science/article/pii/S0377221711003845
Edwin Romeijn, H.
Zeynep Sargut, F.
oai:RePEc:eee:ejores:v:214:y:2011:i:2:p:411-4172011-07-14RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:2:p:411-417
article
Semi-Lagrangean approach for price discovery in markets with non-convexities
From standard economic theory, the market clearing price for a commodity is set where the demand and supply curves intersect. Convexity is a property that economic models require for a competitive equilibrium that is efficient and well-behaved and provides equilibrium prices. However, some markets present non-convexities due to their cost structure or to operational constraints that need to be addressed. This is the case for electricity markets, where producers incur costs for shutting down a generating unit and then bringing it back on. Non-convex cost structures can be a challenge for the price discovery process, since the supply and demand curves may not intersect, or, if they do intersect, the price found may not be high enough to cover the total cost of production. We apply a semi-Lagrangean approach to find a price that can be applied in electricity pool markets where a central system operator decides who produces and how much they should produce. Applying the model to an example from the literature, we found prices that are high enough to cover the producers' total costs and that follow the optimal solution, achieving minimum production cost. These prices are an alternative solution to the price discovery problem in non-convex economies; in addition, they provide nonnegative profits to all the generators without the use of side-payments or up-lifts, and they close the integrality gap.
OR in energy Mathematical programming Non-convexities Lagrangean relaxation
http://www.sciencedirect.com/science/article/pii/S0377221711004140
Araoz, Veronica
Jörnsten, Kurt
oai:RePEc:eee:ejores:v:214:y:2011:i:2:p:348-3572011-07-14RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:2:p:348-357
article
From deterministic to stochastic surrender risk models: Impact of correlation crises on economic capital
In this paper we raise the matter of considering a stochastic model of the surrender rate instead of the classical S-shaped deterministic curve (as a function of the spread), still used in almost all insurance companies. For extreme scenarios, due to the lack of data, it could be tempting to assume that surrenders are conditionally independent with respect to an S-curve disturbance. However, we explain why this conditional independence between policyholders' decisions, which has the advantage of being the simplest assumption, looks particularly maladaptive when the spread increases. Indeed, the correlation between policyholders' decisions is most likely to increase in this situation. We suggest and develop a simple model that integrates these phenomena. With stochastic orders it is possible to compare it to the conditional independence approach qualitatively. In a partially internal Solvency II model, we quantify the impact of the correlation phenomenon on a real life portfolio for a global risk management strategy.
Risk management Applied probability Life insurance Surrender risk Correlation risk
http://www.sciencedirect.com/science/article/pii/S0377221711003821
Loisel, Stéphane
Milhaud, Xavier
oai:RePEc:eee:ejores:v:204:y:2010:i:2:p:285-2932010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:204:y:2010:i:2:p:285-293
article
Compromising prioritization from pairwise comparisons considering type I and II errors
We explore an important problem in prioritizing product design alternatives, using a real-world case. Despite the importance of prioritization in the area of new product development, the development of systematic schemes has been limited and the concepts and methods developed in the decision analysis area do not seem to be used actively. Therefore, we propose a new method, referred to as the compromising prioritization technique, to prioritize the product design alternatives based on paired comparisons. It introduces type I and type II errors and compromises these two errors to arrive at a desirable order of alternatives. To accomplish this, the two indices of homogeneity and separation are developed together with a heuristic algorithm. A comparative study is also conducted to support our method for use in product development and analogous areas. We then demonstrate how to use the developed compromising prioritization technique using a case study on the asymmetric digital subscriber line (ADSL)-based high-speed internet service product.
Prioritization Type I and II errors Pairwise comparisons Product design and development
http://www.sciencedirect.com/science/article/B6VCT-4XH0MNT-3/2/3689bf09b287c4f42c49508224705010
Kim, Deok-Hwan
Kim, Kwang-Jae
Sam Park, K.
oai:RePEc:eee:ejores:v:204:y:2010:i:3:p:377-3902010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:204:y:2010:i:3:p:377-390
article
Mathematical programming models for supply chain production and transport planning
This paper presents a review of mathematical programming models for supply chain production and transport planning. The purpose of this review is to identify current and future research in this field and to propose a taxonomy framework based on the following elements: supply chain structure, decision level, modeling approach, purpose, shared information, limitations, novelty and application. The research objective is to provide readers with a starting point for mathematical modeling problems in supply chain production and transport planning aimed at production management researchers.
Mathematical programming Supply chain Production planning Transport planning
http://www.sciencedirect.com/science/article/B6VCT-4X723X8-2/2/7574a2402d1a43acd59fa72d8fad0ff4
Mula, Josefa
Peidro, David
Díaz-Madroñero, Manuel
Vicens, Eduardo
oai:RePEc:eee:ejores:v:205:y:2010:i:2:p:237-2462010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:205:y:2010:i:2:p:237-246
article
Architecture of manufacturing scheduling systems: Literature review and an integrated proposal
This paper deals with the development of customised and realistic manufacturing scheduling systems. More specifically, we focus on a key element that may help drive their efficient design and implementation: the set of building blocks that a generic scheduling system should include, and their interconnections, a set collectively known as the architecture of the system. To do so, we first analyse existing contributions on the topic together with papers describing different functional requirements of scheduling systems. These contributions are then discussed and classified, and a modular architecture for manufacturing scheduling systems is proposed. This proposal updates, extends and refines the well-known architecture proposed earlier by Pinedo and Yen [Pinedo, M.L., Yen, B.P.-C., 1997. On the design and development of object-oriented scheduling systems. Annals of Operations Research 70 (1), 359-378], and serves to integrate the different requirements identified in the literature review.
Scheduling systems Architecture Functional requirements
http://www.sciencedirect.com/science/article/B6VCT-4X9FGRY-3/2/cc7b163c624e9c782695b4ae8146b650
Framinan, Jose M.
Ruiz, Rubén
oai:RePEc:eee:ejores:v:204:y:2010:i:2:p:343-3542010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:204:y:2010:i:2:p:343-354
article
A relaxed cutting plane algorithm for solving the Vasicek-type forward interest rate model
This work considers the solution of the Vasicek-type forward interest rate model. A deterministic process is adopted to model the random behavior of interest rate variation as a deterministic perturbation. It shows that the solution of the Vasicek-type forward interest rate model can be obtained by solving a nonlinear semi-infinite programming problem. A relaxed cutting plane algorithm is then proposed for solving the resulting optimization problem. The features of the proposed method are tested using a set of real data and compared with some commonly used spline fitting methods.
Semi-infinite programming Forward interest rate
http://www.sciencedirect.com/science/article/B6VCT-4X7YNGB-3/2/63152e6be0ba88c99fe6685d9fcee7a1
Chen, Homing
Hu, Cheng-Feng
oai:RePEc:eee:ejores:v:205:y:2010:i:2:p:346-3602010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:205:y:2010:i:2:p:346-360
article
A water flow-like algorithm for manufacturing cell formation problems
Available research on the manufacturing cell formation problem shows that most solution approaches are either single- or multiple-solution-agent-based, with a fixed number of solution agents. Frequent problems encountered in solving the cell formation problem include solutions being easily trapped in local optima and poor solution efficiency. Yang and Wang [Yang, F.-C., Wang, Y.-P., 2007. Water flow-like algorithm for object grouping problems. Journal of the Chinese Institute of Industrial Engineers, 24 (6), 475-488] proposed the water flow-like algorithm (WFA) to overcome the shortcomings of single- and multiple-solution-agent-based algorithms. The WFA features multiple and dynamically varying numbers of solution agents, and its mimicking of the natural behavior of water flowing from higher to lower levels coincides exactly with the process of searching for optimal solutions. This paper therefore adopts the WFA logic and designs a heuristic algorithm for solving the cell formation problem. Computational results obtained from running a set of 37 test instances, taken from the literature and newly created, show that the proposed algorithm performs better than other benchmark approaches in both solution effectiveness and efficiency, especially on large-sized problems. The superiority of the proposed WFACF over other approaches from the literature is attributed to the collaboration of the WFA logic, the proposed prior estimation of the cell size, and the insertion move. The WFA is a novel heuristic approach that deserves more attention, and further attempts at adopting the WFA logic to solve other combinatorial optimization problems are highly recommended.
Manufacturing Heuristics Cell formation Meta-heuristics
http://www.sciencedirect.com/science/article/B6VCT-4Y65SCG-1/2/9844f4ed88b3885d55b65c3ad44665b2
Wu, Tai-Hsi
Chung, Shu-Hsing
Chang, Chin-Chih
oai:RePEc:eee:ejores:v:204:y:2010:i:2:p:303-3152010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:204:y:2010:i:2:p:303-315
article
Choquet-based optimisation in multiobjective shortest path and spanning tree problems
This paper is devoted to the search for Choquet-optimal solutions in finite graph problems with multiple objectives. The Choquet integral is one of the most sophisticated preference models used in decision theory for aggregating preferences on multiple objectives. We first present a condition on preferences (named hereafter preference for interior points) that characterizes preferences favouring compromise solutions, a natural attitude in various contexts such as multicriteria optimisation, robust optimisation and optimisation with multiple agents. Within Choquet expected utility theory, this condition amounts to using a submodular capacity and a convex utility function. Under these assumptions, we focus on the fast determination of Choquet-optimal paths and spanning trees. After investigating the complexity of these problems, we introduce a lower bound for the Choquet integral, computable in polynomial time. We then propose different algorithms using this bound, based either on a controlled enumeration of solutions (ranking approach) or on an implicit enumeration scheme (branch and bound). Finally, we provide numerical experiments that show the actual efficiency of the algorithms on multiple instances of different sizes.
Multiobjective discrete optimisation Choquet integral Shortest path problem Minimum spanning tree problem Submodular capacity
http://www.sciencedirect.com/science/article/B6VCT-4XKHD9G-1/2/99887d2e85abe236a890cacf4666cdb2
Galand, Lucie
Perny, Patrice
Spanjaard, Olivier
oai:RePEc:eee:ejores:v:205:y:2010:i:2:p:290-3002010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:205:y:2010:i:2:p:290-300
article
An electromagnetism metaheuristic for the unicost set covering problem
In this paper we propose a new heuristic algorithm to solve the unicost version of the well-known set covering problem. The method is based on the electromagnetism metaheuristic approach which, after generating a pool of solutions to create the initial population, applies a fixed number of local search and movement iterations based on the "electromagnetism" theory. In addition to some random aspects used in the construction and local search phases, we also apply mutation in order to further escape from local optima. The proposed algorithm has been tested on 80 instances from the literature. On the classical benchmark instances, where the number of columns is larger than the number of rows, the algorithm, using a fixed set of parameters, always found the best known solution, and for 12 instances it was able to improve the current best solution. Using different parameter settings, the algorithm improved 4 additional best known solutions. Moreover, we demonstrate the effectiveness of the electromagnetism metaheuristic approach for the unicost set covering problem by embedding the procedures of the proposed algorithm in a genetic algorithm scheme. The worse results obtained by the genetic algorithm show the impact of the electromagnetism metaheuristic approach in guiding the search of the solution space through movements based on the electromagnetism theory. Finally, we report the results obtained by modifying the proposed electromagnetism metaheuristic algorithm to solve the non-unicost set covering problem.
Combinatorial optimization Unicost set covering problem Electromagnetism metaheuristic
http://www.sciencedirect.com/science/article/B6VCT-4Y7MM1T-3/2/4444d35b7e6a7e5e119588efdf1435c7
Naji-Azimi, Zahra
Toth, Paolo
Galli, Laura
oai:RePEc:eee:ejores:v:205:y:2010:i:1:p:47-582010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:205:y:2010:i:1:p:47-58
article
Integrated inventory models considering the two-level trade credit policy and a price-negotiation scheme
This paper develops integrated inventory models with permissible delay in payment, in which customers' demand is sensitive to the buyer's price. The models consider the two-level trade credit policy in the vendor-buyer and buyer-customer relationships in supply chain management. A simple recursive solution procedure is proposed for the integrated models to determine the buyer's optimal pricing and production/order strategy. Although the total profit of the buyer and vendor together increases, the buyer's share lessens. To compensate for the buyer's loss due to the cooperative relationship, a negotiation system is presented in order to allocate the profit increase between the vendor and buyer and to determine the pricing and production/order strategy. A numerical example and sensitivity analysis are provided to illustrate the proposed model. The results indicate that the total profit of the buyer and vendor together can increase, even though a price discount is given to the buyer in the proposed models.
Inventory Integrated models Price-negotiation system Price-sensitive demand Optimization
http://www.sciencedirect.com/science/article/B6VCT-4XVBP61-4/2/279d6ea704ab66ba76bb6c36a38760e5
Chen, Liang-Hsuan
Kang, Fu-Sen
oai:RePEc:eee:ejores:v:204:y:2010:i:3:p:690-6932010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:204:y:2010:i:3:p:690-693
article
A note on the properties of the optimal solution(s) of the Greed and Regret problem
The Greed and Regret problem is a simple model with applications in areas such as studies of crime, ultimatums, bidding, setting service levels and sales force compensation. In general, the Greed and Regret problem is not concave, and may admit several local optima. Nevertheless, the optimal solution exhibits some intuitive monotonicity properties with respect to the problem parameters. We identify a sufficient condition for uniqueness of the optimal solution, which is a generalization of the Increasing Generalized Failure Rate property developed by Lariviere and Porteus (2001).
Crime Newsvendor Failure rates
http://www.sciencedirect.com/science/article/B6VCT-4XRYT5D-2/2/4c63d964714671bd1436206b6f7c2ddd
Sheopuri, Anshul
Zemel, Eitan
oai:RePEc:eee:ejores:v:204:y:2010:i:3:p:604-6122010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:204:y:2010:i:3:p:604-612
article
A branch-and-cut algorithm for the strong minimum energy topology in wireless sensor networks
This paper studies the strong minimum energy topology design problem in wireless sensor networks. The objective is to assign transmission power to each sensor node in a directed wireless sensor network such that the induced directed graph topology is strongly connected and the total energy consumption is minimized. A topology is defined to be strongly connected if there exists a communication path between each ordered pair of sensor nodes. This topology design problem with sensor nodes defined on a plane is NP-Complete. We first establish a lower bound on the optimal power consumption. We then provide three formulations for a more general problem defined on a general directed graph. All of these formulations involve an exponential number of constraints. The second formulation is stronger than the first. Further, using the second formulation, we lift the connectivity constraints to generate a stronger set of constraints that yields the third formulation. These lifted cuts turn out to be extremely helpful in developing an effective branch-and-cut algorithm. A series of experiments is carried out to investigate the performance of the proposed branch-and-cut algorithm. These computational results over 580 instances demonstrate the effectiveness of our approach.
OR in energy Wireless sensor network Minimum energy topology Branch and bound Cutting
http://www.sciencedirect.com/science/article/B6VCT-4XNF8BM-1/2/8a5b84a44dd5c533c59baf0cc730c019
Aneja, Y.P.
Chandrasekaran, R.
Li, Xiangyong
Nair, K.P.K.
oai:RePEc:eee:ejores:v:205:y:2010:i:2:p:280-2892010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:205:y:2010:i:2:p:280-289
article
Due dates assignment and JIT scheduling with equal-size jobs
This paper deals with due date assignment and just-in-time scheduling for single machine and parallel machine problems with equal-size jobs, where the objective is to minimize the total weighted earliness-tardiness and due date cost. These two problems, but with a common due date to be calculated, were previously shown to be polynomially solvable in O(n4) time. We first show that this complexity can be reduced to O(n3) by modeling the single machine scheduling problem as an assignment problem, without requiring due date enumeration. We then prove that the general case with identical parallel machines and a given set of assignable due dates, whose cardinality is bounded by a constant, is still polynomially solvable.
Multi-processor scheduling Due dates assignment Earliness-tardiness penalty Polynomial-time algorithms
http://www.sciencedirect.com/science/article/B6VCT-4Y95V17-1/2/068cac9d6f22716f761a49512883afec
Huynh Tuong, Nguyen
Soukhal, Ameur
oai:RePEc:eee:ejores:v:204:y:2010:i:2:p:189-1982010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:204:y:2010:i:2:p:189-198
article
Assessing bank efficiency and performance with operational research and artificial intelligence techniques: A survey
This paper presents a comprehensive review of 196 studies which employ operational research (O.R.) and artificial intelligence (A.I.) techniques in the assessment of bank performance. Several key issues in the literature are highlighted. The paper also points to a number of directions for future research. We first discuss numerous applications of data envelopment analysis which is the most widely applied O.R. technique in the field. Then we discuss applications of other techniques such as neural networks, support vector machines, and multicriteria decision aid that have also been used in recent years, in bank failure prediction studies and the assessment of bank creditworthiness and underperformance.
Artificial intelligence Banks Data envelopment analysis Efficiency Operational research Literature review
http://www.sciencedirect.com/science/article/B6VCT-4X01PB2-1/2/62533496419d528842387ca251a8b7f8
Fethi, Meryem Duygun
Pasiouras, Fotios
oai:RePEc:eee:ejores:v:205:y:2010:i:2:p:262-2722010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:205:y:2010:i:2:p:262-272
article
Weak sharp minima for set-valued vector variational inequalities with an application
In this paper, the notion of weak sharp minima is employed in the investigation of set-valued vector variational inequalities. The gap function φT for set-valued strong vector variational inequalities (for short, SVVI) is proved to be less than the gap function φT for set-valued weak vector variational inequalities (for short, WVVI) under certain conditions, which implies that the solution set of SVVI is equivalent to the solution set of WVVI. Moreover, it is shown that weak sharp minima for the solution sets of SVVI and WVVI hold for the corresponding gap functions under the assumption of strong pseudomonotonicity, where φTi is a gap function for the i-th component of SVVI and WVVI. As an application, the weak Pareto solution set of vector optimization problems (for short, VOP) is proved to be a weak sharp minimum when each component gi of the objective function is strongly convex.
Set-valued weak (resp., strong) vector variational inequality Gap function Weak sharp minimum Strong pseudomonotonicity Strong convexity
http://www.sciencedirect.com/science/article/B6VCT-4Y5GXXG-1/2/fb8d353a9d693d2fd520c77ae60f31f3
Li, J.
Huang, N.J.
Yang, X.Q.
oai:RePEc:eee:ejores:v:205:y:2010:i:1:p:159-1712010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:205:y:2010:i:1:p:159-171
article
Pricing surplus server capacity for mean waiting time sensitive customers
We consider a queueing model wherein a resource is shared by two different classes of customers, primary (existing) and secondary (new), under a service level based pricing contract. This contract between the secondary class customers and the resource manager specifies the unit admission price and the quality of service (QoS) offered. We assume that the secondary customers' Poisson arrival rate depends linearly on the unit price and service level offered, while the server uses a delay-dependent priority queue management scheme. We analyze the joint problem of optimal pricing and operation of the resource with the inclusion of secondary class customers, while continuing to offer a pre-specified QoS to primary class customers. Our analysis leads to an algorithm that finds, in closed-form expressions, the optimal points of the resulting non-convex constrained optimization problem. We also study in detail the structure and the non-linear nature of these optimal pricing and operating decisions.
Queueing Quality of service Dynamic priority schemes Linear demand function Non-convex constrained optimization
http://www.sciencedirect.com/science/article/B6VCT-4Y4R309-2/2/a1b930c13803bd858a8c4e25122b2224
Sinha, Sudhir K.
Rangaraj, N.
Hemachandra, N.
oai:RePEc:eee:ejores:v:204:y:2010:i:3:p:522-5322010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:204:y:2010:i:3:p:522-532
article
The selectope for bicooperative games
A bicooperative game is defined by a worth function on the set of ordered pairs of disjoint coalitions of players. The aim of this paper is to analyze the selectope for bicooperative games. This solution concept was introduced by Hammer et al. (1977) [20] and studied by Derks et al. (2000) [10] for cooperative games. We show the relations between the selectope, the core and the Weber set and obtain a characterization of almost positive bicooperative games as bicooperative games such that the core, the Weber set and the selectope coincide. Moreover, an axiomatic characterization of the elements of the selectope is obtained.
Bicooperative game Core Weber set Selectope
http://www.sciencedirect.com/science/article/B6VCT-4XSVR8H-1/2/f881f8f01913bcfcf8c56c73be842f0d
Bilbao, J.M.
Jiménez, N.
López, J.J.
oai:RePEc:eee:ejores:v:205:y:2010:i:1:p:59-642010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:205:y:2010:i:1:p:59-64
article
The pyramidal capacitated vehicle routing problem
This paper introduces the pyramidal capacitated vehicle routing problem (PCVRP) as a restricted version of the capacitated vehicle routing problem (CVRP). In the PCVRP each route is required to be pyramidal in a sense generalized from the pyramidal traveling salesman problem (PTSP). A pyramidal route is defined as a route on which the vehicle first visits customers in increasing order of customer index, and on the remaining part of the route visits customers in decreasing order of customer index. Moreover, this paper develops an exact branch-and-cut-and-price (BCP) algorithm for the PCVRP. A main feature of the algorithm is that exact pricing over elementary routes is done in pseudo-polynomial time. Computational results suggest that PCVRP solutions are highly useful for obtaining near-optimal solutions to the CVRP. Furthermore, pricing of pyramidal routes may prove to be very useful in column generation for the CVRP.
Routing Pyramidal traveling salesman Branch-and-cut-and-price
http://www.sciencedirect.com/science/article/B6VCT-4XVBP61-2/2/699ee4408ca8e78d010a5d6b1cb45916
Lysgaard, Jens
oai:RePEc:eee:ejores:v:205:y:2010:i:2:p:301-3122010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:205:y:2010:i:2:p:301-312
article
On competitive sequential location in a network with a decreasing demand intensity
We introduce and analyze a Hotelling-like game wherein players can locate in a city, at a fixed cost, according to an exogenously given order. Demand intensity is assumed to be strictly decreasing in distance, and players locate in the city as long as it is profitable for them to do so. For a linear city (i) we explicitly determine the number of players who will locate in equilibrium, (ii) we fully characterize and compute the unique family of equilibrium locations, and (iii) we show that players' equilibrium expected profits decline with their position in the order. Our results are then extended to a city represented by an undirected weighted graph whose edge lengths are not too small and where co-location on nodes of the graph is not permitted. Further, we compare the equilibrium outcomes with the optimal policy of a monopolist who faces an identical problem and who needs to decide upon the number of stores to open and their locations in the city so as to maximize total profit.
Location Game theory
http://www.sciencedirect.com/science/article/B6VCT-4Y34WB9-1/2/943d60e2fdc410318c70ce74e450f9b7
Granot, Daniel
Granot, Frieda
Raviv, Tal
oai:RePEc:eee:ejores:v:205:y:2010:i:2:p:479-4822010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:205:y:2010:i:2:p:479-482
article
A unified analysis for the single-machine scheduling problem with controllable and non-controllable variable job processing times
We present a unified analysis for single-machine scheduling problems in which the actual job processing times are controlled by either a linear or a convex resource allocation function and also vary concurrently depending on the job's position in the sequence and/or on the total processing time of the already processed jobs. We show that the problem is solvable in O(nlogn) time by a weight-matching approach when a convex resource allocation function is in effect. In the case of a linear resource allocation function, we show that the problem can be solved in O(n3) time by using an assignment formulation. Our approach generalizes the solution approach for the corresponding problems with controllable job processing times to incorporate the variability of the job processing times stemming from the job's position in the sequence and/or the total processing time of the already processed jobs.
Scheduling Single-machine Variable processing times
http://www.sciencedirect.com/science/article/B6VCT-4XY5DWK-1/2/87df92e90a54d75dd71e7ad6a8576212
Koulamas, Christos
Gupta, Sushil
Kyparisis, George J.
oai:RePEc:eee:ejores:v:204:y:2010:i:2:p:274-2842010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:204:y:2010:i:2:p:274-284
article
Repeated newsvendor game with transshipments under dual allocations
We study a repeated newsvendor game with transshipments. In every period n retailers face a stochastic demand for an identical product and independently place their inventory orders before demand realization. After observing the actual demand, each retailer decides how much of her leftover inventory or unsatisfied demand she wants to share with the other retailers. Residual inventories are then transshipped in order to meet residual demands, and dual allocations are used to distribute residual profit. Unsold inventories are salvaged at the end of the period. While in a single-shot game retailers in an equilibrium withhold their residuals, we show that it is a subgame-perfect Nash equilibrium for the retailers to share all of the residuals when the discount factor is large enough and the game is repeated infinitely many times. We also study asymptotic behavior of the retailers' order quantities and discount factors when n is large. Finally, we provide conditions under which a system-optimal solution can be achieved in a game with n retailers, and develop a contract for achieving a system-optimal outcome when these conditions are not satisfied.
Supply chain management Inventory sharing Applied game theory Repeated games
http://www.sciencedirect.com/science/article/B6VCT-4XPYXH8-2/2/db1ec4f38185c67bb21a2e3bb472d517
Huang, Xiao
Sosic, Greys
oai:RePEc:eee:ejores:v:204:y:2010:i:3:p:648-6612010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:204:y:2010:i:3:p:648-661
article
Advertising strategies for new product diffusion in emerging markets: Propositions and analysis
We develop advertising strategies for a firm which has a new product for which demand exists in an emerging market. However, the primary channels for distribution of the firm's product do not exist, because either the market has not been opened up, or the firm has not entered the market. Therefore, the consumers use the secondary channels in other markets. The current research seeks to answer questions such as: (i) Is it beneficial to advertise for the product before the market opens? (ii) What should be the difference in advertising before and after the market opens? (iii) What is the effect of various parameters (such as the likelihood of product adoption for the primary and secondary channels, market potential and coefficient of innovation and imitation) on the optimal advertising policies? The above problem is relevant for the situation faced by Japanese electronics goods manufacturers before they entered the Indian market; or a company, which has regional presence in one part of a country (say, urban), and gets access to another part later (e.g., rural). The problem is modeled as an optimal control problem. Relevant propositions are developed for optimal normative advertising policy. Numerical examples are provided to illustrate key policy results.
Diffusion Advertising Optimal control International marketing
http://www.sciencedirect.com/science/article/B6VCT-4XPB6XK-4/2/f97f1b74f0ccc3e7ebda6de96353d8ee
Swami, Sanjeev
Dutta, Arindam
oai:RePEc:eee:ejores:v:204:y:2010:i:3:p:439-4482010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:204:y:2010:i:3:p:439-448
article
Interrelating operational and financial performance measurements in inventory control
Financial supply chain management and working capital management are increasingly receiving attention as important avenues to increase profitability in supply chains. By actively managing payment terms and working capital requirements, managers can influence financial performance and achieve significant cost savings. However, measures to improve financial performance implicitly restrict and influence operational performance. In our research we elaborate on the benefits of considering both operational and financial aspects equally in decision-making for the physical and financial supply chain. We develop a mathematical model that determines the optimal purchasing order quantity under working capital restrictions and payment delays. We analyze the trade-offs between the most commonly used financial and operational measurements, such as service level, return on investment, profit margin and inventory level. Our results demonstrate the significance of payment delays: increases/decreases in the upstream/downstream payment delays favor the system's operations by decreasing operational costs. Moreover, increases in the working capital employed in the system decrease the total operational cost, increase the total financial cost and lower the return on working capital investment.
Supply chain management Financial supply chain Working capital Payment delays
http://www.sciencedirect.com/science/article/B6VCT-4XPG1RH-1/2/b17a371dc039ace23b5331c8cad28492
Protopappa-Sieke, Margarita
Seifert, Ralf W.
oai:RePEc:eee:ejores:v:204:y:2010:i:2:p:199-2052010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:204:y:2010:i:2:p:199-205
article
On the bicriterion - minimal cost/minimal label - spanning tree problem
We address a bicriterion spanning tree problem relevant in application fields such as telecommunication networks or transportation networks. Each edge is assigned a cost value and a label (such as a color). The first criterion intends to minimize the total cost of the spanning tree (the sum of its edge costs), while the second intends to obtain the solution with a minimal number of different labels. Since these criteria are, in general, conflicting, we developed an algorithm to generate the set of non-dominated spanning trees. Computational experiments are presented and results discussed.
Spanning tree Minimal cost Minimal label Multi-objective decision making
http://www.sciencedirect.com/science/article/B6VCT-4XJ17NG-2/2/0d000993b6efd52ad162574b8263ab0b
Clímaco, João C.N.
Eugénia Captivo, M.
Pascoal, Marta M.B.
oai:RePEc:eee:ejores:v:205:y:2010:i:2:p:313-3242010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:205:y:2010:i:2:p:313-324
article
Inventory management of multiple items with irregular demand: A case study
We present the case of a Greek commercial enterprise facing the problem of managing the inventories of thousands of different items, supplied by more than 20 European and Asian manufacturers and sold to a large number of different-type customers. A key feature of the problem is that the demand for the vast majority of items is intermittent and lumpy, thus not allowing the use of the usual normal or Poisson distributions. The paper describes the solutions given to several practical problems in the course of developing an easy-to-use yet effective and all-encompassing inventory control system. Emphasis is placed on the accurate modeling of demand by means of a gamma distribution with a probability mass at zero or a package Poisson distribution for very-slow-moving items. Using those models and simple quantitative tools we develop an efficient procedure for approximate but quite accurate determination of the base stock levels that achieve the desired fill rates in the proposed periodic review system. We briefly describe the computerized implementation of the new system and the very encouraging results.
Supply chain management Inventory Case study Gamma distribution Periodic review Base stock
http://www.sciencedirect.com/science/article/B6VCT-4Y4R49N-1/2/34378d05d890d6d24c0153b6c277b04b
Nenes, George
Panagiotidou, Sofia
Tagaras, George
oai:RePEc:eee:ejores:v:205:y:2010:i:2:p:332-3382010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:205:y:2010:i:2:p:332-338
article
Two-stage cooperation model with input freely distributed among the stages
Shared flow has been widely used in production scenarios where inputs and outputs are shared among various activities. In the DEA literature, shared flow represents situations in which DMUs are divided into different components that require common resources or produce goods or services obtained through collaboration among them. The objective of this paper is to offer an approach for studying shared flow in a two-stage production process in series, where shared inputs can be freely allocated among the different stages. A product-form cooperative efficiency model is proposed to illustrate the overall efficiency of the DMU and the relationship between the stages. First, we use a game-theory framework to determine the upper and lower bounds of the efficiencies of the stages in a non-cooperative context. A heuristic is suggested to transform the non-linear model into a parametric linear one, which is then used to solve the cooperative model. The model is justified by a numerical evaluation of bank performances.
Shared flow Two-stage DEA Cooperative efficiency Product-form
http://www.sciencedirect.com/science/article/B6VCT-4Y52R07-3/2/2753d3336656a9a1f9c5f8c03dc3a1fd
Zha, Yong
Liang, Liang
oai:RePEc:eee:ejores:v:204:y:2010:i:3:p:698-6992010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:204:y:2010:i:3:p:698-699
article
Decision behaviour, analysis and support, Simon French, John Maule, Nadia Papamichail. Cambridge University Press (2009). 472 pp., £34.99.
http://www.sciencedirect.com/science/article/B6VCT-4XYB41V-1/2/d710ba7d3c0f77cc65f61942788b3b6d
Morton, Alec
oai:RePEc:eee:ejores:v:205:y:2010:i:1:p:218-2262010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:205:y:2010:i:1:p:218-226
article
An ensemble method using credal decision trees
Supervised classification learning can be considered an important tool for decision support. In this paper, we present a method for supervised classification learning which ensembles decision trees obtained via convex sets of probability distributions (also called credal sets) and uncertainty measures. Our method forces the use of different decision trees and has mainly the following characteristics: it obtains a good percentage of correct classifications and an improvement in processing time compared with known classification methods; it does not need to fix the number of decision trees to be used; and it can be parallelized for application to very large data sets.
Imprecise probabilities Credal sets Imprecise Dirichlet model Uncertainty measures Supervised classification Decision trees
http://www.sciencedirect.com/science/article/B6VCT-4XX15SF-3/2/ba4fd16e2ed6edec31dedf1edca0b136
Abellán, Joaquín
Masegosa, Andrés R.
oai:RePEc:eee:ejores:v:204:y:2010:i:3:p:589-5962010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:204:y:2010:i:3:p:589-596
article
Optimal fences and joint price and inventory decisions in distinct markets with demand leakage
This paper evaluates the simultaneous determination of price and inventory replenishment when a firm faces demand from distinct market segments. A firm utilizes fences, such as advance or nonrefundable payment, to maintain separation of its market segments; however, fences are imperfect and allow a degree of demand leakage from the higher-priced to the lower-priced market segment. We investigate the optimal structure of joint price and inventory decisions with fencing, and demonstrate that more segments is not necessarily better, especially when demand uncertainty is high in the presence of lost sales. We also show the impact of imperfect fences on the firm's profitability, and evaluate how fencing costs affect the optimal fencing decision.
Fences Pricing Inventory Revenue management
http://www.sciencedirect.com/science/article/B6VCT-4XX15SF-1/2/cd41bcd9c2f79e5dc7b95e14cbba4c26
Zhang, Michael
Bell, Peter C.
Cai, Gangshu (George)
Chen, Xiangfeng
oai:RePEc:eee:ejores:v:204:y:2010:i:3:p:630-6382010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:204:y:2010:i:3:p:630-638
article
Incomplete information, learning, and natural resource management
The problem of resource extraction developed in Levhari and Mirman (1980) is reconsidered under a situation of incomplete information. Specifically, players do not have information about other players' benefit functions. It is assumed that each player relies on simple, non-probabilistic beliefs about the other players' behaviour. Basically, players assume that a variation of their own consumption has a first-order linear effect on the consumption of others. We define a simple learning procedure where players' beliefs are updated through observations of resource levels over time. Convergence, viability, and local stability of the procedure are proved. Comparisons are made with the full information benchmark case provided by Levhari and Mirman. For a large set of situations, the steady state of the resource lies between the non-cooperative and cooperative solutions in the benchmark case.
Environment OR in natural resources Economics Learning procedure Non probabilistic beliefs Conjectural variations
http://www.sciencedirect.com/science/article/B6VCT-4XSJVN5-3/2/ffcdeca53552af34517219cd260ffbce
Quérou, N.
Tidball, M.
oai:RePEc:eee:ejores:v:205:y:2010:i:2:p:483-4852010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:205:y:2010:i:2:p:483-485
article
A note on deriving decision rules to locate export containers in container yards
A counterexample is given to illustrate that a key model transformation in the paper entitled "Deriving decision rules to locate export containers in container yards" [Kim, K.H., Park, Y.M., Ryu, K.-R., 2000. Deriving decision rules to locate export containers in container yards. European Journal of Operational Research 124 (1), 89-101] is not correct. Then, the errors in the original derivation of the model transformation are analyzed, and the correct form is presented.
Transportation Container terminal Storage location Dynamic programming
http://www.sciencedirect.com/science/article/B6VCT-4Y0KWG3-1/2/d5a5080855dca848df374c8a138807d7
Zhang, Canrong
Chen, Weiwei
Shi, Leyuan
Zheng, Li
oai:RePEc:eee:ejores:v:205:y:2010:i:2:p:459-4682010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:205:y:2010:i:2:p:459-468
article
Credit contagion in a network of firms with spatial interaction
This contribution studies the effects of credit contagion on the credit risk of a portfolio of bank loans. To this aim we introduce a model that takes into account the counterparty risk in a network of interdependent firms that describes the presence of business relations among different firms. The location of the firms is simulated with probabilities computed using an entropy spatial interaction model. By means of an extensive simulation analysis we investigate the behavior of the proposed model and study the effects of default contagion on the loss distribution of a portfolio of bank loans.
Finance Credit risk Bank loan portfolios Contagion models Entropy spatial models
http://www.sciencedirect.com/science/article/B6VCT-4Y5GXXG-3/2/5fd7f56a225547272a5fc18feebfc126
Barro, Diana
Basso, Antonella
oai:RePEc:eee:ejores:v:204:y:2010:i:2:p:263-2732010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:204:y:2010:i:2:p:263-273
article
Supply capacity acquisition and allocation with uncertain customer demands
We study a class of capacity acquisition and assignment problems with stochastic customer demands often found in operations planning contexts. In this setting, a supplier utilizes a set of distinct facilities to satisfy the demands of different customers or markets. Our model simultaneously assigns customers to each facility and determines the best capacity level to operate or install at each facility. We propose a branch-and-price solution approach for this new class of stochastic assignment and capacity planning problems. For problem instances in which capacity levels must fall between some pre-specified limits, we offer a tailored solution approach that reduces solution time by nearly 80% over an alternative approach using a combination of commercial nonlinear optimization solvers. We have also developed a heuristic solution approach that consistently provides optimal or near-optimal solutions, where solutions within 0.01% of optimality are found on average without requiring a nonlinear optimization solver.
Assignment Capacity acquisition Newsvendor Branch-and-price Stochastic demand
http://www.sciencedirect.com/science/article/B6VCT-4XNF6F8-1/2/c8aa8a6e11e19eb840d446975d74a40d
Taaffe, Kevin
Geunes, Joseph
Edwin Romeijn, H.
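For context on the "Newsvendor" keyword above: the single-facility building block behind such capacity decisions is the classic critical-fractile rule. A minimal sketch under normally distributed demand (all names and numbers are illustrative assumptions, not the authors' branch-and-price model):

```python
from statistics import NormalDist

def newsvendor_capacity(mu, sigma, unit_cost, price, salvage=0.0):
    # Critical-fractile rule: choose Q with F(Q) = (p - c) / (p - s),
    # here for demand distributed N(mu, sigma).
    critical_fractile = (price - unit_cost) / (price - salvage)
    return NormalDist(mu, sigma).inv_cdf(critical_fractile)

# Illustrative numbers: underage cost exceeds overage cost,
# so the optimal capacity sits above mean demand.
q = newsvendor_capacity(mu=100, sigma=20, unit_cost=4, price=10)
```

The paper's joint assignment-and-capacity model is far more elaborate; this shows only the per-facility stochastic-demand trade-off that underlies it.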
oai:RePEc:eee:ejores:v:205:y:2010:i:1:p:98-1052010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:205:y:2010:i:1:p:98-105
article
An efficient tabu algorithm for the single row facility layout problem
The general goal of the facility layout problem is to arrange a given number of facilities to minimize the total cost associated with the known or projected interactions between them. One of the special classes of the facility layout problem is the Single Row Facility Layout Problem (SRFLP), which consists of finding an optimal linear placement of rectangular facilities with varying dimensions on a straight line. This paper first presents and proves a theorem that yields the optimal solution of a special case of the SRFLP. This theorem proves very useful in reducing the computational effort of the new tabu search algorithm for the SRFLP proposed in this paper. Computational results on benchmark problems show that the proposed algorithm is more efficient than other heuristics for solving the SRFLP.
Facilities planning and design Linear ordering problem Tabu search Integer programming
http://www.sciencedirect.com/science/article/B6VCT-4XVK3XC-2/2/08b286c1498952db4d883a4c0404d1ce
Samarghandi, Hamed
Eshghi, Kourosh
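For context, the SRFLP objective referred to above is the total flow-weighted distance between facility centres in a linear arrangement. A minimal sketch of evaluating that objective for a given permutation (purely illustrative; the paper's theorem and tabu search procedure are not reproduced):

```python
def srflp_cost(order, lengths, flow):
    # order   : a permutation of facility indices
    # lengths : lengths[f] is the length of facility f
    # flow    : symmetric matrix of interaction costs per unit distance
    # Compute the centre position of each facility in the given order.
    pos, x = {}, 0.0
    for f in order:
        pos[f] = x + lengths[f] / 2.0
        x += lengths[f]
    # Sum flow-weighted centre-to-centre distances over all pairs.
    total = 0.0
    n = len(order)
    for i in range(n):
        for j in range(i + 1, n):
            a, b = order[i], order[j]
            total += flow[a][b] * abs(pos[a] - pos[b])
    return total
```

A heuristic such as tabu search explores permutations while re-evaluating this cost; mirror-image orders give the same value, which search methods typically exploit.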
oai:RePEc:eee:ejores:v:205:y:2010:i:1:p:186-1942010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:205:y:2010:i:1:p:186-194
article
Multivariate optimisation for procurement of emergency services equipment - Teams of the best or the best of teams?
It is shown that the design of some inventory structures such as critical protective items for local emergency response elements, or selection of some types of sports team (e.g. baseball, cricket), is appropriately formulated as the selection of an optimal set of options over multiple criteria. This paper examines the problem of defining what might constitute optimum sets under such conditions. Noting that options may be characterised with data of various scale types, the paper introduces a number of different policies for optimality appropriate to the different scale types, and derives closed expressions that implement the various policies. It is shown that simply choosing the best individual options to construct a preferred team is likely to be sub-optimal particularly if the policy is hard to meet. This result is shown to be robust over data structures and data scale types. Managerial implications are briefly considered.
Multiple criteria analysis Cost benefit analysis Purchasing
http://www.sciencedirect.com/science/article/B6VCT-4Y34PWF-1/2/206003470fb21830480ed2727621b5d1
Woodruff, Christopher J.
oai:RePEc:eee:ejores:v:204:y:2010:i:3:p:557-5642010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:204:y:2010:i:3:p:557-564
article
Modified interactive Chebyshev algorithm (MICA) for convex multiobjective programming
In this paper, we describe an interactive procedural algorithm for convex multiobjective programming based upon the Tchebycheff method, Wierzbicki's reference point approach, and the procedure of Michalowski and Szapiro. At each iteration, the decision maker (DM) has the option of expressing his or her objective-function aspirations in the form of a reference criterion vector. Also, the DM has the option of expressing minimally acceptable values for each of the objectives in the form of a reservation vector. Based upon this information, a certain region is defined for examination. In addition, a special set of weights is constructed. Then with the weights, the algorithm of this paper is able to generate a group of efficient solutions that provides for an overall view of the current iteration's certain region. By modification of the reference and reservation vectors, one can "steer" the algorithm at each iteration. From a theoretical point of view, we prove that none of the efficient solutions obtained using this scheme impair any reservation value for convex problems. The behavior of the algorithm is illustrated by means of graphical representations and an illustrative numerical example.
Multiobjective programming Interactive procedures Tchebycheff method Reference point methods Aspiration criterion vectors Reservation levels
http://www.sciencedirect.com/science/article/B6VCT-4XRYT5D-1/2/7ae3c7101593e337ac56c2dba7bb6c4c
Luque, Mariano
Ruiz, Francisco
Steuer, Ralph E.
oai:RePEc:eee:ejores:v:204:y:2010:i:3:p:672-6822010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:204:y:2010:i:3:p:672-682
article
An alternative approach to monetary aggregation in DEA
This paper proposes a set of alternative DEA-based money indices that prove, both theoretically and empirically, to be competitive monetary aggregates, since they perform as well as the Divisia aggregates. Based on all the results concerning causality, forecasting and money demand, we conclude that the DEA money aggregates are at least competitive alternatives to the Divisia aggregates, and hence suggest that these new aggregates may be considered along with existing weighted monetary aggregates such as the Divisia ones. Given the inherent benefit-of-the-doubt weighting mechanism underlying the DEA models, where the optimal weight assigned by DEA to each monetary asset reflects ongoing financial innovation and the Reserve Bank of India's policy priority in the distribution of total liquidity, we believe that the DEA money indices can better capture liquidity amid ongoing financial innovation in the economy.
DEA Money Index Divisia index
http://www.sciencedirect.com/science/article/B6VCT-4XVC4NV-2/2/e056868fe4d4227aed2b7640c65c0170
Sahoo, Biresh K.
Acharya, Debashis
oai:RePEc:eee:ejores:v:205:y:2010:i:1:p:106-1122010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:205:y:2010:i:1:p:106-112
article
Distributions of rectilinear deviation distance to visit a facility
This paper deals with the deviation distance to visit a facility from pre-planned routes. Facilities are approximated by both points and lines on a continuous plane. To see the relationship between the deviation distance and the availability of facilities, we derive the distributions of the rectilinear deviation distance for regular and random patterns of facilities. These distributions demonstrate how the shortest distance and the relative position of origin and destination affect the deviation distance. We also show that the deviation distance is a generalization of the nearest neighbour distance.
Location Flow demand Shortest distance Nearest neighbour distance
http://www.sciencedirect.com/science/article/B6VCT-4XX15SF-2/2/5b72899e5146c6f89c97db1c37940a02
Miyagawa, Masashi
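For a point facility, the deviation distance studied above is the extra rectilinear travel incurred by visiting the facility en route from origin to destination. A minimal sketch under that reading (function names are illustrative assumptions; the paper's distributional results for regular and random facility patterns are not reproduced):

```python
def l1(p, q):
    # Rectilinear (Manhattan) distance between two planar points.
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def deviation_distance(origin, dest, facility):
    # Extra travel caused by detouring via the facility; zero whenever
    # the facility lies on some shortest rectilinear path of the trip,
    # i.e. inside the axis-aligned rectangle spanned by origin and dest.
    return l1(origin, facility) + l1(facility, dest) - l1(origin, dest)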
oai:RePEc:eee:ejores:v:205:y:2010:i:1:p:202-2042010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:205:y:2010:i:1:p:202-204
article
Fuzzy decision-making method based on the weighted correlation coefficient under intuitionistic fuzzy environment
A multicriteria fuzzy decision-making method based on weighted correlation coefficients using entropy weights is proposed under intuitionistic fuzzy environment for some situations where the information about criteria weights for alternatives is completely unknown. To determine the entropy weights with respect to a set of criteria represented by intuitionistic fuzzy sets (IFSs), we establish an entropy weight model, which can be used to get the criteria weights, and then propose an evaluation formula of weighted correlation coefficient between an alternative and the ideal alternative. The alternatives can be ranked and the most desirable one(s) can be selected according to the weighted correlation coefficients. Finally, two illustrative examples demonstrate the practicality and effectiveness of the proposed method.
Intuitionistic fuzzy set Correlation coefficient Entropy weight Multicriteria fuzzy decision-making
http://www.sciencedirect.com/science/article/B6VCT-4Y646K1-1/2/07bcd03b44f0fb098341e28f05453a0a
Ye, Jun
oai:RePEc:eee:ejores:v:205:y:2010:i:2:p:339-3452010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:205:y:2010:i:2:p:339-345
article
Supply chain coordination with insurance contract
We propose an insurance contract under which the supplier shares the risk of overstock and understock with the retailer, improving the efficiency of the supply chain with a newsvendor-type product. We first show that the insurance contract could coordinate the supply chain, and obtain bargaining solution in the supply chain model. Then we investigate the effects of agents' risk aversion on the supply chain model and acquire the Pareto-optimal solution through the mean-variance approach. After that, we compare the insurance contract with the revenue sharing contract, focusing particularly on their differences. Finally, extensive numerical studies are conducted, and managerial implications are proposed.
Supply chain management Game theory Risk analysis Insurance contract
http://www.sciencedirect.com/science/article/B6VCT-4Y52R07-2/2/377e738a6b7bf7ce9340b1612e8926d9
Lin, Zhibing
Cai, Chen
Xu, Baoguang
oai:RePEc:eee:ejores:v:204:y:2010:i:2:p:229-2362010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:204:y:2010:i:2:p:229-236
article
Parallel machine scheduling with nested processing set restrictions
We consider the problem of scheduling a set of n independent jobs on m parallel machines, where each job can only be scheduled on a subset of machines called its processing set. The machines are linearly ordered, and the processing set of job j is given by two machine indexes aj and bj; i.e., job j can only be scheduled on machines aj,aj+1,...,bj. Two distinct processing sets are either nested or disjoint. Preemption is not allowed. Our goal is to minimize the makespan. It is known that the problem is strongly NP-hard and that there is a list-type algorithm with a worst-case bound of 2-1/m. In this paper we give an improved algorithm with a worst-case bound of 7/4. For two and three machines, the algorithm gives a better worst-case bound of 5/4 and 3/2, respectively.
Nested processing set restrictions Nonpreemptive scheduling Makespan NP-hard Approximation algorithm Worst-case bound
http://www.sciencedirect.com/science/article/B6VCT-4XKHD9G-4/2/d5cd8c5d413b1f957aa42b6fe2ce554a
Huo, Yumei
Leung, Joseph Y.-T.
oai:RePEc:eee:ejores:v:204:y:2010:i:3:p:694-6972010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:204:y:2010:i:3:p:694-697
article
A slacks-based measure of super-efficiency in data envelopment analysis: A comment
The slacks-based measure (SBM) of super-efficiency in data envelopment analysis (DEA) developed by Tone [Tone, K., 2002. A slacks-based measure of super-efficiency in data envelopment analysis. European Journal of Operational Research 143, 32-41] is a non-radial super-efficiency model compared to the traditional radial super-efficiency DEA models. This note extends the SBM of super-efficiency to the additive (slacks-based) DEA model. Alternative slacks-based objective functions can be used. Unlike the traditional radial super-efficiency DEA, additive (slacks-based) super-efficiency models are always feasible.
Data envelopment analysis (DEA) Efficiency Slacks Super-efficiency Ranking
http://www.sciencedirect.com/science/article/B6VCT-4XW00BG-2/2/a79fcee04fb2b193e1b87fb55537b14e
Du, Juan
Liang, Liang
Zhu, Joe
oai:RePEc:eee:ejores:v:205:y:2010:i:1:p:113-1262010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:205:y:2010:i:1:p:113-126
article
Price and lead time decisions in dual-channel supply chains
Manufacturers today are increasingly adopting a dual channel to sell their products, i.e., the traditional retail channel and an online direct channel. Empirical studies have shown that service quality (we focus on the delivery lead time of the direct channel) even goes beyond product price as one of the major factors influencing consumer acceptance of the direct channel. Delivery lead time has significant effects on demand, profit, and pricing strategy. However, there is scant literature addressing the decision on the promised delivery lead time of a direct channel and its impact on the manufacturer's and retailer's pricing decisions. To fill this gap, we examine the optimal decisions of delivery lead time and prices in a centralized and a decentralized dual-channel supply chain using the two-stage optimization technique and Stackelberg game, and analyze the impacts of delivery lead time and customer acceptance of a direct channel on the manufacturer's and retailer's pricing behaviours. We analytically show that delivery lead time strongly influences the manufacturer's and the retailer's pricing strategies and profits. Our numerical studies reveal that the difference between the demand transfer ratios in the two channels with respect to delivery lead time and direct sale price, customer acceptance of the direct channel, and product type have great effects on the lead time and pricing decisions.
Supply chain management Dual channel Internet/direct marketing E-commerce Game theory
http://www.sciencedirect.com/science/article/B6VCT-4Y1NV3K-1/2/1940a10ded25562be96fc356fcddf955
Hua, Guowei
Wang, Shouyang
Cheng, T.C.E.
oai:RePEc:eee:ejores:v:205:y:2010:i:1:p:81-972010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:205:y:2010:i:1:p:81-97
article
Balancing stochastic two-sided assembly lines: A chance-constrained, piecewise-linear, mixed integer program and a simulated annealing algorithm
Two-sided assembly lines are often designed to produce large-sized products, such as automobiles, trucks and buses. In this type of a production line, both left-side and right-side of the line are used in parallel. In all studies on two-sided assembly lines, the task times are assumed to be deterministic. However, in real life applications, especially in manual assembly lines, the tasks may have varying execution times defined as a probability distribution. The task time variation may result from machine breakdowns, loss of motivation, lack of training, non-qualified operators, complex tasks, environment, etc. In this paper, the problem of balancing two-sided assembly lines with stochastic task times (STALBP) is considered. A chance-constrained, piecewise-linear, mixed integer program (CPMIP) is proposed to model and solve the problem. As a solution approach a simulated annealing (SA) algorithm is proposed. To assess the effectiveness of CPMIP and SA algorithm, a set of test problems are solved. Finally, computational results indicating the effectiveness of CPMIP and SA algorithm are reported.
Facilities planning and design Line balancing Two-sided assembly lines Integer programming Simulated annealing
http://www.sciencedirect.com/science/article/B6VCT-4XVC4NV-1/2/71688d02a126e346ff745d4f1287ccce
Özcan, Ugur
oai:RePEc:eee:ejores:v:204:y:2010:i:2:p:328-3352010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:204:y:2010:i:2:p:328-335
article
Multicriteria classification models for the identification of targets and acquirers in the Asian banking sector
The purpose of the present study is the development of classification models for the identification of acquirers and targets in the Asian banking sector. We use a sample of 52 targets and 47 acquirers that were involved in acquisitions in 9 Asian banking markets during 1998-2004 and match them by country and time with an equal number of non-involved banks. The models are developed and validated through a tenfold cross-validation approach using two multicriteria decision aid techniques. For comparison purposes we also develop models through discriminant analysis. The results indicate that the multicriteria decision aid models are more efficient that the ones developed through discriminant analysis. Furthermore, in all the cases the models are more efficient in distinguishing between acquirers and non-involved banks than between targets and non-involved banks. Finally, the models with a binary outcome achieve higher accuracies than the ones which simultaneously distinguish between acquirers, targets and non-involved banks.
Multiple criteria analysis Acquisitions Banks
http://www.sciencedirect.com/science/article/B6VCT-4XKHD9G-6/2/21620e90a7fb2cfa5a435c9b6f91f35b
Pasiouras, Fotios
Gaganis, Chrysovalantis
Zopounidis, Constantin
oai:RePEc:eee:ejores:v:205:y:2010:i:2:p:412-4212010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:205:y:2010:i:2:p:412-421
article
A revenue management model for products with two capacity dimensions
Many perishable products and services have multiple capacity attributes. Shipping capacity of container liners, for example, is measured by both volume and weight. Containers with different size consume various capacities in the two dimensions. Restaurant revenue management aims to maximize the revenue per available seat-hour that captures both the number of dining tables and service manpower. Similar issues arise in the air cargo, trucking and health care industries. We study the revenue management problem with two capacity features and formulate the problem as a continuous-time stochastic control model. Unlike heuristic approaches, we derive the optimal solution in an analytical form. Computation of the optimal solution is fairly efficient. With certain conditions we explore the structural properties of the optimal solution. We show that if the revenue rate is concave in the capacity usage, the expected value of marginal capacity is monotone. As a result, the control policy is featured by a sequence of thresholds which displays a significant difference when the remaining capacity-mix varies. Numerical examples are provided.
Network revenue management Multi-dimensional capacity Dynamic pricing Monotonicity
http://www.sciencedirect.com/science/article/B6VCT-4Y6S7KM-2/2/879525f21556bd84427b86f072d7d17e
Xiao, Baichun
Yang, Wei
oai:RePEc:eee:ejores:v:204:y:2010:i:3:p:449-4622010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:204:y:2010:i:3:p:449-462
article
Commitment-penalty contracts in drop-shipping supply chains with asymmetric demand information
We study a drop-shipping supply chain in which the retailer receives a customer's order and the supplier fills it. In such a chain, the supplier keeps inventory and bears inventory risks; the retailer focuses on marketing and customer acquisition, and forwards the orders to the supplier. The retailer usually has better customer demand information, and may send an over-estimated demand forecast to maximize her own interest, which may result in overstock for the supplier. On the other hand, since the retailer does not own inventory, the main concern of the retailer is that the acquired orders may not be fulfilled because of the supplier's shortage of stock. To cope with these challenges, we propose a menu of commitment-penalty contracts that can provide greater certainty of demand as well as greater certainty of supply. We focus our study in the asymmetric demand information case and we show that the supplier can obtain the retailer's demand information by offering a menu of commitment-penalty contracts. Under this mechanism, we find the solution that maximizes the supplier's expected profit.
Multi-agent systems Drop-shipping Supply chain Asymmetric information Commitment-penalty contract
http://www.sciencedirect.com/science/article/B6VCT-4XPB6XK-3/2/a67cf0450277bcfc511a3fdff29193d0
Gan, Xianghua
Sethi, Suresh P.
Zhou, Jing
oai:RePEc:eee:ejores:v:204:y:2010:i:3:p:410-4202010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:204:y:2010:i:3:p:410-420
article
Accelerated scaled memoryless BFGS preconditioned conjugate gradient algorithm for unconstrained optimization
An accelerated scaled memoryless BFGS preconditioned conjugate gradient algorithm for solving unconstrained optimization problems is presented. The basic idea is to combine the scaled memoryless BFGS method and the preconditioning technique in the frame of the conjugate gradient method. The preconditioner, which is also a scaled memoryless BFGS matrix, is reset when the Beale-Powell restart criterion holds. The parameter scaling the gradient is selected as a spectral gradient. For the steplength computation the method has the advantage that in conjugate gradient algorithms the step lengths may differ from 1 by two order of magnitude and tend to vary unpredictably. Thus, we suggest an acceleration scheme able to improve the efficiency of the algorithm. Under common assumptions, the method is proved to be globally convergent. It is shown that for uniformly convex functions the convergence of the accelerated algorithm is still linear, but the reduction in the function values is significantly improved. In mild conditions the algorithm is globally convergent for strongly convex functions. Computational results for a set consisting of 750 unconstrained optimization test problems show that this new accelerated scaled conjugate gradient algorithm substantially outperforms known conjugate gradient methods: SCALCG [3], [4], [5] and [6], CONMIN by Shanno and Phua (1976, 1978) [42] and [43], Hestenes and Stiefel (1952) [25], Polak-Ribiére-Polyak (1969) [32] and [33], Dai and Yuan (2001) [17], Dai and Liao (2001) (t=1) [14], conjugate gradient with sufficient descent condition [7], hybrid Dai and Yuan (2001) [17], hybrid Dai and Yuan zero (2001) [17], CG_DESCENT by Hager and Zhang (2005, 2006) [22] and [23], as well as quasi-Newton LBFGS method [26] and truncated Newton method by Nash (1985) [27].
Unconstrained optimization Conjugate gradient method Spectral gradient method Wolfe line search BFGS preconditioning
http://www.sciencedirect.com/science/article/B6VCT-4XVBP61-3/2/7bb685c8752b04369f721665f3bfdffb
Andrei, Neculai
oai:RePEc:eee:ejores:v:205:y:2010:i:1:p:19-302010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:205:y:2010:i:1:p:19-30
article
Two-stage stochastic matching and spanning tree problems: Polynomial instances and approximation
This article deals with the two-stage stochastic model, which aims at explicitly taking into account uncertainty in optimization problems, that Kong and Schaefer have recently studied for the maximum weight matching problem [N. Kong, A.J. Schaefer, A factor 1/2 approximation algorithm for two-stage stochastic matching problems, European Journal of Operational Research 172(3) (2006) 740-746]. They have proved that the problem is NP-hard, and they have provided a factor approximation algorithm. We further study this problem and strengthen the hardness results, slightly improve the approximation ratio and exhibit some polynomial cases. We similarly tackle the maximum weight spanning tree problem in the two-stage setting. Finally, we make numerical experiments on randomly generated instances to compare the quality of several interesting heuristics.
Stochastic programming Approximation algorithms Matching Maximum spanning tree Combinatorial optimization
http://www.sciencedirect.com/science/article/B6VCT-4XX15SF-4/2/2f6b3948b835729a2456214234c44b7e
Escoffier, Bruno
Gourvès, Laurent
Monnot, Jérôme
Spanjaard, Olivier
oai:RePEc:eee:ejores:v:204:y:2010:i:3:p:683-6892010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:204:y:2010:i:3:p:683-689
article
Some comments on the economic lot size of a three-stage supply chain with backordering derived without derivatives
Chung and Wee [Chung, C.J., Wee, H.M., 2007. Optimizing the economic lot size of a three-stage supply chain with backordering derived without derivatives. European Journal of Operational Research 183, 933-943] investigate the economic lot size of a three-stage supply chain with backordering derived without derivatives. Their inventory model is correct and interesting. Basically, they adopt an algebraic approach by using the method of complete squares to locate the optimal solution of the integrated system's total cost TC(q,B,n,M) and ignore the role of the functional behavior of TC(q,B,n,M) in locating the optimal solution of it. However, as argued in this paper [Chung, C.J., Wee H.M., 2007. Optimizing the economic lot size of a three-stage supply chain with backordering derived without derivatives. European Journal of Operational Research 183, 933-943] need to explore the functional behavior of TC(q,B,n,M) to justify their solution. So, from the viewpoint of logic, the derivations about the optimal solution have some shortcomings such that the validity of the solution procedure in Chung and Wee [Chung, C.J., Wee H.M., 2007. Optimizing the economic lot size of a three-stage supply chain with backordering derived without derivatives. European Journal of Operational Research 183, 933-943] is questionable. The main purpose of this paper is to overcome those shortcomings and present complete proofs for Chung and Wee [Chung, C.J., Wee H.M., 2007. Optimizing the economic lot size of a three-stage supply chain with backordering derived without derivatives. European Journal of Operational Research 183, 933-943].
Integrated production inventory model Backlogging Economic lot size
http://www.sciencedirect.com/science/article/B6VCT-4XNN5G6-1/2/038e75db6759d7e7304a374fd7fda8df
Chung, Kun-Jen
oai:RePEc:eee:ejores:v:204:y:2010:i:3:p:505-5122010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:204:y:2010:i:3:p:505-512
article
List pricing versus dynamic pricing: Impact on the revenue risk
We consider the problem of a firm selling multiple products that consume a single resource over a finite time period. The amount of the resource is exogenously fixed. We analyze the difference between a dynamic pricing policy and a list-price capacity control policy. The dynamic pricing policy adjusts prices steadily resolving the underlying problem every time step, whereas the list pricing policy sets static prices once but controls the capacity by allowing or preventing product sales. As steady price changes are often costly or unachievable in practice, we investigate the question of how much riskier it is to apply a list pricing policy rather than a dynamic pricing policy. We conduct several numerical experiments and compare expected revenue, standard deviation, and conditional-value-at-risk between the pricing policies. The differences between the policies show that list pricing can be a useful strategy when dynamic pricing is costly or impractical.
Revenue management Pricing Risk analysis Dynamic programming Capacity control
http://www.sciencedirect.com/science/article/B6VCT-4XVK3XC-1/2/22741cd6cbf1ceda58a816ec27f39551
Koenig, Matthias
Meissner, Joern
oai:RePEc:eee:ejores:v:204:y:2010:i:3:p:597-6032010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:204:y:2010:i:3:p:597-603
article
Modeling latent sources in call center arrival data
In this paper, we discuss issues that arise in the analysis of call center arrivals that are mostly linked to individual ads. More specifically, we consider the case where there is no complete linkage between the calls and the advertisements that led to the calls. The ability to model and infer such latent call arrival sources is important from a marketing as well as an operations point of view since knowledge of the linkage improves forecasting performance of the model. We pose this as a missing data problem and develop a data augmentation algorithm for the Bayesian analysis. We implement the proposed algorithm to simulated and actual call center arrival data and discuss its performance.
Call center modeling Nonhomogeneous Poisson process Data augmentation Markov chain Monte Carlo
http://www.sciencedirect.com/science/article/B6VCT-4XPP11V-1/2/efebb327f94a07b93fa28a32a528b059
Landon, Joshua
Ruggeri, Fabrizio
Soyer, Refik
Murat Tarimcilar, M.
oai:RePEc:eee:ejores:v:204:y:2010:i:2:p:316-3272010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:204:y:2010:i:2:p:316-327
article
Optimum service capacity and demand management with price incentives
Service firms periodically face fluctuating demand levels. They incur high costs to handle peak demand and pay for under-utilized capacity during low demand periods. In this paper, we develop a mixed integer programming (MIP) model based on the real-life experience of a Brazilian telecommunications firm. The model determines the optimum staffing requirements with different seniority levels for employees, as well as the distribution and balancing of workload, utilizing the flexibility of some customers in their service completion day. The proposed MIP uses monetary incentives to smooth the workload by redistributing some of the peak demand, thereby increasing capacity utilization. Due to the intractable nature of optimizing the proposed MIP model, we present a heuristic solution approach. The MIP model is applied to the case of the Brazilian telecommunications firm examined. The computational work on this base case and its extensions shows that the proposed MIP model is of merit, leading to a reduction of approximately seventeen percent in the base case operating costs. Extensive computational work demonstrates that our heuristic provides quality solutions in very short computational times. The model can also be used to select new customers based on the workload, the revenue potential of these new customers, and their flexibility in accepting alternate service completion dates. The generic structure of the proposed approach allows for its application to a wide variety of service organizations facing similar capacity and demand management challenges. Such wide applicability enhances the value of our work and its expected benefits.
Mixed integer programming Workload smoothing Delivery dates Heuristics Incentives
http://www.sciencedirect.com/science/article/B6VCT-4XK45BW-1/2/e918b121affa78b514450bd72f5c91f6
Özlük, Özgür
Elimam, Abdelghani A.
Interaminense, Eduardo
oai:RePEc:eee:ejores:v:204:y:2010:i:2:p:251-2542010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:204:y:2010:i:2:p:251-254
article
Classification of Dantzig-Wolfe reformulations for binary mixed integer programming problems
In this note, we provide a classification of Dantzig-Wolfe reformulations for binary mixed integer programming problems. We specifically focus on modeling the binary conditions in the convexification approach to the Dantzig-Wolfe decomposition. For a general binary mixed integer programming problem, an extreme point of the overall problem does not necessarily correspond to an extreme point of the subproblem. Therefore, the binary conditions cannot in general be imposed on the new master problem variables but must be imposed on the original binary variables. In some cases, however, it is possible to impose the binary restrictions directly on the new master problem variables. The issue of imposing binary conditions on the original variables versus the master problem variables has not been discussed systematically in the literature for general MIP problems; most research has focused on the pure binary case. The classification indicates in which cases one can, and cannot, impose binary conditions on the new master problem variables.
Mixed integer programming Dantzig-Wolfe decomposition Reformulation
http://www.sciencedirect.com/science/article/B6VCT-4XPP11V-2/2/3401b988b140c5a60e00815a9e28a553
Jans, Raf
oai:RePEc:eee:ejores:v:204:y:2010:i:2:p:355-3652010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:204:y:2010:i:2:p:355-365
article
Income prediction in the agrarian sector using product unit neural networks
European Union financial subsidies in the agrarian sector are directly related to maintaining a sustainable farm income, so its determination using, for example, the farm gross margin is a basic element in agrarian programs for sustainable development. Using this tool, it is possible to identify the agrarian structures that need financial support and the extent to which it is needed. However, the process of determining the farm gross margin is complicated and expensive because it is necessary to find the value of all the inputs consumed and outputs produced. Considering these circumstances, the objectives of this research were to: (1) select a representative and reduced set of easy-to-collect descriptive variables to estimate the gross margin of a group of olive-tree farms in Andalusia; (2) investigate whether artificial neural network (ANN) models with two different types of basis functions (sigmoidal and product-unit) could effectively predict the gross margin of olive-tree farms; (3) compare the effectiveness of multiple linear, quadratic and robust regression models versus ANN; and (4) validate the best mathematical model obtained for gross margin prediction by analysing realistic farm and farmer scenarios. The ANN models, especially the product-unit ones, provided the most accurate gross margin predictions.
Neural networks OR in agriculture Product-unit models Regression
http://www.sciencedirect.com/science/article/B6VCT-4XC57HF-2/2/69747807b579434965a337aab24e7ebd
García-Alonso, Carlos R.
Torres-Jiménez, Mercedes
Hervás-Martínez, César
oai:RePEc:eee:ejores:v:205:y:2010:i:2:p:437-4472010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:205:y:2010:i:2:p:437-447
article
Free trial or no free trial: Optimal software product design with network effects
A common business strategy to promote product adoption in the software industry is to provide a free trial version, with limited functionalities, of the commercial product in order to increase the installed user base. The increase in the user base leads to a higher value of the software because of positive network effects. However, offering a free trial version may cannibalize some demand for the commercial software. This paper examines the tradeoff between network effects and the cannibalization effect, and aims to uncover the conditions under which firms should introduce the free trial product. We find that when network intensity is strong, it is more profitable for a software monopoly to offer a free trial than to segment the market with two versions of different qualities. In addition, this paper solves the joint decision problem of finding the optimal quality for the firm's free trial software and the optimal price of its commercial product.
Software free trial Network effects Information goods
http://www.sciencedirect.com/science/article/B6VCT-4Y52R07-5/2/a8b0253cb9ab9121208d4f6ca95d93cd
Cheng, Hsing Kenneth
Tang, Qian Candy
oai:RePEc:eee:ejores:v:205:y:2010:i:1:p:172-1852010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:205:y:2010:i:1:p:172-185
article
Neural network metamodeling for cycle time-throughput profiles in manufacturing
This paper proposes a neural network (NN) metamodeling method to generate cycle time (CT)-throughput (TH) profiles for single- and multi-product manufacturing environments. Such CT-TH profiles illustrate the trade-off relationship between CT and TH, the two critical performance measures, and hence provide a comprehensive performance evaluation of a manufacturing system. The proposed method differs from existing NN metamodeling work in three major respects: first, instead of treating an NN as a black box, the geometry of the NN is examined and utilized; second, a progressive model-fitting strategy is developed to obtain the simplest-structured NN that is adequate to capture the CT-TH relationship; third, an experiment design method, particularly suitable for NN modeling, is developed to sequentially collect simulation data for the efficient estimation of the NN models.
Discrete event simulation Response surface modeling Design of experiments Neural networks Semiconductor manufacturing Queueing
http://www.sciencedirect.com/science/article/B6VCT-4Y4R309-1/2/f16d1d7fb2141e7bc7d1b34e1f8f2f39
Yang, Feng
oai:RePEc:eee:ejores:v:205:y:2010:i:2:p:325-3312010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:205:y:2010:i:2:p:325-331
article
A search method for optimal control of a flow shop system of traditional machines
We consider a convex and nondifferentiable optimization problem for deterministic flow shop systems in which the arrival times of the jobs are known and jobs are processed in the order they arrive. The decision variables are the service times, which are to be set only once before processing the first job and cannot be altered between processes. The cost objective is the sum of regular costs on job completion times and service costs inversely proportional to the controllable service times. A finite set of subproblems, which can be solved by trust-region methods, is defined and their solutions are related to the optimal solution of the optimization problem under consideration. Exploiting these relationships, we introduce a two-phase search method which converges in a finite number of iterations. A numerical study is conducted to demonstrate the solution performance of the search method compared to a subgradient method proposed in earlier work.
Search method Controllable service times Convex programming Trust-region methods Flow shop
http://www.sciencedirect.com/science/article/B6VCT-4Y4PVM1-4/2/567e15b72ba42c41d80d5818d1b4f6d6
Selvi, Omer
Gokbayrak, Kagan
oai:RePEc:eee:ejores:v:204:y:2010:i:3:p:402-4092010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:204:y:2010:i:3:p:402-409
article
On multiobjective second order symmetric duality with cone constraints
A pair of Wolfe type multiobjective second order symmetric dual programs with cone constraints is formulated and usual duality results are established under second order invexity assumptions. These results are then used to investigate symmetric duality for minimax version of multiobjective second order symmetric dual programs wherein some of the primal and dual variables are constrained to belong to some arbitrary sets, i.e., the sets of integers. This paper points out certain omissions and inconsistencies in the earlier work of Mishra [S.K. Mishra, Multiobjective second order symmetric duality with cone constraints, European Journal of Operational Research 126 (2000) 675-682] and Mishra and Wang [S.K. Mishra, S.Y. Wang, Second order symmetric duality for nonlinear multiobjective mixed integer programming, European Journal of Operational Research 161 (2005) 673-682].
Multiobjective programming Symmetric duality Minimax Second order invexity
http://www.sciencedirect.com/science/article/B6VCT-4X7GMFC-1/2/68952488553b4288a1fbaab6a1c6a95b
Ahmad, I.
Husain, Z.
oai:RePEc:eee:ejores:v:204:y:2010:i:3:p:581-5882010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:204:y:2010:i:3:p:581-588
article
Large-scale MV efficient frontier computation via a procedure of parametric quadratic programming
Despite the volume of research conducted on efficient frontiers, in many cases it is still not the easiest thing to compute a mean-variance (MV) efficient frontier even when all constraints are linear. This is particularly true of large-scale problems having dense covariance matrices and hence they are the focus in this paper. Because standard approaches for constructing an efficient frontier one point at a time tend to bog down on dense covariance matrix problems with many more than about 500 securities, we propose as an alternative a procedure of parametric quadratic programming for more effective usage on large-scale applications. With the proposed procedure we demonstrate through computational results on problems in the 1000-3000 security range that the efficient frontiers of dense covariance matrix problems in this range are now not only solvable, but can actually be computed in quite reasonable time.
Bi-criterion Portfolio selection Parametric quadratic programming Efficient frontiers Dense covariance matrices Large-scale Hyperbolic segments
http://www.sciencedirect.com/science/article/B6VCT-4XRCRR4-2/2/cff41c4db602c6549ca4e5c0a46a6ea2
Hirschberger, Markus
Qi, Yue
Steuer, Ralph E.
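For context, a single point of an MV efficient frontier with only equality constraints has a closed-form solution via its KKT system. The sketch below is standard textbook material, not the authors' parametric quadratic programming procedure, and the three-asset data (far from the 1000-3000 security scale discussed) are invented:

```python
import numpy as np

def frontier_point(mu, sigma, target):
    """Minimum-variance weights for a target return (short selling allowed):
    minimize w' sigma w  s.t.  w' mu = target, w' 1 = 1,
    solved through the KKT linear system of the equality-constrained QP."""
    n = len(mu)
    kkt = np.zeros((n + 2, n + 2))
    kkt[:n, :n] = 2 * sigma
    kkt[:n, n], kkt[n, :n] = mu, mu
    kkt[:n, n + 1], kkt[n + 1, :n] = 1.0, 1.0
    rhs = np.concatenate([np.zeros(n), [target, 1.0]])
    return np.linalg.solve(kkt, rhs)[:n]

# invented three-asset example: expected returns and covariance matrix
mu = np.array([0.05, 0.08, 0.12])
sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])
# trace the frontier as (target return, portfolio variance) pairs
frontier = [(t, frontier_point(mu, sigma, t) @ sigma @ frontier_point(mu, sigma, t))
            for t in np.linspace(0.05, 0.12, 8)]
```

Repeating this solve for a grid of targets yields the frontier one point at a time, which is exactly the approach the paper argues becomes impractical for large dense covariance matrices.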
oai:RePEc:eee:ejores:v:205:y:2010:i:1:p:136-1502010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:205:y:2010:i:1:p:136-150
article
Optimization designs and performance comparison of two CUSUM schemes for monitoring process shifts in mean and variance
In statistical process control (SPC), when dealing with a quality characteristic x that is a variable, it is usually necessary to monitor both the mean value and the variability. This article proposes an optimization algorithm (called the holistic algorithm) to design CUSUM charts for this purpose. It facilitates the determination of the charting parameters of the CUSUM charts and considerably increases their overall detection effectiveness. A single CUSUM chart (called the ABS CUSUM chart) has been developed by the holistic algorithm and fully investigated. This chart is able to detect two-sided mean shifts and increasing variance shifts by inspecting the absolute value of the sample mean shift. The results of performance studies show that the overall performance of the ABS CUSUM chart is nearly as good as that of an optimal 3-CUSUM scheme (a scheme incorporating three individual CUSUM charts). However, since the ABS CUSUM chart is easier to design and implement, it may be more suitable for many SPC applications in which both the mean and the variance of a variable have to be monitored.
Quality control Loss function Markov processes Quality control chart Statistical process control
http://www.sciencedirect.com/science/article/B6VCT-4XVRYN1-1/2/ace1882d02d1aa22e75d5041d529ac7c
Wu, Zhang
Yang, Mei
Khoo, Michael B.C.
Yu, Fong-Jung
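A one-sided CUSUM on the absolute sample-mean deviation, the core idea behind the ABS chart, can be sketched in a few lines. The reference value k and threshold h below are arbitrary choices for illustration, not the optimized parameters the holistic algorithm would produce:

```python
def abs_cusum(sample_means, mu0, k, h):
    """CUSUM on |xbar - mu0|: accumulate excess deviation over the reference
    value k and signal once the statistic exceeds the threshold h. Because both
    a mean shift (either direction) and a variance increase inflate
    |xbar - mu0| on average, one chart covers both shift types."""
    c = 0.0
    for i, xbar in enumerate(sample_means):
        c = max(0.0, c + abs(xbar - mu0) - k)
        if c > h:
            return i  # index of the signalling sample
    return None  # no out-of-control signal

# invented data: in-control samples, then a sustained upward mean shift
in_control = [0.1, -0.2, 0.05, -0.1, 0.15, -0.05]
shifted = in_control + [1.2, 1.4, 1.1, 1.3]
```

With k = 0.5 and h = 2.0 the in-control sequence never signals, while the shifted sequence signals shortly after the shift begins.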
oai:RePEc:eee:ejores:v:205:y:2010:i:1:p:1-182010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:205:y:2010:i:1:p:1-18
article
The hybrid flow shop scheduling problem
The scheduling of flow shops with multiple parallel machines per stage, usually referred to as the hybrid flow shop (HFS), is a complex combinatorial problem encountered in many real world applications. Given its importance and complexity, the HFS problem has been intensively studied. This paper presents a literature review on exact, heuristic and metaheuristic methods that have been proposed for its solution. The paper briefly discusses and reviews several variants of the HFS problem, each in turn considering different assumptions, constraints and objective functions. Research opportunities in HFS are also discussed.
Scheduling Hybrid flow shop Review
http://www.sciencedirect.com/science/article/B6VCT-4XFFJN7-1/2/d6a44e8a526f549d622bfa4d500b22e5
Ruiz, Rubén
Vázquez-Rodríguez, José Antonio
oai:RePEc:eee:ejores:v:204:y:2010:i:3:p:463-4722010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:204:y:2010:i:3:p:463-472
article
Extensions to STaTS for practical applications of the facility layout problem
We consider a very general case of the facility layout problem, which allows various aspects appearing in real-life applications to be incorporated. These aspects include loose requirements on facilities' footprints, each of which only needs to be of rectangular shape and can optionally be restricted with respect to surface area or aspect ratio. Compared to former approaches, further generalizations of practical relevance are multiple, not necessarily rectangular workshops, exclusion zones in workshops, predefined positions of facilities, the consideration of aisles, and adherence to further restrictions such as the enforced placement of certain facilities next to an exterior wall or a minimum distance between certain pairs of facilities. Although different objectives could be applied, we especially focus on the most relevant one in practice, the minimization of transportation costs. We show that this problem can be solved heuristically using an extension of the Slicing Tree and Tabu Search (STaTS) based approach. The application of this algorithm to practical data shows its effectiveness. The paper concludes with a step-by-step guide for the application of STaTS in practice.
Facility layout problem Layout planning Slicing trees Tabu search
http://www.sciencedirect.com/science/article/B6VCT-4XPB6XK-2/2/b73f103507e096e11249b7b4cd0f99d2
Scholz, Daniel
Jaehn, Florian
Junker, Andreas
oai:RePEc:eee:ejores:v:204:y:2010:i:3:p:533-5442010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:204:y:2010:i:3:p:533-544
article
Evolutionary optimisation of noisy multi-objective problems using confidence-based dynamic resampling
Many real-world optimisation problems approached by evolutionary algorithms are subject to noise. When noise is present, the evolutionary selection process may become unstable and the convergence of the optimisation adversely affected. In this paper, we present a new technique that efficiently deals with noise in multi-objective optimisation. This technique aims at preventing the propagation of inferior solutions in the evolutionary selection due to noisy objective values. This is done by using an iterative resampling procedure that reduces the noise until the likelihood of selecting the correct solution reaches a given confidence level. To achieve an efficient utilisation of resources, the number of samples used per solution varies based on the amount of noise in the present area of the search space. The proposed algorithm is evaluated on the ZDT benchmark problems and two complex real-world problems of manufacturing optimisation. The first real-world problem concerns the optimisation of engine component manufacturing in the aviation industry, while the second concerns the optimisation of a camshaft machining line in the automotive industry. The results from the optimisations indicate that the proposed technique is successful in reducing noise, and it competes successfully with other noise handling techniques.
Evolutionary computations Multi-objective optimisation Noise Simulation
http://www.sciencedirect.com/science/article/B6VCT-4XPB6XK-1/2/5a9c55e5f7393978722d08540190a444
Syberfeldt, Anna
Ng, Amos
John, Robert I.
Moore, Philip
oai:RePEc:eee:ejores:v:204:y:2010:i:3:p:485-4952010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:204:y:2010:i:3:p:485-495
article
The reliability importance of components and prime implicants in coherent and non-coherent systems including total-order interactions
In the management of complex systems, knowledge of how components contribute to system performance is essential to the correct allocation of resources. Recent works have renewed interest in the properties of the joint (J) and differential (D) reliability importance measures. However, a common background for these importance measures has not been developed yet. In this work, we build a unified framework for the utilization of J and D in both coherent and non-coherent systems. We show that the reliability function of any system is multilinear and its Taylor expansion is exact at an order T. We then introduce a total order importance measure (DT) that coincides with the exact portion of the change in system reliability associated with any (finite or infinitesimal) change in component reliabilities. We show that DT synthesizes the Birnbaum, joint and differential importance of all orders in one unique indicator. We propose an algorithm that enables the numerical estimation of DT by varying one probability at a time, making it suitable in the analysis of complex systems. Findings demonstrate that the simultaneous utilization of DT and J provides reliability analysts with a complete dissection of system performance.
Reliability Importance measures Joint reliability importance Multilinear functions
http://www.sciencedirect.com/science/article/B6VCT-4XMKB7M-1/2/1b97e5232b0c2ab3ce5a3a5ebf16034b
Borgonovo, E.
oai:RePEc:eee:ejores:v:205:y:2010:i:1:p:42-462010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:205:y:2010:i:1:p:42-46
article
Generalized linear fractional programming under interval uncertainty
Data in many real-life engineering and economic problems suffer from inexactness. Herein we assume that we are given intervals within which the data can simultaneously and independently perturb. We consider a generalized linear fractional programming problem with interval data and present an efficient method for computing the range of optimal values. The method reduces the problem to solving from two to four real-valued generalized linear fractional programs, which can be computed in polynomial time using an appropriate interior point method solver. We also consider the inverse problem: how much can the data of a real generalized linear fractional program vary such that the optimal values do not exceed some prescribed bounds? We propose a method for calculating (often the largest possible) ranges of admissible variations; it needs to solve only two real-valued generalized linear fractional programs. We illustrate the approach on a simple von Neumann economic growth model.
Generalized linear fractional programming Interval analysis Tolerance analysis Sensitivity analysis Economic growth model
http://www.sciencedirect.com/science/article/B6VCT-4Y5GXXG-2/2/3ef3221a2282212d84226d9b028ad75b
Hladík, Milan
oai:RePEc:eee:ejores:v:205:y:2010:i:2:p:273-2792010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:205:y:2010:i:2:p:273-279
article
Cutting plane algorithms for 0-1 programming based on cardinality cuts
We present new valid inequalities for 0-1 programming problems that work in ways similar to well-known cover inequalities. Discussion and analysis of these cuts is followed by their revision and use in integer programming as a new generation of cuts that excludes not only portions of polyhedra containing noninteger points, but also parts containing some integer points that have already been explored in the search for an optimal solution. Our computational experiments demonstrate that this new approach has significant potential for solving large-scale integer programming problems.
0-1 Integer programming Valid inequalities Cutting planes
http://www.sciencedirect.com/science/article/B6VCT-4Y7P4KW-1/2/2c7e8ed5e5e9920e55743f0235defd9f
Oguz, Osman
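For comparison, the classical cover inequalities that the new cardinality cuts resemble can be generated mechanically from a 0-1 knapsack constraint. This is a generic textbook construction, not the cuts proposed in the paper:

```python
def minimal_cover(weights, capacity):
    """For a 0-1 knapsack constraint sum_i w_i x_i <= b, greedily build a cover
    C (a set whose total weight exceeds b), then shrink it to a minimal cover.
    The resulting valid inequality is sum_{i in C} x_i <= |C| - 1."""
    order = sorted(range(len(weights)), key=lambda i: -weights[i])
    cover, total = [], 0
    for i in order:
        cover.append(i)
        total += weights[i]
        if total > capacity:
            break
    if total <= capacity:
        return None  # constraint is never violated: no cover exists
    # drop items whose removal preserves the cover property (minimality)
    for i in list(cover):
        if total - weights[i] > capacity:
            cover.remove(i)
            total -= weights[i]
    return sorted(cover)

# invented constraint: 6*x0 + 5*x1 + 5*x2 + 4*x3 <= 10
cover = minimal_cover([6, 5, 5, 4], 10)
```

Here the cover {0, 1} yields the inequality x0 + x1 <= 1: items 0 and 1 together weigh 11 > 10, so no feasible 0-1 point selects both.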
oai:RePEc:eee:ejores:v:204:y:2010:i:3:p:565-5802010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:204:y:2010:i:3:p:565-580
article
Electre Tri-C: A multiple criteria sorting method based on characteristic reference actions
In this paper, a new sorting method, following a constructive decision aiding approach, is proposed. This method is called Electre Tri-C. As in any sorting method, a set of categories must be defined to represent the way in which the actions assigned to each of them should subsequently be processed. The method is appropriate for decision aiding contexts in which the categories are completely ordered and each is defined through a single characteristic reference action. The set of characteristic actions should be co-constructed through an interactive process between the analyst and the decision maker. Electre Tri-C has been conceived to verify a set of natural structural requirements (conformity, homogeneity, monotonicity, and stability), which can be viewed as its fundamental properties. The method is composed of two rules, called the descending rule and the ascending rule, which must be used conjointly (and not separately). Each of these rules selects a single category or a range of possible categories for the assignment of an action. The assignment depends on the comparison of the action to the characteristic actions according to a chosen credibility level. Numerical examples are also presented in order to illustrate the main theoretical results provided by the method.
Multiple criteria decision aiding Constructive approach Sorting Electre Tri-C Decision support
http://www.sciencedirect.com/science/article/B6VCT-4XKHD9G-2/2/79ca098812ab868b9df3a00d7ae99fda
Almeida-Dias, J.
Figueira, J.R.
Roy, B.
oai:RePEc:eee:ejores:v:205:y:2010:i:1:p:151-1582010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:205:y:2010:i:1:p:151-158
article
Robust strategies for natural gas procurement
In order to serve their customers, natural gas local distribution companies (LDCs) can select from a variety of financial and non-financial contracts. The present paper is concerned with the choice of an appropriate portfolio of natural gas purchases that would allow an LDC to satisfy its demand with a minimum tradeoff between cost and risk, while taking into account the risk associated with modeling error. We propose two types of strategies for natural gas procurement. Dynamic strategies model the procurement problem as a mean-risk stochastic program with various risk measures. Naive strategies hedge a fixed fraction of winter demand, with the hedge allocated equally between storage, futures and options. We propose a simulation framework to evaluate the proposed strategies and show that: (i) when the appropriate model for spot prices and its derivatives is used, dynamic strategies provide cheaper gas with low risk compared to naive strategies; (ii) in the presence of a modeling error, dynamic strategies are unable to control the variance of the procurement cost, although they provide a cheaper cost on average. Based on these results, we define robust strategies as convex combinations of dynamic and naive strategies, where the weight of each strategy represents the fraction of demand to be satisfied following that strategy. A mean-variance problem is then solved to obtain optimal weights and construct an efficient frontier of robust strategies that take advantage of the diversification effect.
Natural gas Risk management Procurement strategies Supply mix Stochastic programming
http://www.sciencedirect.com/science/article/B6VCT-4Y1NV3K-2/2/98bbbcfeeef5efac40612d5169324a7b
Aouam, Tarik
Rardin, Ronald
Abrache, Jawad
oai:RePEc:eee:ejores:v:205:y:2010:i:2:p:448-4582010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:205:y:2010:i:2:p:448-458
article
Cost allocation in collaborative forest transportation
Transportation planning is an important part of the supply chain or wood flow chain in forestry. There are often several forest companies operating in the same region, yet collaboration between two or more companies is rare. However, there is an increasing interest in collaborative planning, as the potential savings are large, often in the range 5-15%. There are several issues to agree on before such collaborative planning can be used in practice. A key question is how the total cost or savings should be distributed among the participants. In this paper, we study a large application in southern Sweden with eight forest companies involved in a collaboration. We investigate a number of sharing mechanisms based on economic models including the Shapley value, the nucleolus, separable and non-separable costs, shadow prices and volume weights. We also propose a new allocation method, with the aim that the participants' relative profits are as equal as possible. We use two planning models: the first is based on direct flows between supply and demand points, and the second includes backhauling. We also study how several time periods and the geographical distribution of the supply and demand nodes affect the solutions. Better planning within each company can save about 5%, and collaboration can add about another 9%, for a total of 14%. The proposed allocation method is shown to be a practical approach to sharing the overall cost/savings.
Transportation OR in natural resources Supply chain management Logistics Economics Group decisions and negotiations Linear programming Backhauling
http://www.sciencedirect.com/science/article/B6VCT-4Y9SVSK-1/2/e61ce5f0f17a2579536c0e37cfb88d85
Frisk, M.
Göthe-Lundgren, M.
Jörnsten, K.
Rönnqvist, M.
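The Shapley value, one of the sharing mechanisms investigated here, can be computed directly for small games as each player's average marginal cost over all join orders. The three-company cost function below is invented for illustration, not the paper's Swedish case data:

```python
import math
from itertools import permutations

def shapley(players, cost):
    """Shapley allocation: each player's marginal cost contribution averaged
    over all orders in which the grand coalition can be assembled."""
    alloc = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            alloc[p] += cost(coalition | {p}) - cost(coalition)
            coalition = coalition | {p}
    n_orders = math.factorial(len(players))
    return {p: v / n_orders for p, v in alloc.items()}

# hypothetical standalone transport costs and a flat collaboration discount
standalone = {"A": 100.0, "B": 80.0, "C": 60.0}
def cost(coalition):
    if not coalition:
        return 0.0
    base = sum(standalone[p] for p in coalition)
    return base * (1 - 0.1 * (len(coalition) - 1))  # assume 10% saving per extra partner

alloc = shapley(list(standalone), cost)
```

By construction the allocations sum exactly to the grand coalition's cost (efficiency), and in this game each company pays less than its standalone cost, which is what makes the collaboration acceptable to all parties.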
oai:RePEc:eee:ejores:v:204:y:2010:i:2:p:366-3752010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:204:y:2010:i:2:p:366-375
article
Launching new products through exclusive sales channels
When launching a new product, a manufacturer usually sells it through competing retailers under non-exclusive arrangements. Recently, many new products (cellphones, electronics, toys, etc.) are sold through a single sales channel via an exclusive arrangement. In this paper we present two separate models that examine these two arrangements. Each model is based on a Stackelberg game in which the manufacturer acts as the leader by setting the wholesale price and the retailers act as the followers by choosing their retail prices. For each model, we solve the Stackelberg game by determining the manufacturer's optimal wholesale price and each retailer's optimal retail price in equilibrium. Then we examine the conditions under which the manufacturer should sell the new product through an exclusive retailer. In addition, we examine the impact of postponing the wholesale price decision and the impact of demand uncertainty on the manufacturer's optimal profit under both arrangements.
Marketing/manufacturing interfaces Retail competition Channel competition
http://www.sciencedirect.com/science/article/B6VCT-4XRCRR4-1/2/d4d67548bc7f97a67aeeec624bc72ae7
Andritsos, Dimitrios A.
Tang, Christopher S.
oai:RePEc:eee:ejores:v:205:y:2010:i:1:p:127-1352010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:205:y:2010:i:1:p:127-135
article
Using MSRP to enhance the ability of rebates to control distribution channels
Manufacturers have increasingly instituted widespread mail-in rebate programs in recent years. Two primary purposes for rebates are to: (1) more directly impact consumer demand by reducing net retail price, and (2) capitalize on consumers' slippage behavior because not all consumers who intend to redeem the rebate at purchase time end up actually redeeming it. However, retailers can counteract the power of rebates to impact demand by simply raising the retail price by the amount of the manufacturer's rebate. We show that by combining a manufacturer's suggested retail price (MSRP) along with a rebate, the manufacturer can better control the channel by inhibiting the retailer's ability to raise price, particularly when consumers exhibit loss aversion. As a result, incorporating MSRP with a rebate promotion plan increases the manufacturer's profit. More surprisingly, the profit of the supply chain as a whole also increases, and the channel efficiency increases as well. In fact, contrary to results from the existing rebate literature suggesting that rebates should always be offered whenever slippage exists, we demonstrate that MSRP can actually be a more effective tool than rebates in managing retailer and consumer behavior when consumers do not have sufficient loss aversion and the slippage rate is low enough.
Supply chain management Marketing Rebates MSRP Channel control
http://www.sciencedirect.com/science/article/B6VCT-4Y0T931-2/2/38bcaeb1610fef9d9c1e8c371a80b04b
Yang, Shilei
Munson, Charles L.
Chen, Bintong
oai:RePEc:eee:ejores:v:205:y:2010:i:2:p:253-2612010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:205:y:2010:i:2:p:253-261
article
Necessary optimality conditions for nonsmooth generalized semi-infinite programming problems
This paper is devoted to the study of nonsmooth generalized semi-infinite programming problems in which the index set of the inequality constraints depends on the decision vector and all emerging functions are assumed to be locally Lipschitz. We introduce a constraint qualification based on the Mordukhovich subdifferential and derive a Fritz-John type necessary optimality condition. Finally, interrelations between the new constraint qualification and existing ones, such as the Mangasarian-Fromovitz, linear independence, and Slater constraint qualifications, are investigated.
Generalized semi-infinite programming Mordukhovich subdifferential Constraint qualification Lagrangian Optimality condition Nonsmooth optimization
http://www.sciencedirect.com/science/article/B6VCT-4Y4PVM1-1/2/712e426f76bdb1cb3da263f9d27e3736
Kanzi, N.
Nobakhtian, S.
oai:RePEc:eee:ejores:v:205:y:2010:i:1:p:195-2012010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:205:y:2010:i:1:p:195-201
article
Do forecasts expressed as prediction intervals improve production planning decisions?
A number of studies have shown that providing point forecasts to decision makers can lead to improved production planning decisions. However, point forecasts do not convey information about the level of uncertainty that is associated with forecasts. In theory, the provision of prediction intervals, in addition to point forecasts, should therefore lead to further enhancements in decision quality. To test whether this is the case in practice, participants in an experiment were asked to decide on the production levels that were needed to meet the following week's demand for a series of products. Either underproduction cost twice as much per unit as overproduction or vice versa. The participants were supplied with either a point forecast, a 50% prediction interval, or a 95% prediction interval for the following week's demand. The prediction intervals did not improve the quality of the decisions and also reduced the propensity of the decision makers to respond appropriately to the asymmetry in the loss function. A simple heuristic is suggested to allow people to make more effective use of prediction intervals. It is found that applying this heuristic to 85% prediction intervals would lead to nearly optimal decisions.
Forecasting Prediction intervals Decision making Asymmetric loss
http://www.sciencedirect.com/science/article/B6VCT-4Y1NV3K-3/2/e9cfd360387bd926513a1bbf049b1e60
Goodwin, Paul
Önkal, Dilek
Thomson, Mary
oai:RePEc:eee:ejores:v:204:y:2010:i:2:p:245-2502010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:204:y:2010:i:2:p:245-250
article
A heuristic for the one-dimensional cutting stock problem with usable leftover
A heuristic algorithm for the one-dimensional cutting stock problem with usable leftover (residual length) is presented. The algorithm consists of two procedures. The first is a linear programming procedure that fulfills the major portion of the item demand. The second is a sequential heuristic procedure that fulfills the remaining portion of the item demand. The algorithm can balance the cost of the consumed bars, the profit from leftovers and the profit from reducing the use of shorter stocks. The computational results show that the algorithm performs better than a recently published algorithm.
Cutting stock One-dimensional cutting Multi-objective Residual length
http://www.sciencedirect.com/science/article/B6VCT-4XKHD9G-7/2/9eeab8ffb3761beba6b7b2759a27e792
Cui, Yaodong
Yang, Yuli
oai:RePEc:eee:ejores:v:204:y:2010:i:3:p:473-4842010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:204:y:2010:i:3:p:473-484
article
Vendor managed inventory model for single-vendor multi-retailer supply chains
Vendor managed inventory is an integrated approach for retailer-vendor coordination, according to which the vendor decides on the appropriate inventory levels within bounds that are agreed upon in a contractual agreement between vendor and retailers. In this contract, the vendor usually incurs a penalty cost for items exceeding these bounds. The purpose of this paper is to develop a model for a supply chain with a single vendor and multiple retailers under the VMI mode of operation. This model explicitly includes the VMI contractual agreement between the vendor and retailers. The developed model can easily describe supply chains with capacity constraints by selecting a high penalty cost. Theorems are established to alleviate the complexity of the model and render the mathematics tractable. Moreover, an efficient algorithm is devised to find the global optimal solution. This algorithm reduces the computational efforts significantly. In addition, numerical experiments are conducted to show the utility of the proposed model.
Supply chain management Vendor managed inventory Multiple retailers Inventory management Karush-Kuhn-Tucker point
http://www.sciencedirect.com/science/article/B6VCT-4XS6FFW-3/2/03ebc94f0288654c052cfdbfa4798c6f
Darwish, M.A.
Odah, O.M.
oai:RePEc:eee:ejores:v:205:y:2010:i:1:p:227-2362010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:205:y:2010:i:1:p:227-236
article
What drives value creation in investment projects? An application of sensitivity analysis to project finance transactions
Evaluating the economic attractiveness of large projects often requires the development of large and complex financial models. Model complexity can prevent management from obtaining crucial information, with the risk of a suboptimal exploitation of the modelling efforts. We propose a methodology based on the so-called "differential importance measure (D)" to enhance the managerial insights obtained from financial models. We illustrate our methodology by applying it to a project finance case study. We show that the additivity property of D grants analysts and managers full flexibility in combining parameters into any group and at the desired aggregation level. We analyze investment criteria related to both the investors' and lenders' perspectives. Results indicate that exogenous factors affect investors (sponsors and lenders) in different ways, depending on whether exogenous variables are considered individually or in groups.
Risk analysis Finance: investment analysis Sensitivity analysis
http://www.sciencedirect.com/science/article/B6VCT-4XWMNC3-1/2/91573cbe4b0b7d53e99a1cadc569ce8f
Borgonovo, E.
Gatti, S.
Peccati, L.
oai:RePEc:eee:ejores:v:204:y:2010:i:2:p:336-3422010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:204:y:2010:i:2:p:336-342
article
Determining crane areas in intermodal transshipment yards: The yard partition problem
At rail-road transshipment yards, gantry cranes move containers from freight trains to trucks and vice versa. They constitute important entities in today's intermodal transportation systems. Real-world yards are often partitioned into several disjunct crane areas, so that crane interferences during container transshipment are avoided. In practice, the lengths of such crane areas are typically determined by simple rules of thumb, i.e., each crane receives an equally sized area, which might result in an unleveled division of labor among cranes and, thus, prolong train processing times. This paper provides an exact solution procedure which determines disjunct yard areas of varying size for multiple gantry cranes in polynomial runtime, so that the workload for a given pulse of trains is equally distributed among cranes. Furthermore, we investigate the potential acceleration of train processing as compared to equally sized areas in a yard simulation.
Intermodal transport Transshipment yard Container handling Crane scheduling
http://www.sciencedirect.com/science/article/B6VCT-4XNF8BM-2/2/e38d44998a9cc5ec6841cc6038d9a5eb
Boysen, Nils
Fliedner, Malte
oai:RePEc:eee:ejores:v:205:y:2010:i:2:p:368-3802010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:205:y:2010:i:2:p:368-380
article
Optimally maintaining a Markovian deteriorating system with limited imperfect repairs
We consider the problem of optimally maintaining a periodically inspected system that deteriorates according to a discrete-time Markov process and has a limit on the number of repairs that can be performed before it must be replaced. After each inspection, a decision maker must decide whether to repair the system, replace it with a new one, or leave it operating until the next inspection, where each repair makes the system more susceptible to future deterioration. If the system is found to be failed at an inspection, then it must be either repaired or replaced with a new one at an additional penalty cost. The objective is to minimize the total expected discounted cost due to operation, inspection, maintenance, replacement and failure. We formulate an infinite-horizon Markov decision process model and derive key structural properties of the resulting optimal cost function that are sufficient to establish the existence of an optimal threshold-type policy with respect to the system's deterioration level and cumulative number of repairs. We also explore the sensitivity of the optimal policy to inspection, repair and replacement costs. Numerical examples are presented to illustrate the structure and the sensitivity of the optimal policy.
Reliability Limited repairs Threshold-type policy Markov decision processes
http://www.sciencedirect.com/science/article/B6VCT-4Y5BMCT-1/2/48e5152d4a2e2985e3cb4c00f36172b1
Kurt, Murat
Kharoufeh, Jeffrey P.
oai:RePEc:eee:ejores:v:205:y:2010:i:2:p:469-4782010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:205:y:2010:i:2:p:469-478
article
A note on coordination in decentralized assembly systems with uncertain component yields
Gurnani and Gerchak [H. Gurnani, Y. Gerchak, Coordination in decentralized assembly systems with uncertain component yields, European Journal of Operational Research 176 (2007) 1559-1576] study coordination of a decentralized assembly system in which the demand of the assembler is deterministic and the component yields are random. They present incentive alignment control mechanisms under which system coordination is achieved. In this note, we extend Gurnani and Gerchak's model to the case of positive salvage value and n asymmetric suppliers, and show that the shortage penalty contract which can coordinate Gurnani and Gerchak's model no longer coordinates the extended model. Furthermore, we present a new kind of contract, surplus subsidy contract, to coordinate the extended model and prove that the profit of the supply chain under coordination can be arbitrarily divided between the component suppliers and the assembler.
Supply chain coordination Random yield Inventory Contract
http://www.sciencedirect.com/science/article/B6VCT-4XY4N0S-1/2/a87e086b7661992cfc9bffb80860fd13
Yan, Xiaoming
Zhang, Minghui
Liu, Ke
oai:RePEc:eee:ejores:v:204:y:2010:i:3:p:662-6712010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:204:y:2010:i:3:p:662-671
article
Dynamic pricing when consumers are strategic: Analysis of posted and contingent pricing schemes
We study dynamic pricing policies for a monopolist selling perishable products over a finite time horizon to strategic buyers. Buyers are strategic in the sense that they anticipate the firm's price policies. It is expensive and administratively difficult for most brick and mortar retailers to change prices, placing limits on the number of price changes and the types of pricing policies they can adopt. The simplest policy is to commit to a set of price changes. A more complex alternative is to let the price depend on sales history. We investigate two pricing schemes that we call posted and contingent pricing. Using the posted pricing scheme, the firm announces a set of prices at the beginning of the horizon. In the contingent pricing scheme, price evolution depends upon demand realization. Our focus is on the posted pricing scheme because of its ease of implementation. Counter to intuition, we find that neither a posted pricing scheme nor a contingent pricing scheme is dominant and the difference in expected revenues of these two schemes is small. Limiting the number of price changes will result in a decrease in expected revenues. We show that a multi-unit auction with a reservation price provides an upper bound for expected revenues for both pricing schemes. Numerical examples suggest that a posted pricing scheme with two or three price changes is enough to achieve revenues that are close to the upper bound. Dynamic pricing is only useful when strategic buyers perceive scarcity. We study the impact of scarcity and derive the optimal stocking levels for large markets. Finally, we investigate whether or not it is optimal for the seller to conceal inventory or sales information from buyers. A firm benefits if it does not reveal the number of units it has available for sale at the beginning of the season, or subsequently withholds information about the number of units sold.
Revenue management Dynamic pricing Customer strategic behavior
http://www.sciencedirect.com/science/article/B6VCT-4XSJVN5-2/2/1640532e6222b146d637c60c6f0b2771
Dasu, Sriram
Tong, Chunyang
oai:RePEc:eee:ejores:v:205:y:2010:i:2:p:486-4872010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:205:y:2010:i:2:p:486-487
article
Metaheuristics: From Design to Implementation, El-Ghazali Talbi. John Wiley & Sons Inc. (2009). XXI + 593 pp., ISBN 978-0-470-27858-1.
http://www.sciencedirect.com/science/article/B6VCT-4Y4XCPD-1/2/79583a1aebd011181fc3789109a587b0
Teghem, Jacques
oai:RePEc:eee:ejores:v:204:y:2010:i:3:p:613-6202010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:204:y:2010:i:3:p:613-620
article
An investigation into the relationship between size and efficiency of the Italian hospitality sector: A window DEA approach
This paper analyses the efficiency of hotels across all of the 20 regions in Italy using a data envelopment analysis (DEA). The empirical results indicate that Sardinia can be considered as a region "falling further behind", whereas some regions in the North and Centre of Italy can be regarded as "moving ahead". Using the island of Sardinia as a case study, approximately 150 firms are analysed in detail over the time span 2002-2005. Via a window DEA, both technical and scale efficiencies are computed. An efficiency comparison amongst hotels categorised by size and municipality is run. Finally, policy implications are drawn from the empirical findings that advise how to improve hotels that attained low efficiency scores.
Window data envelopment analysis Efficiency Hotel size Regional analysis
http://www.sciencedirect.com/science/article/B6VCT-4XNN5G6-3/2/06030e1d420346f909cc6d7a631e36c8
Pulina, Manuela
Detotto, Claudio
Paba, Antonello
oai:RePEc:eee:ejores:v:205:y:2010:i:2:p:401-4112010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:205:y:2010:i:2:p:401-411
article
A linear implementation of PACMAN
PACMAN (Passive and Active Compensability Multicriteria ANalysis) is a multiple criteria methodology based on a decision maker oriented notion of compensation, called compensability. A basic step of PACMAN is the construction of compensatory functions, which model intercriteria relations for each pair of criteria on the basis of compensability. In this paper we examine a simplified version of PACMAN, which uses the so-called linear compensatory functions and consistently reduces the overall complexity of its implementation in practical cases. We use Mathematica® to develop a computer-aided graphical interface that eases the interaction among the actors of the decision process at each stage of PACMAN. We also propose the possibility to perform a sensitivity analysis in this simplified version of PACMAN as a nonlinear optimization problem.
C00 D00 D81 Multiple criteria analysis Pairwise criterion comparison approach Compensation Compensability analysis Compensatory function Sensitivity analysis
http://www.sciencedirect.com/science/article/B6VCT-4Y70C5Y-1/2/e08b81acba069f0f6b29518732586e94
Angilella, Silvia
Giarlotta, Alfio
Lamantia, Fabio
oai:RePEc:eee:ejores:v:205:y:2010:i:2:p:381-3892010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:205:y:2010:i:2:p:381-389
article
Approximate analysis of load-dependent generally distributed queuing networks with low service time variability
In this paper, we present an approximate method for the solution of load-dependent, closed queuing networks having general service time distributions with low variability. The proposed technique is an extension of Marie's (1980) method. In the methodology, conditional throughputs are obtained by an iterative procedure. The iterations are repeated until an invalid result is detected or no improvements are found. We demonstrate the performance of the technique with 10 different examples. On average, the solutions have 5% or lower deviations when compared to simulation results.
Queuing networks Performance analysis Load-dependent network Conditional throughput
http://www.sciencedirect.com/science/article/B6VCT-4Y6J3VC-1/2/b8eded1400fb192ee9ec1cac5420860f
Ekren, Banu Yetkin
Heragu, Sunderesh S.
oai:RePEc:eee:ejores:v:205:y:2010:i:2:p:247-2522010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:205:y:2010:i:2:p:247-252
article
Second-order multiobjective symmetric duality with cone constraints
In this paper, we formulate Wolfe and Mond-Weir type second-order multiobjective symmetric dual problems over arbitrary cones. Weak, strong and converse duality theorems are established under η-bonvexity/η-pseudobonvexity assumptions. This work also removes several omissions in definitions, models and proofs for Wolfe type problems studied in Mishra [9]. Moreover, self-duality theorems for these pairs are obtained assuming the function involved to be skew symmetric.
Multiobjective symmetric duality η-bonvexity/η-pseudobonvexity Cones Efficient solutions Properly efficient solutions
http://www.sciencedirect.com/science/article/B6VCT-4Y52R07-1/2/3b50c8e4dc96066ddb346e8cd1f3bcab
Gulati, T.R.
Saini, Himani
Gupta, S.K.
oai:RePEc:eee:ejores:v:204:y:2010:i:3:p:545-5562010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:204:y:2010:i:3:p:545-556
article
A mesh adaptive direct search algorithm for multiobjective optimization
This work studies multiobjective optimization (MOP) of nonsmooth functions subject to general constraints. We first present definitions and optimality conditions as well as some single-objective formulations of MOP, parameterized with respect to some reference point in the space of objective functions. Next, we propose a new algorithm called MultiMads (multiobjective mesh adaptive direct search) for MOP. MultiMads generates an approximation of the Pareto front by solving a series of single-objective formulations of MOP generated using the NBI (normal boundary intersection) framework. These single-objective problems are solved using the Mads (mesh adaptive direct search) algorithm for constrained nonsmooth optimization. The Pareto front approximation is shown to satisfy some first-order necessary optimality conditions based on the Clarke calculus. MultiMads is then tested on problems from the literature with different Pareto front landscapes and on a styrene production process simulation problem from chemical engineering.
Multiobjective optimization Mesh adaptive direct search (Mads) Convergence analysis
http://www.sciencedirect.com/science/article/B6VCT-4XPYXH8-1/2/08a8dad25fbe9cee8c680af714680301
Audet, Charles
Savard, Gilles
Zghal, Walid
oai:RePEc:eee:ejores:v:204:y:2010:i:3:p:639-6472010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:204:y:2010:i:3:p:639-647
article
Sustainable vegetable crop supply problem
We consider an agricultural production problem, in which one must meet a known demand of crops while respecting ecologically-based production constraints. The problem is twofold: in order to meet the demand, one must determine the division of the available heterogeneous arable areas in plots and, for each plot, obtain an appropriate crop rotation schedule. Rotation plans must respect ecologically-based constraints such as the interdiction of certain crop successions, and the regular insertion of fallows and green manures. We propose a linear formulation for this problem, in which each variable is associated with a crop rotation schedule. The model may include a large number of variables and it is, therefore, solved by means of a column-generation approach. We also discuss some extensions to the model, in order to incorporate additional characteristics found in field conditions. A set of computational tests using instances based on real-world data confirms the efficacy of the proposed methodology.
Linear programming Crop demand Crop rotation Column-generation
http://www.sciencedirect.com/science/article/B6VCT-4XVBP61-1/2/d3362677b376f2cdfe9e9f51ddee33c3
dos Santos, Lana Mara R.
Costa, Alysson M.
Arenales, Marcos N.
Santos, Ricardo Henrique S.
oai:RePEc:eee:ejores:v:204:y:2010:i:3:p:621-6292010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:204:y:2010:i:3:p:621-629
article
Decision support for centralizing cargo at a Moroccan airport hub using stochastic multicriteria acceptability analysis
The geographical location of Morocco places it at the heart of important sea, air, rail and motorway transport routes between four continents. In this study we evaluate different alternatives to centralize multimodal cargo at a Moroccan airport hub. The choice depends on different socio-economical criteria, the geographical location, and the environmental impacts. Some of the criteria can be measured quantitatively, while for others only qualitative assessment is feasible. Furthermore, significant uncertainty is present in both the criteria measurements and the preferences. We aided this decision process using Stochastic Multicriteria Acceptability Analysis (SMAA). SMAA is a method that allows the representation of a mixture of different kinds of uncertain, imprecise and partially missing information in a consistent way. The results indicated that two of the alternatives, Benslimane and Casablanca, were superior. As a result of the analysis, the National Airport Authority of Morocco started negotiations with investors to develop the hub at Benslimane.
Transportation Air cargo hub Decision support Stochastic multicriteria acceptability analysis
http://www.sciencedirect.com/science/article/B6VCT-4XS6FFW-2/2/81d3f53a40a1de97ef61f8e8c8ecbb0e
Menou, Abdellah
Benallou, Abdelhanine
Lahdelma, Risto
Salminen, Pekka
oai:RePEc:eee:ejores:v:205:y:2010:i:1:p:205-2172010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:205:y:2010:i:1:p:205-217
article
A multi-objective multi-period stochastic programming model for public debt management
While raising debt on behalf of the government, public debt managers need to consider several possibly conflicting objectives and have to find an appropriate combination for government debt taking into account the uncertainty with regard to the future state of the economy. In this paper, we explicitly consider the underlying uncertainties with a complex multi-period stochastic programming model that captures the trade-offs between the objectives. The model is designed to aid the decision makers in formulating the debt issuance strategy. We apply an interactive procedure that guides the issuer to identify good strategies and demonstrate this approach for the public debt management problem of Turkey.
OR in government Multiple objective programming Risk analysis Stochastic programming Public debt management
http://www.sciencedirect.com/science/article/B6VCT-4XW00BG-1/2/32927c48a26e9cfda1a70834d692f783
Balibek, Emre
Köksalan, Murat
oai:RePEc:eee:ejores:v:205:y:2010:i:2:p:431-4362010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:205:y:2010:i:2:p:431-436
article
Efficiency in the Greek insurance industry
This paper employs the two-stage procedure of Simar and Wilson (2007) to analyse the effects of deregulation on the efficiency of the Greek insurance industry. The efficiency is estimated by means of data envelopment analysis (DEA). The companies are ranked according to their CRS efficiency score for the period 1994-2003. The first stage results indicate a decline in efficiency over the sample period, while the second stage results confirm that the competition for market shares is a major driver of efficiency in the Greek insurance industry.
Insurance Greece Productivity change Bootstrapped DEA
http://www.sciencedirect.com/science/article/B6VCT-4Y52R07-4/2/c01fefa64449e1e5bca5eec94e82d095
Barros, Carlos Pestana
Nektarios, Milton
Assaf, A.
oai:RePEc:eee:ejores:v:204:y:2010:i:3:p:421-4382010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:204:y:2010:i:3:p:421-438
article
Optimal policies for inventory systems with finite capacity and partially observed Markov-modulated demand and supply processes
We analyze a single-item periodic-review inventory system with random yield and finite capacity operating in a random environment. The primary objective is to extend the model of Gallego and Hu (2004) to the more general case when the environment is only partially observable. Although our analysis is specific to inventory systems, it can also be applied to production systems by replacing the fixed capacity supplier with a fixed capacity producer. Using sufficient statistics, we consider single-period, multiple-period and infinite-period problems to show that a state-dependent modified inflated base-stock policy is optimal. Moreover, we show that the multiple-period cost converges to the infinite-period cost as the length of the planning horizon increases.
Random yield Fixed capacity Random environment Modified inflated base-stock policy Dynamic programming Sufficient statistics POMDP
http://www.sciencedirect.com/science/article/B6VCT-4XMKB7M-2/2/de45b82c54b33462f3b9b6060738b62e
Arifoglu, Kenan
Özekici, Süleyman
oai:RePEc:eee:ejores:v:204:y:2010:i:2:p:255-2622010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:204:y:2010:i:2:p:255-262
article
Mean-variance analysis of supply chains under wholesale pricing and profit sharing schemes
In this paper, we explore the use of a wholesale pricing and profit sharing scheme (WPPS) for coordinating supply chains under the mean-variance (MV) decision framework. We first analytically establish the necessary and sufficient conditions for coordinating the centralized supply chain by WPPS. We then show that there exists a unique equilibrium of the Stackelberg game with WPPS in the decentralized case. After that, we discuss the information asymmetric case in which the retailer can benefit by pretending to be more risk averse. Finally, we propose a new measure for the manufacturer to prevent this cheating from happening. Insights are generated.
Supply chain management Mean-variance analysis Supply chain coordination
http://www.sciencedirect.com/science/article/B6VCT-4XHM17C-1/2/9f527e339d492e11f40634793dc05f6b
Wei, Ying
Choi, Tsan-Ming
oai:RePEc:eee:ejores:v:205:y:2010:i:2:p:390-4002010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:205:y:2010:i:2:p:390-400
article
A parallel multiple reference point approach for multi-objective optimization
This paper presents a multiple reference point approach for multi-objective optimization problems of discrete and combinatorial nature. When approximating the Pareto Frontier, multiple reference points can be used instead of traditional techniques. These multiple reference points can easily be implemented in a parallel algorithmic framework. The reference points can be uniformly distributed within a region that covers the Pareto Frontier. An evolutionary algorithm is based on an achievement scalarizing function that does not impose any restrictions with respect to the location of the reference points in the objective space. Computational experiments are performed on a bi-objective flow-shop scheduling problem. Results, quality measures as well as a statistical analysis are reported in the paper.
Multiple objective programming Parallel computing Multiple reference point approach Evolutionary computations Bi-objective flow-shop scheduling
http://www.sciencedirect.com/science/article/B6VCT-4Y4PVM1-2/2/b184c84907f4a24c938d7548838b1704
Figueira, J.R.
Liefooghe, A.
Talbi, E.-G.
Wierzbicki, A.P.
oai:RePEc:eee:ejores:v:204:y:2010:i:2:p:294-3022010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:204:y:2010:i:2:p:294-302
article
A hybrid immune multiobjective optimization algorithm
In this paper, we develop a hybrid immune multiobjective optimization algorithm (HIMO) based on the clonal selection principle. In HIMO, a hybrid mutation operator is proposed with the combination of Gaussian and polynomial mutations (GP-HM operator). The GP-HM operator adopts an adaptive switching parameter to control the mutation process, which uses relatively large steps with high probability for boundary individuals and less-crowded individuals. As the generations proceed, the probability of performing relatively large steps is reduced gradually. By this means, the exploratory capabilities are enhanced by keeping a desirable balance between global search and local search, so as to accelerate the convergence speed to the true Pareto-optimal front in a global space with many local Pareto-optimal fronts. When comparing HIMO with various state-of-the-art multiobjective optimization algorithms developed recently, simulation results show that HIMO evidently performs better.
Multiple objective programming Artificial immune systems Clonal selection principle Hybrid mutation Artificial intelligence
http://www.sciencedirect.com/science/article/B6VCT-4XH5MJ8-1/2/1303907c5303ec03c3eed6d21ac65e8d
Chen, Jianyong
Lin, Qiuzhen
Ji, Zhen
oai:RePEc:eee:ejores:v:204:y:2010:i:2:p:218-2282010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:204:y:2010:i:2:p:218-228
article
A new branch-and-price algorithm for the traveling tournament problem
The traveling tournament problem (ttp) consists of finding a distance-minimal double round-robin tournament where the number of consecutive breaks is bounded. For solving the problem exactly, we propose a new branch-and-price approach. The starting point is a new compact formulation for the ttp. The corresponding extensive formulation resulting from a Dantzig-Wolfe decomposition is identical to one given by Easton, K., Nemhauser, G., Trick, M., 2003. Solving the traveling tournament problem: a combined integer programming and constraint programming approach. In: Burke, E., De Causmaecker, P. (Eds.), Practice and Theory of Automated Timetabling IV, Volume 2740 of Lecture Notes in Computer Science, Springer Verlag Berlin/Heidelberg, pp. 100-109, who suggest solving the tour-generation subproblem by constraint programming. In contrast to their approach, our method explicitly utilizes the network structure of the compact formulation: First, the column-generation subproblem is a shortest-path problem with additional resource and task-elementarity constraints. We show that this problem can be reformulated as an ordinary shortest-path problem over an expanded network and, thus, be solved much faster. An exact variable elimination procedure then allows the reduction of the expanded networks while still guaranteeing optimality. Second, the compact formulation gives rise to supplemental branching rules, which are needed, since existing rules do not ensure integrality in all cases. Third, non-repeater constraints are added dynamically to the master problem only when violated. The result is a fast exact algorithm, which improves many lower bounds of notoriously hard ttp instances from the literature. For some instances, solutions are proven optimal for the first time.
Timetabling Sports league scheduling Traveling tournament problem Column generation Branch-and-price
http://www.sciencedirect.com/science/article/B6VCT-4XKHD9G-3/2/0d4a1951732b87cd8d98c47930f2c1dc
Irnich, Stefan
oai:RePEc:eee:ejores:v:205:y:2010:i:2:p:361-3672010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:205:y:2010:i:2:p:361-367
article
Efficient estimation of large portfolio loss probabilities in t-copula models
We consider the problem of accurately measuring the credit risk of a portfolio consisting of loans, bonds and other financial assets. One particular performance measure of interest is the probability of large portfolio losses over a fixed time horizon. We revisit the so-called t-copula that generalizes the popular normal copula to allow for extremal dependence among defaults. By utilizing the asymptotic description of how the rare event occurs, we derive two simple simulation algorithms based on conditional Monte Carlo to estimate the probability that the portfolio incurs large losses under the t-copula. We further show that the less efficient estimator exhibits bounded relative error. An extensive simulation study demonstrates that both estimators outperform existing algorithms. We then discuss a generalization of the t-copula model that allows the multivariate defaults to have an asymmetric distribution. Lastly, we show how the estimators proposed for the t-copula can be modified to estimate the portfolio risk under the skew t-copula model.
Credit risk Copula models Rare-event simulation Cross-entropy method Conditional Monte Carlo
http://www.sciencedirect.com/science/article/B6VCT-4Y4R309-4/2/744aaf85ed8a01cb5cf2804035ec77da
Chan, Joshua C.C.
Kroese, Dirk P.
oai:RePEc:eee:ejores:v:204:y:2010:i:3:p:513-5212010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:204:y:2010:i:3:p:513-521
article
From differential to difference importance measures for Markov reliability models
This paper presents the development of the differential importance measures (DIM), proposed recently for use in risk-informed decision-making, in the context of Markov reliability models. The proposed DIM are essentially based on directional derivatives. They can be used to quantify the relative contribution of a component (or a group of components, a state or a group of states) of the system to the total variation of system performance caused by changes in system parameter values. The estimation of DIM at steady state using only a single sample path of a Markov process is also investigated. A numerical example of a dynamic system is finally introduced to illustrate the use of DIM, as well as the advantages of the proposed evaluation approaches.
Reliability Sensitivity analysis Differential importance measures Markov process
http://www.sciencedirect.com/science/article/B6VCT-4XSJVN5-4/2/fa000308d6529ea1dfb679a3ca44f024
Do Van, Phuc
Barros, Anne
Bérenguer, Christophe
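As a minimal sketch of the directional-derivative idea behind DIM, the following computes a finite-difference DIM for the steady-state availability of a two-state repairable component under uniform parameter changes; the performance measure and rates are illustrative assumptions, not the paper's example.

```python
import numpy as np

def steady_state_availability(lam, mu):
    """Availability of a two-state repairable component (failure rate lam,
    repair rate mu): the steady-state probability of the 'up' state."""
    return mu / (lam + mu)

def dim_uniform(perf, params, h=1e-7):
    """Finite-difference sketch of the differential importance measure (DIM)
    under uniform parameter changes: each partial derivative of the
    performance measure, normalized so the importances sum to one."""
    base = np.asarray(params, dtype=float)
    grads = np.empty(len(base))
    for i in range(len(base)):
        x = base.copy()
        x[i] += h
        grads[i] = (perf(*x) - perf(*base)) / h
    return grads / grads.sum()
```

By construction the DIM values are additive and sum to one, which is the property that makes them convenient for ranking groups of parameters.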
oai:RePEc:eee:ejores:v:205:y:2010:i:1:p:31-412010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:205:y:2010:i:1:p:31-41
article
A hybrid rank-based evolutionary algorithm applied to multi-mode resource-constrained project scheduling problem
We consider the multi-mode resource-constrained project scheduling problem (MRCPSP), where a task has different execution modes characterized by different resource requirements. Due to the nonrenewable resources and the multiple modes, this problem is NP-hard; therefore, we implement an evolutionary algorithm that looks for a feasible solution minimizing the makespan. In this paper, we propose and investigate two new ideas. On the one hand, we transform the single-objective MRCPSP into a bi-objective one to cope with the potential violation of nonrenewable resource constraints. Relaxing the latter constraints makes it possible to visit a larger solution set and thus to simplify the evolutionary operators. On the other hand, we build the fitness function not on an a priori grid of the bi-objective space, but on an adaptive one relying on clustering techniques. This idea aims at more relevant fitness values. We show that a clustering-based fitness function can be an appealing feature in multi-objective evolutionary algorithms since it may promote diversity and avoid premature convergence of the algorithms. Clustering heuristics certainly require computation time, but they remain competitive with the classical niche-formation multi-objective genetic algorithm.
Project scheduling Resource-constrained Multiple modes Evolutionary algorithms Bi-objective approach Clustering
http://www.sciencedirect.com/science/article/B6VCT-4Y05DHT-1/2/e7e45193dca92ef88e539526c7b4bfea
Elloumi, Sonda
Fortemps, Philippe
oai:RePEc:eee:ejores:v:205:y:2010:i:2:p:422-4302010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:205:y:2010:i:2:p:422-430
article
The drivers of citations in management science journals
The number of citations is becoming an increasingly popular index for measuring the impact of a scholar's research or the quality of an academic department. One obvious question is: what factors influence the number of citations that a paper receives? This study investigates the number of citations received by papers published in six well-known management science journals. It considers factors that relate to the author(s), the article itself, and the journal. The results show that the strongest factor is the journal itself, but other factors are also significant, including the length of the paper, the number of references, the status of the first author's institution, and the type of paper, especially if it is a review. Overall, this study provides some insights into the determinants of a paper's impact that may help particular stakeholders make important decisions.
Citations Impact factors Journals Research quality
http://www.sciencedirect.com/science/article/B6VCT-4XX15SF-5/2/463088b1de1d692b12acce17d3e8e55a
Mingers, John
Xu, Fang
oai:RePEc:eee:ejores:v:205:y:2010:i:1:p:65-802010-04-15RePEc:eee:ejores
RePEc:eee:ejores:v:205:y:2010:i:1:p:65-80
article
A fuzzy linear programming based approach for tactical supply chain planning in an uncertainty environment
This paper models supply chain (SC) uncertainties by fuzzy sets and develops a fuzzy linear programming model for tactical supply chain planning in a multi-echelon, multi-product, multi-level, multi-period supply chain network. In this approach, the demand, process and supply uncertainties are jointly considered. The aim is to centralize multi-node decisions simultaneously to achieve the best use of the available resources along the time horizon so that customer demands are met at a minimum cost. This proposal is tested by using data from a real automobile SC. The fuzzy model provides the decision maker (DM) with alternative decision plans with different degrees of satisfaction.
Supply chain management Supply chain planning Uncertainty modeling Fuzzy sets
http://www.sciencedirect.com/science/article/B6VCT-4XVBP61-5/2/c84883788a439601b0e6b329c37ee21d
Peidro, David
Mula, Josefa
Jiménez, Mariano
del Mar Botella, Ma
oai:RePEc:eee:ejores:v:213:y:2011:i:1:p:124-1332011-05-26RePEc:eee:ejores
RePEc:eee:ejores:v:213:y:2011:i:1:p:124-133
article
The optimal control of just-in-time-based production and distribution systems and performance comparisons with optimized pull systems
In just-in-time (JIT) production systems, there is both input stock in the form of parts and output stock in the form of product at each stage. These activities are controlled by production-ordering and withdrawal kanbans. This paper discusses a discrete-time optimal control problem in a multistage JIT-based production and distribution system with stochastic demand and capacity, developed to minimize the expected total cost per unit of time. The problem can be formulated as an undiscounted Markov decision process (UMDP); however, the curse of dimensionality makes it very difficult to find an exact solution. The author proposes a new neuro-dynamic programming (NDP) algorithm, the simulation-based modified policy iteration method (SBMPIM), to solve the optimal control problem. The existing NDP algorithms and SBMPIM are numerically compared with a traditional UMDP algorithm for a single-stage JIT production system. It is shown that all NDP algorithms except the SBMPIM fail to converge to an optimal control. Additionally, a new algorithm for finding the optimal parameters of pull systems is proposed. Numerical comparisons between near-optimal controls computed using the SBMPIM and optimized pull systems are conducted for three-stage JIT-based production and distribution systems. UMDPs with 42 million states are solved using the SBMPIM. The pull systems discussed are the kanban, base stock, CONWIP, hybrid and extended kanban.
Production JIT-based production and distribution system Optimal control New neuro-dynamic programming algorithm New algorithm for optimizing pull systems Numerical comparisons of optimized pull systems with near optimal control
http://www.sciencedirect.com/science/article/B6VCT-52BPK1C-1/2/7cebeb69614d4573704570c7a7a13796
Ohno, Katsuhisa
oai:RePEc:eee:ejores:v:213:y:2011:i:1:p:340-3482011-05-26RePEc:eee:ejores
RePEc:eee:ejores:v:213:y:2011:i:1:p:340-348
article
Balancing the fit and logistics costs of market segmentations
Segments are typically formed to serve distinct groups of consumers with differentiated marketing mixes that better fit their specific needs and wants. However, buyers in a segment are not necessarily geographically close to one another. Serving a geographically dispersed segment with one marketing mix can increase logistics costs in the form of high transportation costs and long lead times. This study proposes a segmentation method that balances the fit of a segmentation strategy against the corresponding logistics costs. An application to the problem of segmenting a set of European regions, using consumers' store attribute preferences as a segmentation basis, suggests segment-specific retail positioning strategies that reflect different decisions about store image attributes such as price, assortment, and atmosphere. This approach designates transnational segments that require acceptable logistics costs and offer the highest possible level of within-segment homogeneity.
Marketing Segmentation Logistics costs Simulated annealing
http://www.sciencedirect.com/science/article/B6VCT-529MVCM-1/2/043df8b7b7946fece05f80b64c43fcbb
Turkensteen, Marcel
Sierksma, Gerard
Wieringa, Jaap E.
oai:RePEc:eee:ejores:v:213:y:2011:i:1:p:349-3582011-05-26RePEc:eee:ejores
RePEc:eee:ejores:v:213:y:2011:i:1:p:349-358
article
Efficiency analysis, shortage functions, arbitrage, and martingales
This paper shows that standard tools of efficiency analysis, directional distance functions, can be used to characterize the investment-returns technology. That ability to characterize the investment-returns technology and fundamental duality relationships imply that directional distance functions can be used to detect the presence of an arbitrage, to value financial assets in the absence of an arbitrage lying in the span of the market and to place bounds on the no-arbitrage values of assets lying outside the span of the market.
Efficiency analysis Arbitrage Distance functions Directional distance functions Finance Asset pricing
http://www.sciencedirect.com/science/article/B6VCT-52F85R1-1/2/702c80c19aa777b8442565d75959909e
Chambers, Robert G.
Färe, Rolf
oai:RePEc:eee:ejores:v:212:y:2011:i:3:p:445-4542011-05-26RePEc:eee:ejores
RePEc:eee:ejores:v:212:y:2011:i:3:p:445-454
article
Integrated airline crew scheduling: A bi-dynamic constraint aggregation method using neighborhoods
The integrated crew scheduling (ICS) problem consists of determining, for a set of available crew members, least-cost schedules that cover all flights and respect various safety and collective agreement rules. A schedule is a sequence of pairings interspersed by rest periods that may contain days off. A pairing is a sequence of flights, connections, and rests starting and ending at the same crew base. Given its high complexity, the ICS problem has traditionally been tackled using a sequential two-stage approach, where a crew pairing problem is solved in the first stage and a crew assignment problem in the second stage. Recently, Saddoune et al. (2010b) developed a model and a column generation/dynamic constraint aggregation method for solving the ICS problem in one stage. Their computational results showed that the integrated approach can yield significant savings in total cost and number of schedules, but requires much higher computational times than the sequential approach. In this paper, we enhance this method to obtain lower computational times. Specifically, we develop a bi-dynamic constraint aggregation method that exploits a neighborhood structure when generating columns (schedules) in the column generation method. On a set of seven instances derived from real-world flight schedules, this method reduces computational times by an average factor of 2.3 while improving the quality of the computed solutions.
OR in airlines Crew scheduling Integrated crew pairing and crew assignment Column generation Bi-dynamic constraint aggregation
http://www.sciencedirect.com/science/article/B6VCT-526DWM4-1/2/3380e94630c4b4496d7ee62a21661378
Saddoune, Mohammed
Desaulniers, Guy
Elhallaoui, Issmail
Soumis, François
oai:RePEc:eee:ejores:v:212:y:2011:i:3:p:583-5952011-05-26RePEc:eee:ejores
RePEc:eee:ejores:v:212:y:2011:i:3:p:583-595
article
Evaluating the water sector in Italy through a two stage method using the conditional robust nonparametric frontier and multivariate adaptive regression splines
The aim of this paper is to assess the efficiency of the integrated water service in Italy in recent years through a robust and flexible methodology. From a methodological point of view, this paper enhances a "two stage" method, based on ideas suggested by Florens and Simar (2005), which estimates the efficiency frontier through conditional robust models and, at the same time, bypasses the choice of a specific functional form in the second stage; the MARS (Multivariate Adaptive Regression Splines) method approximates the production function using linear splines without assuming any functional form. Applying this two stage method, despite weak assumptions on the form of the production function, we provide efficiency estimates for Italian water companies and find spatial and dimensional patterns, especially in metropolitan versus low-density areas.
Productive frontier efficiency Conditional order-m efficiency Two stage methods Multivariate adaptive regression splines Water sector
http://www.sciencedirect.com/science/article/B6VCT-524FSCP-4/2/73df61ff2d30cb9cdacdcc054d768041
Vidoli, Francesco
oai:RePEc:eee:ejores:v:213:y:2011:i:1:p:156-1652011-05-26RePEc:eee:ejores
RePEc:eee:ejores:v:213:y:2011:i:1:p:156-165
article
The simplified solution algorithm for an integrated supplier-buyer inventory model with two-part trade credit in a supply chain system
Ho et al. [Ho, C.H., Ouyang, L.Y., Su, C.H., 2008. Optimal pricing, shipment and payment policy for an integrated supplier-buyer inventory model with two-part trade credit, European Journal of Operational Research 187, 496-510] discussed the integrated inventory model with two-part trade credit and presented an algorithm to solve it. Ho et al.'s inventory model is essentially correct and interesting. However, this paper shows that the solution algorithm described in Ho et al. (2008) can be simplified further. This paper not only derives closed-form formulations for the optimal numbers of shipments but also develops different algorithms that improve those in Ho et al. (2008). Numerical examples illustrate that the algorithm locating the optimal solution is both accurate and fast.
Integrated inventory model Delay payment Cash discount Demand function Supply chain system
http://www.sciencedirect.com/science/article/B6VCT-52CG6PB-2/2/b8198a4f9b728ca6d7ebaafaa28f55f7
Chung, Kun-Jen
Liao, Jui-Jung
oai:RePEc:eee:ejores:v:213:y:2011:i:1:p:196-2042011-05-26RePEc:eee:ejores
RePEc:eee:ejores:v:213:y:2011:i:1:p:196-204
article
Support vector regression for warranty claim forecasting
Forecasting the number of warranty claims is vitally important for manufacturers/warranty providers in preparing fiscal plans. In the existing literature, a number of techniques such as log-linear Poisson models, Kalman filter, time series models, and artificial neural network models have been developed. Nevertheless, two weaknesses exist in these approaches: (1) they do not consider the fact that warranty claims reported in recent months might be more important in forecasting future warranty claims than those reported in earlier months, and (2) they are developed based on repair rates (i.e., the total number of claims divided by the total number of products in service), which can cause information loss through such an arithmetic-mean operation. To overcome these two weaknesses, this paper introduces two different approaches to forecasting warranty claims: the first is a weighted support vector regression (SVR) model and the second is a weighted SVR-based time series model. These two approaches can be applied in two scenarios: when only claim rate data are available and when original claim data are available. Two case studies are conducted to validate the two modelling approaches. On the basis of model evaluation over six-month-ahead forecasting, the results show that the proposed models exhibit performance superior to that of multilayer perceptrons, radial basis function networks, and ordinary support vector regression models.
Support vector regression Radial basis function network Warranty claims Neural networks Multilayer perceptron
http://www.sciencedirect.com/science/article/B6VCT-52C3K28-1/2/a6038561c90d176fd5ac4c2a30ac59c3
Wu, Shaomin
Akbarov, Artur
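The paper's weighted SVR is not reproduced here; as a dependency-free sketch of the underlying idea (down-weighting claims from older months), the following uses weighted ridge regression on lagged claims in place of SVR. The lag count, decay factor, and ridge penalty are illustrative assumptions.

```python
import numpy as np

def weighted_forecast(claims, lags=3, decay=0.9, ridge=1e-3):
    """One-step-ahead claim forecast from lagged claims, with exponentially
    decaying observation weights so recent months count more (weighted ridge
    regression stands in here for the paper's weighted SVR)."""
    y = np.asarray(claims, dtype=float)
    # design matrix of lagged values plus an intercept column
    X = np.column_stack([y[i:len(y) - lags + i] for i in range(lags)])
    X = np.hstack([X, np.ones((len(X), 1))])
    t = y[lags:]
    # newest observation gets weight 1, older ones decay geometrically
    w = decay ** np.arange(len(t) - 1, -1, -1)
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X + ridge * np.eye(X.shape[1]),
                           X.T @ W @ t)
    next_x = np.append(y[-lags:], 1.0)
    return float(next_x @ beta)
```

On a linearly growing claim series the fitted model simply extends the trend, which makes the weighting scheme easy to sanity-check.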
oai:RePEc:eee:ejores:v:212:y:2011:i:3:p:596-6052011-05-26RePEc:eee:ejores
RePEc:eee:ejores:v:212:y:2011:i:3:p:596-605
article
Locating specialized service capacity in a multi-hospital network
Multi-hospital systems have become very common in today's healthcare environment. However, there has been limited published research examining the opportunities and challenges of pooling specialized services to a subset of hospitals in the network. Therefore, this paper considers how hospital networks with multiple locations can leverage pooling benefits when deciding where to position specialized services, such as magnetic resonance imaging (MRI), transplants, or neonatal intensive care. Specifically, we develop an optimization model to determine how many and which of a hospital network's hospitals should be set up to deliver a specialized service. Importantly, this model takes into account both financial considerations and patient service levels. Computational results illustrate the value of optimally pooling resources across a subset of hospitals in the network versus two alternate approaches: (1) delivering the service at all locations and requiring each site to handle its own demand, or (2) locating the service at one hospital that handles all network demand.
OR in health services Supply chain management in healthcare Medical resource pooling
http://www.sciencedirect.com/science/article/B6VCT-52BWVXC-3/2/77109855d89dffcb497371989debff2f
Mahar, Stephen
Bretthauer, Kurt M.
Salzarulo, Peter A.
oai:RePEc:eee:ejores:v:212:y:2011:i:3:p:518-5282011-05-26RePEc:eee:ejores
RePEc:eee:ejores:v:212:y:2011:i:3:p:518-528
article
Stochastic uncapacitated hub location
We study stochastic uncapacitated hub location problems in which uncertainty is associated to demands and transportation costs. We show that the stochastic problems with uncertain demands or dependent transportation costs are equivalent to their associated deterministic expected value problem (EVP), in which random variables are replaced by their expectations. In the case of uncertain independent transportation costs, the corresponding stochastic problem is not equivalent to its EVP and specific solution methods need to be developed. We describe a Monte-Carlo simulation-based algorithm that integrates a sample average approximation scheme with a Benders decomposition algorithm to solve problems having stochastic independent transportation costs. Numerical results on a set of instances with up to 50 nodes are reported.
Hub location Stochastic programming Monte-Carlo sampling Benders decomposition
http://www.sciencedirect.com/science/article/B6VCT-526SP81-4/2/2719e56cb4a5fd22f861028bcdfda03a
Contreras, Ivan
Cordeau, Jean-François
Laporte, Gilbert
oai:RePEc:eee:ejores:v:213:y:2011:i:1:p:134-1462011-05-26RePEc:eee:ejores
RePEc:eee:ejores:v:213:y:2011:i:1:p:134-146
article
Simultaneous optimization of capacity and planned lead time in a two-stage production system with different customer due dates
We consider a two-stage make-to-order manufacturing system with random demands, processing times, and distributed customer due dates. Work is released to each stage based on a planned lead time. A general approach to minimizing total inventory holding and customer order tardiness cost is presented to find the optimal manufacturing capacities and planned lead times for each manufacturing stage. Expressions are derived for work-in-process inventories, finished-goods inventory, and expected backorders under the assumption of a series of M/M/1 queuing systems and exponentially distributed customer required lead times. We prove that the distribution of the customer required lead time has no influence on the optimal planned lead times whenever capacity is predefined, but it does influence the optimal capacity to invest in. For the simultaneous optimization of capacity and planned lead times, we present a numerical study showing that only marginal cost decreases can be gained by setting a planned lead time for the upstream stage, and that a considerable cost penalty is incurred if capacity and planned lead time optimization are performed sequentially.
Manufacturing Queuing Inventory Investment analysis
http://www.sciencedirect.com/science/article/B6VCT-52BWVXC-1/2/f751d464d950952bc9b40f08ecbb37c0
Altendorfer, Klaus
Minner, Stefan
oai:RePEc:eee:ejores:v:213:y:2011:i:1:p:359-3602011-05-26RePEc:eee:ejores
RePEc:eee:ejores:v:213:y:2011:i:1:p:359-360
article
Super-efficiency DEA in the presence of infeasibility: One model approach
A two-stage procedure is developed by Lee et al. (2011) [European Journal of Operational Research doi:10.1016/j.ejor.2011.01.022] to address the infeasibility issue in super-efficiency data envelopment analysis (DEA) models. We point out that their two-stage procedure can be solved in a single DEA-based model.
Data envelopment analysis (DEA) Infeasibility Super-efficiency
http://www.sciencedirect.com/science/article/B6VCT-52CYKJS-5/2/f9eb838f7519d9fb75bb83e9ca9ac8d7
Chen, Yao
Liang, Liang
oai:RePEc:eee:ejores:v:212:y:2011:i:3:p:606-6082011-05-26RePEc:eee:ejores
RePEc:eee:ejores:v:212:y:2011:i:3:p:606-608
article
A note on efficiency decomposition in two-stage data envelopment analysis
Data envelopment analysis (DEA) is a useful tool for measuring the efficiency of firms and organizations. Kao and Hwang (2008) take into account the series relationship of the two sub-processes in a two-stage production process, in which the overall efficiency of the whole process is the product of the efficiencies of the two sub-processes. To find the largest efficiency of one sub-process while maintaining the maximum overall efficiency of the whole process, Kao and Hwang (2008) propose a solution procedure for this purpose. Nevertheless, one needs to know the overall efficiency of the whole process before calculating the sub-process efficiency. In this note, we propose a method that finds the sub-process and overall efficiencies simultaneously.
Data envelopment analysis Efficiency Decomposition Two-stage
http://www.sciencedirect.com/science/article/B6VCT-52BGCRV-1/2/26d7acb7d20ddca1626eb122bdbb6492
Liu, Shiang-Tai
oai:RePEc:eee:ejores:v:213:y:2011:i:1:p:329-3392011-05-26RePEc:eee:ejores
RePEc:eee:ejores:v:213:y:2011:i:1:p:329-339
article
Comparing simulation models for market risk stress testing
The subprime crisis has reminded us that effective stress tests should not only combine subjective scenarios with historical data, but also be probabilistic. In this paper, we combine three hypothetical shocks, of varying degrees, with more than six years of daily data on USD-INR and Euro-INR. Our objective is to compare six simulation-based stress models for foreign exchange positions. We find that while volatility-weighted historical simulation is the best model for volatility persistence, jump diffusion based Monte Carlo simulation is better at capturing correlation breakdown. Loss estimates from very fat-tailed distributions are not sensitive to the severity of stress scenarios.
Risk management Volatility updation Tail diversification Simulation models Fat-tailed distributions
http://www.sciencedirect.com/science/article/B6VCT-5281SNB-3/2/eb7fda2dfb39965a8f6d7c8b83d4f1b0
Basu, Sanjay
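Among the six models the paper compares, volatility-weighted historical simulation is the simplest to sketch. The following is a minimal Hull-White-style implementation with an EWMA volatility filter; the smoothing constant 0.94 is the common RiskMetrics choice and the confidence level is illustrative, not necessarily the paper's settings.

```python
import numpy as np

def vol_weighted_hs_var(returns, alpha=0.99, lam=0.94):
    """Volatility-weighted historical simulation VaR (Hull-White style):
    each past return is rescaled by current volatility / volatility on its
    own date, with volatility tracked by an EWMA filter."""
    r = np.asarray(returns, dtype=float)
    sig2 = np.empty_like(r)
    sig2[0] = r.var()                      # seed the EWMA recursion
    for t in range(1, len(r)):
        sig2[t] = lam * sig2[t - 1] + (1 - lam) * r[t - 1] ** 2
    sig = np.sqrt(sig2)
    scaled = r * sig[-1] / sig             # re-express history at today's vol
    return -np.quantile(scaled, 1 - alpha) # loss quantile reported as positive VaR
```

The rescaling step is what lets the historical sample respond to volatility persistence, the property on which this model scores best in the paper's comparison.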
oai:RePEc:eee:ejores:v:213:y:2011:i:1:p:210-2202011-05-26RePEc:eee:ejores
RePEc:eee:ejores:v:213:y:2011:i:1:p:210-220
article
An average lexicographic value for cooperative games
For games with a non-empty core the Alexia value is introduced, a value which averages the lexicographic maxima of the core. It is seen that the Alexia value coincides with the Shapley value for convex games, and with the nucleolus for strongly compromise admissible games and big boss games. For simple flow games, clan games and compromise stable games an explicit expression and interpretation of the Alexia value is derived. Furthermore it is shown that the reverse Alexia value, defined by averaging the lexicographic minima of the core, coincides with the Alexia value for convex games and compromise stable games.
Game theory Alexia value Convexity Compromise stability Big boss and clan games
http://www.sciencedirect.com/science/article/B6VCT-52F85R1-2/2/16379e2a90c4ab9674bd19f7d4722a32
Tijs, Stef
Borm, Peter
Lohmann, Edwin
Quant, Marieke
oai:RePEc:eee:ejores:v:213:y:2011:i:1:p:66-722011-05-26RePEc:eee:ejores
RePEc:eee:ejores:v:213:y:2011:i:1:p:66-72
article
Two-machine flow shop scheduling problem with an outsourcing option
We consider a two-machine flow shop problem in which each job is processed through an in-house system or outsourced to a subcontractor. A schedule is established for the in-house jobs, and performance is measured by the makespan. Jobs processed by subcontractors require paying an outsourcing cost. The objective is to minimize the sum of the makespan and total outsourcing costs. We show that the problem is NP-hard in the ordinary sense. We consider a special case in which each job has a processing requirement, and each machine a characteristic value. In this case, the time a job occupies a machine is equal to the job's processing requirement plus a setup time equal to the characteristic value of that machine. We introduce some optimality conditions and present a polynomial-time algorithm to solve the special case.
Scheduling Outsourcing Ordered flow shop Computational complexity
http://www.sciencedirect.com/science/article/B6VCT-52D51FB-4/2/a7b08c0409eb612b6551e4214c4959c7
Choi, Byung-Cheon
Chung, Jibok
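To make the objective concrete, here is a brute-force evaluation of the problem the abstract describes: choose which jobs to outsource and in what order to run the in-house jobs, minimizing makespan plus total outsourcing cost. This enumerates all partitions and orders, so it is only a toy for tiny instances, consistent with the problem being NP-hard; it is not the paper's algorithm.

```python
import itertools

def two_machine_makespan(jobs):
    """Makespan of jobs [(p1, p2), ...] run in the given order on two machines."""
    c1 = c2 = 0
    for p1, p2 in jobs:
        c1 += p1                    # completion on machine 1
        c2 = max(c2, c1) + p2       # machine 2 waits for machine 1 and itself
    return c2

def best_cost(jobs, out_cost):
    """Minimum of (makespan of in-house jobs under the best order) plus total
    outsourcing cost, over all in-house/outsource partitions (brute force)."""
    n = len(jobs)
    best = float("inf")
    for mask in range(1 << n):
        inhouse = [jobs[i] for i in range(n) if mask >> i & 1]
        out = sum(out_cost[i] for i in range(n) if not mask >> i & 1)
        mk = min(two_machine_makespan(list(perm))
                 for perm in itertools.permutations(inhouse))
        best = min(best, mk + out)
    return best
```

With expensive subcontractors everything stays in-house; with cheap ones, outsourcing wins, which matches the trade-off the paper analyzes.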
oai:RePEc:eee:ejores:v:212:y:2011:i:3:p:464-4722011-05-26RePEc:eee:ejores
RePEc:eee:ejores:v:212:y:2011:i:3:p:464-472
article
Lower bounds for the ITC-2007 curriculum-based course timetabling problem
This paper describes an approach for generating lower bounds for the curriculum-based course timetabling problem, which was presented at the International Timetabling Competition (ITC-2007, Track 3). So far, several methods based on integer linear programming have been proposed for computing lower bounds of this minimization problem. We present a new partition-based approach that is based on the "divide and conquer" principle. The proposed approach uses iterative tabu search to partition the initial problem into sub-problems which are solved with an ILP solver. Computational outcomes show that this approach is able to improve on the current best lower bounds for 12 out of the 21 benchmark instances, and to prove optimality for 6 of them. These new lower bounds are useful to estimate the quality of the upper bounds obtained with various heuristic approaches.
Bounds Partitioning Tabu search Timetabling
http://www.sciencedirect.com/science/article/B6VCT-526SP81-5/2/31ae41254f6dade7f1eeeb79f5462122
Hao, Jin-Kao
Benlic, Una
oai:RePEc:eee:ejores:v:213:y:2011:i:1:p:270-2782011-05-26RePEc:eee:ejores
RePEc:eee:ejores:v:213:y:2011:i:1:p:270-278
article
A relational perspective of attribute reduction in rough set-based data analysis
Attribute reduction is very important in rough set-based data analysis (RSDA) because it can be used to simplify the induced decision rules without reducing the classification accuracy. The notion of reduct plays a key role in rough set-based attribute reduction. In rough set theory, a reduct is generally defined as a minimal subset of attributes that can classify the same domain of objects as unambiguously as the original set of attributes. Nevertheless, from a relational perspective, RSDA relies on a kind of dependency principle. That is, the relationship between the class labels of a pair of objects depends on component-wise comparison of their condition attributes. The larger the number of condition attributes compared, the greater the probability that the dependency will hold. Thus, elimination of condition attributes may cause more object pairs to violate the dependency principle. Based on this observation, a reduct can be defined alternatively as a minimal subset of attributes that does not increase the number of objects violating the dependency principle. While the alternative definition coincides with the original one in ordinary RSDA, it is more easily generalized to cases of fuzzy RSDA and relational data analysis.
Rough sets Decision analysis Fuzzy sets Attribute reduction Relational information system
http://www.sciencedirect.com/science/article/B6VCT-50TRX5P-1/2/c5b2bef4453399d4700b11f0ac7ebbdc
Fan, Tuan-Fang
Liau, Churn-Jung
Liu, Duen-Ren
oai:RePEc:eee:ejores:v:212:y:2011:i:3:p:497-5072011-05-26RePEc:eee:ejores
RePEc:eee:ejores:v:212:y:2011:i:3:p:497-507
article
Dynamic uncapacitated lot sizing with random demand under a fillrate constraint
This paper deals with the single-item dynamic uncapacitated lot sizing problem with random demand. We propose a model based on the "static uncertainty" strategy of Bookbinder and Tan (1988). In contrast to these authors, we use exact expressions for the inventory costs and we apply a fillrate constraint. We present an exact solution method and modify several well-known dynamic lot sizing heuristics such that they can be applied to the case of dynamic stochastic demands. A numerical experiment shows that there are significant differences in the performance of the heuristics, with the ranking of the heuristics differing from that reported for the case of deterministic demand.
Inventory Production Dynamic programming Heuristics
http://www.sciencedirect.com/science/article/B6VCT-5281SNB-1/2/3a70ed9621ad18ac5014741c9ea2e50a
Tempelmeier, Horst
Herpers, Sascha
oai:RePEc:eee:ejores:v:213:y:2011:i:1:p:238-2452011-05-26RePEc:eee:ejores
RePEc:eee:ejores:v:213:y:2011:i:1:p:238-245
article
A practical weight sensitivity algorithm for goal and multiple objective programming
This paper presents a weight sensitivity algorithm that can be used to investigate a portion of weight space of interest to the decision maker in a goal or multiple objective programme. The preferential information required from the decision maker is an initial estimate of their starting solution, with an equal weights solution being used as a default if this is not available, and preference information that will define the portion of weight space on which the sensitivity analysis is to be conducted. The different types of preferential information and how they are incorporated by the algorithm are discussed. The output of the algorithm is a set of distinct solutions that characterise the portion of weight space searched. The possible different output requirements of decision makers are detailed in the context of the algorithm. The methodology is demonstrated on two examples, one hypothetical and the other relating to predicting cinema-going behaviour. Conclusions and avenues for future research are given.
Multi-objective programming Goal programming Preferential weight choice Multiple criteria decision making
http://www.sciencedirect.com/science/article/B6VCT-52C3K28-4/2/9fcc91f00c07ca8d2a16f6098b85ef4c
Jones, Dylan
oai:RePEc:eee:ejores:v:212:y:2011:i:3:p:570-5822011-05-26RePEc:eee:ejores
RePEc:eee:ejores:v:212:y:2011:i:3:p:570-582
article
Intrafirm trade, arm's-length transfer pricing rule, and coordination failure
This paper demonstrates that uniform imposition of the arm's-length principle on transfer pricing leads to coordination failure among countries in terms of economic welfare if the countries trade products in the form of intrafirm transactions by multinational firms (MNFs). To highlight this implication, we first show that imposition of the arm's-length principle on an MNF induces it to transfer a product among subordinate divisions at marginal cost, i.e., the competitive price, which is consistent with the purpose of the principle. Nonetheless, if regulators in each country impose the principle on MNFs, all of the following economic welfare measures decrease compared with the situation where the principle is not imposed: (1) consumer welfare in each of the trading countries, (2) profit of each MNF, and thus (3) total world economic welfare. This result indicates that it is possible that enforcement of the principle has no positive effect at all in the world because economic welfare of all economic agents deteriorates when the principle is imposed. A numerical analysis demonstrates that this possibility arises in a broad range of circumstances, even including the situation where a giant economic world power and a small underdeveloped country mutually trade products. In these circumstances, an agreement among trading countries that no country imposes the arm's-length principle may encourage Pareto improvement of the world economy.
Economics Transfer pricing Noncooperative game Multinational operation Global supply chain
http://www.sciencedirect.com/science/article/B6VCT-51XY05N-1/2/86fecdd6d13ffb875a1abacacadd8d31
Matsui, Kenji
oai:RePEc:eee:ejores:v:213:y:2011:i:1:p:246-2592011-05-26RePEc:eee:ejores
RePEc:eee:ejores:v:213:y:2011:i:1:p:246-259
article
A simple method to improve the consistency ratio of the pair-wise comparison matrix in ANP
Tests of consistency for pair-wise comparison matrices have been studied extensively since AHP was introduced by Saaty in the 1970s. However, existing methods are either too complicated to be applied when revising an inconsistent comparison matrix, or fail to preserve most of the original comparison information because they construct a new pair-wise comparison matrix. Such methods might work for AHP but not for ANP, as the comparison matrix of ANP needs to be strictly consistent. To improve the consistency ratio, this paper proposes a simple method, which combines the theorem of matrix multiplication, the vector dot product, and the definition of a consistent pair-wise comparison matrix, to identify the inconsistent elements. The correctness of the proposed method is proved mathematically. Experimental studies have also shown that the proposed method is accurate and efficient in the decision maker's revising process to satisfy the consistency requirements of AHP/ANP.
Multiple criteria analysis Inconsistency identification Analytic network process (ANP) Pair-wise comparison matrix Order reduction Matrix multiplication
http://www.sciencedirect.com/science/article/B6VCT-52CG6PB-1/2/4dd1d33b5e9809303e413cf11e0d9d3b
Ergu, Daji
Kou, Gang
Peng, Yi
Shi, Yong
oai:RePEc:eee:ejores:v:213:y:2011:i:1:p:260-2692011-05-26RePEc:eee:ejores
RePEc:eee:ejores:v:213:y:2011:i:1:p:260-269
article
Detecting relevant variables and interactions in supervised classification
The widely used Support Vector Machine (SVM) method has been shown to yield good results in Supervised Classification problems. When interpretability is an important issue, classification methods such as Classification and Regression Trees (CART) might be more attractive, since they are designed to detect the important predictor variables and, for each predictor variable, the critical values that are most relevant for classification. However, when interactions between variables strongly affect the class membership, CART may yield misleading information. Extending previous work of the authors, in this paper an SVM-based method is introduced. The numerical experiments reported show that our method is competitive with SVM and CART in terms of misclassification rates, and, at the same time, is able to detect critical values and variable interactions which are relevant for classification.
Supervised classification Interactions Support vector machines Binarization
http://www.sciencedirect.com/science/article/B6VCT-4YMPX6Y-1/2/9dbf90272ee797d1452171aea43cc4db
Carrizosa, Emilio
Martín-Barragán, Belén
Morales, Dolores Romero
oai:RePEc:eee:ejores:v:213:y:2011:i:1:p:83-952011-05-26RePEc:eee:ejores
RePEc:eee:ejores:v:213:y:2011:i:1:p:83-95
article
An inventory model for slow moving items subject to obsolescence
In this paper, we consider a continuous review inventory system of a slow moving item for which the demand rate drops to a lower level at a known future point in time. The inventory system is controlled according to a one-for-one replenishment policy with a fixed lead time. Adapting to the lower demand is achieved by changing the control policy in advance and letting the demand take away the excess stocks. We show that the timing of the control policy change primarily determines the tradeoff between backordering penalties and obsolescence costs. We propose an approximate solution for the optimal time to shift to the new control policy that minimizes the expected total cost during the transient period. We find that the advance policy change results in significant cost savings and that the approximation yields near optimal expected total costs.
Inventory control Spare parts Obsolescence Advance policy change Excess stock Installed base
http://www.sciencedirect.com/science/article/B6VCT-525GWS6-1/2/418213a4c48db979e4251e3fa4cc6ec0
Pinçe, Çerag
Dekker, Rommert
oai:RePEc:eee:ejores:v:213:y:2011:i:1:p:290-3082011-05-26RePEc:eee:ejores
RePEc:eee:ejores:v:213:y:2011:i:1:p:290-308
article
Improving operational effectiveness of tactical master plans for emergency and elective patients under stochastic demand and capacitated resources
This paper develops a two-stage planning procedure for master planning of elective and emergency patients while making the best use of the available hospital resources. Four types of resources are considered: the operating theatre, beds in the medium and intensive care units, and nursing hours in the intensive care unit. A tactical plan is obtained by minimizing the deviations of resource consumption from the target levels of resource utilization, following a goal programming approach. The MIP formulation used to obtain this tactical plan is specifically designed to account for emergency care, since it allows some capacity to be reserved for emergency patients and permits capacity to be exceeded. To deal with the deviation between the elective patients who actually arrive and the average number of patients on which the tactical plan is based, we consider planning a higher number of patients than the average to create operating slots in the tactical plan (slack planning). These operating slots are then filled in the operational plan following several flexibility rules. We consider three options for slack planning that lead to three different tactical plans, to which we apply three flexibility rules, finally obtaining nine alternative weekly schedules of elective patients. We then develop an algorithm that modifies this schedule on a daily basis to account for emergency patients' arrivals. Scheduled elective patients may be cancelled and emergency patients may be sent to other hospitals. Cancellation rules for both types of patients rely on the possibility of exceeding the available capacities. Several performance indicators are defined to assess patient service and hospital efficiency. Simulation results show a trade-off between hospital efficiency and patient service.
Operating theatre planning Intensive care Emergency and elective patients Operational strategies Master surgical plan Goal programming
http://www.sciencedirect.com/science/article/B6VCT-52B116T-1/2/c892ab35c627399464fb7a2e73eca021
Adan, Ivo
Bekkers, Jos
Dellaert, Nico
Jeunet, Jully
Vissers, Jan
oai:RePEc:eee:ejores:v:213:y:2011:i:1:p:107-1182011-05-26RePEc:eee:ejores
RePEc:eee:ejores:v:213:y:2011:i:1:p:107-118
article
Optimizing stochastic production-inventory systems: A heuristic based on simulation and regression analysis
We present a heuristic optimization method for stochastic production-inventory systems that defy analytical modelling and optimization. The proposed heuristic takes advantage of simulation while at the same time minimizes the impact of the dimensionality curse by using regression analysis. The heuristic was developed and tested for an oil and gas company, which decided to adopt the heuristic as the optimization method for a supply-chain design project. To explore the performance of the heuristic in general settings, we conducted a simulation experiment on 900 test problems. We found that the average cost error of using the proposed heuristic was reasonably low for practical applications.
Production Inventory Simulation Regression analysis Supply-chain management
http://www.sciencedirect.com/science/article/B6VCT-529MVCM-4/2/d3b0b98d1624748fe2c89a3a029f31d7
Arreola-Risa, Antonio
Giménez-García, Víctor M.
Martínez-Parra, José Luis
oai:RePEc:eee:ejores:v:212:y:2011:i:3:p:482-4962011-05-26RePEc:eee:ejores
RePEc:eee:ejores:v:212:y:2011:i:3:p:482-496
article
The tree representation for the pickup and delivery traveling salesman problem with LIFO loading
The feasible solutions of the traveling salesman problem with pickup and delivery (TSPPD) are commonly represented by vertex lists. However, when the TSPPD is required to follow a policy whereby loading and unloading operations must be performed in a last-in-first-out (LIFO) manner, we show that its feasible solutions can be represented by trees. Consequently, we develop a novel variable neighborhood search (VNS) heuristic for the TSPPD with last-in-first-out loading (TSPPDL) involving several search operators based on the tree data structure. Extensive experiments suggest that our VNS heuristic is superior to the current best heuristics for the TSPPDL in terms of solution quality, while requiring no more computing time as the size of the problem increases.
Traveling salesman Pickup and delivery Last-in-first-out loading Tree data structure Variable neighborhood search
http://www.sciencedirect.com/science/article/B6VCT-524WF4J-5/2/a122814c830a9ff5d5badbdc1d94fd6b
Li, Yongquan
Lim, Andrew
Oon, Wee-Chong
Qin, Hu
Tu, Dejian
oai:RePEc:eee:ejores:v:212:y:2011:i:3:p:433-4442011-05-26RePEc:eee:ejores
RePEc:eee:ejores:v:212:y:2011:i:3:p:433-444
article
Dominance rules in combinatorial optimization problems
The aim of this paper is to study the concept of a "dominance rule" in the context of combinatorial optimization. A dominance rule is established in order to reduce the solution space of a problem by adding new constraints to it, either in a procedure that aims to reduce the domains of variables, or directly in building interesting solutions. Dominance rules have been extensively used over the last 50 years. Surprisingly, to our knowledge, no detailed description of them can be found in the literature other than a few short formal descriptions in the context of enumerative methods. We therefore propose an investigation into what dominance rules are. We first provide a definition of a dominance rule with its different nuances. Next, we analyze how dominance rules are generally formulated and what the consequences of such formulations are. Finally, we enumerate the common characteristics of dominance rules encountered in the literature and in the usual process of solving combinatorial optimization problems.
Combinatorial optimization Dominance rules Constraints Modeling
http://www.sciencedirect.com/science/article/B6VCT-51FXR6Y-1/2/9905af04ad2a059f025b89ff26655cdc
Jouglet, Antoine
Carlier, Jacques
oai:RePEc:eee:ejores:v:213:y:2011:i:1:p:1-142011-05-26RePEc:eee:ejores
RePEc:eee:ejores:v:213:y:2011:i:1:p:1-14
article
Remaining useful life estimation - A review on the statistical data driven approaches
Remaining useful life (RUL) is the useful life left on an asset at a particular time of operation. Its estimation is central to condition based maintenance and prognostics and health management. The RUL is typically random and unknown, and as such it must be estimated from available sources of information, such as that obtained through condition and health monitoring. Research on how best to estimate the RUL has gained popularity recently due to the rapid advances in condition and health monitoring techniques. However, owing to its complicated relationship with observable health information, there is no single best approach that can be used universally to achieve the best estimate. This paper therefore reviews recent modeling developments for estimating the RUL. The review is centred on statistical data driven approaches which rely only on available past observed data and statistical models. The approaches are classified into two broad types of models: those that rely on directly observed state information of the asset, and those that do not. We systematically review the models and approaches reported in the literature and finally highlight future research challenges.
Maintenance Remaining useful life Brownian motion Stochastic filtering Proportional hazards model Markov
http://www.sciencedirect.com/science/article/B6VCT-51J9DJY-2/2/54e8cc69e971b892bf4b1078eb82e0d6
Si, Xiao-Sheng
Wang, Wenbin
Hu, Chang-Hua
Zhou, Dong-Hua
oai:RePEc:eee:ejores:v:213:y:2011:i:1:p:309-3192011-05-26RePEc:eee:ejores
RePEc:eee:ejores:v:213:y:2011:i:1:p:309-319
article
Solving a real-world vehicle routing problem with multiple use of tractors and trailers and EU-regulations for drivers arising in air cargo road feeder services
In this paper we present two approaches for solving a real-world vehicle routing problem arising in the air cargo road feeder service business. The problem is to combine transportation tasks from a given timetable into trips, which have to be assigned to tractors and which must be operable by tractor drivers in compliance with the restrictive rules on driving times from EC Regulation No. 561/2006. Tractor trips which start and end at the hub can be combined into multiple-trips operated by the same tractor. Also, each trip must be assigned a trailer that is compatible with all tasks in the trip. The primary objective is to minimize the number of required tractors, i.e. the number of multiple-trips. The methods developed are currently applied in practice.
Vehicle routing Air cargo services EU-regulation Multiple-trips
http://www.sciencedirect.com/science/article/B6VCT-52FDST2-2/2/f70747940439850c16b81dd0824753b2
Derigs, Ulrich
Kurowsky, René
Vogel, Ulrich
oai:RePEc:eee:ejores:v:213:y:2011:i:1:p:119-1232011-05-26RePEc:eee:ejores
RePEc:eee:ejores:v:213:y:2011:i:1:p:119-123
article
The impossibility of convex constant returns-to-scale production technologies with exogenously fixed factors
The extensions to the variable (VRS) and the constant (CRS) returns-to-scale models developed by Banker and Morey are considered among the main approaches to the incorporation of exogenously fixed factors in models of data envelopment analysis (DEA). Recently, Syrjänen showed that the Banker and Morey CRS technology is not convex. Taking into account that its subset VRS technology is explicitly assumed convex, this observation leads to difficulties with explaining the fundamental production assumptions of the CRS extension. Motivated by the example of Syrjänen, the contribution of this paper is twofold. First, we show that the nonconvex Banker and Morey CRS technology is nevertheless a suitable reference technology for the assessment of scale efficiency. Second, we ask if a convex technology could be constructed that would "correct" the nonconvexity of the CRS technology of Banker and Morey. The answer to this is negative: one consequence of assuming both convexity and ray unboundedness with fixed exogenous factors is that we can always "mix-and-match" discretionary and nondiscretionary factors taken from different units, arriving at totally unrealistic production plans. This demonstrates that generally there exists no meaningful convex CRS technology with exogenously fixed factors that can be used in its own right, apart from its use as a reference technology in the measurement of scale efficiency.
Data envelopment analysis Exogenous factors Discretionary factors Returns to scale
http://www.sciencedirect.com/science/article/B6VCT-52B116T-2/2/575c489e90b059cb0261b1139edb523b
Podinovski, Victor V.
Bouzdine-Chameeva, Tatiana
oai:RePEc:eee:ejores:v:213:y:2011:i:1:p:180-1952011-05-26RePEc:eee:ejores
RePEc:eee:ejores:v:213:y:2011:i:1:p:180-195
article
Impact of some parameters on investments in oligopolistic electricity markets
It is seldom the case that one has the opportunity to compare investments as projected by a long-term multi-period model to what is eventually realized in practice. Further, although sensitivity analysis is in common use in any optimization setting, the impact of some parameters on strategic investments is not yet fully assessed in the context of the deregulated electricity industry. Starting with a benchmark model of the Finnish industry, we precisely explore the impact on equilibrium investments of varying such parameters as direct- and cross-price elasticities, the length of the planning horizon, and the depreciation rate of capacity. We run the model with different parameter values and compare the predicted equilibrium with what companies have actually done. The model is a stochastic dynamic game involving three players and played over a ten-year period. Our results show that the depreciation rate and the planning horizon have a notable effect on investment levels, whereas price elasticities seem to play a lesser role. Although the model's results align rather well with total industry investments, they diverge at the level of individual companies. This may be due to the cost parameter used and/or to the open-loop information structure adopted in the computations. In any event, these results should be of methodological and practical interest to scholars and practitioners involved in strategic investment in the electricity industry.
Investment dynamics Finnish electricity market Uncertainty Dynamic games S-adapted Open-Loop Nash Equilibrium Interdependent market segments
http://www.sciencedirect.com/science/article/B6VCT-52C8FVH-1/2/2b78edbfcaf597d5af0432d942a0c747
Pineau, Pierre-Olivier
Rasata, Hasina
Zaccour, Georges
oai:RePEc:eee:ejores:v:213:y:2011:i:1:p:320-3282011-05-26RePEc:eee:ejores
RePEc:eee:ejores:v:213:y:2011:i:1:p:320-328
article
Trade-offs between target hardening and overarching protection
Defenders concerned about protecting multiple targets can either protect them individually (through target hardening), or collectively (through overarching protections such as border security, public health, emergency response, or intelligence). Decision makers may find it relatively straightforward to choose which targets to harden, but are likely to find it more difficult to compare seemingly incommensurate forms of protection - e.g., target hardening, versus a reduction in the likelihood of weapons being smuggled across the border. Unfortunately, little previous research has addressed this question, and fundamental research is needed to provide guidance and practical solution approaches. In this paper, we first develop a model to optimally allocate resources between target hardening and overarching protection, then investigate the factors affecting the relative desirability of target hardening versus overarching protection, and finally apply our model to a case study involving critical assets in Wisconsin. The case study demonstrates the value of our method by showing that the optimal solution obtained using our model is in some cases substantially better than the historical budget allocation.
Decision analysis Game theory Resource allocation Terrorism Natural disaster
http://www.sciencedirect.com/science/article/B6VCT-52G22YV-2/2/9d66798f439adf611beb207192d91ec1
Haphuriwat, N.
Bier, V.M.
oai:RePEc:eee:ejores:v:213:y:2011:i:1:p:73-822011-05-26RePEc:eee:ejores
RePEc:eee:ejores:v:213:y:2011:i:1:p:73-82
article
Multi-mode resource-constrained project scheduling using RCPSP and SAT solvers
This paper reports on a new solution approach for the well-known multi-mode resource-constrained project scheduling problem (MRCPSP). This problem type aims at the selection of a single activity mode from a set of available modes in order to construct a precedence and (renewable and non-renewable) resource feasible project schedule with a minimal makespan. The problem type is known to be NP-hard and has been solved using various exact as well as (meta-)heuristic procedures. The new algorithm splits the problem type into a mode assignment step and a single-mode project scheduling step. The mode assignment step is solved by a satisfiability (SAT) problem solver and returns a feasible mode selection to the project scheduling step. The project scheduling step is solved using an efficient meta-heuristic procedure from the literature for the resource-constrained project scheduling problem (RCPSP). However, unlike many traditional meta-heuristic methods in the literature for solving the MRCPSP, the new approach executes these two steps in one run, relying on a single priority list. Straightforward adaptations of the pure SAT solver, using pseudo-Boolean non-renewable resource constraints, have led to a high quality solution approach with reasonable computational times. Computational results show that the procedure reports similar or sometimes even better solutions than those found by other procedures in the literature, although it often requires a higher CPU time.
Project scheduling SAT Multi-mode RCPSP
http://www.sciencedirect.com/science/article/B6VCT-52CYKJS-1/2/ebac0ec1c513a9a3c43f73970e082268
Coelho, José
Vanhoucke, Mario
oai:RePEc:eee:ejores:v:212:y:2011:i:3:p:473-4812011-05-26RePEc:eee:ejores
RePEc:eee:ejores:v:212:y:2011:i:3:p:473-481
article
Procedures for the Time and Space constrained Assembly Line Balancing Problem
The Time and Space constrained Assembly Line Balancing Problem (TSALBP) is a variant of the classical Simple Assembly Line Balancing Problem that additionally accounts for the space requirements of machinery and assembled parts. The present work proposes an adaptation of the Bounded Dynamic Programming (BDP) method to solve the TSALBP variant with fixed cycle time and area availability. Additionally, different lower bounds for the simple case are extended to support the BDP method as well as to assess the quality of the obtained solutions. Our results indicate that the proposed bounds and solution procedures outperform any other previous approach found in the literature.
Manufacturing Assembly Line Balancing Lower bounds Column generation Bounded Dynamic Programming
http://www.sciencedirect.com/science/article/B6VCT-523M8CC-3/2/a2a92a1a38c6ccdab116220ed47ac1f3
Bautista, Joaquín
Pereira, Jordi
oai:RePEc:eee:ejores:v:213:y:2011:i:1:p:166-1792011-05-26RePEc:eee:ejores
RePEc:eee:ejores:v:213:y:2011:i:1:p:166-179
article
A multi-stage stochastic programming approach in master production scheduling
Master Production Schedules (MPS) are widely used in industry, especially within Enterprise Resource Planning (ERP) software. The classical approach for generating MPS assumes infinite capacity, fixed processing times, and a single scenario for demand forecasts. In this paper, we question these assumptions and consider a problem with finite capacity, controllable processing times, and several demand scenarios instead of just one. We use a multi-stage stochastic programming approach in order to maximize the expected profit given the demand scenarios. Controllable processing times enlarge the solution space so that the limited capacity of production resources is utilized more effectively. We propose an effective formulation that enables an extensive computational study. Our computational results clearly indicate that instead of relying on relatively simple heuristic methods, multi-stage stochastic programming can be used effectively to solve MPS problems, and that controllability increases the performance of multi-stage solutions.
Stochastic programming Master production scheduling Flexible manufacturing Controllable processing times
http://www.sciencedirect.com/science/article/B6VCT-529MVCM-5/2/893ac9e7d638e2eadd70a2a3b46417c9
Körpeoglu, Ersin
Yaman, Hande
Selim Aktürk, M.
oai:RePEc:eee:ejores:v:212:y:2011:i:3:p:560-5692011-05-26RePEc:eee:ejores
RePEc:eee:ejores:v:212:y:2011:i:3:p:560-569
article
Competition strategy and efficiency evaluation for decision making units with fixed-sum outputs
This paper develops a DEA (data envelopment analysis) model to accommodate competition over outputs. In the proposed model, the total output of all decision making units (DMUs) is fixed, and DMUs compete with each other to maximize their self-rated DEA efficiency score. In the presence of competition over outputs, the best-practice frontier deviates from the classical DEA frontier. We also compute the efficiency scores using the proposed fixed-sum output DEA (FSODEA) models, and discuss the competition strategy selection rule. The model is illustrated using a hypothetical data set under the constant returns to scale assumption and medal data from the 2000 Sydney Olympics under the variable returns to scale assumption.
Data envelopment analysis (DEA) Efficiency evaluation Competition Fixed-sum output
http://www.sciencedirect.com/science/article/B6VCT-529MVCM-3/2/34a67b0e2482ecb8eb9b78b52c2e5154
Yang, Feng
Wu, Desheng Dash
Liang, Liang
O'Neill, Liam
oai:RePEc:eee:ejores:v:213:y:2011:i:1:p:52-652011-05-26RePEc:eee:ejores
RePEc:eee:ejores:v:213:y:2011:i:1:p:52-65
article
An investigation into two bin packing problems with ordering and orientation implications
This paper considers variants of the one-dimensional bin packing (and stock cutting) problem in which both the ordering and the orientation of items in a container influence the validity and quality of a solution. Two new real-world problems of this type are introduced: the first involves the creation of wooden trapezoidal-shaped trusses for use in the roofing industry; the second requires the cutting and scoring of rectangular pieces of cardboard in the construction of boxes. To tackle these problems, two variants of a local search-based approximation algorithm are proposed: the first attempts to determine item ordering and orientation via simple heuristics; the second employs more accurate but costly branch-and-bound procedures. We investigate the inevitable trade-off between speed and accuracy that occurs with these variants and highlight the circumstances under which each scheme is advantageous.
Packing Cutting Grouping problems Local search Heuristics
http://www.sciencedirect.com/science/article/B6VCT-52D51FB-3/2/7b8e391e793664e423febf9f304284b4
Lewis, R.
Song, X.
Dowsland, K.
Thompson, J.
oai:RePEc:eee:ejores:v:213:y:2011:i:1:p:221-2372011-05-26RePEc:eee:ejores
RePEc:eee:ejores:v:213:y:2011:i:1:p:221-237
article
Optimal investment under operational flexibility, risk aversion, and uncertainty
Traditional real options analysis addresses the problem of investment under uncertainty assuming a risk-neutral decision maker and complete markets. In reality, however, decision makers are often risk averse and markets are incomplete. We confirm that risk aversion lowers the probability of investment and demonstrate how this effect can be mitigated by incorporating operational flexibility in the form of embedded suspension and resumption options. Although such options facilitate investment, we find that the likelihood of investing is still lower compared to the risk-neutral case. Risk aversion also increases the likelihood that the project will be abandoned, although this effect is less pronounced. Finally, we illustrate the impact of risk aversion on the optimal suspension and resumption thresholds and the interaction among risk aversion, volatility, and optimal decision thresholds under complete operational flexibility.
Decision analysis Investment under uncertainty Real options Operational flexibility Risk aversion
http://www.sciencedirect.com/science/article/B6VCT-52BWVXC-2/2/69a407500b35b16d6b4ee63dd27e3c59
Chronopoulos, Michail
De Reyck, Bert
Siddiqui, Afzal
oai:RePEc:eee:ejores:v:212:y:2011:i:3:p:609-6092011-05-26RePEc:eee:ejores
RePEc:eee:ejores:v:212:y:2011:i:3:p:609-609
article
On Replenishment Rules, Forecasting and the Bullwhip Effect in Supply Chains, Stephen M. Disney, Marc R. Lambrecht (Eds.). Foundations and Trends® in Technology, Information and Operations Management (2008). Boston-Delft, €70.00, Book Version (ISBN: 978-1-60198-132-5); €100, E-book Version (ISBN: 978-1-60198-133-2), pp. xii and 83.
http://www.sciencedirect.com/science/article/B6VCT-52DB30Y-1/2/03511a439dad6f8e265f03566ca2389a
Cannella, Salvatore
Ciancimino, Elena
Framinan, Jose M.
oai:RePEc:eee:ejores:v:213:y:2011:i:1:p:147-1552011-05-26RePEc:eee:ejores
RePEc:eee:ejores:v:213:y:2011:i:1:p:147-155
article
Network DEA model for supply chain performance evaluation
This paper constructs an alternative network DEA model that embodies the internal structure of the supply chain for performance evaluation. We take the perspective of the organization mechanism to deal with the complex interactions within a supply chain. Three different network DEA models are introduced under the concepts of centralized, decentralized and mixed organization mechanisms, respectively. Efficiency analyses, including the relationship between the supply chain and its divisions and the relationships among the three organization mechanisms, are discussed. As a further extension, we investigate internal resource waste in the supply chain.
Data envelopment analysis Network DEA Supply chain management Performance evaluation
http://www.sciencedirect.com/science/article/B6VCT-52C3K28-2/2/4ac91ad26e009193a28ed4fb2eede4ee
Chen, Ci
Yan, Hong
oai:RePEc:eee:ejores:v:213:y:2011:i:1:p:24-362011-05-26RePEc:eee:ejores
RePEc:eee:ejores:v:213:y:2011:i:1:p:24-36
article
Routing open shop and flow shop scheduling problems
We consider a generalization of the classical open shop and flow shop scheduling problems where the jobs are located at the vertices of an undirected graph and the machines, initially located at the same vertex, have to travel along the graph to process the jobs. The objective is to minimize the makespan. In the tour-version the makespan is the time by which each machine has processed all jobs and returned to the initial location, while in the path-version it is the maximum completion time of the jobs. We present improved approximation algorithms for various cases of the open shop problem on a general graph, and for the tour-version of the two-machine flow shop problem on a tree. Also, we prove that both versions of the latter problem are NP-hard, which answers an open question posed in the literature.
Scheduling Routing Open shop Flow shop Complexity Approximation algorithm
http://www.sciencedirect.com/science/article/B6VCT-529MVCM-2/2/6a62698ec688797fb71b5b6c55a35c06
Yu, Wei
Liu, Zhaohui
Wang, Leiyang
Fan, Tijun
oai:RePEc:eee:ejores:v:212:y:2011:i:3:p:508-5172011-05-26RePEc:eee:ejores
RePEc:eee:ejores:v:212:y:2011:i:3:p:508-517
article
Bank-level estimates of market power
The aim of this study is to provide an empirical methodology for the estimation of market power of individual banks. The new method employs the well-known model of Panzar and Rosse (1987) and proposes its estimation using the local regression technique. Local regression yields coefficient estimates equal to the number of observations and, thus, market power is estimated for each bank at each point in time. In addition, a number of restrictive assumptions regarding the properties of the production function of banks are relaxed. A panel of banks from transition countries that has been recently employed by Delis (2010) to obtain market power estimates using the Panzar and Rosse model at the country level is used for comparative purposes. We find that country averages of the bank-level results exhibit a very close relationship with standard, industry-level Panzar-Rosse estimates. However, the empirical results suggest that many banks in countries with fairly competitive banking systems deviate significantly from the country averages and that market power varies substantially across banks in each country.
Market power Bank-level Local regression
http://www.sciencedirect.com/science/article/B6VCT-524WF4J-4/2/9a71191c5dd5a1e675e52b1534b4dc49
Brissimis, Sophocles N.
Delis, Manthos D.
oai:RePEc:eee:ejores:v:212:y:2011:i:3:p:529-5342011-05-26RePEc:eee:ejores
RePEc:eee:ejores:v:212:y:2011:i:3:p:529-534
article
Assignment markets that are uniquely determined by their core
A matrix A defines an assignment market, where each row represents a buyer and each column a seller. If buyer i is matched with seller j, the market produces aij units of utility. Quint (1991) points out that usually many different assignment matrices exist that define markets with the same core and poses the question of when the matrix is uniquely determined by the core of the related market. We characterize these matrices in terms of a strong form of the doubly dominant diagonal property. A matching between buyers and sellers is optimal if it produces the maximum units of utility. Our characterization allows us to show that the number of optimal matchings in markets uniquely characterized by their core is a power of two.
Cooperative games Assignment game Core Doubly dominant diagonal
http://www.sciencedirect.com/science/article/B6VCT-528GTHD-1/2/d2d349a0bb73d94e26f1569f924c389d
Javier Martínez-de-Albéniz, F.
Núñez, Marina
Rafels, Carles
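As a toy companion to the abstract above, the optimal matchings of a small assignment matrix can be enumerated by brute force (a hypothetical 2x2 market; the paper's contribution is the doubly-dominant-diagonal characterization, not this enumeration):

```python
from itertools import permutations

def optimal_matchings(a):
    """Enumerate all optimal buyer-seller matchings of a square
    assignment matrix `a` by brute force over permutations."""
    n = len(a)
    best = max(sum(a[i][p[i]] for i in range(n)) for p in permutations(range(n)))
    opt = [p for p in permutations(range(n))
           if sum(a[i][p[i]] for i in range(n)) == best]
    return best, opt

# A matrix with a strictly dominant diagonal: the diagonal matching is
# the unique optimum, so the count is 1 = 2**0, consistent with the
# power-of-two property stated in the abstract.
value, matchings = optimal_matchings([[5, 1], [1, 5]])
```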
oai:RePEc:eee:ejores:v:213:y:2011:i:1:p:37-512011-05-26RePEc:eee:ejores
RePEc:eee:ejores:v:213:y:2011:i:1:p:37-51
article
Stochastic single vehicle routing problem with delivery and pick up and a predefined customer sequence
In this paper we study the routing of a single vehicle that delivers products and picks up items with stochastic demand. The vehicle follows a predefined customer sequence and is allowed to return to the depot for loading/unloading as needed. A suitable dynamic programming algorithm is proposed to determine the minimum expected routing cost. Furthermore, an appropriate theorem is proposed from which the optimal routing policy to be followed by the vehicle's driver is derived. The efficiency of the algorithm is studied by solving large problem sets.
Logistics Stochastic vehicle routing Pick up and delivery Dynamic programming
http://www.sciencedirect.com/science/article/B6VCT-52C3K28-3/2/3de954617185d5dd91367e714c696bcb
Minis, I.
Tatarakis, A.
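A minimal sketch of the kind of dynamic program involved, heavily simplified: deterministic, delivery-only demands on a line, whereas the paper treats stochastic delivery-and-pickup demand; all instance data below are hypothetical:

```python
from functools import lru_cache

# Hypothetical toy instance: depot at position 0; customers are served
# in the fixed order of `pos`; demands here are deterministic.
pos = [1.0, 2.0]      # customer locations on a line
dem = [1, 1]          # units to deliver at each customer
Q = 2                 # vehicle capacity

def dist(x, y):
    return abs(x - y)

@lru_cache(maxsize=None)
def cost(i, load, here):
    """Minimum travel cost to serve customers i..n-1 and return to the
    depot, with `load` units on board and the vehicle at `here`.
    At each customer we either go directly (if load suffices) or
    detour to the depot to reload first."""
    if i == len(pos):
        return dist(here, 0.0)
    options = []
    if load >= dem[i]:                     # serve customer i directly
        options.append(dist(here, pos[i]) + cost(i + 1, load - dem[i], pos[i]))
    # detour to the depot to refill, then serve customer i
    options.append(dist(here, 0.0) + dist(0.0, pos[i])
                   + cost(i + 1, Q - dem[i], pos[i]))
    return min(options)

best = cost(0, Q, 0.0)   # start at the depot, fully loaded
```

In the stochastic setting of the paper, the recursion would take expectations over the demand distributions instead of using fixed demands.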
oai:RePEc:eee:ejores:v:213:y:2011:i:1:p:15-232011-05-26RePEc:eee:ejores
RePEc:eee:ejores:v:213:y:2011:i:1:p:15-23
article
Enhanced speciation in particle swarm optimization for multi-modal problems
In this paper, we present a novel multi-modal optimization algorithm for finding multiple local optima in objective function surfaces. We build on Species-based particle swarm optimization (SPSO) by using deterministic sampling to generate new particles during the optimization process, by implementing proximity-based speciation coupled with speciation of isolated particles, and by including "turbulence regions" around already found solutions to prevent unnecessary function evaluations. Instead of using error threshold values, the new algorithm uses the particle's experience, geometric mean, and "exclusion factor" to detect local optima and stop the algorithm. The performance of each extension is assessed with leave-it-out tests, and the results are discussed. We use the new algorithm called Isolated-Speciation-based particle swarm optimization (ISPSO) and a benchmark algorithm called Niche particle swarm optimization (NichePSO) to solve a six-dimensional rainfall characterization problem for 192 rain gages across the United States. We show why it is important to find multiple local optima for solving this real-world complex problem by discussing its high multi-modality. Solutions found by both algorithms are compared, and we conclude that ISPSO is more reliable than NichePSO at finding optima with a significantly lower objective function value.
Particle swarm optimization Metaheuristics Multi-modal optimization Rainfall characterization
http://www.sciencedirect.com/science/article/B6VCT-529CNM9-1/2/b3a5af9f236cbc350268ef4bd215a2c6
Cho, Huidae
Kim, Dongkyun
Olivera, Francisco
Guikema, Seth D.
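The speciation, turbulence-region, and stopping extensions described above sit on top of a core particle swarm loop; a minimal global-best PSO sketch (standard textbook parameters, not the paper's settings):

```python
import random

random.seed(42)

def pso(f, dim=2, n=20, iters=300, lo=-5.0, hi=5.0,
        w=0.7, c1=1.5, c2=1.5):
    """Minimal global-best particle swarm optimiser; SPSO/ISPSO add
    speciation machinery on top of a core loop like this one."""
    xs = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vs = [[0.0] * dim for _ in range(n)]
    pb = [x[:] for x in xs]                     # personal bests
    pbv = [f(x) for x in xs]
    g = min(range(n), key=lambda i: pbv[i])
    gb, gbv = pb[g][:], pbv[g]                  # global best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vs[i][d] = (w * vs[i][d]
                            + c1 * random.random() * (pb[i][d] - xs[i][d])
                            + c2 * random.random() * (gb[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            v = f(xs[i])
            if v < pbv[i]:
                pb[i], pbv[i] = xs[i][:], v
                if v < gbv:
                    gb, gbv = xs[i][:], v
    return gb, gbv

sphere = lambda x: sum(c * c for c in x)
best_x, best_val = pso(sphere)
```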
oai:RePEc:eee:ejores:v:212:y:2011:i:3:p:552-5592011-05-26RePEc:eee:ejores
RePEc:eee:ejores:v:212:y:2011:i:3:p:552-559
article
Compatible weighting method with rank order centroid: Maximum entropy ordered weighted averaging approach
In a situation where imprecise attribute weights such as a rank order are captured, various approximate weighting methods have been proposed to aid multiattribute decision analysis. Among others, it is well known that the rank order centroid (ROC) weights result in the highest performance in terms of identifying the best alternative under ranked attribute weights. In this paper, we aim to reinterpret the meaning of the ROC weights and to develop a compatible weighting method grounded in other well-established academic disciplines. The ordered weighted averaging (OWA) method is a nonlinear aggregation method in that the weights are associated with the objects reordered according to their magnitudes in the aggregation process. Some interesting semantics can be attached to the approximate weights in view of the measure developed in the OWA method. Furthermore, the weights generated by the maximum entropy method show performance comparable with the ROC weights under certain conditions, which is demonstrated by theoretical and simulation analyses.
Multiattribute decision-making Decision-making under uncertainty Approximate weights Rank order centroid Ordered weighted averaging Quantifier function
http://www.sciencedirect.com/science/article/B6VCT-526SP81-3/2/e111342e2fecc12a5a960dcfc80bb6e8
Ahn, Byeong Seok
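The ROC weights mentioned above have the standard closed form w_i = (1/n) * sum_{k=i}^{n} 1/k, with rank 1 the most important attribute; a short sketch:

```python
from fractions import Fraction

def roc_weights(n):
    """Rank order centroid weights for n ranked attributes:
    w_i = (1/n) * sum_{k=i}^{n} 1/k (rank 1 most important)."""
    return [sum(Fraction(1, k) for k in range(i, n + 1)) / n
            for i in range(1, n + 1)]

w = roc_weights(3)   # 11/18, 5/18, 2/18
```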
oai:RePEc:eee:ejores:v:213:y:2011:i:1:p:205-2092011-05-26RePEc:eee:ejores
RePEc:eee:ejores:v:213:y:2011:i:1:p:205-209
article
Approximation of M/M/c retrial queue with PH-retrial times
We consider the M/M/c retrial queue with PH-retrial times. Approximation formulae for the distribution of the number of customers in the service facility and for the mean number of customers in orbit are derived, and some numerical results are presented.
Retrial queue Phase type distribution Retrial time Approximation
http://www.sciencedirect.com/science/article/B6VCT-52CYKJS-4/2/09c5a925953ecb4c6ca1624352177d0e
Shin, Yang Woo
Moon, Dug Hee
oai:RePEc:eee:ejores:v:213:y:2011:i:1:p:279-2892011-05-26RePEc:eee:ejores
RePEc:eee:ejores:v:213:y:2011:i:1:p:279-289
article
Interactive group decision-making using a fuzzy linguistic approach for evaluating the flexibility in a supply chain
With global competition rapidly intensifying and shifting to the supply chain level, supply chain flexibility has become increasingly important. However, the literature addressing supply chain flexibility remains limited. This study thus builds a group decision-making structure model of flexibility in supply chain management development. This study presents a framework for evaluating supply chain flexibility comprising two parts: an evaluation hierarchy with flexibility dimensions and related metrics, and an evaluation scheme that uses a three-stage process to evaluate supply chain flexibility. This study then proposes an algorithm for determining the degree of supply chain flexibility using a fuzzy linguistic approach. Evaluations of the degree of supply chain flexibility can identify the need to improve supply chain flexibility, and identify specific dimensions of supply chain flexibility as the best directions for improvement. The results of this study are more objective and unbiased for two reasons. First, the results are generated by group decision-making with interactive consensus analysis. Second, the fuzzy linguistic approach used in this study preserves more information than other methods. Additionally, this study presents a case study example to illustrate the applicability of the proposed methods and to compare them with other methods.
Supply chain management Supply chain flexibility Group decision-making Fuzzy linguistic approach Interactive consensus analysis
http://www.sciencedirect.com/science/article/B6VCT-52CYKJS-2/2/5e55416af1180e6ef08f61cd3b114e35
Chuu, Shian-Jong
oai:RePEc:eee:ejores:v:212:y:2011:i:3:p:535-5512011-05-26RePEc:eee:ejores
RePEc:eee:ejores:v:212:y:2011:i:3:p:535-551
article
Generalized equitable preference in multiobjective programming
The concept of equitability in multiobjective programming is generalized within a framework of convex cones. Two models are presented. First, more general polyhedral cones are assumed to determine the equitable preference. Second, the Pareto cone appearing in the monotonicity axiom of equitability is replaced with a permutation-invariant polyhedral cone. The conditions under which the new models are related and satisfy the original and modified axioms of the equitable preference are developed. Relationships between generalized equitability, the relative importance of criteria, and stochastic dominance are revealed.
Pareto Nondominated Multiobjective programming Cones Relative importance Stochastic dominance
http://www.sciencedirect.com/science/article/B6VCT-52540R9-1/2/0f2f1fbea593f86e3a4441b6aae99644
Mut, Murat
Wiecek, Margaret M.
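The classical (Pareto-cone) equitable preference that the paper generalizes can be checked through cumulative ordered outcomes, a standard characterization (a maximization convention is assumed here; the function names are illustrative):

```python
def theta(y):
    """Cumulative ordered outcomes: theta_k = sum of the k smallest."""
    s = sorted(y)
    out, acc = [], 0
    for v in s:
        acc += v
        out.append(acc)
    return out

def equitably_dominates(y, z):
    """True if outcome vector y equitably dominates z (maximization):
    theta_k(y) >= theta_k(z) for all k, strictly for at least one k."""
    ty, tz = theta(y), theta(z)
    return all(a >= b for a, b in zip(ty, tz)) and ty != tz

# A transfer from a better-off to a worse-off outcome is an equitable
# improvement: (2, 2) equitably dominates (1, 3), although neither
# Pareto-dominates the other.
```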
oai:RePEc:eee:ejores:v:212:y:2011:i:3:p:455-4632011-05-26RePEc:eee:ejores
RePEc:eee:ejores:v:212:y:2011:i:3:p:455-463
article
The constrained compartmentalized knapsack problem: mathematical models and solution methods
The constrained compartmentalized knapsack problem can be seen as an extension of the constrained knapsack problem. However, the items are grouped into different classes, so the overall knapsack has to be divided into compartments, and each compartment is loaded with items from the same class. Moreover, building a compartment incurs a fixed cost and a fixed loss of capacity in the original knapsack, and the compartments are lower and upper bounded. The objective is to maximize the total value of the items loaded in the overall knapsack minus the cost of the compartments. This problem has been formulated as an integer non-linear program; in this paper, we reformulate the non-linear model as an integer linear master problem with a large number of variables. Some heuristics based on the solution of the restricted master problem are investigated. A new and more compact integer linear model is also presented, which can be solved by a commercial branch-and-bound solver; the solver found optimal solutions for most instances of the constrained compartmentalized knapsack problem. The heuristics, on the other hand, provide good solutions with low computational effort.
Cutting Knapsack problem Column generation Heuristics
http://www.sciencedirect.com/science/article/B6VCT-526SP81-2/2/47dc098c733463843feb1ba19396520e
Leão, Aline A.S.
Santos, Maristela O.
Hoto, Robinson
Arenales, Marcos N.
oai:RePEc:eee:ejores:v:213:y:2011:i:1:p:96-1062011-05-26RePEc:eee:ejores
RePEc:eee:ejores:v:213:y:2011:i:1:p:96-106
article
A fuzzy genetic algorithm with varying population size to solve an inventory model with credit-linked promotional demand in an imprecise planning horizon
A genetic algorithm (GA) with varying population size is developed in which the crossover probability is a function of the parents' age-type (young, middle-aged, old, etc.) and is obtained using a fuzzy rule base and possibility theory. It is an improved GA in which a subset of the better children is included with the parent population for the next generation, the size of this subset being a percentage of the size of the parent set. This GA is used to make managerial decisions for an inventory model of a newly launched product. The lifetime of the product is assumed to be finite and imprecise (fuzzy) in nature. The wholesaler/producer offers a delay period of payment to its retailers to capture the market. Owing to this facility, the retailer also offers a fixed credit period to its customers for some cycles to boost demand. During these cycles, demand for the item increases with time at a decreasing rate that depends on the duration of the customers' credit period. Models are formulated for both crisp and fuzzy inventory parameters to maximize the present value of the total possible profit over the whole planning horizon under inflation and the time value of money. Fuzzy models are transformed into deterministic ones using a possibility/necessity measure on the fuzzy goal and a necessity measure on the imprecise constraints. Finally, the optimal decision is made using the above-mentioned GA. The performance of the proposed GA on the model is compared with that of some other GAs.
Fuzzy genetic algorithm Fuzzy rule base Credit-linked demand Imprecise planning horizon
http://www.sciencedirect.com/science/article/B6VCT-5276T3G-1/2/376722c07c16def7893de6cf18e061b0
Kumar Maiti, Manas
oai:RePEc:eee:ejores:v:211:y:2011:i:1:p:184-1972011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:211:y:2011:i:1:p:184-197
article
A solution approach for optimizing long- and short-term production scheduling at LKAB's Kiruna mine
We present a mixed-integer program to schedule long- and short-term production at LKAB's Kiruna mine, an underground sublevel caving mine located in northern Sweden. The model minimizes deviations from monthly preplanned production quantities while adhering to operational constraints. Because of the mathematical structure of the model and its moderately large size, instances spanning a time horizon of more than a year or two tend to be intractable. We develop an optimization-based decomposition heuristic that, on average, obtains better solutions faster than solving the model directly. We show that for realistic data sets, we can generate solutions with deviations that comprise about 3-6% of total demand in about a third of an hour.
Mining/metals industries: determining optimal operating policies at an underground mine; Production/scheduling applications: production scheduling at an underground mine; Integer programming applications: determining a production schedule
http://www.sciencedirect.com/science/article/B6VCT-51S6X9G-1/2/1eb54f9ef0d40a4c8077862d98e8b8ea
Martinez, Michael A.
Newman, Alexandra M.
oai:RePEc:eee:ejores:v:207:y:2010:i:3:p:1316-13202011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:207:y:2010:i:3:p:1316-1320
article
A note on competitive supply chains with generalised supply costs
This note generalises models from two influential papers in the theory of supply chain outsourcing under competition: McGuire and Staelin (1983) and Cachon and Harker (2002). The first paper studies the impact of competitive intensity on the outsourcing decision from the supplier's point of view for linear supply cost; the second examines the impact of supply economies of scale from the retailer's point of view when selling perfectly substitutable products. By considering competitive intensity and supply economies of scale simultaneously, we find that equilibrium channel structures are primarily determined by the competitive intensity, and that this holds even under supply diseconomies of scale; the key message of the second paper, that scale economies drive the retailer's outsourcing decision, is highly dependent on the assumption of perfect substitutes. Our findings do not change qualitatively when either the suppliers or the retailers are modelled as the channel leader and make the outsourcing decisions.
Marketing Supply chain outsourcing Channel competition
http://www.sciencedirect.com/science/article/B6VCT-50K5T00-2/2/c379a4b46047f03e54f01794ef5aa061
Atkins, Derek
Liang, Liping
oai:RePEc:eee:ejores:v:210:y:2011:i:3:p:495-5132011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:210:y:2011:i:3:p:495-513
article
Solving mixed model sequencing problem in assembly lines with serial workstations with work overload minimisation and interruption rules
In this manuscript, we present a formulation for the MMSP-W (mixed model sequencing problem with work overload minimisation) for production lines with serial workstations. We demonstrate the validity of the basic models in the presence of a control system on the production line that allows the stopping of operations with no restrictions. We propose an extension of the basic models that allows conditioned interruption of operations to facilitate line management. We then propose a procedure to solve the proposed problem through BDP (Bounded Dynamic Programming), and demonstrate its validity through a computational experiment with reference instances and a case study linked to the Nissan powertrain plant in Barcelona.
Scheduling Sequencing Work overload Dynamic programming Linear programming
http://www.sciencedirect.com/science/article/B6VCT-51B1WNK-1/2/3975016073f1faa94c4f9beef1abfec3
Bautista, Joaquín
Cano, Alberto
oai:RePEc:eee:ejores:v:207:y:2010:i:2:p:590-6002011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:207:y:2010:i:2:p:590-600
article
Mathematical programming approaches for generating p-efficient points
Probabilistically constrained problems, in which the random variables are finitely distributed, are non-convex in general and hard to solve. The p-efficiency concept has been widely used to develop efficient methods to solve such problems. Those methods require the generation of p-efficient points (pLEPs) and use an enumeration scheme to identify pLEPs. In this paper, we consider a random vector characterized by a finite set of scenarios and generate pLEPs by solving a mixed-integer programming (MIP) problem. We solve this computationally challenging MIP problem with a new mathematical programming framework. It involves solving a series of increasingly tighter outer approximations and employs, as algorithmic techniques, a bundle preprocessing method, strengthening valid inequalities, and a fixing strategy. The method is exact (resp., heuristic) and ensures the generation of pLEPs (resp., quasi pLEPs) if the fixing strategy is not (resp., is) employed, and it can be used to generate multiple pLEPs. To the best of our knowledge, generating a set of pLEPs using an optimization-based approach and developing effective methods for the application of the p-efficiency concept to the random variables described by a finite set of scenarios are novel. We present extensive numerical results that highlight the computational efficiency and effectiveness of the overall framework and of each of the specific algorithmic techniques.
Stochastic programming Probabilistic constraints p-Efficiency Outer approximation Valid inequalities
http://www.sciencedirect.com/science/article/B6VCT-504STF1-2/2/461705c464ab0e4d6e17052491b4edb3
Lejeune, Miguel
Noyan, Nilay
oai:RePEc:eee:ejores:v:207:y:2010:i:3:p:1227-12342011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:207:y:2010:i:3:p:1227-1234
article
A heuristic procedure for the Capacitated m-Ring-Star problem
In this paper we propose a heuristic method to solve the Capacitated m-Ring-Star Problem which has many practical applications in communication networks. The problem consists of finding m rings (simple cycles) visiting a central depot, a subset of customers and a subset of potential (Steiner) nodes, while customers not belonging to any ring must be "allocated" to a visited (customer or Steiner) node. Moreover, the rings must be node-disjoint and the number of customers allocated or visited in a ring cannot be greater than the capacity Q given as an input parameter. The objective is to minimize the total visiting and allocation costs. The problem is a generalization of the Traveling Salesman Problem, hence it is NP-hard. In the proposed heuristic, after the construction phase, a series of different local search procedures are applied iteratively. This method incorporates some random aspects by perturbing the current solution through a "shaking" procedure which is applied whenever the algorithm remains in a local optimum for a given number of iterations. Computational experiments on the benchmark instances of the literature show that the proposed heuristic is able to obtain, within a short computing time, most of the optimal solutions and can improve some of the best known results.
Capacitated m-Ring-Star problem Heuristic algorithms Networks
http://www.sciencedirect.com/science/article/B6VCT-50F8BTY-1/2/dc91e5d5337f7e4b6c3e06c1729cb02f
Naji-Azimi, Zahra
Salari, Majid
Toth, Paolo
oai:RePEc:eee:ejores:v:207:y:2010:i:3:p:1368-13792011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:207:y:2010:i:3:p:1368-1379
article
Valuing executive stock options: A quadratic approximation
This paper develops a continuous-time model for valuing executive stock options (ESOs) with features of early exercise, delayed vesting and forfeiture. Applying the quadratic approximation established for valuing American options to ESOs, we obtain an explicit formula for the fair ESO value at its grant date. We show that the approximation formula is consistent with the exact results for two special cases, either with no dividend or with infinite maturity, and also that the perpetual value for the latter case gives an upper bound on the ESO value. To see the performance of the formula, we numerically examine it against benchmark results generated by a binomial-tree model for some particular cases. Numerical experiments show that there is a complementary relation between the vesting and trading periods with respect to the exit rate of ESO holders.
Finance Executive stock options ESO Valuation Quadratic approximation
http://www.sciencedirect.com/science/article/B6VCT-50FGY4P-1/2/550cda9009c0bd9035206fef5cb48dda
Kimura, Toshikazu
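The quadratic approximation adds an early-exercise premium to a European value; the Black-Scholes-Merton building block (not the paper's full ESO formula) can be sketched as:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bsm_call(s, k, r, q, sigma, t):
    """Black-Scholes-Merton European call with dividend yield q; a
    quadratic approximation for American-style claims adds an
    early-exercise premium on top of a European value like this one."""
    d1 = (log(s / k) + (r - q + 0.5 * sigma ** 2) * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    return s * exp(-q * t) * norm_cdf(d1) - k * exp(-r * t) * norm_cdf(d2)

price = bsm_call(100, 100, 0.05, 0.0, 0.2, 1.0)   # at-the-money, 1 year
```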
oai:RePEc:eee:ejores:v:211:y:2011:i:3:p:520-5242011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:211:y:2011:i:3:p:520-524
article
Inventory models with managerial policy independent of demand
This paper is an extension of two papers. The first, by Deng et al. (2007), published in European Journal of Operational Research, 2007, 112-120, concerns inventory models for deteriorating items with ramp type demand. The second, by Cheng and Wang (2009), published in Computers & Industrial Engineering, 2009, 1296-1300, concerns inventory models for deteriorating items with trapezoidal type demand. The purpose of this paper is threefold. First, it shows that the optimal solution is independent of the demand considered in the two previous papers. Second, several replenishment cycles are considered during the finite time horizon, to balance the set-up cost against the sum of the deterioration cost, holding cost, and shortage cost. Third, the paper examines the same numerical example as Cheng and Wang (2009) to show that the new approach results in a saving of 84.39%.
Inventory Deteriorating items Ramp type demand Trapezoidal type demand
http://www.sciencedirect.com/science/article/B6VCT-51XY05N-2/2/60caafba08920ec451df0d56ff0859f4
Lin, Shih-Wei
oai:RePEc:eee:ejores:v:208:y:2011:i:1:p:12-182011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:208:y:2011:i:1:p:12-18
article
Inverse variational inequalities with projection-based solution methods
An inverse variational inequality seeks a vector whose image under a mapping F satisfies a variational inequality condition over a given constraint set. If an inverse function u = F^{-1}(x) exists, the inverse variational inequality can be transformed into a regular variational inequality. In reality, however, it is not uncommon that the inverse function F^{-1}(x) has no explicit form, although its functional values can be observed. Existing line search algorithms cannot be applied directly to solve such inverse variational inequalities. In this paper, we propose two projection-based methods that use the co-coercivity of the mapping F. A self-adaptive strategy is developed to determine the step sizes efficiently when the co-coercivity modulus is unknown. The convergence of the proposed methods is proved rigorously.
Inverse variational inequality Co-coercivity Projection method Self-adaptive strategy
http://www.sciencedirect.com/science/article/B6VCT-50W80TK-3/2/5fe0a0582dca59b45f4941290ae8aa0f
He, Xiaozheng
Liu, Henry X.
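For a regular variational inequality with a co-coercive mapping, the basic projection iteration is x <- P_Omega(x - beta*F(x)), convergent for suitable step sizes; a one-dimensional sketch (illustrative only, not the paper's two methods for the inverse problem):

```python
def project(x, lo, hi):
    """Projection onto the interval [lo, hi]."""
    return max(lo, min(hi, x))

def vi_projection(F, lo, hi, beta=0.5, x=0.0, iters=200):
    """Basic projection method x <- P([lo,hi])(x - beta * F(x)) for a
    regular variational inequality with a co-coercive mapping F."""
    for _ in range(iters):
        x = project(x - beta * F(x), lo, hi)
    return x

# F(x) = x - 1 on [0, 2]: the VI solution is the zero x* = 1, and the
# iteration contracts toward it at rate 1/2 with this step size.
x_star = vi_projection(lambda x: x - 1.0, 0.0, 2.0)
```

The self-adaptive strategy in the paper tunes beta on the fly when the co-coercivity modulus is unknown; here beta is fixed for simplicity.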
oai:RePEc:eee:ejores:v:207:y:2010:i:2:p:986-10012011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:207:y:2010:i:2:p:986-1001
article
Managing an integrated production inventory system with information on the production and demand status and multiple non-unitary demand classes
In this paper, we study a system consisting of a manufacturer or supplier serving several retailers or clients. The manufacturer produces a standard product in a make-to-stock fashion in anticipation of orders emanating from n retailers with different contractual agreements, hence ranked/prioritized according to their importance. Orders from the retailers are non-unitary and have sizes that follow a discrete distribution. The total production time is assumed to follow a k_0-Erlang distribution. The order inter-arrival time for class l demand is assumed to follow a k_l-Erlang distribution. Work-in-process as well as the finished product incur a carrying cost per unit per unit of time. Unsatisfied units of an order from a particular demand class are assumed lost and incur a class-specific lost sale cost. The objective is to determine the optimal production and inventory allocation policies so as to minimize the expected total (discounted or average) cost. We formulate the problem as a Markov decision process and show that the optimal production policy is of the base-stock type with base-stock levels non-decreasing in the demand stages. We also show that the optimal inventory allocation policy is a rationing policy with rationing levels non-decreasing in the demand stages. We also study several important special cases and provide, through numerical experiments, managerial insights, including the effect of the different sources of variability on the operating cost and the benefits of contracts such as Vendor Managed Inventory or Collaborative Planning, Forecasting, and Replenishment. Finally, we show that a heuristic that ignores the dependence of the base-stock and rationing levels on the demand stages can perform very poorly compared with the optimal policy.
Production/inventory control Stock rationing Information sharing Markov decision process Make-to-stock queues Supply chain applications
http://www.sciencedirect.com/science/article/B6VCT-502GH61-9/2/66639160985bb53f8740b9349fd27101
ElHafsi, Mohsen
Camus, Herve
Craye, Etienne
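The rationing part of such a policy can be sketched as follows: class-l demand is served from stock only while inventory stays above that class's rationing level (all numbers below are hypothetical, and the paper's levels additionally depend on the demand stages):

```python
def ration(inventory, demand, levels):
    """Allocate stock to demand classes in priority order (index 0 is
    the highest-priority class): class l may consume inventory only
    down to its rationing level levels[l]; unmet units are lost.
    Returns (units served per class, remaining inventory)."""
    served = []
    for d, r in zip(demand, levels):
        use = min(d, max(0, inventory - r))
        served.append(use)
        inventory -= use
    return served, inventory

# Hypothetical numbers: 10 units on hand; class-2 orders are served
# only while stock stays above its rationing level of 4, so 2 of its
# 5 requested units are lost.
s, left = ration(10, [3, 5], [0, 4])
```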
oai:RePEc:eee:ejores:v:208:y:2011:i:1:p:28-362011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:208:y:2011:i:1:p:28-36
article
Nonmonotone adaptive trust region method
In this paper, we propose a nonmonotone adaptive trust region method for unconstrained optimization problems. The method produces an adaptive trust region radius automatically at each iteration and allows the function value of the iterates to increase for a finite number of iterations before ultimately decreasing. The nonmonotone approach and the adaptive trust region radius reduce the number of trust region subproblems that must be solved to reach a given precision. The global convergence and convergence rate of the method are analyzed under some mild conditions. Numerical results show that the proposed method is effective in practical computation.
Unconstrained optimization Adaptive trust region method Global convergence Convergence rate
http://www.sciencedirect.com/science/article/B6VCT-512MHBH-2/2/e6c94113035d9e9393c71b063cdeb412
Shi, Zhenjun
Wang, Shengquan
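A minimal one-dimensional sketch of the two ingredients, nonmonotone acceptance (comparing against the maximum of recent function values) and a radius tied adaptively to the gradient magnitude; the specific rules below are simple illustrative choices, not the paper's exact formulas:

```python
def nonmonotone_tr(f, grad, hess, x, iters=50, eta=0.1, memory=5):
    """Sketch of a nonmonotone adaptive trust region method in 1D: the
    step minimises the quadratic model within an adaptive radius, and
    acceptance compares against the *maximum* of recent f-values."""
    hist = [f(x)]
    c = 1.0                                 # radius scale (adapted below)
    for _ in range(iters):
        g, B = grad(x), hess(x)
        if abs(g) < 1e-12:
            break
        radius = c * abs(g)                 # simple adaptive radius rule
        p = -g / B if B > 0 else -radius * (1.0 if g > 0 else -1.0)
        p = max(-radius, min(radius, p))    # clip the step to the region
        pred = -(g * p + 0.5 * B * p * p)   # predicted reduction
        fref = max(hist[-memory:])          # nonmonotone reference value
        rho = (fref - f(x + p)) / pred if pred > 0 else -1.0
        if rho > eta:                       # accept and enlarge the scale
            x += p
            hist.append(f(x))
            c = min(2.0 * c, 4.0)
        else:                               # reject: shrink and retry
            c *= 0.5
    return x

x_min = nonmonotone_tr(lambda x: (x - 3.0) ** 2,
                       lambda x: 2.0 * (x - 3.0),
                       lambda x: 2.0, x=0.0)
```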
oai:RePEc:eee:ejores:v:208:y:2011:i:3:p:233-2382011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:208:y:2011:i:3:p:233-238
article
The minmax regret gradual covering location problem on a network with incomplete information of demand weights
The gradual covering location problem seeks to establish facilities on a network so as to maximize the total demand covered, allowing partial coverage. We focus on the gradual covering location problem when the demand weights associated with nodes of the network are random variables whose probability distributions are unknown. Using only information on the range of these random variables, this study is aimed at finding the "minmax regret" location that minimizes the worst-case coverage loss. We show that under some conditions, the problem is equivalent to known location problems (e.g. the minmax regret median problem). Polynomial time algorithms are developed for the problem on a general network with linear coverage decay functions.
Location Network Gradual covering location Minmax regret Incomplete information
http://www.sciencedirect.com/science/article/B6VCT-50T9WWR-3/2/ba2400c9cce15721a69f842189b43848
Berman, Oded
Wang, Jiamin
oai:RePEc:eee:ejores:v:211:y:2011:i:1:p:26-342011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:211:y:2011:i:1:p:26-34
article
Optimal inventory policy with supply uncertainty and demand cancellation
We consider a periodic review model in which the firm manages its inventory under supply uncertainty and demand cancellation. We show that, because of supply uncertainty, the optimal inventory policy is of re-order point type: we order if the initial inventory falls below the re-order point and do not order otherwise. This is in contrast to the work of Yuan and Cheung (2003), who prove the optimality of an order-up-to policy in the absence of supply uncertainty. We also investigate the impact of supply uncertainty and demand cancellation on the performance of the supply chain. Using our model, we are able to quantify the importance of reducing the variance of either the distribution of yield or the distribution of demand cancellation. Single-period, multi-period, and infinite-horizon models are studied.
Inventory Demand cancellation Supply uncertainty Stochastic ordering Dynamic programming
http://www.sciencedirect.com/science/article/B6VCT-51G3W69-1/2/9acdb81685092e7617c8b5496808ff05
Yeo, Wee Meng
Yuan, Xue-Ming
oai:RePEc:eee:ejores:v:210:y:2011:i:2:p:368-3782011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:210:y:2011:i:2:p:368-378
article
Multiple classifier architectures and their application to credit risk assessment
Multiple classifier systems combine several individual classifiers to deliver a final classification decision. In this paper the performance of several multiple classifier systems are evaluated in terms of their ability to correctly classify consumers as good or bad credit risks. Empirical results suggest that some multiple classifier systems deliver significantly better performance than the single best classifier, but many do not. Overall, bagging and boosting outperform other multi-classifier systems, and a new boosting algorithm, Error Trimmed Boosting, outperforms bagging and AdaBoost by a significant margin.
OR in banking Data mining Classifier combination Classifier ensembles Credit scoring
http://www.sciencedirect.com/science/article/B6VCT-51491G1-4/2/74a7cb4c5c727b1bf1af7f028328deed
Finlay, Steven
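The combination step of a multiple classifier system can be as simple as majority voting (bagging and boosting additionally resample or reweight the training data before combining); a minimal sketch with hypothetical scorecards:

```python
from collections import Counter

def majority_vote(classifiers, x):
    """Combine individual classifiers into a multiple classifier system
    by simple (unweighted) majority voting over their predictions."""
    votes = Counter(clf(x) for clf in classifiers)
    return votes.most_common(1)[0][0]

# Three hypothetical scorecards mapping an applicant's features to
# a 'good'/'bad' credit risk label.
clfs = [lambda x: 'good' if x[0] > 0.5 else 'bad',
        lambda x: 'good' if x[1] > 0.5 else 'bad',
        lambda x: 'good']
label = majority_vote(clfs, (0.9, 0.2))   # votes: good, bad, good
```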
oai:RePEc:eee:ejores:v:207:y:2010:i:3:p:1689-17012011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:207:y:2010:i:3:p:1689-1701
article
Strategic investment timing under asymmetric access charge regulation in telecommunications
In a liberalized telecommunications market, an incumbent has several advantages over any entrant. An asymmetric access charge regulation for two such asymmetric firms stimulates competitive investment. We show that an entrant with a cost disadvantage has an incentive to invest as a leader under an asymmetric access charge regulation. These results fit well with the findings of previous empirical work. Moreover, we also investigate the effects of an asymmetric access charge regulation on competitive investment strategies.
Investment timing Competition Regulation
http://www.sciencedirect.com/science/article/B6VCT-50F3PJ8-3/2/97ee94a7e0b7c821999874972466b44d
Shibata, Takashi
Yamazaki, Hiroshi
oai:RePEc:eee:ejores:v:211:y:2011:i:2:p:274-2812011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:211:y:2011:i:2:p:274-281
article
The impact of stochastic lead time reduction on inventory cost under order crossover
We use exponential lead times to demonstrate that reducing the mean lead time also produces a secondary reduction in variance due to order crossover. The net effect is a reduction in inventory cost, and if this reduction overrides the investment in lead time reduction, then the lead time reduction strategy is tenable. We define lead time reduction as the process of decreasing lead time at an increased cost. To date, work on decreasing lead times has been confined to deterministic settings. We examine the case where lead times are exponential: when lead times are stochastic, deliveries are subject to order crossover, so we must consider effective lead times rather than actual lead times. The result is that the variance of these effective lead times is less than the variance of the original replenishment lead times. We present a two-stage procedure for reducing the mean and variance of exponentially distributed lead times. We assume that the lead time is made up of one or several components and runs from the moment the need for a replenishment order is determined to the time of receipt.
Stochastic lead time Lead time reduction Cost effectiveness Inventory optimization Order crossover Simulation
http://www.sciencedirect.com/science/article/B6VCT-51JPWSB-3/2/a8e773404e0e6f3bc268d1bd3c9a2b4d
Hayya, Jack C.
Harrison, Terry P.
He, X. James
oai:RePEc:eee:ejores:v:207:y:2010:i:2:p:1086-10952011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:207:y:2010:i:2:p:1086-1095
article
Shifting representation search for hybrid flexible flowline problems
This paper considers the hybrid flexible flowline scheduling problem with a set of additional restrictions and generalizations that are common in practice. These include precedence constraints, sequence dependent setup times, time lags, machine eligibility and release times. There are many potential solution representations for this problem, ranging from simple and compact, to more complex and complete. Typically, when choosing the degree of detail of the solution representation, a tradeoff can be found between efficiency of the algorithm and the size of the search space. Several adaptations of existing methods are introduced (memetic algorithm, iterated local search, iterated greedy), as well as a novel algorithm called shifting representation search (SRS). This new method starts with an iterated greedy algorithm applied to a permutation version of the problem and at a given time, switches to an iterated local search on the full search space. As far as we know, this shift of the solution representation is new in the scheduling literature. Experimental results and statistical tests clearly prove the superiority of SRS compared with classical and existing methods.
Hybrid flexible flowline Realistic scheduling Precedence constraints Setup times Time lags Local search
http://www.sciencedirect.com/science/article/B6VCT-506W6NT-4/2/4c34d8639521380f0178d350a5bb93e8
Urlings, Thijs
Ruiz, Rubén
Stützle, Thomas
oai:RePEc:eee:ejores:v:210:y:2011:i:2:p:310-3172011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:210:y:2011:i:2:p:310-317
article
Efficiency measurement using independent component analysis and data envelopment analysis
Efficiency measurement is an important issue for any firm or organization. Efficiency measurement allows organizations to compare their performance with their competitors' and then develop corresponding plans to improve performance. Various efficiency measurement tools, such as conventional statistical methods and non-parametric methods, have been successfully developed in the literature. Among these tools, the data envelopment analysis (DEA) approach is one of the most widely discussed. However, problems of discrimination between efficient and inefficient decision-making units also exist in the DEA context (Adler and Yazhemsky, 2010). In this paper, a two-stage approach of integrating independent component analysis (ICA) and data envelopment analysis (DEA) is proposed to overcome this issue. We suggest using ICA first to extract the input variables for generating independent components, then selecting the ICs representing the independent sources of input variables, and finally, inputting the selected ICs as new variables in the DEA model. A simulated dataset and a hospital dataset provided by the Office of Statistics in Taiwan's Department of Health are used to demonstrate the validity of the proposed two-stage approach. The results show that the proposed method can not only separate performance differences between the DMUs but also improve the discriminatory capability of the DEA's efficiency measurement.
Independent component analysis Data envelopment analysis Efficiency measurement
http://www.sciencedirect.com/science/article/B6VCT-5120K2X-1/2/4648f46b677a5b7f89902aeb17b7008a
Kao, Ling-Jing
Lu, Chi-Jie
Chiu, Chih-Chou
oai:RePEc:eee:ejores:v:211:y:2011:i:3:p:642-6492011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:211:y:2011:i:3:p:642-649
article
Optimum advertising policy over time for subscriber service innovations in the presence of service cost learning and customers' disadoption
On the theoretical side, this paper qualitatively characterizes the optimal advertising policy for new subscriber services. A monopolistic market is analyzed first, allowing for customers' disadoption, discounting of future profit streams and a service cost learning curve. After characterizing the optimal policy for a general diffusion model, results are reported for a specific diffusion model in which advertising affects the coefficient of innovation that incorporates the disadoption rate. The theoretical results show that the advertising policy of the service firm in the presence of customers' disadoption can be very different from that when disadoption is ignored. On the empirical side, four alternative diffusion models are estimated and their predictive powers are compared using a one-step-ahead forecasting procedure. The diffusion data analyzed relate to the Canadian cable TV industry. The empirical findings suggest that the specific diffusion model considered above is not only of theoretical appeal but also of major empirical relevance. The analytical findings of the study are documented in six theoretical propositions, with proofs provided in a separate Appendix. The results of a related numerical experiment, together with the analytical findings on the competitive role of advertising, are included. Managerial implications of the study and directions for future research are also discussed.
Marketing Advertising New subscriber services Optimal control theory Regression Service cost learning
http://www.sciencedirect.com/science/article/B6VCT-51SFJY6-1/2/7b8f2ca9286939fb55fc2a23cb323a33
Mesak, Hani I.
Bari, Abdullahel
Babin, Barry J.
Birou, Laura M.
Jurkus, Anthony
oai:RePEc:eee:ejores:v:212:y:2011:i:1:p:148-1542011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:212:y:2011:i:1:p:148-154
article
A consensual peer-based DEA-model with optimized cross-efficiencies - Input allocation instead of radial reduction
Data Envelopment Analysis (DEA) is a method for estimating the (in)efficiencies of Decision Making Units (DMUs) by means of weighted output-to-input ratios, the weights being optimal virtual prices of these ex-post activities for all units. The cross-efficiency matrix then evaluates these output-to-input relations with respect to all optimal price systems, and hence permits efficiency rankings of the DMUs by aggregating the matrix entries row- and/or column-wise. In this contribution the classical input-oriented DEA approach is generalized in two ways: the first aim is an optimal efficiency-improving input allocation rather than a mere radial input reduction. The second is the choice of a peer DMU whose price system is acceptable to the remaining units. As free input allocation permits substitution effects and thus raises productivities in view of possible peers and for all units, it supports such a consensual choice. Numerical examples show the positive effects of the new concept.
DEA Cross-efficiency matrix Maximum productivity matrix Peer Efficiency-ranking
http://www.sciencedirect.com/science/article/B6VCT-521M6BM-6/2/083de142f33a5b29e30ae8d769e412e4
Rödder, W.
Reucher, E.
oai:RePEc:eee:ejores:v:207:y:2010:i:2:p:763-7742011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:207:y:2010:i:2:p:763-774
article
Supplier-initiated outsourcing: A methodology to exploit synergy in transportation
Over the last decades, transportation has evolved from a necessary though low-priority function into an important part of business that can enable companies to attain a competitive edge over their competitors. To cut transportation costs, shippers often outsource their transportation activities to a logistics service provider of their choice. This paper proposes a new procedure that instead puts the initiative with the service provider: supplier-initiated outsourcing. The procedure is based on both operations research and game-theoretic insights. To stress the contrast between the traditional push approach to outsourcing and the pull approach proposed here, in which the service provider initiates the shift of logistics activities from the shipper to the logistics service provider, we refer to this phenomenon as insinking. Insinking has the advantage that the logistics service provider can proactively select a group of shippers with strong synergy potential. Moreover, these synergies can be allocated to the participating shippers in a fair and sustainable way by means of a so-called Shapley Monotonic Path of customized tariffs. Insinking is illustrated with a practical example based on data from the Dutch grocery transportation sector.
Cooperative game theory Insinking Logistics service providers Retail Shapley Monotonic Path Vehicle routing
http://www.sciencedirect.com/science/article/B6VCT-509W769-4/2/b2fb6a17b50b2a2cfe4b2ad22150a468
Cruijssen, Frans
Borm, Peter
Fleuren, Hein
Hamers, Herbert
oai:RePEc:eee:ejores:v:207:y:2010:i:1:p:481-4912011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:207:y:2010:i:1:p:481-491
article
The impact of digital channel distribution on the experience goods industry
We explore the impact of a digital channel for experience goods on the profitability and behavior of players in the supply chain and on piracy. We consider a firm which can sell an experience good in physical form, in digitized form, or both. We analyze different pricing schemes - price for whole album on the retail channel and linear and nonlinear pricing for songs on the digital channel. Consumers are divided into a retail-captive segment whose consumers are limited to the retail channel and a hybrid segment whose consumers have access to both retail and digital channels. Our findings indicate that for realistic problems, the dual distribution channel is most profitable. The profitability of the retail channel increases with the size of the retail-captive segment and the number of desirable songs on an album relative to the total number of songs on it. We show that a skimming pricing strategy is best for the retail channel. The ability to sell the product in digitized form on the Internet erodes much of the power once enjoyed by the record labels. The digital channel may be the best way to promote new artists, especially when the hybrid consumer segment is large. We also find that piracy has a significant effect on channel profits for both the retail and digital channels. Piracy has the strongest negative impact on the exclusive retail channel. Consumers' access to the digital channel reduces piracy in both digital and dual-channel distribution. Dual channel suffers least from piracy because it allows retail-captive consumers to still legally obtain the product.
Marketing E-commerce Channel selection Pricing
http://www.sciencedirect.com/science/article/B6VCT-4YVJ3YX-2/2/d06e1bf6c7a31beaeb6fd32b793e0964
Khouja, Moutaz
Wang, Yulan
oai:RePEc:eee:ejores:v:209:y:2011:i:1:p:11-222011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:209:y:2011:i:1:p:11-22
article
An adaptive insertion algorithm for the single-vehicle dial-a-ride problem with narrow time windows
The dial-a-ride problem (DARP) is a widely studied theoretical challenge related to dispatching vehicles in demand-responsive transport services, in which customers contact a vehicle operator requesting to be carried from specified origins to specified destinations. An important subproblem arising in dynamic dial-a-ride services is the single-vehicle DARP, in which the goal is to determine the optimal route for a single vehicle with respect to a generalized objective function. The main result of this work is an adaptive insertion algorithm capable of producing optimal solutions for a time-constrained version of this problem, first studied by Psaraftis in the early 1980s. The complexity of the algorithm is analyzed and evaluated by means of computational experiments; a significant advantage of the proposed method is the possibility of controlling computational work smoothly, making the algorithm applicable to any problem size.
Transportation Dial-a-ride problem Exact algorithm Heuristics
http://www.sciencedirect.com/science/article/B6VCT-50W80TK-2/2/21f9ef62a7f5da5906cfe7fb08937b4b
Häme, Lauri
oai:RePEc:eee:ejores:v:209:y:2011:i:3:p:203-2142011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:209:y:2011:i:3:p:203-214
article
Preference disaggregation and statistical learning for multicriteria decision support: A review
Disaggregation methods have become popular in multicriteria decision aiding (MCDA) for eliciting preferential information and constructing decision models from decision examples. From a statistical point of view, data mining and machine learning are also involved with similar problems, mainly with regard to identifying patterns and extracting knowledge from data. Recent research has also focused on the introduction of specific domain knowledge in machine learning algorithms. Thus, the connections between disaggregation methods in MCDA and traditional machine learning tools are becoming stronger. In this paper the relationships between the two fields are explored. The differences and similarities between the two approaches are identified, and a review is given regarding the integration of the two fields.
Multiple criteria analysis Disaggregation analysis Preference learning Data mining
http://www.sciencedirect.com/science/article/B6VCT-5057KSN-3/2/f6d174c430abbd294393ad5b039062ca
Doumpos, Michael
Zopounidis, Constantin
oai:RePEc:eee:ejores:v:207:y:2010:i:2:p:1116-11212011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:207:y:2010:i:2:p:1116-1121
article
Efficiency analysis to incorporate interval-scale data
We develop an approach to efficiency analysis to enable us to incorporate interval-scale data in addition to ratio-scale data. Our approach introduces a measure of inefficiency and identifies efficient units as is done in Data Envelopment Analysis. The basic idea in our approach is to find the "best" hyperplane separating the units that are better and worse than each unit. "Best" is defined in such a way that the number of not-better units is maximal. The efficiency measure is defined as a proportion of not-better units to all units. The results are invariant under a strictly increasing linear re-scaling of any input- or output-variables. Thus zeroes or negative values do not cause problems for the analysis. The approach is used to analyze the data of the research evaluation exercise recently carried out at the University of Joensuu, Finland.
Data Envelopment Analysis Interval-scale Research evaluation
http://www.sciencedirect.com/science/article/B6VCT-4YRHCS3-2/2/9d5bf623c3ff7065cca0498dd62bfb0c
Dehnokhalaji, Akram
Korhonen, Pekka J.
Köksalan, Murat
Nasrabadi, Nasim
Wallenius, Jyrki
oai:RePEc:eee:ejores:v:211:y:2011:i:1:p:170-1832011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:211:y:2011:i:1:p:170-183
article
Integrating effective flexibility measures into a strategic supply chain planning model
This paper develops models for capacity, product mix, distribution and input supply flexibility and integrates them into a strategic-level, mixed integer supply chain (SC) planning model as a way of addressing demand and supply uncertainty and improving market responsiveness. Capacity flexibility is modeled via the SC's production capacity planning to address budgeted demand and ensure the fulfillment of prospective demand increases under various market scenarios. Based on product mix flexibility, the model selects an optimal number of products from fast-moving and extended product range options. The model ensures a quick response to a changing marketplace by considering elements such as transportation and supply lead time along with the probabilities of stock-out options when addressing input supply and distribution flexibility. This paper proposes a solution procedure to solve the model for real-world problems, and investigates the sensitivity of the model outputs with respect to changes in the flexibility measures.
Capacity flexibility Product mix flexibility Supplier flexibility Customer service level flexibility Strategic supply chain model Demand uncertainty
http://www.sciencedirect.com/science/article/B6VCT-51R4SR7-1/2/f4769aaa35c7061faa53fc1f54b9672a
Das, Kanchan
oai:RePEc:eee:ejores:v:210:y:2011:i:2:p:326-3352011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:210:y:2011:i:2:p:326-335
article
Functional ANOVA, ultramodularity and monotonicity: Applications in multiattribute utility theory
Utility function properties as monotonicity and concavity play a fundamental role in reflecting a decision-maker's preference structure. These properties are usually characterized via partial derivatives. However, elicitation methods do not necessarily lead to twice-differentiable utility functions. Furthermore, while in a single-attribute context concavity fully reflects risk aversion, in multiattribute problems such correspondence is not one-to-one. We show that Tsetlin and Winkler's multivariate risk attitudes imply ultramodularity of the utility function. We demonstrate that geometric properties of a multivariate utility function can be successfully studied by utilizing an integral function expansion (functional ANOVA). The necessary and sufficient conditions under which monotonicity and/or ultramodularity of single-attribute functions imply the monotonicity and/or ultramodularity of the corresponding multiattribute function under additive, preferential and mutual utility independence are then established without reliance on the utility function differentiability. We also investigate the relationship between the presence of interactions among the attributes of a multiattribute utility function and the decision-maker's multivariate risk attitudes.
Multiattribute utility theory Functional ANOVA Multi-criteria analysis Ultramodular functions
http://www.sciencedirect.com/science/article/B6VCT-50XV99R-2/2/ce8b9e2a3e9076cdedb92be4e0fc863d
Beccacece, F.
Borgonovo, E.
oai:RePEc:eee:ejores:v:207:y:2010:i:2:p:1041-10512011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:207:y:2010:i:2:p:1041-1051
article
A hybrid heuristic algorithm for the open-pit-mining operational planning problem
This paper deals with the Open-Pit-Mining Operational Planning problem with dynamic truck allocation. The objective is to optimize mineral extraction in the mines by minimizing the number of mining trucks used to meet production goals and quality requirements. According to the literature, this problem is NP-hard, so a heuristic strategy is justified. We present a hybrid algorithm that combines characteristics of two metaheuristics: Greedy Randomized Adaptive Search Procedures and General Variable Neighborhood Search. The proposed algorithm was tested using a set of real-data problems and the results were validated by running the CPLEX optimizer with the same data. This solver used a mixed integer programming model also developed in this work. The computational experiments show that the proposed algorithm is very competitive, finding near optimal solutions (with a gap of less than 1%) in most instances, demanding short computing times.
Open-pit-mining Metaheuristics GRASP Variable neighborhood search Mathematical programming
http://www.sciencedirect.com/science/article/B6VCT-507CRR3-1/2/7e261f08e1d612b9af0eddb2deb7fee0
Souza, M.J.F.
Coelho, I.M.
Ribas, S.
Santos, H.G.
Merschmann, L.H.C.
oai:RePEc:eee:ejores:v:207:y:2010:i:3:p:1380-13972011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:207:y:2010:i:3:p:1380-1397
article
The cross-entropy method with patching for rare-event simulation of large Markov chains
There are various importance sampling schemes to estimate rare event probabilities in Markovian systems such as Markovian reliability models and Jackson networks. In this work, we present a general state-dependent importance sampling method which partitions the state space and applies the cross-entropy method to each partition. We investigate two versions of our algorithm and apply them to several examples of reliability and queueing models. In all these examples we compare our method with other importance sampling schemes. The performance of the importance sampling schemes is measured by the relative error of the estimator and by the efficiency of the algorithm. The experimental results show considerable improvements both in the running time of the algorithm and in the variance of the estimator.
Cross-entropy Rare events Importance sampling Large-scale Markov chains
http://www.sciencedirect.com/science/article/B6VCT-50GJ2KY-2/2/0de1c1091b8496fcc98461e70727b809
Kaynar, Bahar
Ridder, Ad
oai:RePEc:eee:ejores:v:210:y:2011:i:3:p:752-7562011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:210:y:2011:i:3:p:752-756
article
Information and preference reversals in lotteries
Several approaches have been proposed for evaluating information in expected utility theory. Among the most popular approaches are the expected utility increase, the selling price and the buying price. While the expected utility increase and the selling price always agree in ranking information alternatives, Hazen and Sounderpandian [11] have demonstrated that the buying price may not always agree with the other two. That is, in some cases, where the expected utility increase would value information A more highly than information B, the buying price may reverse these preferences. In this paper, we discuss the conditions under which all these approaches agree in a generic decision environment where the decision maker may choose to acquire arbitrary information bundles.
Utility theory Preference reversals Value of information
http://www.sciencedirect.com/science/article/B6VCT-516668T-1/2/b4bfdb9bf3e6b8e1452b5a32d594709a
Bakır, Niyazi Onur
Klutke, Georgia-Ann
oai:RePEc:eee:ejores:v:210:y:2011:i:1:p:27-382011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:210:y:2011:i:1:p:27-38
article
The discrete facility location problem with balanced allocation of customers
We consider a discrete facility location problem where the difference between the maximum and minimum number of customers allocated to each plant has to be balanced. Two different Integer Programming formulations are built, and several families of valid inequalities for these formulations are developed. Preprocessing techniques which reduce the size of the largest formulation, based on the upper bound obtained by means of an ad hoc heuristic solution, are also incorporated. Since the number of available valid inequalities for this formulation is exponential, a branch-and-cut algorithm is designed in which the most violated inequalities are separated at every node of the branching tree. Both formulations, with and without the improvements, are tested in a computational framework in order to identify the most promising solution methods. Difficult instances with up to 50 potential plants and 100 customers, and larger easy instances, can be solved in one CPU hour.
Allocation Facility location Integer Programming Branch-and-cut
http://www.sciencedirect.com/science/article/B6VCT-518TDRH-2/2/94a91e6ebeacc09887aac7f67b23504f
Marín, Alfredo
oai:RePEc:eee:ejores:v:210:y:2011:i:3:p:482-4882011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:210:y:2011:i:3:p:482-488
article
Parallel-batch scheduling of deteriorating jobs with release dates to minimize the makespan
We consider the problem of scheduling n deteriorating jobs with release dates on a single batching machine. Each job's processing time is an increasing simple linear function of its starting time. The machine can process up to b jobs simultaneously as a batch. The objective is to minimize the maximum completion time, i.e., the makespan. For the unbounded model, i.e., b = ∞, we obtain an O(n log n) dynamic programming algorithm. For the bounded model, i.e., b < ∞,
Scheduling Batching Deterioration Release dates Dynamic programming
http://www.sciencedirect.com/science/article/B6VCT-51JPWSB-1/2/6e35d7be0ec4edd2d9757cb3f841719a
Li, Shisheng
Ng, C.T.
Cheng, T.C.E.
Yuan, Jinjiang
oai:RePEc:eee:ejores:v:210:y:2011:i:1:p:57-672011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:210:y:2011:i:1:p:57-67
article
A comprehensive method for comparing mental models of dynamic systems
Mental models are the basis on which managers make decisions, even when external decision support systems provide help. Research has demonstrated that more comprehensive and dynamic mental models seem to underlie improved policies and decisions. Eliciting and comparing such models can systematically explicate key variables and their main underlying structures. In addition, superior dynamic mental models can be identified. This paper reviews existing studies which measure and compare mental models. It shows that the methods used to compare such models fail to account for relevant aspects of dynamic systems, such as time delays in causal links, feedback structures, and the polarities of feedback loops. Mental models without those properties are mostly static models. To overcome these limitations, we enhance the widely used distance ratio approach (Markóczy and Goldberg, 1995) so as to capture these dynamic characteristics and detect differences among mental models at three levels: the level of elements, the level of individual feedback loops, and the level of the complete model. Our contribution lies in a new method to compare explicated mental models, not to elicit such models. An application of the method shows that this previously unavailable information is essential for understanding differences between managers' mental models of dynamic systems. Thereby, a further path is created to critically analyze and elaborate the models managers use in real-world decision making. We discuss the benefits and limitations of our approach for research on mental models and decision making and conclude by identifying directions for further research for operational researchers.
Problem structuring Mental models Dynamic systems Feedback
http://www.sciencedirect.com/science/article/B6VCT-5100HN4-1/2/0ac3cc157c2d7deb44b2bfc59ed941dd
Schaffernicht, Martin
Groesser, Stefan N.
oai:RePEc:eee:ejores:v:209:y:2011:i:2:p:141-1552011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:209:y:2011:i:2:p:141-155
article
A simultaneous bus route design and frequency setting problem for Tin Shui Wai, Hong Kong
A bus network design problem for Tin Shui Wai, a suburban residential area in Hong Kong, is investigated, which considers the bus services from the origins inside this suburban area to the destinations in the urban areas. The problem aims to improve the existing bus services by reducing the number of transfers and the total travel time of the users. This is achieved by the proposed integrated solution method, which solves the route design and frequency setting problems simultaneously. In the proposed solution method, a genetic algorithm, which tackles the route design problem, is hybridized with a neighborhood search heuristic, which tackles the frequency setting problem. A new solution representation scheme and specific genetic operators are developed so that the genetic algorithm can search all possible route structures, rather than selecting routes from a predefined set. To avoid premature convergence, a diversity control mechanism based on a new definition of Hamming distance is incorporated in the solution method. To illustrate the robustness and quality of the solutions obtained, computational experiments are performed based on 1000 perturbed demand matrices. The t-test results show that the design obtained by the proposed solution method is robust under demand uncertainty, and that it is better than both the current design and the design obtained by solving the route design problem and the frequency setting problem sequentially. Compared with the current bus network design, the proposed method can generate a design which simultaneously reduces the number of transfers and the total travel time by at least 20.9% and 22.7%, respectively. Numerical studies are also performed to illustrate the effectiveness of the diversity control mechanism and the effects of the weights on the two objective values.
Transportation Bus network design Route design problem Frequency setting problem Genetic algorithm Neighborhood search
http://www.sciencedirect.com/science/article/B6VCT-50W80TK-1/2/ab905b7563a32f4929bb819489620dcd
Szeto, W.Y.
Wu, Yongzhong
oai:RePEc:eee:ejores:v:211:y:2011:i:2:p:252-2622011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:211:y:2011:i:2:p:252-262
article
A conic quadratic formulation for a class of convex congestion functions in network flow problems
In this paper we consider a multicommodity network flow problem with flow routing and discrete capacity expansion decisions. The problem involves trading off congestion and capacity assignment (or expansion) costs. In particular, we consider congestion costs involving convex, increasing power functions of flows on the arcs. We first observe that under certain conditions the congestion cost can be formulated as a convex function of the capacity level and the flow. Then, we show that the problem can be efficiently formulated by using conic quadratic inequalities. As most of the research on this problem is devoted to heuristic approaches, this study differs in showing that the problem can be solved to optimum by branch-and-bound solvers implementing the second-order cone programming (SOCP) algorithms. Computational experiments on the test problems from the literature show that the continuous relaxation of the formulation gives a tight lower bound and leads to optimal or near optimal integer solutions within reasonable CPU times.
Integer programming Network flows Second-order cone programming Capacity expansion Congestion costs Convex increasing power functions
http://www.sciencedirect.com/science/article/B6VCT-51TYF2C-2/2/a14f6d0bca101b943cc502aceeb7435f
Gürel, Sinan
oai:RePEc:eee:ejores:v:211:y:2011:i:3:p:442-4512011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:211:y:2011:i:3:p:442-451
article
Allocation strategies in hub networks
In this paper, we study allocation strategies and their effects on total routing costs in hub networks. Given a set of nodes with pairwise traffic demands, the p-hub median problem is the problem of choosing p nodes as hub locations and routing traffic through these hubs at minimum cost. This problem has two versions; in single allocation problems, each node can send and receive traffic through a single hub, whereas in multiple allocation problems, there is no such restriction and a node may send and receive its traffic through all p hubs. This results in high fixed costs and complicated networks. In this study, we introduce the r-allocation p-hub median problem, where each node can be connected to at most r hubs. This new problem generalizes the two versions of the p-hub median problem. We derive mixed-integer programming formulations for this problem and perform a computational study using well-known datasets. For these datasets, we conclude that single allocation solutions are considerably more expensive than multiple allocation solutions, but significant savings can be achieved by allowing nodes to be allocated to two or three hubs rather than one. We also present models for variations of this problem with service quality considerations, flow thresholds, and non-stop service.
Location Hub location p-Hub median Single allocation Multiple allocation
http://www.sciencedirect.com/science/article/B6VCT-51YBTGC-1/2/d38e487ff42097288c8915f71014bd75
Yaman, Hande
oai:RePEc:eee:ejores:v:209:y:2011:i:1:p:51-562011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:209:y:2011:i:1:p:51-56
article
A faster algorithm for 2-cyclic robotic scheduling with a fixed robot route and interval processing times
Consider an m-machine production line for processing identical parts served by a mobile robot. The problem is to find the minimum cycle time for 2-cyclic schedules, in which exactly two parts enter and two parts leave the production line during each cycle. This work treats a special case of the 2-cyclic robot scheduling problem when the robot route is given and the operation durations are to be chosen from prescribed intervals. The problem was previously proved to be polynomially solvable in O(m^8 log m) time. This paper proposes an improved algorithm with reduced complexity O(m^4).
Efficient algorithms Graph-theoretic models Cyclic scheduling Polynomial models Robotic scheduling
http://www.sciencedirect.com/science/article/B6VCT-516M74T-1/2/3c7a52c0fa173fafb5bfcb3109a12bfc
Kats, Vladimir
Levner, Eugene
oai:RePEc:eee:ejores:v:207:y:2010:i:2:p:916-9262011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:207:y:2010:i:2:p:916-926
article
E-DEA: Enhanced data envelopment analysis
Data envelopment analysis (DEA) has enjoyed wide acceptance by researchers and practitioners alike as an instrument of performance analysis and management since its introduction in 1978. Many formulations and thousands of applications of DEA have been reported in a considerable variety of academic and professional journals around the world. Almost all of these formulations and applications, whether single- or multi-stage, have centered on the concept of "relative self-evaluation". This paper suggests a framework for enhancing the theory of DEA by employing the concept of "relative cross-evaluation" in a multi-stage application context. Managerial situations are described in which such enhanced-DEA (E-DEA) formulations have actually been used and in which they could prove most meaningful and useful.
Data envelopment analysis Enhanced data envelopment analysis Relative performance Consensus formation Project selection Mathematical programming
http://www.sciencedirect.com/science/article/B6VCT-505F9JN-1/2/4a62f51a4ccbabecb8a5b806efc796ae
Oral, Muhittin
oai:RePEc:eee:ejores:v:207:y:2010:i:3:p:1608-16192011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:207:y:2010:i:3:p:1608-1619
article
A stochastic programming model for scheduling call centers with global Service Level Agreements
We consider the issue of call center scheduling in an environment where arrival rates are highly variable, aggregate volumes are uncertain, and the call center is subject to a global service level constraint. This paper is motivated by work with a provider of outsourced technical support services whose call volumes exhibit significant variability and uncertainty. The outsourcing contract specifies a Service Level Agreement that must be satisfied over an extended period of a week or month. We formulate the problem as a mixed-integer stochastic program. Our model has two distinctive features. First, we combine the server sizing and staff scheduling steps into a single optimization program. Second, we explicitly recognize the uncertainty in period-by-period arrival rates. We show that the stochastic formulation, in general, calculates a higher-cost optimal schedule than a model that ignores variability, but that the expected cost of this schedule is lower. We conduct extensive experimentation to compare the solutions of the stochastic program with those of deterministic programs based on mean arrival rates. We find that, in general, the stochastic model provides a significant reduction in the expected cost of operation. The stochastic model also allows the manager to make informed risk management decisions by evaluating the probability that the Service Level Agreement will be achieved.
Stochastic programming Scheduling OR in manpower planning Call centers
http://www.sciencedirect.com/science/article/B6VCT-50BJNMJ-3/2/67b5ed6d2b6df9e49bd5938d19967803
Robbins, Thomas R.
Harrison, Terry P.
oai:RePEc:eee:ejores:v:207:y:2010:i:3:p:1506-15182011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:207:y:2010:i:3:p:1506-1518
article
Integrated bank performance assessment and management planning using hybrid minimax reference point - DEA approach
The purpose of assessing past performance and setting future targets for an organisation such as a bank branch is to find where the branch stands in comparison to its peers within the branch network and how to improve the efficiency of its operations relative to the best-practice branches. However, future performance targets may be set arbitrarily by the head office and thus could be unrealistic and unachievable for a branch. A hybrid minimax reference point-data envelopment analysis (HMRP-DEA) approach is investigated to incorporate the value judgements of both branch managers and head-office directors and to search for the most preferred solution (MPS) along the efficient frontier for each bank branch. The HMRP-DEA approach is composed of three minimax models: the super-ideal point model, the ideal point model and the shortest distance model. These models share the same decision and objective spaces, differ from each other only in their reference points and weighting schemes, and are proven to be equivalent to the output-oriented DEA dual models. They are examined both analytically and graphically in this paper using a case study, which provides unprecedented insight into integrated efficiency and trade-off analyses. The HMRP-DEA approach uses DEA as an ex-post-facto evaluation tool for past performance assessment and the minimax reference point approach as an ex-ante planning tool for future performance forecasting and target setting. Thus, the HMRP-DEA approach provides an alternative means for realistic target setting and better resource allocation. It is examined through a detailed performance analysis of the fourteen branches of an international bank in the Greater Manchester area.
Data envelopment analysis Multiple objective linear programming Tradeoff analysis Bank performance assessment Management planning
http://www.sciencedirect.com/science/article/B6VCT-50GJ2KY-1/2/572d6fedfe47e91599f9409aae3bc6b3
Yang, J.B.
Wong, B.Y.H.
Xu, D.L.
Liu, X.B.
Steuer, R.E.
oai:RePEc:eee:ejores:v:207:y:2010:i:2:p:1144-11462011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:207:y:2010:i:2:p:1144-1146
article
Preferences estimation without approximation
We devise an estimation methodology that allows preference estimation and comparative statics analysis without relying on Taylor approximations or the indirect utility function.
Utility Preferences Uncertainty
http://www.sciencedirect.com/science/article/B6VCT-50DYH3C-2/2/df422bb1b77fd038950e370668368f82
Alghalith, Moawia
oai:RePEc:eee:ejores:v:211:y:2011:i:1:p:35-462011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:211:y:2011:i:1:p:35-46
article
Speculative production and anticipative reservation of reactive capacity by a multi-product newsvendor
In this paper the optimal sourcing decisions of a multi-product newsvendor prior to the selling season of the products are studied. To satisfy the uncertain demands, the newsvendor can either use speculative production or anticipatively reserve capacity. During the selling season, when demand has become known, the newsvendor can use its reserved capacity to reactively satisfy demand not covered by its speculative production. For the case where capacity for speculative production may be limited but potential reservation of reactive capacity is unlimited, two capacity-reservation settings are analyzed and compared. In the first, capacity for each product has to be reserved separately; in the second, one joint capacity reservation for all products is permitted, which can then be allocated optimally to the different products during the selling season. For the case of separate individual reservations, the optimal strategies are derived analytically and structural insights concerning their existence are presented. As the model allowing for joint reservation cannot in general be tackled analytically, an approximation based on an LP formulation is used. A numerical example gives insights into the value of the increased flexibility induced by joint reservation, the cost premium acceptable for joint reservation, and the relative levels of capacity reservation in the two settings.
Newsvendor problem Speculative production Capacity reservation
http://www.sciencedirect.com/science/article/B6VCT-51CVFYF-3/2/1de99852c32722a612a749f9ae28521b
Reimann, Marc
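The building block behind such models is the classical single-product newsvendor quantity. As an illustrative aside only (this is the textbook critical-fractile solution under normal demand, not the paper's multi-product capacity-reservation model):

```python
from statistics import NormalDist

def newsvendor_quantity(mu, sigma, price, cost, salvage=0.0):
    """Textbook newsvendor: order up to the critical fractile of a
    normal demand distribution with mean mu and std sigma."""
    underage = price - cost        # margin lost per unit of unmet demand
    overage = cost - salvage       # loss per leftover unit
    beta = underage / (underage + overage)   # critical fractile
    return NormalDist(mu, sigma).inv_cdf(beta)
```

When underage and overage costs are equal, the fractile is 0.5 and the optimal order equals mean demand; a higher margin pushes the order above the mean.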
oai:RePEc:eee:ejores:v:210:y:2011:i:2:p:249-2572011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:210:y:2011:i:2:p:249-257
article
A coupling approach to estimating the Lyapunov exponent of stochastic max-plus linear systems
This paper addresses the problem of approximately computing the Lyapunov exponent of stochastic max-plus linear systems. Our approach allows for an efficient simulation of bounds for the Lyapunov exponent. We provide sufficient conditions for the convergence of the bounds. In particular, a perfect sampling scheme for the Lyapunov exponent is established. We illustrate the effectiveness of our bounds with an application to (real-life) railway systems.
Max-plus algebra Stochastic DES Lyapunov exponent Simulation Railway systems
http://www.sciencedirect.com/science/article/B6VCT-51491G1-B/2/517a6141179ee12221801888c74857fe
Goverde, Rob M.P.
Heidergott, Bernd
Merlet, Glenn
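For intuition only: the Lyapunov exponent of a stochastic max-plus linear system x_{k+1} = A_k (x_k) (with x'_i = max_j (A_k(i,j) + x_j)) is the asymptotic growth rate max_i x_k(i)/k. A naive simulation estimate can be sketched as follows; the paper's coupling bounds and perfect sampling scheme are far more refined than this:

```python
import random

def lyapunov_estimate(sample_matrix, n, steps=1000, seed=0):
    """Crude Monte Carlo estimate of the max-plus Lyapunov exponent:
    iterate x_{k+1}(i) = max_j (A_k(i,j) + x_k(j)) with i.i.d. random
    matrices A_k drawn by sample_matrix(rng), then return the average
    growth max_i x_steps(i) / steps."""
    rng = random.Random(seed)
    x = [0.0] * n
    for _ in range(steps):
        A = sample_matrix(rng)
        x = [max(A[i][j] + x[j] for j in range(n)) for i in range(n)]
    return max(x) / steps
```

For a deterministic matrix with diagonal entries 1 and off-diagonal entries 0, the state grows by exactly 1 per step, so the estimate is 1.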
oai:RePEc:eee:ejores:v:211:y:2011:i:3:p:427-4412011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:211:y:2011:i:3:p:427-441
article
Traveling salesman problem heuristics: Leading methods, implementations and latest advances
Heuristics for the traveling salesman problem (TSP) have made remarkable advances in recent years. We survey the leading methods and the special components responsible for their successful implementations, together with an experimental analysis of computational tests on a challenging and diverse set of symmetric and asymmetric TSP benchmark problems. The foremost algorithms are represented by two families, deriving from the Lin-Kernighan (LK) method and the stem-and-cycle (S&C) method. We show how these families can be conveniently viewed within a common ejection chain framework which sheds light on their similarities and differences, and gives clues about the nature of potential enhancements to today's best methods that may provide additional gains in solving large and difficult TSPs.
Traveling salesman problem Heuristics Ejection chains Local search
http://www.sciencedirect.com/science/article/B6VCT-512MHBH-3/2/8292128ec2f610cc97c3e99f0845a3f8
Rego, César
Gamboa, Dorabela
Glover, Fred
Osterman, Colin
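The ejection-chain methods surveyed above generalize simpler edge-exchange moves. As a baseline illustration only (plain 2-opt, not Lin-Kernighan or stem-and-cycle), a minimal sketch:

```python
def tour_length(tour, dist):
    """Total length of a closed tour under distance matrix dist."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt(tour, dist):
    """Repeatedly reverse a tour segment while doing so shortens the tour."""
    tour = list(tour)
    n = len(tour)
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            for j in range(i + 2, n):
                if i == 0 and j == n - 1:
                    continue  # this pair would reverse the whole tour
                a, b = tour[i], tour[i + 1]
                c, d = tour[j], tour[(j + 1) % n]
                # Replace edges (a,b) and (c,d) by (a,c) and (b,d) if shorter.
                if dist[a][c] + dist[b][d] < dist[a][b] + dist[c][d] - 1e-12:
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour
```

On a unit square with a self-crossing starting tour, 2-opt uncrosses the edges and recovers the optimal perimeter tour of length 4.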
oai:RePEc:eee:ejores:v:210:y:2011:i:2:p:231-2402011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:210:y:2011:i:2:p:231-240
article
Lead-time hedging and coordination between manufacturing and sales departments using Nash and Stackelberg games
In a firm, potential conflict exists between the manufacturing and sales departments. Salespersons prefer to order from the manufacturing department in advance so that they can secure products in the amount they need to satisfy customers in time. This order-in-advance strategy is defined as "lead-time hedging." While this hedging strategy helps the sales department guarantee the right quantity at the right time for customers, it adds costs and pressure to the manufacturing department. One scheme to resolve this conflict is to introduce a fair "internal price," charged by the manufacturing department to the sales department. In this paper, two models involving a fair internal price are introduced. In one model, a Nash game is played to reach an optimal strategy for both parties. In the other, a Stackelberg game is played in which the manufacturing department serves as the leader. We show that these two models can successfully reduce the lead-time hedging chosen by the salesperson and can increase the firm's overall profit, compared to the traditional model without an internal price. Further insights include comparisons of the manufacturer's and the salesperson's profits among the traditional model, the Nash game model, the Stackelberg game model, and the centralized global optimization model.
Lead-time hedging Supply chain coordination Game theory
http://www.sciencedirect.com/science/article/B6VCT-511K3SY-1/2/4fc788512de792eccb49afaedd851051
Hu, Yinan
Guan, Yongpei
Liu, Tieming
oai:RePEc:eee:ejores:v:210:y:2011:i:3:p:537-5512011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:210:y:2011:i:3:p:537-551
article
Deterministic EOQ with partial backordering and correlated demand caused by cross-selling
There has been much work on the deterministic EOQ with partial backordering. The majority of these studies assume no correlation in sales, so demands are treated as independent across items. However, it is generally recognized that cross-selling effects between items often appear in real contexts; incorporating such effects into the inventory model in the form of correlated demands therefore makes it more practically relevant. In this paper, the authors address a two-item inventory system where the demand of a minor item is correlated to that of a major item because of cross-selling. We first present a two-item EOQ model with identical order cycles, where the unmet demand of the major item can be partially backordered with lost sales, whereas the demand of the minor item must be met without stockouts. This model is then extended to a more practical case where the order cycle of the major item is an integer multiple of that of the minor item. The optimal solutions of the two models, as well as the inventory decision procedures, are also developed. A comparative computational study of the two EOQ models presents some insights into the effect of the parameters on the optimal inventory policy.
Inventory EOQ Partial backordering Correlated demand Cross-selling
http://www.sciencedirect.com/science/article/B6VCT-51696VK-2/2/905f4009877a6c0b99e7c69abe82a37b
Zhang, Ren-qian
Kaku, Ikou
Xiao, Yi-yong
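As background, the single-item EOQ with full backordering (the textbook building block; the paper's partial-backordering, two-item, cross-selling model is considerably richer) can be sketched as:

```python
import math

def eoq_with_backorders(D, K, h, p):
    """Textbook EOQ with full backordering.
    D: demand rate, K: fixed order cost, h: holding cost per unit per
    unit time, p: backorder penalty per unit per unit time.
    Returns (Q, S): optimal order quantity and maximum backorder level."""
    Q = math.sqrt(2 * D * K / h * (h + p) / p)
    S = Q * h / (h + p)
    return Q, S
```

As the backorder penalty p grows large, the quantity converges to the basic EOQ sqrt(2DK/h) and the backorder level vanishes, which is a useful sanity check.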
oai:RePEc:eee:ejores:v:207:y:2010:i:1:p:339-3492011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:207:y:2010:i:1:p:339-349
article
DEA model with shared resources and efficiency decomposition
Data envelopment analysis (DEA) has proved to be an excellent approach for measuring the performance of decision making units (DMUs) that use multiple inputs to generate multiple outputs. In many real-world scenarios, DMUs have a two-stage network process with shared input resources used in both stages of operations. For example, in hospital operations, some of the input resources such as equipment, personnel, and information technology are used in the first stage to generate medical records that track treatments, tests, drug dosages, and costs. The same set of resources used by first-stage activities is used to generate the second-stage patient services. Patient services also use the services generated by the first-stage operations of housekeeping, medical records, and laundry. These DMUs have not only inputs and outputs, but also intermediate measures that exist between the two stages. The distinguishing characteristic is that some of the inputs to the first stage are shared by both stages, and some of these shared inputs cannot be conveniently split up and allocated between the two stages. Recognizing this distinction is critical for these types of DEA applications, because measuring the efficiency of the production of first-stage outputs can be misleading and can understate efficiency if DEA fails to consider that some of the inputs also generate second-stage outputs. The current paper develops a set of DEA models for measuring the performance of two-stage network processes with non-splittable shared inputs. An additive efficiency decomposition for the two-stage network process is presented. The models are developed under the assumption of variable returns to scale (VRS), but can be readily applied under the assumption of constant returns to scale (CRS). An application is provided.
Data envelopment analysis (DEA) Efficiency Intermediate measure Two-stage Network
http://www.sciencedirect.com/science/article/B6VCT-4YNT46T-1/2/55b707f8c1edaaa3005ced25b98bd3b1
Chen, Yao
Du, Juan
David Sherman, H.
Zhu, Joe
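In its simplest degenerate case (one input, one output, constant returns to scale), DEA efficiency reduces to comparing output/input ratios against the best ratio in the sample; the two-stage shared-input models above require solving linear programs, but the basic idea can be sketched as:

```python
def ratio_efficiency(inputs, outputs):
    """Single-input, single-output CRS efficiency: each DMU's
    output/input ratio divided by the best ratio in the sample,
    so the best performer scores 1.0."""
    ratios = [o / i for i, o in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]
```

A DMU producing 4 units of output from 2 units of input dominates one producing 4 from 4, which scores half the efficiency.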
oai:RePEc:eee:ejores:v:208:y:2011:i:1:p:37-452011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:208:y:2011:i:1:p:37-45
article
The combinatorial bandwidth packing problem
In managing a telecommunications network, decisions need to be made concerning the admission of requests submitted by customers to use the network bandwidth. The classical bandwidth packing problem requires that each request submitted by a customer use network resources to establish a one-to-one connection involving a single pair of nodes. We extend the problem to the more practical case where each request submitted by a customer to use the network resources includes a set or combination of calls. This extension means that each request requires one-to-many or many-to-many connections to be established between many communicating node pairs, and it has applications in many important areas such as video conferencing and collaborative computing. The combinatorial nature of the requests makes the admission decision more complex because of bandwidth capacity limitations and call routing difficulties. We develop an integer programming formulation of the problem and propose a procedure that can produce verifiably good feasible solutions. The results of extensive computational experiments over a wide range of problem structures indicate that the procedure provides verifiably good feasible solutions within reasonable computational times.
Telecommunications Bandwidth packing Integer programming Lagrangean relaxation
http://www.sciencedirect.com/science/article/B6VCT-50R22YG-2/2/f6fb805ab0637e2d9b8b12fc5929533c
Amiri, Ali
Barkhi, Reza
oai:RePEc:eee:ejores:v:209:y:2011:i:1:p:63-722011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:209:y:2011:i:1:p:63-72
article
Analysis of stochastic dual dynamic programming method
In this paper we discuss statistical properties and convergence of the Stochastic Dual Dynamic Programming (SDDP) method applied to multistage linear stochastic programming problems. We assume that the underlying data process is stagewise independent and consider the framework where first a random sample from the original (true) distribution is generated and then the SDDP algorithm is applied to the constructed Sample Average Approximation (SAA) problem. We then analyze the SDDP solutions of the SAA problem and their relations to solutions of the "true" problem. Finally, we discuss an extension of the SDDP method to a risk-averse formulation of multistage stochastic programs. We argue that the computational complexity of the corresponding SDDP algorithm is almost the same as in the risk-neutral case.
Stochastic programming Stochastic Dual Dynamic Programming algorithm Sample Average Approximation method Monte Carlo sampling Risk averse optimization
http://www.sciencedirect.com/science/article/B6VCT-50SPVJK-2/2/c0950df61b541a9b5eb50875692563a2
Shapiro, Alexander
oai:RePEc:eee:ejores:v:211:y:2011:i:1:p:66-752011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:211:y:2011:i:1:p:66-75
article
Using intermediate infeasible solutions to approach vehicle routing problems with precedence and loading constraints
Logistics and transportation issues have been receiving increasing attention during the last decades, and their requirements have gradually changed, making it necessary to take into account new situations and conditions. The Double Traveling Salesman Problem with Multiple Stacks (DTSPMS) is a pickup and delivery problem in which additional precedence and loading constraints are imposed on the vehicle to be used. In this paper we approach the problem using intermediate infeasible solutions to diversify the search process, and we develop fixing procedures and infeasibility measures to deal with such solutions and take advantage of their potential.
Traveling Salesman Heuristics Infeasible solutions
http://www.sciencedirect.com/science/article/B6VCT-51H1DJM-1/2/88a73e0a09fb722ca1c5fb70b5cfd8c1
Felipe, Angel
Teresa Ortuño, M.
Tirado, Gregorio
oai:RePEc:eee:ejores:v:207:y:2010:i:3:p:1203-12092011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:207:y:2010:i:3:p:1203-1209
article
New strong duality results for convex programs with separable constraints
It is known that convex programming problems with separable inequality constraints do not have duality gaps. However, strong duality may fail for these programs because the dual programs may not attain their maximum. In this paper, we establish conditions characterizing strong duality for convex programs with separable constraints. We also obtain a sub-differential formula characterizing strong duality for convex programs with separable constraints whenever the primal problems attain their minimum. Examples are given to illustrate our results.
Strong duality Separable convex constraints Constraint qualifications Convex programming
http://www.sciencedirect.com/science/article/B6VCT-50HYG5X-1/2/1b1ecdba412c1dc1c9d418dac23239e5
Jeyakumar, V.
Li, G.
oai:RePEc:eee:ejores:v:210:y:2011:i:3:p:624-6342011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:210:y:2011:i:3:p:624-634
article
A clustering procedure for reducing the number of representative solutions in the Pareto Front of multiobjective optimization problems
In many multiobjective optimization problems, the Pareto Fronts and Sets contain a large number of solutions, and this makes it difficult for the decision maker to identify the preferred ones. A possible way to alleviate this difficulty is to present the decision maker with a small subset of solutions representative of the Pareto Front characteristics. In this paper, a two-step procedure is presented, aimed at identifying a limited number of representative solutions to be presented to the decision maker. Pareto Front solutions are first clustered into "families", which are then synthetically represented by a "head-of-the-family" solution. Level Diagrams are then used to represent, analyse and interpret the Pareto Front reduced to its head-of-the-family solutions. The procedure is applied to a reliability allocation case study from the literature, in decision-making contexts both with and without explicit preferences by the decision maker on the objectives to be optimized.
Multiobjective optimization Subtractive clustering Level Diagrams Fuzzy preference assignment Genetic algorithms Redundancy allocation
http://www.sciencedirect.com/science/article/B6VCT-51J7CGH-1/2/69b10a760a3bca4ffb4557c75269f74e
Zio, E.
Bazzo, R.
oai:RePEc:eee:ejores:v:210:y:2011:i:2:p:436-4472011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:210:y:2011:i:2:p:436-447
article
A new model for cycles in retail petrol prices
This paper explores the way that retail petrol prices may vary in a cyclical way as a result of competitive behavior by petrol retailers when some drivers choose to fill up early when a low petrol price is available. We include a stochastic model for an individual driver's use of petrol and consider what happens as the expected future price of petrol is adjusted either according to observed prices or in anticipation of cyclical behavior. This model is different from most previous work on petrol price cycles that has focussed on Edgeworth cycles.
Prices Retail petrol price cycles Edgeworth cycles
http://www.sciencedirect.com/science/article/B6VCT-517YN64-1/2/09e89f722207f37a458d6a7a41bcf9c1
Anderson, Edward
oai:RePEc:eee:ejores:v:208:y:2011:i:2:p:109-1182011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:208:y:2011:i:2:p:109-118
article
A comparison of global and semi-local approximation in T-stage stochastic optimization
The paper presents a comparison between two different flavors of nonlinear models to be used for the approximate solution of T-stage stochastic optimization (TSO) problems, a typical paradigm of Markovian decision processes. Specifically, the well-known class of neural networks is compared with a semi-local approach based on kernel functions, characterized by less demanding computational requirements. To this purpose, two alternative methods for the numerical solution of TSO are considered, one corresponding to the classic approximate dynamic programming (ADP) and the other based on a direct optimization of the optimal control functions, introduced here for the first time. Advantages and drawbacks in the TSO context of the two classes of approximators are analyzed, in terms of computational burden and approximation capabilities. Then, their performances are evaluated through simulations in two important high-dimensional TSO test cases, namely inventory forecasting and water reservoirs management.
Markov processes Dynamic programming Neural networks Semi-local approximation
http://www.sciencedirect.com/science/article/B6VCT-50T9WWR-1/2/341ddf4004172a6c19b859ebb509f937
Cervellera, C.
Macciò, D.
oai:RePEc:eee:ejores:v:210:y:2011:i:2:p:318-3252011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:210:y:2011:i:2:p:318-325
article
Operational asset replacement strategy: A real options approach
This paper analyses the asset replacement problem by investigating the optimal replacement timing in a given tax environment with a given depreciation policy. An operation and maintenance cost minimization model, based on the definition of equivalent annual cost, is applied within a real options paradigm. The developed methodology allows for an innovative evaluation of the flexibility of the replacement process. A new two-factor evaluation function is introduced to quantify decisions on asset replacement under a unique cycle environment. This study improves upon previous findings in the literature by accounting for autonomous salvage value processes. Based on partial differential equations, the model achieves a general analytical solution and a particular numerical solution. The results differ significantly from those of one-factor models by showing evidence of over-evaluation in optimal replacement levels, and by confirming suspicions that different types of uncertainty produce non-monotonous effects on the optimal replacement level. The scientific contribution of this study lies in a new and stronger approach to the equivalent annual cost literature, supplying an algorithm for operation and maintenance cost minimization conditioned by autonomous salvage value. This study also contributes to the real options literature by developing a two-factor model with Brownian processes applied to asset replacement.
Replacement Real options Uncertainty Equivalent annual cost First passage time
http://www.sciencedirect.com/science/article/B6VCT-511K3SY-2/2/f1c8fd60c20cc787a4012b3d4989fa53
Zambujal-Oliveira, João
Duque, João
oai:RePEc:eee:ejores:v:207:y:2010:i:1:p:350-3622011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:207:y:2010:i:1:p:350-362
article
WASAN: The development of a facilitated methodology for structuring a waste minimisation problem
This paper contributes a new methodology called Waste And Source-matter ANalyses (WASAN) which supports a group in building agreeable actions for safely minimising avoidable waste. WASAN integrates influences from the Operational Research (OR) methodologies/philosophies of Problem Structuring Methods, Systems Thinking, simulation modelling and sensitivity analysis as well as industry approaches of Waste Management Hierarchy, Hazard Operability (HAZOP) Studies and As Low As Reasonably Practicable (ALARP). The paper shows how these influences are compiled into facilitative structures that support managers in developing recommendations on how to reduce avoidable waste production. WASAN is being designed as Health and Safety Executive Guidance on what constitutes good decision making practice for the companies that manage nuclear sites. In this paper we report and reflect on its use in two soft OR/problem structuring workshops conducted on radioactive waste in the nuclear industry.
Problem Structuring Methods Group decisions/negotiations OR in energy Waste management
http://www.sciencedirect.com/science/article/B6VCT-4YS9RND-2/2/2512150a64407e4d958031f0b2ca29da
Shaw, Duncan
Blundell, Neil
oai:RePEc:eee:ejores:v:207:y:2010:i:1:p:330-3382011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:207:y:2010:i:1:p:330-338
article
A competitive dynamic pricing model when demand is interdependent over time
In this study, we contribute to the dynamic pricing literature by developing a finite horizon model for two firms offering substitutable and nonperishable products with different quality levels. Customers can purchase and store the products, even if they do not need them at the time, in order to use them in future. The stockpile of the products generated by customers affects the demand in future periods. Therefore, the demand for each product not only is a function of prices and quality levels, but also of the products' stockpile levels. In addition, the stockpile levels change the customers' consumption behavior; more product in a stockpile leads to more consumption. Therefore, we address not only the price and demand relationship but also the stockpiling and consumption relationship in a competitive environment. The decision variable of each firm at the beginning of each period is its unit sale price. We use a deterministic dynamic program to calculate the equilibrium prices at the beginning of each period. By assuming that the market stockpile is public information, we show the existence of a unique Nash equilibrium. We next consider the case when the firms do not know the market stockpile. We then develop appropriate heuristics to calculate the optimal prices in each case. A numerical study is also provided to calculate the price levels in different scenarios and compare their performances.
Pricing Revenue management Game theory Discrete choice model
http://www.sciencedirect.com/science/article/B6VCT-4YNC1R3-1/2/4541f3115db3e70490aa5cc40c9c6741
Sibdari, Soheil
Pyke, David F.
oai:RePEc:eee:ejores:v:207:y:2010:i:3:p:1635-16442011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:207:y:2010:i:3:p:1635-1644
article
A knowledge based approach to loss severity assessment in financial institutions using Bayesian networks and loss determinants
Modelling loss severity from rare operational risk events with potentially catastrophic consequences has proved a difficult task for practitioners in the finance industry. Efforts to develop loss severity models that comply with the Basel II Capital Accord have resulted in two principal model directions: one based on scenario-generated data and the other on scaling of pooled external data. However, a lack of relevant historical data and difficulties in constructing relevant scenarios frequently raise questions regarding the credibility of the resulting loss predictions. In this paper we suggest a knowledge-based approach for establishing severity distributions based on loss determinants and their causal influence. Loss determinants are key elements affecting the actual size of potential losses, e.g. market volatility, exposure and equity capital. The loss severity distribution is conditional on the state of the identified loss determinants, thus linking loss severity to underlying causal drivers. We suggest Bayesian networks as a powerful framework for quantitative analysis of the causal mechanisms determining loss severity. Drawing on available data and expert knowledge, the approach presented in this paper provides improved credibility of the loss predictions without depending on extensive data volumes.
Risk management OR in financial institutions Bayesian networks Loss determinants Advanced measurement approach
http://www.sciencedirect.com/science/article/B6VCT-50BJNMJ-4/2/b5bfdf5177ca3e69453008cc8cee1b8e
Häger, David
Andersen, Lasse B.
oai:RePEc:eee:ejores:v:207:y:2010:i:1:p:142-1512011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:207:y:2010:i:1:p:142-151
article
Coordinating ordering and pricing decisions in a two-stage distribution system with price-sensitive demand through short-term discounting
We consider a short-term discounting model in which the distributor offers a discounted price for the retailers' orders placed at the beginning of its replenishment cycle, in a non-cooperative distribution system with one distributor and multiple retailers, each facing price-sensitive demand. We examine the value of the price discount strategy as a mechanism for the distributor to coordinate the retailers' ordering and pricing decisions under two common types of demand, linear demand in price and constant elasticity demand in price. Our numerical study reveals that, in the presence of homogeneous retailers (namely, retailers with identical demand rates), the distributor's profit improvement due to coordination generally decreases as the number of retailers or the inventory holding cost rate increases, but increases as price elasticity increases. Although an increase in the inventory holding cost rate has a negative effect on the distributor's profit, it may have a positive effect on the retailers' profits. We further find that with heterogeneous retailers (namely, retailers with different demand rates), offering a discounted price under linear demand benefits the distributor when both the inventory holding cost rate and the variation in demand are either small or large. This cross effect, however, is absent under constant elasticity demand.
Coordination Price discount Supply chain Short-term discounting
http://www.sciencedirect.com/science/article/B6VCT-4YP8TMV-1/2/1e1292fca2c396844d8916233be853dc
Hsieh, Chung-Chi
Liu, Yu-Te
Wang, Wei-Ming
oai:RePEc:eee:ejores:v:207:y:2010:i:1:p:269-2832011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:207:y:2010:i:1:p:269-283
article
A model for real-time failure prognosis based on hidden Markov model and belief rule base
As one of the most important aspects of condition-based maintenance (CBM), failure prognosis has attracted increasing attention with the growing demand for higher operational efficiency and safety in industrial systems. Currently there is no effective method that can predict a hidden failure of a system in real time when environmental factors change and no accurate mathematical model of the system is available, owing to the system's intrinsic complexity and its operation in a potentially uncertain environment. This paper therefore develops a new hidden Markov model (HMM) based method to address this problem. Although an accurate model relating environmental factors to the failure process is difficult to obtain, expert knowledge can be collected and represented by a belief rule base (BRB), which is in effect an expert system. Combining the HMM with the BRB, a new prognosis model is proposed to predict the hidden failure in real time even under changing environmental factors. In the proposed model, the HMM captures the relationships between the hidden failure and the monitored observations of a system. The BRB models the relationships between the environmental factors and the transition probabilities among the hidden states of the system, including the hidden failure, which is the main contribution of this paper. Moreover, a recursive algorithm for updating the prognosis model online is developed. An experimental case study demonstrates the implementation and potential applications of the proposed real-time failure prognosis method.
Failure prognosis Belief rule base Expert systems Hidden Markov model Environmental factors
http://www.sciencedirect.com/science/article/B6VCT-4YNT46T-2/2/f8fa6dbf1c7701686dea4cbff82929ad
Zhou, Zhi-Jie
Hu, Chang-Hua
Xu, Dong-Ling
Chen, Mao-Yin
Zhou, Dong-Hua
oai:RePEc:eee:ejores:v:208:y:2011:i:1:p:86-942011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:208:y:2011:i:1:p:86-94
article
Maritime inventory routing with multiple products: A case study from the cement industry
This paper considers a maritime inventory routing problem faced by a major cement producer. A heterogeneous fleet of bulk ships transports multiple non-mixable cement products from producing factories to regional silo stations along the coast of Norway. Inventory constraints are present both at the factories and the silos, and there are upper and lower limits for all inventories. The ship fleet capacity is limited, and in peak periods the demand for cement products at the silos exceeds the fleet capacity. In addition, constraints regarding the capacity of the ships' cargo holds, the depth of the ports and the fact that different cement products cannot be mixed must be taken into consideration. A construction heuristic embedded in a genetic algorithm framework is developed. The approach is used to solve real instances of the problem within reasonable solution time and with good-quality solutions.
Routing Maritime transportation Inventory management Heuristics Genetic algorithms
http://www.sciencedirect.com/science/article/B6VCT-50W80TK-4/2/b38a500e17feacb86488805bb2cffb63
Christiansen, Marielle
Fagerholt, Kjetil
Flatberg, Truls
Haugen, Øyvind
Kloster, Oddvar
Lund, Erik H.
oai:RePEc:eee:ejores:v:207:y:2010:i:1:p:434-4442011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:207:y:2010:i:1:p:434-444
article
Offshore outsourcing decision making: A policy-maker's perspective
This research aims to find the best governing policy for offshore outsourcing of business activities. We use the Analytic Network Process, a multicriteria decision making methodology, to create the evaluation framework. From the perspective of decision makers, stakeholders and influence groups, four policy options are evaluated with respect to approximately 50 economic, political, technological and other factors. The model provides both long-term and short-term views of the outsourcing issue of concern to all parties. The all-inclusive approach helps policy makers decide on the best policy and has the potential to ease tension between proponents and opponents of offshore outsourcing.
Decision analysis Outsourcing Public policy Multicriteria decisions ANP
http://www.sciencedirect.com/science/article/B6VCT-4YTN3YJ-1/2/a36e0421fd44ef32fde13c66603e7129
Tjader, Youxu Cai
Shang, Jennifer S.
Vargas, Luis G.
oai:RePEc:eee:ejores:v:207:y:2010:i:3:p:1410-14182011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:207:y:2010:i:3:p:1410-1418
article
A single-shot game of multi-period inspection
This paper deals with an inspection game between Customs and a smuggler played over a number of days. Each day, Customs chooses whether or not to patrol, and the smuggler chooses whether or not to ship its cargo of contraband. Both players have several opportunities to act during the limited number of days, but they may forgo some of them. When a shipment coincides with a patrol, one of three events occurs: Customs captures the smuggler, the smuggling nevertheless succeeds, or nothing changes. The game ends when the smuggler is captured or no time remains. Most previous studies of the inspection game use a multi-stage game model, in which both players know the strategies taken at the previous stage. In this paper, we consider a two-person zero-sum single-shot game in which the game proceeds through multiple periods but neither player observes the actions taken by the opponent during the course of the game. We apply dynamic programming to enumerate all equilibrium points in the players' strategy space. We also clarify the characteristics of the players' optimal strategies through numerical examples.
Dynamic programming Game theory Inspection game Two-person zero-sum Multi-period
http://www.sciencedirect.com/science/article/B6VCT-50NBNP8-1/2/6fbb76a40977d81404f91931c6053bc6
Hohzaki, Ryusuke
Maehara, Hiroki
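Each period of such a game reduces to a matrix game. As a toy illustration (payoffs invented for the example, not taken from the paper), the following solves a 2x2 zero-sum patrol/ship stage game in closed form; the formula applies only when the game has no saddle point, so a mixed equilibrium exists.

```python
# A 2x2 two-person zero-sum stage game, with the smuggler as the row player.
# Payoffs are illustrative assumptions: capture = -1, successful smuggling = +2,
# a wasted patrol day = +1 for the smuggler, and 0 otherwise.

def solve_2x2_zero_sum(a):
    """Value and row mixed strategy of a 2x2 zero-sum game with no saddle point."""
    (a11, a12), (a21, a22) = a
    d = a11 - a12 - a21 + a22          # denominator of the mixed equilibrium
    x = (a22 - a21) / d                # probability of playing row 1 ("ship")
    v = (a11 * a22 - a12 * a21) / d    # game value for the row player
    return v, (x, 1 - x)

#            Customs: patrol, no patrol
payoffs = [[-1.0, 2.0],   # smuggler ships
           [ 1.0, 0.0]]   # smuggler waits
value, strategy = solve_2x2_zero_sum(payoffs)
print(value, strategy)
```

The paper's contribution is the multi-period case, where dynamic programming chains such stage games together under imperfect observation; this snippet only shows the single-period building block.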
oai:RePEc:eee:ejores:v:207:y:2010:i:2:p:1142-11432011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:207:y:2010:i:2:p:1142-1143
article
A note on single-machine scheduling with job-dependent learning effects
Mosheiov and Sidney (2003) showed that the makespan minimization problem with job-dependent learning effects can be formulated as an assignment problem and solved in O(n^3) time. We show that this problem can be solved in O(n log n) time by sequencing the jobs in shortest processing time (SPT) order, if we utilize the observation that the job-dependent learning rates are correlated with the level of sophistication of the jobs and assume that these rates are bounded from below. The optimality of the SPT sequence is also preserved when the job-dependent learning rates are inversely correlated with the level of sophistication of the jobs and bounded from above.
Scheduling Single-machine Learning
http://www.sciencedirect.com/science/article/B6VCT-50C71TB-2/2/461f00b545ebc276a26ccec81ca2bd21
Koulamas, Christos
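The SPT result can be illustrated directly. In the sketch below the data are invented, and the correlation between processing times and learning rates is built into the example, per the note's assumption: position-dependent processing times take the common form p_j * r^(a_j) for a job in position r, and the SPT makespan is checked against brute-force enumeration.

```python
from itertools import permutations

# (p_j, a_j): processing time and learning exponent; longer ("more sophisticated")
# jobs get stronger learning rates, matching the note's correlation assumption.
jobs = [(5.0, -0.40), (2.0, -0.25), (8.0, -0.50), (3.0, -0.30)]

def makespan(seq):
    # Single machine: the makespan is the sum of position-discounted times p * r**a.
    return sum(p * r ** a for r, (p, a) in enumerate(seq, start=1))

spt = sorted(jobs)                                    # O(n log n): sort by p_j
best = min(makespan(s) for s in permutations(jobs))   # O(n!) brute force
print(round(makespan(spt), 4), round(best, 4))
```

On this instance the sorted sequence attains the brute-force optimum, as the note's conditions predict.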
oai:RePEc:eee:ejores:v:207:y:2010:i:2:p:750-7622011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:207:y:2010:i:2:p:750-762
article
Multi-source facility location-allocation and inventory problem
We consider a joint facility location-allocation and inventory problem that incorporates multiple sources of warehouses. The problem is motivated by a real situation faced by a multinational applied chemistry company, in which multiple products are produced in several plants. A warehouse can be replenished by several plants jointly, owing to the plants' capabilities and capacities. Each customer has stochastic demand, and a certain amount of safety stock must be maintained in the warehouses to achieve a target customer service level. The problem is to determine the number and locations of warehouses, the allocation of customer demand, and the inventory levels of the warehouses. The objective is to minimize the expected total cost while satisfying the desired demand-weighted average customer lead time and the desired cycle service level. The problem is formulated as a mixed integer nonlinear programming model. Utilizing approximation and transformation techniques, we develop an iterative heuristic method for the problem. An experimental study shows that the proposed procedure performs well in comparison with a lower bound.
Location Inventory holding cost Multiple sources Heuristic method
http://www.sciencedirect.com/science/article/B6VCT-509XPT7-3/2/ffd421ba53af3f87bb111f7c05aae91c
Yao, Zhishuang
Lee, Loo Hay
Jaruphongsa, Wikrom
Tan, Vicky
Hui, Chen Fei
oai:RePEc:eee:ejores:v:207:y:2010:i:1:p:318-3292011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:207:y:2010:i:1:p:318-329
article
Using matrices to link conflict evolution and resolution in a graph model
The graph model for conflict resolution provides a convenient and effective means to model and analyze a strategic conflict. Standard practice is to carry out a stability analysis of a graph model, and then to follow up with a post-stability analysis, an important component of which is status quo analysis. A graph model can be viewed as an edge-colored graph, but the fundamental problem of status quo analysis - to find a shortest colored path from the status quo node to a desired equilibrium - is different from the well-known network analysis problem of finding the shortest path between two nodes. The only matrix method that has been proposed cannot track all aspects of the evolution of a conflict from the status quo state. Our explicit algebraic approach is convenient for computer implementation and, as demonstrated with a real world case study, easy to use. It provides new insights into a graph model, not only identifying all equilibria reachable from the status quo, but also how to reach them. Moreover, this approach bridges the gap between stability analysis and status quo analysis in the graph model for conflict resolution.
Graph model for conflict resolution Status quo analysis Incidence matrix Unilateral move arc-incidence matrix Unilateral improvement arc-incidence matrix Colored path
http://www.sciencedirect.com/science/article/B6VCT-4YPT1PM-1/2/270bd86a837bc917bbd8280e89a4f7ad
Xu, Haiyan
Marc Kilgour, D.
Hipel, Keith W.
Kemkes, Graeme
oai:RePEc:eee:ejores:v:210:y:2011:i:3:p:716-7282011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:210:y:2011:i:3:p:716-728
article
Product differentiation and operations strategy in a capacitated environment
We study a firm selling two products/services, which are differentiated solely in their prices and delivery times, to two different customer segments in a capacitated environment. From a demand perspective, when both products are available to all customers, they act as substitutes, affecting each other's demand. Customized products for each segment, on the other hand, result in independent demand for each product. From a supply perspective, the firm may either share the same capacity across segments or dedicate separate capacity to each segment. Our objective is to understand the interaction between product substitution and the firm's operations strategy (dedicated versus shared capacity), and how this interaction shapes the optimal product differentiation strategy. We show that in a highly capacitated system, if the firm decides to move from a dedicated to a shared capacity setting, it will need to offer more differentiated products, whether the products are substitutable or not. In contrast, when independent products become substitutable, the firm responds with a more homogeneous pricing scheme. Moreover, the optimal response to an increase in capacity cost also depends on the firm's operations strategy. In a dedicated capacity scenario, the optimal response is always to offer more homogeneous prices and delivery times. In a shared capacity setting, it is always optimal to quote more homogeneous delivery times, but to increase or decrease price differentiation depending on whether the status-quo capacity cost is high or low, respectively.
Pricing Product differentiation Delivery time guarantee Product substitution Capacity sharing
http://www.sciencedirect.com/science/article/B6VCT-51JPWSB-6/2/db2e7b8ee7f793685863d2a8e6b16b5c
Jayaswal, Sachin
Jewkes, Elizabeth
Ray, Saibal
oai:RePEc:eee:ejores:v:207:y:2010:i:1:p:218-2312011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:207:y:2010:i:1:p:218-231
article
Capacitated location model with online demand pooling in a multi-channel supply chain
This paper presents a location model that assigns online demands to the capacitated regional warehouses currently serving in-store demands in a multi-channel supply chain. The model explicitly considers the trade-off between the risk pooling effect and the transportation cost in a two-echelon inventory/logistics system. Keeping the delivery network for the in-store demands unchanged, the model aims to minimize the transportation cost, inventory cost and fixed handling cost in the system when assigning the online demands. We formulate the assignment problem as a non-linear integer programming model and propose Lagrangian relaxation based procedures to solve both the general case and an important special case. Numerical experiments show the efficiency of our algorithms. Furthermore, we find that, because of the pooling effect, the variance of the in-store demands currently served by a warehouse is an important parameter when the warehouse is considered as a candidate for supplying online demands. Highly uncertain in-store demands, as well as a low transportation cost per unit, can make a warehouse appealing. We illustrate with numerical examples the trade-off between the pooling effect and the transportation cost in the assignment problem. We also evaluate the cost savings of the policy derived from the model, which integrates the transportation cost with the pooling effect, over the commonly used policy based only on the transportation cost. Results show that the derived policy reduces costs by 1.5-7.5% on average, and in many instances the savings exceed 10%.
Multi-channel supply chain Online demand Facility location Lagrangian relaxation
http://www.sciencedirect.com/science/article/B6VCT-50106TX-3/2/3677ad057b767b74e8ac6756078a140b
Liu, Kaijun
Zhou, Yonghong
Zhang, Zigang
oai:RePEc:eee:ejores:v:207:y:2010:i:2:p:656-6672011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:207:y:2010:i:2:p:656-667
article
On contracts for VMI program with continuous review (r, Q) policy
Based on the continuous review (r, Q) policy, this paper deals with contracts for vendor managed inventory (VMI) programs in a system comprising a single vendor and a single retailer. Two business scenarios popular in VMI programs are "vendor with ownership" and "retailer with ownership". Taking the system performance under centralized control as the benchmark, we call a contract "perfect" if it enables the system to be coordinated and guarantees that the program can be trusted. A revenue sharing contract is designed for vendor with ownership, and a franchising contract is designed for retailer with ownership. Without consideration of the order policy and related costs at the vendor site, it is shown that one contract performs satisfactorily and the other is a perfect contract. With consideration of the order policy and related costs at the vendor site, it is shown that one contract performs satisfactorily and the performance of the other depends on the system parameters.
Supply chain management VMI program (r, Q) policy Contract Ownership
http://www.sciencedirect.com/science/article/B6VCT-502V6V5-2/2/52111468d7ceb605a8fea0a66e961817
Guan, Ruoxi
Zhao, Xiaobo
oai:RePEc:eee:ejores:v:210:y:2011:i:3:p:579-5872011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:210:y:2011:i:3:p:579-587
article
Optimizing departure times in vehicle routes
Most solution methods for the vehicle routing problem with time windows (VRPTW) develop routes from the earliest feasible departure time. In practice, however, temporary traffic congestion makes such solutions non-optimal with respect to minimizing the total duty time. Furthermore, the VRPTW does not account for driving hours regulations, which restrict the travel time available to truck drivers. To deal with these problems, we consider the vehicle departure time optimization (VDO) problem as a post-processing step for a VRPTW solution. We propose an ILP formulation that minimizes the total duty time. The results of a case study indicate that duty time reductions of 15% can be achieved. Furthermore, computational experiments on VRPTW benchmarks indicate that ignoring traffic congestion or driving hours regulations leads to practically infeasible solutions. Therefore, new vehicle routing methods should be developed that account for these common restrictions. We propose an integrated approach based on classical insertion heuristics.
Integer programming Departure time scheduling Time-dependent travel times Driving hours regulations
http://www.sciencedirect.com/science/article/B6VCT-51BYS9J-1/2/4f06f25b5e0acd2832f286882b4a5318
Kok, A.L.
Hans, E.W.
Schutten, J.M.J.
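The core trade-off, departing later to skip congestion and to avoid waiting at the customer, can be shown with a toy enumeration. All numbers below are invented for illustration; the paper itself formulates the problem as an ILP.

```python
# Toy departure time optimization for a single trip (times in minutes after midnight).
# Assumptions, not from the paper: congestion from 7:00 to 9:00 raises travel time
# from 60 to 90 minutes; service at the customer may start in [10:00, 11:00].

def travel_time(depart):
    return 90 if 420 <= depart < 540 else 60   # congested vs. free-flow

def duty_time(depart):
    """Travel plus waiting if the vehicle arrives before the window opens;
    returns None if the arrival misses the time window entirely."""
    arrive = depart + travel_time(depart)
    if arrive > 660:
        return None
    return travel_time(depart) + max(0, 600 - arrive)

candidates = [(duty_time(t), t) for t in range(0, 661, 10) if duty_time(t) is not None]
best_duty, best_depart = min(candidates)
print(best_duty, best_depart)
```

Here the earliest feasible departure is far from optimal: leaving at 9:00 (minute 540) skips the congestion and arrives exactly at the window opening, cutting the duty time to the free-flow travel time.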
oai:RePEc:eee:ejores:v:210:y:2011:i:2:p:398-4092011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:210:y:2011:i:2:p:398-409
article
IPSSIS: An integrated multicriteria decision support system for equity portfolio construction and selection
A fundamental principle of modern portfolio theory is that comparisons between portfolios are generally made using two criteria corresponding to the first two moments of return distributions, namely the expected return and the portfolio variance. According to this model, and according to most of the portfolio models derived from the stochastic dominance approach, the group of portfolios open to comparison is divided into two parts: on the one hand the efficient portfolios (those not dominated by any other portfolio in the group), and on the other, those that are dominated. In other words, these models do not solve for one optimal portfolio, but rather for an efficient set of portfolios, among which the investor must choose given his preference system. One criticism of these models, often raised by both practitioners and academics, is that they fail to embody the objectives of the decision maker (DM) through the various stages of the decision process. Our purpose in this article is to present an integrated and innovative methodological approach for the construction and selection of equity portfolios that takes into account the inherently multidimensional nature of the problem, while allowing the DM to incorporate his preferences in the decision process. The proposed approach, grounded in the field of multiple criteria decision making (MCDM) and more specifically in multiobjective mathematical programming (MMP), is implemented in the IPSSIS (Integrated Portfolio Synthesis and Selection Information System) decision support system (DSS). The validity of the proposed approach is tested through an illustrative application in the Athens Stock Exchange (ASE).
Portfolio construction Portfolio selection Equities Multiple criteria decision making Multiobjective mathematical programming Decision support systems
http://www.sciencedirect.com/science/article/B6VCT-50YK83C-2/2/e621dfb485740cfee8340e0d4d73ab4c
Xidonas, Panagiotis
Mavrotas, George
Zopounidis, Constantin
Psarras, John
oai:RePEc:eee:ejores:v:207:y:2010:i:3:p:1269-12792011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:207:y:2010:i:3:p:1269-1279
article
Social responsibility allocation in two-echelon supply chains: Insights from wholesale price contracts
Corporate social responsibility (CSR) is considered in a two-echelon supply chain consisting of an upstream supplier and a downstream firm that are bound by a wholesale price contract. The CSR performance (the outcome of CSR conduct) of the whole supply chain is gauged by a global variable, and the associated cost of achieving this CSR performance is incurred only by the supplier, with an expectation of being shared with the downstream firm via the wholesale price contract. As such, the key issue is to determine who should be designated as the responsibility holder with the right to offer the contract, and how this right should be appropriately restricted. Game-theoretical analyses are carried out on six games, resulting from different interaction schemes between the supplier and the firm, to derive their corresponding equilibria. Comparative institutional analyses are then conducted to determine the optimal social responsibility allocation based on both economic and CSR performance criteria. The main results are furnished in a series of propositions, and their implications for real-world business practice are discussed. Under the current model settings, the key findings are threefold: (1) the optimal allocation scheme is to assign the supplier as the responsibility holder, with appropriate restrictions on the corresponding right to determine the wholesale price; (2) an inherent conflict exists between the economic and CSR performance criteria and, hence, the two maxima cannot be achieved simultaneously; and (3) although the integrative channel profit is not attainable, the system-wide profit is improved by implementing the optimal social responsibility allocation scheme.
Supply chain management Corporate social responsibility Wholesale price contracts Equilibrium
http://www.sciencedirect.com/science/article/B6VCT-50CV7XN-1/2/adcb7b3624f8a3a116512eea5473a105
Ni, Debing
Li, Kevin W.
Tang, Xiaowo
oai:RePEc:eee:ejores:v:207:y:2010:i:2:p:897-9052011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:207:y:2010:i:2:p:897-905
article
Channel coordination in a consignment contract
With the emergence of virtual marketplaces, consignment selling has been thriving at an unprecedented pace. It has been shown in the literature that inefficiency exists in a decentralized consignment channel with a single revenue share agreement. In our study, we analyze contracts observed in practice that contain bonus or side payment terms, and examine whether they can promote better coordination between the supplier and the retailer. We find that no multi-tier bonus system can fully coordinate the consignment channel; however, fine adjustments of the bonus parameters can bring the channel close to full coordination. Additionally, we find that revenue sharing contracts with side payments not only fully coordinate the channel but can also be customized to meet the needs of small, medium and large suppliers for extra retailer services such as warehousing and transportation. Managerial insights on how to design the contracts from the supplier, retailer and channel perspectives are discussed.
Retailing Production Pricing Channel coordination
http://www.sciencedirect.com/science/article/B6VCT-5057KSN-2/2/3fcece723eb4cb908804136f24f511f0
Zhang, Dengfeng
de Matta, Renato
Lowe, Timothy J.
oai:RePEc:eee:ejores:v:207:y:2010:i:3:p:1321-13262011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:207:y:2010:i:3:p:1321-1326
article
Channel coordination under fairness concerns and nonlinear demand
The supply chain literature analyzing supplier-retailer contracts and channel coordination has typically focused on profit or revenue maximization as the members' sole objective. In such settings, it is well known that a simple wholesale price contract is not effective in coordinating the channel due to double marginalization. Recently, Cui et al. [Cui, T.H., Raju, J.S., Zhang, Z.J., 2007. Fairness and channel coordination. Management Science 53 (8) 1303-1314] introduced the members' fairness concerns into channel coordination. Assuming a linear demand function, the authors show that a coordinating wholesale price contract can be designed when only the retailer or both parties are concerned about fairness. In this paper, we extend the authors' results to other nonlinear demand functions that are commonly used in the literature. Our analysis reveals that, compared to the linear demand, the exponential demand function requires less stringent conditions to achieve coordination when only the retailer is fairness-concerned.
Supply chain management Fairness Channel coordination Stackelberg game Wholesale price contract
http://www.sciencedirect.com/science/article/B6VCT-50KC6NY-1/2/d8f766e3cbfba9fa8bbf5d55d6beec36
Caliskan-Demirag, Ozgun
Chen, Youhua (Frank)
Li, Jianbin
oai:RePEc:eee:ejores:v:211:y:2011:i:2:p:403-4142011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:211:y:2011:i:2:p:403-414
article
Implicit collusion and individual market power in electricity markets
Wholesale electricity markets may not produce competitive outcomes, either as a result of the exercise of market power or through problems of implicit collusion. In comparison with the great amount of attention paid to issues of market power, the problems of implicit collusion have not been extensively studied. In this paper, we use a coevolutionary approach to explore the effect of the price elasticity of demand, capacity and forward contracts on implicit collusion in a duopoly. We demonstrate that implicit collusion matters most under market conditions in which there is an intermediate degree of market power. Thus, markets that are either highly competitive, or in which one or both generators can exercise considerable market power, are markets in which implicitly collusive outcomes are less likely to arise.
Tacit collusion Market power Electricity Coevolution
http://www.sciencedirect.com/science/article/B6VCT-51WD0HV-1/2/c0dde2e84f5100f579c34039ceaf92c6
Anderson, E.J.
Cau, T.D.H.
oai:RePEc:eee:ejores:v:211:y:2011:i:2:p:385-3932011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:211:y:2011:i:2:p:385-393
article
The hidden information content of price movements
Dynamic patterns of prices in different markets may motivate (strategic) consumers, who could be monitoring price movements over time, to game vendors. Do past price movements carry information about the probability and magnitude of future price drops? Conducting empirical work in the airline industry on nearly 1000 US domestic routes, we find that some price metrics carry information about future price swings: these variables can assist in predicting the likelihood and magnitude of price drops. These price metrics yield significantly different signals, which also vary as the prediction horizon changes.
Marketing/operations interface Reference price Price volatility Dynamic pricing Consumer behaviour Airline industry
http://www.sciencedirect.com/science/article/B6VCT-51TYF2C-1/2/7b6230eec09aa67b155204a56d80963d
Mantin, Benny
Gillen, David
oai:RePEc:eee:ejores:v:212:y:2011:i:1:p:141-1472011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:212:y:2011:i:1:p:141-147
article
Super-efficiency DEA in the presence of infeasibility
It is well known that the super-efficiency data envelopment analysis (DEA) approach can be infeasible under the condition of variable returns to scale (VRS). By extending the work of Chen (2005), the current study develops a two-stage process for calculating super-efficiency scores regardless of whether the standard VRS super-efficiency model is feasible. The proposed approach first examines whether the standard VRS super-efficiency DEA model is infeasible. When the model is feasible, our approach yields super-efficiency scores identical to those arising from the original model. For efficient DMUs that are infeasible under the super-efficiency model, our approach yields super-efficiency scores that characterize input savings and/or output surpluses. The current study also shows that infeasibility may imply that an efficient DMU does not exhibit super-efficiency in inputs or outputs. When infeasibility occurs, it can be necessary that (i) both inputs and outputs be decreased to reach the frontier formed by the remaining DMUs under the input orientation and (ii) both inputs and outputs be increased to reach the frontier formed by the remaining DMUs under the output orientation. The newly developed approach is illustrated with numerical examples.
Data envelopment analysis (DEA) Infeasibility Super-efficiency
http://www.sciencedirect.com/science/article/B6VCT-520M1VJ-3/2/2a5f261ec513fa5f3fc617db65edde29
Lee, Hsuan-Shih
Chu, Ching-Wu
Zhu, Joe
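For readers unfamiliar with super-efficiency scoring, the sketch below solves the simpler constant returns to scale (CRS) variant with scipy, on invented single-input/single-output data. Under VRS one would add the convexity constraint sum(lambda) = 1, and it is exactly that constraint which can make the excluded-DMU program infeasible, the issue this paper addresses.

```python
import numpy as np
from scipy.optimize import linprog

def super_efficiency_crs(X, Y, o):
    """Input-oriented CRS super-efficiency score of DMU o (DMU o is excluded
    from the reference set). X: (n, inputs) array, Y: (n, outputs) array."""
    n = len(X)
    idx = [j for j in range(n) if j != o]
    # Variables: theta, lambda_j for j != o; objective: minimize theta.
    c = np.r_[1.0, np.zeros(n - 1)]
    A_ub, b_ub = [], []
    for i in range(X.shape[1]):      # sum lam_j * x_ji <= theta * x_oi
        A_ub.append(np.r_[-X[o, i], X[idx, i]])
        b_ub.append(0.0)
    for r in range(Y.shape[1]):      # sum lam_j * y_jr >= y_or
        A_ub.append(np.r_[0.0, -Y[idx, r]])
        b_ub.append(-Y[o, r])
    res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub, bounds=[(0, None)] * n)
    return res.fun if res.success else None   # None flags infeasibility

X = np.array([[2.0], [4.0], [8.0], [6.0]])   # inputs (illustrative)
Y = np.array([[2.0], [3.0], [5.0], [4.0]])   # outputs (illustrative)
print(round(super_efficiency_crs(X, Y, 0), 4))
```

DMU 0 is the only CRS-efficient unit here, so its score exceeds 1 (it could scale its input up and stay efficient), while the inefficient units score below 1.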
oai:RePEc:eee:ejores:v:207:y:2010:i:2:p:1002-10132011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:207:y:2010:i:2:p:1002-1013
article
An innovative approach for strategic capacity portfolio planning under uncertainties
This research studies multi-generation capacity portfolio planning problems under various uncertainty factors, including price uncertainties, demand fluctuations and uncertain product life cycles. The objective of this research is to develop an efficient algorithm that generates capacity portfolio policies robust to the aforementioned uncertainties. We model this capacity portfolio planning problem using Markov decision processes (MDP). In this MDP model, we consider two generations of manufacturing technology, where the new-generation capacity serves as a flexible resource that can be used to cover shortfalls in old-generation capacity. The objective of the MDP model is to maximize the expected profit under uncertainty. An efficient algorithm is proposed to solve the problem and provide an optimal capacity expansion policy for both types of capacity. Moreover, we show that the optimal capacity expansion policy has a monotone structure. We verify our results through a detailed simulation study.
Technology choice Multi-capacity expansion Dynamic programming Demand uncertainty
http://www.sciencedirect.com/science/article/B6VCT-504STF1-1/2/b0eaadcf6688fe0a70db32e6c8f7c30c
Wu, Cheng-Hung
Chuang, Ya-Tang
oai:RePEc:eee:ejores:v:212:y:2011:i:1:p:213-2222011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:212:y:2011:i:1:p:213-222
article
Globally optimal clusterwise regression by mixed logical-quadratic programming
Exact global optimization of the clusterwise regression problem is challenging and there are currently no published feasible methods for performing this clustering optimally, even though it has been over thirty years since its original proposal. This work explores global optimization of the clusterwise regression problem using mathematical programming and related issues. A mixed logical-quadratic programming formulation with implication of constraints is presented and contrasted against a quadratic formulation based on the traditional big-M, which cannot guarantee optimality because the regression line coefficients, and thus errors, may be arbitrarily large. Clusterwise regression optimization times and solution optimality for two clusters are empirically tested on twenty real datasets and three series of synthetic datasets ranging from twenty to one hundred observations and from two to ten independent variables. Additionally, a few small real datasets are clustered into three lines.
Mixed integer quadratic programming Mixed logical-quadratic programming Global optimization Combinatorial optimization Clusterwise regression Clustering
http://www.sciencedirect.com/science/article/B6VCT-51YBTGC-3/2/0465b018601b01e0980de4ad992cb659
Carbonneau, Réal A.
Caporossi, Gilles
Hansen, Pierre
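As context for why exact optimization matters here, the standard alternative is the alternating assign-and-refit heuristic sketched below on invented data. This is the local-search procedure that exact approaches such as the authors' are designed to improve upon, not their method; it can stall in local optima, which is precisely what a globally optimal formulation rules out.

```python
import numpy as np

def clusterwise_regression(x, y, k=2, iters=100, seed=0):
    """Alternating heuristic: assign each point to the line with the smaller
    squared residual, then refit each line by least squares (local optimum only)."""
    rng = np.random.default_rng(seed)
    Z = np.column_stack([np.ones_like(x), x])     # design matrix with intercept
    labels = rng.integers(0, k, size=len(y))      # random initial assignment
    coefs = [np.zeros(2)] * k
    for _ in range(iters):
        for c in range(k):
            m = labels == c
            if m.sum() >= 2:                      # skip degenerate clusters
                coefs[c], *_ = np.linalg.lstsq(Z[m], y[m], rcond=None)
        resid = np.stack([(y - Z @ b) ** 2 for b in coefs])
        new = resid.argmin(axis=0)                # reassign to the closer line
        if np.array_equal(new, labels):
            break                                 # converged to a local optimum
        labels = new
    return labels, coefs, resid.min(axis=0).sum()

# Two noiseless lines: y = 1 + 2x and y = 5 - x (10 points each).
x = np.tile(np.arange(10.0), 2)
y = np.concatenate([1 + 2 * np.arange(10.0), 5 - np.arange(10.0)])
labels, coefs, sse = clusterwise_regression(x, y)
print(round(sse, 6))
```

Rerunning with different seeds can return different partitions and objective values; the mixed logical-quadratic program instead certifies the global minimum of the total squared error.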
oai:RePEc:eee:ejores:v:211:y:2011:i:1:p:1-142011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:211:y:2011:i:1:p:1-14
article
Modeling, inference and optimization of regulatory networks based on time series data
In this survey paper, we present advances achieved in recent years in the development and use of OR, in particular optimization methods, in the new gene-environment and eco-finance networks, based on usually finite data series, with an emphasis on the uncertainty in them and in the interactions of the model items. Our networks represent models in the form of time-continuous and time-discrete dynamics, whose unknown parameters we estimate under constraints on complexity and regularization by various kinds of optimization techniques, ranging from linear, mixed-integer, spline, semi-infinite and robust optimization to conic, e.g., semi-definite, programming. We present different kinds of uncertainties and a new time-discretization technique, address aspects of data preprocessing and stability, and discuss related aspects from game theory and financial mathematics. Finally, we outline structural frontiers and discuss opportunities for future research and OR applications in the real world.
Nonlinear programming Uncertainty modeling Computational biology Environment Games Data mining
http://www.sciencedirect.com/science/article/B6VCT-50DYH3C-4/2/80a74d473c248058002add7921d5ef6a
Weber, Gerhard-Wilhelm
Defterli, Ozlem
Alparslan Gök, Sırma Zeynep
Kropat, Erik
oai:RePEc:eee:ejores:v:209:y:2011:i:3:p:241-2522011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:209:y:2011:i:3:p:241-252
article
Multi-level single machine lot-sizing and scheduling with zero lead times
A pharmaceutical company raised the question of whether an increased product portfolio could still be manufactured on the existing machinery. The proportional lot-sizing and scheduling problem (PLSP) seemed most appropriate for answering this question. However, although there are papers dealing with a multi-level PLSP, none allows a zero lead time offset, which is a prerequisite for the case considered here. In this paper we extend and modify an existing mixed integer linear programming (MIP) model formulation in two ways: first, we extend the single-level single machine PLSP to a multi-level single machine PLSP (PLSP-ML-SM) with a zero lead time offset. Second, we describe a new and more compact model formulation incorporating period-overlapping setup times and batch size constraints. Based on the real-world application, several test instances have been generated to provide insights into the characteristics that make instances of the PLSP-ML-SM difficult to solve with a standard MIP solver.
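The PLSP itself requires a MIP solver; as background only (an illustration of the simplest lot-sizing subproblem, not the paper's model), the classic Wagner-Whitin dynamic program for uncapacitated single-item lot-sizing can be sketched as:

```python
def wagner_whitin(demand, setup_cost, hold_cost):
    # F[t] = minimal cost to satisfy demand[0..t-1]; an order placed in
    # period j covers periods j..t-1, paying one setup plus holding costs.
    T = len(demand)
    F = [0.0] + [float("inf")] * T
    for t in range(1, T + 1):
        for j in range(t):  # j = period of the last order
            holding = sum(hold_cost * (i - j) * demand[i] for i in range(j, t))
            F[t] = min(F[t], F[j] + setup_cost + holding)
    return F[T]
```

With demand (10, 10), setup cost 5 and unit holding cost 1, ordering in each period (cost 10) beats one combined order (cost 15); lowering the holding cost to 0.1 reverses the choice.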
Lot-sizing Scheduling PLSP Bill of materials Lead time
http://www.sciencedirect.com/science/article/B6VCT-514R67V-1/2/2ecd31e4dcba8ec6588b490988649995
Stadtler, Hartmut
oai:RePEc:eee:ejores:v:207:y:2010:i:1:p:97-1042011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:207:y:2010:i:1:p:97-104
article
Flow shops with machine maintenance: Ordered and proportionate cases
We consider the m-machine ordered flow shop scheduling problem with machines subject to maintenance and with the makespan as objective. It is assumed that the maintenances are scheduled in advance and that the jobs are resumable. We consider permutation schedules and show that the problem is strongly NP-hard; it remains NP-hard in the ordinary sense even in the case of a single maintenance. We show that if the first (last) machine is the slowest and if maintenances occur only on the first (last) machine, then sequencing the jobs in the LPT (SPT) order yields an optimal schedule for the m-machine problem. As a special case of the ordered flow shop, we focus on the proportionate flow shop, where the processing times of any given job on all the machines are identical. We prove that the proportionate flow shop problem with two maintenance periods is NP-hard, while the problem with a single maintenance period can be solved in polynomial time. Furthermore, we show that the optimal algorithm for the single-maintenance case serves as an approximation algorithm for the two-maintenance case. In our conclusion we also discuss the computational complexity of other objective functions.
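The LPT/SPT results can be checked numerically on small instances via the standard permutation flow shop completion-time recurrence; the sketch below ignores maintenance periods and uses hypothetical processing times:

```python
def makespan(perm, p):
    # Completion-time recurrence for a permutation flow shop:
    # C[i][k] = max(C[i-1][k], C[i][k-1]) + p[i-1][job]; no maintenance.
    m = len(p)
    C = [[0.0] * (len(perm) + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for k, j in enumerate(perm, start=1):
            C[i][k] = max(C[i - 1][k], C[i][k - 1]) + p[i - 1][j]
    return C[m][len(perm)]
```

In a proportionate flow shop every permutation yields the same makespan, namely the sum of the job times plus (m - 1) times the largest one: with job times 2, 3, 5 on three machines, every order gives 10 + 2·5 = 20.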
Ordered flow shop Proportionate flow shop Maintenance Computational complexity Approximation algorithm
http://www.sciencedirect.com/science/article/B6VCT-4YYGGYH-2/2/fa943564cd371d17dd8d19f5bfdc3336
Choi, Byung-Cheon
Lee, Kangbok
Leung, Joseph Y.-T.
Pinedo, Michael L.
oai:RePEc:eee:ejores:v:210:y:2011:i:2:p:148-1572011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:210:y:2011:i:2:p:148-157
article
Hölder continuity and upper estimates of solutions to vector quasiequilibrium problems
In this paper, we establish the Hölder continuity of solution mappings to parametric vector quasiequilibrium problems in metric spaces under the case that solution mappings are set-valued. Our main assumptions are weaker than those in the literature, and the results extend and improve the recent ones. Furthermore, as an application of Hölder continuity, we derive upper bounds for the distance between an approximate solution and a solution set of a vector quasiequilibrium problem with fixed parameters.
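For reference, the standard formulation of the property established here (stated under assumed notation; the paper's exact constants and neighborhoods may differ) is: a set-valued solution map S is (ℓ, α)-Hölder continuous at a parameter λ̄ when

```latex
H\bigl(S(\lambda_1),\, S(\lambda_2)\bigr) \;\le\; \ell \, d(\lambda_1, \lambda_2)^{\alpha}
\qquad \text{for all } \lambda_1, \lambda_2 \in N(\bar{\lambda}),
```

with ℓ > 0, α ∈ (0, 1], H the Hausdorff distance and N(λ̄) a neighborhood of λ̄. The upper estimates of the paper bound the distance from an approximate solution to the solution set by a modulus of this type.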
Multiple objective programming Vector quasiequilibrium problems Hölder continuity Upper bounds Hausdorff distance
http://www.sciencedirect.com/science/article/B6VCT-51696VK-3/2/2abce11fc6071ad73e6fab8174cdeab1
Li, S.J.
Chen, C.R.
Li, X.B.
Teo, K.L.
oai:RePEc:eee:ejores:v:211:y:2011:i:2:p:282-2972011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:211:y:2011:i:2:p:282-297
article
Linear programming based decomposition methods for inventory distribution systems
We consider an inventory distribution system consisting of one warehouse and multiple retailers. The retailers face random demand and are supplied by the warehouse. The warehouse replenishes its stock from an external supplier. The objective is to minimize the total expected replenishment, holding and backlogging cost over a finite planning horizon. The problem can be formulated as a dynamic program, but this dynamic program is difficult to solve due to its high dimensional state variable. It has been observed in the earlier literature that if the warehouse is allowed to ship negative quantities to the retailers, then the problem decomposes by the locations. One way to exploit this observation is to relax the constraints that ensure the nonnegativity of the shipments to the retailers by associating Lagrange multipliers with them, which naturally raises the question of how to choose a good set of Lagrange multipliers. In this paper, we propose efficient methods that choose a good set of Lagrange multipliers by solving linear programming approximations to the inventory distribution problem. Computational experiments indicate that the inventory replenishment policies obtained by our approach can outperform several standard benchmarks by significant margins.
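The role of the Lagrange multipliers can be illustrated on a toy problem (not the paper's LP-based method): projected subgradient ascent on the dual of minimizing x² subject to x ≥ 1, whose dual optimum is λ = 2 with primal solution x = 1.

```python
def dual_ascent(steps=2000):
    # Toy problem: minimize x^2 subject to x >= 1, relaxing the constraint
    # (1 - x) <= 0 with a nonnegative multiplier lam.  The inner minimizer
    # of x^2 + lam*(1 - x) is x = lam/2; projected subgradient ascent on
    # lam converges to the dual optimum lam = 2, recovering x = 1.
    lam = 0.0
    for t in range(1, steps + 1):
        x = lam / 2.0                                  # inner minimization
        lam = max(0.0, lam + (1.0 / t) * (1.0 - x))    # subgradient step
    return lam, lam / 2.0
```

The subgradient of the dual at λ is the constraint violation 1 - x(λ), so the multiplier rises while the relaxed nonnegativity-style constraint is violated; the paper's contribution is to obtain a good multiplier set directly from LP approximations rather than by such iteration.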
Inventory distribution Approximate dynamic programming Inventory control
http://www.sciencedirect.com/science/article/B6VCT-51JPWSB-4/2/fee6c072a07780ebc0e8937c0eedb6f7
Kunnumkal, Sumit
Topaloglu, Huseyin
oai:RePEc:eee:ejores:v:212:y:2011:i:1:p:1-112011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:212:y:2011:i:1:p:1-11
article
Order acceptance and scheduling: A taxonomy and review
Over the past 20 years, the topic of order acceptance has attracted considerable attention from those who study scheduling and those who practice it. In a firm that strives to align its functions so that profit is maximized, the coordination of capacity with demand may require that business sometimes be turned away. In particular, there is a trade-off between the revenue brought in by a particular order, and all of its associated costs of processing. The present study focuses on the body of research that approaches this trade-off by considering two decisions: which orders to accept for processing, and how to schedule them. This paper presents a taxonomy and a review of this literature, catalogs its contributions and suggests opportunities for future research in this area.
Scheduling Order acceptance Job rejection Survey
http://www.sciencedirect.com/science/article/B6VCT-5161P9Y-4/2/30cdb68bda32279af3248740570bdc5f
Slotnick, Susan A.
oai:RePEc:eee:ejores:v:210:y:2011:i:2:p:344-3572011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:210:y:2011:i:2:p:344-357
article
Cost-based decision-making in middleware virtualization environments
Middleware virtualization refers to the process of running applications on a set of resources (e.g., databases, application servers, other transactional service resources) such that the resource-to-application binding can be changed dynamically on the basis of applications' resource requirements. Although virtualization is a rapidly growing area, little formal academic or industrial research provides guidelines for cost-optimal allocation strategies. In this work, we study this problem formally. We identify the problem and describe why existing schemes cannot be applied directly. We then formulate a mathematical model describing the business costs of virtualization. We develop runtime models of virtualization decision-making paradigms. We describe the cost implications of various runtime models and consider the cost effects of different managerial decisions and business factors, such as budget changes and changes in demand. Our results yield useful insights for managers in making virtualization decisions.
Computing science Virtualization Resource assignment System design
http://www.sciencedirect.com/science/article/B6VCT-516M74T-3/2/3a2cdff9009188e83487533e53fe54f7
Dutta, Kaushik
VanderMeer, Debra
oai:RePEc:eee:ejores:v:211:y:2011:i:2:p:310-3172011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:211:y:2011:i:2:p:310-317
article
A comparison of two sourcing tactics for a new component
We compare two sourcing tactics for a manufacturer purchasing a new component to be used in a one-time production run of a new product with uncertain and price-elastic demand. One alternative is to issue a request-for-quote (RFQ), whereby the manufacturer requests a price-quantity schedule from suppliers. The manufacturer uses this information to determine a production quantity and the number of components to purchase from each supplier. The other alternative is to post a bid specifying how the manufacturer's purchase quantity will depend on the supplier's component price. The suppliers use this information to compete on quantity. We find that, relative to the RFQ, under which possible supplier interaction makes the supplier response more challenging for the manufacturer to characterize, the benefit to the manufacturer from posting a bid increases with the number of suppliers due to the increased intensity of competition. If the new component is from an emerging industry with little mutual awareness among candidate suppliers, then regardless of the number of suppliers, expected manufacturer profit is higher under the RFQ. Posting a bid is more likely to benefit the manufacturer when the new component is from a more established industry with a high degree of awareness among candidate suppliers.
Sourcing tactics Short life-cycle product Uncertain price-sensitive demand RFQ Reverse auctions
http://www.sciencedirect.com/science/article/B6VCT-51S0WRY-1/2/96b29eabdee51b7bf3591085a1951b4d
Wang, Charles X.
Webster, Scott
Zhang, Sidong
oai:RePEc:eee:ejores:v:211:y:2011:i:2:p:415-4252011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:211:y:2011:i:2:p:415-425
article
Prices, promotions, and channel profitability: Was the conventional wisdom mistaken?
Because of the lack of empirical evidence supporting the shift of economic power from manufacturers to retailers, it has been claimed that the conventional wisdom that retailers benefit more from the use of consumer promotions was mistaken. This paper assesses this claim and examines how two different pricing approaches during manufacturers' instant price promotions targeted at consumers affect channel profits in a bilateral monopoly. We find that manufacturers should only offer rebates when they keep their prior-to-promotion wholesale prices unchanged. Consistent with the conventional wisdom, retailers do indeed take the lion's share of the incremental promotion profits. Surprisingly, however, we find that in the largest part of the parameter space, manufacturers still earn more total channel profits than retailers over time. The theoretical and managerial implications of these findings are discussed.
Marketing Economic power Game theory Marketing channel Instant rebates
http://www.sciencedirect.com/science/article/B6VCT-51WV03R-2/2/b81d2c748a04a25d7075c4c61af668db
Martín-Herrán, Guiomar
Sigué, Simon P.
oai:RePEc:eee:ejores:v:207:y:2010:i:3:p:1187-12022011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:207:y:2010:i:3:p:1187-1202
article
Convergence guarantees for generalized adaptive stochastic search methods for continuous global optimization
This paper presents some simple technical conditions that guarantee the convergence of a general class of adaptive stochastic global optimization algorithms. By imposing some conditions on the probability distributions that generate the iterates, these stochastic algorithms can be shown to converge to the global optimum in a probabilistic sense. These results also apply to global optimization algorithms that combine local and global stochastic search strategies and also those algorithms that combine deterministic and stochastic search strategies. This makes the results applicable to a wide range of global optimization algorithms that are useful in practice. Moreover, this paper provides convergence conditions involving the conditional densities of the random vector iterates that are easy to verify in practice. It also provides some convergence conditions in the special case when the iterates are generated by elliptical distributions such as the multivariate Normal and Cauchy distributions. These results are then used to prove the convergence of some practical stochastic global optimization algorithms, including an evolutionary programming algorithm. In addition, this paper introduces the notion of a stochastic algorithm being probabilistically dense in the domain of the function and shows that, under simple assumptions, this is equivalent to seeing any point in the domain with probability 1. This, in turn, is equivalent to almost sure convergence to the global minimum. Finally, some simple results on convergence rates are also proved.
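A minimal member of the class analyzed here is pure random search over a box: its sampling density is positive everywhere on the domain, which is exactly the kind of condition the convergence results formalize (an illustrative sketch, not one of the paper's algorithms):

```python
import random

def random_search(f, bounds, iters=2000, seed=0):
    # Pure random search: the uniform sampling density is positive on the
    # whole box, so the best value found converges to the global minimum
    # with probability 1 as the iteration count grows.
    rng = random.Random(seed)
    best_x, best_f = None, float("inf")
    for _ in range(iters):
        x = [rng.uniform(lo, hi) for lo, hi in bounds]
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f
```

More sophisticated algorithms mix such globally supported sampling with local or deterministic steps; the paper's conditions ensure the global component is never starved.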
Global optimization Stochastic search Random search Convergence Evolutionary algorithm Evolutionary programming
http://www.sciencedirect.com/science/article/B6VCT-50GMMCT-1/2/c37f7b34c7b3b3035cb7fabb67b7b011
Regis, Rommel G.
oai:RePEc:eee:ejores:v:210:y:2011:i:2:p:213-2302011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:210:y:2011:i:2:p:213-230
article
Hub location-allocation in intermodal logistic networks
Within the context of intermodal logistics, the design of transportation networks becomes more complex than it is for single mode logistics. In an intermodal network, the respective modes are characterized by the transportation cost structure, modal connectivity, availability of transfer points and service time performance. These characteristics suggest the level of complexity involved in designing intermodal logistics networks. This research develops a mathematical model using the multiple-allocation p-hub median approach. The model encompasses the dynamics of individual modes of transportation through transportation costs, modal connectivity costs, and fixed location costs under service time requirements. A tabu search meta-heuristic is used to solve large size (100 node) problems. The solutions obtained using this meta-heuristic are compared with tight lower bounds developed using a Lagrangian relaxation approach. An experimental study evaluates the performance of the intermodal logistics networks and explores the effects and interactions of several factors on the design of intermodal hub networks subject to service time requirements.
Logistics Intermodal Transportation Hub networks
http://www.sciencedirect.com/science/article/B6VCT-51696VK-1/2/91bcf2ebf5ec9e8ae800b222e68fe243
Ishfaq, Rafay
Sox, Charles R.
oai:RePEc:eee:ejores:v:207:y:2010:i:1:p:499-5072011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:207:y:2010:i:1:p:499-507
article
Applying simulation optimization to the asset allocation of a property-casualty insurer
Proper asset allocations are vital for property-casualty insurers to be competitive and solvent; theories of finance, however, offer little practical guidance for constructing them. This research integrates simulation models with a newly developed evolutionary algorithm for the multi-period asset allocation problem of a property-casualty insurer. We first construct a simulation model to simulate the operations of a property-casualty insurer. Then we develop multi-phase evolution strategies (MPES) to be used with the simulation model to search for promising asset allocations for the insurer. A thorough experiment is conducted to evaluate the performance of our simulation optimization approach. Computational results show that MPES is an effective search algorithm: it dominates the grid search method by a significant margin, and the re-allocation strategy resulting from MPES significantly outperforms re-balancing strategies. This research further demonstrates that the simulation optimization approach can be used to study economic issues related to multi-period asset allocation problems in practical settings.
Simulation Optimization Evolution strategies Asset allocation Property-casualty insurance
http://www.sciencedirect.com/science/article/B6VCT-4YYRMP2-1/2/b471ead9b4520bf959584f7f083fb00b
Yu, Tzu-Yi
Tsai, Chenghsien
Huang, Hsiao-Tzu
oai:RePEc:eee:ejores:v:210:y:2011:i:3:p:459-4662011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:210:y:2011:i:3:p:459-466
article
Cooperative game theory and inventory management
Supply chain management is concerned with the coordination of materials, products and information flows among the suppliers, manufacturers, distributors, retailers and customers involved in producing and delivering a final product or service. In this setting, the centralization of inventory management and the coordination of actions, to further reduce costs and improve customer service levels, is a relevant issue. In this paper, we provide a review of the applications of cooperative game theory to the management of centralized inventory systems. In addition, we introduce and study a new model of centralized inventory: a multi-client distribution network.
Game theory Cooperative games Inventory models Centralized inventory management
http://www.sciencedirect.com/science/article/B6VCT-50F3PJ8-1/2/b76c3a8bc91d28a55e541eb167dd18ec
Fiestras-Janeiro, M.G.
García-Jurado, I.
Meca, A.
Mosquera, M.A.
oai:RePEc:eee:ejores:v:210:y:2011:i:2:p:301-3092011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:210:y:2011:i:2:p:301-309
article
A posterior preference articulation approach to multiresponse surface optimization
In multiresponse surface optimization (MRSO), responses are often in conflict. To obtain a satisfactory compromise, the preference information of a decision maker (DM) on the tradeoffs among the responses should be incorporated into the problem. In most existing work, the DM expresses a subjective judgment on the responses through a preference parameter before the problem-solving process, after which a single solution is obtained. In this study, we propose a posterior preference articulation approach to MRSO. The approach initially finds a set of nondominated solutions without the DM's preference information, and then allows the DM to select the best solution from among the nondominated solutions. An interactive selection method based on pairwise comparisons made by the DM is adopted in our method to facilitate the DM's selection process. The proposed method does not require that the preference information be specified in advance. It is easy and effective in that a satisfactory compromise can be obtained through a series of pairwise comparisons, regardless of the type of the DM's utility function.
Quality management Multiresponse surface optimization Nondominated solution Posterior preference articulation approach
http://www.sciencedirect.com/science/article/B6VCT-51491G1-7/2/eec7d7e54d8387bf467831dd2e146e17
Lee, Dong-Hee
Kim, Kwang-Jae
Köksalan, Murat
oai:RePEc:eee:ejores:v:207:y:2010:i:3:p:1293-13032011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:207:y:2010:i:3:p:1293-1303
article
Ambulance location and relocation problems with time-dependent travel times
Emergency service providers face the following problem: how and where to locate vehicles in order to cover potential future demand effectively. Ambulances are supposed to be located at designated locations such that in case of an emergency the patients can be reached in a time-efficient manner. A patient is said to be covered by a vehicle if (s)he can be reached by an ambulance within a predefined time limit. Due to variations in speed and the resulting travel times, it is not sufficient to solve the static ambulance location problem once using fixed average travel times, as the coverage areas themselves change throughout the day. Hence we developed a multi-period version, taking into account time-varying coverage areas, in which we allow vehicles to be repositioned in order to maintain a certain coverage standard throughout the planning horizon. We formulate a mixed integer program for the problem at hand, which optimizes coverage at various points in time simultaneously. The problem is solved metaheuristically using variable neighborhood search. We show that it is essential to consider time-dependent variations in travel times and coverage: ignoring them overestimates the resulting objective by more than 24%, while taking these variations into account explicitly improves the solution by more than 10% on average.
Location Time-dependent travel time Ambulance vehicles Variable neighborhood search
http://www.sciencedirect.com/science/article/B6VCT-50DYH3C-1/2/179675b0bfa5aaa8bf634862ec16924a
Schmid, Verena
Doerner, Karl F.
oai:RePEc:eee:ejores:v:210:y:2011:i:3:p:514-5262011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:210:y:2011:i:3:p:514-526
article
Analysis of compound bullwhip effect causes
This research investigates compound causes of the bullwhip effect (BWE) by considering an inventory system with multiple price-sensitive demand streams. Joint price and demand dynamics are captured by a vector time-series process that incorporates the stochastic co-movements in price and demand. We study two BWE measures, one for each demand stream individually and one for the aggregated demand. We show that demand parameters including demand autocorrelation, cross-correlation, and price sensitivity serve as root causes of the BWE. We prove that the impact of these parameters on the BWE can be additively decomposed. Conditions are established under which a pair of simultaneous compound causes may attenuate or dampen the BWE. When demand streams are aggregated, we derive a pooling factor that quantifies the impact of demand aggregation on order stability. When positive, the pooling factor corresponds to a synergy effect that captures the gain in the stability of the pooled orders. Conditions for the existence of the synergy effect are obtained for several special cases involving a zero leadtime. We also discuss how our analytical findings can be managerially applied to bullwhip mitigation strategies.
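The variance-amplification mechanism behind the BWE can be illustrated with a single AR(1) demand stream and a one-period MMSE forecast (a simplification; the paper treats multiple price-sensitive streams via a vector time series, and the parameters below are hypothetical):

```python
import random
import statistics

def bullwhip_ratio(rho=0.5, mu=20.0, sigma=2.0, T=20000, seed=1):
    # AR(1) demand with an order-up-to policy and one-period MMSE forecast:
    # order_t = d_t + rho * (d_t - d_{t-1}), which amplifies demand variance
    # whenever demand is positively autocorrelated (rho > 0).
    rng = random.Random(seed)
    d_prev = mu
    demands, orders = [], []
    for _ in range(T):
        d = mu + rho * (d_prev - mu) + rng.gauss(0.0, sigma)
        orders.append(d + rho * (d - d_prev))
        demands.append(d)
        d_prev = d
    return statistics.pvariance(orders) / statistics.pvariance(demands)
```

For rho = 0.5 the theoretical ratio Var(orders)/Var(demand) works out to 1.75, so the simulated measure sits well above 1: the orders are strictly more volatile than the demand they serve.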
Bullwhip effect Forecasting Supply chain Inventory control
http://www.sciencedirect.com/science/article/B6VCT-51491G1-5/2/5c2b872de8b0c9c6a8f322e2a01c9375
Zhang, Xiaolong
Burke, Gerard J.
oai:RePEc:eee:ejores:v:207:y:2010:i:2:p:725-7352011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:207:y:2010:i:2:p:725-735
article
Ensuring responsive capacity: How to contract with backup suppliers
Firms that source from offshore plants frequently perceive the lack of reliability and flexibility to be among the major drawbacks of their strategy. To mitigate imminent mismatches of uncertain supply and demand, establishing capacity hedges in the form of responsive backup suppliers is a way out that many firms follow. This article analyzes how firms should contract with backup suppliers, inducing the latter to install responsive capacity. We show that supply options are appropriate to achieve sourcing channel coordination under forced compliance, whereas any firm commitment contract imposes a deadweight loss on the system. Whereas price-only contracts are unable to coordinate the sourcing channel under voluntary compliance, utilization-dependent price-only contracts are. Under the former contract, a price-focused strategy on the part of the manufacturer turns out to diminish the system's service level and may have negative implications for installed backup capacity, and not least for the manufacturer's profit.
Responsive capacity Demand and supply uncertainty Supply chain contracting Operational hedging Backup supplier
http://www.sciencedirect.com/science/article/B6VCT-507CRR3-2/2/fce5820f32912996a750dac1e1ad7def
Sting, Fabian J.
Huchzermeier, Arnd
oai:RePEc:eee:ejores:v:212:y:2011:i:1:p:89-992011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:212:y:2011:i:1:p:89-99
article
Analytical approximations to predict performance measures of Markovian-type manufacturing systems with job failures and parallel processing
Manufacturing or service systems with multiple product classes, job circulation due to random failures, resources shared between product classes, and some portions of the manufacturing or assembly carried out in series and the rest in parallel are commonly observed in real life. The web server assembly is one such manufacturing system that exhibits these characteristics. Predicting the performance measures of such manufacturing systems is not an easy task. The primary objective of this research is to propose analytical approximations to predict the flow times of manufacturing systems with the above characteristics and to evaluate their accuracy. The manufacturing system is represented as a network of queues. The parametric decomposition approach is used to develop analytical approximations for a system with arrival and service rates from a Markovian distribution. The results from the analytical approximations are compared to simulation models. In order to bridge the gap in error, correction terms were developed through regression modeling. The experimental study conducted indicates that the analytical approximations, along with the correction terms, can serve as good estimates of the flow times of manufacturing systems with the above characteristics.
Queuing network Parametric decomposition Web server assembly Fork and join queues
http://www.sciencedirect.com/science/article/B6VCT-5223Y1T-1/2/67cbbeb083b8266294b62c07b9208ea4
Hulett, Maria
Damodaran, Purushothaman
oai:RePEc:eee:ejores:v:207:y:2010:i:2:p:1052-10642011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:207:y:2010:i:2:p:1052-1064
article
Knowledge sharing in communities of practice: A game theoretic analysis
This research applies game theory to analyze the incentives for knowledge-sharing activities in various types of communities of practice (COPs), characterized by individual profiles and decision structures. Individual decision making results in the under-provision of knowledge; however, the benefit of knowledge sharing may be raised by IT investment and the suitable incentive mechanisms we study here. Under general conditions, improving communication and collaboration technologies should take priority over developing data mining technologies. However, when the number of community members is sufficiently small and the heterogeneity of the expected value of knowledge among community members is sufficiently large, developing data mining technologies becomes the more important investment if most community members are low-type. On the other hand, based on a screening technique, we find that the benefit of knowledge sharing under incomplete information can equal that under complete information if the cost of the more efficient community member is smaller than that of the less efficient one.
Economics Cost benefit analysis Gaming Knowledge sharing Incentive mechanism
http://www.sciencedirect.com/science/article/B6VCT-5057KSN-4/2/6e72fe9d0a6b18196338e3d4721f8d8e
Li, Yung-Ming
Jhang-Li, Jhih-Hua
oai:RePEc:eee:ejores:v:210:y:2011:i:3:p:618-6232011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:210:y:2011:i:3:p:618-623
article
Optimal means for continuous processes in series
We discuss the problem of determining the means of a set of processes in series. Each process generates a random quality characteristic that in turn has lower and upper specification limits. Depending on the value of the quality characteristic, an item can be reworked, scrapped or forwarded to the next process. An item is reworked at the same stage. The processes are continuously running, hence we develop the "long term" probabilities of meeting specifications, and of violating each limit. These are used to construct the profit function to be maximized. We present a recursive form of the profit function that yields a very efficient method for determining the means. The method relies on solving single stage problems. Next, we turn our attention to the single stage problem and show that if the quality characteristics are normally distributed, then a local optimum is also global. Finally, we present a very fast solution method for this problem.
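A minimal sketch of the single-stage targeting step, assuming a normally distributed quality characteristic and an illustrative cost structure (the gain, scrap and rework parameters are hypothetical, not the paper's profit function):

```python
import math

def phi(z):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def expected_profit(mu, sigma, L, U, gain=10.0, scrap=4.0, rework=1.0):
    # Conforming items earn `gain`; items below L are scrapped, items above
    # U are reworked (illustrative costs -- not the paper's exact function).
    p_low = phi((L - mu) / sigma)
    p_high = 1.0 - phi((U - mu) / sigma)
    return gain * (1.0 - p_low - p_high) - scrap * p_low - rework * p_high

def best_mean(sigma, L, U, grid=2001):
    # Grid search for the profit-maximizing mean within the spec limits.
    candidates = [L + (U - L) * i / (grid - 1) for i in range(grid)]
    return max(candidates, key=lambda mu: expected_profit(mu, sigma, L, U))
```

Because scrapping is costlier than reworking here, the optimal mean shifts above the midpoint of the specification interval, toward the cheaper violation; for sigma = 1 on [0, 4] the optimizer lands near 2.06 rather than 2.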
Quality control Process target levels Properties of single machine targeting problems
http://www.sciencedirect.com/science/article/B6VCT-5192153-1/2/e1d4b8bf16ca9527dda31e9c5e16c8f7
Selim, Shokri Z.
Al-Zu'bi, Walid K.
oai:RePEc:eee:ejores:v:207:y:2010:i:1:p:1-142011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:207:y:2010:i:1:p:1-14
article
A survey of variants and extensions of the resource-constrained project scheduling problem
The resource-constrained project scheduling problem (RCPSP) consists of activities that must be scheduled subject to precedence and resource constraints such that the makespan is minimized. It has become a well-known standard problem in the context of project scheduling which has attracted numerous researchers who developed both exact and heuristic scheduling procedures. However, it is a rather basic model with assumptions that are too restrictive for many practical applications. Consequently, various extensions of the basic RCPSP have been developed. This paper gives an overview over these extensions. The extensions are classified according to the structure of the RCPSP. We summarize generalizations of the activity concept, of the precedence relations and of the resource constraints. Alternative objectives and approaches for scheduling multiple projects are discussed as well. In addition to popular variants and extensions such as multiple modes, minimal and maximal time lags, and net present value-based objectives, the paper also provides a survey of many less known concepts.
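Many of the heuristics surveyed for the basic RCPSP rest on schedule generation schemes; a minimal serial scheme for a single renewable resource can be sketched as follows (illustrative: activities are inserted in index order rather than by a priority rule, and the data are hypothetical):

```python
def serial_sgs(durations, preds, demand, capacity):
    # Serial schedule generation: place each activity at the earliest start
    # time that respects its precedence constraints and the resource profile.
    n, horizon = len(durations), sum(durations) + 1
    usage = [0] * horizon          # per-period load on the single resource
    start = {}
    for j in range(n):
        est = max((start[p] + durations[p] for p in preds[j]), default=0)
        t = est
        while any(usage[u] + demand[j] > capacity
                  for u in range(t, t + durations[j])):
            t += 1
        start[j] = t
        for u in range(t, t + durations[j]):
            usage[u] += demand[j]
    return start
```

With three unit-demand activities of duration 2 on a capacity-1 resource, where activity 1 must follow activity 0, the scheme serializes everything: starts 0, 2 and 4. The extensions surveyed in the paper generalize exactly the ingredients visible here: the activity concept (durations and demands), the precedence relations (preds), and the resource constraints (usage and capacity).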
Project scheduling Modeling Resource constraints Temporal constraints Networks
http://www.sciencedirect.com/science/article/B6VCT-4XNN5G6-2/2/2500b7b287428c1b2f4823dd73df7d17
Hartmann, Sönke
Briskorn, Dirk
oai:RePEc:eee:ejores:v:210:y:2011:i:3:p:694-7052011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:210:y:2011:i:3:p:694-705
article
Assigning judges to competitions of several rounds using Tabu search
The judge assignment problem consists of finding an assignment satisfying the competition rules (hard constraints) and meeting, as much as possible, the competition organizers' objectives (soft constraints). In this paper, various specific real-world constraints that arise in organizing academic competitions are handled. We tackle the corresponding problem with a metaheuristic approach based on Tabu search. The numerical results indicate that very good solutions can be generated in reasonable computational times.
Tabu search Scheduling Assignment
http://www.sciencedirect.com/science/article/B6VCT-51F25NV-1/2/bdf99f34d4e4ab095fac9696fe518280
Lamghari, Amina
Ferland, Jacques A.
oai:RePEc:eee:ejores:v:209:y:2011:i:3:p:265-2722011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:209:y:2011:i:3:p:265-272
article
The traveling purchaser problem with stochastic prices: Exact and approximate algorithms
The paper formulates an extension of the traveling purchaser problem in which multiple types of commodities are sold at spatially distributed locations with stochastic prices (each following a known probability distribution). The purchaser's goal is to find the optimal routing and purchasing strategies that minimize the expected total travel and purchasing costs needed to purchase one unit of each commodity. The purchaser observes the actual commodity price at a seller upon arrival, and then either purchases the commodity at the offered price or rejects it and moves on to the next seller. In this paper, we propose an exact solution algorithm based on dynamic programming, an iterative approximate algorithm that yields bounds for the minimum total expected cost, and a greedy heuristic for fast solutions to large-scale applications. We analyze the characteristics of the problem and test the computational performance of the proposed algorithms. The numerical results show that the approximate and heuristic algorithms yield near-optimal strategies and very good estimates of the minimum total cost.
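For a fixed seller sequence and a single commodity, the accept/reject structure reduces to an optimal-stopping recursion; the sketch below is a simplification of the paper's dynamic program (the seller order, price distributions and travel costs are hypothetical):

```python
def expected_cost(prices_probs, travel):
    # V[k]: expected remaining cost when standing at seller k.  At the last
    # seller the purchaser must buy; earlier, a revealed price is accepted
    # only if it beats the expected cost of traveling on and continuing.
    n = len(prices_probs)
    V = [0.0] * n
    V[n - 1] = sum(p * c for c, p in prices_probs[n - 1])
    for k in range(n - 2, -1, -1):
        cont = travel[k] + V[k + 1]          # go to seller k+1, keep looking
        V[k] = sum(p * min(c, cont) for c, p in prices_probs[k])
    return V[0]
```

With seller 0 charging 5 or 10 with equal probability, a travel cost of 1, and seller 1 charging 7 for sure, the continuation value is 8, so the purchaser accepts 5 and rejects 10, giving an expected cost of 0.5·5 + 0.5·8 = 6.5.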
Traveling purchaser problem Stochastic price Dynamic programming Approximation Heuristic
http://www.sciencedirect.com/science/article/B6VCT-511R9M9-1/2/c6d3bde65281bcac50092afd5686e7d3
Kang, Seungmo
Ouyang, Yanfeng
oai:RePEc:eee:ejores:v:210:y:2011:i:3:p:736-7442011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:210:y:2011:i:3:p:736-744
article
A solution approach to the inventory routing problem in a three-level distribution system
We consider the infinite horizon inventory routing problem in a three-level distribution system with a vendor, a warehouse and multiple geographically dispersed retailers. In this problem, each retailer faces a demand at a deterministic, retailer-specific rate for a single product. The demand of each retailer is replenished either from the vendor through the warehouse or directly from the vendor. Inventories are kept at both the retailers and the warehouse. The objective is to determine a combined transportation (routing) and inventory strategy minimizing a long-run average system-wide cost while meeting the demand of each retailer without shortage. We present a decomposition solution approach based on a fixed partition policy where the retailers are partitioned into disjoint and collectively exhaustive sets and each set of retailers is served on a separate route. Given a fixed partition, the original problem is decomposed into three sub-problems. Efficient algorithms are developed for the sub-problems by exploring important properties of their optimal solutions. A genetic algorithm is proposed to find a near-optimal fixed partition for the problem. Computational results show the performance of the solution approach.
Logistics Distribution Inventory routing Multi-echelon
http://www.sciencedirect.com/science/article/B6VCT-5192153-2/2/daf1832020c8163cc6277a554ca4912f
Li, Jianxiang
Chu, Feng
Chen, Haoxun
oai:RePEc:eee:ejores:v:210:y:2011:i:3:p:729-7352011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:210:y:2011:i:3:p:729-735
article
Mixed oligopoly with consistent conjectures
In this paper, we consider a model of mixed oligopoly with conjectural variations equilibrium (CVE). The agents' conjectures concern the price variations depending upon increases or decreases in their production output. We establish existence and uniqueness results for the conjectural variations equilibrium (called an exterior equilibrium) for any set of feasible conjectures. To introduce the notion of an interior equilibrium, we develop a consistency criterion for the conjectures (referred to as influence coefficients) and prove an existence theorem for the interior equilibrium (understood as a CVE with consistent conjectures). To prepare the ground for extending our results to the case of non-differentiable demand functions, we also investigate the behavior of the consistent conjectures as a function of a parameter representing the demand function's derivative with respect to the market price.
Game theory Conjectural variations equilibrium Consistent conjectures
http://www.sciencedirect.com/science/article/B6VCT-51CVFYF-2/2/9d1c5dadf848b2d52f8313b289f2ec8e
Kalashnikov, Vyacheslav V.
Bulavsky, Vladimir A.
Kalashnykova, Nataliya I.
Castillo, Felipe J.
oai:RePEc:eee:ejores:v:210:y:2011:i:3:p:474-4812011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:210:y:2011:i:3:p:474-481
article
Global optimization method for finding dense packings of equal circles in a circle
This paper considers the problem of finding the densest packing of N (N = 1, 2, ...) equal circles in a circle, perhaps the most classical packing problem. It is also a natural and challenging test system for evaluating global optimization methods. We propose a quasi-physical global optimization method that simulates two kinds of movements of N elastic disks: smooth movement driven by elastic pressures, and abrupt movement driven by strong repulsive and attractive forces. The algorithm is tested on the instances N = 1, 2, ..., 200. Using the best-known record of the radius of the container as an upper bound, we find 63 new packings better than the best-known ones reported in the literature.
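As a loose illustration of the "smooth movement" phase only (the function names and the simple descent step below are our own sketch, not the authors' algorithm), overlapping elastic disks can be pushed apart along their centre lines while disks protruding from the container are pushed back inside:

```python
import math

# Sketch: N disks of radius r inside a container of radius R. Pairwise
# overlaps and container violations define an elastic energy; one descent
# step moves each disk along the net force and reduces that energy.

def overlap_energy(centers, r, R):
    E = 0.0
    for i, (x, y) in enumerate(centers):
        E += max(0.0, math.hypot(x, y) + r - R) ** 2            # out of container
        for (x2, y2) in centers[i + 1:]:
            E += max(0.0, 2 * r - math.hypot(x - x2, y - y2)) ** 2  # pair overlap
    return E

def descent_step(centers, r, R, step=0.1):
    moved = []
    for i, (x, y) in enumerate(centers):
        fx = fy = 0.0
        for j, (x2, y2) in enumerate(centers):
            if i == j:
                continue
            d = math.hypot(x - x2, y - y2)
            pen = max(0.0, 2 * r - d)
            if pen > 0 and d > 0:
                fx += pen * (x - x2) / d   # push overlapping disks apart
                fy += pen * (y - y2) / d
        c = math.hypot(x, y)
        out = max(0.0, c + r - R)
        if out > 0 and c > 0:
            fx -= out * x / c              # push protruding disks back inside
            fy -= out * y / c
        moved.append((x + step * fx, y + step * fy))
    return moved

# Two overlapping unit disks in a large container: one step lowers the energy.
pts = [(0.0, 0.0), (1.0, 0.0)]
print(overlap_energy(pts, r=1.0, R=5.0) > overlap_energy(descent_step(pts, 1.0, 5.0), 1.0, 5.0))
```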
Packing Global optimization Heuristic Quasi-physical approach
http://www.sciencedirect.com/science/article/B6VCT-51HMX1C-5/2/28e1ab29777f8ba214aecd40a05ff4ea
Huang, Wenqi
Ye, Tao
oai:RePEc:eee:ejores:v:211:y:2011:i:1:p:15-252011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:211:y:2011:i:1:p:15-25
article
Scheduling just-in-time part supply for mixed-model assembly lines
With increasing cost competition and product variety, providing an efficient just-in-time (JIT) supply has become one of the greatest challenges in the use of mixed-model assembly line production systems. In the present paper, therefore, we propose a new approach for scheduling JIT part supply from a central storage center. Usually, materials are stored in boxes that are allotted to the consuming stations of the line by a forklift. For this real-world problem, a new model, a complexity proof and different exact and heuristic solution procedures are provided. Furthermore, a direct comparison with a simple two-bin kanban system is provided; such a system is currently applied in the real-world industrial process that motivates our research. It becomes obvious that this policy is considerably outperformed in terms of the resulting inventory and α-service levels. Moreover, at the interface between logistics and assembly operations, strategic management implications are obtained. Specifically, based on the new approach, a statistical analysis is conducted for the first time of whether widespread Level Scheduling policies, which are well known from the Toyota Production System, indeed facilitate material supply, as is frequently claimed in the literature.
Scheduling Mixed-model assembly line Just-in-time In-house logistics
http://www.sciencedirect.com/science/article/B6VCT-51D7HVG-3/2/1c9822ca73797917b90aecdb68dec7d1
Boysen, Nils
Bock, Stefan
oai:RePEc:eee:ejores:v:207:y:2010:i:1:p:70-772011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:207:y:2010:i:1:p:70-77
article
Scheduling multiple orders per job in a single machine to minimize total completion time
This paper deals with a single-machine scheduling problem with multiple orders per job (MOJ). Both lot-processing and item-processing machines are examined. Two primary decisions must be made in the proposed problem: (1) how to group the orders together, and (2) how to schedule the jobs once they are formed. To obtain the optimal solution to the scheduling problem, these two decisions should be made simultaneously. The performance measure is the total completion time of all orders. Two mixed binary integer programming models are developed to solve this problem optimally. In addition, two efficient heuristics are proposed for solving large-sized problems. Computational results demonstrate the efficiency of the models and the effectiveness of the heuristics.
Scheduling Multiple orders per job Integer programming Heuristics Semiconductor manufacturing
http://www.sciencedirect.com/science/article/B6VCT-4YR8RF0-1/2/b47982cc3c18e42726ff402db9241fa1
Mason, Scott J.
Chen, Jen-Shiang
oai:RePEc:eee:ejores:v:207:y:2010:i:2:p:878-8852011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:207:y:2010:i:2:p:878-885
article
Incentives and individual motivation in supervised work groups
This paper introduces and analyzes a model of supervised work group where subordinates decide how to exert their effort in complementary tasks while the supervisors decide incentives. Incentives may be a combination of individual and group-based ones. The optimality of incentives is analyzed when considering two different cost functions for subordinates. The two cost functions describe different individual motivations; comparing the resulting effort allocations and production optimality, we can relate them to different organizational theories. Our results provide a measure of how motivation among subordinates may affect production and incentives. Furthermore, the optimal incentives schemes are examined in terms of Adams' equity theory.
Organization theory Production Organizational behavior Incentives Individual motivation
http://www.sciencedirect.com/science/article/B6VCT-5057KSN-1/2/4e8a0ebba2e5259006f694504c91a551
Dal Forno, Arianna
Merlone, Ugo
oai:RePEc:eee:ejores:v:207:y:2010:i:1:p:78-822011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:207:y:2010:i:1:p:78-82
article
Two point one sided rendezvous
In a rendezvous search, two or more teams called seekers try to minimize the time needed to find each other. In this paper, we consider two seekers in the plane. The problem is one-sided because Seeker 1 begins at a predetermined point O, while Seeker 2 begins at one of a finite set of points x_i, starting from x_i with probability p_i. We first discuss the general situation and then consider the specific case where Seeker 2 can begin from one of two points.
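In a heavily simplified variant (assuming Seeker 2 simply waits at its starting point, which is not the paper's setting, where both seekers may move), Seeker 1's best plan is the visiting order that minimizes the expected meeting time, found here by enumeration over a small number of candidate points:

```python
from itertools import permutations

# Simplified sketch: Seeker 1 starts at the origin and visits the candidate
# points in some order; Seeker 2 waits at point i with probability probs[i].
# The expected meeting time of an order is the probability-weighted arrival
# time at each point; the best order minimizes it.

def best_search_order(points, probs, speed=1.0):
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    best = (float("inf"), None)
    for order in permutations(range(len(points))):
        t, pos, exp = 0.0, (0.0, 0.0), 0.0
        for i in order:
            t += dist(pos, points[i]) / speed
            pos = points[i]
            exp += probs[i] * t
        best = min(best, (exp, order))
    return best

# Two candidate points: visiting the likelier, nearer point first wins.
exp_time, order = best_search_order([(1, 0), (4, 0)], [0.7, 0.3])
print(order, exp_time)
```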
Rendezvous Search Discrete search
http://www.sciencedirect.com/science/article/B6VCT-4YPPR1C-2/2/8344a0903e7a9f88a209bea9de59c00d
Kikuta, Kensaku
Ruckle, William H.
oai:RePEc:eee:ejores:v:207:y:2010:i:1:p:206-2172011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:207:y:2010:i:1:p:206-217
article
Modified base-stock policies for semiconductor production system with dependent yield rates
We consider a two-stage production system in semiconductor manufacturing which produces a hierarchy of multiple grades of outputs. In the first stage, a single type of input (wafer) is used to produce multiple types of semi-finished parts with dependent yield rates; in the second stage, each type of semi-finished part can be transformed into a corresponding type of final product, or downgraded to a lower grade final product. Random customer demands are faced for the final products, and demands for different types of final products are not allowed to be substituted. The advantage of this production system is that it can prevent unhealthy ordering from customers who intentionally send out false demand signals for high grade products and revise the orders to lower grade products when the delivery time is close, a behavior observed in semiconductor manufacturing. The objective of the study is to plan the quantity of the input at the first stage and the respective downgrade quantities at the second stage so as to meet the required service level at minimum cost. Under some common assumptions, we propose a modified base-stock policy for this two-stage production system and show that the occurrence of nil excess inventory above the base-stock level follows a renewal process. We further extend the modified base-stock policy to a better policy that invokes risk pooling over the multiple grades of products. The performance of these two policies is evaluated via simulation to provide managerial insights.
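For reference, the plain base-stock rule that the paper's policy modifies can be stated in two lines: after demand is observed, order enough to restore the inventory position to the base-stock level S (the paper's modification and its risk-pooling extension are more involved):

```python
# Standard base-stock (order-up-to) rule: the replenishment order raises the
# inventory position back to the base-stock level S, never ordering a
# negative quantity when the position already exceeds S.

def base_stock_order(inventory_position, S):
    return max(0, S - inventory_position)

# Base-stock level 100: a position of 80 triggers an order of 20; a position
# above S (e.g. after an order cancellation) triggers no order.
print(base_stock_order(80, 100), base_stock_order(110, 100))  # → 20 0
```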
Semiconductor production system Modified base-stock policy Yield rates
http://www.sciencedirect.com/science/article/B6VCT-502GH61-1/2/77506ff54c032dc31088e51b1bba985e
Huang, Huei-Chuen
Song, Haiqing
oai:RePEc:eee:ejores:v:210:y:2011:i:2:p:176-1842011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:210:y:2011:i:2:p:176-184
article
Average-weight-controlled bin-oriented heuristics for the one-dimensional bin-packing problem
Bin-oriented heuristics for the one-dimensional bin-packing problem construct solutions by packing one bin at a time. Several such heuristics consider two or more subsets of items for each bin and pack the subset with the largest total weight. These heuristics sometimes generate poor solutions because of a tendency to use many small items early in the process. To address this problem, we propose a method of controlling the average weight of the items packed by bin-oriented heuristics. Constructive heuristics and an improvement heuristic based on this approach are introduced. Additionally, reduction methods for bin-oriented heuristics are presented. The results of an extensive computational study show that: (1) controlling the average weight significantly improves solutions and reduces the computation time of bin-oriented heuristics; (2) reduction methods improve solutions and processing times of some bin-oriented heuristics; and (3) the new improvement heuristic outperforms all other known complex heuristics, in terms of both average solution quality and computation time.
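A minimal sketch of the bin-oriented idea itself (a greedy baseline, not the paper's average-weight-controlled variant): bins are closed one at a time, each filled by adding the largest remaining item that still fits, so every closed bin carries a large total weight:

```python
# Greedy bin-oriented packing: sort items by decreasing weight, then fill one
# bin at a time with the largest items that still fit before opening the next
# bin. Exactly the behavior whose overuse of small items the paper's
# average-weight control is designed to curb.

def bin_oriented_pack(weights, capacity):
    items = sorted(weights, reverse=True)
    bins = []
    while items:
        load, rest = [], []
        space = capacity
        for w in items:          # scan remaining items, heaviest first
            if w <= space:
                load.append(w)
                space -= w
            else:
                rest.append(w)
        bins.append(load)
        items = rest
    return bins

print(bin_oriented_pack([7, 6, 4, 3, 2, 2], capacity=10))  # → [[7, 3], [6, 4], [2, 2]]
```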
Packing Heuristics Bin-packing Bin-oriented heuristics Reductions
http://www.sciencedirect.com/science/article/B6VCT-51D7HVG-9/2/7d380d59d4898799258f5feb25e12f9c
Fleszar, Krzysztof
Charalambous, Christoforos
oai:RePEc:eee:ejores:v:212:y:2011:i:1:p:81-882011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:212:y:2011:i:1:p:81-88
article
Service level robustness in stochastic production planning under random machine breakdowns
In this paper, we consider a multi-period, multi-product production planning problem where the production rate and the customer service level are random variables due to machine breakdowns. In order to determine robust production plans, constraints are introduced in the stochastic capacitated lot-sizing problem to ensure that a pre-specified customer service level is met with high probability. The probability of meeting a service level is evaluated by using the first passage time theory of a Wiener process to a boundary. A two-step optimization approach is proposed to solve the developed model. In the first step, the mean-value deterministic model is solved. Then, a method is proposed in the second step to improve the probability of meeting service level. The resulting approach has the advantage of not being a scenario-based one. It is shown that substantial improvements in service level robustness are often possible with minimal increases in expected cost.
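The first-passage quantity used here has a standard closed form for a Brownian motion with drift: the probability that the process reaches an upper boundary b > 0 within horizon T (how the paper maps service-level violations onto this boundary crossing is specific to its model):

```python
import math

# First-passage probability of a Brownian motion with drift mu and volatility
# sigma to the boundary b > 0 within [0, T]:
#   P(tau_b <= T) = Phi((mu*T - b)/(sigma*sqrt(T)))
#                 + exp(2*mu*b/sigma^2) * Phi((-b - mu*T)/(sigma*sqrt(T)))

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def first_passage_prob(b, mu, sigma, T):
    s = sigma * math.sqrt(T)
    return (norm_cdf((mu * T - b) / s)
            + math.exp(2 * mu * b / sigma ** 2) * norm_cdf((-b - mu * T) / s))

# With zero drift this reduces to the reflection-principle value 2*P(W_T > b).
p = first_passage_prob(b=1.0, mu=0.0, sigma=1.0, T=1.0)
print(round(p, 4))  # → 0.3173
```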
Robust production planning Random failures Service level First passage time Brownian motion
http://www.sciencedirect.com/science/article/B6VCT-521M6BM-4/2/da907e14d3a70a997500abfd661cfd6e
Nourelfath, Mustapha
oai:RePEc:eee:ejores:v:212:y:2011:i:1:p:54-682011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:212:y:2011:i:1:p:54-68
article
A decision model for berth allocation under uncertainty
This paper studies the berth allocation problem (BAP) under uncertainty in the arrival times or operation times of vessels. It not only concerns the proactive strategy of developing an initial schedule that incorporates a degree of anticipation of uncertainty during the schedule's execution, but also studies the reactive recovery strategy, which adjusts the initial schedule to handle realized scenarios with minimum penalty cost for deviating from the initial schedule. A two-stage decision model is developed for the BAP under uncertainty. Moreover, a meta-heuristic approach is proposed for solving the problem in large-scale realistic environments. Numerical experiments are conducted to validate the effectiveness and efficiency of the proposed method.
Scheduling Port operation Berth allocation Container terminals Meta-heuristic
http://www.sciencedirect.com/science/article/B6VCT-520M1VJ-2/2/e87ec44dcb03513c520a27961b164191
Zhen, Lu
Lee, Loo Hay
Chew, Ek Peng
oai:RePEc:eee:ejores:v:210:y:2011:i:2:p:379-3892011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:210:y:2011:i:2:p:379-389
article
Strategic multi-store opening under financial constraint
This paper analyzes strategic store openings in a situation in which firms can open multiple stores depending on their financial constraints. Specifically, given any upper limit on the number of stores that two potentially symmetric firms can open, they sequentially determine the number of store openings, including their locations, to maximize their profits. Our analysis in a microeconomic framework shows that the equilibrium strategy can be wholly classified into the following two opposite strategies, according to the severity of the financial constraints involved. When firms can afford to invest significant amounts of money in the market, the leader chooses the "segmentation strategy," in which part of the market can be monopolized by opening a chain of multiple stores and deterring the follower's entry. In contrast, when the leader faces a financial constraint so severe that it can only monopolize less than half of the market, the leader chooses the "minimum differentiation strategy," in which firms open each of their stores at exactly the same point as the rival's. Under this strategy, the leader necessarily captures just half of the market. Furthermore, we show that, regardless of potential symmetry between firms, both first- and second-mover advantages in terms of profit can occur in equilibrium.
Marketing Game theory Chain store Entry deterrence Differentiation Hotelling model
http://www.sciencedirect.com/science/article/B6VCT-50YK83C-3/2/bc35d2cc24c5e6ef2193f8d1cdaf70ee
Iida, Tetsuya
Matsubayashi, Nobuo
oai:RePEc:eee:ejores:v:211:y:2011:i:3:p:586-5932011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:211:y:2011:i:3:p:586-593
article
Qualitative factors in data envelopment analysis: A fuzzy number approach
Qualitative factors are difficult to manipulate mathematically when calculating efficiency in data envelopment analysis (DEA). The existing methods of representing qualitative data by ordinal variables and assigning values to obtain efficiency measures only superficially reflect the precedence relationship of the ordinal data. This paper treats the qualitative data as fuzzy numbers, and uses the DEA multipliers associated with the decision making units (DMUs) being evaluated to construct the membership functions. Based on Zadeh's extension principle, a pair of two-level mathematical programs is formulated to calculate the α-cuts of the fuzzy efficiencies. Fuzzy efficiencies contain more information for making better decisions. A performance evaluation of the chemistry departments of 52 UK universities is used for illustration. Since the membership functions are constructed from the opinions of the DMUs being evaluated, the results are more representative and persuasive.
Data envelopment analysis Two-level mathematical programming Fuzzy sets Assurance region Qualitative data Ordinal data
http://www.sciencedirect.com/science/article/B6VCT-51NG4HR-2/2/fd4c30ebdaec10dc5f4fd0e5aff254ae
Kao, Chiang
Lin, Pei-Huang
oai:RePEc:eee:ejores:v:211:y:2011:i:2:p:232-2402011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:211:y:2011:i:2:p:232-240
article
Stochastic set packing problem
In this paper, a stochastic version of the set packing problem (SPP) is studied via scenario analysis. We consider a one-stage recourse approach to deal with the uncertainty in the coefficients. It consists of maximizing, in the stochastic SPP, a composite function of the expected value minus the weighted risk of obtaining a scenario whose objective function value is worse than a given threshold. The splitting-variable representation is decomposed by dualizing the nonanticipativity constraints that link the deterministic SPP with a 0-1 knapsack problem for each scenario under consideration. As a result, a (structured) larger pure 0-1 model is created. We present several procedures for obtaining good feasible solutions, as well as a preprocessing approach for fixing variables. The Lagrange multipliers are updated using the Volume Algorithm. Computational experience is reported for a broad variety of instances, showing that the new approach usually outperforms a state-of-the-art optimization engine, producing a comparable optimality gap in computing times that are smaller by several orders of magnitude.
Assignment Set packing Stochastic 0-1 programming Simple recourse Lagrangian decomposition Volume Algorithm
http://www.sciencedirect.com/science/article/B6VCT-51JPWSB-2/2/45184ab3b0d40f9870af1f735c5599d0
Escudero, Laureano F.
Landete, Mercedes
Rodríguez-Chía, Antonio M.
oai:RePEc:eee:ejores:v:211:y:2011:i:3:p:650-6572011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:211:y:2011:i:3:p:650-657
article
A semi-Markov model with holdout transshipment policy and phase-type exponential lead time
In this paper, a semi-Markov decision model of a two-location inventory system with holdout transshipment policy is reviewed under the assumption of phase-type exponential replenishment lead time rather than exponential lead time. The phase-type exponential lead time more closely approximates fixed lead time as the number of phases increases. Unlike past research in this area which has concentrated on the simple transshipment policies of complete pooling or no pooling, the research presented in this paper endeavors to develop an understanding of a more general class of transshipment policy. In addition, we propose an effective method to approximate the dynamic holdout transshipment policy.
Inventory management Lateral transshipment policy Stochastic modeling and dynamic programming
http://www.sciencedirect.com/science/article/B6VCT-51XR3JM-1/2/56016755aa0c8bc23571f2d44cefe9da
Zhang, Jiaqi
Archibald, Thomas W.
oai:RePEc:eee:ejores:v:207:y:2010:i:2:p:736-7492011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:207:y:2010:i:2:p:736-749
article
Joint ground and air emergency medical services coverage models: A greedy heuristic solution approach
Aeromedical and ground ambulance services often team up in responding to trauma crashes, especially when the emergency helicopter is unable to land at the crash scene. We propose location-coverage models, and a greedy heuristic for their solution, to simultaneously locate ground and air ambulances and landing zones (transfer points). We provide a coverage definition based on both response time and total service time, and consider three coverage options: only ground emergency medical services (EMS) coverage, only air EMS coverage, or joint coverage of ground and air EMS in which the patient is transferred from an ambulance into an emergency helicopter at a transfer point. To analyze this complex coverage situation we develop two sets of models, which are variations of the Location Set Covering Problem (LSCP) and the Maximal Covering Location Problem (MCLP). These models address uncertainty in the spatial distribution of motor vehicle crash locations by providing coverage to a given set of both crash nodes and paths. The models also account for the unavailability of ground ambulances by drawing upon concepts from backup coverage models. We illustrate our results on a case study that uses crash data from the state of New Mexico. The case study shows that crash node and path coverage percentages decrease when ground ambulances are utilized only within their own jurisdiction.
Emergency services Trauma crashes Aeromedical Location-coverage models Backup coverage
http://www.sciencedirect.com/science/article/B6VCT-507BHNT-1/2/18e4bab846858a92a488a9b5ce714d8d
Erdemir, Elif Tokar
Batta, Rajan
Rogerson, Peter A.
Blatt, Alan
Flanigan, Marie
oai:RePEc:eee:ejores:v:212:y:2011:i:1:p:179-1892011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:212:y:2011:i:1:p:179-189
article
An attribute weight based feedback model for multiple attributive group decision analysis problems with group consensus requirements in evidential reasoning context
In an evidential reasoning context, a group consensus (GC) based approach can model multiple attributive group decision analysis problems with GC requirements. The predefined GC is reached through several rounds of group analysis and discussion (GAD) in the approach. However, GAD without guidance may not be the most appropriate way to reach the predefined GC, because several rounds of GAD consume a great deal of every expert's time and yet do not help the experts focus on the assessments that primarily damage the GC. In this paper, an attribute weight based feedback model is constructed to effectively identify the assessments that primarily damage the GC and to accelerate convergence to the GC. Considering as important those attributes whose weights are at least equal to the mean weight of all attributes, the feedback model constructs identification rules to identify the assessments damaging the GC for the experts to revise. In addition, a suggestion rule is introduced to generate appropriate recommendations for the experts to revise their identified assessments. The identification rules are constructed at three levels: the attribute, alternative and global levels. The feedback model is applied to an engineering project management software selection problem to demonstrate its implementation, its validity and applicability, and its advantages over the GC based approach.
Decision analysis Multiple attributive group decision analysis Evidential reasoning approach Group consensus Attribute weight Feedback model
http://www.sciencedirect.com/science/article/B6VCT-521NWB4-2/2/4f3c17de147fe5ce7b4a231ba2bb2c34
Fu, Chao
Yang, Shanlin
oai:RePEc:eee:ejores:v:211:y:2011:i:1:p:160-1692011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:211:y:2011:i:1:p:160-169
article
Hyper-heuristic approaches for the response time variability problem
We propose two classes for the implementation of hyper-heuristic algorithms. The first is based on constructive heuristics, whereas the second uses improvement methods. Within the latter class, a general framework is designed for the use of local search procedures and metaheuristics as low-level heuristics. A dynamic scheme to guide the use of these approaches is also devised. These ideas are tested on an NP-hard scheduling problem known as the response time variability problem (RTVP). An intensive computational experiment shows, especially in the second class where the new best results are found, the effectiveness of the proposed hyper-heuristics.
Hyper-heuristics Metaheuristics Response time variability Scheduling Fair sequences
http://www.sciencedirect.com/science/article/B6VCT-51P9WY7-1/2/948f32e41a1e4ca57bebe6c457d287e2
García-Villoria, Alberto
Salhi, Said
Corominas, Albert
Pastor, Rafael
oai:RePEc:eee:ejores:v:207:y:2010:i:2:p:711-7242011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:207:y:2010:i:2:p:711-724
article
On range and response: Dimensions of process flexibility
There are two dimensions to process flexibility: range versus response. Range is the extent to which a system can adapt, while response is the rate at which the system can adapt. Although both dimensions are important, the existing literature does not analytically examine the response dimension vis-a-vis the range dimension. In this paper, we model the response dimension in terms of uniformity of production cost. We distinguish between primary and secondary production, where the latter is more expensive. We examine how the range and response dimensions interact to affect the performance of the process-flexible structure. We provide analytical lower bounds to show that, under all scenarios on response flexibility, a moderate form of range flexibility (via the chaining structure) still accrues non-negligible benefits relative to the fully flexible structure (the bound is 29.29% when demand is normally distributed). We show further that, given limited resources, upgrading the system's response dimension outperforms upgrading its range dimension in most cases, confirming what most managers believe intuitively. We also observe that improving system response can provide even more benefits when coupled with initiatives to reduce demand variability. This is in direct contrast with range flexibility, which is more valuable when the system has higher variability.
Probability: renewal processes Production: process flexibility Facility planning: design
http://www.sciencedirect.com/science/article/B6VCT-5070DH7-1/2/220907c7ebd4e5854b2e5eeee6551e3e
Chou, Mabel C.
Chua, Geoffrey A.
Teo, Chung-Piaw
oai:RePEc:eee:ejores:v:212:y:2011:i:1:p:22-322011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:212:y:2011:i:1:p:22-32
article
Evolutionary search for difficult problem instances to support the design of job shop dispatching rules
Dispatching rules are simple scheduling heuristics that are widely applied in industrial practice. Their popularity can be attributed to their ability to flexibly react to shop floor disruptions that are prevalent in many real-world manufacturing environments. However, it is a challenging and time-consuming task to design local, decentralised dispatching rules that result in a good global performance of a complex shop. An evolutionary algorithm is developed to generate job shop problem instances for which an examined dispatching rule fails to achieve a good solution due to a single suboptimal decision. These instances can be easily analysed to reveal limitations of that rule which helps with the design of better rules. The method is applied to a job shop problem from the literature, resulting in new best dispatching rules for the mean flow time measure.
Evolutionary computations Heuristics Scheduling Metaheuristics Distributed decision making
http://www.sciencedirect.com/science/article/B6VCT-522SHHJ-1/2/c02e3e96ef379f395390180aa942beee
Branke, Juergen
Pickardt, Christoph W.
oai:RePEc:eee:ejores:v:208:y:2011:i:2:p:119-1302011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:208:y:2011:i:2:p:119-130
article
Quality improvement vs. advertising support: Which strategy works better for a manufacturer?
We consider a marketing channel with a single manufacturer and a single retailer, where both advertising and quality improvement contribute to the build-up of goodwill. In a non-coop scenario, the retailer controls the advertising efforts while the manufacturer controls the quality improvements and the wholesale price. Although improving quality contributes positively to goodwill, it also increases the production cost, thereby reducing the manufacturer's profit. In a coop scenario, the manufacturer supports the retailer's advertising while decreasing his investments in quality. We investigate the conditions under which a coop program is beneficial when such a trade-off occurs. Our results demonstrate that only when advertising contributes significantly to goodwill does the manufacturer have an incentive to cooperate, in which case a coop program turns out to be Pareto-improving. Conversely, the retailer is always better off with a coop program. Moreover, the channel is both operational- and marketing-driven when quality effectiveness is high, independent of advertising effectiveness, or when both quality and advertising effectiveness are large. In all other cases, the channel is marketing-driven.
Marketing channel Differential game Advertising Quality improvement Support program Feedback equilibrium
http://www.sciencedirect.com/science/article/B6VCT-50SGPD7-2/2/e12b0efe96ba83da264969c70a48232d
De Giovanni, Pietro
oai:RePEc:eee:ejores:v:210:y:2011:i:3:p:594-6052011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:210:y:2011:i:3:p:594-605
article
Queueing system with a constant retrial rate, non-reliable server and threshold-based recovery
In this paper, we examine a Markovian queueing system composed of a retrial queue with a constant retrial rate and a non-reliable server. Upon arrival a customer occupies the server if it is idle; otherwise it joins the retrial queue. The customer at the head of the retrial queue is allowed to retry for service. While the server is busy, it is subject to breakdowns or failures that occur according to a Poisson process. In the failed state the server can be repaired at a repair facility with exponential repair time under a threshold policy: the repair starts when the number of customers in the system reaches some prespecified threshold level q ≥ 1. We perform a steady-state analysis of the corresponding continuous-time Markov chain, derive the mean performance characteristics and the waiting time distribution, and calculate the optimal threshold level that minimizes the long-run average losses for the given cost structure.
Constant retrial rate Threshold-based recovery Performance analysis Waiting time distribution Long-run average cost
http://www.sciencedirect.com/science/article/B6VCT-5161P9Y-2/2/9609171cf57b80f24b5c8241a631bd14
Efrosinin, Dmitry
Winkler, Anastasia
oai:RePEc:eee:ejores:v:207:y:2010:i:1:p:83-912011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:207:y:2010:i:1:p:83-91
article
Multicriteria models for planning power-networking events
In this paper, we develop effective methods for solving the power-networking problem encountered by the Tulsa Metro Chamber. The primary objective is the maximization of unique contacts made in meetings with multiple rotations of participants. Mixed-integer and constraint-programming models are developed to optimize small- to medium-scale problems, and a heuristic method is developed for large-scale problems representative of the Chamber's application. Tight bounds on the dual objective are presented. The constraint-programming model developed as phase one of the heuristic yields many new best-known solutions to the related social-golfer problem. The solutions generated for the power-networking problem enable the Chamber of Commerce to plan meeting assignments much more effectively.
Combinatorial optimization Constraint satisfaction Integer programming Heuristics Nonprofit commercial organizations
http://www.sciencedirect.com/science/article/B6VCT-4YS9RND-1/2/235ee1c9625727a447d58331abb87121
Russell, Robert A.
Urban, Timothy L.
oai:RePEc:eee:ejores:v:207:y:2010:i:2:p:807-8162011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:207:y:2010:i:2:p:807-816
article
Adaptive neural network model for time-series forecasting
In this study, a novel adaptive neural network (ADNN) with adaptive metrics of inputs and a new mechanism for admixture of outputs is proposed for time-series prediction. The adaptive metrics of inputs can address the problems of amplitude changes and trend determination, and avoid over-fitting of networks. The new mechanism for admixture of outputs adjusts forecasting results by the relative error, making them more accurate. The proposed ADNN method can predict periodic time series with a complicated structure. The experimental results show that the proposed model outperforms the auto-regression (AR), artificial neural network (ANN), and adaptive k-nearest neighbours (AKN) models. The ADNN model is shown to benefit from the merits of the ANN and the AKN through its novel structure, with high robustness for both chaotic and real time-series predictions.
Time-series Forecasting Adaptive metrics Neural networks
http://www.sciencedirect.com/science/article/B6VCT-506W6NT-1/2/f0b5a41762a03e3e7053129073cf15f1
Wong, W.K.
Xia, Min
Chu, W.C.
oai:RePEc:eee:ejores:v:211:y:2011:i:1:p:76-892011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:211:y:2011:i:1:p:76-89
article
Measurement of Returns to Scale and Damages to Scale for DEA-based operational and environmental assessment: How to manage desirable (good) and undesirable (bad) outputs?
Environmental assessment is increasingly important in preventing various types of pollution. Data Envelopment Analysis (DEA) has long been used as an operational performance measure, but its use for environmental assessment has been insufficiently explored. This study explores a new use of DEA for environmental assessment in which outputs are classified into desirable (good) and undesirable (bad) outputs. Such an output separation is important in DEA-based environmental assessment. This study extends the use of DEA to the measurement of both Returns to Scale (RTS) for desirable outputs and Damages to Scale (DTS) for undesirable outputs. A Range-Adjusted Measure (RAM) is used as the DEA model for this study because the non-radial model can easily combine the two types of outputs in a unified treatment. The mathematical features of the RAM-based RTS/DTS measurement are first discussed separately for operational and environmental performance. This study then combines the two performance measures into a unified measure, from which the RAM-based RTS/DTS is mathematically explored for operational and environmental performance.
Environmental assessment DEA Returns to Scale Damages to Scale
http://www.sciencedirect.com/science/article/B6VCT-51HMX1C-1/2/4da85cfde3b3cdffe1a7a3d1d3a6eeb8
Sueyoshi, Toshiyuki
Goto, Mika
oai:RePEc:eee:ejores:v:207:y:2010:i:1:p:249-2572011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:207:y:2010:i:1:p:249-257
article
Methodological analysis of supply chains management applications
Formal modelling may be used to express management operational plans to achieve the desired normative objectives of firms. The plans so formulated should be demonstrably optimal with regard to certain specific objectives assumed by top management and ought to provide accurate results, when enacted, within a given tolerance at a prespecified probability. Modelling of Decision Support Systems is based on various alternative methodologies (managerial-situational, interpretative, or formal-deductive), which affect the results and precision obtainable. The third approach requires dynamical nonlinear stochastic modelling to determine precise Supply Chain Management (SCM) plans without incurring the limitations that may characterize the former approaches. The aim of this paper is to examine the different management methodologies in order to determine the most appropriate implementation for accurate SCM plans. Two well-known SCM applications, the bullwhip effect and collaborative planning, together with extensions, are examined under the different methodologies for clarity and to verify their limitations.
Decision support systems Modelling systems and languages System dynamics Supply chain management Set-valued mappings
http://www.sciencedirect.com/science/article/B6VCT-502GH61-4/2/6b73c4fb22ee2856bd9120191d3065a0
Di Giacomo, Laura
Patrizi, Giacomo
oai:RePEc:eee:ejores:v:210:y:2011:i:2:p:287-3002011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:210:y:2011:i:2:p:287-300
article
A hybrid hypercube - Genetic algorithm approach for deploying many emergency response mobile units in an urban network
Emergency response services are critical for modern societies. This paper presents a model and a heuristic solution for the optimal deployment of many emergency response units in an urban transportation network, with an application to transit mobile repair units (TMRU) in the city of Athens, Greece. The model considers the stochastic nature of such services, recognizing that a unit may already be engaged when an incident occurs. The proposed model integrates a queuing model (the hypercube model), a location model and a metaheuristic optimization algorithm (a genetic algorithm) to obtain appropriate unit locations in a two-step approach. In the first step, the service area is partitioned into sub-areas (called superdistricts) while, in parallel, the necessary number of units is determined for each superdistrict. An approximate solution to the symmetric hypercube model with spatially homogeneous demand is developed. A genetic algorithm is combined with the approximate hypercube model to obtain the best superdistricts and associated unit numbers. With both of the above requirements defined in step one, the second step proceeds to the optimal deployment of units within each superdistrict.
Emergency response Hypercube Spatial queues Genetic algorithms
http://www.sciencedirect.com/science/article/B6VCT-5103WXH-1/2/3d0b3a53206513bfb66cd89c03824e71
Geroliminis, Nikolas
Kepaptsoglou, Konstantinos
Karlaftis, Matthew G.
oai:RePEc:eee:ejores:v:208:y:2011:i:2:p:131-1412011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:208:y:2011:i:2:p:131-141
article
A foraging problem: Sit-and-wait versus active predation
The literature on foraging shows that some predators use a combination of ambush and active search to locate a prey. Let us suppose that a prey must go every day to some determined places to feed, and to another place, 0, to drink. A predator can stay at zone 0 waiting for the prey (sit-and-wait strategy) or it can move between the different places where the prey will go to eat (search strategy). If predator and prey meet each other in the same place, prey will be caught with a probability depending on the place. We study this problem in different situations, modelling them as two-person zero-sum games. We solve them in closed form, giving optimal strategies for prey and for predator and the value of the games.
Game theory Two-person games Search games Search problems Predator-prey interactions
http://www.sciencedirect.com/science/article/B6VCT-50R22YG-1/2/6faf18abda256f3dcb44cfe92de81d78
Zoroa, N.
Fernández-Sáez, M.J.
Zoroa, P.
oai:RePEc:eee:ejores:v:212:y:2011:i:1:p:190-1982011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:212:y:2011:i:1:p:190-198
article
Aggregation of economic growth rates and of its sources
In this paper we consider the question of measuring aggregate economic growth and its sources. We derive a theoretically justified solution for aggregating (across firms, industries, countries, etc.) growth rates and their sources within the framework of Solow's (1957) growth accounting method. The resulting aggregation scheme turns out to be quite intuitive and is, in fact, one that is sometimes used in practice, though without theoretical justification; the main value of our work is that our formal derivations show under what conditions this scheme has economic theory justification. We also provide a small empirical illustration of our method on a real data set and show how different the conclusions can be depending on the aggregation scheme used.
Growth accounting Productivity Aggregation
http://www.sciencedirect.com/science/article/B6VCT-51WV03R-5/2/be6e9c0f1fa975e3645933e3fd884e2c
Zelenyuk, Valentin
oai:RePEc:eee:ejores:v:207:y:2010:i:2:p:886-8962011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:207:y:2010:i:2:p:886-896
article
Evidence-based modelling of strategic fit: An introduction to RCaRBS
This paper presents an important development of a novel non-parametric object classification technique, CaRBS (Classification and Ranking Belief Simplex), to enable regression-type analyses. Termed RCaRBS, it is, like CaRBS, an evidence-based technique whose mathematical operations are based on the Dempster-Shafer theory of evidence. Its exposition is demonstrated here by modelling the strategic fit of a set of public organizations. In addition to considering the predictive fit of a series of models, graphical exploration of the contribution of individual variables in the derived models is also undertaken with RCaRBS. Comparative analyses, including fivefold cross-validation, are carried out using multiple regression and neural network models. The findings highlight that RCaRBS achieves parity of test-set predictive fit with regression and better fit than neural networks. The RCaRBS technique also enables researchers to explore non-linear relationships (contributions) between variables in greater detail than either regression or neural network models.
Decision analysis Evidence theory Neural networks Strategic fit Trigonometric differential evolution
http://www.sciencedirect.com/science/article/B6VCT-506J3NF-2/2/4d0e4a3068d85c637a372016c3f28c0d
Beynon, Malcolm J.
Andrews, Rhys
Boyne, George A.
oai:RePEc:eee:ejores:v:207:y:2010:i:1:p:508-5132011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:207:y:2010:i:1:p:508-513
article
Should you stop investing in a sinking fund when it is sinking?
Many people invest regularly in sinking funds that track stock market indices. When stock markets themselves sink significantly, as in the current credit crunch, investors face a decision as to whether they should continue paying into a falling fund, or switch payments to a risk-free deposit account until the market recovers. Most financial advice is to keep investing, on the grounds that as the unit price falls more units can be purchased and that this is ultimately beneficial (dollar-cost averaging, DCA). However, most academic studies show that DCA is sub-optimal, at least relative to a lump-sum strategy. In this paper we consider a specific, tax-free fund, the Individual Savings Account (ISA). We demonstrate, both analytically and numerically, that under perfect information a stop-and-restart policy can beat DCA. From these results we test some heuristics that could be used by an everyday investor under real-world conditions of uncertainty and volatility.
Investment analysis Dollar-cost averaging Index-linked funds Stop and restart
http://www.sciencedirect.com/science/article/B6VCT-4YVJ3YX-3/2/4c315fcf0d8173ec5f7e8443c6b8b9fd
Mingers, John
Parker, Kim T.
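The perfect-information argument in the abstract above can be sketched numerically: if each period's payment can be parked in a zero-interest deposit and invested at the lowest future unit price, the stop-and-restart policy accumulates at least as many units as DCA. The function names and the unit-price path below are invented for illustration and are not the authors' model.

```python
# Minimal sketch (hypothetical data): compare dollar-cost averaging with a
# perfect-foresight stop-and-restart policy on an invented unit-price path.

def dca_units(prices, payment=100.0):
    """Dollar-cost averaging: invest `payment` each period at that period's price."""
    return sum(payment / p for p in prices)

def perfect_stop_restart_units(prices, payment=100.0):
    """Perfect-information stop-and-restart: hold each period's payment as cash
    (zero interest) and invest it at the lowest price from that period onward."""
    return sum(payment / min(prices[t:]) for t in range(len(prices)))

# A fund that sinks and then recovers.
path = [10.0, 9.0, 7.0, 6.0, 8.0, 10.0]
print(f"DCA units:              {dca_units(path):.2f}")
print(f"Stop-and-restart units: {perfect_stop_restart_units(path):.2f}")
```

Since `min(prices[t:]) <= prices[t]` for every period, the stop-and-restart total is never below the DCA total under perfect foresight; the paper's contribution is the analytical treatment and the heuristics that approximate this under uncertainty.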
oai:RePEc:eee:ejores:v:211:y:2011:i:3:p:515-5192011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:211:y:2011:i:3:p:515-519
article
Decomposition of technical and scale efficiencies in two-stage production systems
Conventional data envelopment analysis (DEA) models are used to measure the technical and scale efficiencies of a system when it is considered as a whole unit. This paper extends the efficiency measurement to two-stage systems where each stage has one process and all the outputs from the first process become the inputs of the second. An input-oriented DEA model for the first process is developed to separate the process efficiency into the input technical and scale efficiencies, and an output-oriented model is developed for the second process to separate the process efficiency into the output technical and scale efficiencies. Combining the two models, the system efficiency is expressed as the product of the overall technical and scale efficiencies, where the overall technical and scale efficiencies are the products of the corresponding efficiencies of the two processes, respectively. The detailed decomposition allows the sources of inefficiency to be identified.
Data envelopment analysis Efficiency Two-stage system Decomposition
http://www.sciencedirect.com/science/article/B6VCT-51X1VRN-4/2/0dcd80984563a1f03a02a89ac9c0ad98
Kao, Chiang
Hwang, Shiuh-Nan
oai:RePEc:eee:ejores:v:209:y:2011:i:3:p:215-2182011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:209:y:2011:i:3:p:215-218
article
An efficient implementation of the robust tabu search heuristic for sparse quadratic assignment problems
We propose and develop an efficient implementation of the robust tabu search heuristic for sparse quadratic assignment problems. The traditional implementation of the heuristic, applicable to all quadratic assignment problems, is of O(N^2) complexity per iteration for problems of size N. Using multiple priority queues to determine the next best move instead of scanning all possible moves, and using adjacency lists to minimize the operations needed to determine the cost of moves, we reduce the asymptotic (N → ∞) complexity per iteration to O(N log N). For practically sized problems, the complexity is O(N).
Combinatorial optimization Computing science Heuristics Tabu search
http://www.sciencedirect.com/science/article/B6VCT-511G1R7-1/2/a19a5572a1ada22e3523b88e780c03d3
Paul, G.
oai:RePEc:eee:ejores:v:207:y:2010:i:1:p:232-2372011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:207:y:2010:i:1:p:232-237
article
A coordinating contract for transshipment in a two-company supply chain
We study a supply chain with two independent companies producing an identical product and cooperating through transshipment. Previous studies of this chain show that only under certain conditions can linear transshipment prices be found that induce the companies to choose the first-best production quantities. Moreover, even if such transshipment prices do exist, they result in a unique division of total expected profit and thus cannot accommodate arbitrary divisions of the profit. Using the Generalized Nash Bargaining Solution, we derive coordinating transshipment prices that always give rise to a coordinating contract for the chain. This contract relies on an implicit pricing mechanism.
Group decisions and negotiations Transshipment Contracts Coordination Game theory
http://www.sciencedirect.com/science/article/B6VCT-50106TX-2/2/5053226abacb17f19947dee9e6eb8c0f
Hezarkhani, Behzad
Kubiak, Wieslaw
oai:RePEc:eee:ejores:v:211:y:2011:i:1:p:213-2152011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:211:y:2011:i:1:p:213-215
article
A note on the classification of consumer demand functions with respect to retailer pass-through rates
Tyagi (1999) derived conditions on the curvature of consumer demand functions which make it optimal for a profit-maximizing retailer to pass through more (less) than 100% of a manufacturer trade deal amount. Since the pass-through is customarily evaluated at the optimal wholesale price, additional sufficient conditions are needed to ensure the existence of an optimal wholesale price. The purpose of this note is to derive the additional conditions on the curvature of the consumer demand functions required for the existence of a greater (less) than 100% retailer pass-through rate at the optimal wholesale price.
Pass-through Channels Demand functions
http://www.sciencedirect.com/science/article/B6VCT-51TYF2C-3/2/91ca673ffb466b0c80d008993968728f
Kyparisis, George J.
Koulamas, Christos
oai:RePEc:eee:ejores:v:209:y:2011:i:1:p:1-102011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:209:y:2011:i:1:p:1-10
article
The orienteering problem: A survey
During the last decade, a number of challenging applications in logistics, tourism and other fields were modelled as orienteering problems (OP). In the orienteering problem, a set of vertices is given, each with a score. The goal is to determine a path, limited in length, that visits some vertices and maximises the sum of the collected scores. In this paper, the literature about the orienteering problem and its applications is reviewed. The OP is formally described and many relevant variants are presented. All published exact solution approaches and (meta) heuristics are discussed and compared. Interesting open research questions concerning the OP conclude this paper.
Combinatorial optimisation Orienteering problem Survey
http://www.sciencedirect.com/science/article/B6VCT-4YRXD2K-2/2/660e009d5be5798591f8ed65d80fc2a7
Vansteenwegen, Pieter
Souffriau, Wouter
Oudheusden, Dirk Van
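The path-with-scores structure described in the abstract above lends itself to a simple constructive heuristic: repeatedly append the unvisited vertex with the best score per unit of extra path length while the length budget holds. This is only an illustrative sketch; the instance data are invented, and the surveyed literature contains far more refined exact methods and metaheuristics.

```python
# Greedy construction sketch for the orienteering problem (hypothetical instance).
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def greedy_op(coords, scores, start, end, budget):
    """Build a start-to-end path, greedily adding the vertex with the best
    score-to-detour ratio while the total length (including the final leg
    to `end`) stays within `budget`."""
    path, travelled, cur = [start], 0.0, start
    remaining = set(coords) - {start, end}
    while True:
        best, best_ratio = None, -1.0
        for v in remaining:
            leg_in = dist(coords[cur], coords[v])
            leg_out = dist(coords[v], coords[end])
            if travelled + leg_in + leg_out > budget:
                continue  # visiting v would break the length budget
            detour = leg_in + leg_out - dist(coords[cur], coords[end])
            ratio = scores[v] / (detour + 1e-9)  # guard zero-detour vertices
            if ratio > best_ratio:
                best, best_ratio = v, ratio
        if best is None:
            break
        travelled += dist(coords[cur], coords[best])
        cur = best
        path.append(best)
        remaining.remove(best)
    path.append(end)
    return path, sum(scores[v] for v in path)

# Invented 5-vertex instance: start at vertex 0, end at vertex 4.
coords = {0: (0, 0), 1: (1, 1), 2: (2, 0), 3: (5, 5), 4: (4, 0)}
scores = {0: 0, 1: 10, 2: 8, 3: 50, 4: 6}
route, total_score = greedy_op(coords, scores, start=0, end=4, budget=10.0)
print(route, total_score)
```

On this instance the high-score vertex 3 is skipped because any route through it exceeds the budget, which illustrates the core tension of the OP: score maximization against a hard length constraint.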
oai:RePEc:eee:ejores:v:207:y:2010:i:2:p:644-6552011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:207:y:2010:i:2:p:644-655
article
MIP-based decomposition strategies for large-scale scheduling problems in multiproduct multistage batch plants: A benchmark scheduling problem of the pharmaceutical industry
An efficient systematic iterative solution strategy for solving real-world scheduling problems in multiproduct multistage batch plants is presented. Since the proposed method has a mathematical model at its core, two alternative MIP scheduling formulations are suggested. The MIP-based solution strategy consists of a constructive step, wherein a feasible initial solution is rapidly generated by following an iterative insertion procedure, and an improvement step, wherein the initial solution is systematically enhanced by iteratively applying several rescheduling techniques based on the mathematical model. A salient feature of our approach is that the scheduler can keep the number of decisions at a reasonable level, thus appropriately reducing the search space. This usually results in manageable model sizes and often guarantees more stable and predictable optimization model behavior. The performance of the proposed strategy is tested on several complicated instances of a multiproduct multistage pharmaceutical scheduling problem. On average, high-quality solutions are reported with relatively low computational effort. The authors encourage other researchers to adopt this large-scale pharmaceutical scheduling problem as a challenging benchmark for testing their own solution techniques.
Scheduling Large scale optimization Mixed integer programming Decomposition strategy Pharmaceutical industry
http://www.sciencedirect.com/science/article/B6VCT-509XPT7-1/2/941f9ca98ecdb379756df832b0d89ecc
Kopanos, Georgios M.
Méndez, Carlos A.
Puigjaner, Luis
oai:RePEc:eee:ejores:v:207:y:2010:i:1:p:121-1302011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:207:y:2010:i:1:p:121-130
article
The Generalized-Trend-Diffusion modeling algorithm for small data sets in the early stages of manufacturing systems
Statistical theories are not expected to yield significant conclusions when applied to very small data sets, and knowledge derived from the limited data gathered in the early stages is considered too fragile for long-term production decisions. Unfortunately, such analysis is necessary in competitive industry and business environments. Our previous research has aimed at learning from small data sets for scheduling flexible manufacturing systems, and this article focuses on the development of an incremental learning procedure for small sequential data sets. The main consideration concentrates on two properties of the data: the data size is very small and the data are time-dependent. For this reason, we propose an extended algorithm named the Generalized-Trend-Diffusion (GTD) method, based on fuzzy theories, which develops a unique backward tracking process for exploring predictive information through the strategy of shadow data generation. The extra information extracted from the shadow data has proven useful in accelerating the learning task and dynamically correcting the derived knowledge in a concurrent fashion.
Back-propagation neural networks Sequential data Small data set learning Time series
http://www.sciencedirect.com/science/article/B6VCT-4YM7K8V-4/2/cd0b63909bf8c38a0d33c7f0f9db42e1
Lin, Yao-San
Li, Der-Chiang
oai:RePEc:eee:ejores:v:209:y:2011:i:2:p:156-1652011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:209:y:2011:i:2:p:156-165
article
Prepositioning supplies in preparation for disasters
In this paper, we examine the decision of where to preposition supplies in preparation for a disaster, such as a hurricane or terrorist attack, and how much to preposition at a location. Locating supplies closer to the disaster allows for faster delivery of supplies after the disaster. As a result of being closer, though, the supplies may be in a risky location if the disaster occurs. Considering these risks, we derive equations for determining the optimal stocking quantity and the total expected costs associated with delivering to a demand point from a supply point. We provide a sensitivity analysis to show how different parameters impact stocking levels and costs. We show how our cost model can be used to select the single best supply point location from a discrete set of choices and how it can be embedded within existing location algorithms to choose multiple supply points. Our computational experiments involve a variety of relationships between distance and risk and show how these can impact location decisions and stocking levels.
Facility location Inventory Disaster preparedness Prepositioning
http://www.sciencedirect.com/science/article/B6VCT-50XS6FT-1/2/6ed9c7222b29246ee62da89794221ed6
Campbell, Ann Melissa
Jones, Philip C.
oai:RePEc:eee:ejores:v:211:y:2011:i:3:p:612-6222011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:211:y:2011:i:3:p:612-622
article
A genetic algorithm for the unrelated parallel machine scheduling problem with sequence dependent setup times
In this work a genetic algorithm is presented for the unrelated parallel machine scheduling problem in which machine- and job-sequence-dependent setup times are considered. The proposed genetic algorithm includes a fast local search and a local-search-enhanced crossover operator. Two versions of the algorithm are obtained after extensive calibration using the Design of Experiments (DOE) approach. We review, evaluate and compare the proposed algorithm against the best methods known from the literature. We also develop a benchmark of small and large instances to carry out the computational experiments. After an exhaustive computational and statistical analysis, we conclude that the proposed method shows excellent performance, outperforming the rest of the evaluated methods on a comprehensive benchmark set of instances.
Parallel machine Scheduling Makespan Setup times
http://www.sciencedirect.com/science/article/B6VCT-51X1VRN-5/2/0b7fac77528c00166e1718642aa7b086
Vallada, Eva
Ruiz, Rubén
oai:RePEc:eee:ejores:v:207:y:2010:i:1:p:92-962011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:207:y:2010:i:1:p:92-96
article
The capacitated single-allocation hub location problem revisited: A note on a classical formulation
In this paper a well-known formulation for the capacitated single-allocation hub location problem is revisited. An example is presented showing that for some instances this formulation is incomplete. The reasons for the incompleteness are identified leading to the inclusion of an additional set of constraints. Computational experiments are performed showing that the new constraints also help to decrease the computational time required to solve the problem optimally.
Hub location MIP formulations Transportation
http://www.sciencedirect.com/science/article/B6VCT-4YXK0C6-2/2/8dc3b233f3b792e5928e5b0936d4b49f
Correia, Isabel
Nickel, Stefan
Saldanha-da-Gama, Francisco
oai:RePEc:eee:ejores:v:208:y:2011:i:2:p:161-1692011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:208:y:2011:i:2:p:161-169
article
Optimal tax depreciation with loss carry-forward and backward options
The choice of depreciation method from among straight-line and accelerated methods can have a significant impact on the present value of expected tax payments. This problem has been studied for decades, with most results indicating the optimality of accelerated methods. Recent research questions this claim by relaxing the assumption of positive taxable income. The situation where net operating losses may be carried forward and backward in time is the subject of this paper. We model this situation and establish conditions under which straight-line depreciation is preferred over accelerated methods. The results centre on a threshold number of periods of consecutive losses, which is determined by the allowable periods to carry a loss forward. For consecutive losses beyond this threshold, straight-line is always optimal. When the cumulative depreciation charges up to and including the window are guaranteed to be applied on or before the threshold period, straight-line is never optimal.
Decision analysis Depreciation Tax minimization Cash flow analysis
http://www.sciencedirect.com/science/article/B6VCT-50F8BTY-2/2/f84a7709d10b35e8000703df7edc9194
Kulp, Alison
Hartman, Joseph C.
oai:RePEc:eee:ejores:v:210:y:2011:i:2:p:448-4512011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:210:y:2011:i:2:p:448-451
article
Revisiting the PERT mean and variance
Difficulties with the interpretation of the parameters of the beta distribution led Malcolm et al. (1959) to suggest, in the Program Evaluation and Review Technique (PERT), their by now classical expressions for the mean and variance of activity completion time for practical applications. In this note, we provide an alternative to the PERT variance expression, addressing a concern raised by Hahn (2008) regarding the constant PERT variance assumption given the range of an activity's duration, while retaining the original PERT mean expression. Moreover, our approach ensures that an activity's elicited most likely value aligns with the beta distribution's mode. While this was the original intent of Malcolm et al. (1959), their method of selecting beta parameters via the PERT mean and variance is not consistent in this manner.
Project scheduling PERT Beta distribution Expert judgment
http://www.sciencedirect.com/science/article/B6VCT-50VKMBN-1/2/93c4772c08091bb4b1ac2f897a7b1734
Herrerías-Velasco, José Manuel
Herrerías-Pleguezuelo, Rafael
van Dorp, Johan René
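For context, the classical PERT expressions of Malcolm et al. (1959) that the note above revisits: for an activity duration $T$ with elicited optimistic estimate $a$, most likely value $m$, and pessimistic estimate $b$,

```latex
\mathrm{E}[T] \;=\; \frac{a + 4m + b}{6},
\qquad
\mathrm{Var}[T] \;=\; \left(\frac{b - a}{6}\right)^{2}.
```

Hahn's (2008) concern is visible in the variance expression: it depends only on the range $b - a$, so every activity with the same range is assigned the same variance regardless of where the most likely value $m$ lies within it.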
oai:RePEc:eee:ejores:v:209:y:2011:i:2:p:95-1032011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:209:y:2011:i:2:p:95-103
article
Assessment of China transit and economic efficiencies in a modified value-chains DEA model
This study incorporates the concepts of undesirable intermediates, intermediate inputs, uncontrollable inputs, and undesirable outputs into the value-chains model (Chen and Zhu, 2004), thereby creating a modified value-chains model to compute transit and economic efficiencies in 30 regions of China. Through these concepts, the modified value-chains model provides a more general formulation of value-added chains; it also provides an optimal intermediate measure, which differs from the independent two-stage measure. Empirical evaluations indicate that large-scale transit development in China's coastal area does not necessarily represent higher transit efficiency, because in the coastal area there is no significant positive relationship between transit and economic efficiency: high economic efficiency does not contribute to greater transit efficiency. The findings also suggest that, by simultaneously decreasing passenger and freight transport quantities, transit and economic efficiencies have greatly improved in most regions of China.
Data envelopment analysis Undesirable output Value-chains Economic efficiency Transit efficiency
http://www.sciencedirect.com/science/article/B6VCT-506J0J7-3/2/4f57ba878e6a4ed124b80e6e0dcea1bb
Chiu, Yung-ho
Huang, Chin-wei
Ma, Chun-Mei
oai:RePEc:eee:ejores:v:207:y:2010:i:1:p:473-4802011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:207:y:2010:i:1:p:473-480
article
Pricing problem in wireless telecommunication product and service bundling
This paper investigates a mixed bundling problem in the wireless telecommunication business. Customers can buy cellular phones at a discount price if they subscribe to a service plan priced above a threshold. We propose a nonlinear mixed-integer programming model to determine the optimal prices that maximize the total profit of the service provider. An efficient algorithm is presented to solve this problem when discrete demand data are available. We compare the profits from three strategies: individual sale, mixed bundle, and pure bundle. Our analysis identifies the conditions under which the mixed bundle strategy outperforms the other strategies. We also study the impact of the parameters on the solution. The results of the analysis may help service providers adjust their pricing schemes according to changes in the market. In the case of incomplete information (when only the distribution of demand is known), we apply another approach (a partition graph) to determine the optimal bundle price.
Bundle Pricing Nonlinear programming Telecommunication
http://www.sciencedirect.com/science/article/B6VCT-4YYGGYH-1/2/cd60428460d6279efe1b46bc678c9a7d
Yang, Bibo
Ng, C.T.
oai:RePEc:eee:ejores:v:207:y:2010:i:2:p:1122-11292011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:207:y:2010:i:2:p:1122-1129
article
Network DEA: Additive efficiency decomposition
In conventional DEA analysis, DMUs are generally treated as a black box in the sense that internal structures are ignored, and the performance of a DMU is assumed to be a function of a set of chosen inputs and outputs. A significant body of work has been directed at problem settings where the DMU is characterized by a multistage process; supply chains and many manufacturing processes take this form. Recent DEA literature on serial processes has tended to concentrate on closed systems, that is, where the outputs from one stage become the inputs to the next stage, and where no other inputs enter the process at any intermediate stage. The current paper examines the more general problem of an open multistage process. Here, some outputs from a given stage may leave the system while others become inputs to the next stage. As well, new inputs can enter at any stage. We then extend the methodology to examine general network structures. We represent the overall efficiency of such a structure as an additive weighted average of the efficiencies of the individual components or stages that make up that structure. The model therefore allows one not only to evaluate the overall performance of the network, but also to represent how that performance decomposes into measures for the individual components of the network. We illustrate the model using two data sets.
DEA Multistage Serial systems Additive decomposition
http://www.sciencedirect.com/science/article/B6VCT-5025V9W-3/2/5b2d41667ab8ea4c1037e33620269735
Cook, Wade D.
Zhu, Joe
Bi, Gongbing
Yang, Feng
oai:RePEc:eee:ejores:v:207:y:2010:i:2:p:697-7102011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:207:y:2010:i:2:p:697-710
article
A dynamic model of supplier switching
While a broad branch of the literature deals with the development of buyer-supplier relationships, little research exists on the circumstances under which a buyer should terminate such a relationship and switch to a new supplier. Recently, Wagner and Friedl (2007) developed a framework to analyze a static, one-shot supplier switching decision when the buyer has asymmetric information about the supplier's production costs. We extend their basic framework to a dynamic one, assuming that the supplier learns the production costs over time as he sets up the production process. Since the supplier's cost information at the individual stages crucially determines the setup and switching decisions, it becomes essential for supply chain management to provide proper incentives so that the supplier reveals his cost information truthfully over time. We characterize the optimal setup and switching strategy as well as the optimal supply chain contract, and compare our findings with those of the static setting to provide further insights.
Purchasing Supplier switching Supply chain contracts Information asymmetry
http://www.sciencedirect.com/science/article/B6VCT-506W6NT-2/2/95244505b4bd463e5e022ff4c4e30068
Pfeiffer, Thomas
oai:RePEc:eee:ejores:v:207:y:2010:i:2:p:980-9852011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:207:y:2010:i:2:p:980-985
article
Computing stable loads for pallets
This paper describes an Integer Programming model for generating stable loading patterns for the Pallet Loading Problem under several stability criteria. The results obtained during evaluation show a substantial increase in the number of stable patterns compared with previously reported results. Moreover, most of the solved cases also ensure optimality in terms of pallet utilization.
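One common stability criterion, full base support, can be sketched in one dimension; the interval-coverage check below is an invented stand-in for the paper's stability constraints, not its actual IP model.

```python
def fully_supported(box, layer_below):
    """box = (x, width); layer_below = list of (x, width).
    True if [x, x + width] is covered by the union of the intervals
    below -- a 1D stand-in for a full-support stability criterion."""
    start, end = box[0], box[0] + box[1]
    ivals = sorted((bx, bx + bw) for bx, bw in layer_below)
    pos = start  # left-most point not yet known to be supported
    for a, b in ivals:
        if a > pos:           # gap under the box: support broken
            break
        pos = max(pos, b)     # extend the supported region
        if pos >= end:
            return True
    return pos >= end
```

In an IP formulation, a check like this becomes a set of linear constraints linking the placement variables of consecutive layers.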
Logistics Distribution Packing Pallet loading
http://www.sciencedirect.com/science/article/B6VCT-502GH61-8/2/fce2d37de1c7370feac9ce38176f75e0
Kocjan, W.
Holmström, K.
oai:RePEc:eee:ejores:v:207:y:2010:i:2:p:557-5652011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:207:y:2010:i:2:p:557-565
article
The examination timetabling problem at Universiti Malaysia Pahang: Comparison of a constructive heuristic with an existing software solution
This paper presents a real-world, capacitated examination timetabling problem from Universiti Malaysia Pahang (UMP), Malaysia. The problem has constraints that have not been modelled before, namely the distance between examination rooms and the splitting of exams across several rooms. These constraints provide additional challenges in defining a suitable model and in developing a constructive heuristic. One contribution of this paper is to formally define this real-world problem. A further contribution is a constructive heuristic that produces good-quality solutions, superior to those produced by the university's current software. Moreover, our method adheres to all hard constraints, which the current system fails to do.
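A constructive heuristic of the general kind described can be sketched as a greedy, largest-exam-first assignment; the enrolment data and seat capacity below are invented, and the sketch deliberately ignores the paper's distance and exam-splitting constraints.

```python
# Invented instance: exams sharing students must not share a
# timeslot, and each slot holds at most `capacity` seats.
enrolments = {
    "math": {"s1", "s2", "s3"},
    "phys": {"s2", "s4"},
    "chem": {"s1", "s5"},
    "econ": {"s4", "s5"},
}
capacity = 5  # seats available per timeslot (assumed)

# Schedule the largest exams first, a common constructive ordering.
order = sorted(enrolments, key=lambda e: -len(enrolments[e]))

timetable = {}       # exam -> assigned slot
slot_students = {}   # slot -> students already seated in that slot

for exam in order:
    slot = 0
    while True:
        seated = slot_students.get(slot, set())
        no_clash = seated.isdisjoint(enrolments[exam])
        fits = len(seated) + len(enrolments[exam]) <= capacity
        if no_clash and fits:
            timetable[exam] = slot
            slot_students[slot] = seated | enrolments[exam]
            break
        slot += 1  # first-fit: try the next timeslot
```

Each exam is placed in the earliest slot where it causes no student clash and the room capacity is respected, so all hard constraints of this toy model hold by construction.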
Optimisation Timetabling Heuristic Scheduling
http://www.sciencedirect.com/science/article/B6VCT-4YVJ3YX-4/2/facd478ee2646ff6358bfbb1e322c433
Kahar, M.N.M.
Kendall, G.
oai:RePEc:eee:ejores:v:210:y:2011:i:2:p:241-2482011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:210:y:2011:i:2:p:241-248
article
Optimal core acquisition and remanufacturing policies under uncertain core quality fractions
Cores acquired by a remanufacturer are typically highly variable in quality. Even if the expected fractions of the various quality levels are known, the exact fractions in an acquired lot remain uncertain. Our model incorporates this uncertainty into the acquisition decision by considering multiple quality classes and a multinomial quality distribution for an acquired lot. We derive optimal acquisition and remanufacturing policies for both deterministic and uncertain demand. For deterministic demand, we derive a simple closed-form expression for the total expected cost. In a numerical experiment, we highlight the effect of uncertainty in quality fractions on the optimal number of acquired cores and show that the cost error of ignoring this uncertainty can be significant. For uncertain demand, we derive optimal newsboy-type solutions for the remanufacture-up-to levels and an approximate expression for the total expected cost given the number of acquired cores. In a further numerical experiment, we explore the effects of demand uncertainty on the optimal acquisition and remanufacturing decisions and on the total expected cost.
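For the deterministic-demand case, the cost trade-off can be sketched with a two-quality-class simplification (remanufacturable or not) in place of the paper's multinomial model; all parameter values below are invented for illustration.

```python
from math import comb

# Assumed parameters: each acquired core is remanufacturable with
# probability p; demand D is deterministic; costs are illustrative.
p, D = 0.7, 10
c_acq, c_reman, c_short = 2.0, 5.0, 20.0

def expected_cost(n):
    """Acquisition + remanufacturing + shortage cost when n cores
    are acquired and the number of good cores is Binomial(n, p)."""
    cost = n * c_acq
    for k in range(n + 1):
        prob = comb(n, k) * p**k * (1 - p)**(n - k)
        usable = min(k, D)
        cost += prob * (usable * c_reman + (D - usable) * c_short)
    return cost

# Search a plausible range for the cost-minimizing acquisition size.
n_star = min(range(D, 3 * D + 1), key=expected_cost)
```

The gap between n_star and the naive choice of acquiring exactly D cores is precisely the "cost error of ignoring uncertainty" the abstract refers to: with random yield, acquiring only D cores leaves a large expected shortage.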
Remanufacturing Product acquisition management Quality uncertainty
http://www.sciencedirect.com/science/article/B6VCT-50BB5CR-4/2/1162b518ef9098ab0ecc287c812716ea
Teunter, Ruud H.
Flapper, Simme Douwe P.
oai:RePEc:eee:ejores:v:207:y:2010:i:2:p:1104-11152011-03-31RePEc:eee:ejores
RePEc:eee:ejores:v:207:y:2010:i:2:p:1104-1115
article
A distance friction minimization approach in data envelopment analysis: A comparative study on airport efficiency
This paper aims to present a newly developed distance friction minimization (DFM) method in the context of data envelopment analysis (DEA) in order to generate an appropriate (non-radial) efficiency-improving projection model, for both input reduction and output increase. In this approach, a g