COSI 2014 Plenary Sessions
Michel Balinski
Sihem Amer-Yahia
Saïd Salhi
Eduardo Uchoa
The Failure of Majority Voting and How to Overcome It
Michel Balinski
Researcher at the École polytechnique, Laboratoire d'Économétrie
Emeritus Research Director (Directeur de recherche émérite) at CNRS
Winner of the 2013 John von Neumann Theory Prize, awarded by the Institute for Operations Research and the Management Sciences (INFORMS).
Abstract: From childhood on, everyone votes between two alternatives by choosing one over the other, the majority of the votes determining the collective choice. This common practice has turned into the conviction that the choice so made is genuinely judged the best alternative by an electorate or a jury.
The aim of this presentation is (1) to show that majority voting may well make a choice contrary to the one an electorate or a jury considers best, even when there are only two candidates or alternatives, and (2) to explain how an electorate or a jury can, by using majority judgment, designate the choice genuinely evaluated as best by a majority.
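As a concrete illustration of point (2), here is a minimal Python sketch of the core computation behind majority judgment: every voter grades every candidate on a common scale, and candidates are ranked by the median of their grades. The ballots and the 0-5 scale below are invented for illustration, and the method's tie-breaking rule between equal majority grades (described in the references) is omitted.

from statistics import median_low

# Hypothetical ballots: each voter assigns every candidate a grade on a
# common scale, here 0 ("To Reject") up to 5 ("Excellent").
grades = {
    "Candidate A": [5, 5, 4, 2, 1, 1, 0],
    "Candidate B": [3, 3, 3, 3, 3, 2, 2],
}

def majority_grade(gs):
    # The majority grade is the (lower) median of a candidate's grades.
    return median_low(gs)

# The majority-judgment winner has the highest majority grade; here B
# wins with grade 3 against A's grade 2, although A has more top grades.
winner = max(grades, key=lambda c: majority_grade(grades[c]))
print(winner, majority_grade(grades[winner]))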
References:
Michel Balinski and Rida Laraki, 2010. Majority Judgment: Measuring, Ranking, and Electing. MIT Press.
Michel Balinski and Rida Laraki, 2013. "Jugement majoritaire versus vote majoritaire (via les présidentielles 2011-2012)." Revue Française d'Économie XXVII, 11-44.
Michel Balinski and Rida Laraki, 2013. "How best to rank wines." In E. Giraud-Héraud and M.-C. Pichery (eds.), Wine Economics: Quantitative Studies and Empirical Applications. Palgrave.
Michel Balinski and Rida Laraki, 2014. "Judge: Don't Vote!" To appear, Operations Research.
Michel Balinski and Rida Laraki, 2014. "What should 'majority decision' mean?" To appear in Jon Elster and Stéphanie Novak (eds.), Majority Decisions. Cambridge University Press.
Michel Balinski and Rida Laraki, 2014. "Majority measures." In preparation.
Back to the plenary program.
New Perspectives in Social Data Management
Sihem Amer-Yahia
Research Director (Directrice de Recherche) at CNRS
Laboratoire d'Informatique de Grenoble, France
Web page: http://membres.liglab.fr/amery/
Abstract: The web has evolved from a technology platform to a social milieu where factual, opinion and behavior data interleave. A number of social applications are being built to analyze and extract value from this data, encouraging us to adopt a data-driven approach to research. I will describe a perspective on why and how social data management is fundamentally different from data management as it is taught in school today. More specifically, I'll talk about data preparation, data exploration and application validation.
This talk is based on published and ongoing work with colleagues at LIG, UT Austin, U. of Trento, U. of Tacoma, and Google Research.
Back to the plenary program.
An Adaptive Multiphase Approach for Very Large Unconditional and Conditional p-Median Problems
Saïd Salhi
Professor of Management Science and Operational Research
Centre for Logistics & Heuristic Optimization (CLHO), Kent Business School, University of Kent, UK
Head of the Management Science Group
Web page: http://www.kent.ac.uk/kbs/profiles/staff/salhi_said.html
Abstract: In this talk, an overview of logistics systems is first given, followed by the importance of location analysis as one of the activities in the chain. The aim of the talk is to explore ways to tackle very large data sets when solving a class of location problems, namely the p-median problem. Here the goal is to find the optimal (or best) locations of p facilities that serve all the customers. As the problem is very large, aggregation techniques are briefly reviewed. A multiphase approach that incorporates demand-point aggregation, Variable Neighbourhood Search (VNS) and an exact method is proposed for solving very large-scale unconditional and conditional p-median problems.
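For reference, the p-median problem admits a standard 0-1 formulation (the efficient formulation used in the talk may differ in its details). With I the customers, J the candidate sites, d_i the demand of customer i, c_{ij} the distance from i to site j, y_j = 1 if a facility is opened at j, and x_{ij} = 1 if customer i is served from j:

\begin{align*}
\min\ & \sum_{i \in I} \sum_{j \in J} d_i\, c_{ij}\, x_{ij} \\
\text{s.t.}\ & \sum_{j \in J} x_{ij} = 1 && \forall i \in I \\
& x_{ij} \le y_j && \forall i \in I,\ j \in J \\
& \sum_{j \in J} y_j = p \\
& x_{ij},\, y_j \in \{0, 1\} && \forall i \in I,\ j \in J
\end{align*}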
The method consists of four phases. In the first phase, several aggregated problems are solved with a "Mini VNS" to generate promising facility sites, which are then used to solve a reduced problem in Phase 2 using VNS or an exact method based on an efficient 0-1 formulation. The new solution is then fed into an iterative learning process which tackles the aggregated problem (Phase 3). Phase 4 is a post-optimisation phase applied to the original (disaggregated) problem.
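To make the VNS component concrete, the following Python sketch implements a basic Variable Neighbourhood Search for the p-median problem with the classic vertex-substitution (swap) neighbourhood. It illustrates the general technique under simplifying assumptions (unit demands, a plain interchange local search) and is not the "Mini VNS" or the learning process of the actual method.

import random

def cost(medians, dist):
    # Sum over customers of the distance to the closest open median
    # (unit demands assumed for simplicity).
    return sum(min(dist[i][j] for j in medians) for i in range(len(dist)))

def local_search(medians, dist):
    # First-improvement interchange: swap an open median with a closed
    # site whenever this reduces the cost, until no such swap exists.
    n, best, improved = len(dist), cost(medians, dist), True
    while improved:
        improved = False
        for out in list(medians):
            for inn in (j for j in range(n) if j not in medians):
                cand = (medians - {out}) | {inn}
                c = cost(cand, dist)
                if c < best:
                    medians, best, improved = cand, c, True
                    break
            if improved:
                break
    return medians, best

def vns(dist, p, k_max=3, iters=100, seed=0):
    rng = random.Random(seed)
    n = len(dist)
    x, fx = local_search(set(rng.sample(range(n), p)), dist)
    for _ in range(iters):
        k = 1
        while k <= k_max:
            # Shaking: replace k randomly chosen medians by closed sites.
            y = set(x)
            for _ in range(min(k, p)):
                y.remove(rng.choice(sorted(y)))
                y.add(rng.choice([j for j in range(n) if j not in y]))
            y, fy = local_search(y, dist)
            if fy < fx:
                x, fx, k = y, fy, 1  # improvement: restart neighbourhoods
            else:
                k += 1
    return x, fx

Given an n x n distance matrix dist (a list of lists), vns(dist, p=5) returns a set of p median indices and its cost. A plain interchange search like this scales poorly; large-scale implementations such as those cited in the talk rely on much faster neighbourhood evaluation.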
For the p-median problem, the method is tested on three types of datasets consisting of up to 89,600 demand points. The first two are the BIRCH and the TSP datasets, whereas the third is our new, geometrically constructed dataset that has guaranteed optimal solutions. The computational experiments show that the proposed approach produces very competitive results. The approach is also adapted to cater for the conditional p-median problem, with interesting results. Here, q facilities already exist in the system, and the aim is to find the locations of p extra facilities.
Joint work with Drs Chandra Ade Irawan & Maria Paola Scaparra.
Back to the plenary program.
Recent Advances in Exact Algorithms for the Capacitated Vehicle Routing Problem
Eduardo Uchoa
Professor of Production Engineering
Universidade Federal Fluminense, Niterói, Brazil
Web page: http://www.inf.puc-rio.br/~uchoa/
Abstract: The Capacitated Vehicle Routing Problem (CVRP) was defined by Dantzig and Ramser (1959) as follows. A fleet of vehicles with identical capacities must depart from a depot in order to deliver demands at a set of customers. The solutions are sets of routes starting and ending at the depot such that: (i) each customer is visited by exactly one route, and (ii) the sum of the demands of the customers in a route does not exceed the vehicle capacity. The goal is to minimize the total travel distance. Although dozens of other VRP variants (including features like time windows, multiple depots, heterogeneous fleets, split delivery, pickup and delivery, precedences, loading constraints, etc.) are studied, the CVRP occupies a central position in the vast VRP literature. Since it is the most classical and simplest variant, it is natural that many important new ideas were first tested on it.
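As a small, concrete companion to the definition above, the following Python sketch (with invented instance data) checks that a set of routes satisfies conditions (i) and (ii) and computes the total travel distance:

def check_cvrp(routes, demand, dist, capacity):
    # Customers are numbered 1..n and the depot is node 0; each route
    # lists customers only, the depot endpoints being implicit.
    visited = [c for r in routes for c in r]
    # (i) each customer is visited by exactly one route
    assert sorted(visited) == sorted(demand), "customer missed or repeated"
    total = 0.0
    for r in routes:
        # (ii) the route's total demand must not exceed the capacity
        assert sum(demand[c] for c in r) <= capacity, "capacity exceeded"
        path = [0] + r + [0]  # start and end at the depot
        total += sum(dist[a][b] for a, b in zip(path, path[1:]))
    return total

# Invented instance: 4 customers, symmetric distances, capacity 10.
demand = {1: 4, 2: 6, 3: 3, 4: 5}
dist = [[0, 2, 3, 4, 2],
        [2, 0, 1, 5, 3],
        [3, 1, 0, 2, 4],
        [4, 5, 2, 0, 1],
        [2, 3, 4, 1, 0]]
print(check_cvrp([[1, 2], [3, 4]], demand, dist, capacity=10))  # 13.0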
In the early 2000s, the best-performing algorithms for the CVRP were branch-and-cut algorithms that separated quite complex families of cuts identified by polyhedral investigation. At that time, some instances with only 50 customers could not be solved to optimality. Fukasawa et al. (2006) could solve all the instances from the literature with up to 134 customers by combining cut separation with column generation in a so-called branch-cut-and-price algorithm. Since then, all the best CVRP algorithms have been based on that combination. This talk reviews those algorithms, highlighting the contributions of Baldacci, Christofides and Mingozzi (2008), Baldacci, Mingozzi and Roberti (2011), Contardo (2012), and Ropke (2012). It also presents the very recent developments in Pecin (2014) that allowed breaking the 200-customer barrier.