NIPS 2012 Workshop: Perturbations, Optimization, and Statistics
Workshop in conjunction with NIPS 2012.
Saturday, December 8, 2012, Lake Tahoe, Nevada, United States.
Location: Glenbrook + Emerald Bay, Harrah's Special Events Center 2nd Floor.
Schedule: Check out the workshop's program here.
Description
In nearly all machine learning tasks, we expect there to be randomness, or
noise, in the data we observe and in the relationships encoded by the
model. Usually, this noise is considered undesirable, and we would eliminate
it if possible. However, there is an emerging body of work on perturbation
methods, showing the benefits of explicitly adding noise into the modeling,
learning, and inference pipelines. This workshop will bring together the
growing community of researchers interested in different aspects of this area,
and will broaden our understanding of why and how perturbation methods can be
useful.
More generally, perturbation methods usually provide efficient and principled
ways to reason about the neighborhood of possible outcomes when trying to make
the best decision. For example, some might want to arrive at the best outcome
that is robust to small changes in model parameters. Others might want to find
the best choice while compensating for their lack of knowledge by averaging
over the different outcomes. Recently, several lines of work influenced by
diverse fields of research such as statistics, optimization, machine learning,
and theoretical computer science have used perturbation methods in similar
ways. The goal of this workshop is to explore different perturbation
techniques and their consequences for computation, statistics, and
optimization. We shall specifically be interested in understanding the
following issues:
Statistical Modeling: What types of statistical models can be defined for
structured prediction? How can random perturbations be used to relate
computation and statistics?
Efficient Sampling: What are the computational properties that allow
efficient and unbiased sampling? How do perturbations control the geometry of
such models and how can we construct sampling methods for these families?
Approximate Inference: What are the computational and statistical
requirements of inference? How can the maximum of random perturbations be
used to measure the uncertainty of a system? (A small numerical sketch of
this idea appears after this list.)
Learning: How can we probabilistically learn model parameters from training
data using random perturbations? What are the connections with max-margin and
conditional random field techniques?
Theory: How does the maximum of a random process relate to its complexity?
What are the statistical and computational properties it describes in
Gaussian free fields over graphs?
Pseudo-sampling: How do dynamical systems encode randomness? To what extent
do perturbations direct us to the “pseudo-randomness” of their underlying
dynamics?
Robust classification: How can classifiers be learned in a robust way, and
how can support vector machines be realized in this context? What are the
relations between adversarial perturbations and regularization, and what are
their extensions to structured prediction?
Robust reconstructions: How can information be robustly encoded? In what
ways can learning be improved by perturbing the input measurements?
Adversarial Uncertainty: How can structured prediction be performed in a
zero-sum game setting? What are the computational qualities of such
solutions, and do Nash equilibria exist in these cases?
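To make the last question under Approximate Inference concrete, the short
sketch below illustrates the classical Gumbel-max identity on a toy discrete
model: adding i.i.d. Gumbel noise to the log-potentials and taking the argmax
yields exact samples from the Gibbs distribution, and the expected value of
the perturbed maximum equals the log partition function. This is a minimal
illustration only; the model, variable names, and sample size are our own
assumptions and are not taken from any workshop contribution.

```python
# Toy sketch (our own, not workshop material) of the "maximum of random
# perturbations" idea behind perturb-and-MAP / Gumbel-max sampling.
import numpy as np

rng = np.random.default_rng(0)
theta = np.array([1.0, 0.5, -0.3, 2.0])      # unnormalized log-potentials theta(x)
log_Z = np.log(np.exp(theta).sum())          # exact log partition function
p = np.exp(theta - log_Z)                    # Gibbs distribution p(x)

n = 200000
gumbel = rng.gumbel(loc=0.0, scale=1.0, size=(n, theta.size))  # i.i.d. Gumbel noise
perturbed = theta + gumbel                   # randomly perturbed potentials
samples = perturbed.argmax(axis=1)           # MAP state of each perturbed model

# (1) The perturbed MAP is an exact sample from the Gibbs distribution.
print("empirical frequencies:", np.bincount(samples, minlength=theta.size) / n)
print("exact probabilities  :", p)

# (2) The expected maximum of the perturbed potentials recovers log Z.
print("mean of max:", perturbed.max(axis=1).mean(), "  log Z:", log_Z)
```

For models with many interacting variables, enumerating all states in this way
is intractable; the questions above ask when low-dimensional perturbations
combined with efficient MAP solvers can retain such guarantees.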
Target Audience: The workshop should appeal to NIPS attendees interested
both in theoretical aspects such as Bayesian modeling, Monte Carlo sampling,
optimization, inference, and learning, and in practical applications in
computer vision and language modeling.
Call for Papers
In addition to a program of invited presentations, we solicit contributions
of short papers that explore perturbation-based methods in the context of
topics such as statistical modeling, sampling, inference, estimation, theory,
robust optimization, and robust learning. We are interested in both
theoretical and application-oriented work. We also welcome papers that
explore connections between alternative ways of using perturbations.
Contributed papers should adhere to the
NIPS format and are encouraged to
be up to four pages long (not counting the list of references). Papers
submitted for review do not need to be anonymized. There will be no official
proceedings. Thus, apart from papers reporting novel unpublished work, we also
welcome submissions describing work in progress or summarizing a longer paper
under review at a journal or conference (though this should be clearly
stated). Accepted papers will be presented as posters; some may also be
selected for spotlight talks.
Please submit papers in PDF format by email to posNIPS2012@gmail.com. The
submission deadline has been extended to October 19, 2012, and notifications
of acceptance will be sent by October 28, 2012. At least one author must
attend the workshop to present the work.
You can also download the Call for Papers in PDF or
TXT.
Confirmed Invited Speakers
Organizers
References
We have assembled below a list of indicative references related to the workshop's theme.
Machine learning
- Efficient algorithms for online decision problems (J. of Comp. and System Sci., 2005), A. Kalai, S. Vempala
- Extracting and composing robust features with denoising autoencoders (ICML, 2008), P. Vincent, H. Larochelle, Y. Bengio, P. Manzagol
- Herding dynamical weights to learn (ICML, 2009), M. Welling
- Gaussian sampling by local perturbations (NIPS, 2010), G. Papandreou, A. Yuille
- Perturb-and-MAP random fields: Using discrete optimization to learn and sample from energy models (ICCV, 2011), G. Papandreou, A. Yuille
- Robust Max-Product Belief Propagation (arXiv, 2011), M. Ibrahimi, A. Javanmard, Y. Kanoria, A. Montanari
- Robust Optimization and Machine Learning (book chapter, 2011), C. Caramanis, S. Mannor, H. Xu
- Randomized Optimum Models for Structured Prediction (AISTATS, 2012), D. Tarlow, R. Adams, R. Zemel
- On the Partition Function and Random Maximum A-Posteriori Perturbations (ICML, 2012), T. Hazan, T. Jaakkola
- A Simple Geometric Interpretation of SVM using Stochastic Adversaries (AISTATS, 2012), R. Livni, K. Crammer, A. Globerson
Extreme value statistics
Discrete choice in psychology and economics
Mathematics and statistical physics
- Information, Physics, and Computation (Oxford Univ. Press, 2009), M. Mezard, A. Montanari
- The supremum of some canonical processes (Am. J. of Math, 1994), M. Talagrand
- Random weighting, asymptotic counting, and inverse isoperimetry (Isr. J. Math, 2007), A. Barvinok, A. Samorodnitsky
- Cover times, blanket times, and majorizing measures (arXiv, 2010), J. Ding, J.R. Lee, Y. Peres
- Smoothed Analysis: An Attempt to Explain the Behavior of Algorithms in Practice (CACM, 2009), D.A. Spielman, S. Teng
- Online Vertex-Weighted Bipartite Matching and Single-bid Budgeted Allocations (arXiv, 2010), G. Aggarwal, G. Goel, C. Karande, A. Mehta
Optimization
- Fast approximate energy minimization via graph cuts (PAMI, 2001), Y. Boykov, O. Veksler, R. Zabih
- Dynamic programming and graph algorithms in computer vision (PAMI, 2011), P.F. Felzenszwalb, R. Zabih
- Introduction to dual decomposition for inference (MIT Press, 2011), D. Sontag, A. Globerson, T. Jaakkola
- Optimization for Machine Learning (MIT Press, 2011), S. Sra, S. Nowozin, S. Wright (eds.)
- Markov Random Fields for Vision and Image Processing (MIT Press, 2011), A. Blake, P. Kohli, C. Rother (eds.)
- Stochastic Programming (Kluwer, 1995), A. Prekopa
- Robust Optimization (Princeton Univ. Press, 2009), A. Ben-Tal, L. El Ghaoui, A. Nemirovski