George Papandreou – Home Page
About Me
Since December 2014 I have been working at Google as a Research Scientist. I
will continue to update this web page.
Before joining Google, I was a Research Assistant Professor at the Toyota Technological Institute at Chicago.
My research interests are in computer vision, machine learning, and multimodal
perception. My current focus is on deep learning. I approach these
problems with methods from Bayesian statistics, signal processing, and applied
mathematics.
From 2009 to 2013 I was a Postdoctoral Research Scholar at
UCLA, working with Prof. Alan Yuille. I hold a Diploma (2003) and a PhD (2009) in Electrical and
Computer Engineering from NTUA, Greece,
where I was a CVSP group member, advised by
Prof. Petros Maragos.
[CV…]
[Bio…]
Recent Research Highlight: Deep Epitomic Convolutional Networks
I have been exploring the powerful epitomic data structure for
transformation-aware image analysis and recognition. Building on image
epitomes, I have developed a new bag-of-words (BoW)-type model using a dictionary of flat
mini-epitomes learned in an unsupervised fashion from raw images. In my most
recent work in the context of deep learning, I have proposed the epitomic
convolution layer as a powerful replacement for a consecutive pair of
convolution and max-pooling layers.
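For concreteness, here is a minimal PyTorch sketch of the epitomic convolution idea: each filter is a mini-epitome larger than the input patch, and the response at each image position is the maximum correlation over all sub-windows of the epitome, so the max is taken over filter positions rather than over image positions as in a conv + max-pooling pair. The function name and shapes are illustrative, and the per-position biases and input normalization used in the actual model are omitted.

```python
import torch
import torch.nn.functional as F

def epitomic_conv(x, epitomes, patch_size, stride=1):
    # x:        (N, C, H, W) input feature map
    # epitomes: (K, C, E, E) mini-epitome filters with E > patch_size
    K, C, E, _ = epitomes.shape
    p = patch_size
    n = E - p + 1                      # sub-window positions per epitome axis
    # Enumerate every p x p sub-filter inside each epitome: (K, C, n, n, p, p)
    sub = epitomes.unfold(2, p, 1).unfold(3, p, 1)
    sub = sub.permute(0, 2, 3, 1, 4, 5).contiguous().view(K * n * n, C, p, p)
    # Correlate the input with all sub-filters in one convolution call ...
    r = F.conv2d(x, sub, stride=stride)            # (N, K*n*n, H', W')
    # ... and keep, per epitome, the best-matching sub-window position:
    # max over filter positions instead of max-pooling over image positions.
    r = r.view(x.shape[0], K, n * n, r.shape[-2], r.shape[-1])
    return r.max(dim=2).values                     # (N, K, H', W')
```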
Deep epitomic nets, along with explicit scale/position search, have been the key
ingredients in our TTIC_ECP entry in the ImageNet LSVRC 2014 image
classification competition, achieving a 10.2% top-5 error rate, a 3%
performance improvement over a baseline conventional max-pooled convnet.
[CVPR 2014]
[arXiv]
[ILSVRC results]
[ILSVRC workshop]
Recent Research Highlight: Perturb-and-MAP Random Fields
I have been developing a new Perturb-and-MAP framework for one-shot random
sampling in Gaussian or discrete-label Markov random fields
(MRFs). Perturb-and-MAP random fields turn powerful deterministic energy
minimization methods into efficient random sampling algorithms. By avoiding
costly MCMC, one can generate independent random samples from million-node
networks in a fraction of a second. Applications include model parameter
estimation and solution uncertainty quantification in computer vision.
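As a toy illustration of the idea (not the large-scale solvers used in the paper), the sketch below draws one Perturb-and-MAP sample from a small chain MRF: the unary potentials are perturbed with i.i.d. Gumbel noise and the perturbed model is then solved exactly by dynamic programming. All names are illustrative; in practice the MAP step would be graph cuts or another efficient energy minimizer on a large grid MRF.

```python
import numpy as np

def perturb_and_map_chain(unary, pairwise, rng):
    # One Perturb-and-MAP sample from a chain MRF with T nodes and L labels.
    # unary:    (T, L) unary potentials (higher is better, i.e. negated energies)
    # pairwise: (L, L) pairwise potential shared across consecutive nodes
    T, L = unary.shape
    # Perturb each unary entry with i.i.d. Gumbel(0, 1) noise.
    perturbed = unary + rng.gumbel(size=(T, L))
    # Exact MAP of the perturbed model via Viterbi dynamic programming.
    score = perturbed[0].copy()
    back = np.zeros((T, L), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + pairwise + perturbed[t][None, :]
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0)
    # Backtrack the optimal labelling; this labelling is the random sample.
    labels = np.zeros(T, dtype=int)
    labels[-1] = score.argmax()
    for t in range(T - 1, 0, -1):
        labels[t - 1] = back[t, labels[t]]
    return labels

rng = np.random.default_rng(0)
sample = perturb_and_map_chain(rng.normal(size=(10, 3)), np.eye(3), rng)
```

Repeating the call with fresh perturbations yields independent samples, each at the cost of a single MAP computation.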
For an overview, see my review article, which appears as a book chapter in the recently published MIT
Press book Advanced Structured Prediction.
[Read more…]
News
December 19, 2014 — Recent work (the DeepLab-CRF arXiv paper) sets a new state of the art (66.4% IoU, further
improved to 67.1% IoU with the addition of intermediate-layer features) in
semantic image segmentation on the
PASCAL VOC 2012 benchmark. We refine densely computed convolutional neural
network response maps with fully-connected conditional random
fields. Algorithmic improvements allow us to compute dense segmentation maps
in a fraction of a second. Joint work with Jay Chen and collaborators at Ecole Centrale Paris/INRIA, Google Research,
and UCLA.
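As a rough sketch of this kind of refinement step, the snippet below runs fully-connected CRF mean-field inference on top of CNN class scores using the open-source pydensecrf package; it is an illustration under assumed parameter values, not the pipeline or settings used in the paper.

```python
import numpy as np
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import unary_from_softmax

def refine_with_dense_crf(image, probs, n_iters=10):
    # image: (H, W, 3) uint8 RGB image
    # probs: (L, H, W) per-class softmax scores from the CNN, upsampled to image size
    L, H, W = probs.shape
    d = dcrf.DenseCRF2D(W, H, L)
    # Unary term: negative log of the CNN class scores.
    d.setUnaryEnergy(unary_from_softmax(probs))
    # Fully-connected pairwise terms: a spatial smoothness kernel plus an
    # appearance (color-dependent) kernel; parameter values are illustrative.
    d.addPairwiseGaussian(sxy=3, compat=3)
    d.addPairwiseBilateral(sxy=60, srgb=10,
                           rgbim=np.ascontiguousarray(image), compat=5)
    Q = d.inference(n_iters)                     # mean-field inference
    return np.argmax(np.array(Q), axis=0).reshape(H, W)
```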
November 30, 2014 — Technical report on
arXiv exploring explicit position, scale, and aspect ratio modeling in the
context of deep convolutional neural networks (DCNNs). We describe an
improved version of our TTIC_ECP entry in the ImageNet 2014 image
classification/localization competition. We further show that competitive
object detection results (56.4% mAP on PASCAL VOC 2007) are possible when
applying DCNNs in a plain sliding-window fashion. We also describe some tricks
that, surprisingly, make dense sliding-window DCNN detection faster
than current two-stage approaches such as R-CNN, which rely on separate
region-proposal and scoring steps. Joint work with I. Kokkinos and P.-A. Savalle.
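A minimal sketch of the dense multi-scale scoring idea behind sliding-window detection: a fully-convolutional scorer is applied to a small image pyramid, so every window position at every scale is scored in one forward pass per scale rather than by cropping and rescoring individual region proposals. The scorer argument is a placeholder for any fully-convolutional network; the function name and scale set are illustrative, not the report's actual configuration.

```python
import torch
import torch.nn.functional as F

def dense_multiscale_scores(image, scorer, scales=(0.5, 1.0, 2.0)):
    # image:  (1, 3, H, W) tensor
    # scorer: fully-convolutional net mapping (1, 3, h, w) to per-class
    #         score maps (1, L, h', w'), i.e. one score per window position.
    outputs = []
    for s in scales:
        resized = F.interpolate(image, scale_factor=s, mode='bilinear',
                                align_corners=False)
        # One forward pass scores all window positions at this scale.
        outputs.append((s, scorer(resized)))
    return outputs
```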
February 8, 2014 — My review paper on Perturb-and-MAP appears as an invited
chapter in a forthcoming MIT Press book, Advanced Structured Prediction,
edited by S. Nowozin, P. Gehler, J. Jancsary, and C. Lampert.
[pdf]
December 9, 2013 — Iasonas Kokkinos, Alex Bronstein, Michael Bronstein,
and I will be teaching a full-day tutorial on June 23, 2014, at CVPR
2014. The tutorial, BASIS-14
(BASes for Images and Surfaces), will cover linear and non-linear image and
surface analysis methods, from fundamental concepts to state-of-the-art
techniques, from the viewpoint of basis expansions.