
no code implementations • 21 Oct 2021 • Antoine Chatalic, Luigi Carratino, Ernesto de Vito, Lorenzo Rosasco

Compressive learning is an approach to efficient large scale learning based on sketching an entire dataset to a single mean embedding (the sketch), i.e. a vector of generalized moments.
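The idea of a sketch as a mean embedding can be illustrated with a minimal numpy snippet, assuming random Fourier features as the generalized moments (all names and sizes below are illustrative, not the paper's notation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Dataset: n points in d dimensions.
n, d, m = 1000, 5, 64
X = rng.normal(size=(n, d))

# Random frequencies defining m generalized moments (random Fourier features).
Omega = rng.normal(size=(d, m))

# The sketch: a single complex vector of size m, the empirical mean embedding.
# The whole dataset is compressed into these m numbers.
sketch = np.exp(1j * X @ Omega).mean(axis=0)
```

Learning then operates on `sketch` alone, whose size is independent of the number of data points.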

no code implementations • 20 Sep 2021 • Francesca Bartolucci, Ernesto de Vito, Lorenzo Rosasco, Stefano Vigogna

Characterizing the function spaces corresponding to neural networks can provide a way to understand their properties.

no code implementations • 23 Jun 2021 • Luigi Carratino, Stefano Vigogna, Daniele Calandriello, Lorenzo Rosasco

We introduce ParK, a new large-scale solver for kernel ridge regression.
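The baseline problem ParK accelerates, plain kernel ridge regression, can be sketched with a naive $O(n^3)$ solver (illustrative code, not ParK's partitioned algorithm; the kernel choice and parameters are assumptions):

```python
import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    # Pairwise squared distances, then the Gaussian kernel matrix.
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * sigma**2))

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.normal(size=200)

lam = 1e-3
K = gaussian_kernel(X, X)
# Solve (K + lam * n * I) alpha = y -- the cubic-cost step large-scale
# solvers such as ParK are designed to avoid.
alpha = np.linalg.solve(K + lam * len(X) * np.eye(len(X)), y)

X_test = np.linspace(-1, 1, 50)[:, None]
y_pred = gaussian_kernel(X_test, X) @ alpha
```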

no code implementations • 16 Jun 2021 • Marco Rando, Luigi Carratino, Silvia Villa, Lorenzo Rosasco

In this paper, we introduce Ada-BKB (Adaptive Budgeted Kernelized Bandit), a no-regret Gaussian process optimization algorithm for functions on continuous domains that provably runs in $O(T^2 d_\text{eff}^2)$, where $d_\text{eff}$ is the effective dimension of the explored space, typically much smaller than $T$.

no code implementations • 16 Jun 2021 • Nicola Rigolli, Nicodemo Magnoli, Lorenzo Rosasco, Agnese Seminara

Animal behavior and neural recordings show that the brain is able to measure both the intensity of an odor and the timing of odor encounters.

no code implementations • 9 Jun 2021 • Bernhard Stankewitz, Nicole Mücke, Lorenzo Rosasco

Optimization was recently shown to control the inductive bias in a learning process, a property referred to as implicit, or iterative regularization.

no code implementations • 29 Apr 2021 • Diego Ferigo, Raffaello Camoriano, Paolo Maria Viceconte, Daniele Calandriello, Silvio Traversaro, Lorenzo Rosasco, Daniele Pucci

Balancing and push-recovery are essential capabilities enabling humanoid robots to solve complex locomotion tasks.

1 code implementation • 25 Feb 2021 • Gian Maria Marconi, Raffaello Camoriano, Lorenzo Rosasco, Carlo Ciliberto

Among these, computing the inverse kinematics of a redundant robot arm poses a significant challenge due to the non-linear structure of the robot, the hard joint constraints and the non-invertible kinematics map.

no code implementations • 28 Dec 2020 • Elisa Maiettini, Raffaello Camoriano, Giulia Pasquale, Vadim Tikhanoff, Lorenzo Rosasco, Lorenzo Natale

These methods have important limitations for robotics: Learning solely on off-line data may introduce biases (the so-called domain shift) and prevent adaptation to novel tasks.

1 code implementation • 25 Nov 2020 • Federico Ceola, Elisa Maiettini, Giulia Pasquale, Lorenzo Rosasco, Lorenzo Natale

Our approach is validated on the YCB-Video dataset, which is widely adopted in the computer vision and robotics community, demonstrating that we can achieve and even surpass state-of-the-art performance, with a significant reduction (${\sim}6\times$) of the training time.

1 code implementation • 25 Nov 2020 • Federico Ceola, Elisa Maiettini, Giulia Pasquale, Lorenzo Rosasco, Lorenzo Natale

This shortens training time while maintaining state-of-the-art performance.

1 code implementation • ICML 2020 • Dominic Richards, Patrick Rebeschini, Lorenzo Rosasco

Under standard source and capacity assumptions, we establish high probability bounds on the predictive performance for each agent as a function of the step size, number of iterations, inverse spectral gap of the communication matrix and number of Random Features.

no code implementations • 28 Jun 2020 • Akshay Rangamani, Lorenzo Rosasco, Tomaso Poggio

We study the average $\mbox{CV}_{loo}$ stability of kernel ridge-less regression and derive corresponding risk bounds.

1 code implementation • NeurIPS 2020 • Giacomo Meanti, Luigi Carratino, Lorenzo Rosasco, Alessandro Rudi

Kernel methods provide an elegant and principled approach to nonparametric learning, but so far could hardly be used in large scale problems, since naïve implementations scale poorly with data size.

no code implementations • 17 Jun 2020 • Andrea Della Vecchia, Jaouad Mourtada, Ernesto de Vito, Lorenzo Rosasco

We study a natural extension of classical empirical risk minimization, where the hypothesis space is a random subspace of a given space.

1 code implementation • 17 Jun 2020 • Cesare Molinari, Mathurin Massias, Lorenzo Rosasco, Silvia Villa

We study iterative regularization for linear models, when the bias is convex but not necessarily strongly convex.

no code implementations • 17 Jun 2020 • Nicolò Pagliana, Alessandro Rudi, Ernesto De Vito, Lorenzo Rosasco

We study the learning properties of nonparametric ridge-less least squares.
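A ridgeless estimator can be sketched as the minimum-norm interpolant, i.e. the $\lambda \to 0$ limit of kernel ridge regression, computed via the pseudoinverse (a minimal illustration under assumed data and kernel choices, not the paper's analysis):

```python
import numpy as np

rng = np.random.default_rng(0)

n = 50
X = rng.uniform(-1, 1, size=(n, 1))
y = X[:, 0] ** 2 + 0.05 * rng.normal(size=n)

# Gaussian kernel matrix on the training points.
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq / 0.5)

# Ridgeless estimator: solve K alpha = y with no regularization,
# using the pseudoinverse to pick the minimum-norm solution.
alpha = np.linalg.pinv(K) @ y
y_fit = K @ alpha
```

Despite having no explicit regularizer, the minimum-norm choice itself acts as an implicit bias, which is what such analyses study.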

no code implementations • 11 Jun 2020 • Dominic Richards, Jaouad Mourtada, Lorenzo Rosasco

We analyze the prediction error of ridge regression in an asymptotic regime where the sample size and dimension go to infinity at a proportional rate.

no code implementations • 28 May 2020 • Gian Maria Marconi, Lorenzo Rosasco, Carlo Ciliberto

Geometric representation learning has recently shown great promise in several machine learning settings, ranging from relational learning to language processing and generative models.

1 code implementation • ICML 2020 • Daniele Calandriello, Luigi Carratino, Alessandro Lazaric, Michal Valko, Lorenzo Rosasco

Gaussian processes (GP) are one of the most successful frameworks to model uncertainty.

no code implementations • 22 Feb 2020 • Cristian Rusu, Lorenzo Rosasco

We investigate numerically efficient approximations of eigenspaces associated to symmetric and general matrices.

no code implementations • 13 Feb 2020 • Carlo Ciliberto, Lorenzo Rosasco, Alessandro Rudi

We propose and analyze a novel theoretical and algorithmic framework for structured prediction.

no code implementations • NeurIPS 2018 • Daniele Calandriello, Lorenzo Rosasco

We investigate the efficiency of k-means in terms of both statistical and computational requirements.
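The computational object under study is standard Lloyd-style k-means, which can be sketched in a few lines (an illustrative toy run on two separated blobs, seeded with one point from each region for simplicity; this is not the paper's approximate scheme):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two well-separated Gaussian blobs in the plane.
X = np.vstack([rng.normal(-3, 1, size=(100, 2)),
               rng.normal(3, 1, size=(100, 2))])

k = 2
centers = np.array([X[0], X[100]])       # one seed per blob
for _ in range(20):                      # Lloyd iterations
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    labels = d2.argmin(axis=1)           # assignment step
    centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
```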

no code implementations • 18 Jul 2019 • Cristian Rusu, Lorenzo Rosasco

We study the problem of approximating orthogonal matrices so that their application is numerically fast and yet accurate.

no code implementations • 11 Jul 2019 • Nicholas Sterge, Bharath Sriperumbudur, Lorenzo Rosasco, Alessandro Rudi

In this paper, we propose and study a Nyström-based approach to efficient large scale kernel principal component analysis (PCA).

no code implementations • 8 Jul 2019 • Enrico Cecini, Ernesto de Vito, Lorenzo Rosasco

Our main technical contribution is an analysis of the expected distortion achieved by the proposed algorithm, when the data are assumed to be sampled from a fixed unknown distribution.

no code implementations • NeurIPS 2019 • Nicolò Pagliana, Lorenzo Rosasco

We study learning properties of accelerated gradient descent methods for linear least-squares in Hilbert spaces.

no code implementations • 27 May 2019 • Ernesto De Vito, Nicole Mücke, Lorenzo Rosasco

We study reproducing kernel Hilbert spaces (RKHS) on a Riemannian manifold.

1 code implementation • 13 Mar 2019 • Daniele Calandriello, Luigi Carratino, Alessandro Lazaric, Michal Valko, Lorenzo Rosasco

Moreover, we show that our procedure selects at most $\tilde{O}(d_{eff})$ points, where $d_{eff}$ is the effective dimension of the explored space, which is typically much smaller than both $d$ and $t$.

no code implementations • 12 Mar 2019 • Andrzej Banburski, Qianli Liao, Brando Miranda, Lorenzo Rosasco, Fernanda De La Torre, Jack Hidary, Tomaso Poggio

In particular, gradient descent induces a dynamics of the normalized weights which converge for $t \to \infty$ to an equilibrium which corresponds to a minimum norm (or maximum margin) solution.

no code implementations • NeurIPS 2019 • Nicole Mücke, Gergely Neu, Lorenzo Rosasco

While stochastic gradient descent (SGD) is one of the major workhorses in machine learning, the learning properties of many practically used variants are poorly understood.

1 code implementation • NeurIPS 2018 • Alessandro Rudi, Daniele Calandriello, Luigi Carratino, Lorenzo Rosasco

Leverage score sampling provides an appealing way to perform approximate computations for large matrices.

no code implementations • NeurIPS 2018 • Luigi Carratino, Alessandro Rudi, Lorenzo Rosasco

Sketching and stochastic gradient methods are arguably the most common techniques to derive efficient large scale learning algorithms.

no code implementations • NeurIPS 2018 • Alessandro Rudi, Carlo Ciliberto, Gian Maria Marconi, Lorenzo Rosasco

Structured prediction provides a general framework to deal with supervised problems where the outputs have semantically rich structure.

1 code implementation • NeurIPS 2018 • Dimitrios Milios, Raffaello Camoriano, Pietro Michiardi, Lorenzo Rosasco, Maurizio Filippone

In this paper, we study the problem of deriving fast and accurate classification algorithms with uncertainty quantification.

no code implementations • 23 Mar 2018 • Elisa Maiettini, Giulia Pasquale, Lorenzo Rosasco, Lorenzo Natale

We address the size and imbalance of training data by exploiting the stochastic subsampling intrinsic to the method and a novel, fast bootstrapping approach.

no code implementations • 22 Feb 2018 • Gergely Neu, Lorenzo Rosasco

We propose and analyze a variant of the classic Polyak-Ruppert averaging scheme, broadly used in stochastic gradient methods.
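The classic Polyak-Ruppert scheme simply averages the SGD iterates, which can be sketched on a toy streaming least-squares problem (illustrative parameters and model, not the variant analyzed in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Streaming linear regression: minimize E[(w.x - y)^2].
d, T = 5, 5000
w_true = rng.normal(size=d)

w = np.zeros(d)
w_bar = np.zeros(d)                      # Polyak-Ruppert running average
for t in range(1, T + 1):
    x = rng.normal(size=d)
    y = x @ w_true + 0.1 * rng.normal()
    grad = 2 * (x @ w - y) * x           # stochastic gradient of the square loss
    w -= 0.01 * grad                     # plain SGD step
    w_bar += (w - w_bar) / t             # incremental average of the iterates
```

The averaged iterate `w_bar` is typically much less noisy than the last iterate `w`.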

no code implementations • 20 Jan 2018 • Junhong Lin, Alessandro Rudi, Lorenzo Rosasco, Volkan Cevher

In this paper, we study regression problems over a separable Hilbert space with the square loss, covering non-parametric regression over a reproducing kernel Hilbert space.

no code implementations • 30 Dec 2017 • Tomaso Poggio, Kenji Kawaguchi, Qianli Liao, Brando Miranda, Lorenzo Rosasco, Xavier Boix, Jack Hidary, Hrushikesh Mhaskar

In this note, we show that the dynamics associated to gradient descent minimization of nonlinear networks is topologically equivalent, near the asymptotically stable minima of the empirical error, to a linear gradient system in a quadratic potential with a degenerate (for square loss) or almost degenerate (for logistic or cross-entropy loss) Hessian.

no code implementations • 21 Oct 2017 • Junhong Lin, Lorenzo Rosasco

In the setting of nonparametric regression, we propose and study a combination of stochastic gradient methods with Nyström subsampling, allowing multiple passes over the data and mini-batches.

1 code implementation • 28 Sep 2017 • Giulia Pasquale, Carlo Ciliberto, Francesca Odone, Lorenzo Rosasco, Lorenzo Natale

We report on an extensive study of the benefits and limitations of current deep learning approaches to object recognition in robot vision scenarios, introducing a novel dataset used for our investigation.

no code implementations • 18 Jul 2017 • Simon Matet, Lorenzo Rosasco, Silvia Villa, Bang Long Vu

We consider the problem of designing efficient regularization algorithms when regularization is encoded by a (strongly) convex functional.

no code implementations • 18 Jul 2017 • Saverio Salzo, Johan A. K. Suykens, Lorenzo Rosasco

In this paper, we discuss how a suitable family of tensor kernels can be used to efficiently solve nonparametric extensions of $\ell^p$ regularized learning methods.

no code implementations • 3 Jul 2017 • Junhong Lin, Lorenzo Rosasco

In this paper, we provide an in-depth theoretical analysis for different variants of doubly stochastic learning algorithms within the setting of nonparametric regression in a reproducing kernel Hilbert space and considering the square loss.

4 code implementations • NeurIPS 2017 • Alessandro Rudi, Luigi Carratino, Lorenzo Rosasco

In this paper, we take a substantial step in scaling up kernel methods, proposing FALKON, a novel algorithm that can efficiently process millions of points.

no code implementations • NeurIPS 2017 • Carlo Ciliberto, Alessandro Rudi, Lorenzo Rosasco, Massimiliano Pontil

However, in practice assuming the tasks to be linearly related might be restrictive, and allowing for nonlinear structures is a challenge.

no code implementations • 28 Mar 2017 • Guillaume Garrigos, Lorenzo Rosasco, Silvia Villa

We provide a comprehensive study of the convergence of the forward-backward algorithm under suitable geometric conditions, such as conditioning or Łojasiewicz properties.

no code implementations • NeurIPS 2016 • Junhong Lin, Lorenzo Rosasco

We analyze the learning properties of the stochastic gradient method when multiple passes over the data and mini-batches are allowed.

no code implementations • 2 Nov 2016 • Tomaso Poggio, Hrushikesh Mhaskar, Lorenzo Rosasco, Brando Miranda, Qianli Liao

The paper characterizes classes of functions for which deep learning can be exponentially better than shallow learning.

no code implementations • 28 May 2016 • Junhong Lin, Lorenzo Rosasco

As a byproduct, we derive optimal convergence results for batch gradient methods (even in the non-attainable cases).

1 code implementation • 26 May 2016 • Junhong Lin, Raffaello Camoriano, Lorenzo Rosasco

We study the generalization properties of stochastic gradient methods for learning with convex loss functions and linearly parameterized functions.

no code implementations • NeurIPS 2016 • Carlo Ciliberto, Alessandro Rudi, Lorenzo Rosasco

We propose and analyze a regularization approach for structured prediction problems.

1 code implementation • 17 May 2016 • Raffaello Camoriano, Giulia Pasquale, Carlo Ciliberto, Lorenzo Natale, Lorenzo Rosasco, Giorgio Metta

We consider object recognition in the context of lifelong learning, where a robotic agent learns to discriminate between a growing number of object classes as it accumulates experience about the environment.

1 code implementation • NeurIPS 2017 • Alessandro Rudi, Lorenzo Rosasco

We study the generalization properties of ridge regression with random features in the statistical learning framework.
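The estimator under study can be sketched with random Fourier features followed by ridge regression in the feature space, reducing the cost from $O(n^3)$ to $O(nM^2)$ for $M$ features (a minimal illustration with assumed parameters, not the paper's exact setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# 1-D regression data.
n, M = 500, 100                       # n samples, M random Fourier features
X = rng.uniform(-3, 3, size=(n, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=n)

# Random Fourier features approximating a Gaussian kernel.
W = rng.normal(size=(1, M))
b = rng.uniform(0, 2 * np.pi, size=M)
Phi = np.sqrt(2.0 / M) * np.cos(X @ W + b)

# Ridge regression on the M-dimensional features: an M x M system
# instead of the n x n system of exact kernel ridge regression.
lam = 1e-4
w = np.linalg.solve(Phi.T @ Phi + lam * n * np.eye(M), Phi.T @ y)

y_fit = Phi @ w
```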

no code implementations • 18 Jan 2016 • Raffaello Camoriano, Silvio Traversaro, Lorenzo Rosasco, Giorgio Metta, Francesco Nori

This paper presents a novel approach for incremental semiparametric inverse dynamics learning.

1 code implementation • 19 Oct 2015 • Tomas Angles, Raffaello Camoriano, Alessandro Rudi, Lorenzo Rosasco

Early stopping is a well known approach to reduce the time complexity for performing training and model selection of large scale learning machines.

3 code implementations • 16 Oct 2015 • Maximilian Nickel, Lorenzo Rosasco, Tomaso Poggio

Learning embeddings of entities and relations is an efficient and versatile method to perform machine learning on relational data such as knowledge graphs.

Ranked #7 on Link Prediction on FB15k

no code implementations • 23 Sep 2015 • Giulia Pasquale, Tanis Mar, Carlo Ciliberto, Lorenzo Rosasco, Lorenzo Natale

The importance of depth perception in the interactions that humans have within their nearby space is a well established fact.

no code implementations • 5 Aug 2015 • Fabio Anselmi, Lorenzo Rosasco, Cheston Tan, Tomaso Poggio

In i-theory a typical layer of a hierarchical architecture consists of HW modules pooling the dot products of the inputs to the layer with the transformations of a few templates under a group.

1 code implementation • NeurIPS 2015 • Alessandro Rudi, Raffaello Camoriano, Lorenzo Rosasco

We study Nyström-type subsampling approaches to large scale kernel methods, and prove learning bounds in the statistical learning setting, where random sampling and high probability estimates are considered.
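The Nyström idea can be sketched as kernel ridge regression restricted to the span of $m$ uniformly subsampled centers, so an $m \times m$ system replaces the full $n \times n$ one (an illustrative estimator with assumed kernel and parameters, not the paper's exact formulation):

```python
import numpy as np

rng = np.random.default_rng(0)

def gauss(A, B):
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / 2)

n, m = 400, 40                        # n training points, m subsampled centers
X = rng.uniform(-2, 2, size=(n, 1))
y = np.cos(2 * X[:, 0]) + 0.1 * rng.normal(size=n)

centers = X[rng.choice(n, size=m, replace=False)]

# Nystrom KRR: solution constrained to the span of the m centers.
K_nm = gauss(X, centers)              # (n, m) cross-kernel
K_mm = gauss(centers, centers)        # (m, m) kernel among centers
lam = 1e-4
A = K_nm.T @ K_nm + lam * n * K_mm
alpha = np.linalg.solve(A + 1e-10 * np.eye(m), K_nm.T @ y)  # small jitter for stability

y_fit = K_nm @ alpha
```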

no code implementations • 13 Apr 2015 • Giulia Pasquale, Carlo Ciliberto, Francesca Odone, Lorenzo Rosasco, Lorenzo Natale

In this paper we investigate such possibility, while taking further steps in developing a computational vision system to be embedded on a robotic platform, the iCub humanoid robot.

no code implementations • CVPR 2015 • Carlo Ciliberto, Lorenzo Rosasco, Silvia Villa

Multi-task learning is a natural approach for computer vision applications that require the simultaneous solution of several distinct but related problems, e.g. object detection, classification, tracking of multiple agents, or denoising, to name a few.

1 code implementation • 13 Apr 2015 • Carlo Ciliberto, Youssef Mroueh, Tomaso Poggio, Lorenzo Rosasco

In this context a fundamental question is how to incorporate the tasks' structure in the learning problem. We tackle this question by studying a general computational framework that allows encoding a-priori knowledge of the tasks' structure in the form of a convex penalty; in this setting a variety of previously proposed methods can be recovered as special cases, including linear and non-linear approaches.

no code implementations • 31 Mar 2015 • Junhong Lin, Lorenzo Rosasco, Ding-Xuan Zhou

We consider the problem of supervised learning with convex loss functions and propose a new form of iterative regularization based on the subgradient method.

no code implementations • 19 Mar 2015 • Fabio Anselmi, Lorenzo Rosasco, Tomaso Poggio

We discuss data representations which can be learned automatically from data, are invariant to transformations, and are at the same time selective, in the sense that two points have the same representation only if one is a transformation of the other.

no code implementations • NeurIPS 2013 • Alessandro Rudi, Guille D. Canas, Lorenzo Rosasco

A large number of algorithms in machine learning, from principal component analysis (PCA), and its non-linear (kernel) extensions, to more recent spectral embedding and support estimation methods, rely on estimating a linear subspace from samples.
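The common subspace-estimation step these methods share can be sketched as PCA: take the top eigenvectors of the empirical covariance as the estimated subspace (a minimal illustration with assumed dimensions and noise level):

```python
import numpy as np

rng = np.random.default_rng(0)

# Samples lying near a 2-D subspace of R^5.
n, d, k = 500, 5, 2
basis = np.linalg.qr(rng.normal(size=(d, k)))[0]        # true orthonormal basis
X = rng.normal(size=(n, k)) @ basis.T + 0.01 * rng.normal(size=(n, d))

# PCA estimate: top-k eigenvectors of the empirical covariance.
cov = X.T @ X / n
eigvals, eigvecs = np.linalg.eigh(cov)                  # ascending eigenvalues
U = eigvecs[:, -k:]                                     # estimated basis

# Compare subspaces via their projection operators.
err = np.linalg.norm(U @ U.T - basis @ basis.T)
```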

no code implementations • 16 Jun 2014 • Georgios Evangelopoulos, Stephen Voinea, Chiyuan Zhang, Lorenzo Rosasco, Tomaso Poggio

Recognition of speech, and in particular the ability to generalize and learn from small sets of labelled examples like humans do, depends on an appropriate representation of the acoustic input.

no code implementations • NeurIPS 2015 • Lorenzo Rosasco, Silvia Villa

Within a statistical learning setting, we propose and study an iterative regularization algorithm for least squares defined by an incremental gradient method.

no code implementations • 1 Apr 2014 • Chiyuan Zhang, Georgios Evangelopoulos, Stephen Voinea, Lorenzo Rosasco, Tomaso Poggio

We present the main theoretical and computational aspects of a framework for unsupervised learning of invariant audio representations, empirically evaluated on music genre classification.

no code implementations • 17 Nov 2013 • Fabio Anselmi, Joel Z. Leibo, Lorenzo Rosasco, Jim Mutch, Andrea Tacchetti, Tomaso Poggio

It also suggests that the main computational goal of the ventral stream of visual cortex is to provide a hierarchical representation of new objects/images which is invariant to transformations, stable, and discriminative for recognition---and that this representation may be continuously learned in an unsupervised way during development and visual experience.

no code implementations • 15 Jun 2013 • Sean Ryan Fanello, Carlo Ciliberto, Matteo Santoro, Lorenzo Natale, Giorgio Metta, Lorenzo Rosasco, Francesca Odone

In this paper we present and start analyzing the iCub World data-set, an object recognition data-set which we acquired using a Human-Robot Interaction (HRI) scheme and the iCub humanoid robot platform.

no code implementations • 24 Mar 2013 • Silvia Villa, Lorenzo Rosasco, Tomaso Poggio

We consider the fundamental question of learnability of a hypotheses class in the supervised learning setting and in the general learning setting introduced by Vladimir Vapnik.

no code implementations • NeurIPS 2012 • Youssef Mroueh, Tomaso Poggio, Lorenzo Rosasco, Jean-Jacques Slotine

In this paper we discuss a novel framework for multiclass learning, defined by a suitable coding/decoding strategy, namely the simplex coding, that allows generalizing to multiple classes a relaxation approach commonly used in binary classification.

no code implementations • NeurIPS 2012 • Guillermo Canas, Tomaso Poggio, Lorenzo Rosasco

We study the problem of estimating a manifold from random samples.

no code implementations • NeurIPS 2012 • Guillermo Canas, Lorenzo Rosasco

We study the problem of estimating, in the sense of optimal transport metrics, a measure which is assumed supported on a manifold embedded in a Hilbert space.

no code implementations • 16 Apr 2012 • Ernesto De Vito, Lorenzo Rosasco, Alessandro Toigo

We consider the problem of learning a set from random samples.

3 code implementations • 30 Jun 2011 • Mauricio A. Alvarez, Lorenzo Rosasco, Neil D. Lawrence

Kernel methods are among the most popular techniques in machine learning.

no code implementations • NeurIPS 2010 • Sofia Mosci, Silvia Villa, Alessandro Verri, Lorenzo Rosasco

We deal with the problem of variable selection when variables must be selected group-wise, with possibly overlapping groups defined a priori.

no code implementations • NeurIPS 2010 • Ernesto D. Vito, Lorenzo Rosasco, Alessandro Toigo

In this paper we consider the problem of learning from data the support of a probability distribution when the distribution does not have a density (with respect to some reference measure).

no code implementations • NeurIPS 2009 • Jake Bouvrie, Lorenzo Rosasco, Tomaso Poggio

A goal of central importance in the study of hierarchical models for object recognition -- and indeed the visual cortex -- is that of understanding quantitatively the trade-off between invariance and selectivity, and how invariance and discrimination properties contribute towards providing an improved representation useful for learning from data.
