

Journée Mathematical Foundations of Learning Theory

Organized by: Pascal Massart (univ. Paris XI), Shahar Mendelson (The Australian National University & Technion, I.I.T.), Alain Pajor (univ. Marne-la-Vallée), Patricia Reynaud-Bouret (CNRS), Gilles Stoltz (ENS), and Alexandre Tsybakov (univ. Paris VI)

The goal of the meeting is to bring together a diverse group of mathematicians and theoretical computer scientists working on all aspects of the theoretical analysis of machine learning techniques for prediction and other data analysis problems.
The talks will not necessarily be directly related to learning theory, but rather they will be devoted to interesting problems in mathematics that have had or are likely to have an impact on the development of the mathematical theory of learning. They will cover different areas of Analysis, Statistics, Probability, Game Theory, Information Theory, Optimization, and Computer Science.

Online resources

  • Introduction - Mathematical Foundations of Learning Theory (May 31, 2006) — Gilles Stoltz

  • The Dantzig Selector: Statistical Estimation when p is Larger than n (May 31, 2006) — Emmanuel Candes
    In many important statistical applications, the number of variables or parameters is much larger than the number of observations. In radiology and biomedical imaging for instance, one is typically able to collect far fewer measurements about an image of interest than the unknown number of pixels. Examples in functional MRI and tomography immediately come to mind. Other examples of high-dimensional data in genomics, signal processing and many other fields abound. In the context of multiple linear regression for instance, a fundamental question is whether it is possible to estimate a vector of parameters of size p from a vector of observations of size n when n ≪ p. This seems a priori hopeless.
    This talk introduces a new estimator, dubbed the “Dantzig selector” in honor of the late George Dantzig, as it invokes linear programming, and which enjoys remarkable statistical properties. Suppose that the data or design matrix obeys a uniform uncertainty principle and that the true parameter vector is sufficiently sparse or compressible, which roughly guarantees that the model is identifiable. Then the estimator achieves an accuracy that nearly equals the one attainable with an oracle supplying perfect information about which coordinates of the unknown parameter vector are nonzero and which are above the noise level. Our results connect with the important model selection problem. In effect, the Dantzig selector automatically selects the subset of covariates with nearly the best predictive power, by solving a convenient linear program.
    Our results are also inspired by a recent body of work perhaps now best known under the name of “Compressive Sampling,” a new sampling theory we introduced very recently. If time allows, I will discuss applications of Compressive Sampling in other fields such as coding theory.
    Further references: the main paper, as well as many other papers on related subjects, for instance “Decoding by linear programming” and “Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information”.
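The Dantzig selector described in this abstract is the solution of the linear program min ‖β‖₁ subject to ‖Xᵀ(y − Xβ)‖∞ ≤ λ. A minimal sketch of that formulation, using `scipy`'s generic LP solver (the problem sizes, seed, and λ below are illustrative assumptions, not values from the talk):

```python
import numpy as np
from scipy.optimize import linprog

def dantzig_selector(X, y, lam):
    """Solve min ||b||_1  s.t.  ||X^T (y - X b)||_inf <= lam  as an LP.

    Variables are z = [b, u] with |b_i| <= u_i, so the objective is sum(u).
    """
    n, p = X.shape
    G = X.T @ X
    c = np.concatenate([np.zeros(p), np.ones(p)])  # minimize sum of u
    I, Z = np.eye(p), np.zeros((p, p))
    A_ub = np.vstack([
        np.hstack([I, -I]),    #  b - u <= 0
        np.hstack([-I, -I]),   # -b - u <= 0
        np.hstack([G, Z]),     #  X^T X b <= X^T y + lam
        np.hstack([-G, Z]),    # -X^T X b <= lam - X^T y
    ])
    Xty = X.T @ y
    b_ub = np.concatenate([np.zeros(2 * p), Xty + lam, lam - Xty])
    bounds = [(None, None)] * p + [(0, None)] * p  # b free, u >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:p]

# Illustrative n << p instance: sparse truth, noiseless observations.
rng = np.random.default_rng(0)
n, p = 30, 60
X = rng.standard_normal((n, p)) / np.sqrt(n)
beta_true = np.zeros(p)
beta_true[[3, 17, 42]] = [2.0, -1.5, 1.0]
y = X @ beta_true
beta_hat = dantzig_selector(X, y, lam=1e-4)
```

Since the data here are noiseless, the true vector is feasible, so the LP optimum has ℓ1-norm no larger than ‖β_true‖₁; near-exact recovery under a uniform uncertainty principle is what the talk's results guarantee, and this toy instance only illustrates the formulation.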
  • About Talagrand’s Concentration Inequality for Exponential Measures (May 31, 2006) — Franck Barthe
    This lecture surveys recent works extending Talagrand’s concentration inequality for exponential measures, or more generally for product log-concave measures. Among others, transportation cost inequalities and modified logarithmic Sobolev inequalities will be presented and studied.
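For orientation, one standard form of the inequality referred to here (the constant K below is indicative; precise statements vary across references): for the n-fold product ν^n of the symmetric exponential measure on R and any Borel set A with ν^n(A) ≥ 1/2,

```latex
\nu^n\!\left(A + \sqrt{t}\, B_2^n + t\, B_1^n\right) \;\ge\; 1 - e^{-t/K},
\qquad t > 0,
```

where B_1^n and B_2^n are the unit balls of ℓ1^n and ℓ2^n and K is a universal constant; the two-scale enlargement (Euclidean at scale √t, ℓ1 at scale t) is the hallmark of the exponential case.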
  • Deterministic Calibration and Nash Equilibrium (May 31, 2006) — Dean Foster
    We provide a natural learning process in which the joint frequency of empirical play converges into the set of convex combinations of Nash equilibria. In this process, all players rationally choose their actions using a public prediction made by a deterministic, weakly calibrated algorithm. Furthermore, the public predictions used in any given round of play are frequently close to some Nash equilibrium of the game.
    (Joint work with Sham M. Kakade.)
  • On Context-tree Prediction of Individual Sequences (May 31, 2006) — Neri Merhav
    Motivated by the evident success of context-tree based methods in lossless data compression, we explore, in this talk, methods of the same spirit in universal prediction of individual sequences. By context-tree prediction, we refer to a family of prediction schemes where, at each time instant t, after having observed all outcomes of the data sequence x1, …, xt−1, but not yet xt, the prediction is based on a “context” (or a state) that consists of the k most recent past outcomes xt−k, …, xt−1, where the choice of k may depend on the contents of a possibly longer, though limited, portion of the observed past, xt−kmax, …, xt−1. This is different from the study reported in Feder, Merhav, and Gutman (1992), where general finite-state predictors, as well as “Markov” (finite-memory) predictors of fixed order, were studied in the regime of individual sequences.
    Another important difference between this study and Feder, Merhav, and Gutman (1992) is the asymptotic regime. While in Feder, Merhav, and Gutman (1992), the resources of the predictor (i.e., the number of states or the memory size) were kept fixed regardless of the length N of the data sequence, here we investigate situations where the number of contexts, or states, is allowed to grow concurrently with N. We are primarily interested in the following fundamental question: What is the critical growth rate of the number of contexts, below which the performance of the best context-tree predictor is still universally achievable, but above which it is not? We show that this critical growth rate is linear in N. In particular, we propose a universal context-tree algorithm that essentially achieves optimum performance as long as the growth rate is sublinear, and show that, on the other hand, this is impossible in the linear case.
    Further references
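A fixed-order special case of the context-based predictors discussed above can be sketched in a few lines. The choice k = 2 and the majority-vote prediction rule are illustrative assumptions, not the talk's algorithm (which adapts the context length):

```python
from collections import defaultdict

def context_predictor(sequence, k=2):
    """Predict each binary outcome from the k most recent outcomes.

    Keeps per-context counts and predicts the majority symbol observed
    so far in the current context; returns the fraction of correct
    predictions over the sequence.
    """
    counts = defaultdict(lambda: [0, 0])  # context -> [count of 0s, count of 1s]
    correct = 0
    for t in range(k, len(sequence)):
        ctx = tuple(sequence[t - k:t])
        c0, c1 = counts[ctx]
        prediction = 1 if c1 > c0 else 0
        if prediction == sequence[t]:
            correct += 1
        counts[ctx][sequence[t]] += 1   # update counts after predicting
    return correct / (len(sequence) - k)

# On a period-3 sequence, order-2 contexts determine the next symbol,
# so after one pass through each context the predictor is exact.
seq = [0, 0, 1] * 200
acc = context_predictor(seq, k=2)
```

The question studied in the talk is what happens when the number of such contexts is allowed to grow with the sequence length, rather than being fixed as here.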
  • Suboptimality of MDL and Bayes in Classification under Misspecification (May 31, 2006) — Peter Grünwald
    We show that forms of Bayesian and MDL learning that are often applied to classification problems can be “statistically inconsistent”. We present a classification model (a large family of classifiers) and a distribution such that the best classifier within the model has classification risk r, where r can be taken arbitrarily close to 0. Nevertheless, no matter how many data are observed, both the classifier inferred by MDL and the classifier based on the Bayesian posterior will make predictions with error much larger than r. If r is chosen not too small, predictions based on the Bayesian posterior can even perform substantially worse than random guessing, no matter how many data are observed. Our result can be re-interpreted as showing that, if a probabilistic model does not contain the data generating distribution, then Bayes and MDL do not always converge to the distribution in the model that is closest in KL divergence to the data generating distribution. We compare this result with earlier results on Bayesian inconsistency by Diaconis, Freedman and Barron.
    This work is a follow-up on joint work with John Langford of the Toyota Technological Institute, Chicago, published at COLT 2004.
  • A Correlation Inequality for Nonlinear Reconstruction (June 1, 2006) — Ofer Zeitouni
    Consider the problem of reconstructing a Gaussian vector based on the maximum of its projections on the elements of an orthogonal basis. S. Mallat and I showed that the optimal basis for this problem is the Karhunen-Loève one. I will discuss the proof and conjectured generalizations.
  • Bayesian and Non-Bayesian Learning in Games (June 1, 2006) — Ehud Lehrer
    I will contrast Bayesian with non-Bayesian learning to play an equilibrium. I will primarily refer to the mathematics involved in the two corresponding research directions.
  • Learning from Dependent Observations (June 1, 2006) — Ingo Steinwart
    The standard assumption in statistical learning theory is that the available samples are realizations of i.i.d. random variables. However, in many applications this assumption cannot be rigorously justified, in particular if the observations are intrinsically temporal. In this talk I will present some recent results on the learnability of rather general observation-generating random processes. In particular, I will establish a weak consistency result for support vector machine classification and regression. In addition, refined results for e.g. α-mixing processes will be presented. If time permits I will finally discuss whether the behaviour of certain dynamical systems can be learned.
  • Statistical Analysis for Rounding Data (June 1, 2006) — Zhidong Bai
    Unless the model is discrete, rounding of data is unavoidable in practical measurement. However, the errors caused by rounding are ignored by almost all classical statistical theories. Although some pioneers have noticed this problem, few suitable approaches have been proposed to deal with this error. In this work, both by simulations and by theoretical analysis, we demonstrate that the traditionally used sample mean, sample variance, and sample covariance are no longer consistent, nor asymptotically normal, when rounding errors are present. Also, through concrete examples in which measurements are rounded to some extent, we propose to use the MLE or an approximate MLE (AMLE) to estimate the parameters, and we discuss the properties of these estimators and of tests based on them. In particular, as an example, we shall discuss the limiting properties of the new estimators of the parameters in AR(p) and MA(q) models when the observations are rounded.
    (Joint work with Shurong Zheng and Baoxue Zhang.)
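The inconsistency of the naive sample variance under rounding is easy to see numerically. A quick simulation (the grid width h = 1 and the classical Sheppard-type correction h²/12 are illustrative bookkeeping, not the authors' estimator):

```python
import numpy as np

rng = np.random.default_rng(42)
n, h = 200_000, 1.0                 # sample size and rounding grid width
x = rng.standard_normal(n)          # true variance is 1
x_rounded = h * np.round(x / h)     # observations rounded to the grid

naive_var = x_rounded.var()
# Classical grouped-data heuristic: Var(rounded) is close to sigma^2 + h^2/12,
# so the naive estimator overshoots by roughly h^2/12 = 0.083 and does not
# converge to the true variance no matter how large n is.
```

This bias does not shrink with the sample size, which is why the abstract advocates likelihood-based estimators that model the rounding explicitly.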
  • An Isoperimetric Inequality with Applications to Learning (June 1, 2006) — Shie Mannor
    An issue of central importance is learning in the presence of data corruption, or noise. In this talk, we consider the case where data corruption has produced a data sample with a large margin. The essential question is “what is the cost of this margin?” in terms of generalization error. We provide an answer for the case where the underlying distribution has a nearly log-concave density.
    First, we prove that given such a nearly log-concave density, in any partition of the space into two well separated sets, the measure of the points that do not belong to these sets is large. Next, we apply this isoperimetric inequality to derive lower bounds on the generalization error in classification. We further consider regression problems and show that if the inputs and outputs are sampled from a nearly log-concave distribution, the measure of points for which the prediction is wrong by more than ε0 and less than ε1 is (roughly) linear in ε1 − ε0, as long as ε0 is not too small and ε1 not too large. We also show that when the data are sampled from a nearly log-concave distribution, the margin cannot be large in a strong probabilistic sense.
  • Statistical Analysis of Subspace Methods and Associated Learning Algorithms (June 2, 2006) — John Shawe-Taylor
    Subspace inference is a critical component in many practical applications of learning from data, yet very little analysis has been made of the performance of these algorithms. The talk considers the question of providing a statistical analysis of subspace methods and of learning using the associated representations. We begin with considering principal components analysis and the relation between process and empirical eigenvalues. We go on to consider more advanced techniques such as canonical correlation analysis and linear functions learned in the inferred representation. Sparse analogies of these techniques will be discussed with associated bounds.
  • Theory and Algorithms for Large Scale Ranking Problems (June 2, 2006) — Tong Zhang
    I will discuss machine learning problems encountered in web search and advertising, and then focus on ranking. In the web search setting, I will talk about training relevance models based on DCG (discounted cumulated gain) optimization. Under this metric, the system output quality is naturally determined by the performance near the top of its rank-list. I will mainly focus on various theoretical issues in this learning problem.
    As a related practical illustration, I will talk about optimizing the ranking function of a statistical machine translation system according to the BLEU metric (standard measure of translation quality). Our approach treats machine translation as a black-box, and can optimize millions of system parameters automatically. This has never been attempted before. I will present our method and some results.
    (Joint work with David Cossock, Yahoo, and Christoph Tillmann, IBM.)
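The DCG metric discussed above rewards relevance near the top of the ranked list through a position-dependent discount. One standard convention, with gain 2^rel − 1 and a log2 discount (assumed here for illustration; the talk does not fix a particular variant):

```python
import math

def dcg(relevances):
    """Discounted cumulative gain of a ranked list of relevance grades.

    Position i (0-based) contributes (2^rel - 1) / log2(i + 2), so gains
    near the top of the list dominate the score.
    """
    return sum((2 ** rel - 1) / math.log2(i + 2)
               for i, rel in enumerate(relevances))

# Ranking the most relevant document first yields a strictly higher score
# than burying it at the bottom, even though the grades are the same set.
good = dcg([3, 2, 0])   # best document ranked first
bad = dcg([0, 2, 3])    # best document ranked last
```

This top-heaviness is exactly why, as the abstract notes, system quality under this metric is determined by behavior near the top of the rank-list.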
  • Sparsity in High-Dimensional Learning Problems (June 2, 2006) — Vladimir Koltchinskii
    We study penalized empirical risk minimization with convex loss over the linear span of a large finite set H of base functions. The penalty is based on the ℓp-norm of the vector of coefficients with p = 1 + c/log N, where N is the cardinality of H. We prove several inequalities that directly relate “the degrees of sparsity” of empirical and true solutions of such problems, and show what impact the sparsity has on the excess risk bounds and on the accuracy of estimation of the vector of coefficients. We discuss several other problems, such as the data-driven choice of the regularization parameter that provides adaptation to the unknown sparsity of the true solution, as well as the problem of adaptation to linear dependencies in the set H. We also discuss the connections of these results to recent work on aggregation of statistical estimators (Tsybakov and coauthors) and to sparse recovery problems in computational harmonic analysis (Donoho, Candes, Tao, among others).
  • Regularization of Kernel Methods by Decreasing the Bandwidth of the Gaussian Kernel (June 2, 2006) — Jean-Philippe Vert
    We consider learning algorithms that minimize an empirical risk regularized by the norm in the reproducing kernel Hilbert space of the Gaussian kernel. The conditions on the loss function for Bayes consistency of such methods have been studied recently in the case where the regularization term asymptotically vanishes as the sample size increases. Here we study the different situation where the regularization term does not vanish, but the bandwidth of the Gaussian kernel instead decreases with the sample size. We will make explicit the asymptotic limit of the function selected by the algorithm, give conditions on the loss function that ensure Bayes consistency, and provide non-asymptotic learning bounds in this case. In particular, we will deduce the consistency of the one-class support vector machine algorithm as a density level set estimator.
    (Joint work with Régis Vert.)
  • Learning and Randomization (June 2, 2006) — Nicolò Cesa-Bianchi
    Randomization is a fundamental tool in learning. In this talk we illustrate some interesting applications of randomized algorithms to the solution of various problems in the areas of individual sequence prediction and pattern classification.
  • On Optimal and Universal Estimators in Learning Theory (June 2, 2006) — Vladimir Temlyakov
    This talk addresses some problems of supervised learning. Supervised learning, or learning-from-examples, refers to a process that builds, on the basis of available data of inputs xi and outputs yi, i = 1, …, m, a function that best represents the relation between the inputs x ∈ X and the corresponding outputs y ∈ Y. The goal is to find an estimator fz, on the basis of the given data z := ((x1, y1), …, (xm, ym)), that approximates well the regression function fρ (or its projection) of an unknown Borel probability measure ρ defined on Z = X × Y. We assume that the (xi, yi), i = 1, …, m, are independent and distributed according to ρ.
    There are several important ingredients in the mathematical formulation of this problem. We follow the way that has become standard in approximation theory and has been used in recent papers. In this approach we first choose a function class W (a hypothesis space H) to work with. After selecting a class W we have the following two ways to go. The first one is based on the idea of studying the approximation of the L2(ρX) projection fW := (fρ)W of fρ onto W. Here, ρX is the marginal probability measure. This setting is known as the improper function learning problem, or the projection learning problem. In this case we do not assume that the regression function fρ comes from a specific (say, smoothness) class of functions. The second way is based on the assumption fρ ∈ W. This setting is known as the proper function learning problem. For instance, we may assume that fρ has some smoothness. We will give some upper and lower estimates in both settings.
    In the problem of universal estimators we assume that an unknown measure ρ satisfies some conditions. Following the standard way from nonparametric statistics we formulate these conditions in the form fρ ∈ Θ. Next, we assume that the only a priori information available is that fρ belongs to a class Θ (unknown) from a known collection {Θ} of classes. We want to build an estimator that provides approximation of fρ close to the optimal for the class Θ. We use a standard method of penalized least squares estimators for construction of universal estimators.
  • Complexity of Sign Matrices and its Many Aspects (June 3, 2006) — Nathan Linial
    Consider a matrix with ±1 entries as a family of concepts to be learned. Various measures can be associated with this matrix in an attempt to quantify how hard it is to learn this concept class. Among the better known measures are the VC dimension and the margin. In joint work with Adi Shraibman, we put these notions in a broader framework of complexity measures of sign matrices. The simplest complexity measure is the rank, and many other natural concepts arise that are related to various other fields such as Banach space theory, communication complexity, and discrepancy theory. We investigate these different concepts and their mutual relationships.
  • Is There Life beyond the Classification Problem? (June 3, 2006) — Nicolas Vayatis
    In recent years, significant progress has been achieved on the statistical understanding of celebrated classification algorithms such as boosting and SVM. The key to a statistical analysis was to interpret these algorithms as optimization procedures minimizing a penalized convex risk functional. From there it was possible, first, to relate the convex criterion to the standard performance measure (the classification error) and then to adapt the flourishing theory of empirical risk minimization in order to provide generalization error bounds and oracle inequalities for convex risk minimization procedures. In this talk, I will discuss whether this programme can be applied to another problem: the ranking/scoring problem. Indeed, in applications such as information retrieval or credit risk screening, the goal is to rank/score webpages or individuals, rather than simply assign them to a specified category. In this perspective, standard performance measures lead to statistical functionals of order two, for which classification theory does not apply straightforwardly. I will give some insights and results on these new challenging issues.
    (Joint work with Stephan Clémençon and Gabor Lugosi.)
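The "statistical functionals of order two" mentioned above can be made concrete with the AUC, which averages over pairs of a positive and a negative example rather than over single examples (a standard illustration of the phenomenon, not the talk's own criterion):

```python
from itertools import product

def auc(scores_pos, scores_neg):
    """Empirical AUC: fraction of (positive, negative) pairs ranked correctly.

    Unlike the classification error, which is an average over single
    examples, this is a U-statistic of order two: it averages over pairs
    of examples, which is why classical empirical risk minimization
    theory does not apply to it directly.
    """
    pairs = list(product(scores_pos, scores_neg))
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p, n in pairs)
    return wins / len(pairs)

# 3 positives, 2 negatives: 6 pairs, of which 5 are ranked correctly.
a = auc([0.9, 0.8, 0.4], [0.7, 0.3])
```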
  • Asymptotic Properties of Convex Optimization Methods for Multiclass Classification (June 3, 2006) — Peter Bartlett
    We consider the following pattern classification problem: given a sample of i.i.d. pairs (Xi , Yi ) ∈ X × Y , where Y is finite, find a function f : X → Y that has small misclassification probability. Many successful algorithms for binary classification (with |Y | = 2) involve optimization of a convex criterion. These methods can be generalized in many ways to handle the multiclass case. It turns out that the study of multiclass methods is not a simple extension of results for the binary case. For instance, many apparently natural generalizations of binary methods do not preserve the attractive property of universal consistency (that is, for any probability distribution, the risk of the classifier approaches the best possible). We consider methods that choose a vector-valued function f to optimize a convex criterion of the form
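One common family of convex multiclass criteria sums a convex penalty over the non-label classes of a vector-valued score function; this is an illustrative assumption for concreteness, not necessarily the exact form studied in the talk:

```python
import numpy as np

def multiclass_hinge_risk(F, y, margin=1.0):
    """Average of a convex multiclass surrogate over a sample.

    F[i, k] is the score of class k on example i. The per-example loss
    sums hinge penalties max(0, margin - (f_y(x) - f_k(x))) over the
    wrong classes k, one common family of convex criteria.
    """
    n, K = F.shape
    true_scores = F[np.arange(n), y][:, None]   # score of the correct label
    margins = margin - (true_scores - F)        # margin violation per class
    margins[np.arange(n), y] = 0.0              # the true class contributes 0
    return np.maximum(margins, 0.0).sum(axis=1).mean()

# Two 3-class examples whose labels win by at least the margin: zero risk.
F = np.array([[2.0, 0.0, -1.0],
              [0.0, 1.5, 0.5]])
y = np.array([0, 1])
risk = multiclass_hinge_risk(F, y)
```

Whether minimizing such a criterion yields a universally consistent classifier is exactly the kind of question the abstract raises: consistency depends delicately on the chosen convex form and does not follow from the binary case.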



Pascal Massart (univ. Paris XI)

Laboratoire de Mathématique, Université Paris-Sud


Shahar Mendelson (The Australian National University & Technion, I.I.T)

Senior Fellow, Centre for Mathematics and Its Applications, The Australian National University, Canberra, Australia
Associate Professor, Department of Mathematics, Technion, I.I.T, Haifa, Israel.


Alain Pajor (univ. Marne-la-Vallée)

Laboratoire d’Analyse et de Mathématiques Appliquées UMR 8050 (CNRS-univ. Marne-la-Vallée)


Patricia Reynaud-Bouret (CNRS)

Département de mathématiques et applications de l’Ecole normale supérieure


Gilles Stoltz (ENS)

Département de mathématiques et applications de l’Ecole normale supérieure


Alexandre Tsybakov (univ. Paris VI)

Laboratoire de Probabilités et Modèles Aléatoires (CNRS-univ. Paris VI) UMR 7599
