About Me:
I am a postdoctoral researcher working with Scott Niekum in the Personal Autonomous Robotics Lab (PeARL). I received my PhD in 2021 from the College of Information and Computer Sciences at the University of Massachusetts, where I worked with Philip S. Thomas and Yuriy Brun.
My research focuses on ensuring the safe and fair use of machine learning by designing algorithms that provide high-confidence guarantees of safe outcomes across a variety of problem settings, including supervised learning and reinforcement learning.
CV available upon request.
Selected Papers:

Fairness Guarantees under Demographic Shift
Giguere, S., Metevier, B., Brun, Y., Thomas, P.S., da Silva, B.C., and Niekum, S. International Conference on Learning Representations, 2022.

Distributional Depth-Based Estimation of Object Articulation Models
Jain, A., Giguere, S., Lioutikov, R., and Niekum, S. Conference on Robot Learning, 2022.

SOPE: Spectrum of Off-Policy Estimators
Yuan, C., Chandak, Y., Giguere, S., Thomas, P.S., and Niekum, S. Advances in Neural Information Processing Systems, 2021.

Safe and Practical Machine Learning
Giguere, S. Ph.D. Thesis, 2021.

Offline Contextual Bandits with High Probability Fairness Guarantees
Metevier, B., Giguere, S., Brockman, S., Kobren, Y., Brun, Y., Brunskill, E., and Thomas, P.S. Advances in Neural Information Processing Systems, 2019.

Preventing undesirable behavior of intelligent machines
Thomas, P.S., da Silva, B.C., Barto, A., Giguere, S., Brun, Y., and Brunskill, E. Science, 2019.

A Manifold Approach to Learning Mutually Orthogonal Subspaces
Giguere, S., Garcia, F. and Mahadevan, S. arXiv:1703.02992, 2017.

A Fully Customized Baseline Removal Framework for Spectroscopic Applications
Giguere, S., Boucher, T., Carey, C., Mahadevan, S. and Dyar, M.D. Applied Spectroscopy, 2017.

Projected Natural Actor-Critic
Thomas, P. S., Dabney, W., Mahadevan, S., and Giguere, S. Advances in Neural Information Processing Systems, 2013.

Basis Adaptation for Sparse Nonlinear Reinforcement Learning
Mahadevan, S., Giguere, S., and Jacek, N. Proceedings of the AAAI Conference on Artificial Intelligence, 2013.

Attribit: Content Creation with Semantic Attributes
Chaudhuri, S., Kalogerakis, E., Giguere, S., and Funkhouser, T. Proceedings of the 26th Annual ACM Symposium on User Interface Software and Technology, 2013.
Workshop Papers:

Aux-AIRL: End-to-End Self-Supervised Reward Learning for Extrapolating beyond Suboptimal Demonstrations
Cui, Y., Saran, A., Giguere, S., Stone, P., and Niekum, S. Proceedings of the ICML Workshop on Self-Supervised Learning for Reasoning and Perception, 2021.

An Optimization Perspective on Baseline Removal for Spectroscopy
Giguere, S., Carey, C., Boucher, T., Mahadevan, S. and Dyar, M.D. Proceedings of the 5th IJCAI Workshop on Artificial Intelligence in Space, 2015.

Manifold Learning for Regression of Mars Spectra
Boucher, T., Carey, C., Giguere, S., Mahadevan, S., Dyar, M.D., Clegg, S. and Wiens, R. Proceedings of the 5th IJCAI Workshop on Artificial Intelligence in Space, 2015.

Automatic Whole-Spectrum Matching
Carey, C., Boucher, T., Giguere, S., Mahadevan, S. and Dyar, M.D. Proceedings of the 5th IJCAI Workshop on Artificial Intelligence in Space, 2015.
Safe, Fair, and Reliable Machine Learning
As increasingly sensitive decision-making problems become automated using models trained by machine learning algorithms, it is important for machine learning researchers to design training algorithms that provide assurance that the models they produce will be well-behaved. While the meaning of "well-behaved" may vary between applications, the requirement is the same: practitioners require assurances that the models they train will behave in predictable ways once they are deployed.
In our 2019 Science paper, Preventing undesirable behavior of intelligent machines, we introduced a class of algorithms that provide high-confidence safety guarantees. While most existing safe machine learning algorithms make strong assumptions about how unsafe behavior is defined, the algorithms we propose use a flexible interface that allows users to specify their own definitions in a straightforward way at training time, and that is general enough to enforce a wide range of commonly used definitions.
While these algorithms provide practitioners with significant advantages when deploying their models, several challenges can arise in practice. Notably, users often require guarantees to hold even when a trained model is deployed into an environment that differs from the training environment. In these settings, the safety guarantees provided by existing methods no longer hold once the environment changes, presenting significant risk. To overcome these challenges, we have proposed algorithms that provide safety guarantees under distribution shift. For more details, read our 2022 ICLR paper, Fairness Guarantees under Demographic Shift, or my 2021 Ph.D. dissertation, Safe and Practical Machine Learning.
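To give a rough feel for the safety-test idea, here is a minimal sketch (not the exact procedure from any of the papers above): a candidate model is returned only if a user-specified constraint function g, whose expectation must be at most zero for the model to count as safe, passes a Hoeffding confidence test on held-out safety data. The function names and the assumption that g takes values in an interval of width 2 are mine.

```python
import numpy as np

def hoeffding_upper_bound(g_samples, delta, g_range=2.0):
    """(1 - delta)-confidence upper bound on E[g] via Hoeffding's
    inequality, assuming each sample lies in an interval of width g_range."""
    n = len(g_samples)
    return np.mean(g_samples) + g_range * np.sqrt(np.log(1.0 / delta) / (2.0 * n))

def safety_test(model, g, safety_data, delta=0.05):
    """Return the model only if, with confidence 1 - delta, the
    behavioral constraint E[g(model, x)] <= 0 holds; otherwise
    return None ("No Solution Found")."""
    g_samples = np.array([g(model, x) for x in safety_data])
    if hoeffding_upper_bound(g_samples, delta) <= 0.0:
        return model
    return None
```

The key design point is that g is supplied by the user at training time, so the same test can enforce many different definitions of unsafe behavior.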
Optimization on Subspace Manifolds
In many learning algorithms, data is projected onto a low-dimensional subspace. If the subspace is chosen well, this process can remove redundant features and simplify the overall learning process. Often, it is sufficient to generate the subspace using principal component analysis (PCA), but this is certainly not the only choice. Depending on the application, a better approach may be to optimize the subspace using a loss function that captures the real goal of the problem.
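As a concrete (and generic) example of the projection step, the top-k principal subspace can be obtained from the SVD of the centered data and used to project each sample; this sketch is illustrative and not taken from any paper above.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))   # 200 samples, 10 features
X = X - X.mean(axis=0)           # center the data before PCA

# Top-k right singular vectors span the principal subspace.
k = 3
_, _, Vt = np.linalg.svd(X, full_matrices=False)
U = Vt[:k].T                     # (10, k) orthonormal basis

X_proj = X @ U                   # coordinates of each sample in the subspace
```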
Optimizing a variable that represents a subspace is nontrivial because the set of all subspaces is not a Euclidean space. Instead, it forms a manifold called the Grassmannian. As a result, a variable that initially represents a subspace may no longer be valid if it is updated using standard gradient-based approaches. Fortunately, there are specialized Riemannian optimization methods that solve this problem by incrementally moving along the surface of the manifold. This allows the loss to be minimized without having to constrain the variable explicitly.
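A minimal sketch of one such Riemannian step on the Grassmannian: project the Euclidean gradient onto the tangent space at the current point, take a step, then retract back onto the manifold with a QR decomposition so the basis stays orthonormal. The function name and step size are illustrative, not from a specific paper.

```python
import numpy as np

def grassmann_step(U, euclid_grad, lr=0.1):
    """One Riemannian gradient step on the Grassmannian.

    U: (p, k) orthonormal basis for the current subspace.
    euclid_grad: (p, k) Euclidean gradient of the loss at U.
    """
    # Project the Euclidean gradient onto the tangent space at U.
    tangent = euclid_grad - U @ (U.T @ euclid_grad)
    # Step along the tangent direction, then retract onto the manifold
    # via QR so the returned basis is again orthonormal.
    Q, _ = np.linalg.qr(U - lr * tangent)
    return Q

# Example: one step from a random starting subspace.
rng = np.random.default_rng(0)
U0, _ = np.linalg.qr(rng.normal(size=(10, 3)))
U1 = grassmann_step(U0, rng.normal(size=(10, 3)))
```

Because the retraction restores orthonormality after every update, the variable always remains a valid subspace representation, with no explicit constraint handling.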
While it's useful to have a convenient way of optimizing over subspaces, there are other more general constraints that can be useful in practice.
For example, suppose some data is composed of signals from two distinct processes. We might ask if a specific feature we observe is attributable to the first process or the second one. Given appropriate domain knowledge, we could create a pair of subspaces, and optimize them to span the features generated by each process. However, if the subspaces are learned separately, there is nothing to stop a feature from being included in both of them. To ensure that each feature is only contained in one of the subspaces, there must not be any overlap between the subspaces. Thus, to implement an approach like this, we would need to optimize over pairs (or more generally, collections) of mutually orthogonal subspaces, which is not possible using the Grassmannian manifold.
Recently, my collaborators and I proposed the partitioned subspace manifold, which generalizes the Grassmannian and captures the geometry of these constraints. We have also derived Riemannian optimization methods for the manifold, making it easy for users to apply these constraints in their applications. So far, we have used this approach for multiple-dataset analysis and for domain adaptation, and have found that the manifold offers several interesting and promising characteristics when setting up an optimization problem. You can read more about this work in our paper, "A Manifold Approach to Learning Mutually Orthogonal Subspaces".
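To give a flavor of the constraint (this is an illustrative sketch, not our actual parameterization): if a collection of subspaces is represented by the column blocks of a single orthonormal matrix, mutual orthogonality holds by construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# One orthonormal matrix whose column blocks span the individual subspaces.
W, _ = np.linalg.qr(rng.normal(size=(10, 5)))
U1, U2 = W[:, :3], W[:, 3:]   # a 3-dim and a 2-dim subspace of R^10

# Because the columns of W are orthonormal, the two subspaces cannot
# share any direction: every cross inner product is zero.
cross = U1.T @ U2             # (3, 2) block of zeros
```

Optimizing over such partitioned orthonormal matrices, rather than over each subspace separately, is what prevents a feature from being captured by both subspaces at once.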
Baseline Removal for Spectroscopic Data
One of the many useful tools onboard NASA's Mars rover Curiosity is the ChemCam instrument, which uses Laser-Induced Breakdown Spectroscopy (LIBS) to obtain data describing the chemical composition of the Martian surface. Each LIBS sample is a high-dimensional signal that is transmitted to Earth, where it can be analyzed. In LIBS, as well as many other areas of spectroscopy, the shape, size, and distribution of peaks present in spectral data are of central interest, as they encode properties of the sample that are useful for prediction. Unfortunately, spectral data is often corrupted by physical phenomena that introduce a smoothly varying continuum, or baseline, into the signal. The problem of correcting for these effects is known as baseline removal.
Over several decades, a large number of methods have been proposed that solve this problem with varying degrees of success, but selecting and tuning the best method for a given task is tedious and time-consuming. It is therefore desirable to automate the search for the ideal baseline removal method and its parameters. Working with Professor Darby Dyar, we designed a system that generates novel baseline removal methods optimized for the particular problem a scientist might be working on. Our initial investigations showed that existing methods share many common subtasks, such as locating peaks in the spectrum; our approach combines these subtasks in a variety of ways to discover the baseline removal method that performs best at a given task, as specified by a user-provided objective function. To determine the best method and parameters, we employ global optimization techniques to efficiently rule out configurations that are unlikely to perform well.
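As a simple illustration of the kind of method such a system composes and tunes, here is a classic iterated polynomial-fitting baseline estimator (a generic textbook technique, not the system described above): at each iteration, points above the current fit, which are mostly peaks, are clipped down to the fit, so the polynomial sinks toward the continuum rather than the peaks.

```python
import numpy as np

def polynomial_baseline(y, degree=3, n_iter=20):
    """Estimate a smooth baseline by iterated polynomial fitting.

    Peaks above the current fit are clipped to it on each pass, so the
    polynomial converges to the underlying continuum."""
    x = np.linspace(-1.0, 1.0, len(y))
    work = y.astype(float).copy()
    for _ in range(n_iter):
        coeffs = np.polynomial.polynomial.polyfit(x, work, degree)
        baseline = np.polynomial.polynomial.polyval(x, coeffs)
        work = np.minimum(work, baseline)   # suppress peaks
    return baseline

# Example: a quadratic continuum plus two narrow Gaussian peaks.
x = np.linspace(0, 1, 500)
continuum = 1.0 + 0.5 * x - 0.8 * x**2
peaks = np.exp(-((x - 0.3) / 0.01)**2) + 0.7 * np.exp(-((x - 0.7) / 0.02)**2)
spectrum = continuum + peaks
corrected = spectrum - polynomial_baseline(spectrum, degree=2)
```

The tunable choices here (polynomial degree, number of iterations, the clipping rule) are exactly the kind of parameters our system searches over automatically.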