Physical and digital books, media, journals, archives, and databases.
Results include:
  1. Interpreting contact interactions to overcome failure in robot assembly tasks

    Zachares, Peter Anastasi
    [Stanford, California] : [Stanford University], 2020

    A key challenge in multi-part assembly tasks is achieving robust sensorimotor control in the presence of uncertainty. In contrast to previous work that relies on a priori knowledge of whether two parts match, we propose a method to learn this through physical interaction. The method takes a hierarchical approach that enables a robot to autonomously assemble parts while remaining uncertain about part types and positions. In particular, its probabilistic formulation learns a set of differentiable filters that leverage the tactile sensorimotor trace from failed assembly attempts to update the robot's belief about part position and type, enabling the robot to overcome assembly failure. Through experiments, we demonstrate the effectiveness of the proposed approach on a set of object-fitting tasks. The experimental results indicate that the proposed approach achieves higher precision in object position and type estimation, and accomplishes object-fitting tasks faster than baselines.
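The belief update described in this abstract can be illustrated with a minimal discrete Bayes filter. This is a hypothetical sketch, not the thesis's differentiable-filter implementation: the part-type names and likelihood values are invented for illustration, and the observation is reduced to a binary fit/no-fit outcome of an assembly attempt.

```python
# Hypothetical sketch of updating a robot's belief over candidate part
# types from the outcome of a fitting attempt. Names and likelihood
# values are illustrative, not taken from the thesis.

def update_belief(belief, likelihoods, observation):
    """One Bayes-filter step: posterior[k] is proportional to P(obs | type k) * belief[k]."""
    posterior = {k: likelihoods[k][observation] * p for k, p in belief.items()}
    total = sum(posterior.values())
    return {k: v / total for k, v in posterior.items()}

# Uniform prior over three candidate part types (hypothetical).
belief = {"peg_A": 1 / 3, "peg_B": 1 / 3, "peg_C": 1 / 3}

# P(observation | part type): a failed insertion is most likely for peg_C.
likelihoods = {
    "peg_A": {"fit": 0.8, "no_fit": 0.2},
    "peg_B": {"fit": 0.5, "no_fit": 0.5},
    "peg_C": {"fit": 0.1, "no_fit": 0.9},
}

# After observing a failed attempt, belief mass shifts toward peg_C,
# so the failure itself carries information about part type.
belief = update_belief(belief, likelihoods, "no_fit")
```

The thesis's differentiable filters play an analogous role over continuous part positions as well as types, driven by tactile sensorimotor traces rather than a single binary outcome.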

  2. Understanding and learning robotic manipulation skills from humans

    Galbally Herrero, Elena
    [Stanford, California] : [Stanford University], 2022

    Humans are constantly learning new skills and improving upon their existing abilities. In particular, when it comes to manipulating objects, humans are extremely effective at generalizing to new scenarios and using physical compliance to their advantage. Compliance is key to generating robust behaviors by reducing the need to rely on precise trajectories. Inspired by humans, we propose to program robots at a higher level of abstraction by using primitives that leverage contact information and compliant strategies. Compliance increases robustness to uncertainty in the environment, and primitives provide atomic actions that can be reused to avoid coding new tasks from scratch. We have developed a framework that allows us to: (i) collect and segment human data from multiple contact-rich tasks through direct or haptic demonstrations, (ii) analyze this data and extract the human's compliant strategy, and (iii) encode the strategy into robot primitives using task-level controllers. During autonomous task execution, haptic interfaces enable real-time human intervention and additional data collection for recovery from failures. The framework was extensively validated through simulation and hardware experiments, including five real-world construction tasks.
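The idea of reusable primitives paired with compliant strategies can be sketched as follows. This is an illustrative sketch only, assuming a simple impedance-style parameterization; the primitive names, goals, and stiffness values are invented and are not the thesis's framework.

```python
# Illustrative sketch (not the thesis implementation): a contact-rich task
# composed from reusable primitives, each pairing a motion goal with a
# compliance (stiffness) setting for a task-level controller.
from dataclasses import dataclass

@dataclass
class Primitive:
    name: str
    goal: tuple       # target displacement in meters (hypothetical)
    stiffness: float  # N/m; lower stiffness -> more compliant, more tolerant to uncertainty

# A task is a sequence of primitives; new tasks recombine existing
# primitives instead of being coded from scratch.
insert_task = [
    Primitive("approach",     goal=(0.0, 0.0, 0.05),  stiffness=800.0),  # stiff free-space motion
    Primitive("make_contact", goal=(0.0, 0.0, 0.0),   stiffness=150.0),  # compliant touchdown
    Primitive("insert",       goal=(0.0, 0.0, -0.02), stiffness=80.0),   # very compliant insertion
]

def execute(task):
    for p in task:
        # A real task-level controller would track p.goal with impedance
        # p.stiffness; here we only report the plan.
        print(f"{p.name}: goal={p.goal}, stiffness={p.stiffness} N/m")

execute(insert_task)
```

Note how stiffness decreases as contact becomes likely: relying on compliance near contact, rather than on a precise trajectory, is the robustness property the abstract emphasizes.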

  3. Improving and accelerating particle-based probabilistic inference

    Zhu, Michael Hongyu
    [Stanford, California] : [Stanford University], 2021

    Probabilistic inference is a powerful approach for reasoning under uncertainty that goes beyond point estimation of model parameters to full estimation of the posterior distribution. However, approximating intractable posterior distributions and estimating expectations involving high-dimensional integrals pose algorithmic and computational challenges, especially for large-scale datasets. Two main approaches are sampling-based methods, such as Markov Chain Monte Carlo (MCMC) and Particle Filters, and optimization-based methods, such as Variational Inference. This thesis presents research on improving and accelerating particle-based probabilistic inference in the areas of MCMC, Particle Filters, Particle-Based Variational Inference, and discrete graphical models. First, we present Sample Adaptive MCMC, a particle-based adaptive MCMC algorithm. We demonstrate how Sample Adaptive MCMC does not require any tuning of the proposal distribution, potentially automating the sampling procedure, and employs global proposals, potentially leading to large speedups over existing MCMC methods. Second, we present a pathwise derivative estimator for Particle Filters, including the resampling step. The obstacle to a fully differentiable Particle Filter is the non-differentiability of the discrete particle resampling step. The key idea of our proposed method is to reformulate the Particle Filter algorithm in a way that eliminates the discrete particle resampling step and makes the reformulated Particle Filter completely continuous and fully differentiable. Third, we propose stochastic variance reduction and quasi-Newton methods for Particle-Based Variational Inference. The insight of our work is that accurate posterior inference requires highly accurate solutions to the Particle-Based Variational Inference optimization problem, so we leverage ideas from large-scale optimization. Lastly, we introduce a meta-algorithm for probabilistic inference in discrete graphical models based on random projections. The key idea is to run approximate inference algorithms for an exponentially large number of samples obtained by random projections. The number of samples used controls the trade-off between the accuracy of the approximate inference algorithm and the variance of the estimator.
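The resampling step that the second contribution targets can be seen in a standard bootstrap particle filter. Below is a minimal sketch for a 1-D random-walk model with Gaussian observation noise; the model and all parameters are illustrative, and this is the classical multinomial-resampling filter, not the thesis's reformulation. The discrete index draw in `resample` is exactly the operation that blocks pathwise gradients.

```python
# Minimal bootstrap particle filter, written to highlight the discrete
# multinomial resampling step that makes the standard algorithm
# non-differentiable. Model and parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
N = 500  # number of particles

def resample(particles, weights):
    # Discrete (multinomial) resampling: drawing indices from a
    # categorical distribution is not differentiable w.r.t. the weights.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]

def particle_filter(observations, obs_noise=0.5, proc_noise=0.3):
    particles = rng.normal(0.0, 1.0, N)  # samples from the prior
    means = []
    for y in observations:
        particles = particles + rng.normal(0.0, proc_noise, N)  # propagate
        logw = -0.5 * ((y - particles) / obs_noise) ** 2        # Gaussian log-likelihood
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(np.sum(w * particles))   # weighted posterior-mean estimate
        particles = resample(particles, w)    # the discrete, non-differentiable step
    return means

# Track a constant latent state x = 2.0 observed with noise.
true_x = 2.0
obs = true_x + rng.normal(0.0, 0.5, 50)
estimates = particle_filter(obs)
```

A fully differentiable filter, as the abstract describes, reformulates the algorithm so that this categorical draw never appears, allowing gradients to flow through the entire state-estimation procedure.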
