  • [paper] TossingBot _Andy Zeng
    AAU TC-bot project --- mini 1 2020. 2. 3. 15:30

    1. Introduction 

    2. Related Work 

    3. Method Overview 

    3-A. A Perception Module : Learning Visual Representations

    3-B. Grasping Module : Learning Parallel-Jaw Grasps. 

    3-C. Throwing Module : Learning Throwing Velocities 

    4. Learning Residual Physics for Throwing

    5. Jointly Learning Grasping and Throwing 

    6. Evaluation 

    6-A. Experimental Setup 

    6-B. Baseline Methods 

    6-C. Baseline Comparisons 

    6-D. Pick and Place Efficiency 

    6-E. Learning Stable Grasps for Throwing 

    6-F. Generalizing to New Target Locations 

    7. Discussion and Future Work 

    8. Appendix 

    8-A. Additional Training Details 

    8-B. Additional Timing Details 

    8-C. Additional Details of Inferring

    8-D. Additional Details of Human Baseline Experiments

    8-E. Additional Visualizations of Learned Grasps 

    8-F. Emerging Object Semantics from Interaction  

     

    I. Introduction 

    from pre-throw conditions (e.g. initial grasp of the object) 

    to varying object-centric properties (e.g. mass distribution, friction, shape) and dynamics (e.g. aerodynamics). 

     


    prior studies are often confined to assuming homogeneous pre-throw conditions (e.g. object fixtured in gripper or manually
     


    TossingBot, an end-to-end formulation
    that uses trial and error to learn how to plan control
    parameters for grasping and throwing from visual observations.

    The formulation learns grasping and throwing jointly –
    discovering grasps that enable accurate throws, while learning
    throws that compensate for the dynamics of arbitrary objects.

    There are two key aspects to our system:

     

    1) Joint learning of grasping and throwing policies:

    Grasping is directly supervised by the accuracy of
    throws (grasp success => accurate throw), while throws are
    directly conditioned on specific grasps (via dense predictions).
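A minimal sketch of this coupling (all names here are illustrative placeholders, not the paper's code): the grasping label is derived from the throw outcome, and the throwing loss is only computed for grasps that were actually executed, so each module's supervision depends on the other.

```python
def grasp_label(grasp_succeeded: bool, landed_in_target: bool) -> float:
    """Grasping is supervised by throwing accuracy: a grasp is labeled
    successful (1.0) only if the subsequent throw hit the target box."""
    return 1.0 if (grasp_succeeded and landed_in_target) else 0.0


def throwing_loss(pred_velocity: float, true_velocity: float,
                  grasp_succeeded: bool) -> float:
    """Throws are conditioned on specific grasps: the throwing module only
    receives a training signal (squared error here, as a stand-in) when a
    grasp was actually executed and a throw happened."""
    if not grasp_succeeded:
        return 0.0  # no throw occurred; no gradient for the throwing module
    return float((pred_velocity - true_velocity) ** 2)
```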

     

    2) Residual learning of throw release velocities:

    The physics-based controller uses ballistics to provide consistent estimates of v̂ that generalize well to different landing locations, while the data-driven residuals learn to compensate for object-centric properties and dynamics. Our experiments show that
    this hybrid data-driven method, Residual Physics, leads to significantly more accurate throws than baseline alternatives.
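As a rough illustration of the Residual Physics idea (a sketch, not the paper's implementation; the learned residual network is stubbed as a plain function here), the release speed can come from ideal projectile ballistics, with a learned scalar correction added on top:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def ballistic_release_speed(range_m: float,
                            theta_rad: float = math.pi / 4) -> float:
    """Ideal release speed to cover `range_m` at launch angle `theta_rad`,
    assuming equal release/landing height and no drag:
    R = v^2 * sin(2*theta) / g  =>  v = sqrt(R * g / sin(2*theta))."""
    return math.sqrt(range_m * G / math.sin(2 * theta_rad))

def release_speed(range_m: float, residual_fn) -> float:
    """Residual Physics: final estimate = physics estimate + learned residual.
    `residual_fn` stands in for the trained network's scalar output."""
    return ballistic_release_speed(range_m) + residual_fn(range_m)
```

The physics term alone already generalizes across landing distances; the residual only has to absorb object-specific effects (mass distribution, friction, aerodynamics) that the ballistic model ignores.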

     


    We observe that throwing performance strongly correlates with the quality of grasps,
    and experimental results show that our formulation is capable of learning synergistic grasping and throwing policies for
    arbitrary objects in real settings.

     

    II. RELATED WORK

     

    *Analytical models for throwing (limitation)

    >> In our work, we leverage deep learning and self-supervision to
    compensate for the dynamics that are not explicitly accounted for in
    contact/ballistic models, and we train our policies online via trial
    and error so that they can adapt to new situations on the fly
    (e.g. new object and manipulator dynamics).

    *Learning models for throwing

    >> In contrast to prior work, we make no assumptions on the physical
    properties of thrown objects, nor do we assume that the objects are at
    a fixed pose in the gripper before each throw. Instead, we propose an
    object-agnostic pick-and-throw formulation that jointly learns to
    acquire its own pre-throw conditions (via grasping) while learning
    throwing control parameters that compensate for varying object
    properties and dynamics. The system learns from scratch through
    self-supervised trial and error, and resets its own training so that
    human intervention is kept at a minimum.
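The self-supervised trial-and-error cycle described above can be outlined roughly as follows (every argument is a hypothetical placeholder; the real system produces dense pixel-wise labels rather than single samples):

```python
def run_trial(observe, grasp, throw, log_sample):
    """One self-supervised trial: observe, attempt a grasp, throw if the
    grasp succeeded, and record the outcome as a training label. No human
    annotation is involved; supervision comes from the physical outcome."""
    obs = observe()                          # visual observation (e.g., a heightmap)
    grasp_ok = grasp(obs)                    # attempt a grasp from the observation
    landed = throw() if grasp_ok else False  # throw only if something was grasped
    log_sample(obs, grasp_ok, landed)        # outcome becomes the training label
```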

     
