Monash University

Publications

  • The ACRV picking benchmark: A robotic shelf picking benchmark to foster reproducible research
  • Picking the right robotics challenge
  • The limits and potentials of deep learning for robotics
  • Coordinated Heterogeneous Distributed Perception based on Latent Space Representation
  • Special issue on deep learning in robotics
  • Design of a multi-modal end-effector and grasping system: How integrated design helped win the Amazon Robotics Challenge
  • Multi-View Picking: Next-best-view Reaching for Improved Grasping in Clutter
  • Zero-shot Sim-to-Real Transfer with Modular Priors
  • What Would You Do? Acting by Learning to Predict
  • Cartman: Cartesian Manipulator for Warehouse Automation in Cluttered Environments
  • A Modular Software Framework for Eye-hand Coordination in Humanoid Robots
  • A Distributed Robotic Vision Service
  • Guest Editorial Open Discussion of Robot Grasping Benchmarks, Protocols, and Metrics
  • Hierarchical Grasp Detection for Visually Challenging Environments
  • Tuning Modular Networks with Weighted Losses for Hand-Eye Coordination
  • Richardson-Lucy Deblurring for Moving Light Field Cameras
  • A Robustness Analysis of Deep Q Networks
  • Visual Servoing from Deep Neural Networks
  • Adversarial Discriminative Sim-to-real Transfer of Visuo-motor Policies
  • Training Deep Neural Networks for Visual Servoing
  • Exciting Students for Systems Programming Through the Use of Mobile Robots
  • Cartman: The low-cost Cartesian manipulator that won the Amazon Robotics Challenge
  • Robotic manipulation and the role of the task in the metric of success
  • Sim-to-real transfer of robot learning with variable length inputs
  • Closing the loop for robotic grasping: A real-time, generative grasp synthesis approach
  • Semantic segmentation from limited training data
  • Modular deep Q networks for sim-to-real transfer of visuo-motor policies
  • Towards vision-based manipulation of plastic materials
  • Quantifying the reality gap in robotic manipulation tasks
  • Benchmarking simulated robotic manipulation through a real world dataset
  • Learning robust, real-time, reactive robotic grasping
  • Expert systems: Special issue on “Machine Learning Methods Neural Networks applied to Vision and Robotics (MLMVR)”
  • Special issue on deep learning for robotic vision
  • Multisensory assisted in-hand manipulation of objects with a dexterous hand
  • Multi-modal generative models for learning epistemic active sensing
  • EGAD! An Evolved Grasping Analysis Dataset for Diversity and Reproducibility in Robotic Manipulation
  • Teleoperation of a 7 DOF Humanoid Robot Arm Using Human Arm Accelerations and EMG Signals
  • LunaRoo: A Proposal for the Google Lunar XPrize Payload Opportunity with the Part Time Scientists team
  • Evolving ANNs for Spacecraft Rendezvous and Docking
  • Artificial curiosity for autonomous space exploration
  • Extending visual perception with haptic exploration for improved scene understanding
  • CUDA massively parallel trajectory evolution
  • Learning Visual Object Detection and Localisation Using icVision
  • Autonomous learning of robust visual object detection and identification on a humanoid
  • Transferring spatial perception between robots operating in a shared workspace
  • Reactive reaching and grasping on a humanoid: Towards closing the action-perception loop on the iCub
  • Humanoid learns to detect its own hands
  • Cartesian Genetic Programming for Image Processing
  • Learning Spatial Object Localization from Vision on a Humanoid Robot
  • Curiosity Driven Reinforcement Learning for Motion Planning on Humanoids
  • A Bottom-Up Integration of Vision and Actions To Create Cognitive Humanoids
  • On Cooperation in a Multi Robot Society for Space Exploration
  • Improving robot vision models for object detection through interaction
  • An Integrated, Modular Framework for Computer Vision and Cognitive Robotics Research (icVision)
  • Artificial neural networks for spatial perception: Towards visual object localisation in humanoid robots
  • The Modular Behavioral Environment for Humanoids & other Robots (MoBeE)
  • Interactive computational imaging for deformable object analysis
  • Task-Relevant Roadmaps: A Framework for Humanoid Motion Planning
  • ALife in Humanoids: Developing a Framework to Employ Artificial Life Techniques for High-Level Perception and Cognition Tasks on Humanoid Robots
  • LunaRoo: Designing a hopping lunar science payload
  • Towards Spatial Perception: Learning to Locate Objects From Vision
  • Mars Terrain Image Classification using Cartesian Genetic Programming
  • The Need for More Dynamic and Active Datasets
  • icVision: A Modular Vision System for Cognitive Robotics Research
  • Towards vision-based deep reinforcement learning for robotic motion control
  • Designing a robotic hopping cube for lunar exploration
  • MT-CGP: Mixed Type Cartesian Genetic Programming
  • Robot formations for area coverage
  • A survey of multi-robot cooperation in space
  • Project SMURFS - A society of multiple robots
  • Multi-robot Cooperation in Space: A Survey
  • Multi-robot formations for area coverage in space applications
  • The REEL-E Project (HALE launch campaign) - Results and Lessons Learned
  • Simulating Resource Sharing in Spacecraft Clusters Using Multi-Agent Systems
  • Universal Robot-Human-Operations Cooperative Unit
  • Learning Setup Policies: Reliable Transition Between Locomotion Behaviours
  • Deep Learning Approaches to Grasp Synthesis: A Review

Juxi Leitner's public data