C. Lehnert: Using active robotic vision techniques to deal with visual occlusions in robotic harvesting


Bio Information

Chris Lehnert is a lecturer in the School of Electrical Engineering and Robotics at the Queensland University of Technology (QUT) and a Chief Investigator with the QUT Centre for Robotics, where his research sits within the robotic physical interaction program. Chris has over ten years' experience in teaching and research in robotics. His current research interests focus on developing novel robotic perception, grasping and control methods for challenging real-world problems, such as autonomous harvesting in horticulture and industrial warehouse automation.

Chris received a B.Eng (Hons) in Mechatronics from The University of Queensland in 2009 and a PhD in robotics from the Queensland University of Technology in 2015, where his dissertation developed novel machine learning methods for controlling complex robotic arms. Chris is an active member of the international robotics community, including serving as a guest editor for the Journal of Field Robotics (JFR).

Chris led a team at QUT to develop one of the world's best robotic capsicum harvesters, "Harvey", which can detect and harvest capsicums within a greenhouse and has demonstrated a harvesting success rate of 76.5%. In 2017, Chris helped lead a team from QUT to win the international Amazon Robotics Challenge, using a custom robot that identifies household items in clutter and picks and places them into packing boxes.

Chris is currently leading the robotics activity stream within the Future Food Systems CRC, a 10-year, $150 million centre co-funded by industry, research partners and the federal government. The CRC focuses on regional food precincts and next-generation indoor cropping, in particular developing robotics and automated systems for indoor cropping.


Presentation Abstract

In this presentation, I will discuss the potential for using active robotic vision methods to deal with occlusions when autonomously harvesting crops in indoor protected cropping systems. In particular, I will outline a novel robotic vision method (3D Move to See) that guides a robot's harvesting tool towards a target crop while maximising its view of the crop and remaining time-efficient when executing the harvesting task.
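To make the general idea concrete, below is a minimal Python sketch of a move-to-see style servoing step: the harvesting tool is nudged towards the target crop while also moving along an estimated gradient of a target-visibility score computed from a small array of offset cameras. This is an illustrative sketch under stated assumptions, not the published 3D Move to See implementation; the camera offsets, visibility scores, blending weight and function names are assumptions made purely for illustration.

import numpy as np

def estimate_view_gradient(camera_offsets, scores):
    # Estimate the direction (in end-effector coordinates) in which the
    # target-visibility score increases, from scores observed by a small
    # array of cameras offset around the harvesting tool. Solves
    # scores - mean(scores) ~= camera_offsets @ g by least squares.
    d = scores - scores.mean()
    g, *_ = np.linalg.lstsq(camera_offsets, d, rcond=None)
    return g

def move_to_see_step(ee_position, target_position, camera_offsets, scores,
                     step_size=0.01, view_weight=0.5):
    # One servoing step: blend motion towards the target crop with motion
    # along the estimated visibility gradient, then take a small step.
    to_target = target_position - ee_position
    to_target /= (np.linalg.norm(to_target) + 1e-9)
    g = estimate_view_gradient(camera_offsets, scores)
    g /= (np.linalg.norm(g) + 1e-9)
    direction = (1.0 - view_weight) * to_target + view_weight * g
    direction /= (np.linalg.norm(direction) + 1e-9)
    return ee_position + step_size * direction

# Toy usage with fabricated numbers (illustration only). Each score could be,
# for example, the fraction of pixels that camera segments as the target crop.
offsets = np.array([[0.03, 0.0, 0.0], [-0.03, 0.0, 0.0],
                    [0.0, 0.03, 0.0], [0.0, -0.03, 0.0]])
scores = np.array([0.12, 0.08, 0.11, 0.09])
ee = np.array([0.0, 0.0, 0.0])
crop = np.array([0.5, 0.1, 0.4])
next_ee = move_to_see_step(ee, crop, offsets, scores)

Repeating such a step inside a control loop, with fresh camera images at each iteration, is the kind of behaviour the talk refers to: the tool approaches the crop while continually adjusting its viewpoint to keep the crop unoccluded.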

I will focus on robotic harvesting of indoor crops, which offers an attractive way to reduce labour costs while enabling future value through selective harvesting, optimised crop quality and scheduling, and therefore greater profit.

