Dr Chao Chen and his research team from the Department of Mechanical and Aerospace Engineering at Monash University have developed an autonomous harvest robot capable of identifying, picking, and depositing apples in a span of seven seconds at full capacity.
We’re told, “following extensive trials in February and March at Fankhauser Apples in Drouin, Victoria, the robot was able to harvest more than 85% of all reachable apples in the canopy as identified by its vision system.”
Of all apples harvested, “less than 6% were damaged due to stem removal. Apples without stems can still be sold, but don’t necessarily fit the cosmetic guidelines of some retailers.”
When the robot was set to half its maximum speed, the median harvest rate was 12.6 seconds per apple. In streamlined pick-and-drop scenarios, the cycle time fell to roughly nine seconds.
At full speed, individual apple harvesting time can drop to seven seconds.
Dr Chen, Director of the Laboratory of Motion Generation and Analysis (LMGA), says: “Our developed vision system can not only identify apples in a tree within its range in an orchard environment by means of deep learning, but also identify and categorise obstacles, such as leaves and branches, to calculate the optimum trajectory for apple extraction.”
While automatic harvest robots are promising for the agricultural industry, they also pose challenges for fruit and vegetable growers.
Robotic harvesting of fruit and vegetables requires a vision system to detect and localise the produce. To increase the success rate and reduce damage to produce during harvesting, information on the fruit’s shape and the location and orientation of the stem-branch joint is also required.
To counter this problem, researchers created a state-of-the-art motion-planning algorithm featuring “fast-generation of collision-free trajectories to minimise processing and travel times between apples, reducing harvesting time and maximising the number of apples that can be harvested at a single location.”
Dr Chen claims the “robot’s vision system can identify more than 90% of all visible apples seen within the camera’s view from a distance of approximately 1.2m. The system can work in all types of lighting and weather conditions and takes less than 200 milliseconds to process the image of an apple.”
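The detection-and-categorisation step Dr Chen describes can be pictured with a minimal sketch. Everything here is an illustrative assumption rather than the team’s actual implementation: the `Detection` record, the `plan_targets` helper, and the confidence and range thresholds are all hypothetical, standing in for the output of a deep-learning detector running on camera frames.

```python
from dataclasses import dataclass

# Hypothetical detection record; a real pipeline would obtain these
# from a deep-learning detector processing camera images.
@dataclass
class Detection:
    label: str         # "apple", "leaf", or "branch"
    confidence: float  # detector score in [0, 1]
    distance_m: float  # estimated distance from the camera

def plan_targets(detections, min_conf=0.5, max_range_m=1.2):
    """Split detections into harvest targets and obstacles.

    Apples within the camera's assumed working range become targets;
    leaves and branches are kept as obstacles so a motion planner
    can route around them. Threshold values are assumptions.
    """
    targets, obstacles = [], []
    for d in detections:
        if d.confidence < min_conf:
            continue  # discard low-confidence detections
        if d.label == "apple" and d.distance_m <= max_range_m:
            targets.append(d)
        elif d.label in ("leaf", "branch"):
            obstacles.append(d)
    return targets, obstacles
```

Keeping the obstacles alongside the targets, rather than discarding them, is what lets a downstream planner compute collision-free extraction paths.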
“We also implemented a ‘path-planning’ algorithm that was able to generate collision-free trajectories for more than 95% of all reachable apples in the canopy. It takes just eight seconds to plan the entire trajectory for the robot to grasp and deposit an apple”, Dr Chen says.
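The idea of a collision-free trajectory can be illustrated with a toy sketch. This is not the team’s algorithm: it checks only straight-line approaches against spherical obstacles by dense sampling, whereas the actual planner generates full trajectories. The function names (`segment_clear`, `first_reachable`) and the spherical obstacle model are assumptions for illustration.

```python
import math

def segment_clear(p0, p1, obstacles, steps=50):
    """Check a straight-line path from p0 to p1 against spherical obstacles.

    obstacles: list of ((x, y, z), radius) tuples, e.g. branches
    approximated as spheres. Samples the segment densely and rejects
    it if any sample falls inside an obstacle.
    """
    for i in range(steps + 1):
        t = i / steps
        q = tuple(a + t * (b - a) for a, b in zip(p0, p1))
        for centre, radius in obstacles:
            if math.dist(q, centre) < radius:
                return False
    return True

def first_reachable(gripper, apples, obstacles):
    """Return the nearest apple with a clear straight-line approach."""
    for apple in sorted(apples, key=lambda a: math.dist(gripper, a)):
        if segment_clear(gripper, apple, obstacles):
            return apple
    return None  # no collision-free approach found
```

Trying the nearest apples first mirrors the stated goal of minimising travel time between apples; a practical planner would additionally shape the trajectory itself rather than testing straight segments.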
“The robot picks apples with a specially designed, pneumatically powered, soft gripper with four independently actuated fingers and suction system that grasps and extracts apples efficiently, while minimising damage to the fruit and the tree itself.
“In addition, the suction system draws the apple from the canopy into the gripper, reducing the need for the gripper to reach into the canopy and potentially damaging its surroundings. The gripper can extract more than 85% of all apples from the canopy that were planned for harvesting.”
Dr Chen says the system can help address the current labour shortage in Australia’s agricultural sector, as well as a future food crisis, as the population grows and arable land decreases.
He says technological advances like this could also help increase fruit productivity and attract younger people to work on farms.
The research team, led by Dr Chen, consists of Dr Wesley Au, Xing Wang, Hugh Zhou, and Dr Hanwen Kang. The project is funded by the Australian Research Council Industrial Transformation Research Hubs scheme.