Where we stand now

A lot has happened since the project started in September. To give you an overview of where we stand and what we have been up to during the first two and a half months, here is the latest news!

First, we decided on the most important tasks we want our robot to fulfil. We asked ourselves: what has not been properly solved by current systems?

One point we identified was that many robots have trouble moving through a field without getting stuck, which of course is not ideal in agricultural applications. So, we set ourselves the goal of designing a robot that can deal with all kinds of soil conditions: uneven soil, wet soil, slopes, you name it. At the same time, our system will still be light enough not to cause undesired soil compaction, which would hinder crop growth. We are going to tackle this challenge by combining a robust suspension with four-wheel drive.

We also want to skip the step of building a map of the field from data collected during sowing, and instead implement an algorithm that can robustly navigate through a sugar beet field using only its onboard sensors.

To do the weeding in the field, we are going to apply two different methods: between the rows we will pull a passive tool, namely sweep shares, while within the row a robotic arm will selectively destroy weeds using a milling head.

We have also received great support, be it from sponsors, from our coaches, or from the people at ETH Strickhof, who have been sharing their knowledge about agriculture with us and allowing us to test our prototype on one of their fields.

From now until the end of the semester, we are mainly busy designing the system (and drinking lots of coffee in the process) and getting it ready for production. We are also taking the first steps towards finding the best navigation and weed-detection methods by testing different cameras and building up knowledge about different vision algorithms.

We are happy with our progress so far and are looking forward to giving you the next update in a few weeks.
