[mlpack] Robotic Arm - GSoC Project Idea

Vaibhav Jain vabsweb at gmail.com
Tue Mar 13 11:38:14 EDT 2018


Hey Marcus,


> That is definitely one way, but perhaps we can simplify the setup, e.g. by
> using only a single camera. This makes the overall training more
> challenging, since the preprocessing step is more complex; but in the end,
> you can directly use the pipeline for a broad range of applications,
> without extracting a lot of model-specific information. Let me know what
> you think.

About the setup of the simulation: it would be best to use a single Kinect
sensor that covers the field of operation as well as the whole arm. We can
run another package (like this one: <http://wiki.ros.org/openni_tracker>)
to track the motion of the arm. The main challenge would be training the
arm to position itself for a better grip on the object. By that I mean, it
is easy to move the end effector from one position in 3D space to another;
we do not need machine learning for that. What we have to teach the robot
is how to position itself around the object so that it can pick up random
objects of different dimensions. That's my opinion. What do you think?
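
To illustrate why plain end-effector positioning does not need learning,
here is a minimal sketch of closed-form inverse kinematics for a two-link
planar arm. The link lengths and target position are just assumed values
for illustration, not part of any proposed setup:

// Minimal sketch: closed-form inverse kinematics for a 2-link planar arm.
// Link lengths and the target position are assumed values for illustration.
#include <cmath>
#include <iostream>

int main()
{
  const double l1 = 0.4, l2 = 0.3;  // link lengths (meters), assumed
  const double x = 0.5, y = 0.2;    // desired end-effector position, assumed

  // Law of cosines gives the elbow angle.
  const double c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2.0 * l1 * l2);
  if (std::abs(c2) > 1.0)
  {
    std::cerr << "Target is out of reach." << std::endl;
    return 1;
  }

  const double theta2 = std::acos(c2);  // elbow-down solution
  const double theta1 = std::atan2(y, x) -
      std::atan2(l2 * std::sin(theta2), l1 + l2 * std::cos(theta2));

  std::cout << "Joint angles: theta1 = " << theta1
            << " rad, theta2 = " << theta2 << " rad" << std::endl;
  return 0;
}

The learning problem is everything this sketch does not cover: deciding
where around the object to place the gripper in the first place.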

Regards,
-- 
- Vaibhav Jain

