Vision

MyGym enables you to use pre-trained vision models to extend the versatility of your training scenarios. Instead of taking ground truth data from the simulator, the vision models can be used to retrieve information about the environment in which the robot performs its task. They take the simulator's camera data (an RGB and/or depth image) as input to inference and return information about the observed scene. Your training thus becomes independent of the simulator's ground truth and can therefore be transferred to real robot tasks more easily.

MyGym integrates two vision modules, YOLACT and VAE, and you can alternate between ground truth and these when specifying the source of the reward signal in the config file or as a command-line argument: reward_type is either gt (ground truth), 3dvs (YOLACT), or 2dvu (VAE).
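For illustration, the mapping between the reward_type values and the vision source used by the VisionModule class documented below might be expressed as follows; this is a sketch, not myGym's actual configuration loader:

    # Sketch only: how a reward_type setting could be translated into the
    # vision_src value expected by VisionModule (documented further below).
    REWARD_TYPE_TO_VISION_SRC = {
        "gt": "ground_truth",   # ground truth taken directly from the simulator
        "3dvs": "yolact",       # YOLACT instance segmentation
        "2dvu": "vae",          # VAE latent-space comparison
    }

    def pick_vision_src(reward_type):
        """Translate a reward_type config value into a VisionModule source name."""
        if reward_type not in REWARD_TYPE_TO_VISION_SRC:
            raise ValueError("Unknown reward_type: %r" % reward_type)
        return REWARD_TYPE_TO_VISION_SRC[reward_type]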

YOLACT

MyGym implements YOLACT 1 for instance segmentation. If 3dvs is chosen as reward_type, the pre-trained YOLACT model is used to obtain observations from the environment. The input to YOLACT inference is the RGB image rendered by the active camera; the inference results are masks and bounding boxes of the detected objects. The vision module then calculates the centroids of the detected objects in pixel space. Lastly, it uses the depth image from the active camera to project each object's centroid into 3D world coordinates. This way, the absolute position of task objects is obtained from sensory data alone, without any ground truth input.
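The projection from a pixel centroid and depth value to world coordinates can be sketched with a standard pinhole camera model. This is only an illustration of the principle; the intrinsics (fx, fy, cx, cy) and the cam_to_world transform are assumed inputs, not myGym's actual camera interface:

    import numpy as np

    def pixel_to_world(u, v, depth_img, fx, fy, cx, cy, cam_to_world):
        """Back-project the pixel centroid (u, v) into 3D world coordinates.

        depth_img      -- depth image aligned with the RGB image (in metres)
        fx, fy, cx, cy -- pinhole intrinsics of the active camera (assumed known)
        cam_to_world   -- 4x4 homogeneous camera-to-world transform (assumed known)
        """
        z = depth_img[int(v), int(u)]          # depth at the centroid pixel
        x = (u - cx) * z / fx                  # back-project into the camera frame
        y = (v - cy) * z / fy
        point_cam = np.array([x, y, z, 1.0])
        return (cam_to_world @ point_cam)[:3]  # express in world coordinates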

The current pre-trained model can detect all Objects and three of the Robots (kuka, jaco, panda), including their grippers.

If you would like to train a new YOLACT model, you can use the dataset generator provided in myGym, see Generate dataset. For instructions on the training itself, visit the YOLACT home page.

1  Daniel Bolya, Chong Zhou, Fanyi Xiao, and Yong Jae Lee (2019). YOLACT: Real-time Instance Segmentation. In ICCV.

VAE

The objective of an unsupervised version of the prepared tasks (reach task, push task, pick and place, etc.) is to minimize the difference between the actual and goal scene images. To measure their difference, we have implemented a variational autoencoder (VAE) that compresses each image into an n-dimensional latent vector. Since the VAE is optimized to preserve similarities among images in the latent space as well (scenes with objects close to each other will have their encoded vectors closer to each other), it is possible to measure the Euclidean distance between the encoded scenes and use it for the reward calculation, i.e., the smaller the Euclidean distance between the actual and goal image, the higher the reward. Please note that a limitation of the VAE is that it works conveniently only with 2D information, i.e., it is a very weak source of visual information in 3D tasks such as pick and place.
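A minimal sketch of this reward computation, assuming the actual and goal images have already been encoded into latent vectors (for example with encode_with_vae, documented below); the negative-distance shaping here is one simple choice, not necessarily the exact shaping used in myGym:

    import numpy as np

    def latent_distance_reward(z_actual, z_goal):
        """Reward from latent-space distance: closer scenes give a higher reward."""
        dist = float(np.linalg.norm(np.asarray(z_actual) - np.asarray(z_goal)))
        return -dist   # smaller distance between actual and goal -> higher reward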

We provide a pretrained VAE for some of the task scenarios, but we also include code for training your own VAE (including dataset generation), so that you can create custom experiments.

Note

If you want to use the pretrained vision modules, please download them first:

cd myGym
sh download_vision.sh

class myGym.envs.vision_module.VisionModule(vision_src='ground_truth', env=None, vae_path=None, yolact_path=None, yolact_config=None)[source]

Vision class that retrieves information from the environment based on a visual subsystem (YOLACT, VAE) or ground truth. A brief usage sketch follows the parameter list below.

Parameters:

param vision_src: (string) Source of information from the environment (ground_truth, yolact, vae)

param env: (object) Environment where the training takes place

param vae_path: (string) Path to a trained VAE, used with the 2dvu reward type

param yolact_path: (string) Path to a trained YOLACT model, used with the 3dvs reward type

param yolact_config: (string) Path to a saved YOLACT config object, or the name of an existing one in the data/Config script, or None for autodetection
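A brief usage sketch based only on the signatures documented on this page; the environment object, model paths and task object passed in are placeholders you would supply from your own experiment:

    from myGym.envs.vision_module import VisionModule

    def build_yolact_vision(env, weights_path, config_path=None):
        """Construct a VisionModule backed by a trained YOLACT model.

        weights_path/config_path are placeholders; use the files fetched by
        download_vision.sh. config_path=None triggers autodetection.
        """
        return VisionModule(vision_src="yolact", env=env,
                            yolact_path=weights_path, yolact_config=config_path)

    def locate_object(vision, obj, rgb, depth):
        """World-space position and pixel-space bounding box of one task object."""
        position = vision.get_obj_position(obj=obj, img=rgb, depth=depth)
        bbox = vision.get_obj_bbox(obj=obj, img=rgb)
        return position, bbox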

get_module_type()[source]

Get the source of information from the environment (ground_truth, yolact, vae)

Returns:

return source: (string) Source of information

crop_image(img)[source]

Crop image by 1/4 from each side

Parameters:

param img: (list) Original image

Returns:

return img: (list) Cropped image

get_obj_pixel_position(obj=None, img=None)[source]

Get the mask and centroid of an object in pixel space coordinates from a 2D image

Parameters:

param obj: (object) Object to find the mask and centroid of

param img: (array) 2D input image for the vision model inference

Returns:

return mask: (list) Mask of the object

return centroid: (list) Centroid of the object in pixel space coordinates
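For illustration, the centroid of a binary mask can be computed as the mean of its foreground pixel coordinates; this is only a sketch of the quantity returned here, not the module's implementation:

    import numpy as np

    def mask_centroid(mask):
        """Centroid [u, v] of a binary object mask in pixel coordinates."""
        vs, us = np.nonzero(np.asarray(mask))
        if us.size == 0:
            return None                          # object not visible in the image
        return [float(us.mean()), float(vs.mean())]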

get_obj_bbox(obj=None, img=None)[source]

Get the bounding box of an object from a 2D image

Parameters:

param obj: (object) Object to find the bounding box of

param img: (array) 2D input image for the vision model inference

Returns:

return bbox: (list) Bounding box of the object

get_obj_position(obj=None, img=None, depth=None)[source]

Get the object position in the environment's world coordinates from a 2D image and a depth image

Parameters:

param obj: (object) Object to find the position of

param img: (array) 2D input image for the vision model inference

param depth: (array) Depth input image for the vision model inference

Returns:

return position: (list) Centroid of the object in world coordinates

get_obj_orientation(obj=None, img=None)[source]

Get the object orientation in the environment's world coordinates from a 2D image

Parameters:

param obj: (object) Object to find the orientation of

param img: (array) 2D input image for the vision model inference

Returns:

return orientation: (list) Orientation of the object in world coordinates

vae_generate_sample()[source]

Generate an image as a sample of the VAE latent representation

Returns:

return dec_img: Generated image from the VAE latent representation

encode_with_vae(imgs, task='reach', decode=0)[source]

Encode the input images into n-dimensional latent vectors using the VAE model

Parameters:

param imgs: (list of arrays) Input images

param task: (string) Type of learned task (reach, push, …)

param decode: (bool) Whether to decode the encoded images from the latent representation back to image arrays

Returns:

return latent_z: (list) Latent representation of the images

return dec_img: (list of arrays) Images decoded from the latent representation back to image arrays
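A possible way to combine this method with the latent-distance reward sketched in the VAE section above; the assumption that decode=0 returns only the latent vectors is ours, check the source for the exact return value:

    import numpy as np

    def goal_distance_from_images(vision, actual_img, goal_img, task="reach"):
        """Euclidean latent distance between the actual and the goal scene image."""
        # Assumes decode=0 returns just latent_z for the two input images.
        latent_z = vision.encode_with_vae([actual_img, goal_img], task=task, decode=0)
        return float(np.linalg.norm(np.asarray(latent_z[0]) - np.asarray(latent_z[1])))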

inference_yolact(img)[source]

Run inference with the YOLACT model

Parameters:

param img: (array) Input 2D image

Returns:

return classes: (list of ints) Class IDs of the detected objects

return class_names: (list of strings) Class names of the detected objects

return scores: (list of floats) Scores (confidence) of the object detections

return boxes: (list of lists) Bounding boxes of the detected objects

return masks: (list of lists) Masks of the detected objects

return centroids: (list of lists) Centroids of the detected objects in pixel space coordinates
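A usage sketch that relies only on the return values listed above: pick the highest-scoring detection of a given class name (the helper itself is hypothetical, not part of the module):

    def best_detection(vision, img, wanted_class):
        """Highest-scoring YOLACT detection of one class, or None if not detected."""
        classes, class_names, scores, boxes, masks, centroids = vision.inference_yolact(img)
        best = None
        for name, score, box, mask, centroid in zip(class_names, scores, boxes, masks, centroids):
            if name == wanted_class and (best is None or score > best[0]):
                best = (score, box, mask, centroid)
        return None if best is None else {"bbox": best[1], "mask": best[2], "centroid": best[3]}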