# Duplicating Robotics Results: Domain Randomization

In this ROSject you will find a reproduction of the experiments done in the paper titled Domain Randomization for Transferring Deep Neural Networks from Simulation to the Real World by Josh Tobin, Rachel Fong, Alex Ray, Jonas Schneider, Wojciech Zaremba and Pieter Abbeel. You can find the original paper here.

This ROSject has been created by Miguel Angel Rodriguez with the supervision of Ricardo Tellez, both from The Construct. You can copy, download, reproduce and share this ROSject, as long as you keep a copy of this paragraph.

## This Live Class is provided as a ROSject

A **ROSject** is a ROS project packaged in such a way that all the material it contains (ROS code, Gazebo simulations and notebooks) can be shared with anybody using only a single link. That is what we did with all the attendees of the Live Class: we shared this ROSject with them, so they have access to all the ROS material it contains.

## Other tools used for this project

This version is implemented using the following core systems:

• Keras: This high-level neural networks API lets you use TensorFlow, CNTK or Theano transparently. We will use it to make the robot learn with a deep convolutional neural network, in this case the MobileNetV2 model.

• ROS: We will use ROS as the communication system and we will organise everything around its structure and libraries.

• Python 2 and Python 3: Due to the nature of Keras, we will have to mix Python 2 and Python 3. This is especially sensitive concerning ROS, which has limited Python 3 support out of the box.

• Gazebo: We will create Gazebo plugins and use the internal Gazebo system to manipulate the environment settings and appearance.

• TensorBoard: It's a TensorFlow module that will allow us to monitor the training.

WARNING: This Notebook might contain quick flashing colours

## Index

We will divide this notebook into two main parts:

• Demo: A full-fledged demo of a working 10-object, random-environment model.
• QuickGuide: The list of steps to perform training and testing in a domain randomization experiment.

# Quick Demo

In this quick demo, you are going to see the final result of the whole process. That is:

1. How the objects are randomly spread on the table
2. How the already trained deep learning system is able to identify the correct position of the spam object.
3. How the robot plans a trajectory to grasp the spam and throw it into the garbage

### Start the simulation

We need to launch the simulation to evaluate the performance of the trained model:

---> Simulation Dropdown panel --> fetch_domain_randomization --> main_const_orientation.launch

You should get something similar to the image. Bear in mind that the colours are the default ones; once the dynamic environment service is called they will start changing:

### Launch the deep network controller

Execute in WebShell #1

cd /home/user/ai_ws
rm -rf build/ devel/
source /home/user/ai_ws/dnn_venv/bin/activate
catkin_make
source devel/setup.bash
rospack profile
roslaunch dcnn_training_pkg start_fetch_randomenv_10obj.launch

You can now open the Graphical Tools panel, and an RViz window similar to this will appear:

# How did we achieve all that?

Here you will find a full example on how to:

• Launch the Simulation: Learn how to launch the simulation needed.
• Generate training material: Learn how to generate all the training data needed for the training phase.
• Generate the DataSet: From the training material, create the dataset used directly by the training.
• Train your model: Train the MobileNetV2 model with the images and data you generated.
• Evaluate performance of the trained model: Here you will load the trained model and see if the Fetch robot can grasp the object, and how precise your model is.

This quickguide launches a learning pipeline that has the following:

• Two objects: Spam (the object to grasp) and one distractor (Suzanne).
• The training won't include occlusion by the Fetch arm.
• Random positions of the models inside the table area that the Fetch robot can reach, using a Beta distribution to generate more positions near the table edges. The orientations are unchanged.
• Prediction of the position of the Spam in the 2D space of the table plane.
• Random changes to the environment textures and lighting.
• The camera position stays fixed, as do the light positions.
• Images of size 96x96.
• Training for 30 epochs, then retraining from the previously generated weights for another 30 epochs.
• Validation by trying to grasp the Spam object 10 times, in a new random environment each time.
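The edge-biased placement described in the list above can be sketched in a few lines of Python. The function and parameter names here are ours, not the package's API; with the two Beta shape parameters below 1 the density is U-shaped, so samples cluster near the table edges:

```python
import random

def sample_table_position(x_min, x_max, y_min, y_max, alpha=0.5, beta=0.5):
    """Sample an (x, y) table position biased toward the table edges.

    Beta(0.5, 0.5) is U-shaped on [0, 1], so values near 0 and 1
    (i.e. near the edges of the reachable area) are more likely.
    """
    u = random.betavariate(alpha, beta)  # in [0, 1], biased toward 0 and 1
    v = random.betavariate(alpha, beta)
    x = x_min + u * (x_max - x_min)
    y = y_min + v * (y_max - y_min)
    return x, y
```

Setting alpha = beta = 1 instead would recover a plain uniform placement over the reachable area.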

## Simulation Launch

We need to launch the simulation for the Generate training material and Evaluate performance of the trained model parts:

---> Simulation Dropdown panel --> fetch_domain_randomization --> main_2objects_fetchrange_randomenv.launch

You should get something similar to the image. Bear in mind that the colours are the default ones; once the dynamic environment service is called they will start changing:

## Part I: Generate Random Env Material

Here you will generate all the images and the XML files with the pose data of the demo_spam object in the scene. The amount of training material, and other object-related variables, can be changed in the launch file. In this case you will generate 20 images by default.

randomgazebomanager_pkg/create_training_material_2objects.launch

In [ ]:
<arg name="number_of_elements" default="20" />
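Each generated image is paired with an XML file carrying the object's pose. The package's actual schema is not shown here, so the element names below are purely illustrative; this sketch just shows the kind of writer involved:

```python
import xml.etree.ElementTree as ET

def pose_annotation(image_name, x, y, z, qx, qy, qz, qw):
    """Build an XML pose annotation for one captured frame.

    Illustrative schema: the real generator's tags may differ.
    """
    root = ET.Element("annotation")
    ET.SubElement(root, "filename").text = image_name
    obj = ET.SubElement(root, "object")
    ET.SubElement(obj, "name").text = "demo_spam1"
    pose = ET.SubElement(obj, "pose")
    # Position (x, y, z) plus orientation as a quaternion (qx, qy, qz, qw)
    for tag, value in zip(("x", "y", "z", "qx", "qy", "qz", "qw"),
                          (x, y, z, qx, qy, qz, qw)):
        ET.SubElement(pose, tag).text = str(value)
    return ET.tostring(root, encoding="unicode")
```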


If you don't have the time to generate the necessary number of images, 120,000 (which would take around 40 hours, depending on computer power), you can download them from our repo, where we have kindly done it for you. In this ROSDS it's already provided for you, but you could also download the generated training material from that git. Take into account that the whole training set is 4.1GB of data.
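As a quick sanity check of those figures (pure arithmetic, using only the numbers quoted above):

```python
# Back-of-envelope numbers for the full training set described above.
n_images = 120_000
hours = 40
size_gb = 4.1

seconds_per_image = hours * 3600 / n_images      # generation pace per image
kb_per_image = size_gb * 1024 * 1024 / n_images  # average stored size per image
```

That works out to roughly 1.2 seconds and about 36 KB per generated image, which is why parallelizing the generation (see below) pays off so much.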

In part II of this Live Class series about Domain Randomization, we will teach you how to parallelize the creation of the datasets using Gym Computers, so you can

### reduce the amount of time from 40 hours to just a few minutes!

If you want to generate your own training data, please continue. You don't have to generate 120,000 images, but the more you generate, the better performance you will obtain in the training phase.

WARNING: This will erase the database data pre-stored here.

Execute in WebShell #1

cd /home/user/ai_ws
rm -rf build/ devel/
source /home/user/ai_ws/dnn_venv/bin/activate
catkin_make
source devel/setup.bash
rospack profile
roslaunch randomgazebomanager_pkg create_training_material_2objects.launch

Now you have to wait roughly the time estimated in the webshell output, in minutes.

Please remember the path to data_gen and data_gen_anotations, because it will be used by the Database Generator.

## Part II: Generate Training format material and train Model

Now that you have generated your 120,000 images for training (or copied them from the ready-made repo), you have to convert them to a format better suited to the model you are going to train, in this case MobileNetV2. We will therefore rescale the images to be square and smaller to increase speed.
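The exact preprocessing lives in the package; one common way to turn a 640x480 capture into a square image before scaling it down to 96x96 is a center crop. A sketch of the crop-box arithmetic (the helper name is ours):

```python
def square_crop_box(width, height):
    """Return the (left, top, right, bottom) center-crop box that makes
    an image square. For a 640x480 capture this yields a 480x480 crop,
    which can then be resized down to 96x96 for training."""
    side = min(width, height)
    left = (width - side) // 2
    top = (height - side) // 2
    return left, top, left + side, top + side
```

The returned box is in the (left, top, right, bottom) order that image libraries such as Pillow expect for cropping.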

Here we take the training material you previously generated, convert it to CSV format and scale down the images. First we have to change the path to point to where our training material is.

To change it, just go to dcnn_training_pkg/launch/generate_dataset_96.launch. This is necessary if you generated your own material in the previous step and want to use it. This launch file has the path /home/user/datasets_ws/randomenv_demo as default.

You can launch the dcnn_training_pkg/launch/generate_dataset_96_2objects_fetchrange_randomenv.launch instead if you want it all ready to just launch.

Because this is a quickstart, it's already done for you, so you can skip this if you don't want to spend 30 minutes generating the database. If you want to generate it, go ahead.

WARNING: This will erase the database data pre-stored here.

Execute in WebShell #1

cd /home/user/ai_ws
rm -rf build/ devel/
source /home/user/ai_ws/dnn_venv/bin/activate
catkin_make
source devel/setup.bash
rospack profile
roslaunch dcnn_training_pkg generate_dataset_96.launch

It will also convert the XML files generated with the position data of the Spam into a CSV file. Note that it will also separate the images and CSV into two groups:

• train
• validation

This is crucial for the training phase.

Example of CSV data, inside the dcnn_training_pkg/dataset_gen_csv/train.csv:

In [ ]:
/home/user/datasets_ws/randomenv_demo/train/2018_11_30_10_20_0.png,640,480,0.3571717739514635,0.43525975982601806,0.6397487758003331,-3.006409837937684e-09,7.466752991614877e-10,2.7450011405723804e-10,1.0,demo_spam1,0
/home/user/datasets_ws/randomenv_demo/train/2018_11_30_10_20_5.png,640,480,0.46007204263179513,0.37257355226789574,0.6397487758003331,-3.0064096927563464e-09,7.466752393572779e-10,2.732706654416275e-10,1.0,demo_spam1,0
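Reading those rows back is straightforward. The column meanings below are inferred from the sample rows above (image path, image width/height, position, quaternion, object name, class index), so treat this parser as an illustration rather than the package's actual loader:

```python
import csv
import io

ROW = ("/home/user/datasets_ws/randomenv_demo/train/2018_11_30_10_20_0.png,"
       "640,480,0.3571717739514635,0.43525975982601806,0.6397487758003331,"
       "-3.006409837937684e-09,7.466752991614877e-10,2.7450011405723804e-10,"
       "1.0,demo_spam1,0")

def parse_row(line):
    """Split one training CSV row into typed fields (columns inferred
    from the sample data: path, size, xyz position, quaternion,
    object name, class index)."""
    fields = next(csv.reader(io.StringIO(line)))
    path, w, h = fields[0], int(fields[1]), int(fields[2])
    x, y, z = (float(v) for v in fields[3:6])
    quat = tuple(float(v) for v in fields[6:10])
    name, cls = fields[10], int(fields[11])
    return path, (w, h), (x, y, z), quat, name, cls
```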


## Part III: Train Model

Now it's time to train the model. We will use the scaled 96x96 images and the CSV files. Here we will train the model, saving the weight files for the deep convolutional neural network MobileNetV2 into the folder model_weight_checkpoints_gen.

Again, if you want to skip this step (it's around 24 hours of training to get the results mentioned below), you have at your disposal the model already trained at:

/home/user/ai_ws/src/domain_randomization_dnn_training/dcnn_training_pkg/bk/model-120K_fetchrange_randomenv-96-1.0-30-32-TIME-1541250696.5790591-0.00145191.h5

All the data generated during training is stored there. You can find two main things:

• Weight File: The Best Weight file in .h5 format, in the weights folder.
• TensorBoard Logs: These are the logs that were generated during learning. If you want to see the model used, or the evolution of the accuracy, learning rates, loss and so on during the learning, use these files. You can find them inside the logs_gen folder.

If you want to continue creating your own training and weight files, follow these steps:

WARNING: This will erase the database data pre-stored here.

You will see that the training system saves a weight file checkpoint each time the loss drops below the minimum value seen so far. This guarantees that only better models are saved. It will also lower the learning rate if the loss fails to improve for too many epochs. This is regulated by the patience, which is 5 in this case.

We will use a training batch size of 32.
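The checkpoint-and-patience behaviour described above can be sketched in plain Python. In a Keras pipeline this is typically what callbacks such as ModelCheckpoint (with save_best_only) and ReduceLROnPlateau do; this class and its names are illustrative, not the package's code:

```python
class TrainingMonitor:
    """Minimal sketch of best-only checkpointing plus learning-rate
    reduction on plateau, as described in the text above."""

    def __init__(self, lr=1e-3, patience=5, factor=0.1):
        self.best_loss = float("inf")
        self.lr = lr
        self.patience = patience   # epochs without improvement tolerated
        self.factor = factor       # multiplier applied to lr on plateau
        self.epochs_without_improvement = 0

    def on_epoch_end(self, loss):
        """Return True when this epoch's weights should be checkpointed."""
        if loss < self.best_loss:            # new best: save a checkpoint
            self.best_loss = loss
            self.epochs_without_improvement = 0
            return True
        self.epochs_without_improvement += 1
        if self.epochs_without_improvement >= self.patience:
            self.lr *= self.factor           # loss plateaued: lower the LR
            self.epochs_without_improvement = 0
        return False
```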

WARNING: Depending on the power of your system, you will have to decrease or increase the number of THREADS. Otherwise you might run out of memory really fast.

Local PC: Depends on your configuration. Start with 2, and go up until it crashes and then go down a notch.

In dcnn_training_pkg/launch/train_model_96.launch, change:

In this case, the training will be very simple, with only 2 epochs. Of course this is not enough for precise training. You should do at least 30 epochs with 120,000 images: 90,000 for training and the rest for validation. If you want to run that full-scale training, launch train_model_96_2objects_fetchrange_randomenv.launch instead of train_model_96.launch. This will retrieve the pre-made CSV files and scaled images saved in the path /home/user/datasets_ws/pre_made_datasets.

Execute in WebShell #1

cd /home/user/ai_ws
rm -rf build/ devel/
source /home/user/ai_ws/dnn_venv/bin/activate
catkin_make
source devel/setup.bash
rospack profile
roslaunch dcnn_training_pkg train_model_96.launch

See the training logs visually with TensorBoard:

To see the data, open TensorBoard from the Tools --> Tensorboard panel, and add the path to the TensorBoard logs. In this case the path is:

In [ ]:
datasets_ws/randomenv_demo


Please remember that it's the relative path from the user home, NOT the absolute path.

And with only that, the system will automatically look for the folder logs_gen and read the tensorboard data. You can do this while the training is taking place or as you did before, from a finished training.

It will start in the GRAPHS section, showing you a zoomed-out view of the whole MobileNetV2 model. If it doesn't appear, give it a few minutes and click on the refresh icon.

Select SCALARS, and wait for the first epoch to finish. Then the first log data will be published. You should see something similar to this once the first epoch is done, and it will evolve through the training:

## Launch the RandomEnv Validation

Now, the moment of truth: let's see how well the model learned. We now have to validate the learned weights. For that we launch the following:

We need to launch the simulation:

---> Simulation Dropdown pannel --> fetch_domain_randomization --> main_2objects_fetchrange_randomenv.launch

We want to move the Fetch robot, so we have to start the FetchMove service. Note that you mustn't be in the virtual env:

And then launch the validation script in another WebShell:

If you created your own training:

• Here you will have to replace the name of the MyModelWeights.h5 file with the last one generated in your training phase, and COPY that MyModelWeights.h5 to the folder /home/user/ai_ws/src/domain_randomization_dcnn_training/dcnn_training_pkg/bk.

If you used the pre-made training file:

• You don't have to do anything; it's already prepared with the correct weights file.
cd /home/user/ai_ws
source ./dnn_venv/bin/activate
source devel/setup.bash
rospack profile
roscd dcnn_training_pkg/model_weight_checkpoints_gen
ls

--> Get the latest one MyModelWeights.h5 and copy it into the BK folder.

Go to dcnn_training_pkg/launch/start_fetch_randomenv.launch:

In [ ]:
<arg name="weight_file_name" default="YOUR_MODEL_WEIGHTS_FILE.h5" />


Now you can launch it:

Execute in WebShell #2

cd /home/user/ai_ws
rm -rf build/ devel/
source /home/user/ai_ws/dnn_venv/bin/activate
catkin_make
source devel/setup.bash
rospack profile
roslaunch dcnn_training_pkg start_fetch_randomenv.launch

You can now open the Graphical Tools panel, and an RViz window similar to this will appear:

You will see the image captured through the camera used for detection, the Spam model represented where it really is in the simulation, and a green sphere representing the position predicted by the model. You will also see a menu indicating whether the object was grasped or not. This menu is very useful for counting the percentage of valid grasps Fetch achieves.
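The percentage mentioned above is just the number of successful attempts over the total. As a trivial sketch (helper name is ours), after tallying each attempt from the RViz menu:

```python
def grasp_success_rate(results):
    """Percentage of successful grasps from a list of True/False
    attempt outcomes, e.g. as tallied from the RViz grasp menu."""
    return 100.0 * sum(results) / len(results)
```

For example, 7 successes out of the 10 validation attempts described in the quickguide gives a 70% success rate.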

## Retrain

Now we have it trained and validated. But what if we want to add more training to an existing weight file, because we want to add more objects, orientation changes or whatever? That is what retraining is for. You have to state which model to start from. For that, copy it to the bk folder and set its name in the launch file train_model_96_retrain.launch.

In [ ]:
<arg name="weight_file_name" default="model-120K_fetchrange_randomenv-96-1.0-30-32-TIME-1541250696.5790591-0.00145191.h5" />


And now retrain:

Execute in WebShell #1

cd /home/user/ai_ws
rm -rf build/ devel/
source /home/user/ai_ws/dnn_venv/bin/activate
catkin_make
source devel/setup.bash
rospack profile
roslaunch dcnn_training_pkg train_model_96_retrain.launch

## Where to find all the pre-made weights

You can find many weight examples in the repo REPO Weights. There you will find all the final weights of the different trainings done in this notebook, in case you want to skip all the training time or you don't want to start the training from scratch.

## 3D Models

Practically 99% of the 3D models available in this simulation were created with Blender, based on advanced mathematical 3D shapes or the default shapes, like Suzanne or the Teapot. Only one external element was used: the Spam object, which comes from ycb-benchmarks.

In this simulation you have a large variety of objects to choose from to do the whole full-fledged training for the paper. You can find all the models inside simulation_ws/src/domain_randomization_dynamic_objects/dynamic_objects/models. Here you have a sample of what you could generate:

# Acknowledgements

This ROSject wouldn't be possible if it weren't for the great work done by TheConstructTeam and by all the people that created the software used here:

• First of all, to OpenAI and the creators of the original paper this course is based upon: Domain Randomization for Transferring Deep Neural Networks from Simulation to the Real World by Josh Tobin, Rachel Fong, Alex Ray, Jonas Schneider, Wojciech Zaremba and Pieter Abbeel. You can find the original paper here.

• Berk C. Calli, Arjun Singh, Aaron Walsman, Siddhartha Srinivasa, Pieter Abbeel and Aaron M. Dollar, for their work on the models database from which the Spam model was extracted: ycb-benchmarks.

• The XML Class used was developed by : https://github.com/Near32/GazeboDomainRandom.

• Blender for the creation of the rest of the models used as distractors.

• OSRF for ROS and Gazebo

• Keras for making deep learning a bit more accessible for everyone.

# Want to go deeper? Check this online course

At the Robot Ignite Academy we have created a step-by-step course to learn in depth how everything you have seen here works:

In this course, you will find more examples, a step by step guide description of all the steps, and a MicroProject to practice everything you think you have learned in a completely new environment and robot. Give it a try!