An AI project that leverages state-of-the-art neural networks to detect prairie burrows in aerial images. The project was undertaken at the University of Colorado, Boulder for the Laboratory for Interdisciplinary Statistical Analysis. For more information, contact:
- Patricia Todd [email protected]
- Rahul Chowdhury [email protected]
- The main motivation came from this Medium article: Link
- Oh, such a cool raccoon detector :) Link
- LabelImg for image annotation. Link
- TensorFlow Object Detection API: Link
Ensure all of these installation steps are followed: Link
All analysis is currently done in this Jupyter notebook: Link
- `transform_files.py` -> Python script to map and correlate the annotated XML files to the corresponding images. Dump the images and annotation files in the locations specified by the `src_images` and `src_annotated` variables. It also makes sure that the `<filename></filename>` tag in each XML file carries the corresponding, consistent file name. Finally, it dumps the annotated images in `data/images/` and the annotation files in `data/tagged`.
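A minimal sketch of what this step does, assuming LabelImg-style (PASCAL VOC) XML and `.jpg` images; the paths and details of the real `transform_files.py` may differ:

```python
# Sketch only, not the actual transform_files.py: sync each XML's <filename>
# tag with its image and copy both into data/images/ and data/tagged/.
import os
import shutil
import xml.etree.ElementTree as ET

src_images = "path/to/source/images"        # set like the src_images variable
src_annotated = "path/to/source/xml"        # set like the src_annotated variable

os.makedirs("data/images", exist_ok=True)
os.makedirs("data/tagged", exist_ok=True)

for xml_name in os.listdir(src_annotated):
    if not xml_name.endswith(".xml"):
        continue
    image_name = os.path.splitext(xml_name)[0] + ".jpg"   # assumes .jpg tiles
    tree = ET.parse(os.path.join(src_annotated, xml_name))
    tree.find("filename").text = image_name               # keep the tag consistent
    tree.write(os.path.join("data/tagged", xml_name))
    shutil.copy(os.path.join(src_images, image_name),
                os.path.join("data/images", image_name))
```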
- `split_data.py` -> Usage: `python split_data.py split_ratio`. Example: `python split_data.py .80` will split the data and annotated folders into 80% and 20% for training and testing purposes and dump them into the respective folders: `images_train`, `images_test`, `tagged_train`, `tagged_test`.
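A rough sketch of the split, assuming images in `data/images/` with matching XML in `data/tagged/`; the real `split_data.py` may shuffle or copy differently:

```python
# Sketch only: split images and their annotation files by the given ratio.
import os
import random
import shutil
import sys

split_ratio = float(sys.argv[1])            # e.g. 0.80
images = sorted(os.listdir("data/images"))
random.shuffle(images)
cutoff = int(len(images) * split_ratio)

for folder in ["images_train", "images_test", "tagged_train", "tagged_test"]:
    os.makedirs(os.path.join("data", folder), exist_ok=True)

for i, image_name in enumerate(images):
    xml_name = os.path.splitext(image_name)[0] + ".xml"
    img_dst = "images_train" if i < cutoff else "images_test"
    xml_dst = "tagged_train" if i < cutoff else "tagged_test"
    shutil.copy(os.path.join("data/images", image_name),
                os.path.join("data", img_dst, image_name))
    shutil.copy(os.path.join("data/tagged", xml_name),
                os.path.join("data", xml_dst, xml_name))
```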
- `xml_to_csv.py` -> Converts the XML information in the `tagged_train` and `tagged_test` folders into corresponding `csv` files.
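In the spirit of the raccoon detector's converter, the conversion boils down to one CSV row per bounding box; a hedged sketch (column names follow the label CSVs referenced below):

```python
# Sketch only: flatten PASCAL VOC XML annotations into a CSV, one row per box.
import glob
import xml.etree.ElementTree as ET
import pandas as pd

def xml_to_csv(xml_dir):
    rows = []
    for xml_file in glob.glob(xml_dir + "/*.xml"):
        root = ET.parse(xml_file).getroot()
        size = root.find("size")
        for obj in root.findall("object"):
            box = obj.find("bndbox")
            rows.append((root.find("filename").text,
                         int(size.find("width").text),
                         int(size.find("height").text),
                         obj.find("name").text,           # e.g. "burrow"
                         int(box.find("xmin").text),
                         int(box.find("ymin").text),
                         int(box.find("xmax").text),
                         int(box.find("ymax").text)))
    columns = ["filename", "width", "height", "class",
               "xmin", "ymin", "xmax", "ymax"]
    return pd.DataFrame(rows, columns=columns)

for split in ["train", "test"]:
    xml_to_csv("../data/tagged_" + split).to_csv(
        "../data/tagged_%s_labels.csv" % split, index=False)
```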
- `generate_tfrecord.py` -> Adapted from this awesome raccoon detector project, check it out! This script creates the TFRecords for each data point in our training and testing data sets. To read more about why this step is essential, read about TFRecords here.
Usage:
```
python generate_tfrecord.py --csv_input=../data/tagged_train_labels.csv --image_dir=../data/images_train/ --output_path=../data/train.record
python generate_tfrecord.py --csv_input=../data/tagged_test_labels.csv --image_dir=../data/images_test/ --output_path=../data/test.record
```
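For orientation, each group of CSV rows for an image ends up as a `tf.train.Example`; a heavily trimmed sketch of the idea (the real `generate_tfrecord.py` also reads the image bytes, adds class labels, and normalizes the box coordinates by image size):

```python
# Sketch only: the shape of one tf.train.Example written to a TFRecord file.
import tensorflow as tf

def bytes_feature(value):
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))

def float_list_feature(values):
    return tf.train.Feature(float_list=tf.train.FloatList(value=values))

example = tf.train.Example(features=tf.train.Features(feature={
    "image/filename": bytes_feature(b"tile_0001.jpg"),     # hypothetical tile
    "image/encoded": bytes_feature(b"<raw jpeg bytes>"),
    "image/format": bytes_feature(b"jpeg"),
    "image/object/class/text": bytes_feature(b"burrow"),
    "image/object/bbox/xmin": float_list_feature([0.12]),  # normalized coords
    "image/object/bbox/ymin": float_list_feature([0.30]),
    "image/object/bbox/xmax": float_list_feature([0.25]),
    "image/object/bbox/ymax": float_list_feature([0.41]),
}))

with tf.python_io.TFRecordWriter("../data/train.record") as writer:
    writer.write(example.SerializeToString())
```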
- To train the model ->
```
cd code/object-detection/
python train.py --logtostderr --train_dir=../../data/training/ --pipeline_config_path=../../data/training/faster_rcnn_inception_v2_pets.config
```
- Export Inference Graph -> [Run from the main directory]
```
python object_detection/export_inference_graph.py --input_type image_tensor --pipeline_config_path data/training/faster_rcnn_inception_v2_pets.config --trained_checkpoint_prefix data/training/model.ckpt-{model_number} --output_directory inference_graph
```
[Note: `model_number` here is the number of the last checkpoint saved in the training folder. Use the `export_inference_graph.py` inside the `object_detection` folder under the root directory, not the one in the `slim` directory, as this one has the `--output_directory` flag.]
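A minimal sketch of using the exported graph for detection, assuming TensorFlow 1.x and the default `frozen_inference_graph.pb` and tensor names that `export_inference_graph.py` produces:

```python
# Sketch only: load the frozen graph and run it on one image tile.
import numpy as np
import tensorflow as tf

detection_graph = tf.Graph()
with detection_graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile("inference_graph/frozen_inference_graph.pb", "rb") as f:
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name="")

with tf.Session(graph=detection_graph) as sess:
    image_tensor = detection_graph.get_tensor_by_name("image_tensor:0")
    boxes = detection_graph.get_tensor_by_name("detection_boxes:0")
    scores = detection_graph.get_tensor_by_name("detection_scores:0")
    image_np = np.zeros((600, 600, 3), dtype=np.uint8)   # placeholder aerial tile
    out_boxes, out_scores = sess.run([boxes, scores],
                                     feed_dict={image_tensor: image_np[None]})
```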
- `remove_no_burrows_xml_files_by_reading_list.py path_to_folder empty_list` -> Ignores XML files with no annotations for training purposes, given an explicit list of empty files. Usage: `python remove_no_burrows_xml_files_by_reading_list.py ../data/Annotated_2018/ [0,4,9,14,24,49,59,69,74,84,94,99,104,109,114,199]`
- `remove_no_burrows_xml_files_by_reading_xml.py` -> Ignores XML files with no annotations for training purposes by reading the XML files themselves. Usage: `python remove_no_burrows_xml_files_by_reading_xml.py ../data/Annotated_2018/`
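A small sketch of the XML-reading variant, assuming an annotation file with no `<object>` elements means the tile has no burrows; the real script may delete or move such files rather than just report them:

```python
# Sketch only: flag annotation files that contain no <object> (no burrows).
import glob
import os
import sys
import xml.etree.ElementTree as ET

annotation_dir = sys.argv[1]                 # e.g. ../data/Annotated_2018/
for xml_file in glob.glob(os.path.join(annotation_dir, "*.xml")):
    if not ET.parse(xml_file).getroot().findall("object"):
        print("no burrows annotated, ignoring:", xml_file)
```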