
Welcome to the swirll-cam-cloud-classifier repo!

The goal of this repository is to give students at UAH an opportunity to collaborate on a GitHub project developing a cloud classifier for the SWIRLL roundshot camera, as well as to provide a sample methodology.

Download the repo to your local machine:

git clone https://github.com/Corey4005/swirll-cam-cloud-classifier.git

Credits for Image Labeler Used in This Project

  • We use the image labeler found in this repo, which was built by Tzuta Lin, to label cloud classes in SWIRLL images. We have simply copied the code from their repo into this one so that all of the important code can be pulled together.
  • There is documentation on how to use the image labeler in the imglabeler directory of this repo.

Credits for Compute Resources

This work was made possible in part by a grant of high performance computing resources and technical support from the Alabama Supercomputer Authority.

Cloud Mask Tool

Individual images from the SWIRLL roundshot camera can be masked, returning the cloud fraction in oktas, using the plot_cloudfraction.py script. A demonstration can be found on line 14 of the SWIRLLCAM-Demo Jupyter Notebook. Given the filepath to an image of interest, a two-line command passed to a Jupyter Notebook cell generates both the raw image and its "blue sky characterization" (a measure of how much of the image is sky and how much is cloud).
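As a rough sketch of that two-line usage (the import path and function name here are assumptions, not confirmed by this repo; see the notebook for the exact call):

# Minimal sketch of the two-line cloud-mask call described above.
# The module and function names are hypothetical placeholders;
# line 14 of the SWIRLLCAM-Demo notebook shows the real usage.
from plot_cloudfraction import plot_cloudfraction

# Display the raw image and its blue-sky characterization,
# reporting the cloud fraction in oktas.
plot_cloudfraction('/path/to/roundshot_image.jpg')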

Object Detection Model

The fair-weather-cumulus object detector was trained with TensorFlow 1.14.0 using the ssd_mobilenet_v1_coco model. This is a convolutional neural network that separates standard convolution into two steps, described in Figure 2 below:

Standard convolution kernels have a width Dk, a height Dk, and a depth M. N of these kernels are applied across a feature map of horizontal width Df and vertical height Df, producing a feature map G at a computational cost of Dk x Dk x M x N x Df x Df. The ssd_mobilenet_v1_coco model breaks this standard method into two steps, a depthwise and a pointwise convolution, which results in a computational cost of Dk x Dk x M x Df x Df + M x N x Df x Df. Dividing the two-step cost by the standard cost and canceling like terms gives a reduction factor of 1/N + 1/Dk^2, which for 3 x 3 kernels works out to roughly nine times less computation. The linear algebra behind this intuition is explained in greater detail by Howard et al.'s team at Google in section 3.1 of their paper, MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications.
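To make the cost comparison concrete, here is a small worked example in Python; the shapes are illustrative values of our own, not numbers from the paper:

# Worked example of the convolution-cost comparison described above.
# Illustrative shapes (our assumption): 3x3 kernels (Dk=3), M=32 input
# channels, N=64 output channels, and a 112x112 feature map (Df=112).
Dk, M, N, Df = 3, 32, 64, 112

standard = Dk * Dk * M * N * Df * Df                # standard convolution cost
two_step = Dk * Dk * M * Df * Df + M * N * Df * Df  # depthwise + pointwise cost

print(standard / two_step)   # ~7.9x fewer multiply-adds
print(1 / N + 1 / Dk**2)     # reduction factor 1/N + 1/Dk^2, ~0.127 here

With 3 x 3 kernels the factor approaches 1/9 as N grows, which is where the roughly nine-fold savings comes from.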

About

The goal of this repository is to provide atmospheric scientists with an accessible methodology for cloud-type classification utilizing the SWIRLL Roundshot camera, TensorFlow's ssd_mobilenet_v1_coco model, and high-performance computing.
