CSci 4511 - Homework 6

Homework 6 -- due Thursday May 3


This homework will be graded out of 100 points. It is entirely extra credit and can add as much as 2.5% to your course grade.

Coding portion

  1. [20 points] Put together an image set.
    Get at least 3 different kinds of images to classify. You cannot use mine for your first three datasets; if you're interested in classifying 4 or more categories, go ahead and use one of mine as your fourth / fifth / Nth dataset. Try to get several hundred images of each type; 300 can work, but 1000 is better. Report on the contents and size of your dataset; for each image type, report the items described in the directions here. Note that the categories you choose will affect your network's ability to differentiate; I would expect it to take more images to correctly classify puppies versus full-grown dogs versus wolves, and fewer images to correctly classify dogs versus trucks versus couches. Use appropriate images; you'll be submitting some of them with your homework. Within that guideline, pick whatever you want. (A short script for taking inventory of your dataset is sketched after this assignment list.)
  2. [60 points] Train your network and report on it!
    I recommend at least 20-30 epochs. I also recommend using roughly equal numbers of images in each category (e.g. training sample sizes of 490, 500, and 510 will perform better than sample sizes of 490, 1000, and 5000).
    1. A network that you can download and that should work as-is is provided: the network itself, a script to build and train it, and a script that uses the weights you just found to predict things. For your very first run through the network, you can use the dataset I used, available at Dropbox.
    2. Generate and train different networks by changing at least 5 parameters, changing at least one thing each time; the point of this exercise is to see how changes to the network affect its performance. Explain each parameter change (from what, to what) before you report on the following items.
    3. Record the loss and accuracy graphs for each change (these graphs are generated for you by the training script; a sketch of how such a plot is produced also appears after this assignment list). Your report should contain at least 5 graphs.
    4. Along with the graphs, state whether each change required more, fewer, or about the same number of epochs to reach the same accuracy as the other network variations. You may also report these numbers in a table, but I still want to see the accuracy and loss graphs, along with your assessment of whether each change made your network better, worse, or had no impact.
    5. Changes you might want to experiment with (a sketch after this assignment list shows where these settings typically live in a script like this):
      • Varying the sizes of the train and test sets (e.g. from 80% training, 20% testing, to 50% training, 50% testing, or some such new split).
      • Image size (default is 96x96). Smaller images will train faster; larger images are more informative. You may run out of memory if you use large images. Remember to change the image size requested in the train AND predict files. I would stick with even numbers; some nice multiple of 16 or 32 is usually good.
      • Number of epochs. Is 1000 much better than 50?
      • Number of images in each category. Are they roughly equal? Does one image type have far fewer or far more images?
      • The data augmentation you use. The default adds rotation, shifts up and down, zoom, shearing, and horizontal flips. You can remove some of these, or alter them (e.g. change rotation_range from 25 to 45).
      • Number of classes. Is training on 5 image types much different than training on 3 image types?
      • The filter size in each convolutional layer. The default is 3x3. How does 4x4 perform? 2x2? (Look for the line Conv2D(64, (3, 3), padding="same", input_shape=inputShape) in smallervggnet.py.)
      • Use an activation function other than ReLU, like "tanh" or "sigmoid". See possible functions here - https://keras.io/activations/
      • The network layers themselves! Snip some layers, or add new ones. What if you take out an entire Convolution / ReLU / Pooling sequence?
  3. [20 points] Run the "noisy" images you saved when putting together your image set through your trained network, and report on some of them. Show at least three images, aiming for a mix of easy, hard, and nonsense images: show each image and report what your network predicted for it (the label and, ideally, the confidence). A minimal prediction sketch appears after this list.
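
For item 1, the snippet below is a minimal sketch for taking inventory of your image set. It assumes (this layout is my assumption, not something the provided scripts require) that your images are organized one folder per class under a top-level dataset/ directory; the directory and class names are placeholders.

    # Minimal inventory sketch: count images per class, assuming a layout like
    #   dataset/dogs/*.jpg, dataset/trucks/*.jpg, dataset/couches/*.jpg
    # (directory and class names are placeholders -- substitute your own).
    import os

    DATASET_DIR = "dataset"                  # hypothetical top-level folder
    IMAGE_EXTS = (".jpg", ".jpeg", ".png")

    for class_name in sorted(os.listdir(DATASET_DIR)):
        class_dir = os.path.join(DATASET_DIR, class_name)
        if not os.path.isdir(class_dir):
            continue
        count = sum(1 for f in os.listdir(class_dir)
                    if f.lower().endswith(IMAGE_EXTS))
        print(f"{class_name}: {count} images")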
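
For item 2.3, the training script already produces the loss/accuracy plot for you; the function below is only a rough sketch of how such a plot is typically made from the History object that Keras' model.fit returns, in case you want to regenerate or restyle your graphs. The history keys "acc"/"val_acc" are an assumption that matches older (2018-era) Keras; newer versions name them "accuracy"/"val_accuracy".

    # Rough sketch (not the provided script) of plotting training curves from
    # a Keras History object. Keys "acc"/"val_acc" assume older Keras; newer
    # versions use "accuracy"/"val_accuracy".
    import numpy as np
    import matplotlib.pyplot as plt

    def plot_history(H, epochs, out_path="plot.png"):
        """Plot training/validation loss and accuracy and save the figure."""
        N = np.arange(0, epochs)
        plt.figure()
        plt.plot(N, H.history["loss"], label="train_loss")
        plt.plot(N, H.history["val_loss"], label="val_loss")
        plt.plot(N, H.history["acc"], label="train_acc")
        plt.plot(N, H.history["val_acc"], label="val_acc")
        plt.title("Training Loss and Accuracy")
        plt.xlabel("Epoch")
        plt.ylabel("Loss / Accuracy")
        plt.legend(loc="upper right")
        plt.savefig(out_path)

    # Usage (after training): H = model.fit(...); plot_history(H, epochs=30)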
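
For the list of suggested changes in item 2.5, the toy model below marks where those settings usually live in a Keras script of this kind. It is NOT the provided smallervggnet.py; every name here (EPOCHS, IMAGE_DIMS, TEST_SPLIT, and so on) is an illustrative assumption -- match them to whatever your copies of the training script and smallervggnet.py actually use.

    # Toy stand-in (NOT the provided smallervggnet.py) showing where the knobs
    # listed above typically appear. All names are illustrative assumptions.
    from keras.models import Sequential
    from keras.layers import Conv2D, Activation, MaxPooling2D, Flatten, Dense
    from keras.preprocessing.image import ImageDataGenerator

    EPOCHS = 30               # number of epochs: try 50 vs. 1000 (passed to model.fit)
    IMAGE_DIMS = (96, 96, 3)  # image size: stick to multiples of 16 or 32
    TEST_SPLIT = 0.2          # 0.2 = 80/20 split, 0.5 = 50/50 (for train_test_split)
    NUM_CLASSES = 3           # number of classes: try 3 vs. 5 categories

    # Data augmentation: drop arguments or change their ranges to experiment.
    # The training script would feed aug.flow(trainX, trainY) to model.fit.
    aug = ImageDataGenerator(rotation_range=25, width_shift_range=0.1,
                             height_shift_range=0.1, shear_range=0.2,
                             zoom_range=0.2, horizontal_flip=True)

    model = Sequential()
    # Filter size: (3, 3) is the default; try (2, 2) or (4, 4).
    model.add(Conv2D(32, (3, 3), padding="same", input_shape=IMAGE_DIMS))
    # Activation: swap "relu" for "tanh" or "sigmoid".
    model.add(Activation("relu"))
    model.add(MaxPooling2D(pool_size=(3, 3)))
    # Removing or duplicating a Conv / Activation / Pooling block like the one
    # above is the "snip some layers / add new ones" experiment.
    model.add(Flatten())
    model.add(Dense(NUM_CLASSES))
    model.add(Activation("softmax"))
    model.compile(loss="categorical_crossentropy", optimizer="adam",
                  metrics=["accuracy"])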
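
For item 3, here is a minimal prediction sketch for running your saved network on the noisy images. It assumes the training script saved the model with model.save() and pickled a scikit-learn LabelBinarizer; the file names and the 96x96 size are placeholders and must match whatever your training run actually produced.

    # Minimal prediction sketch for the "noisy" images. File names and the
    # 96x96 size are placeholders -- they must match your training script.
    import pickle
    import numpy as np
    from keras.models import load_model
    from keras.preprocessing.image import load_img, img_to_array

    model = load_model("my_model.model")               # saved by the train script
    lb = pickle.loads(open("lb.pickle", "rb").read())  # pickled LabelBinarizer

    image = load_img("noisy_example.jpg", target_size=(96, 96))
    image = img_to_array(image) / 255.0                # same scaling as training
    image = np.expand_dims(image, axis=0)              # add the batch dimension

    proba = model.predict(image)[0]
    idx = np.argmax(proba)
    print("predicted: {} ({:.1f}%)".format(lb.classes_[idx], proba[idx] * 100))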





Copyright: © 2018 by the Regents of the University of Minnesota
Department of Computer Science and Engineering. All rights reserved.
Comments to: Marie Manner