Neural network architectures are many and varied, and often take days or weeks to train from scratch. With common architectures such as VGG-16 having on the order of hundreds of millions of trainable parameters, it is advantageous to reduce the cost of training by repurposing existing parameters, making only small modifications to the final layers of a network. This presentation will demonstrate ways in which convolutional neural networks can be repurposed for tasks for which they were not originally designed.