Neural networks are the state of the art for artificial intelligence. Advanced deep neural networks have even been shown to outperform humans in challenging image recognition tasks. However, these networks are usually trained in supervised settings on large amounts of well-structured, labeled data. Training, which lasts from days to weeks on large-scale computing platforms, consumes energy on the order of hundreds of kilowatts. In contrast, the biological brain recognizes images accurately and almost effortlessly, with remarkable computational efficiency, in an unsupervised fashion. To close this gap, new methods for implementing unsupervised neural network learning on energy-efficient hardware are crucial. In this work, we investigate hardware implementations of spiking neural networks using resistive synaptic devices. We discuss device-level and network-level approaches to improving energy efficiency during network training. We show that accurate modeling of synaptic devices is essential for assessing network performance in hardware implementations. We develop compact models of device variations based on experimental data and investigate their impact on unsupervised learning performance. Finally, we discuss methodologies for redesigning the network training algorithm around realistic device characteristics in order to compensate for variations, demonstrating high recognition accuracy and energy efficiency during network training in hardware.
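To make the idea of a compact device model concrete, the sketch below shows one common way such models are structured: a nonlinear conductance update per programming pulse, bounded between a minimum and maximum conductance, with Gaussian cycle-to-cycle variation added to each step. All parameter names and values here are illustrative assumptions for exposition, not the fitted values extracted from the experimental data in this work.

```python
import random

# Illustrative compact model of a resistive synaptic device (assumed parameters).
G_MIN, G_MAX = 0.1, 1.0   # conductance bounds, arbitrary units
NONLINEARITY = 5.0        # larger value -> smaller steps as g nears G_MAX

def potentiate(g, c2c_sigma=0.02, rng=random):
    """Apply one potentiation pulse: a nonlinear step toward G_MAX,
    perturbed by Gaussian cycle-to-cycle variation and clamped to
    the device's conductance bounds."""
    ideal_step = (G_MAX - g) / NONLINEARITY          # step shrinks near G_MAX
    noise = rng.gauss(0.0, c2c_sigma * (G_MAX - G_MIN))
    return min(G_MAX, max(G_MIN, g + ideal_step + noise))

# Simulate a train of potentiation pulses on a single device.
g = G_MIN
trace = [g]
for _ in range(20):
    g = potentiate(g)
    trace.append(g)

print(trace[-1])  # conductance saturates toward G_MAX but stays bounded
```

In a network-level simulation, a model of this form would replace the ideal weight update in the learning rule, so that the training algorithm can be evaluated, and redesigned, against realistic device behavior.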