Department of Computer Engineering Book Chapter Collection

Recent Submissions

Now showing 1 - 2 of 2
  • Item
    Reinforcement learning and trustworthy autonomy
    (Springer International Publishing, 2018) Luo J.; Green S.; Feghali P.; Legrady G.; Koç, Çetin Kaya
    Cyber-Physical Systems (CPS) possess physical and software interdependence and are typically designed by teams of mechanical, electrical, and software engineers. The interdisciplinary nature of CPS makes them difficult to design with safety guarantees. When autonomy is incorporated, design complexity and, especially, the difficulty of providing safety assurances are increased. Vision-based reinforcement learning is an increasingly popular family of machine learning algorithms that may be used to provide autonomy for CPS. Understanding how visual stimuli trigger various actions is critical for trustworthy autonomy. In this chapter we introduce reinforcement learning in the context of Microsoft's AirSim drone simulator. Specifically, we guide the reader through the necessary steps for creating a drone simulation environment suitable for experimenting with vision-based reinforcement learning. We also explore how existing vision-oriented deep learning analysis methods may be applied toward safety verification in vision-based reinforcement learning applications. © Springer Nature Switzerland AG 2018.
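
    As a rough illustration of the setup this abstract describes, the sketch below runs a minimal vision-based reinforcement learning loop: Q-learning with a linear Q-function over flattened camera pixels. The environment class, reward logic, and dimensions are hypothetical stand-ins for an AirSim-style simulator, not the chapter's actual code.

        import numpy as np

        # Hypothetical stand-in for an AirSim-style drone camera environment:
        # observations are small grayscale frames, actions are discrete motions.
        class StubDroneEnv:
            def __init__(self, size=8, n_actions=4, seed=0):
                self.rng = np.random.default_rng(seed)
                self.size = size
                self.n_actions = n_actions

            def reset(self):
                self.t = 0
                return self.rng.random((self.size, self.size))  # fake camera frame

            def step(self, action):
                self.t += 1
                obs = self.rng.random((self.size, self.size))
                reward = 1.0 if action == 0 else 0.0  # toy reward: action 0 is "good"
                done = self.t >= 20
                return obs, reward, done

        # Q-learning with a linear Q-function over flattened pixels (epsilon-greedy).
        env = StubDroneEnv()
        n_features = env.size * env.size
        W = np.zeros((env.n_actions, n_features))  # one weight row per action
        alpha, gamma, eps = 0.01, 0.99, 0.1

        for episode in range(50):
            x = env.reset().ravel()
            done = False
            while not done:
                q = W @ x
                a = env.rng.integers(env.n_actions) if env.rng.random() < eps else int(np.argmax(q))
                obs, r, done = env.step(a)
                x_next = obs.ravel()
                target = r + (0.0 if done else gamma * np.max(W @ x_next))
                W[a] += alpha * (target - q[a]) * x  # TD(0) update on the taken action
                x = x_next

    In a real AirSim experiment, the stub environment would be replaced by calls to the simulator's camera and control APIs, and the linear Q-function by a deep network, but the observe-act-update loop has the same shape.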
  • Item
    Mathematical optimizations for deep learning
    (Springer International Publishing, 2018) Green, Sam; Vineyard, Craig M.; Koç, Çetin Kaya
    Deep neural networks are often computationally expensive, during both the training stage and inference stage. Training is always expensive, because back-propagation requires high-precision floating-point multiplication and addition. However, various mathematical optimizations may be employed to reduce the computational cost of inference. Optimized inference is important for reducing power consumption and latency and for increasing throughput. This chapter introduces the central approaches for optimizing deep neural network inference: pruning "unnecessary" weights, quantizing weights and inputs, sharing weights between layer units, compressing weights before transferring from main memory, distilling large high-performance models into smaller models, and decomposing convolutional filters to reduce multiply-and-accumulate operations. In this chapter, using a unified notation, we provide a mathematical and algorithmic description of the aforementioned deep neural network inference optimization methods. © Springer Nature Switzerland AG 2018.
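
    As a hedged illustration of two of the methods this abstract lists, the NumPy sketch below applies magnitude pruning and uniform symmetric 8-bit quantization to a random weight matrix. The pruning ratio, bit width, and weight matrix are illustrative assumptions, not the chapter's notation or chosen parameters.

        import numpy as np

        rng = np.random.default_rng(0)
        W = rng.normal(size=(64, 64)).astype(np.float32)  # example dense-layer weights

        # Magnitude pruning: zero the 90% of weights with smallest absolute value.
        threshold = np.quantile(np.abs(W), 0.9)
        W_pruned = np.where(np.abs(W) >= threshold, W, 0.0)

        # Uniform symmetric 8-bit quantization: map floats to int8 with one scale.
        scale = np.abs(W_pruned).max() / 127.0
        W_int8 = np.clip(np.round(W_pruned / scale), -127, 127).astype(np.int8)

        # Dequantize to measure the approximation error quantization introduced.
        W_dequant = W_int8.astype(np.float32) * scale
        print("sparsity:", np.mean(W_pruned == 0.0))
        print("max abs quantization error:", np.abs(W_pruned - W_dequant).max())

    The payoff of such transformations at inference time is that sparse weights can be skipped and int8 arithmetic replaces floating-point multiply-and-accumulate, trading a small, measurable approximation error for lower latency and power.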