Lecture Forum
Stochastic Gradient Estimation for Artificial Neural Networks
Posted: 2019-12-12 08:36:39


Speaker: Dr. Yijie Peng

Title: Stochastic Gradient Estimation for Artificial Neural Networks

Time: 3:00 PM, Monday, December 16, 2019

Venue: Room A507, Harbin Institute of Technology, Shenzhen


Abstract:

    We investigate a new approach to computing the gradients of artificial neural networks (ANNs), based on the so-called push-out likelihood ratio method. Unlike the widely used backpropagation (BP) method, which requires continuity of the loss function and the activation function, our approach bypasses this requirement by injecting artificial noise into the signals passed along the neurons. We show that this approach has computational complexity similar to that of BP, and moreover is advantageous in that it removes the backward recursion and yields transparent formulas even for higher-order gradients. Our approach allows efficient training of ANNs with more flexibility in the loss and activation functions, and shows empirical improvements in the robustness of ANNs under adversarial attacks and natural noise corruptions.
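To make the noise-injection idea concrete, here is a minimal sketch of a score-function (likelihood ratio) gradient estimator for a single neuron with a discontinuous step activation, where backpropagation is not applicable. The Gaussian noise model, the toy loss, and all variable names are illustrative assumptions for this sketch, not the speaker's actual formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

x = np.array([1.0, -0.5])   # fixed input (illustrative)
t = 1.0                     # regression target
sigma = 0.5                 # std of the injected Gaussian noise

def loss(z):
    y = (z > 0).astype(float)   # step activation: discontinuous in z
    return (y - t) ** 2

def lr_gradient(w, n=200_000):
    """Estimate d E[loss(w.x + eps)] / dw with eps ~ N(0, sigma^2).

    The noisy pre-activation z has density N(w.x, sigma^2), so the score is
    d log p(z; w) / dw = (z - w.x) / sigma^2 * x, and the gradient estimate
    is the sample mean of loss(z) * score -- no derivative of `loss` needed.
    """
    mu = w @ x
    z = mu + sigma * rng.standard_normal(n)
    score = (z - mu)[:, None] / sigma**2 * x[None, :]
    return (loss(z)[:, None] * score).mean(axis=0)

w = np.array([0.3, 0.2])
print(lr_gradient(w))   # approximates the analytic gradient -phi(0.4)/sigma * x
```

Because only the density of the injected noise is differentiated, the estimator tolerates discontinuous losses and activations, which is the flexibility the abstract highlights over BP.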

 

Biography

    Dr. Yijie Peng is currently an assistant professor in the Department of Industrial Engineering and Management at Peking University (PKU). He received his Ph.D. from the Department of Management Science at Fudan University and his B.S. degree from the School of Mathematics at Wuhan University. Before joining PKU, he worked as an assistant professor at George Mason University, and as a postdoctoral scholar at Fudan University and at the R.H. Smith School of Business of the University of Maryland, College Park. Many of his publications appear in high-quality journals, including Operations Research, IEEE Transactions on Automatic Control, and INFORMS Journal on Computing. He received the 2019 Outstanding Simulation Publication Award of the INFORMS Simulation Society. His research interests include stochastic modeling and analysis, simulation optimization, machine learning, data analytics, and healthcare.