Next meeting:
Total votes: 4 (last vote was 1 year ago)

Total votes: 2 (last vote was 1 year ago)

Total votes: 2 (last vote was 1 year ago)
Discussion leader: Nobody volunteered yet

Total votes: 1 (last vote was 6 months ago)
Author: Claudio Andrea Manzari, Yujin Park, Benjamin R. Safdi, Inbar Savoray
Discussion leader: Nobody volunteered yet

Total votes: 1 (last vote was 1 year ago)
Author: David Alesini, Danilo Babusci, Paolo Beltrame, Fabio Bossi, Paolo Ciambrone, Alessandro D'Elia, Daniele Di Gioacchino, Giampiero Di Pirro et al.
Discussion leader: Nobody volunteered yet

Total votes: 1 (last vote was 1 year ago)

Total votes: 1 (last vote was 1 year ago)
Author: Florian Goertz, Álvaro Pastor-Gutiérrez
Discussion leader: Nobody volunteered yet

Votes: 1 (6 years ago)
Author: [The LIGO Scientific Collaboration], [the Virgo Collaboration], B. P. Abbott, R. Abbott, T. D. Abbott, F. Acernese, K. Ackley, C. Adams et al.

Votes: 1 (6 years ago)
Author: Seyda Ipek, Tim Tait

Votes: 2 (6 years ago)
Title: Gravitational Waves from First-Order Phase Transitions: LIGO as a Window to Unexplored Seesaw Scales
Author: Vedran Brdar, Alexander J. Helmboldt, Jisuke Kubo

Votes: 5 (6 years ago)
Title: Reconciling the Diversity and Uniformity of Galactic Rotation Curves with Self-Interacting Dark Matter
Author: Tao Ren, Anna Kwa, Manoj Kaplinghat, Hai-Bo Yu

Votes: 2 (6 years ago)
Author: Christopher V. Cappiello, Kenny C. Y. Ng, John F. Beacom

Votes: 1 (6 years ago)
Author: Ryan Cooke, Michele Fumagalli

Votes: 3 (6 years ago)
Title: Dark Quark Nuggets
Author: Yang Bai, Andrew J. Long, Sida Lu

Votes: 4 (6 years ago)
Author: Alfredo Urbano, Hardi Veermäe

Votes: 4 (6 years ago)
Author: John T. Giblin, James B. Mertens, Glenn D. Starkman, Chi Tian

6 years ago
Title: Machine Learning Binarized Neural Networks: Training Deep Neural Networks with We
Link: https://arxiv.org/pdf/1602.02830.pdf
Description: We introduce a method to train Binarized Neural Networks (BNNs) - neural networks with binary weights and activations at run-time. At training-time the binary weights and activations are used for computing the parameter gradients. During the forward pass, BNNs drastically reduce memory size and accesses, and replace most arithmetic operations with bit-wise operations, which is expected to substantially improve power-efficiency. To validate the effectiveness of BNNs we conduct two sets of experiments on the Torch7 and Theano frameworks. On both, BNNs achieved nearly state-of-the-art results over the MNIST, CIFAR-10 and SVHN datasets. Last but not least, we wrote a binary matrix multiplication GPU kernel with which it is possible to run our MNIST BNN 7 times faster than with an unoptimized GPU kernel, without suffering any loss in classification accuracy. The code for training and running our BNNs is available on-line.
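The description's claim that BNNs "replace most arithmetic operations with bit-wise operations" rests on one identity: the dot product of two ±1 vectors equals 2·(number of matching positions) − n, and matches can be counted with XNOR plus popcount on bit-packed encodings. A minimal Python sketch of that idea follows; it is an illustrative toy, not the paper's Torch7/Theano implementation, and all function names here are my own.

```python
def binarize(xs):
    """Deterministic binarization: sign(x), mapping 0 to +1."""
    return [1 if x >= 0 else -1 for x in xs]

def dot(a, b):
    """Ordinary dot product, for comparison."""
    return sum(x * y for x, y in zip(a, b))

def pack(signs):
    """Pack a +/-1 vector into an int: bit i = 1 iff signs[i] == +1."""
    bits = 0
    for i, s in enumerate(signs):
        if s == 1:
            bits |= 1 << i
    return bits

def xnor_popcount_dot(a_bits, b_bits, n):
    """Dot product of two +/-1 vectors of length n given bit-packed.
    XNOR marks matching positions; dot = 2 * matches - n."""
    xnor = ~(a_bits ^ b_bits) & ((1 << n) - 1)  # mask to n bits
    matches = bin(xnor).count("1")              # popcount
    return 2 * matches - n

w = binarize([0.3, -1.2, 0.0, 2.5])   # [1, -1, 1, 1]
x = binarize([-0.7, -0.1, 0.4, 1.0])  # [-1, -1, 1, 1]
assert dot(w, x) == xnor_popcount_dot(pack(w), pack(x), len(w))
```

The GPU kernel mentioned in the description exploits the same trick at scale: 32 or 64 multiply-accumulates collapse into one XOR/XNOR and one popcount per machine word.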