
ICML 2018 Accepted Papers Announced! A Look at the Hottest Machine Learning Papers and Research Trends

[Date: 2018-05-15] Source: 新智元 (AI Era)

Recommended by 新智元

Source: 专知 (Zhuanzhi)

Editor: 克雷格

[新智元 Overview] ICML 2018 announced its accepted papers last week, and organizations and prominent researchers (Google Brain, DeepMind, Facebook, Microsoft, and major universities among them) shared the good news on Twitter. Congratulations to all! We have collected some of the papers drawing the most attention on Twitter to give readers a sense of the current hot research directions in machine learning.

1. Differentiable Dynamic Programming for Structured Prediction and Attention

The most-discussed paper is this one by first author Arthur Mensch of Inria Parietal (France), also one of the scikit-learn authors. It concerns differentiable dynamic programming for structured prediction and attention.

The author highlights: "Sparsity and backprop in CRF-like inference layers using max-smoothing, application in text + time series (NER, NMT, DTW)."

About 600 likes on Twitter so far.

Paper link:

http://www.zhuanzhi.ai/document/34c4176a60e002b524b56b5114db0e78

One commenter praised it highly: "one of the most innovative deep learning papers!"

We encourage everyone to read it!
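The paper's central trick is replacing the hard max inside a dynamic program with a smoothed max, so that gradients can flow through the recursion. A minimal NumPy sketch of log-sum-exp smoothing (our own illustration of the idea, not the paper's code):

```python
import numpy as np

def smoothed_max(x, gamma=1.0):
    """Log-sum-exp smoothing of max: gamma * log(sum(exp(x / gamma))).
    As gamma -> 0 this approaches max(x); its gradient is a softmax
    distribution, which is what makes the dynamic program differentiable."""
    x = np.asarray(x, dtype=float)
    m = x.max()  # shift for numerical stability
    return m + gamma * np.log(np.exp((x - m) / gamma).sum())

def smoothed_argmax(x, gamma=1.0):
    """Gradient of smoothed_max w.r.t. x: the softmax weights."""
    x = np.asarray(x, dtype=float)
    z = np.exp((x - x.max()) / gamma)
    return z / z.sum()
```

With a small `gamma` the smoothed value is close to the true max, while the gradient spreads mass over competing entries instead of being piecewise constant; the paper also studies a sparsity-inducing (sparsemax-style) smoothing with the same structure.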

2. WaveRNN and Parallel WaveNet

Two papers from DeepMind on speech synthesis!

WaveRNN: http://arxiv.org/abs/1802.08435

Parallel WaveNet: http://arxiv.org/abs/1711.10433

WaveNet is already famous; Parallel WaveNet runs roughly a thousand times faster than the original, sounds more natural, and is already deployed in Google's own product, Google Assistant.

3. Analyzing GAN Performance

From the Goodfellow team at Google Brain: Is Generator Conditioning Causally Related to GAN Performance? Findings: 1. the spectrum of the generator's input-output Jacobian predicts the Inception Score; 2. intervening to change the spectrum affects scores substantially.

Paper link: https://t.co/cXQDEE2Uee
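The quantity the paper studies is the spectrum (singular values) of the generator's Jacobian with respect to its latent input. A toy sketch of how one might estimate such a spectrum by finite differences (the generator, seed, and sizes here are all invented for illustration):

```python
import numpy as np

def jacobian(f, z, eps=1e-5):
    """Finite-difference Jacobian of f at z (f: R^n -> R^m)."""
    z = np.asarray(z, dtype=float)
    f0 = f(z)
    J = np.zeros((f0.size, z.size))
    for i in range(z.size):
        dz = np.zeros_like(z)
        dz[i] = eps
        J[:, i] = (f(z + dz) - f0) / eps
    return J

# Toy "generator": a fixed random two-layer map from latent z to output x.
rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(16, 4)), rng.normal(size=(8, 16))
def G(z):
    return W2 @ np.tanh(W1 @ z)

# Singular values of the Jacobian at a sampled latent point; the paper
# relates statistics of this spectrum to sample quality (Inception Score).
s = np.linalg.svd(jacobian(G, rng.normal(size=4)), compute_uv=False)
condition_number = s.max() / s.min()
```

In practice the Jacobian of a real generator would be computed with automatic differentiation rather than finite differences; the sketch only shows what "spectrum of the input-output Jacobian" refers to.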

4. Optimization: Dissecting Adam

Dissecting Adam: The Sign, Magnitude and Variance of Stochastic Gradients

Paper link: https://arxiv.org/abs/1705.07774
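The paper's title refers to decomposing Adam's per-coordinate step into a sign and a magnitude that depends on gradient variance. A small sketch of that decomposition (our illustration of the standard Adam update, not the paper's code):

```python
import numpy as np

def adam_direction(grads, beta1=0.9, beta2=0.999, eps=1e-8):
    """Adam's per-coordinate update direction after a stream of gradients.
    The dissection: the step factors into sign(m_hat) times a magnitude
    |m_hat| / sqrt(v_hat), which shrinks where gradients are noisy."""
    m = np.zeros_like(grads[0])
    v = np.zeros_like(grads[0])
    for t, g in enumerate(grads, start=1):
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        m_hat = m / (1 - beta1 ** t)  # bias correction
        v_hat = v / (1 - beta2 ** t)
    return m_hat / (np.sqrt(v_hat) + eps)

# A coordinate with a consistent gradient gets a step near +/-1 (sign-like);
# an alternating (high-variance) coordinate is damped toward 0.
steady = adam_direction([np.array([1.0])] * 100)
noisy = adam_direction([np.array([(-1.0) ** t]) for t in range(100)])
```

On the steady coordinate the step is essentially `sign(g)`, while on the alternating one the variance term suppresses it, which is the sign-versus-variance behavior the paper analyzes.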

5. Image Transformer

Paper link: https://arxiv.org/abs/1802.05751

Other accepted papers (with links):

Bayesian Quadrature for Multiple Related Integrals 

https://arxiv.org/abs/1801.04153 

Stein Points

https://arxiv.org/abs/1803.10161

Active Learning with Logged Data

https://arxiv.org/abs/1802.09069

Analyzing the Robustness of Nearest Neighbors to Adversarial Examples

https://arxiv.org/abs/1706.03922

Hierarchical Imitation and Reinforcement Learning

https://arxiv.org/abs/1803.00590

Analysis of Minimax Error Rate for Crowdsourcing and Its Application to Worker Clustering Model

https://arxiv.org/abs/1802.04551

Detecting and Correcting for Label Shift with Black Box Predictors

https://arxiv.org/abs/1802.03916

Yes, but Did It Work?: Evaluating Variational Inference

https://arxiv.org/abs/1802.02538

MAGAN: Aligning Biological Manifolds

https://arxiv.org/abs/1803.00385

Does Distributionally Robust Supervised Learning Give Robust Classifiers?

https://arxiv.org/abs/1611.02041

Knowledge Transfer with Jacobian Matching

https://arxiv.org/abs/1803.00443

Kronecker Recurrent Units

https://arxiv.org/abs/1705.10142

Entropy-SGD optimizes the prior of a PAC-Bayes bound: Generalization properties of Entropy-SGD and data-dependent priors

https://arxiv.org/abs/1712.09376

The Manifold Assumption and Defenses Against Adversarial Perturbations

https://arxiv.org/abs/1711.08001

Overcoming catastrophic forgetting with hard attention to the task

https://arxiv.org/abs/1801.01423

On the Opportunities and Pitfalls of Nesting Monte Carlo Estimators

https://arxiv.org/abs/1709.06181

Tighter Variational Bounds are Not Necessarily Better

https://arxiv.org/abs/1802.04537

LaVAN: Localized and Visible Adversarial Noise

https://arxiv.org/abs/1801.02608

Extracting Automata from Recurrent Neural Networks Using Queries and Counterexamples

https://arxiv.org/abs/1711.09576

Geometry Score: A Method For Comparing Generative Adversarial Networks

https://arxiv.org/abs/1802.02664
