In 2017, the University of Oxford, together with DeepMind's natural language research team, launched an advanced course: "Deep Natural Language Processing (Deep NLP)". The course materials (slides and videos) have been open-sourced on GitHub.

This is an applied course focussing on recent advances in analysing and generating speech and text using recurrent neural networks. We introduce the mathematical definitions of the relevant machine learning models and derive their associated optimisation algorithms. The course covers a range of applications of neural networks in NLP including analysing latent dimensions in text, transcribing speech to text, translating between languages, and answering questions. These topics are organised into three high level themes forming a progression from understanding the use of neural networks for sequential language modelling, to understanding their use as conditional language models for transduction tasks, and finally to approaches employing these techniques in combination with other mechanisms for advanced applications. Throughout the course the practical implementation of such models on CPU and GPU hardware is also discussed.

In summary, this applied course:

  • covers recent advances in analysing and generating speech and text with recurrent neural networks (RNNs);
  • introduces the mathematical definitions of the relevant machine learning models and derives their associated optimisation algorithms;
  • spans a wide range of neural-network applications in NLP, including analysing latent dimensions in text, transcribing speech to text, translating between languages, and answering questions.
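To make the first theme concrete, here is a minimal sketch of the kind of sequential (RNN) language model the course builds up to. All sizes, parameter names, and the toy sequence are illustrative assumptions, not taken from the course materials:

```python
import numpy as np

# Illustrative sketch of an RNN language model: at each step the network
# consumes one token, updates its hidden state, and outputs a probability
# distribution over the next token. Sizes are arbitrary toy values.
rng = np.random.default_rng(0)
vocab_size, hidden_size = 5, 8

E = rng.normal(0, 0.1, (vocab_size, hidden_size))   # token embeddings
W = rng.normal(0, 0.1, (hidden_size, hidden_size))  # hidden-to-hidden weights
U = rng.normal(0, 0.1, (hidden_size, vocab_size))   # hidden-to-vocab projection

def softmax(z):
    z = z - z.max()          # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def rnn_lm_step(token_id, h):
    """One recurrence step: new hidden state and next-token distribution."""
    h = np.tanh(E[token_id] + h @ W)
    return softmax(h @ U), h

h = np.zeros(hidden_size)
for t in [0, 3, 1]:          # a toy token-id sequence
    probs, h = rnn_lm_step(t, h)
```

In training, the negative log-probability assigned to each observed next token would be summed into a cross-entropy loss and minimised with gradient descent, which is the optimisation the course derives.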

https://github.com/oxford-cs-deepnlp-2017/lectures

Teaching Team

  • Phil Blunsom (Oxford University and DeepMind)
  • Chris Dyer (Carnegie Mellon University and DeepMind)
  • Edward Grefenstette (DeepMind)
  • Karl Moritz Hermann (DeepMind)
  • Andrew Senior (DeepMind)
  • Wang Ling (DeepMind)
  • Jeremy Appleyard (NVIDIA)

Some of the course materials have not yet been released; interested readers should keep an eye on the repository.
