
Dwyane05/machine-learning-notes

My continuously updated Machine Learning, Probabilistic Models and Deep Learning notes and demos (1000 slides), with video links

 
 


Recent Research Talks

Slides from the July 2018 DeeCamp course co-organized by Sinovation Ventures and Peking University, "When Probability Meets Neural Networks". Topics include: Expectation-Maximization & Matrix Capsule Networks; Determinantal Point Processes & Neural Network compression; Kalman Filter & LSTM; Model estimation & binary classification

Detailed illustration of Noise Contrastive Estimation (details & derivations), Probability Density Re-parameterization, and Natural Gradients

Video Tutorials for these notes

  • In 2015 I recorded about 20% of these notes as videos in Mandarin (all my notes and writings are in English). You can find them on YouTube, bilibili, and Youku.

Data Science

An extremely gentle 30-minute introduction to AI and Machine Learning. Thanks to my PhD student Haodong Chang for assisting with editing

Classification: logistic regression and softmax; Regression: linear and polynomial; Mixed Effects models [costFunction.m] and [soft_max.m]
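
As a taste of the softmax material, here is a minimal NumPy sketch of the softmax cross-entropy cost (the repo's own demos, [costFunction.m] and [soft_max.m], are MATLAB; this is an independent illustration, not a port of those files):

```python
import numpy as np

def softmax_cost(W, X, y):
    """Cross-entropy cost for softmax (multinomial logistic) regression.

    W: (n_features, n_classes) weights; X: (n_samples, n_features);
    y: (n_samples,) integer class labels.
    """
    logits = X @ W                                # (n_samples, n_classes)
    logits -= logits.max(axis=1, keepdims=True)   # stabilise the exponentials
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)
    # negative mean log-likelihood of the true classes
    return -np.mean(np.log(probs[np.arange(len(y)), y]))
```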

Recommender systems: collaborative filtering, Factorization Machines, Non-negative Matrix Factorisation, and the Multiplicative Update Rule
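
The Multiplicative Update Rule mentioned above is Lee & Seung's classic recipe for non-negative matrix factorisation; a minimal NumPy sketch (the function name and defaults are mine, not from the notes):

```python
import numpy as np

def nmf_multiplicative(V, rank, n_iter=200, eps=1e-9, seed=0):
    """Factorise V ~ W @ H (all entries non-negative) by minimising the
    Frobenius error with Lee & Seung's multiplicative updates."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank))
    H = rng.random((rank, m))
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update H with W fixed
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update W with H fixed
    return W, H
```

Because each update multiplies by a non-negative ratio, non-negativity is preserved automatically and no step size is needed.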

Dimensionality reduction: classic PCA and t-SNE
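
A minimal NumPy sketch of classic PCA via the SVD of the centred data matrix (an independent illustration; the notes may derive it through the covariance eigendecomposition instead):

```python
import numpy as np

def pca(X, n_components):
    """Project X onto its top principal components via SVD."""
    Xc = X - X.mean(axis=0)                  # centre the data
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:n_components]           # principal directions (rows)
    scores = Xc @ components.T               # low-dimensional projection
    explained_var = S[:n_components] ** 2 / (len(X) - 1)
    return scores, components, explained_var
```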

Introduction to data analytics, with accompanying Jupyter notebooks: supervised vs unsupervised learning, classification accuracy

Deep Learning

Optimisation methods in general, not limited to deep learning

Neural networks: basic neural networks and the multilayer perceptron

Convolutional neural networks, from the basics to recent research: a detailed explanation of CNNs, various loss functions (including Centre Loss and Contrastive Loss), Residual Networks, Capsule Networks, YOLO, and SSD
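
For example, the Contrastive Loss listed above (in Hadsell et al.'s pairwise form) fits in a few lines of NumPy; this sketch assumes 0/1 pair labels and a margin hyperparameter, which are conventional choices rather than details taken from the notes:

```python
import numpy as np

def contrastive_loss(f1, f2, same, margin=1.0):
    """Contrastive loss over a batch of embedding pairs.

    f1, f2: (batch, dim) embeddings; same: (batch,) 1 if the pair shares
    a label, else 0. Similar pairs are pulled together; dissimilar pairs
    are pushed apart until they clear the margin.
    """
    d = np.linalg.norm(f1 - f2, axis=1)
    loss = same * d**2 + (1 - same) * np.maximum(0.0, margin - d)**2
    return loss.mean()
```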

A systematic introduction to word representation in natural language processing: Word2Vec, skip-gram, GloVe, and fastText
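
A minimal sketch of the skip-gram-with-negative-sampling objective for a single training triple, assuming the separate input/output embedding matrices of the original Word2Vec formulation (variable names are mine):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sgns_loss(v_center, u_context, U_negatives):
    """Skip-gram with negative sampling: loss for one
    (center word, true context word, k noise words) triple.

    v_center: (dim,) input vector of the center word;
    u_context: (dim,) output vector of the observed context word;
    U_negatives: (k, dim) output vectors of sampled noise words.
    """
    pos = np.log(sigmoid(u_context @ v_center))           # score true pair high
    neg = np.log(sigmoid(-U_negatives @ v_center)).sum()  # score noise pairs low
    return -(pos + neg)
```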

Deep natural language processing: RNNs, LSTM, Seq2Seq with attention, beam search, "Attention Is All You Need", convolutional Seq2Seq, and Pointer Networks
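
A minimal, model-agnostic sketch of the beam search used in Seq2Seq decoding; `step_log_probs` is a hypothetical callback standing in for any decoder that scores the next token given a prefix, not an API from the notes:

```python
import numpy as np

def beam_search(step_log_probs, start_token, end_token, beam_width=4, max_len=20):
    """Keep the `beam_width` highest-scoring prefixes at each step.

    step_log_probs(prefix) -> (vocab_size,) log-probabilities of the
    next token given the prefix (e.g. from a Seq2Seq decoder).
    """
    beams = [([start_token], 0.0)]            # (prefix, cumulative log-prob)
    finished = []
    for _ in range(max_len):
        candidates = []
        for prefix, score in beams:
            log_p = step_log_probs(prefix)
            for tok in np.argsort(log_p)[-beam_width:]:   # top-k expansions
                candidates.append((prefix + [int(tok)], score + log_p[tok]))
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = []
        for prefix, score in candidates[:beam_width]:
            (finished if prefix[-1] == end_token else beams).append((prefix, score))
        if not beams:                          # every surviving beam has ended
            break
    return max(finished + beams, key=lambda c: c[1])
```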

Deep reinforcement learning: basic knowledge of reinforcement learning, Markov Decision Processes, the Bellman Equation, and Deep Q-Learning (under construction)
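
As a small illustration of how the Bellman Equation becomes an algorithm, here is the tabular Q-learning update (a sketch with conventional defaults for the learning rate and discount, not values taken from the notes):

```python
import numpy as np

def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """One tabular Q-learning step: move Q(s, a) toward the Bellman
    target r + gamma * max_a' Q(s', a')."""
    td_target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (td_target - Q[s, a])
    return Q
```

Deep Q-Learning replaces the table Q with a neural network and the increment with a gradient step on the same temporal-difference error.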

Basic knowledge of the Restricted Boltzmann Machine (RBM)

Probability and Statistics Background

Revision of Bayesian models, including the Bayesian predictive model and conditional expectation

Some useful distributions, conjugacy, MLE, MAP, the exponential family, and natural parameters
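
As a small illustration of conjugacy: a Beta prior on a Bernoulli success probability yields a Beta posterior in closed form (a sketch using the standard parameterisation; the function name is mine):

```python
def beta_bernoulli_posterior(alpha, beta, data):
    """Conjugacy in action: a Beta(alpha, beta) prior on a Bernoulli
    success probability, updated on 0/1 observations, stays Beta.
    Returns the posterior parameters in closed form."""
    heads = sum(data)
    return alpha + heads, beta + len(data) - heads
```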

Useful statistical properties that help with proofs in machine learning, including the Chebyshev and Markov inequalities
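
For reference, the two inequalities in their standard form:

```latex
% Markov's inequality: for non-negative X and any a > 0
P(X \ge a) \le \frac{\mathbb{E}[X]}{a}

% Chebyshev's inequality: for X with mean \mu, variance \sigma^2, and any k > 0
P(|X - \mu| \ge k\sigma) \le \frac{1}{k^2}
```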

Probabilistic Models

Proof of convergence for E-M, with examples of E-M via the Gaussian Mixture Model: [gmm_demo.m], [kmeans_demo.m], and [Youku]
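
A compact NumPy/SciPy sketch of E-M for a Gaussian Mixture Model, mirroring the E-step/M-step structure whose convergence the notes prove (an independent illustration, not the repo's gmm_demo.m):

```python
import numpy as np
from scipy.stats import multivariate_normal

def em_gmm(X, K, n_iter=50, seed=0):
    """Fit a K-component GMM to X (n, d) by Expectation-Maximization."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    pi = np.full(K, 1.0 / K)                      # mixing weights
    mu = X[rng.choice(n, K, replace=False)]       # initial means
    Sigma = np.stack([np.cov(X.T) + 1e-6 * np.eye(d)] * K)
    for _ in range(n_iter):
        # E-step: responsibilities r[i, k] = p(z_i = k | x_i)
        r = np.stack([pi[k] * multivariate_normal.pdf(X, mu[k], Sigma[k])
                      for k in range(K)], axis=1)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and covariances
        Nk = r.sum(axis=0)
        pi = Nk / n
        mu = (r.T @ X) / Nk[:, None]
        for k in range(K):
            Xc = X - mu[k]
            Sigma[k] = (r[:, k, None] * Xc).T @ Xc / Nk[k] + 1e-6 * np.eye(d)
    return pi, mu, Sigma
```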

State-space models (dynamic models): a detailed explanation of the Kalman Filter [Youku], [kalman_demo.m], and the Hidden Markov Model [Youku]
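
A minimal sketch of one Kalman filter predict/update cycle for a linear-Gaussian state-space model (an independent NumPy illustration, not the repo's kalman_demo.m; the matrix names follow one common convention):

```python
import numpy as np

def kalman_step(x, P, z, A, C, Q, R):
    """One predict/update cycle for the model
        x_t = A x_{t-1} + w,  w ~ N(0, Q)
        z_t = C x_t     + v,  v ~ N(0, R)
    x, P: previous posterior mean and covariance; z: new observation."""
    # predict
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # update
    S = C @ P_pred @ C.T + R              # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ (z - C @ x_pred)
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new
```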

Inference

Introduction to variational inference: Variational Bayes for both non-exponential and exponential family distributions, plus stochastic variational inference: [vb_normal_gamma.m] and [Youku]

Stochastic matrices, the Power Method Convergence Theorem, detailed balance, and Google's PageRank algorithm
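
A minimal sketch of PageRank computed by the power method, with uniform teleportation via the damping factor (the dangling-node handling here is a simplification I chose, not a detail from the notes):

```python
import numpy as np

def pagerank(A, damping=0.85, tol=1e-10):
    """PageRank by power iteration. A[i, j] = 1 if page j links to page i;
    columns are normalised into a stochastic matrix, and the damping
    factor mixes in a uniform teleportation distribution."""
    n = A.shape[0]
    out_deg = A.sum(axis=0)
    out_deg[out_deg == 0] = 1          # crude guard for dangling pages
    M = A / out_deg                    # column-stochastic link matrix
    r = np.full(n, 1.0 / n)
    while True:
        r_new = damping * (M @ r) + (1 - damping) / n
        if np.abs(r_new - r).sum() < tol:
            return r_new
        r = r_new
```

The damping term makes the iteration matrix strictly positive, which is what guarantees the power method converges to a unique ranking.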

Sampling: inverse CDF, rejection sampling, adaptive rejection sampling, and importance sampling: [adaptive_rejection_sampling.m] and [hybrid_gmm.m]
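
A minimal sketch of plain rejection sampling: draw from a proposal q and accept with probability p(x) / (M q(x)), where M bounds p/q everywhere (the worked example in the closing comment is my own choice):

```python
import numpy as np

def rejection_sample(target_pdf, proposal_sample, proposal_pdf, M, n):
    """Rejection sampling; target_pdf may be unnormalised as long as
    M upper-bounds target_pdf(x) / proposal_pdf(x) for all x."""
    rng = np.random.default_rng()
    samples = []
    while len(samples) < n:
        x = proposal_sample(rng)
        if rng.random() < target_pdf(x) / (M * proposal_pdf(x)):
            samples.append(x)              # accept; otherwise draw again
    return np.array(samples)

# e.g. a standard normal truncated to [0, 1] with a uniform proposal:
# p(x) ∝ exp(-x**2 / 2), q = Uniform(0, 1), and M = 1 bounds p/q on [0, 1]
```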

Markov chain Monte Carlo methods: Metropolis-Hastings, Gibbs, Slice Sampling, Elliptical Slice Sampling, and Swendsen-Wang, with collapsed Gibbs demonstrated using LDA: [lda_gibbs_example.m], [test_autocorrelation.m], [gibbs.m], and [Youku]
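
A minimal sketch of random-walk Metropolis-Hastings, the simplest of the MCMC methods listed above (the step size and seed are arbitrary choices of mine):

```python
import numpy as np

def metropolis_hastings(log_target, x0, n_samples, step=0.5, seed=0):
    """Random-walk M-H: propose x' ~ N(x, step^2) and accept with
    probability min(1, p(x') / p(x)); the symmetric proposal makes the
    Hastings correction cancel."""
    rng = np.random.default_rng(seed)
    x = x0
    chain = np.empty(n_samples)
    for i in range(n_samples):
        x_prop = x + step * rng.standard_normal()
        if np.log(rng.random()) < log_target(x_prop) - log_target(x):
            x = x_prop                     # accept
        chain[i] = x                       # a rejection keeps the old state
    return chain
```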

Particle filters (Sequential Monte Carlo): the Condensation Filter algorithm and the Auxiliary Particle Filter [Youku]
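
A minimal sketch of the bootstrap (sampling-importance-resampling) particle filter, assuming a scalar state for brevity; the transition, likelihood, and initialisation callbacks are hypothetical placeholders, not APIs from the notes:

```python
import numpy as np

def bootstrap_particle_filter(z_seq, n_particles, transition_sample,
                              likelihood, init_sample, seed=0):
    """Propagate particles through the transition model, weight them by
    the observation likelihood, then resample.

    transition_sample(particles, rng) -> propagated particles (vectorised);
    likelihood(z, particles) -> per-particle p(z | x);
    init_sample(rng, n) -> (n,) initial particles.
    """
    rng = np.random.default_rng(seed)
    particles = init_sample(rng, n_particles)
    means = []
    for z in z_seq:
        particles = transition_sample(particles, rng)     # propagate
        w = likelihood(z, particles)                      # weight
        w /= w.sum()
        means.append(np.sum(w * particles))               # filtered mean
        idx = rng.choice(n_particles, n_particles, p=w)   # resample
        particles = particles[idx]
    return np.array(means)
```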

Advanced Probabilistic Models

Foundations of non-parametric Bayes and its derivations: the Dirichlet Process (DP), Chinese Restaurant Process insights, and slice sampling for the DP: [dirichlet_process.m], [Youku], and [Jupyter Notebook]
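
A minimal sketch of sampling a partition from the Chinese Restaurant Process, the sequential construction behind the Dirichlet Process (the function name and defaults are mine, not from the notes):

```python
import numpy as np

def chinese_restaurant_process(n_customers, alpha, seed=0):
    """Customer n+1 joins existing table k with probability
    n_k / (n + alpha) and opens a new table with probability
    alpha / (n + alpha)."""
    rng = np.random.default_rng(seed)
    tables = [1]                               # customer counts per table
    assignments = [0]
    for n in range(1, n_customers):
        probs = np.array(tables + [alpha], dtype=float) / (n + alpha)
        k = rng.choice(len(probs), p=probs)
        if k == len(tables):
            tables.append(1)                   # open a new table
        else:
            tables[k] += 1
        assignments.append(k)
    return assignments, tables
```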

Non-parametric Bayesian extensions: the Hierarchical Dirichlet Process, the HDP-HMM, and the Indian Buffet Process (IBP)

Determinantal Point Processes explained: the DPP's marginal distribution, the L-ensemble, its sampling strategy, and our work on time-varying DPPs
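
As a small illustration of the DPP's marginal distribution: given an L-ensemble kernel L, the marginal kernel K = L(L + I)^{-1} gives inclusion probabilities P(A ⊆ Y) = det(K_A); a NumPy sketch:

```python
import numpy as np

def dpp_marginal_kernel(L):
    """Convert an L-ensemble kernel into the marginal kernel
    K = L @ inv(L + I), so that P(A subset of Y) = det(K_A)."""
    n = L.shape[0]
    return L @ np.linalg.inv(L + np.eye(n))

# single-item inclusion probabilities sit on the diagonal:
# K = dpp_marginal_kernel(L);  P(i in Y) = K[i, i]
```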

Special Thanks

  • I would like to thank the following PhD students for helping me proofread these notes and for providing great discussions and suggestions on various topics, including (but not limited to) Hayden Chang, Shawn Jiang, Erica Huang, Deng Chen, and Ember Liang.

  • Special thanks to Dr Haiguang Huang for his efforts translating the contents of these notes into Chinese.

  • I am always looking for high-quality PhD students in machine learning, in both probabilistic models and deep learning. If you would like to join my team, or are interested in an internship, contact me at [email protected].
