Tutorials

Machine Learning & Deep Learning Tutorials

- This repository contains a topic-wise curated list of Machine Learning and Deep Learning tutorials, articles and other resources. Other awesome lists can be found in this list.

- If you want to contribute to this list, please read the Contributing Guidelines.

Introduction

Interview Resources

Artificial Intelligence

Genetic Algorithms

Statistics

- Tutorials

- [AP Statistics Tutorial](http://stattrek.com/tutorials/ap-statistics-tutorial.aspx)

- [Statistics and Probability Tutorial](http://stattrek.com/tutorials/statistics-tutorial.aspx)

- [Matrix Algebra Tutorial](http://stattrek.com/tutorials/matrix-algebra-tutorial.aspx)

Useful Blogs

Resources on Quora

Kaggle Competitions WriteUp

Cheat Sheets

Classification

Linear Regression

- [Assumptions of Linear Regression](http://pareonline.net/getvn.asp?n=2&v=8), [Stack Exchange](http://stats.stackexchange.com/questions/16381/what-is-a-complete-list-of-the-usual-assumptions-for-linear-regression)

- [Linear Regression Comprehensive Resource](http://people.duke.edu/~rnau/regintro.htm)

- [Applying and Interpreting Linear Regression](http://www.dataschool.io/applying-and-interpreting-linear-regression/)

- [What does having constant variance in a linear regression model mean?](http://stats.stackexchange.com/questions/52089/what-does-having-constant-variance-in-a-linear-regression-model-mean/52107?stw=2#52107)

- [Difference between linear regression on y with x and x with y](http://stats.stackexchange.com/questions/22718/what-is-the-difference-between-linear-regression-on-y-with-x-and-x-with-y?lq=1)

- [Is linear regression valid when the dependent variable is not normally distributed?](https://www.researchgate.net/post/Is_linear_regression_valid_when_the_outcome_dependant_variable_not_normally_distributed)

- Multicollinearity and VIF

- [Dummy Variable Trap | Multicollinearity](https://en.wikipedia.org/wiki/Multicollinearity)

- [Dealing with multicollinearity using VIFs](https://jonlefcheck.net/2012/12/28/dealing-with-multicollinearity-using-variance-inflation-factors/)
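As a quick companion to the VIF links above, here is a minimal sketch (not taken from those articles; the column names and threshold are illustrative) of computing variance inflation factors with pandas and statsmodels:

```python
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.tools.tools import add_constant

# Toy data: x2 is nearly collinear with x1, so its VIF should be large.
X = pd.DataFrame({
    "x1": [1.0, 2.0, 3.0, 4.0, 5.0],
    "x2": [2.1, 4.2, 5.9, 8.1, 9.8],
    "x3": [0.5, 1.7, 0.2, 1.1, 0.9],
})
X = add_constant(X)  # include an intercept column

# Common rule of thumb: VIF > 5-10 signals problematic multicollinearity
# (the value reported for the 'const' row can be ignored).
vifs = pd.Series(
    [variance_inflation_factor(X.values, i) for i in range(X.shape[1])],
    index=X.columns,
)
print(vifs)
```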


- [Interpreting plot.lm() in R](http://stats.stackexchange.com/questions/58141/interpreting-plot-lm)

- [How to interpret a QQ plot?](http://stats.stackexchange.com/questions/101274/how-to-interpret-a-qq-plot?lq=1)

- [Interpreting Residuals vs Fitted Plot](http://stats.stackexchange.com/questions/76226/interpreting-the-residuals-vs-fitted-values-plot-for-verifying-the-assumptions)


- [How should outliers be dealt with?](http://stats.stackexchange.com/questions/175/how-should-outliers-be-dealt-with-in-linear-regression-analysis)
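For the diagnostic plots discussed in the links above (residuals vs fitted values, normal Q-Q), here is a minimal Python sketch on simulated data using statsmodels and matplotlib; it only approximates R's plot.lm() output:

```python
import numpy as np
import matplotlib.pyplot as plt
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 2.0 * x + rng.normal(size=200)
model = sm.OLS(y, sm.add_constant(x)).fit()

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

# Residuals vs fitted: look for a flat, structureless band (constant variance).
ax1.scatter(model.fittedvalues, model.resid, s=10)
ax1.axhline(0, color="red", lw=1)
ax1.set(xlabel="Fitted values", ylabel="Residuals", title="Residuals vs Fitted")

# Normal Q-Q plot: points close to the line support the normal-errors assumption.
sm.qqplot(model.resid, line="45", fit=True, ax=ax2)
ax2.set_title("Normal Q-Q")

plt.tight_layout()
plt.show()
```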

Logistic Regression

Model Validation using Resampling

Deep Learning

- Neural Machine Translation

- **[Machine Translation Reading List](https://github.com/THUNLP-MT/MT-Reading-List#machine-translation-reading-list)**

- [Introduction to Neural Machine Translation with GPUs (part 1)](https://devblogs.nvidia.com/parallelforall/introduction-neural-machine-translation-with-gpus/), [Part 2](https://devblogs.nvidia.com/parallelforall/introduction-neural-machine-translation-gpus-part-2/), [Part 3](https://devblogs.nvidia.com/parallelforall/introduction-neural-machine-translation-gpus-part-3/)

- [Deep Speech: Accurate Speech Recognition with GPU-Accelerated Deep Learning](https://devblogs.nvidia.com/parallelforall/deep-speech-accurate-speech-recognition-gpu-accelerated-deep-learning/)

- Deep Learning Frameworks

- [Torch vs. Theano](http://fastml.com/torch-vs-theano/)

- [dl4j vs. torch7 vs. theano](http://deeplearning4j.org/compare-dl4j-torch7-pylearn.html)

- [Deep Learning Libraries by Language](http://www.teglor.com/b/deep-learning-libraries-language-cm569/)


- [Theano](https://en.wikipedia.org/wiki/Theano_(software))

    - [Website](http://deeplearning.net/software/theano/)

    - [Theano Introduction](http://www.wildml.com/2015/09/speeding-up-your-neural-network-with-theano-and-the-gpu/)

    - [Theano Tutorial](http://outlace.com/Beginner-Tutorial-Theano/)

    - [Good Theano Tutorial](http://deeplearning.net/software/theano/tutorial/)

    - [Logistic Regression using Theano for classifying digits](http://deeplearning.net/tutorial/logreg.html#logreg)

    - [MLP using Theano](http://deeplearning.net/tutorial/mlp.html#mlp)

    - [CNN using Theano](http://deeplearning.net/tutorial/lenet.html#lenet)

    - [RNNs using Theano](http://deeplearning.net/tutorial/rnnslu.html#rnnslu)

    - [LSTM for Sentiment Analysis in Theano](http://deeplearning.net/tutorial/lstm.html#lstm)

    - [RBM using Theano](http://deeplearning.net/tutorial/rbm.html#rbm)

    - [DBNs using Theano](http://deeplearning.net/tutorial/DBN.html#dbn)

    - [All Codes](https://github.com/lisa-lab/DeepLearningTutorials)

    - [Deep Learning Implementation Tutorials - Keras and Lasagne](https://github.com/vict0rsch/deep_learning/)

- [Torch](http://torch.ch/)

    - [Torch ML Tutorial](http://code.madbits.com/wiki/doku.php), [Code](https://github.com/torch/tutorials)

    - [Intro to Torch](http://ml.informatik.uni-freiburg.de/_media/teaching/ws1415/presentation_dl_lect3.pdf)

    - [Learning Torch GitHub Repo](https://github.com/chetannaik/learning_torch)

    - [Awesome-Torch (Repository on GitHub)](https://github.com/carpedm20/awesome-torch)

    - [Machine Learning using Torch Oxford Univ](https://www.cs.ox.ac.uk/people/nando.defreitas/machinelearning/), [Code](https://github.com/oxford-cs-ml-2015)

    - [Torch Internals Overview](https://apaszke.github.io/torch-internals.html)

    - [Torch Cheatsheet](https://github.com/torch/torch7/wiki/Cheatsheet)

    - [Understanding Natural Language with Deep Neural Networks Using Torch](http://devblogs.nvidia.com/parallelforall/understanding-natural-language-deep-neural-networks-using-torch/)

- Caffe
    - [Deep Learning for Computer Vision with Caffe and cuDNN](https://devblogs.nvidia.com/parallelforall/deep-learning-computer-vision-caffe-cudnn/)

- TensorFlow
    - [Website](http://tensorflow.org/)

    - [TensorFlow Examples for Beginners](https://github.com/aymericdamien/TensorFlow-Examples)

    - [Stanford Tensorflow for Deep Learning Research Course](https://web.stanford.edu/class/cs20si/syllabus.html)

        - [GitHub Repo](https://github.com/chiphuyen/tf-stanford-tutorials)

    - [Simplified Scikit-learn Style Interface to TensorFlow](https://github.com/tensorflow/skflow)

    - [Learning TensorFlow GitHub Repo](https://github.com/chetannaik/learning_tensorflow)

    - [Benchmark TensorFlow GitHub](https://github.com/soumith/convnet-benchmarks/issues/66)

    - [Awesome TensorFlow List](https://github.com/jtoy/awesome-tensorflow)

    - [TensorFlow Book](https://github.com/BinRoot/TensorFlow-Book)

    - [Android TensorFlow Machine Learning Example](https://blog.mindorks.com/android-tensorflow-machine-learning-example-ff0e9b2654cc)

        - [GitHub Repo](https://github.com/MindorksOpenSource/AndroidTensorFlowMachineLearningExample)
    - [Creating Custom Model For Android Using TensorFlow](https://blog.mindorks.com/creating-custom-model-for-android-using-tensorflow-3f963d270bfb)
        - [GitHub Repo](https://github.com/MindorksOpenSource/AndroidTensorFlowMNISTExample)
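To give a flavour of the framework, here is a minimal sketch (assuming TensorFlow 2.x with its bundled Keras API, not taken from any of the tutorials above) that trains a small classifier on MNIST:

```python
import tensorflow as tf

# Load and scale MNIST images to [0, 1].
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# A small fully connected network.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, validation_data=(x_test, y_test))
```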

- Feedforward Neural Networks

- [A Quick Introduction to Neural Networks](https://ujjwalkarn.me/2016/08/09/quick-intro-neural-networks/)

- [Implementing a Neural Network from scratch](http://www.wildml.com/2015/09/implementing-a-neural-network-from-scratch/), [Code](https://github.com/dennybritz/nn-from-scratch)

- [Speeding up your Neural Network with Theano and the gpu](http://www.wildml.com/2015/09/speeding-up-your-neural-network-with-theano-and-the-gpu/), [Code](https://github.com/dennybritz/nn-theano)

- [Basic ANN Theory](https://takinginitiative.wordpress.com/2008/04/03/basic-neural-network-tutorial-theory/)

- [Role of Bias in Neural Networks](http://stackoverflow.com/questions/2480650/role-of-bias-in-neural-networks)

- [Choosing number of hidden layers and nodes](http://stackoverflow.com/questions/3345079/estimating-the-number-of-neurons-and-number-of-layers-of-an-artificial-neural-ne),[2](http://stackoverflow.com/questions/10565868/multi-layer-perceptron-mlp-architecture-criteria-for-choosing-number-of-hidde?lq=1),[3](http://stackoverflow.com/questions/9436209/how-to-choose-number-of-hidden-layers-and-nodes-in-neural-network/2#)

- [Backpropagation in Matrix Form](http://sudeepraja.github.io/Neural/)

- [ANN implemented in C++ | AI Junkie](http://www.ai-junkie.com/ann/evolved/nnt6.html)

- [Simple Implementation](http://stackoverflow.com/questions/15395835/simple-multi-layer-neural-network-implementation)

- [NN for Beginners](http://www.codeproject.com/Articles/16419/AI-Neural-Network-for-beginners-Part-of)

- [Regression and Classification with NNs (Slides)](http://www.autonlab.org/tutorials/neural13.pdf)

- [Another Intro](http://www.doc.ic.ac.uk/~nd/surprise_96/journal/vol4/cs11/report.html)
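In the spirit of the from-scratch tutorials above, here is a minimal NumPy sketch of a two-layer network trained with plain backpropagation; the data, layer sizes and learning rate are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float).reshape(-1, 1)  # XOR-like labels

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Weights and biases (the bias terms shift each unit's activation threshold).
W1, b1 = rng.normal(scale=0.5, size=(2, 8)), np.zeros((1, 8))
W2, b2 = rng.normal(scale=0.5, size=(8, 1)), np.zeros((1, 1))
lr = 0.5

for epoch in range(2000):
    # Forward pass: tanh hidden layer, sigmoid output.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # Backward pass for binary cross-entropy loss.
    dp = (p - y) / len(X)
    dW2, db2 = h.T @ dp, dp.sum(axis=0, keepdims=True)
    dh = dp @ W2.T * (1 - h ** 2)
    dW1, db1 = X.T @ dh, dh.sum(axis=0, keepdims=True)

    # Gradient step.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("training accuracy:", ((p > 0.5) == y).mean())
```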

- Recurrent and LSTM Networks - awesome-rnn: list of resources (GitHub Repo)

- [Recurrent Neural Net Tutorial Part 1](http://www.wildml.com/2015/09/recurrent-neural-networks-tutorial-part-1-introduction-to-rnns/), [Part 2](http://www.wildml.com/2015/09/recurrent-neural-networks-tutorial-part-2-implementing-a-language-model-rnn-with-python-numpy-and-theano/), [Part 3](http://www.wildml.com/2015/10/recurrent-neural-networks-tutorial-part-3-backpropagation-through-time-and-vanishing-gradients/), [Code](https://github.com/dennybritz/rnn-tutorial-rnnlm/)

- [NLP RNN Representations](http://colah.github.io/posts/2014-07-NLP-RNNs-Representations/)

- [The Unreasonable effectiveness of RNNs](http://karpathy.github.io/2015/05/21/rnn-effectiveness/), [Torch Code](https://github.com/karpathy/char-rnn), [Python Code](https://gist.github.com/karpathy/d4dee566867f8291f086)

- [Intro to RNN](http://deeplearning4j.org/recurrentnetwork.html), [LSTM](http://deeplearning4j.org/lstm.html)

- [An application of RNN](http://hackaday.com/2015/10/15/73-computer-scientists-created-a-neural-net-and-you-wont-believe-what-happened-next/)

- [Optimizing RNN Performance](http://svail.github.io/)

- [Simple RNN](http://outlace.com/Simple-Recurrent-Neural-Network/)

- [Auto-Generating Clickbait with RNN](https://larseidnes.com/2015/10/13/auto-generating-clickbait-with-recurrent-neural-networks/)

- [Sequence Learning using RNN (Slides)](http://www.slideshare.net/indicods/general-sequence-learning-with-recurrent-neural-networks-for-next-ml)

- [Machine Translation using RNN (Paper)](http://emnlp2014.org/papers/pdf/EMNLP2014179.pdf)

- [Music generation using RNNs (Keras)](https://github.com/MattVitelli/GRUV)

- [Using RNN to create on-the-fly dialogue (Keras)](http://neuralniche.com/post/tutorial/)

- Long Short-Term Memory (LSTM)

    - [Understanding LSTM Networks](http://colah.github.io/posts/2015-08-Understanding-LSTMs/)

    - [LSTM explained](https://apaszke.github.io/lstm-explained.html)

    - [Beginner’s Guide to LSTM](http://deeplearning4j.org/lstm.html)

    - [Implementing LSTM from scratch](http://www.wildml.com/2015/10/recurrent-neural-network-tutorial-part-4-implementing-a-grulstm-rnn-with-python-and-theano/), [Python/Theano code](https://github.com/dennybritz/rnn-tutorial-gru-lstm)

    - [Torch Code for character-level language models using LSTM](https://github.com/karpathy/char-rnn)

    - [LSTM for Kaggle EEG Detection competition (Torch Code)](https://github.com/apaszke/kaggle-grasp-and-lift)

    - [LSTM for Sentiment Analysis in Theano](http://deeplearning.net/tutorial/lstm.html#lstm)

    - [Deep Learning for Visual Q&A | LSTM | CNN](http://avisingh599.github.io/deeplearning/visual-qa/), [Code](https://github.com/avisingh599/visual-qa)

    - [Computer Responds to email using LSTM | Google](http://googleresearch.blogspot.in/2015/11/computer-respond-to-this-email.html)

    - [LSTM dramatically improves Google Voice Search](http://googleresearch.blogspot.ch/2015/09/google-voice-search-faster-and-more.html), [Another Article](http://deeplearning.net/2015/09/30/long-short-term-memory-dramatically-improves-google-voice-etc-now-available-to-a-billion-users/)

    - [Understanding Natural Language with LSTM Using Torch](http://devblogs.nvidia.com/parallelforall/understanding-natural-language-deep-neural-networks-using-torch/)

    - [Torch code for Visual Question Answering using a CNN+LSTM model](https://github.com/abhshkdz/neural-vqa)

    - [LSTM for Human Activity Recognition](https://github.com/guillaume-chevalier/LSTM-Human-Activity-Recognition/)
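For quick reference, the standard LSTM cell update covered in the posts above (notation varies slightly between sources) is:

```latex
\begin{aligned}
f_t &= \sigma(W_f x_t + U_f h_{t-1} + b_f) && \text{(forget gate)} \\
i_t &= \sigma(W_i x_t + U_i h_{t-1} + b_i) && \text{(input gate)} \\
o_t &= \sigma(W_o x_t + U_o h_{t-1} + b_o) && \text{(output gate)} \\
\tilde{c}_t &= \tanh(W_c x_t + U_c h_{t-1} + b_c) && \text{(candidate cell state)} \\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t \\
h_t &= o_t \odot \tanh(c_t)
\end{aligned}
```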

- Gated Recurrent Units (GRU)

    - [LSTM vs GRU](http://www.wildml.com/2015/10/recurrent-neural-network-tutorial-part-4-implementing-a-grulstm-rnn-with-python-and-theano/)

- [Time series forecasting with Sequence-to-Sequence (seq2seq) rnn models](https://github.com/guillaume-chevalier/seq2seq-signal-prediction)

- Restricted Boltzmann Machines

- [Beginner's Guide about RBMs](http://deeplearning4j.org/restrictedboltzmannmachine.html)

- [Another Good Tutorial](http://deeplearning.net/tutorial/rbm.html)

- [Introduction to RBMs](http://blog.echen.me/2011/07/18/introduction-to-restricted-boltzmann-machines/)

- [Hinton's Guide to Training RBMs](https://www.cs.toronto.edu/~hinton/absps/guideTR.pdf)

- [RBMs in R](https://github.com/zachmayer/rbm)

- [Deep Belief Networks Tutorial](http://deeplearning4j.org/deepbeliefnetwork.html)

- [word2vec, DBN, RNTN for Sentiment Analysis ](http://deeplearning4j.org/zh-sentiment_analysis_word2vec.html)

- Autoencoders: unsupervised (backprop is applied after setting target = input)

- [Andrew Ng Sparse Autoencoders pdf](https://web.stanford.edu/class/cs294a/sparseAutoencoder.pdf)

- [Deep Autoencoders Tutorial](http://deeplearning4j.org/deepautoencoder.html)

- [Denoising Autoencoders](http://deeplearning.net/tutorial/dA.html), [Theano Code](http://deeplearning.net/tutorial/code/dA.py)

- [Stacked Denoising Autoencoders](http://deeplearning.net/tutorial/SdA.html#sda)
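As a concrete illustration of the "target = input" idea above, here is a minimal dense autoencoder sketch assuming TensorFlow/Keras; the shapes and random data are placeholders:

```python
import numpy as np
import tensorflow as tf

x = np.random.rand(1000, 784).astype("float32")  # stand-in for flattened images

inputs = tf.keras.Input(shape=(784,))
encoded = tf.keras.layers.Dense(32, activation="relu")(inputs)       # bottleneck
decoded = tf.keras.layers.Dense(784, activation="sigmoid")(encoded)  # reconstruction

autoencoder = tf.keras.Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="mse")

# Unsupervised: the targets are the inputs themselves, so backprop
# minimizes reconstruction error.
autoencoder.fit(x, x, epochs=10, batch_size=64)
```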

- Convolutional Neural Networks

- [An Intuitive Explanation of Convolutional Neural Networks](https://ujjwalkarn.me/2016/08/11/intuitive-explanation-convnets/)

- [Awesome Deep Vision: List of Resources (GitHub)](https://github.com/kjw0612/awesome-deep-vision)

- [Intro to CNNs](http://deeplearning4j.org/convolutionalnets.html)

- [Understanding CNN for NLP](http://www.wildml.com/2015/11/understanding-convolutional-neural-networks-for-nlp/)

- [Stanford Notes](http://vision.stanford.edu/teaching/cs231n/), [Codes](http://cs231n.github.io/), [GitHub](https://github.com/cs231n/cs231n.github.io)

- [JavaScript Library (Browser Based) for CNNs](http://cs.stanford.edu/people/karpathy/convnetjs/)

- [Using CNNs to detect facial keypoints](http://danielnouri.org/notes/2014/12/17/using-convolutional-neural-nets-to-detect-facial-keypoints-tutorial/)

- [Deep learning to classify business photos at Yelp](http://engineeringblog.yelp.com/2015/10/how-we-use-deep-learning-to-classify-business-photos-at-yelp.html)

- [Interview with Yann LeCun | Kaggle](http://blog.kaggle.com/2014/12/22/convolutional-nets-and-cifar-10-an-interview-with-yan-lecun/)

- [Visualising and Understanding CNNs](https://www.cs.nyu.edu/~fergus/papers/zeilerECCV2014.pdf)

- Network Representation Learning

- [Awesome Graph Embedding](https://github.com/benedekrozemberczki/awesome-graph-embedding)

- [Awesome Network Embedding](https://github.com/chihming/awesome-network-embedding)

- [Network Representation Learning Papers](https://github.com/thunlp)

- [Knowledge Representation Learning Papers](https://github.com/thunlp/KRLPapers)

- [Graph Based Deep Learning Literature](https://github.com/naganandy/graph-based-deep-learning-literature)

Natural Language Processing

- Topic Modeling - Topic Modeling Wikipedia - Probabilistic Topic Models Princeton PDF

- [LDA Wikipedia](https://en.wikipedia.org/wiki/Latent_Dirichlet_allocation), [LSA Wikipedia](https://en.wikipedia.org/wiki/Latent_semantic_analysis), [Probabilistic LSA Wikipedia](https://en.wikipedia.org/wiki/Probabilistic_latent_semantic_analysis)

- [What is a good explanation of Latent Dirichlet Allocation (LDA)?](https://www.quora.com/What-is-a-good-explanation-of-Latent-Dirichlet-Allocation)

- [**Introduction to LDA**](http://blog.echen.me/2011/08/22/introduction-to-latent-dirichlet-allocation/), [Another good explanation](http://confusedlanguagetech.blogspot.in/2012/07/jordan-boyd-graber-and-philip-resnik.html)

- [The LDA Buffet - Intuitive Explanation](http://www.matthewjockers.net/2011/09/29/the-lda-buffet-is-now-open-or-latent-dirichlet-allocation-for-english-majors/)

- [Your Guide to Latent Dirichlet Allocation (LDA)](https://medium.com/@lettier/how-does-lda-work-ill-explain-using-emoji-108abf40fa7d)

- [Difference between LSI and LDA](https://www.quora.com/Whats-the-difference-between-Latent-Semantic-Indexing-LSI-and-Latent-Dirichlet-Allocation-LDA)

- [Original LDA Paper](https://www.cs.princeton.edu/~blei/papers/BleiNgJordan2003.pdf)

- [alpha and beta in LDA](http://datascience.stackexchange.com/questions/199/what-does-the-alpha-and-beta-hyperparameters-contribute-to-in-latent-dirichlet-a)

- [Intuitive explanation of the Dirichlet distribution](https://www.quora.com/What-is-an-intuitive-explanation-of-the-Dirichlet-distribution)
- [topicmodels: An R Package for Fitting Topic Models](https://cran.r-project.org/web/packages/topicmodels/vignettes/topicmodels.pdf)

- [Topic modeling made just simple enough](https://tedunderwood.com/2012/04/07/topic-modeling-made-just-simple-enough/)

- [Online LDA](http://alexminnaar.com/online-latent-dirichlet-allocation-the-best-option-for-topic-modeling-with-large-data-sets.html), [Online LDA with Spark](http://alexminnaar.com/distributed-online-latent-dirichlet-allocation-with-apache-spark.html)

- [LDA in Scala](http://alexminnaar.com/latent-dirichlet-allocation-in-scala-part-i-the-theory.html), [Part 2](http://alexminnaar.com/latent-dirichlet-allocation-in-scala-part-ii-the-code.html)

- [Segmentation of Twitter Timelines via Topic Modeling](https://alexisperrier.com/nlp/2015/09/16/segmentation_twitter_timelines_lda_vs_lsa.html)

- [Topic Modeling of Twitter Followers](http://alexperrier.github.io/jekyll/update/2015/09/04/topic-modeling-of-twitter-followers.html)

- [Multilingual Latent Dirichlet Allocation (LDA)](https://github.com/ArtificiAI/Multilingual-Latent-Dirichlet-Allocation-LDA). ([Tutorial here](https://github.com/ArtificiAI/Multilingual-Latent-Dirichlet-Allocation-LDA/blob/master/Multilingual-LDA-Pipeline-Tutorial.ipynb))

- [Deep Belief Nets for Topic Modeling](https://github.com/larsmaaloee/deep-belief-nets-for-topic-modeling)
- [Gaussian LDA for Topic Models with Word Embeddings](http://www.cs.cmu.edu/~rajarshd/papers/acl2015.pdf)
- Python
    - [Series of lecture notes for probabilistic topic models written in ipython notebook](https://github.com/arongdari/topic-model-lecture-note)
    - [Implementation of various topic models in Python](https://github.com/arongdari/python-topic-model)
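As a minimal, runnable companion to the LDA links above (assuming a recent scikit-learn; the toy corpus is made up), this sketch fits LDA on a bag-of-words matrix and prints the top words per topic:

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "the cat sat on the mat",
    "dogs and cats are pets",
    "stock prices rose on earnings",
    "the market fell after the report",
]

vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(docs)

# n_components is the number of topics; doc_topic_prior and topic_word_prior
# correspond to the alpha and beta hyperparameters discussed above.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(X)  # per-document topic proportions

terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = weights.argsort()[::-1][:5]
    print(f"topic {k}:", [terms[i] for i in top])
```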

- word2vec

- [Google word2vec](https://code.google.com/archive/p/word2vec)

- [Bag of Words Model Wiki](https://en.wikipedia.org/wiki/Bag-of-words_model)

- [word2vec Tutorial](https://rare-technologies.com/word2vec-tutorial/)

- [A closer look at Skip Gram Modeling](http://homepages.inf.ed.ac.uk/ballison/pdf/lrec_skipgrams.pdf)

- [Skip Gram Model Tutorial](http://alexminnaar.com/word2vec-tutorial-part-i-the-skip-gram-model.html), [CBoW Model](http://alexminnaar.com/word2vec-tutorial-part-ii-the-continuous-bag-of-words-model.html)

- [Word Vectors Kaggle Tutorial Python](https://www.kaggle.com/c/word2vec-nlp-tutorial/details/part-2-word-vectors), [Part 2](https://www.kaggle.com/c/word2vec-nlp-tutorial/details/part-3-more-fun-with-word-vectors)

- [Making sense of word2vec](http://rare-technologies.com/making-sense-of-word2vec/)

- [word2vec explained on deeplearning4j](http://deeplearning4j.org/word2vec.html)

- [Quora word2vec](https://www.quora.com/How-does-word2vec-work)

- [Other Quora Resources](https://www.quora.com/What-are-the-continuous-bag-of-words-and-skip-gram-architectures-in-laymans-terms), [2](https://www.quora.com/What-is-the-difference-between-the-Bag-of-Words-model-and-the-Continuous-Bag-of-Words-model), [3](https://www.quora.com/Is-skip-gram-negative-sampling-better-than-CBOW-NS-for-word2vec-If-so-why)

- [word2vec, DBN, RNTN for Sentiment Analysis ](http://deeplearning4j.org/zh-sentiment_analysis_word2vec.html)
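A minimal sketch of training skip-gram word2vec, assuming gensim 4.x (in older gensim versions vector_size and epochs were named size and iter); the toy corpus is illustrative only:

```python
from gensim.models import Word2Vec

sentences = [
    ["the", "king", "ruled", "the", "kingdom"],
    ["the", "queen", "ruled", "the", "kingdom"],
    ["dogs", "and", "cats", "are", "animals"],
]

# sg=1 selects the skip-gram architecture (sg=0 would be CBOW).
model = Word2Vec(sentences, vector_size=50, window=3, min_count=1, sg=1, epochs=50)

print(model.wv.most_similar("king", topn=3))
print(model.wv["queen"][:5])  # first few components of the embedding
```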

- Text Clustering

- [How string clustering works](http://stackoverflow.com/questions/8196371/how-clustering-works-especially-string-clustering)

- [Levenshtein distance for measuring the difference between two sequences](https://en.wikipedia.org/wiki/Levenshtein_distance)

- [Text clustering with Levenshtein distances](http://stackoverflow.com/questions/21511801/text-clustering-with-levenshtein-distances)
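Since the clustering links above rely on it, here is a short dynamic-programming sketch of the Levenshtein (edit) distance in Python:

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character insertions, deletions and
    substitutions needed to turn string a into string b."""
    prev = list(range(len(b) + 1))  # distances for the empty prefix of a
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

assert levenshtein("kitten", "sitting") == 3
```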

- Text Classification

- [Classifying Text with Bag of Words](http://fastml.com/classifying-text-with-bag-of-words-a-tutorial/)
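A minimal scikit-learn sketch of the bag-of-words classification idea from the tutorial above; the texts and labels are toy placeholders:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["great movie", "terrible film", "loved it", "hated it"]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

# Bag-of-words (with bigrams) feeding a linear classifier.
clf = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["what a great film"]))
```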

- Named Entity Recognition

 - [Stanford Named Entity Recognizer (NER)](https://nlp.stanford.edu/software/CRF-NER.shtml)

 - [Named Entity Recognition: Applications and Use Cases- Towards Data Science](https://towardsdatascience.com/named-entity-recognition-applications-and-use-cases-acdbf57d595e)

Computer Vision

Support Vector Machine

- Comparisons

- [SVMs > ANNs](http://stackoverflow.com/questions/6699222/support-vector-machines-better-than-artificial-neural-networks-in-which-learn?rq=1), [ANNs > SVMs](http://stackoverflow.com/questions/11632516/what-are-advantages-of-artificial-neural-networks-over-support-vector-machines), [Another Comparison](http://www.svms.org/anns.html)

- [Trees > SVMs](http://stats.stackexchange.com/questions/57438/why-is-svm-not-so-good-as-decision-tree-on-the-same-data)

- [Kernel Logistic Regression vs SVM](http://stats.stackexchange.com/questions/43996/kernel-logistic-regression-vs-svm)

- [Logistic Regression vs SVM](http://stats.stackexchange.com/questions/58684/regularized-logistic-regression-and-support-vector-machine), [2](http://stats.stackexchange.com/questions/95340/svm-v-s-logistic-regression), [3](https://www.quora.com/Support-Vector-Machines/What-is-the-difference-between-Linear-SVMs-and-Logistic-Regression)

- Software

- [LIBSVM](https://www.csie.ntu.edu.tw/~cjlin/libsvm/)

- [Intro to SVM in R](http://cbio.ensmp.fr/~jvert/svn/tutorials/practical/svmbasic/svmbasic_notes.pdf)

- Kernels - What are Kernels in ML and SVM?

- [Intuition Behind Gaussian Kernel in SVMs?](https://www.quora.com/Support-Vector-Machines/What-is-the-intuition-behind-Gaussian-kernel-in-SVM)
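For reference, the Gaussian (RBF) kernel discussed above is usually written as

```latex
K(x, x') = \exp\!\left(-\frac{\lVert x - x' \rVert^{2}}{2\sigma^{2}}\right)
         = \exp\!\left(-\gamma \lVert x - x' \rVert^{2}\right),
\qquad \gamma = \frac{1}{2\sigma^{2}},
```

where a larger gamma (smaller sigma) gives a more local, wigglier decision boundary.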

- Probabilities after SVM

- [Platt's Probabilistic Outputs for SVM](http://www.csie.ntu.edu.tw/~htlin/paper/doc/plattprob.pdf)

- [Platt Calibration Wiki](https://en.wikipedia.org/wiki/Platt_scaling)

- [Why use Platts Scaling](http://stats.stackexchange.com/questions/5196/why-use-platts-scaling)

- [Classifier Calibration with Platt's Scaling](http://fastml.com/classifier-calibration-with-platts-scaling-and-isotonic-regression/)
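A minimal scikit-learn sketch of Platt scaling (the sigmoid calibration described above) applied to an SVM that only outputs decision values; the dataset is synthetic:

```python
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=500, random_state=0)

svm = LinearSVC()  # margins only, no predict_proba
# method="sigmoid" fits Platt's logistic function to the decision values.
calibrated = CalibratedClassifierCV(svm, method="sigmoid", cv=5)
calibrated.fit(X, y)

print(calibrated.predict_proba(X[:3]))
```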

Reinforcement Learning

Decision Trees

- Comparison of Different Algorithms

- [CART vs CTREE](http://stats.stackexchange.com/questions/12140/conditional-inference-trees-vs-traditional-decision-trees)

- [Comparison of complexity or performance](https://stackoverflow.com/questions/9979461/different-decision-tree-algorithms-with-comparison-of-complexity-or-performance)

- [CHAID vs CART](http://stats.stackexchange.com/questions/61230/chaid-vs-crt-or-cart) , [CART vs CHAID](http://www.bzst.com/2006/10/classification-trees-cart-vs-chaid.html)

- [Good Article on comparison](http://www.ftpress.com/articles/article.aspx?p=2248639&seqNum=11)

- CART

- [Recursive Partitioning Wikipedia](https://en.wikipedia.org/wiki/Recursive_partitioning)

- [CART Explained](http://documents.software.dell.com/Statistics/Textbook/Classification-and-Regression-Trees)

- [How to measure/rank “variable importance” when using CART?](http://stats.stackexchange.com/questions/6478/how-to-measure-rank-variable-importance-when-using-cart-specifically-using)

- [Pruning a Tree in R](http://stackoverflow.com/questions/15318409/how-to-prune-a-tree-in-r)

- [Does rpart use multivariate splits by default?](http://stats.stackexchange.com/questions/4356/does-rpart-use-multivariate-splits-by-default)

- [FAQs about Recursive Partitioning](http://stats.stackexchange.com/questions/tagged/rpart)
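A minimal scikit-learn sketch tying together two of the CART topics above, cost-complexity pruning and impurity-based variable importance; the ccp_alpha value is arbitrary:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# ccp_alpha > 0 prunes the weakest splits (cost-complexity pruning).
tree = DecisionTreeClassifier(ccp_alpha=0.01, random_state=0)
tree.fit(X_train, y_train)

print("test accuracy:", tree.score(X_test, y_test))
print("variable importance:", tree.feature_importances_)
```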

- CTREE

- [party package in R](https://cran.r-project.org/web/packages/party/party.pdf)

- [Show volume in each node using ctree in R](http://stackoverflow.com/questions/13772715/show-volume-in-each-node-using-ctree-plot-in-r)

- [How to extract tree structure from ctree function?](http://stackoverflow.com/questions/8675664/how-to-extract-tree-structure-from-ctree-function)

- CHAID

- [Wikipedia Article on CHAID](https://en.wikipedia.org/wiki/CHAID)

- [Basic Introduction to CHAID](https://smartdrill.com/Introduction-to-CHAID.html)

- [Good Tutorial on CHAID](http://www.statsoft.com/Textbook/CHAID-Analysis)

- MARS

- [Wikipedia Article on MARS](https://en.wikipedia.org/wiki/Multivariate_adaptive_regression_splines)

- Probabilistic Decision Trees

- [Bayesian Learning in Probabilistic Decision Trees](http://www.stats.org.uk/bayesian/Jordan.pdf)

- [Probabilistic Trees Research Paper](http://people.stern.nyu.edu/adamodar/pdfiles/papers/probabilistic.pdf)

Random Forest / Bagging

Boosting

- Gradient Boosting Machines

- [Gradient Boosting Wiki](https://en.wikipedia.org/wiki/Gradient_boosting)

- [Guidelines for GBM parameters in R](http://stats.stackexchange.com/questions/25748/what-are-some-useful-guidelines-for-gbm-parameters), [Strategy to set parameters](http://stats.stackexchange.com/questions/35984/strategy-to-set-the-gbm-parameters)

- [Meaning of Interaction Depth](http://stats.stackexchange.com/questions/16501/what-does-interaction-depth-mean-in-gbm)

- [Role of n.minobsinnode parameter of GBM in R](http://stats.stackexchange.com/questions/30645/role-of-n-minobsinnode-parameter-of-gbm-in-r)

- [GBM in R](http://www.slideshare.net/mark_landry/gbm-package-in-r)

- [FAQs about GBM](http://stats.stackexchange.com/tags/gbm/hot)

- [GBM vs xgboost](https://www.kaggle.com/c/higgs-boson/forums/t/9497/r-s-gbm-vs-python-s-xgboost)
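A minimal scikit-learn sketch of a gradient boosting machine whose parameters roughly mirror the R gbm arguments discussed above (max_depth vs interaction.depth, min_samples_leaf vs n.minobsinnode, learning_rate vs shrinkage); the values shown are just a starting point, not tuned guidelines:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, random_state=0)

gbm = GradientBoostingClassifier(
    n_estimators=300,      # number of trees
    learning_rate=0.05,    # shrinkage
    max_depth=3,           # depth of each tree (analogue of interaction depth)
    min_samples_leaf=10,   # minimum observations per terminal node
    random_state=0,
)
print("CV accuracy:", cross_val_score(gbm, X, y, cv=5).mean())
```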

- xgboost

- [xgboost tuning kaggle](https://www.kaggle.com/khozzy/rossmann-store-sales/xgboost-parameter-tuning-template/log)

- [xgboost vs gbm](https://www.kaggle.com/c/otto-group-product-classification-challenge/forums/t/13012/question-to-experienced-kagglers-and-anyone-who-wants-to-take-a-shot/68296#post68296)

- [xgboost survey](https://www.kaggle.com/c/higgs-boson/forums/t/10335/xgboost-post-competition-survey)

- [Practical XGBoost in Python online course (free)](http://education.parrotprediction.teachable.com/courses/practical-xgboost-in-python)

- AdaBoost

- [AdaBoost Wiki](https://en.wikipedia.org/wiki/AdaBoost), [Python Code](https://gist.github.com/tristanwietsma/5486024)

- [AdaBoost Sparse Input Support](http://hamzehal.blogspot.com/2014/06/adaboost-sparse-input-support.html)

- [adaBag R package](https://cran.r-project.org/web/packages/adabag/adabag.pdf)

- [Tutorial](http://math.mit.edu/~rothvoss/18.304.3PM/Presentations/1-Eric-Boosting304FinalRpdf.pdf)

- CatBoost

- [CatBoost Documentation](https://catboost.ai/docs/)

- [Benchmarks](https://catboost.ai/#benchmark)

- [Tutorial](https://github.com/catboost/tutorials)

- [GitHub Project](https://github.com/catboost)

- [CatBoost vs. Light GBM vs. XGBoost](https://towardsdatascience.com/catboost-vs-light-gbm-vs-xgboost-5f93620723db)

Ensembles

Stacking Models

Vapnik–Chervonenkis Dimension

Bayesian Machine Learning

Semi Supervised Learning

Optimization

Other Tutorials

- For a collection of Data Science tutorials using R, please refer to this list.

- For a collection of Data Science tutorials using Python, please refer to this list.