Advanced Pattern Recognition (パターン認識特論)

Numbering Code: G-INF01 63165 LE12
Year/Term: 2022, First semester
Number of Credits: 2
Course Type: Lecture
Target Year:
Target Student:
Language: English
Day/Period: Wed. 2
Instructors: KAWAHARA TATSUYA (Graduate School of Informatics, Professor)
NISHINO KO (Graduate School of Informatics, Professor)
NOBUHARA SHOUHEI (Graduate School of Informatics, Associate Professor)
YOSHII KAZUYOSHI (Graduate School of Informatics, Associate Professor)
Outline and Purpose of the Course: The course introduces the fundamentals of pattern recognition, clustering methods with several distance measures, and feature extraction methods. It reviews state-of-the-art classifiers such as Gaussian Mixture Models (GMM), Hidden Markov Models (HMM), and Deep Neural Networks (DNN), as well as learning frameworks including Maximum Likelihood Estimation (MLE), Bayesian learning, and deep learning. It also covers the modeling and recognition of sequential patterns.

Course Goals: To learn the basic methodology and a variety of techniques of pattern recognition, and to be able to apply them to one's own research topics.

Schedule and Contents:
1. Fundamentals (3 weeks; Nishino)
Introduction, Probability Theory
Decision Theory, Linear Regression
Linear Classification
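As a small illustration of the linear regression topic in this unit (not part of the official course materials; the data and variable names are synthetic), a least-squares fit can be sketched as follows:

```python
import numpy as np

# Linear regression by ordinary least squares on synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))                 # 100 samples, 2 features
w_true = np.array([2.0, -1.0])                # ground-truth weights
y = X @ w_true + 0.1 * rng.normal(size=100)   # targets with small noise

# Closed-form least-squares solution of min_w ||Xw - y||^2.
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(w_hat)  # estimates close to w_true
```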

2. Statistical Feature Extraction (3 weeks; Nobuhara)
PCA, Fisher LDA, basics of matrix algebra
Application of PCA & Fisher LDA, Subspace, Factor Analysis (FA)
ICA, probabilistic PCA, probabilistic FA
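A minimal sketch of the PCA topic from this unit (illustrative only; the data are synthetic): project centered data onto the top eigenvector of the sample covariance matrix.

```python
import numpy as np

# Synthetic 2-D data with most variance along the first axis.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2)) @ np.array([[3.0, 0.0], [0.0, 0.5]])

Xc = X - X.mean(axis=0)               # center the data
C = Xc.T @ Xc / (len(Xc) - 1)         # sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(C)  # eigenvalues in ascending order
pc1 = eigvecs[:, -1]                  # first principal component
Z = Xc @ pc1                          # 1-D projection of the data
```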

3. Modeling and Recognition of Sequential Patterns (3 weeks; Nobuhara & Kawahara)
Kalman filter, Particle filter
DP matching, HMM
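The DP matching topic in this unit can be illustrated by a toy dynamic-time-warping distance (a sketch, not course material; the function and sequences are hypothetical):

```python
# DP matching (dynamic time warping): align two sequences by dynamic
# programming, minimising the cumulative frame-wise distance.
def dtw(a, b):
    n, m = len(a), len(b)
    INF = float("inf")
    # D[i][j] = minimal cost of aligning a[:i] with b[:j]
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]

print(dtw([1, 2, 3, 4], [1, 2, 2, 3, 4]))  # 0.0: the repeated 2 is absorbed
```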

4. Maximum Likelihood Estimation and Bayesian Learning (3 weeks; Yoshii)
GMM, maximum likelihood estimation, EM algorithm
Bayesian estimation, variational Bayes, Gibbs sampling
Bayesian nonparametrics, Dirichlet, gamma, and beta processes
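As an illustrative sketch of maximum-likelihood GMM fitting with the EM algorithm from this unit (synthetic 1-D data, two components; not part of the official materials):

```python
import numpy as np

# Two well-separated Gaussian clusters at -3 and +3.
rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(-3, 1, 300), rng.normal(3, 1, 300)])

# Initial mixture parameters: weights, means, variances.
w = np.array([0.5, 0.5])
mu = np.array([-1.0, 1.0])
var = np.array([1.0, 1.0])

for _ in range(50):
    # E-step: responsibility r[n, k] of component k for sample n.
    d = x[:, None] - mu[None, :]
    pdf = np.exp(-0.5 * d**2 / var) / np.sqrt(2 * np.pi * var)
    r = w * pdf
    r /= r.sum(axis=1, keepdims=True)
    # M-step: re-estimate parameters from the responsibilities.
    Nk = r.sum(axis=0)
    w = Nk / len(x)
    mu = (r * x[:, None]).sum(axis=0) / Nk
    var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / Nk

print(np.sort(mu))  # approximately [-3, 3]
```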

5. Discriminative Models and Deep Learning (3 weeks; Kawahara)
Discriminative learning, Logistic Regression, CRF, SVM, boosting
Deep learning, deep neural network
Deep learning, recurrent neural network
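The discriminative-learning topic in this unit can be sketched with binary logistic regression trained by gradient descent (illustrative only; the data are synthetic and nearly separable):

```python
import numpy as np

# Two Gaussian classes centred at (-2, -2) and (2, 2).
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(-2, 1, size=(100, 2)),
               rng.normal(2, 1, size=(100, 2))])
y = np.concatenate([np.zeros(100), np.ones(100)])

w = np.zeros(2)
b = 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))  # sigmoid predictions
    grad_w = X.T @ (p - y) / len(y)     # gradient of cross-entropy loss
    grad_b = (p - y).mean()
    w -= 0.1 * grad_w
    b -= 0.1 * grad_b

p = 1 / (1 + np.exp(-(X @ w + b)))
acc = ((p > 0.5) == (y == 1)).mean()    # near-perfect training accuracy
```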

Evaluation Methods and Policy: Grading will be based on answers to questions and on reports for the assignments given by the individual lecturers during the course. Achievement of the course goals is assessed comprehensively in accordance with Article 7 of the grading regulations of the Graduate School of Informatics.

Course Requirements: None
Study outside of Class (preparation and review): Lecture materials will be provided via the PandA CMS. Students are expected to use them for preparation and review.

References, etc.:
Pattern Recognition and Machine Learning (パターン認識と機械学習), C. M. Bishop (Springer, 2006)
Deep Learning, I. Goodfellow, Y. Bengio, and A. Courville (MIT Press, 2016)
Pattern Classification (パターン識別), R. O. Duda, P. E. Hart, and D. G. Stork (John Wiley & Sons, 2001)
The Elements of Statistical Learning (統計的学習の基礎), T. Hastie, R. Tibshirani, and J. Friedman (Springer, 2009)