CS-644B: Pattern Recognition
"We don't see things as they are. We see them as we are." - Anais Nin
Detailed Course Contents
Introduction to Pattern Recognition via Character Recognition
- Transducers
- Preprocessing
- Feature extraction (feature-space representation)
- Classification (decision regions)
- Grids (square, triangular, hexagonal)
- Connectivity
- Contour tracing (square & Moore neighborhood tracing)
- M.I.T. reading machine for the blind
- Hysteresis smoothing (digital filtering)
- Types of input to pattern recognition programs
Spatial Smoothing
- Regularization
- Logical smoothing (salt-and-pepper noise)
- Local averaging
- Median filtering
- Polygonal approximation
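As a minimal illustration of the median-filtering topic above, here is a sketch in Python (not part of the course materials; the border-handling choice is an assumption for brevity):

```python
def median_filter(img, k=3):
    """Apply a k x k median filter to a 2-D list of grey levels.
    Border pixels are copied unchanged (a simplifying assumption)."""
    h, w = len(img), len(img[0])
    r = k // 2
    out = [row[:] for row in img]
    for y in range(r, h - r):
        for x in range(r, w - r):
            # gather the k*k neighbourhood and take its median
            window = sorted(img[y + dy][x + dx]
                            for dy in range(-r, r + 1)
                            for dx in range(-r, r + 1))
            out[y][x] = window[len(window) // 2]
    return out
```

Unlike local averaging, the median suppresses isolated salt-and-pepper noise without blurring step edges, since an outlier pixel never reaches the middle of the sorted window.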
Spatial Differentiation
- Sobel operator
- Roberts cross operator
- Laplacian
- Unsharp masking
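A minimal sketch of one of the differentiation operators listed, the Sobel operator, at a single interior pixel (illustrative only; the `|gx| + |gy|` magnitude is a common approximation, assumed here instead of the Euclidean norm):

```python
def sobel_magnitude(img, y, x):
    """Approximate gradient magnitude at interior pixel (y, x)
    using the 3x3 Sobel masks."""
    gx = (img[y-1][x+1] + 2*img[y][x+1] + img[y+1][x+1]
          - img[y-1][x-1] - 2*img[y][x-1] - img[y+1][x-1])
    gy = (img[y+1][x-1] + 2*img[y+1][x] + img[y+1][x+1]
          - img[y-1][x-1] - 2*img[y-1][x] - img[y-1][x+1])
    return abs(gx) + abs(gy)   # |gx| + |gy| in place of sqrt(gx^2 + gy^2)
```

The response is large across a step edge and zero over uniform regions, which is what makes it useful as an edge detector.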
Spatial Moments
- Moments of distributions
- Moments of area & perimeter
- Moments for feature extraction
- Moments for pre-processing
- Moments as predictors of discrimination performance
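To make the moment topics concrete, a sketch of central moments of a binary image (illustrative code, not from the course; it assumes a 0/1 pixel array with x as column and y as row):

```python
def central_moment(img, p, q):
    """Central moment mu_pq of a binary image given as a list of 0/1 rows.
    Central moments are translation-invariant, which is why they are
    useful both for feature extraction and for pre-processing."""
    m00 = m10 = m01 = 0
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            m00 += v
            m10 += x * v
            m01 += y * v
    xbar, ybar = m10 / m00, m01 / m00          # centroid
    return sum((x - xbar) ** p * (y - ybar) ** q * v
               for y, row in enumerate(img) for x, v in enumerate(row))
```

For a shape symmetric about both centroid axes, mu_11 vanishes, while mu_20 and mu_02 measure spread along each axis.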
Medial Axis Transformations
- Distance between sets
- Medial axis (prairie-fire transformation)
- Skeletonization
- Hilditch's algorithm
- Rosenfeld's algorithm
- Minkowski metrics
- Distance transforms
- Skeleton clean-up via distance transforms
- Medial axes via distance transforms
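The distance-transform topics above can be illustrated with the classic two-pass algorithm; this sketch uses the city-block (L1) Minkowski metric and is an assumption-laden simplification, not the course's own code:

```python
def city_block_distance_transform(img):
    """Two-pass distance transform: each foreground (1) pixel receives
    its city-block distance to the nearest background (0) pixel.
    Local maxima of this map approximate the medial axis."""
    h, w = len(img), len(img[0])
    INF = h + w                      # larger than any possible distance
    d = [[0 if img[y][x] == 0 else INF for x in range(w)] for y in range(h)]
    # forward pass: propagate distances from top-left
    for y in range(h):
        for x in range(w):
            if y > 0:
                d[y][x] = min(d[y][x], d[y-1][x] + 1)
            if x > 0:
                d[y][x] = min(d[y][x], d[y][x-1] + 1)
    # backward pass: propagate distances from bottom-right
    for y in range(h - 1, -1, -1):
        for x in range(w - 1, -1, -1):
            if y < h - 1:
                d[y][x] = min(d[y][x], d[y+1][x] + 1)
            if x < w - 1:
                d[y][x] = min(d[y][x], d[y][x+1] + 1)
    return d
```

Two raster scans suffice because with the L1 metric every shortest path can be split into a "coming from above/left" part and a "coming from below/right" part.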
Topological Feature Extraction
- Convex hulls, concavities, and enclosures
Processing Line Drawings
- Square, circular, and grid-intersect quantization
- Probability of obtaining diagonal elements
- Geometric probability (Bertrand's paradox)
- Difference encoding & chain correlation functions
- Minkowski metric quantization
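The difference-encoding topic above can be sketched with Freeman chain codes (an illustrative sketch; the direction numbering 0 = east, counter-clockwise, is one common convention, assumed here):

```python
def chain_code(contour):
    """8-direction Freeman chain code of a contour given as successive
    lattice points, each step moving to an 8-neighbour."""
    dirs = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
            (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}
    return [dirs[(x2 - x1, y2 - y1)]
            for (x1, y1), (x2, y2) in zip(contour, contour[1:])]

def difference_code(code):
    """First differences mod 8 of a closed chain code: the result is
    invariant to rotations of the figure by multiples of 45 degrees."""
    return [(b - a) % 8 for a, b in zip(code, code[1:] + code[:1])]
```

Rotating a square by 90 degrees permutes its chain code but leaves the difference code unchanged, which is exactly why difference encoding is used for rotation-tolerant matching.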
Detection of Structure in Noisy Pictures and Dot Patterns
- Point-to-curve transformations (Hough transform)
- Line and circle detection
- Hypothesis testing approach
- Maximum-entropy quantization
- Proximity graphs and perception
- Triangulations and Voronoi diagrams
- The shape of a set of points
- Relative neighbourhood graphs
- Sphere-of-influence graphs
- Alpha hulls & Beta skeletons
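The point-to-curve idea behind the Hough transform for line detection can be sketched as follows (illustrative only; the normal parameterization x cos θ + y sin θ = ρ and the accumulator resolution are assumptions):

```python
import math

def hough_lines(points, n_theta=180, rho_res=1.0):
    """Vote in (theta, rho) space: each point (x, y) maps to the curve
    rho = x cos(theta) + y sin(theta); collinear points vote for the
    same (theta, rho) cell.  Returns the best cell and the accumulator."""
    acc = {}
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = round((x * math.cos(theta) + y * math.sin(theta)) / rho_res)
            acc[(t, rho)] = acc.get((t, rho), 0) + 1
    return max(acc, key=acc.get), acc
```

The key property: n collinear points produce n sinusoids in parameter space that all intersect in one cell, so a peak in the accumulator reveals a line even when the points are embedded in noise.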
Neural Networks and Bayesian Decision Theory
- Formal neurons, linear machines & perceptrons
- Continuous and discrete measurements
- Minimum risk classification
- Minimum error classification
- Discriminant functions
- The multivariate Gaussian probability density function
- Mahalanobis distance classifiers
- Parametric decision rules
- Independence and the discrete case
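A sketch of the Mahalanobis distance listed above, restricted to two dimensions so the covariance inverse can be written out by hand (illustrative code, not from the course):

```python
def mahalanobis2(x, mean, cov):
    """Squared Mahalanobis distance (x - m)^T C^{-1} (x - m) for a 2-D
    measurement x, class mean m, and 2x2 covariance matrix C."""
    dx, dy = x[0] - mean[0], x[1] - mean[1]
    (a, b), (c, d) = cov
    det = a * d - b * c
    # closed-form inverse of a 2x2 matrix
    ia, ib, ic, id_ = d / det, -b / det, -c / det, a / det
    return dx * (ia * dx + ib * dy) + dy * (ic * dx + id_ * dy)
```

With the identity covariance this reduces to squared Euclidean distance; unequal variances rescale each axis, which is what makes a Mahalanobis distance classifier sensible when features have different spreads.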
Independence of Measurements, Redundancy, and Synergism
- Conditional and unconditional independence
- Dependence and correlation
- The best k measurements are not the k best
- Information theory and feature evaluation criteria
- Feature selection methods
Neural Networks and Non-parametric Learning
- Perceptrons
- Non-parametric training of linear machines
- Error-correction procedures
- The fundamental learning theorem
- Multi-layer networks
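The error-correction procedure for perceptrons can be sketched as fixed-increment training on augmented patterns (an illustrative sketch; the epoch cap is an assumption, since convergence is only guaranteed for linearly separable data):

```python
def perceptron_train(samples, labels, epochs=100):
    """Fixed-increment error-correction training of a linear machine.
    samples: feature tuples; labels: +1 / -1.
    Returns the augmented weight vector (w_1, ..., w_d, bias)."""
    d = len(samples[0])
    w = [0.0] * (d + 1)
    for _ in range(epochs):
        errors = 0
        for x, t in zip(samples, labels):
            aug = list(x) + [1.0]                 # augmented pattern
            s = sum(wi * xi for wi, xi in zip(w, aug))
            if t * s <= 0:                        # misclassified: correct
                w = [wi + t * xi for wi, xi in zip(w, aug)]
                errors += 1
        if errors == 0:                           # separable data: converged
            break
    return w

def perceptron_classify(w, x):
    s = sum(wi * xi for wi, xi in zip(w, list(x) + [1.0]))
    return 1 if s > 0 else -1
```

The fundamental learning theorem guarantees that on linearly separable data this procedure makes only finitely many corrections before the inner loop runs error-free.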
Estimation of Parameters and Classifier Performance
- Properties of estimators
- Dimensionality and sample size
- Estimation of the probability of misclassification
Nearest Neighbor Decision Rules
- The k-nearest neighbor rule
- Efficient search methods for nearest neighbors
- Decreasing space requirements
- Editing training sets
- Error bounds
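A minimal sketch of the k-nearest-neighbor rule itself, using brute-force search with Euclidean distance (illustrative; the efficient search and set-editing methods listed above address exactly the cost this naive version incurs):

```python
def knn_classify(train, labels, x, k=3):
    """k-nearest-neighbour rule: find the k training samples closest
    to x (Euclidean distance) and return the majority label."""
    dist2 = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    nearest = sorted(range(len(train)), key=lambda i: dist2(train[i], x))[:k]
    votes = {}
    for i in nearest:
        votes[labels[i]] = votes.get(labels[i], 0) + 1
    return max(votes, key=votes.get)
```

Every training sample must be stored and scanned per query, which motivates the topics above on decreasing space requirements and editing the training set.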
Using Contextual Information in Pattern Recognition
- Markov methods
- Forward dynamic programming and the Viterbi algorithm
- Combined bottom-up and top-down algorithms
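The Viterbi algorithm listed above can be sketched as forward dynamic programming over a small hidden Markov model (illustrative code; the dictionary-based model representation is an assumption for compactness):

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely state sequence for an observation sequence, found by
    forward dynamic programming.  V[t][s] holds (best probability of any
    path ending in state s at time t, predecessor state)."""
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for o in obs[1:]:
        V.append({s: max(((V[-1][r][0] * trans_p[r][s] * emit_p[s][o], r)
                          for r in states), key=lambda c: c[0])
                  for s in states})
    # backtrack from the best final state
    best = max(states, key=lambda s: V[-1][s][0])
    path = [best]
    for t in range(len(obs) - 1, 0, -1):
        path.append(V[t][path[-1]][1])
    return list(reversed(path))
```

Keeping only the best path into each state at each time step is what reduces the exponentially many state sequences to a cost linear in sequence length.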
Cluster Analysis and Unsupervised Learning
- Decision-directed learning
- Graph-theoretic methods
- Agglomerative and divisive methods