n-Dimensional Vector
An n-Dimensional Vector is a vector with more than one dimension, i.e. one with n components.
- Context:
- It can be from an n-Dimensional Space.
- See: 2-Dimensional Vector, 3-Dimensional Vector.
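As a minimal illustration, an n-dimensional vector can be represented as an array of n components, on which vector operations act componentwise. The sketch below uses NumPy arrays; the specific values are arbitrary.

```python
import numpy as np

# A 5-dimensional vector: one component per dimension.
v = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
n = v.shape[0]  # number of components, n = 5

# Operations in n-dimensional space are defined componentwise:
w = np.array([5.0, 4.0, 3.0, 2.0, 1.0])
s = v + w                 # componentwise sum
p = float(np.dot(v, w))   # inner product: sum of componentwise products
print(n, s, p)
```

A 2-dimensional or 3-dimensional vector is simply the special case n = 2 or n = 3.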
References
2003
- (Zelenko et al., 2003) ⇒ Dmitry Zelenko, Chinatsu Aone, and Anthony Richardella. (2003). “Kernel Methods for Relation Extraction.” In: Journal of Machine Learning Research, 3.
- QUOTE: Most learning algorithms rely on feature-based representation of objects. That is, an object is transformed into a collection of features [math]\displaystyle{ f_1,...,f_N }[/math], thereby producing an N-dimensional vector.
2001
- (Hand et al., 2001) ⇒ David J. Hand, Heikki Mannila, and Padhraic Smyth. (2001). “Principles of Data Mining.” In: MIT Press. ISBN:026208290X
- QUOTE: A model structure, as defined here, is a global summary of a data set; it makes statements about any point in the full measurement space. Geometrically, if we consider the rows of the data matrix as corresponding to p-dimensional vectors (i.e., points in p-dimensional space), the model can make a statement about any point in this space (and hence, any object). For example, it can assign a point to a cluster or predict the value of some other variable. Even when some of the measurements are missing (i.e., some of the components of the p-dimensional vector are unknown), a model can typically make some statement about the object represented by the (incomplete) vector.
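The passage above treats each row of a data matrix as a p-dimensional vector (a point in p-dimensional space) about which a model can make a statement, e.g. assigning it to a cluster. A minimal sketch of that idea, with hypothetical data and cluster centers (none of these values come from the source):

```python
import numpy as np

# Hypothetical data matrix: 4 objects (rows), each a p-dimensional vector (p = 3).
X = np.array([[1.0, 2.0, 0.5],
              [0.9, 2.1, 0.4],
              [5.0, 5.0, 5.0],
              [5.1, 4.9, 5.2]])

# Two illustrative cluster centers; a model of this kind can assign
# ANY point in p-dimensional space to its nearest center.
centers = np.array([[1.0, 2.0, 0.5],
                    [5.0, 5.0, 5.0]])

def assign(point):
    # Index of the nearest center under Euclidean distance.
    dists = np.linalg.norm(centers - point, axis=1)
    return int(np.argmin(dists))

labels = [assign(row) for row in X]
print(labels)  # prints [0, 0, 1, 1]
```

The same rule still applies if some components of a vector are unknown, provided the distance is computed over the observed components only.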
1995
- (Cortes & Vapnik, 1995) ⇒ Corinna Cortes, and Vladimir N. Vapnik. (1995). “Support Vector Networks.” In: Machine Learning, 20(3). doi:10.1007/BF00994018
- QUOTE: More than 60 years ago R. A. Fisher (Fisher, 1936) suggested the first algorithm for pattern recognition. He considered a model of two normal distributed populations, [math]\displaystyle{ N(m_1, \Sigma_1) }[/math] and [math]\displaystyle{ N(m_2, \Sigma_2) }[/math] of [math]\displaystyle{ n }[/math] dimensional vectors [math]\displaystyle{ \mathbf{x} }[/math] with mean vectors [math]\displaystyle{ m_1 }[/math] and [math]\displaystyle{ m_2 }[/math] and co-variance matrices [math]\displaystyle{ \Sigma_1 }[/math] and [math]\displaystyle{ \Sigma_2 }[/math], and showed that the optimal (Bayesian) solution is a quadratic decision function: ...
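For Fisher's setting above, the Bayes-optimal rule compares the log-densities of the two Gaussian populations of n-dimensional vectors, which is a quadratic function of [math]\displaystyle{ \mathbf{x} }[/math]. A sketch with illustrative (not source-given) parameters [math]\displaystyle{ m_1, m_2, \Sigma_1, \Sigma_2 }[/math]:

```python
import numpy as np

# Illustrative parameters for two populations N(m1, S1) and N(m2, S2)
# of 2-dimensional vectors (equal priors assumed).
m1, m2 = np.array([0.0, 0.0]), np.array([2.0, 2.0])
S1, S2 = np.eye(2), 2.0 * np.eye(2)

def quadratic_decision(x):
    # log N(x; m1, S1) - log N(x; m2, S2): positive => assign to population 1.
    # Quadratic in x because each log-density contains (x - m)^T S^{-1} (x - m).
    d1, d2 = x - m1, x - m2
    return (-0.5 * d1 @ np.linalg.inv(S1) @ d1
            + 0.5 * d2 @ np.linalg.inv(S2) @ d2
            - 0.5 * np.log(np.linalg.det(S1) / np.linalg.det(S2)))

print(quadratic_decision(np.array([0.0, 0.0])))  # positive: near m1
```

When [math]\displaystyle{ \Sigma_1 = \Sigma_2 }[/math] the quadratic terms cancel and the rule reduces to a linear decision function.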