Lines Matching refs:classifier

70 classifier. Later the technique was extended to regression and clustering problems. SVM is a partial
77 nearest feature vectors from both classes (in the case of a 2-class classifier) is maximal. The feature
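
The two fragments above describe the maximum-margin hyperplane idea behind SVM training. A minimal sketch of a 2-class SVM with OpenCV's cv::ml::SVM follows; the toy samples, labels, and parameter values are illustrative assumptions, not taken from the original text:

@code{.cpp}
#include <opencv2/core.hpp>
#include <opencv2/ml.hpp>

int main()
{
    // Four 2-D training samples, one row per sample; labels are +1 / -1.
    float samplesData[4][2] = { {501, 10}, {255, 10}, {501, 255}, {10, 501} };
    int labelsData[4] = { 1, -1, -1, -1 };
    cv::Mat samples(4, 2, CV_32F, samplesData);
    cv::Mat labels(4, 1, CV_32S, labelsData);

    // C_SVC with a linear kernel fits the hyperplane whose margin to the
    // nearest feature vectors (the support vectors) of both classes is maximal.
    cv::Ptr<cv::ml::SVM> svm = cv::ml::SVM::create();
    svm->setType(cv::ml::SVM::C_SVC);
    svm->setKernel(cv::ml::SVM::LINEAR);
    svm->setTermCriteria(cv::TermCriteria(cv::TermCriteria::MAX_ITER, 100, 1e-6));
    svm->train(samples, cv::ml::ROW_SAMPLE, labels);

    // Classify a new feature vector.
    cv::Mat query = (cv::Mat_<float>(1, 2) << 400.f, 50.f);
    float response = svm->predict(query);
    (void)response;
    return 0;
}
@endcode
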
184 committee @cite HTF01 . A weak classifier is only required to be better than chance, and thus can be
186 strong classifier that often outperforms most "monolithic" strong classifiers such as SVMs and
199 Initially the same weight is assigned to each sample (step 2). Then, a weak classifier
203 next weak classifier continues for another \f$M-1\f$ times. The final classifier \f$F(x)\f$ is the
214 - Fit the classifier \f$f_m(x) \in \{-1, 1\}\f$, using weights \f$w_i\f$ on the training data.
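
The steps quoted above (equal initial weights, weighted fitting of each weak classifier \f$f_m(x)\f$, repeated \f$M\f$ times, then a weighted committee vote) match the Discrete AdaBoost procedure. One way this might be driven through cv::ml::Boost; the committee size, stump depth, and toy data are assumptions for illustration:

@code{.cpp}
#include <opencv2/core.hpp>
#include <opencv2/ml.hpp>

int main()
{
    // Toy 1-D two-class training set.
    cv::Mat samples = (cv::Mat_<float>(6, 1) << 1, 2, 3, 7, 8, 9);
    cv::Mat labels  = (cv::Mat_<int>(6, 1)   << 0, 0, 0, 1, 1, 1);

    cv::Ptr<cv::ml::Boost> boost = cv::ml::Boost::create();
    boost->setBoostType(cv::ml::Boost::DISCRETE); // Discrete AdaBoost
    boost->setWeakCount(100);                     // M, the size of the committee
    boost->setMaxDepth(1);                        // depth-1 trees (stumps) as weak classifiers
    boost->train(samples, cv::ml::ROW_SAMPLE, labels);

    // The final classifier F(x) is the (sign of the) weighted committee vote.
    cv::Mat query = (cv::Mat_<float>(1, 1) << 2.5f);
    float label = boost->predict(query);
    (void)label;
    return 0;
}
@endcode
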
231 Examples with a very low relative weight have a small impact on the weak classifier training. Thus,
232 such examples may be excluded during the weak classifier training without having much effect on the
233 induced classifier. This process is controlled with the weight_trim_rate parameter. Only examples
234 whose weights sum up to the fraction weight_trim_rate of the total weight mass are used in the weak classifier
244 the raw sum from the Boost classifier.
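
A short sketch of how the two knobs mentioned above appear in cv::ml::Boost, assuming the training data and a configured model come from elsewhere; the 0.95 value and the helper function are illustrative, and RAW_OUTPUT is assumed to behave as the fragment above describes:

@code{.cpp}
#include <opencv2/core.hpp>
#include <opencv2/ml.hpp>

// Sketch only: the model, training data, and query come from elsewhere.
void trimAndQuery(cv::Ptr<cv::ml::Boost> boost,
                  const cv::Mat& samples, const cv::Mat& labels, const cv::Mat& query)
{
    // Before training: keep only the highest-weighted examples whose weights
    // sum up to 95% of the total mass for each weak classifier; 0 disables trimming.
    boost->setWeightTrimRate(0.95);
    boost->train(samples, cv::ml::ROW_SAMPLE, labels);

    // After training: RAW_OUTPUT asks predict for the raw weighted sum F(x)
    // rather than the thresholded class label.
    float rawSum = boost->predict(query, cv::noArray(), cv::ml::StatModel::RAW_OUTPUT);
    (void)rawSum;
}
@endcode
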
253 The classification works as follows: the random trees classifier takes the input feature vector,
255 of "votes". In the case of regression, the classifier response is the average of the responses over
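
As context for the voting description above, a minimal cv::ml::RTrees sketch; the tree count, depth, and toy data are illustrative assumptions:

@code{.cpp}
#include <opencv2/core.hpp>
#include <opencv2/ml.hpp>

int main()
{
    // Toy 2-D two-class training set.
    cv::Mat samples = (cv::Mat_<float>(6, 2) << 1,1, 2,2, 3,3, 7,7, 8,8, 9,9);
    cv::Mat labels  = (cv::Mat_<int>(6, 1)   << 0, 0, 0, 1, 1, 1);

    cv::Ptr<cv::ml::RTrees> forest = cv::ml::RTrees::create();
    forest->setMaxDepth(5);
    // MAX_ITER here bounds the number of trees grown in the forest.
    forest->setTermCriteria(cv::TermCriteria(cv::TermCriteria::MAX_ITER, 50, 0));
    forest->train(samples, cv::ml::ROW_SAMPLE, labels);

    // Every tree in the forest classifies the input feature vector; predict()
    // returns the class winning the majority of votes. With floating-point
    // responses (regression), it returns the average over the trees instead.
    cv::Mat query = (cv::Mat_<float>(1, 2) << 2.5f, 2.5f);
    float winner = forest->predict(query);
    (void)winner;
    return 0;
}
@endcode
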
418 workaround. If a certain feature in the input or output (in the case of an n-class classifier for
438 it creates multiple 2-class classifiers). To train the logistic regression classifier,
441 discriminative classifier (see <http://www.cs.cmu.edu/~tom/NewChapters.html> for more details).
478 A sample set of training parameters for the Logistic Regression classifier can be initialized as follows:
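
A sketch of what such an initialization might look like with cv::ml::LogisticRegression; the particular values are illustrative rather than recommended:

@code{.cpp}
#include <opencv2/ml.hpp>

cv::Ptr<cv::ml::LogisticRegression> makeClassifier()
{
    cv::Ptr<cv::ml::LogisticRegression> lr = cv::ml::LogisticRegression::create();
    lr->setLearningRate(0.001);                                // gradient-descent step size
    lr->setIterations(10);                                     // optimization iterations
    lr->setRegularization(cv::ml::LogisticRegression::REG_L2); // L2 penalty on the weights
    lr->setTrainMethod(cv::ml::LogisticRegression::BATCH);     // batch gradient descent
    lr->setMiniBatchSize(1);                                   // only used with MINI_BATCH
    return lr;
}
@endcode
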