Evision.Face.FisherFaceRecognizer (Evision v0.2.9)

Summary

Types

  • t() - Type that represents a Face.FisherFaceRecognizer struct.

Functions

  • clear - Clears the algorithm state.
  • create - Creates a FisherFaceRecognizer.
  • empty - Returns true if the Algorithm is empty (e.g. in the very beginning or after an unsuccessful read).
  • getDefaultName - Returns the algorithm string identifier.
  • getEigenValues - Returns the eigenvalues of this Linear Discriminant Analysis.
  • getEigenVectors - Returns the eigenvectors of this Linear Discriminant Analysis.
  • getLabelInfo - Gets string information by label.
  • getLabels - Returns the labels corresponding to the projections.
  • getLabelsByString - Gets vector of labels by string.
  • getMean - Returns the sample mean calculated from the training data.
  • getNumComponents - Returns the number of components.
  • getProjections - Returns the projections of the training data.
  • getThreshold - Returns the prediction threshold.
  • predict - Predicts a label and associated confidence (e.g. distance) for a given input image.
  • predict_collect - If implemented, sends all prediction results to a collector for custom result handling.
  • predict_label - Predicts a label for a given input image.
  • read - Loads a FaceRecognizer and its model state.
  • save - Saves the algorithm to a file.
  • setLabelInfo - Sets string info for the specified model's label.
  • setNumComponents - Sets the number of components.
  • setThreshold - Sets the prediction threshold.
  • train - Trains a FaceRecognizer with given data and associated labels.
  • update - Updates a FaceRecognizer with given data and associated labels.
  • write - Saves a FaceRecognizer and its model state.

Types

@type t() :: %Evision.Face.FisherFaceRecognizer{ref: reference()}

Type that represents a Face.FisherFaceRecognizer struct.

  • ref: reference()

    The underlying Erlang resource variable.

Functions

@spec clear(Keyword.t()) :: any() | {:error, String.t()}
@spec clear(t()) :: t() | {:error, String.t()}

Clears the algorithm state.

Positional Arguments
  • self: Evision.Face.FisherFaceRecognizer.t()

Python prototype (for reference only):

clear() -> None
@spec create() :: t() | {:error, String.t()}

create

Keyword Arguments
  • num_components: integer().

    The number of components (read: Fisherfaces) kept for this Linear Discriminant Analysis with the Fisherfaces criterion. It is useful to keep all components, which means the number of your classes c (read: subjects, persons you want to recognize). If you leave this at the default (0), or set it to a value less than or equal to 0 or greater than (c-1), it will be set to the correct number (c-1) automatically.

  • threshold: double.

    The threshold applied in the prediction. If the distance to the nearest neighbor is larger than the threshold, this method returns -1.

Return
  • retval: FisherFaceRecognizer

Notes:

  • Training and prediction must be done on grayscale images, use cvtColor to convert between the color spaces.

  • The Fisherfaces method assumes that the training and test images are of equal size. Make sure your input data has the correct shape, otherwise a meaningful exception is thrown. Use resize to resize the images.

  • This model does not support updating.

Model internal data:

  • num_components see FisherFaceRecognizer::create.

  • threshold see FisherFaceRecognizer::create.

  • eigenvalues The eigenvalues for this Linear Discriminant Analysis (ordered descending).

  • eigenvectors The eigenvectors for this Linear Discriminant Analysis (ordered by their eigenvalue).

  • mean The sample mean calculated from the training data.

  • projections The projections of the training data.

  • labels The labels corresponding to the projections.

Python prototype (for reference only):

create([, num_components[, threshold]]) -> retval
@spec create(Keyword.t()) :: any() | {:error, String.t()}
@spec create([num_components: term(), threshold: term()] | nil) ::
  t() | {:error, String.t()}

create

Keyword Arguments
  • num_components: integer().

    The number of components (read: Fisherfaces) kept for this Linear Discriminant Analysis with the Fisherfaces criterion. It is useful to keep all components, which means the number of your classes c (read: subjects, persons you want to recognize). If you leave this at the default (0), or set it to a value less than or equal to 0 or greater than (c-1), it will be set to the correct number (c-1) automatically.

  • threshold: double.

    The threshold applied in the prediction. If the distance to the nearest neighbor is larger than the threshold, this method returns -1.

Return
  • retval: FisherFaceRecognizer

Notes:

  • Training and prediction must be done on grayscale images, use cvtColor to convert between the color spaces.

  • The Fisherfaces method assumes that the training and test images are of equal size. Make sure your input data has the correct shape, otherwise a meaningful exception is thrown. Use resize to resize the images.

  • This model does not support updating.

Model internal data:

  • num_components see FisherFaceRecognizer::create.

  • threshold see FisherFaceRecognizer::create.

  • eigenvalues The eigenvalues for this Linear Discriminant Analysis (ordered descending).

  • eigenvectors The eigenvectors for this Linear Discriminant Analysis (ordered by their eigenvalue).

  • mean The sample mean calculated from the training data.

  • projections The projections of the training data.

  • labels The labels corresponding to the projections.

Python prototype (for reference only):

create([, num_components[, threshold]]) -> retval
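For reference, a minimal Elixir sketch of creating a recognizer (assuming Evision was built with the OpenCV contrib face module):

# Create a Fisherfaces recognizer with default settings
# (keep all components, threshold = DBL_MAX, i.e. no rejection):
recognizer = Evision.Face.FisherFaceRecognizer.create()

# Or cap the number of components and set a rejection threshold:
recognizer = Evision.Face.FisherFaceRecognizer.create(num_components: 10, threshold: 1500.0)
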
@spec empty(Keyword.t()) :: any() | {:error, String.t()}
@spec empty(t()) :: boolean() | {:error, String.t()}

Returns true if the Algorithm is empty (e.g. in the very beginning or after an unsuccessful read).

Positional Arguments
  • self: Evision.Face.FisherFaceRecognizer.t()
Return
  • retval: bool

Python prototype (for reference only):

empty() -> retval
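A hedged usage sketch, assuming `recognizer` was created as shown above:

# true right after create/0; false once the model has been trained:
Evision.Face.FisherFaceRecognizer.empty(recognizer)
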

getDefaultName(named_args)

@spec getDefaultName(Keyword.t()) :: any() | {:error, String.t()}
@spec getDefaultName(t()) :: binary() | {:error, String.t()}

getDefaultName

Positional Arguments
  • self: Evision.Face.FisherFaceRecognizer.t()
Return
  • retval: String

Returns the algorithm string identifier. This string is used as the top-level XML/YAML node tag when the object is saved to a file or string.

Python prototype (for reference only):

getDefaultName() -> retval

getEigenValues(named_args)

@spec getEigenValues(Keyword.t()) :: any() | {:error, String.t()}
@spec getEigenValues(t()) :: Evision.Mat.t() | {:error, String.t()}

getEigenValues

Positional Arguments
  • self: Evision.Face.FisherFaceRecognizer.t()
Return
  • retval: Evision.Mat.t()

Python prototype (for reference only):

getEigenValues() -> retval

getEigenVectors(named_args)

@spec getEigenVectors(Keyword.t()) :: any() | {:error, String.t()}
@spec getEigenVectors(t()) :: Evision.Mat.t() | {:error, String.t()}

getEigenVectors

Positional Arguments
  • self: Evision.Face.FisherFaceRecognizer.t()
Return
  • retval: Evision.Mat.t()

Python prototype (for reference only):

getEigenVectors() -> retval
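As a hedged sketch (assuming a trained `recognizer`, and that Evision.Mat.to_nx/1 and Nx are available in your setup), the returned matrices can be converted to Nx tensors for inspection:

eigenvalues = Evision.Face.FisherFaceRecognizer.getEigenValues(recognizer)
eigenvectors = Evision.Face.FisherFaceRecognizer.getEigenVectors(recognizer)

# Inspect the shapes; there is one eigenvalue/eigenvector per kept component:
Evision.Mat.to_nx(eigenvalues) |> Nx.shape()
Evision.Mat.to_nx(eigenvectors) |> Nx.shape()
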

getLabelInfo(named_args)

@spec getLabelInfo(Keyword.t()) :: any() | {:error, String.t()}

getLabelInfo(self, label)

@spec getLabelInfo(t(), integer()) :: binary() | {:error, String.t()}

Gets string information by label.

Positional Arguments
  • self: Evision.Face.FisherFaceRecognizer.t()
  • label: integer()
Return
  • retval: String

If an unknown label id is provided, or there is no label information associated with the specified label id, the method returns an empty string.

Python prototype (for reference only):

getLabelInfo(label) -> retval
@spec getLabels(Keyword.t()) :: any() | {:error, String.t()}
@spec getLabels(t()) :: Evision.Mat.t() | {:error, String.t()}

getLabels

Positional Arguments
  • self: Evision.Face.FisherFaceRecognizer.t()
Return
  • retval: Evision.Mat.t()

Python prototype (for reference only):

getLabels() -> retval

getLabelsByString(named_args)

@spec getLabelsByString(Keyword.t()) :: any() | {:error, String.t()}

getLabelsByString(self, str)

@spec getLabelsByString(t(), binary()) :: [integer()] | {:error, String.t()}

Gets vector of labels by string.

Positional Arguments
  • self: Evision.Face.FisherFaceRecognizer.t()
  • str: String
Return
  • retval: [integer()]

The function searches for the labels containing the specified sub-string in the associated string info.

Python prototype (for reference only):

getLabelsByString(str) -> retval
@spec getMean(Keyword.t()) :: any() | {:error, String.t()}
@spec getMean(t()) :: Evision.Mat.t() | {:error, String.t()}

getMean

Positional Arguments
  • self: Evision.Face.FisherFaceRecognizer.t()
Return
  • retval: Evision.Mat.t()

Python prototype (for reference only):

getMean() -> retval

getNumComponents(named_args)

@spec getNumComponents(Keyword.t()) :: any() | {:error, String.t()}
@spec getNumComponents(t()) :: integer() | {:error, String.t()}

getNumComponents

Positional Arguments
  • self: Evision.Face.FisherFaceRecognizer.t()
Return
  • retval: integer()

@see setNumComponents/2

Python prototype (for reference only):

getNumComponents() -> retval

getProjections(named_args)

@spec getProjections(Keyword.t()) :: any() | {:error, String.t()}
@spec getProjections(t()) :: [Evision.Mat.t()] | {:error, String.t()}

getProjections

Positional Arguments
  • self: Evision.Face.FisherFaceRecognizer.t()
Return
  • retval: [Evision.Mat]

Python prototype (for reference only):

getProjections() -> retval

getThreshold(named_args)

@spec getThreshold(Keyword.t()) :: any() | {:error, String.t()}
@spec getThreshold(t()) :: number() | {:error, String.t()}

getThreshold

Positional Arguments
  • self: Evision.Face.FisherFaceRecognizer.t()
Return
  • retval: double

@see setThreshold/2

Python prototype (for reference only):

getThreshold() -> retval
@spec predict(Keyword.t()) :: any() | {:error, String.t()}
@spec predict(t(), Evision.Mat.maybe_mat_in()) ::
  {integer(), number()} | {:error, String.t()}

Predicts a label and associated confidence (e.g. distance) for a given input image.

Positional Arguments
  • self: Evision.Face.FisherFaceRecognizer.t()

  • src: Evision.Mat.

    Sample image to get a prediction from.

Return
  • label: integer().

    The predicted label for the given image.

  • confidence: double.

    Associated confidence (e.g. distance) for the predicted label.

The suffix const means that prediction does not affect the internal model state, so the method can be safely called from within different threads. The following example shows how to get a prediction from a trained model:

using namespace cv;
// Do your initialization here (create the cv::FaceRecognizer model) ...
// ...
// Read in a sample image:
Mat img = imread("person1/3.jpg", IMREAD_GRAYSCALE);
// And get a prediction from the cv::FaceRecognizer:
int predicted = model->predict(img);

Or to get a prediction and the associated confidence (e.g. distance):

using namespace cv;
// Do your initialization here (create the cv::FaceRecognizer model) ...
// ...
Mat img = imread("person1/3.jpg", IMREAD_GRAYSCALE);
// Some variables for the predicted label and associated confidence (e.g. distance):
int predicted_label = -1;
double predicted_confidence = 0.0;
// Get the prediction and associated confidence from the model
model->predict(img, predicted_label, predicted_confidence);

Python prototype (for reference only):

predict(src) -> label, confidence
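The same flow sketched in Elixir, under the assumption that `recognizer` has already been trained; the image path, the flags option, and Evision.Constant.cv_IMREAD_GRAYSCALE/0 are illustrative and may differ in your Evision version:

# Read the probe image as grayscale (Fisherfaces works on grayscale images):
img = Evision.imread("person1/3.jpg", flags: Evision.Constant.cv_IMREAD_GRAYSCALE())

# predict/2 returns {label, confidence}, where confidence is the distance
# to the nearest neighbour in the projected space:
{label, confidence} = Evision.Face.FisherFaceRecognizer.predict(recognizer, img)
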

predict_collect(named_args)

@spec predict_collect(Keyword.t()) :: any() | {:error, String.t()}

predict_collect(self, src, collector)

@spec predict_collect(
  t(),
  Evision.Mat.maybe_mat_in(),
  Evision.Face.PredictCollector.t()
) ::
  t() | {:error, String.t()}
If implemented, sends all prediction results to the collector, which can be used for custom result handling.

Positional Arguments
  • self: Evision.Face.FisherFaceRecognizer.t()

  • src: Evision.Mat.

    Sample image to get a prediction from.

  • collector: PredictCollector.

    User-defined collector object that accepts all results

Implementations run the same internal loop as in predict(InputArray src, CV_OUT int &label, CV_OUT double &confidence), but instead of selecting only the "best" result, every result is forwarded to the caller through the given collector.

Python prototype (for reference only):

predict_collect(src, collector) -> None

predict_label(named_args)

@spec predict_label(Keyword.t()) :: any() | {:error, String.t()}

predict_label(self, src)

@spec predict_label(t(), Evision.Mat.maybe_mat_in()) ::
  integer() | {:error, String.t()}

predict_label

Positional Arguments
  • self: Evision.Face.FisherFaceRecognizer.t()
  • src: Evision.Mat
Return
  • retval: integer()

Has overloading in C++

Python prototype (for reference only):

predict_label(src) -> retval
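A hedged one-liner, assuming a trained `recognizer` and a grayscale probe image bound to `img`:

# Same as predict/2, but only the predicted label is returned:
label = Evision.Face.FisherFaceRecognizer.predict_label(recognizer, img)
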
@spec read(Keyword.t()) :: any() | {:error, String.t()}
@spec read(t(), binary()) :: t() | {:error, String.t()}

Loads a FaceRecognizer and its model state.

Positional Arguments
  • self: Evision.Face.FisherFaceRecognizer.t()
  • filename: String

Loads a persisted model and state from a given XML or YAML file. Every FaceRecognizer has to overwrite FaceRecognizer::load(FileStorage& fs) to enable loading the model state. FaceRecognizer::load(FileStorage& fs) in turn gets called by FaceRecognizer::load(const String& filename), to ease loading a model.

Python prototype (for reference only):

read(filename) -> None
@spec save(Keyword.t()) :: any() | {:error, String.t()}
@spec save(t(), binary()) :: t() | {:error, String.t()}

save

Positional Arguments
  • self: Evision.Face.FisherFaceRecognizer.t()
  • filename: String

Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs).

Python prototype (for reference only):

save(filename) -> None

setLabelInfo(named_args)

@spec setLabelInfo(Keyword.t()) :: any() | {:error, String.t()}

setLabelInfo(self, label, strInfo)

@spec setLabelInfo(t(), integer(), binary()) :: t() | {:error, String.t()}

Sets string info for the specified model's label.

Positional Arguments
  • self: Evision.Face.FisherFaceRecognizer.t()
  • label: integer()
  • strInfo: String

The string info is replaced by the provided value if it was set before for the specified label.

Python prototype (for reference only):

setLabelInfo(label, strInfo) -> None
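A hedged sketch combining setLabelInfo/3, getLabelInfo/2 and getLabelsByString/2; the subject names are made up for illustration:

recognizer =
  recognizer
  |> Evision.Face.FisherFaceRecognizer.setLabelInfo(0, "alice")
  |> Evision.Face.FisherFaceRecognizer.setLabelInfo(1, "bob")

# Look the info back up by label id, or search label ids by sub-string:
Evision.Face.FisherFaceRecognizer.getLabelInfo(recognizer, 0)          # expected "alice"
Evision.Face.FisherFaceRecognizer.getLabelsByString(recognizer, "bob") # expected [1]
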

setNumComponents(named_args)

@spec setNumComponents(Keyword.t()) :: any() | {:error, String.t()}

setNumComponents(self, val)

@spec setNumComponents(t(), integer()) :: t() | {:error, String.t()}

setNumComponents

Positional Arguments
  • self: Evision.Face.FisherFaceRecognizer.t()
  • val: integer()

@see getNumComponents/1

Python prototype (for reference only):

setNumComponents(val) -> None

setThreshold(named_args)

@spec setThreshold(Keyword.t()) :: any() | {:error, String.t()}
@spec setThreshold(t(), number()) :: t() | {:error, String.t()}

setThreshold

Positional Arguments
  • self: Evision.Face.FisherFaceRecognizer.t()
  • val: double

@see getThreshold/1

Python prototype (for reference only):

setThreshold(val) -> None
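A hedged sketch of using the threshold to reject unknown faces; the cut-off value is made up and has to be tuned on your own data, and `probe_img` is assumed to be a grayscale Evision.Mat:

# Reject probes whose nearest-neighbour distance exceeds 1500.0:
recognizer = Evision.Face.FisherFaceRecognizer.setThreshold(recognizer, 1500.0)

case Evision.Face.FisherFaceRecognizer.predict(recognizer, probe_img) do
  {-1, _distance} -> :unknown_face
  {label, distance} -> {:known, label, distance}
end
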
@spec train(Keyword.t()) :: any() | {:error, String.t()}

train(self, src, labels)

@spec train(t(), [Evision.Mat.maybe_mat_in()], Evision.Mat.maybe_mat_in()) ::
  t() | {:error, String.t()}

Trains a FaceRecognizer with given data and associated labels.

Positional Arguments
  • self: Evision.Face.FisherFaceRecognizer.t()

  • src: [Evision.Mat].

    The training images, that means the faces you want to learn. The data has to be given as a vector\<Mat>.

  • labels: Evision.Mat.

    The labels corresponding to the images have to be given either as a vector\<int> or a Mat of type CV_32SC1.

The following source code snippet shows you how to learn a Fisherfaces model on a given set of images. The images are read with imread and pushed into a std::vector\<Mat>. The labels of each image are stored within a std::vector\<int> (you could also use a Mat of type CV_32SC1). Think of the label as the subject (the person) this image belongs to, so the same subjects (persons) should have the same label. For the available FaceRecognizer implementations you don't have to pay attention to the order of the labels; just make sure the same persons have the same label:

// holds images and labels
vector<Mat> images;
vector<int> labels;
// using Mat of type CV_32SC1
// Mat labels(number_of_samples, 1, CV_32SC1);
// images for first person
images.push_back(imread("person0/0.jpg", IMREAD_GRAYSCALE)); labels.push_back(0);
images.push_back(imread("person0/1.jpg", IMREAD_GRAYSCALE)); labels.push_back(0);
images.push_back(imread("person0/2.jpg", IMREAD_GRAYSCALE)); labels.push_back(0);
// images for second person
images.push_back(imread("person1/0.jpg", IMREAD_GRAYSCALE)); labels.push_back(1);
images.push_back(imread("person1/1.jpg", IMREAD_GRAYSCALE)); labels.push_back(1);
images.push_back(imread("person1/2.jpg", IMREAD_GRAYSCALE)); labels.push_back(1);

Now that you have read some images, you can create a new FaceRecognizer. In this example a Fisherfaces model is created that keeps all of the possible Fisherfaces:

// Create a new Fisherfaces model and retain all available Fisherfaces,
// this is the most common usage of this specific FaceRecognizer:
//
Ptr<FaceRecognizer> model =  FisherFaceRecognizer::create();

And finally train it on the given dataset (the face images and labels):

// This is the common interface to train all of the available cv::FaceRecognizer
// implementations:
//
model->train(images, labels);

Python prototype (for reference only):

train(src, labels) -> None
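The same training flow sketched in Elixir. The image paths are placeholders; Evision.Mat.from_nx/1, Nx, and the grayscale imread flag are assumptions about your setup:

paths_and_labels = [
  {"person0/0.jpg", 0}, {"person0/1.jpg", 0}, {"person0/2.jpg", 0},
  {"person1/0.jpg", 1}, {"person1/1.jpg", 1}, {"person1/2.jpg", 1}
]

# Load every image as grayscale; all images must have the same size.
images =
  for {path, _label} <- paths_and_labels do
    Evision.imread(path, flags: Evision.Constant.cv_IMREAD_GRAYSCALE())
  end

# Labels as a Mat of type CV_32SC1 (signed 32-bit integers), built via Nx.
labels =
  paths_and_labels
  |> Enum.map(fn {_path, label} -> label end)
  |> Nx.tensor(type: :s32)
  |> Evision.Mat.from_nx()

recognizer = Evision.Face.FisherFaceRecognizer.create()
recognizer = Evision.Face.FisherFaceRecognizer.train(recognizer, images, labels)
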
@spec update(Keyword.t()) :: any() | {:error, String.t()}

update(self, src, labels)

@spec update(t(), [Evision.Mat.maybe_mat_in()], Evision.Mat.maybe_mat_in()) ::
  t() | {:error, String.t()}

Updates a FaceRecognizer with given data and associated labels.

Positional Arguments
  • self: Evision.Face.FisherFaceRecognizer.t()

  • src: [Evision.Mat].

    The training images, that means the faces you want to learn. The data has to be given as a vector\<Mat>.

  • labels: Evision.Mat.

    The labels corresponding to the images have to be given either as a vector\<int> or a Mat of type CV_32SC1.

This method updates a (probably trained) FaceRecognizer, but only if the algorithm supports it. The Local Binary Patterns Histograms (LBPH) recognizer (see createLBPHFaceRecognizer) can be updated. For the Eigenfaces and Fisherfaces method, this is algorithmically not possible and you have to re-estimate the model with FaceRecognizer::train. In any case, a call to train empties the existing model and learns a new model, while update does not delete any model data.

// Create a new LBPH model (it can be updated) and use the default parameters,
// this is the most common usage of this specific FaceRecognizer:
//
Ptr<FaceRecognizer> model =  LBPHFaceRecognizer::create();
// This is the common interface to train all of the available cv::FaceRecognizer
// implementations:
//
model->train(images, labels);
// Some containers to hold new image:
vector<Mat> newImages;
vector<int> newLabels;
// You should add some images to the containers:
//
// ...
//
// Now updating the model is as easy as calling:
model->update(newImages,newLabels);
// This will preserve the old model data and extend the existing model
// with the new features extracted from newImages!

Calling update on an Eigenfaces model (see EigenFaceRecognizer::create), which doesn't support updating, will throw an error similar to:

OpenCV Error: The function/feature is not implemented (This FaceRecognizer (FaceRecognizer.Eigenfaces) does not support updating, you have to use FaceRecognizer::train to update it.) in update, file /home/philipp/git/opencv/modules/contrib/src/facerec.cpp, line 305
terminate called after throwing an instance of 'cv::Exception'

Note: The FaceRecognizer does not store your training images, because this would be very memory intensive and it is not the responsibility of the FaceRecognizer to do so. The caller is responsible for maintaining the dataset they want to work with.

Python prototype (for reference only):

update(src, labels) -> None
@spec write(Keyword.t()) :: any() | {:error, String.t()}
@spec write(t(), binary()) :: t() | {:error, String.t()}

Saves a FaceRecognizer and its model state.

Positional Arguments
  • self: Evision.Face.FisherFaceRecognizer.t()

  • filename: String.

    The filename to store this FaceRecognizer to (either XML/YAML).

Saves this model to a given filename, either as XML or YAML.

Every FaceRecognizer overwrites FaceRecognizer::save(FileStorage& fs) to save the internal model state. FaceRecognizer::save(const String& filename) saves the state of a model to the given filename. The suffix const means that prediction does not affect the internal model state, so the method can be safely called from within different threads.

Python prototype (for reference only):

write(filename) -> None
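A hedged round-trip sketch with write/2 and read/2; the file name is arbitrary:

# Persist the trained model (XML or YAML is chosen by the file extension):
Evision.Face.FisherFaceRecognizer.write(recognizer, "fisherfaces.yml")

# Later, restore the state into a fresh recognizer:
restored =
  Evision.Face.FisherFaceRecognizer.create()
  |> Evision.Face.FisherFaceRecognizer.read("fisherfaces.yml")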