Evision.HOGDescriptor (Evision v0.2.9)
Summary
Functions
Checks if the detector size equals the descriptor size.
Computes HOG descriptors of given image.
Computes HOG descriptors of given image.
Computes gradients and quantized gradient orientations.
Computes gradients and quantized gradient orientations.
Performs object detection without a multi-scale window.
Performs object detection without a multi-scale window.
Detects objects of different sizes in the input image. The detected objects are returned as a list of rectangles.
Detects objects of different sizes in the input image. The detected objects are returned as a list of rectangles.
Returns coefficients of the classifier trained for people detection (for 48x96 windows).
Returns coefficients of the classifier trained for people detection (for 64x128 windows).
Returns the number of coefficients required for the classification.
Returns the winSigma value.
Creates the HOG descriptor and detector with default parameters.
HOGDescriptor
Loads HOGDescriptor parameters and coefficients for the linear SVM classifier from a file.
Loads HOGDescriptor parameters and coefficients for the linear SVM classifier from a file.
Saves HOGDescriptor parameters and coefficients for the linear SVM classifier to a file.
Saves HOGDescriptor parameters and coefficients for the linear SVM classifier to a file.
Sets coefficients for the linear SVM classifier.
Enumerator
Types
Functions
@spec checkDetectorSize(Keyword.t()) :: any() | {:error, String.t()}
@spec checkDetectorSize(t()) :: boolean() | {:error, String.t()}
Checks if the detector size equals the descriptor size.
Positional Arguments
- self:
Evision.HOGDescriptor.t()
Return
- retval:
bool
Python prototype (for reference only):
checkDetectorSize() -> retval
@spec compute(t(), Evision.Mat.maybe_mat_in()) :: [number()] | {:error, String.t()}
Computes HOG descriptors of given image.
Positional Arguments
- self: Evision.HOGDescriptor.t()
- img: Evision.Mat. Matrix of the type CV_8U containing an image where HOG features will be calculated.

Keyword Arguments
- winStride: Size. Window stride. It must be a multiple of the block stride.
- padding: Size. Padding.
- locations: [Point]. Vector of Point.

Return
- descriptors: [float]. Matrix of the type CV_32F.
Python prototype (for reference only):
compute(img[, winStride[, padding[, locations]]]) -> descriptors
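A minimal usage sketch; `img` is an assumption here, standing in for a CV_8U `Evision.Mat` loaded elsewhere (e.g. via `Evision.imread/1`):

```elixir
# Compute HOG features over the image with an 8x8 window stride.
# `img` is assumed to be a CV_8U Evision.Mat loaded elsewhere.
hog = Evision.HOGDescriptor.hogDescriptor()

descriptors =
  Evision.HOGDescriptor.compute(hog, img, winStride: {8, 8}, padding: {0, 0})

# With the default 64x128 window, each window contributes
# 7 x 15 blocks x 4 cells x 9 bins = 3780 floats.
```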
@spec compute( t(), Evision.Mat.maybe_mat_in(), [locations: term(), padding: term(), winStride: term()] | nil ) :: [number()] | {:error, String.t()}
Computes HOG descriptors of given image.
Positional Arguments
- self: Evision.HOGDescriptor.t()
- img: Evision.Mat. Matrix of the type CV_8U containing an image where HOG features will be calculated.

Keyword Arguments
- winStride: Size. Window stride. It must be a multiple of the block stride.
- padding: Size. Padding.
- locations: [Point]. Vector of Point.

Return
- descriptors: [float]. Matrix of the type CV_32F.
Python prototype (for reference only):
compute(img[, winStride[, padding[, locations]]]) -> descriptors
@spec computeGradient( t(), Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in() ) :: {Evision.Mat.t(), Evision.Mat.t()} | {:error, String.t()}
Computes gradients and quantized gradient orientations.
Positional Arguments
- self: Evision.HOGDescriptor.t()
- img: Evision.Mat. Matrix containing the image to be computed.

Keyword Arguments
- paddingTL: Size. Padding from the top-left.
- paddingBR: Size. Padding from the bottom-right.

Return
- grad: Evision.Mat.t(). Matrix of type CV_32FC2 containing the computed gradients.
- angleOfs: Evision.Mat.t(). Matrix of type CV_8UC2 containing the quantized gradient orientations.
Python prototype (for reference only):
computeGradient(img, grad, angleOfs[, paddingTL[, paddingBR]]) -> grad, angleOfs
@spec computeGradient( t(), Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in(), Evision.Mat.maybe_mat_in(), [paddingBR: term(), paddingTL: term()] | nil ) :: {Evision.Mat.t(), Evision.Mat.t()} | {:error, String.t()}
Computes gradients and quantized gradient orientations.
Positional Arguments
- self: Evision.HOGDescriptor.t()
- img: Evision.Mat. Matrix containing the image to be computed.

Keyword Arguments
- paddingTL: Size. Padding from the top-left.
- paddingBR: Size. Padding from the bottom-right.

Return
- grad: Evision.Mat.t(). Matrix of type CV_32FC2 containing the computed gradients.
- angleOfs: Evision.Mat.t(). Matrix of type CV_8UC2 containing the quantized gradient orientations.
Python prototype (for reference only):
computeGradient(img, grad, angleOfs[, paddingTL[, paddingBR]]) -> grad, angleOfs
@spec detect(t(), Evision.Mat.maybe_mat_in()) :: {[{number(), number()}], [number()]} | {:error, String.t()}
Performs object detection without a multi-scale window.
Positional Arguments
- self: Evision.HOGDescriptor.t()
- img: Evision.Mat. Matrix of the type CV_8U or CV_8UC3 containing an image where objects are detected.

Keyword Arguments
- hitThreshold: double. Threshold for the distance between features and the SVM classifying plane. Usually it is 0 and should be specified in the detector coefficients (as the last free coefficient). But if the free coefficient is omitted (which is allowed), you can specify it manually here.
- winStride: Size. Window stride. It must be a multiple of the block stride.
- padding: Size. Padding.
- searchLocations: [Point]. Vector of Point containing the set of requested locations to be evaluated.

Return
- foundLocations: [Point]. Vector of points, where each point is the top-left corner of a detected object's boundary.
- weights: [double]. Vector that will contain confidence values for each detected object.
Python prototype (for reference only):
detect(img[, hitThreshold[, winStride[, padding[, searchLocations]]]]) -> foundLocations, weights
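A hedged usage sketch: the SVM detector must be set before detection, and `img` is assumed to be a CV_8U `Evision.Mat` loaded elsewhere.

```elixir
# Single-scale detection at the descriptor's fixed window size.
# `img` is assumed to be a CV_8U Evision.Mat loaded elsewhere.
hog =
  Evision.HOGDescriptor.hogDescriptor()
  |> Evision.HOGDescriptor.setSVMDetector(Evision.HOGDescriptor.getDefaultPeopleDetector())

{found_locations, weights} =
  Evision.HOGDescriptor.detect(hog, img, winStride: {8, 8})

# found_locations: top-left {x, y} corners of hits;
# weights: the SVM confidence for each hit.
```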
@spec detect( t(), Evision.Mat.maybe_mat_in(), [ hitThreshold: term(), padding: term(), searchLocations: term(), winStride: term() ] | nil ) :: {[{number(), number()}], [number()]} | {:error, String.t()}
Performs object detection without a multi-scale window.
Positional Arguments
- self: Evision.HOGDescriptor.t()
- img: Evision.Mat. Matrix of the type CV_8U or CV_8UC3 containing an image where objects are detected.

Keyword Arguments
- hitThreshold: double. Threshold for the distance between features and the SVM classifying plane. Usually it is 0 and should be specified in the detector coefficients (as the last free coefficient). But if the free coefficient is omitted (which is allowed), you can specify it manually here.
- winStride: Size. Window stride. It must be a multiple of the block stride.
- padding: Size. Padding.
- searchLocations: [Point]. Vector of Point containing the set of requested locations to be evaluated.

Return
- foundLocations: [Point]. Vector of points, where each point is the top-left corner of a detected object's boundary.
- weights: [double]. Vector that will contain confidence values for each detected object.
Python prototype (for reference only):
detect(img[, hitThreshold[, winStride[, padding[, searchLocations]]]]) -> foundLocations, weights
@spec detectMultiScale(t(), Evision.Mat.maybe_mat_in()) :: {[{number(), number(), number(), number()}], [number()]} | {:error, String.t()}
Detects objects of different sizes in the input image. The detected objects are returned as a list of rectangles.
Positional Arguments
- self: Evision.HOGDescriptor.t()
- img: Evision.Mat. Matrix of the type CV_8U or CV_8UC3 containing an image where objects are detected.

Keyword Arguments
- hitThreshold: double. Threshold for the distance between features and the SVM classifying plane. Usually it is 0 and should be specified in the detector coefficients (as the last free coefficient). But if the free coefficient is omitted (which is allowed), you can specify it manually here.
- winStride: Size. Window stride. It must be a multiple of the block stride.
- padding: Size. Padding.
- scale: double. Coefficient of the detection window increase.
- groupThreshold: double. Coefficient to regulate the similarity threshold. When detected, some objects can be covered by many rectangles. 0 means not to perform grouping.
- useMeanshiftGrouping: bool. Indicates the grouping algorithm to use.

Return
- foundLocations: [Rect]. Vector of rectangles where each rectangle contains the detected object.
- foundWeights: [double]. Vector that will contain confidence values for each detected object.
Python prototype (for reference only):
detectMultiScale(img[, hitThreshold[, winStride[, padding[, scale[, groupThreshold[, useMeanshiftGrouping]]]]]]) -> foundLocations, foundWeights
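A hedged sketch of the typical pedestrian-detection call, using the built-in 64x128 people detector; `img` stands in for a CV_8U or CV_8UC3 `Evision.Mat` loaded elsewhere, and the `scale`/`groupThreshold` values are illustrative.

```elixir
# Multi-scale people detection with the default trained detector.
# `img` is assumed to be a CV_8U or CV_8UC3 Evision.Mat.
hog =
  Evision.HOGDescriptor.hogDescriptor()
  |> Evision.HOGDescriptor.setSVMDetector(Evision.HOGDescriptor.getDefaultPeopleDetector())

{rects, weights} =
  Evision.HOGDescriptor.detectMultiScale(hog, img, scale: 1.05, groupThreshold: 2.0)

# Each rect is an {x, y, w, h} tuple; filter on the matching weight
# to discard low-confidence detections.
```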
@spec detectMultiScale( t(), Evision.Mat.maybe_mat_in(), [ groupThreshold: term(), hitThreshold: term(), padding: term(), scale: term(), useMeanshiftGrouping: term(), winStride: term() ] | nil ) :: {[{number(), number(), number(), number()}], [number()]} | {:error, String.t()}
Detects objects of different sizes in the input image. The detected objects are returned as a list of rectangles.
Positional Arguments
- self: Evision.HOGDescriptor.t()
- img: Evision.Mat. Matrix of the type CV_8U or CV_8UC3 containing an image where objects are detected.

Keyword Arguments
- hitThreshold: double. Threshold for the distance between features and the SVM classifying plane. Usually it is 0 and should be specified in the detector coefficients (as the last free coefficient). But if the free coefficient is omitted (which is allowed), you can specify it manually here.
- winStride: Size. Window stride. It must be a multiple of the block stride.
- padding: Size. Padding.
- scale: double. Coefficient of the detection window increase.
- groupThreshold: double. Coefficient to regulate the similarity threshold. When detected, some objects can be covered by many rectangles. 0 means not to perform grouping.
- useMeanshiftGrouping: bool. Indicates the grouping algorithm to use.

Return
- foundLocations: [Rect]. Vector of rectangles where each rectangle contains the detected object.
- foundWeights: [double]. Vector that will contain confidence values for each detected object.
Python prototype (for reference only):
detectMultiScale(img[, hitThreshold[, winStride[, padding[, scale[, groupThreshold[, useMeanshiftGrouping]]]]]]) -> foundLocations, foundWeights
@spec get_histogramNormType(t()) :: Evision.HOGDescriptor.HistogramNormType.enum()
Returns coefficients of the classifier trained for people detection (for 48x96 windows).
Return
- retval:
[float]
Python prototype (for reference only):
getDaimlerPeopleDetector() -> retval
Returns coefficients of the classifier trained for people detection (for 64x128 windows).
Return
- retval:
[float]
Python prototype (for reference only):
getDefaultPeopleDetector() -> retval
@spec getDescriptorSize(Keyword.t()) :: any() | {:error, String.t()}
@spec getDescriptorSize(t()) :: integer() | {:error, String.t()}
Returns the number of coefficients required for the classification.
Positional Arguments
- self:
Evision.HOGDescriptor.t()
Return
- retval:
size_t
Python prototype (for reference only):
getDescriptorSize() -> retval
@spec getWinSigma(Keyword.t()) :: any() | {:error, String.t()}
@spec getWinSigma(t()) :: number() | {:error, String.t()}
Returns the winSigma value.
Positional Arguments
- self:
Evision.HOGDescriptor.t()
Return
- retval:
double
Python prototype (for reference only):
getWinSigma() -> retval
Creates the HOG descriptor and detector with default parameters.
Return
- self:
Evision.HOGDescriptor.t()
Equivalent to HOGDescriptor(Size(64,128), Size(16,16), Size(8,8), Size(8,8), 9).
Python prototype (for reference only):
HOGDescriptor() -> <HOGDescriptor object>
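A brief sketch of the default construction; the descriptor-size arithmetic follows from the default parameters listed above.

```elixir
hog = Evision.HOGDescriptor.hogDescriptor()
Evision.HOGDescriptor.getDescriptorSize(hog)
# 3780 for the defaults: the 64x128 window holds 7 x 15 overlapping
# 16x16 blocks (8x8 stride), each with 4 cells of 9 bins:
# 105 * 4 * 9 = 3780.
```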
@spec hogDescriptor(Keyword.t()) :: any() | {:error, String.t()}
@spec hogDescriptor(binary()) :: t() | {:error, String.t()}
HOGDescriptor
Positional Arguments
- filename: String. The file name containing HOGDescriptor properties and coefficients for the linear SVM classifier.
Return
- self:
Evision.HOGDescriptor.t()
Has overloading in C++
Creates the HOG descriptor and detector and loads HOGDescriptor parameters and coefficients for the linear SVM classifier from a file.
Python prototype (for reference only):
HOGDescriptor(filename) -> <HOGDescriptor object>
@spec hogDescriptor( {number(), number()}, {number(), number()}, {number(), number()}, {number(), number()}, integer() ) :: t() | {:error, String.t()}
HOGDescriptor
Positional Arguments
- winSize: Size. Sets winSize with the given value.
- blockSize: Size. Sets blockSize with the given value.
- blockStride: Size. Sets blockStride with the given value.
- cellSize: Size. Sets cellSize with the given value.
- nbins: integer(). Sets nbins with the given value.

Keyword Arguments
- derivAperture: integer(). Sets derivAperture with the given value.
- winSigma: double. Sets winSigma with the given value.
- histogramNormType: HOGDescriptor_HistogramNormType. Sets histogramNormType with the given value.
- l2HysThreshold: double. Sets L2HysThreshold with the given value.
- gammaCorrection: bool. Sets gammaCorrection with the given value.
- nlevels: integer(). Sets nlevels with the given value.
- signedGradient: bool. Sets signedGradient with the given value.
Return
- self:
Evision.HOGDescriptor.t()
Has overloading in C++
Python prototype (for reference only):
HOGDescriptor(_winSize, _blockSize, _blockStride, _cellSize, _nbins[, _derivAperture[, _winSigma[, _histogramNormType[, _L2HysThreshold[, _gammaCorrection[, _nlevels[, _signedGradient]]]]]]]) -> <HOGDescriptor object>
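A hedged construction sketch: the 64x64 window and `signedGradient` choice are illustrative, not recommended settings. Per the spec above, Size arguments are `{number(), number()}` tuples.

```elixir
# A descriptor tuned for a square 64x64 window; all unlisted
# parameters keep their defaults.
hog =
  Evision.HOGDescriptor.hogDescriptor(
    {64, 64},  # winSize
    {16, 16},  # blockSize
    {8, 8},    # blockStride
    {8, 8},    # cellSize
    9,         # nbins
    signedGradient: true
  )
```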
hogDescriptor(winSize, blockSize, blockStride, cellSize, nbins, opts)
@spec hogDescriptor( {number(), number()}, {number(), number()}, {number(), number()}, {number(), number()}, integer(), [ derivAperture: term(), gammaCorrection: term(), histogramNormType: term(), l2HysThreshold: term(), nlevels: term(), signedGradient: term(), winSigma: term() ] | nil ) :: t() | {:error, String.t()}
HOGDescriptor
Positional Arguments
- winSize: Size. Sets winSize with the given value.
- blockSize: Size. Sets blockSize with the given value.
- blockStride: Size. Sets blockStride with the given value.
- cellSize: Size. Sets cellSize with the given value.
- nbins: integer(). Sets nbins with the given value.

Keyword Arguments
- derivAperture: integer(). Sets derivAperture with the given value.
- winSigma: double. Sets winSigma with the given value.
- histogramNormType: HOGDescriptor_HistogramNormType. Sets histogramNormType with the given value.
- l2HysThreshold: double. Sets L2HysThreshold with the given value.
- gammaCorrection: bool. Sets gammaCorrection with the given value.
- nlevels: integer(). Sets nlevels with the given value.
- signedGradient: bool. Sets signedGradient with the given value.
Return
- self:
Evision.HOGDescriptor.t()
Has overloading in C++
Python prototype (for reference only):
HOGDescriptor(_winSize, _blockSize, _blockStride, _cellSize, _nbins[, _derivAperture[, _winSigma[, _histogramNormType[, _L2HysThreshold[, _gammaCorrection[, _nlevels[, _signedGradient]]]]]]]) -> <HOGDescriptor object>
Loads HOGDescriptor parameters and coefficients for the linear SVM classifier from a file.
Positional Arguments
- self: Evision.HOGDescriptor.t()
- filename: String. Name of the file to read.

Keyword Arguments
- objname: String. The optional name of the node to read (if empty, the first top-level node will be used).
Return
- retval:
bool
Python prototype (for reference only):
load(filename[, objname]) -> retval
Loads HOGDescriptor parameters and coefficients for the linear SVM classifier from a file.
Positional Arguments
- self: Evision.HOGDescriptor.t()
- filename: String. Name of the file to read.

Keyword Arguments
- objname: String. The optional name of the node to read (if empty, the first top-level node will be used).
Return
- retval:
bool
Python prototype (for reference only):
load(filename[, objname]) -> retval
Saves HOGDescriptor parameters and coefficients for the linear SVM classifier to a file.
Positional Arguments
- self: Evision.HOGDescriptor.t()
- filename: String. File name.

Keyword Arguments
- objname: String. Object name.
Python prototype (for reference only):
save(filename[, objname]) -> None
Saves HOGDescriptor parameters and coefficients for the linear SVM classifier to a file.
Positional Arguments
- self: Evision.HOGDescriptor.t()
- filename: String. File name.

Keyword Arguments
- objname: String. Object name.
Python prototype (for reference only):
save(filename[, objname]) -> None
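A hedged round-trip sketch combining save/2 and load/2; the file name is illustrative.

```elixir
# Persist the descriptor configuration, then read it back into a
# freshly constructed descriptor.
hog = Evision.HOGDescriptor.hogDescriptor()
Evision.HOGDescriptor.save(hog, "hog_params.yml")

fresh = Evision.HOGDescriptor.hogDescriptor()
loaded? = Evision.HOGDescriptor.load(fresh, "hog_params.yml")
# loaded? is a boolean indicating whether the parameters were read.
```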
@spec setSVMDetector(t(), Evision.Mat.maybe_mat_in()) :: t() | {:error, String.t()}
Sets coefficients for the linear SVM classifier.
Positional Arguments
- self: Evision.HOGDescriptor.t()
- svmdetector: Evision.Mat. Coefficients for the linear SVM classifier.
Python prototype (for reference only):
setSVMDetector(svmdetector) -> None
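An end-to-end sketch tying the pieces together: construct, attach the built-in people detector, and detect. `Evision.imread/1` and the image path are assumptions for illustration.

```elixir
# Build a people detector and run it on an image (path is illustrative).
img = Evision.imread("people.jpg")

{rects, _weights} =
  Evision.HOGDescriptor.hogDescriptor()
  |> Evision.HOGDescriptor.setSVMDetector(Evision.HOGDescriptor.getDefaultPeopleDetector())
  |> Evision.HOGDescriptor.detectMultiScale(img)

# rects: {x, y, w, h} bounding boxes for detected people.
```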