Emgu.CV.World
An abstract class that wraps around a disposable object. Tracks whether Dispose has been called. The dispose function that implements the IDisposable interface. Dispose(bool disposing) executes in two distinct scenarios. If disposing equals true, the method has been called directly or indirectly by a user's code; managed and unmanaged resources can be disposed. If disposing equals false, the method has been called by the runtime from inside the finalizer and you should not reference other objects; only unmanaged resources can be disposed.
Release the managed resources. This function will be called during the disposal of the current object. Override this function if you need to call the Dispose() function on any managed IDisposable object created by the current object. Release the unmanaged resources. Destructor.
A generic EventArgs. The type of the arguments. Create a generic EventArgs with the specific value. The value. The value of the EventArgs.
A generic EventArgs. The type of the first value. The type of the second value. Create a generic EventArgs with two values. The first value. The second value. The first value. The second value.
Implement this interface if the object can output code to generate itself. Return the code to generate the object itself in the specific language. The programming language to output code in. The code to generate the object in the specific language.
An object that can be interpolated. Interpolate based on this point and the other point with the given index. The other point. The interpolation index. The interpolated point. The index that will be used for interpolation.
A pinned array of the specific type. The type of the array. Create a pinned array of the specific type. The size of the array. Get the address of the pinned array. A pointer to the address of the pinned array. Release the GCHandle. Dispose the unmanaged data. Get the array.
Provides information about the platform in use. Get the type of the current operating system. Get the type of the current runtime environment.
Utility functions for Emgu. Convert an object to an xml document. The type of the object to be converted. The object to be serialized. An xml document that represents the object. Convert an object to an xml document. The type of the object to be converted. The object to be serialized. Other types that must be known ahead of time to serialize the object. An xml document that represents the object. Convert an xml document to an object. The type of the object to be converted to. The xml document. The object representation as a result of the deserialization of the xml document. Convert an xml document to an object. The type of the object to be converted to. The xml document. Other types that must be known ahead of time to deserialize the object. The object representation as a result of the deserialization of the xml document. Convert an xml string to an object. The type of the object to be converted to. The xml document as a string. The object representation as a result of the deserialization of the xml string.
Merges two byte vectors into one. The first byte vector to be merged. The second byte vector to be merged. The bytes that are a concatenation of a and b.
Call a command from the command line. The name of the executable. The arguments to the executable. The standard output.
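The disposal contract described above is the standard .NET dispose pattern. A minimal sketch of how such a DisposableObject-style base class is typically structured (member names here are illustrative, not copied from the Emgu source):

using System;

public abstract class DisposableObjectSketch : IDisposable
{
    // Track whether Dispose has been called.
    private bool _disposed;

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this); // finalizer no longer needed
    }

    protected virtual void Dispose(bool disposing)
    {
        if (_disposed) return;
        if (disposing)
            ReleaseManagedResources();   // safe: called from user code
        ReleaseUnmanagedResources();     // always safe to release natives
        _disposed = true;
    }

    // Override to call Dispose() on any managed IDisposable members.
    protected virtual void ReleaseManagedResources() { }

    // Release native handles, unmanaged memory, etc.
    protected abstract void ReleaseUnmanagedResources();

    ~DisposableObjectSketch() { Dispose(false); } // finalizer path: unmanaged only
}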
Use reflection to find the base type. If no such type exists, null is returned. The type to search from. The name of the base class to search for. The base type.
Convert a generic vector to a vector of bytes. The type of the input vector. The array of data. The byte vector.
Perform first-degree interpolation given the sorted data and the interpolation indexes. The sorted data that will be interpolated from. The indexes of the interpolation result.
Get subsamples with the specific rate. The source from which the subsamples will be derived. The subsample rate. Subsampled with the specific rate.
Join multiple index-ascending IInterpolatables together as a single index-ascending IInterpolatable. The type of objects that will be joined. The enumerables, each of which should be stored in index-ascending order. A single enumerable sorted in index-ascending order.
Maps the specified executable module into the address space of the calling process. The name of the dll. The handle to the library.
Decrements the reference count of the loaded dynamic-link library (DLL). When the reference count reaches zero, the module is unmapped from the address space of the calling process and the handle is no longer valid. The handle to the library. If the function succeeds, the return value is true. If the function fails, the return value is false.
Adds a directory to the search path used to locate DLLs for the application. The directory to be searched for DLLs. True on success.
Type of operating system: Windows; Linux; Mac OSX; iOS devices (iPhone, iPad, iPod Touch); Android devices; Windows Phone.
The runtime environment: .Net runtime; Windows Store app runtime; Mono runtime.
The type of programming language: C#; C++.
An UnmanagedObject is a disposable object with a Ptr property pointing to the unmanaged object. A pointer to the unmanaged object. Implicit operator for IntPtr. The UnmanagedObject. The unmanaged pointer for this object. Pointer to the unmanaged object.
The base class for camera response calibration algorithms. The pointer to the calibrateCRF object. Recovers the inverse camera response. Vector of input images. 256x1 matrix with the inverse camera response function. Vector of exposure time values for each image.
Library to invoke OpenCV functions. String marshaling type. Represents a bool value in C++. Represents an int value in C++. OpenCV's calling convention. The file name of the cvextern library. The file name of the cvextern library. The file name of the opencv_ffmpeg library.
Allocates and initializes the CvCapture structure for reading a video stream from a camera. Currently two camera interfaces can be used on Windows: Video for Windows (VFW) and Matrox Imaging Library (MIL); and two on Linux: V4L and FireWire (IEEE1394). Index of the camera to be used. If there is only one camera or it does not matter which camera to use, -1 may be passed. Pointer to the capture structure.
Allocates and initializes the CvCapture structure for reading the video stream from the specified file. After the allocated structure is no longer used, it should be released by the cvReleaseCapture function. Name of the video file. Pointer to the capture structure.
The function cvReleaseCapture releases the CvCapture structure allocated by cvCreateFileCapture or cvCreateCameraCapture. Pointer to the video capturing structure.
Grabs a frame from a camera or video file, decompresses it and returns it. This function is just a combination of cvGrabFrame and cvRetrieveFrame in one call. Video capturing structure. The output frame. True if a frame is read. The returned image should not be released or modified by the user.
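The LoadLibrary/FreeLibrary/SetDllDirectory entries above are thin wrappers over the Win32 loader. A minimal sketch of the underlying P/Invoke declarations (standard kernel32 signatures, not the Emgu wrapper itself):

using System;
using System.Runtime.InteropServices;

internal static class NativeLoader
{
    // Maps the executable module into the calling process's address space.
    [DllImport("kernel32", SetLastError = true, CharSet = CharSet.Unicode)]
    public static extern IntPtr LoadLibrary(string dllName);

    // Decrements the module's reference count; unmaps it when the count hits zero.
    [DllImport("kernel32", SetLastError = true)]
    [return: MarshalAs(UnmanagedType.Bool)]
    public static extern bool FreeLibrary(IntPtr handle);

    // Adds a directory to the DLL search path.
    [DllImport("kernel32", SetLastError = true, CharSet = CharSet.Unicode)]
    [return: MarshalAs(UnmanagedType.Bool)]
    public static extern bool SetDllDirectory(string path);
}

// Usage sketch:
// IntPtr h = NativeLoader.LoadLibrary("cvextern.dll");
// if (h != IntPtr.Zero) NativeLoader.FreeLibrary(h);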
Grab a frame. Video capturing structure. True on success.
Get the frame grabbed with cvGrabFrame(..). This function may apply some frame processing like frame decompression, flipping etc. Video capturing structure. The output image. The frame retrieve flag. True on success. The returned image should not be released or modified by the user.
Retrieves the specified property of a camera or video file. Video capturing structure. Property identifier. The specified property of the camera or video file.
Sets the specified property of video capturing. Video capturing structure. Property identifier. Value of the property. True on success.
Check to make sure all the unmanaged libraries are loaded. True if the libraries are loaded.
Attempts to load opencv modules from the specific location. The directory where the unmanaged modules will be loaded. If it is null, the default location will be used. The names of opencv modules, e.g. "opencv_cxcore.dll" on Windows. True if all the modules have been loaded successfully. If it is null, the default location on Windows is the dll's path appended with either "x64" or "x86", depending on the application's current mode.
Get the module format string. On Windows, "{0}.dll" will be returned; on Linux, "lib{0}.so" will be returned; otherwise {0} is returned.
Attempts to load opencv modules from the specific location. The names of opencv modules, e.g. "opencv_cxcore.dll" on Windows. True if all the modules have been loaded successfully.
Static constructor to set up the opencv environment.
Get the corresponding depth type. The opencv depth type. The equivalent depth type. Get the corresponding opencv depth type. The element type. The equivalent opencv depth type.
This function performs the same as the MakeType macro. The type of depth. The number of channels. An integer that represents a Mat type.
Check if the sizes of the C structures match those of C#. True if the sizes match.
Finds the perspective transformation H=||h_ij|| between the source and the destination planes. Point coordinates in the original plane. Point coordinates in the destination plane. The output homography matrix. FindHomography method. The maximum allowed reprojection error to treat a point pair as an inlier. The parameter is only used in RANSAC-based homography estimation. E.g. if dst_points coordinates are measured in pixels with pixel-accurate precision, it makes sense to set this parameter somewhere in the range ~1..3. Optional output mask set by a robust method (CV_RANSAC or CV_LMEDS). Note that the input mask values are ignored. The 3x3 homography matrix if found. Null if not found.
Finds the perspective transformation H=||h_ij|| between the source and the destination planes. Point coordinates in the original plane, 2xN, Nx2, 3xN or Nx3 array (the latter two are for representation in homogeneous coordinates), where N is the number of points. Point coordinates in the destination plane, 2xN, Nx2, 3xN or Nx3 array (the latter two are for representation in homogeneous coordinates). The type of the method. The maximum allowed re-projection error to treat a point pair as an inlier. The parameter is only used in RANSAC-based homography estimation. E.g. if dst_points coordinates are measured in pixels with pixel-accurate precision, it makes sense to set this parameter somewhere in the range ~1..3. The optional output mask set by a robust method (RANSAC or LMEDS). Output 3x3 homography matrix. The homography matrix is determined up to a scale, thus it is normalized to make h33=1.
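A minimal usage sketch for the homography estimation described above, assuming the Mat-returning CvInvoke.FindHomography overload from Emgu 3.x (the method-enum name varies across Emgu versions):

using System.Drawing;
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Util;

public static class HomographyDemo
{
    public static Mat EstimateHomography(PointF[] srcPts, PointF[] dstPts)
    {
        using (VectorOfPointF src = new VectorOfPointF(srcPts))
        using (VectorOfPointF dst = new VectorOfPointF(dstPts))
        {
            // RANSAC with a ~3 pixel reprojection threshold, as the text
            // suggests for pixel-accurate correspondences.
            return CvInvoke.FindHomography(src, dst, HomographyMethod.Ransac, 3);
        }
    }
}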
Converts a rotation vector to a rotation matrix or vice versa. A rotation vector is a compact representation of a rotation matrix: the direction of the rotation vector is the rotation axis and the length of the vector is the rotation angle around the axis. The input rotation vector (3x1 or 1x3) or rotation matrix (3x3). The output rotation matrix (3x3) or rotation vector (3x1 or 1x3), respectively. Optional output Jacobian matrix, 3x9 or 9x3 - partial derivatives of the output array components w.r.t. the input array components.
Calculates the fundamental matrix using one of four methods listed above and returns the number of fundamental matrices found (1 or 3), or 0 if no matrix is found. Array of N points from the first image. The point coordinates should be floating-point (single or double precision). Array of the second image points of the same size and format as points1. Method for computing the fundamental matrix. Parameter used for RANSAC. It is the maximum distance from a point to an epipolar line in pixels, beyond which the point is considered an outlier and is not used for computing the final fundamental matrix. It can be set to something like 1-3, depending on the accuracy of the point localization, image resolution, and image noise. Parameter used for the RANSAC or LMedS methods only. It specifies a desirable level of confidence (probability) that the estimated matrix is correct. The optional pointer to the output array of N elements, every element of which is set to 0 for outliers and to 1 for the "inliers", i.e. points that comply well with the estimated epipolar geometry. The array is computed only in the RANSAC and LMedS methods; for other methods it is set to all 1's. The calculated fundamental matrix.
For every point in one of the two images of a stereo pair, the function cvComputeCorrespondEpilines finds the equation of the line that contains the corresponding point (i.e. the projection of the same 3D point) in the other image. Each line is encoded by a vector of 3 elements l=[a,b,c]^T, so that l^T*[x, y, 1]^T=0, or a*x + b*y + c = 0. From the fundamental matrix definition (see the cvFindFundamentalMatrix discussion), line l2 for a point p1 in the first image (which_image=1) can be computed as l2=F*p1, and the line l1 for a point p2 in the second image (which_image=2) can be computed as l1=F^T*p2. Line coefficients are defined up to a scale; they are normalized (a^2+b^2=1) and stored into correspondent_lines. The input points. 2xN, Nx2, 3xN or Nx3 array (where N is the number of points). Multi-channel 1xN or Nx1 array is also acceptable. Index of the image (1 or 2) that contains the points. Fundamental matrix. Computed epilines, 3xN or Nx3 array.
Converts points from Euclidean to homogeneous space. Input vector of N-dimensional points. Output vector of (N+1)-dimensional points.
Converts points from homogeneous to Euclidean space. Input vector of N-dimensional points. Output vector of (N-1)-dimensional points.
Transforms a 1-channel disparity map to a 3-channel image, a 3D surface. Disparity map. 3-channel, 16-bit integer or 32-bit floating-point image - the output map of 3D points. The reprojection 4x4 matrix, can be arbitrary, e.g. the one computed by cvStereoRectify. Indicates whether the function should handle missing values (i.e. points where the disparity was not computed). If handleMissingValues=true, then pixels with the minimal disparity that corresponds to the outliers (see StereoMatcher::compute) are transformed to 3D points with a very large Z value (currently set to 10000). The optional output array depth. If it is -1, the output image will have CV_32F depth. ddepth can also be set to CV_16S, CV_32S or CV_32F.
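A short sketch of the rotation-vector conversion described above, assuming the CvInvoke.Rodrigues(src, dst) signature from Emgu 3.x:

using Emgu.CV;
using Emgu.CV.CvEnum;

public static class RodriguesDemo
{
    public static Mat RotationVectorToMatrix(Mat rvec /* 3x1, CV_64F */)
    {
        // Output 3x3 rotation matrix; Rodrigues also works in the
        // opposite direction (3x3 in, 3x1 out).
        Mat rmat = new Mat(3, 3, DepthType.Cv64F, 1);
        CvInvoke.Rodrigues(rvec, rmat);
        return rmat;
    }
}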
Computes projections of 3D points to the image plane given intrinsic and extrinsic camera parameters. Optionally, the function computes Jacobians - matrices of partial derivatives of image points as functions of all the input parameters, w.r.t. the particular parameters, intrinsic and/or extrinsic. The Jacobians are used during the global optimization in cvCalibrateCamera2 and cvFindExtrinsicCameraParams2. The function itself is also used to compute the back-projection error for the current intrinsic and extrinsic parameters. Note that with intrinsic and/or extrinsic parameters set to special values, the function can be used to compute just the extrinsic transformation or just the intrinsic transformation (i.e. distortion of a sparse set of points). The array of object points. The rotation vector, 1x3 or 3x1. The translation vector, 1x3 or 3x1. The camera matrix (A) [fx 0 cx; 0 fy cy; 0 0 1]. The vector of distortion coefficients, 4x1 or 1x4 [k1, k2, p1, p2]. If it is IntPtr.Zero, all distortion coefficients are considered 0's. The output array of image points, 2xN or Nx2, where N is the total number of points in the view. Aspect ratio. Optional output 2Nx(10+numDistCoeffs) Jacobian matrix of derivatives of image points with respect to components of the rotation vector, translation vector, focal lengths, coordinates of the principal point and the distortion coefficients. In the old interface different components of the Jacobian are returned via different output parameters. The array of image points which is the projection of the object points.
Computes projections of 3D points to the image plane given intrinsic and extrinsic camera parameters. Optionally, the function computes Jacobians - matrices of partial derivatives of image points as functions of all the input parameters, w.r.t. the particular parameters, intrinsic and/or extrinsic. The Jacobians are used during the global optimization in cvCalibrateCamera2 and cvFindExtrinsicCameraParams2. The function itself is also used to compute the back-projection error for the current intrinsic and extrinsic parameters. Note that with intrinsic and/or extrinsic parameters set to special values, the function can be used to compute just the extrinsic transformation or just the intrinsic transformation (i.e. distortion of a sparse set of points). The array of object points, 3xN or Nx3, where N is the number of points in the view. The rotation vector, 1x3 or 3x1. The translation vector, 1x3 or 3x1. The camera matrix (A) [fx 0 cx; 0 fy cy; 0 0 1]. The vector of distortion coefficients, 4x1 or 1x4 [k1, k2, p1, p2]. If it is IntPtr.Zero, all distortion coefficients are considered 0's. The output array of image points, 2xN or Nx2, where N is the total number of points in the view. Aspect ratio. Optional output 2Nx(10+numDistCoeffs) Jacobian matrix of derivatives of image points with respect to components of the rotation vector, translation vector, focal lengths, coordinates of the principal point and the distortion coefficients. In the old interface different components of the Jacobian are returned via different output parameters.
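A minimal projection sketch, assuming the Emgu 3.x CvInvoke.ProjectPoints overload that takes an MCvPoint3D32f array and returns the projected PointF array (verify the exact signature against your Emgu version):

using System.Drawing;
using Emgu.CV;
using Emgu.CV.Structure;

public static class ProjectDemo
{
    // Projects 3D model points into the image plane using the pose
    // (rvec, tvec) and intrinsics described above.
    public static PointF[] Project(
        MCvPoint3D32f[] objectPoints,
        Mat rvec, Mat tvec, Mat cameraMatrix, Mat distCoeffs)
    {
        return CvInvoke.ProjectPoints(objectPoints, rvec, tvec,
            cameraMatrix, distCoeffs);
    }
}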
Estimates intrinsic camera parameters and extrinsic parameters for each of the views. The 3D location of the object points. The first index is the index of the image, the second index is the index of the point. The 2D image location of the points. The first index is the index of the image, the second index is the index of the point. The size of the image, used only to initialize the intrinsic camera matrix. The output 3xM or Mx3 array of rotation vectors (compact representation of rotation matrices, see cvRodrigues2). The output 3xM or Mx3 array of translation vectors. Calibration type. The termination criteria. The output camera matrix (A) [fx 0 cx; 0 fy cy; 0 0 1]. If CV_CALIB_USE_INTRINSIC_GUESS and/or CV_CALIB_FIX_ASPECT_RATIO are specified, some or all of fx, fy, cx, cy must be initialized. The output 4x1 or 1x4 vector of distortion coefficients [k1, k2, p1, p2]. The final reprojection error.
Estimates intrinsic camera parameters and extrinsic parameters for each of the views. The joint matrix of object points, 3xN or Nx3, where N is the total number of points in all views. The joint matrix of corresponding image points, 2xN or Nx2, where N is the total number of points in all views. Size of the image, used only to initialize the intrinsic camera matrix. The output camera matrix (A) [fx 0 cx; 0 fy cy; 0 0 1]. If CV_CALIB_USE_INTRINSIC_GUESS and/or CV_CALIB_FIX_ASPECT_RATIO are specified, some or all of fx, fy, cx, cy must be initialized. The output 4x1 or 1x4 vector of distortion coefficients [k1, k2, p1, p2]. The output 3xM or Mx3 array of rotation vectors (compact representation of rotation matrices, see cvRodrigues2). The output 3xM or Mx3 array of translation vectors. Different flags. The termination criteria. The final reprojection error.
Computes various useful camera (sensor/lens) characteristics using the computed camera calibration matrix, image frame resolution in pixels and the physical aperture size. The matrix of intrinsic parameters. Image size in pixels. Aperture width in real-world units (optional input parameter). Set it to 0 if not used. Aperture height in real-world units (optional input parameter). Set it to 0 if not used. Field of view angle in x direction in degrees. Field of view angle in y direction in degrees. Focal length in real-world units. The principal point in real-world units. The pixel aspect ratio ~ fy/fx.
Estimates extrinsic camera parameters using known intrinsic parameters and extrinsic parameters for each view. The coordinates of the 3D object points and their corresponding 2D projections must be specified. This function also minimizes back-projection error. The array of object points. The array of corresponding image points. The camera matrix (A) [fx 0 cx; 0 fy cy; 0 0 1]. The vector of distortion coefficients, 4x1 or 1x4 [k1, k2, p1, p2]. If it is IntPtr.Zero, all distortion coefficients are considered 0's. The output 3x1 or 1x3 rotation vector (compact representation of a rotation matrix, see cvRodrigues2). The output 3x1 or 1x3 translation vector. Use the input rotation and translation parameters as a guess. Method for solving a PnP problem. The extrinsic parameters.
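A condensed calibration sketch, assuming the Emgu 3.x CvInvoke.CalibrateCamera overload that takes jagged arrays of object and image points (signature details vary by Emgu version):

using System.Drawing;
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;

public static class CalibrationDemo
{
    public static double Calibrate(
        MCvPoint3D32f[][] objectPoints, // [view][point] 3D board coordinates
        PointF[][] imagePoints,         // [view][point] detected 2D corners
        Size imageSize)
    {
        Mat cameraMatrix = new Mat(3, 3, DepthType.Cv64F, 1);
        Mat distCoeffs = new Mat(8, 1, DepthType.Cv64F, 1);
        Mat[] rvecs, tvecs;

        // The return value is the final reprojection error described above.
        return CvInvoke.CalibrateCamera(
            objectPoints, imagePoints, imageSize,
            cameraMatrix, distCoeffs,
            CalibType.Default,
            new MCvTermCriteria(30, 1e-6),
            out rvecs, out tvecs);
    }
}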
Estimates extrinsic camera parameters using known intrinsic parameters and extrinsic parameters for each view. The coordinates of the 3D object points and their corresponding 2D projections must be specified. This function also minimizes back-projection error. The array of object points, 3xN or Nx3, where N is the number of points in the view. The array of corresponding image points, 2xN or Nx2, where N is the number of points in the view. The camera matrix (A) [fx 0 cx; 0 fy cy; 0 0 1]. The vector of distortion coefficients, 4x1 or 1x4 [k1, k2, p1, p2]. If it is IntPtr.Zero, all distortion coefficients are considered 0's. The output 3x1 or 1x3 rotation vector (compact representation of a rotation matrix, see cvRodrigues2). The output 3x1 or 1x3 translation vector. Use the input rotation and translation parameters as a guess. Method for solving a PnP problem.
Finds an object pose from 3D-2D point correspondences using the RANSAC scheme. Array of object points in the object coordinate space, 3xN/Nx3 1-channel or 1xN/Nx1 3-channel, where N is the number of points. VectorOfPoint3D32f can also be passed here. Array of corresponding image points, 2xN/Nx2 1-channel or 1xN/Nx1 2-channel, where N is the number of points. VectorOfPointF can also be passed here. Input camera matrix. Input vector of distortion coefficients of 4, 5, 8 or 12 elements. If the vector is null/empty, zero distortion coefficients are assumed. Output rotation vector. Output translation vector. If true, the function uses the provided rvec and tvec values as initial approximations of the rotation and translation vectors, respectively, and further optimizes them. Number of iterations. Inlier threshold value used by the RANSAC procedure. The parameter value is the maximum allowed distance between the observed and computed point projections to consider it an inlier. Number of inliers. If the algorithm at some stage finds more inliers than minInliersCount, it finishes. Output vector that contains indices of inliers in objectPoints and imagePoints. Method for solving a PnP problem.
Estimates the transformation between the 2 cameras making a stereo pair. If we have a stereo camera, where the relative position and orientation of the 2 cameras is fixed, and if we computed poses of an object relative to the first camera and to the second camera, (R1, T1) and (R2, T2) respectively (which can be done with cvFindExtrinsicCameraParams2), then obviously those poses relate to each other, i.e. given (R1, T1) it should be possible to compute (R2, T2) - we only need to know the position and orientation of the 2nd camera relative to the 1st camera. That is what the described function does. It computes (R, T) such that R2=R*R1, T2=R*T1 + T. The 3D location of the object points. The first index is the index of the image, the second index is the index of the point. The 2D image location of the points for camera 1. The first index is the index of the image, the second index is the index of the point. The 2D image location of the points for camera 2. The first index is the index of the image, the second index is the index of the point. The input/output camera matrices [fxk 0 cxk; 0 fyk cyk; 0 0 1]. If CV_CALIB_USE_INTRINSIC_GUESS or CV_CALIB_FIX_ASPECT_RATIO are specified, some or all of the elements of the matrices must be initialized. The input/output vectors of distortion coefficients for each camera, 4x1, 1x4, 5x1 or 1x5. The input/output camera matrices [fxk 0 cxk; 0 fyk cyk; 0 0 1]. If CV_CALIB_USE_INTRINSIC_GUESS or CV_CALIB_FIX_ASPECT_RATIO are specified, some or all of the elements of the matrices must be initialized. The input/output vectors of distortion coefficients for each camera, 4x1, 1x4, 5x1 or 1x5. Size of the image, used only to initialize the intrinsic camera matrix. The rotation matrix between the 1st and the 2nd cameras' coordinate systems. The translation vector between the cameras' coordinate systems. The optional output essential matrix. The optional output fundamental matrix. Termination criteria for the iterative optimization algorithm. The calibration flags.
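A pose-estimation sketch for the PnP entries above, assuming the Emgu 3.x CvInvoke.SolvePnP overload that accepts point arrays directly:

using System.Drawing;
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;

public static class PoseDemo
{
    // Recovers the object pose (rvec, tvec) from known 3D-2D correspondences.
    public static bool EstimatePose(
        MCvPoint3D32f[] objectPoints, PointF[] imagePoints,
        Mat cameraMatrix, Mat distCoeffs, Mat rvec, Mat tvec)
    {
        // Iterative PnP; other SolvePnpMethod values select EPnP, P3P, etc.
        return CvInvoke.SolvePnP(objectPoints, imagePoints,
            cameraMatrix, distCoeffs, rvec, tvec,
            false, SolvePnpMethod.Iterative);
    }
}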
Estimates the transformation between the 2 cameras making a stereo pair. If we have a stereo camera, where the relative position and orientation of the 2 cameras is fixed, and if we computed poses of an object relative to the first camera and to the second camera, (R1, T1) and (R2, T2) respectively (which can be done with cvFindExtrinsicCameraParams2), then obviously those poses relate to each other, i.e. given (R1, T1) it should be possible to compute (R2, T2) - we only need to know the position and orientation of the 2nd camera relative to the 1st camera. That is what the described function does. It computes (R, T) such that R2=R*R1, T2=R*T1 + T. The joint matrix of object points, 3xN or Nx3, where N is the total number of points in all views. The joint matrix of corresponding image points in the views from the 1st camera, 2xN or Nx2, where N is the total number of points in all views. The joint matrix of corresponding image points in the views from the 2nd camera, 2xN or Nx2, where N is the total number of points in all views. The input/output camera matrices [fxk 0 cxk; 0 fyk cyk; 0 0 1]. If CV_CALIB_USE_INTRINSIC_GUESS or CV_CALIB_FIX_ASPECT_RATIO are specified, some or all of the elements of the matrices must be initialized. The input/output vectors of distortion coefficients for each camera, 4x1, 1x4, 5x1 or 1x5. The input/output camera matrices [fxk 0 cxk; 0 fyk cyk; 0 0 1]. If CV_CALIB_USE_INTRINSIC_GUESS or CV_CALIB_FIX_ASPECT_RATIO are specified, some or all of the elements of the matrices must be initialized. The input/output vectors of distortion coefficients for each camera, 4x1, 1x4, 5x1 or 1x5. Size of the image, used only to initialize the intrinsic camera matrix. The rotation matrix between the 1st and the 2nd cameras' coordinate systems. The translation vector between the cameras' coordinate systems. The optional output essential matrix. The optional output fundamental matrix. Termination criteria for the iterative optimization algorithm. The calibration flags.
Computes the rectification transformations without knowing the intrinsic parameters of the cameras and their relative position in space, hence the suffix "Uncalibrated". Another related difference from cvStereoRectify is that the function outputs not the rectification transformations in the object (3D) space, but the planar perspective transformations encoded by the homography matrices H1 and H2. The function implements the following algorithm [Hartley99]. Note that while the algorithm does not need to know the intrinsic parameters of the cameras, it heavily depends on the epipolar geometry. Therefore, if the camera lenses have significant distortion, it is better to correct it before computing the fundamental matrix and calling this function. For example, distortion coefficients can be estimated for each head of a stereo camera separately by using cvCalibrateCamera2, and then the images can be corrected using cvUndistort2. The array of 2D points. The array of 2D points. Fundamental matrix. It can be computed using the same set of point pairs points1 and points2 using cvFindFundamentalMat. Size of the image. The rectification homography matrix for the first image. The rectification homography matrix for the second image. If the parameter is greater than zero, then all the point pairs that do not comply with the epipolar geometry well enough (that is, the points for which fabs(points2[i]^T*F*points1[i])>threshold) are rejected prior to computing the homographies.
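A sketch of the uncalibrated rectification path described above, assuming Emgu's CvInvoke.StereoRectifyUncalibrated wrapper as found in Emgu 3.x (verify the parameter list against your version):

using System.Drawing;
using Emgu.CV;

public static class RectifyDemo
{
    // Computes the planar rectification homographies H1, H2 from point
    // correspondences and the fundamental matrix F.
    public static void RectifyUncalibrated(
        IInputArray points1, IInputArray points2, Mat f, Size imageSize,
        Mat h1, Mat h2)
    {
        CvInvoke.StereoRectifyUncalibrated(points1, points2, f, imageSize,
            h1, h2, 5);
    }
}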
Computes the rotation matrices for each camera that (virtually) make both camera image planes the same plane. Consequently, this makes all the epipolar lines parallel and thus simplifies the dense stereo correspondence problem. On input the function takes the matrices computed by cvStereoCalibrate, and on output it gives 2 rotation matrices and also 2 projection matrices in the new coordinates. The function is normally called after cvStereoCalibrate, which computes both camera matrices, the distortion coefficients, R and T. The camera matrices [fx_k 0 cx_k; 0 fy_k cy_k; 0 0 1]. The camera matrices [fx_k 0 cx_k; 0 fy_k cy_k; 0 0 1]. The vector of distortion coefficients for the first camera, 4x1, 1x4, 5x1 or 1x5. The vector of distortion coefficients for the second camera, 4x1, 1x4, 5x1 or 1x5. Size of the image used for stereo calibration. The rotation matrix between the 1st and the 2nd cameras' coordinate systems. The translation vector between the cameras' coordinate systems. 3x3 rectification transform (rotation matrix) for the first camera. 3x3 rectification transform (rotation matrix) for the second camera. 3x4 projection matrix in the new (rectified) coordinate system. 3x4 projection matrix in the new (rectified) coordinate system. The optional output disparity-to-depth mapping matrix, 4x4, see cvReprojectImageTo3D. The operation flags; use ZeroDisparity for default. Use -1 for default. Use Size.Empty for default. The valid pixel ROI for image1. The valid pixel ROI for image2.
Attempts to determine whether the input image is a view of the chessboard pattern and to locate the internal chessboard corners. Source chessboard view; it must be an 8-bit grayscale or color image. The number of inner corners per chessboard row and column. Pointer to the output array of corners (PointF) detected. Various operation flags. True if all the corners have been found and they have been placed in a certain order (row by row, left to right in every row); otherwise, if the function fails to find all the corners or reorder them, it returns false. The coordinates detected are approximate, and to determine their positions more accurately, the user may use the function cvFindCornerSubPix.
Draws the individual chessboard corners detected as red circles if the board was not found (pattern_was_found=0), or as colored corners connected with lines if the board was found (pattern_was_found != 0). The destination image; it must be an 8-bit color image. The number of inner corners per chessboard row and column. The array of corners detected. Indicates whether the complete board was found (!=0) or not (=0). One may just pass the return value of cvFindChessboardCorners here.
Reconstructs points by triangulation. 3x4 projection matrix of the first camera. 3x4 projection matrix of the second camera. 2xN array of feature points in the first image. It can also be a vector of feature points or a two-channel matrix of size 1xN or Nx1. 2xN array of corresponding points in the second image. It can also be a vector of feature points or a two-channel matrix of size 1xN or Nx1. 4xN array of reconstructed points in homogeneous coordinates.
Refines coordinates of corresponding points. 3x3 fundamental matrix. 1xN array containing the first set of points. 1xN array containing the second set of points. The optimized points1. The optimized points2.
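A detection-and-drawing sketch for the chessboard functions above, assuming the Emgu 3.x FindChessboardCorners/DrawChessboardCorners wrappers:

using System.Drawing;
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Util;

public static class ChessboardDemo
{
    public static bool DetectAndDraw(Mat gray, Mat colorCanvas)
    {
        Size patternSize = new Size(9, 6); // inner corners per row/column
        using (VectorOfPointF corners = new VectorOfPointF())
        {
            bool found = CvInvoke.FindChessboardCorners(
                gray, patternSize, corners,
                CalibCbType.AdaptiveThresh | CalibCbType.NormalizeImage);
            // Red circles when not found; colored connected corners when found.
            CvInvoke.DrawChessboardCorners(colorCanvas, patternSize, corners, found);
            return found;
        }
    }
}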
The default exception callback to handle errors thrown by OpenCV. An error handler which will ignore any error and continue.
A custom error handler for OpenCV. The numeric code for the error status. The name of the function where the error is encountered. A description of the error. The source file name where the error is encountered. The line number in the source where the error is encountered. Arbitrary pointer that is transparently passed to the error handler.
A custom error handler for OpenCV. The numeric code for the error status. The name of the function where the error is encountered. A description of the error. The source file name where the error is encountered. The line number in the source where the error is encountered. Arbitrary pointer that is transparently passed to the error handler.
Sets a new error handler that can be one of the standard handlers or a custom handler that has a certain interface. The handler takes the same parameters as the cvError function. If the handler returns a non-zero value, the program is terminated; otherwise, it continues. The error handler may check the current error mode with cvGetErrMode to make a decision. The new error handler. Arbitrary pointer that is transparently passed to the error handler. Pointer to the previously assigned user data pointer.
Sets a new error handler that can be one of the standard handlers or a custom handler that has a certain interface. The handler takes the same parameters as the cvError function. If the handler returns a non-zero value, the program is terminated; otherwise, it continues. The error handler may check the current error mode with cvGetErrMode to make a decision. Pointer to the new error handler. Arbitrary pointer that is transparently passed to the error handler. Pointer to the previously assigned user data pointer.
Sets the specified error mode. The error mode. Returns the current error mode.
Returns the current error status - the value set with the last cvSetErrStatus call. Note that in Leaf mode the program terminates immediately after an error occurs, so to always get control after the function call one should call cvSetErrMode and set the Parent or Silent error mode. The current error status.
Sets the error status to the specified value. Mostly, the function is used to reset the error status (set it to CV_StsOk) to recover after an error. In other cases it is more natural to call cvError or CV_ERROR. The error status.
Returns the textual description for the specified error status code. In case of an unknown status the function returns a NULL pointer. The error status. The textual description for the specified error status code.
Initializes the CvMat header so that it points to the same data as the original array but has a different shape - a different number of channels, a different number of rows, or both. Input array. Output header to be filled. New number of channels; new_cn = 0 means that the number of channels remains unchanged. New number of rows; new_rows = 0 means that the number of rows remains unchanged unless it needs to be changed according to the new_cn value. Destination array to be changed.
Fills the destination array with the source array tiled: dst(i,j)=src(i mod rows(src), j mod cols(src)). So the destination array may be larger as well as smaller than the source array. Source array, image or matrix. Destination array, image or matrix. Flag to specify how many times the src is repeated along the vertical axis. Flag to specify how many times the src is repeated along the horizontal axis.
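By default Emgu redirects OpenCV errors into .NET exceptions, so the usual way to consume the error-handling machinery above is a try/catch; a minimal sketch, assuming Emgu's CvException type in Emgu.CV.Util:

using System;
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Util;

public static class ErrorDemo
{
    public static void Run()
    {
        try
        {
            // Deliberately invalid: adding two matrices of different sizes
            // makes OpenCV raise an error, which Emgu surfaces as CvException.
            using (Mat a = new Mat(2, 2, DepthType.Cv8U, 1))
            using (Mat b = new Mat(3, 3, DepthType.Cv8U, 1))
            using (Mat dst = new Mat())
                CvInvoke.Add(a, b, dst);
        }
        catch (CvException e)
        {
            Console.WriteLine("OpenCV error: " + e.Message);
        }
    }
}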
This function is the opposite of cvSplit. If the destination array has N channels, then if the first N input channels are not IntPtr.Zero they are all copied to the destination array; otherwise if only a single source channel of the first N is not IntPtr.Zero, this particular channel is copied into the destination array; otherwise an error is raised. The rest of the source channels (beyond the first N) must always be IntPtr.Zero. For IplImage, cvCopy with COI set can also be used to insert a single channel into the image. Input vector of matrices to be merged; all the matrices in mv must have the same size and the same depth. Output array of the same size and the same depth as mv[0]; the number of channels will be the total number of channels in the matrix array.
The function cvMixChannels is a generalized form of cvSplit and cvMerge and some forms of cvCvtColor. It can be used to change the order of the planes, add/remove an alpha channel, extract or insert a single plane or multiple planes, etc. The array of input arrays. The array of output arrays. The array of pairs of indices of the planes copied. from_to[k*2] is the 0-based index of the input plane, and from_to[k*2+1] is the index of the output plane, where the continuous numbering of the planes over all the input and over all the output arrays is used. When from_to[k*2] is negative, the corresponding output plane is filled with 0's. Unlike many other new-style C++ functions in OpenCV, mixChannels requires the output arrays to be pre-allocated before calling the function.
Extract the specific channel from the image. The source image. The channel. 0-based index of the channel to be extracted.
Insert the specific channel into the image. The source channel. The destination image where the channel will be inserted. 0-based index of the channel to be inserted.
Shuffles the matrix by swapping randomly chosen pairs of matrix elements on each iteration (where each element may contain several components in the case of multi-channel arrays). The input/output matrix. It is shuffled in-place. Pointer to the MCvRNG random number generator. Use 0 if not sure. The relative parameter that characterizes the intensity of the shuffling performed. The number of iterations (i.e. pairs swapped) is round(iter_factor*rows(mat)*cols(mat)), so iter_factor=0 means that no shuffling is done, iter_factor=1 means that the function swaps rows(mat)*cols(mat) random pairs, etc.
Inverts every bit of every array element. The source array. The destination array. The optional mask for the operation; use null to ignore.
Calculates the per-element maximum of two arrays: dst(I)=max(src1(I), src2(I)). All the arrays must have a single channel, the same data type and the same size (or ROI size). The first source array. The second source array. The destination array.
Returns the number of non-zero elements in arr: result = sum_I arr(I)!=0. In case of IplImage both ROI and COI are supported. The image. The number of non-zero elements in the image.
Find the locations of the non-zero pixels. The source array. The output array where the locations of the pixels are stored.
Computes the PSNR image/video quality metric. The first source image. The second source image. The quality metric.
Calculates the per-element minimum of two arrays: dst(I)=min(src1(I),src2(I)). All the arrays must have a single channel, the same data type and the same size (or ROI size). The first source array. The second source array. The destination array.
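A split/merge round trip mirroring the cvSplit/cvMerge pair described above (standard Emgu 3.x CvInvoke calls with VectorOfMat):

using Emgu.CV;
using Emgu.CV.Util;

public static class ChannelDemo
{
    // Splits a BGR image into planes, then merges them back.
    public static void SplitAndMerge(Mat bgr, Mat rebuilt)
    {
        using (VectorOfMat planes = new VectorOfMat())
        {
            CvInvoke.Split(bgr, planes);   // planes[0..2] = B, G, R
            CvInvoke.Merge(planes, rebuilt);
        }
    }
}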
Adds one array to another: dst(I)=src1(I)+src2(I) if mask(I)!=0. All the arrays must have the same type, except the mask, and the same size (or ROI size). The first source array. The second source array. The destination array. Operation mask, 8-bit single channel array; specifies elements of the destination array to be changed. Optional depth type of the output array.
Subtracts one array from another: dst(I)=src1(I)-src2(I) if mask(I)!=0. All the arrays must have the same type, except the mask, and the same size (or ROI size). The first source array. The second source array. The destination array. Operation mask, 8-bit single channel array; specifies elements of the destination array to be changed. Optional depth of the output array.
Divides one array by another: dst(I)=scale * src1(I)/src2(I) if src1!=IntPtr.Zero; dst(I)=scale/src2(I) if src1==IntPtr.Zero. All the arrays must have the same type and the same size (or ROI size). The first source array. If the pointer is IntPtr.Zero, the array is assumed to be all 1's. The second source array. The destination array. Optional scale factor. Optional depth of the output array.
Calculates the per-element product of two arrays: dst(I)=scale*src1(I)*src2(I). All the arrays must have the same type and the same size (or ROI size). The first source array. The second source array. The destination array. Optional scale factor. Optional depth of the output array.
Calculates the per-element bit-wise logical conjunction of two arrays: dst(I)=src1(I) & src2(I) if mask(I)!=0. In the case of floating-point arrays their bit representations are used for the operation. All the arrays must have the same type, except the mask, and the same size. The first source array. The second source array. The destination array. Operation mask, 8-bit single channel array; specifies elements of the destination array to be changed.
Calculates the per-element bit-wise disjunction of two arrays: dst(I)=src1(I)|src2(I). In the case of floating-point arrays their bit representations are used for the operation. All the arrays must have the same type, except the mask, and the same size. The first source array. The second source array. The destination array. Operation mask, 8-bit single channel array; specifies elements of the destination array to be changed.
Calculates the per-element bit-wise exclusive disjunction (XOR) of two arrays: dst(I)=src1(I)^src2(I) if mask(I)!=0. In the case of floating-point arrays their bit representations are used for the operation. All the arrays must have the same type, except the mask, and the same size. The first source array. The second source array. The destination array. Mask, 8-bit single channel array; specifies elements of the destination array to be changed.
Copies selected elements from the input array to the output array: dst(I)=src(I) if mask(I)!=0. If any of the passed arrays is of IplImage type, then its ROI and COI fields are used. Both arrays must have the same type, the same number of dimensions and the same size. The function can also copy sparse arrays (the mask is not supported in this case). The source array. The destination array. Operation mask, 8-bit single channel array; specifies elements of the destination array to be changed.
Initializes a scaled identity matrix: arr(i,j)=value if i=j, 0 otherwise. The matrix to initialize (not necessarily square). The value to assign to the diagonal elements.
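A masked addition sketch for the cvAdd entry above (standard Emgu 3.x CvInvoke.Add with a mask argument):

using Emgu.CV;
using Emgu.CV.Structure;

public static class ArithmeticDemo
{
    // dst(I) = src1(I) + src2(I) only where mask(I) != 0.
    public static Mat MaskedAdd(Mat src1, Mat src2, Mat mask8U)
    {
        Mat dst = new Mat(src1.Rows, src1.Cols, src1.Depth, src1.NumberOfChannels);
        dst.SetTo(new MCvScalar(0));          // untouched elements stay 0
        CvInvoke.Add(src1, src2, dst, mask8U);
        return dst;
    }
}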
Initializes the matrix as follows: arr(i,j)=(end-start)*(i*cols(arr)+j)/(cols(arr)*rows(arr)). The matrix to initialize. It should be single-channel, 32-bit, integer or floating-point. The lower inclusive boundary of the range. The upper exclusive boundary of the range.
Calculates either the magnitude, the angle, or both of every 2D vector (x(I),y(I)): magnitude(I)=sqrt(x(I)^2+y(I)^2), angle(I)=atan(y(I)/x(I)). The angles are calculated with ~0.1 degree accuracy. For the (0,0) point the angle is set to 0. The array of x-coordinates. The array of y-coordinates. The destination array of magnitudes; may be set to IntPtr.Zero if it is not needed. The destination array of angles; may be set to IntPtr.Zero if it is not needed. The angles are measured in radians (0..2π) or in degrees (0..360°). The flag indicating whether the angles are measured in radians or in degrees.
Calculates either the x-coordinate, the y-coordinate or both of every vector magnitude(I)*exp(angle(I)*j), j=sqrt(-1): x(I)=magnitude(I)*cos(angle(I)), y(I)=magnitude(I)*sin(angle(I)). Input floating-point array of magnitudes of 2D vectors; it can be an empty matrix (=Mat()), in which case the function assumes that all the magnitudes are =1; if it is not empty, it must have the same size and type as angle. Input floating-point array of angles of 2D vectors. Output array of x-coordinates of 2D vectors; it has the same size and type as angle. Output array of y-coordinates of 2D vectors; it has the same size and type as angle. The flag indicating whether the angles are measured in radians or in degrees.
Raises every element of the input array to p: dst(I)=src(I)^p if p is an integer, dst(I)=abs(src(I))^p otherwise. That is, for a non-integer power exponent the absolute values of the input array elements are used. However, it is possible to get true values for negative inputs using some extra operations, as the following sample, computing the cube root of array elements, shows: CvSize size = cvGetSize(src); CvMat* mask = cvCreateMat( size.height, size.width, CV_8UC1 ); cvCmpS( src, 0, mask, CV_CMP_LT ); /* find negative elements */ cvPow( src, dst, 1./3 ); cvSubRS( dst, cvScalarAll(0), dst, mask ); /* negate the results of negative inputs */ cvReleaseMat( &mask ); For some values of power, such as integer values, 0.5 and -0.5, specialized faster algorithms are used. The source array. The destination array; should be the same type as the source. The exponent of power.
Calculates the exponent of every element of the input array: dst(I)=exp(src(I)). The maximum relative error is 7e-6. Currently, the function converts denormalized values to zeros on output. The source array. The destination array; it should have double type or the same type as the source.
Calculates the natural logarithm of the absolute value of every element of the input array: dst(I)=log(abs(src(I))) if src(I)!=0, dst(I)=C if src(I)=0, where C is a large negative number (-700 in the current implementation). The source array. The destination array; it should have double type or the same type as the source.
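A short sketch for the Cartesian-to-polar conversion above (standard Emgu 3.x CvInvoke.CartToPolar):

using Emgu.CV;

public static class PolarDemo
{
    // Converts gradient components (x, y) into magnitude and angle,
    // with angles reported in degrees.
    public static void GradientToPolar(Mat gx, Mat gy, Mat mag, Mat angle)
    {
        CvInvoke.CartToPolar(gx, gy, mag, angle, true /* angleInDegrees */);
    }
}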
Finds the real roots of a cubic equation: coeffs[0]*x^3 + coeffs[1]*x^2 + coeffs[2]*x + coeffs[3] = 0 (if coeffs is a 4-element vector) or x^3 + coeffs[0]*x^2 + coeffs[1]*x + coeffs[2] = 0 (if coeffs is a 3-element vector). The equation coefficients, array of 3 or 4 elements. The output array of real roots; should have 3 elements, padded with zeros if there is only one root. The number of real roots found.
Finds all real and complex roots of any degree polynomial with real coefficients. The (degree + 1)-length array of equation coefficients (CV_32FC1 or CV_64FC1). The degree-length output array of real or complex roots (CV_32FC2 or CV_64FC2). The maximum number of iterations.
Solves the linear system (src1)*(dst) = (src2). The source matrix on the LHS. The source matrix on the RHS. The result. The method for solving the equation. 0 if src1 is singular and the CV_LU method is used.
Performs a forward or inverse transform of a 1D or 2D floating-point array. In case of real (single-channel) data, the packed format, borrowed from IPL, is used to represent the result of a forward Fourier transform or the input for an inverse Fourier transform. Source array, real or complex. Destination array of the same size and same type as the source. Transformation flags. Number of nonzero rows in the source array (in case of a forward 2D transform), or a number of rows of interest in the destination array (in case of an inverse 2D transform). If the value is negative, zero, or greater than the total number of rows, it is ignored. The parameter can be used to speed up 2D convolution/correlation when computing them via DFT.
Returns the minimum number N that is greater than or equal to size0, such that the DFT of a vector of size N can be computed fast. In the current implementation N=2^p x 3^q x 5^r for some p, q, r. Vector size. The minimum number N that is greater than or equal to size0, such that the DFT of a vector of size N can be computed fast.
Performs per-element multiplication of the two CCS-packed or complex matrices that are the results of a real or complex Fourier transform. The first source array. The second source array. The destination array of the same type and the same size as the sources. Operation flags; currently, the only supported flag is DFT_ROWS, which indicates that each row of src1 and src2 is an independent 1D Fourier spectrum. Optional flag that conjugates the second input array before the multiplication (true) or not (false).
Performs a forward or inverse transform of a 1D or 2D floating-point array. Source array, real 1D or 2D array. Destination array of the same size and same type as the source. Transformation flags.
Calculates the part of the line segment which is entirely within the rectangle. The rectangle. First ending point of the line segment; it is modified by the function. Second ending point of the line segment; it is modified by the function. Returns false if the line segment is completely outside the rectangle and true otherwise.
Calculates the absolute difference between two arrays: dst(I)c = abs(src1(I)c - src2(I)c). All the arrays must have the same data type and the same size (or ROI size). The first source array. The second source array. The destination array.
Calculates the weighted sum of two arrays as follows: dst(I)=src1(I)*alpha+src2(I)*beta+gamma. All the arrays must have the same type and the same size (or ROI size). The first source array. Weight of the first array elements. The second source array. Weight of the second array elements. Scalar added to each sum. The destination array. Optional depth of the output array; when both input arrays have the same depth, it can be set to -1, which is equivalent to the source depth.
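A forward-DFT sketch combining the Dft and GetOptimalDFTSize entries above, assuming the Emgu 3.x CvInvoke wrappers (DxtType enum name as in Emgu 3.x):

using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;

public static class DftDemo
{
    // Forward DFT of a single-channel float matrix, padded to an
    // optimal size for fast computation.
    public static Mat Forward(Mat src32F)
    {
        int rows = CvInvoke.GetOptimalDFTSize(src32F.Rows);
        int cols = CvInvoke.GetOptimalDFTSize(src32F.Cols);
        Mat padded = new Mat();
        CvInvoke.CopyMakeBorder(src32F, padded,
            0, rows - src32F.Rows, 0, cols - src32F.Cols,
            BorderType.Constant, new MCvScalar(0));
        Mat freq = new Mat();
        CvInvoke.Dft(padded, freq, DxtType.Forward, 0);
        return freq; // CCS-packed spectrum
    }
}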
Performs a range check for every element of the input array: dst(I) = lower(I)_0 <= src(I)_0 <= upper(I)_0 for single-channel arrays; dst(I) = lower(I)_0 <= src(I)_0 <= upper(I)_0 && lower(I)_1 <= src(I)_1 <= upper(I)_1 for two-channel arrays, etc. dst(I) is set to 0xff (all '1' bits) if src(I) is within the range and 0 otherwise. All the arrays must have the same type, except the destination, and the same size (or ROI size). The source image. The lower values, stored in an image of the same type & size as the source. The upper values, stored in an image of the same type & size as the source. The resulting mask.
Returns the calculated norm. Multiple-channel arrays are treated as single-channel, that is, the results for all channels are combined. The first source image. The second source image. If it is null, the absolute norm of arr1 is calculated; otherwise the absolute or relative norm of arr1-arr2 is calculated. Type of norm. The optional operation mask. The calculated norm.
Returns the calculated norm. Multiple-channel arrays are treated as single-channel, that is, the results for all channels are combined. The first source image. Type of norm. The optional operation mask. The calculated norm.
Creates the header and allocates data. Image width and height. Bit depth of image elements. Number of channels per element (pixel). Can be 1, 2, 3 or 4. The channels are interleaved; for example the usual data layout of a color image is: b0 g0 r0 b1 g1 r1 ... A pointer to the IplImage.
Allocates, initializes, and returns the structure IplImage. Image width and height. Bit depth of image elements. Number of channels per element (pixel). Can be 1, 2, 3 or 4. The channels are interleaved; for example the usual data layout of a color image is: b0 g0 r0 b1 g1 r1 ... The structure IplImage.
Initializes the image header structure, a pointer to which is passed by the user, and returns the pointer. Image header to initialize. Image width and height. Image depth. Number of channels. IPL_ORIGIN_TL or IPL_ORIGIN_BL. Alignment for image rows, typically 4 or 8 bytes.
Assigns user data to the array header. Array header. User data. Full row length in bytes.
Releases the header. Pointer to the deallocated header.
Initializes an already allocated CvMat structure. It can be used to process raw data with OpenCV matrix functions. Pointer to the matrix header to be initialized. Number of rows in the matrix. Number of columns in the matrix. Type of the matrix elements. Optional data pointer assigned to the matrix header. Full row width in bytes of the data assigned. By default, the minimal possible step is used, i.e., no gaps are assumed between subsequent rows of the matrix.
Sets the channel of interest to a given value. Value 0 means that all channels are selected, 1 means that the first channel is selected, etc. If ROI is NULL and coi != 0, ROI is allocated. Image header. Channel of interest starting from 1. If 0, the COI is unset.
Returns the channel of interest of the image (it returns 0 if all the channels are selected). Image header. The channel of interest of the image (it returns 0 if all the channels are selected).
Releases the image ROI. After that the whole image is considered selected. Image header.
Sets the image ROI to a given rectangle. If ROI is NULL and the value of the parameter rect is not equal to the whole image, ROI is allocated. Image header. ROI rectangle.
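A range-check sketch for the InRange entry above, assuming Emgu's ScalarArray helper to pass scalar bounds (standard in Emgu 3.x):

using Emgu.CV;
using Emgu.CV.Structure;

public static class InRangeDemo
{
    // Builds a binary mask of pixels whose BGR values fall inside
    // [lower, upper]: 0xff inside the range, 0 outside.
    public static Mat BlueMask(Mat bgr)
    {
        Mat mask = new Mat();
        using (ScalarArray lower = new ScalarArray(new MCvScalar(100, 0, 0)))
        using (ScalarArray upper = new ScalarArray(new MCvScalar(255, 80, 80)))
            CvInvoke.InRange(bgr, lower, upper, mask);
        return mask;
    }
}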
Allocates a header for the new matrix and the underlying data, and returns a pointer to the created matrix. Matrices are stored row by row. All the rows are aligned by 4 bytes. Number of rows in the matrix. Number of columns in the matrix. Type of the matrix elements. A pointer to the created matrix.
Initializes a CvMatND structure allocated by the user. Pointer to the array header to be initialized. Number of array dimensions. Array of dimension sizes. Type of array elements. Optional data pointer assigned to the matrix header. Pointer to the array header.
Decrements the matrix data reference counter and releases the matrix header. Double pointer to the matrix.
The function allocates a multi-dimensional sparse array. Initially the array contains no elements, that is, Get or GetReal returns zero for every index. Number of array dimensions. Array of dimension sizes. Type of array elements. Pointer to the array header.
The function releases the sparse array and clears the array pointer upon exit. Reference to the pointer to the array.
Assigns the new value to the particular element of a single-channel array. Input array. The first zero-based component of the element index. The assigned value.
Assigns the new value to the particular element of a single-channel array. Input array. The first zero-based component of the element index. The second zero-based component of the element index. The assigned value.
Assigns the new value to the particular element of a single-channel array. Input array. The first zero-based component of the element index. The second zero-based component of the element index. The third zero-based component of the element index. The assigned value.
Assigns the new value to the particular element of a single-channel array. Input array. Array of the element indices. The assigned value.
Clears (sets to zero) the particular element of a dense array or deletes the element of a sparse array. If the element does not exist, the function does nothing. Input array. Array of the element indices.
Assigns the new value to the particular element of an array. Input array. The first zero-based component of the element index. The second zero-based component of the element index. The assigned value.
Flips the array in one of 3 different ways (row and column indices are 0-based). Source array. Destination array. Specifies how to flip the array.
Returns a header corresponding to a specified rectangle of the input array. In other words, it allows the user to treat a rectangular part of the input array as a stand-alone array. ROI is taken into account by the function, so the sub-array of ROI is actually extracted. Input array. Pointer to the resultant sub-array header. Zero-based coordinates of the rectangle of interest. The resultant sub-array header.
Returns the header corresponding to a specified row span of the input array. Input array. Pointer to the preallocated memory of the resulting sub-array header. Zero-based index of the starting row (inclusive) of the span. Zero-based index of the ending row (exclusive) of the span. Index step in the row span. That is, the function extracts every delta_row-th row from start_row and up to (but not including) end_row. The header corresponding to a specified row span of the input array.
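A one-call sketch for the flip operation above (standard Emgu 3.x CvInvoke.Flip):

using Emgu.CV;
using Emgu.CV.CvEnum;

public static class FlipDemo
{
    // Mirrors an image around the vertical axis, one of the three
    // flip modes described above.
    public static Mat MirrorHorizontally(Mat src)
    {
        Mat dst = new Mat();
        CvInvoke.Flip(src, dst, FlipType.Horizontal);
        return dst;
    }
}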
Returns the header corresponding to a specified row of the input array. Input array. Pointer to the preallocated memory of the resulting sub-array header. Zero-based index of the selected row. The header corresponding to a specified row of the input array.
Returns the header corresponding to a specified column span of the input array. Input array. Pointer to the preallocated memory of the resulting sub-array header. Zero-based index of the starting column (inclusive) of the span. Zero-based index of the ending column (exclusive) of the span. The header corresponding to a specified column span of the input array.
Returns the header corresponding to a specified column of the input array. Input array. Pointer to the preallocated memory of the resulting sub-array header. Zero-based index of the selected column. The header corresponding to a specified column of the input array.
Returns the header corresponding to a specified diagonal of the input array. Input array. Pointer to the resulting sub-array header. Array diagonal. Zero corresponds to the main diagonal, -1 corresponds to the diagonal above the main one, 1 corresponds to the diagonal below the main one, etc. Pointer to the resulting sub-array header.
Returns the number of rows (CvSize::height) and the number of columns (CvSize::width) of the input matrix or image. In case of an image the size of the ROI is returned. Array header. The number of rows (CvSize::height) and number of columns (CvSize::width) of the input matrix or image. In case of an image the size of the ROI is returned.
Draws a simple or filled circle with the given center and radius. The circle is clipped by the ROI rectangle. Image where the circle is drawn. Center of the circle. Radius of the circle. Color of the circle. Thickness of the circle outline if positive; otherwise indicates that a filled circle is to be drawn. Line type. Number of fractional bits in the center coordinates and radius value.
Divides a multi-channel array into separate single-channel arrays. Two modes are available for the operation. If the source array has N channels and the first N destination channels are not IntPtr.Zero, they are all extracted from the source array; otherwise if only a single destination channel of the first N is not IntPtr.Zero, this particular channel is extracted; otherwise an error is raised. The rest of the destination channels (beyond the first N) must always be IntPtr.Zero. For IplImage, cvCopy with COI set can also be used to extract a single channel from the image. Input multi-channel array. Output array or vector of arrays.
Draws a simple or thick elliptic arc or fills an ellipse sector. The arc is clipped by the ROI rectangle. A piecewise-linear approximation is used for antialiased arcs and thick arcs. All the angles are given in degrees. Image. Center of the ellipse. Length of the ellipse axes. Rotation angle. Starting angle of the elliptic arc. Ending angle of the elliptic arc. Ellipse color. Thickness of the ellipse arc. Type of the ellipse boundary. Number of fractional bits in the center coordinates and axes' values.
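A drawing sketch for the Circle entry above (standard Emgu 3.x CvInvoke.Circle; negative thickness requests a filled circle):

using System.Drawing;
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;

public static class DrawDemo
{
    // Draws a filled, anti-aliased red marker at the given point.
    public static void MarkPoint(Mat canvas, Point center)
    {
        CvInvoke.Circle(canvas, center, 5,
            new MCvScalar(0, 0, 255), // BGR red
            -1, LineType.AntiAlias);
    }
}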
Image The box that defines the ellipse area Ellipse color Thickness of the ellipse arc Type of the ellipse boundary Number of fractional bits in the center coordinates and axes' values Fills the destination array with values from the look-up table. Indices of the entries are taken from the source array. That is, the function processes each element of src as following: dst(I)=lut[src(I)+DELTA] where DELTA=0 if src has depth CV_8U, and DELTA=128 if src has depth CV_8S Source array of 8-bit elements Destination array of arbitrary depth and of the same number of channels as the source array Look-up table of 256 elements; should have the same depth as the destination array. In case of multi-channel source and destination arrays, the table should either have a single-channel (in this case the same table is used for all channels), or the same number of channels as the source/destination array This function has several different purposes and thus has several synonyms. It copies one array to another with optional scaling, which is performed first, and/or optional type conversion, performed after: dst(I)=src(I)*scale + (shift,shift,...) All the channels of multi-channel arrays are processed independently. The type conversion is done with rounding and saturation, that is if a result of scaling + conversion cannot be represented exactly by a value of destination array element type, it is set to the nearest representable value on the real axis. In case of scale=1, shift=0 no prescaling is done. This is a specially optimized case and it has the appropriate cvConvert synonym. If the source and destination arrays have the same type, this is also a special case that can be used to scale and shift a matrix or an image and that fits the cvScale synonym. Source array Destination array Scale factor Value added to the scaled source array elements Similar to cvCvtScale but it stores absolute values of the conversion results: dst(I)=abs(src(I)*scale + (shift,shift,...)) The function supports only destination arrays of 8u (8-bit unsigned integers) type, for other types the function can be emulated by combination of cvConvertScale and cvAbs functions. Source array Destination array (should have 8u depth). ScaleAbs factor Value added to the scaled source array elements Calculates the average value M of array elements, independently for each channel: N = sum_{I: mask(I)!=0} 1, M_c = 1/N * sum_{I: mask(I)!=0} arr(I)_c. If the array is IplImage and COI is set, the function processes the selected channel only and stores the average to the first scalar component (S0). The array The optional operation mask average (mean) of array elements The function cvAvgSdv calculates the average value and standard deviation of array elements, independently for each channel If the array is IplImage and COI is set, the function processes the selected channel only and stores the average and standard deviation to the first components of output scalars (M0 and S0). The array Pointer to the mean value Pointer to the standard deviation The optional operation mask Calculates a mean and standard deviation of array elements. Input array that should have from 1 to 4 channels so that the results can be stored in MCvScalar Calculated mean value Calculated standard deviation Optional operation mask Calculates sum S of array elements, independently for each channel: S_c = sum_I arr(I)_c. If the array is IplImage and COI is set, the function processes the selected channel only and stores the sum to the first scalar component (S0). 
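The following hedged sketch shows the mean/standard-deviation query described above, assuming the Emgu CV 3.x CvInvoke.MeanStdDev overload that takes ref MCvScalar arguments; "input.png" is a placeholder file name.

    using Emgu.CV;
    using Emgu.CV.CvEnum;
    using Emgu.CV.Structure;

    using (Mat img = CvInvoke.Imread("input.png", ImreadModes.Color))
    {
        MCvScalar mean = new MCvScalar();
        MCvScalar std = new MCvScalar();
        CvInvoke.MeanStdDev(img, ref mean, ref std);
        // For a BGR image, V0/V1/V2 hold the B, G and R channel means.
        System.Console.WriteLine($"mean B={mean.V0:F1} G={mean.V1:F1} R={mean.V2:F1}");
    }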
The array The sum of array elements Reduces matrix to a vector by treating the matrix rows/columns as a set of 1D vectors and performing the specified operation on the vectors until a single row/column is obtained. The function can be used to compute horizontal and vertical projections of a raster image. In case of CV_REDUCE_SUM and CV_REDUCE_AVG the output may have a larger element bit-depth to preserve accuracy. And multi-channel arrays are also supported in these two reduction modes The input matrix The output single-row/single-column vector into which all the matrix rows/columns are accumulated The dimension index along which the matrix is reduced. The reduction operation type Optional depth type of the output array Releases the header and the image data. Double pointer to the header of the deallocated image Draws contour outlines or filled contours. Image where the contours are to be drawn. Like in any other drawing function, the contours are clipped with the ROI All the input contours. Each contour is stored as a point vector. Parameter indicating a contour to draw. If it is negative, all the contours are drawn. Color of the contours Maximal level for drawn contours. If 0, only the contour itself is drawn. If 1, the contour and all contours after it on the same level are drawn. If 2, all contours after and all contours one level below the contours are drawn, etc. If the value is negative, the function does not draw the contours following after contour but draws child contours of contour up to abs(maxLevel)-1 level. Thickness of lines the contours are drawn with. If it is negative the contour interiors are drawn Type of the contour segments Optional information about hierarchy. It is only needed if you want to draw only some of the contours Shift all the point coordinates by the specified value. It is useful if the contours were retrieved from an image ROI and then the ROI offset needs to be taken into account during the rendering. Fills convex polygon interior. This function is much faster than cvFillPoly and can fill not only convex polygons but any monotonic polygon, i.e. a polygon whose contour intersects every horizontal line (scan line) twice at the most Image Array of pointers to a single polygon Polygon color Type of the polygon boundaries Number of fractional bits in the vertex coordinates Fills the area bounded by one or more polygons. Image. Array of polygons where each polygon is represented as an array of points. Polygon color Type of the polygon boundaries. Number of fractional bits in the vertex coordinates. Optional offset of all points of the contours. Renders the text in the image with the specified font and color. The printed text is clipped by ROI rectangle. Symbols that do not belong to the specified font are replaced with the rectangle symbol. Input image String to print Coordinates of the bottom-left corner of the first letter Font type. Font scale factor that is multiplied by the font-specific base size. Text color Thickness of the lines used to draw a text. Line type When true, the image data origin is at the bottom-left corner. Otherwise, it is at the top-left corner. Finds minimum and maximum element values and their positions. The extremums are searched over the whole array, selected ROI (in case of IplImage) or, if mask is not IntPtr.Zero, in the specified array region. If the array has more than one channel, it must be IplImage with COI set. 
In case of multi-dimensional arrays min_loc->x and max_loc->x will contain raw (linear) positions of the extremums The source array, single-channel or multi-channel with COI set Pointer to returned minimum value Pointer to returned maximum value Pointer to returned minimum location Pointer to returned maximum location The optional mask that is used to select a subarray. Use IntPtr.Zero if not needed Copies the source 2D array into interior of destination array and makes a border of the specified type around the copied area. The function is useful when one needs to emulate border type that is different from the one embedded into a specific algorithm implementation. For example, morphological functions, as well as most of other filtering functions in OpenCV, internally use replication border type, while the user may need zero border or a border, filled with 1's or 255's The source image The destination image Type of the border to create around the copied source image rectangle Value of the border pixels if bordertype=CONSTANT Parameter specifying how many pixels in each direction from the source image rectangle to extrapolate (top). Parameter specifying how many pixels in each direction from the source image rectangle to extrapolate (bottom). Parameter specifying how many pixels in each direction from the source image rectangle to extrapolate (left). Parameter specifying how many pixels in each direction from the source image rectangle to extrapolate (right). Return the particular array element Input array. Must have a single channel The first zero-based component of the element index the particular array element Return the particular array element Input array. Must have a single channel The first zero-based component of the element index The second zero-based component of the element index the particular array element Return the particular array element Input array. Must have a single channel The first zero-based component of the element index The second zero-based component of the element index The third zero-based component of the element index the particular array element Return the particular element of single-channel array. If the array has multiple channels, a runtime error is raised. Note that the cvGet*D functions can be used safely for both single-channel and multiple-channel arrays though they are a bit slower. Input array. Must have a single channel The first zero-based component of the element index the particular element of single-channel array Return the particular element of single-channel array. If the array has multiple channels, a runtime error is raised. Note that the cvGet*D functions can be used safely for both single-channel and multiple-channel arrays though they are a bit slower. Input array. Must have a single channel The first zero-based component of the element index The second zero-based component of the element index the particular element of single-channel array Return the particular element of single-channel array. If the array has multiple channels, a runtime error is raised. Note that the cvGet*D functions can be used safely for both single-channel and multiple-channel arrays though they are a bit slower. Input array. Must have a single channel The first zero-based component of the element index The second zero-based component of the element index The third zero-based component of the element index the particular element of single-channel array Fills the array with normally distributed random numbers. Output array of random numbers; the array must be pre-allocated and have 1 to 4 channels. 
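A minimal sketch of the border padding described above, assuming the Emgu CV CvInvoke.CopyMakeBorder signature with explicit top/bottom/left/right margins; the file name and margin sizes are placeholders.

    using Emgu.CV;
    using Emgu.CV.CvEnum;
    using Emgu.CV.Structure;

    using (Mat src = CvInvoke.Imread("input.png", ImreadModes.Color))
    using (Mat bordered = new Mat())
    {
        // Add a 10-pixel constant (white) border on every side.
        CvInvoke.CopyMakeBorder(src, bordered, 10, 10, 10, 10,
            BorderType.Constant, new MCvScalar(255, 255, 255));
    }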
Mean value (expectation) of the generated random numbers. Standard deviation of the generated random numbers; it can be either a vector (in which case a diagonal standard deviation matrix is assumed) or a square matrix. Fills the array with normally distributed random numbers. Output array of random numbers; the array must be pre-allocated and have 1 to 4 channels. Mean value (expectation) of the generated random numbers. Standard deviation of the generated random numbers; it can be either a vector (in which case a diagonal standard deviation matrix is assumed) or a square matrix. Generates a single uniformly-distributed random number or an array of random numbers. Output array of random numbers; the array must be pre-allocated. Inclusive lower boundary of the generated random numbers. Exclusive upper boundary of the generated random numbers. Generates a single uniformly-distributed random number or an array of random numbers. Output array of random numbers; the array must be pre-allocated. Inclusive lower boundary of the generated random numbers. Exclusive upper boundary of the generated random numbers. Computes eigenvalues and eigenvectors of a symmetric matrix The input symmetric square matrix, modified during the processing The output matrix of eigenvectors, stored as subsequent rows The output vector of eigenvalues, stored in the descending order (order of eigenvalues and eigenvectors is synchronized, of course) Currently the function is slower than cvSVD yet less accurate, so if A is known to be positively defined (for example, it is a covariance matrix) it is recommended to use cvSVD to find eigenvalues and eigenvectors of A, especially if eigenvectors are not required. To calculate the largest eigenvector/-value set lowindex = highindex = 1. For legacy reasons this function always returns a square matrix the same size as the source matrix with eigenvectors and a vector the length of the source matrix with eigenvalues. The selected eigenvectors/-values are always in the first highindex - lowindex + 1 rows. normalizes the input array so that its norm or value range takes the given value(s). The input array The output array; in-place operation is supported The minimum/maximum value of the output array or the norm of output array The maximum/minimum value of the output array The normalization type The operation mask. Makes the function consider and normalize only certain array elements Optional depth type for the dst array Performs generalized matrix multiplication: dst = alpha*op(src1)*op(src2) + beta*op(src3), where op(X) is X or X^T The first source array. The second source array. The scalar The third source array (shift). Can be null, if there is no shift. The scalar The destination array. The Gemm operation type Performs matrix transformation of every element of array src and stores the results in dst Both source and destination arrays should have the same depth and the same size or selected ROI size. transmat and shiftvec should be real floating-point matrices. The first source array The destination array transformation 2x2 or 2x3 floating-point matrix. Transforms every element of src in the following way: (x, y) -> (x'/w, y'/w), where (x', y', w') = mat3x3 * (x, y, 1) and w = w' if w'!=0, inf otherwise The source points 3x3 floating-point transformation matrix. 
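A short sketch of the normalization just described, assuming Emgu CV's CvInvoke.Normalize with NormType.MinMax; the 0..255 range and 8-bit output depth are illustrative choices, not requirements.

    using Emgu.CV;
    using Emgu.CV.CvEnum;

    using (Mat src = CvInvoke.Imread("input.png", ImreadModes.Grayscale))
    using (Mat dst = new Mat())
    {
        // Stretch the value range to [0, 255] (min-max normalization).
        CvInvoke.Normalize(src, dst, 0, 255, NormType.MinMax, DepthType.Cv8U);
    }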
The destination points Transforms every element of src (by treating it as 2D or 3D vector) in the following way: (x, y, z) -> (x'/w, y'/w, z'/w) or (x, y) -> (x'/w, y'/w), where (x', y', z', w') = mat4x4 * (x, y, z, 1) or (x', y', w') = mat3x3 * (x, y, 1) and w = w' if w'!=0, inf otherwise The source three-channel floating-point array The destination three-channel floating-point array 3x3 or 4x4 floating-point transformation matrix. Calculates the product of src and its transposition. The function evaluates dst=scale(src-delta)*(src-delta)^T if order=0, and dst=scale(src-delta)^T*(src-delta) otherwise. The source matrix The destination matrix Order of multipliers An optional array, subtracted from src before multiplication An optional scaling Optional depth type of the output array Returns sum of diagonal elements of the matrix src1. the matrix sum of diagonal elements of the matrix src1 Transposes matrix src1: dst(i,j)=src(j,i) Note that no complex conjugation is done in case of complex matrix. Conjugation should be done separately: look at the sample code in cvXorS for example The source matrix The destination matrix Returns determinant of the square matrix mat. The direct method is used for small matrices and Gaussian elimination is used for larger matrices. For symmetric positive-definite matrices it is also possible to run SVD with U=V=NULL and then calculate determinant as a product of the diagonal elements of W The pointer to the matrix determinant of the square matrix mat Inverts matrix src1 and stores the result in src2 The source matrix. The destination matrix Inversion method Decomposes matrix A into a product of a diagonal matrix and two orthogonal matrices: A=U*W*V^T where W is a diagonal matrix of singular values that can be coded as a 1D vector of singular values and U and V. All the singular values are non-negative and sorted (together with U and V columns) in descending order. SVD algorithm is numerically robust and its typical applications include: 1. accurate eigenvalue problem solution when matrix A is a square, symmetric and positive-definite matrix, for example, when it is a covariance matrix. W in this case will be a vector of eigenvalues, and U=V is matrix of eigenvectors (thus, only one of U or V needs to be calculated if the eigenvectors are required) 2. accurate solution of ill-conditioned linear systems 3. least-squares solution of overdetermined linear systems. This and the previous are done by the cvSolve function with the CV_SVD method 4. accurate calculation of different matrix characteristics such as rank (number of non-zero singular values), condition number (ratio of the largest singular value to the smallest one), determinant (absolute value of determinant is equal to the product of singular values). All the things listed in this item do not require calculation of U and V matrices. Source MxN matrix Resulting singular value matrix (MxN or NxN) or vector (Nx1). Optional left orthogonal matrix (MxM or MxN). If CV_SVD_U_T is specified, the number of rows and columns in the sentence above should be swapped Optional right orthogonal matrix (NxN) Operation flags Performs a singular value back substitution. Singular values Left singular vectors Transposed matrix of right singular vectors. Right-hand side of a linear system Found solution of the system. Calculates the covariance matrix of a set of vectors. Samples stored either as separate matrices or as rows/columns of a single matrix. Output covariance matrix of the type ctype and square size. 
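To make the linear-algebra helpers concrete, here is a hedged example using Emgu CV's Matrix<T> wrapper with the determinant and inversion routines; the 2x2 values are arbitrary and DecompMethod.LU is just one of the available inversion methods.

    using Emgu.CV;
    using Emgu.CV.CvEnum;

    // Determinant and inverse of a small matrix via LU decomposition.
    using (Matrix<double> a = new Matrix<double>(new double[,] { { 4, 7 }, { 2, 6 } }))
    using (Matrix<double> inv = new Matrix<double>(2, 2))
    {
        double det = CvInvoke.Determinant(a);   // 4*6 - 7*2 = 10
        CvInvoke.Invert(a, inv, DecompMethod.LU);
    }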
Input or output (depending on the flags) array as the average value of the input vectors. Operation flags Type of the matrix Calculates the weighted distance between two vectors and returns it The first 1D source vector The second 1D source vector The inverse covariance matrix the Mahalanobis distance Performs Principal Component Analysis of the supplied dataset. Input samples stored as the matrix rows or as the matrix columns. Optional mean value; if the matrix is empty, the mean is computed from the data. The eigenvectors. Maximum number of components that PCA should retain; by default, all the components are retained. Performs Principal Component Analysis of the supplied dataset. Input samples stored as the matrix rows or as the matrix columns. Optional mean value; if the matrix is empty, the mean is computed from the data. The eigenvectors. Percentage of variance that PCA should retain. Using this parameter will let the PCA decide how many components to retain but it will always keep at least 2. Projects vector(s) to the principal component subspace. Input vector(s); must have the same dimensionality and the same layout as the input data used at the PCA phase The mean. The eigenvectors. The result. Reconstructs vectors from their PC projections. Coordinates of the vectors in the principal component subspace The mean. The eigenvectors. The result. Fills output variables with low-level information about the array data. All output parameters are optional, so some of the pointers may be set to NULL. If the array is IplImage with ROI set, parameters of ROI are returned. Array header Output pointer to the whole image origin or ROI origin if ROI is set Output full row length in bytes Output ROI size Returns matrix header for the input array that can be matrix - CvMat, image - IplImage or multi-dimensional dense array - CvMatND* (latter case is allowed only if allowND != 0). In the case of matrix the function simply returns the input pointer. In the case of IplImage* or CvMatND* it initializes header structure with parameters of the current image ROI and returns pointer to this temporary structure. Because COI is not supported by CvMat, it is returned separately. Input array Pointer to CvMat structure used as a temporary buffer Optional output parameter for storing COI If non-zero, the function accepts multi-dimensional dense arrays (CvMatND*) and returns 2D (if CvMatND has two dimensions) or 1D matrix (when CvMatND has 1 dimension or more than 2 dimensions). The array must be continuous Returns matrix header for the input array Returns image header for the input array that can be matrix - CvMat*, or image - IplImage*. Input array. Pointer to IplImage structure used as a temporary buffer. Returns image header for the input array Checks that every array element is neither NaN nor Infinity. If CV_CHECK_RANGE is set, it also checks that every element is greater than or equal to minVal and less than maxVal. The array to check. The operation flags, CHECK_NAN_INFINITY or combination of CHECK_RANGE - if set, the function checks that every value of array is within [minVal,maxVal) range, otherwise it just checks that every element is neither NaN nor Infinity. CHECK_QUIET - if set, the function does not raise an error if an element is invalid or out of range The inclusive lower boundary of valid values range. It is used only if CHECK_RANGE is set. The exclusive upper boundary of valid values range. It is used only if CHECK_RANGE is set. Returns nonzero if the check succeeded, i.e. 
all elements are valid and within the range, and zero otherwise. In the latter case if CV_CHECK_QUIET flag is not set, the function raises a runtime error. Return the current number of threads that are used by parallelized (via OpenMP) OpenCV functions. the current number of threads that are used by parallelized (via OpenMP) OpenCV functions Sets the number of threads that are used by parallelized OpenCV functions. The number of threads that are used by parallelized OpenCV functions. When the argument is zero or negative, and at the beginning of the program, the number of threads is set to the number of processors in the system, as returned by the function omp_get_num_procs() from OpenMP runtime. Returns the index, from 0 to cvGetNumThreads()-1, of the thread that called the function. It is a wrapper for the function omp_get_thread_num() from OpenMP runtime. The retrieved index may be used to access local-thread data inside the parallelized code fragments. The index, from 0 to cvGetNumThreads()-1, of the thread that called the function. It is a wrapper for the function omp_get_thread_num() from OpenMP runtime. The retrieved index may be used to access local-thread data inside the parallelized code fragments. Compares the corresponding elements of two arrays and fills the destination mask array: dst(I)=src1(I) op src2(I), dst(I) is set to 0xff (all '1'-bits) if the particular relation between the elements is true and 0 otherwise. All the arrays must have the same type, except the destination, and the same size (or ROI size) The first image to compare with The second image to compare with dst(I) is set to 0xff (all '1'-bits) if the particular relation between the elements is true and 0 otherwise. The comparison operator type Converts CvMat, IplImage, or CvMatND to Mat. Input CvMat, IplImage, or CvMatND. When true (default value), CvMatND is converted to 2-dimensional Mat, if it is possible (see the discussion below); if it is not possible, or when the parameter is false, the function will report an error. When false (default value), no data is copied and only the new header is created, in this case, the original array should not be deallocated while the new matrix header is used; if the parameter is true, all the data is copied and you may deallocate the original array right after the conversion. Parameter specifying how the IplImage COI (when set) is handled. If coiMode=0 and COI is set, the function reports an error. If coiMode=1, the function never reports an error. Instead, it returns the header to the whole original image and you will have to check and process COI manually. The Mat header Horizontally concatenate two images The first image The second image The result image Vertically concatenate two images The first image The second image The result image Finishes OpenCL queue. Get the OpenCL platform summary as a string An OpenCL platform summary Set the default OpenCL device The name of the OpenCL device Implements the k-means algorithm that finds centers of cluster_count clusters and groups the input samples around the clusters. On output labels(i) contains a cluster index for sample stored in the i-th row of samples matrix Floating-point matrix of input samples, one row per sample Output integer vector storing cluster indices for every sample Specifies maximum number of iterations and/or accuracy (distance the centers move by between the subsequent iterations) The number of attempts. 
Use 2 if not sure Flags, use 0 if not sure Pointer to array of centers, use IntPtr.Zero if not sure Number of clusters to split the set by. The grab cut algorithm for segmentation The 8-bit 3-channel image to be segmented Input/output 8-bit single-channel mask. The mask is initialized by the function when mode is set to GC_INIT_WITH_RECT. Its elements may have one of following values: 0 (GC_BGD) defines an obvious background pixel. 1 (GC_FGD) defines an obvious foreground (object) pixel. 2 (GC_PR_BGD) defines a possible background pixel. 3 (GC_PR_FGD) defines a possible foreground pixel. The rectangle to initialize the segmentation Temporary array for the background model. Do not modify it while you are processing the same image. Temporary array for the foreground model. Do not modify it while you are processing the same image. The number of iterations The initialization type Calculate square root of each source array element. In the case of multichannel arrays each channel is processed independently. The function accuracy is approximately the same as that of the built-in std::sqrt. The source floating-point array The destination array; will have the same size and the same type as src Apply color map to the image The source image. This function expects Image<Bgr, Byte> or Image<Gray, Byte>. If the wrong image type is given, the original image will be returned. The destination image The type of color map Check that every array element is neither NaN nor +- inf. The functions also check that each value is between minVal and maxVal. In the case of multi-channel arrays each channel is processed independently. If some values are out of range, position of the first outlier is stored in pos, and then the functions either return false (when quiet=true) or throw an exception. The array to check The flag indicating whether the functions quietly return false when the array elements are out of range, or they throw an exception This will be filled with the position of the first outlier The inclusive lower boundary of valid values range The exclusive upper boundary of valid values range If quiet, return true if all values are in range Computes an optimal affine transformation between two 3D point sets. First input 3D point set. Second input 3D point set. Output 3D affine transformation matrix. Output vector indicating which points are inliers. Maximum reprojection error in the RANSAC algorithm to consider a point as an inlier. Confidence level, between 0 and 1, for the estimated transformation. Anything between 0.95 and 0.99 is usually good enough. Values too close to 1 can slow down the estimation significantly. Values lower than 0.8-0.9 can result in an incorrectly estimated transformation. Computes an optimal affine transformation between two 3D point sets. First input 3D point set. Second input 3D point set. Output 3D affine transformation matrix 3 x 4 Output vector indicating which points are inliers. Maximum reprojection error in the RANSAC algorithm to consider a point as an inlier. Confidence level, between 0 and 1, for the estimated transformation. Anything between 0.95 and 0.99 is usually good enough. Values too close to 1 can slow down the estimation significantly. Values lower than 0.8-0.9 can result in an incorrectly estimated transformation. Finds the global minimum and maximum in an array Input single-channel array. 
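As an aside on the color-map routine above, a minimal sketch assuming the Emgu CV 3.x ColorMapType enum; "depth.png" is a placeholder for any single-channel 8-bit image.

    using Emgu.CV;
    using Emgu.CV.CvEnum;

    using (Mat gray = CvInvoke.Imread("depth.png", ImreadModes.Grayscale))
    using (Mat colored = new Mat())
    {
        // Map gray intensities onto the false-color "Jet" palette.
        CvInvoke.ApplyColorMap(gray, colored, ColorMapType.Jet);
    }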
The returned minimum value The returned maximum value The returned minimum location The returned maximum location The extremums are searched across the whole array if mask is IntPtr.Zero. Otherwise, search is performed in the specified array region. Applies arbitrary linear filter to the image. In-place operation is supported. When the aperture is partially outside the image, the function interpolates outlier pixel values from the nearest pixels that are inside the image The source image The destination image Convolution kernel, single-channel floating point matrix. If you want to apply different kernels to different channels, split the image using cvSplit into separate color planes and process them individually The anchor of the kernel that indicates the relative position of a filtered point within the kernel. The anchor should lie within the kernel. The special default value (-1,-1) means that it is at the kernel center The optional value added to the filtered pixels before storing them in dst The pixel extrapolation method. Contrast Limited Adaptive Histogram Equalization (CLAHE) The source image Clip Limit, use 40 for default Tile grid size, use (8, 8) for default The destination image This function retrieves the OpenCV structure sizes in unmanaged code The structure that will hold the OpenCV structure sizes Finds centers in the grid of circles Source chessboard view The number of inner circles per chessboard row and column Various operation flags The feature detector. Use a SimpleBlobDetector for default The center of circles detected if the chess board pattern is found, otherwise null is returned Finds centers in the grid of circles Source chessboard view The number of inner circles per chessboard row and column Various operation flags The feature detector. Use a SimpleBlobDetector for default output array of detected centers. True if grid found. The list of the OpenCV modules Creates a window which can be used as a placeholder for images and trackbars. Created windows are referred to by their names. If the window with such a name already exists, the function does nothing. Name of the window which is used as window identifier and appears in the window caption Flags of the window. Waits for key event infinitely (delay <= 0) or for "delay" milliseconds. Delay in milliseconds. The code of the pressed key or -1 if no key was pressed until the specified timeout has elapsed Shows the image in the specified window Name of the window Image to be shown Destroys the window with a given name Name of the window to be destroyed Destroys all of the HighGUI windows. Loads an image from the specified file and returns the pointer to the loaded image. Currently the following file formats are supported: Windows bitmaps - BMP, DIB; JPEG files - JPEG, JPG, JPE; Portable Network Graphics - PNG; Portable image format - PBM, PGM, PPM; Sun rasters - SR, RAS; TIFF files - TIFF, TIF; OpenEXR HDR images - EXR; JPEG 2000 images - jp2. The name of the file to be loaded The image loading type The loaded image Saves the image to the specified file. The image format is chosen depending on the filename extension, see cvLoadImage. Only 8-bit single-channel or 3-channel (with 'BGR' channel order) images can be saved using this function. 
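The window functions above combine into the usual load-show-wait loop; a sketch assuming Emgu CV's CvInvoke HighGUI wrappers with default window flags and a placeholder file name.

    using Emgu.CV;
    using Emgu.CV.CvEnum;

    using (Mat img = CvInvoke.Imread("input.png", ImreadModes.Color))
    {
        CvInvoke.NamedWindow("preview");     // no-op if the window already exists
        CvInvoke.Imshow("preview", img);
        CvInvoke.WaitKey(0);                 // block until any key is pressed
        CvInvoke.DestroyWindow("preview");
    }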
If the format, depth or channel order is different, use cvCvtScale and cvCvtColor to convert it before saving, or use universal cvSave to save the image to XML or YAML format The name of the file to be saved to The image to be saved The parameters true if success Decode image stored in the buffer The buffer The image loading type The output placeholder for the decoded matrix. Decode image stored in the buffer The buffer The image loading type The output placeholder for the decoded matrix. Encodes an image and stores the result as a byte vector. The image format The image Output buffer resized to fit the compressed image. The pointer to the array of integers, which contains the parameters for encoding; use IntPtr.Zero for default Implements a particular case of application of line iterators. The function reads all the image points lying on the line between pt1 and pt2, including the ending points, and stores them into the buffer Image to sample the line from The starting point of the line. The ending point of the line Buffer to store the line points; must have enough size to store max( |pt2.x-pt1.x|+1, |pt2.y-pt1.y|+1 ) points in case of 8-connected line and |pt2.x-pt1.x|+|pt2.y-pt1.y|+1 in case of 4-connected line The line connectivity, 4 or 8 Extracts pixels from src: dst(x, y) = src(x + center.x - (width(dst)-1)*0.5, y + center.y - (height(dst)-1)*0.5) where the values of pixels at non-integer coordinates are retrieved using bilinear interpolation. Every channel of multiple-channel images is processed independently. Whereas the rectangle center must be inside the image, the whole rectangle may be partially occluded. In this case, the replication border mode is used to get pixel values beyond the image boundaries. Source image Size of the extracted patch. Extracted rectangle Depth of the extracted pixels. By default, they have the same depth as the source image. Floating point coordinates of the extracted rectangle center within the source image. The center must be inside the image. Resizes the image src down to or up to the specified size Source image. Destination image Output image size; if it equals zero, it is computed as: dsize=Size(round(fx*src.cols), round(fy * src.rows)). Either dsize or both fx and fy must be non-zero. Scale factor along the horizontal axis Scale factor along the vertical axis; Interpolation method Applies an affine transformation to an image. Source image Destination image 2x3 transformation matrix Size of the output image. Interpolation method Warp method Pixel extrapolation method A value used to fill outliers Calculates the matrix of an affine transform such that: (x'_i,y'_i)^T=map_matrix (x_i,y_i,1)^T where dst(i)=(x'_i,y'_i), src(i)=(x_i,y_i), i=0..2. Coordinates of 3 triangle vertices in the source image. If the array contains more than 3 points, only the first 3 will be used Coordinates of the 3 corresponding triangle vertices in the destination image. If the array contains more than 3 points, only the first 3 will be used The 2x3 rotation matrix that defines the affine transform Calculates the matrix of an affine transform such that: (x'_i,y'_i)^T=map_matrix (x_i,y_i,1)^T where dst(i)=(x'_i,y'_i), src(i)=(x_i,y_i), i=0..2. Pointer to an array of PointF, Coordinates of 3 triangle vertices in the source image. Pointer to an array of PointF, Coordinates of the 3 corresponding triangle vertices in the destination image The destination 2x3 matrix Calculates rotation matrix Center of the rotation in the source image. The rotation angle in degrees. 
Positive values mean counter-clockwise rotation (the coordinate origin is assumed at top-left corner). Isotropic scale factor Pointer to the destination 2x3 matrix Pointer to the destination 2x3 matrix Applies a perspective transformation to an image Source image Destination image 3x3 transformation matrix Size of the output image Interpolation method Warp method Pixel extrapolation method value used in case of a constant border calculates matrix of perspective transform such that: (t_i x'_i,t_i y'_i,t_i)^T=map_matrix (x_i,y_i,1)^T where dst(i)=(x'_i,y'_i), src(i)=(x_i,y_i), i=0..3. Coordinates of 4 quadrangle vertices in the source image Coordinates of the 4 corresponding quadrangle vertices in the destination image The perspective transform matrix calculates matrix of perspective transform such that: (t_i x'_i,t_i y'_i,t_i)^T=map_matrix (x_i,y_i,1)^T where dst(i)=(x'_i,y'_i), src(i)=(x_i,y_i), i=0..3. Coordinates of 4 quadrangle vertices in the source image Coordinates of the 4 corresponding quadrangle vertices in the destination image The 3x3 Homography matrix Applies a generic geometrical transformation to an image. Source image Destination image The first map of either (x,y) points or just x values having the type CV_16SC2, CV_32FC1, or CV_32FC2. See convertMaps() for details on converting a floating point representation to fixed-point for speed. The second map of y values having the type CV_16UC1, CV_32FC1, or none (empty map if map1 is (x,y) points), respectively. Interpolation method (see resize()). The method 'Area' is not supported by this function. Pixel extrapolation method A value used to fill outliers Inverts an affine transformation Original affine transformation Output reverse affine transformation. Returns the default new camera matrix. Input camera matrix. Camera view image size in pixels. Location of the principal point in the new camera matrix. The parameter indicates whether this location should be at the image center or not. The default new camera matrix. The function emulates the human "foveal" vision and can be used for fast scale and rotation-invariant template matching, for object tracking etc. Source image Destination image The transformation center, where the output precision is maximal Magnitude scale parameter Interpolation method warp method The function emulates the human "foveal" vision and can be used for fast scale and rotation-invariant template matching, for object tracking etc. Source image Destination image The transformation center, where the output precision is maximal Maximum radius Interpolation method Warp method Performs downsampling step of Gaussian pyramid decomposition. First it convolves source image with the specified filter and then downsamples the image by rejecting even rows and columns. The source image. The destination image, should have 2x smaller width and height than the source. Border type Performs up-sampling step of Gaussian pyramid decomposition. First it upsamples the source image by injecting even zero rows and columns and then convolves result with the specified filter multiplied by 4 for interpolation. So the destination image is four times larger than the source image. The source image. The destination image, should have 2x larger width and height than the source. 
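Combining the two routines above, a hedged sketch of rotating an image about its center; it assumes the Emgu CV 3.x CvInvoke.GetRotationMatrix2D overload that writes into an output array, which may be named differently in older releases.

    using System.Drawing;
    using Emgu.CV;
    using Emgu.CV.CvEnum;

    using (Mat src = CvInvoke.Imread("input.png", ImreadModes.Color))
    using (Mat rot = new Mat())
    using (Mat dst = new Mat())
    {
        // 30-degree counter-clockwise rotation about the image center, no scaling.
        CvInvoke.GetRotationMatrix2D(
            new PointF(src.Width / 2f, src.Height / 2f), 30, 1.0, rot);
        CvInvoke.WarpAffine(src, dst, rot, src.Size);
    }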
Border type Implements one of the variants of watershed, non-parametric marker-based segmentation algorithm, described in [Meyer92] Before passing the image to the function, the user has to roughly outline the desired regions in the image markers with positive (>0) indices, i.e. every region is represented as one or more connected components with the pixel values 1, 2, 3 etc. Those components will be "seeds" of the future image regions. All the other pixels in markers, whose relation to the outlined regions is not known and should be defined by the algorithm, should be set to 0's. On the output of the function, each pixel in markers is set to one of values of the "seed" components, or to -1 at boundaries between the regions. Note that it is not necessary that every two neighbor connected components are separated by a watershed boundary (-1's pixels), for example, in case when such tangent components exist in the initial marker image. The input 8-bit 3-channel image The input/output Int32 depth single-channel image (map) of markers. Finds minimum area rectangle that contains both input rectangles inside First rectangle Second rectangle The minimum area rectangle that contains both input rectangles inside Fits line to 2D or 3D point set Input vector of 2D or 3D points, stored in std::vector or Mat. The distance used for fitting Numerical parameter (C) for some types of distances, if 0 then some optimal value is chosen Sufficient accuracy for radius (distance between the coordinate origin and the line), 0.01 would be a good default Sufficient accuracy for angle, 0.01 would be a good default Output line parameters. In case of 2D fitting, it should be a vector of 4 elements (like Vec4f) - (vx, vy, x0, y0), where (vx, vy) is a normalized vector collinear to the line and (x0, y0) is a point on the line. In case of 3D fitting, it should be a vector of 6 elements (like Vec6f) - (vx, vy, vz, x0, y0, z0), where (vx, vy, vz) is a normalized vector collinear to the line and (x0, y0, z0) is a point on the line. Fits line to 2D or 3D point set Input vector of 2D points. The distance used for fitting Numerical parameter (C) for some types of distances, if 0 then some optimal value is chosen Sufficient accuracy for radius (distance between the coordinate origin and the line), 0.01 would be a good default Sufficient accuracy for angle, 0.01 would be a good default A normalized vector collinear to the line A point on the line. Finds out if there is any intersection between two rotated rectangles. First rectangle Second rectangle The output array of the vertices of the intersecting region. It returns at most 8 vertices. Stored as VectorOfPointF or Mat as Mx1 of type CV_32FC2. The intersect type Calculates vertices of the input 2d box. The box The four vertices of rectangles. Calculates vertices of the input 2d box. The box The output array of four vertices of rectangles. Fits an ellipse around a set of 2D points. Input 2D point set The ellipse that fits best (in least-squares sense) to a set of 2D points Finds convex hull of 2D point set using Sklansky's algorithm The points to find convex hull from Orientation flag. If it is true, the output convex hull is oriented clockwise. Otherwise, it is oriented counter-clockwise. The assumed coordinate system has its X axis pointing to the right, and its Y axis pointing upwards. The convex hull of the points The function cvConvexHull2 finds convex hull of 2D point set using Sklansky's algorithm. Input 2D point set Output convex hull. 
It is either an integer vector of indices or vector of points. In the first case, the hull elements are 0-based indices of the convex hull points in the original array (since the set of convex hull points is a subset of the original point set). In the second case, hull elements are the convex hull points themselves. Orientation flag. If it is true, the output convex hull is oriented clockwise. Otherwise, it is oriented counter-clockwise. The assumed coordinate system has its X axis pointing to the right, and its Y axis pointing upwards. Operation flag. In case of a matrix, when the flag is true, the function returns convex hull points. Otherwise, it returns indices of the convex hull points. When the output array is std::vector, the flag is ignored, and the output depends on the type of the vector The default morphology value. Erodes the source image using the specified structuring element that determines the shape of a pixel neighborhood over which the minimum is taken: dst=erode(src,element): dst(x,y)=min_{(x',y') in element} src(x+x',y+y') The function supports the in-place mode. Erosion can be applied several (iterations) times. In case of color image each channel is processed independently. Source image. Destination image Structuring element used for erosion. If it is IntPtr.Zero, a 3x3 rectangular structuring element is used. Number of times erosion is applied. Pixel extrapolation method Border value in case of a constant border, use Constant for default Position of the anchor within the element; default value (-1, -1) means that the anchor is at the element center. Dilates the source image using the specified structuring element that determines the shape of a pixel neighborhood over which the maximum is taken The function supports the in-place mode. Dilation can be applied several (iterations) times. In case of color image each channel is processed independently Source image Destination image Structuring element used for dilation. If it is IntPtr.Zero, a 3x3 rectangular structuring element is used Number of times dilation is applied Pixel extrapolation method Border value in case of a constant border Position of the anchor within the element; default value (-1, -1) means that the anchor is at the element center. Blurs an image using a Gaussian filter. input image; the image can have any number of channels, which are processed independently, but the depth should be CV_8U, CV_16U, CV_16S, CV_32F or CV_64F. output image of the same size and type as src. Gaussian kernel size. ksize.width and ksize.height can differ but they both must be positive and odd. Or, they can be zeros and then they are computed from sigma*. Gaussian kernel standard deviation in X direction. Gaussian kernel standard deviation in Y direction; if sigmaY is zero, it is set to be equal to sigmaX, if both sigmas are zeros, they are computed from ksize.width and ksize.height, respectively (see getGaussianKernel() for details); to fully control the result regardless of possible future modifications of all this semantics, it is recommended to specify all of ksize, sigmaX, and sigmaY. Pixel extrapolation method Blurs an image using the normalized box filter. input image; it can have any number of channels, which are processed independently, but the depth should be CV_8U, CV_16U, CV_16S, CV_32F or CV_64F. Output image of the same size and type as src. Blurring kernel size. Anchor point; default value Point(-1,-1) means that the anchor is at the kernel center. Border mode used to extrapolate pixels outside of the image. 
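A minimal sketch of the smoothing and erosion calls described above, assuming the Emgu CV 3.x signatures; passing null for the structuring element selects the default 3x3 rectangle, and CvInvoke.MorphologyDefaultBorderValue supplies the default border value mentioned earlier.

    using System.Drawing;
    using Emgu.CV;
    using Emgu.CV.CvEnum;

    using (Mat src = CvInvoke.Imread("input.png", ImreadModes.Color))
    using (Mat smooth = new Mat())
    using (Mat eroded = new Mat())
    {
        // 5x5 Gaussian kernel, sigma 1.5 in both directions.
        CvInvoke.GaussianBlur(src, smooth, new Size(5, 5), 1.5);
        // One erosion pass with the default 3x3 rectangular element.
        CvInvoke.Erode(src, eroded, null, new Point(-1, -1), 1,
            BorderType.Constant, CvInvoke.MorphologyDefaultBorderValue);
    }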
Blurs an image using the median filter. Input 1-, 3-, or 4-channel image; when ksize is 3 or 5, the image depth should be CV_8U, CV_16U, or CV_32F, for larger aperture sizes, it can only be CV_8U. Destination array of the same size and type as src. Aperture linear size; it must be odd and greater than 1, for example: 3, 5, 7 ... Blurs an image using the box filter. Input image. Output image of the same size and type as src. The output image depth (-1 to use src.depth()). Blurring kernel size. Anchor point; default value Point(-1,-1) means that the anchor is at the kernel center. Specifying whether the kernel is normalized by its area or not. Border mode used to extrapolate pixels outside of the image. Applies the bilateral filter to an image. Source 8-bit or floating-point, 1-channel or 3-channel image. Destination image of the same size and type as src . Diameter of each pixel neighborhood that is used during filtering. If it is non-positive, it is computed from sigmaSpace . Filter sigma in the color space. A larger value of the parameter means that farther colors within the pixel neighborhood (see sigmaSpace ) will be mixed together, resulting in larger areas of semi-equal color. Filter sigma in the coordinate space. A larger value of the parameter means that farther pixels will influence each other as long as their colors are close enough (see sigmaColor ). When d>0 , it specifies the neighborhood size regardless of sigmaSpace. Otherwise, d is proportional to sigmaSpace. Border mode used to extrapolate pixels outside of the image. The Sobel operators combine Gaussian smoothing and differentiation so the result is more or less robust to the noise. Most often, the function is called with (xorder=1, yorder=0, aperture_size=3) or (xorder=0, yorder=1, aperture_size=3) to calculate first x- or y- image derivative. The first case corresponds to
 
              |-1  0  1|
              |-2  0  2|
              |-1  0  1|
kernel and the second one corresponds to
              |-1 -2 -1|
              | 0  0  0|
              | 1  2  1|
or
              | 1  2  1|
              | 0  0  0|
              |-1 -2 -1|
kernel, depending on the image origin (origin field of IplImage structure). No scaling is done, so the destination image usually contains numbers larger in absolute value than the source image. To avoid overflow, the function requires 16-bit destination image if the source image is 8-bit. The result can be converted back to 8-bit using cvConvertScale or cvConvertScaleAbs functions. Besides 8-bit images the function can process 32-bit floating-point images. Both source and destination must be single-channel images of equal size or ROI size
Source image. Destination image. Output image depth; the following combinations of src.depth() and ddepth are supported: src.depth() = CV_8U, ddepth = -1/CV_16S/CV_32F/CV_64F; src.depth() = CV_16U/CV_16S, ddepth = -1/CV_32F/CV_64F; src.depth() = CV_32F, ddepth = -1/CV_32F/CV_64F; src.depth() = CV_64F, ddepth = -1/CV_64F; when ddepth=-1, the destination image will have the same depth as the source; in the case of 8-bit input images it will result in truncated derivatives. Order of the derivative x Order of the derivative y Size of the extended Sobel kernel, must be 1, 3, 5 or 7. Pixel extrapolation method Optional scale factor for the computed derivative values Optional delta value that is added to the results prior to storing them in dst
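A hedged sketch of the Sobel workflow just described: take a 16-bit first x-derivative to avoid overflow, then convert back to 8-bit with absolute values, as the text suggests.

    using Emgu.CV;
    using Emgu.CV.CvEnum;

    using (Mat gray = CvInvoke.Imread("input.png", ImreadModes.Grayscale))
    using (Mat gx = new Mat())
    using (Mat gx8 = new Mat())
    {
        // First x-derivative with a 3x3 kernel; 16-bit output avoids overflow.
        CvInvoke.Sobel(gray, gx, DepthType.Cv16S, 1, 0, 3);
        // Convert back to 8-bit for display, taking absolute values.
        CvInvoke.ConvertScaleAbs(gx, gx8, 1.0, 0.0);
    }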
Calculates Laplacian of the source image by summing second x- and y- derivatives calculated using Sobel operator: dst(x,y) = d^2src/dx^2 + d^2src/dy^2 Specifying aperture_size=1 gives the fastest variant that is equal to convolving the image with the following kernel: |0 1 0| |1 -4 1| |0 1 0| Similar to cvSobel function, no scaling is done and the same combinations of input and output formats are supported. Source image. Destination image. Should have type of float Desired depth of the destination image. Aperture size used to compute the second-derivative filters. Optional scale factor for the computed Laplacian values. By default, no scaling is applied. Optional delta value that is added to the results prior to storing them in dst. Pixel extrapolation method. Finds the edges on the input and marks them in the output image edges using the Canny algorithm. The smallest of threshold1 and threshold2 is used for edge linking, the largest - to find initial segments of strong edges. Input image Image to store the edges found by the function The first threshold The second threshold. Aperture parameter for Sobel operator a flag, indicating whether a more accurate norm should be used to calculate the image gradient magnitude (L2gradient=true), or whether the default norm is enough (L2gradient=false). The function tests whether the input contour is convex or not. The contour must be simple, that is, without self-intersections. Otherwise, the function output is undefined. Input vector of 2D points true if input is convex Determines whether the point is inside contour, outside, or lies on an edge (or coincides with a vertex). It returns positive, negative or zero value, correspondingly Input contour The point tested against the contour If != 0, the function estimates distance from the point to the nearest contour edge When measureDist = false, the return value is >0 (inside), <0 (outside) and =0 (on edge), respectively. When measureDist is true, it is a signed distance between the point and the nearest contour edge Finds the convexity defects of a contour. Input contour Convex hull obtained using ConvexHull that should contain pointers or indices to the contour points, not the hull points themselves, i.e. return_points parameter in cvConvexHull2 should be 0 The output vector of convexity defects. Each convexity defect is represented as 4-element integer vector (a.k.a. cv::Vec4i): (start_index, end_index, farthest_pt_index, fixpt_depth), where indices are 0-based indices in the original contour of the convexity defect beginning, end and the farthest point, and fixpt_depth is fixed-point approximation (with 8 fractional bits) of the distance between the farthest contour point and the hull. That is, the floating-point value of the depth is fixpt_depth/256.0. Find the bounding rectangle for the specific array of points The collection of points The bounding rectangle for the array of points Finds a rotated rectangle of the minimum area enclosing the input 2D point set. Input vector of 2D points a circumscribed rectangle of the minimal area for 2D point set Finds the minimal circumscribed circle for 2D point set using iterative algorithm. It returns nonzero if the resultant circle contains all the input points and zero otherwise (i.e. algorithm failed) Sequence or array of 2D points The minimal circumscribed circle for 2D point set Finds the minimal circumscribed circle for 2D point set using iterative algorithm. 
It returns nonzero if the resultant circle contains all the input points and zero otherwise (i.e. algorithm failed) Sequence or array of 2D points The minimal circumscribed circle for 2D point set Finds a triangle of minimum area enclosing a 2D point set and returns its area. Input vector of 2D points with depth CV_32S or CV_32F Output vector of three 2D points defining the vertices of the triangle. The depth of the OutputArray must be CV_32F. The triangle's area Approximates a polygonal curve(s) with the specified precision. Input vector of 2D points Result of the approximation. The type should match the type of the input curve. Parameter specifying the approximation accuracy. This is the maximum distance between the original curve and its approximation. If true, the approximated curve is closed (its first and last vertices are connected). Otherwise, it is not closed. Returns the up-right bounding rectangle for 2d point set Input 2D point set, stored in std::vector or Mat. The up-right bounding rectangle for 2d point set Calculates area of the whole contour or contour section. Input vector of 2D points (contour vertices), stored in std::vector or Mat. Oriented area flag. If it is true, the function returns a signed area value, depending on the contour orientation (clockwise or counter-clockwise). Using this feature you can determine orientation of a contour by taking the sign of an area. By default, the parameter is false, which means that the absolute value is returned. The area of the whole contour or contour section Calculates a contour perimeter or a curve length Sequence or array of the curve points Indicates whether the curve is closed or not. Contour perimeter or a curve length Applies fixed-level thresholding to single-channel array. The function is typically used to get bi-level (binary) image out of grayscale image (cvCmpS could be also used for this purpose) or for removing noise, i.e. filtering out pixels with too small or too large values. There are several types of thresholding the function supports that are determined by threshold_type Source array (single-channel, 8-bit or 32-bit floating point). Destination array; must be either the same type as src or 8-bit. Threshold value Maximum value to use with CV_THRESH_BINARY and CV_THRESH_BINARY_INV thresholding types Thresholding type Transforms grayscale image to binary image. Threshold calculated individually for each pixel. For the method CV_ADAPTIVE_THRESH_MEAN_C it is a mean of the blockSize x blockSize pixel neighborhood, subtracted by param1. For the method CV_ADAPTIVE_THRESH_GAUSSIAN_C it is a weighted sum (Gaussian) of the blockSize x blockSize pixel neighborhood, subtracted by param1. Source array (single-channel, 8-bit or 32-bit floating point). Destination array; must be either the same type as src or 8-bit. Maximum value to use with CV_THRESH_BINARY and CV_THRESH_BINARY_INV thresholding types Adaptive_method Thresholding type; must be one of CV_THRESH_BINARY, CV_THRESH_BINARY_INV The size of a pixel neighborhood that is used to calculate a threshold value for the pixel: 3, 5, 7, ... Constant subtracted from mean or weighted mean. It may be negative. Retrieves contours from the binary image and returns the number of retrieved contours. The pointer firstContour is filled by the function. It will contain pointer to the first most outer contour or IntPtr.Zero if no contours are detected (if the image is completely black). Other contours may be reached from firstContour using h_next and v_next links. 
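The contour measurements above can be exercised on a hand-built rectangle; a sketch assuming Emgu CV's VectorOfPoint container, where the expected results are easy to verify by hand.

    using System.Drawing;
    using Emgu.CV;
    using Emgu.CV.Util;

    // Geometry queries on a 100x50 axis-aligned rectangle.
    using (VectorOfPoint contour = new VectorOfPoint(new[] {
        new Point(0, 0), new Point(100, 0), new Point(100, 50), new Point(0, 50) }))
    {
        double area = CvInvoke.ContourArea(contour);          // 5000
        double perimeter = CvInvoke.ArcLength(contour, true); // 300
        Rectangle box = CvInvoke.BoundingRectangle(contour);
        // Positive result: the point lies 25 pixels inside the contour.
        double dist = CvInvoke.PointPolygonTest(contour, new PointF(50, 25), true);
    }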
The sample in cvDrawContours discussion shows how to use contours for connected component detection. Contours can be also used for shape analysis and object recognition - see squares.c in OpenCV sample directory The function modifies the source image content The source 8-bit single channel image. Non-zero pixels are treated as 1s, zero pixels remain 0s - that is, the image is treated as binary. To get such a binary image from grayscale, one may use cvThreshold, cvAdaptiveThreshold or cvCanny. The function modifies the source image content Detected contours. Each contour is stored as a vector of points. Optional output vector, containing information about the image topology. Retrieval mode Approximation method (for all the modes, except CV_RETR_RUNS, which uses built-in approximation). Offset, by which every contour point is shifted. This is useful if the contours are extracted from the image ROI and then they should be analyzed in the whole image context The number of contours Retrieves contours from the binary image as a contour tree. The pointer firstContour is filled by the function. It is provided as a convenient way to obtain the hierarchy value as int[,]. The function modifies the source image content The source 8-bit single channel image. Non-zero pixels are treated as 1s, zero pixels remain 0s - that is, the image is treated as binary. To get such a binary image from grayscale, one may use cvThreshold, cvAdaptiveThreshold or cvCanny. The function modifies the source image content Detected contours. Each contour is stored as a vector of points. Approximation method (for all the modes, except CV_RETR_RUNS, which uses built-in approximation). Offset, by which every contour point is shifted. This is useful if the contours are extracted from the image ROI and then they should be analyzed in the whole image context The contour hierarchy Convert raw data to bitmap The pointer to the raw data The step The size of the image The source image color type The number of channels The source image depth type Try to create Bitmap that shares the data with the image The Bitmap Converts input image from one color space to another. The function ignores colorModel and channelSeq fields of IplImage header, so the source image color space should be specified correctly (including order of the channels in case of RGB space, e.g. BGR means 24-bit format with B0 G0 R0 B1 G1 R1 ... layout, whereas RGB means 24-bit format with R0 G0 B0 R1 G1 B1 ... layout). The source 8-bit (8u), 16-bit (16u) or single-precision floating-point (32f) image The destination image of the same data type as the source one. The number of channels may be different Source color type. Destination color type Converts input image from one color space to another. The function ignores colorModel and channelSeq fields of IplImage header, so the source image color space should be specified correctly (including order of the channels in case of RGB space, e.g. BGR means 24-bit format with B0 G0 R0 B1 G1 R1 ... layout, whereas RGB means 24-bit format with R0 G0 B0 R1 G1 B1 ... layout). The source 8-bit (8u), 16-bit (16u) or single-precision floating-point (32f) image The destination image of the same data type as the source one. The number of channels may be different Color conversion operation that can be specified using CV_src_color_space2dst_color_space constants number of channels in the destination image; if the parameter is 0, the number of the channels is derived automatically from src and code. 
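Putting thresholding and contour retrieval together, a minimal sketch assuming the Emgu CV 3.x FindContours signature (the hierarchy argument may be null when the topology is not needed); "shapes.png" is a placeholder.

    using Emgu.CV;
    using Emgu.CV.CvEnum;
    using Emgu.CV.Structure;
    using Emgu.CV.Util;

    using (Mat gray = CvInvoke.Imread("shapes.png", ImreadModes.Grayscale))
    using (Mat binary = new Mat())
    using (VectorOfVectorOfPoint contours = new VectorOfVectorOfPoint())
    {
        // Binarize, then collect only the outer contours.
        CvInvoke.Threshold(gray, binary, 128, 255, ThresholdType.Binary);
        CvInvoke.FindContours(binary, contours, null,
            RetrType.External, ChainApproxMethod.ChainApproxSimple);
        for (int i = 0; i < contours.Size; i++)
            CvInvoke.DrawContours(gray, contours, i, new MCvScalar(255), 2);
    }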
Finds circles in a grayscale image using some modification of the Hough transform The input 8-bit single-channel grayscale image The storage for the circles detected. It can be a memory storage (in this case a sequence of circles is created in the storage and returned by the function) or a single row/single column matrix (CvMat*) of type CV_32FC3, to which the circles' parameters are written. The matrix header is modified by the function so its cols or rows will contain the number of circles detected. If circle_storage is a matrix and the actual number of circles exceeds the matrix size, the maximum possible number of circles is returned. Every circle is encoded as 3 floating-point numbers: center coordinates (x,y) and the radius Currently, the only implemented method is CV_HOUGH_GRADIENT Resolution of the accumulator used to detect centers of the circles. For example, if it is 1, the accumulator will have the same resolution as the input image, if it is 2 - the accumulator will have half the width and height, etc Minimum distance between centers of the detected circles. If the parameter is too small, multiple neighbor circles may be falsely detected in addition to a true one. If it is too large, some circles may be missed The first method-specific parameter. In case of CV_HOUGH_GRADIENT it is the higher threshold of the two passed to the Canny edge detector (the lower one will be twice smaller). The second method-specific parameter. In case of CV_HOUGH_GRADIENT it is the accumulator threshold at the center detection stage. The smaller it is, the more false circles may be detected. Circles, corresponding to the larger accumulator values, will be returned first Minimal radius of the circles to search for Maximal radius of the circles to search for. By default the maximal radius is set to max(image_width, image_height). Pointer to the sequence of circles Finds circles in a grayscale image using the Hough transform 8-bit, single-channel, grayscale input image. Detection method to use. Currently, the only implemented method is CV_HOUGH_GRADIENT, which is basically 21HT Inverse ratio of the accumulator resolution to the image resolution. For example, if dp=1, the accumulator has the same resolution as the input image. If dp=2, the accumulator has half the width and height. Minimum distance between the centers of the detected circles. If the parameter is too small, multiple neighbor circles may be falsely detected in addition to a true one. If it is too large, some circles may be missed. First method-specific parameter. In case of CV_HOUGH_GRADIENT, it is the higher threshold of the two passed to the Canny() edge detector (the lower one is twice smaller). Second method-specific parameter. In case of CV_HOUGH_GRADIENT, it is the accumulator threshold for the circle centers at the detection stage. The smaller it is, the more false circles may be detected. Circles, corresponding to the larger accumulator values, will be returned first. Minimum circle radius. Maximum circle radius. The circles detected Finds lines in a binary image using the standard Hough transform. 8-bit, single-channel binary source image. The image may be modified by the function. Output vector of lines. Each line is represented by a two-element vector (rho, theta) Distance resolution of the accumulator in pixels. Angle resolution of the accumulator in radians. Accumulator threshold parameter. Only those lines are returned that get enough votes (> threshold) For the multi-scale Hough transform, it is a divisor for the distance resolution rho. The coarse accumulator distance resolution is rho and the accurate accumulator resolution is rho/srn. If both srn=0 and stn=0, the classical Hough transform is used. Otherwise, both these parameters should be positive. For the multi-scale Hough transform, it is a divisor for the distance resolution theta
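A hedged sketch of the managed HoughCircles overload that returns the detected circles directly; every parameter value below is illustrative, not a recommendation:

    using Emgu.CV;
    using Emgu.CV.CvEnum;
    using Emgu.CV.Structure;

    using (Mat gray = CvInvoke.Imread("coins.jpg", ImreadModes.Grayscale))
    {
        CircleF[] circles = CvInvoke.HoughCircles(
            gray,
            HoughType.Gradient, // currently the only implemented method
            2.0,                // dp: accumulator at half the image resolution
            gray.Rows / 8.0,    // minimum distance between circle centers
            200,                // higher Canny threshold
            100,                // accumulator threshold for centers
            10,                 // minimum radius
            100);               // maximum radius

        foreach (CircleF c in circles)
        {
            // c.Center and c.Radius describe each detected circle.
        }
    }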
Finds line segments in a binary image using the probabilistic Hough transform. 8-bit, single-channel binary source image. The image may be modified by the function. Distance resolution of the accumulator in pixels Angle resolution of the accumulator in radians Accumulator threshold parameter. Only those lines are returned that get enough votes Minimum line length. Line segments shorter than that are rejected. Maximum allowed gap between points on the same line to link them. The found line segments Finds line segments in a binary image using the probabilistic Hough transform. 8-bit, single-channel binary source image. The image may be modified by the function. Output vector of lines. Each line is represented by a 4-element vector (x1, y1, x2, y2) Distance resolution of the accumulator in pixels Angle resolution of the accumulator in radians Accumulator threshold parameter. Only those lines are returned that get enough votes Minimum line length. Line segments shorter than that are rejected. Maximum allowed gap between points on the same line to link them. Calculates spatial and central moments up to the third order and writes them to moments. The moments may then be used to calculate the gravity center of the shape, its area, main axes and various shape characteristics including the 7 Hu invariants. Image (1-channel or 3-channel with COI set) or polygon (CvSeq of points or a vector of points) (For images only) If the flag is true, all the zero pixel values are treated as zeroes, all the others are treated as 1s The moment This function is similar to cvCalcBackProjectPatch. It slides through the image, compares overlapped patches of size wxh with templ using the specified method and stores the comparison results to result Image where the search is running. It should be 8-bit or 32-bit floating-point Searched template; must not be greater than the source image and must have the same data type as the image A map of comparison results; single-channel 32-bit floating-point. If image is WxH and templ is wxh then result must be (W-w+1)x(H-h+1). Specifies the way the template must be compared with image regions Compares two shapes. The 3 implemented methods all use Hu moments First contour or grayscale image Second contour or grayscale image Comparison method Method-specific parameter (is not used now) The result of the comparison Returns a structuring element of the specified size and shape for morphological operations. Element shape Size of the structuring element. Anchor position within the element. The value (-1, -1) means that the anchor is at the center. Note that only the shape of a cross-shaped element depends on the anchor position. In other cases the anchor just regulates how much the result of the morphological operation is shifted. The structuring element Performs advanced morphological transformations. Source image. Destination image. Structuring element. Type of morphological operation. Number of times erosion and dilation are applied. Pixel extrapolation method. Anchor position with the kernel. Negative values mean that the anchor is at the kernel center. Border value in case of a constant border.
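The two functions above are typically used together; a minimal sketch of a morphological opening with a 5x5 elliptical structuring element (values are illustrative):

    using System.Drawing;
    using Emgu.CV;
    using Emgu.CV.CvEnum;
    using Emgu.CV.Structure;

    using (Mat binary = CvInvoke.Imread("mask.png", ImreadModes.Grayscale))
    using (Mat opened = new Mat())
    using (Mat kernel = CvInvoke.GetStructuringElement(
        ElementShape.Ellipse, new Size(5, 5), new Point(-1, -1))) // (-1,-1): anchor at center
    {
        // Opening = erosion followed by dilation; removes small bright speckles.
        CvInvoke.MorphologyEx(
            binary, opened, MorphOp.Open, kernel,
            new Point(-1, -1),  // anchor at kernel center
            1,                  // iterations
            BorderType.Default,
            new MCvScalar());   // border value (unused for the default border)
    }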
The algorithm normalizes brightness and increases contrast of the image The input 8-bit single-channel image The output image of the same size and the same data type as src Calculates a histogram of a set of arrays. Source arrays. They all should have the same depth, CV_8U or CV_32F, and the same size. Each of them can have an arbitrary number of channels. List of the channels used to compute the histogram. Optional mask. If the matrix is not empty, it must be an 8-bit array of the same size as images[i]. The non-zero mask elements mark the array elements counted in the histogram. Output histogram Array of histogram sizes in each dimension. Array of the dims arrays of the histogram bin boundaries in each dimension. Accumulation flag. If it is set, the histogram is not cleared in the beginning when it is allocated. This feature enables you to compute a single histogram from several sets of arrays, or to update the histogram in time. Calculates the back projection of a histogram. Source arrays. They all should have the same depth, CV_8U or CV_32F, and the same size. Each of them can have an arbitrary number of channels. Number of source images. Input histogram that can be dense or sparse. Destination back projection array that is a single-channel array of the same size and depth as images[0]. Array of arrays of the histogram bin boundaries in each dimension. Optional scale factor for the output back projection. Compares two histograms. First compared histogram. Second compared histogram of the same size as H1. Comparison method The distance between the histograms Retrieves the spatial moment, which in case of image moments is defined as: M_{x_order,y_order}=sum_{x,y}(I(x,y) * x^{x_order} * y^{y_order}) where I(x,y) is the intensity of the pixel (x, y). The moment state x order of the retrieved moment, xOrder >= 0. y order of the retrieved moment, yOrder >= 0 and xOrder + yOrder <= 3 The spatial moment Retrieves the central moment, which in case of image moments is defined as: mu_{x_order,y_order}=sum_{x,y}(I(x,y)*(x-x_c)^{x_order} * (y-y_c)^{y_order}), where x_c=M10/M00, y_c=M01/M00 - coordinates of the gravity center Reference to the moment state structure x order of the retrieved moment, xOrder >= 0. y order of the retrieved moment, yOrder >= 0 and xOrder + yOrder <= 3 The central moment Retrieves the normalized central moment, which in case of image moments is defined as: eta_{x_order,y_order}=mu_{x_order,y_order} / M00^{(y_order+x_order)/2+1}, where mu_{x_order,y_order} is the central moment Reference to the moment state structure x order of the retrieved moment, xOrder >= 0. y order of the retrieved moment, yOrder >= 0 and xOrder + yOrder <= 3 The normalized central moment Adds the whole image or its selected region to the accumulator sum Input image, 1- or 3-channel, 8-bit or 32-bit floating point. (each channel of multi-channel image is processed independently). Accumulator of the same number of channels as input image, 32-bit or 64-bit floating-point. Optional operation mask
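A minimal sketch computing a 256-bin grayscale histogram with CalcHist and then equalizing the image; the bin count and value range are the conventional choices for 8-bit images, and the file name is a placeholder:

    using Emgu.CV;
    using Emgu.CV.CvEnum;
    using Emgu.CV.Util;

    using (Mat gray = CvInvoke.Imread("photo.jpg", ImreadModes.Grayscale))
    using (Mat hist = new Mat())
    using (Mat equalized = new Mat())
    using (VectorOfMat images = new VectorOfMat(gray))
    {
        CvInvoke.CalcHist(
            images,
            new int[] { 0 },         // use channel 0
            null,                    // no mask
            hist,
            new int[] { 256 },       // 256 bins
            new float[] { 0, 256 },  // value range [0, 256)
            false);                  // do not accumulate

        CvInvoke.EqualizeHist(gray, equalized);
    }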
Adds the input or its selected region, raised to power 2, to the accumulator sqsum Input image, 1- or 3-channel, 8-bit or 32-bit floating point (each channel of multi-channel image is processed independently) Accumulator of the same number of channels as input image, 32-bit or 64-bit floating-point Optional operation mask Adds the product of 2 images or their selected regions to the accumulator acc First input image, 1- or 3-channel, 8-bit or 32-bit floating point (each channel of multi-channel image is processed independently) Second input image, the same format as the first one Accumulator of the same number of channels as input images, 32-bit or 64-bit floating-point Optional operation mask Calculates the weighted sum of the input and the accumulator acc so that acc becomes a running average of the frame sequence: acc(x,y) = (1 - alpha) * acc(x,y) + alpha * image(x,y) if mask(x,y) != 0, where alpha regulates the update speed (how fast the accumulator forgets about previous frames). Input image, 1- or 3-channel, 8-bit or 32-bit floating point (each channel of multi-channel image is processed independently). Accumulator of the same number of channels as input image, 32-bit or 64-bit floating-point. Weight of the input image Optional operation mask Calculates seven Hu invariants Pointer to the moment state structure Pointer to Hu moments structure. Runs the Harris edge detector on the image. Similarly to cvCornerMinEigenVal and cvCornerEigenValsAndVecs, for each pixel it calculates a 2x2 gradient covariation matrix M over a block_size x block_size neighborhood. Then, it stores det(M) - k*trace(M)^2 to the destination image. Corners in the image can be found as local maxima of the destination image. Input image Image to store the Harris detector responses. Should have the same size as image Neighborhood size Aperture parameter for the Sobel operator (see cvSobel). In the case of floating-point input format this parameter is the number of the fixed float filter used for differencing. Harris detector free parameter. Pixel extrapolation method. Iterates to find the sub-pixel accurate location of corners, or radial saddle points Input image Initial coordinates of the input corners and refined coordinates on output Half sizes of the search window. For example, if win=(5,5) then a 5*2+1 x 5*2+1 = 11 x 11 search window is used Half size of the dead region in the middle of the search zone over which the summation in the formulae below is not done. It is used sometimes to avoid possible singularities of the autocorrelation matrix. The value of (-1,-1) indicates that there is no such size Criteria for termination of the iterative process of corner refinement. That is, the process of corner position refinement stops either after a certain number of iterations or when a required accuracy is achieved. The criteria may specify either of or both the maximum number of iterations and the required accuracy Calculates one or more integral images for the source image Using these integral images, one may calculate the sum, mean, and standard deviation over an arbitrary up-right or rotated rectangular region of the image in constant time. It makes it possible to do fast blurring or fast block correlation with a variable window size etc. In case of multi-channel images sums for each channel are accumulated independently. The source image, WxH, 8-bit or floating-point (32f or 64f) image. The integral image, (W+1)x(H+1), 32-bit integer or double precision floating-point (64f). The integral image for squared pixel values, (W+1)x(H+1), double precision floating-point (64f). The integral for the image rotated by 45 degrees, (W+1)x(H+1), the same data type as sum. Desired depth of the integral and the tilted integral images, CV_32S, CV_32F, or CV_64F. Desired depth of the integral image of squared pixel values, CV_32F or CV_64F.
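A short sketch: once the integral images are computed, the sum over any axis-aligned rectangle can be read back in constant time (the file name is a placeholder):

    using Emgu.CV;
    using Emgu.CV.CvEnum;

    using (Mat gray = CvInvoke.Imread("scene.png", ImreadModes.Grayscale))
    using (Mat sum = new Mat())    // (W+1)x(H+1), CV_32S for 8-bit input
    using (Mat sqSum = new Mat())  // squared sums, CV_64F
    using (Mat tilted = new Mat()) // 45-degree rotated integral
    {
        CvInvoke.Integral(gray, sum, sqSum, tilted);

        // The sum over the rectangle [x0,x1) x [y0,y1) is then
        //   sum(y1,x1) - sum(y0,x1) - sum(y1,x0) + sum(y0,x0)
        // which is the constant-time lookup the text above describes.
    }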
Calculates the distance to the closest zero pixel for all non-zero pixels of the source image Source 8-bit single-channel (binary) image. Output image with calculated distances (32-bit floating-point, single-channel). Type of distance Size of the distance transform mask; can be 3 or 5. In case of CV_DIST_L1 or CV_DIST_C the parameter is forced to 3, because a 3x3 mask gives the same result as 5x5 yet it is faster. The optional output 2d array of labels of integer type and the same size as src and dst. Can be null if not needed Type of the label array to build. If labelType==CCOMP then each connected component of zeros in src (as well as all the non-zero pixels closest to the connected component) will be assigned the same label. If labelType==PIXEL then each zero pixel (and all the non-zero pixels closest to it) gets its own label. Fills a connected component with the given color. Input 1- or 3-channel, 8-bit or floating-point image. It is modified by the function unless the CV_FLOODFILL_MASK_ONLY flag is set. The starting point. New value of the repainted domain pixels. Maximal lower brightness/color difference between the currently observed pixel and one of its neighbors belonging to the component or the seed pixel, for the pixel to be added to the component. In case of 8-bit color images it is a packed value. Maximal upper brightness/color difference between the currently observed pixel and one of its neighbors belonging to the component or the seed pixel, for the pixel to be added to the component. In case of 8-bit color images it is a packed value. The operation flags. Lower bits contain the connectivity value, 4 (by default) or 8, used within the function. Connectivity determines which neighbors of a pixel are considered. Upper bits can be 0 or a combination of the following flags: CV_FLOODFILL_FIXED_RANGE - if set, the difference between the current pixel and the seed pixel is considered, otherwise the difference between neighbor pixels is considered (the range is floating). CV_FLOODFILL_MASK_ONLY - if set, the function does not fill the image (new_val is ignored), but fills the mask (which must be non-NULL in this case). Operation mask, should be a single-channel 8-bit image, 2 pixels wider and 2 pixels taller than image. If not IntPtr.Zero, the function uses and updates the mask, so the user takes responsibility for initializing the mask content. Floodfilling can't go across non-zero pixels in the mask; for example, an edge detector output can be used as a mask to stop filling at edges. Or it is possible to use the same mask in multiple calls to the function to make sure the filled areas do not overlap. Note: because the mask is larger than the filled image, the pixel in the mask that corresponds to the (x,y) pixel in the image will have coordinates (x+1,y+1). Output parameter set by the function to the minimum bounding rectangle of the repainted domain. Flood fill connectivity Filters the image using the meanshift algorithm Source image Result image The spatial window radius. The color window radius. Maximum level of the pyramid for the segmentation. Use 1 as default value Termination criteria: when to stop meanshift iterations. Use new MCvTermCriteria(5, 1) as default value Converts image transformation maps from one representation to another. The first input map of type CV_16SC2, CV_32FC1, or CV_32FC2.
The second input map of type CV_16UC1, CV_32FC1, or none (empty matrix), respectively. The first output map that has the type dstmap1type and the same size as src. The second output map. Depth type of the first output map that should be CV_16SC2, CV_32FC1, or CV_32FC2. The number of channels in the dst map. Flag indicating whether the fixed-point maps are used for the nearest-neighbor or for a more complex interpolation. Transforms the image to compensate radial and tangential lens distortion. The input (distorted) image The output (corrected) image The camera matrix (A) [fx 0 cx; 0 fy cy; 0 0 1]. The vector of distortion coefficients, 4x1 or 1x4 [k1, k2, p1, p2]. Camera matrix of the distorted image. By default it is the same as cameraMatrix, but you may additionally scale and shift the result by using some different matrix This function is an extended version of cvInitUndistortMap. That is, in addition to the correction of lens distortion, the function can also apply an arbitrary perspective transformation R and finally it can scale and shift the image according to the new camera matrix The camera matrix A=[fx 0 cx; 0 fy cy; 0 0 1] The vector of distortion coefficients, 4x1, 1x4, 5x1 or 1x5 The rectification transformation in object space (3x3 matrix). R1 or R2, computed by cvStereoRectify can be passed here. If the parameter is IntPtr.Zero, the identity matrix is used The new camera matrix A'=[fx' 0 cx'; 0 fy' cy'; 0 0 1] Depth type of the first output map that can be CV_32FC1 or CV_16SC2. The first output map. The second output map. Undistorted image size. Similar to cvInitUndistortRectifyMap and is opposite to it at the same time. The functions are similar in that they both are used to correct lens distortion and to perform the optional perspective (rectification) transformation. They are opposite because the function cvInitUndistortRectifyMap does actually perform the reverse transformation in order to initialize the maps properly, while this function does the forward transformation. The observed point coordinates The ideal point coordinates, after undistortion and reverse perspective transformation. The camera matrix A=[fx 0 cx; 0 fy cy; 0 0 1] The vector of distortion coefficients, 4x1, 1x4, 5x1 or 1x5. The rectification transformation in object space (3x3 matrix). R1 or R2, computed by cvStereoRectify can be passed here. If the parameter is IntPtr.Zero, the identity matrix is used. The new camera matrix (3x3) or the new projection matrix (3x4). P1 or P2, computed by cvStereoRectify can be passed here. If the parameter is IntPtr.Zero, the identity matrix is used. Computes the 'minimal work' distance between two weighted point configurations. First signature, a size1 x (dims + 1) floating-point matrix. Each row stores the point weight followed by the point coordinates. The matrix is allowed to have a single column (weights only) if the user-defined cost matrix is used. Second signature of the same format as signature1, though the number of rows may be different. The total weights may be different. In this case an extra 'dummy' point is added to either signature1 or signature2 Used metric. CV_DIST_L1, CV_DIST_L2, and CV_DIST_C stand for one of the standard metrics. CV_DIST_USER means that a pre-calculated cost matrix cost is used. User-defined size1 x size2 cost matrix. Also, if a cost matrix is used, the lower boundary lowerBound cannot be calculated because it needs a metric function.
Optional input/output parameter: lower boundary of a distance between the two signatures that is a distance between mass centers. The lower boundary may not be calculated if the user-defined cost matrix is used, the total weights of point configurations are not equal, or if the signatures consist of weights only (the signature matrices have a single column). Resultant size1 x size2 flow matrix The 'minimal work' distance between two weighted point configurations. The function is used to detect translational shifts that occur between two images. The operation takes advantage of the Fourier shift theorem for detecting the translational shift in the frequency domain. It can be used for fast image registration as well as motion estimation. Source floating point array (CV_32FC1 or CV_64FC1) Source floating point array (CV_32FC1 or CV_64FC1) Floating point array with windowing coefficients to reduce edge effects (optional). Signal power within the 5x5 centroid around the peak, between 0 and 1 The translational shifts that occur between two images This function computes Hanning window coefficients in two dimensions. Destination array to place Hann coefficients in The window size specifications Created array type Draws the line segment between pt1 and pt2 points in the image. The line is clipped by the image or ROI rectangle. For non-antialiased lines with integer coordinates the 8-connected or 4-connected Bresenham algorithm is used. Thick lines are drawn with rounding endings. Antialiased lines are drawn using Gaussian filtering. The image First point of the line segment Second point of the line segment Line color Line thickness. Type of the line: 8 (or 0) - 8-connected line. 4 - 4-connected line. CV_AA - antialiased line. Number of fractional bits in the point coordinates Draws an arrow segment pointing from the first point to the second one. Image The point the arrow starts from. The point the arrow points to. Line color. Line thickness. Type of the line. Number of fractional bits in the point coordinates. The length of the arrow tip in relation to the arrow length Draws a single or multiple polygonal curves Image Array of points Indicates whether the polylines must be drawn closed. If !=0, the function draws the line from the last vertex of every contour to the first vertex. Polyline color Thickness of the polyline edges Type of the line segments, see cvLine description Number of fractional bits in the vertex coordinates Draws a single or multiple polygonal curves Image Array of pointers to polylines Indicates whether the polylines must be drawn closed. If !=0, the function draws the line from the last vertex of every contour to the first vertex. Polyline color Thickness of the polyline edges Type of the line segments, see cvLine description Number of fractional bits in the vertex coordinates Draws a rectangle specified by a CvRect structure Image The rectangle to be drawn Line color Thickness of the lines that make up the rectangle. Negative values make the function draw a filled rectangle. Type of the line Number of fractional bits in the point coordinates Computes the connected components labeled image of a boolean image The boolean image The connected components labeled image of the boolean image 4 or 8 way connectivity Specifies the output label image type, an important consideration based on the total number of labels or alternatively the total number of pixels in the source image N, the total number of labels [0, N-1] where 0 represents the background label.
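A minimal sketch of the managed wrapper for the overload just described (the overload that also returns per-label statistics follows below); the input file name is a placeholder:

    using Emgu.CV;
    using Emgu.CV.CvEnum;

    using (Mat binary = CvInvoke.Imread("blobs.png", ImreadModes.Grayscale))
    using (Mat labels = new Mat())
    {
        // Returns N; labels contains values 0..N-1, where 0 is the background.
        int n = CvInvoke.ConnectedComponents(
            binary, labels, LineType.EightConnected, DepthType.Cv32S);
    }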
Computes the connected components labeled image of a boolean image The boolean image The connected components labeled image of the boolean image Statistics output for each label, including the background label, see below for available statistics. Statistics are accessed via stats(label, COLUMN) where COLUMN is one of cv::ConnectedComponentsTypes. The data type is CV_32S Centroid output for each label, including the background label. Centroids are accessed via centroids(label, 0) for x and centroids(label, 1) for y. The data type is CV_64F. 4 or 8 way connectivity Specifies the output label image type, an important consideration based on the total number of labels or alternatively the total number of pixels in the source image N, the total number of labels [0, N-1] where 0 represents the background label. Solves the given (non-integer) linear programming problem using the Simplex Algorithm (Simplex Method). What we mean here by “linear programming problem” (or LP problem, for short) can be formulated as: Maximize c·x subject to: A·x <= b and x >= 0 This row-vector corresponds to c in the LP problem formulation (see above). It should contain 32- or 64-bit floating point numbers. As a convenience, a column-vector may also be submitted, in the latter case it is understood to correspond to c^T. m-by-n+1 matrix, whose rightmost column corresponds to b in the formulation above and the remaining columns to A. It should contain 32- or 64-bit floating point numbers. The solution will be returned here as a column-vector - it corresponds to x in the formulation above. It will contain 64-bit floating point numbers. The return codes Primal-dual algorithm is an algorithm for solving special types of variational problems (that is, finding a function to minimize some functional). As image denoising, in particular, may be seen as a variational problem, the primal-dual algorithm can then be used to perform denoising, and this is exactly what is implemented. This array should contain one or more noised versions of the image that is to be restored. Here the denoised image will be stored. There is no need to do pre-allocation of storage space, as it will be automatically allocated, if necessary. Corresponds to lambda in the formulas above. As it is enlarged, the smooth (blurred) images are treated more favorably than detailed (but maybe more noised) ones. Roughly speaking, as it becomes smaller, the result will be more blurred but more severe outliers will be removed. Number of iterations that the algorithm will run. Of course, the more iterations the better, but it is hard to quantitatively refine this statement, so just use the default and increase it if the results are poor. Reconstructs the selected image area from the pixels near the area boundary. The function may be used to remove dust and scratches from a scanned photo, or to remove undesirable objects from still images or video. The input 8-bit 1-channel or 3-channel image The inpainting mask, 8-bit 1-channel image. Non-zero pixels indicate the area that needs to be inpainted The output image of the same format and the same size as the input The inpainting method The radius of the circular neighborhood of each point inpainted that is considered by the algorithm Performs image denoising using the Non-local Means Denoising algorithm: http://www.ipol.im/pub/algo/bcm_non_local_means_denoising/ with several computational optimizations. Noise is expected to be gaussian white noise. Input 8-bit 1-channel, 2-channel or 3-channel image. Output image with the same size and type as src.
Parameter regulating filter strength. A big h value perfectly removes noise but also removes image details; a smaller h value preserves details but also preserves some noise. Size in pixels of the template patch that is used to compute weights. Should be odd. Size in pixels of the window that is used to compute the weighted average for a given pixel. Should be odd. Affects performance linearly: greater searchWindowsSize - greater denoising time. Performs image denoising using the Non-local Means Denoising algorithm (modified for color images): http://www.ipol.im/pub/algo/bcm_non_local_means_denoising/ with several computational optimizations. Noise is expected to be gaussian white noise. The function converts the image to the CIELAB colorspace and then separately denoises the L and AB components with the given h parameters using the fastNlMeansDenoising function. Input 8-bit 1-channel, 2-channel or 3-channel image. Output image with the same size and type as src. Parameter regulating filter strength. A big h value perfectly removes noise but also removes image details; a smaller h value preserves details but also preserves some noise. The same as h but for color components. For most images a value of 10 will be enough to remove colored noise without distorting colors. Size in pixels of the template patch that is used to compute weights. Should be odd. Size in pixels of the window that is used to compute the weighted average for a given pixel. Should be odd. Affects performance linearly: greater searchWindowsSize - greater denoising time. Filtering is the fundamental operation in image and video processing. Edge-preserving smoothing filters are used in many different applications. Input 8-bit 3-channel image Output 8-bit 3-channel image Edge preserving filters Range between 0 to 200 Range between 0 to 1 This filter enhances the details of a particular image. Input 8-bit 3-channel image Output image with the same size and type as src Range between 0 to 200 Range between 0 to 1 Pencil-like non-photorealistic line drawing Input 8-bit 3-channel image Output 8-bit 1-channel image Output image with the same size and type as src Range between 0 to 200 Range between 0 to 1 Range between 0 to 0.1 Stylization aims to produce digital imagery with a wide variety of effects not focused on photorealism. Edge-aware filters are ideal for stylization, as they can abstract regions of low contrast while preserving, or enhancing, high-contrast features. Input 8-bit 3-channel image. Output image with the same size and type as src. Range between 0 to 200. Range between 0 to 1. Given an original color image, two differently colored versions of this image can be mixed seamlessly. Input 8-bit 3-channel image. Input 8-bit 1 or 3-channel image. Output image with the same size and type as src. R-channel multiply factor. Multiplication factor is between 0.5 to 2.5. G-channel multiply factor. Multiplication factor is between 0.5 to 2.5. B-channel multiply factor. Multiplication factor is between 0.5 to 2.5. Applying an appropriate non-linear transformation to the gradient field inside the selection and then integrating back with a Poisson solver, modifies locally the apparent illumination of an image. Input 8-bit 3-channel image. Input 8-bit 1 or 3-channel image. Output image with the same size and type as src. Value ranges between 0-2. Value ranges between 0-2. By retaining only the gradients at edge locations, before integrating with the Poisson solver, one washes out the texture of the selected region, giving its contents a flat aspect. Here the Canny edge detector is used. Input 8-bit 3-channel image. Input 8-bit 1 or 3-channel image. Output image with the same size and type as src. Range from 0 to 100. Value > 100. The size of the Sobel kernel to be used.
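A hedged usage sketch of the color denoising variant; the h values shown are the commonly cited defaults, not tuned recommendations, and the file name is a placeholder:

    using Emgu.CV;
    using Emgu.CV.CvEnum;

    using (Mat noisy = CvInvoke.Imread("noisy.jpg", ImreadModes.Color))
    using (Mat denoised = new Mat())
    {
        CvInvoke.FastNlMeansDenoisingColored(
            noisy, denoised,
            3,    // h: luminance filter strength
            10,   // hColor: color filter strength
            7,    // templateWindowSize (odd)
            21);  // searchWindowSize (odd)
    }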
Implements the CAMSHIFT object tracking algorithm ([Bradski98]). First, it finds an object center using cvMeanShift and, after that, calculates the object size and orientation. Back projection of the object histogram Initial search window Criteria applied to determine when the window search should be finished Circumscribed box for the object, contains object size and orientation Iterates to find the object center given its back projection and initial position of the search window. The iterations are made until the search window center moves by less than the given value and/or until the function has done the maximum number of iterations. Back projection of the object histogram Initial search window Criteria applied to determine when the window search should be finished. The number of iterations made Updates the motion history image as follows: mhi(x,y) = timestamp if silhouette(x,y) != 0; mhi(x,y) = 0 if silhouette(x,y) = 0 and mhi(x,y) < timestamp - duration; mhi(x,y) is unchanged otherwise. That is, MHI pixels where motion occurs are set to the current timestamp, while the pixels where motion happened far ago are cleared. Silhouette mask that has non-zero pixels where the motion occurs. Motion history image, that is updated by the function (single-channel, 32-bit floating-point) Current time in milliseconds or other units. Maximal duration of the motion track in the same units as timestamp. Calculates the derivatives Dx and Dy of mhi and then calculates the gradient orientation as: orientation(x,y) = arctan(Dy(x,y)/Dx(x,y)) where both Dx(x,y) and Dy(x,y) signs are taken into account (as in the cvCartToPolar function). After that the mask is filled to indicate where the orientation is valid (see the delta1 and delta2 description). Motion history image Mask image; marks pixels where motion gradient data is correct. Output parameter. Motion gradient orientation image; contains angles from 0 to ~360. The function finds the minimum (m(x,y)) and maximum (M(x,y)) mhi values over each pixel (x,y) neighborhood and assumes the gradient is valid only if min(delta1,delta2) <= M(x,y)-m(x,y) <= max(delta1,delta2). Aperture size of the derivative operators used by the function: CV_SCHARR, 1, 3, 5 or 7 (see cvSobel). Finds all the motion segments and marks them in segMask with individual values (1,2,...). It also returns a sequence of CvConnectedComp structures, one for each motion component. After that the motion direction for every component can be calculated with cvCalcGlobalOrientation using the extracted mask of the particular component (using cvCmp) Motion history image Image where the mask found should be stored, single-channel, 32-bit floating-point Current time in milliseconds or other units Segmentation threshold; recommended to be equal to the interval between motion history "steps" or greater Vector containing ROIs of motion connected components. Calculates the general motion direction in the selected region and returns the angle between 0 and 360. At first the function builds the orientation histogram and finds the basic orientation as a coordinate of the histogram maximum.
After that the function calculates the shift relative to the basic orientation as a weighted sum of all orientation vectors: the more recent the motion, the greater the weight. The resultant angle is a circular sum of the basic orientation and the shift. Motion gradient orientation image; calculated by the function cvCalcMotionGradient. Mask image. It may be a conjunction of the valid gradient mask, obtained with cvCalcMotionGradient, and the mask of the region whose direction needs to be calculated. Motion history image. Current time in milliseconds or other units; it is better to store the time passed to cvUpdateMotionHistory before and reuse it here, because running cvUpdateMotionHistory and cvCalcMotionGradient on large images may take some time. Maximal duration of the motion track in milliseconds, the same as in cvUpdateMotionHistory The angle Calculates optical flow for a sparse feature set using the iterative Lucas-Kanade method in pyramids First frame, at time t Second frame, at time t + dt Array of points for which the flow needs to be found Size of the search window of each pyramid level Maximal pyramid level number. If 0, pyramids are not used (single level), if 1, two levels are used, etc Specifies when the iteration process of finding the flow for each point on each pyramid level should be stopped Flags Array of 2D points containing the calculated new positions of the input features in the second image Array. Every element of the array is set to 1 if the flow for the corresponding feature has been found, 0 otherwise Array of double numbers containing the difference between patches around the original and moved points The algorithm calculates the minimum eigenvalue of a 2x2 normal matrix of optical flow equations (this matrix is called a spatial gradient matrix in [Bouguet00]), divided by the number of pixels in a window; if this value is less than minEigThreshold, then the corresponding feature is filtered out and its flow is not processed, so it allows removing bad points and getting a performance boost. Implements a sparse iterative version of the Lucas-Kanade optical flow in pyramids ([Bouguet00]). It calculates the coordinates of the feature points on the current video frame given their coordinates on the previous frame. The function finds the coordinates with sub-pixel accuracy. Both parameters prev_pyr and curr_pyr comply with the following rules: if the image pointer is 0, the function allocates the buffer internally, calculates the pyramid, and releases the buffer after processing. Otherwise, the function calculates the pyramid and stores it in the buffer unless the flag CV_LKFLOW_PYR_A[B]_READY is set. The image should be large enough to fit the Gaussian pyramid data. After the function call both pyramids are calculated and the readiness flag for the corresponding image can be set in the next call (i.e., typically, for all the image pairs except the very first one CV_LKFLOW_PYR_A_READY is set). First frame, at time t. Second frame, at time t + dt. Array of points for which the flow needs to be found. Array of 2D points containing the calculated new positions of input Size of the search window of each pyramid level. Maximal pyramid level number. If 0, pyramids are not used (single level), if 1, two levels are used, etc. Array. Every element of the array is set to 1 if the flow for the corresponding feature has been found, 0 otherwise. Array of double numbers containing the difference between patches around the original and moved points. Optional parameter; can be NULL Specifies when the iteration process of finding the flow for each point on each pyramid level should be stopped. Miscellaneous flags The algorithm calculates the minimum eigenvalue of a 2x2 normal matrix of optical flow equations (this matrix is called a spatial gradient matrix in [Bouguet00]), divided by the number of pixels in a window; if this value is less than minEigThreshold, then the corresponding feature is filtered out and its flow is not processed, so it allows removing bad points and getting a performance boost.
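A sketch of the managed pyramidal Lucas-Kanade wrapper, seeding it with good-features-to-track corners; the detector settings, window size, and termination criteria are illustrative assumptions:

    using System;
    using System.Drawing;
    using Emgu.CV;
    using Emgu.CV.Features2D;
    using Emgu.CV.Structure;

    // prevFrame and currFrame are assumed to be 8-bit grayscale Mats from a video source.
    void TrackFeatures(Mat prevFrame, Mat currFrame)
    {
        using (GFTTDetector detector = new GFTTDetector(400))
        {
            MKeyPoint[] keyPoints = detector.Detect(prevFrame);
            PointF[] prevPts = Array.ConvertAll(keyPoints, kp => kp.Point);

            PointF[] currPts;   // new positions of the features
            byte[] status;      // 1 if the flow for the feature was found
            float[] trackError; // patch difference around original and moved points

            CvInvoke.CalcOpticalFlowPyrLK(
                prevFrame, currFrame, prevPts,
                new Size(21, 21),               // search window per pyramid level
                3,                              // maximal pyramid level
                new MCvTermCriteria(30, 0.01),  // stop after 30 iterations or on accuracy
                out currPts, out status, out trackError);
        }
    }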
Computes dense optical flow using Gunnar Farneback's algorithm The first 8-bit single-channel input image The second input image of the same size and the same type as prevImg The computed flow image for x-velocity; will have the same size as prevImg The computed flow image for y-velocity; will have the same size as prevImg Specifies the image scale (<1) to build the pyramids for each image. pyrScale=0.5 means the classical pyramid, where each next layer is twice smaller than the previous The number of pyramid layers, including the initial image. levels=1 means that no extra layers are created and only the original images are used The averaging window size; larger values increase the algorithm's robustness to image noise and give more chances for fast motion detection, but yield a more blurred motion field The number of iterations the algorithm does at each pyramid level Size of the pixel neighborhood used to find polynomial expansion in each pixel. Larger values mean that the image will be approximated with smoother surfaces, yielding a more robust algorithm and a more blurred motion field. Typically, polyN=5 or 7 Standard deviation of the Gaussian that is used to smooth derivatives that are used as a basis for the polynomial expansion. For polyN=5 you can set polySigma=1.1, for polyN=7 a good value would be polySigma=1.5 The operation flags Computes dense optical flow using Gunnar Farneback's algorithm The first 8-bit single-channel input image The second input image of the same size and the same type as prevImg The computed flow image; will have the same size as prevImg and type CV_32FC2 Specifies the image scale (<1) to build the pyramids for each image. pyrScale=0.5 means the classical pyramid, where each next layer is twice smaller than the previous The number of pyramid layers, including the initial image. levels=1 means that no extra layers are created and only the original images are used The averaging window size; larger values increase the algorithm's robustness to image noise and give more chances for fast motion detection, but yield a more blurred motion field The number of iterations the algorithm does at each pyramid level Size of the pixel neighborhood used to find polynomial expansion in each pixel. Larger values mean that the image will be approximated with smoother surfaces, yielding a more robust algorithm and a more blurred motion field. Typically, polyN=5 or 7 Standard deviation of the Gaussian that is used to smooth derivatives that are used as a basis for the polynomial expansion. For polyN=5 you can set polySigma=1.1, for polyN=7 a good value would be polySigma=1.5 The operation flags Estimates a rigid transformation between 2 point sets. The points from the source image The corresponding points from the destination image Indicates if full affine should be performed If successful, the 2x3 rotation matrix that defines the affine transform; otherwise null is returned.
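A minimal sketch of the point-set overload just described; the coordinates are made up for illustration:

    using System.Drawing;
    using Emgu.CV;

    PointF[] src = { new PointF(0, 0), new PointF(100, 0), new PointF(0, 100) };
    PointF[] dst = { new PointF(10, 5), new PointF(109, 7), new PointF(8, 104) };

    // false: estimate a rigid (rotation + translation + uniform scale) transform,
    // not a full 6-degree-of-freedom affine transform.
    using (Mat affine = CvInvoke.EstimateRigidTransform(src, dst, false))
    {
        if (affine != null && !affine.IsEmpty)
        {
            // affine is the 2x3 matrix described above.
        }
    }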
Estimates a rigid transformation between 2 images or 2 point sets. First image or 2D point set (as a 2 channel Matrix<float>) Second image or 2D point set (as a 2 channel Matrix<float>) Indicates if full affine should be performed The resulting Matrix<double> that represents the affine transformation Release the InputArray Pointer to the input array Release the input / output array Pointer to the input output array Release the input / output array Pointer to the input / output array Creates a video writer structure. Name of the output video file. 4-character code of the codec used to compress the frames. For example, CV_FOURCC('P','I','M','1') is the MPEG-1 codec, CV_FOURCC('M','J','P','G') is the motion-jpeg codec etc. Framerate of the created video stream. Size of the video frames. If != 0, the encoder will expect and encode color frames, otherwise it will work with grayscale frames The video writer Finishes writing to the video file and releases the structure. Pointer to the video file writer structure Writes/appends one frame to the video file. Video writer structure. The written frame True on success, false otherwise Enables or disables the optimized code. True if [use optimized]; otherwise, false. The function can be used to dynamically turn on and off optimized code (code that uses SSE2, AVX, and other instructions on the platforms that support it). It sets a global flag that is further checked by OpenCV functions. Since the flag is not checked in the inner OpenCV loops, it is only safe to call the function on the very top level in your application where you can be sure that no other OpenCV function is currently executed. Check if we have OpenCL Get or set if OpenCL should be used Gets a value indicating whether this device has an OpenCL compatible gpu device. True if it has an OpenCL compatible gpu device; otherwise, false. Define an error callback that can be registered using the cvRedirectError function The numeric code for error status The name of the function where the error occurred A description of the error The source file name where the error is encountered The line number in the source where the error is encountered Arbitrary pointer that is transparently passed to the error handler. Camera calibration functions Estimates intrinsic camera parameters and extrinsic parameters for each of the views The 3D location of the object points. The first index is the index of the image, the second index is the index of the point The 2D image location of the points. The first index is the index of the image, the second index is the index of the point The size of the image, used only to initialize the intrinsic camera matrix The intrinsic parameters, which may contain some initial values. The values will be modified by this function. Calibration type The termination criteria The output array of extrinsic parameters. The final reprojection error Estimates the transformation between the 2 cameras making a stereo pair. If we have a stereo camera, where the relative position and orientation of the 2 cameras is fixed, and if we computed poses of an object relative to the first camera and to the second camera, (R1, T1) and (R2, T2), respectively (that can be done with cvFindExtrinsicCameraParams2), obviously, those poses will relate to each other, i.e. given (R1, T1) it should be possible to compute (R2, T2) - we only need to know the position and orientation of the 2nd camera relative to the 1st camera. That's what the described function does. It computes (R, T) such that: R2=R*R1, T2=R*T1 + T The 3D location of the object points.
The first index is the index of the image, the second index is the index of the point The 2D image location of the points for camera 1. The first index is the index of the image, the second index is the index of the point The 2D image location of the points for camera 2. The first index is the index of the image, the second index is the index of the point The intrinsic parameters for camera 1, which may contain some initial values. The values will be modified by this function. The intrinsic parameters for camera 2, which may contain some initial values. The values will be modified by this function. Size of the image, used only to initialize the intrinsic camera matrix Different flags The extrinsic parameters which contain: R - The rotation matrix between the 1st and the 2nd cameras' coordinate systems; T - The translation vector between the cameras' coordinate systems. The essential matrix Termination criteria for the iterative optimization algorithm The fundamental matrix Estimates extrinsic camera parameters using known intrinsic parameters and extrinsic parameters for each view. The coordinates of the 3D object points and their correspondent 2D projections must be specified. This function also minimizes back-projection error. The array of object points The array of corresponding image points The intrinsic parameters Method for solving a PnP problem The extrinsic parameters Computes projections of 3D points to the image plane given intrinsic and extrinsic camera parameters. Optionally, the function computes jacobians - matrices of partial derivatives of image points as functions of all the input parameters w.r.t. the particular parameters, intrinsic and/or extrinsic. The jacobians are used during the global optimization in cvCalibrateCamera2 and cvFindExtrinsicCameraParams2. The function itself is also used to compute the back-projection error with the current intrinsic and extrinsic parameters. Note that with intrinsic and/or extrinsic parameters set to special values, the function can be used to compute just the extrinsic transformation or just the intrinsic transformation (i.e. distortion of a sparse set of points) The array of object points. Extrinsic parameters Intrinsic parameters Optional matrix supplied in the following order: dpdrot, dpdt, dpdf, dpdc, dpddist The array of image points which is the projection of the object points Calculates the matrix of an affine transform such that: (x'_i,y'_i)^T = map_matrix (x_i,y_i,1)^T where dst(i)=(x'_i,y'_i), src(i)=(x_i,y_i), i=0..2. Coordinates of 3 triangle vertices in the source image. If the array contains more than 3 points, only the first 3 will be used Coordinates of the 3 corresponding triangle vertices in the destination image. If the array contains more than 3 points, only the first 3 will be used The 2x3 rotation matrix that defines the affine transform Estimates a rigid transformation between 2 point sets. The points from the source image The corresponding points from the destination image Indicates if full affine should be performed If successful, the 2x3 rotation matrix that defines the affine transform; otherwise null is returned.
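A short sketch pairing GetAffineTransform with WarpAffine to apply the recovered mapping to an image; the triangle coordinates and file name are placeholders:

    using System.Drawing;
    using Emgu.CV;
    using Emgu.CV.CvEnum;

    PointF[] srcTri = { new PointF(0, 0), new PointF(1, 0), new PointF(0, 1) };
    PointF[] dstTri = { new PointF(2, 3), new PointF(4, 3), new PointF(2, 6) };

    // map is the 2x3 matrix taking each srcTri vertex onto the matching dstTri vertex.
    using (Mat map = CvInvoke.GetAffineTransform(srcTri, dstTri))
    using (Mat src = CvInvoke.Imread("input.jpg", ImreadModes.Color))
    using (Mat warped = new Mat())
    {
        CvInvoke.WarpAffine(src, warped, map, src.Size);
    }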
Extrinsic camera parameters Create the extrinsic camera parameters Create the extrinsic camera parameters using the specific rotation and translation matrix The rotation vector The translation vector Return true if the two extrinsic camera parameters are equal The other extrinsic camera parameters to compare with True if the two extrinsic camera parameters are equal Get or Set the Rodrigues rotation vector Get or Set the translation vector (as a 3 x 1 matrix) Get the 3 x 4 extrinsic matrix: [[r11 r12 r13 t1] [r21 r22 r23 t2] [r31 r32 r33 t3]] Intrinsic camera parameters Create the intrinsic camera parameters Create the intrinsic camera parameters The number of distortion coefficients. Should be either 4, 5 or 8. Pre-computes the undistortion map - coordinates of the corresponding pixel in the distorted image for every pixel in the corrected image. Then, the map (together with input and output images) can be passed to the cvRemap function. The width of the image The height of the image The output array of x-coordinates of the map The output array of y-coordinates of the map Computes various useful camera (sensor/lens) characteristics using the computed camera calibration matrix, image frame resolution in pixels and the physical aperture size Image width in pixels Image height in pixels Aperture width in real-world units (optional input parameter). Set it to 0 if not used Aperture height in real-world units (optional input parameter). Set it to 0 if not used Field of view angle in x direction in degrees Field of view angle in y direction in degrees Focal length in real-world units The principal point in real-world units The pixel aspect ratio ~ fy/fx Similar to cvInitUndistortRectifyMap and is opposite to it at the same time. The functions are similar in that they both are used to correct lens distortion and to perform the optional perspective (rectification) transformation. They are opposite because the function cvInitUndistortRectifyMap does actually perform the reverse transformation in order to initialize the maps properly, while this function does the forward transformation. The observed point coordinates Optional rectification transformation in object space (3x3 matrix). R1 or R2, computed by cvStereoRectify can be passed here. If null, the identity matrix is used. Optional new camera matrix (3x3) or the new projection matrix (3x4). P1 or P2, computed by cvStereoRectify can be passed here. If null, the identity matrix is used. Transforms the image to compensate radial and tangential lens distortion. The camera matrix and distortion parameters can be determined using cvCalibrateCamera2. For every pixel in the output image the function computes the coordinates of the corresponding location in the input image using the formulae in the section beginning. Then, the pixel value is computed using bilinear interpolation. If the resolution of images is different from what was used at the calibration stage, fx, fy, cx and cy need to be adjusted appropriately, while the distortion coefficients remain the same The color type of the image The depth of the image The distorted image The corrected image Return true if the two intrinsic camera parameters are equal The other intrinsic camera parameters to compare with True if the two intrinsic camera parameters are equal Get or Set the DistortionCoeffs (as a 5x1 (default), 4x1 or 8x1 matrix). The ordering of the distortion coefficients is the following: (k1, k2, p1, p2[, k3 [, k4, k5, k6]]).
That is, the first 2 radial distortion coefficients are followed by 2 tangential distortion coefficients and then, optionally, by the third radial distortion coefficient. Such ordering is used to keep backward compatibility with previous versions of OpenCV Get or Set the intrinsic matrix (3x3) A unit quaternion that defines rotation in 3D Create a quaternion with the specific values The W component of the quaternion: the value for cos(rotation angle / 2) The X component of the vector: rotation axis * sin(rotation angle / 2) The Y component of the vector: rotation axis * sin(rotation angle / 2) The Z component of the vector: rotation axis * sin(rotation angle / 2) Set the value of the quaternion using Euler angles Rotation around x-axis (roll) in radians Rotation around y-axis (pitch) in radians Rotation around z-axis (yaw) in radians Get the equivalent Euler angles Rotation around x-axis (roll) in radians Rotation around y-axis (pitch) in radians Rotation around z-axis (yaw) in radians Fill the (3x3) rotation matrix with the value such that it represents the quaternion The (3x3) rotation matrix whose values will be set to represent this quaternion Rotate the points in and save the result in . In-place operation is supported ( == ). The points to be rotated The result of the rotation, should be the same size as , can be as well for inplace rotation Rotate the specific point and return the result The point to be rotated The rotated point Multiply the current quaternion with The other rotation A composition of the two rotations Perform quaternion linear interpolation The other quaternion to interpolate with If 0.0, the result is the same as this quaternion. If 1.0 the result is the same as The linear interpolated quaternion Computes the multiplication of two quaternions The quaternion to be multiplied The quaternion to be multiplied The multiplication of two quaternions Get the quaternion that represents a rotation of 0 degrees. Compute the conjugate of the quaternion Check if this quaternion equals The quaternion to be compared True if the two quaternions are equal, false otherwise Get the string representation of the quaternion The string representation The W component of the quaternion: the value for cos(rotation angle / 2) The X component of the vector: rotation axis * sin(rotation angle / 2) The Y component of the vector: rotation axis * sin(rotation angle / 2) The Z component of the vector: rotation axis * sin(rotation angle / 2) Get or set the equivalent axis angle representation. (x,y,z) is the rotation axis and |(x,y,z)| is the rotation angle in radians Get the rotation axis of the quaternion Get the rotation angle in radians A (2x3) 2D rotation matrix. This Matrix defines an Affine Transform The equivalent of cv::Mat Matrix data allocator. Base class for Mat that handles the matrix data allocation and deallocation Release resource associated with this object Get the managed data used by the Mat IImage interface This type is very similar to InputArray except that it is used for input/output function parameters. This is the proxy class for passing read-only input arrays into OpenCV functions. The unmanaged pointer to the input array. This type is very similar to InputArray except that it is used for output function parameters.
The unmanaged pointer to the output array InputArrayOfArrays The unmanaged pointer to the input/output array Returns the min / max location and values for the image Returns the min / max location and values for the image Split the current IImage into an array of gray scale images where each element in the array represents a single color channel of the original image An array of gray scale images where each element in the array represents a single color channel of the original image Save the image to the specific The file name of the image Convert this image into Bitmap, when available, data is shared with this image. The Bitmap, when available, data is shared with this image The size of this image Get the pointer to the unmanaged memory Get the number of channels for this image Constructor used to deserialize a runtime serialized object The serialization info The streaming context A function used for runtime deserialization of the object Serialization info Streaming context A function used for runtime serialization of the object Serialization info Streaming context Copy data from this Mat to the managed array The type of managed data array The managed array where data will be copied to. Copy data from managed array to this Mat The type of managed data array The managed array where data will be copied from Create an empty cv::Mat Create a mat of the specific type. Number of rows in a 2D array. Number of columns in a 2D array. Mat element type Number of channels Create a mat of the specific type. Size of the Mat Mat element type Number of channels Create a Mat header from existing data Number of rows in a 2D array. Number of columns in a 2D array. Mat element type Number of channels Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Number of bytes each matrix row occupies. The value should include the padding bytes at the end of each row, if any. Create a multi-dimension mat using existing data. The sizes of each dimension The type of data The pointer to the unmanaged data The steps Create a Mat header from existing data Size of the Mat Mat element type Number of channels Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Number of bytes each matrix row occupies. The value should include the padding bytes at the end of each row, if any. Load the Mat from file The name of the file File loading method Create a mat header for the specific ROI The mat where the new Mat header will share data from The region of interest Convert this Mat to UMat Access type The UMat Allocates new array data if needed. New number of rows. New number of columns. New matrix element depth type. New matrix number of channels Gets the binary data from the specific indices. The indices. Indices of length more than 2 are not implemented Copy the data in this cv::Mat to an output array The output array to copy to Operation mask.
Its non-zero elements indicate which matrix elements need to be copied. Converts an array to another data type with optional scaling. Output matrix; if it does not have a proper size or type before the operation, it is reallocated. Desired output matrix type or, rather, the depth since the number of channels are the same as the input has; if rtype is negative, the output matrix will have the same type as the input. Optional scale factor. Optional delta added to the scaled values. Changes the shape and/or the number of channels of a 2D matrix without copying the data. New number of channels. If the parameter is 0, the number of channels remains the same. New number of rows. If the parameter is 0, the number of rows remains the same. A new mat header that has different shape Release all the unmanaged memory associated with this object. Pointer to the InputArray Pointer to the OutputArray Pointer to the InputOutputArray Get the minimum and maximum value across all channels of the mat The range that contains the minimum and maximum values Convert this Mat to Image The type of Color The type of Depth The image Set the mat to the specific value The value to set to Optional mask Set the mat to the specific value The value to set to Optional mask Returns the min / max location and values for the image The maximum locations for each channel The maximum values for each channel The minimum locations for each channel The minimum values for each channel Create a Mat object with data pointed towards the specific row of the original matrix The row number A Mat object with data pointed towards the specific row of the original matrix Save this image to the specific file. The name of the file to be saved to The image format is chosen depending on the filename extension, see cvLoadImage. Only 8-bit single-channel or 3-channel (with 'BGR' channel order) images can be saved using this function. If the format, depth or channel order is different, use cvCvtScale and cvCvtColor to convert it before saving, or use universal cvSave to save the image to XML or YAML format. Make a clone of the current Mat A clone fo the current Mat Split current Image into an array of gray scale images where each element in the array represent a single color channel of the original image An array of gray scale images where each element in the array represent a single color channel of the original image Compares two Mats and check if they are equal The other mat to compare with True if the two Mats are equal Computes a dot-product of two vectors. Another dot-product operand The dot-product of two vectors. Computes a cross-product of two 3-element vectors. Another cross-product operand. Cross-product of two 3-element vectors. The method removes one or more rows from the bottom of the matrix Adds elements to the bottom of the matrix Gets or sets the data as byte array. The bytes. The size of this matrix The number of rows The number of columns Pointer to the beginning of the raw data Step The size of the elements in this matrix Get the width of the mat Get the height of the mat. The Get property provide a more efficient way to convert Image<Gray, Byte>, Image<Bgr, Byte> and Image<Bgra, Byte> into Bitmap such that the image data is shared with Bitmap. If you change the pixel value on the Bitmap, you change the pixel values on the Image object as well! 
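The Mat members above combine as in the following minimal sketch, assuming the Emgu CV 3.x API (the matrix size and pixel values are arbitrary):

    using System;
    using Emgu.CV;
    using Emgu.CV.CvEnum;
    using Emgu.CV.Structure;

    // Create a 480x640, 8-bit, 3-channel Mat and set every pixel to blue (BGR order).
    using (Mat m = new Mat(480, 640, DepthType.Cv8U, 3))
    {
        m.SetTo(new MCvScalar(255, 0, 0));

        // ConvertTo reallocates the output as 32-bit float, scaling values into [0, 1].
        using (Mat f = new Mat())
            m.ConvertTo(f, DepthType.Cv32F, 1.0 / 255);

        // Reshape returns a header over the same data: one channel, three times the columns.
        using (Mat flat = m.Reshape(1, 0))
            Console.WriteLine(flat.Size); // {Width=1920, Height=480}
    }

Because Reshape shares data with the original Mat, writing to flat also writes to m.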
Get an array of the size of the dimensions. e.g. if the mat is 9x10x11, the array of {9, 10, 11} will be returned. True if the data is continuous True if the matrix is a submatrix of another matrix Depth type True if the Mat is empty Number of channels The method returns the number of array elements (a number of pixels if the array represents an image) The matrix dimensionality Create an empty (2x3) 2D rotation matrix Create a (2x3) 2D rotation matrix Center of the rotation in the source image The rotation angle in degrees. Positive values mean counter-clockwise rotation (the coordinate origin is assumed at the top-left corner). Isotropic scale factor. Set the values of the rotation matrix Center of the rotation in the source image The rotation angle in degrees. Positive values mean counter-clockwise rotation (the coordinate origin is assumed at the top-left corner). Isotropic scale factor. Rotate the points; the value of the input will be changed. The points to be rotated; their values will be modified Rotate the points; the value of the input will be changed. The points to be rotated; their values will be modified Rotate the line segments; the value of the input will be changed. The line segments to be rotated Rotate the single channel Nx2 matrix where N is the number of 2D points. The value of the matrix is changed after rotation. The depth of the points, must be double or float The N 2D-points to be rotated Return a clone of the Matrix A clone of the Matrix Create a rotation matrix for rotating an image The rotation angle in degrees. Positive values mean counter-clockwise rotation (the coordinate origin is assumed at the image centre). The rotation center The source image size The minimum size of the destination image The rotation matrix that rotates the source image to the destination image. A (3x1) Rodrigues rotation vector. Rotation vector is a compact representation of a rotation matrix. The direction of the rotation vector is the rotation axis and the length of the vector is the rotation angle around the axis.
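A hedged sketch of the 2D rotation described above, using the plain CvInvoke calls that back the rotation-matrix wrapper ("lena.jpg" is a placeholder file name; the enum is named ImreadModes in later Emgu CV 3.x releases and LoadImageType in earlier ones):

    using System.Drawing;
    using Emgu.CV;
    using Emgu.CV.CvEnum;

    using (Mat src = new Mat("lena.jpg", ImreadModes.Color))
    using (Mat rotationMatrix = new Mat())
    using (Mat dst = new Mat())
    {
        PointF center = new PointF(src.Width / 2f, src.Height / 2f);
        // 30 degrees counter-clockwise around the image centre, isotropic scale 1.0
        CvInvoke.GetRotationMatrix2D(center, 30, 1.0, rotationMatrix);
        CvInvoke.WarpAffine(src, dst, rotationMatrix, src.Size);
    }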
A Matrix is a wrapper to cvMat of OpenCV. Depth of this matrix (either Byte, SByte, Single, double, UInt16, Int16 or Int32) Wrapped CvArr The type of elements in this CvArray The size of the elements in the CvArray, it is the cached value of Marshal.SizeOf(typeof(TDepth)). The pinned GCHandle to _array. Allocate data for the array The number of rows The number of columns The number of channels of this cvArray Calculates and returns the Euclidean dot product of two arrays. src1 dot src2 = sumI(src1(I)*src2(I)) In case of multiple channel arrays the results for all channels are accumulated. In particular, cvDotProduct(a,a), where a is a complex vector, will return ||a||^2. The function can process multi-dimensional arrays, row by row, layer by layer and so on. The other Array to apply dot product with src1 dot src2 Check that every array element is neither NaN nor +- inf. The functions also check that each value is between the inclusive lower boundary and the exclusive upper boundary. In the case of multi-channel arrays each channel is processed independently. If some values are out of range, the position of the first outlier is stored in pos, and then the functions return false. The inclusive lower boundary of valid values range The exclusive upper boundary of valid values range This will be filled with the position of the first outlier True if all values are in range Reduces matrix to a vector by treating the matrix rows/columns as a set of 1D vectors and performing the specified operation on the vectors until a single row/column is obtained. The function can be used to compute horizontal and vertical projections of a raster image. In case of CV_REDUCE_SUM and CV_REDUCE_AVG the output may have a larger element bit-depth to preserve accuracy. Multi-channel arrays are also supported in these two reduction modes The destination single-row/single-column vector that accumulates all the matrix rows/columns The dimension index along which the matrix is reduced. The reduction operation type The type of depth of the reduced array
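A small sketch of the reduce operation just described, collapsing each column into its sum (the ReduceDimension/ReduceType enum names are assumed from the Emgu CV 3.x CvEnum namespace):

    using Emgu.CV;
    using Emgu.CV.CvEnum;

    using (Matrix<float> m = new Matrix<float>(3, 4))
    using (Matrix<float> columnSums = new Matrix<float>(1, 4))
    {
        m.SetValue(2.0); // fill every element with 2
        // Treat each column as a 1D vector and sum it into a single row.
        CvInvoke.Reduce(m, columnSums, ReduceDimension.SingleRow, ReduceType.ReduceSum, DepthType.Cv32F);
        // every element of columnSums is now 6 (three rows of 2 summed per column)
    }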
Copy the current array to the destination array The destination Array Set the elements of the Array to the specific value, using the specific mask The value to be set The mask for the operation Set the elements of the Array to the specific value, using the specific mask The value to be set The mask for the operation Inplace fills Array with uniformly distributed random numbers the inclusive lower boundary of random numbers range the exclusive upper boundary of random numbers range Inplace fills Array with normally distributed random numbers the mean value of random numbers the standard deviation of random numbers Initializes scaled identity matrix The value on the diagonal Set the values to zero Initialize the identity matrix Inplace multiply elements of the Array by the specific scale The scale to be multiplied Inplace elementwise multiply the current Array with another array The other array to be elementwise multiplied with Free the _dataHandle if it is set Inplace compute the elementwise minimum value The value to compare with Inplace elementwise minimize the current Array with another array The other array to be elementwise minimized with this array Inplace compute the elementwise maximum value with the specific value The value to be compared with Inplace elementwise maximize the current Array with another array The other array to be elementwise maximized with this array Inplace AND operation with another array The other array to perform the AND operation with Inplace OR operation with another array The other array to perform the OR operation with Inplace compute the complement for all array elements Save the CvArray as an image The name of the image to save Get the xml schema the xml schema Function to call when deserializing this object from XML The xml reader Function to call when serializing this object to XML The xml writer A function used for runtime serialization of the object Serialization info Streaming context A function used for runtime deserialization of the object Serialization info Streaming context The Mat header that represents this CvArr The unmanaged pointer to the input array. The unmanaged pointer to the output array. The unmanaged pointer to the input output array. Get the umat representation of this mat The UMat Get or set the Compression Ratio for serialization. A number between 0 - 9. 0 means no compression at all, while 9 means best compression Get the size of element in bytes The pointer to the internal structure Get the size of the array Get the width (#Cols) of the cvArray. If ROI is set, the width of the ROI Get the height (#Rows) of the cvArray. If ROI is set, the height of the ROI Get the number of channels of the array The number of rows for this array The number of cols for this array Get or Set an Array of bytes that represents the data in this array Should only be used for serialization & deserialization Get the underneath managed array Sum of diagonal elements of the matrix The norm of this Array Get the Mat header that represents this CvArr The default constructor which allows Data to be set later on Create a Matrix (only the header is allocated) using the Pinned/Unmanaged data. The data is not freed by the Dispose function of this class The number of rows The number of cols The Pinned/Unmanaged data, the data must not be released before the Matrix is Disposed The step (row stride in bytes) The caller is responsible for allocating and freeing the block of memory specified by the data parameter, however, the memory should not be released until the related Matrix is released. Create a Matrix (only the header is allocated) using the Pinned/Unmanaged data. The data is not freed by the Dispose function of this class The number of rows The number of cols The number of channels The Pinned/Unmanaged data, the data must not be released before the Matrix is Disposed The step (row stride in bytes) The caller is responsible for allocating and freeing the block of memory specified by the data parameter, however, the memory should not be released until the related Matrix is released. Create a Matrix (only the header is allocated) using the Pinned/Unmanaged data. The data is not freed by the Dispose function of this class The number of rows The number of cols The Pinned/Unmanaged data, the data must not be released before the Matrix is Disposed The caller is responsible for allocating and freeing the block of memory specified by the data parameter, however, the memory should not be released until the related Matrix is released. Create a matrix of the specific size The number of rows (height) The number of cols (width) Create a matrix of the specific size The size of the matrix Create a matrix of the specific size and channels The number of rows The number of cols The number of channels Create a matrix using the specific data The data for this matrix Return a matrix of the same size with all elements equal 0 A matrix of the same size with all elements equal 0 Make a copy of this matrix A copy of this matrix Get a reshaped matrix which also shares the same data with the current matrix the new number of channels The new number of rows A reshaped matrix which also shares the same data with the current matrix Convert this matrix to a different depth The depth type to convert to Matrix of a different depth Returns the transpose of this matrix The transpose of this matrix Allocate data for the array The number of rows The number of columns The number of channels for this matrix Get a submatrix corresponding to a specified rectangle the rectangle area of the sub-matrix A submatrix corresponding to a specified rectangle Get the specific row of the matrix the index of the row to be retrieved the specific row of the matrix Return the matrix corresponding to a specified row span of the input array Zero-based index of the starting row (inclusive) of the span Zero-based index of the ending row (exclusive) of the span Index step in the row span. That is, the function extracts every delta_row-th row from start_row and up to (but not including) end_row A matrix corresponding to a specified row span of the input array Get the specific column of the matrix the index of the column to be retrieved the specific column of the matrix Get the Matrix corresponding to a specified column span of the input array Zero-based index of the starting column (inclusive) of the span Zero-based index of the ending column (exclusive) of the span the specific column span of the matrix Return the specific diagonal elements of this matrix Array diagonal. Zero corresponds to the main diagonal, -1 corresponds to the diagonal above the main etc., 1 corresponds to the diagonal below the main etc The specific diagonal elements of this matrix Return the main diagonal element of this matrix The main diagonal element of this matrix Return the matrix without a specified row span of the input array Zero-based index of the starting row (inclusive) of the span Zero-based index of the ending row (exclusive) of the span The matrix without a specified row span of the input array Return the matrix without a specified column span of the input array Zero-based index of the starting column (inclusive) of the span Zero-based index of the ending column (exclusive) of the span The matrix without a specified column span of the input array Concatenate the current matrix with another matrix vertically. If this matrix is n1 x m and the other matrix is n2 x m, the resulting matrix is (n1+n2) x m. The other matrix to concatenate A new matrix that is the vertical concatenation of this matrix and the other matrix Concatenate the current matrix with another matrix horizontally. If this matrix is n x m1 and the other matrix is n x m2, the resulting matrix is n x (m1 + m2). The other matrix to concatenate A matrix that is the horizontal concatenation of this matrix and the other matrix Returns the min / max locations and values for the matrix Elementwise add another matrix to the current matrix The matrix to be added to the current matrix The result of elementwise adding mat2 to the current matrix Elementwise add a color to the current matrix The value to be added to the current matrix The result of elementwise adding the value to the current matrix Elementwise subtract another matrix from the current matrix The matrix to be subtracted from the current matrix The result of elementwise subtracting mat2 from the current matrix Elementwise subtract a color from the current matrix The value to be subtracted from the current matrix The result of elementwise subtracting the value from the current matrix result = val - this The value from which this matrix is subtracted val - this Multiply the current matrix with a scale The scale to be multiplied The scaled matrix Multiply the current matrix with another matrix The matrix to be multiplied Result matrix of the multiplication Elementwise add two matrices The Matrix to be added The Matrix to be added The elementwise sum of the two matrices Elementwise add a matrix and a value The Matrix to be added The value to be added The matrix plus the value + The Matrix to be added The value to be added The matrix plus the value - The Matrix to be subtracted The value to be subtracted - - The Matrix to be subtracted The matrix to subtract - - The Matrix to be subtracted The value to be subtracted - * The Matrix to be multiplied The value to be multiplied * * The matrix to be multiplied The value to be multiplied * / The Matrix to be divided The value to be divided / * The Matrix to be multiplied The Matrix to be multiplied * Constructor used to deserialize runtime serialized object The serialization info The streaming context Release the matrix and all the memory associated with it
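The Matrix<TDepth> members above compose as in this sketch (Emgu CV 3.x assumed):

    using Emgu.CV;

    using (Matrix<double> a = new Matrix<double>(3, 3))
    {
        a.SetIdentity();  // identity matrix
        a[0, 2] = 5.0;    // set the element at row 0, column 2

        using (Matrix<double> t = a.Transpose())
        using (Matrix<double> product = a.Mul(t))  // matrix multiplication
        using (Matrix<double> sum = a.Add(t))      // elementwise addition
        {
            double trace = product.Trace.V0;       // sum of the diagonal elements
        }
    }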
This function compares the current matrix with another matrix and returns the comparison mask The other matrix to compare with Comparison type The comparison mask Get all channels for the multi channel matrix Each individual channel of this matrix Return true if every element of this matrix equals the corresponding element in the other matrix The other matrix to compare with true if every element of this matrix equals the corresponding element in the other matrix Get the underneath managed array Get or Set the data for this matrix Get the number of channels for this matrix The MCvMat structure format Returns the determinant of the square matrix Return the sum of the elements in this matrix Get or Set the value at the specific row and column the row of the element the col of the element The element at the specific row and column Get the size of the array Constructor used to deserialize 3D rotation vector The serialization info The streaming context Create a 3D rotation vector (3x1 Matrix). Create a rotation vector using the specific values The values of the (3 x 1) Rodrigues rotation vector Get or Set the (3x3) rotation matrix represented by this rotation vector. Capture images from either camera or video file. The interface to request a duplex image capture Request a frame from server Request a frame from server which is half width and half height The interface that is used for WCF to provide an image capture service Capture a Bgr image frame A Bgr image frame Capture a Bgr image frame that is half width and half height A Bgr image frame that is half width and half height the type of flipping Create a capture using the specific camera The capture type Create a capture using the default camera Create a capture using the specific camera The index of the camera to create capture from, starting from 0 Create a capture from file or a video stream The name of a file, or a URL pointing to a stream. Release the resource for this capture Obtain the capture property The index for the property The value of the specific property Sets the specified property of video capturing Property identifier Value of the property True if success Grab a frame True on success Start the grab process in a separate thread. Once started, use the ImageGrabbed event handler and RetrieveGrayFrame/RetrieveBgrFrame to obtain the images. Pause the grab process if it is running. Stop the grabbing thread Retrieve a Gray image frame after Grab() The output image The channel to retrieve image True if the frame can be retrieved Capture a Bgr image frame A Bgr image frame. If no more frames are available, null will be returned. Capture a Bgr image frame that is half width and half height. Mainly used by WCF when sending images to remote locations in a bandwidth-conservative scenario Internally, this is a cvQueryFrame operation followed by a cvPyrDown A Bgr image frame that is half width and half height Query a frame duplexly over WCF Query a small frame duplexly over WCF Get the type of the capture module Get and set the flip type Get or Set if the captured image should be flipped horizontally Get or Set if the captured image should be flipped vertically The width of this capture The height of this capture The event to be called when an image is grabbed The type of capture source Capture from camera Capture from file using HighGUI The interface for DuplexCaptureCallback Function to call when an image is received The image received
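A minimal synchronous-capture sketch based on the members above. The class is named Capture in the Emgu CV 3.x line this documentation describes (later releases rename it VideoCapture); "video.avi" is a placeholder path:

    using Emgu.CV;

    using (Capture capture = new Capture("video.avi"))
    {
        Mat frame;
        while ((frame = capture.QueryFrame()) != null)
        {
            using (frame)
            {
                // process the Bgr frame here
            }
        }
    }

For the asynchronous pattern, call Start(), handle the ImageGrabbed event, and call Stop() when done.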
Kinect Camera capture Create the Kinect Camera capture object The kinect device type The output mode Retrieve Gray frame from Kinect A Gray frame from Kinect Retrieve Bgr frame from Kinect A Bgr frame from Kinect Retrieve disparity map (in pixels) from Kinect The disparity map from Kinect Retrieve disparity map (in pixels) from Kinect The disparity map from Kinect Retrieve the valid depth map from Kinect The valid depth map from Kinect Retrieve the depth map from Kinect (in mm) The depth map from Kinect (in mm) Retrieve all the points (x, y, z position in meters) from Kinect, row by row. All the points (x, y, z position in meters) from Kinect, row by row. Get an enumerator of the colored points from Kinect. This function can only be called after the Grab() function. The mask that controls which points should be returned. You can use the result from the RetrieveValidDepthMap() function. Use null if you want all points to be returned An enumerator of the colored points from Kinect Given the minimum distance in mm, return the maximum valid disparity value. The minimum distance that an object is away from the camera The maximum valid disparity Camera output mode VGA resolution SXVGA resolution SXVGA resolution QVGA resolution QVGA resolution OpenNI data type used by the retrieve functions Depth values in mm (CV_16UC1) XYZ in meters (CV_32FC3) Disparity in pixels (CV_8UC1) Disparity in pixels (CV_32FC1) CV_8UC1 Bgr image Gray Image Kinect device type Kinect Asus Xtion The Cascade Classifier A dummy constructor mainly aimed at those who would like to inherit this class Create a CascadeClassifier from the specific file The name of the file that contains the CascadeClassifier Load the cascade classifier from a file node The file node. The file may contain a new cascade classifier only. True if the classifier can be imported. Finds rectangular regions in the given image that are likely to contain objects the cascade has been trained for and returns those regions as a sequence of rectangles. The function scans the image several times at different scales. Each time it considers overlapping regions in the image. It may also apply some heuristics to reduce the number of analyzed regions, such as Canny pruning. After it has proceeded and collected the candidate rectangles (regions that passed the classifier cascade), it groups them and returns a sequence of average rectangles for each large enough group. The image from which the objects are to be detected The factor by which the search window is scaled between the subsequent scans, for example, 1.1 means increasing the window by 10% Minimum number (minus 1) of neighbor rectangles that make up an object. All the groups with fewer rectangles than min_neighbors-1 are rejected. If min_neighbors is 0, the function does not do any grouping at all and returns all the detected candidate rectangles, which may be useful if the user wants to apply a customized grouping procedure. Use 3 for default. Minimum window size. Use Size.Empty for default, where it is set to the size of samples the classifier has been trained on (~20x20 for face detection) Maximum window size. Use Size.Empty for default, where the parameter will be ignored. The objects detected, one array per channel Release the CascadeClassifier Object and all the memory associated with it Get if the cascade is in the old format Get the original window size
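The detection call described above might be used as follows (a sketch; the cascade file and image names are placeholders):

    using System.Drawing;
    using Emgu.CV;
    using Emgu.CV.Structure;

    using (CascadeClassifier faceDetector = new CascadeClassifier("haarcascade_frontalface_default.xml"))
    using (Image<Gray, byte> gray = new Image<Gray, byte>("people.jpg"))
    {
        Rectangle[] faces = faceDetector.DetectMultiScale(
            gray,
            1.1,         // scale factor: grow the search window by 10% per scan
            3,           // minimum neighbors
            Size.Empty,  // minimum window size: default to the training sample size
            Size.Empty); // maximum window size: ignored

        foreach (Rectangle face in faces)
            CvInvoke.Rectangle(gray, face, new MCvScalar(255), 2);
    }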
Defines a Bgr (Blue Green Red) color A color type The equivalent MCvScalar value Get the dimension of the color type The MCvScalar representation of the color intensity Create a BGR color using the specific values The blue value for this color The green value for this color The red value for this color Create a Bgr color using the System.Drawing.Color System.Drawing.Color Return true if the two colors are equal The other color to compare with true if the two colors are equal Represent this color as a String The string representation of this color Get or set the intensity of the blue color channel Get or set the intensity of the green color channel Get or set the intensity of the red color channel Get the dimension of this color Get or Set the equivalent MCvScalar value Defines a Bgra (Blue Green Red Alpha) color The MCvScalar representation of the color intensity Create a BGRA color using the specific values The blue value for this color The green value for this color The red value for this color The alpha value for this color Return true if the two colors are equal The other color to compare with true if the two colors are equal Represent this color as a String The string representation of this color Get or set the intensity of the blue color channel Get or set the intensity of the green color channel Get or set the intensity of the red color channel Get or set the intensity of the alpha color channel Get the dimension of this color Get or Set the equivalent MCvScalar value Attribute used to specify color information The code which is used for color conversion The code which is used for color conversion The code which is used for color conversion Defines a Gray color The MCvScalar representation of the color intensity Create a Gray color with the given intensity The intensity for this color Returns the hash code for this color the hash code Return true if the two colors are equal The other color to compare with true if the two colors are equal Represent this color as a String The string representation of this color The intensity of the gray color The intensity of the gray color Get the dimension of this color Get or Set the equivalent MCvScalar value Defines a Hls (Hue Lightness Saturation) color The MCvScalar representation of the color intensity Create a Hls color using the specific values The hue value for this color ( 0 < hue < 180 ) The saturation for this color The lightness for this color Return true if the two colors are equal The other color to compare with true if the two colors are equal Represent this color as a String The string representation of this color Get or set the intensity of the hue color channel ( 0 < hue < 180 ) Get or set the intensity of the lightness color channel Get or set the intensity of the saturation color channel Get the dimension of this color Get or Set the equivalent MCvScalar value Defines a HSV (Hue Saturation Value) color The MCvScalar representation of the color
intensity Create a HSV color using the specific values The hue value for this color ( 0 < hue < 180 ) The saturation value for this color The value for this color Return true if the two colors are equal The other color to compare with true if the two colors are equal Represent this color as a String The string representation of this color Get or set the intensity of the hue color channel ( 0 < hue < 180 ) Get or set the intensity of the saturation color channel Get or set the intensity of the value color channel Get the dimension of this color Get or Set the equivalent MCvScalar value Defines a CIE Lab color The MCvScalar representation of the color intensity Create a CIE Lab color using the specific values The z value for this color The y value for this color The x value for this color Return true if the two colors are equal The other color to compare with true if the two colors are equal Represent this color as a String The string representation of this color Get or set the intensity of the x color channel Get or set the intensity of the y color channel Get or set the intensity of the z color channel Get the dimension of this color Get or Set the equivalent MCvScalar value Defines a CIE Luv color The MCvScalar representation of the color intensity Create a CIE Luv color using the specific values The z value for this color The y value for this color The x value for this color Return true if the two colors are equal The other color to compare with true if the two colors are equal Represent this color as a String The string representation of this color The intensity of the x color channel The intensity of the y color channel The intensity of the z color channel Get the dimension of this color Get or Set the equivalent MCvScalar value Defines a Rgb (Red Green Blue) color The MCvScalar representation of the color intensity Create a RGB color using the specific values The blue value for this color The green value for this color The red value for this color Create a Rgb color using the System.Drawing.Color System.Drawing.Color Return true if the two colors are equal The other color to compare with true if the two colors are equal Represent this color as a String The string representation of this color Get or set the intensity of the red color channel Get or set the intensity of the green color channel Get or set the intensity of the blue color channel Get the dimension of this color Get or Set the equivalent MCvScalar value Defines a Bgr565 (Blue Green Red) color The MCvScalar representation of the color intensity Create a Bgr565 color using the specific values The blue value for this color The green value for this color The red value for this color Create a Bgr565 color using the System.Drawing.Color System.Drawing.Color Return true if the two colors are equal The other color to compare with true if the two colors are equal Represent this color as a String The string representation of this color Get or set the intensity of the red color channel Get or set the intensity of the green color channel Get or set the intensity of the blue color channel Get the dimension of this color Get or Set the equivalent MCvScalar value Defines a Rgba (Red Green Blue Alpha) color The MCvScalar representation of the color intensity Create a RGBA color using the specific values The blue value for this color The green value for this color The red value for this color The alpha value for this color Return true if the two colors are equal The other color to compare with true if the two colors are equal Represent this color as a String The string representation of
this color Get or set the intensity of the red color channel Get or set the intensity of the green color channel Get or set the intensity of the blue color channel Get or set the intensity of the alpha color channel Get the dimension of this color Get or Set the equivalent MCvScalar value Defines a Xyz color (CIE XYZ.Rec 709 with D65 white point) The MCvScalar representation of the color intensity Create a Xyz color using the specific values The z value for this color The y value for this color The x value for this color Return true if the two colors are equal The other color to compare with true if the two colors are equal Represent this color as a String The string representation of this color Get or set the intensity of the z color channel Get or set the intensity of the y color channel Get or set the intensity of the x color channel Get the dimension of this color Get or Set the equivalent MCvScalar value Defines a Ycc color (YCrCb JPEG) The MCvScalar representation of the color intensity Create a Ycc color using the specific values The Y value for this color The Cr value for this color The Cb value for this color Return true if the two colors are equal The other color to compare with true if the two colors are equal Represent this color as a String The string representation of this color Get or set the intensity of the Y color channel Get or set the intensity of the Cr color channel Get or set the intensity of the Cb color channel Get the dimension of this color Get or Set the equivalent MCvScalar value A convolution kernel The center of the convolution kernel Create a convolution kernel with the specific number of rows and columns The number of rows for the convolution kernel The number of columns for the convolution kernel Create a convolution kernel using the specific matrix and center The values for the convolution kernel The center of the kernel Create a convolution kernel using the specific floating point matrix The values for the convolution kernel Create a convolution kernel using the specific floating point matrix and center The values for the convolution kernel The center for the convolution kernel Get a flipped copy of the convolution kernel The type of the flipping The flipped copy of this image Obtain the transpose of the convolution kernel A transposed convolution kernel The center of the convolution kernel CvBlob Get the contour that defines the blob The contour of the blob Implicit operator for IntPtr The CvBlob The unmanaged pointer for this object Get the blob label The minimum bounding box of the blob Get the Blob Moments The centroid of the blob The number of pixels in this blob Pointer to the blob Blob Moments Moment 00 Moment 10 Moment 01 Moment 11 Moment 20 Moment 02 Central moment 11 Central moment 20 Central moment 02 Normalized central moment 11 Normalized central moment 20 Normalized central moment 02 Hu moment 1 Hu moment 2 Wrapper for the CvBlob detection functions. The Ptr property points to the label image of the cvb::cvLabel function. Algorithm based on the paper "A linear-time component-labeling algorithm using contour tracing technique" of Fu Chang, Chun-Jen Chen and Chi-Jen Lu. Detect blobs from input image. The input image The storage for the detected blobs Number of pixels that have been labeled. Calculates the mean color of a blob in an image. The blob. The original image Average color Draw the blobs on the image The binary mask. The blobs. Drawing type. The alpha value. 1.0 for solid color and 0.0 for transparent The image with the blobs drawn Get the binary mask for the blobs listed in the CvBlobs The blobs The binary mask for the specific blobs Release all the unmanaged memory associated with this Blob detector
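A sketch of the blob-detection flow just described (Emgu.CV.Cvb, 3.x assumed; "mask.png" stands in for any binary black/white image):

    using Emgu.CV;
    using Emgu.CV.Cvb;
    using Emgu.CV.Structure;

    using (CvBlobDetector detector = new CvBlobDetector())
    using (CvBlobs blobs = new CvBlobs())
    using (Image<Gray, byte> binary = new Image<Gray, byte>("mask.png"))
    {
        uint labeledPixels = detector.Detect(binary, blobs);
        blobs.FilterByArea(100, int.MaxValue); // erase blobs smaller than 100 pixels

        foreach (CvBlob blob in blobs.Values)
        {
            // blob.BoundingBox and blob.Centroid are available here
        }
    }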
Blob rendering type Render each blob with a different color. Render centroid. Render bounding box. Render angle. Print blob data to log out. Print blob data to std out. The default rendering type CvBlobs Create a new CvBlobs Release all the unmanaged resources used by this CvBlobs Filter blobs by area. Those blobs whose areas are not in range will be erased from the input list of blobs. Minimum area Maximum area Adds the specified label and blob to the dictionary. The label of the blob The blob Determines whether the CvBlobs contains the specified label. The label (key) to be located True if the CvBlobs contains an element with the specific label Removes the blob with the specific label The label of the blob True if the element is successfully found and removed; otherwise, false. Gets the blob associated with the specified label. The blob label When this method returns, contains the blob associated with the specified label, if the label is found; otherwise, null. This parameter is passed uninitialized. True if the blobs contains a blob with the specific label; otherwise, false Adds the specified label and blob to the CvBlobs. The structure representing the label and blob to add to the CvBlobs Removes all keys and values Determines whether the CvBlobs contains a specific label and CvBlob. The label and blob to be located True if the specific label and blob is found in the CvBlobs; otherwise, false. Copies the elements to the array, starting at the specific arrayIndex. The one-dimensional array that is the destination of the elements copied from the CvBlobs. The array must have zero-based indexing. The zero-based index in the array at which copying begins. Removes a key and value from the dictionary. The structure representing the key and value to be removed True if the key and value are successfully found and removed; otherwise false. Returns an enumerator that iterates through the collection. An enumerator that can be used to iterate through the collection Returns a pointer to the CvBlobs Pointer to the CvBlobs Get a collection containing the labels in the CvBlobs Get a collection containing the blobs in the CvBlobs. Get the blob with the specific label. The Set function is not implemented The label for the blob Gets the number of label/Blob pairs contained in the collection Always false CvTrack Track identification number Label assigned to the blob related to this track X min X max Y min Y max Centroid Indicates how many frames the object has been in the scene Indicates the number of frames that it has been active since the last inactive period. Indicates the number of frames that it has been missing. Compares CvTrack for equality The other track to compare with True if the two CvTrack are equal; otherwise false. Get the minimum bounding rectangle for this track Blobs tracking Tracking based on: A. Senior, A. Hampapur, Y-L Tian, L. Brown, S. Pankanti, R. Bolle. Appearance Models for Occlusion Handling. Second International workshop on Performance Evaluation of Tracking and Surveillance Systems & CVPR'01. December, 2001. (http://www.research.ibm.com/peoplevision/PETS2001.pdf) Create a new CvTracks Release all the unmanaged resources used by this CvTracks Updates list of tracks based on current blobs.
List of blobs Distance Max distance to determine when a track and a blob match Inactive Max number of frames a track can be inactive Active If a track becomes inactive but it has been active less than thActive frames, the track will be deleted. Adds the specified id and track to the dictionary. The id of the track The track Determines whether the CvTracks contains the specified id. The id (key) to be located True if the CvTracks contains an element with the specific id Removes the track with the specific id The id of the track True if the element is successfully found and removed; otherwise, false. Gets the track associated with the specified id. The track id When this method returns, contains the track associated with the specified id, if the id is found; otherwise, an empty track. This parameter is passed uninitialized. True if the tracks contains a track with the specific id; otherwise, false Adds the specified id and track to the CvTracks. The structure representing the id and track to add to the CvTracks Removes all keys and values Determines whether the CvTracks contains a specific id and CvTrack. The id and CvTrack to be located True if the pair is found in the CvTracks; otherwise, false. Copies the elements to the array, starting at the specific arrayIndex. The one-dimensional array that is the destination of the elements copied from the CvTracks. The array must have zero-based indexing. The zero-based index in the array at which copying begins. Removes a key and value from the dictionary. The structure representing the key and value to be removed True if the key and value are successfully found and removed; otherwise false. Returns an enumerator that iterates through the collection. An enumerator that can be used to iterate through the collection Returns a pointer to the CvTracks Pointer to the CvTracks Get a collection containing the ids in the CvTracks. Get a collection containing the tracks in the CvTracks. Get or Set the Track with the specific id. The id of the Track Gets the number of id/track pairs contained in the collection. Always false. A Uniform Multi-dimensional Dense Histogram Creates a uniform 1-D histogram of the specified size The number of bins in this 1-D histogram. The upper and lower boundary of the bin Creates a uniform multi-dimension histogram of the specified size The length of this array is the dimension of the histogram. The values of the array contain the number of bins in each dimension. The total number of bins equals the product of all numbers in the array the upper and lower boundaries of the bins Clear this histogram Project the images to the histogram bins The type of depth of the image images to project If it is true, the histogram is not cleared in the beginning. This feature allows the user to compute a single histogram from several images, or to update the histogram online. Can be null if not needed. The operation mask, determines what pixels of the source images are counted Project the matrices to the histogram bins The type of depth of the image Matrices to project If it is true, the histogram is not cleared in the beginning. This feature allows the user to compute a single histogram from several images, or to update the histogram online. Can be null if not needed. The operation mask, determines what pixels of the source images are counted Backproject the histogram into a gray scale image Source images, all are of the same size and type Destination back projection image of the same type as the source images The type of depth of the image Backproject the histogram into a matrix Source matrices, all are of the same size and type Destination back projection matrix of the same type as the source matrices The type of depth of the matrix Gets the bin values. The bin values Get the size of the bin dimensions Get the ranges of this histogram
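Putting the histogram members above together (a sketch; the DenseHistogram signatures are assumed from this documentation and "scene.jpg" is a placeholder):

    using Emgu.CV;
    using Emgu.CV.Structure;

    using (DenseHistogram hist = new DenseHistogram(256, new RangeF(0, 256)))
    using (Image<Gray, byte> img = new Image<Gray, byte>("scene.jpg"))
    using (Image<Gray, byte> backProjection = new Image<Gray, byte>(img.Size))
    {
        hist.Calculate(new[] { img }, false, null); // accumulate = false, no mask
        hist.BackProject(new[] { img }, backProjection);
        float[] bins = hist.GetBinValues();
    }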
Dense Optical flow This is the algorithm class Return the pointer to the algorithm object The pointer to the algorithm object Gets the dense optical flow pointer. The dense optical flow pointer. Extension methods for IDenseOpticalFlow Calculates an optical flow. First 8-bit single-channel input image. Second input image of the same size and the same type as prev. Computed flow image that has the same size as prev and type CV_32FC2 The dense optical flow object Dual TV L1 Optical Flow Algorithm. Create Dual TV L1 Optical Flow. Release the unmanaged resources Gets the dense optical flow pointer. The pointer to the dense optical flow object. Return the pointer to the algorithm object Time step of the numerical scheme Weight parameter for the data term, attachment parameter Weight parameter for (u - v)^2, tightness parameter Coefficient for additional illumination variation term Number of scales used to create the pyramid of images Number of warpings per scale Stopping criterion threshold used in the numerical scheme, which is a trade-off between precision and running time Inner iterations (between outlier filtering) used in the numerical scheme Outer iterations (number of inner loops) used in the numerical scheme Use initial flow Step between scales (less than 1) Median filter kernel size (1 = no filter) (3 or 5) Wrapped AKAZE detector The feature 2D base class The pointer to the Feature2D object The pointer to the Algorithm object. Detect keypoints in an image and compute the descriptors on the image from the keypoint locations. The image The optional mask, can be null if not needed The detected keypoints will be stored in this vector The descriptors from the keypoints If true, the method will skip the detection phase and will compute descriptors for the provided keypoints Reset the pointers Detect the features in the image The result vector of keypoints The image from which the features will be detected The optional mask. Detect the keypoints from the image The image to extract keypoints from The optional mask. An array of key points Compute the descriptors on the image from the given keypoint locations. The image to compute descriptors from The keypoints where the descriptor computation is performed The descriptors from the given keypoints Get the pointer to the Feature2D object The pointer to the Feature2D object Get the number of elements in the descriptor. The number of elements in the descriptor Create AKAZE using the specific values Type of the extracted descriptor Size of the descriptor in bits. 0 -> Full size Number of channels in the descriptor (1, 2, 3) Detector response threshold to accept point Default number of sublevels per scale level Maximum octave evolution of the image Diffusivity type Release the unmanaged resources associated with this object Type of the extracted descriptor The kaze upright The kaze The MLDB upright The MLDB
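Using the Feature2D members above with the wrapped AKAZE detector (a sketch, Emgu CV 3.x assumed; "scene.jpg" is a placeholder):

    using Emgu.CV;
    using Emgu.CV.Features2D;
    using Emgu.CV.Structure;
    using Emgu.CV.Util;

    using (AKAZE akaze = new AKAZE())
    using (Image<Gray, byte> img = new Image<Gray, byte>("scene.jpg"))
    using (VectorOfKeyPoint keyPoints = new VectorOfKeyPoint())
    using (Mat descriptors = new Mat())
    {
        // false: run the detection phase instead of using caller-supplied keypoints
        akaze.DetectAndCompute(img, null, keyPoints, descriptors, false);
    }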
The match distance type Manhattan distance (city block distance) Squared Euclidean distance Euclidean distance Hamming distance functor - counts the bit differences between two strings - useful for the Brief descriptor, bit count of A exclusive XOR'ed with B. Hamming distance functor - counts the bit differences between two strings - useful for the Brief descriptor, bit count of A exclusive XOR'ed with B. Wrapped BFMatcher Descriptor matcher The pointer to the Descriptor matcher Find the k-nearest match An n x m matrix of descriptors to be queried for nearest neighbours. n is the number of descriptors and m is the size of the descriptor Number of nearest neighbors to search for Can be null if not needed. An n x 1 matrix. If 0, the query descriptor in the corresponding row will be ignored. Matches. Each matches[i] is k or fewer matches for the same query descriptor. Add the model descriptors The model descriptors Create a BFMatcher of the specific distance type The distance type Specify whether or not cross check is needed. Use false for default. Release the unmanaged resource associated with the BFMatcher Class to compute an image descriptor using the bag of visual words. Such a computation consists of the following steps: 1. Compute descriptors for a given image and its key points set. 2. Find the nearest visual words from the vocabulary for each key point descriptor. 3. Compute the bag-of-words image descriptor as a normalized histogram of vocabulary words encountered in the image. The i-th bin of the histogram is the frequency of the i-th word of the vocabulary in the given image. Descriptor extractor that is used to compute descriptors for an input image and its key points. Descriptor matcher that is used to find the nearest word of the trained vocabulary for each key point descriptor of the image. Sets a visual vocabulary. The vocabulary Computes an image descriptor using the set visual vocabulary. Image, for which the descriptor is computed Key points detected in the input image. The output image descriptors. Release all the unmanaged memory associated with this object Kmeans-based class to train visual vocabulary using the bag of visual words approach. Create a new BOWKmeans trainer Number of clusters to split the set by. Specifies maximum number of iterations and/or accuracy (distance the centers move by between the subsequent iterations). Use empty termcrit for default. The number of attempts. Use 3 for default Kmeans initialization flag. Use PPCenters for default. Add the descriptors to the trainer The descriptors to be added to the trainer Cluster the descriptors and return the cluster centers The cluster centers Release all the unmanaged memory associated with this object Get the number of descriptors BRISK: Binary Robust Invariant Scalable Keypoints Create a BRISK keypoint detector and descriptor extractor. Feature parameters. The number of octave layers. Pattern scale Release the unmanaged resources associated with this object FAST (Features from Accelerated Segment Test) keypoint detector. Detects corners using the FAST algorithm by E. Rosten ("Machine learning for high-speed corner detection", 2006).
Create a fast detector with the specific parameters Threshold on difference between intensity of the center pixel and pixels on a circle around this pixel. Specify if non-maximum suppression should be used. One of the three neighborhoods as defined in the paper Release the unmanaged memory associated with this detector. One of the three neighborhoods as defined in the paper The type5_8 The type7_12 The type9_16 Tools for features 2D Draw the keypoints found on the image. The image The keypoints to be drawn The color used to draw the keypoints The drawing type The image with the keypoints drawn Draw the matched keypoints between the model image and the observed image. The model image The keypoints in the model image The observed image The keypoints in the observed image The color for the match correspondence lines The color for highlighting the keypoints The mask for the matches. Use null for all matches. The drawing type The image where the model and observed images are displayed side by side. Matches are drawn as indicated by the flag Matches. Each matches[i] is k or fewer matches for the same query descriptor. Eliminate the matched features whose scale and rotation do not agree with the majority's scale and rotation. The number of bins for rotation, a good value might be 20 (which means each bin covers 18 degrees) This determines the difference in scale for neighborhood bins, a good value might be 1.5 (which means matched features in bin i+1 are scaled 1.5 times larger than matched features in bin i) The keypoints from the model image The keypoints from the observed image This is both input and output. This matrix indicates which rows are valid for the matches. Matches. Each matches[i] is k or fewer matches for the same query descriptor. The number of non-zero elements in the resulting mask Recover the homography matrix using RANSAC. If the matrix cannot be recovered, null is returned. The model keypoints The observed keypoints The maximum allowed reprojection error to treat a point pair as an inlier. If srcPoints and dstPoints are measured in pixels, it usually makes sense to set this parameter somewhere in the range 1 to 10. The mask matrix of which the value might be modified by the function. As input, if the value is 0, the corresponding match will be ignored when computing the homography matrix. If the value is 1 and RANSAC determines the match is an outlier, the value will be set to 0. The homography matrix; if it cannot be found, null is returned Matches. Each matches[i] is k or fewer matches for the same query descriptor. Filter the matched features, such that if a match is not unique, it is rejected. The distance ratio at which a match is considered unique; a good value is 0.8 This is both input and output. This matrix indicates which rows are valid for the matches. Matches. Each matches[i] is k or fewer matches for the same query descriptor. Define the Keypoint draw type Two source images, matches and single keypoints will be drawn. For each keypoint only the center point will be drawn (without the circle around the keypoint with keypoint size and orientation). Single keypoints will not be drawn. For each keypoint the circle around the keypoint with keypoint size and orientation will be drawn.
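The toolbox methods above chain into the usual match-filter-homography pipeline, sketched below (Emgu CV 3.x assumed; FindHomography is a hypothetical helper name, and the keypoint vectors and descriptor Mats would come from a Feature2D detector such as the AKAZE sketch earlier):

    using Emgu.CV;
    using Emgu.CV.CvEnum;
    using Emgu.CV.Features2D;
    using Emgu.CV.Structure;
    using Emgu.CV.Util;

    static Mat FindHomography(VectorOfKeyPoint modelKeyPoints, Mat modelDescriptors,
                              VectorOfKeyPoint observedKeyPoints, Mat observedDescriptors)
    {
        using (BFMatcher matcher = new BFMatcher(DistanceType.Hamming))
        using (VectorOfVectorOfDMatch matches = new VectorOfVectorOfDMatch())
        using (Mat mask = new Mat(observedDescriptors.Rows, 1, DepthType.Cv8U, 1))
        {
            matcher.Add(modelDescriptors);
            matcher.KnnMatch(observedDescriptors, matches, 2, null); // k = 2
            mask.SetTo(new MCvScalar(255));

            // Reject non-unique matches, then matches whose scale/rotation disagree.
            Features2DToolbox.VoteForUniqueness(matches, 0.8, mask);
            int nonZero = Features2DToolbox.VoteForSizeAndOrientation(
                modelKeyPoints, observedKeyPoints, matches, mask, 1.5, 20);

            // At least 4 point pairs are needed to estimate a homography.
            return nonZero >= 4
                ? Features2DToolbox.GetHomographyMatrixFromMatchedFeatures(
                      modelKeyPoints, observedKeyPoints, matches, mask, 2)
                : null;
        }
    }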
Wrapping class for feature detection using the goodFeaturesToTrack() function. Create a Good Features to Track detector The function first calculates the minimal eigenvalue for every source image pixel using the cvCornerMinEigenVal function and stores them in eig_image. Then it performs non-maxima suppression (only local maxima in a 3x3 neighborhood remain). The next step is rejecting the corners with a minimal eigenvalue less than quality_level*max(eig_image(x,y)). Finally, the function ensures that all the corners found are distanced enough from one another by considering the corners (the strongest corners are considered first) and checking that the distance between the newly considered feature and the features considered earlier is larger than min_distance. So, the function removes the features that are too close to the stronger features The maximum number of features to be detected. Multiplier for the max/min eigenvalue; specifies the minimal accepted quality of image corners. Limit, specifying the minimum possible distance between returned corners; Euclidean distance is used. Size of the averaging block, passed to the underlying cvCornerMinEigenVal or cvCornerHarris used by the function. If true, will use the Harris corner detector. K Release the unmanaged memory associated with this detector. Wrapped KAZE detector Create KAZE using the specific values Release the unmanaged resources associated with this object The diffusivity PM G1 PM G2 Weickert Charbonnier MSER detector Create a MSER detector using the specific parameters In the code, it compares (size_{i}-size_{i-delta})/size_{i-delta} Prune the area which is bigger than max_area Prune the area which is smaller than min_area Prune the areas that have similar size to their children Trace back to cut off MSER with diversity < min_diversity For color images, the evolution steps The area threshold that causes re-initialization Ignore too small margin The aperture size for edge blur Release the unmanaged memory associated with this detector. Wrapped ORB detector Create an ORBDetector using the specific values The number of desired features. Coefficient by which we divide the dimensions from one scale pyramid level to the next. The number of levels in the scale pyramid. The level at which the image is given. If 1, that means we will also look at the image scaled up by the scale factor. How far from the boundary the points should be. How many random points are used to produce each cell of the descriptor (2, 3, 4 ...). Type of the score to use. Patch size. FAST threshold Release the unmanaged resources associated with this object The score type Harris Fast Simple Blob detector Create a simple blob detector Release the unmanaged memory associated with this detector. Create parameters for the simple blob detector and use default values. Release all the unmanaged memory associated with this simple blob detector parameter. Threshold step Min threshold Max threshold Min dist between blobs Filter by color Blob color Filter by area Min area Max area Filter by circularity Min circularity Max circularity Filter by inertia Min inertia ratio Max inertia ratio Filter by convexity Min Convexity Max Convexity Min Repeatability File Storage Node class. The node is used to store each and every element of the file storage opened for reading. When an XML/YAML file is read, it is first parsed and stored in the memory as a hierarchical collection of nodes. Each node can be a “leaf” that contains a single number or a string, or be a collection of other nodes. There can be named collections (mappings) where each element has a name and is accessed by a name, and ordered collections (sequences) where elements do not have names but rather are accessed by index. The type of the file node can be determined using the FileNode::type method.
Note that file nodes are only used for navigating file storages opened for reading. When a file storage is opened for writing, no data is stored in memory after it is written. Reads a Mat from the node The Mat where the result is read into The default mat. Release the unmanaged resources Reads the string from the node The string from the node Reads the int from the node. The int from the node. Reads the float from the node. The float from the node. Reads the double from the node. The double from the node. Gets a value indicating whether this instance is empty. true if this instance is empty; otherwise, false. Gets the type of the node. The type of the node. Type of the file storage node Empty node an integer Floating-point number Synonym for Real Text string in UTF-8 encoding Synonym for Str Integer of size size_t. Typically used for storing complex dynamic structures where some elements reference the others The sequence Mapping The type mask Compact representation of a sequence or mapping. Used only by the YAML writer A registered object (e.g. a matrix) Empty structure (sequence or mapping) The node has a name (i.e. it is an element of a mapping) XML/YAML file storage class that encapsulates all the information necessary for writing or reading data to/from a file. Initializes a new instance of the class. Name of the file to open or the text string to read the data from. Extension of the file (.xml or .yml/.yaml) determines its format (XML or YAML respectively). Also you can append .gz to work with compressed files, for example myHugeMatrix.xml.gz. If both FileStorage::WRITE and FileStorage::MEMORY flags are specified, source is used just to specify the output file format (e.g. mydata.xml, .yml etc.). Mode of operation. Encoding of the file. Note that UTF-16 XML encoding is not supported currently and you should use 8-bit encoding instead of it. Writes the specified Mat to the node with the specific name The Mat to be written to the file storage The name of the node. Writes the specified value to the node with the specific name The value to be written to the file storage The name of the node. Writes the specified value to the node with the specific name The value to be written to the file storage The name of the node. Writes the specified value to the node with the specific name The value to be written to the file storage The name of the node. Writes the specified value to the node with the specific name The value to be written to the file storage The name of the node. Closes the file and releases all the memory buffers Call this method after all I/O operations with the storage are finished. If the storage was opened for writing data and FileStorage.Mode.Write was specified The string that represents the text in the FileStorage Gets the top-level mapping. Zero-based index of the stream. In most cases there is only one stream in the file. However, YAML supports multiple streams and so there can be several. The top-level mapping Gets the first element of the top-level mapping. The first element of the top-level mapping. Gets the specified element of the top-level mapping. Name of the node. The specified element of the top-level mapping. Release the unmanaged resources Gets a value indicating whether this instance is opened. true if the object is associated with the current file; otherwise, false. Gets the FileNode with the specified node name. The FileNode. Name of the node.
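A sketch of writing a Mat to YAML in memory and reading it back, based on the FileStorage/FileNode members described above (Emgu CV 3.x assumed):

    using Emgu.CV;
    using Emgu.CV.CvEnum;

    string yml;
    using (Mat m = Mat.Eye(3, 3, DepthType.Cv64F, 1))
    using (FileStorage fs = new FileStorage(".yml", FileStorage.Mode.Write | FileStorage.Mode.Memory))
    {
        fs.Write(m, "myMatrix");
        yml = fs.ReleaseAndGetString(); // close the storage and get the YAML text
    }

    using (FileStorage fs = new FileStorage(yml, FileStorage.Mode.Read | FileStorage.Mode.Memory))
    using (FileNode node = fs.GetNode("myMatrix"))
    using (Mat restored = new Mat())
    using (Mat fallback = new Mat())
    {
        node.ReadMat(restored, fallback); // second argument is the default Mat
    }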
File storage mode Open the file for reading Open the file for writing Open the file for appending Read data from source or write data to the internal buffer Mask for format flags Auto format XML format YAML format The Kmeans center initialization types Random The index parameters interface Gets the pointer to the index parameter. The index parameter pointer. Flann index Create a flann index A row by row matrix of descriptors The index parameter Perform k-nearest-neighbours (KNN) search A row by row matrix of descriptors to be queried for nearest neighbours The result of the indices of the k-nearest neighbours The square of the Euclidean distance between the neighbours Number of nearest neighbors to search for The number of times the tree(s) in the index should be recursively traversed. A higher value for this parameter would give better search precision, but also take more time. If automatic configuration was used when the index was created, the number of checks required to achieve the specified precision was also computed, in which case this parameter is ignored Performs a radius nearest neighbor search for multiple query points The query points, one per row Indices of the nearest neighbors found The square of the Euclidean distance between the neighbours The search radius The maximum number of results The number of times the tree(s) in the index should be recursively traversed. A higher value for this parameter would give better search precision, but also take more time. If automatic configuration was used when the index was created, the number of checks required to achieve the specified precision was also computed, in which case this parameter is ignored The number of points in the search radius Release the unmanaged memory associated with this Flann Index Create index for 3D points Create a flann index for 3D points The IPosition3D array The index parameters Find the approximate nearest position in 3D The position to start the search from The square distance of the nearest neighbour The index with the nearest 3D position Release the resource used by this object When passing an object of this type, the index will perform a linear, brute-force search. Initializes a new instance of the class. Release all the memory associated with this IndexParam When passing an object of this type the index constructed will consist of a set of randomized kd-trees which will be searched in parallel. Initializes a new instance of the class. The number of parallel kd-trees to use. Good values are in the range [1..16] Release all the memory associated with this IndexParam When using a parameters object of this type the index created uses multi-probe LSH (see Multi-Probe LSH: Efficient Indexing for High-Dimensional Similarity Search by Qin Lv, William Josephson, Zhe Wang, Moses Charikar, Kai Li, Proceedings of the 33rd International Conference on Very Large Data Bases (VLDB). Vienna, Austria. September 2007) Initializes a new instance of the class. The number of hash tables to use (between 10 and 30 usually). The size of the hash key in bits (between 10 and 20 usually). The number of bits to shift to check for neighboring buckets (0 is regular LSH, 2 is recommended). Release all the memory associated with this IndexParam
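For illustration, a sketch of the KNN search described above. The Matrix<float> overload of KnnSearch and the KdTreeIndexParams wrapper name are assumptions based on the member list here, not verified signatures:

using Emgu.CV;
using Emgu.CV.Flann;

// Build a randomized kd-tree index over a descriptor database and query 2 nearest neighbours.
Matrix<float> db = new Matrix<float>(1000, 32);      // 1000 descriptors, 32 dimensions
Matrix<float> queries = new Matrix<float>(10, 32);   // 10 query descriptors
Matrix<int> indices = new Matrix<int>(queries.Rows, 2);
Matrix<float> sqDistances = new Matrix<float>(queries.Rows, 2);

using (KdTreeIndexParams p = new KdTreeIndexParams(4))    // 4 parallel kd-trees
using (Index index = new Index(db, p))
   index.KnnSearch(queries, indices, sqDistances, 2, 32); // k = 2, 32 checks per query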
When passing an object of this type the index constructed will be a hierarchical k-means tree. Initializes a new instance of the class. The branching factor to use for the hierarchical k-means tree The maximum number of iterations to use in the k-means clustering stage when building the k-means tree. A value of -1 used here means that the k-means clustering should be iterated until convergence The algorithm to use for selecting the initial centers when performing a k-means clustering step. The possible values are CENTERS_RANDOM (picks the initial cluster centers randomly), CENTERS_GONZALES (picks the initial centers using Gonzales’ algorithm) and CENTERS_KMEANSPP (picks the initial centers using the algorithm suggested in arthur_kmeanspp_2007) This parameter (cluster boundary index) influences the way exploration is performed in the hierarchical kmeans tree. When cb_index is zero the next kmeans domain to be explored is chosen to be the one with the closest center. A value greater than zero also takes into account the size of the domain. Release all the memory associated with this IndexParam When using a parameters object of this type the index created combines the randomized kd-trees and the hierarchical k-means tree. Initializes a new instance of the class. The number of parallel kd-trees to use. Good values are in the range [1..16] The branching factor to use for the hierarchical k-means tree The maximum number of iterations to use in the k-means clustering stage when building the k-means tree. A value of -1 used here means that the k-means clustering should be iterated until convergence The algorithm to use for selecting the initial centers when performing a k-means clustering step. The possible values are CENTERS_RANDOM (picks the initial cluster centers randomly), CENTERS_GONZALES (picks the initial centers using Gonzales’ algorithm) and CENTERS_KMEANSPP (picks the initial centers using the algorithm suggested in arthur_kmeanspp_2007) This parameter (cluster boundary index) influences the way exploration is performed in the hierarchical kmeans tree. When cb_index is zero the next kmeans domain to be explored is chosen to be the one with the closest center. A value greater than zero also takes into account the size of the domain. Release all the memory associated with this IndexParam When passing an object of this type the index created is automatically tuned to offer the best performance, by choosing the optimal index type (randomized kd-trees, hierarchical kmeans, linear) and parameters for the dataset provided. Initializes a new instance of the class. Is a number between 0 and 1 specifying the percentage of the approximate nearest-neighbor searches that return the exact nearest-neighbor. Using a higher value for this parameter gives more accurate results, but the search takes longer. The optimum value usually depends on the application. Specifies the importance of the index build time relative to the nearest-neighbor search time. In some applications it’s acceptable for the index build step to take a long time if the subsequent searches in the index can be performed very fast. In other applications it’s required that the index be built as fast as possible even if that leads to slightly longer search times. Is used to specify the trade-off between time (index build time and search time) and memory used by the index. A value less than 1 gives more importance to the time spent and a value greater than 1 gives more importance to the memory usage. Is a number between 0 and 1 indicating what fraction of the dataset to use in the automatic parameter configuration algorithm.
Running the algorithm on the full dataset gives the most accurate results, but for very large datasets it can take longer than desired. In such a case, using just a fraction of the data helps speed up this algorithm while still giving good approximations of the optimum parameters. Release all the memory associated with this IndexParam Hierarchical Clustering Index Parameters Initializes a new instance of the class. Release all the memory associated with this IndexParam Search parameters Initializes a new instance of the class. How many leaves to visit when searching for neighbors (-1 for unlimited) Search for eps-approximate neighbors Only for radius search, require neighbors sorted by distance Release all the memory associated with this IndexParam A geodetic coordinate that is defined by its latitude, longitude and altitude Indicates the origin of the Geodetic Coordinate Create a geodetic coordinate using the specific values Latitude in radian Longitude in radian Altitude in meters Compute the sum of two GeodeticCoordinates The first coordinate to be added The second coordinate to be added The sum of two GeodeticCoordinates Compute the difference The first coordinate The coordinate to be subtracted The difference Compute the product The coordinate The scale to be multiplied The product Check if this Geodetic coordinate equals another coordinate The other coordinate to be compared with True if the two coordinates are equal Convert radian to degree radian degree Convert degree to radian degree radian Latitude (phi) in radian Longitude (lambda) in radian Altitude (height) in meters A HOG descriptor Create a new HOGDescriptor Create a new HOGDescriptor using the specific parameters. Block size in cells. Use (16, 16) for default. Cell size. Use (8, 8) for default. Block stride. Must be a multiple of cell size. Use (8,8) for default. Do gamma correction preprocessing or not. Use true for default. L2-Hys normalization method shrinkage. Number of bins. Gaussian smoothing window parameter. Detection window size. Must be aligned to block size and block stride. Must match the size of the training image. Use (64, 128) for default. Create a new HOGDescriptor using the specific parameters. The template image to be detected. Block size in cells. Use (16, 16) for default. Cell size. Use (8, 8) for default. Block stride. Must be a multiple of cell size. Use (8,8) for default. Do gamma correction preprocessing or not. Use true for default. L2-Hys normalization method shrinkage. Use 0.2 for default. Number of bins. Use 9 for default. Gaussian smoothing window parameter. Use -1 for default. Use 1 for default. Create a new HogDescriptor using the specific template and default parameters. The template image to be detected. Return the default people detector The default people detector Set the SVM detector The SVM detector Performs object detection with increasing detection window. The image to search in Threshold for the distance between features and SVM classifying plane. Usually it is 0 and should be specified in the detector coefficients (as the last free coefficient). But if the free coefficient is omitted (which is allowed), you can specify it manually here. Window stride. Must be a multiple of block stride. Coefficient of the detection window increase. After detection some objects could be covered by many rectangles. This coefficient regulates the similarity threshold. 0 means don't perform grouping. Should be an integer if not using meanshift grouping. Use 2.0 for default If true, it will use meanshift grouping. The regions where positives are found The image Window stride.
Must be a multiple of block stride. Use Size.Empty for default Padding. Use Size.Empty for default Locations for the computation. Can be null if not needed The descriptor vector Release the unmanaged memory associated with this HOGDescriptor Get the size of the descriptor Apply a converter and compute the result for each channel of the image; for a single channel image, apply the converter directly, for a multiple channel image, make a copy of each channel to a temporary image and apply the converter The return type The source image The converter, which accepts the IntPtr of a single channel IplImage and the image channel index, returning a result of type R An array which contains the result for each channel Apply a converter and compute the result for each channel of the image; for a single channel image, apply the converter directly, for a multiple channel image, make a copy of each channel to a temporary image and apply the converter The source image The converter, which accepts the IntPtr of a single channel IplImage and the image channel index, returning a result of type R An array which contains the result for each channel An Image is a wrapper to IplImage of OpenCV. Color type of this image (either Gray, Bgr, Bgra, Hsv, Hls, Lab, Luv, Xyz, Ycc, Rgb or Rgba) Depth of this image (either Byte, SByte, Single, double, UInt16, Int16 or Int32) The dimension of color Create an empty Image Create an image from the specific multi-dimensional data, where the 1st dimension is # of rows (height), the 2nd dimension is # cols (width) and the 3rd dimension is the channel The multi-dimensional data where the 1st dimension is # of rows (height), the 2nd dimension is # cols (width) and the 3rd dimension is the channel Create an Image from unmanaged data. The width of the image The height of the image Size of aligned image row in bytes Pointer to aligned image data, where each row should be 4-aligned The caller is responsible for allocating and freeing the block of memory specified by the scan0 parameter, however, the memory should not be released until the related Image is released. Allocate the image from the image header. This should be only a header to the image. When the image is disposed, cvReleaseImageHeader will be called on the pointer. Read image from a file The name of the file that contains the image Load the specific file using Bitmap Load the specific file using OpenCV Obtain the image from the specific Bitmap The bitmap which will be converted to the image Create a blank Image of the specified width, height and color. The width of the image The height of the image The initial color of the image
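Tying the HOGDescriptor members above together, a minimal pedestrian-detection sketch using the default people detector (the file name is a placeholder; MCvObjectDetection is assumed as the return element type of DetectMultiScale):

using Emgu.CV;
using Emgu.CV.Structure;

// Detect people with the default HOG people detector and mark each hit.
using (HOGDescriptor hog = new HOGDescriptor())
using (Image<Bgr, byte> img = new Image<Bgr, byte>("street.jpg"))   // placeholder file
{
   hog.SetSVMDetector(HOGDescriptor.GetDefaultPeopleDetector());
   MCvObjectDetection[] found = hog.DetectMultiScale(img);          // default stride/scale
   foreach (MCvObjectDetection d in found)
      img.Draw(d.Rect, new Bgr(0, 0, 255), 2);                      // red detection box
}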
Create a blank Image of the specified width and height. The width of the image The height of the image Create a blank Image of the specific size The size of the image Allocate data for the array The number of rows The number of columns The number of channels of this image Create a multi-channel image from multiple gray scale images The image channels to be merged into a single image Constructor used to deserialize runtime serialized object The serialization info The streaming context A function used for runtime serialization of the object Serialization info streaming context Get the average value on this image The average color of the image Get the average value on this image, using the specific mask The mask for finding the average value The average color of the masked area Get the sum for each color channel The sum for each color channel Set every pixel of the image to the specific color The color to be set Set every pixel of the image to the specific color, using a mask The color to be set The mask for setting color Copy the masked area of this image to destination The destination to copy to The mask for copy Make a copy of the image using a mask, if ROI is set, only copy the ROI The mask for copying A copy of the image Make a copy of the specific ROI (Region of Interest) from the image The roi to be copied The roi region on the image Get a copy of the boxed region of the image The boxed region of the image A copy of the boxed region of the image Make a copy of the image, if ROI is set, only copy the ROI A copy of the image Create an image of the same size The initial pixel values in the image equal zero The image of the same size Make a clone of the current image. All image data as well as the COI and ROI are cloned A clone of the current image. All image data as well as the COI and ROI are cloned
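A short sketch of the ROI-aware copy semantics documented above: when a ROI is set, Copy() returns only that region (the file name is a placeholder):

using System.Drawing;
using Emgu.CV;
using Emgu.CV.Structure;

using (Image<Bgr, byte> img = new Image<Bgr, byte>("scene.jpg"))   // placeholder file
{
   img.ROI = new Rectangle(10, 10, 100, 80);      // restrict operations to a region
   using (Image<Bgr, byte> cropped = img.Copy())  // copies only the ROI (100x80)
   {
      // ... work with the cropped region ...
   }
   img.ROI = Rectangle.Empty;                     // clear the ROI again
}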
Get a subimage whose image data is shared with the current image. The rectangle area of the sub-image A subimage whose image data is shared with the current image Draw a Rectangle of the specific color and thickness The rectangle to be drawn The color of the rectangle If thickness is less than 1, the rectangle is filled up Line type Number of fractional bits in the center coordinates and radius value Draw a 2D Cross using the specific color and thickness The 2D Cross to be drawn The color of the cross Must be > 0 Draw a line segment using the specific color and thickness The line segment to be drawn The color of the line segment The thickness of the line segment Line type Number of fractional bits in the center coordinates and radius value Draw a line segment using the specific color and thickness The line segment to be drawn The color of the line segment The thickness of the line segment Line type Number of fractional bits in the center coordinates and radius value Draw a convex polygon using the specific color and thickness The convex polygon to be drawn The color of the polygon If thickness is less than 1, the polygon is filled up Fill the convex polygon with the specific color The array of points that define the convex polygon The color to fill the polygon with Line type Number of fractional bits in the center coordinates and radius value Draw the polyline defined by the array of 2D points A polyline defined by its points if true, the last line segment is defined by the last point of the array and the first point of the array the color used for drawing the thickness of the line Line type Number of fractional bits in the center coordinates and radius value Draw the polylines defined by the array of arrays of 2D points An array of polylines, each represented by an array of points if true, the last line segment is defined by the last point of the array and the first point of the array the color used for drawing the thickness of the line Line type Number of fractional bits in the center coordinates and radius value Draw a Circle of the specific color and thickness The circle to be drawn The color of the circle If thickness is less than 1, the circle is filled up Line type Number of fractional bits in the center coordinates and radius value Draw an Ellipse of the specific color and thickness The ellipse to be drawn The color of the ellipse If thickness is less than 1, the ellipse is filled up Line type Number of fractional bits in the center coordinates and radius value Draw the text using the specific font on the image The text message to be drawn Font type. Font scale factor that is multiplied by the font-specific base size. The location of the bottom left corner of the font The color of the text Thickness of the lines used to draw the text. Line type When true, the image data origin is at the bottom-left corner. Otherwise, it is at the top-left corner. Draws contour outlines in the image if thickness>=0 or fills the area bounded by the contours if thickness<0 All the input contours. Each contour is stored as a point vector. Parameter indicating a contour to draw. If it is negative, all the contours are drawn. Color of the contours Maximal level for drawn contours. If 0, only the contour is drawn. If 1, the contour and all contours after it on the same level are drawn. If 2, all contours after and all contours one level below the contours are drawn, etc. If the value is negative, the function does not draw the contours following after contour but draws child contours of contour up to abs(maxLevel)-1 level.
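A hedged sketch exercising a few of the Draw overloads listed above (rectangle, circle and text); the colors and coordinates are arbitrary:

using System.Drawing;
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;

using (Image<Bgr, byte> canvas = new Image<Bgr, byte>(320, 240, new Bgr(0, 0, 0)))
{
   canvas.Draw(new Rectangle(20, 20, 100, 60), new Bgr(0, 0, 255), 2);          // red outline
   canvas.Draw(new CircleF(new PointF(160, 120), 40f), new Bgr(0, 255, 0), -1); // filled circle
   canvas.Draw("hello", new Point(20, 220), FontFace.HersheyComplex, 1.0, new Bgr(255, 255, 255));
}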
Thickness of lines the contours are drawn with. If it is negative, the contour interiors are drawn Type of the contour segments Optional information about hierarchy. It is only needed if you want to draw only some of the contours Shift all the point coordinates by the specified value. It is useful in case the contours are retrieved in some image ROI and then the ROI offset needs to be taken into account during the rendering. Draws contour outlines in the image if thickness>=0 or fills the area bounded by the contours if thickness<0 The input contour stored as a point vector. Color of the contours Thickness of lines the contours are drawn with. If it is negative, the contour interiors are drawn Type of the contour segments Shift all the point coordinates by the specified value. It is useful in case the contours are retrieved in some image ROI and then the ROI offset needs to be taken into account during the rendering. Apply Probabilistic Hough transform to find line segments. The current image must be a binary image (e.g. the edges as a result of the Canny edge detector) Distance resolution in pixel-related units. Angle resolution measured in radians A line is returned by the function if the corresponding accumulator value is greater than threshold Minimum width of a line Minimum gap between lines The line segments detected for each of the channels Apply Canny Edge Detector followed by Probabilistic Hough transform to find line segments in the image The threshold to find initial segments of strong edges The threshold used for edge linking Distance resolution in pixel-related units. Angle resolution measured in radians A line is returned by the function if the corresponding accumulator value is greater than threshold Minimum width of a line Minimum gap between lines The line segments detected for each of the channels First apply Canny Edge Detector on the current image, then apply Hough transform to find circles The higher threshold of the two passed to the Canny edge detector (the lower one will be twice smaller). Accumulator threshold at the center detection stage. The smaller it is, the more false circles may be detected. Circles, corresponding to the larger accumulator values, will be returned first Resolution of the accumulator used to detect centers of the circles. For example, if it is 1, the accumulator will have the same resolution as the input image, if it is 2 - the accumulator will have twice smaller width and height, etc Minimal radius of the circles to search for Maximal radius of the circles to search for Minimum distance between centers of the detected circles. If the parameter is too small, multiple neighbor circles may be falsely detected in addition to a true one. If it is too large, some circles may be missed The circles detected for each of the channels Return parameters based on ROI The pointer to the IplImage The address of the pointer that points to the start of the bytes taken into consideration ROI ROI.Width * ColorType.Dimension The number of bytes in a row taken into consideration ROI The number of rows taken into consideration ROI The width step required to jump to the next row
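A minimal sketch of the HoughCircles member described above; note it returns one result array per image channel, and all thresholds and radii here are illustrative:

using Emgu.CV;
using Emgu.CV.Structure;

using (Image<Gray, byte> gray = new Image<Gray, byte>("coins.jpg"))   // placeholder file
{
   CircleF[][] circles = gray.HoughCircles(
      new Gray(180),   // Canny high threshold
      new Gray(120),   // accumulator threshold at the center detection stage
      2.0,             // accumulator resolution (dp)
      20.0,            // min distance between circle centers
      5,               // min radius
      50);             // max radius
   foreach (CircleF c in circles[0])   // single channel -> index 0
      gray.Draw(c, new Gray(255), 2);
}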
Apply a converter and compute the result for each channel of the image. For a single channel image, apply the converter directly. For a multiple channel image, set the COI for the specific channel before applying the converter The return type The converter, which accepts the IntPtr of a single channel IplImage and the image channel index, returning a result of type R An array which contains the result for each channel If the image has only one channel, apply the action directly on the IntPtr of this image and the second image; otherwise, make a copy of each channel of this image to a temporary one, apply the action on it and another temporary image, and copy the resulting image back to image2 The type of the depth of the image The function which accepts the src IntPtr, dest IntPtr and index of the channel as input The destination image Calculates the image derivative by convolving the image with the appropriate kernel The Sobel operators combine Gaussian smoothing and differentiation so the result is more or less robust to the noise. Most often, the function is called with (xorder=1, yorder=0, aperture_size=3) or (xorder=0, yorder=1, aperture_size=3) to calculate the first x- or y- image derivative. Order of the derivative x Order of the derivative y Size of the extended Sobel kernel, must be 1, 3, 5 or 7. In all cases except 1, an aperture_size x aperture_size separable kernel will be used to calculate the derivative. The result of the sobel edge detector Calculates the Laplacian of the source image by summing second x- and y- derivatives calculated using the Sobel operator. Specifying aperture_size=1 gives the fastest variant that is equal to convolving the image with the following kernel: |0 1 0| |1 -4 1| |0 1 0| Aperture size The Laplacian of the image Find the edges on this image and mark them in the returned image. The threshold to find initial segments of strong edges The threshold used for edge linking The edges found by the Canny edge detector Find the edges on this image and mark them in the returned image. The threshold to find initial segments of strong edges The threshold used for edge linking The aperture size, use 3 for default a flag, indicating whether a more accurate norm should be used to calculate the image gradient magnitude ( L2gradient=true ), or whether the default norm is enough ( L2gradient=false ). The edges found by the Canny edge detector Iterates to find the sub-pixel accurate location of corners, or radial saddle points Coordinates of the input corners, the values will be modified by this function call Half sizes of the search window. For example, if win=(5,5) then a 5*2+1 x 5*2+1 = 11 x 11 search window is used Half size of the dead region in the middle of the search zone over which the summation in the formulae below is not done. It is used sometimes to avoid possible singularities of the autocorrelation matrix. The value of (-1,-1) indicates that there is no such size Criteria for termination of the iterative process of corner refinement. That is, the process of corner position refinement stops either after a certain number of iterations or when a required accuracy is achieved. The criteria may specify either of or both the maximum number of iterations and the required accuracy Refined corner coordinates
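The Sobel, Laplace and Canny members above can be sketched as follows (thresholds and file names are placeholders):

using Emgu.CV;
using Emgu.CV.Structure;

// Edge maps and derivatives from the operators documented above.
using (Image<Gray, byte> gray = new Image<Gray, byte>("building.jpg"))  // placeholder file
using (Image<Gray, byte> edges = gray.Canny(120, 60))      // high / linking thresholds
using (Image<Gray, float> dx = gray.Sobel(1, 0, 3))        // first x-derivative, 3x3 kernel
using (Image<Gray, float> lap = gray.Laplace(3))           // Laplacian, aperture size 3
{
   edges.Save("edges.png");
}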
The function slides through the image, compares overlapped patches of size wxh with templ using the specified method and returns the comparison results Searched template; must not be greater than the source image and must have the same data type as the image Specifies the way the template must be compared with image regions The comparison result: width = this.Width - template.Width + 1; height = this.Height - template.Height + 1 Perform an elementwise AND operation with another image and return the result The second image for the AND operation The result of the AND operation Perform an elementwise AND operation with another image, using a mask, and return the result The second image for the AND operation The mask for the AND operation The result of the AND operation Perform a binary AND operation with some color The color for the AND operation The result of the AND operation Perform a binary AND operation with some color using a mask The color for the AND operation The mask for the AND operation The result of the AND operation Perform an elementwise OR operation with another image and return the result The second image for the OR operation The result of the OR operation Perform an elementwise OR operation with another image, using a mask, and return the result The second image for the OR operation The mask for the OR operation The result of the OR operation Perform an elementwise OR operation with some color The value for the OR operation The result of the OR operation Perform an elementwise OR operation with some color using a mask The color for the OR operation The mask for the OR operation The result of the OR operation Perform an elementwise XOR operation with another image and return the result The second image for the XOR operation The result of the XOR operation Perform an elementwise XOR operation with another image, using a mask, and return the result The second image for the XOR operation The mask for the XOR operation The result of the XOR operation Perform a binary XOR operation with some color The value for the XOR operation The result of the XOR operation Perform a binary XOR operation with some color using a mask The color for the XOR operation The mask for the XOR operation The result of the XOR operation Compute the complement image The complement image Find the elementwise maximum value The second image for the Max operation An image where each pixel is the maximum of this image and the parameter image Find the elementwise maximum value The value to compare with An image where each pixel is the maximum of this image and the value Find the elementwise minimum value The second image for the Min operation An image where each pixel is the minimum of this image and the parameter image Find the elementwise minimum value The value to compare with An image where each pixel is the minimum of this image and the value Checks that image elements lie between two scalars The inclusive lower limit of color value The inclusive upper limit of color value res[i,j] = 255 if lower <= this[i,j] <= higher, 0 otherwise Checks that image elements lie between values defined by two images of the same size and type The inclusive lower limit of color value The inclusive upper limit of color value res[i,j] = 255 if lower[i,j] <= this[i,j] <= higher[i,j], 0 otherwise Compare the current image with another image and return the comparison mask The other image to compare with The comparison type The result of the comparison as a mask
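The InRange member just described produces a 255/0 mask; a minimal sketch (the BGR bounds are arbitrary):

using Emgu.CV;
using Emgu.CV.Structure;

// Build a binary mask of pixels whose BGR values fall between two scalars.
using (Image<Bgr, byte> img = new Image<Bgr, byte>("scene.jpg"))   // placeholder file
using (Image<Gray, byte> mask = img.InRange(new Bgr(0, 0, 128), new Bgr(80, 80, 255)))
{
   int matching = mask.CountNonzero()[0];   // non-zero count of the single mask channel
}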
Compare the current image with a value and return the comparison mask The value to compare with The comparison type The result of the comparison as a mask Compare two images, returns true if each of the pixels is equal, false otherwise The other image to compare with true if each of the pixels for the two images is equal, false otherwise Use grabcut to perform background foreground segmentation. The initial rectangle region for the foreground The number of iterations to run GrabCut The background foreground mask where 2 indicates background and 3 indicates foreground Elementwise subtract another image from the current image The second image to be subtracted from the current image The result of elementwise subtracting img2 from the current image Elementwise subtract another image from the current image, using a mask The image to be subtracted from the current image The mask for the subtract operation The result of elementwise subtracting img2 from the current image, using the specific mask Elementwise subtract a color from the current image The color value to be subtracted from the current image The result of elementwise subtracting color 'val' from the current image result = val - this The value from which this image is subtracted val - this result = val - this, using a mask The value from which this image is subtracted The mask for subtraction val - this, with mask Elementwise add another image with the current image The image to be added to the current image The result of elementwise adding img2 to the current image Elementwise add with the current image, using a mask The image to be added to the current image The mask for the add operation The result of elementwise adding img2 to the current image, using the specific mask Elementwise add a color to the current image The color value to be added to the current image The result of elementwise adding the color to the current image Elementwise multiply another image with the current image and the scale The image to be elementwise multiplied to the current image The scale to be multiplied this .* img2 * scale Elementwise multiply with the current image The image to be elementwise multiplied to the current image this .* img2 Elementwise multiply the current image with the scale The scale to be multiplied The scaled image Accumulate to the current image using the specific mask The image to be added to the current image The mask Accumulate to the current image The image to be added to the current image Return the weighted sum such that: res = this * alpha + img2 * beta + gamma img2 in: res = this * alpha + img2 * beta + gamma alpha in: res = this * alpha + img2 * beta + gamma beta in: res = this * alpha + img2 * beta + gamma gamma in: res = this * alpha + img2 * beta + gamma this * alpha + img2 * beta + gamma Update Running Average. this = (1-alpha)*this + alpha*img Input image, 1- or 3-channel, Byte or Single (each channel of multi-channel image is processed independently). The weight of the input image Update Running Average. this = (1-alpha)*this + alpha*img, using the mask Input image, 1- or 3-channel, Byte or Single (each channel of multi-channel image is processed independently). The weight of the input image The mask for the running average
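A hedged sketch of the GrabCut member above, combined with InRange to extract the pixels labeled 3 (foreground); rectangle and iteration count are illustrative:

using System.Drawing;
using Emgu.CV;
using Emgu.CV.Structure;

// Segment the foreground inside an initial rectangle with GrabCut.
using (Image<Bgr, byte> img = new Image<Bgr, byte>("scene.jpg"))   // placeholder file
{
   Rectangle initialForeground = new Rectangle(50, 50, 200, 200);
   using (Image<Gray, byte> labels = img.GrabCut(initialForeground, 5))
   // Per the docs above, label 3 marks foreground; turn it into a 0/255 mask.
   using (Image<Gray, byte> fgMask = labels.InRange(new Gray(3), new Gray(3)))
   using (Image<Bgr, byte> foreground = img.Copy(fgMask))
   { }
}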
Computes the absolute difference between this image and the other image The other image to compute the absolute difference with The image that contains the absolute difference value Computes the absolute difference between this image and the specific color The color to compute the absolute difference with The image that contains the absolute difference value Raises every element of the input array to p dst(I)=src(I)^p, if p is integer dst(I)=abs(src(I))^p, otherwise The exponent of power The power image Calculates the exponent of every element of the input array: dst(I)=exp(src(I)) Maximum relative error is ~7e-6. Currently, the function converts denormalized values to zeros on output. The exponent image Calculates the natural logarithm of the absolute value of every element of the input array Natural logarithm of the absolute value of every element of the input array Sample the pixel values on the specific line segment The line to obtain samples The values on the (Eight-connected) line Sample the pixel values on the specific line segment The line to obtain samples The sampling type The values on the line, the first dimension is the index of the point, the second dimension is the index of the color channel Scale the image to the specific size The width of the returned image. The height of the returned image. The type of interpolation The resized image Scale the image to the specific size The width of the returned image. The height of the returned image. The type of interpolation if true, the scale is preserved and the resulting image has the maximum width (height) possible that is <= the specified width (height), if false, this function is equivalent to Resize(int width, int height) The resized image Scale the image to the specific size: width *= scale; height *= scale The scale to resize The type of interpolation The scaled image Rotate the image the specified angle cropping the result to the original size The angle of rotation in degrees. The color with which to fill the background The image rotated by the specific angle Transforms the source image using the specified matrix 2x3 transformation matrix Interpolation type Warp type Pixel extrapolation method A value used to fill outliers The result of the transformation Transforms the source image using the specified matrix 2x3 transformation matrix The width of the resulting image the height of the resulting image Interpolation type Warp type Pixel extrapolation method A value used to fill outliers The result of the transformation Transforms the source image using the specified matrix 3x3 transformation matrix Interpolation type Warp type Pixel extrapolation method A value used to fill outliers The depth type of the transformation matrix, should be either float or double The result of the transformation Transforms the source image using the specified matrix 3x3 transformation matrix The width of the resulting image the height of the resulting image Interpolation type Warp type Border type A value used to fill outliers The depth type of the transformation matrix, should be either float or double The result of the transformation Rotate this image the specified angle The angle of rotation in degrees. The color with which to fill the background If set to true the image is cropped to its original size, possibly losing corners information. If set to false the result image has a different size than the original and all rotation information is preserved The rotated image Rotate this image the specified angle The angle of rotation in degrees. Positive means clockwise. The color with which to fill the background If set to true the image is cropped to its original size, possibly losing corners information. If set to false the result image has a different size than the original and all rotation information is preserved The center of rotation The interpolation method The rotated image
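A minimal sketch of the Resize and Rotate members above, plus a color conversion (sizes, angle and file name are placeholders):

using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;

using (Image<Bgr, byte> img = new Image<Bgr, byte>("photo.jpg"))              // placeholder
using (Image<Bgr, byte> small = img.Resize(320, 240, Inter.Linear))           // bilinear resize
using (Image<Bgr, byte> rotated = img.Rotate(30.0, new Bgr(255, 255, 255)))   // 30 deg clockwise, white fill
using (Image<Gray, byte> gray = img.Convert<Gray, byte>())                    // color/depth conversion
{ }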
Convert the image to log polar, simulating the human foveal vision The transformation center, where the output precision is maximal Magnitude scale parameter Interpolation type Warp type The converted image Convert the current image to the specific color and depth The type of color to be converted to The type of pixel depth to be converted to Image of the specific color and depth Convert the source image to the current image, if the sizes are different, the current image will be a resized version of the srcImage. The color type of the source image The color depth of the source image The sourceImage Convert the source image to the current image, if the sizes are different, the current image will be a resized version of the srcImage. The sourceImage Convert the current image to the specific depth, at the same time scale and shift the values of the pixel The value to be multiplied with the pixel The value to be added to the pixel The type of depth to convert to Image of the specific depth, val = val * scale + shift Utility function for Bitmap Set property Convert this image into Bitmap, the pixel values are copied over to the Bitmap For better performance on Image<Gray, Byte> and Image<Bgr, Byte>, consider using the Bitmap property This image in Bitmap format, the pixel data are copied over to the Bitmap Create a Bitmap image of a certain size The width of the bitmap The height of the bitmap This image in Bitmap format of the specific size Performs the downsampling step of Gaussian pyramid decomposition. First it convolves this image with the specified filter and then downsamples the image by rejecting even rows and columns. The downsampled image Performs the up-sampling step of Gaussian pyramid decomposition. First it upsamples this image by injecting even zero rows and columns and then convolves the result with the specified filter multiplied by 4 for interpolation. So the resulting image is four times larger than the source image. The upsampled image Compute the image pyramid The number of levels for the pyramid; Level 0 refers to the current image, level n is computed by calling the PyrDown() function on level n-1 The image pyramid Use inpaint to recover the intensity of the pixels whose location is defined by the mask on this image The inpainting mask. Non-zero pixels indicate the area that needs to be inpainted The radius of the circular neighborhood of each point inpainted that is considered by the algorithm The inpainted image Perform advanced morphological transformations using erosion and dilation as basic operations. Structuring element Anchor position within the kernel. Negative values mean that the anchor is at the kernel center. Type of morphological operation Number of times erosion and dilation are applied Border type Border value The result of the morphological operation Perform inplace advanced morphological transformations using erosion and dilation as basic operations. Structuring element Anchor position within the kernel. Negative values mean that the anchor is at the kernel center. Type of morphological operation Number of times erosion and dilation are applied Border type Border value
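A minimal sketch of the 3x3 erode/dilate helpers documented next, which build on these morphological operations (iteration counts and the file name are placeholders):

using Emgu.CV;
using Emgu.CV.Structure;

// Morphological cleanup of a binary mask with 3x3 erosion and dilation.
using (Image<Gray, byte> mask = new Image<Gray, byte>("mask.png"))   // placeholder file
using (Image<Gray, byte> eroded = mask.Erode(2))    // 2 erosion iterations
using (Image<Gray, byte> dilated = mask.Dilate(2))  // 2 dilation iterations
{ }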
Erodes this image using a 3x3 rectangular structuring element. Erosion is applied several (iterations) times The number of erode iterations The eroded image Dilates this image using a 3x3 rectangular structuring element. Dilation is applied several (iterations) times The number of dilate iterations The dilated image Erodes this image inplace using a 3x3 rectangular structuring element. Erosion is applied several (iterations) times The number of erode iterations Dilates this image inplace using a 3x3 rectangular structuring element. Dilation is applied several (iterations) times The number of dilate iterations Perform a generic action based on each element of the image The action to be applied to each element of the image Perform a generic operation based on the elements of the two images The depth of the second image The second image to perform action on An action such that the first parameter is a single channel of a pixel from the first image, the second parameter is the corresponding channel of the corresponding pixel from the second image Compute the element of a new image based on the value as well as the x and y positions of each pixel on the image Compute the element of the new image based on an element of this image Compute the element of the new image based on the elements of the two images Compute the element of the new image based on the elements of the three images Compute the element of the new image based on the elements of the four images Release all unmanaged memory associated with the image Perform an elementwise AND operation on the two images The first image to AND The second image to AND The result of the AND operation Perform an elementwise AND operation using an image and a color The first image to AND The color to AND The result of the AND operation Perform an elementwise AND operation using an image and a color The first image to AND The color to AND The result of the AND operation Perform an elementwise AND operation using an image and a color The first image to AND The color to AND The result of the AND operation Perform an elementwise AND operation using an image and a color The first image to AND The color to AND The result of the AND operation Perform an elementwise OR operation with another image and return the result The first image to apply the bitwise OR operation The second image to apply the bitwise OR operation The result of the OR operation Perform a binary OR operation with some color The image to OR The color to OR The result of the OR operation Perform a binary OR operation with some color The image to OR The color to OR The result of the OR operation Perform a binary OR operation with some color The image to OR The color to OR The result of the OR operation Perform a binary OR operation with some color The image to OR The color to OR The result of the OR operation Compute the complement image The image to be inverted The complement image Elementwise add the two images The first image to be added The second image to be added The sum of the two images Elementwise add an image and a value The image to be added The value to be added The image plus the color Elementwise add an image and a value The image to be added The value to be added The image plus the color Elementwise add an image and a color The image to be added The color to be added The image plus the color Elementwise add an image and a color The image to be added The color to be added The image plus the color Elementwise subtract another image from the current image The image to be subtracted The second image to be subtracted from The result of elementwise subtracting img2 from img1
Elementwise subtract another image from the current image The image to be subtracted The color to be subtracted The result of elementwise subtracting the color from the image Elementwise subtract a color from the image The image to be subtracted The color to be subtracted The result of the subtraction Elementwise subtract a value from the image The image to be subtracted The value to be subtracted The result of the subtraction Elementwise subtract a value from the image The image to be subtracted The value to be subtracted The result of the subtraction Elementwise multiply the image by a scale The image The multiplication scale The image multiplied by the scale Elementwise multiply the image by a scale The image The multiplication scale The image multiplied by the scale Perform the convolution with the kernel on the image The image The kernel Result of the convolution Elementwise divide the image by a scale The image The division scale The image divided by the scale Elementwise divide a scale by the image The image The scale The result of the division Summation over a pixel param1 x param2 neighborhood with subsequent scaling by 1/(param1 x param2) The width of the window The height of the window The result of blur Summation over a pixel param1 x param2 neighborhood. If scale is true, the result is subsequently scaled by 1/(param1 x param2) The width of the window The height of the window If true, the result is subsequently scaled by 1/(param1 x param2) The result of blur Finding the median of a size x size neighborhood The size (width & height) of the window The result of the median smooth Applying bilateral 3x3 filtering Color sigma Space sigma The size of the bilateral kernel The result of bilateral smooth Perform Gaussian Smoothing on the current image and return the result The size of the Gaussian kernel (size x size) The smoothed image Perform Gaussian Smoothing on the current image and return the result The width of the Gaussian kernel The height of the Gaussian kernel The standard deviation of the Gaussian kernel in the horizontal dimension The standard deviation of the Gaussian kernel in the vertical dimension The smoothed image Perform Gaussian Smoothing inplace for the current image The size of the Gaussian kernel (size x size) Perform Gaussian Smoothing inplace for the current image The width of the Gaussian kernel The height of the Gaussian kernel The standard deviation of the Gaussian kernel in the horizontal dimension The standard deviation of the Gaussian kernel in the vertical dimension Performs a convolution using the specific kernel The convolution kernel The result of the convolution Calculates the integral image for the source image The integral image Calculates integral images for the source image The integral image The integral image for squared pixel values The integral image Calculates one or more integral images for the source image The integral image The integral image for squared pixel values The integral for the image rotated by 45 degrees Transforms a grayscale image to a binary image. The threshold is calculated individually for each pixel. For the method CV_ADAPTIVE_THRESH_MEAN_C it is the mean of a blockSize x blockSize pixel neighborhood, subtracted by param1. For the method CV_ADAPTIVE_THRESH_GAUSSIAN_C it is a weighted sum (gaussian) of a blockSize x blockSize pixel neighborhood, subtracted by param1. Maximum value to use with CV_THRESH_BINARY and CV_THRESH_BINARY_INV thresholding types Adaptive_method Thresholding type. Must be one of CV_THRESH_BINARY, CV_THRESH_BINARY_INV The size of a pixel neighborhood that is used to calculate a threshold value for the pixel: 3, 5, 7, ... Constant subtracted from the mean or weighted mean. It may be negative. The result of the adaptive threshold
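A minimal sketch of the box, median and Gaussian smoothing members documented above (kernel sizes and file name are placeholders):

using Emgu.CV;
using Emgu.CV.Structure;

using (Image<Bgr, byte> img = new Image<Bgr, byte>("noisy.jpg"))    // placeholder file
using (Image<Bgr, byte> box = img.SmoothBlur(5, 5))                 // 5x5 averaging window
using (Image<Bgr, byte> median = img.SmoothMedian(5))               // 5x5 median filter
using (Image<Bgr, byte> gauss = img.SmoothGaussian(5))              // 5x5 Gaussian kernel
{ }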
The base threshold method shared by public threshold functions Threshold the image such that: dst(x,y) = src(x,y), if src(x,y)>threshold; 0, otherwise The threshold value dst(x,y) = src(x,y), if src(x,y)>threshold; 0, otherwise Threshold the image such that: dst(x,y) = 0, if src(x,y)>threshold; src(x,y), otherwise The threshold value The image such that: dst(x,y) = 0, if src(x,y)>threshold; src(x,y), otherwise Threshold the image such that: dst(x,y) = threshold, if src(x,y)>threshold; src(x,y), otherwise The threshold value The image such that: dst(x,y) = threshold, if src(x,y)>threshold; src(x,y), otherwise Threshold the image such that: dst(x,y) = max_value, if src(x,y)>threshold; 0, otherwise The image such that: dst(x,y) = max_value, if src(x,y)>threshold; 0, otherwise Threshold the image such that: dst(x,y) = 0, if src(x,y)>threshold; max_value, otherwise The threshold value The maximum value of the pixel on the result The image such that: dst(x,y) = 0, if src(x,y)>threshold; max_value, otherwise Threshold the image inplace such that: dst(x,y) = src(x,y), if src(x,y)>threshold; 0, otherwise The threshold value Threshold the image inplace such that: dst(x,y) = 0, if src(x,y)>threshold; src(x,y), otherwise The threshold value Threshold the image inplace such that: dst(x,y) = threshold, if src(x,y)>threshold; src(x,y), otherwise The threshold value Threshold the image inplace such that: dst(x,y) = max_value, if src(x,y)>threshold; 0, otherwise The threshold value The maximum value of the pixel on the result Threshold the image inplace such that: dst(x,y) = 0, if src(x,y)>threshold; max_value, otherwise The threshold value The maximum value of the pixel on the result Calculates the average value and standard deviation of array elements, independently for each channel The avg color The standard deviation for each channel The operation mask Calculates the average value and standard deviation of array elements, independently for each channel The avg color The standard deviation for each channel Count the non-zero elements for each channel Count of the non-zero elements for each channel Returns the min / max locations and values for the image The maximum locations for each channel The maximum values for each channel The minimum locations for each channel The minimum values for each channel Return a flipped copy of the current image The type of the flipping The flipped copy of this image Inplace flip the image The type of the flipping Concatenate the current image with another image vertically. The other image to concatenate A new image that is the vertical concatenation of this image and the other image Concatenate the current image with another image horizontally. The other image to concatenate A new image that is the horizontal concatenation of this image and the other image Calculates spatial and central moments up to the third order and writes them to moments. The moments may then be used to calculate the gravity center of the shape, its area, main axes and various shape characteristics including 7 Hu invariants. If the flag is true, all the zero pixel values are treated as zeroes, all the others are treated as 1's Spatial and central moments up to the third order
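Putting the threshold members above to use, a hedged sketch of global and adaptive binarization (the CvEnum member names AdaptiveThresholdType.GaussianC and ThresholdType.Binary are assumptions):

using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;

using (Image<Gray, byte> gray = new Image<Gray, byte>("page.png"))   // placeholder file
using (Image<Gray, byte> global = gray.ThresholdBinary(new Gray(128), new Gray(255)))
using (Image<Gray, byte> adaptive = gray.ThresholdAdaptive(
   new Gray(255), AdaptiveThresholdType.GaussianC, ThresholdType.Binary,
   11,             // neighborhood size (odd)
   new Gray(5)))   // constant subtracted from the weighted mean
{ }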
Gamma corrects this image inplace. The image must have a depth type of Byte. The gamma value Split the current Image into an array of gray scale images where each element in the array represents a single color channel of the original image An array of gray scale images where each element in the array represents a single color channel of the original image Save this image to the specific file. The name of the file to be saved to The image format is chosen depending on the filename extension, see cvLoadImage. Only 8-bit single-channel or 3-channel (with 'BGR' channel order) images can be saved using this function. If the format, depth or channel order is different, use cvCvtScale and cvCvtColor to convert it before saving, or use universal cvSave to save the image to XML or YAML format. The algorithm inplace normalizes brightness and increases contrast of the image. For color images, an HSV representation of the image is first obtained and the V (value) channel is histogram normalized This function loads the image data from a Mat The Mat This function loads the image data from the iplImage pointer The pointer to the iplImage Get the managed image from an unmanaged IplImage pointer The pointer to the iplImage The managed image from the iplImage pointer Get the jpeg representation of the image A byte array that contains the image as jpeg data Get or Set the data for this matrix. The Get function has O(1) complexity. The Set function makes a copy of the data If the image contains Byte and the width is not a multiple of 4, the second dimension of the array might be larger than the Width of this image. This is necessary since the length of a row needs to be 4-aligned for OpenCV optimization. The Set function always makes a copy of the specific value. If the image contains Byte and the width is not a multiple of 4, the second dimension of the array created might be larger than the Width of this image. The IplImage structure Get or Set the region of interest for this image. To clear the ROI, set it to System.Drawing.Rectangle.Empty Get the number of channels for this image Get the underneath managed array Get the equivalent opencv depth type for this image Indicates if the region of interest has been set Get or Set the specific channel of the current image. For the Get operation, a copy of the specific channel is returned. For the Set operation, the specific channel is copied to this image. The channel to get from the current image, zero based index The specific channel of the current image Get or Set the color in the specified row (y direction) and column (x direction) The zero-based row (y direction) of the pixel The zero-based column (x direction) of the pixel The color in the specific row and column Get or Set the color at the location of the pixel The location of the pixel The color at the location The Get property provides a more efficient way to convert Image<Gray, Byte>, Image<Bgr, Byte> and Image<Bgra, Byte> into Bitmap such that the image data is shared with the Bitmap. If you change the pixel value on the Bitmap, you change the pixel values on the Image object as well! For other types of image this property has the same effect as ToBitmap() Take extra caution not to use the Bitmap after the Image object is disposed The Set property converts the bitmap to this Image type. Get the size of the array Constants used by the image class Offset of roi
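A minimal sketch of pixel access through the indexer and the Data array documented above (note the [row, column, channel] ordering):

using Emgu.CV;
using Emgu.CV.Structure;

using (Image<Bgr, byte> img = new Image<Bgr, byte>(320, 240, new Bgr(0, 0, 0)))
{
   img[120, 160] = new Bgr(0, 0, 255);     // row/column indexer takes a color value
   byte blue = img.Data[120, 160, 0];      // direct data access: [row, column, channel]
}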
The stereo matcher interface Pointer to the stereo matcher The class implements a standard Kalman filter. However, you can modify transitionMatrix, controlMatrix, and measurementMatrix to get an extended Kalman filter functionality. Initializes a new instance of the class. Dimensionality of the state. Dimensionality of the measurement. Dimensionality of the control vector. Type of the created matrices that should be Cv32F or Cv64F Perform the predict operation using the optional control input The control. The predicted state. Updates the predicted state from the measurement. The measured system parameters Release the unmanaged resources Predicted state (x'(k)): x'(k)=A*x(k-1)+B*u(k) Corrected state (x(k)): x(k)=x'(k)+K(k)*(z(k)-H*x'(k)) State transition matrix (A) Control matrix (B) (not used if there is no control) Measurement matrix (H) Process noise covariance matrix (Q) Measurement noise covariance matrix (R) Priori error estimate covariance matrix (P'(k)): P'(k)=A*P(k-1)*At + Q Kalman gain matrix (K(k)): K(k)=P'(k)*Ht*inv(H*P'(k)*Ht+R) Posteriori error estimate covariance matrix (P(k)): P(k)=(I-K(k)*H)*P'(k) A Map is similar to an Image, except that the location of the pixels is defined by its area and resolution The color of this map The depth of this map Create a new Image Map defined by the Rectangle area. The center (0.0, 0.0) of this map is defined by the center of the rectangle. The resolution of x (y), (e.g. a value of 0.5 means each cell in the map is 0.5 unit in the x (y) dimension) The initial color of the map Create a new Image Map defined by the Rectangle area. The center (0.0, 0.0) of this map is defined by the center of the rectangle. The initial value of the map is 0.0 The resolution of x (y), (e.g. a value of 0.5 means each cell in the map is 0.5 unit in the x (y) dimension) Map a point to a position in the internal image Map a point to a position in the internal image Map an image point to a Map point The point on the image The point on the map Get a copy of the map in the specific area The area of the map to be retrieved The area of the map Draw a rectangle in the map The rectangle to draw The color for the rectangle The thickness of the rectangle, any value less than or equal to 0 will result in a filled rectangle Draw a line segment in the map The line to be drawn The color for the line The thickness of the line Line type Number of fractional bits in the center coordinates and radius value Draw a Circle of the specific color and thickness The circle to be drawn The color of the circle If thickness is less than 1, the circle is filled up Line type Number of fractional bits in the center coordinates and radius value Draw a convex polygon of the specific color and thickness The convex polygon to be drawn The color of the convex polygon If thickness is less than 1, the polygon is filled up Draw the text using the specific font on the image The text message to be drawn Font type. Font scale factor that is multiplied by the font-specific base size. The location of the bottom left corner of the font The color of the text Thickness of the lines used to draw the text. Line type When true, the image data origin is at the bottom-left corner. Otherwise, it is at the top-left corner. Draw the polyline defined by the array of 2D points The points that define the polyline if true, the last line segment is defined by the last point of the array and the first point of the array the color used for drawing the thickness of the line Constructor used to deserialize runtime serialized object The serialization info The streaming context A function used for runtime serialization of the object Serialization info streaming context Get the area of this map as a rectangle Get the resolution of this map as a 2D point
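Returning to the KalmanFilter above, a bare-bones predict/correct cycle. The state layout is illustrative, and for a real tracker the transition (A), measurement (H) and noise (Q, R) matrices listed above must be configured first:

using Emgu.CV;
using Emgu.CV.CvEnum;

// One predict/correct cycle: 4-D state (e.g. x, y, vx, vy), 2-D measurement (x, y).
using (KalmanFilter kf = new KalmanFilter(4, 2, 0, DepthType.Cv32F))
using (Mat measurement = new Mat(2, 1, DepthType.Cv32F, 1))
{
   // Configure kf's transition, measurement and noise covariance matrices here.
   Mat predicted = kf.Predict();             // x'(k) = A*x(k-1); no control input
   // ... fill `measurement` with the observed values, then fuse them:
   Mat corrected = kf.Correct(measurement);  // x(k) = x'(k) + K(k)*(z(k) - H*x'(k))
}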
Get or Set the region of interest for this map. To clear the ROI, set it to System.Drawing.RectangleF.Empty A MatND is a wrapper to cvMatND of OpenCV. The type of depth Create an N-dimensional matrix The size for each dimension Constructor used to deserialize runtime serialized object The serialization info The streaming context This function is not implemented for MatND Not implemented Not implemented Not implemented Release the matrix and all the memory associated with it A function used for runtime serialization of the object Serialization info Streaming context A function used for runtime deserialization of the object Serialization info Streaming context Not Implemented The XmlReader Not Implemented The XmlWriter Convert this matrix to a different depth The depth type to convert to Matrix of a different depth Check if the two MatND are equal The other MatND to compare to True if the two MatND are equal This function is not implemented for MatND Get the underneath managed array Get the depth representation for openCV The MCvMatND structure The base class for algorithms that can merge an exposure sequence into a single image. The pointer to the unmanaged MergeExposure object Merges images. Vector of input images Result image Vector of exposure time values for each image 256x1 matrix with inverse camera response function for each pixel value, it should have the same number of channels as the images. The motion history class For help on using this class, take a look at the Motion Detection example Create a motion history object In seconds, the duration of motion history you want to keep In seconds. Any change that happens over a time interval greater than this will not be considered In seconds. Any change that happens over a time interval smaller than this will not be considered. Create a motion history object In seconds, the duration of motion history you want to keep In seconds. Any change that happens over a time interval larger than this will not be considered In seconds. Any change that happens over a time interval smaller than this will not be considered. The start time of the motion history Update the motion history with the specific image and the current timestamp The image to be added to history Update the motion history with the specific image and the specific timestamp The foreground of the image to be added to history The time when the image is captured Get a sequence of motion components A sequence of motion components Given a rectangle area of the motion, output the angle of the motion and the number of pixels that are considered to be motion pixels The rectangle area of the motion The orientation of the motion Number of motion pixels within the silhouette ROI The foreground mask used to calculate the motion info. Release unmanaged resources Release any images associated with this object The motion mask. Do not dispose this image. This class contains ocl runtime information Create an empty OclDevice object Release all the unmanaged memory associated with this OclInfo Set the native device pointer Get the string representation of this oclDevice A string representation of this oclDevice Get the default OclDevice. Do not dispose this device.
Get the native device pointer Indicates if this is an NVidia device Indicates if this is an Intel device Indicates if this is an AMD device The AddressBits Indicates if the linker is available Indicates if the compiler is available Indicates if the device is available The maximum work group size The max compute unit The local memory size The maximum memory allocation size The device major version number The device minor version number The device half floating point configuration The device single floating point configuration The device double floating point configuration True if the device uses unified memory The global memory size The image2d max width The image2d max height The ocl device type The device name The device version The device vendor name The device driver version The device extensions The device OpenCL version The device OpenCL C version Ocl Device Type Default Cpu Gpu Accelerator DGpu IGpu All Floating point configuration Denorm inf, nan round to nearest round to zero round to infinity FMA soft float Correctly rounded divide sqrt Class that contains ocl functions Class that contains ocl functions. Class that contains ocl functions. Get all the platform info as a vector The vector of Platform info An opencl kernel Create an opencl kernel Create an opencl kernel The name of the kernel The program source code The build options Optional error message container that can be passed to this function True if the kernel can be created Release the opencl kernel Indicates if the kernel is empty The pointer to the native kernel This class contains ocl platform information Release all the unmanaged memory associated with this OclInfo Get the OclDevice with the specific index The index of the ocl device The ocl device with the specific index Get the string that represents this oclPlatformInfo object A string that represents this oclPlatformInfo object The platform name The platform version The platform vendor The number of devices Type for cvNorm if arr2 is NULL, norm = ||arr1||_C = max_I abs(arr1(I)); if arr2 is not NULL, norm = ||arr1-arr2||_C = max_I abs(arr1(I)-arr2(I)) if arr2 is NULL, norm = ||arr1||_L1 = sum_I abs(arr1(I)); if arr2 is not NULL, norm = ||arr1-arr2||_L1 = sum_I abs(arr1(I)-arr2(I)) if arr2 is NULL, norm = ||arr1||_L2 = sqrt( sum_I arr1(I)^2); if arr2 is not NULL, norm = ||arr1-arr2||_L2 = sqrt( sum_I (arr1(I)-arr2(I))^2 ) It is used in combination with either CV_C, CV_L1 or CV_L2 It is used in combination with either CV_C, CV_L1 or CV_L2 norm = ||arr1-arr2||_C/||arr2||_C norm = ||arr1-arr2||_L1/||arr2||_L1 norm = ||arr1-arr2||_L2/||arr2||_L2 Type used for cvReduce function The output is the sum of all the matrix rows/columns The output is the mean vector of all the matrix rows/columns The output is the maximum (column/row-wise) of all the matrix rows/columns The output is the minimum (column/row-wise) of all the matrix rows/columns Type used for cvReduce function The matrix is reduced to a single row The matrix is reduced to a single column The dimension is chosen automatically by analysing the dst size Type used for cvCmp function src1(I) "equal to" src2(I) src1(I) "greater than" src2(I) src1(I) "greater or equal" src2(I) src1(I) "less than" src2(I) src1(I) "less or equal" src2(I) src1(I) "not equal to" src2(I) CV Capture property identifier Turn the feature off (not controlled manually nor automatically) Set automatically when a value of the feature is set by the user DC1394 mode auto DC1394 mode one push auto Film current position in milliseconds or video capture
timestamp 0-based index of the frame to be decoded/captured next Position in relative units (0 - start of the file, 1 - end of the file) Width of frames in the video stream Height of frames in the video stream Frame rate 4-character code of codec Number of frames in video file Format Mode Brightness Contrast Saturation Hue Gain Exposure Convert RGB White balance blue u Rectification Monochrome Sharpness Exposure control done by camera, user can adjust reference level using this feature Gamma Temperature Trigger Trigger delay White balance red v Zoom Focus GUID ISO SPEED MAX DC1394 Backlight Pan Tilt Roll Iris Settings property for highgui class CvCapture_Android only readonly, tricky property, returns const char* indeed readonly, tricky property, returns const char* indeed OpenNI map generators OpenNI map generators OpenNI map generators Properties of cameras available through OpenNI interfaces Properties of cameras available through OpenNI interfaces, in mm. Properties of cameras available through OpenNI interfaces, in mm. Properties of cameras available through OpenNI interfaces, in pixels. Flag that synchronizes the remapping depth map to image map by changing depth generator's view point (if the flag is "on") or sets this view point to its normal one (if the flag is "off"). Flag that synchronizes the remapping depth map to image map by changing depth generator's view point (if the flag is "on") or sets this view point to its normal one (if the flag is "off"). Approx frame sync Max buffer size Circle buffer Max time duration Generator present Openni image generator present Image generator output mode Depth generator baseline, in mm. Depth generator focal length, in pixels. Openni generator registration Openni generator registration on Properties of cameras available through GStreamer interface. Default is 1 IP for enabling multicast master mode. 0 to disable multicast Change image resolution by binning or skipping. Output data format Horizontal offset from the origin to the area of interest (in pixels). Vertical offset from the origin to the area of interest (in pixels). Defines source of trigger. Generates an internal trigger. PRM_TRG_SOURCE must be set to TRG_SOFTWARE. Selects general purpose input Set general purpose input mode Get general purpose level Selects general purpose output Set general purpose output mode Selects camera signaling LED Define camera signaling LED functionality Calculates White Balance (must be called during acquisition) Automatic white balance Automatic exposure/gain Exposure priority (0.5 - exposure 50%, gain 50%).
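The capture property identifiers above map to the CV_CAP_PROP_ macros. A hedged sketch of getting and setting them through the Emgu wrapper (named VideoCapture in recent releases, Capture in older ones):

using Emgu.CV;
using Emgu.CV.CvEnum;

class CaptureSketch
{
    static void Main()
    {
        using (VideoCapture capture = new VideoCapture(0)) // camera index 0
        using (Mat frame = new Mat())
        {
            capture.SetCaptureProperty(CapProp.FrameWidth, 640);
            capture.SetCaptureProperty(CapProp.FrameHeight, 480);
            double fps = capture.GetCaptureProperty(CapProp.Fps);

            if (capture.Read(frame))
            {
                // frame now holds the decoded image (grab + retrieve in one call)
            }
        }
    }
}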
Maximum limit of exposure in AEAG procedure Maximum limit of gain in AEAG procedure Average intensity of output signal AEAG should achieve (in %) Image capture timeout in milliseconds Android flash mode Android focus mode Android white balance Android anti banding Android focal length Android focus distance near Android focus distance optimal Android focus distance far iOS device focus iOS device exposure iOS device flash iOS device white-balance iOS device torch Smartek Giganetix Ethernet Vision: frame offset X Smartek Giganetix Ethernet Vision: frame offset Y Smartek Giganetix Ethernet Vision: frame width max Smartek Giganetix Ethernet Vision: frame height max Smartek Giganetix Ethernet Vision: frame sens width Smartek Giganetix Ethernet Vision: frame sens height The named window type The user can resize the window (no constraint) / also used to switch a fullscreen window to a normal size The user cannot resize the window, the size is constrained by the image displayed Window with opengl support Change the window to fullscreen The image expands as much as it can (no ratio constraint) The ratio of the image is respected Contour approximation method Output contours in the Freeman chain code. All other methods output polygons (sequences of vertices). Translate all the points from the chain code into points; Compress horizontal, vertical, and diagonal segments, that is, the function leaves only their ending points; Apply one of the flavors of the Teh-Chin chain approximation algorithm Use a completely different contour retrieval algorithm via linking of horizontal segments of 1s. Only the LIST retrieval mode can be used with this method Color Conversion code Convert BGR color to BGRA color Convert RGB color to RGBA color Convert BGRA color to BGR color Convert RGBA color to RGB color Convert BGR color to RGBA color Convert RGB color to BGRA color Convert RGBA color to BGR color Convert BGRA color to RGB color Convert BGR color to RGB color Convert RGB color to BGR color Convert BGRA color to RGBA color Convert RGBA color to BGRA color Convert BGR color to GRAY color Convert RGB color to GRAY color Convert GRAY color to BGR color Convert GRAY color to RGB color Convert GRAY color to BGRA color Convert GRAY color to RGBA color Convert BGRA color to GRAY color Convert RGBA color to GRAY color Convert BGR color to BGR565 color Convert RGB color to BGR565 color Convert BGR565 color to BGR color Convert BGR565 color to RGB color Convert BGRA color to BGR565 color Convert RGBA color to BGR565 color Convert BGR565 color to BGRA color Convert BGR565 color to RGBA color Convert GRAY color to BGR565 color Convert BGR565 color to GRAY color Convert BGR color to BGR555 color Convert RGB color to BGR555 color Convert BGR555 color to BGR color Convert BGR555 color to RGB color Convert BGRA color to BGR555 color Convert RGBA color to BGR555 color Convert BGR555 color to BGRA color Convert BGR555 color to RGBA color Convert GRAY color to BGR555 color Convert BGR555 color to GRAY color Convert BGR color to XYZ color Convert RGB color to XYZ color Convert XYZ color to BGR color Convert XYZ color to RGB color Convert BGR color to YCrCb color Convert RGB color to YCrCb color Convert YCrCb color to BGR color Convert YCrCb color to RGB color Convert BGR color to HSV color Convert RGB color to HSV color Convert BGR color to Lab color Convert RGB color to Lab color Convert BayerBG color to BGR color Convert BayerGB color to BGR color Convert BayerRG color to BGR color Convert BayerGR color to BGR color Convert BayerBG color to RGB color
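A small sketch applying two of the color conversion codes listed here through CvInvoke.CvtColor. The input file name is a placeholder, and LoadImageType is the cvLoadImage enum documented later in this section (newer releases rename it ImreadModes).

using Emgu.CV;
using Emgu.CV.CvEnum;

class CvtColorSketch
{
    static void Main()
    {
        using (Mat bgr = CvInvoke.Imread("input.jpg", LoadImageType.Color))
        using (Mat gray = new Mat())
        using (Mat hsv = new Mat())
        {
            CvInvoke.CvtColor(bgr, gray, ColorConversion.Bgr2Gray); // BGR -> GRAY
            CvInvoke.CvtColor(bgr, hsv, ColorConversion.Bgr2Hsv);   // BGR -> HSV
        }
    }
}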
Convert BayerGB color to RGB color Convert BayerRG color to RGB color Convert BayerGR color to RGB color Convert BGR color to Luv color Convert RGB color to Luv color Convert BGR color to HLS color Convert RGB color to HLS color Convert HSV color to BGR color Convert HSV color to RGB color Convert Lab color to BGR color Convert Lab color to RGB color Convert Luv color to BGR color Convert Luv color to RGB color Convert HLS color to BGR color Convert HLS color to RGB color Convert BayerBG pattern to BGR color using VNG Convert BayerGB pattern to BGR color using VNG Convert BayerRG pattern to BGR color using VNG Convert BayerGR pattern to BGR color using VNG Convert BayerBG pattern to RGB color using VNG Convert BayerGB pattern to RGB color using VNG Convert BayerRG pattern to RGB color using VNG Convert BayerGR pattern to RGB color using VNG Convert BGR to HSV Convert RGB to HSV Convert BGR to HLS Convert RGB to HLS Convert HSV color to BGR color Convert HSV color to RGB color Convert HLS color to BGR color Convert HLS color to RGB color Convert sBGR color to Lab color Convert sRGB color to Lab color Convert sBGR color to Luv color Convert sRGB color to Luv color Convert Lab color to sBGR color Convert Lab color to sRGB color Convert Luv color to sBGR color Convert Luv color to sRGB color Convert BGR color to YUV Convert RGB color to YUV Convert YUV color to BGR Convert YUV color to RGB Convert BayerBG to GRAY Convert BayerGB to GRAY Convert BayerRG to GRAY Convert BayerGR to GRAY Convert YUV420i to RGB Convert YUV420i to BGR Convert YUV420sp to RGB Convert YUV420sp to BGR Convert YUV420i to RGBA Convert YUV420i to BGRA Convert YUV420sp to RGBA Convert YUV420sp to BGRA Convert YUV (YV12) to RGB Convert YUV (YV12) to BGR Convert YUV (iYUV) to RGB Convert YUV (iYUV) to BGR Convert YUV (i420) to RGB Convert YUV (i420) to BGR Convert YUV (420p) to RGB Convert YUV (420p) to BGR Convert YUV (YV12) to RGBA Convert YUV (YV12) to BGRA Convert YUV (iYUV) to RGBA Convert YUV (iYUV) to BGRA Convert YUV (i420) to RGBA Convert YUV (i420) to BGRA Convert YUV (420p) to RGBA Convert YUV (420p) to BGRA Convert YUV 420 to Gray Convert YUV NV21 to Gray Convert YUV NV12 to Gray Convert YUV YV12 to Gray Convert YUV (iYUV) to Gray Convert YUV (i420) to Gray Convert YUV (420sp) to Gray Convert YUV (420p) to Gray Convert YUV (UYVY) to RGB Convert YUV (UYVY) to BGR Convert YUV (Y422) to RGB Convert YUV (Y422) to BGR Convert YUV (UYNV) to RGB Convert YUV (UYNV) to BGR Convert YUV (UYVY) to RGBA Convert YUV (UYVY) to BGRA Convert YUV (Y422) to RGBA Convert YUV (Y422) to BGRA Convert YUV (UYNV) to RGBA Convert YUV (UYNV) to BGRA Convert YUV (YUY2) to RGB Convert YUV (YUY2) to BGR Convert YUV (YVYU) to RGB Convert YUV (YVYU) to BGR Convert YUV (YUYV) to RGB Convert YUV (YUYV) to BGR Convert YUV (YUNV) to RGB Convert YUV (YUNV) to BGR Convert YUV (YUY2) to RGBA Convert YUV (YUY2) to BGRA Convert YUV (YVYU) to RGBA Convert YUV (YVYU) to BGRA Convert YUV (YUYV) to RGBA Convert YUV (YUYV) to BGRA Convert YUV (YUNV) to RGBA Convert YUV (YUNV) to BGRA Convert YUV (UYVY) to Gray Convert YUV (YUY2) to Gray Convert YUV (Y422) to Gray Convert YUV (UYNV) to Gray Convert YUV (YVYU) to Gray Convert YUV (YUYV) to Gray Convert YUV (YUNV) to Gray Alpha premultiplication Alpha premultiplication Convert RGB to YUV_I420 Convert BGR to YUV_I420 Convert RGB to YUV_IYUV Convert BGR to YUV_IYUV Convert RGBA to YUV_I420 Convert BGRA to YUV_I420 Convert RGBA to YUV_IYUV Convert BGRA to YUV_IYUV Convert RGB to YUV_YV12 Convert BGR to
YUV_YV12 Convert RGBA to YUV_YV12 Convert BGRA to YUV_YV12 Convert BayerBG to BGR (Edge-Aware Demosaicing) Convert BayerGB to BGR (Edge-Aware Demosaicing) Convert BayerRG to BGR (Edge-Aware Demosaicing) Convert BayerGR to BGR (Edge-Aware Demosaicing) Convert BayerBG to RGB (Edge-Aware Demosaicing) Convert BayerGB to RGB (Edge-Aware Demosaicing) Convert BayerRG to RGB (Edge-Aware Demosaicing) Convert BayerGR to RGB (Edge-Aware Demosaicing) The max number, do not use Fonts Hershey simplex Hershey plain Hershey duplex Hershey complex Hershey triplex Hershey complex small Hershey script simplex Hershey script complex Flags used for GEMM function Do not apply transpose to either matrix transpose src1 transpose src2 transpose src3 Hough detection type Inpaint type Navier-Stokes based method. The method by Alexandru Telea Edge preserving filter flag Recurs filter Norm conv filter Interpolation types Nearest-neighbor interpolation Bilinear interpolation Resampling using pixel area relation. It is the preferred method for image decimation that gives moire-free results. In case of zooming it is similar to the CV_INTER_NN method Bicubic interpolation LANCZOS 4 Interpolation type (simple blur with no scaling) - summation over a pixel param1xparam2 neighborhood. If the neighborhood size may vary, one may precompute the integral image with the cvIntegral function (simple blur) - summation over a pixel param1xparam2 neighborhood with subsequent scaling by 1/(param1xparam2). (Gaussian blur) - convolving image with param1xparam2 Gaussian kernel. (median blur) - finding median of param1xparam1 neighborhood (i.e. the neighborhood is square). (bilateral filter) - applying bilateral 3x3 filtering with color sigma=param1 and space sigma=param2. Information about bilateral filtering can be found cvLoadImage type 8bit, color or not 8bit, gray ?, color any depth, ? ?, any color OpenCV depth type default Byte SByte UInt16 Int16 Int32 float double Contour retrieval mode Retrieve only the extreme outer contours Retrieve all the contours and put them in the list Retrieve all the contours and organize them into a two-level hierarchy: the top level contains the external boundaries of the components, the second level contains the boundaries of the holes Retrieve all the contours and reconstruct the full hierarchy of nested contours The bit to shift for SEQ_ELTYPE The mask of CV_SEQ_ELTYPE The bits to shift for SEQ_KIND The bits to shift for SEQ_FLAG Sequence element type (x,y) freeman code: 0..7 unspecified type of sequence elements =6 pointer to element of other sequence index of element of some other sequence next_o, next_d, vtx_o, vtx_d first_edge, (x,y) vertex of the binary tree connected component (x,y,z) The kind of sequence available generic (unspecified) kind of sequence dense sequence subtypes dense sequence subtypes sparse sequence (or set) subtypes sparse sequence (or set) subtypes Sequence flag close sequence Sequence type for point sets CV_TERMCRIT Iteration Epsilon Types of thresholding value = value > threshold ? max_value : 0 value = value > threshold ? 0 : max_value value = value > threshold ? threshold : value value = value > threshold ? value : 0 value = value > threshold ?
0 : value Use the Otsu algorithm to choose the optimal threshold value; combine the flag with one of the above CV_THRESH_* values Methods for comparing two arrays R(x,y) = sum_{x',y'} [T(x',y') - I(x+x',y+y')]^2 R(x,y) = sum_{x',y'} [T(x',y') - I(x+x',y+y')]^2 / sqrt( sum_{x',y'} T(x',y')^2 * sum_{x',y'} I(x+x',y+y')^2 ) R(x,y) = sum_{x',y'} [T(x',y') * I(x+x',y+y')] R(x,y) = sum_{x',y'} [T(x',y') * I(x+x',y+y')] / sqrt( sum_{x',y'} T(x',y')^2 * sum_{x',y'} I(x+x',y+y')^2 ) R(x,y) = sum_{x',y'} [T'(x',y') * I'(x+x',y+y')], where T'(x',y') = T(x',y') - 1/(w*h) * sum_{x'',y''} T(x'',y'') and I'(x+x',y+y') = I(x+x',y+y') - 1/(w*h) * sum_{x'',y''} I(x+x'',y+y'') R(x,y) = sum_{x',y'} [T'(x',y') * I'(x+x',y+y')] / sqrt( sum_{x',y'} T'(x',y')^2 * sum_{x',y'} I'(x+x',y+y')^2 ) IPL_DEPTH indicates if the value is signed 1bit unsigned 8bit unsigned (Byte) 16bit unsigned 32bit float (Single) 8bit signed 16bit signed 32bit signed double Enumeration used by cvFlip No flipping Flip horizontally Flip vertically Enumeration used by cvCheckArr Checks that every element is neither NaN nor Infinity If set, the function checks that every value of the array is within the [minVal,maxVal) range, otherwise it just checks that every element is neither NaN nor Infinity If set, the function does not raise an error if an element is invalid or out of range Type of floodfill operation The default type If set, the difference between the current pixel and seed pixel is considered, otherwise the difference between neighbor pixels is considered (the range is floating). If set, the function does not fill the image (new_val is ignored), but fills the mask (which must be non-NULL in this case). The type for cvSampleLine 8-connected 4-connected The type of line for drawing 8-connected 4-connected Anti-alias Distance transform algorithm flags Connected component The pixel Defines for Distance Transform User defined distance distance = |x1-x2| + |y1-y2| Simple euclidean distance distance = max(|x1-x2|,|y1-y2|) L1-L2 metric: distance = 2*(sqrt(1+x*x/2) - 1) distance = c^2*(|x|/c - log(1+|x|/c)), c = 1.3998 distance = c^2/2*(1-exp(-(x/c)^2)), c = 2.9846 distance = |x|<c ? x^2/2 : c*(|x|-c/2), c = 1.345 The types for cvMulSpectrums The default type Do forward or inverse transform of every individual row of the input matrix. This flag allows the user to transform multiple vectors simultaneously and can be used to decrease the overhead (which is sometimes several times larger than the processing itself), to do 3D and higher-dimensional transforms etc Conjugate the second argument of cvMulSpectrums Flag used for cvDFT Do forward 1D or 2D transform. The result is not scaled Do inverse 1D or 2D transform. The result is not scaled. CV_DXT_FORWARD and CV_DXT_INVERSE are mutually exclusive, of course Scale the result: divide it by the number of array elements. Usually, it is combined with CV_DXT_INVERSE, and one may use a shortcut Do forward or inverse transform of every individual row of the input matrix. This flag allows the user to transform multiple vectors simultaneously and can be used to decrease the overhead (which is sometimes several times larger than the processing itself), to do 3D and higher-dimensional transforms etc Inverse and scale Flag used for cvDCT Do forward 1D or 2D transform. The result is not scaled Do inverse 1D or 2D transform. The result is not scaled. CV_DXT_FORWARD and CV_DXT_INVERSE are mutually exclusive, of course Do forward or inverse transform of every individual row of the input matrix. This flag allows the user to transform multiple vectors simultaneously and can be used to decrease the overhead (which is sometimes several times larger than the processing itself), to do 3D and higher-dimensional transforms etc
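The array comparison methods whose formulas appear earlier in this section drive template matching. The sketch below runs the normalized correlation-coefficient method (the last formula) and locates the best match; file names are placeholders.

using System.Drawing;
using Emgu.CV;
using Emgu.CV.CvEnum;

class MatchTemplateSketch
{
    static void Main()
    {
        using (Mat image = CvInvoke.Imread("scene.png", LoadImageType.Grayscale))
        using (Mat templ = CvInvoke.Imread("patch.png", LoadImageType.Grayscale))
        using (Mat result = new Mat())
        {
            CvInvoke.MatchTemplate(image, templ, result, TemplateMatchingType.CcoeffNormed);

            double minVal = 0, maxVal = 0;
            Point minLoc = new Point(), maxLoc = new Point();
            CvInvoke.MinMaxLoc(result, ref minVal, ref maxVal, ref minLoc, ref maxLoc);
            // For CcoeffNormed the best match is at maxLoc (top-left of the patch).
        }
    }
}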
Calculates the fundamental matrix given a set of corresponding points For the 7-point algorithm, N == 7 For the 8-point algorithm, N >= 8 For the LMedS algorithm, N >= 8 For the RANSAC algorithm, N >= 8 CV_FM_LMEDS_ONLY | CV_FM_8POINT CV_FM_RANSAC_ONLY | CV_FM_8POINT General enumeration Error codes Types for WarpAffine Neither FILL_OUTLIERS nor CV_WARP_INVERSE_MAP Fill all the destination image pixels. If some of them correspond to outliers in the source image, they are set to fillval. Indicates that the matrix is an inverse transform from the destination image to the source and, thus, can be used directly for pixel interpolation. Otherwise, the function finds the inverse transform from map_matrix. Types of Adaptive Threshold Indicates that "Mean minus C" should be used for adaptive threshold. Indicates that "Gaussian minus C" should be used for adaptive threshold. Shape of the Structuring Element A rectangular element. A cross-shaped element. An elliptic element. A user-defined element. PCA Type The vectors are stored as rows (i.e. all the components of a certain vector are stored continuously) The vectors are stored as columns (i.e. values of a certain vector component are stored continuously) Use a pre-computed average vector cvInvert method Gaussian elimination with optimal pivot element chosen In case of the LU method the function returns the src1 determinant (src1 must be square). If it is 0, the matrix is not inverted and src2 is filled with zeros. Singular value decomposition (SVD) method In case of the SVD methods the function returns the inverse condition number of src1 (ratio of the smallest singular value to the largest singular value) and 0 if src1 is all zeros. The SVD methods calculate a pseudo-inverse matrix if src1 is singular Eig method for a symmetric positive-definite matrix QR decomposition Normal cvCalcCovarMatrix method types Calculates the covariation matrix for a set of vectors transpose([v1-avg, v2-avg,...]) * [v1-avg,v2-avg,...] [v1-avg, v2-avg,...] * transpose([v1-avg,v2-avg,...]) Do not calc average (i.e. mean vector) - use the input vector instead (useful for calculating the covariance matrix by parts) Scale the covariance matrix coefficients by the number of vectors All the input vectors are stored in a single matrix, as its rows All the input vectors are stored in a single matrix, as its columns Type for cvSVD The default type Enables modification of matrix src1 during the operation. It speeds up the processing. Indicates that only a vector of singular values `w` is to be processed, while u and vt will be set to empty matrices When the matrix is not square, by default the algorithm produces u and vt matrices of sufficiently large size for the further reconstruction of A; if, however, the FULL_UV flag is specified, u and vt will be full-size square orthogonal matrices. Type for cvCalcOpticalFlowPyrLK The default type Uses initial estimations, stored in nextPts; if the flag is not set, then prevPts is copied to nextPts and is considered the initial estimate. Use minimum eigen values as an error measure (see minEigThreshold description); if the flag is not set, then the L1 distance between patches around the original and a moved point, divided by the number of pixels in a window, is used as an error measure.
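A hedged sketch of pyramidal Lucas-Kanade tracking using the cvCalcOpticalFlowPyrLK flags just described; the exact CvInvoke.CalcOpticalFlowPyrLK overload shown here is an assumption.

using System.Drawing;
using Emgu.CV;
using Emgu.CV.Structure;

class LkSketch
{
    static void TrackOnce(Mat prevGray, Mat currGray, PointF[] prevPts)
    {
        PointF[] currPts;
        byte[] status;      // 1 if the corresponding feature was found
        float[] trackError; // L1 patch distance unless MinEigenvalues is set

        CvInvoke.CalcOpticalFlowPyrLK(
            prevGray, currGray, prevPts,
            new Size(21, 21),              // per-level search window
            3,                             // pyramid levels
            new MCvTermCriteria(30, 0.01), // stop after 30 iterations or eps = 0.01
            out currPts, out status, out trackError);
    }
}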
Various camera calibration flags The default value intrinsic_matrix contains valid initial values of fx, fy, cx, cy that are optimized further. Otherwise, (cx, cy) is initially set to the image center (image_size is used here), and focal distances are computed in some least-squares fashion The optimization procedure considers only one of fx and fy as an independent variable and keeps the aspect ratio fx/fy the same as it was set initially in intrinsic_matrix. In this case the actual initial values of (fx, fy) are either taken from the matrix (when CV_CALIB_USE_INTRINSIC_GUESS is set) or estimated somehow (in the latter case fx, fy may be set to arbitrary values, only their ratio is used) The principal point is not changed during the global optimization; it stays at the center or at the other location specified (when CV_CALIB_USE_INTRINSIC_GUESS is set as well) Tangential distortion coefficients are set to zeros and do not change during the optimization The focal length is fixed (CV_CALIB_FIX_FOCAL_LENGTH - both fx and fy are fixed) The 1st distortion coefficient (k1) is fixed to 0 or to the initial passed value if CV_CALIB_USE_INTRINSIC_GUESS is passed The 2nd distortion coefficient (k2) is fixed to 0 or to the initial passed value if CV_CALIB_USE_INTRINSIC_GUESS is passed The 3rd distortion coefficient (k3) is fixed to 0 or to the initial passed value if CV_CALIB_USE_INTRINSIC_GUESS is passed The 4th distortion coefficient (k4) is fixed (see above) The 5th distortion coefficient (k5) is fixed to 0 or to the initial passed value if CV_CALIB_USE_INTRINSIC_GUESS is passed The 6th distortion coefficient (k6) is fixed to 0 or to the initial passed value if CV_CALIB_USE_INTRINSIC_GUESS is passed Rational model Type of chessboard calibration Default type Use adaptive thresholding to convert the image to black-n-white, rather than a fixed threshold level (computed from the average image brightness) Normalize the image using cvNormalizeHist before applying fixed or adaptive thresholding. Use additional criteria (like contour area, perimeter, square-like shape) to filter out false quads that are extracted at the contour retrieval stage If it is on, then this check is performed before the main algorithm and if a chessboard is not found, the function returns 0 instead of wasting 0.3-1s on doing the full search.
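A sketch of chessboard detection with the adaptive-threshold and normalization flags described above; the 9x6 inner-corner pattern and the file name are assumptions.

using System.Drawing;
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Util;

class ChessboardSketch
{
    static void Main()
    {
        using (Mat gray = CvInvoke.Imread("board.png", LoadImageType.Grayscale))
        using (VectorOfPointF corners = new VectorOfPointF())
        {
            bool found = CvInvoke.FindChessboardCorners(
                gray, new Size(9, 6), corners,
                CalibCbType.AdaptiveThresh | CalibCbType.NormalizeImage);
            // found == false means no board; otherwise corners holds the detected points.
        }
    }
}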
Type of circles grid calibration Symmetric grid Asymmetric grid Clustering IO type for eigen object related functions No callback Input callback Output callback Both callback CvNextEdgeType next around the edge origin (eOnext) next around the edge vertex (eDnext) previous around the edge origin (reversed eRnext) previous around the edge destination (reversed eLnext) next around the left facet (eLnext) next around the right facet (eRnext) previous around the left facet (reversed eOnext) previous around the right facet (reversed eDnext) Orientation Clockwise Counter clockwise Stereo Block Matching Prefilter type No prefilter XSobel Type of cvHomography method Regular method using all the point pairs Least-Median robust method RANSAC-based robust method Type used by cvMatchShapes I_1(A,B)=sum_{i=1..7} abs(1/m^A_i - 1/m^B_i) where m^A_i=sign(h^A_i) log(h^A_i), m^B_i=sign(h^B_i) log(h^B_i), h^A_i, h^B_i - Hu moments of A and B, respectively I_2(A,B)=sum_{i=1..7} abs(m^A_i - m^B_i) where m^A_i=sign(h^A_i) log(h^A_i), m^B_i=sign(h^B_i) log(h^B_i), h^A_i, h^B_i - Hu moments of A and B, respectively I_3(A,B)=sum_{i=1..7} abs(m^A_i - m^B_i)/abs(m^A_i) where m^A_i=sign(h^A_i) log(h^A_i), m^B_i=sign(h^B_i) log(h^B_i), h^A_i, h^B_i - Hu moments of A and B, respectively The result type of cvSubdiv2DLocate. One of the input arguments is invalid. Point is outside the subdivision reference rectangle Point falls into some facet Point coincides with one of the subdivision vertices Point falls onto the edge Type used in cvStereoRectify Shift one of the images in the horizontal or vertical direction (depending on the orientation of epipolar lines) in order to maximize the useful image area Makes the principal points of each camera have the same pixel coordinates in the rectified views The type for the CopyMakeBorder function Used by some cuda methods, will pass the value -1 to the function Border is filled with the fixed value, passed as the last parameter of the function The pixels from the top and bottom rows, the left-most and right-most columns are replicated to fill the border Reflect Wrap Reflect 101 Transparent The default border interpolation type. Do not look outside of ROI The types for haar detection The default type where no optimization is done. If it is set, the function uses a Canny edge detector to reject some image regions that contain too few or too many edges and thus cannot contain the searched object. The particular threshold values are tuned for face detection and in this case the pruning speeds up the processing For each scale factor used the function will downscale the image rather than "zoom" the feature coordinates in the classifier cascade. Currently, the option can only be used alone, i.e. the flag cannot be set together with the others If it is set, the function finds the largest object (if any) in the image. That is, the output sequence will contain one (or zero) element(s) It should be used only when CV_HAAR_FIND_BIGGEST_OBJECT is set and min_neighbors > 0. If the flag is set, the function does not look for candidates of a smaller size as soon as it has found the object (with enough neighbor candidates) at the current scale. Typically, when min_neighbors is fixed, the mode yields less accurate (a bit larger) object rectangle than the regular single-object mode (flags=CV_HAAR_FIND_BIGGEST_OBJECT), but it is much faster, up to an order of magnitude. A greater value of min_neighbors may be specified to improve the accuracy
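The haar detection flags above are consumed by cascade detection; a minimal sketch follows, with the cascade file and image names as placeholders.

using System.Drawing;
using Emgu.CV;
using Emgu.CV.CvEnum;

class HaarSketch
{
    static void Main()
    {
        using (CascadeClassifier cascade = new CascadeClassifier("haarcascade_frontalface_default.xml"))
        using (Mat gray = CvInvoke.Imread("people.jpg", LoadImageType.Grayscale))
        {
            // Scale factor 1.1, 3 neighbors, minimum object size 30x30.
            Rectangle[] faces = cascade.DetectMultiScale(gray, 1.1, 3, new Size(30, 30));
            // Each rectangle is the average bounding box of a neighbor group.
        }
    }
}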
Specifies whether it is the back or front camera Back Front The file storage operation type The storage is open for reading The storage is open for writing The storage is open for append Histogram comparison method Correlation Chi-Square Intersection Bhattacharyya distance Synonym for Bhattacharyya Alternative Chi-Square The available flags for Farneback optical flow computation Default Use the input flow as the initial flow approximation Use a Gaussian winsize x winsize filter instead of a box filter of the same size for optical flow estimation. Usually, this option gives more accurate flow than with a box filter, at the cost of lower speed (and normally winsize for a Gaussian window should be set to a larger value to achieve the same level of robustness) Grabcut initialization type Initialize with rectangle Initialize with mask Eval CvCapture type. This is the equivalent of the CV_CAP_ macros. Auto detect Platform native Platform native Platform native IEEE 1394 drivers IEEE 1394 drivers IEEE 1394 drivers IEEE 1394 drivers QuickTime Unicap drivers DirectShow (via videoInput) PvAPI, Prosilica GigE SDK OpenNI (for Kinect) OpenNI (for Asus Xtion) Android XIMEA Camera API AVFoundation framework for iOS (OS X Lion will have the same API) Smartek Giganetix GigEVisionSDK Microsoft Media Foundation (via videoInput) Microsoft Windows Runtime using Media Foundation Intel Perceptual Computing SDK OpenNI2 (for Kinect) OpenNI2 (for Asus Xtion and Occipital Structure sensors) gPhoto2 connection GStreamer FFMPEG OpenCV Image Sequence (e.g. img_%02d.jpg) KMeans initialization type Chooses random centers for k-Means initialization Uses the user-provided labels for K-Means initialization Uses the k-Means++ algorithm for initialization The type of color map Autumn Bone Jet Winter Rainbow Ocean Summer Spring Cool Hsv Pink Hot The return value for the solveLP function Problem is unbounded (target function can achieve arbitrary high values) Problem is unfeasible (there are no points that satisfy all the constraints imposed) There is only one maximum for the target function There are multiple maxima for the target function - an arbitrary one is returned Morphology operation type Erode Dilate Open Close Gradient Tophat Blackhat Access type Read Write Read and write Mask Fast Rectangle intersect type No intersection There is a partial intersection One of the rectangles is fully enclosed in the other Method for solving a PnP problem Iterative F. Moreno-Noguer, V. Lepetit and P. Fua "EPnP: Efficient Perspective-n-Point Camera Pose Estimation" X.S. Gao, X.-R. Hou, J. Tang, H.-F. Chang; "Complete Solution Classification for the Perspective-Three-Point Problem" White balance algorithms Simple Grayworld Connected components algorithm output formats The leftmost (x) coordinate which is the inclusive start of the bounding box in the horizontal direction. The topmost (y) coordinate which is the inclusive start of the bounding box in the vertical direction. The horizontal size of the bounding box. The vertical size of the bounding box. The total area (in pixels) of the connected component. Fisheye Camera model Projects points using the fisheye model. The function computes projections of 3D points to the image plane given intrinsic and extrinsic camera parameters. Optionally, the function computes Jacobians - matrices of partial derivatives of image point coordinates (as functions of all the input parameters) with respect to the particular parameters, intrinsic and/or extrinsic.
Array of object points, 1xN/Nx1 3-channel (or vector<Point3f> ), where N is the number of points in the view. Output array of image points, 2xN/Nx2 1-channel or 1xN/Nx1 2-channel, or vector<Point2f>. Rotation vector Translation vector Camera matrix Input vector of distortion coefficients (k1,k2,k3,k4). The skew coefficient. Optional output 2Nx15 Jacobian matrix of derivatives of image points with respect to components of the focal lengths, coordinates of the principal point, distortion coefficients, rotation vector, translation vector, and the skew. In the old interface different components of the Jacobian are returned via different output parameters. Computes undistortion and rectification maps for image transform by cv::remap(). If D is empty, zero distortion is used; if R or P is empty, identity matrices are used. Camera matrix Input vector of distortion coefficients (k1,k2,k3,k4). Rectification transformation in the object space: 3x3 1-channel, or vector: 3x1/1x3 1-channel or 1x1 3-channel New camera matrix (3x3) or new projection matrix (3x4) Undistorted image size. Type of the first output map that can be CV_32FC1 or CV_16SC2. See convertMaps() for details. The first output map. The second output map. Transforms an image to compensate for fisheye lens distortion. The function is simply a combination of fisheye::initUndistortRectifyMap (with unity R) and remap (with bilinear interpolation). Image with fisheye lens distortion. Output image with compensated fisheye lens distortion. Camera matrix Input vector of distortion coefficients (k1,k2,k3,k4). Camera matrix of the distorted image. By default, it is the identity matrix but you may additionally scale and shift the result by using a different matrix. The function transforms an image to compensate for radial and tangential lens distortion. Estimates a new camera matrix for undistortion or rectification. Camera matrix Input vector of distortion coefficients (k1,k2,k3,k4). Rectification transformation in the object space: 3x3 1-channel, or vector: 3x1/1x3 1-channel or 1x1 3-channel New camera matrix (3x3) or new projection matrix (3x4) Sets the new focal length in range between the min focal length and the max focal length. Balance is in range of [0, 1] Divisor for new focal length. Stereo rectification for fisheye camera model. First camera matrix. First camera distortion parameters. Second camera matrix. Second camera distortion parameters. Size of the image used for stereo calibration. Rotation matrix between the coordinate systems of the first and the second cameras. Translation vector between coordinate systems of the cameras. Output 3x3 rectification transform (rotation matrix) for the first camera. Output 3x3 rectification transform (rotation matrix) for the second camera. Output 3x4 projection matrix in the new (rectified) coordinate systems for the first camera. Output 3x4 projection matrix in the new (rectified) coordinate systems for the second camera. Output 4×4 disparity-to-depth mapping matrix (see reprojectImageTo3D). Operation flags that may be zero or ZeroDisparity. If the flag is set, the function makes the principal points of each camera have the same pixel coordinates in the rectified views. And if the flag is not set, the function may still shift the images in the horizontal or vertical direction (depending on the orientation of epipolar lines) to maximize the useful image area. New image resolution after rectification. The same size should be passed to initUndistortRectifyMap. When (0,0) is passed (default), it is set to the original imageSize. Setting it to a larger value can help you preserve details in the original image, especially when there is a big radial distortion.
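As a sketch of the fisheye undistortion just described (a combination of initUndistortRectifyMap with unity R and bilinear remap), assuming a static Fisheye.UndistortImage wrapper with the parameters documented above; K and D would come from a prior fisheye calibration.

using Emgu.CV;

class FisheyeSketch
{
    static void Undistort(Mat distorted, Mat K, Mat D)
    {
        using (Mat undistorted = new Mat())
        {
            // Passing K again as Knew keeps the original scale of the result.
            Fisheye.UndistortImage(distorted, undistorted, K, D, K);
        }
    }
}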
Sets the new focal length in range between the min focal length and the max focal length. Balance is in range of [0, 1]. Divisor for new focal length. Performs camera calibration. Vector of vectors of calibration pattern points in the calibration pattern coordinate space. Vector of vectors of the projections of calibration pattern points. imagePoints.size() and objectPoints.size() and imagePoints[i].size() must be equal to objectPoints[i].size() for each i. Size of the image used only to initialize the intrinsic camera matrix. Output 3x3 floating-point camera matrix. If UseIntrinsicGuess is specified, some or all of fx, fy, cx, cy must be initialized before calling the function. Output vector of distortion coefficients (k1,k2,k3,k4). Output vector of rotation vectors (see Rodrigues) estimated for each pattern view. That is, each k-th rotation vector together with the corresponding k-th translation vector (see the next output parameter description) brings the calibration pattern from the model coordinate space (in which object points are specified) to the world coordinate space, that is, a real position of the calibration pattern in the k-th pattern view (k = 0..M-1). Output vector of translation vectors estimated for each pattern view. Different flags Termination criteria for the iterative optimization algorithm. Performs stereo calibration. Vector of vectors of the calibration pattern points. Vector of vectors of the projections of the calibration pattern points, observed by the first camera. Vector of vectors of the projections of the calibration pattern points, observed by the second camera. Input/output first camera matrix. If FixIntrinsic is specified, some or all of the matrix components must be initialized. Input/output vector of distortion coefficients (k1,k2,k3,k4) of 4 elements. Input/output second camera matrix. The parameter is similar to Input/output lens distortion coefficients for the second camera. The parameter is similar to Size of the image used only to initialize the intrinsic camera matrix. Output rotation matrix between the 1st and the 2nd camera coordinate systems. Output translation vector between the coordinate systems of the cameras. Fish eye calibration flags Termination criteria for the iterative optimization algorithm. Default flag cameraMatrix contains valid initial values of fx, fy, cx, cy that are optimized further. Otherwise, (cx, cy) is initially set to the image center (imageSize is used), and focal distances are computed in a least-squares fashion. Extrinsic will be recomputed after each iteration of intrinsic optimization. The functions will check the validity of the condition number. Skew coefficient (alpha) is set to zero and stays zero. Selected distortion coefficients are set to zeros and stay zero. Selected distortion coefficients are set to zeros and stay zero. Selected distortion coefficients are set to zeros and stay zero. Selected distortion coefficients are set to zeros and stay zero. Fix intrinsic A point with Bgr color information The position in meters The blue color The green color The red color A solid resembling a cube, with the rectangular faces not all equal; a rectangular parallelepiped.
The coordinate of the upper corner The coordinate of the lower corner Check if the specific point is in the Cuboid The point to be checked True if the point is in the cuboid Get the centroid of this cuboid This is used to hold the sizes of the OpenCV structures The size of CvPoint The size of CvPoint2D32f The size of CvPoint3D32f The size of CvSize The size of CvSize2D32f The size of CvScalar The size of CvRect The size of CvBox2D The size of CvMat The size of CvMatND The size of CvTermCriteria The size of CvSeq The size of CvContour The size of IplImage Result of cvHaarDetectObjects Bounding rectangle for the object (average rectangle of a group) Number of neighbor rectangles in the group Wrapper to the CvBlob structure The center of the blob Blob size Blob ID Convert a MCvBlob to RectangleF The blob The equivalent RectangleF Convert a MCvBlob to RectangleF The blob The equivalent RectangleF Check if the two blobs are equal The blob to compare with True if equal Get an empty blob Managed structure equivalent to CvChain Miscellaneous flags Size of sequence header Previous sequence Next sequence 2nd previous sequence 2nd next sequence Total number of elements Size of sequence element in bytes Maximal bound of the last block Current write pointer How many elements allocated when the seq grows Where the seq is stored Free blocks list Pointer to the first sequence block The origin of the chain Managed structure equivalent to CvConDensation Dimension of measurement vector Dimension of state vector Matrix of the linear Dynamics system Vector of State Number of the Samples Array of the Sample Vectors Temporary array of the Sample Vectors Confidence for each Sample Cumulative confidence Temporary vector RandomVector to update sample set Array of structures to generate random vectors Managed structure equivalent to CvConnectedComp Area of the segmented component Scalar value ROI of the segmented component Pointer to the CvSeq Managed structure equivalent to CvContour Miscellaneous flags Size of sequence header Pointer to the previous sequence Pointer to the next sequence Pointer to the 2nd previous sequence Pointer to the 2nd next sequence Total number of elements Size of sequence element in bytes Maximal bound of the last block Current write pointer How many elements allocated when the seq grows Where the seq is stored Free blocks list Pointer to the first sequence block If computed, stores the minimum enclosing rectangle Color Reserved0 Reserved1 Reserved2 Managed CvKalman structure Number of measurement vector dimensions Number of state vector dimensions Number of control vector dimensions =state_pre->data.fl =state_post->data.fl =transition_matrix->data.fl =measurement_matrix->data.fl =measurement_noise_cov->data.fl =process_noise_cov->data.fl =gain->data.fl =error_cov_pre->data.fl =error_cov_post->data.fl temp1->data.fl temp2->data.fl Predicted state (x'(k)): x'(k)=A*x(k-1)+B*u(k) Corrected state (x(k)): x(k)=x'(k)+K(k)*(z(k)-H*x'(k)) State transition matrix (A) Control matrix (B) (it is not used if there is no control) Measurement matrix (H) Process noise covariance matrix (Q) Measurement noise covariance matrix (R) Priori error estimate covariance matrix P'(k)=A*P(k-1)*At + Q Kalman gain matrix (K(k)): K(k)=P'(k)*Ht*inv(H*P'(k)*Ht+R) Posteriori error estimate covariance matrix P(k)=(I-K(k)*H)*P'(k) Temporary matrices Temporary matrices Temporary matrices Temporary matrices Temporary matrices Managed structure equivalent to CvMat CvMat signature (CV_MAT_MAGIC_VAL), element type and flags
full row length in bytes underlying data reference counter Header reference count data pointers number of rows number of columns Width Height Get the number of channels Constants used by the MCvMat structure Offset of roi Managed structure equivalent to CvMatND CvMatND signature (CV_MATND_MAGIC_VAL), element type and flags number of array dimensions underlying data reference counter Header reference count data pointers pairs (number of elements, distance between elements in bytes) for every dimension The MatND Dimension Number of elements in this dimension Distance between elements in bytes for this dimension Spatial and central moments Spatial moments Spatial moments Spatial moments Spatial moments Spatial moments Spatial moments Spatial moments Spatial moments Spatial moments Spatial moments Central moments Central moments Central moments Central moments Central moments Central moments Central moments m00 != 0 ? 1/sqrt(m00) : 0 Retrieves the spatial moment, which in case of image moments is defined as: M_{x_order,y_order} = sum_{x,y} ( I(x,y) * x^{x_order} * y^{y_order} ) where I(x,y) is the intensity of the pixel (x, y).
x order of the retrieved moment, x_order >= 0 y order of the retrieved moment, y_order >= 0 and x_order + y_order <= 3 The spatial moment of the specific order Retrieves the central moment, which in case of image moments is defined as: mu_{x_order,y_order} = sum_{x,y} ( I(x,y) * (x-x_c)^{x_order} * (y-y_c)^{y_order} ), where x_c=M10/M00, y_c=M01/M00 - coordinates of the gravity center x order of the retrieved moment, x_order >= 0. y order of the retrieved moment, y_order >= 0 and x_order + y_order <= 3 The central moment Retrieves the normalized central moment, which in case of image moments is defined as: eta_{x_order,y_order} = mu_{x_order,y_order} / M00^{(y_order+x_order)/2+1}, where mu_{x_order,y_order} is the central moment x order of the retrieved moment, x_order >= 0. y order of the retrieved moment, y_order >= 0 and x_order + y_order <= 3 The normalized central moment Get the HuMoments The Hu moment computed from this moment The Gravity Center of this Moment Structure contains the bounding box and confidence level for detected object Bounding box for a detected object Confidence level The class identifier Managed Structure equivalent to CvPoint2D64f x-coordinate y-coordinate Create a MCvPoint2D64f structure with the specific x and y coordinates x-coordinate y-coordinate Compute the sum of two 2D points The first point to be added The second point to be added The sum of two points Subtract one point from another The first point The point to be subtracted The difference of the two points Multiply the point with a scale The point to be multiplied The scale The point multiplied by the scale Multiply the point with a scale The point to be multiplied The scale The point multiplied by the scale Returns true if the two points are equal. The other point to compare with True if the two points are equal Managed Structure equivalent to CvPoint3D32f x-coordinate y-coordinate z-coordinate Create a MCvPoint3D32f structure with the specific x, y and z coordinates x-coordinate y-coordinate z-coordinate Return the cross product of two 3D points The other 3D point The cross product of the two 3D points Return the dot product of two 3D points The other 3D point The dot product of the two 3D points Get the normalized point The implicit operator to convert MCvPoint3D32f to MCvPoint3D64f The point to be converted The converted point Subtract one point from the other The point to subtract from The value to be subtracted The subtraction of one point from the other Compute the sum of two 3D points The first point to be added The second point to be added The sum of two points Multiply the point with a scale The point to be multiplied The scale The point multiplied by the scale Multiply the point with a scale The point to be multiplied The scale The point multiplied by the scale Return true if the location of the two points are equal The other point to compare with True if the location of the two points are equal Return the norm of this 3D point Managed Structure equivalent to CvPoint3D64f x-coordinate y-coordinate z-coordinate Create a MCvPoint3D64f structure with the specific x, y and z coordinates x-coordinate y-coordinate z-coordinate Return the cross product of two 3D points The other 3D point The cross product of the two 3D points Return the dot product of two 3D points The other 3D point The dot product of the two 3D points Compute the sum of two 3D points The first point to be added The second point to be added The sum of two points Subtract one point from another The first point The point to be subtracted The difference of the two points Multiply the point with a scale The point to be multiplied The scale The point multiplied by the scale
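A short sketch of the MCvPoint3D64f arithmetic documented above: operator sum, difference and scaling, plus the cross and dot products. The method names follow the fragments here.

using Emgu.CV.Structure;

class PointSketch
{
    static void Main()
    {
        MCvPoint3D64f a = new MCvPoint3D64f(1, 0, 0);
        MCvPoint3D64f b = new MCvPoint3D64f(0, 1, 0);

        MCvPoint3D64f sum = a + b;               // (1, 1, 0)
        MCvPoint3D64f scaled = a * 2.0;          // (2, 0, 0)
        MCvPoint3D64f cross = a.CrossProduct(b); // (0, 0, 1), normal to a and b
        double dot = a.DotProduct(b);            // 0, since a and b are orthogonal
    }
}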
Multiply the point with a scale The point to be multiplied The scale The point multiplied by the scale Check if the other point equals this point The point to be compared True if the two points are equal Managed structure equivalent to CvScalar The scalar value The scalar value The scalar value The scalar value The scalar values as a vector (of size 4) The scalar values as an array Create a new MCvScalar structure using the specific values v0 Create a new MCvScalar structure using the specific values v0 v1 Create a new MCvScalar structure using the specific values v0 v1 v2 Create a new MCvScalar structure using the specific values v0 v1 v2 v3 Return the code to generate this MCvScalar from the specific language The programming language to generate code from The code to generate this MCvScalar from the specific language Return true if the two MCvScalar are equal The other MCvScalar to compare with True if the two MCvScalar are equal Managed structure equivalent to CvSeq Miscellaneous flags Size of sequence header Previous sequence Next sequence 2nd previous sequence 2nd next sequence Total number of elements Size of sequence element in bytes Maximal bound of the last block Current write pointer How many elements allocated when the sequence grows (sequence granularity) Where the seq is stored Free blocks list Pointer to the first sequence block Managed structure equivalent to CvSeqBlock Previous sequence block. Next sequence block. Index of the first element in the block + sequence->first->start_index. Number of elements in the block. Pointer to the first element of the block. Wrapped CvSeqReader structure The size of the header The sequence being read The current block Pointer to the element to be read next Pointer to the beginning of the block Pointer to the end of the block = seq->first->start_index Pointer to the previous element Wrapped CvSet structure Miscellaneous flags Size of sequence header Previous sequence Next sequence 2nd previous sequence 2nd next sequence Total number of elements Size of sequence element in bytes Maximal bound of the last block Current write pointer How many elements allocated when the sequence grows (sequence granularity) Where the seq is stored Free blocks list Pointer to the first sequence block Wrapped CvSetElem structure Flags next_free Managed structure equivalent to CvSlice Start index End index Create a new MCvSlice using the specific start and end index Start index End index Get the equivalent of CV_WHOLE_SEQ Wrapped CvStereoBMState structure 0 for now ~5x5..21x21 up to ~31 Could be 5x5..21x21. Correspondence using Sum of Absolute Difference (SAD): minimum disparity (=0) maximum disparity - minimum disparity areas with no texture are ignored Filter out pixels if there are other close matches Disparity variation window (not used) Acceptable range of variation in window (not used) If 1, the results may be more accurate at the expense of slower processing. internal buffers, do not modify (!) internal buffers, do not modify (!) internal buffers, do not modify (!) internal buffers, do not modify (!) internal buffers, do not modify (!)
Wrapped CvStereoGCState structure Threshold for piece-wise linear data cost function (5 by default) Radius for smoothness cost function (1 by default; means Potts model) Parameters for the cost function Parameters for the cost function Parameters for the cost function Parameters for the cost function 10000 by default (usually computed adaptively from the input data) 0 by default; see CvStereoBMState Defined by user; see CvStereoBMState Number of iterations; defined by user. Internal buffers Internal buffers Internal buffers Internal buffers Internal buffers Internal buffers Internal buffers Internal buffers Managed structure equivalent to CvTermCriteria CV_TERMCRIT value Maximum iteration Epsilon Create the termination criteria using the constraint of maximum iteration The maximum number of iterations allowed Create the termination criteria using only the constraint of epsilon The epsilon value Create the termination criteria using the constraint of maximum iteration as well as epsilon The maximum number of iterations allowed The epsilon value OpenCV's DMatch structure Query descriptor index Train descriptor index Train image index Distance Managed structure equivalent to IplImage sizeof(IplImage) version (=0) Most of OpenCV functions support 1,2,3 or 4 channels ignored by OpenCV pixel depth in bits: IPL_DEPTH_8U, IPL_DEPTH_8S, IPL_DEPTH_16U, IPL_DEPTH_16S, IPL_DEPTH_32S, IPL_DEPTH_32F and IPL_DEPTH_64F are supported ignored by OpenCV ignored by OpenCV ignored by OpenCV ignored by OpenCV ignored by OpenCV ignored by OpenCV ignored by OpenCV ignored by OpenCV 0 - interleaved color channels, 1 - separate color channels. cvCreateImage can only create interleaved images 0 - top-left origin, 1 - bottom-left origin (Windows bitmaps style) Alignment of image rows (4 or 8). OpenCV ignores it and uses widthStep instead image width in pixels image height in pixels image ROI. when it is not NULL, this specifies the image region to process must be NULL in OpenCV ditto ditto image data size in bytes (=image->height*image->widthStep in case of interleaved data) pointer to aligned image data size of aligned image row in bytes border completion mode, ignored by OpenCV border completion mode, ignored by OpenCV border completion mode, ignored by OpenCV border completion mode, ignored by OpenCV border const, ignored by OpenCV border const, ignored by OpenCV border const, ignored by OpenCV border const, ignored by OpenCV pointer to a very origin of image data (not necessarily aligned) - it is needed for correct image deallocation OpenCV's KeyPoint class The location of the keypoint Size of the keypoint Orientation of the keypoint Response of the keypoint octave class id The range used to setup the histogram Create a range of the specific min/max value The min value of this range The max value of this range Return true if the two RangeF are equal The other RangeF to compare with True if the two RangeF are equal The minimum value of this range The maximum value of this range Managed structure equivalent to CvBox2D An interface for the convex polygon Get the vertices of this convex polygon The vertices of this convex polygon The center of the box The size of the box The angle between the horizontal axis and the first side (i.e. width) in degrees Positive value means counter-clockwise rotation Create a RotatedRect structure with the specific parameters The center of the box The size of the box The angle of the box in degrees. Positive value means counter-clockwise rotation
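A sketch of the RotatedRect members documented above: construct a rotated box, then read its vertices and minimum enclosing rectangle.

using System.Drawing;
using Emgu.CV.Structure;

class RotatedRectSketch
{
    static void Main()
    {
        RotatedRect box = new RotatedRect(
            new PointF(50, 50), // center
            new SizeF(40, 20),  // size
            30f);               // angle in degrees, positive = counter-clockwise

        PointF[] vertices = box.GetVertices(); // the 4 corners
        Rectangle bound = box.MinAreaRect();   // minimum enclosing rectangle
    }
}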
Shift the box by the specific amount The x value to be offset The y value to be offset Get the 4 vertices of this Box. The vertices of this RotatedRect Get the minimum enclosing rectangle for this Box The minimum enclosing rectangle for this Box Returns true if the two boxes are equal The other box to compare with True if the two boxes are equal Convert a RectangleF to RotatedRect The rectangle The equivalent RotatedRect Represent an uninitialized RotatedRect A line segment A point on the line Another point on the line Create a line segment with the specific starting point and end point The first point on the line segment The second point on the line segment Determine which side of the line the 2D point is at The point 1 if on the right hand side; 0 if on the line; -1 if on the left hand side; Get the exterior angle between this line and The other line The exterior angle between this line and A point on the line Another point on the line The direction of the line, the norm of which is 1 Get the length of the line segment A line segment A point on the line Another point on the line Create a line segment with the specific start point and end point The first point on the line segment The second point on the line segment Obtain the Y value from the X value using first degree interpolation The X value The Y value Determine which side of the line the 2D point is at The point 1 if on the right hand side; 0 if on the line; -1 if on the left hand side; Get the exterior angle between this line and The other line The exterior angle between this line and A point on the line Another point on the line Get the length of the line segment The direction of the line, the norm of which is 1 A 3D line segment A point on the line Another point on the line Create a line segment with the specific start point and end point The first point on the line segment The second point on the line segment A point on the line Another point on the line Get the length of the line segment A collection of points Fit an ellipse to the points collection The points to be fitted An ellipse Convert a series of points to LineSegment2D The array of points If true, the last line segment is defined by the last point of the array and the first point of the array Array of LineSegment2D Convert a series of System.Drawing.Point to LineSegment2D The array of points If true, the last line segment is defined by the last point of the array and the first point of the array Array of LineSegment2D Find the bounding rectangle for the specific array of points The collection of points The bounding rectangle for the array of points Re-project pixels on a 1-channel disparity map to an array of 3D points. Disparity map The re-projection 4x4 matrix, can be arbitrary, e.g. the one computed by cvStereoRectify The reprojected 3D points Generate a random point cloud around the ellipse. The region where the point cloud will be generated. The axes of e correspond to the std of the random point cloud. The number of points to be generated A random point cloud around the ellipse
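A sketch of the PointCollection helpers just listed; BoundingRectangle and the ellipse-fitting method name are taken from the fragments above and should be treated as assumptions.

using System.Drawing;
using Emgu.CV;
using Emgu.CV.Structure;

class PointCollectionSketch
{
    static void Main()
    {
        PointF[] pts =
        {
            new PointF(0, 0), new PointF(10, 2), new PointF(9, 8),
            new PointF(2, 9), new PointF(5, 5)
        };

        Rectangle bound = PointCollection.BoundingRectangle(pts);        // axis-aligned bounds
        Ellipse fitted = PointCollection.EllipseLeastSquareFitting(pts); // least-square ellipse fit
    }
}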
The number of points to be generated A random point cloud around the ellipse Attribute used by ImageBox to generate Operation Menu Constructor Get or Set the exposable value; if true, this function will be displayed in the Operation Menu of ImageBox The category of this function The size for each generic parameter Options The options for generic parameters A generic parameter for the Operation class Create a generic parameter for the Operation class The selected generic parameter type The types that can be used The selected generic parameter type The types that can be used A collection of reflection functions that can be applied to a ColorType object Get the display color for each channel The color The display color for each channel Get the names of the channels The color The names of the channels A collection of reflection functions that can be applied to an IImage object Get all the methods that belong to the IImage and Image class with ExposableMethodAttribute set true. The IImage object to be reflected for methods marked with ExposableMethodAttribute All the methods that belong to the IImage and Image class with ExposableMethodAttribute set true Get the color type of the image The image to apply reflection on The color type of the image Get the depth type of the image The image to apply reflection on The depth type of the image Get the color at the specific location of the image The image to obtain pixel value from The location to sample a pixel The color at the specific location A circle Create a circle with the specific center and radius The center of this circle The radius of this circle Compare this circle with another circle The other circle to be compared True if the two circles are equal Get or Set the center of the circle The radius of the circle The area of the circle A 2D cross Construct a cross The center of the cross The width of the cross The height of the cross The center of this cross The size of this cross Get the horizontal line segment of this cross Get the vertical line segment of this cross An ellipse Create an ellipse with the specific parameters The center of the ellipse The width and height of the ellipse The rotation angle in radians for the ellipse Create an ellipse from the specific RotatedRect The RotatedRect representation of this ellipse The RotatedRect representation of this ellipse An interface for the convex polygon Get the vertices of this convex polygon The vertices of this convex polygon A 2D triangle Create a triangle using the specific vertices The first vertex The second vertex The third vertex Compare two triangles and return true if equal The other triangle to compare with True if the two triangles are equal, false otherwise Get the vertices of this triangle The vertices of this triangle One of the vertices of the triangle One of the vertices of the triangle One of the vertices of the triangle Get the area of this triangle Returns the centroid of this triangle A 3D triangle Create a triangle using the specific vertices The first vertex The second vertex The third vertex Compare two triangles and return true if equal The other triangle to compare with True if the two triangles are equal, false otherwise One of the vertices of the triangle One of the vertices of the triangle One of the vertices of the triangle Get the area of this triangle Get the normal of this triangle Returns the centroid of this triangle Create a sparse matrix The type of elements in this matrix Create a sparse matrix of the specific dimension The dimension of the sparse matrix Release the unmanaged memory associated with this
sparse matrix Get or Set the value at the specific row and column The row of the element The column of the element The element at the specific row and column Class for computing stereo correspondence using the block matching algorithm, introduced and contributed to OpenCV by K. Konolige. Create a StereoBM object The linear size of the blocks compared by the algorithm. The size should be odd (as the block is centered at the current pixel). Larger block size implies smoother, though less accurate disparity map. Smaller block size gives more detailed disparity map, but there is higher chance for the algorithm to find a wrong correspondence. The disparity search range. For each pixel the algorithm will find the best disparity from 0 (default minimum disparity) to the number of disparities. The search range can then be shifted by changing the minimum disparity. Release the stereo state and all the memory associated with it Pointer to the stereo matcher Extension methods for StereoMatcher Computes disparity map for the specified stereo pair The stereo matcher Left 8-bit single-channel image. Right image of the same size and the same type as the left one. Output disparity map. It has the same size as the input images. Some algorithms, like StereoBM or StereoSGBM compute 16-bit fixed-point disparity map (where each disparity value has 4 fractional bits), whereas other algorithms output 32-bit floating-point disparity map This is a variation of "Stereo Processing by Semiglobal Matching and Mutual Information" by Heiko Hirschmuller. We match blocks rather than individual pixels, thus the algorithm is called SGBM (Semi-global block matching) Create a stereo disparity solver using the StereoSGBM algorithm (combination of H. Hirschmuller + K. Konolige approaches) Minimum possible disparity value. Normally, it is zero but sometimes rectification algorithms can shift images, so this parameter needs to be adjusted accordingly. Maximum disparity minus minimum disparity. The value is always greater than zero. In the current implementation, this parameter must be divisible by 16. Matched block size. It must be an odd number >= 1. Normally, it should be somewhere in the 3..11 range. Use 0 for default. The first parameter controlling the disparity smoothness. It is the penalty on the disparity change by plus or minus 1 between neighbor pixels. A reasonably good value is 8*number_of_image_channels*SADWindowSize*SADWindowSize. Use 0 for default The second parameter controlling the disparity smoothness. It is the penalty on the disparity change by more than 1 between neighbor pixels. The algorithm requires P2 > P1. A reasonably good value is 32*number_of_image_channels*SADWindowSize*SADWindowSize. Use 0 for default Maximum allowed difference (in integer pixel units) in the left-right disparity check. Set it to a non-positive value to disable the check. Truncation value for the prefiltered image pixels. The algorithm first computes x-derivative at each pixel and clips its value by the [-preFilterCap, preFilterCap] interval. The result values are passed to the Birchfield-Tomasi pixel cost function. Margin in percentage by which the best (minimum) computed cost function value should "win" the second best value to consider the found match correct. Normally, a value within the 5-15 range is good enough. Maximum size of smooth disparity regions to consider their noise speckles and invalidate. Set it to 0 to disable speckle filtering. Otherwise, set it somewhere in the 50-200 range Maximum disparity variation within each connected component.
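To make the StereoBM / StereoMatcher entries concrete, a minimal disparity computation might look like the following sketch; the constructor argument order and the ImreadModes enum name are assumptions in line with recent Emgu releases (block size odd, number of disparities divisible by 16, per the descriptions above):

    using Emgu.CV;
    using Emgu.CV.CvEnum;

    Mat left = CvInvoke.Imread("left.png", ImreadModes.Grayscale);   // rectified left image
    Mat right = CvInvoke.Imread("right.png", ImreadModes.Grayscale); // rectified right image

    using (StereoBM matcher = new StereoBM(64, 21)) // 64 disparities, 21x21 blocks
    using (Mat disparity = new Mat())
    {
        // Compute is the extension method described above; StereoBM outputs a
        // 16-bit fixed-point disparity map with 4 fractional bits.
        matcher.Compute(left, right, disparity);
    }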
If you do speckle filtering, set the parameter to a positive value; it will be implicitly multiplied by 16. Normally, 1 or 2 is good enough. Set it to HH to run the full-scale two-pass dynamic programming algorithm. It will consume O(W*H*numDisparities) bytes, which is large for 640x480 stereo and huge for HD-size pictures. By default, it is set to false. Release the unmanaged memory associated with this stereo solver Pointer to the StereoMatcher The SGBM mode This is the default mode, the algorithm is single-pass, which means that you consider only 5 directions instead of 8 Run the full-scale two-pass dynamic programming algorithm. It will consume O(W*H*numDisparities) bytes, which is large for 640x480 stereo and huge for HD-size pictures. Planar subdivision, can be used to compute Delaunay triangulation or Voronoi diagram. Start the Delaunay triangulation in the specific region of interest. The region of interest of the triangulation Create a planar subdivision from the given points. The ROI is computed as the minimum bounding rectangle for the input points If true, any exception during insert will be ignored The points to be inserted into this planar subdivision Insert a collection of points into this planar subdivision The points to be inserted into this planar subdivision If true, any exception during insert will be ignored Insert a point into the triangulation. The point to be inserted Locates input point within subdivision The point to locate The output edge the point falls onto or right to Optional output vertex double pointer the input point coincides with The type of location for the point Finds the subdivision vertex that is the closest to the input point. It is not necessarily one of the vertices of the facet containing the input point, though the facet (located using cvSubdiv2DLocate) is used as a starting point. Input point The nearest subdivision vertex The location type of the point Obtains the list of Voronoi facets The list of Voronoi facets Returns the triangles subdivision of the current planar subdivision. The triangles might contain virtual points that do not belong to the inserted points; if you do not want those points, set the corresponding parameter to false The triangles subdivision in the current planar subdivision Release unmanaged resources A Voronoi facet Create a Voronoi facet using the specific point and vertices The point this facet associates with The points that define the contour of this facet The point this facet associates to Get or set the vertices of this facet A class that can be used for writing GeoTiff The color type of the image to be written The depth type of the image to be written Create a tiff writer to save an image The file name to be saved Write the image to the tiff file The image to be written Write the geo information into the tiff file Model tie point, an array of size 6 Model pixel scale, an array of size 3 Release the writer and write all data onto disk. A writer for writing GeoTiff The color type of the image to be written The depth type of the image to be written Create a TileTiffWriter. The name of the file to be written to The size of the image The tile size in pixels Write a tile into the tiled tiff The starting row for the tile The starting col for the tile The tile to be written Write the whole image as tiled tiff The image to be written Get the equivalent size for a tile of data as it would be returned in a call to TIFFReadTile or as it would be expected in a call to TIFFWriteTile. Get the number of bytes of a row of data in a tile. Get tile size in pixels.
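A sketch of the planar subdivision API described above, assuming the Subdiv2D constructor that takes the points and computes the ROI from their bounding rectangle:

    using System.Drawing;
    using Emgu.CV;
    using Emgu.CV.Structure;

    PointF[] points =
    {
        new PointF(10, 10), new PointF(80, 20),
        new PointF(40, 70), new PointF(90, 90)
    };

    using (Subdiv2D subdivision = new Subdiv2D(points))
    {
        Triangle2DF[] triangles = subdivision.GetDelaunayTriangles(); // Delaunay triangulation
        VoronoiFacet[] facets = subdivision.GetVoronoiFacets();       // Voronoi diagram
    }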
An Image that contains a time stamp specifying when this image was created Create an empty Image Create a blank Image of the specified width, height, depth and color. The width of the image The height of the image The initial color of the image Create an empty Image of the specified width and height The width of the image The height of the image The time this image is captured The equivalent of cv::Mat, should only be used if you know what you are doing. In most cases you should use the Matrix class instead Constructor used to deserialize runtime serialized object The serialization info The streaming context A function used for runtime deserialization of the object Serialization info Streaming context A function used for runtime serialization of the object Serialization info Streaming context Create an empty cv::UMat Create a UMat of the specific type. Number of rows in a 2D array. Number of columns in a 2D array. Mat element type Number of channels Create a UMat of the specific type. Size of the UMat Mat element type Number of channels Get the UMat header for the specific roi of the parent The parent UMat The region of interest Allocates new array data if needed. New number of rows. New number of columns. New matrix element depth type. New matrix number of channels Copy the data in this UMat to the other mat Operation mask. Its non-zero elements indicate which matrix elements need to be copied. The input array to copy to Sets all or some of the array elements to the specified value. Assigned scalar converted to the actual array type. Operation mask of the same size as the UMat. Sets all or some of the array elements to the specified value. Assigned scalar value. Operation mask of the same size as the UMat. Return the Mat representation of the UMat Release all the unmanaged memory associated with this object. Pointer to the InputArray Pointer to the OutputArray Pointer to the InputOutputArray Changes the shape and/or the number of channels of a 2D matrix without copying the data. New number of channels. If the parameter is 0, the number of channels remains the same. New number of rows. If the parameter is 0, the number of rows remains the same. A new mat header that has a different shape Convert this Mat to Image The type of Color The type of Depth The image Returns the min / max locations and values for the image The maximum locations for each channel The maximum values for each channel The minimum locations for each channel The minimum values for each channel Converts an array to another data type with optional scaling. Output matrix; if it does not have a proper size or type before the operation, it is reallocated. Desired output matrix type or, rather, the depth since the number of channels is the same as the input has; if rtype is negative, the output matrix will have the same type as the input. Optional scale factor. Optional delta added to the scaled values. Split the current Image into an array of gray scale images where each element in the array represents a single color channel of the original image An array of gray scale images where each element in the array represents a single color channel of the original image Save this image to the specific file. The name of the file to be saved to The image format is chosen depending on the filename extension, see cvLoadImage. Only 8-bit single-channel or 3-channel (with 'BGR' channel order) images can be saved using this function.
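The UMat entries above describe the OpenCL-backed transparent API; a minimal usage sketch (the DepthType and AccessType enum names are assumptions consistent with the rest of the wrapper):

    using Emgu.CV;
    using Emgu.CV.CvEnum;
    using Emgu.CV.Structure;

    // 480 rows x 640 cols, 8-bit unsigned, 3 channels; data may live on the device.
    using (UMat umat = new UMat(480, 640, DepthType.Cv8U, 3))
    {
        umat.SetTo(new MCvScalar(255, 0, 0));           // fill with blue (BGR order)
        using (Mat view = umat.GetMat(AccessType.Read)) // Mat representation of the UMat
        {
            // read pixel data through the Mat header
        }
    }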
If the format, depth or channel order is different, use cvCvtScale and cvCvtColor to convert it before saving, or use the universal cvSave to save the image to XML or YAML format. Make a clone of the current UMat. A clone of the current UMat. Indicates whether the current object is equal to another object of the same type. An object to compare with this object. true if the current object is equal to the parameter; otherwise, false. Copy data from this Mat to the managed array The type of managed data array The managed array where data will be copied to. Copy data from managed array to this Mat The type of managed data array The managed array where data will be copied from Computes the dot product of two mats The matrix to compute dot product with The dot product The size of this matrix The number of rows The number of columns The size of the elements in this matrix The Get property provides a more efficient way to convert Image<Gray, Byte>, Image<Bgr, Byte> and Image<Bgra, Byte> into Bitmap such that the image data is shared with Bitmap. If you change the pixel value on the Bitmap, you change the pixel values on the Image object as well! For other types of image this property has the same effect as ToBitmap() Take extra caution not to use the Bitmap after the Image object is disposed The Set property converts the bitmap to this Image type. True if the data is continuous True if the matrix is a submatrix of another matrix Depth type True if the matrix is empty Number of channels The method returns the number of array elements (a number of pixels if the array represents an image) The matrix dimensionality Allocation usage. Default Buffer allocation policy is platform and usage specific Buffer allocation policy is platform and usage specific Buffer allocation policy is platform and usage specific It is not equal to: AllocateHostMemory | AllocateDeviceMemory A raw data storage The type of elements in the storage The file info Create a binary File Storage The file name of the storage Create a binary File Storage The file name of the storage The data will be read in chunks of this size internally. Can be used to speed up the file read. A good number would be 4096 Create a binary File Storage with the specific data The file name of the storage, all data in the existing file will be replaced The data which will be stored in the storage Append the samples to the end of the storage The samples to be appended to the storage Delete all data in the existing storage, if there is any. Estimate the number of elements in this storage as the size of the storage divided by the size of the elements An estimation of the number of elements in this storage Get a copy of the first element in the storage. If the storage is empty, a default value will be returned A copy of the first element in the storage.
If the storage is empty, a default value will be returned Get the subsampled data in this storage The subsample rate The sub-sampled data in this storage Get the data in this storage The data in this storage The file name of the storage The default exception to be thrown when an error is encountered in OpenCV The default exception to be thrown when an error is encountered in OpenCV The numeric code for error status The source file name where the error is encountered A description of the error The source file name where the error is encountered The line number in the source where the error is encountered The numeric code for error status The corresponding error string for the Status code The name of the function where the error is encountered A description of the error The source file name where the error is encountered The line number in the source where the error is encountered Wrapper for cv::String. This class supports UTF-8 characters. Create a CvString from System.String The System.String object to be converted to CvString Create an empty CvString Get the string representation of the CvString The string representation of the CvString Release all the unmanaged resources associated with this object. Gets the length of the string The length of the string Utilities class The ColorPalette of Grayscale for Bitmap Format8bppIndexed Convert the color palette to four lookup tables The color palette to transform Lookup table for the B channel Lookup table for the G channel Lookup table for the R channel Lookup table for the A channel Convert arrays of data to matrix Arrays of data A two dimension matrix that represents the array Convert arrays of points to matrix Arrays of points A two dimension matrix that represents the points Compute the minimum and maximum value from the points The points The minimum x,y,z values The maximum x,y,z values Copy a generic vector to the unmanaged memory The data type of the vector The source vector Pointer to the destination unmanaged memory Specify the number of bytes to copy. If this is -1, the number of bytes equals the number of bytes in the array The number of bytes copied Copy a jagged two dimensional array to the unmanaged memory The data type of the jagged two dimensional array The source array Pointer to the destination unmanaged memory Copy a jagged two dimensional array from the unmanaged memory The data type of the jagged two dimensional array The destination array Pointer to the source unmanaged memory memcpy function The destination of memory copy The source of memory copy The number of bytes to be copied Given the source and destination color type, compute the color conversion code for the CvInvoke.cvCvtColor function The source color type. Must be a type inherited from IColor The dest color type. Must be a type inherited from IColor The color conversion code for the CvInvoke.cvCvtColor function A DataLogger for unmanaged code to log data back to managed code, using callback. Create a MessageLogger and register the callback function The log level. Log some data Pointer to some unmanaged data The logLevel. The Log function only logs when the logLevel is greater than or equal to the DataLogger's logLevel Release the DataLogger and all the unmanaged memory associated with it. The event that will be raised when the unmanaged code sends over data A generic version of the DataLogger The supported types include System.String and System.ValueType Create a new DataLogger The log level. Log some data The data to be logged The logLevel.
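Since OpenCV errors surface in managed code as the exception type just described, a typical guard looks like the following sketch (the ErrorMessage property follows the descriptions above):

    using Emgu.CV;
    using Emgu.CV.CvEnum;
    using Emgu.CV.Util;

    try
    {
        // Adding matrices of mismatched sizes makes OpenCV raise an error.
        using (Mat a = new Mat(2, 2, DepthType.Cv8U, 1))
        using (Mat b = new Mat(3, 3, DepthType.Cv8U, 1))
        using (Mat sum = new Mat())
            CvInvoke.Add(a, b, sum);
    }
    catch (CvException e)
    {
        System.Console.WriteLine(e.ErrorMessage); // description of the error
    }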
The Log function only logs when the logLevel is greater than or equal to the DataLogger's logLevel Implicit operator for IntPtr The DataLogger The unmanaged pointer for this DataLogger Release the unmanaged memory associated with this DataLogger The event that will be raised when the unmanaged code sends over data Pointer to the unmanaged object The event that will be raised when the unmanaged code sends over data Extension methods to the IAlgorithm interface Reads algorithm parameters from a file storage. The algorithm. The node from file storage. Stores algorithm parameters in a file storage The algorithm. The storage. Extension methods for IInputArrays Determines whether the specified input array is a UMat. The array True if it is a UMat This is the proxy class for passing read-only input arrays into OpenCV functions. Create an InputArray from an existing unmanaged inputArray pointer The unmanaged pointer to the InputArray Get an empty input array An empty input array Get the Mat from the input array The index, in case this is a VectorOfMat The Mat Get the UMat from the input array The index, in case this is a VectorOfUMat The UMat Get the size of the input array The optional index The size of the input array Return true if the input array is empty True if the input array is empty Get the depth type The optional index The depth type Get the number of channels The optional index The number of channels Release all the unmanaged memory associated with this InputArray True if the input array is a Mat True if the input array is an UMat True if the input array is a vector of Mat True if the input array is a vector of UMat True if the input array is a Matx This type is very similar to InputArray except that it is used for input/output function parameters. This type is very similar to InputArray except that it is used for output function parameters. Create an OutputArray from an existing unmanaged outputArray pointer The pointer to the unmanaged outputArray Get an empty output array An empty output array Release the unmanaged memory associated with this output array. True if the output array is fixed size True if the output array is fixed type True if the output array is needed Create an InputOutputArray from an existing unmanaged inputOutputArray pointer The pointer to the existing inputOutputArray Get an empty InputOutputArray An empty InputOutputArray Release all the memory associated with this InputOutputArray An implementation of IInputArray intended to convert data to IInputArray Create an InputArray from MCvScalar The MCvScalar to be converted to InputArray Create an InputArray from a double value The double value to be converted to InputArray Convert double scalar to InputArray The double scalar The InputArray Convert MCvScalar to InputArray The MCvScalar The InputArray Release all the unmanaged memory associated with this InputArray The pointer to the input array Cache the size of various headers in bytes The size of PointF The size of RangeF The size of PointF The size of MCvMat The size of MCvSeq The size of MCvContour The size of IplImage The size of CvSeqBlock The size of MCvPoint3D32f The size of MCvMatND The size of MCvBlob This class can be used to initialize TBB. Only useful if the library is compiled with TBB support Initialize the TBB task scheduler Release the TBB task scheduler Wrapped class of the C++ standard vector of Byte.
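All of the vector wrappers that follow share the same shape; VectorOfByte is representative, as sketched here:

    using Emgu.CV.Util;

    using (VectorOfByte vector = new VectorOfByte())
    {
        vector.Push(new byte[] { 1, 2, 3 }); // push an array of values
        int count = vector.Size;             // 3
        byte first = vector[0];              // item at the specific index
        byte[] copy = vector.ToArray();      // copy the contents back out
    }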
Constructor used to deserialize runtime serialized object The serialization info The streaming context A function used for runtime serialization of the object Serialization info Streaming context Create an empty standard vector of Byte Create a standard vector of Byte of the specific size The size of the vector Create a standard vector of Byte with the initial values The initial values Push an array of values into the standard vector The values to be pushed to the vector Convert the standard vector to an array of Byte An array of Byte Clear the vector Release the standard vector Get the pointer to cv::_InputArray Get the pointer to cv::_OutputArray Get the pointer to cv::_InputOutputArray Get the size of the vector The pointer to the first element of the vector. In case of an empty vector, IntPtr.Zero will be returned. Get the item at the specific index The index The item at the specific index Wrapped class of the C++ standard vector of ColorPoint. Constructor used to deserialize runtime serialized object The serialization info The streaming context A function used for runtime serialization of the object Serialization info Streaming context Create an empty standard vector of ColorPoint Create a standard vector of ColorPoint of the specific size The size of the vector Create a standard vector of ColorPoint with the initial values The initial values Push an array of values into the standard vector The values to be pushed to the vector Convert the standard vector to an array of ColorPoint An array of ColorPoint Clear the vector Release the standard vector Get the pointer to cv::_InputArray Get the pointer to cv::_OutputArray Get the pointer to cv::_InputOutputArray Get the size of the vector The pointer to the first element of the vector. In case of an empty vector, IntPtr.Zero will be returned. Get the item at the specific index The index The item at the specific index Wrapped class of the C++ standard vector of CvString. Create an empty standard vector of CvString Create a standard vector of CvString of the specific size The size of the vector Create a standard vector of CvString with the initial values The initial values Clear the vector Push a value into the standard vector The value to be pushed to the vector Push multiple values into the standard vector The values to be pushed to the vector Release the standard vector Get the pointer to cv::_InputArray Get the pointer to cv::_OutputArray Get the pointer to cv::_InputOutputArray Get the size of the vector Get the item at the specific index The index The item at the specific index Wrapped class of the C++ standard vector of DMatch. Constructor used to deserialize runtime serialized object The serialization info The streaming context A function used for runtime serialization of the object Serialization info Streaming context Create an empty standard vector of DMatch Create a standard vector of DMatch of the specific size The size of the vector Create a standard vector of DMatch with the initial values The initial values Push an array of values into the standard vector The values to be pushed to the vector Convert the standard vector to an array of DMatch An array of DMatch Clear the vector Release the standard vector Get the pointer to cv::_InputArray Get the pointer to cv::_OutputArray Get the pointer to cv::_InputOutputArray Get the size of the vector The pointer to the first element of the vector. In case of an empty vector, IntPtr.Zero will be returned.
Get the item at the specific index The index The item at the specific index Wrapped class of the C++ standard vector of Double. Constructor used to deserialize runtime serialized object The serialization info The streaming context A function used for runtime serialization of the object Serialization info Streaming context Create an empty standard vector of Double Create a standard vector of Double of the specific size The size of the vector Create a standard vector of Double with the initial values The initial values Push an array of values into the standard vector The values to be pushed to the vector Convert the standard vector to an array of Double An array of Double Clear the vector Release the standard vector Get the pointer to cv::_InputArray Get the pointer to cv::_OutputArray Get the pointer to cv::_InputOutputArray Get the size of the vector The pointer to the first element of the vector. In case of an empty vector, IntPtr.Zero will be returned. Get the item at the specific index The index The item at the specific index Wrapped class of the C++ standard vector of Float. Constructor used to deserialize runtime serialized object The serialization info The streaming context A function used for runtime serialization of the object Serialization info Streaming context Create an empty standard vector of Float Create a standard vector of Float of the specific size The size of the vector Create a standard vector of Float with the initial values The initial values Push an array of values into the standard vector The values to be pushed to the vector Convert the standard vector to an array of Float An array of Float Clear the vector Release the standard vector Get the pointer to cv::_InputArray Get the pointer to cv::_OutputArray Get the pointer to cv::_InputOutputArray Get the size of the vector The pointer to the first element of the vector. In case of an empty vector, IntPtr.Zero will be returned. Get the item at the specific index The index The item at the specific index Wrapped class of the C++ standard vector of Int. Constructor used to deserialize runtime serialized object The serialization info The streaming context A function used for runtime serialization of the object Serialization info Streaming context Create an empty standard vector of Int Create a standard vector of Int of the specific size The size of the vector Create a standard vector of Int with the initial values The initial values Push an array of values into the standard vector The values to be pushed to the vector Convert the standard vector to an array of Int An array of Int Clear the vector Release the standard vector Get the pointer to cv::_InputArray Get the pointer to cv::_OutputArray Get the pointer to cv::_InputOutputArray Get the size of the vector The pointer to the first element of the vector. In case of an empty vector, IntPtr.Zero will be returned. Get the item at the specific index The index The item at the specific index Wrapped class of the C++ standard vector of KeyPoint.
Constructor used to deserialize runtime serialized object The serialization info The streaming context A function used for runtime serialization of the object Serialization info Streaming context Create an empty standard vector of KeyPoint Create a standard vector of KeyPoint of the specific size The size of the vector Create a standard vector of KeyPoint with the initial values The initial values Push an array of values into the standard vector The values to be pushed to the vector Convert the standard vector to an array of KeyPoint An array of KeyPoint Clear the vector Release the standard vector Get the pointer to cv::_InputArray Get the pointer to cv::_OutputArray Get the pointer to cv::_InputOutputArray Remove keypoints within borderPixels of an image edge. Image size Border size in pixels Remove keypoints of sizes out of range. Minimum size Maximum size Remove keypoints from some image by mask for pixels of this image. The mask Get the size of the vector The pointer to the first element of the vector. In case of an empty vector, IntPtr.Zero will be returned. Get the item at the specific index The index The item at the specific index Wrapped class of the C++ standard vector of Mat. Create an empty standard vector of Mat Create a standard vector of Mat of the specific size The size of the vector Create a standard vector of Mat with the initial values The initial values Clear the vector Push a value into the standard vector The value to be pushed to the vector Push multiple values into the standard vector The values to be pushed to the vector Release the standard vector Get the pointer to cv::_InputArray Get the pointer to cv::_OutputArray Get the pointer to cv::_InputOutputArray Convert a CvArray to cv::Mat and push it into the vector The depth type of the cvArray The cvArray to be pushed into the vector Convert a group of CvArray to cv::Mat and push them into the vector The depth type of the cvArray The values to be pushed to the vector Get the size of the vector Get the item at the specific index The index The item at the specific index Wrapped class of the C++ standard vector of OclPlatformInfo. Create an empty standard vector of OclPlatformInfo Create a standard vector of OclPlatformInfo of the specific size The size of the vector Create a standard vector of OclPlatformInfo with the initial values The initial values Clear the vector Push a value into the standard vector The value to be pushed to the vector Push multiple values into the standard vector The values to be pushed to the vector Release the standard vector Get the pointer to cv::_InputArray Get the pointer to cv::_OutputArray Get the pointer to cv::_InputOutputArray Get the size of the vector Get the item at the specific index The index The item at the specific index Wrapped class of the C++ standard vector of Point.
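VectorOfKeyPoint adds the keypoint-filtering helpers described above on top of the usual vector interface; a sketch (method names follow the descriptions, exact signatures are assumptions):

    using System.Drawing;
    using Emgu.CV.Util;

    using (VectorOfKeyPoint keypoints = new VectorOfKeyPoint())
    {
        // ... fill the vector using a feature detector ...

        keypoints.FilterByImageBorder(new Size(640, 480), 16); // drop points near the border
        keypoints.FilterByKeypointSize(4f, 64f);               // keep sizes within [4, 64]
    }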
Constructor used to deserialize runtime serialized object The serialization info The streaming context A function used for runtime serialization of the object Serialization info Streaming context Create an empty standard vector of Point Create a standard vector of Point of the specific size The size of the vector Create a standard vector of Point with the initial values The initial values Push an array of values into the standard vector The values to be pushed to the vector Convert the standard vector to an array of Point An array of Point Clear the vector Release the standard vector Get the pointer to cv::_InputArray Get the pointer to cv::_OutputArray Get the pointer to cv::_InputOutputArray Get the size of the vector The pointer to the first element of the vector. In case of an empty vector, IntPtr.Zero will be returned. Get the item at the specific index The index The item at the specific index Wrapped class of the C++ standard vector of Point3D32F. Constructor used to deserialize runtime serialized object The serialization info The streaming context A function used for runtime serialization of the object Serialization info Streaming context Create an empty standard vector of Point3D32F Create a standard vector of Point3D32F of the specific size The size of the vector Create a standard vector of Point3D32F with the initial values The initial values Push an array of values into the standard vector The values to be pushed to the vector Convert the standard vector to an array of Point3D32F An array of Point3D32F Clear the vector Release the standard vector Get the pointer to cv::_InputArray Get the pointer to cv::_OutputArray Get the pointer to cv::_InputOutputArray Get the size of the vector The pointer to the first element of the vector. In case of an empty vector, IntPtr.Zero will be returned. Get the item at the specific index The index The item at the specific index Wrapped class of the C++ standard vector of PointF. Constructor used to deserialize runtime serialized object The serialization info The streaming context A function used for runtime serialization of the object Serialization info Streaming context Create an empty standard vector of PointF Create a standard vector of PointF of the specific size The size of the vector Create a standard vector of PointF with the initial values The initial values Push an array of values into the standard vector The values to be pushed to the vector Convert the standard vector to an array of PointF An array of PointF Clear the vector Release the standard vector Get the pointer to cv::_InputArray Get the pointer to cv::_OutputArray Get the pointer to cv::_InputOutputArray Get the size of the vector The pointer to the first element of the vector. In case of an empty vector, IntPtr.Zero will be returned. Get the item at the specific index The index The item at the specific index Wrapped class of the C++ standard vector of Rect.
Constructor used to deserialize runtime serialized object The serialization info The streaming context A function used for runtime serialization of the object Serialization info Streaming context Create an empty standard vector of Rect Create a standard vector of Rect of the specific size The size of the vector Create a standard vector of Rect with the initial values The initial values Push an array of values into the standard vector The values to be pushed to the vector Convert the standard vector to an array of Rect An array of Rect Clear the vector Release the standard vector Get the pointer to cv::_InputArray Get the pointer to cv::_OutputArray Get the pointer to cv::_InputOutputArray Get the size of the vector The pointer to the first element of the vector. In case of an empty vector, IntPtr.Zero will be returned. Get the item at the specific index The index The item at the specific index Wrapped class of the C++ standard vector of Triangle2DF. Constructor used to deserialize runtime serialized object The serialization info The streaming context A function used for runtime serialization of the object Serialization info Streaming context Create an empty standard vector of Triangle2DF Create a standard vector of Triangle2DF of the specific size The size of the vector Create a standard vector of Triangle2DF with the initial values The initial values Push an array of values into the standard vector The values to be pushed to the vector Convert the standard vector to an array of Triangle2DF An array of Triangle2DF Clear the vector Release the standard vector Get the pointer to cv::_InputArray Get the pointer to cv::_OutputArray Get the pointer to cv::_InputOutputArray Get the size of the vector The pointer to the first element of the vector. In case of an empty vector, IntPtr.Zero will be returned. Get the item at the specific index The index The item at the specific index Wrapped class of the C++ standard vector of UMat. Create an empty standard vector of UMat Create a standard vector of UMat of the specific size The size of the vector Create a standard vector of UMat with the initial values The initial values Clear the vector Push a value into the standard vector The value to be pushed to the vector Push multiple values into the standard vector The values to be pushed to the vector Release the standard vector Get the pointer to cv::_InputArray Get the pointer to cv::_OutputArray Get the pointer to cv::_InputOutputArray Get the size of the vector Get the item at the specific index The index The item at the specific index Wrapped class of the C++ standard vector of VectorOfDMatch. Create an empty standard vector of VectorOfDMatch Create a standard vector of VectorOfDMatch of the specific size The size of the vector Create a standard vector of VectorOfDMatch with the initial values The initial values Clear the vector Push a value into the standard vector The value to be pushed to the vector Push multiple values into the standard vector The values to be pushed to the vector Release the standard vector Get the pointer to cv::_InputArray Get the pointer to cv::_OutputArray Get the pointer to cv::_InputOutputArray Create the standard vector of VectorOfDMatch Convert the standard vector to arrays of DMatch Arrays of DMatch Get the size of the vector Get the item at the specific index The index The item at the specific index Wrapped class of the C++ standard vector of VectorOfInt.
Create an empty standard vector of VectorOfInt Create a standard vector of VectorOfInt of the specific size The size of the vector Create a standard vector of VectorOfInt with the initial values The initial values Clear the vector Push a value into the standard vector The value to be pushed to the vector Push multiple values into the standard vector The values to be pushed to the vector Release the standard vector Get the pointer to cv::_InputArray Get the pointer to cv::_OutputArray Get the pointer to cv::_InputOutputArray Create the standard vector of VectorOfInt Convert the standard vector to arrays of int Arrays of int Get the size of the vector Get the item at the specific index The index The item at the specific index Wrapped class of the C++ standard vector of VectorOfPoint. Create an empty standard vector of VectorOfPoint Create a standard vector of VectorOfPoint of the specific size The size of the vector Create a standard vector of VectorOfPoint with the initial values The initial values Clear the vector Push a value into the standard vector The value to be pushed to the vector Push multiple values into the standard vector The values to be pushed to the vector Release the standard vector Get the pointer to cv::_InputArray Get the pointer to cv::_OutputArray Get the pointer to cv::_InputOutputArray Create the standard vector of VectorOfPoint Convert the standard vector to arrays of Point Arrays of Point Get the size of the vector Get the item at the specific index The index The item at the specific index Wrapped class of the C++ standard vector of VectorOfPoint3D32F. Create an empty standard vector of VectorOfPoint3D32F Create a standard vector of VectorOfPoint3D32F of the specific size The size of the vector Create a standard vector of VectorOfPoint3D32F with the initial values The initial values Clear the vector Push a value into the standard vector The value to be pushed to the vector Push multiple values into the standard vector The values to be pushed to the vector Release the standard vector Get the pointer to cv::_InputArray Get the pointer to cv::_OutputArray Get the pointer to cv::_InputOutputArray Create the standard vector of VectorOfPoint3D32F Convert the standard vector to arrays of Point3D32F Arrays of Point3D32F Get the size of the vector Get the item at the specific index The index The item at the specific index Wrapped class of the C++ standard vector of VectorOfPointF. Create an empty standard vector of VectorOfPointF Create a standard vector of VectorOfPointF of the specific size The size of the vector Create a standard vector of VectorOfPointF with the initial values The initial values Clear the vector Push a value into the standard vector The value to be pushed to the vector Push multiple values into the standard vector The values to be pushed to the vector Release the standard vector Get the pointer to cv::_InputArray Get the pointer to cv::_OutputArray Get the pointer to cv::_InputOutputArray Create the standard vector of VectorOfPointF Convert the standard vector to arrays of PointF Arrays of PointF Get the size of the vector Get the item at the specific index The index The item at the specific index Use zlib included in OpenCV to perform in-memory binary compression and decompression Compress the data using the specific compression level The data to be compressed The compression level, 0-9 where 0 means no compression at all The compressed bytes Uncompress the data The compressed data The estimated size of the uncompressed data.
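A sketch of the in-memory zlib helpers just described; the class and method names used here (ZlibCompression.Compress / Uncompress) are assumptions chosen to match the descriptions above, not a confirmed API:

    using Emgu.CV.Util;

    byte[] raw = System.Text.Encoding.UTF8.GetBytes("some data to compress");

    // Level 9 = best compression, 0 = no compression at all (assumed wrapper class).
    byte[] packed = ZlibCompression.Compress(raw, 9);

    // The estimated size must be large enough to hold the decompressed data.
    byte[] unpacked = ZlibCompression.Uncompress(packed, raw.Length);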
Must be large enough to hold the decompressed data. The decompressed data An abstract class that can be used to perform background / foreground detection. Update the background model The image that is used to update the background model Use -1 for default The output foreground mask K-nearest neighbors - based Background/Foreground Segmentation Algorithm. Create a K-nearest neighbors - based Background/Foreground Segmentation Algorithm. Length of the history. Threshold on the squared distance between the pixel and the sample to decide whether a pixel is close to that sample. This parameter does not affect the background update. If true, the algorithm will detect shadows and mark them. It decreases the speed a bit, so if you do not need this feature, set the parameter to false. Release all the unmanaged memory associated with this background model. The class implements the following algorithm: "Improved adaptive Gaussian mixture model for background subtraction" Z.Zivkovic International Conference Pattern Recognition, UK, August, 2004. http://www.zoranz.net/Publications/zivkovic2004ICPR.pdf Create an "Improved adaptive Gaussian mixture model for background subtraction". The length of the history. The maximum allowed number of mixture components. Actual number is determined dynamically per pixel. If true, the algorithm will detect shadows and mark them. It decreases the speed a bit, so if you do not need this feature, set the parameter to false. Release all the unmanaged memory associated with this background model. Create a video writer that writes images to video format Create a video writer using the specific information. On Windows, it will open a codec selection dialog. On Linux, it will use the default codec for the specified filename The name of the video file to be written to frame rate per second the size of the frame true if this is a color video, false otherwise Create a video writer using the specific information The name of the video file to be written to Compression code. Usually computed using CvInvoke.CV_FOURCC. On Windows use -1 to open a codec selection dialog. On Linux, use CvInvoke.CV_FOURCC('I', 'Y', 'U', 'V') for the default codec for the specific file name. frame rate per second the size of the frame true if this is a color video, false otherwise Write a single frame to the video writer The frame to be written to the video writer Generate 4-character code of codec used to compress the frames. For example, CV_FOURCC('P','I','M','1') is MPEG-1 codec, CV_FOURCC('M','J','P','G') is motion-jpeg codec etc. Release the video writer and all the memory associated with it Neural network Interface for statistical models in OpenCV ML. Return the pointer to the StatModel object The pointer to the StatModel object Create a neural network using the specific parameters Release the memory associated with this neural network Sets the layer sizes. Integer vector specifying the number of neurons in each layer including the input and output layers. The very first element specifies the number of elements in the input layer. The last element - number of elements in the output layer. Initialize the activation function for each neuron. Currently the default and the only fully supported activation function is SigmoidSym The first parameter of the activation function. The second parameter of the activation function. Sets training method and common parameters. The training method. The param1. The param2.
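Putting the background subtractor and video writer together, a sketch that writes the foreground mask of every frame to a motion-jpeg file (constructor argument order follows the parameter descriptions above and should be treated as an assumption):

    using System.Drawing;
    using Emgu.CV;

    using (VideoCapture capture = new VideoCapture("input.mp4"))
    using (BackgroundSubtractorMOG2 subtractor = new BackgroundSubtractorMOG2(500, 16, true))
    using (VideoWriter writer = new VideoWriter(
        "mask.avi", CvInvoke.CV_FOURCC('M', 'J', 'P', 'G'), 25,
        new Size(capture.Width, capture.Height), false)) // false: single-channel mask video
    using (Mat frame = new Mat())
    using (Mat foreground = new Mat())
    {
        while (capture.Read(frame) && !frame.IsEmpty)
        {
            subtractor.Apply(frame, foreground, -1); // -1 = default learning rate
            writer.Write(foreground);
        }
    }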
Termination criteria of the training algorithm BPROP: Strength of the weight gradient term BPROP: Strength of the momentum term (the difference between weights on the 2 previous iterations) RPROP: Initial value Delta_0 of update-values Delta_{ij} RPROP: Increase factor RPROP: Decrease factor RPROP: Update-values lower limit RPROP: Update-values upper limit Possible activation functions Identity sigmoid symmetric Gaussian Training method for ANN_MLP Back-propagation algorithm Batch RPROP algorithm This class contains functions to call into the machine learning library Release the ANN_MLP model The ANN_MLP model to be released Save the statistic model to the specific file The statistic model to save The file name to save to Clear the statistic model The model to be cleared Create a normal Bayes classifier The normal Bayes classifier Release the memory associated with the Bayes classifier The classifier to release Create a KNearest classifier The KNearest classifier Release the KNearest classifier The classifier to release Create a default EM model Pointer to the EM model Release the EM model Given the EM model, predict the probability of the input samples The EM model The input samples The prediction results, should have the same number of rows as the input samples The result. Create a default SVM model Pointer to the SVM model Release the SVM model and all the memory associated with it The SVM model to be released Get the default parameter grid for the specific SVM type The SVM type The parameter grid reference, values will be filled in by the function call The method trains the SVM model automatically by choosing the optimal parameters C, gamma, p, nu, coef0, degree from CvSVMParams. By optimality one means that the cross-validation estimate of the test set error is minimal. The SVM model The training data. Cross-validation parameter. The training set is divided into k_fold subsets, one subset being used to train the model, the others forming the test set. So, the SVM algorithm is executed k_fold times cGrid gammaGrid pGrid nuGrid coefGrid degreeGrid If true and the problem is 2-class classification then the method creates more balanced cross-validation subsets, that is, proportions between classes in subsets are close to such proportions in the whole train dataset. The method retrieves a given support vector The SVM model The output support vectors Create a default decision tree Pointer to the decision tree Release the decision tree model The decision tree model to be released Create a default random tree Pointer to the random tree Release the random tree model The random tree model to be released Create a default boost classifier Pointer to the boost classifier Release the boost classifier The boost classifier to be released Boost Tree Create a default Boost classifier Release the Boost classifier and all memory associated with it Cluster possible values of a categorical variable into K less than or equal to maxCategories clusters to find a suboptimal split The maximum possible depth of the tree If the number of samples in a node is less than this parameter then the node will not be split If CVFolds is greater than 1 then the algorithm prunes the built decision tree using K-fold cross-validation If true then surrogate splits will be built If true then the pruning will be harsher If true then pruned branches are physically removed from the tree Termination criteria for regression trees Boost Type Discrete AdaBoost. Real AdaBoost. It is a technique that utilizes confidence-rated predictions and works well with categorical data. LogitBoost.
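A sketch of wiring up the neural network described above (layer sizes passed via a VectorOfInt; the enum member names are assumptions consistent with the activation-function and training-method entries):

    using Emgu.CV.ML;
    using Emgu.CV.Util;

    using (ANN_MLP network = new ANN_MLP())
    using (VectorOfInt layerSizes = new VectorOfInt(new[] { 2, 5, 1 })) // 2 inputs, 5 hidden, 1 output
    {
        network.SetLayerSizes(layerSizes);
        // SigmoidSym is the default and only fully supported activation function.
        network.SetActivationFunction(ANN_MLP.AnnMlpActivationFunction.SigmoidSym, 0, 0);
        network.SetTrainMethod(ANN_MLP.AnnMlpTrainMethod.Backprop, 0.1, 0.1);
    }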
It can produce good regression fits. Gentle AdaBoost. It puts less weight on outlier data points and for that reason is often good with regression data. Decision Trees Create a default decision tree Release the decision tree and all the memory associated with it Cluster possible values of a categorical variable into K less than or equal to maxCategories clusters to find a suboptimal split The maximum possible depth of the tree If the number of samples in a node is less than this parameter then the node will not be split If CVFolds is greater than 1 then the algorithm prunes the built decision tree using K-fold cross-validation If true then surrogate splits will be built If true then the pruning will be harsher If true then pruned branches are physically removed from the tree Termination criteria for regression trees Expectation Maximization model Create an Expectation Maximization model Estimate the Gaussian mixture parameters from a samples set. This variation starts with the Expectation step. You need to provide initial means of mixture components. Optionally you can pass initial weights and covariance matrices of mixture components. Samples from which the Gaussian mixture model will be estimated. It should be a one-channel matrix, each row of which is a sample. If the matrix does not have CV_64F type it will be converted to the inner matrix of such type for further computing. Initial means of mixture components. It is a one-channel matrix of nclusters x dims size. If the matrix does not have CV_64F type it will be converted to the inner matrix of such type for further computing. The vector of initial covariance matrices of mixture components. Each of the covariance matrices is a one-channel matrix of dims x dims size. If the matrices do not have CV_64F type they will be converted to the inner matrices of such type for further computing. Initial weights of mixture components. It should be a one-channel floating-point matrix with 1 x nclusters or nclusters x 1 size. The optional output matrix that contains a likelihood logarithm value for each sample. It has nsamples x 1 size and CV_64FC1 type. The optional output "class label" (indices of the most probable mixture component for each sample). It has nsamples x 1 size and CV_32SC1 type. The optional output matrix that contains posterior probabilities of each Gaussian mixture component given each sample. It has nsamples x nclusters size and CV_64FC1 type. Estimate the Gaussian mixture parameters from a samples set. This variation starts with the Expectation step. Initial values of the model parameters will be estimated by the k-means algorithm. Unlike many of the ML models, EM is an unsupervised learning algorithm and it does not take responses (class labels or function values) as input. Instead, it computes the Maximum Likelihood Estimate of the Gaussian mixture parameters from an input sample set, stores all the parameters inside the structure, and optionally computes the output "class label" for each sample. The trained model can be used further for prediction, just like any other classifier. Samples from which the Gaussian mixture model will be estimated. It should be a one-channel matrix, each row of which is a sample. If the matrix does not have CV_64F type it will be converted to the inner matrix of such type for further computing. The probs0. The optional output matrix that contains a likelihood logarithm value for each sample. It has nsamples x 1 size and CV_64FC1 type.
The optional output "class label" for each sample (indices of the most probable mixture component for each sample). It has nsamples x 1 size and CV_32SC1 type. The optional output matrix that contains posterior probabilities of each Gaussian mixture component given each sample. It has nsamples x nclusters size and CV_64FC1 type. Predict the probability of the input samples The input samples The prediction results, should have the same number of rows as the input samples Release the memory associated with this EM model The number of mixtures The type of the mixture covariance matrices Termination criteria of the procedure. EM algorithm stops either after a certain number of iterations (term_crit.num_iter), or when the parameters change too little (no more than term_crit.epsilon) from iteration to iteration The type of the mixture covariance matrices A covariance matrix of each mixture is a scaled identity matrix, λk*I, so the only parameter to be estimated is λk. The option may be used in special cases, when the constraint is relevant, or as a first step in the optimization (e.g. in case when the data is preprocessed with PCA). The results of such preliminary estimation may be passed again to the optimization procedure, this time with cov_mat_type=COV_MAT_DIAGONAL A covariance matrix of each mixture may be an arbitrary diagonal matrix with positive diagonal elements; that is, non-diagonal elements are forced to be 0's, so the number of free parameters is d for each matrix. This is the most commonly used option, yielding good estimation results A covariance matrix of each mixture may be an arbitrary symmetric positive-definite matrix, so the number of free parameters in each matrix is about d^2/2. It is not recommended to use this option, unless there is a pretty accurate initial estimation of the parameters and/or a huge number of training samples The default The KNearest classifier Create a default KNearest classifier Release the classifier and all the memory associated with it Default number of neighbors to use in predict method Whether a classification or regression model should be trained Parameter for the KDTree implementation Algorithm type ML implements logistic regression, which is a probabilistic classification technique. Initializes a new instance of the class. Release the unmanaged resources Return the pointer to the StatModel object Return the pointer to the algorithm object Learning rate Number of iterations Kind of regularization to be applied Kind of training method to be applied Specifies the number of training samples taken in each step of Mini-Batch Gradient Descent Termination criteria of the algorithm Specifies the kind of training method used. Batch method Set MiniBatchSize to a positive integer when using this method. Specifies the kind of regularization to be applied. Regularization disabled. L1 norm L2 norm A Normal Bayes Classifier Create a normal Bayes classifier Release the memory associated with this classifier An OpenCV decision tree node The normalized class index (in the 0..class_count-1 range) assigned to the node; it is used internally in classification trees and tree ensembles. The tree index in an ordered sequence of trees. The indices are used during and after the pruning procedure.
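A minimal EM usage sketch following the entries above; the TrainEM method name matches the k-means-initialized variant described and should be treated as an assumption (the samples matrix is left unfilled here, one sample per row, converted internally to CV_64F):

    using Emgu.CV;
    using Emgu.CV.CvEnum;
    using Emgu.CV.ML;

    using (Mat samples = new Mat(100, 2, DepthType.Cv32F, 1)) // 100 samples, 2 dims
    using (EM em = new EM())
    using (Mat logLikelihoods = new Mat())
    using (Mat labels = new Mat())
    using (Mat probs = new Mat())
    {
        // ... fill samples ...
        em.ClustersNumber = 2; // number of mixtures
        em.TrainEM(samples, logLikelihoods, labels, probs); // k-means initialized variant
    }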
The root node has the maximum value Tn of the whole tree, child nodes have Tn less than or equal to the parent's Tn, and the nodes with Tn <= CvDTree::pruned_tree_idx are not taken into consideration at the prediction stage (the corresponding branches are considered as cut-off), even if they have not been physically deleted from the tree at the pruning stage. The value assigned to the tree node. It is either a class label, or the estimated function value. Pointer to the parent tree node Pointer to the left tree node Pointer to the right tree node Pointer to CvDTreeSplit The number of samples that fall into the node at the training stage. It is used to resolve the difficult cases - when the variable for the primary split is missing, and all the variables for other surrogate splits are missing too, the sample is directed to the left if left->sample_count > right->sample_count and to the right otherwise The node depth; the root node depth is 0, the child nodes depth is the parent's depth + 1. Internal parameters Internal parameters Internal parameters Internal parameters Global pruning data Global pruning data Global pruning data Global pruning data Global pruning data Cross-validation pruning data Cross-validation pruning data Cross-validation pruning data Decision tree node split Index of the variable used in the split When it equals 1, the inverse split rule is used (i.e. left and right branches are exchanged in the expressions below) The split quality, a positive number. It is used to choose the best primary split, then to choose and sort the surrogate splits. After the tree is constructed, it is also used to compute variable importance Pointer to the next split in the node split list Get or Set the Order of this TreeSplit Get the bit array indicating the value subset in case of split on a categorical variable. The rule is: if var_value is in the subset then next_node <- left else next_node <- right Wrapped Order structure The threshold value in case of split on an ordered variable. The rule is: if var_value < c then next_node <- left else next_node <- right Used internally by the training algorithm Wrapped CvParamGrid structure used by SVM Minimum value Maximum value Step The flags for the neural network training function The data layout type Feature vectors are stored as cols Feature vectors are stored as rows Boosting type Discrete AdaBoost Real AdaBoost LogitBoost Gentle AdaBoost Splitting criteria, used to choose optimal splits during a weak tree construction Use the default criteria for the particular boosting method, see below Use Gini index. This is the default option for Real AdaBoost; may also be used for Discrete AdaBoost Use misclassification rate. This is the default option for Discrete AdaBoost; may also be used for Real AdaBoost Use least squares criteria.
This is the default and the only option for LogitBoost and Gentle AdaBoost Variable type Numerical or Ordered Categorical Random trees Create a random tree Release the random tree and all memory associated with it Cluster possible values of a categorical variable into K <= maxCategories clusters to find a suboptimal split The maximum possible depth of the tree If the number of samples in a node is less than this parameter then the node will not be split If CVFolds is greater than 1 then the algorithm prunes the built decision tree using K-fold cross-validation If true then surrogate splits will be built If true then the pruning will be harsher If true then pruned branches are physically removed from the tree Termination criteria for regression trees If true then variable importance will be calculated The size of the randomly selected subset of features at each tree node that are used to find the best split(s) The termination criteria that specifies when the training algorithm stops A statistic model Save the statistic model to file The StatModel The file name where this StatModel will be saved Trains the statistical model. The stat model. The training samples. Type of the layout. Vector of responses associated with the training samples. Trains the statistical model. The model. The train data. The flags. Predicts response(s) for the provided sample(s) The model. The input samples, floating-point matrix. The optional output matrix of results. The optional flags, model-dependent. Response for the provided sample Clear the statistic model Support Vector Machine Create a Support Vector Machine Release all the memory associated with the SVM Get the default parameter grid for the specific SVM type The SVM type The default parameter grid for the specific SVM type The method trains the SVM model automatically by choosing the optimal parameters C, gamma, p, nu, coef0, degree from CvSVMParams. By optimality one means that the cross-validation estimate of the test set error is minimal. The training data. Cross-validation parameter. The training set is divided into k_fold subsets, one subset being used to train the model, the others forming the test set. So, the SVM algorithm is executed k_fold times The method trains the SVM model automatically by choosing the optimal parameters C, gamma, p, nu, coef0, degree from CvSVMParams. By optimality one means that the cross-validation estimate of the test set error is minimal. The training data. Cross-validation parameter. The training set is divided into k_fold subsets, one subset being used to train the model, the others forming the test set. So, the SVM algorithm is executed k_fold times Grid for C Grid for gamma Grid for p Grid for nu Grid for coef0 Grid for degree If true and the problem is 2-class classification then the method creates more balanced cross-validation subsets, that is, proportions between classes in the subsets are close to the proportions in the whole training dataset. Retrieves all the support vectors. All the support vectors as a floating-point matrix, where support vectors are stored as matrix rows.
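The TrainAuto flow above can be sketched end to end: build a TrainData object from in-memory matrices (see the TrainData description below) and let the SVM search its parameter grids with k-fold cross-validation. Names follow the descriptions in this section; treat exact overloads as assumptions, and note the data is invented:

    using Emgu.CV;
    using Emgu.CV.ML;
    using Emgu.CV.ML.MlEnum;

    // Samples must be CV_32F; integer (CV_32S) responses are treated as categorical.
    Matrix<float> samples = new Matrix<float>(new float[,] {
        { 1f, 2f }, { 2f, 1f }, { 7f, 8f }, { 8f, 7f } });
    Matrix<int> responses = new Matrix<int>(new int[] { 0, 0, 1, 1 });

    using (TrainData trainData = new TrainData(samples, DataLayoutType.Row, responses))
    using (SVM svm = new SVM())
    {
        svm.Type = SVM.SvmType.CSvc;          // n-class classification with penalty C
        svm.SetKernel(SVM.SvmKernelType.Rbf); // d(x,y) = exp(-gamma*|x-y|^2)
        svm.TrainAuto(trainData, 4);          // 4-fold cross-validation over the default grids
        float response = svm.Predict(new Matrix<float>(new float[,] { { 7.5f, 7.5f } }));
    }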
Initialize with one of the predefined kernels Type of an SVM formulation Parameter gamma of a kernel function Parameter coef0 of a kernel function Parameter degree of a kernel function Parameter C of an SVM optimization problem Parameter nu of an SVM optimization problem Parameter epsilon of an SVM optimization problem Termination criteria of the iterative SVM training procedure which solves a partial case of constrained quadratic optimization problem Type of an SVM kernel Type of SVM n-class classification (n>=2), allows imperfect separation of classes with penalty multiplier C for outliers n-class classification with possible imperfect separation. Parameter nu (in the range 0..1, the larger the value, the smoother the decision boundary) is used instead of C one-class SVM. All the training data are from the same class, SVM builds a boundary that separates the class from the rest of the feature space Regression. The distance between feature vectors from the training set and the fitting hyper-plane must be less than p. For outliers the penalty multiplier C is used Regression; nu is used instead of p. SVM kernel type Custom SVM kernel type No mapping is done, linear discrimination (or regression) is done in the original feature space. It is the fastest option: d(x,y) = x·y Polynomial kernel: d(x,y) = (gamma*(x·y)+coef0)^degree Radial-basis-function kernel; a good choice in most cases: d(x,y) = exp(-gamma*|x-y|^2) Sigmoid function is used as a kernel: d(x,y) = tanh(gamma*(x·y)+coef0) Exponential Chi2 kernel, similar to the RBF kernel Histogram intersection kernel. A fast kernel. K(xi,xj)=min(xi,xj). The type of SVM parameters C Gamma P NU COEF DEGREE Train data Creates training data from in-memory arrays. Matrix of samples. It should have CV_32F type. Type of the layout. Matrix of responses. If the responses are scalar, they should be stored as a single row or as a single column. The matrix should have type CV_32F or CV_32S (in the former case the responses are considered as ordered by default; in the latter case as categorical) Vector specifying which variables to use for training. It can be an integer vector (CV_32S) containing 0-based variable indices or a byte vector (CV_8U) containing a mask of active variables. Vector specifying which samples to use for training. It can be an integer vector (CV_32S) containing 0-based sample indices or a byte vector (CV_8U) containing a mask of training samples. Optional vector with weights for each sample. It should have CV_32F type. Optional vector of type CV_8U and size <number_of_variables_in_samples> + <number_of_variables_in_responses>, containing types of each input and output variable. Release the unmanaged resources Use the Capture class as a FrameSource A FrameSource that can be used by the Video Stabilizer The unmanaged pointer to the frameSource Retrieve the next frame from the FrameSource Release the unmanaged memory associated with this FrameSource Get or Set the capture type Create a Capture frame source The capture object that will be converted to a FrameSource Release the unmanaged memory associated with this CaptureFrameSource Gaussian motion filter Create a Gaussian motion filter The radius, use 15 for default.
The standard deviation, use -1.0f for default Release all the unmanaged memory associated with this object A one pass video stabilizer Create a one pass stabilizer The capture object to be stabilized Set the Motion Filter The motion filter Release the unmanaged memory associated with the stabilizer A two pass video stabilizer Create a two pass video stabilizer. The capture object to be stabilized. Should not be a camera stream. Release the unmanaged memory Create a video frame source The pointer to the frame source Create a video frame source from a video file The name of the file If true, it will try to create the video frame source using GPU Create a framesource using the specific camera The index of the camera to create the capture from, starting from 0 Get the next frame Release all the unmanaged memory associated with this framesource Super resolution Create a super resolution solver for the given frameSource The type of optical flow algorithm to use The frameSource Release all the unmanaged memory associated with this object The type of optical flow algorithms used for super resolution BTVL BTVL using GPU Finds features in the given image. Pointer to the unmanaged FeaturesFinder object ORB features finder. Creates an ORB features finder Use (3, 1) for default grid size The number of desired features. Coefficient by which we divide the dimensions from one scale pyramid level to the next. The number of levels in the scale pyramid. Release all the unmanaged memory associated with this FeaturesFinder Image Stitching. Creates a stitcher with the default parameters. If true, the stitcher will try to use GPU for processing when available Compute the panoramic image given the images The input images. This can be, for example, a VectorOfMat The panoramic image true if successful Release memory associated with this stitcher Abstract base class for histogram cost algorithms. Release the histogram cost extractor A norm based cost extraction. Create a norm based cost extraction. Distance type Number of dummies Default cost An EMD based cost extraction. Create an EMD based cost extraction. Distance type Number of dummies Default cost A Chi based cost extraction. Create a Chi based cost extraction. Number of dummies Default cost An EMD-L1 based cost extraction. Create an EMD-L1 based cost extraction. Number of dummies Default cost Library to invoke functions that belong to the shape module Abstract base class for shape distance algorithms. Compute the shape distance between two shapes defined by their contours. Contour defining first shape Contour defining second shape The shape distance between two shapes defined by their contours. Implementation of the Shape Context descriptor and matching algorithm proposed by Belongie et al. in “Shape Matching and Object Recognition Using Shape Contexts” (PAMI 2002). Create a shape context distance extractor The histogram cost extractor The shape transformer Establish the number of angular bins for the Shape Context Descriptor used in the shape matching pipeline. Establish the number of radial bins for the Shape Context Descriptor used in the shape matching pipeline. Set the inner radius of the shape context descriptor. Set the outer radius of the shape context descriptor. Iterations Release the memory associated with this shape context distance extractor
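Returning to the video stabilization classes described earlier in this section, below is a minimal sketch of the pipeline (Capture, CaptureFrameSource, OnePassStabilizer). The class names come from the descriptions above; the NextFrame signature and the motion filter defaults (radius 15, standard deviation -1.0f) are assumptions based on those descriptions:

    using Emgu.CV;
    using Emgu.CV.VideoStab;

    using (Capture capture = new Capture("shaky.avi"))  // a file, not a camera stream
    using (CaptureFrameSource frameSource = new CaptureFrameSource(capture))
    using (OnePassStabilizer stabilizer = new OnePassStabilizer(frameSource))
    {
        stabilizer.SetMotionFilter(new GaussianMotionFilter(15, -1.0f));
        Mat frame = new Mat();
        while (stabilizer.NextFrame(frame)) // assumed: returns false when the video ends
        {
            // display or encode the stabilized frame here
        }
    }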
A simple Hausdorff distance measure between shapes defined by contours, according to the paper “Comparing Images using the Hausdorff distance.” by D.P. Huttenlocher, G.A. Klanderman, and W.J. Rucklidge (PAMI 1993). Create a Hausdorff distance extractor The norm used to compute the Hausdorff value between two shapes. It can be L1 or L2 norm. The rank proportion (or fractional value) that establishes the Kth ranked value of the partial Hausdorff distance. It has been shown experimentally that 0.6 is a good value for comparing shapes. Release the memory associated with this Hausdorff distance extractor Abstract base class for shape transformation algorithms. Release the unmanaged memory associated with this ShapeTransformer object Definition of the transformation described in the paper “Principal Warps: Thin-Plate Splines and Decomposition of Deformations”, by F.L. Bookstein (PAMI 1989). Create a thin plate spline shape transformer The regularization parameter for relaxing the exact interpolation requirements of the TPS algorithm. Wrapper class for the OpenCV Affine Transformation algorithm. Create an affine transformer Full affine Library to invoke Tesseract OCR functions The Tesseract page iterator Block of text/image/separator line. Paragraph within a block. Line within a paragraph. Word within a textline. Symbol/character within a word. The Tesseract OCR engine Create a default Tesseract engine. You will need to call the Init function to load the language files at a later stage. Create a Tesseract OCR engine. The datapath must be the name of the parent directory of tessdata and must end in / . Any name after the last / will be stripped. The language is (usually) an ISO 639-3 string or NULL will default to eng. It is entirely safe (and eventually will be efficient too) to call Init multiple times on the same instance to change language, or just to reset the classifier. The language may be a string of the form [~]<lang>[+[~]<lang>]* indicating that multiple languages are to be loaded. E.g. hin+eng will load Hindi and English. Languages may specify internally that they want to be loaded with one or more other languages, so the ~ sign is available to override that. E.g. if hin were set to load eng by default, then hin+~eng would force loading only hin. The number of loaded languages is limited only by memory, with the caveat that loading additional languages will impact both speed and accuracy, as there is more work to do to decide on the applicable language, and there is more chance of hallucinating incorrect words. OCR engine mode Create a Tesseract OCR engine. The datapath must be the name of the parent directory of tessdata and must end in / . Any name after the last / will be stripped. The language is (usually) an ISO 639-3 string or NULL will default to eng. It is entirely safe (and eventually will be efficient too) to call Init multiple times on the same instance to change language, or just to reset the classifier. The language may be a string of the form [~]<lang>[+[~]<lang>]* indicating that multiple languages are to be loaded. E.g. hin+eng will load Hindi and English. Languages may specify internally that they want to be loaded with one or more other languages, so the ~ sign is available to override that. E.g. if hin were set to load eng by default, then hin+~eng would force loading only hin. The number of loaded languages is limited only by memory, with the caveat that loading additional languages will impact both speed and accuracy, as there is more work to do to decide on the applicable language, and there is more chance of hallucinating incorrect words. OCR engine mode This can be used to specify a white list for OCR. E.g. specify "1234567890" to recognize digits only.
Note that the white list currently seems to only work with OcrEngineMode.OEM_TESSERACT_ONLY Initialize the OCR engine using the specific dataPath and language name. The datapath must be the name of the parent directory of tessdata and must end in / . Any name after the last / will be stripped. The language is (usually) an ISO 639-3 string or NULL will default to eng. It is entirely safe (and eventually will be efficient too) to call Init multiple times on the same instance to change language, or just to reset the classifier. The language may be a string of the form [~]<lang>[+[~]<lang>]* indicating that multiple languages are to be loaded. E.g. hin+eng will load Hindi and English. Languages may specify internally that they want to be loaded with one or more other languages, so the ~ sign is available to override that. E.g. if hin were set to load eng by default, then hin+~eng would force loading only hin. The number of loaded languages is limited only by memory, with the caveat that loading additional languages will impact both speed and accuracy, as there is more work to do to decide on the applicable language, and there is more chance of hallucinating incorrect words. OCR engine mode Release the unmanaged resource associated with this class The image where detection took place Set the variable to the specific value. The name of the Tesseract variable. E.g. use "tessedit_char_blacklist" to black list characters and "tessedit_char_whitelist" to white list characters The value to be set Get all the text in the image All the text in the image Detect all the characters in the image. All the characters in the image Get the Tesseract version Gets or sets the page seg mode. The page seg mode. This represents a character detected by the OCR engine The text The cost. The lower it is, the more confident the result The region where the character is detected. When Tesseract/Cube is initialized we can choose to instantiate/load/run only the Tesseract part, only the Cube part or both along with the combiner. The preference of which engine to use is stored in tessedit_ocr_engine_mode. Run Tesseract only - fastest Run Cube only - better accuracy, but slower Run both and combine results - best accuracy Specify this mode to indicate that any of the above modes should be automatically inferred from the variables in the language-specific config, or if not specified in any of the above should be set to the default OEM_TESSERACT_ONLY. Tesseract page segmentation mode Orientation and script detection (OSD) only. Automatic page segmentation with orientation and script detection. (OSD) Automatic page segmentation, but no OSD, or OCR. Fully automatic page segmentation, but no OSD. Assume a single column of text of variable sizes. Assume a single uniform block of vertically aligned text. Assume a single uniform block of text. (Default.) Treat the image as a single text line. Treat the image as a single word. Treat the image as a single word in a circle. Treat the image as a single character. Find as much text as possible in no particular order. Sparse text with orientation and script detection. Treat the image as a single text line, bypassing hacks that are Tesseract-specific. Number of enum entries. This structure is primarily used for PInvoke The length The cost The region
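A minimal OCR sketch using the Tesseract engine described above. The constructor follows the datapath/language/mode description; SetImage, Recognize and GetText are the assumed member names (older versions pass the image to Recognize directly, and the engine mode enum member may be spelled OEM_TESSERACT_ONLY):

    using Emgu.CV;
    using Emgu.CV.OCR;
    using Emgu.CV.Structure;

    // The datapath is the parent directory of "tessdata" and must end in "/".
    using (Tesseract ocr = new Tesseract("./", "eng", OcrEngineMode.TesseractOnly))
    using (Image<Gray, byte> img = new Image<Gray, byte>("page.png"))
    {
        ocr.SetVariable("tessedit_char_whitelist", "0123456789"); // digits only
        ocr.SetImage(img);           // the image where detection takes place
        ocr.Recognize();             // assumed parameterless in this version
        string text = ocr.GetText(); // all the text in the image
    }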
Wrapped class of the C++ standard vector of TesseractResult. Constructor used to deserialize a runtime serialized object The serialization info The streaming context A function used for runtime serialization of the object Serialization info Streaming context Create an empty standard vector of TesseractResult Create a standard vector of TesseractResult of the specific size The size of the vector Create a standard vector of TesseractResult with the initial values The initial values Push an array of values into the standard vector The values to be pushed to the vector Convert the standard vector to an array of TesseractResult An array of TesseractResult Clear the vector Release the standard vector Get the pointer to cv::_InputArray Get the pointer to cv::_OutputArray Get the pointer to cv::_InputOutputArray Get the size of the vector The pointer to the first element of the vector. In case of an empty vector, IntPtr.Zero will be returned. Get the item in the specific index The index The item in the specific index Gaussian Mixture-based Background/Foreground Segmentation Algorithm. Create a Gaussian Mixture-based Background/Foreground Segmentation model Updates the background model Next video frame. The learning rate, use -1.0f for default value. Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). The foregroundMask Release all the unmanaged resources associated with this object This class wraps the functional calls to the opencv_gpu module Get the compute capability of the device The device The major version of the compute capability The minor version of the compute capability Get the number of multiprocessors on device The device The number of multiprocessors on device Get the device name Get the OpenCL platform summary as a string An OpenCL platform summary Get the number of Cuda enabled devices The number of Cuda enabled devices Set the current Gpu Device The id of the device to be set as current Get the current Cuda device id The current Cuda device id Create a GpuMat from the specific region of . The data is shared between the two GpuMat. The gpuMat to extract regions from. The column range. Use MCvSlice.WholeSeq for all columns. The row range. Use MCvSlice.WholeSeq for all rows. Pointer to the GpuMat Resize the GpuMat The input GpuMat The resulting GpuMat The interpolation type Use a Stream to call the function asynchronously (non-blocking) or IntPtr.Zero to call the function synchronously (blocking). Reshape the source GpuMat The source GpuMat The resulting GpuMat, as input it should be an empty GpuMat. The new number of channels The new number of rows Converts image from one color space to another The source GpuMat The destination GpuMat The color conversion code Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Swap channels. The image where the channels will be swapped Integer array describing how channel values are permuted. The n-th entry of the array contains the number of the channel that is stored in the n-th channel of the output image. E.g. given an RGBA image, aDstOrder = [3,2,1,0] converts this to ABGR channel order. Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Returns a header corresponding to a specified rectangle of the input GpuMat. In other words, it allows the user to treat a rectangular part of the input array as a stand-alone array. Input GpuMat Zero-based coordinates of the rectangle of interest. Pointer to the resultant sub-array header.
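A short sketch of querying CUDA devices with the CudaInvoke and CudaDeviceInfo members described above; the property names mirror those descriptions but should be treated as assumptions:

    using System;
    using Emgu.CV.Cuda;

    if (CudaInvoke.HasCuda) // true if Cuda is found on the system
    {
        int count = CudaInvoke.GetCudaEnabledDeviceCount();
        for (int id = 0; id < count; id++)
        {
            CudaInvoke.SetDevice(id); // set the current Gpu device
            using (CudaDeviceInfo info = new CudaDeviceInfo(id))
            {
                Console.WriteLine("{0}: {1}, {2} multiprocessors",
                    info.ID, info.Name, info.MultiProcessorCount);
            }
        }
    }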
Shifts a matrix to the left (c = a << scalar) The matrix to be shifted. The scalar to shift by. The result of the shift Use a Stream to call the function asynchronously (non-blocking) or IntPtr.Zero to call the function synchronously (blocking). Shifts a matrix to the right (c = a >> scalar) The matrix to be shifted. The scalar to shift by. The result of the shift Use a Stream to call the function asynchronously (non-blocking) or IntPtr.Zero to call the function synchronously (blocking). Adds one matrix to another (c = a + b). The first matrix to be added. The second matrix to be added. The sum of the two matrices The optional mask that is used to select a subarray. Use null if not needed Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Subtracts one matrix from another (c = a - b). The matrix where subtraction takes place The matrix to be subtracted The result of a - b The optional mask that is used to select a subarray. Use null if not needed Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Computes element-wise product of the two GpuMat: c = scale * a * b. The first GpuMat to be element-wise multiplied. The second GpuMat to be element-wise multiplied. The element-wise multiplication of the two GpuMat The scale Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Computes element-wise quotient of the two GpuMat (c = scale * a / b). The first GpuMat The second GpuMat The element-wise quotient of the two GpuMat The scale Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Computes the weighted sum of two arrays (dst = alpha*src1 + beta*src2 + gamma) The first source GpuMat The weight for The second source GpuMat The weight for The constant to be added The result Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Computes element-wise absolute difference of two GpuMats (c = abs(a - b)). The first GpuMat The second GpuMat The result of the element-wise absolute difference. Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Computes absolute value of each pixel in an image The source GpuMat, supports depth of Int16 and float. The resulting GpuMat Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Computes square of each pixel in an image The source GpuMat, supports depths of Byte, UInt16, Int16 and float. The resulting GpuMat Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Computes square root of each pixel in an image The source GpuMat, supports depths of Byte, UInt16, Int16 and float. The resulting GpuMat Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Compares elements of two GpuMats (c = a <cmpop> b). Supports CV_8UC4, CV_32FC1 types The first GpuMat The second GpuMat The result of the comparison. The type of comparison Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking).
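A sketch of the element-wise arithmetic described above. The CudaInvoke method names match the operations listed here; omitting the optional mask and stream arguments (assumed defaults) makes the calls synchronous and whole-image:

    using Emgu.CV;
    using Emgu.CV.Cuda;
    using Emgu.CV.Structure;

    using (Image<Gray, byte> a = new Image<Gray, byte>(640, 480))
    using (Image<Gray, byte> b = new Image<Gray, byte>(640, 480))
    using (GpuMat ga = new GpuMat())
    using (GpuMat gb = new GpuMat())
    using (GpuMat sum = new GpuMat())
    using (GpuMat diff = new GpuMat())
    {
        a.SetValue(new Gray(100));
        b.SetValue(new Gray(28));
        ga.Upload(a);
        gb.Upload(b);
        CudaInvoke.Add(ga, gb, sum);      // c = a + b
        CudaInvoke.Absdiff(ga, gb, diff); // c = abs(a - b)
    }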
Resizes the image. The source image. Has to be GpuMat<Byte>. If stream is used, the GpuMat has to be either single channel or 4 channels. The destination image. The interpolation type. Supports INTER_NEAREST, INTER_LINEAR. Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Copies each plane of a multi-channel GpuMat to a dedicated GpuMat The multi-channel gpuMat Pointer to an array of single channel GpuMat pointers Use a Stream to call the function asynchronously (non-blocking) or IntPtr.Zero to call the function synchronously (blocking). Makes a multi-channel GpuMat out of several single-channel GpuMats Pointer to an array of single channel GpuMat pointers The multi-channel gpuMat Use a Stream to call the function asynchronously (non-blocking) or IntPtr.Zero to call the function synchronously (blocking). Computes exponent of each matrix element (b = exp(a)) The source GpuMat. Supports Byte, UInt16, Int16 and float types. The resulting GpuMat Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Computes power of each matrix element: dst(i,j) = pow(src(i,j), power) if src.type() is integer; dst(i,j) = pow(fabs(src(i,j)), power) otherwise. Supports all types except depth == CV_64F The source GpuMat The power The resulting GpuMat Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Computes natural logarithm of the absolute value of each matrix element: b = log(abs(a)) The source GpuMat. Supports Byte, UInt16, Int16 and float types. The resulting GpuMat Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Computes magnitude of each (x(i), y(i)) vector The source GpuMat. Supports only floating-point type The source GpuMat. Supports only floating-point type The destination GpuMat. Supports only floating-point type Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Computes squared magnitude of each (x(i), y(i)) vector The source GpuMat. Supports only floating-point type The source GpuMat. Supports only floating-point type The destination GpuMat. Supports only floating-point type Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Computes angle (angle(i)) of each (x(i), y(i)) vector The source GpuMat. Supports only floating-point type The source GpuMat. Supports only floating-point type The destination GpuMat. Supports only floating-point type If true, the output angle is in degrees, otherwise in radians Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Converts Cartesian coordinates to polar The source GpuMat. Supports only floating-point type The source GpuMat. Supports only floating-point type The destination GpuMat. Supports only floating-point type The destination GpuMat. Supports only floating-point type If true, the output angle is in degrees, otherwise in radians Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Converts polar coordinates to Cartesian The source GpuMat. Supports only floating-point type The source GpuMat. Supports only floating-point type The destination GpuMat. Supports only floating-point type The destination GpuMat.
Supports only floating-point type If true, the input angle is in degrees, otherwise in radians Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Finds minimum and maximum element values and their positions. The extrema are searched over the whole GpuMat or, if mask is not IntPtr.Zero, in the specified GpuMat region. The source GpuMat, single-channel Pointer to returned minimum value Pointer to returned maximum value Pointer to returned minimum location Pointer to returned maximum location The optional mask that is used to select a subarray. Use null if not needed Performs the downsampling step of Gaussian pyramid decomposition. The source CudaImage. The destination CudaImage, should have 2x smaller width and height than the source. Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Performs the up-sampling step of Gaussian pyramid decomposition. The source CudaImage. The destination image, should have 2x larger width and height than the source. Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Computes mean value and standard deviation The GpuMat. Supports only CV_8UC1 type The mean value The standard deviation Computes norm of the difference between two GpuMats The GpuMat. Supports only CV_8UC1 type If IntPtr.Zero, the norm operation is applied to the first GpuMat only. Otherwise, this is the second GpuMat, of type CV_8UC1 The norm type. Supports NORM_INF, NORM_L1, NORM_L2. The norm of the first GpuMat if the second is IntPtr.Zero; otherwise the norm of the difference between the two GpuMats. Counts non-zero array elements The GpuMat The number of non-zero GpuMat elements Reduces GpuMat to a vector by treating the GpuMat rows/columns as a set of 1D vectors and performing the specified operation on the vectors until a single row/column is obtained. The input GpuMat The destination GpuMat. Must be a preallocated 1 x n matrix and have the same number of channels as the input GpuMat The dimension index along which the matrix is reduced. The reduction operation type Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Flips the GpuMat in one of 3 different ways (row and column indices are 0-based): dst(i,j)=src(rows(src)-i-1,j) if flip_mode = 0 dst(i,j)=src(i,cols(src1)-j-1) if flip_mode > 0 dst(i,j)=src(rows(src)-i-1,cols(src)-j-1) if flip_mode < 0 Source GpuMat. Destination GpuMat. Specifies how to flip the GpuMat. flip_mode = 0 means flipping around the x-axis, flip_mode > 0 (e.g. 1) means flipping around the y-axis and flip_mode < 0 (e.g. -1) means flipping around both axes. Use a Stream to call the function asynchronously (non-blocking) or IntPtr.Zero to call the function synchronously (blocking). Flips the GpuMat<Byte> in one of 3 different ways (row and column indices are 0-based). The source GpuMat. Supports 1, 3 and 4 channel GpuMats with Byte, UInt16, int or float depth Destination GpuMat. The same size and type as the source Specifies how to flip the GpuMat. Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Calculates per-element bit-wise exclusive-or of two GpuMats: dst(I)=src1(I)^src2(I) if mask(I)!=0 In the case of floating-point GpuMats their bit representations are used for the operation.
All the GpuMats must have the same type, except the mask, and the same size The first source GpuMat The second source GpuMat The destination GpuMat Mask, 8-bit single channel GpuMat; specifies elements of the destination GpuMat to be changed. Use IntPtr.Zero if not needed. Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Calculates per-element bit-wise logical or of two GpuMats: dst(I)=src1(I) | src2(I) if mask(I)!=0 In the case of floating-point GpuMats their bit representations are used for the operation. All the GpuMats must have the same type, except the mask, and the same size The first source GpuMat The second source GpuMat The destination GpuMat Mask, 8-bit single channel GpuMat; specifies elements of the destination GpuMat to be changed. Use IntPtr.Zero if not needed. Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Calculates per-element bit-wise logical and of two GpuMats: dst(I)=src1(I) & src2(I) if mask(I)!=0 In the case of floating-point GpuMats their bit representations are used for the operation. All the GpuMats must have the same type, except the mask, and the same size The first source GpuMat The second source GpuMat The destination GpuMat Mask, 8-bit single channel GpuMat; specifies elements of the destination GpuMat to be changed. Use IntPtr.Zero if not needed. Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Calculates per-element bit-wise logical not: dst(I)=~src(I) if mask(I)!=0 In the case of floating-point GpuMats their bit representations are used for the operation. All the GpuMats must have the same type, except the mask, and the same size The source GpuMat The destination GpuMat Mask, 8-bit single channel GpuMat; specifies elements of the destination GpuMat to be changed. Use IntPtr.Zero if not needed. Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Computes per-element minimum of two GpuMats (dst = min(src1, src2)) The first GpuMat The second GpuMat The result GpuMat Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Computes per-element maximum of two GpuMats (dst = max(src1, src2)) The first GpuMat The second GpuMat The result GpuMat Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Applies fixed-level thresholding to a single-channel array. The function is typically used to get a bi-level (binary) image out of a grayscale image or for removing noise, i.e. filtering out pixels with too small or too large values. There are several types of thresholding the function supports that are determined by thresholdType Source array (single-channel, 8-bit or 32-bit floating point). Destination array; must be either the same type as src or 8-bit. Threshold value Maximum value to use with CV_THRESH_BINARY and CV_THRESH_BINARY_INV thresholding types Thresholding type Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking).
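A sketch of GPU thresholding as described above; the file name is hypothetical and the CudaInvoke.Threshold parameter order is assumed to mirror the descriptions here (source, destination, threshold, max value, type, optional stream):

    using Emgu.CV;
    using Emgu.CV.Cuda;
    using Emgu.CV.CvEnum;
    using Emgu.CV.Structure;

    using (Image<Gray, byte> gray = new Image<Gray, byte>("input.png"))
    using (GpuMat src = new GpuMat())
    using (GpuMat dst = new GpuMat())
    {
        src.Upload(gray);
        // Binary threshold: pixels above 128 become 255, the rest become 0.
        // Omitting the stream argument makes the call synchronous (blocking).
        CudaInvoke.Threshold(src, dst, 128, 255, ThresholdType.Binary);
        dst.Download(gray); // bring the result back to host memory
    }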
Performs generalized matrix multiplication: dst = alpha*op(src1)*op(src2) + beta*op(src3), where op(X) is X or X^T The first source array. The second source array. The scalar The third source array (shift). Can be IntPtr.Zero, if there is no shift. The scalar The destination array. The gemm operation type Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Warps the image using affine transformation The source GpuMat The destination GpuMat The 2x3 transformation matrix (pointer to CvArr) Supports NN, LINEAR, CUBIC The border mode, use BORDER_TYPE.CONSTANT for default. The border value, use new MCvScalar() for default. Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Warps the image using perspective transformation The source GpuMat The destination GpuMat The 3x3 transformation matrix (pointer to CvArr) Supports NN, LINEAR, CUBIC The border mode, use BORDER_TYPE.CONSTANT for default. The border value, use new MCvScalar() for default. Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). DST[x,y] = SRC[xmap[x,y],ymap[x,y]] with bilinear interpolation. The source GpuMat. Supports CV_8UC1, CV_8UC3 source types. The destination GpuMat. Supports CV_8UC1, CV_8UC3 source types. The xmap. Supports CV_32FC1 map type. The ymap. Supports CV_32FC1 map type. Interpolation type. Border mode. Use BORDER_CONSTANT for default. The value of the border. Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Performs mean-shift filtering for each point of the source image. It maps each point of the source image into another point, and as the result we have the new color and new position of each point. Source CudaImage. Only CV 8UC4 images are supported for now. Destination CudaImage, containing the color of mapped points. Will have the same size and type as src. Spatial window radius. Color window radius. Termination criteria. Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Performs mean-shift procedure and stores information about processed points (i.e. their colors and positions) into two images. Source CudaImage. Only CV 8UC4 images are supported for now. Destination CudaImage, containing the color of mapped points. Will have the same size and type as src. Destination CudaImage, containing the position of mapped points. Will have the same size as src and CV 16SC2 type. Spatial window radius. Color window radius. Termination criteria. Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Performs mean-shift segmentation of the source image and eliminates small segments. Source CudaImage. Only CV 8UC4 images are supported for now. Segmented Image. Will have the same size and type as src. Note that this is an Image type and not CudaImage type Spatial window radius. Color window radius. Minimum segment size. Smaller segments will be merged. Termination criteria. Rotates an image around the origin (0,0) and then shifts it. Source image. Supports 1, 3 or 4 channel images with Byte, UInt16 or float depth Destination image with the same type as src. Must be pre-allocated Angle of rotation in degrees Shift along the horizontal axis Shift along the vertical axis Interpolation method. Only INTER_NEAREST, INTER_LINEAR, and INTER_CUBIC are supported. Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Copies a 2D array to a larger destination array and pads borders with the given constant. Source image. Destination image with the same type as src.
The size is Size(src.cols+left+right, src.rows+top+bottom). Number of pixels in each direction from the source image rectangle to extrapolate. Number of pixels in each direction from the source image rectangle to extrapolate. Number of pixels in each direction from the source image rectangle to extrapolate. Number of pixels in each direction from the source image rectangle to extrapolate. Border Type Border value. Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Computes the integral image and integral for the squared image The source GpuMat, supports only CV_8UC1 source type The sum GpuMat, supports only CV_32S source type, but will contain unsigned int values Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Computes squared integral image The source GpuMat, supports only CV_8UC1 source type The sqsum GpuMat, supports only CV_32F source type. Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Performs a forward or inverse discrete Fourier transform (1D or 2D) of a floating point matrix. Param dft_size is the size of the DFT transform. If the source matrix is not continuous, then an additional copy will be done, so to avoid copying ensure the source matrix is a continuous one. If you want to use a preallocated output ensure it is continuous too, otherwise it will be reallocated. Being implemented via CUFFT, the real-to-complex transform result contains only non-redundant values in CUFFT's format. The result as a full complex matrix for such kind of transform cannot be retrieved. For the complex-to-real transform it is assumed that the source matrix is packed in CUFFT's format. The source GpuMat The resulting GpuMat of the DFT, must be pre-allocated and continuous. If single channel, the result is real. If two channel, the result is complex DFT flags Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Calculates histogram with evenly distributed bins for a single channel source. The source GpuMat. Supports CV_8UC1, CV_16UC1 and CV_16SC1 types. Histogram with evenly distributed bins. A GpuMat<int> type. The size of histogram (number of levels) The lower level The upper level Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Histogram with evenly distributed bins Performs linear blending of two images. First image. Supports only CV_8U and CV_32F depth. Second image. Must have the same size and the same type as img1. Weights for first image. Must have the same size as img1. Supports only CV_32F type. Weights for second image. Must have the same size as img2. Supports only CV_32F type. Destination image. Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Applies bilateral filter to the image. The source image The destination image; should have the same size and the same type as src The diameter of each pixel neighborhood that is used during filtering. Filter sigma in the color space. Larger value of the parameter means that farther colors within the pixel neighborhood (see sigmaSpace) will be mixed together, resulting in larger areas of semi-equal color Filter sigma in the coordinate space.
Larger value of the parameter means that farther pixels will influence each other (as long as their colors are close enough; see sigmaColor). When d>0, it specifies the neighborhood size regardless of sigmaSpace, otherwise d is proportional to sigmaSpace. Pixel extrapolation method, use DEFAULT for default Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Routines for correcting image color gamma Source image (3- or 4-channel 8 bit). Destination image. True for forward gamma correction or false for inverse gamma correction. Stream for the asynchronous version. Release the GpuMat Pointer to the GpuMat Create an empty GpuMat Pointer to an empty GpuMat Convert a CvArr to a GpuMat Pointer to a CvArr Pointer to the GpuMat Get the GpuMat size: width == number of columns, height == number of rows The GpuMat The size of the matrix Get the GpuMat type The GpuMat The GpuMat type Create a GpuMat of the specified size The number of rows (height) The number of columns (width) The type of GpuMat Pointer to the GpuMat Create a GpuMat of the specified size. The allocated data is continuous within this GpuMat. The number of rows (height) The number of columns (width) The type of GpuMat Pointer to the GpuMat Performs blocking upload of data to GpuMat. The destination gpuMat The CvArray to be uploaded to GPU Downloads data from device to host memory. Blocking calls. The source GpuMat The CvArray where data will be downloaded to Copy the source GpuMat to destination GpuMat, using an optional mask. The GpuMat to be copied from The GpuMat to be copied to The optional mask, use IntPtr.Zero if not needed. Use a Stream to call the function asynchronously (non-blocking) or IntPtr.Zero to call the function synchronously (blocking). This function has several different purposes and thus has several synonyms. It copies one GpuMat to another with optional scaling, which is performed first, and/or optional type conversion, performed after: dst(I)=src(I)*scale + (shift,shift,...) All the channels of multi-channel GpuMats are processed independently. The type conversion is done with rounding and saturation, that is if a result of scaling + conversion cannot be represented exactly by a value of the destination GpuMat element type, it is set to the nearest representable value on the real axis. In case of scale=1, shift=0 no prescaling is done. This is a specially optimized case and it has the appropriate convertTo synonym. Source GpuMat Destination GpuMat Scale factor Value added to the scaled source GpuMat elements Use a Stream to call the function asynchronously (non-blocking) or IntPtr.Zero to call the function synchronously (blocking). Changes shape of GpuMat without copying data. The GpuMat to be reshaped. The result GpuMat. New number of channels. newCn = 0 means that the number of channels remains unchanged. New number of rows. newRows = 0 means that the number of rows remains unchanged unless it needs to be changed according to the newCn value. A GpuMat of different shape This function is similar to cvCalcBackProjectPatch. It slides through the image, compares overlapped patches of size wxh with templ using the specified method and stores the comparison results to result Image where the search is running. It should be 8-bit or 32-bit floating-point Searched template; must not be greater than the source image and must have the same data type as the image A map of comparison results; single-channel 32-bit floating-point.
If image is WxH and templ is wxh then result must be (W-w+1)x(H-h+1). Pointer to cv::gpu::TemplateMatching Use a Stream to call the function asynchronously (non-blocking) or IntPtr.Zero to call the function synchronously (blocking). Return true if Cuda is found on the system Gaussian Mixture-based Background/Foreground Segmentation Algorithm. Create a Gaussian Mixture-based Background/Foreground Segmentation model Updates the background model Next video frame. The learning rate, use -1.0f for default value. Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Release all the unmanaged resources associated with this object Works only under Windows; supports only the H264 video codec and AVI files. Surface format Cascade Classifier for object detection using Cuda Create a Cuda cascade classifier using the specific file The file to create the classifier from Finds rectangular regions in the given image that are likely to contain objects the cascade has been trained for and returns those regions as a sequence of rectangles. The image where the search will take place An array of regions for the detected objects Release all unmanaged resources associated with this object Contrast Limited Adaptive Histogram Equalization Create the Contrast Limited Adaptive Histogram Equalization Threshold for contrast limiting. Use 40.0 for default Size of grid for histogram equalization. Input image will be divided into equally sized rectangular tiles. This parameter defines the number of tiles in row and column. Use (8, 8) for default Equalizes the histogram of a grayscale image using Contrast Limited Adaptive Histogram Equalization. Source image Destination image Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Release all the unmanaged memory associated with this object The Cuda device information Query the information of the GPU device that is currently in use. Query the information of the cuda device with the specific id. The device id Indicates if the device has the specific feature Release the unmanaged resource related to the GpuDevice The id of the device The name of the device The compute capability The number of streaming multiprocessors Get the amount of free memory at the moment Get the amount of total memory Checks whether the Cuda module can be run on the given device GPU feature Cuda compute 1.0 Cuda compute 1.1 Cuda compute 1.2 Cuda compute 1.3 Cuda compute 2.0 Cuda compute 2.1 Global Atomic Shared Atomic Native double A HOG descriptor Create a new HOGDescriptor using the specific parameters Block size in cells. Use (16, 16) for default. Cell size. Use (8, 8) for default. Block stride. Must be a multiple of cell size. Use (8,8) for default. Number of bins. Detection window size. Must be aligned to block size and block stride. Must match the size of the training image. Use (64, 128) for default. Returns coefficients of the classifier trained for people detection (for default window size). The default people detector Set the SVM detector The SVM detector Release the unmanaged memory associated with this HOGDescriptor
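A pedestrian detection sketch built from the HOG members described above (default people detector, SVM detector, 64x128 window). The class is CudaHOG in recent Emgu versions (CudaHOGDescriptor in older ones); the constructor argument order and the DetectMultiScale return type are assumptions:

    using System.Drawing;
    using Emgu.CV;
    using Emgu.CV.Cuda;
    using Emgu.CV.Structure;

    using (CudaHOG hog = new CudaHOG(
        new Size(64, 128), // detection window size, the documented default
        new Size(16, 16),  // block size
        new Size(8, 8),    // block stride, a multiple of cell size
        new Size(8, 8),    // cell size
        9))                // number of bins
    using (Image<Bgra, byte> img = new Image<Bgra, byte>("street.png"))
    using (GpuMat gpuImg = new GpuMat())
    {
        hog.SetSVMDetector(hog.GetDefaultPeopleDetector());
        gpuImg.Upload(img);
        var detections = hog.DetectMultiScale(gpuImg); // regions likely to contain people
    }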
A CudaImage is very similar to the Emgu.CV.Image except that it is used for GPU processing Color type of this image (either Gray, Bgr, Bgra, Hsv, Hls, Lab, Luv, Xyz, Ycc, Rgb or Rgba) Depth of this image (either Byte, SByte, Single, double, UInt16, Int16 or Int32) Similar to CvArray but uses GPU for processing The type of element in the matrix A GpuMat, use the generic version if possible. The non-generic version is good for use as a buffer in stream calls. Create an empty GpuMat Create a GpuMat of the specified size The number of rows (height) The number of columns (width) The number of channels The type of depth Indicates if the data should be continuous Create a GpuMat from the specific pointer Pointer to the unmanaged gpuMat Create a GpuMat from a CvArray of the same depth type The CvArray to be converted to GpuMat Create a GpuMat from the specific region of . The data is shared between the two GpuMat The matrix where the region is extracted from The column range. Use MCvSlice.WholeSeq for all columns. The row range. Use MCvSlice.WholeSeq for all rows. Release the unmanaged memory associated with this GpuMat Pointer to the InputArray Pointer to the OutputArray Pointer to the InputOutputArray Performs blocking upload of data to GpuMat The CvArray to be uploaded to GpuMat Downloads data from device to host memory. Blocking calls The destination CvArray where the GpuMat data will be downloaded to. Copies a scalar value to every selected element of the destination GpuMat: arr(I)=value if mask(I)!=0 Fill value Operation mask, 8-bit single channel GpuMat; specifies elements of the destination GpuMat to be changed. Can be IntPtr.Zero if not used Use a Stream to call the function asynchronously (non-blocking) or IntPtr.Zero to call the function synchronously (blocking). Copy the source GpuMat to destination GpuMat, using an optional mask. The output array to be copied to The optional mask, use IntPtr.Zero if not needed. Use a Stream to call the function asynchronously (non-blocking) or IntPtr.Zero to call the function synchronously (blocking). This function has several different purposes and thus has several synonyms. It copies one GpuMat to another with optional scaling, which is performed first, and/or optional type conversion, performed after: dst(I)=src(I)*scale + (shift,shift,...) All the channels of multi-channel GpuMats are processed independently. The type conversion is done with rounding and saturation, that is if a result of scaling + conversion cannot be represented exactly by a value of the destination GpuMat element type, it is set to the nearest representable value on the real axis. In case of scale=1, shift=0 no prescaling is done. This is a specially optimized case and it has the appropriate convertTo synonym. Destination GpuMat Scale factor Value added to the scaled source GpuMat elements Use a Stream to call the function asynchronously (non-blocking) or IntPtr.Zero to call the function synchronously (blocking). Changes shape of GpuMat without copying data. New number of channels. newCn = 0 means that the number of channels remains unchanged. New number of rows. newRows = 0 means that the number of rows remains unchanged unless it needs to be changed according to the newCn value. A GpuMat of different shape
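The blocking Upload/Download round trip described above, as a minimal sketch:

    using Emgu.CV;
    using Emgu.CV.Cuda;
    using Emgu.CV.Structure;

    using (Image<Bgr, byte> cpuImage = new Image<Bgr, byte>(640, 480))
    using (GpuMat gpuMat = new GpuMat())
    {
        gpuMat.Upload(cpuImage);   // blocking host-to-device copy
        // ... run CudaInvoke operations on gpuMat here ...
        using (Image<Bgr, byte> result = new Image<Bgr, byte>(gpuMat.Size))
        {
            gpuMat.Download(result); // blocking device-to-host copy
        }
    }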
Returns a GpuMat corresponding to the ith row of the GpuMat. The data is shared with the current GpuMat. The row to be extracted The ith row of the GpuMat The parent GpuMat should never be released before the returned GpuMat that represents the subregion Returns a GpuMat corresponding to the [ ) rows of the GpuMat. The data is shared with the current GpuMat. The inclusive starting row to be extracted The exclusive ending row to be extracted The [ ) rows of the GpuMat The parent GpuMat should never be released before the returned GpuMat that represents the subregion Returns a GpuMat corresponding to the ith column of the GpuMat. The data is shared with the current GpuMat. The column to be extracted The ith column of the GpuMat The parent GpuMat should never be released before the returned GpuMat that represents the subregion Returns a GpuMat corresponding to the [ ) columns of the GpuMat. The data is shared with the current GpuMat. The inclusive starting column to be extracted The exclusive ending column to be extracted The [ ) columns of the GpuMat The parent GpuMat should never be released before the returned GpuMat that represents the subregion Returns true if the two GpuMats are equal The other GpuMat to be compared with True if the two GpuMats are equal Makes a multi-channel array out of several single-channel arrays An array of single channel GpuMat where each item in the array represents a single channel of the GpuMat Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Split current Image into an array of gray scale images where each element in the array represents a single color channel of the original image An array of single channel GpuMat where each item in the array represents a single channel of the original GpuMat Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Split current GpuMat into an array of single channel GpuMat where each element in the array represents a single channel of the original GpuMat Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). An array of single channel GpuMat where each element in the array represents a single channel of the original GpuMat Returns the min / max locations and values for the image The maximum locations for each channel The maximum values for each channel The minimum locations for each channel The minimum values for each channel Get the GpuMat size: width == number of columns, height == number of rows Get the type of the GpuMat True if the data is continuous Depth type True if the matrix is empty Number of channels Create a GpuMat from the unmanaged pointer The unmanaged pointer to the GpuMat Create an empty GpuMat Create a GpuMat from a CvArray of the same depth type The CvArray to be converted to GpuMat Create a GpuMat of the specified size The number of rows (height) The number of columns (width) The number of channels Indicates if the data should be continuous Create a GpuMat of the specified size The size of the GpuMat The number of channels Convert this GpuMat to a Matrix The matrix that contains the same values as this GpuMat Returns a GpuMat corresponding to a specified rectangle of the current GpuMat. The data is shared with the current matrix. In other words, it allows the user to treat a rectangular part of the input array as a stand-alone array. Zero-based coordinates of the rectangle of interest. A GpuMat that represents the region of the current matrix.
The parent GpuMat should never be released before the returned GpuMat that represents the subregion Create an empty CudaImage Create the CudaImage from the unmanaged pointer. The unmanaged pointer to the GpuMat. It is the user's responsibility that the Color type and depth match between the managed class and unmanaged pointer. Create a GPU image from a regular image The image to be converted to a GPU image Create a CudaImage of the specific size The number of rows (height) The number of columns (width) Indicates if the data should be continuous Create a CudaImage of the specific size The number of rows (height) The number of columns (width) Create a CudaImage of the specific size The size of the image Create a CudaImage from the specific region of . The data is shared between the two CudaImage The CudaImage where the region is extracted from The column range. Use MCvSlice.WholeSeq for all columns. The row range. Use MCvSlice.WholeSeq for all rows. Convert the current CudaImage to a regular Image. A regular image Convert the current CudaImage to the specific color and depth The type of color to be converted to The type of pixel depth to be converted to CudaImage of the specific color and depth Convert the source image to the current image, if the sizes are different, the current image will be a resized version of the srcImage. The color type of the source image The color depth of the source image The sourceImage Create a clone of this CudaImage Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). A clone of this CudaImage Resize the CudaImage. The calling GpuMat must be GpuMat<Byte>. If stream is specified, it has to be either 1 or 4 channels. The new size The interpolation type Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). A CudaImage of the new size Returns a CudaImage corresponding to a specified rectangle of the current CudaImage. The data is shared with the current matrix. In other words, it allows the user to treat a rectangular part of the input array as a stand-alone array. Zero-based coordinates of the rectangle of interest. A CudaImage that represents the region of the current CudaImage. The parent CudaImage should never be released before the returned CudaImage that represents the subregion Returns a CudaImage corresponding to the ith row of the CudaImage. The data is shared with the current Image. The row to be extracted The ith row of the CudaImage The parent CudaImage should never be released before the returned CudaImage that represents the subregion Returns a CudaImage corresponding to the [ ) rows of the CudaImage. The data is shared with the current Image. The inclusive starting row to be extracted The exclusive ending row to be extracted The [ ) rows of the CudaImage The parent CudaImage should never be released before the returned CudaImage that represents the subregion Returns a CudaImage corresponding to the ith column of the CudaImage. The data is shared with the current Image. The column to be extracted The ith column of the CudaImage The parent CudaImage should never be released before the returned CudaImage that represents the subregion
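A sketch of the CudaImage conversion, resize and download members described above; the ToImage name and the Resize overload are assumptions consistent with these descriptions:

    using System.Drawing;
    using Emgu.CV;
    using Emgu.CV.Cuda;
    using Emgu.CV.CvEnum;
    using Emgu.CV.Structure;

    using (Image<Bgr, byte> cpu = new Image<Bgr, byte>("photo.png"))
    using (CudaImage<Bgr, byte> gpu = new CudaImage<Bgr, byte>(cpu))   // upload
    using (CudaImage<Gray, byte> gray = gpu.Convert<Gray, byte>())     // GPU color conversion
    using (CudaImage<Gray, byte> small = gray.Resize(new Size(320, 240), Inter.Linear))
    using (Image<Gray, byte> result = small.ToImage())                 // back to a regular image
    {
        // result now holds the resized grayscale picture in host memory
    }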
Returns a CudaImage corresponding to the [start, end) columns of the CudaImage. The data is shared with the current Image. The inclusive starting column to be extracted The exclusive ending column to be extracted The [start, end) columns of the CudaImage The parent CudaImage should never be released before the returned CudaImage that represents the subregion Convert the current CudaImage to its equivalent Bitmap representation Gpu look up table Create the look up table It should be either a 1 or 3 channel matrix of size 1x256 Transform the image using the lookup table The image to be transformed The transformation result Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Release all the unmanaged memory associated with this look up table Descriptor matcher Find the k-nearest matches An n x m matrix of descriptors to be queried for nearest neighbors. n is the number of descriptors and m is the size of each descriptor Number of nearest neighbors to search for Can be null if not needed. An n x 1 matrix. If 0, the query descriptor in the corresponding row will be ignored. Matches. Each matches[i] is k or fewer matches for the same query descriptor. Add the model descriptors The model descriptors Release all the unmanaged memory associated with this matcher A Brute force matcher using Cuda Create a CudaBruteForceMatcher using the specific distance type The distance type A FAST detector using Cuda The feature 2D base class Get the pointer to the Feature2DAsync object The pointer to the Feature2DAsync object Create a fast detector with the specific parameters Threshold on the difference between the intensity of the center pixel and pixels on a circle around this pixel. Use 10 for default. Specify if non-maximum suppression should be used. Release the unmanaged resource associated with the Detector An ORB detector using Cuda Create an ORBDetector using the specific values The number of desired features. Coefficient by which we divide the dimensions from one scale pyramid level to the next. The number of levels in the scale pyramid. The level at which the image is given. If 1, that means we will also look at the image scaleFactor times bigger How far from the boundary the points should be. How many random points are used to produce each cell of the descriptor (2, 3, 4 ...). Type of the score to use. Patch size. Release the unmanaged resource associated with the Detector Detect keypoints in an image and compute the descriptors on the image from the keypoint locations. The image The optional mask, can be null if not needed The detected keypoints will be stored in this vector The descriptors from the keypoints If true, the method will skip the detection phase and will compute descriptors for the provided keypoints Detect the features in the image The result vector of keypoints The image from which the features will be detected The optional mask. Compute the descriptors on the image from the given keypoint locations. The image to compute descriptors from The keypoints where the descriptor computation is performed The descriptors from the given keypoints
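A minimal sketch of the one-call detect-and-compute workflow described above; the class name CudaORBDetector, its constructor argument, and the exact DetectAndCompute signature are assumptions modeled on the parameter lists here:

    using Emgu.CV;
    using Emgu.CV.Cuda;
    using Emgu.CV.CvEnum;
    using Emgu.CV.Util;

    // Detect keypoints and compute their descriptors in a single call.
    using (Mat gray = CvInvoke.Imread("scene.png", ImreadModes.Grayscale))
    using (GpuMat gpuGray = new GpuMat())
    using (CudaORBDetector orb = new CudaORBDetector(500))  // number of desired features
    using (VectorOfKeyPoint keypoints = new VectorOfKeyPoint())
    using (GpuMat descriptors = new GpuMat())
    {
        gpuGray.Upload(gray);
        // false = run the detection phase instead of using provided keypoints
        orb.DetectAndCompute(gpuGray, null, keypoints, descriptors, false);
    }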
BoxMax filter Base Cuda filter class Release all the unmanaged memory associated with this gpu filter Apply the cuda filter The source CudaImage where the filter will be applied to The destination CudaImage Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Create a BoxMax filter. Size of the kernel The center of the kernel. Use (-1, -1) for the default kernel center. The border type. The border value. BoxMin filter Create a BoxMin filter. Size of the kernel The center of the kernel. Use (-1, -1) for the default kernel center. The border type. The border value. Gaussian filter Create a Gaussian filter. The size of the kernel This parameter may specify the Gaussian sigma (standard deviation). If it is zero, it is calculated from the kernel size. In the case of a non-square Gaussian kernel, this parameter may be used to specify a different sigma in the vertical direction. Use 0 for default The row border type. The column border type. Laplacian filter Create a Laplacian filter. Either 1 or 3 Optional scale. Use 1.0 for default The border type. The border value. Applies an arbitrary linear filter to the image. In-place operation is supported. When the aperture is partially outside the image, the function interpolates outlier pixel values from the nearest pixels that are inside the image Create a Gpu LinearFilter Convolution kernel, single-channel floating point matrix (e.g. Emgu.CV.Matrix). If you want to apply different kernels to different channels, split the gpu image into separate color planes and process them individually The anchor of the kernel that indicates the relative position of a filtered point within the kernel. The anchor should lie within the kernel. The special default value (-1,-1) means that it is at the kernel center Border type. Use REFLECT101 as default. The border value Morphology filter Create a Morphology filter. Type of morphological operation 2D 8-bit structuring element for the morphological operation. Anchor position within the structuring element. Negative values mean that the anchor is at the center. Number of times erosion and dilation are to be applied. Sobel filter Create a Sobel filter. Order of the derivative x Order of the derivative y Size of the extended Sobel kernel Optional scale, use 1 for default. The row border type. The column border type. Cascade Classifier for object detection using Cuda Canny edge detector using Cuda. The first threshold, used for edge linking The second threshold, used to find initial segments of strong edges Aperture parameter for the Sobel operator, use 3 for default Use false for default Finds the edges on the input and marks them in the output image edges using the Canny algorithm. Input image Image to store the edges found by the function Release all the unmanaged memory associated with this Canny edge detector. Base CornernessCriteria class Release all the unmanaged memory associated with this gpu filter Apply the cuda filter The source CudaImage where the filter will be applied to The destination CudaImage Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Cuda implementation of GoodFeaturesToTrackDetector Create the Cuda implementation of GoodFeaturesToTrackDetector Find the good features to track Release all the unmanaged memory associated with this detector Runs the Harris edge detector on the image. Similarly to cvCornerMinEigenVal and cvCornerEigenValsAndVecs, for each pixel it calculates a 2x2 gradient covariance matrix M over a block_size x block_size neighborhood. Then, it stores det(M) - k*trace(M)^2 to the destination image. Corners in the image can be found as local maxima of the destination image. Create a Cuda Harris Corner detector Neighborhood size Harris detector free parameter. Border type, use REFLECT101 for default
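A minimal sketch of the filter workflow above (create a Cuda filter, then Apply), assuming a CudaGaussianFilter constructor that takes the source/destination depth and channel count followed by the parameters listed here:

    using System.Drawing;
    using Emgu.CV;
    using Emgu.CV.Cuda;
    using Emgu.CV.CvEnum;

    // Apply a 5x5 Gaussian blur to an 8-bit single-channel image on the GPU.
    using (Mat gray = CvInvoke.Imread("input.png", ImreadModes.Grayscale))
    using (GpuMat src = new GpuMat())
    using (GpuMat dst = new GpuMat())
    using (CudaGaussianFilter gaussian = new CudaGaussianFilter(
        DepthType.Cv8U, 1,   // source depth and number of channels
        DepthType.Cv8U, 1,   // destination depth and number of channels
        new Size(5, 5),      // kernel size
        1.5))                // sigma; 0 would derive it from the kernel size
    {
        src.Upload(gray);
        gaussian.Apply(src, dst, null);  // null stream = synchronous (blocking) call
    }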
Base class for circles detector algorithm. Create a Hough circles detector Inverse ratio of the accumulator resolution to the image resolution. For example, if dp=1, the accumulator has the same resolution as the input image. If dp=2, the accumulator has half the width and height. Minimum distance between the centers of the detected circles. If the parameter is too small, multiple neighbor circles may be falsely detected in addition to a true one. If it is too large, some circles may be missed. The higher threshold of the two passed to the Canny edge detector (the lower one is half of it). The accumulator threshold for the circle centers at the detection stage. The smaller it is, the more false circles may be detected. Minimum circle radius. Maximum circle radius. Maximum number of output circles. Finds circles in a grayscale image using the Hough transform. 8-bit, single-channel grayscale input image. Output vector of found circles. Each vector is encoded as a 3-element floating-point vector. Release the unmanaged memory associated with this circle detector. Base class for lines detector algorithm. Create a Hough lines detector Distance resolution of the accumulator in pixels. Angle resolution of the accumulator in radians. Accumulator threshold parameter. Only those lines are returned that get enough votes (> threshold). If true, the lines are sorted by votes. Maximum number of output lines. Finds lines in a binary image using the classical Hough transform. 8-bit, single-channel binary source image Output vector of lines. Each line is represented by a two-element vector. The first element is the distance from the coordinate origin (top-left corner of the image). The second element is the line rotation angle in radians. Release the unmanaged memory associated with this line detector. Base class for line segments detector algorithm. Create a Hough segment detector Distance resolution of the accumulator in pixels. Angle resolution of the accumulator in radians. Minimum line length. Line segments shorter than that are rejected. Maximum allowed gap between points on the same line to link them. Maximum number of output lines. Finds line segments in a binary image using the probabilistic Hough transform. 8-bit, single-channel binary source image Output vector of lines. Each line is represented by a 4-element vector (x1, y1, x2, y2), where (x1, y1) and (x2, y2) are the ending points of each detected line segment. Release the unmanaged memory associated with this segment detector Cuda template matching filter. Create a Cuda template matching filter Specifies the way the template must be compared with image regions The block size This function is similar to cvCalcBackProjectPatch. It slides through the image, compares overlapped patches of size wxh with templ using the specified method and stores the comparison results in result Image where the search is running. It should be 8-bit or 32-bit floating-point Searched template; must be not greater than the source image and the same data type as the image A map of comparison results; single-channel 32-bit floating-point. If image is WxH and templ is wxh then result must be (W-w+1)x(H-h+1). Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Release the buffer
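A minimal sketch of GPU template matching followed by locating the best match on the CPU; the CudaTemplateMatching constructor arguments and the Match signature are assumptions based on the description above:

    using System.Drawing;
    using Emgu.CV;
    using Emgu.CV.Cuda;
    using Emgu.CV.CvEnum;

    // Slide a template over a scene on the GPU and find the best match
    // in the downloaded comparison map.
    using (Mat sceneMat = CvInvoke.Imread("scene.png", ImreadModes.Grayscale))
    using (Mat templMat = CvInvoke.Imread("template.png", ImreadModes.Grayscale))
    using (GpuMat scene = new GpuMat())
    using (GpuMat templ = new GpuMat())
    using (GpuMat result = new GpuMat())
    using (CudaTemplateMatching tm = new CudaTemplateMatching(
        DepthType.Cv8U, 1, TemplateMatchingType.CcorrNormed))
    using (Mat resultCpu = new Mat())
    {
        scene.Upload(sceneMat);
        templ.Upload(templMat);
        tm.Match(scene, templ, result, null);  // null stream = blocking call
        result.Download(resultCpu);
        double minVal = 0, maxVal = 0;
        Point minLoc = Point.Empty, maxLoc = Point.Empty;
        CvInvoke.MinMaxLoc(resultCpu, ref minVal, ref maxVal, ref minLoc, ref maxLoc);
        // maxLoc is the top-left corner of the best match for CcorrNormed
    }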
Background/Foreground Segmentation Algorithm. Create a Background/Foreground Segmentation model Background reference image update parameter Stat model update parameter. 0.002f corresponds to ~1K frames (~45 sec); 0.005 to ~18 sec (at 25 fps with an absolutely static background) Start value for the alpha parameter (to quickly initialize the statistical model) Updates the background model Next video frame. The learning rate, use -1.0f for the default value. Release all the unmanaged resources associated with this object Background/Foreground Segmentation Algorithm. Create a Background/Foreground Segmentation model Updates the background model Next video frame. The learning rate, use -1.0f for the default value. Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Release all the unmanaged resources associated with this object Brox optical flow Create the Brox optical flow solver Flow smoothness Gradient constancy importance Pyramid scale factor Number of lagged non-linearity iterations (inner loop) Number of warping iterations (number of pyramid levels) Number of linear system solver iterations Release all the unmanaged memory associated with this optical flow solver. PyrLK optical flow Create the PyrLK optical flow solver Window size. Use 21x21 for default The maximum number of pyramid levels. The number of iterations. Whether or not to use the initial flow in the input matrix. Release all the unmanaged memory associated with this optical flow solver. Farneback optical flow Release all the unmanaged memory associated with this optical flow solver. DualTvl1 optical flow Initializes a new instance of the class. Release all the unmanaged memory associated with this optical flow solver. Sparse PyrLK optical flow Create the PyrLK optical flow solver Window size. Use 21x21 for default The maximum number of pyramid levels. The number of iterations. Whether or not to use the initial flow in the input matrix. Release all the unmanaged memory associated with this optical flow solver. Disparity map refinement using joint bilateral filtering given a single color image. Qingxiong Yang, Liang Wang, Narendra Ahuja http://vision.ai.uiuc.edu/~qyang6/ Create a GpuDisparityBilateralFilter Number of disparities. Use 64 as default Filter radius, use 3 as default Number of iterations, use 1 as default Apply the filter to the disparity image The input disparity map The image The output disparity map, should have the same size as the input disparity map Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Release the unmanaged resources associated with the filter. Use the Block Matching algorithm to find stereo correspondence Create a StereoBM The number of disparities. Must be a multiple of 8. Use 64 for default The SAD window size. Use 19 for default Computes the disparity map for the input rectified stereo pair. The left single-channel, 8-bit image The right image of the same size and the same type The disparity map Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Release the stereo state and all the memory associated with it A Constant-Space Belief Propagation Algorithm for Stereo Matching. Qingxiong Yang, Liang Wang, Narendra Ahuja. http://vision.ai.uiuc.edu/~qyang6/ A Constant-Space Belief Propagation Algorithm for Stereo Matching The number of disparities. Use 128 as default The number of BP iterations on each level. Use 8 as default. The number of levels. Use 4 as default The number of active disparities on the first level. Use 4 as default.
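A minimal sketch of the block-matching stereo workflow above; the CudaStereoBM constructor arguments mirror the documented defaults, while the FindStereoCorrespondence method name is an assumption consistent with the description:

    using Emgu.CV;
    using Emgu.CV.Cuda;
    using Emgu.CV.CvEnum;

    // Compute a disparity map from a rectified stereo pair on the GPU.
    using (Mat leftMat = CvInvoke.Imread("left.png", ImreadModes.Grayscale))
    using (Mat rightMat = CvInvoke.Imread("right.png", ImreadModes.Grayscale))
    using (GpuMat left = new GpuMat())
    using (GpuMat right = new GpuMat())
    using (GpuMat disparity = new GpuMat())
    using (CudaStereoBM stereo = new CudaStereoBM(64, 19))  // numDisparities, SAD window
    {
        left.Upload(leftMat);    // 8-bit single-channel, rectified
        right.Upload(rightMat);  // same size and type as the left image
        stereo.FindStereoCorrespondence(left, right, disparity, null);  // blocking
    }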
Computes the disparity map for the input rectified stereo pair. The left single-channel, 8-bit image The right image of the same size and the same type The disparity map Use a Stream to call the function asynchronously (non-blocking) or null to call the function synchronously (blocking). Release the unmanaged memory Encapsulates a Cuda Stream. Provides an interface for async copying. Passed to each function that supports async kernel execution. Reference counting is enabled Create a new Cuda Stream Wait for the completion Release the stream Check if the stream is completed Gives information about what GPU archs this OpenCV GPU module was compiled for Check if the GPU module is built with the specific feature set. The feature set to be checked. True if the GPU module is built with the specific feature set. Check if the GPU module is targeted for the specific device version The major version The minor version True if the GPU module is targeted for the specific device version. Check if the GPU module is targeted for the specific PTX version The major version The minor version True if the GPU module is targeted for the specific PTX version. Check if the GPU module is targeted for the specific BIN version The major version The minor version True if the GPU module is targeted for the specific BIN version. Check if the GPU module is targeted for an equal or lesser PTX version The major version The minor version True if the GPU module is targeted for an equal or lesser PTX version. Check if the GPU module is targeted for an equal or greater device version The major version The minor version True if the GPU module is targeted for an equal or greater device version. Check if the GPU module is targeted for an equal or greater PTX version The major version The minor version True if the GPU module is targeted for an equal or greater PTX version. Check if the GPU module is targeted for an equal or greater BIN version The major version The minor version True if the GPU module is targeted for an equal or greater BIN version. Wrapped class of the C++ standard vector of GpuMat. Create an empty standard vector of GpuMat Create a standard vector of GpuMat of the specific size The size of the vector Create a standard vector of GpuMat with the initial values The initial values Clear the vector Push a value into the standard vector The value to be pushed to the vector Push multiple values into the standard vector The values to be pushed to the vector Release the standard vector Get the pointer to cv::_InputArray Get the pointer to cv::_OutputArray Get the pointer to cv::_InputOutputArray Get the size of the vector Get the item in the specific index The index The item in the specific index Background Subtractor module based on the algorithm given in: Andrew B. Godbehere, Akihiro Matsukawa, Ken Goldberg, “Visual Tracking of Human Visitors under Variable-Lighting Conditions for a Responsive Audio Art Installation”, American Control Conference, Montreal, June 2012. Create a background subtractor module based on GMG Number of frames used to initialize the background models. Threshold value, above which it is marked foreground, else background. Release all the unmanaged memory associated with this background model.
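A minimal sketch of asynchronous execution with the Stream class described above, reusing the Gaussian filter from the earlier sketch; passing a stream to Apply makes the call non-blocking, and the Completed property name is an assumption based on the text:

    using System.Drawing;
    using Emgu.CV;
    using Emgu.CV.Cuda;
    using Emgu.CV.CvEnum;

    // Enqueue a GPU filter on a Stream so the call returns immediately,
    // overlap CPU work, then synchronize before reading the result.
    using (Mat gray = CvInvoke.Imread("input.png", ImreadModes.Grayscale))
    using (GpuMat src = new GpuMat())
    using (GpuMat dst = new GpuMat())
    using (CudaGaussianFilter blur = new CudaGaussianFilter(
        DepthType.Cv8U, 1, DepthType.Cv8U, 1, new Size(5, 5), 1.5))
    using (Stream stream = new Stream())
    {
        src.Upload(gray);
        blur.Apply(src, dst, stream);  // enqueued on the stream, non-blocking
        // ... CPU work placed here runs concurrently with the GPU kernel ...
        stream.WaitForCompletion();    // block until all queued work has finished
        // stream.Completed can instead be polled for a non-blocking check
    }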
Gaussian Mixture-based Background/Foreground Segmentation Algorithm. The class implements the following algorithm: "An improved adaptive background mixture model for real-time tracking with shadow detection", P. KadewTraKuPong and R. Bowden, Proc. 2nd European Workshop on Advanced Video-Based Surveillance Systems, 2001. http://personal.ee.surrey.ac.uk/Personal/R.Bowden/publications/avbs01/avbs01.pdf Create an "Improved adaptive Gaussian mixture model for background subtraction". The length of the history. The maximum number of gaussian mixtures. Background ratio Noise strength (standard deviation of the brightness of each color channel). 0 means some automatic value. Release all the unmanaged memory associated with this background model. Face Recognizer Train the face recognizer with the specific images and labels The images used in the training. This can be a VectorOfMat The labels of the images. This can be a VectorOfInt Train the face recognizer with the specific images and labels The images used in the training. The labels of the images. Predict the label of the image The image on which the prediction will be based The prediction label Save the FaceRecognizer to a file The file name to be saved to Load the FaceRecognizer from the file The file where the FaceRecognizer will be loaded from Release the unmanaged memory associated with this FaceRecognizer The prediction result The label The distance Eigen face recognizer Create an EigenFaceRecognizer The number of components The distance threshold Fisher face recognizer Create a FisherFaceRecognizer The number of components The distance threshold LBPH face recognizer Create an LBPH face recognizer Radius Neighbors Grid X Grid Y The distance threshold Updates a FaceRecognizer with given data and associated labels. The training images, that means the faces you want to learn. The data has to be given as a VectorOfMat. The labels corresponding to the images Update the face recognizer with the specific images and labels The images used for updating the face recognizer The labels of the images
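A minimal sketch of the train/predict cycle described above, assuming the recognizers live in the Emgu.CV.Face namespace and that Train accepts managed image and label arrays; file names and labels are placeholders:

    using Emgu.CV;
    using Emgu.CV.Face;
    using Emgu.CV.Structure;

    // Train on two labeled faces, then predict the label of a new face.
    Image<Gray, byte>[] trainingFaces =
    {
        new Image<Gray, byte>("person0.png"),
        new Image<Gray, byte>("person1.png")
    };
    int[] labels = { 0, 1 };

    using (EigenFaceRecognizer recognizer = new EigenFaceRecognizer(
        80,                // number of components kept
        double.MaxValue))  // distance threshold; MaxValue disables rejection
    {
        recognizer.Train(trainingFaces, labels);
        using (Image<Gray, byte> query = new Image<Gray, byte>("unknown.png"))
        {
            FaceRecognizer.PredictionResult result = recognizer.Predict(query);
            // result.Label is the predicted person, result.Distance the dissimilarity
        }
    }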
The ERStat structure represents a class-specific Extremal Region (ER). An ER is a 4-connected set of pixels with all its grey-level values smaller than the values in its outer boundary. A class-specific ER is selected (using a classifier) from all the ERs in the component tree of the image. Seed point Threshold (max grey-level value) Area Perimeter Euler number Bounding box Order 1 raw moments to derive the centroid Order 1 raw moments to derive the centroid Order 2 central moments to construct the covariance matrix Order 2 central moments to construct the covariance matrix Order 2 central moments to construct the covariance matrix Pointer to horizontal crossings Median of the crossings at three different height levels Hole area ratio Convex hull ratio Number of inflexion points Pointer to pixels Probability that the ER belongs to the class we are looking for Pointer to the parent ERStat Pointer to the child ERStat Pointer to the next ERStat Pointer to the previous ERStat Whether or not the region is a local maximum of the probability Pointer to the ERStat that is the max probability ancestor Pointer to the ERStat that is the min probability ancestor Get the center of the region The source image width The center of the region Base class for the 1st and 2nd stages of the Neumann and Matas scene text detection algorithm Release all the unmanaged memory associated with this ERFilter Takes an image on input and returns the selected regions in a vector of ERStat; only distinctive ERs which correspond to characters are selected by a sequential classifier Single channel image CV_8UC1 Output for the 1st stage and Input/Output for the 2nd. The selected Extremal Regions are stored here. Find groups of Extremal Regions that are organized as text blocks. The image where ER grouping is to be performed on Array of single channel images from which the regions were extracted Vector of ERs retrieved from the ERFilter algorithm from each channel The XML or YAML file with the classifier model (e.g. trained_classifier_erGrouping.xml) The minimum probability for accepting a group. The grouping methods The output of the algorithm that indicates the text regions The grouping method Only perform grouping horizontally. Perform grouping in any orientation. Extremal Region Filter for the 1st stage classifier of the N&M algorithm Create an Extremal Region Filter for the 1st stage classifier of the N&M algorithm The file name of the classifier Threshold step in subsequent thresholds when extracting the component tree. The minimum area (% of image size) allowed for retrieved ERs. The maximum area (% of image size) allowed for retrieved ERs. The minimum probability P(er|character) allowed for retrieved ERs. Whether non-maximum suppression is done over the branch probabilities. The minimum probability difference between local maxima and local minima ERs. Extremal Region Filter for the 2nd stage classifier of the N&M algorithm Create an Extremal Region Filter for the 2nd stage classifier of the N&M algorithm The file name of the classifier The minimum probability P(er|character) allowed for retrieved ERs. Wrapped class of the C++ standard vector of ERStat. Constructor used to deserialize runtime serialized object The serialization info The streaming context A function used for runtime serialization of the object Serialization info Streaming context Create an empty standard vector of ERStat Create a standard vector of ERStat of the specific size The size of the vector Create a standard vector of ERStat with the initial values The initial values Push an array of values into the standard vector The values to be pushed to the vector Convert the standard vector to an array of ERStat An array of ERStat Clear the vector Release the standard vector Get the pointer to cv::_InputArray Get the pointer to cv::_OutputArray Get the pointer to cv::_InputOutputArray Get the size of the vector The pointer to the first element of the vector. In case of an empty vector, IntPtr.Zero will be returned. Get the item in the specific index The index The item in the specific index
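A minimal sketch of the first classification stage described above; the ERFilterNM1 constructor mirrors the parameter list here, while the exact namespaces, argument values, and classifier file name (the model shipped with OpenCV contrib) should be treated as assumptions:

    using Emgu.CV;
    using Emgu.CV.CvEnum;
    using Emgu.CV.Text;
    using Emgu.CV.Util;

    // First-stage N&M classification: extract candidate character regions
    // from a single channel.
    using (Mat channel = CvInvoke.Imread("sign.png", ImreadModes.Grayscale))
    using (ERFilterNM1 stage1 = new ERFilterNM1(
        "trained_classifierNM1.xml",  // classifier model file
        16,        // threshold step when extracting the component tree
        0.00015f,  // minimum area (% of image size)
        0.13f,     // maximum area (% of image size)
        0.2f,      // minimum probability P(er|character)
        true,      // apply non-maximum suppression over branch probabilities
        0.1f))     // minimum probability difference between local maxima and minima
    using (VectorOfERStat regions = new VectorOfERStat())
    {
        stage1.Run(channel, regions);
        // regions now holds the selected Extremal Regions for this channel;
        // a second-stage ERFilterNM2 and ERGrouping would typically follow
    }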
BRIEF Descriptor Create a BRIEF descriptor extractor. The size of the descriptor. It can be equal to 16, 32 or 64 bytes. Release all the unmanaged resources associated with BRIEF This class wraps the functional calls to the opencv contrib modules A SURF detector using Cuda Create a Cuda SURF detector The interest operator threshold. The number of octaves to process. The number of layers in each octave. True to generate 128-element descriptors, false to generate 64-element descriptors. Max features = featuresRatio * img.size().area(). If set to true, the orientation is not computed for the keypoints Detect keypoints in the CudaImage The image where keypoints will be detected from The optional mask, can be null if not needed The keypoints GpuMat that will have 1 row. keypoints.at<float[6]>(1, i) contains the i'th keypoint in the format (x, y, size, response, angle, octave) Detect keypoints in the CudaImage The image where keypoints will be detected from The optional mask, can be null if not needed An array of keypoints Obtain the keypoints array from a GpuMat The keypoints obtained from DetectKeyPointsRaw The vector of keypoints Obtain a GpuMat from the keypoints array The keypoints array A GpuMat that represents the keypoints Compute the descriptors given the image and the point locations The image where the descriptors will be computed from The optional mask, can be null if not needed The keypoints where the descriptors will be computed from. The order of the keypoints might be changed unless the GPU_SURF detector is UP-RIGHT. The image features found at the keypoint locations Release the unmanaged resource associated with the Detector Return the size of the descriptor (64/128) Daisy descriptor. Create a DAISY descriptor extractor Radius of the descriptor at the initial scale. Amount of radial range division quantity. Amount of angular range division quantity. Amount of gradient orientations range division quantity. Descriptors normalization type. Optional 3x3 homography matrix used to warp the grid of daisy, but sampling keypoints remains unwarped on the image Switch to disable interpolation for speed improvement at minor quality loss Sample patterns using keypoints orientation, disabled by default. Release all the unmanaged resources associated with DAISY Normalization type Will not do any normalization (default) Histograms are normalized independently for L2 norm equal to 1.0 Descriptors are normalized for L2 norm equal to 1.0 Descriptors are normalized for L2 norm equal to 1.0 but no individual one is bigger than 0.154 as in SIFT The FREAK (Fast Retina Keypoint) keypoint descriptor: Alahi, R. Ortiz, and P. Vandergheynst. FREAK: Fast Retina Keypoint. In IEEE Conference on Computer Vision and Pattern Recognition, 2012. CVPR 2012 Open Source Award Winner. The algorithm proposes a novel keypoint descriptor inspired by the human visual system and, more precisely, the retina, coined Fast Retina Keypoint (FREAK). A cascade of binary strings is computed by efficiently comparing image intensities over a retinal sampling pattern. FREAKs are in general faster to compute with lower memory load and also more robust than SIFT, SURF or BRISK. They are competitive alternatives to existing keypoints, in particular for embedded applications. Create a FREAK descriptor extractor. Enable orientation normalization Enable scale normalization Scaling of the description pattern Number of octaves covered by the detected keypoints. Release all the unmanaged resources associated with FREAK Class for computing the LATCH descriptor. If you find this code useful, please add a reference to the following paper in your work: Gil Levi and Tal Hassner, "LATCH: Learned Arrangements of Three Patch Codes", arXiv preprint arXiv:1501.03719, 15 Jan. 2015 LATCH is a binary descriptor based on learned comparisons of triplets of image patches. Create a LATCH descriptor extractor The size of the descriptor - can be 64, 32, 16, 8, 4, 2 or 1 Whether or not the descriptor should compensate for orientation changes. The size of half of the mini-patches. For example, if we would like to compare triplets of patches of size 7x7, then the half_ssd_size should be (7-1)/2 = 3. Release all the unmanaged resources associated with LATCH
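A minimal sketch of GPU SURF detection using the DetectKeyPointsRaw method referenced above; the Emgu.CV.XFeatures2D namespace for CudaSURF and the DownloadKeypoints method name are assumptions:

    using Emgu.CV;
    using Emgu.CV.Cuda;
    using Emgu.CV.CvEnum;
    using Emgu.CV.Util;
    using Emgu.CV.XFeatures2D;

    // Detect SURF keypoints on the GPU and download them into a managed vector.
    using (Mat gray = CvInvoke.Imread("scene.png", ImreadModes.Grayscale))
    using (GpuMat gpuGray = new GpuMat())
    using (CudaSURF surf = new CudaSURF(400))  // interest operator (hessian) threshold
    using (VectorOfKeyPoint keypoints = new VectorOfKeyPoint())
    {
        gpuGray.Upload(gray);
        using (GpuMat keypointsGpu = surf.DetectKeyPointsRaw(gpuGray, null))
            surf.DownloadKeypoints(keypointsGpu, keypoints);  // GpuMat -> VectorOfKeyPoint
    }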
The locally uniform comparison image descriptor: An image descriptor that can be computed very fast, while being about as robust as, for example, SURF or BRIEF. Create a locally uniform comparison image descriptor. Kernel for descriptor construction, where 1=3x3, 2=5x5, 3=7x7 and so forth Kernel for blurring the image prior to descriptor construction, where 1=3x3, 2=5x5, 3=7x7 and so forth Release all the unmanaged resources associated with LUCID Wrapped SIFT detector Create a SIFT using the specific values The desired number of features. Use 0 for an unrestricted number of features The number of octave layers. Use 3 for default Contrast threshold. Use 0.04 as default Detector parameter. Use 10.0 as default The sigma of the Gaussian applied to the input image at octave #0. Use 1.6 as default Release the unmanaged resources associated with this object StarDetector Create a star detector with the specific parameters Maximum size of the features. The following values of the parameter are supported: 4, 6, 8, 11, 12, 16, 22, 23, 32, 45, 46, 64, 90, 128 Threshold for the approximated Laplacian, used to eliminate weak features. The larger it is, the fewer features will be retrieved Another threshold for the Laplacian to eliminate edges. The larger the threshold, the more points you get. Another threshold for the feature size to eliminate edges. The larger the threshold, the more points you get. Release the unmanaged memory associated with this detector. Class for extracting Speeded Up Robust Features from an image Create a SURF detector using the specific values Only features with keypoint.hessian larger than this are extracted. A good default value is ~300-500 (it can depend on the average local contrast and sharpness of the image). The user can further filter out some features based on their hessian values and other characteristics false means basic descriptors (64 elements each), true means extended descriptors (128 elements each) The number of octaves to be used for extraction. With each next octave the feature size is doubled The number of layers within each octave False means that the detector computes the orientation of each feature. True means that the orientation is not computed (which is much, much faster). For example, if you match images from a stereo pair, or do image stitching, the matched features likely have very similar angles, and you can speed up feature extraction by setting upright=true. Release the unmanaged memory associated with this detector. The function implements different algorithms of automatic white balance, i.e. it tries to map the image's white color to perceptual white (this can be violated due to specific illumination or camera settings). The source. The destination. Type of the algorithm to use. Use SIMPLE to perform smart histogram adjustments (ignoring 4% of pixels with minimal and maximal values) for each channel. Minimum value in the input image Maximum value in the input image Minimum value in the output image Maximum value in the output image The function implements simple DCT-based denoising, link: http://www.ipol.im/pub/art/2011/ys-dct/. Source image Destination image Expected noise standard deviation Size of the block side where the DCT is computed
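A minimal sketch of the DCT denoising call described above; the XPhotoInvoke.DctDenoising wrapper name and the Emgu.CV.XPhoto namespace are assumptions, and the sigma and block-side values are placeholders:

    using Emgu.CV;
    using Emgu.CV.CvEnum;
    using Emgu.CV.XPhoto;

    // Remove additive Gaussian noise with the simple DCT-based filter.
    using (Mat noisy = CvInvoke.Imread("noisy.png", ImreadModes.Color))
    using (Mat denoised = new Mat())
    {
        // 15.0 = expected noise standard deviation, 16 = block side for the DCT
        XPhotoInvoke.DctDenoising(noisy, denoised, 15.0, 16);
    }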