eyetracker-ng 0.1-1 documentation


analysis Module

The analysis module implements image-processing algorithms. It is divided into three parts: calibration, detect and processing.

calibration Module


Checks the current screen resolution.

Only to be used on Linux and when PySide is not in use; with PySide, the size of the screen can be queried from app = QtGui.QApplication(sys.argv) with app.desktop().screenGeometry().

The function uses the Linux command “xrandr” to query the system for the current resolution.


size : tuple of ints (width, height)

size of screen in pixels
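As a sketch of how such a query can be parsed (the helper name parse_xrandr and the sample output below are hypothetical, not part of the module): in xrandr output, the current mode is the one marked with an asterisk.

```python
import re

def parse_xrandr(output):
    """Hypothetical helper: pull the current resolution out of `xrandr`
    output, where the active mode is marked with an asterisk."""
    match = re.search(r"(\d+)x(\d+)\s+[\d.]+\s*\*", output)
    if match is None:
        raise ValueError("no current mode found in xrandr output")
    return int(match.group(1)), int(match.group(2))

# Sample output as produced by `xrandr` on a typical setup.
sample = """Screen 0: minimum 320 x 200, current 1920 x 1080, maximum 8192 x 8192
HDMI-1 connected primary 1920x1080+0+0
   1920x1080     60.00*+  50.00
   1280x720      60.00    50.00
"""
print(parse_xrandr(sample))  # -> (1920, 1080)
```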

eyetracker.analysis.calibration.where_circles(resolution=False, rad=30)[source]

Computes where the circles should be drawn.


resolution : tuple of ints (width, height)

rad : int

radius of the biggest circle that will be drawn


coordinates : tuples

coordinates of nine places where the circles will be drawn
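The exact placement is computed by the function above; one plausible sketch of such a nine-point layout (the helper nine_points is hypothetical) is a 3×3 grid inset from the screen edges by the circle radius:

```python
def nine_points(resolution=(1920, 1080), rad=30):
    """Hypothetical sketch of a nine-point layout: a 3x3 grid inset
    from the screen edges by the radius of the biggest circle."""
    width, height = resolution
    xs = [rad, width // 2, width - rad]
    ys = [rad, height // 2, height - rad]
    # Row-major order: top row left to right, then middle, then bottom.
    return [(x, y) for y in ys for x in xs]

print(len(nine_points()))  # -> 9
```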

detect Module

eyetracker.analysis.detect.glint(image, maxCorners=2, quality=0.0001, minDist=20, mask=None, blockSize=3)[source]

The function detects glints in the image of an eye.

Based on the function cv2.goodFeaturesToTrack. For more info on the parameters, check the OpenCV docs for goodFeaturesToTrack.


image : np.array

image of the eye where the glints are supposed to be detected

maxCorners : int

how many glints should be detected; default is 2

quality : float

minimal accepted quality of image corners

minDist : int

minimum distance between the detected glints, default is 20

mask : np.array

area of the image that should be used for glint detection; default is None, so the whole picture is searched

blockSize : int

size of an average block for computing a derivative covariation matrix over each pixel neighborhood


where : array

coordinates (a list [x, y]) of the found glints

eyetracker.analysis.detect.pupil(image, dp=1, minDist=100, param1=50, param2=10, minRadius=20, maxRadius=70)[source]

The function detects the pupil in the image of an eye.

Based on the function cv2.HoughCircles. For more info on the parameters, check the OpenCV docs for HoughCircles.


image : np.array

image of the eye where the pupil is supposed to be detected

dp : int

inverse ratio of the accumulator resolution to the image’s

minDist : int

minimum distance between found circles

param1 : int

higher threshold for the Canny edge detector

param2 : int

accumulator threshold for the circle centers; the smaller it is, the more false positives

minRadius : int

minimal radius of the detected circle

maxRadius : int

maximal radius of the detected circle


where : array

coordinates and the radius of the circle (a list [x, y, radius]) for found pupils

processing Module

eyetracker.analysis.processing.adaptiveThreshold(image, max_v=255, adaptiveMethod='gaussian', thresh_type='bin', blockSize=33, subtConstant=10)[source]

Threshold the image using adaptive methods.

For corresponding adaptive methods see docs.opencv.org: {‘gaussian’ : cv2.ADAPTIVE_THRESH_GAUSSIAN_C, ‘mean’ : cv2.ADAPTIVE_THRESH_MEAN_C}


image : np.array

2D array depicting an image in one-channel color (e.g. grayscale)

max_v : int

maximal value to be used in threshold

adaptiveMethod : string

method used for thresholding, possible ‘gaussian’ or ‘mean’

thresh_type : string

prethresholding; possible thresholds ‘bin’ (binary) or ‘bin_inv’ (inverted binary)

blockSize : int

Size of a pixel neighborhood that is used to calculate a threshold value, the size must be an odd number.

subtConstant : int

a constant that will be subtracted from mean or weighted mean (depending on adaptiveMethod chosen)


thresholded : np.array

image array processed accordingly.


Convert a color image (BGR) to a grayscale image.


image : np.array

2d 24-bit array depicting an image in three-channel color: Blue, Green, Red


image : np.array

2d 8-bit array depicting the given image converted to grayscale.

eyetracker.analysis.processing.find_purkinje(purkinje1, purkinje2)[source]

Find the virtual Purkinje image in a two-IR-LED setting.

Simple finding of the middle between two points.


purkinje1 : tuple of (int, int)

the coordinates of first purkinje image

purkinje2 : tuple of (int, int)

the coordinates of second purkinje image


middle : tuple of (int, int)

the coordinates of the midpoint between purkinje1 and purkinje2.
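The “simple finding of the middle” above can be re-implemented in a few lines (a minimal sketch; integer midpoint assumed):

```python
def find_purkinje(purkinje1, purkinje2):
    """Minimal re-implementation of the behaviour described above:
    the integer midpoint of the two reflections."""
    return ((purkinje1[0] + purkinje2[0]) // 2,
            (purkinje1[1] + purkinje2[1]) // 2)

print(find_purkinje((40, 60), (80, 100)))  # -> (60, 80)
```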


Convert a grayscale image to a color image (BGR).


image : np.array

2d 8-bit array depicting an image in one-channel color (grayscale)


image : np.array

2d 24-bit array depicting the given image converted to three-channel color: Blue, Green, Red.

eyetracker.analysis.processing.imageFlipMirror(im, mirrored, flipped)[source]

Flip and/or mirror the given image.


im : np.array

2D array depicting an image as an numpy array

mirrored : bool

whether to mirror the image (left - right)

flipped : bool

whether to flip the image (top - bottom)


im : np.array

image array processed accordingly.
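A sketch of the described behaviour using plain numpy (the helper flip_mirror is hypothetical; mirror is taken as left-right and flip as top-bottom, matching the parameter descriptions):

```python
import numpy as np

def flip_mirror(im, mirrored, flipped):
    """Hypothetical numpy equivalent: mirror swaps left-right,
    flip swaps top-bottom."""
    if mirrored:
        im = np.fliplr(im)
    if flipped:
        im = np.flipud(im)
    return im

a = np.array([[1, 2],
              [3, 4]])
print(flip_mirror(a, mirrored=True, flipped=False).tolist())  # -> [[2, 1], [4, 3]]
```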

eyetracker.analysis.processing.mark(image, where, radius=10, color='red', thickness=3)[source]

Mark object with a circle.


image : np.array

3d array depicting the original image that is to be marked; needs to be in three-channel color

where : np.array

array of sets of coordinates (x, y or x, y, radius) of the object

radius : int

set same radius for all objects, if a set of coordinates has a third value this will be overruled

color : string

color of circles marking the object, possible: ‘blue’, ‘green’ or ‘red’

thickness : int

thickness of the circle


true : bool

always True; the circles are drawn directly on the given image array.

eyetracker.analysis.processing.runningAverage(image, average, alpha)[source]

Calculates the running average of a given picture stream.

Uses cv2.accumulateWeighted.


image : np.array

new image to be averaged along with the past image stream

average : np.array

previously averaged image

alpha : float

control parameter of the running average; it describes how fast previous images are forgotten: 1 - no averaging, 0 - never forget anything.


image : np.array

averaged image as numpy array.
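cv2.accumulateWeighted performs the update average = alpha * image + (1 - alpha) * average; a small numpy sketch (invented data) showing how alpha controls forgetting:

```python
import numpy as np

def running_average(image, average, alpha):
    """The update cv2.accumulateWeighted performs:
    average <- alpha * image + (1 - alpha) * average."""
    return alpha * image + (1.0 - alpha) * average

avg = np.zeros(2)
for frame in [np.array([10.0, 10.0])] * 3:
    avg = running_average(frame, avg, alpha=0.5)
print(avg.tolist())  # -> [8.75, 8.75]
```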

eyetracker.analysis.processing.threshold(image, thresh_v=30, max_v=255, thresh_type='trunc')[source]

Threshold the image.

For corresponding threshold types description see docs.opencv.org: {‘otsu’ : cv2.THRESH_OTSU, ‘bin’ : cv2.THRESH_BINARY, ‘bin_inv’ : cv2.THRESH_BINARY_INV, ‘zero’ : cv2.THRESH_TOZERO, ‘zero_inv’ : cv2.THRESH_TOZERO_INV, ‘trunc’ : cv2.THRESH_TRUNC}


image : np.array

2d 8-bit array depicting an image in one-channel color (e.g. grayscale)

thresh_v : int

value of threshold cut-off

max_v : int

maximal value when thresholding, relevant only if thresh_type is ‘bin’ or ‘bin_inv’

thresh_type : string

type of thresholding, possible ‘otsu’, ‘bin’, ‘bin_inv’, ‘zero’, ‘zero_inv’, ‘trunc’


thresholded_image : np.array

the given image after application of the given threshold.
