pivis package¶
Submodules¶
pivis.aoi module¶
- class pivis.aoi.AreasOfInterest(mus: ndarray, covs: ndarray, mvns: List[rv_continuous], track: Track, cluster_len: int, no_cluster: ndarray, cluster: ndarray, transition_probs: ndarray)¶
Bases:
object
Represents the various AOIs in an image as ovals (via multivariate normal distributions)
- Attributes:
- mus : np.ndarray
Centers of AOIs
- covs : np.ndarray
Covariance matrices for AOIs
- mvns : List[stats.rv_continuous]
List of multivariate normal distributions corresponding to mus/covs.
- track : Track
Instance of Track, containing data for plotting
- no_cluster : np.ndarray
Boolean array indicating whether the raw data is clustered properly (i.e. the eye tracker found the eyes) over the last cluster_len observations.
- cluster : np.ndarray
Raw data, clustered using the last cluster_len observations; values correspond to indices in mvns/mus/covs.
- transition_probs : np.ndarray
Probability of transitioning from one cluster to another
Methods
from_track(track[, threshold, group_lim, ...])
Generates AOIs from tracking information
plot([method, figsize, plot_transitions, ...])
Plot the AOIs. The plot is left in memory so the user can plot extra things on it, change the title, etc.
video([figsize, fileloc, ops, show_obs, verbose])
Generates a video from the AOIs for a given track
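A brief end-to-end sketch, assuming a Track instance named track has already been built (for example with Track.from_excel) and that the default from_track settings are acceptable; the attribute inspection at the end is purely illustrative:

```python
from pivis.aoi import AreasOfInterest

# `track` is assumed to be an existing pivis.track.Track instance
aois = AreasOfInterest.from_track(track)

# The resulting object exposes the fitted AOIs directly
print(aois.mus.shape)         # centers of the AOIs
print(aois.transition_probs)  # probabilities of moving between AOIs
```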
- classmethod from_track(track: Track, threshold: float = 7.0, group_lim=3, det_lim=10000000.0, shortThreshold: int = 500, verbose: bool = True, cluster_len: int = 5) Self ¶
Generates AOIs from tracking information
- Parameters:
- track : Track
Tracking information to use
- threshold : float
Threshold for the KL-divergence used to combine multiple AOIs into larger AOIs; higher is more permissive. Default is 7.0
- group_lim : int
Controls how many clusters can be rolled into one larger cluster in each stage of the algorithm. Higher numbers create larger groups. Default is 3.
- det_lim : float
Controls the maximum size of clusters via the determinant of their covariance matrices. Larger values allow larger clusters. Default is 1e7
- shortThreshold : int
Threshold for the span (max timestamp - min timestamp) over which an AOI is looked at, in milliseconds; the AOI is removed if it falls below this threshold. Note: this is NOT a threshold on the cumulative amount of time the AOI is looked at. Default is 500
- verbose : bool
Whether or not to print the number of AOIs remaining after each iteration. Default is True.
- cluster_len : int
After generating clusters based on a cleaned version of the data, how many previous observations of the raw data are used to determine the cluster of the current observation (i.e. a smoothing factor for clustering the raw data; higher is smoother). Default is 5
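A minimal call sketch restating the documented defaults; track is assumed to be an existing Track instance:

```python
from pivis.aoi import AreasOfInterest

aois = AreasOfInterest.from_track(
    track,                 # a pivis.track.Track instance (assumed to exist)
    threshold=7.0,         # KL-divergence threshold for merging AOIs
    group_lim=3,           # clusters that may be merged per stage
    det_lim=1e7,           # cap on AOI size via the covariance determinant
    shortThreshold=500,    # drop AOIs viewed over a span shorter than 500 ms
    verbose=True,          # print AOI counts after each iteration
    cluster_len=5,         # smoothing window when clustering the raw data
)
```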
- plot(method: Literal['draws', 'ovals'] = 'draws', figsize: Tuple[int] = (15, 10), plot_transitions: bool = False, transitionThreshold: float = 0.3) Tuple[Figure, Axes, List[QuadContourSet | PathCollection]] ¶
Plot the AOIs. The plot is left in memory so the user can plot extra things on it, change the title, etc.
- Parameters:
- method : Literal[“draws”, “ovals”]
Choose whether to plot the AOIs as ovals or as multiple draws from their distribution. Default is “draws”
- figsize : Tuple[int]
Size of figure to plot, in inches. Default is (15, 10)
- plot_transitions : bool
Whether or not to plot common transitions. Default is False.
- transitionThreshold : float
Threshold for the probability of a transition, above which the transition will be plotted. Default is 0.3
- Returns:
- Tuple[figure.Figure, axes.Axes, List[contour.QuadContourSet | collections.PathCollection]]
Figure, axes, and list of matplotlib artists created
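A usage sketch for plot, assuming aois is an AreasOfInterest instance; since the figure stays in memory, it can be retitled or extended before display:

```python
import matplotlib.pyplot as plt

fig, ax, artists = aois.plot(
    method="ovals",           # draw each AOI as an oval
    figsize=(15, 10),
    plot_transitions=True,    # overlay frequent transitions
    transitionThreshold=0.3,  # only show transitions above this probability
)
ax.set_title("Areas of interest")  # customize the figure before showing it
plt.show()
```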
- video(figsize: Tuple[int] = (15, 10), fileloc: str = 'vid.mp4', ops: int = 25, show_obs: int = 5, verbose: bool = False) None ¶
Generates a video from the AOIs for a given track
- Parameters:
- figsize : Tuple[int]
Size for the matplotlib figure in the video. Default is (15, 10)
- fileloc : str
Location to save the video. Default is “vid.mp4”
- ops : int
Number of observations per second to show in the video, i.e. if the period is 25 for the track, then at 40 ops, 1 second of video corresponds to 1 second of real time. Default is 25
- show_obs : int
Number of observations on screen in any one frame. Default is 5
- verbose : bool
Whether or not to print progress; prints every 50 frames. Default is False.
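A sketch of a video call, assuming aois exists and the track's period is 25 ms (so ops=40 plays back in real time); the output filename is only an example:

```python
aois.video(
    figsize=(15, 10),
    fileloc="aoi_video.mp4",  # example output path
    ops=40,                   # observations shown per second of video
    show_obs=5,               # observations visible in each frame
    verbose=True,             # report progress every 50 frames
)
```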
pivis.track module¶
- class pivis.track.Track(df: DataFrame, raw_df: DataFrame, period: int, img: ndarray, cols: Iterable[str])¶
Bases:
object
Wrapper for a dataframe containing eye tracking information
- Attributes:
- df : pd.DataFrame
DataFrame containing tracking information
- raw_df : pd.DataFrame
DataFrame containing raw tracking information for visualization
- period : int
Number of milliseconds between observations
- img : np.ndarray
Image on which points are tracked
- cols : Iterable[str]
Column names for df
Methods
from_excel(file, period, img, time_col, ...)
Reads tracking data from an Excel spreadsheet
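Once constructed (typically via from_excel, documented below), a Track is mostly a data container; a small inspection sketch, assuming track is an existing instance:

```python
# `track` is assumed to be an existing pivis.track.Track instance
print(track.period)         # milliseconds between observations
print(track.df.head())      # tracking data used for AOI generation
print(track.raw_df.head())  # raw tracking data used for visualization
print(track.img.shape)      # the stimulus image as an array
```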
- classmethod from_excel(file: str, period: int, img: str, time_col: str, IMG_X_col: str, IMG_Y_col: str, type_col: str, fixation_X_col: str, fixation_Y_col: str) Self ¶
Reads tracking data from an Excel spreadsheet
- Parameters:
- file : str
Excel file containing tracking information
- period : int
Number of milliseconds between measurements in file
- img : str
Path to image file
- time_col : str
Name of column containing timing information
- IMG_X_col : str
Name of column containing X (horizontal) position in image
- IMG_Y_col : str
Name of column containing Y (vertical) position in image
- type_col : str
Name of column classifying eye movement types; must contain “Fixation” during fixations
- fixation_X_col : str
Name of column containing fixation X information
- fixation_Y_col : str
Name of column containing fixation Y information
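A call sketch for from_excel; the file path, image path, and column names below are placeholders that must be replaced with the headers actually present in the spreadsheet:

```python
from pivis.track import Track

track = Track.from_excel(
    file="tracking_data.xlsx",      # placeholder path
    period=25,                      # milliseconds between measurements
    img="stimulus.png",             # placeholder image path
    time_col="RecordingTimestamp",  # placeholder column names below
    IMG_X_col="GazePointX",
    IMG_Y_col="GazePointY",
    type_col="GazeEventType",
    fixation_X_col="FixationPointX",
    fixation_Y_col="FixationPointY",
)
```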