

The setup consists of a camera that gets a top-down view of a mouse crossing from one side of the frame to the other and back. The two platforms between which the mouse moves can be moved: the ’base’ platform (left) has a variable position and is re-positioned by hand (slowly) between trials, while the ’target’ platform (on the right) is mostly static but occasionally retracts by a few mm very quickly (same method as in our anticipatory whisking paper). To make this a bit more interesting, a fiber optic cable sometimes obscures parts of the image, and the mouse will sometimes assume odd poses rather than just sticking its head across the gap.

By far the most important thing to get right with this kind of tracking is the physical setup. Often, small easy changes can make an otherwise hard image processing step very easy. Here, the setup was designed with the following things in mind:
- The platforms and camera are bolted down so they can’t move by even a mm, even when bumped. This means that one set of hand-tuned parameters works for all sessions. All automatic adjustments in the camera are disabled, and all focus/aperture rings are screwed in tight.
- I use a high frame rate (~330Hz), and the illumination is red (650nm) because it’s fairly invisible to mice. The backlight is made from ~12 red 700mA LEDs glued to a thin (~2mm) aluminum plate that is bolted to the air table, which then acts as a huge heatsink. On this sits a box (just 4 sides) made from mirrored acrylic, and on top of that two sheets of frosted glass as a diffuser (a few mm between the two sheets makes the diffuser much more efficient). The diffuser needs to be removed for cleaning just about every session, so design with that in mind.
- I moved the LEDs around to get pretty decent uniformity – this means I can use simple thresholding for many things (see the sketch after this list), and is important for whisker tracking later. There are no room lights, and minimal glare from computer screens etc. One reason for this is that I need white or black mice to appear completely black against the backlight.
- I made sure that the platforms stay decently aligned to the axes of the image. The platforms are at least somewhat transparent and let some of the backlight through, making it possible, if still hard, to track the mouse once it intersects them.
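Because the backlight is uniform and the mouse renders near-black, the segmentation really can be a single global threshold. Here is a minimal sketch of that idea in matlab – not the actual tracking code; the file name and threshold value are made up, and it assumes the Image Processing Toolbox for `bwareafilt`/`regionprops`:

```matlab
% Minimal sketch of backlight segmentation, not the actual tracking code.
% File name, threshold, and blob-size assumptions are made up.
frame = imread('example_frame.png');   % one grayscale frame: mouse dark, backlight bright
bw    = frame < 40;                    % mouse pixels are near-black against the backlight
bw    = bwareafilt(bw, 1);             % keep only the largest connected blob
stats = regionprops(bw, 'Centroid', 'Orientation');
fprintf('mouse at x=%.1f, y=%.1f\n', stats.Centroid(1), stats.Centroid(2));
```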
Uncompressed high speed video can reach around 4-8GB/min, so single sessions can easily reach 200GB, and data access is a major bottleneck. I use uncompressed avi straight off the AVT (link) api (data is acquired via the api from matlab, just to make it very easy to load configurations – here is an example of such a minimalistic setup). The uncompressed format is a bit extravagant for mouse tracking, but can be somewhat important for whisker tracking. It also means that random access to arbitrary frames in the videos is fast. To reduce the impact of the data access bottleneck, the code performs three sequential levels of processing.
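Since nothing is compressed, every frame occupies the same number of bytes and sits at a fixed offset, which is why seeking is cheap. A minimal sketch of random access using matlab’s built-in `VideoReader` (the file name and frame index are hypothetical, and the real code may read the files differently):

```matlab
% Minimal sketch of random frame access into an uncompressed avi.
% File name and frame index are hypothetical.
v     = VideoReader('session_001.avi');
frame = read(v, 123456);   % seek straight to an arbitrary frame
imshow(frame);
```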
Theoretically, one could then delete the parts of the video that are ’empty’ and cut down storage by a lot.
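One way to find the ’empty’ stretches: against the backlight, a frame with too few dark pixels cannot contain a mouse. A hypothetical sketch of that idea – the threshold and pixel-count cutoff are made up, and this is not the actual processing code:

```matlab
% Hypothetical sketch: flag frames with no mouse-sized dark blob.
% Threshold and cutoff values are made up, not the real parameters.
v        = VideoReader('session_001.avi');
n        = v.NumFrames;                   % NumFrames needs a recent MATLAB release
hasMouse = false(1, n);
for k = 1:n
    frame       = read(v, k);
    gray        = frame(:,:,1);           % camera is mono, so one channel suffices
    hasMouse(k) = nnz(gray < 40) > 2000;  % enough dark pixels -> mouse present
end
fprintf('%.1f%% of frames are empty\n', 100 * mean(~hasMouse));
```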
