I have been working on a system to detect epileptic seizures (fits) to raise an alarm without requiring sensors to be attached to the subject.
I am going down three routes to try to do this; this post is about my first ‘proof of concept’ attempt at a video-based system.
I am trying to detect the shaking of a fit. I will do this by monitoring the signal from an infrared video camera, so it will work in monochrome. The approach is:
- Reduce the size of the image by averaging pixels into ‘meta pixels’ – I do this using the OpenCV pyrDown function, which does the averaging (it is normally used to build image pyramids of various resolution versions of an image). I am reducing the 640×480 video stream down to 10×7 pixels to reduce the amount of data I have to handle.
- Collect a series of images to produce a time series of images. I am using 100 images at 30 fps, which is about 3 seconds of video.
- For each pixel in the images, calculate the Fourier transform of the series of measured pixel intensities – this gives the frequencies at which the pixel intensity is varying.
- If the amplitude of oscillation at a given frequency is above a threshold value, treat this as motion at that particular frequency (i.e. it could be a fit).
- The final version will check that this motion continues for several seconds before raising an alarm. In this test version, I am just highlighting the detected frequency of oscillation on the original video stream.
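The steps above can be sketched like this, using NumPy to do the per-pixel FFT over the stack of low-resolution frames (a minimal sketch – the function name, threshold value, and array layout are illustrative, not taken from my actual code):

```python
import numpy as np

FPS = 30          # frame rate used in the post
N_FRAMES = 100    # ~3 seconds of video
THRESHOLD = 50.0  # illustrative amplitude threshold, not the real value

def detect_oscillation(frames, fps=FPS, threshold=THRESHOLD):
    """frames: (N, H, W) array of low-resolution grayscale images.

    Returns an (H, W) array giving the dominant oscillation frequency (Hz)
    of each meta-pixel, or 0 where the strongest amplitude is below threshold.
    """
    stack = np.asarray(frames, dtype=np.float64)
    # FFT along the time axis for every pixel at once
    spectrum = np.fft.rfft(stack, axis=0)
    amps = np.abs(spectrum)
    amps[0] = 0.0  # ignore the DC term (mean brightness, not motion)
    freqs = np.fft.rfftfreq(stack.shape[0], d=1.0 / fps)
    peak_bin = np.argmax(amps, axis=0)                        # strongest bin per pixel
    peak_amp = np.take_along_axis(amps, peak_bin[None], axis=0)[0]
    detected = freqs[peak_bin]
    detected[peak_amp < threshold] = 0.0                      # below threshold: no motion
    return detected
```

With 100 frames at 30 fps the frequency resolution is 30/100 = 0.3 Hz, and the highest frequency the FFT can resolve is 15 Hz (the Nyquist limit).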
The code uses the OpenCV
library, which provides a lot of video and image handling functions – far more than I understand…
My intention had been to write it in C, but I struggled with memory leaks (I must have been doing something wrong and not releasing storage, because it just ate all my computer’s memory until it crashed…).
Instead I used the Python bindings for OpenCV – this ran faster and used much less memory than my C version (this is a sign that I made mistakes in the C one, rather than Python being better!).
The code for the seizure detector is here
– a very rough ‘proof of concept’ at the moment – it will get a major rewrite if it works.
Test Set Up
To test the system, I have created a simple ‘test card’ video, which has a number of circles oscillating at different frequencies – the test is to see if I can pick out the various frequencies of oscillation. The code to produce the test video is here
…and here is the test video (not very exciting to watch, I’m afraid).
The circles are oscillating at between 0 and 8 Hz (when played at 30 fps).
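A frame of that test card can be generated along these lines (a sketch only – the circle radius, oscillation amplitude, and layout are my guesses; the real script writes the frames out with OpenCV’s VideoWriter):

```python
import numpy as np

FPS = 30
AMP_PX = 20   # oscillation amplitude in pixels (a guess, not from the real script)
RADIUS = 15   # circle radius in pixels (also a guess)

def draw_test_frame(width, height, freqs_hz, frame_idx, fps=FPS):
    """Render one monochrome test-card frame: one filled circle per
    frequency, spaced across the image, each bobbing horizontally
    at its own rate."""
    frame = np.zeros((height, width), dtype=np.uint8)
    yy, xx = np.mgrid[0:height, 0:width]
    for i, f in enumerate(freqs_hz):
        base_x = (i + 1) * width // (len(freqs_hz) + 1)
        cy = height // 2
        # horizontal position oscillates sinusoidally at f Hz
        cx = int(base_x + AMP_PX * np.sin(2 * np.pi * f * frame_idx / fps))
        frame[(xx - cx) ** 2 + (yy - cy) ** 2 <= RADIUS ** 2] = 255
    return frame
```

Calling this once per frame index and streaming the frames to a video file gives a card whose circles oscillate at known frequencies, which is exactly what the detector needs as ground truth.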
The output of the system is shown in the video below. The coloured circles indicate areas where motion has been detected. The thickness of the line and the colour shows the frequency of the detected motion.
- Blue = <3 Hz
- Yellow = 3-6 Hz
- Red = 6-9 Hz
- White = >9 Hz
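The colour bands above map onto marker styles with a simple lookup (a sketch – the BGR values and thickness steps are illustrative, not the actual code):

```python
def marker_style(freq_hz):
    """Map a detected frequency to the (BGR colour, line thickness)
    used to highlight it on the full-resolution video.
    Bands as in the post; thickness values are my guesses."""
    if freq_hz < 3:
        return (255, 0, 0), 1      # blue
    if freq_hz < 6:
        return (0, 255, 255), 2    # yellow
    if freq_hz < 9:
        return (0, 0, 255), 3      # red
    return (255, 255, 255), 4      # white
```

The returned tuple can be passed straight to cv2.circle as its color and thickness arguments.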
The things to note are:
- No motion detected near the stationary 0 Hz circle (good!).
- <3 Hz motion detected near the 1 and 2 Hz circles (good!).
- 3-6 Hz motion detected near the 2, 3, 4 and 5 Hz circles (ok, but why is it near the 2 Hz one?)
- 6-9 Hz motion detected near the 5 and 6 Hz circles (a bit surprising)
- >9 Hz motion detected near the 4 and 7 Hz circles and sometimes the 8 Hz one (?)
So, I think it is sometimes reporting the frequency too high. This may be as simple as how I am doing the check – it currently uses the highest frequency that exceeds the threshold. I think I should update it to use the frequency with the maximum amplitude (provided that amplitude exceeds the threshold).
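The difference between the two checks is small in code (a sketch – function names and the example values are illustrative):

```python
import numpy as np

def highest_over_threshold(amps, freqs, threshold):
    """Current behaviour: the highest frequency whose amplitude
    exceeds the threshold. Harmonics or spectral leakage at high
    bins can drag the answer upwards."""
    over = np.nonzero(amps >= threshold)[0]
    return freqs[over[-1]] if over.size else 0.0

def strongest_over_threshold(amps, freqs, threshold):
    """Proposed fix: the frequency of the largest amplitude,
    provided it exceeds the threshold."""
    peak = int(np.argmax(amps))
    return freqs[peak] if amps[peak] >= threshold else 0.0
```

For a pixel whose spectrum has a strong 6 Hz peak plus a weak component at 12 Hz that just clears the threshold, the first function reports 12 Hz while the second correctly reports 6 Hz.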
Also, something is wrong with the positioning of the markers that show the motion – I have to convert from a pixel in the low-resolution image to the corresponding location in the high-resolution one, and the result does not always line up with the moving circles.
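One plausible fix is to map each meta-pixel to the centre of the block of full-resolution pixels it was averaged from, rather than its top-left corner (a sketch – the helper name is mine; scale factors are 640/10 = 64 and 480/7 ≈ 68.6):

```python
def lowres_to_fullres(px, py, low_w=10, low_h=7, full_w=640, full_h=480):
    """Map a meta-pixel (px, py) in the 10x7 image to the centre of the
    region it covers in the 640x480 frame. The +0.5 puts the marker in
    the middle of the block instead of its top-left corner."""
    sx = full_w / low_w
    sy = full_h / low_h
    return int((px + 0.5) * sx), int((py + 0.5) * sy)
```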
But, it is looking quite promising. Rather compute-intensive at the moment though – it is using pretty much 100% of one of the CPU cores on my Intel Core i5 laptop, so not much chance of getting this to run on a Raspberry Pi, which was my intention.