Breathing Detection with Kinect – A working Prototype Seizure Detector!

The seizure detector project has come a long way since I started using the Kinect.
I now have a working prototype that monitors breathing and can raise an alarm if the breathing rate is abnormally low.  It sends data to our ‘bentv’ monitors (image right), and has a web interface so I can see what it is doing (image below).  It is on soak test now…

Details at http://openseizuredetector.org.uk.

Breathing Detection using Kinect and OpenCV – Part 2 – Peak detection

A few days ago I published a post about how I am using a Microsoft Kinect depth camera and the OpenCV image processing library to identify a test subject from a background, and analyse the series of images from the camera to detect small movements.

The next stage is to calculate the brightness of the test subject at each frame, and turn that into a time series so we can see how it changes with time, and analyse it to detect specific events.

We can use the OpenCV ‘mean’ function to work out the average brightness of the test image easily, then append it to the end of an array and trim the first value off the start to keep the length constant.
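As a rough sketch (the buffer length, variable names and greyscale assumption here are mine, not taken from the actual project code), that step looks something like this:

    import cv2

    BUFFER_LEN = 100          # number of frames kept in the time series (assumed value)
    brightness_series = []    # rolling time series of mean brightness

    def update_series(subject_img):
        """Append the mean brightness of the masked subject image and trim the
        series from the front so it stays a fixed length."""
        mean_val = cv2.mean(subject_img)[0]   # cv2.mean returns a 4-tuple; channel 0 for greyscale
        brightness_series.append(mean_val)
        if len(brightness_series) > BUFFER_LEN:
            brightness_series.pop(0)          # drop the oldest value off the start
        return brightness_series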
The resulting image and time series are shown below:


The image here shows that we can extract the subject from the background quite accurately (this is Benjamin’s body and legs as he lies on the floor).  The shading shows the movement relative to the average position.

The resulting time series is shown here – the measured data is the blue spiky line.  The red one is the smoothed version (I know there is a half-second offset between the two…).

The red dots are peaks detected using a very simple peak-searching algorithm.
The chart clearly shows a ‘fidget’ being detected as a large peak, and a breathing event at about 8 seconds has been detected too.
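For reference, something along these lines would do the job; the smoothing window, threshold and frame rate below are illustrative guesses rather than the values used in the real detector:

    import numpy as np

    def smooth(series, window=15):
        """Moving-average smoothing (introduces a lag of roughly window/2 samples,
        which is the sort of offset visible between the two lines in the chart)."""
        kernel = np.ones(window) / window
        return np.convolve(series, kernel, mode='same')

    def find_peaks(series, threshold):
        """Very simple peak search: a point is a peak if it exceeds the threshold
        and is higher than both of its neighbours."""
        return [i for i in range(1, len(series) - 1)
                if series[i] > threshold
                and series[i] > series[i - 1]
                and series[i] >= series[i + 1]]

    def breathing_rate(peak_indices, fps=30.0):
        """Estimate breaths per minute from the mean interval between peaks."""
        if len(peak_indices) < 2:
            return None
        mean_interval = np.mean(np.diff(peak_indices)) / fps   # seconds per breath
        return 60.0 / mean_interval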

So, the detection system is looking promising.  I got better breathing detection when I was testing it on myself, so I think I will have to change the position of the camera a bit to improve sensitivity.

I have now set up a simple Python-based web server so that other applications can connect to this one and request the data.
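The details of the real server are in the project code; purely as an illustration of the idea, a server like this can be put together with Python’s built-in http.server module (the port number and JSON fields below are my own invention):

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Shared state, updated by the analysis loop (illustrative field names).
    latest_data = {"breathing_rate_bpm": None, "status": "unknown"}

    class DataHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Return the latest breathing data as JSON to any client that asks.
            body = json.dumps(latest_data).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("", 8080), DataHandler).serve_forever()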

We are getting there.  The outstanding issues are:

  • Memory leak – after the application has run for 30 minutes the computer gets very slow and eventually crashes.  I suspect a memory leak somewhere; this will have to be fixed!
  • Optimum camera position – I think I can get better breathing detection sensitivity by altering the camera position, so I will have to experiment a bit.
  • Add some code to identify whether we are looking at Benjamin or just noise – at the moment I analyse the largest bright subject in the image and assume that is Benjamin.  I should probably set a minimum size limit so it gives up if it cannot see Benjamin.
  • Summarise what we are seeing automatically – “normal breathing”, “can’t see Benjamin”, “abnormal breathing”, “fidgeting” etc.
  • Modify our monitors that we use to keep an eye on Benjamin to talk to the new web server and display the status messages and raise an alarm if necessary.

The code is available here.

Breathing Detection using Kinect and OpenCV – Part 1 – Image Processing

I have had a go at detecting breathing using an Xbox Kinect depth sensor and the OpenCV image processing library.
I have seen a research paper that did breathing detection, but it relied on fitting the output of the Kinect to a skeleton model to identify the chest area to monitor.  I would like a less computationally intensive route, so I am trying to use image processing alone.

To detect the small movements of the chest during breathing, I am doing the following:

  • Start with a background depth image of the empty room.
  • Grab a depth image from the Kinect.
  • Subtract the background so that only the test subject remains.
  • Subtract a rolling-average background image and amplify the resulting small differences, which makes the image very sensitive to small movements.

The resulting video shows the image brightness changing due to chest movements from breathing.
We can calculate the average brightness of the test subject image, and the value clearly changes due to breathing movements.  The job for tomorrow night is to do some statistics to work out the breathing rate from this data.
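Put together, the per-frame processing looks roughly like the sketch below.  This is not the benfinder code itself: frame capture from the Kinect is left out, and the averaging weight and amplification factor are illustrative guesses.

    import cv2
    import numpy as np

    ALPHA = 0.05   # weight of each new frame in the rolling-average image (assumed value)
    GAIN = 20.0    # amplification applied to the small residual differences (assumed value)

    def process_frame(depth_frame, room_background, rolling_avg):
        """depth_frame and room_background are float32 greyscale images of the same
        size; rolling_avg is a float32 accumulator updated in place."""
        # 1. Subtract the empty-room background so only the test subject remains.
        subject = cv2.absdiff(depth_frame, room_background)

        # 2. Update the rolling-average image of the subject.
        cv2.accumulateWeighted(subject, rolling_avg, ALPHA)

        # 3. Subtract the rolling average and amplify the small differences, which
        #    makes chest movements from breathing show up as brightness changes.
        movement = np.clip(cv2.absdiff(subject, rolling_avg) * GAIN, 0, 255).astype(np.uint8)

        # 4. The mean brightness of this image is the value fed into the time series.
        return movement, cv2.mean(movement)[0]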
The source code of the Python script that does this is the ‘benfinder’ program in the OpenSeizureDetector archive.