
The SHIP-v page has been renewed! The old page is here.

Services for High-speed Image Processing - Videos


High-speed image processing technology allows us to take advantage of information from high-frame-rate video, something that traditional technology cannot do. At low frame rates, the image information obtained with traditional image processing cannot completely capture the dynamics of fast-moving objects. Complex algorithms (prediction, calculation, learning, and wide searching) are therefore often used to make up for the shortage of information, but with this approach it is difficult to speed up the image processing. In contrast, with the high-speed image processing technology being developed in our laboratory, we can acquire and process images at a frame rate high enough to capture the necessary dynamics. In addition, high-speed image processing simplifies the processing algorithms, makes high-speed processing easy, and will pave the way to a number of new applications.

We believe that research on high-speed image processing will become increasingly important. We therefore set up SHIP-v (Services for High-speed Image Processing – Videos) to encourage related work, so that researchers can easily obtain the sample high-speed videos needed for their research. Our aim in developing SHIP-v is to let researchers engage in high-speed image processing research with the support of the high-speed videos and associated data that we make available, so that even researchers who do not have cameras capable of acquiring high-speed video can begin investigating high-speed image processing without difficulty. Another benefit of SHIP-v is that it helps researchers set benchmarks for new high-speed image processing algorithms by using a common set of videos.

High-speed image processing has a wide range of applications, and therefore, we are going to add new high-speed videos for various situations in the future. Please give us your ideas about the kinds of videos you want. Also, we are planning to update the videos to replace them with videos taken with higher-spec equipment as better cameras become available in the future.

Meanwhile, SHIP-v includes many high-frame-rate videos of gestures. This is because the number of applications that take advantage of gesture recognition has been increasing in recent years. However, the mapping between gestures and the meanings of the corresponding operations differs depending on the application or developer. Such ambiguity has the potential to hinder the progress of applications that use gesture recognition. Therefore, in this project, we would like to obtain diverse feedback from you about assigning meanings to recognized gestures, and we hope to use SHIP-v as an opportunity to collect suitable gestures.



Please see the following sample video. (Download it if you wish to use it for your research.)

  arm gesture
up-and-down movement (diagonally)
250 fps

Videos at other frame rates (60 fps, 125 fps, 500 fps, and 1000 fps) are also available.

You can handle the videos that you download with code written in various programming languages. As an example, we have prepared an entry-level program (for data input and output). We hope you find it informative.
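As an illustration of what such an entry-level input-output program might look like, here is a minimal Python sketch. It assumes (our assumption, not a statement about the SHIP-v file format) that a video is stored as headerless raw 8-bit grayscale frames at a known resolution; the helper names `read_frames` and `write_frames` are hypothetical. Check the actual format of the downloaded files, and for standard video containers use a library such as OpenCV instead.

```python
# Minimal frame I/O sketch.
# ASSUMPTION: the file holds headerless raw 8-bit grayscale frames
# of a known width and height, stored back to back. Verify the real
# format of the SHIP-v downloads before relying on this.

def read_frames(path, width, height):
    """Yield each frame as a bytes object of width*height pixels."""
    frame_size = width * height
    with open(path, "rb") as f:
        while True:
            frame = f.read(frame_size)
            if len(frame) < frame_size:  # end of file (or truncated frame)
                break
            yield frame

def write_frames(path, frames):
    """Write an iterable of raw frames back to disk, concatenated."""
    with open(path, "wb") as f:
        for frame in frames:
            f.write(frame)
```

For example, `for frame in read_frames("gesture.raw", 640, 480): ...` would iterate over the frames one at a time, which keeps memory use constant even for long high-frame-rate recordings.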

Here is the example program.


There are several terms you should know before using SHIP-v; we hope you will use them as a reference. The following terms link to the "Technical Terms" explained on the Ishikawa Watanabe Laboratory website.

High Speed Image Processing, Frame Rate, Spatial Resolution, Global Shutter, Gesture Recognition


We are grateful to Photron Inc. for supplying the camera used to record the high-speed videos.

Ishikawa Watanabe Laboratory, Department of Information Physics and Computing, Department of Creative Informatics,
Graduate School of Information Science and Technology, University of Tokyo
Ishikawa Watanabe Laboratory WWW admin:
Copyright © 2008 Ishikawa Watanabe Laboratory. All rights reserved.