JPL First-Person Interaction dataset (JPL-Interaction dataset) is composed of human activity videos taken from a first-person viewpoint. The dataset parti…
Tags: action, human, interactive, motion, recognition, video

Background Models Challenge (BMC) is a complete dataset and competition for the comparison of background subtraction algorithms. The main topics concern:…
Tags: background, change, detection, modeling, motion, segmentation, surveillance, video

This dataset contains 7 challenging volleyball activity classes annotated in 6 videos from professionals in the Austrian Volley League (season 2011/12). A…
Tags: action, activity recognition, analysis, detection, sport, video, volleyball

The SPHERE human skeleton movements dataset was created using a Kinect camera, which measures distances and provides a depth map of the scene instead of th…
Tags: action, behavior, depth, human, kinect, motion, movement, skeleton, video

The Multicamera Human Action Video Data (MuHAVi) Manually Annotated Silhouette Data (MAS) are two datasets consisting of selected action sequences for th…
Tags: action, background, behavior, human, segmentation, video

It is composed of ADL (activities of daily living) and fall actions simulated by 11 volunteers. The people involved in the test are aged between 22 and 39, wit…
Tags: accelerometer, action, depth, fall detection - adl, human, kinect, recognition, video, wearable

The UCF Person and Car VideoSeg dataset consists of six videos with groundtruth for video object segmentation. Surfing, jumping, skiing, sliding, big ca…
Tags: camera, groundtruth, model, motion, object, segmentation, video

The domain-specific personal videos highlight dataset from the paper [1] describes a fully automatic method to train a domain-specific highlight ranker for…
Tags: action, domain, human, recognition, saliency, summarization, video, wearable

The UMD Dynamic Scene Recognition dataset consists of 13 classes with 10 videos per class and is used to classify dynamic scenes. The dataset has been de…
Tags: classification, dynamic, motion, recognition, scene, video

The Fish4Knowledge project (groups.inf.ed.ac.uk/f4k/) is pleased to announce the availability of 2 subsets of our tropical coral reef fish video and ext…
Tags: animal, camera, classification, fish, motion, nature, recognition, video, water

Welcome to the homepage of the gvvperfcapeva datasets. This site serves as a hub to access a wide range of datasets that have been created for projects of…
Tags: action, depth, face, human, mesh, multiview, pose, reconstruction, tracking, video

The SegTrack dataset consists of six videos (five are used) with ground truth pixelwise segmentation (the 6th, penguin, is not usable). The dataset is used for …
Tags: camera, flow, groundtruth, model, motion, object, optical, proposal, segmentation, stationary, video

Dataset A (former NLPR Gait Database) was created on Dec. 10, 2001, including 20 persons. Each person has 12 image sequences, 4 sequences for each of the …
Tags: action, biometry, classification, foot, gait, human, motion, pressure, recognition

The Leeds Cows dataset by Derek Magee consists of 14 different video sequences showing a total of 18 cows walking from right to left in front of different…
Tags: animal, background, cow, detection, segmentation, video

The .enpeda.. Image Sequence Analysis Test Site (EISATS) offers sets of long bi- or trinocular image sequences recorded in the context of vision-based dri…
Tags: analysis, flow, motion, optical, segmentation, semantic, stereo, vision

This dataset consists of 51 oral presentations recorded with 2 ambient visual sensors (web-cams), 3 First Person View (FPV) cameras (1 on the presenter and 2 on rand…
Tags: analysis, kinect, multi-sensor, presentation, quality, video

The QMUL Junction dataset is a busy traffic scenario for research on activity analysis and behavior understanding. Video length: 1 hour (90000 frames)
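As a quick sanity check on specs like the QMUL Junction figures above, the frame count and duration together pin down the frame rate; a minimal sketch (pure arithmetic, no dataset access):

```python
# Frame rate implied by the QMUL Junction specs: 90000 frames over 1 hour.
frames = 90_000
duration_s = 60 * 60  # 1 hour in seconds
fps = frames / duration_s
print(fps)  # 25.0, i.e. a 25 fps stream
```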
Tags: behavior, counting, crowd, detection, motion, pedestrian, tracking, video

An indoor action recognition dataset which consists of 18 classes performed by 20 individuals. Each action is individually performed 8 times (4 daytim…
Tags: action, cross-view, indoor, multi-camera, open-view, recognition, video

The Freiburg-Berkeley Motion Segmentation Dataset (FBMS-59) is an extension of the BMS dataset with 33 additional video sequences. A total of 720 frames i…
Tags: benchmark, groundtruth, motion, object, pedestrian, segmentation, tracking, video

The Airport MotionSeg dataset contains 12 sequences of videos of an airport scenario with small and large moving objects and various speeds. It is challen…
Tags: airport, camera, clustering, motion, segmentation, video, zoom

The Video Summarization (SumMe) dataset consists of 25 videos, each annotated with at least 15 human summaries (390 in total). The data consists of videos…
Tags: action, benchmark, event, groundtruth, human, summary, video

The Weizmann actions dataset by Blank, Gorelick, Shechtman, Irani, and Basri consists of ten different types of actions: bending, jumping jack, jumping, j…
Tags: action, action classification, segmentation, video

The MSR Action datasets are a collection of various 3D datasets for action recognition. See details at http://research.microsoft.com/en-us/um/people/zliu/a…
Tags: 3d, action, detection, recognition, reconstruction, video

The dataset consists of about 50 hours of kindergarten surveillance video, totalling approximately 100 video sequences (1000GB, 50 …
Tags: action, background, behavior, human, segmentation, video surveillance

The Video Segmentation Benchmark (VSB100) provides ground truth annotations for the Berkeley Video Dataset, which consists of 100 HD quality videos divide…
Tags: benchmark, groundtruth, motion, object, pedestrian, segmentation, tracking, video

The GaTech VideoSeg dataset consists of two (waterski and yunakim?) video sequences for object segmentation. There is no groundtruth segmentation an…
Tags: camera, model, motion, object, segmentation, video

The multi-modal/multi-view datasets are created in a cooperation between the University of Surrey and Double Negative within the EU FP7 IMPART project. The …
Tags: 3d, action, color, dynamic, emotion, face, human, indoor, lidar, model, multi-mode, multi-view, outdoor, rgbd, video

The dataset captures 25 people preparing 2 mixed salads each and contains over 4h of annotated accelerometer and RGB-D video data. Annotated activities co…
Tags: action, activity, classification, detection, recognition, tracking, video

Penn-Fudan Pedestrian Detection and Segmentation
Tags: background, detection, motion, pedestrian, segmentation

The Berkeley Multimodal Human Action Database (MHAD) contains 11 actions performed by 7 male and 5 female subjects in the range 23-30 years of age except …
Tags: action, classification, motion, multiview, recognition

The TUG (Timed Up and Go test) dataset consists of actions performed three times by 20 volunteers. The people involved in the test are aged between 22 and…
Tags: accelerometer, action, depth image processing - tug, human, kinect, recognition, time, video, wearable

LASIESTA is composed of many real indoor and outdoor sequences organized in different categories, each covering a specific challenge in moving obje…
Tags: background, camera, challenge, dataset, detection, foreground, groundtruth, motion, object, stationary, subtraction

Many different labeled video datasets have been collected over the past few years, but it is hard to compare them at a glance. So we have created a handy …
Tags: action, benchmark, classification, detection, object, recognition, video

The CHALEARN Multi-modal Gesture Challenge is a dataset of 700+ sequences for gesture recognition using images, Kinect depth, segmentation and skeleton data.…
Tags: action, depth, gesture, human, illumination, kinect, recognition, segmentation, skeleton

At Udacity, we believe in democratizing education. How can we provide opportunity to everyone on the planet? We also believe in teaching really amazing an…
Tags: autonomous, car, classification, detection, driving, recognition, robot, segmentation, street, synthetic, time, urban, video

The GaTech VideoStab dataset consists of N videos for the task of video stabilization. The code is implemented in the YouTube video editor for stabilization.…
Tags: camera, path, stabilization, video

The dataset is designed to be realistic, natural and challenging for video surveillance domains in terms of its resolution, background clutter, diversity …
Tags: action

The Scene Background Initialization (SBI) dataset has been assembled in order to evaluate and compare the results of background initializati…
Tags: background, benchmark, change, detection, foreground, initialization

The dataset contains 15 documentary films downloaded from YouTube, whose durations vary from 9 minutes to as long as 50 minutes, and the total nu…
Tags: detection, object, video

Contains six types of human actions (walking, jogging, running, boxing, hand waving and hand clapping) performed several times by 25 subjects in four diff…
Tags: action

The Webcam Interestingness dataset consists of 20 different webcam streams, with 159 images each. It is annotated with interestingness ground truth, acqui…
Tags: classification, interest, ranking, retrieval, video, weather, webcam

This dataset consists of seven meal-preparation activities, each performed by 10 subjects. Subjects perform the activities based on the given cooking reci…
Tags: action

The xawAR16 dataset is a multi-RGBD camera dataset, generated inside an operating room (IHU Strasbourg), which was designed to evaluate tracking/relocaliz…
Tags: depth, medicine, operation, recognition, surgery, table, video

The Pittsburgh Fast-food Image dataset (PFID) consists of 4545 still images, 606 stereo pairs, 303 360° videos for structure from motion, and 27 privacy-pr…
Tags: classification, food, laboratory, real, recognition, reconstruction, video

The dataset consists of four temporally synchronized data modalities. These modalities include RGB videos, depth videos, skeleton positions, and inertial …
Tags: action

The CERTH image blur dataset consists of 2450 digital images, 1850 of which are photographs captured by various camera models in different shooting co…
Tags: blur, defocus, detection, image, motion, quality

This dataset comprises 10 actions related to breakfast preparation, performed by 52 different individuals in 18 different kitchens.
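Datasets recorded per subject, like the breakfast-preparation set above, are usually evaluated with leave-subjects-out splits so that no individual appears in both the training and test sets. A minimal sketch (the subject IDs are hypothetical placeholders, not the dataset's real identifiers):

```python
# Leave-subjects-out split: hold out a fixed fraction of the 52 individuals.
# Subject IDs here are hypothetical placeholders, not the dataset's real IDs.
import random

subjects = [f"P{i:02d}" for i in range(1, 53)]  # 52 participants
rng = random.Random(0)                          # fixed seed for a reproducible split
rng.shuffle(subjects)

n_test = len(subjects) // 4                     # hold out ~25% of subjects
test_subjects = set(subjects[:n_test])
train_subjects = set(subjects[n_test:])

# No subject may appear on both sides of the split.
assert train_subjects.isdisjoint(test_subjects)
print(len(train_subjects), len(test_subjects))  # 39 13
```

Splitting by subject rather than by clip avoids the optimistic bias of testing on people the model has already seen.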
Tags: action

The Sheffield Kinect Gesture (SKIG) dataset contains 2160 hand gesture sequences (1080 RGB sequences and 1080 depth sequences) collected from 6 subjects. Al…
Tags: action, depth, gesture, human, illumination, kinect, recognition

The Longterm Pedestrian dataset consists of images from a stationary camera running 24 hours a day for 7 days at about 1 fps. It is used for adaptive detection an…
Tags: background, change, coffee, detection, graz, illumination, indoor, multitarget, pedestrian, robust

Observations of several subjects setting a table in different ways. Contains videos, motion capture data, RFID tag readings,...
Tags: action

Hollywood-2 dataset contains 12 classes of human actions and 10 classes of scenes distributed over 3669 video clips and approximately 20.1 hours of video i…
Tags: action

The Olympic Sports Dataset contains YouTube videos of athletes practicing different sports.
Tags: action

The MSR RGB-D Dataset 7-Scenes dataset is a collection of tracked RGB-D camera frames. The dataset may be used for evaluation of methods for different app…
Tags: depth, kinect, location, reconstruction, tracking, video

This dataset consists of a set of actions collected from various sports which are typically featured on broadcast television channels such as the BBC and …
Tags: action

Gaze data on video stimuli for computer vision and visual analytics. Converted 318 video sequences from several different gaze tracking data sets with p…
Tags: gaze data, metadata, polygon annotation, segmentation, video

The database of nude and non-nude videos contains a collection of 179 video segments collected from the following movies: Alpha Dog, Basic Instinct, Befor…
Tags: movie, nude detection, video

The BEOID dataset includes object interactions ranging from preparing a coffee to operating a weight lifting machine and opening a door. The dataset is re…
Tags: 3d, egocentric, interaction, object, pose, tracking, video

The current video database contains six types of human actions (walking, jogging, running, boxing, hand waving and hand clapping) performed several time…
Tags: action classification, segmentation, video

This dataset contains 5 different collective activities: crossing, walking, waiting, talking, and queueing, and 44 short video sequences, some of which wer…
Tags: action

The Babenko tracking dataset contains 12 video sequences for single object tracking. For each clip they provide (1) a directory with the original ima…
Tags: animal, face, object tracking, occlusion, single, video

The Stanford 40 Actions dataset contains images of humans performing 40 actions. In each image, we provide a bounding box of the person who is performing …
Tags: action, boundingbox, detection, human, recognition

The Oxford RobotCar Dataset contains over 100 repetitions of a consistent route through Oxford, UK, captured over a period of over a year. The dataset cap…
Tags: autonomous, car, classification, detection, driving, recognition, robot, segmentation, street, time, urban, video, year

To evaluate our method we designed a new ground truth database of 50 images. The following zip-files contain: Data, Segmentation, Labelling - Lasso, Label…
Tags: background, boundingbox, color, image segmentation, optimization

Fully annotated dataset of RGB-D video data and data from accelerometers attached to kitchen objects capturing 25 people preparing two mixed salads each (…
Tags: action

The Berkeley Video Segmentation Dataset (BVSD) contains videos for (boundary) segmentation. Downloads: Dataset train, Dataset test
Tags: benchmark, segmentation, video

We wanted to have a collection of action recognition papers and results that everybody can use for reference. The site will work by the community principl…
Tags: action, benchmark, dataset, recognition

The Traffic Video dataset consists of X videos from an overhead camera showing a street crossing with multiple traffic scenarios. The dataset can be downlo…
Tags: detection, overhead, road, tracking, traffic, urban, video, view

This is a subset of the dataset introduced in the SIGGRAPH Asia 2009 paper, Webcam Clip Art: Appearance and Illuminant Transfer from Time-lapse Sequences.…
Tags: camera, change, illumination, light, nature, static, time, urban, video, webcam

The All I Have Seen (AIHS) dataset is created to study the properties of total visual input in humans: for around two weeks Nebojsa Jojic wore a camera ca…
Tags: 3d, clustering, indoor, outdoor, scene, similarity, study, summary, user, video

The Mall dataset was collected from a publicly accessible webcam for crowd counting and profiling research. Ground truth: over 60,000 pedestrians were …
Tags: counting, crowd, detection, indoor, pedestrian, tracking, video, webcam

The HandNet dataset contains depth images of 10 participants' hands non-rigidly deforming in front of a RealSense RGB-D camera. This dataset includes 214…
Tags: articulation, classification, detection, fingertip, hand, pose, rgbd, segmentation, video

We present a new large-scale dataset that contains a diverse set of stereo video sequences recorded in street scenes from 50 different cities, with high q…
Tags: car, cities, detection, pedestrian, person, segmentation, semantic, stereo, urban, video, weakly

The crowd datasets are collected from a variety of sources, such as UCF and data-driven crowd datasets. The sequences are diverse, representing dense crow…
Tags: anomaly, crowd, detection, human, pedestrian, scene, understanding, video

The Weather and Illumination Database (WILD) is an extensive database of high quality images of an outdoor urban scene, acquired every hour over all seaso…
Tags: camera, change, depth, estimation, illumination, light, newyork, static, time, urban, video, weather, webcam

The Cholec80 dataset contains 80 videos of cholecystectomy surgeries performed by 13 surgeons. The videos are captured at 25 fps. The dataset is labeled w…
Tags: medicine, phase, recognition, surgery, tool, video

We introduce the Shelf dataset for multiple human pose estimation from multiple views. In addition we annotate the body joints in the Campus dataset from …
Tags: 3d, capture, estimation, human, motion, multiple, pose, view

The video co-segmentation dataset contains 4 video sets which together contain 11 videos, with 5 frames of each video labeled with pixel-level ground-trut…
Tags: co-segmentation, dataset, segmentation, video

The dataset contains 2326 video sequences of 15 different sport actions and human body joint annotations for all sequences.
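Pixel-level segmentation ground truth like that in the co-segmentation set above is commonly scored with intersection-over-union (IoU) against a predicted mask. A minimal sketch on toy binary masks (the masks are made-up illustrations, not data from any dataset):

```python
# IoU between a predicted and a ground-truth binary mask, on a toy 4x4 frame.
# The masks below are made-up illustrations, not data from any dataset.
def iou(pred, gt):
    inter = sum(p and g for p, g in zip(pred, gt))
    union = sum(p or g for p, g in zip(pred, gt))
    return inter / union if union else 1.0  # two empty masks agree perfectly

pred = [1, 1, 0, 0,
        1, 1, 0, 0,
        0, 0, 0, 0,
        0, 0, 0, 0]
gt   = [1, 1, 1, 0,
        1, 1, 1, 0,
        0, 0, 0, 0,
        0, 0, 0, 0]
print(iou(pred, gt))  # 4 overlapping pixels over 6 in the union
```

Averaging this score over the labeled frames gives a per-video number that can be compared across methods.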
Tags: action

ShakeFive2 is a collection of 8 dyadic human interactions with accompanying skeleton metadata. The metadata is frame-based XML data containing the skeleton…
Tags: human, interaction, kinect, video

The TVPR dataset includes 23 registration sessions. Each of the 23 folders contains the video of one registration session. Acquisitions have been performe…
Tags: clothing, depth, gender, identification, indoor, people, person, recognition, reidentification, top-view, video

The Lane Level Localization dataset was collected on a highway in San Francisco with the following properties:
* Reasonable traffic
* Multiple lane hig…
Tags: 3d, autonomous, benchmark, car, driving, gps, localization, map, road, video

The Pornography database contains nearly 80 hours of 400 pornographic and 400 non-pornographic videos. For the pornographic class, we have browsed website…
Tags: pornography, video, video frames, video shots

The Graz02 dataset by Andreas Opelt and Axel Pinz contains four categories of images: bikes, people, cars and a single background class. The annotation ha…
Tags: background, bike, car, clutter, graz, object detection, pedestrian

The Microsoft Research Cambridge-12 Kinect gesture dataset consists of sequences of human movements, represented as body-part locations, and the associate…
Tags: action, gesture, human, kinect, recognition

Salient Montages is a human-centric video summarization dataset from the paper [1]. In [1], we present a novel method to generate salient montages f…
Tags: human, montage, saliency, summarization, video, wearable

The multiple foreground video co-segmentation dataset consists of four sets, each with a video pair and two foreground objects in common. The dataset …
Tags: co-segmentation, segmentation, video

These datasets were generated for the M2CAI challenges, a satellite event of MICCAI 2016 in Athens. Two datasets are available for two different challenge…
Tags: challenge, medicine, recognition, surgery, video, workflow

The VidPairs dataset contains 133 pairs of images, taken from 1080p HD (~2 megapixel) official movie trailers. Each pair consists of images of the same sc…
Tags: dense, description, flow, matching, optical, pair, patch, video

UCF50 is an action recognition dataset with 50 action categories, consisting of realistic videos taken from YouTube.
Tags: action

The test sequences provide interested researchers a real-world multi-view test data set captured in the blue-c portals. The data is meant to be used for t…
Tags: action, camera, multiview, segmentation, tracking

The ICG Multi-Camera and Virtual PTZ dataset contains the video streams and calibrations of several static Axis P1347 cameras and one panoramic video from…
Tags: calibration, camera, crowd, detection, graz, multitarget, multiview, network, object, outdoor, panorama, pedestrian, tracking, video

The ICG Multi-Camera datasets consist of:
* Easy Data Set (just one person)
* Medium Data Set (3-5 persons, used for the experiments)
* Hard Data Set (crowd…
Tags: calibration, camera, detection, graz, indoor, multitarget, multiview, object, pedestrian, tracking, video

The MOT Challenge (http://motchallenge.net/) is a framework for the fair evaluation of multiple people tracking algorithms. In this framework we provide:
- A large collection of d…
Tags: 3d, benchmark, dataset, evaluation, multiple, pedestrian, people, surveillance, target, tracking, video

This dataset features video sequences that were obtained using an R/C-controlled blimp equipped with an HD camera mounted on a gimbal. The collection repres…
Tags: action

Dataset of 9,532 images of humans performing 40 different actions, annotated with bounding-boxes.
Tags: action

The VSUMM (Video SUMMarization) dataset consists of 50 videos from Open Video. All videos are in MPEG-1 format (30 fps, 352 x 240 pixels), in color and with sou…
Tags: keyframe, similarity, static, study, summary, type, user, video

The UrbanStreet dataset used in the paper can be downloaded here [188M]. It contains 18 stereo sequences of pedestrians taken from a stereo rig mounted o…
Tags: detection, human, multitarget, pedestrian, recognition, segmentation, tracking, urban, video

The Graz01 dataset by Andreas Opelt and Axel Pinz contains four types of images: bikes, people, background with no bikes, background with no people.
Tags: background, bike, clutter, graz, object detection, occlusion, pedestrian

Walk, Run, Jump, Gallop sideways, Bend, One-hand wave, Two-hands wave, Jump in place, Jumping Jack, Skip.
Tags: action

The Where Who Why (WWW) dataset provides 10,000 videos with over 8 million frames from 8,257 diverse scenes, therefore offering a superior comprehensive d…
Tags: crowd, detection, flow, optical, pedestrian, recognition, surveillance, video

The GaTech VideoContext dataset consists of over 100 groundtruth-annotated outdoor videos with over 20000 frames for the task of geometric context evalua…
Tags: classification, context, geometry, nature, outdoor, segmentation, semantic, supervised, unsupervised, urban, video

The dataset was captured by a Kinect device. There are 12 dynamic American Sign Language (ASL) gestures, and 10 people. Each person performs each gesture …
Tags: action

Collected from various sources, mostly from movies, and a small proportion from public databases, YouTube and Google videos. The dataset contains 6849 cli…
Tags: action

The Yotta dataset consists of 70 images for semantic labeling given in 11 classes. It also contains multiple videos and camera matrices for 14km of drivin…
Tags: 3d, camera, classification, reconstruction, segmentation, semantic, urban, video

The Daimler Urban Segmentation Dataset consists of video sequences recorded in urban traffic. The dataset consists of 5000 rectified stereo image pairs wi…
Tags: motion, outdoor, segmentation, semantic, stereo, urban

The YouTube-Objects dataset is composed of videos collected from YouTube by querying for the names of 10 object classes. It contains between 9 and 24 vide…
Tags: detection, flow, object, optical, segmentation, video

The automotive multi-sensor (AMUSE) dataset consists of inertial and other complementary sensor data combined with monocular, omnidirectional, high frame …
Tags: api, city, image, inertial, streetside, traffic, urban, video

The High Definition Analytics (HDA) dataset is a multi-camera High-Resolution image sequence dataset for research on High-Definition surveillance: Pedestr…
Tags: benchmark, camera, detection, high-definition, human, indoor, lisbon, multiview, network, pedestrian, re-identification, surveillance, tracking, video

A Kinect dataset for hand detection in naturalistic driving settings as well as a challenging 19 dynamic hand gesture recognition dataset for human machin…
Tags: action