9:00 - 9:30
|
Registration & Coffee
|
9:30 - 9:45
|
Opening (Prof. Dr. Bodo Urban & Prof. Dr. Thomas Kirste)
|
9:45 - 10:45
|
Keynote (Prof. Dr. Paul Lukowicz)
|
10:45 - 11:00
|
Coffee Break
|
11.00 - 12.15
|
Session #1: Introduction & Reviews (Chair: Gerald Bieber)
A Typology of Wearable Activity Recognition and Interaction (Manuel Dietrich and Kristof Van Laerhoven)
In this paper, we provide a typology of sensor-based activity recognition and interaction, which we call wearable activity recognition. The typology focuses on a conceptual level regarding the relation between persons and computing systems. Two paradigms are seen as predominant: first, activity-based seamless and unobtrusive interaction, and second, activity tracking for reflection. This conceptual approach leads to the key term of this technology research, which is currently underexposed in a wider conceptual understanding: human action/activity. Modeling human action has been a topic for human-computer interaction (HCI) since the field's beginning. We apply two classic theories that are influential in HCI research to wearable activity recognition; the result is both a survey and a critical reflection on these concepts. As a further goal of our approach, we argue for the relevance and benefits such a typology can have. Besides practical consequences, a typology of the human-computer relation and a discussion of the key term activity can serve as a medium for exchange with other disciplines. Especially as applications become more serious, for example in health care, a typology supporting a wider mutual understanding can be useful for cooperation with non-technical practitioners, e.g., doctors or psychologists.
A Study on Measuring Heart- and Respiration-Rate via Wrist-Worn Accelerometer-based Seismocardiography (SCG) in Comparison to Commonly Applied Technologies (Marian Haescher, Denys J.C. Matthies, John Trimpop and Bodo Urban)
Since the human body is a living organism, it emits various life signs, which can be traced with electromyography, but also with motion-sensitive sensors such as typical inertial sensors. In this paper we present how to recognize the Heart Rate (HR), Respiration Rate (RR) and the muscular Microvibrations (MV) from an accelerometer worn on the wrist. We compare our Seismocardiography (SCG) / Ballistocardiography (BCG) approach to commonly applied measurement methods. In conclusion, our study confirmed that SCG/BCG with a wrist-worn accelerometer also provides accurate vital data. While the recognized RR deviated only slightly from the ground truth (SD=16.61%), the detected HR is not significantly different (SD=1.63%) from the current gold standard.
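The abstract does not detail the signal-processing chain; as a rough illustration of how a heart rate could be recovered from a wrist-worn accelerometer trace, here is a minimal frequency-domain sketch. All parameter choices (sampling rate, frequency band) are assumptions for illustration, not the authors' pipeline.

```python
import numpy as np

def estimate_heart_rate(signal, fs):
    """Estimate heart rate (bpm) from a 1-D acceleration trace by
    locating the dominant frequency in a plausible cardiac band.
    Illustrative sketch only, not the paper's method."""
    # Remove the DC component (gravity / posture offset).
    x = signal - np.mean(signal)
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    # Restrict to a plausible heart-rate band: 0.7-3.5 Hz (42-210 bpm).
    band = (freqs >= 0.7) & (freqs <= 3.5)
    f_peak = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * f_peak
```

The respiration rate could be estimated the same way with a lower band (roughly 0.1-0.5 Hz); in practice the cardiac component of an SCG trace is far weaker than motion artifacts, so heavy filtering and windowing would be needed.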
A Review and Quantitative Comparison of Methods for Kinect Calibration (Wei Xiang, Christopher Conly, Christopher McMurrough and Vassilis Athitsos)
To utilize the full potential of RGB-D devices, calibration must be performed to determine the intrinsic and extrinsic parameters of the color and depth sensors and to reduce lens and depth distortion. After doing so, the depth pixels can be mapped to color pixels and both data streams can be simultaneously utilized. This work presents an overview and quantitative comparison of RGB-D calibration techniques and examines how the resolution and number of images affect calibration.
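For readers unfamiliar with what the calibrated parameters are used for, the depth-to-color mapping mentioned above can be sketched as a back-project/transform/re-project step. This is a generic pinhole-model illustration (lens distortion ignored); the matrices `K_d`, `K_c`, `R`, `t` are placeholders, not values from the paper.

```python
import numpy as np

def depth_to_color_pixel(u, v, z, K_d, K_c, R, t):
    """Map a depth pixel (u, v) with measured depth z into the color
    image, given the depth/color intrinsics K_d, K_c and the
    depth-to-color extrinsics (R, t). Distortion is ignored."""
    # Back-project the depth pixel to a 3-D point in the depth frame.
    p_d = z * np.linalg.inv(K_d) @ np.array([u, v, 1.0])
    # Transform into the color camera's coordinate frame.
    p_c = R @ p_d + t
    # Project with the color intrinsics and dehomogenize.
    uvw = K_c @ p_c
    return uvw[0] / uvw[2], uvw[1] / uvw[2]
```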
|
12:15 - 13:30
|
Lunch Break
|
13:30 - 14:45
|
Session #2: Classifier Specific Activity Recognition (Chair: Marian Haescher)
Activity Recognition Using Conditional Random Field (Megha Agarwal and Peter Flach)
Activity recognition is an integral component of ubiquitous computing. Recognizing an activity is a challenging task since activities can be concurrent, interleaved or ambiguous and can consist of multiple actors (which would require parallel activity recognition). This paper investigates how the discriminative nature of conditional random fields (CRF) can be exploited to enhance the accuracy of recognizing activities when compared to that achieved using generative models. It aims to apply CRF to recognize complex activities, analyze the model trained by CRF and evaluate the performance of CRF against existing models using stochastic gradient descent (which is suitable for online learning).
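The paper concerns training CRFs with stochastic gradient descent; as background, the standard decoding step of a linear-chain CRF (finding the most likely activity sequence given learned scores) can be sketched with the Viterbi algorithm. The score matrices below are illustrative placeholders, not learned parameters from the paper.

```python
import numpy as np

def viterbi_decode(emissions, transitions):
    """Most likely label sequence for a linear-chain CRF.
    emissions: (T, L) per-timestep label scores.
    transitions: (L, L) score of moving from label i to label j.
    Returns the highest-scoring label path. Illustrative sketch."""
    T, L = emissions.shape
    score = emissions[0].copy()
    back = np.zeros((T, L), dtype=int)
    for step in range(1, T):
        # total[i, j]: best score ending in label j via previous label i.
        total = score[:, None] + transitions + emissions[step][None, :]
        back[step] = total.argmax(axis=0)
        score = total.max(axis=0)
    # Follow the backpointers from the best final label.
    path = [int(score.argmax())]
    for step in range(T - 1, 0, -1):
        path.append(int(back[step][path[-1]]))
    return path[::-1]
```

Transition scores are what let the CRF smooth over noisy per-timestep evidence: a strong penalty on label switches keeps the decoded activity stable even when a single timestep's emission disagrees.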
Phase Registration Improves Classification and Clustering of Cycles Based on Self-Organizing Maps (Juan Carlos Quintana Duque and Dietmar Saupe)
Self-Organizing Maps (SOM) have been used to reduce the complexity of joint kinematic and kinetic data in order to cluster, classify and visualize cyclic motion data. In this paper we describe the results after training SOM with preprocessed data based on phase registration by dynamic time warping. For validation, we recorded acceleration data of human locomotion varying the treadmill slope, activity (i.e., walking, jogging, running), and whether or not 1.5 kg weights were attached to the ankles. The topological quality of the map after training improved when the phase registration was applied. Furthermore, test and subject classification improved, in particular for walking data, when the phase registration was applied for each individual activity. Activity classification improved when the phase registration was calculated from all cycles of our experiments together.
Exploiting Thread-Level Parallelism in Template-Based Gesture Recognition with Dynamic Time Warping (Florian Grützmacher, Johann-Peter Wolff and Christian Haubelt)
Mobile devices have become ubiquitous, powerful computing devices. While their use scenarios require new input methods, their typical many-core computing architectures allow for new ways to implement these input methods. In this paper, the suitability of many-core digital signal processors for online hand gesture recognition is evaluated. To this end, a system consisting of a data glove with three accelerometers and a many-core digital signal processor board is presented. Experiments assess real-time properties of hand gesture recognition on the many-core processing platform.
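Template-based gesture recognition with dynamic time warping, as named in the title, can be sketched in a few lines: DTW aligns a recorded gesture against each stored template and the nearest template's label wins. This is the textbook O(n·m) formulation on 1-D sequences for illustration; the paper's contribution is parallelizing such computations on a many-core DSP, which is not shown here.

```python
def dtw_distance(a, b):
    """Dynamic-time-warping distance between two 1-D sequences,
    filled as an (n+1) x (m+1) cost table. Illustrative sketch."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three admissible warping steps.
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

def classify(sample, templates):
    """Label of the nearest template under DTW distance.
    templates: list of (label, sequence) pairs."""
    return min(templates, key=lambda t: dtw_distance(sample, t[1]))[0]
```

Because each template's DTW table can be filled independently, distributing templates across threads is a natural source of the thread-level parallelism the paper exploits.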
|
14:45 - 15:15
|
Coffee Break
|
15:15 - 16:30
|
Session #3: Gesture & Activity Recognition (Chair: Denys J.C. Matthies)
eRing: Multiple Finger Gesture Recognition with One Ring Using an Electric Field (Mathias Wilhelm, Daniel Krakowczyk, Frank Trollmann and Sahin Albayrak)
Since gestures are one of the natural interaction modalities between humans, they also represent a promising modality for human-computer interaction. Finger rings could provide an unobtrusive way to detect hand and finger gestures if they are able to detect a large variety of gestures involving the hand and multiple fingers. One method that could detect such gestures with a single ring is electric field sensing. In this paper we present an early prototype, called eRing, which uses this method, and evaluate its capability to detect different finger and hand gestures via a user study.
Opportunities for Activity Recognition using Ultrasound Doppler Sensing on Unmodified Mobile Phones [Best Paper] (Biying Fu, Jakob Karolus, Tobias Grosse-Puppendahl, Jonathan Hermann and Arjan Kuijper)
Nowadays, activity recognition on smartphones is applied ubiquitously, for example to monitor personal health. The smartphone's sensors act as a foundation to provide information on movements, the user's location or direction. Incorporating ultrasound sensing using the smartphone's native speaker and microphone provides additional means for perceiving the environment and humans. In this paper, we outline possible usage scenarios for this new and promising sensing modality. Based on a custom implementation, we provide results on various experiments to assess the opportunities for activity recognition systems. We discuss various limitations and possibilities when wearing the smartphone on the human body. In stationary deployments, e.g. placed on a nightstand, our implementation is able to detect movements at distances of up to 1.5 m.
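The physics behind this sensing modality is the Doppler effect: a tone emitted by the speaker reflects off a moving body part and returns with a shifted frequency. The round-trip shift is f_d = 2·v·f0/c. The sketch below only illustrates this relationship; the carrier frequency and speeds are assumed values, not the paper's configuration.

```python
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 degrees C

def doppler_shift(f_emit_hz, v_reflector_ms):
    """Frequency shift of an emitted tone reflected by an object
    moving at v m/s (positive = towards the device). The factor 2
    accounts for the round trip. Illustrative calculation."""
    return 2.0 * v_reflector_ms * f_emit_hz / SPEED_OF_SOUND

def velocity_from_shift(f_emit_hz, shift_hz):
    """Invert the shift to recover the reflector's speed."""
    return shift_hz * SPEED_OF_SOUND / (2.0 * f_emit_hz)
```

With a near-ultrasonic 20 kHz carrier, even slow gestures (tens of cm/s) produce shifts of tens of hertz, which a standard microphone FFT can resolve; this is what makes speaker-plus-microphone sensing feasible on unmodified phones.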
Acoustic Tracking of Hand Activities on Surfaces (Andreas Braun, Stefan Krepp and Arjan Kuijper)
Many common forms of activities are haptic in their nature. We touch, grasp, and interact with a plethora of different objects every day. Some of those objects register our activities, such as the millions of touch screens we use every day. Adding perception to arbitrary objects is an active area of research, with a variety of different technologies in use. Acoustic sensors, such as microphones, react to mechanical waves propagating through a medium. By attaching an acoustic sensor to a surface, we can analyze activities on this medium. In this paper, we present signal analysis and machine learning methods that enable us to detect a variety of interaction events on a surface. We extend previous work by combining swipe and touch detection in a single method, achieving for the latter an accuracy between 91% and 99% with a single microphone and 97% to 100% with two microphones.
|
17:30 - 18:30
|
Get Together on a Boat
|
18:30 - 21:00
|
Best Paper Banquet
|
10:00 - 11:15
|
Session #4: Traffic Safety Applications (Chair: Rebekka Alm)
Enhancing Traffic Safety with Wearable Low-Resolution Displays (Tobias Grosse-Puppendahl, Oskar Bechtold, Lukas Strassel, David Jakob and Arjan Kuijper)
Safety is a major concern for non-motorized traffic participants, such as cyclists, pedestrians or skaters. Since they are far less protected than car occupants, accidents often have serious consequences. In this paper, we investigate how additional protection can be achieved with wearable displays attached to a person's arm, leg or back. In contrast to prior work, we present an extensive study on design considerations for wearable displays in traffic. Based on interviews, experiments, and an online questionnaire with more than 100 participants, we identify potential placements, form factors, and use cases. These findings enabled us to develop a wearable display system for traffic safety, called beSeen. It can be attached to different parts of the human body, such as the arms, legs, or back. Our device unobtrusively recognizes turn indication gestures, braking, and its placement on the body. We evaluate beSeen's performance and show that it can reliably be used to enhance traffic safety.
Exploring Vibrotactile Feedback on the Body and Foot for the Purpose of Pedestrian Navigation (Anita Meier, Denys J.C. Matthies, Bodo Urban and Reto Wettach)
In this paper, we present an evaluation of vibrotactile on-body feedback for the purpose of pedestrian navigation. For this specific task, many researchers have already proposed different approaches such as vibrating belts, wristbands or shoes. Still, there are open issues to consider, such as which body position is most suitable, what kinds of vibration patterns are easy to interpret, and how applicable vibrotactile feedback systems are in real scenarios. To find answers, we reconstructed prototypes commonly found in the literature and went on to evaluate different foot-related designs. On the one hand, we learned that vibrotactile feedback at the foot reduces visual attention and thus also potentially reduces stress. On the other hand, we found that urban space can be very diverse and ambiguous, and therefore a vibrotactile system cannot completely replace common path-finding systems for pedestrians. Rather, we envision such a system being applied as a complementary assistive technology.
Car Crash Detection on Smartphones [Industrial Submission] (Julia Lahn, Heiko Peter and Peter Braun)
In this paper we describe a simple car crash detection algorithm implemented on Android smartphones. The algorithm combines accelerometer and location sensor information to detect typical patterns of car crash situations. We present technical details of our implementation and first results of an evaluation.
|
11:15 - 12:30
|
Lunch Break
|
12:30 - 13:45
|
Session #5: Industrial & Laboratory Applications (Chair: Mario Aehnelt)
RFID-Based Compound Identification in Wet Laboratories With Google Glass (Philipp M. Scholl and Kristof Van Laerhoven)
Experimentation in wet laboratories requires tracking the whereabouts of small containers like test tubes, flasks, and bottles. The current practice is to track them manually by annotating the containers with colored adhesive markers, handwriting, QR- and barcodes, or RFID tags. These annotations are subject to harsh environmental conditions (e.g., many samples are kept in a freezer) and can be hard to share with other lab workers, as multiple lab users might not follow the same annotation system. Increasing their durability, as well as providing a central tracking system for these containers, is of organizational interest. In this paper we present a system for the implicit tracking of RFID-augmented containers with a wrist-worn reader unit, and a voice-interaction scheme based on a head-mounted display.
Computational Causal Behaviour Models for Assisted Manufacturing (Sebastian Bader, Frank Krüger and Thomas Kirste)
In this paper, we present a computational state space model to track and analyse activities of workers during manual assembly processes. These models are well suited to capture the semi-structured processes present in final product assembly tasks. In contrast to pure activity recognition systems, which map sensor data to executed activities, these models are able to track the context of the user and to reason about context variables that are not directly observable through sensors. We describe our modelling approach and report on first evaluation results.
Plant@Hand: From Activity Recognition to Situation-based Annotation Management at Mobile Assembly Workplaces (Rebekka Alm, Mario Aehnelt and Bodo Urban)
This paper describes an approach towards situation-based annotation management on the basis of work-integrated activity recognition and situation detection. We motivate situation-based annotations as a means for collecting and processing contextual knowledge on the work domain in order to improve the quality of information assistance at mobile assembly workplaces. Especially when we make use of automated processes which aim to detect the worker’s ongoing activities and situations, we have to deal at the same time with errors and wrongly inferred assumptions about reality. Here we see the strength of annotation management, which can be used to revise the contextual background knowledge required for determining the autonomous behavior in case of errors and deviations between inferred and real situations.
|
13:45 - 14:00
|
Closing
|