<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/" version="2.0" xml:lang="ja">
	<channel>
		<title>HASCA2020</title>
		<link>http://hasca2020.hasc.jp/</link>
		<atom:link href="http://hasca2020.hasc.jp/rss2.xml" rel="self" type="application/rss+xml" />
		<description></description>
		<language>ja</language>
		<copyright>Copyright (C) 2020 HASCA2020 All rights reserved.</copyright>
		<lastBuildDate>Mon, 31 Aug 2020 10:34:35 +0900</lastBuildDate>
		<generator>a-blog cms</generator>
		<docs>http://blogs.law.harvard.edu/tech/rss</docs>
		<item>
			<dc:creator>hasca-web</dc:creator>
			<title>Program</title>
			<link>http://hasca2020.hasc.jp/program/entry-41.html</link>
			<description><![CDATA[
			<div class="newsTextBox">
			
				
				
				<p><b>Best paper award</b> was presented to<br />
<b>ActivityGAN: Generative Adversarial Networks for Data Augmentation in<br />
Sensor-Based Human Activity Recognition</b><br />
Xi'ang Li (Huazhong University of Science and Technology, Wuhan, China)<br />
Jinqi Luo (Nanyang Technological University, Singapore, Singapore)<br />
Rabih Younes (Duke University, Durham, North Carolina, United States)<br />
<br />
<b>Proceedings</b><br />
<a href="https://dl.acm.org/doi/proceedings/10.1145/3410530">The accepted papers in HASCA workshop have been published on ACM DL.</a><br />
<br />
<b>All times are in UTC (Coordinated Universal Time) on September 12, 2020.</b><br />
Presentation time:<br />
HASCA oral presentation, 12 min (10-min talk + 2-min Q&A)<br />
SHL oral presentation, 12 min (10-min talk + 2-min Q&A)<br />
SHL video, 1 min<br />
Nurse oral presentation, 9 min<br />
Nurse video, 1 min</p>
<table>
<tr><td>1000-1130</td>
<td>
-Opening remarks<br /><br />
-Nurse Challenge summary [15 min]<br />
-Nurse Challenge winner presentation [9 min]<br />
<p><span><i>Nurse Care activity Recognition Based on Machine Learning Techniques Using Accelerometer Data.</i><br />
Mohammad Sabik Irbaz, Abir Azad, Tanjila Alam Sathi, Lutfun Nahar Lota<br />
[<a href="https://drive.google.com/file/d/1hoBDgB9S7EcdkFcNsTngFs--oVobj5nv/view?usp=sharing" target="_blank"><span><b>Video</b></span></a>]</span></p><br />
-Nurse Challenge ceremony [5 min]<br /><br />
-SHL Challenge Introduction [2 min]<br />
-SHL Challenge Summary [15 min]<br />
<p><span><i>Summary of the sussex-huawei locomotion-transportation recognition challenge 2020.</i><br />
Lin Wang, Hristijan Gjoreski, Mathias Ciliberto, Paula Lago, Kazuya Murao, Tsuyoshi Okita, Daniel Roggen.</span></p><br />
-SHL Challenge videos broadcast [12 min]<br />
<p><em>Combining LSTM and CNN for mode of transportation classification from smartphone sensors.</em><br />
Björn Friedrich, Carolin Lübbe.<br />
[<a href="http://www.shl-dataset.org/wp-content/uploads/SHLChallenge2020/short_videos_presentations/01_1022.pdf" target="_blank" rel="noopener noreferrer"><b>Poster</b></a>][<a href="http://www.shl-dataset.org/wp-content/uploads/SHLChallenge2020/short_videos_presentations/01_1022.mp4" target="_blank" download="01_1022.mp4" rel="noopener noreferrer"><b>Video</b></a>]</p><br />
<p><em>Activity recognition for locomotion and transportation dataset using deep learning.</em><br />
Chan Naseeb, Bilal Al Saeedi. <br />
[<a href="http://www.shl-dataset.org/wp-content/uploads/SHLChallenge2020/short_videos_presentations/02_1015.pdf" target="_blank" rel="noopener noreferrer"><b>Poster</b></a>][<a href="http://www.shl-dataset.org/wp-content/uploads/SHLChallenge2020/short_videos_presentations/02_1015.mp4" target="_blank" download="02_1015.mp4" rel="noopener noreferrer"><b>Video</b></a>]</p><br />
<p><em>Where are you? Human activity recognition with smartphone sensor data.</em><br />
Gulustan Dogan, Iremnaz Cay, Sinem Sena Ertas, Şeref Recep Keskin, Nouran Alotaibi, Elif Sahin.<br />
[<a href="http://www.shl-dataset.org/wp-content/uploads/SHLChallenge2020/short_videos_presentations/03_1009.pdf" target="_blank" rel="noopener noreferrer"><b>Poster</b></a>][<a href="http://www.shl-dataset.org/wp-content/uploads/SHLChallenge2020/short_videos_presentations/03_1009.mp4" target="_blank" download="03_1009.mp4" rel="noopener noreferrer"><b>Video</b></a>]</p><br />
<p><em>Human activity recognition using multi-input CNN model with FFT spectrograms.</em><br />
Kei Yaguchi, Chihiro Ito, Wataru Miyazaki, Kazukiyo Ikarigawa, Yuki Morikawa, Ryo  Kawasaki, Yusuke Kyokawa, Eisaku Maeda, Masaki Shuzo. <br />
[<a href="http://www.shl-dataset.org/wp-content/uploads/SHLChallenge2020/short_videos_presentations/04_1008.pdf" target="_blank" rel="noopener noreferrer"><b>Poster</b></a>][<a href="http://www.shl-dataset.org/wp-content/uploads/SHLChallenge2020/short_videos_presentations/04_1008.mp4" target="_blank" download="04_1008.mp4" rel="noopener noreferrer"><b>Video</b></a>]</p><br />
<p><em>Smartphone location identification and transport mode recognition using an ensemble of generative adversarial networks.</em><br />
Lukas Gunthermann.<br />
[<a href="http://www.shl-dataset.org/wp-content/uploads/SHLChallenge2020/short_videos_presentations/05_1020.pdf" target="_blank" rel="noopener noreferrer"><b>Poster</b></a>][<a href="http://www.shl-dataset.org/wp-content/uploads/SHLChallenge2020/short_videos_presentations/05_1020.mp4" target="_blank" download="05_1020.mp4" rel="noopener noreferrer"><b>Video</b></a>]</p><br />
<p><em>A multi-view architecture for the SHL challenge.</em><br />
Massinissa Hamidi, Aomar Osmani, Pegah Alizadeh. <br />
</p><br />
<p><em>UPIC: user and position independent classical approach for locomotion and transportation modes recognition.</em><br />
Md. Sadman Siraj, Omar Shahid, Md. Ahasan Atick Faisal, Farhan Fuad Abir, Md. Atiqur Rahman Ahad, Sozo Inoue, Tahera Hossain.<br />
[<a href="http://www.shl-dataset.org/wp-content/uploads/SHLChallenge2020/short_videos_presentations/07_1001.pdf" target="_blank" rel="noopener noreferrer"><b>Poster</b></a>][<a href="http://www.shl-dataset.org/wp-content/uploads/SHLChallenge2020/short_videos_presentations/07_1001.mp4" target="_blank" download="07_1001.mp4" rel="noopener noreferrer"><b>Video</b></a>]</p><br />
<p><em>Tackling the SHL recognition challenge with phone position detection and nearest neighbour smoothing.</em><br />
Peter Widhalm, Philipp Merz, Liviu Coconu, Norbert Brändle.<br />
[<a href="http://www.shl-dataset.org/wp-content/uploads/SHLChallenge2020/short_videos_presentations/08_1006.pdf" target="_blank" rel="noopener noreferrer"><b>Poster</b></a>][<a href="http://www.shl-dataset.org/wp-content/uploads/SHLChallenge2020/short_videos_presentations/08_1006.mp4" target="_blank" download="08_1006.mp4" rel="noopener noreferrer"><b>Video</b></a>]</p><br />
<p><em>Ensemble learning for human activity recognition.</em><br />
Sekiguchi Ryoichi, Abe Kenji, Yokoyama Takumi, Kumano Masayasu. <br />
[<a href="http://www.shl-dataset.org/wp-content/uploads/SHLChallenge2020/short_videos_presentations/09_1019.pdf" target="_blank" rel="noopener noreferrer"><b>Poster</b></a>][<a href="http://www.shl-dataset.org/wp-content/uploads/SHLChallenge2020/short_videos_presentations/09_1019.mp4" target="_blank" download="09_1019.mp4" rel="noopener noreferrer"><b>Video</b></a>]</p><br />
<p><em>Ensemble approach for sensor-based human activity recognition.</em><br />
Sunidhi Brajesh, Anjan Ragh Kotagal Shivaprakash, Aswathy Mohan, Indraneel Ray. <br />
[<a href="http://www.shl-dataset.org/wp-content/uploads/SHLChallenge2020/short_videos_presentations/10_1005.pdf" target="_blank" rel="noopener noreferrer"><b>Poster</b></a>][<a href="http://www.shl-dataset.org/wp-content/uploads/SHLChallenge2020/short_videos_presentations/10_1005.mp4" target="_blank" download="10_1005.mp4" rel="noopener noreferrer"><b>Video</b></a>]</p><br />
<p><em>Hierarchical Classification Using ML/DL for Sussex-Huawei Locomotion-Transportation (SHL) Recognition Challenge.</em><br />
Yi-Ting Tseng, Yi-Hao Lin, Hsien-Ting Lin, Fong-Man Ho, Chia-Hung Lin. <br />
[<a href="http://www.shl-dataset.org/wp-content/uploads/SHLChallenge2020/short_videos_presentations/11_1007.pdf" target="_blank" rel="noopener noreferrer"><b>Poster</b></a>][<a href="http://www.shl-dataset.org/wp-content/uploads/SHLChallenge2020/short_videos_presentations/11_1007.mp4" target="_blank" download="11_1007.mp4" rel="noopener noreferrer"><b>Video</b></a>]</p><br />
-SHL team 1 presentation [12 min]<br />
<p><em>IndRNN based long-term temporal recognition in the spatial and frequency domain.</em><br />
Shuai Li, Beidi Zhao, Yanbo Gao.<br />
[<a href="http://www.shl-dataset.org/wp-content/uploads/SHLChallenge2020/long_videos_presentations/01_1003.mp4" target="_blank" download="01_1003.mp4" rel="noopener noreferrer"><b>Video</b></a>]</p><br />
-SHL team 2 presentation [12 min]<br />
<p><em>Tackling the SHL Challenge 2020 with person-specific classifiers and semi-supervised learning.</em><br />
Stefan Kalabakov, Simon Stankoski, Nina Reščič, Andrejaana Andova, Ivana Kiprijanovska, Vito Janko, Martin Gjoreski, Mitja Luštrek.<br />
</p><br />
-SHL team 3 presentation [12 min]<br />
<p><em>DenseNetX and GRU for the Sussex-Huawei locomotion-transportation recognition challenge.</em><br />
Yida Zhu, Runze Chen and Haiyong Luo.<br />
[<a href="http://www.shl-dataset.org/wp-content/uploads/SHLChallenge2020/long_videos_presentations/03_1021.mp4" target="_blank" download="03_1021.mp4" rel="noopener noreferrer"><b>Video</b></a>]</p><br />
-SHL ceremony [5 min]<br />
</td></tr>
<tr><td>1130-1200</td><td>Break [30 min]</td></tr>
<tr><td>1200-1330</td>
<td>
<b>Session Chair: Mathias Ciliberto (University of Sussex)</b><br />
<br />
-[HASCA] Using iOS for Inconspicuous Data Collection: A Real-World Assessment<br />
Yuuki Nishiyama, Denzil Ferreira, Wataru Sasaki, Tadashi Okoshi, Jin Nakazawa, Anind K Dey, Kaoru Sezaki<br />
<br />
-[HASCA] ActivityGAN: Generative Adversarial Networks for Data Augmentation in Sensor-Based Human Activity Recognition<br />
Xi'ang Li, Jinqi Luo, Rabih Younes<br />
<br />
-[HASCA] Improving Activity Data Collection with On-Device Personalization Using Fine-tuning<br />
Nattaya Mairittha, Tittaya Mairittha, Sozo Inoue<br />
<br />
-[HASCA] Social Distancing Warning System at Public Transportation by Analyzing Wi-Fi Signal from Mobile Devices<br />
Thongtat Oransirikul, Hideyuki Takada<br />
<br />
-[HASCA] Perception of Interaction between Hand and Object<br />
Yuki Toyosaka, Tsuyoshi Okita<br />
<br />
-[HASCA] MCoMat: A New Performance Metric for Imbalanced Multi-layer Activity Recognition Dataset<br />
Sayeda Shamma Alia, Paula Lago, Sozo Inoue<br />
<br />
-Nurse Challenge videos broadcast [7 min]<br /><br />
<p><span><i>Feature Based Random Forest Nurse Care Activity Recognition Using Accelerometer Data.</i><br />
Carolin Lübbe, Björn Friedrich, Sebastian Fudickar, Sandra Hellmers, Andreas Hein.<br />
[<a href="https://drive.google.com/file/d/17KhgCc3i7vZmKz8hzoMRh5_7JoiygozB/view?usp=sharing" target="_blank"><span><b>Poster</b></span></a>][<a href="https://drive.google.com/file/d/11O-9x3zexAajFC7UzUDAUSVomsq5Zni7/view?usp=sharing" target="_blank"><span><b>Video</b></span></a>]</span></p><br />
<p><span><i>A Pragmatic Signal Processing Approach for Nurse Care Activity Recognition Using Classical Machine Learning.</i> <br />
Md Ahasan Atick Faisal, Md Sadman Siraj, Md Tahmeed Abdullah, Omar Shahid, Farhan Fuad Abir, M.A.R. Ahad.<br />
[<a href="https://drive.google.com/file/d/1koJJGRSJvF4ap0lZyfuJXmgbuL9uWfSy/view?usp=sharing" target="_blank"><span><b>Poster</b></span></a>][<a href="https://drive.google.com/file/d/1NnxiMPGfaZLHJ4KtBdzAE1rUdaIGjoj1/view?usp=sharing" target="_blank"><span><b>Video</b></span></a>]</span></p><br />
<p><span><i>Complex Nurse Care Activity Recognition Using Statistical Features.</i><br />
Promit Basak, Shahamat Mustavi Tasin, Malisha Islam Tapotee, Md. Mamun <br />
Sheikh, A.H.M. Nazmus Sakib, Sriman Bidhan Baray, M.A.R. Ahad.<br />
[<a href="https://drive.google.com/file/d/10zMKjrOsNBkACRfw8omybGA21dRDX53c/view?usp=sharing" target="_blank"><span><b>Poster</b></span></a>][<a href="https://drive.google.com/file/d/12LCeGvMR7j_P3DpGy9H8SPoozVfeAX1I/view?usp=sharing" target="_blank"><span><b>Video</b></span></a>]</span></p><br />
<p><span><i>Nurse Care Activity Recognition Based on Convolution Neural Network for Accelerometer Data.</i><br />
Md. Golam Rasul, Mashrur Hossain Khan, Lutfun Nahar Lota.<br />
[<a href="https://drive.google.com/file/d/1vrvCv4H736GTOkurnfiifwlX5EdpcOtv/view?usp=sharing" target="_blank"><span><b>Poster</b></span></a>][<a href="https://drive.google.com/file/d/1WZVQL7S0df9Cr3RQQoJSKj6DfdPNY2zB/view?usp=sharing" target="_blank"><span><b>Video</b></span></a>]</span></p><br />
<p><span><i>A Window-Based Sequence-to-One Approach with Dynamic Voting <br />
for Nurse Care Activity Recognition Using Acceleration-Based Wearable <br />
Sensor.</i><br />
Yiwen Dong, Jingxiao Liu, Yitao Gao, Sulagna Sarkar, Zhizhang Hu, <br />
Jonathon Fagert, Shijia Pan, Pei Zhang, Hae Young Noh, Mostafa Mirshekari.<br />
[<a href="https://drive.google.com/file/d/1_9uxio5ahdEDk9f3v4Z7xMWB_z6u5re3/view?usp=sharing" target="_blank"><span><b>Video</b></span></a>]</span></p><br />
<p><span><i>Nurse Care activity Recognition Challenge: A Comparative Verification of Multiple Preprocessing Approaches.</i><br />
Hitoshi Matsuyama, Takuto Yoshida, Nozomi Hayashida, Yuto Fukushima, Takuro Yonezawa, Nobuo Kawaguchi.<br />
[<a href="https://drive.google.com/file/d/1DZ0V5Fa2LU9I8gBMnG7uql3TYPJ-3RrK/view?usp=sharing" target="_blank"><span><b>Poster</b></span></a>][<a href="https://drive.google.com/file/d/1xLRvSTfTrQel71z3ahPC5lF4c1ioQOSr/view?usp=sharing" target="_blank"><span><b>Video</b></span></a>]</span></p><br />
<p><span><i>Nurse Care Activity Recognition: Using Random Forest to Handle Imbalanced Class Problem.</i></span><br />
<span>Arafat Rahman, Nazmun Nahid, Iqbal Hassan, M.A.R. Ahad.<br />
[<a href="https://drive.google.com/file/d/1gb1TnY13jsFSkk6CaRilfi06qJsmj96Y/view?usp=sharing" target="_blank"><span><b>Poster</b></span></a>][<a href="https://drive.google.com/file/d/1BvV1PWnuDfAqChczFrRYx3ORNbxDkF3A/view?usp=sharing" target="_blank"><span><b>Video</b></span></a>]</span></p><br />
<br />
</td></tr>
<tr><td>1330-1500</td><td>Break [90 min]</td></tr>
<tr><td>1500-1630</td>
<td>
<b>Session Chair: Paula Lago (Kyushu Inst. of Tech.)</b><br />
<br />
-[HASCA] Action Recognition Using Spatially Distributed Radar Setup Through Microdoppler Signature<br />
Smriti Rani, Arijit Chowdhury, Andrew Gigie, Tapas Chakravarty, Arpan Pal<br />
<br />
-[HASCA] ARM Cortex M4-based Extensible Multimodal Wearable Platform for Sensor Research and Context Sensing from Motion & Sound<br />
Daniel Roggen<br />
<br />
-[HASCA] CausalBatch: Solving Complexity/Performance Tradeoffs for Deep Convolutional and LSTM Networks for Wearable Activity Recognition<br />
Lloyd Pellatt, Daniel Roggen<br />
<br />
-[HASCA] Mental stress classification during motor tasks in older adults using an Artificial Neural Network<br />
Apostolos Kalatzis, Laura Stanley, Ranjana Mehta, Rohith Karthikeyan<br />
<br />
-[HASCA] Identifying Label Noise in Time-Series Datasets<br />
Gentry Atkinson, Vangelis Metsis<br />
<br />
-Closing<br />
</td></tr>
</table>
				

				

				<br class="clearHidden" />
			</div>
			]]></description>
			<category>program</category>
			<guid isPermaLink="true">http://hasca2020.hasc.jp/program/entry-41.html</guid>
			<pubDate>Mon, 31 Aug 2020 10:34:53 +0900</pubDate>
		</item>
		<item>
			<dc:creator>kawaguti</dc:creator>
			<title>Welcome to HASCA2020</title>
			<link>http://hasca2020.hasc.jp/index.html</link>
			<description><![CDATA[
			<div class="newsTextBox">
			
				
				
				<h2 id="h440">Welcome to HASCA2020 Web site!</h2>
				

				
			
				
				
				<p>HASCA2020 is the eighth International Workshop on Human Activity Sensing Corpus and Applications. The workshop will be held in conjunction with UbiComp/ISWC2020.</p>

				

				
			
				
				
				<h2 id="h442">Abstract</h2>
				

				
			
				
				
				<p>The recognition of complex and subtle human behaviors from wearable sensors will enable next-generation human-oriented computing in scenarios of high societal value (e.g., dementia care). This will require large-scale human activity corpora and improved methods to recognize activities and the context in which they occur. This workshop deals with the challenges of designing reproducible experimental setups, running large-scale dataset collection campaigns, designing activity and context recognition methods that are robust and adaptive, and evaluating systems in the real world. We wish to reflect on future methods, such as lifelong learning approaches that allow open-ended activity recognition. The objective of this workshop is to share the experiences among current researchers around the challenges of real-world activity recognition, the role of datasets and tools, and breakthrough approaches towards open-ended contextual intelligence.</p>

<p>This year, HASCA also welcomes papers from participants in the Sussex-Huawei Locomotion and Transportation Recognition Challenge: <a href="http://www.shl-dataset.org/activity-recognition-challenge-2020/">http://www.shl-dataset.org/activity-recognition-challenge-2020/</a>.</p>

<p>We expect the following domains (among others) to be relevant contributions to this workshop:</p>

				

				
			
				
				
				<h2 id="h444">Data collection / Corpus construction</h2>
				

				
			
				
				
				<p>Experiences or reports from data collection and/or corpus construction projects, such as papers describing formats, styles, or methodologies for data collection. Crowd-sourcing data collection or participatory sensing could also be included in this topic.</p>

				

				
			
				
				
				<h2 id="h446">Effectiveness of Data / Data Centric Research</h2>
				

				
			
				
				
				<p>There is a field of research based on collected corpora called “Data Centric Research”. We also solicit reports on experiences of using large-scale human activity sensing corpora. With a large-scale corpus and machine learning, there is considerable room for improving recognition performance.</p>

				

				
			
				
				
				<h2 id="h448">Tools and Algorithms for Activity Recognition</h2>
				

				
			
				
				
				<p>If appropriate and suitable tools for managing sensor data were available, activity recognition researchers could focus more on their research themes. However, developing tools and algorithms to share with the research community is not widely appreciated. In this workshop, we solicit reports on tools and algorithms that move the community forward.</p>

				

				
			
				
				
				<h2 id="h450">Real World Application and Experiences</h2>
				

				
			
				
				
				<p>Activity recognition "in the lab" usually works well; in the real world, however, it often does not. In this workshop, we also solicit experiences from real-world applications. There is a huge gap between the "Lab Environment" and the "Real World Environment", and large-scale human activity sensing corpora will help to bridge it.</p>

				

				
			
				
				
				<h2 id="h452">Sensing Devices and Systems</h2>
				

				
			
				
				
				<p>Data collection is not performed only with "off the shelf" sensors. Special devices sometimes need to be developed to obtain certain kinds of information. There is also a research area concerned with developing and evaluating systems and technologies for data collection.</p>

				

				
			
				
				
				<h2 id="h454">Mobile experience sampling, experience sampling strategies: </h2>
				

				
			
				
				
				<p>Advances in experience sampling approaches, for instance intelligently querying the user or using novel devices (e.g. smartwatches), are likely to play an important role in providing user-contributed annotations of their own activities.</p>
				

				
			
				
				
				<h2 id="h456">Unsupervised pattern discovery</h2>
				

				
			
				
				
				<p>Discovering meaningful repeating patterns in sensor data can be fundamental in informing other elements of a system generating an activity corpus, such as querying the user or triggering annotation crowd-sourcing.</p>
				

				
			
				
				
				<h2 id="h458">Dataset acquisition and annotation through crowd-sourcing, web-mining</h2>
				

				
			
				
				
				<p>A wide abundance of sensor data is potentially within reach, with users instrumented with their mobile phones and other wearables. Capitalizing on crowd-sourcing to create larger datasets in a cost-effective manner may be critical to open-ended activity recognition. Online datasets could also be used to bootstrap recognition models.</p>
				

				
			
				
				
				<h2 id="h460">Transfer learning, semi-supervised learning, lifelong learning</h2>
				

				
			
				
				
				<p>The ability to transfer recognition models across modalities, or to use minimal supervision, would make it possible to reuse datasets across domains and reduce the cost of acquiring annotations.</p>
				

				

				<br class="clearHidden" />
			</div>
			]]></description>
			<guid isPermaLink="true">http://hasca2020.hasc.jp/index.html</guid>
			<pubDate>Wed, 18 Mar 2020 15:00:53 +0900</pubDate>
		</item>
	</channel>
</rss>