Authentication by mapping keystrokes to music: the melody of typing
Belman, Amith K.; Paul, Tirthankar; Wang, Li; Iyengar, S. S.; Sniatała, Paweł; Jin, Zhanpeng; Phoha, Vir V.; Vainio, Seppo; Röning, Juha (2020-04-23)
A. K. Belman et al., "Authentication by Mapping Keystrokes to Music: The Melody of Typing," 2020 International Conference on Artificial Intelligence and Signal Processing (AISP), Amaravati, India, 2020, pp. 1-6, doi: 10.1109/AISP48273.2020.9073125
© 2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
https://rightsstatements.org/vocab/InC/1.0/
https://urn.fi/URN:NBN:fi-fe2020051126030
Abstract
Expressing Keystroke Dynamics (KD) in the form of sound opens new avenues for applying sound-analysis techniques to KD. However, this mapping is not straightforward: the differing feature spaces, the differences in feature magnitudes, and the need for the music to remain interpretable by humans all introduce complexity. We present a musical interface to KD by mapping keystroke features to music features. Musical elements such as melody, harmony, rhythm, pitch, and tempo are varied according to the magnitudes of their corresponding keystroke features. A pitch-embedding technique makes the music discernible among users. Data from 30 users who typed fixed strings multiple times on a desktop show that these auditory signals are distinguishable between users both by standard classifiers (SVM, Random Forests, and Naive Bayes) and by human listeners alike.
Collections
- Open access [37205]