Nasir et al. (2017)
"Predicting couple therapy outcomes based on speech acoustic features"
PLOS ONE. Predicted relationship improvement with 79.2% accuracy from acoustic features (shimmer/jitter/pitch) in couple-therapy recordings of 134 couples.
Nasir et al. (2015)
"Still Together?: The Role of Acoustic Features in Predicting Marital Outcome"
Proc. Interspeech. USC Signal Analysis and Interpretation Lab.
Jordan et al. (2025)
"Speech Emotion Recognition in Mental Health: Systematic Review"
JMIR Mental Health. Systematic review of how acoustic features convey emotional cues.
Larrouy-Maestri, Poeppel & Pell (2025)
"The Sound of Emotional Prosody: Nearly 3 Decades of Research and Future Directions"
Perspectives on Psychological Science. Review of nearly three decades of research on how f0, speech rate, and voice quality convey emotional states.
Cox et al. (2022)
"A systematic review and Bayesian meta-analysis of the acoustic features of infant-directed speech"
Nature Human Behaviour. Meta-analysis of 88 studies and 734 effect sizes; confirms the stability of F0 and vowel-space effects across studies.
Filippa et al. (2025)
"Maternal and paternal infant directed speech is modulated by the child's age"
Scientific Reports. Tracked acoustic features of parental IDS in 69 families from 3 to 18 months of age; pitch and intensity changed markedly at 9 months.
Bryant et al. (2025)
"Pitch characteristics of real-world infant-directed speech vary with pragmatic context"
PLOS ONE. From 3,607 audio clips, found that the pitch characteristics of IDS vary with pragmatic context and gender.
Peter et al. (2025)
"Infant Directed Speech Facilitates Vowel Category Discrimination in Pre-Verbal Infants"
Developmental Science. Acoustic exaggeration in IDS facilitates vowel-category discrimination in 4-month-old infants.
Kuhl et al. (1997)
"Cross-language analysis of phonetic units in language addressed to infants"
Science, 277(5326). Vowel hyperarticulation (expanded F1/F2 space) facilitates infant language acquisition.
Kukleva et al. (2025)
"Listening deeper: neural networks unravel acoustic features in preterm infant crying"
Scientific Reports. A CNN classifies gestational age from mel spectrograms with 92.4% accuracy.
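For context on the CNN-based cry analyses in this and the following entries, here is a minimal sketch of the log-mel spectrogram front end such models typically consume. It assumes librosa is available; the sampling rate, FFT, and mel parameters are illustrative choices, not settings reported in any of the cited papers.

```python
import librosa
import numpy as np

def log_mel_spectrogram(wav_path: str, sr: int = 16000, n_mels: int = 64,
                        n_fft: int = 1024, hop_length: int = 256) -> np.ndarray:
    """Load an audio clip and return an (n_mels, n_frames) log-mel spectrogram."""
    y, _ = librosa.load(wav_path, sr=sr, mono=True)      # resample to a fixed rate
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=n_fft,
                                         hop_length=hop_length, n_mels=n_mels)
    return librosa.power_to_db(mel, ref=np.max)           # dB scale, typical CNN input

# e.g. spec = log_mel_spectrogram("cry_clip.wav"); feed spec[np.newaxis, ...] to a CNN
```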
Qiu et al. (2024)
"Classification of Infant Cry Based on Hybrid Audio Features and ResLSTM"
Journal of Voice. Achieved 94-96% accuracy on five-class infant-cry classification.
Vilcekova et al. (2025)
"A Study of Deep Learning Models for Audio Classification of Infant Crying"
Informatics. Comparative study of infant-cry detection models using ResNet and EfficientNet.
Bellieni et al. (2004)
"Cry features reflect pain intensity in term newborns"
Pediatric Research, 55(1). High-F0 "siren crying" as an alarm threshold at DAN scores > 8.
Oliveira et al. (2025)
"Listening to the Mind: Integrating Vocal Biomarkers into Digital Health"
Brain Sciences. Review of ML approaches that detect emotional and mental states from pitch, jitter, shimmer, and HNR.
Pietrowicz et al. (2025)
"Automated acoustic voice screening for comorbid depression and anxiety"
JASA Express Letters. Detected depression and anxiety with 70-83% accuracy from acoustic features of one minute of speech.
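Illustrative only, not Pietrowicz et al.'s model: a generic sketch of how per-recording acoustic features (e.g. F0 statistics, jitter, shimmer, HNR) could be fed to a cross-validated screening classifier with scikit-learn. The feature extraction and clinical labels are assumed to exist elsewhere.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def screening_accuracy(X: np.ndarray, y: np.ndarray, folds: int = 5) -> float:
    """X: (n_recordings, n_features) acoustic features; y: 0/1 screening labels."""
    model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    return float(cross_val_score(model, X, y, cv=folds, scoring="accuracy").mean())
```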
Abou Chami et al. (2025)
"The Human Voice as a Digital Health Solution Leveraging Artificial Intelligence"
Sensors. A digital health solution that uses AI to assess F0, shimmer, jitter, and HNR.
Koffi (2025)
"A Comprehensive Review of Jitter, Shimmer, and HNR"
Linguistic Portfolios, 14(1). Comprehensive review of linguistic and paralinguistic applications of jitter, shimmer, and HNR.
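Since jitter, shimmer, and HNR recur throughout these entries, a minimal extraction sketch follows, assuming the praat-parselmouth package; the analysis parameters are Praat's standard defaults, not values taken from any cited study.

```python
import parselmouth
from parselmouth.praat import call

def voice_quality_measures(wav_path: str, f0_min: float = 75.0,
                           f0_max: float = 600.0) -> dict:
    """Return local jitter, local shimmer, and mean HNR (dB) for one recording."""
    snd = parselmouth.Sound(wav_path)
    # Glottal pulse marks, then period- and amplitude-perturbation measures
    points = call(snd, "To PointProcess (periodic, cc)", f0_min, f0_max)
    jitter = call(points, "Get jitter (local)", 0, 0, 0.0001, 0.02, 1.3)
    shimmer = call([snd, points], "Get shimmer (local)", 0, 0, 0.0001, 0.02, 1.3, 1.6)
    # Harmonics-to-noise ratio averaged over the recording
    harmonicity = call(snd, "To Harmonicity (cc)", 0.01, f0_min, 0.1, 1.0)
    hnr = call(harmonicity, "Get mean", 0, 0)
    return {"jitter_local": jitter, "shimmer_local": shimmer, "hnr_db": hnr}
```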
Sfeir et al. (2025)
"A Systematic Review on Parent-Child Synchrony: Stress, Resilience and Psychopathology"
Clinical Child and Family Psychology Review. Review of vocal, facial, and physiological synchrony between parents and children.
Scherer (2003)
"Vocal communication of emotion: A review of research paradigms"
Speech Communication. Foundational model for inferring emotion from involuntary vocal cues via shimmer, jitter, and HNR.
Bowlby (1969/1982)
Attachment and Loss, Vol. 1: Attachment.
Vocal-quality characteristics of the secure base (somewhat lower HNR, stable pitch).