
2024 ADSA Annual Meeting Keynotes

Information about the keynote addresses at the 2024 ADSA Annual Meeting

Published on Jan 31, 2025

Keynote Speakers

Tuesday October 29

Janet Haven - Data and Society
Past, Present and Future(s): How We Keep People in the Loop

Video Recording; Slide Deck

Abstract:

When we talk about technology, it is easy to lose sight of people. In this talk, Janet Haven, executive director of Data & Society, argues that it’s an oversight we cannot afford. Haven will walk us through some of the major events of the past 25 years that shaped — and then shifted — our understanding of the relationship between people and technology. She will discuss where we are now, where we are headed, and the tools and approaches we can use to ensure that people are at the center of how technology is designed, deployed, and governed.

Agus Sudjianto (Senior Vice President of Risk and Technology, H2O.ai) and Doug Hague (University of North Carolina, Charlotte)
Fireside Chat

Video Recording

We will discuss our eerily similar journeys from early training in physics, to engineering PhDs, to education in systems thinking. Those backgrounds shaped our work in developing what is now called Model Risk Management (MRM). We'll talk about how MRM regulations and practice have co-evolved, and dive deep into why data science is critical to managing the risk of LLMs and generative AI.


Wednesday October 30

Student Keynotes

Video Recording (both student keynotes)

Alexandra Veremeychik - Montgomery College
Considerations of Children and Adolescents in Data and Artificial Intelligence (The Kids are AI-ght?)

Slide Deck

Abstract:
Big data and artificial intelligence have permeated discussions not only in business circles and academic spheres but among people from all walks of life. Despite the vast discourse from talks, panels, journals, think pieces, and podcasts on these compelling new technologies, a crucial demographic has been surprisingly overlooked: children and adolescents.

Decision-makers in business and government frequently display a lack of understanding and overlook the consequences of their actions on this vulnerable group, as evidenced by recent press releases, business memos, and legislation.

There is an urgent need to develop ethical policies that prioritize the protection of those providing data, especially minors, who are particularly at risk. The dangers inherent in the handling of data and the use of AI are amplified when the humans involved are vulnerable, such as minorities, the poor, and, crucially, the youth. If children are not being explicitly protected, then they are implicitly being left behind.

This presentation will address the significant gap in research, legislation, and policy concerning the effects of data and AI on children and explore the necessity of safeguarding this critical population.

Zahra Khanjani - University of Maryland Baltimore County
Strengthening AI Models for Spoofed Audio Detection: An Interdisciplinary Approach Incorporating Linguistic Knowledge

Slide Deck

Abstract:

Deepfakes—misleading content generated or manipulated using AI methods—have proliferated as vehicles for deception and fraud, posing ever-increasing threats to individuals and institutions. Audio deepfakes in particular are overlooked in existing literature compared to video and image counterparts (Khanjani et al., 2023). Our multidisciplinary team of data scientists and sociolinguists—experts in a subdiscipline of linguistics that deals with how human language varies, socially and stylistically—offers a novel approach to detecting audio deepfakes and other spoofed audio attacks by incorporating insights about spoken human language into machine learning techniques.

This talk shares results from four years of our ongoing research and outlines novel pathways for interdisciplinary collaboration to address deepfakes as a pressing societal problem. Findings demonstrate that audio representations manually extracted by sociolinguists, when combined with machine learning models, significantly improve detection performance across all types of spoofed audio attacks (Khanjani et al., 2023). Additionally, when AI models are used to automatically extract Audio Linguistic Representations designed for anti-Spoofing (ALiRaS) under expert supervision, performance improves significantly over common baselines. Overall, the talk demonstrates that leveraging human expert knowledge is crucial to creating the robust audio representations that strengthen AI solutions for spoofed audio detection.


Thursday October 31

Maggie Levenstein - Inter-university Consortium for Political and Social Research
FAIR Data for Fair Data Science and a Fair Society

Video Recording; Slide Deck

Abstract:

[abstract unavailable]
