CISA Emphasizes Responsible AI Use, Addresses Data Bias Challenges
The Cybersecurity and Infrastructure Security Agency is taking steps to ensure the responsible use of artificial intelligence models amid potential biases in datasets, according to Preston Werntz, the agency’s chief data officer.
During a virtual industry meeting, Werntz, a past Potomac Officers Club event speaker, acknowledged the inherent challenge of bias in AI datasets and the importance of understanding the various forms it can take across different agencies, FedScoop reported.
Werntz explained that while CISA is more concerned with data collection bias across critical infrastructure sectors, other agencies may focus on bias affecting people and their rights.
To ensure the security and effectiveness of its AI models, Werntz said CISA is implementing measures such as adopting data best practices, monitoring for model drift and tracking the lineage of data used to train models.
Werntz also emphasized the agency’s commitment to collaborating with other federal agencies to share best practices on AI adoption and explore new technologies that can improve data management for AI initiatives.
Looking ahead, CISA will focus on training the workforce on commercial AI tools and promoting open data practices by making more data available to security researchers and the general public.