News from NIST: Dioptra, AI Risk Management Framework (AI RMF) Generative AI Profile, and How PII Identification and Redaction Can Support Suggested Best Practices

Acting on its obligations under the 2023 Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence, the US Department of Commerce’s National Institute of Standards and Technology (NIST) has recently released two new tools to help companies that develop Generative AI (GenAI) models do so responsibly and securely.
Dioptra
The first tool is geared towards GenAI system developers themselves rather than governance professionals. Quoting from the GitHub repository:
Dioptra is a software test platform for assessing the trustworthy characteristics of artificial intelligence (AI). Trustworthy AI is: valid and reliable, safe, secure and resilient, accountable and transparent, explainable and interpretable, privacy-enhanced, and fair - with harmful bias managed. Dioptra supports the Measure function of the NIST AI Risk Management Framework by providing functionality to assess, analyze, and track identified AI risks.
Dioptra is designed to serve a variety of use cases across different stages of AI model development, evaluation, and deployment. For model testing, it offers comprehensive assessment capabilities throughout the development lifecycle for first-party developers. Second-party users can leverage Dioptra to evaluate AI models during acquisition processes or within controlled lab environments. Third-party auditors and compliance professionals can utilize the platform to conduct thorough assessments as part of their regulatory or quality assurance activities.
In the research domain, Dioptra aids trustworthy AI researchers by providing a robust system for tracking experiments, ensuring reproducibility and facilitating collaboration. For evaluations and challenges, it serves as a common platform, offering standardized resources and environments for participants to compete fairly and effectively.
Lastly, Dioptra supports red-teaming activities by providing a controlled environment where models and resources can be exposed to security experts. This allows for the identification of vulnerabilities and the improvement of model robustness in a safe and managed setting. Overall, Dioptra's versatility makes it a valuable tool for a wide range of stakeholders in the AI ecosystem, from developers and researchers to auditors and security professionals.
Dioptra is designed with several key properties that enhance its functionality and user experience. At its core, Dioptra emphasizes reproducibility by automatically creating snapshots of resources, ensuring that experiments can be accurately reproduced and validated. This is complemented by its traceability feature, which maintains a comprehensive history of experiments and their inputs, allowing for detailed analysis and auditing.
The platform's extensibility is achieved through a plugin system that supports the expansion of functionality and seamless integration of existing Python packages. Interoperability between these plugins is facilitated by a robust type system, promoting smooth interaction between different components. Dioptra's modular architecture allows users to compose new experiments from pre-existing components using simple YAML files, enhancing flexibility and ease of use. Security is prioritized with user authentication, and access controls are in development to further strengthen data protection.
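To make the YAML-driven composition concrete, here is a minimal sketch of what declaring an experiment from pre-existing components could look like. The schema, plugin names, and `$task` reference syntax below are our own illustrative assumptions, not Dioptra's actual experiment format, which is defined in the project's documentation.

```python
# A minimal, hypothetical sketch of composing an experiment from
# pre-existing components via a YAML declaration. The field names
# (tasks, plugin, inputs) are illustrative only and do NOT follow
# Dioptra's actual experiment schema; consult the Dioptra docs for that.
import yaml  # pip install pyyaml

EXPERIMENT_YAML = """
experiment: fgsm-robustness-check
tasks:
  - name: load_model
    plugin: model_io.load    # hypothetical plugin identifier
    inputs: {path: models/resnet18.pt}
  - name: attack
    plugin: attacks.fgsm     # hypothetical plugin identifier
    inputs: {model: $load_model, epsilon: 0.03}
  - name: evaluate
    plugin: metrics.accuracy
    inputs: {model: $load_model, data: $attack}
"""

def load_experiment(text: str) -> dict:
    """Parse the declaration and sanity-check cross-task references."""
    spec = yaml.safe_load(text)
    names = {task["name"] for task in spec["tasks"]}
    for task in spec["tasks"]:
        for value in task.get("inputs", {}).values():
            # A '$name' value references the output of an earlier task.
            if isinstance(value, str) and value.startswith("$"):
                assert value[1:] in names, f"unknown task reference: {value}"
    return spec

if __name__ == "__main__":
    spec = load_experiment(EXPERIMENT_YAML)
    print(f"{spec['experiment']}: {len(spec['tasks'])} tasks")
```

The appeal of this declarative style is that each step is a reusable, typed plugin, so swapping one attack or metric for another is a one-line change rather than a code rewrite.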
Users benefit from an intuitive web interface that provides interactive access to Dioptra's features. Furthermore, the platform is designed for shareability and reusability, supporting multi-tenant deployment. This enables users to share and reuse components efficiently, fostering collaboration and knowledge exchange within the AI research and development community.
AI RMF Generative AI Profile
The second tool is the AI RMF Generative AI Profile, an expansion of the AI Risk Management Framework that NIST published in January 2023, addressing GenAI-specific risks and mitigation strategies. The Profile lists 12 risk categories and close to 200 recommended actions to mitigate these risks. These actions focus on governance mechanisms such as establishing and implementing policies, oversight and incident-reporting mechanisms, and engaging diversely composed teams and representative populations throughout the AI system lifecycle.
The table below lists the Suggested Actions that Private AI’s solutions can support, along with a brief explanation of Private AI’s relevant capabilities.
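As one concrete illustration of where these capabilities slot into a GenAI pipeline, the sketch below de-identifies text before it reaches a model, its logs, or a training corpus. The endpoint URL and request/response fields are hypothetical placeholders rather than Private AI's documented API; the real request format is specified in the vendor's API reference.

```python
# A minimal sketch of inserting PII redaction in front of a GenAI model.
# NOTE: the endpoint URL and payload shape below are hypothetical
# placeholders, not Private AI's documented API; substitute the real
# request format from the vendor's API reference.
import requests

REDACTION_ENDPOINT = "http://localhost:8080/redact"  # hypothetical

def redact(text: str) -> str:
    """Send raw text to the redaction service; return de-identified text."""
    resp = requests.post(REDACTION_ENDPOINT, json={"text": text}, timeout=10)
    resp.raise_for_status()
    return resp.json()["redacted_text"]  # hypothetical response field

def safe_prompt(user_input: str) -> str:
    # De-identify before the text ever reaches the model, its logs, or a
    # training pipeline, supporting the Profile's data-privacy actions.
    return redact(user_input)

if __name__ == "__main__":
    print(safe_prompt("Contact Jane Doe at jane.doe@example.com"))
```

Placing redaction at this chokepoint means every downstream consumer, whether a prompt, a fine-tuning dataset, or an audit log, sees only de-identified text by default.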
Conclusion
The release of Dioptra and the AI RMF Generative AI Profile by NIST marks a significant step forward in promoting responsible and secure development of Generative AI systems. These tools provide developers, researchers, and compliance professionals with valuable resources to assess, manage, and mitigate risks associated with AI technologies. As the field of AI continues to evolve rapidly, the importance of such frameworks and platforms cannot be overstated. By leveraging these tools alongside privacy-enhancing technologies like those offered by Private AI, organizations can better navigate the complex landscape of AI development and deployment.