Risk Cognizance AI Policy for Local and OpenAI Integration: Document Creation, Processing, and Report Generation
Version: Public
Effective Date: October 31, 2024
1. Purpose
This policy outlines Risk Cognizance's approach to using local and OpenAI-integrated AI services for document creation, processing, and report generation, in compliance with NIST AI Risk Management Framework (AI RMF) standards. It ensures responsible, secure, and ethical AI use that aligns with Risk Cognizance's governance, risk management, and compliance objectives.
2. Scope
This policy applies to all Risk Cognizance stakeholders, including employees, contractors, and third-party vendors, who use AI for document creation, processing, and report generation. It covers both local AI systems and OpenAI integrations and requires compliance with the data privacy, security, and ethical AI standards outlined in the NIST AI RMF.
3. Policy Statements
3.1 Governance and Oversight
- AI Governance Committee: An AI Governance Committee oversees all AI integrations, manages associated risks, and ensures adherence to this policy.
- Risk Assessment: Routine risk assessments for both local and OpenAI integrations help manage security, regulatory compliance, and ethical considerations in line with NIST AI RMF.
3.2 NIST AI RMF Compliance
- Alignment with AI RMF Functions:
  - Govern: Establish policies and accountability for AI.
  - Map: Identify purposes, contexts, and impacts of AI models.
  - Measure: Assess performance and security.
  - Manage: Continuously mitigate risks through adjustments.
3.3 Data Privacy and Security
- Access Control: Role-based access limits sensitive data handling to authorized personnel.
- Data Minimization & Encryption: Enforce data minimization and encrypt data per NIST SP 800-53 standards; a minimal illustrative sketch follows this list.
- Data Retention & Deletion: Implement retention periods and secure deletion practices per NIST guidelines.
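The access-control and data-minimization requirements above can be illustrated with a short, hypothetical sketch. The Python below is a minimal example only: the role names, permission sets, and redaction patterns are assumptions made for illustration, not Risk Cognizance's actual controls, which are enforced through the organization's identity provider and approved data-handling tooling.

```python
"""Illustrative sketch only: hypothetical role checks and field redaction
applied before document text is passed to a local or OpenAI-backed model."""

import re

# Hypothetical role-to-permission mapping; actual roles are defined by the
# AI Governance Committee and enforced in the identity provider.
ROLE_PERMISSIONS = {
    "analyst": {"generate_report"},
    "admin": {"generate_report", "manage_models"},
}

# Simple patterns standing in for the organization's data-minimization rules.
PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # SSN-like values
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email addresses
]


def is_authorized(role: str, action: str) -> bool:
    """Return True only if the role explicitly grants the requested action."""
    return action in ROLE_PERMISSIONS.get(role, set())


def minimize(text: str) -> str:
    """Redact fields matching known sensitive patterns before AI processing."""
    for pattern in PII_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text


def prepare_for_ai(role: str, document_text: str) -> str:
    """Gate the request on role, then strip sensitive data from the payload."""
    if not is_authorized(role, "generate_report"):
        raise PermissionError(f"Role '{role}' may not submit documents to AI services")
    return minimize(document_text)


if __name__ == "__main__":
    sample = "Contact jane.doe@example.com regarding SSN 123-45-6789."
    print(prepare_for_ai("analyst", sample))
    # -> "Contact [REDACTED] regarding SSN [REDACTED]."
```

In practice, redaction rules and role definitions would be maintained by the AI Governance Committee and the Data Privacy Officer rather than hard-coded as shown here.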
3.4 Ethical and Responsible AI Use
- Bias Mitigation: Regularly audit AI outputs for bias and address any issues identified.
- Transparency: Ensure transparency regarding AI’s role in business processes.
- Human Oversight: Maintain human review to ensure AI-generated outputs align with Risk Cognizance standards.
3.5 Model Performance and Monitoring
- Model Validation: Validate local and OpenAI models for compliance with Risk Cognizance’s standards.
- Continuous Monitoring: Track and improve model performance, maintaining audit logs for review; a minimal logging sketch follows this list.
- Incident Response: Prepare an incident response plan for AI systems to manage misuse or vulnerabilities.
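As a minimal illustration of the audit logging expected under continuous monitoring, the sketch below records metadata about a hypothetical AI interaction. The function name, log fields, and use of content hashes are assumptions for illustration; actual log schemas, storage, and retention follow Sections 3.3 and 3.8.

```python
"""Illustrative sketch only: hypothetical audit-log wrapper around an AI call,
showing the kind of record retained for continuous monitoring and review."""

import hashlib
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_audit")


def log_ai_interaction(model_name: str, prompt: str, response: str, user_id: str) -> None:
    """Record metadata (not raw content) for each AI-assisted generation."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model_name,
        "user": user_id,
        # Hash the prompt and response so reviewers can correlate events
        # without storing sensitive document text in the log itself.
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "response_sha256": hashlib.sha256(response.encode()).hexdigest(),
        "response_chars": len(response),
    }
    audit_log.info(json.dumps(record))


if __name__ == "__main__":
    # Hypothetical call site; the actual model invocation is out of scope here.
    log_ai_interaction(
        model_name="local-report-generator",
        prompt="Summarize Q3 vendor risk assessments.",
        response="Draft summary text...",
        user_id="analyst-042",
    )
```

Hashing the prompt and response lets reviewers correlate logged events with specific generations without retaining sensitive document text in the log itself.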
3.6 User Training and Awareness
- Training: Educate employees on secure, responsible AI use.
- Awareness Programs: Emphasize data security and ethical considerations in AI training.
3.7 Third-Party and Vendor Compliance
- Vendor Compliance: Verify that OpenAI's practices align with Risk Cognizance's data protection and ethical standards.
- Risk Assessments: Conduct periodic assessments of OpenAI’s policies to ensure compliance.
3.8 Audit and Compliance
- Internal Audits: Regular audits to assess adherence to this policy and verify alignment with NIST AI RMF standards.
- Documentation: Maintain documentation for accountability and compliance tracking.
4. Roles and Responsibilities
- AI Governance Committee: Oversees AI policy and NIST AI RMF compliance.
- Data Privacy Officer: Manages data protection compliance for AI activities.
- System Administrators: Manage access control, support audits, and maintain secure practices.
- End Users: Responsible for following this policy and attending required training.
5. Compliance and Review
This policy will be reviewed annually, or as needed for regulatory updates, AI RMF changes, or shifts in AI use. Non-compliance may lead to restricted access or corrective action.
6. Approval and Acknowledgment
Risk Cognizance is committed to secure and responsible AI use, adhering to NIST AI RMF standards to support safe, compliant, and efficient document processing and report generation.
This public version underscores Risk Cognizance's dedication to compliance, transparency, and the responsible use of AI in our business practices.