Minimum Standards for AI Solutions
[Digitalchemy LLC] is committed to developing, deploying, and operating AI features and solutions in a way that is trustworthy, safe, and compliant. These minimum standards apply to all AI-enabled functionality we ship and guide how we design disclosures, user controls, data handling, testing, and monitoring.
Lawfulness and Fairness
Our AI features and solutions must comply with applicable laws and regulations, including requirements related to fundamental rights, non-discrimination, privacy, consumer protection, and the protection of personal and business data. We aim to reduce unfair bias and ensure fair access and consistent behavior across different user groups, languages, accents, and usage contexts. Where limitations are known, we communicate them clearly.
Transparency and Explainability
We inform users when they are interacting with AI features and label AI-generated content. We provide clear information, as appropriate, about what an AI feature does and does not do, what data is used to generate outputs, key limitations and potential risks, and how users can report issues or request support. We maintain documentation proportionate to the nature and risk of the AI feature or solution, including information necessary to understand its purpose and expected performance.
Accuracy and Reliability
We test AI features and solutions to meet performance levels appropriate to their intended purpose and risk profile, including mobile-specific conditions such as device differences, network variability, and relevant edge cases. We implement quality controls to minimize errors and misleading outputs and communicate uncertainties when outputs may be unreliable. Where suitable, we design AI outputs to be reviewable and correctable by users.
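As one illustration of how communicating uncertainty might look in practice (a minimal sketch with hypothetical names and an assumed threshold value, not a description of any actual implementation), an AI feature can attach a visible notice to outputs whose model-reported confidence falls below a tuned level:

```python
from dataclasses import dataclass

# Hypothetical threshold for illustration only; real values would be
# calibrated per feature against its intended purpose and risk profile.
LOW_CONFIDENCE_THRESHOLD = 0.70

@dataclass
class AiOutput:
    text: str
    confidence: float  # model-reported score in [0.0, 1.0]

def present_output(output: AiOutput) -> str:
    """Attach an uncertainty notice when confidence is low,
    so users can treat the result with appropriate caution."""
    if output.confidence < LOW_CONFIDENCE_THRESHOLD:
        return f"[Low confidence - please verify] {output.text}"
    return output.text
```

In this sketch, a high-confidence output is shown unchanged, while a low-confidence one is prefixed with a verification prompt, keeping the output reviewable rather than silently authoritative.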
Robustness and Safety
We design AI features and solutions to be resilient against failures and misuse and to minimize unintended harm. We apply security-by-design practices suitable for mobile environments and implement safeguards appropriate to the feature’s purpose, including protections against harmful or inappropriate content where relevant. We implement cybersecurity measures to protect data, systems, and AI features from unauthorized access and maintain processes to detect, respond to, and remediate incidents through updates and mitigations where applicable.
Protection of Data and Data Privacy
We process personal data in line with applicable data protection laws and privacy-by-design principles, including data minimization and purpose limitation. We implement technical and organizational measures to protect personal and confidential information throughout the data lifecycle. Users should avoid entering sensitive personal or confidential information into free-text AI fields unless the feature explicitly requires it and they understand the implications described in our privacy information.
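Data minimization can be made concrete with a small sketch (field names here are hypothetical examples, not our actual schemas): before data is stored or transmitted, every field not required for the feature's stated purpose is dropped.

```python
# Hypothetical allow-list for a hypothetical transcription feature;
# illustrates purpose limitation and data minimization in code.
ALLOWED_FIELDS_FOR_TRANSCRIPTION = {"audio_id", "language", "duration_seconds"}

def minimize(payload: dict, allowed_fields: set) -> dict:
    """Retain only the fields required for the feature's purpose."""
    return {k: v for k, v in payload.items() if k in allowed_fields}
```

Applied to a payload that also carries an email address, the function returns only the allow-listed fields, so incidental personal data never leaves the collection point.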
Human Agency and Autonomy
Our AI features and solutions are designed to support users, not replace their judgment. We provide meaningful user control appropriate to the feature, including the ability to review, edit, accept, reject, or regenerate AI outputs where applicable, and we maintain a clear separation between AI suggestions and user decisions. Human oversight is built into product design and operational processes, with escalation paths for issues and mechanisms to reduce risk to an acceptable level.
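The separation between AI suggestions and user decisions described above can be sketched as a simple data model (names and structure are illustrative assumptions, not a specification of our products): the AI-generated text and the user's accepted text are distinct fields, and only an explicit user action ever sets the latter.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Suggestion:
    """An AI suggestion kept separate from the user's final decision."""
    ai_text: str
    accepted_text: Optional[str] = None  # set only by an explicit user action

    def accept(self) -> None:
        """User accepts the AI suggestion as-is."""
        self.accepted_text = self.ai_text

    def edit(self, user_text: str) -> None:
        """User overrides the suggestion with their own text."""
        self.accepted_text = user_text

    def reject(self) -> None:
        """User discards the suggestion; no decision is recorded."""
        self.accepted_text = None
```

Because `accepted_text` starts empty and changes only through `accept`, `edit`, or `reject`, the model cannot silently convert a suggestion into a decision, which is the design property the paragraph above calls for.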
Accountability
Each AI feature and solution has an internal owner responsible for compliance, risk assessment, and lifecycle management. We maintain evidence proportionate to risk, including documentation of feature purpose and limitations, testing and evaluation results, data flows and privacy/security measures, incident handling and user feedback processes, and compliance decisions, including vendor due diligence where third-party AI services are used.