Secure & Responsible AI
At Mindtickle, we integrate AI with a strong commitment to security, ethics, and regulatory compliance. Our goal is to enhance customer experience without compromising trust or transparency.
We use enterprise-grade AI models from trusted providers like Microsoft Azure, OpenAI, and Amazon Bedrock. All third parties undergo thorough due diligence, including security certifications and privacy reviews. Data access, processing, and retention are tightly controlled.
Our AI features prioritize safety, fairness, and accountability. Customer data is never used to train public models, retained by AI providers, or manually reviewed. With clear AI terms, strict data segregation, and strong governance, customers stay fully in control of their AI experience.


Third-Party Oversight
Enterprise AI Models
Mindtickle leverages private enterprise AI models provided by Microsoft Azure OpenAI and Amazon Bedrock.
Data Privacy
Third parties processing customer data are transparently documented in a public sub-processor repository, sign data processing agreements with Mindtickle, and commit to standard contractual clauses for secure data transfer.
Approved Third Parties
Third parties, including AI model providers, are evaluated by Mindtickle and approved by Customers before use.
International Transfers
Mindtickle performs GDPR-mandated transfer impact assessments through an independent legal auditor, reviewing all locations to which customer data may be transferred.
Supplier Due Diligence
Third parties used by Mindtickle undergo mandatory evaluation, including a review of compliance certifications and audits such as SOC 2, ISO standards, penetration testing, AI controls, and security and privacy policies.

AI Guardrails
No Public AI
Customer data is never transmitted to public AI models.
Data Confidentiality
Customer data will never be used to contribute to the AI models’ knowledge base.
Opt-out of Model Training
Customer data is never used to train or improve AI models.
No Human Review
We have opted out of human review by AI model providers, so customer data is never manually inspected.
Zero AI Data Retention
AI models access data only temporarily to process requests. Data is deleted immediately upon request completion and is never stored permanently.
Content Safety
A content filtering system prevents the generation of inappropriate, harmful, unethical, and copyrighted content through AI features.

Data Use and Access
Data Ownership
Customers retain complete ownership of the data they provide as input (including user instructions and additional context) and of the generated response/output; Mindtickle makes no ownership claims to such data.
No High-Risk Processing
Mindtickle does not use AI to perform facial recognition or biometric analysis.
No Sharing of Sensitive Data
Audio/video recordings are never shared with AI models; only a limited text transcript of the learner's recording is shared.
Data Segregation
AI requests are strictly segregated at the tenant and user levels to ensure that customer data remains isolated and there is no cross-tenant access to AI interactions.
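Tenant-level isolation of AI requests can be illustrated with a minimal sketch. This is not Mindtickle's actual implementation; the class and field names are hypothetical, and it only shows the general pattern of partitioning every read and write by tenant so interactions never cross tenant boundaries.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class AIRequestContext:
    """Identifies the tenant and user that own an AI interaction."""
    tenant_id: str
    user_id: str


class AIRequestStore:
    """Keeps AI interactions partitioned per tenant, so one tenant
    can never read another tenant's requests."""

    def __init__(self):
        # Top-level key is the tenant; data never crosses partitions.
        self._by_tenant: dict[str, list[dict]] = {}

    def record(self, ctx: AIRequestContext, prompt: str, response: str) -> None:
        self._by_tenant.setdefault(ctx.tenant_id, []).append(
            {"user_id": ctx.user_id, "prompt": prompt, "response": response}
        )

    def list_for(self, ctx: AIRequestContext) -> list[dict]:
        # Reads are scoped to the caller's own tenant partition only.
        return list(self._by_tenant.get(ctx.tenant_id, []))
```

Because the tenant identifier is the outermost key for both writes and reads, a lookup under one tenant cannot surface another tenant's AI interactions.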
Data Minimization
Only the minimum required data is shared with the AI models.
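Data minimization is commonly implemented as an allow-list applied before any payload leaves for the model. The sketch below is purely illustrative (the field names are hypothetical, not Mindtickle's schema); it shows the principle of forwarding only fields a feature actually needs.

```python
# Hypothetical allow-list of fields a given AI feature needs.
ALLOWED_FIELDS = {"transcript", "question", "course_title"}


def minimize(record: dict) -> dict:
    """Drop everything except the allow-listed fields before the
    record is sent to an AI model."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}
```

Any field not explicitly allow-listed, such as contact details or identifiers the feature does not need, is silently excluded from the model request.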
AI Terms
We maintain dedicated AI terms that outline our commitment to responsible data usage and retention while addressing key topics like data ownership, accuracy, and accountability.

Responsible AI
Responsible AI Measures
Mindtickle follows responsible AI principles aligned with the EU AI Act and broader industry standards to remain ethical, transparent, and compliant with regulatory requirements.
Model Diversity
We select AI models that have been trained on a broad and diverse range of datasets to ensure fairness, equity, and non-discrimination.
Inclusive AI
System prompts are designed to prevent bias, toxicity, and discrimination by AI models.
AI Support, Human Leadership
AI features are designed to enhance human capabilities, boost productivity, and provide data-driven insights—not replace human roles.
Human-in-the-Loop
Content generated by AI features is saved in draft mode, requiring human review and approval before publishing — ensuring accuracy and accountability.
AI Use Disclosure
AI-powered features and output are clearly labeled with the “Mindtickle Copilot” title and an icon indicating AI use.

AI Governance
Documentation
All AI features are thoroughly documented to highlight the data fields used, the purpose of each data field, and the generated output/response to help in AI use case evaluation.
Flexible AI Control
Mindtickle provides customers full control over their AI experience, with the flexibility to enable or disable AI features based on their organization’s use cases.
Granular AI Access
Customers decide who can use AI features via the platform’s Role Based Access Control (RBAC) framework.
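An RBAC gate on AI features can be sketched as follows. The role names and permission strings here are invented for illustration and do not reflect Mindtickle's actual RBAC framework; the point is simply that a permission check runs before any AI feature is invoked.

```python
# Hypothetical role-to-permission mapping; names are illustrative only.
ROLE_PERMISSIONS = {
    "admin": {"ai.generate", "ai.configure"},
    "author": {"ai.generate"},
    "learner": set(),
}


def can_use(role: str, permission: str) -> bool:
    """Check whether a role carries a given AI permission."""
    return permission in ROLE_PERMISSIONS.get(role, set())


def invoke_ai_feature(role: str, prompt: str) -> str:
    """Gate the AI call behind the RBAC check."""
    if not can_use(role, "ai.generate"):
        raise PermissionError(f"role '{role}' may not use AI features")
    # Placeholder for the actual model call.
    return f"draft generated for: {prompt}"
```

Administrators adjust the role-to-permission mapping, and the gate enforces it uniformly for every AI request.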
End-User Feedback
End users are provided a mechanism to give feedback on AI-generated content to identify and address any problematic content, unintended biases, or ethical concerns.
Auditability
Mindtickle logs all requests between users and AI models, capturing relevant data points, including user input, system instructions, and pre-defined constraints for a transparent and secure audit trail.
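An audit trail of this kind typically records one structured, timestamped entry per AI request. The sketch below is a hypothetical record format, not Mindtickle's actual log schema; it shows the data points the text describes (user input, system instructions, pre-defined constraints) captured alongside identifiers for traceability.

```python
import json
import time
import uuid


def audit_record(tenant_id: str, user_id: str, user_input: str,
                 system_instructions: str, constraints: list[str]) -> str:
    """Build one timestamped, uniquely identified audit entry
    for a single AI request, serialized as JSON."""
    return json.dumps({
        "request_id": str(uuid.uuid4()),   # unique per request
        "timestamp": time.time(),          # when the request occurred
        "tenant_id": tenant_id,
        "user_id": user_id,
        "user_input": user_input,
        "system_instructions": system_instructions,
        "constraints": constraints,
    })
```

Serializing each entry as an immutable JSON record makes the trail easy to ship to append-only storage for later audit review.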
Explainability
AI outputs are context-aware, rule-based, and constrained to ensure clear reasoning behind the response and feedback.
Audits and Compliance
EU AI Act
Mindtickle has been formally assessed by an external auditor and has implemented all the controls required to comply with the EU AI Act, ensuring ethical and safe AI deployment.
ISO 42001
Mindtickle has been audited by an independent assessor and complies with all the requirements of ISO/IEC 42001, the world's first AI management system standard.
AI Penetration Testing
Mindtickle rigorously tests its platform and all AI features through semi-annual VAPT audits covering the OWASP Top 10 for LLM applications, including prompt injection, output manipulation, model inversion, and data poisoning, alongside responsible AI measures.
Robust Compliance Ecosystem
Mindtickle adheres to rigorous industry standards and compliance frameworks, including SOC 2, SOC 3, ISO 27001, ISO 22301, ISO 27701, ISO 27017, ISO 27018, HIPAA, 21 CFR Part 11, GDPR, SCCs, DPF, CCPA, CPRA, UK DPA 2018, and various US state privacy laws.
AI Principles
Responsible AI principles, AI compliance policy, and AI system security checks are integrated into the AI product development lifecycle.


