At Intarmour, we recognize the transformative potential of artificial intelligence (AI) in enhancing cybersecurity, cloud operations, and digital governance. We integrate AI into our services with full awareness of its legal, technical, and ethical implications. This page outlines our commitment to the responsible, proportionate, and auditable use of AI technologies.
Scope of Use
Artificial intelligence is employed in specific operational contexts, including:
Security event triage and prioritization: AI assists in identifying and prioritizing security events based on contextual risk signals.
Log and signal analysis: Pattern recognition for anomalous events across distributed cloud infrastructure.
Content intelligence: Semantic classification and text embedding for compliance documentation and data governance.
Support and communication: Limited use of AI assistants for internal triage and structured responses, never for decision-making.
In all scenarios, human oversight is maintained. We do not deploy AI for autonomous decision-making affecting individuals, clients, or legal processes. Automated outputs support, not replace, expert analysis and verified workflows.
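To make this principle concrete, the sketch below shows one way AI-assisted triage can be structured so that ranking is automated while no response executes without a recorded analyst verdict. The event schema, signal names, and weights are hypothetical placeholders, not a description of our production pipeline.

```python
# Minimal sketch of human-in-the-loop event triage; schema and weights are
# hypothetical and purely illustrative (requires Python 3.10+).
from dataclasses import dataclass, field

@dataclass
class SecurityEvent:
    event_id: str
    source: str
    signals: dict = field(default_factory=dict)  # e.g. {"failed_logins": 12}
    risk_score: float = 0.0
    analyst_verdict: str | None = None  # set only by a human reviewer

# Hypothetical weights; in practice a model would supply these risk signals.
SIGNAL_WEIGHTS = {"failed_logins": 0.4, "new_geo": 0.3, "privilege_change": 0.3}

def prioritize(events: list[SecurityEvent]) -> list[SecurityEvent]:
    """Rank events by weighted risk signals; no action is taken here."""
    for ev in events:
        ev.risk_score = sum(
            SIGNAL_WEIGHTS.get(name, 0.0) * value
            for name, value in ev.signals.items()
        )
    return sorted(events, key=lambda ev: ev.risk_score, reverse=True)

def act_on(event: SecurityEvent) -> None:
    """Refuse to act until a human analyst has recorded a verdict."""
    if event.analyst_verdict is None:
        raise PermissionError(f"{event.event_id}: human review required")
    print(f"{event.event_id}: executing verified response '{event.analyst_verdict}'")

if __name__ == "__main__":
    queue = prioritize([
        SecurityEvent("ev-1", "vpn", {"failed_logins": 12, "new_geo": 1}),
        SecurityEvent("ev-2", "s3", {"privilege_change": 1}),
    ])
    queue[0].analyst_verdict = "lock-account"  # human decision, not the model's
    act_on(queue[0])
```

The essential point is structural: the automated component may only reorder the queue, while the path that executes a response is gated on an explicit human verdict.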
Model Selection and Governance
Only models that meet documented security, compliance, and transparency thresholds are used. We currently operate the following categories of models:
Language models: Mistral, Granite, LLaMA variants for internal research and limited pilot applications.
Embedding models: BGE, Gemma2 for semantic indexing (illustrated in the sketch following this list).
Transcription models: Whisper V3 for audio-to-text transformation on non-client datasets.
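As an illustration of the semantic-indexing category above, the following sketch embeds two placeholder compliance documents with a publicly available BGE variant via the sentence-transformers library and retrieves the closest match for a query. The model name and document texts are examples only and do not reflect our deployed configuration.

```python
# Minimal sketch of semantic indexing with a public BGE embedding model;
# document texts are illustrative placeholders.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("BAAI/bge-small-en-v1.5")  # one public BGE variant

docs = [
    "Data retention schedule for audit logs",
    "Incident response runbook for cloud workloads",
]
# Normalized embeddings let a plain dot product act as cosine similarity.
doc_vecs = model.encode(docs, normalize_embeddings=True)

query_vec = model.encode(["How long are logs kept?"], normalize_embeddings=True)
scores = doc_vecs @ query_vec.T  # cosine similarity per document
best = int(np.argmax(scores))
print(f"Closest document: {docs[best]} (score={scores[best][0]:.2f})")
```

Normalizing the embeddings keeps the similarity computation a transparent dot product, which makes the indexing step straightforward to audit.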
All model interactions are logged, and version control and reproducibility are enforced to support internal audits and incident reconstruction. No client data is used for model training or fine-tuning.
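The sketch below illustrates the shape of such an audit trail: a pinned model identifier and version, plus content hashes that permit incident reconstruction without retaining raw prompts. The run_model() backend, identifiers, and field names are hypothetical.

```python
# Minimal sketch of logged, reproducible inference; run_model() is a
# placeholder, not our internal tooling.
import hashlib
import json
import time
from pathlib import Path

AUDIT_LOG = Path("inference_audit.jsonl")  # append-only interaction log
MODEL_ID = "mistral-7b-instruct"           # example identifier
MODEL_VERSION = "v0.3-2024-01"             # pinned for reproducibility

def run_model(prompt: str) -> str:
    """Placeholder for the actual inference call in a segregated environment."""
    return f"[model output for: {prompt}]"

def logged_inference(prompt: str) -> str:
    output = run_model(prompt)
    record = {
        "ts": time.time(),
        "model": MODEL_ID,
        "version": MODEL_VERSION,
        # Hashes support incident reconstruction without storing raw content.
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
    }
    with AUDIT_LOG.open("a") as fh:
        fh.write(json.dumps(record) + "\n")
    return output

print(logged_inference("Summarize yesterday's anomaly report."))
```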
Data Protection and Privacy Compliance
AI integrations are designed in accordance with the General Data Protection Regulation (GDPR) and the evolving framework of the EU AI Act. We do not engage in profiling, behavioral inference, or targeting of individuals. Where personal data is present in any upstream process, automated handling is explicitly excluded.
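A minimal sketch of this exclusion follows: any text that may contain personal data is routed away from automated handling. The patterns shown are deliberately crude placeholders; real personal-data detection requires far more than two regular expressions.

```python
# Minimal sketch of an upstream personal-data gate; the regex patterns are
# crude placeholders, not a real PII detector.
import re

PII_PATTERNS = [
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),    # email-like strings
    re.compile(r"\b\d{2}[./-]\d{2}[./-]\d{4}\b"),  # date-of-birth-like strings
]

def route(text: str) -> str:
    """Exclude automated handling whenever personal data may be present."""
    if any(p.search(text) for p in PII_PATTERNS):
        return "human-review-queue"  # no AI processing on this path
    return "automated-pipeline"

print(route("Quarterly uptime report for cluster eu-1"))   # automated-pipeline
print(route("Contact j.doe@example.com about the audit"))  # human-review-queue
```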
We do not transmit sensitive information to external inference APIs. All inference occurs in secured, segregated environments with strict access control and full traceability.
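As an application-level illustration, the sketch below rejects any inference endpoint outside a hypothetical internal allowlist. In practice, enforcement of this policy belongs primarily at the network layer; the code only mirrors the rule.

```python
# Minimal sketch of an egress guard for inference endpoints, assuming a
# hypothetical allowlist of internal hosts.
from urllib.parse import urlparse

ALLOWED_HOSTS = {"inference.internal", "localhost"}  # example internal hosts

def check_endpoint(url: str) -> str:
    """Raise if the endpoint is not a known internal inference host."""
    host = urlparse(url).hostname
    if host not in ALLOWED_HOSTS:
        raise ValueError(f"external inference endpoint blocked: {host}")
    return url

check_endpoint("http://inference.internal:8080/v1/generate")  # permitted
# check_endpoint("https://api.example-ai.com/v1")  # would raise ValueError
```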
Oversight and Responsibility
AI operations at Intarmour are subject to internal review and validation by our governance and engineering teams. Deployment of new models or automation layers is preceded by risk analysis, documentation of intended use, and validation of impact on security, privacy, and client trust.
We adopt a conservative stance on automation. When in doubt, human-first execution prevails.
Contact and Inquiries
To request our internal AI governance framework or inquire about specific use cases, please contact:
ai-governance@intarmour.com