Data Security
At Mellon AI, security is not an afterthought — it is a design principle. This page explains the technical and organisational controls we apply to protect your business data, how we handle data processed through AI systems, and the options available for clients who require stricter data controls.
Our Security Commitments in Brief
All data transmitted to and from our systems is encrypted using TLS 1.2 or higher. Stored data is encrypted at rest using AES-256.
AI workflow logs are retained for 90 days by default. Client data is deleted within 30 days of a verified deletion request.
Healthcare, legal, and financial clients can request fully on-premises AI deployments where no data leaves their own infrastructure.
We will notify affected clients within 72 hours of discovering an eligible data breach, in accordance with the Notifiable Data Breaches scheme.
1. Infrastructure Security
1.1 Hosting and Cloud Infrastructure
Our infrastructure is hosted on reputable cloud providers (primarily Amazon Web Services and/or Google Cloud Platform) in Australian data centres where available, or in geographically appropriate regions. These providers maintain ISO 27001, SOC 2 Type II, and other industry certifications.
1.2 Encryption
- In transit: All data transferred between your systems, our systems, and AI providers is encrypted using TLS 1.2 as a minimum standard (TLS 1.3 where supported)
- At rest: Data stored on our infrastructure is encrypted using AES-256
- API keys and credentials: Client API keys, integration credentials, and authentication tokens are stored encrypted and are never logged or transmitted in plaintext (an illustrative encryption sketch follows this list)
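To make the at-rest controls above concrete, here is an illustrative Python sketch of encrypting a client credential with AES-256-GCM before storage. It is a minimal example, not our production implementation: the key source (a hypothetical MELLON_DEK environment variable) stands in for a managed key service.

```python
# Illustrative sketch only: encrypting a client API key with AES-256-GCM
# before it is written to storage. Key management is simplified; in practice
# the data-encryption key would come from a managed KMS, not an env var.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_credential(plaintext: str, key: bytes) -> bytes:
    """Encrypt a credential with AES-256-GCM; returns nonce + ciphertext."""
    nonce = os.urandom(12)  # unique nonce for every encryption
    ciphertext = AESGCM(key).encrypt(nonce, plaintext.encode(), None)
    return nonce + ciphertext

def decrypt_credential(blob: bytes, key: bytes) -> str:
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, None).decode()

# A 32-byte key gives AES-256. MELLON_DEK is a hypothetical variable name.
key = bytes.fromhex(os.environ["MELLON_DEK"])
stored = encrypt_credential("sk-example-api-key", key)
```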
1.3 Access Controls
- Access to client data is restricted to Mellon AI team members who require it to deliver services
- We use role-based access control (RBAC) to limit permissions to the minimum necessary (a minimal example is sketched after this list)
- All team member access requires multi-factor authentication (MFA)
- Administrative access to production systems is logged and reviewed
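As a simplified illustration of deny-by-default RBAC, the sketch below maps hypothetical roles to permissions; in practice this enforcement lives in our identity provider and cloud IAM policies rather than application code.

```python
# Minimal RBAC sketch. Role and permission names are hypothetical;
# production enforcement sits in the identity provider / IAM policy.
ROLE_PERMISSIONS = {
    "engineer":      {"read:workflow_logs"},
    "delivery_lead": {"read:workflow_logs", "read:client_data"},
    "admin":         {"read:workflow_logs", "read:client_data", "write:production"},
}

def authorise(role: str, permission: str) -> bool:
    """Grant only permissions explicitly assigned to the role (deny by default)."""
    return permission in ROLE_PERMISSIONS.get(role, set())

assert authorise("engineer", "read:workflow_logs")
assert not authorise("engineer", "read:client_data")  # least privilege
```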
2. AI Data Handling
2.1 What Data Passes Through AI Providers
When AI automation workflows process your business data, content is sent via API to the relevant AI model provider (e.g., OpenAI, Anthropic, or Google). This typically includes:
- The content you instruct the AI to process (e.g., customer emails, documents, reports)
- System instructions (prompts) that define the AI agent's behaviour
We do not send unnecessary personal or sensitive data to AI providers. We design prompts and workflows to use the minimum data required to perform the requested task.
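A hedged sketch of this minimisation principle: the workflow below passes an AI provider only the fields a task actually needs. All field names are hypothetical, not drawn from a real Mellon AI workflow.

```python
# Hypothetical sketch: send the AI provider only the fields a task needs.
FIELDS_REQUIRED_FOR_TRIAGE = {"subject", "body"}

def minimise_for_ai(email_record: dict) -> dict:
    """Drop everything (sender address, account IDs, etc.) the task can do without."""
    return {k: v for k, v in email_record.items() if k in FIELDS_REQUIRED_FOR_TRIAGE}

record = {
    "subject": "Invoice query",
    "body": "Hi, could you resend invoice #1042?",
    "sender_email": "[email protected]",   # not needed for triage
    "crm_account_id": "ACC-9917",             # not needed for triage
}
payload = minimise_for_ai(record)  # only subject and body leave our systems
```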
2.2 AI Provider Data Policies
We use enterprise-tier or API-tier agreements with AI providers that include the following protections:
- OpenAI API: Data submitted via API is not used to train OpenAI models by default. OpenAI's data retention for API inputs is 30 days (zero-data-retention options available on enterprise plans).
- Anthropic API: Anthropic does not train models on API inputs by default. Standard API data retention is up to 30 days.
- Google Gemini API: Google does not use API inputs to train models without explicit consent. Data handling is governed by Google's Cloud Data Processing Addendum.
Clients should review the relevant AI provider's data policies if their business requires specific contractual data processing protections.
2.3 Sensitive Data Categories
We apply additional caution to workflows involving sensitive data categories, including:
- Health and medical information
- Financial account and credit information
- Legal privilege or confidential communications
- Government-issued identifiers (TFNs, Medicare numbers, etc.)
Clients in healthcare, legal, financial services, or other regulated industries should discuss their specific requirements with us before engagement. We offer deployment configurations that reduce or eliminate the transmission of sensitive data to third-party AI providers.
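As one hedged illustration of such a configuration, the sketch below screens content for patterns resembling Australian identifiers before anything is transmitted to a provider. The patterns are deliberately simplified (no check-digit validation) and are not our production rules.

```python
# Illustrative pre-transmission screen for Australian identifiers.
# Patterns are simplified (shape only, no check-digit validation).
import re

SENSITIVE_PATTERNS = {
    "tfn":      re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{3}\b"),  # Tax File Number shape
    "medicare": re.compile(r"\b\d{4}[ -]?\d{5}[ -]?\d\b"),     # Medicare number shape
}

def screen_before_transmission(text: str) -> list[str]:
    """Return the identifier types detected; callers block or redact on any hit."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]

hits = screen_before_transmission("My TFN is 123 456 789")
if hits:
    raise ValueError(f"Blocked: content appears to contain {hits}")
```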
3. On-Premises and Privacy-First Deployment Options
For clients where data sovereignty or strict privacy compliance is a non-negotiable requirement, we offer the following deployment options:
3.1 Local AI Deployment
We can deploy open-source LLMs (such as Llama, Mistral, or equivalent models) on infrastructure you own and control. In this configuration:
- No data is transmitted to any external AI provider
- All processing occurs within your own network or cloud tenancy
- You maintain complete control over the model, the data, and the outputs (see the sketch below)
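As a minimal sketch: self-hosted open-source models are commonly exposed through an OpenAI-compatible endpoint (for example via Ollama or vLLM), so existing workflow code can point at an internal address instead of an external provider. The host name and model name below are hypothetical.

```python
# Sketch assuming a self-hosted, OpenAI-compatible inference server (e.g. vLLM
# or Ollama) running inside the client's network. No request leaves that network.
from openai import OpenAI

client = OpenAI(
    base_url="http://llm.internal.example:8000/v1",  # hypothetical internal host
    api_key="not-needed-for-local",                  # placeholder; server is private
)

response = client.chat.completions.create(
    model="llama-3.1-8b-instruct",  # example open-weights model
    messages=[{"role": "user", "content": "Summarise this internal report: ..."}],
)
print(response.choices[0].message.content)
```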
3.2 Private Cloud Deployment
For clients who prefer managed infrastructure with data residency controls, we can deploy AI systems in a dedicated cloud environment in Australia, isolated from shared infrastructure.
3.3 Hybrid Architecture
We can architect systems where sensitive data is processed locally or on-premises, while non-sensitive tasks are handled by cloud AI providers — balancing capability and privacy.
These options are available under our Growth and custom enterprise engagements. Contact us to discuss your requirements.
4. Application and Workflow Security
4.1 Authentication and Authorisation
All integrations with your business systems (CRMs, email, databases, etc.) use OAuth 2.0, API keys, or token-based authentication. Credentials are stored encrypted and are scoped to the minimum permissions required for the integration to function.
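For illustration, the sketch below requests an OAuth 2.0 access token scoped to a single read-only permission, reflecting the least-privilege principle above. The authorisation endpoint, client identifiers, and scope name are hypothetical.

```python
# Sketch of requesting an OAuth 2.0 token scoped to the minimum permission an
# integration needs (read-only contacts). All endpoint and scope names are
# hypothetical, not those of a real CRM.
import requests

token_response = requests.post(
    "https://auth.example-crm.com/oauth/token",  # hypothetical auth server
    data={
        "grant_type": "client_credentials",
        "client_id": "mellon-integration",
        "client_secret": "<stored encrypted, injected at runtime>",
        "scope": "contacts.read",                # least privilege: no write scope
    },
    timeout=10,
)
access_token = token_response.json()["access_token"]
```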
4.2 Prompt Injection and AI Security
We apply defensive prompt design to mitigate the risk of prompt injection attacks, in which malicious inputs attempt to cause AI agents to behave in unintended ways. These measures, illustrated in the sketch after this list, include:
- Input sanitisation before content is passed to AI models
- System prompt isolation from user-provided content
- Output validation before AI responses are acted upon or transmitted
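The following Python sketch shows the shape of these three defences under stated assumptions; the action names and limits are hypothetical, not a definitive implementation.

```python
# Illustrative defensive pattern: user content is sanitised, kept strictly in
# the user role (never concatenated into the system prompt), and the model's
# output is validated against an allow-list before any action is taken.
import re

ALLOWED_ACTIONS = {"categorise", "draft_reply", "escalate"}  # hypothetical actions

def sanitise(untrusted: str, max_len: int = 4000) -> str:
    """Strip control characters (keeping tab/newline) and cap length."""
    return re.sub(r"[\x00-\x08\x0b-\x1f\x7f]", "", untrusted)[:max_len]

def build_messages(customer_email: str) -> list[dict]:
    return [
        # System instructions are fixed; untrusted content never merges into them.
        {"role": "system", "content": "You are a triage agent. Reply with exactly "
                                      "one of: categorise, draft_reply, escalate."},
        {"role": "user", "content": sanitise(customer_email)},
    ]

def validate_output(model_reply: str) -> str:
    """Reject anything outside the allow-list before the workflow acts on it."""
    action = model_reply.strip().lower()
    if action not in ALLOWED_ACTIONS:
        raise ValueError(f"Unexpected model output rejected: {action!r}")
    return action
```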
4.3 Logging and Monitoring
AI agent actions and workflow executions are logged for audit and debugging purposes. Logs record what the agent did, when it acted, and what inputs triggered the action. They do not store full sensitive content, but they retain sufficient detail for incident investigation.
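A minimal sketch of this logging approach, assuming a JSON audit log: each entry records the action and a digest of the input rather than the input itself. Field names are illustrative.

```python
# Sketch of audit logging that records what happened without storing full
# content: inputs are referenced by a hash, never logged verbatim.
import datetime
import hashlib
import json
import logging

logger = logging.getLogger("workflow_audit")

def log_agent_action(workflow_id: str, action: str, input_text: str) -> None:
    """Log action metadata plus a digest of the input for traceability."""
    logger.info(json.dumps({
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "workflow_id": workflow_id,
        "action": action,
        "input_sha256": hashlib.sha256(input_text.encode()).hexdigest(),
        "input_length": len(input_text),  # size is recorded; content is not
    }))
```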
5. Organisational Security
5.1 Team Practices
- All Mellon AI personnel with access to client systems operate under confidentiality agreements
- Access to client environments is granted only when required for service delivery and is revoked promptly when the need ends
- We do not share access credentials between team members; each uses individual authenticated accounts
5.2 Vendor Management
Third-party tools and services used in delivering our services are assessed for their security posture before adoption. We maintain a register of sub-processors and review it regularly.
5.3 Security Reviews
We conduct periodic internal security reviews of our systems and processes. We do not currently publish a formal penetration testing schedule, but enterprise clients may request a security review as part of their engagement.
6. Incident Response and Breach Notification
6.1 Incident Response
We maintain an incident response procedure that includes:
- Detection and containment of suspected security incidents
- Assessment of the nature and scope of any data exposure
- Eradication of the root cause and recovery of affected systems
- Post-incident review to prevent recurrence
6.2 Notifiable Data Breaches
We comply with the Notifiable Data Breaches (NDB) scheme under the Privacy Act 1988 (Cth). In the event of an eligible data breach:
- Affected clients will be notified within 72 hours of us becoming aware of the breach
- We will notify the Office of the Australian Information Commissioner (OAIC) as required by law
- Our notification will include the nature of the breach, the information involved, and the steps we are taking in response
7. Client Responsibilities
Security is a shared responsibility. You play an important role in keeping your data safe:
- Keep API keys, access credentials, and integration tokens confidential and rotate them regularly
- Notify us promptly if you suspect any unauthorised access to systems we have configured
- Ensure your own staff follow reasonable security practices (strong passwords, MFA, device security)
- Review AI outputs before actioning them in sensitive contexts
- Inform us before connecting any AI system we deploy to sensitive data sources not discussed during onboarding
8. Compliance
Our security practices are designed to support compliance with:
- Privacy Act 1988 (Cth) and the Australian Privacy Principles
- Notifiable Data Breaches scheme
- ISO 27001 principles (we rely on our infrastructure providers' certifications)
- GDPR — where we process data of individuals in the European Economic Area, we apply GDPR-equivalent protections
Clients in regulated industries (healthcare under the My Health Records Act 2012, financial services under ASIC regulations, etc.) should discuss their specific compliance requirements with us during engagement. We can tailor deployment architectures to support industry-specific frameworks.
9. Questions and Security Reports
If you have questions about our security practices, or if you wish to report a security vulnerability, please contact us:
Habitedge Pty Ltd
Trading as Mellon AI
Email: [email protected]
For security disclosures, please include "Security" in the subject line. We commit to acknowledging security reports within 2 business days.