Responsible AI at RTOFlow
RTOFlow uses AI to generate training and assessment documents for Australian Registered Training Organisations. That means our outputs end up in front of learners, assessors, and ASQA auditors. We take that responsibility seriously.
This page explains how we build and operate RTOFlow's AI pipeline responsibly — what we do, what we don't do, and where we're still building.
Our core principle: we generate, you validate
No RTOFlow document should go to a learner or an auditor without a qualified trainer or assessor reviewing it first. Every document we generate includes a visible notice:
This document was generated with AI assistance. It must be reviewed, contextualised for your industry and learner group, and validated by a qualified trainer and assessor before use. Do not submit AI-generated content to learners or assessors without review.
This isn't a legal disclaimer — it's how we designed the product. RTOFlow accelerates document creation; it does not replace professional judgement.
Alignment with the Australian Voluntary AI Safety Standard 2024
The Australian Government published the Voluntary AI Safety Standard 2024 (DISR/NAIC, 5 September 2024) as a framework of 10 guardrails for responsible AI use in Australian organisations. We self-assess our practices against this standard.
Important note: The Standard is voluntary. "Alignment" is self-assessed and does not constitute certification or government endorsement.
Guardrail 1 — Accountability
What it requires: A named person or team accountable for AI decisions.
What RTOFlow does: RTOFlow maintains internal AI governance with a designated responsible officer for AI system decisions, model selection, and output quality. Our AI practices are reviewed on a regular internal schedule.
Guardrail 2 — Risk assessment
What it requires: Identify and document AI-specific risks and how they're mitigated.
What RTOFlow does: We scope RTOFlow's AI to document generation only — we do not make enrolment decisions, assessment judgements, or eligibility determinations. Our risk assessment focuses on:
- Hallucination risk: Unit data is pulled live from training.gov.au, not generated. We ground every output in authoritative source data to reduce fabrication.
- Misuse risk: We include mandatory review notices in all outputs and document this expectation in our Terms of Service.
- Scope creep risk: RTOFlow does not store or analyse learner data; it has no access to student management systems by default.
Guardrail 3 — Data governance
What it requires: Know what data you use, don't over-collect, manage data responsibly.
What RTOFlow does:
- We do not train our AI models on customer data. Your unit codes, RTO name, and industry context inputs are used only to generate your document and are not retained for model training.
- We do not require or store personal information about learners or students. RTOFlow operates at the RTO/unit level, not the individual learner level.
- Data inputs (unit code, RTO name, industry context) are processed to generate the output document and are not retained beyond the session unless you save a job.
- Our data processing practices are detailed in our Privacy Policy.
Guardrail 4 — Ongoing monitoring
What it requires: Track AI system performance and respond to incidents.
What RTOFlow does: We monitor output quality through:
- Internal spot-checks of generated documents against source unit data
- User feedback mechanisms in the application (flag a document for review)
- Incident logging for any reported errors, hallucinations, or quality failures
- Scheduled model quality reviews when underlying AI models are updated
We are continuing to formalise and document these processes.
Guardrail 5 — Human oversight
What it requires: Humans can review, override, or stop AI decisions.
What RTOFlow does: Human oversight is built into the product by design:
- All documents are presented as drafts for human review — they are not automatically sent to learners or submitted to assessors.
- The application UI clearly labels all content as AI-generated.
- Users can edit every section of every generated document before use.
- The product includes a "flag for review" mechanism to report quality issues.
There is no automated pipeline that delivers RTOFlow output directly to a learner without human review. That is intentional.
Guardrail 6 — Transparency to affected people
What it requires: Tell people when AI is being used.
What RTOFlow does:
- All generated documents include an AI generation notice visible to the practitioner.
- Our marketing and product copy clearly describes RTOFlow as an AI-powered tool — we do not obscure that documents are AI-generated.
- This responsible AI page is publicly accessible and linked from our homepage.
We are working on: explicit learner-facing disclosure recommendations for RTOs using RTOFlow-generated materials.
Guardrail 7 — Contestability
What it requires: People affected by AI decisions can challenge or seek review.
What RTOFlow does:
- Users can flag any document for quality review via the in-app feedback mechanism.
- RTOFlow's support team reviews flagged documents and responds within 2 business days.
- Because RTOFlow generates drafts (not final decisions), every output can be edited, rejected, or replaced by the practitioner before it affects any learner.
We are working on: formal documentation of our contestability and appeal process.
Guardrail 8 — Privacy protection
What it requires: Protect personal information used in or by AI systems.
What RTOFlow does:
- RTOFlow does not process personal information about learners or students.
- The only personal information we hold is account information (name, email, RTO details) for billing and product access — this is covered by our Privacy Policy.
- We do not use customer data to train or fine-tune AI models.
- We comply with the Privacy Act 1988 (Cth) and the Australian Privacy Principles.
Guardrail 9 — Reliability and safety
What it requires: The system does what it says, consistently and safely.
What RTOFlow does:
- Unit data is pulled live from training.gov.au at generation time — outputs are grounded in the current, authoritative version of each unit of competency.
- Each generated document includes metadata: unit code, unit version, generation date, and RTOFlow version — so you know exactly what source data was used.
- We test all new model versions against a benchmark set of known units before deploying to production.
- System uptime and status: status.rtoflow.au
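To illustrate what this provenance metadata covers, here is a minimal sketch of such a record. The field names, unit code, and version string are illustrative assumptions, not RTOFlow's actual schema:

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class GenerationMetadata:
    """Provenance record attached to a generated document.

    All field names here are hypothetical, for illustration only.
    """
    unit_code: str          # unit of competency sourced from training.gov.au
    unit_version: str       # release of the unit the output was grounded in
    generated_on: date      # when the document was produced
    generator_version: str  # version of the generating tool

# Example record (values are placeholders)
meta = GenerationMetadata(
    unit_code="CPCCWHS2001",
    unit_version="Release 1",
    generated_on=date(2026, 5, 1),
    generator_version="rtoflow-1.4.0",
)
print(f"{meta.unit_code} ({meta.unit_version}), generated {meta.generated_on}")
```

A record like this is what lets a reviewer confirm which release of a unit the draft was grounded in before validating it.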
Guardrail 10 — Fairness and non-discrimination
What it requires: AI outputs don't unfairly disadvantage any group of people.
What RTOFlow does:
- RTOFlow's outputs are driven by unit of competency content from training.gov.au, not by demographic or personal characteristics. The same unit code produces the same structured output regardless of who is generating it.
- Our industry context feature (e.g., "construction", "aged care") is used to contextualise examples — it is not used to vary the scope or rigour of assessment requirements.
- We do not analyse, segment, or apply different treatment to users based on demographic characteristics.
What we're still building
We're honest that this is a work in progress. The following items are on our roadmap:
| Area | Status | Target |
|---|---|---|
| Formal AI governance policy (published) | In progress | Q3 2026 |
| AI incident register (internal) | In progress | Q2 2026 |
| Learner-facing disclosure guide for RTOs | Planned | Q3 2026 |
| Formal contestability process | Planned | Q3 2026 |
| Third-party AI safety review | Planned | 2027 |
Questions or concerns
If you have questions about RTOFlow's AI practices, want to report a quality issue, or want to discuss our responsible AI approach, contact us at:
ai-safety@rtoflow.au
Last updated: May 2026.
Reference: Australian Voluntary AI Safety Standard 2024 — industry.gov.au