Responsibility Statement
We believe we are accountable for the intelligence we release. At dogAdvisor, our aim is to provide dog owners with the world's safest and smartest pet care AI.




Engineered for Safety
Each generation of Max undergoes pre-deployment safety testing, followed by regular safety checks and reviews, to keep Max safe
Responsible Oversight
We actively monitor Max's conversations, and when disclosure is in the public interest, you'll hear about it in our Incident Disclosures
Responsible Growth
We scale Max thoughtfully, with ongoing research and improvement, only deploying tools that are safe, transparent, and helpful for owners
At dogAdvisor, we recognise that deploying AI systems in domains affecting animal welfare creates profound responsibilities, but we must be absolutely clear from the outset: we cannot and do not guarantee that Max is safe; we cannot be certain Max will ever be perfectly safe; and you must read and agree to our Terms of Service before using Max. Our Terms of Service establish the legal boundaries of our relationship, including limitations of liability, and this Responsibility Statement does not override, modify, or supersede those terms in any way. Max is an AI system that makes mistakes. Max is not a veterinarian. Max cannot replace professional veterinary consultation. You must always consult qualified veterinary professionals for medical decisions affecting your dog's health. No amount of safety testing, monitoring, or improvement can eliminate the fundamental reality that AI systems are imperfect, that Max will sometimes provide incorrect guidance, and that relying on Max without appropriate veterinary consultation creates risks we cannot control or eliminate.
With those critical disclaimers established, we want to explain what we are doing to make Max as safe as reasonably possible within the inherent limitations of AI technology. We conduct comprehensive pre-deployment safety testing before any new Max generation reaches production, with our team deliberately attempting to elicit unsafe responses, bypass safety constraints, and identify failure modes across emergency scenarios, adversarial manipulation attempts, edge cases, and boundary conditions. We test Emergency Guidance activations against professional veterinary first aid standards, evaluate Medical Intelligence responses for clinical accuracy and appropriate boundaries, and verify Safety Intent refusals correctly identify harmful requests. No system reaches dog owners until this safety validation completes, though we acknowledge that passing our testing does not guarantee safe operation in all real-world scenarios users will encounter. Commercial pressure, feature timelines, and competitive dynamics do not override safety testing requirements, but completing safety testing does not mean Max is safe — it means Max has passed our current testing protocols, which are themselves imperfect and cannot anticipate every possible failure mode.
Our engineers have access to conversation data for safety monitoring purposes, and our safety team conducts ongoing review of Max conversations to identify systematic issues, validate feature performance, and detect emerging failure patterns. Every Emergency Guidance activation receives manual human review, in which a safety engineer examines the conversation to verify appropriate activation, validate guidance quality, and identify gaps requiring system updates, though this human oversight cannot catch every error and does not guarantee that Emergency Guidance provides safe or appropriate advice. We regularly review Medical Intelligence conversations to ensure clinical accuracy, appropriate boundary maintenance, and correct application of domain knowledge, though sampling cannot cover every conversation and errors will occur between review cycles. Safety Intent activations are logged and audited to validate refusal patterns, and we conduct random sampling of general conversations to monitor response quality and verify adherence to Principle Alignments, but monitoring is retrospective and cannot prevent harm from occurring before we identify problems.
If you believe Max provided unsafe guidance, failed to recognise an emergency, gave inaccurate medical information, or violated safety principles, you can flag conversations for priority review through our incident reporting system at Report Safety Incident. Flagged conversations enter fast-track review, typically within 48-72 hours, in which we examine the specific conversation, evaluate whether system behaviour violated safety standards, determine whether issues represent isolated failures or systematic problems, and implement corrections when failures are identified. We contact users who flag conversations to explain investigation findings and describe corrective actions when applicable, though our determination that system behaviour was appropriate does not mean the guidance was correct or safe: it means we believe the behaviour aligned with Max's design specifications, which themselves may be flawed.
When our monitoring or user reports identify genuine safety failures where Max provided guidance that could reasonably result in animal harm, failed to recognise clear emergencies, violated constitutional principles, or operated outside safe parameters, we publish incident disclosures at Disclosed Safety Incidents. Our disclosure threshold is public interest rather than perfection: we disclose incidents where transparency serves a legitimate public interest in understanding Max's safety performance, limitations, and ongoing improvement, though many errors and failures will not meet the disclosure threshold and will not be published. Incident disclosures include an explanation of what happened, why it happened, what harm could have resulted, what corrective actions we implemented, and what systemic changes we made to prevent recurrence, and resolved issues remain disclosed to create a permanent record of Max's safety performance over time. However, incident disclosures represent only the subset of failures we identify and deem worthy of public disclosure: many failures will go undetected, many detected failures will not be disclosed, and the absence of recent incident disclosures does not indicate that Max is operating safely.
If we identify guidance that could result in imminent harm to dogs, we activate emergency response protocols, including immediate system review, rapid corrective deployment that implements fixes within days when systematic vulnerabilities are identified, and proactive user notification when we can identify users who may have received harmful guidance, though our ability to identify affected users is limited and many users who received harmful guidance will never be contacted. For issues that don't create imminent harm risk, we incorporate corrections into regular development cycles whilst tracking resolution to ensure problems get fixed, though development priorities mean some identified issues may remain unfixed for extended periods. We clearly communicate what Max cannot do: Max isn't a veterinarian, cannot diagnose diseases, cannot prescribe treatments, and cannot replace professional veterinary consultation. We acknowledge that Max makes mistakes and that AI guidance has inherent limitations, but these disclosures do not absolve us of responsibility for trying to improve Max's safety performance within the constraints of what's technically possible.
Responsibility for building, deploying, monitoring, and improving Max rests with dogAdvisor, but responsibility for decisions about whether to rely on Max's guidance, whether to seek veterinary consultation, and how to care for your dog rests entirely with you, as established in our Terms of Service. When trade-offs arise between safety and commercial objectives, safety takes priority in our development decisions, but this prioritisation cannot guarantee safe outcomes because AI safety is fundamentally limited by current technology. We maintain relationships with veterinary advisors who provide domain expertise for evaluating clinical accuracy and emergency guidance appropriateness, though advisor input cannot guarantee Max provides appropriate veterinary guidance. Safety standards evolve as Max improves, with features acceptable in earlier generations potentially no longer meeting standards for newer systems, and we learn from near-misses and user feedback to inform continuous improvement, though learning from failures does not prevent future failures and continuous improvement does not mean Max becomes reliably safe over time.
Your reports of concerning behaviour trigger fast-track review and directly improve Max's safety, and you should use Max appropriately within its designed scope, as an intelligent intermediary between your observations and professional veterinary care rather than as a replacement for veterinarians. Perfect safety is impossible and we cannot guarantee Max won't provide dangerous guidance, but accountability through continuous monitoring, honest incident disclosure, rapid response when failures occur, and commitment to prioritising dog welfare over commercial convenience represents what we can control within the inherent limitations of AI technology. We commit to comprehensive pre-deployment safety testing, manual review of Emergency Guidance activations, continuous monitoring through quality sampling and conversation review, fast-track review for user-flagged conversations, public incident disclosure when failures meet the public interest threshold, rapid corrective action for time-critical safety issues, honest transparency about limitations and trade-offs, governance prioritising safety over commercial objectives, and continuous improvement informed by failures and user feedback. None of these commitments guarantees that Max operates safely, and all of them are subject to the limitations, disclaimers, and liability limitations established in our Terms of Service, which you must read and agree to before using Max.
This Responsibility Statement establishes what we're doing to make Max safer within the fundamental constraints that AI systems make mistakes, that perfect safety is impossible, and that we cannot guarantee Max provides safe or appropriate guidance. Max exists to help dogs and the people who love them, but that purpose creates responsibilities we can only partially fulfil given the inherent limitations of AI technology. We document our operational practices, incident handling, and safety commitments not as a guarantee of safe operation but as transparency about what we're attempting to achieve and what users should realistically expect. By using dogAdvisor Max, you acknowledge that Max is an AI system that makes mistakes, that you have read and agreed to our Terms of Service including all limitations of liability, that Max is not a veterinarian and cannot replace professional veterinary consultation, and that you use Max at your own risk. Always consult veterinary professionals for medical decisions affecting your dog's health. We're doing everything we reasonably can to make Max safer, but we cannot and do not guarantee that Max is safe or ever will be safe.
This Responsibility Statement was submitted on 27 December 2025 by dogAdvisor's Intelligence team. You can contact us at ai.safety@dogadvisor.dog. Links to safety reports, incident disclosures, and the Terms of Service are available in the footer of dogAdvisor.


