Responsibility Statement

At dogAdvisor, we believe that our intelligence should be safe, transparent, and accountable to the people and animals it serves.

Deni Darenberg - Founder, dogAdvisor

AI is going to change the world. It's up to us whether that change is for the better.

People are already using AI for increasingly important questions. Questions about their health, their relationships, their children, their dogs. This is happening whether we like it or not. The question isn't whether people will use AI for these things - they already are. The question is whether the AI they're using is built for that responsibility.

Most AI deployers today use the same guardrails for writing a poem as they do for questions that could affect whether someone's dog lives or dies. We think that's absurd. It creates real problems. If you build AI for everything, you build it for nothing in particular. If you treat all questions the same, you take none of them seriously enough.

We're taking a different approach. We built Max specifically for dog care because we believe AI can make dog ownership genuinely possible for everyone. Not just more accessible for some people - actually possible for anyone. The nervous first-time owner who's terrified they're going to mess something up. The person who can't afford frequent vet visits for every small question. The family trying to figure out if what they're seeing at 2am needs immediate attention or can wait until morning. We think AI can help these people in ways that matter. But only if we build it right. Only if we take the hard route of actually caring about safety instead of just saying we do. Only if we're accountable for what we put into the world. This approach - Accountable Intelligence - defines everything we do, and it lays the foundation for how we take responsibility for the intelligence we deploy into the world.

How we take responsibility

Before Max reaches users, we run thousands of tests designed to break it - emergency scenarios, adversarial prompts, edge cases where the right answer isn't obvious. We are deliberately trying to make Max fail. Our goal is to find problems in testing rather than having them surface when someone is actually relying on Max for their dog, and we share the findings of our work in our Model Cards and Addendums. This safety testing takes at least four months for each new generation.

The truth is, this slows us down considerably. Most companies don't do pre-deployment testing at this scale for consumer AI. We think they should, especially for applications where mistakes have real consequences. Dogs depend entirely on the humans caring for them, and those humans are turning to AI for guidance. If we're going to provide that guidance, we need to make sure it's safe enough to deserve trust.

Once Max is deployed, our engineers review conversations to understand how it performs in actual use. We manually examine emergency situations to verify that Max recognised them appropriately, and we look for systematic issues and patterns in failures that testing didn't catch. When someone tells us Max gave unsafe guidance or missed something important, we take it seriously. We regularly review conversations where Safety Stops, Welfare Protections, Emergency Guidances, Medical Intelligences, or other advanced protocols are deployed, report on their safety, and, if necessary, release emergency fixes without delay. If we find Max is making systematic mistakes, we fix them. We also regularly review all conversations users have had with Max within the preceding 21 days to ensure Max is adhering to our regulations and standards.

When we identify genuine safety incidents - situations where Max's guidance could have harmed an animal, where we missed an emergency, where our safety mechanisms failed - we disclose them publicly. The truth is, no one likes publishing their mistakes. It's uncomfortable. But we think transparency about failures is part of taking safety seriously. If we're only transparent about successes, we're not actually being transparent. We retain discretion over what reaches the disclosure threshold because not every error is worth publishing, but if something happened that people should know about to understand Max's safety performance and limitations, we disclose it.

Four times now, Max has identified emergencies where users told us the recognition and guidance helped them get their dog to veterinary care in time. Veterinarians saved those dogs, not Max. But Max helped bridge the gap between "something seems wrong" and "I need to act now." That validates our approach, but it's not why we do this. We do this because if you're going to deploy AI in people's lives, you need to take responsibility for building it safely. Most AI companies don't do this level of testing. Most don't manually review emergency activations. Most don't publicly disclose safety incidents. We think this is a mistake and we're trying to demonstrate what responsible deployment actually looks like.

What does this all mean?

We're accountable for the safety we build into Max. The testing before deployment. The monitoring after deployment. The mechanisms we build to catch failures. The transparency we maintain about what's working and what isn't. This is expensive and time-consuming. Testing thousands of questions before launch takes time. Manual review of emergency situations takes resources. Public disclosure of failures is uncomfortable. But we think it's necessary if AI is going to be safe enough to help with questions that actually matter.

Max is not a veterinarian. Max cannot diagnose, cannot prescribe, and cannot replace professional veterinary consultation. Max will make mistakes because AI systems make mistakes. We cannot guarantee that Max will always operate safely. What we can do is build comprehensive safety infrastructure, monitor performance rigorously, respond to failures rapidly, and improve systematically. Users must read and agree to our Terms of Service, which establish legal boundaries that this statement does not modify.

The truth is we're trying to make dog ownership possible for everyone by building AI they can trust. Trust requires safety. Safety requires work. We're doing that work because we think it matters. We test before deployment. We monitor after deployment. We're honest about failures. We fix them. We learn from them. We do better. That's what accountability looks like to us.

This statement was submitted on 11 February 2026.

Contact dogAdvisor Safety: accountability@dogadvisor.dog