Remediation
Risky dataflows detected by HoundDog.ai can be fixed in one of two ways:
Code Removal
Some risky dataflows come from logging or streaming sensitive data to stdout or stderr. These are usually safe to fix by removing the logging or print statement completely. Example:
Before
```python
# Vulnerable: printing sensitive user input
print("User email:", user_email)
```
After
```python
# Fixed: logging removed completely
```
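The same fix applies when the sensitive value reaches stdout or stderr through a logging framework rather than a print statement. A minimal Python sketch, assuming a standard-library logger (the log messages are illustrative):

```python
import logging

logger = logging.getLogger(__name__)

# Vulnerable: sensitive value written to the log stream
# logger.info("User email: %s", user_email)

# Fixed: the sensitive logging call is deleted; keep only non-sensitive context
logger.info("user profile updated")
```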
Data Sanitization
Other cases involve data that must remain in the code, such as sending user data to a monitoring tool like Datadog. In these situations, complete removal could cause unintended consequences, so the safer approach is to sanitize the value before use. This might involve redaction, masking, or encryption, depending on the sensitivity of the data.
Before
```javascript
// Vulnerable: raw user input sent to monitoring
const userId = req.body.userId;
datadog.trackEvent("login_attempt", { userId });
```
After
```javascript
// Fixed: input sanitized and stored with a prefixed variable name
const sanitizedUserId = sanitize(req.body.userId);
datadog.trackEvent("login_attempt", { userId: sanitizedUserId });
```
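This page does not define sanitize itself. As one illustration of the masking option mentioned above, a minimal Python sketch of such a helper might look like the following; the function name and masking rule are assumptions, not part of HoundDog.ai's API:

```python
def sanitize(value: str, visible: int = 4) -> str:
    """Mask all but the last `visible` characters, e.g. "user-8842" -> "*****8842"."""
    if len(value) <= visible:
        return "*" * len(value)
    return "*" * (len(value) - visible) + value[-visible:]

print(sanitize("user-8842"))  # *****8842
```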
Auto-Closing Issues
To ensure the scanner auto-closes the risky dataflow after remediation, the variable name in the fixed version of the code must begin with sanitized (see Sanitizers for more details). Use the style that matches your language or project:
- sanitizedUserId (camelCase)
- sanitized_user_id (snake_case)
- SanitizedUserId (PascalCase)
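For instance, a Python codebase would use the snake_case form. A short sketch reusing the hypothetical sanitize helper from above, where track_event stands in for whatever monitoring client is in use:

```python
# Fixed: snake_case "sanitized" prefix lets the scanner auto-close the finding
sanitized_user_id = sanitize(user_id)
track_event("login_attempt", {"userId": sanitized_user_id})
```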