Policy Compliance Results: Understanding and Acting on Network Configuration Validation

Policy Compliance Results provide comprehensive visibility into how well your network devices conform to organizational standards and regulatory requirements. The compliance reporting interface displays pass/fail status for each policy assignment, drill-down capability to per-device results, and detailed rule-level information enabling targeted remediation of non-compliant configurations.

Understanding how to read compliance reports, interpret results, and act on compliance failures is essential for maintaining configuration standards across your network infrastructure. This guide covers accessing compliance results, interpreting various result formats, analyzing failures, and establishing effective remediation workflows.

Assignment-level overview: High-level compliance status for each policy assignment showing overall pass/fail rates, affected device counts, and last execution timestamps.

Device-level detail: Per-device compliance status identifying exactly which devices pass or fail each policy assignment, enabling targeted remediation efforts.

Rule-level granularity: Individual policy rule results showing which specific rules passed or failed, providing precise information about non-compliant configuration elements.

JSON output: Complete evaluation results in structured JSON format, displaying policy methods, configuration strings evaluated, and pass/fail status for troubleshooting and auditing.

Historical tracking: Compliance trend data over time showing improvement or degradation in configuration adherence across device populations.

Export capabilities: Compliance data exportable for external reporting, audit documentation, and integration with other management systems.

Compliance results are produced when compliance jobs execute:

  1. Job initiation: Scheduled task or manual trigger starts compliance job
  2. Device identification: Job retrieves all devices in policy assignment scope
  3. Configuration retrieval: The latest configuration file for the specified command is loaded for each device
  4. Policy evaluation: Each rule in policy definition evaluated against configuration
  5. Result recording: Pass/fail status recorded for each rule and overall evaluation
  6. Report generation: Results compiled into compliance report accessible via UI

Results are stored in the database with timestamps, enabling historical analysis and trend identification.
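
To make steps 4 and 5 concrete, the sketch below shows how a single device's configuration can be evaluated against a list of rules. It is a minimal, self-contained illustration rather than rConfig's actual code: the rule and configuration data are hypothetical, and only the must_match_single_string method is mimicked with a simple substring check.

# Illustrative only: a simplified evaluation loop, not rConfig's implementation.
def evaluate_rules(config_text, rules):
    """Evaluate each policy rule against one device configuration."""
    results = {}
    for index, rule in enumerate(rules):
        matched = rule["policyString"] in config_text   # must_match_single_string
        results[str(index)] = {
            "policyMethod": rule["policyMethod"],
            "policyString": rule["policyString"],
            "result": "pass" if matched else "fail",
            "resultRaw": matched,
        }
    failed = sum(1 for r in results.values() if not r["resultRaw"])
    results["eval_result"] = "PASS" if failed == 0 else "FAIL"
    results["eval_result_raw"] = failed == 0
    results["eval_result_reason"] = ("All policy methods passed" if failed == 0
                                     else f"{failed} policy methods failed")
    return results

# Hypothetical rule and configuration, used only for illustration.
rules = [{"policyMethod": "must_match_single_string",
          "policyString": "snmp-server host 1.1.1.1 TESTCOMMUNITY"}]
config = "hostname edge-router\nsnmp-server host 1.1.1.1 TESTCOMMUNITY\n"
print(evaluate_rules(config, rules)["eval_result"])   # PASS

The keys in this sketch mirror the JSON result output shown later in this guide (policyMethod, result, resultRaw, eval_result, eval_result_reason).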

Navigate to Compliance → Policy Compliance to access the compliance results interface.

Figure: Policy Compliance Main View (assignment list with compliance statistics)

The main view provides high-level compliance statistics for all policy assignments.

Summary statistics:

  • Total number of policy assignments configured
  • Overall compliance percentage across all assignments
  • Total devices evaluated
  • Last update timestamp

Assignment list: Each row represents one policy assignment showing:

  • Assignment name
  • Policy definition name
  • Scope (device, category, or tag)
  • Device count in scope
  • Pass count and percentage
  • Fail count and percentage
  • Last execution timestamp
  • Expand/collapse control for detailed view
Figure: Policy Compliance Assignment Summary (main view with an expanded assignment showing summary data)

Click the expand icon next to any assignment to view detailed information:

Expanded view displays:

  • Complete assignment configuration (scope, command, policy definition)
  • Device-by-device compliance status
  • Quick action buttons:
    • View Details: Navigate to detailed results page
    • View Definition Code: Display policy definition content
    • Edit Assignment: Modify assignment configuration
Figure: Expanded Assignment View (device list and action buttons)

Click View Details on any assignment to access the detailed results page.

Header section:

  • Assignment name and description
  • Policy definition name
  • Scope information
  • Last evaluation timestamp
  • Summary statistics (total devices, pass/fail counts, percentage)

Device results table:

  • Device name and IP address
  • Overall status (PASS/FAIL/NOT EVALUATED)
  • Last evaluation timestamp
  • Expand control for rule-level details
  • Action button to show result output
Figure: Detailed Compliance Results Page (per-device status)

Each device in the results table displays overall compliance status:

PASS (Green indicator):

  • All policy rules passed validation for this device
  • Device configuration meets compliance requirements
  • No action needed

FAIL (Red indicator):

  • One or more policy rules failed validation
  • Device configuration violates compliance requirements
  • Remediation required

NOT EVALUATED - CONFIG NOT FOUND (Gray indicator):

  • Configuration file for the specified command doesn’t exist
  • The device backup may have failed, or the command was not executed
  • Investigate backup status before remediation

Expand any device to view individual rule results:

Rule information displayed:

  • Policy method used (e.g., must_match_single_string)
  • Comment/description from policy definition
  • Configuration string or pattern evaluated
  • Result status (pass/fail)
  • Raw result (true/false)
  • Configuration ID reference

Result interpretation:

  • pass/true: This specific rule passed validation
  • fail/false: This specific rule failed validation
  • Review failed rules to identify specific non-compliant configuration elements
Figure: Per-Device Rule Results (expanded device showing individual rule pass/fail status)

Click the Show Result Output button for any device to view the complete JSON evaluation results.

Figure: Result Output JSON (dialog showing detailed evaluation data)

The result output is a JSON representation of the policy evaluation, containing:

  • Each policy rule with its method, comment, and configuration string
  • Individual rule results (pass/fail, true/false)
  • Configuration ID indicating which config file was evaluated
  • Overall evaluation result and reason

Example of device passing all policy rules:

{
  "0": {
    "policyMethod": "must_match_single_string",
    "comment": "Description: must_match_single_string SNMP Policy",
    "policyString": "snmp-server host 1.1.1.1 TESTCOMMUNITY",
    "result": "pass",
    "resultRaw": true,
    "configId": 9
  },
  "eval_result": "PASS",
  "eval_result_raw": true,
  "eval_result_reason": "All policy methods passed"
}

Key elements:

  • "0": Rule index (first rule in policy definition)
  • policyMethod: Validation method used
  • comment: Description from policy definition
  • policyString: Configuration string evaluated
  • result: “pass” indicates this rule passed
  • resultRaw: true indicates boolean pass
  • configId: Database reference for configuration file
  • eval_result: “PASS” indicates overall compliance
  • eval_result_reason: Human-readable result explanation

Interpretation: This device has the required SNMP configuration. All policy rules passed. Device is compliant.

Example of device failing one or more policy rules:

{
  "0": {
    "policyMethod": "must_match_single_string",
    "comment": "Description: must_match_single_string SNMP Policy",
    "policyString": "snmp-server host 1.1.1.1 TESTCOMMUNITY",
    "result": "pass",
    "resultRaw": true,
    "configId": 9
  },
  "1": {
    "policyMethod": "must_match_single_string",
    "comment": "Description: must_match_single_string SNMP Policy",
    "policyString": "snmp-server host 1.1.1.1 NOTCOMMUNITY",
    "result": "fail",
    "resultRaw": false,
    "configId": 9
  },
  "eval_result": "FAIL",
  "eval_result_raw": false,
  "eval_result_reason": "1 policy methods failed"
}

Key elements:

  • Rule “0” passed (result: “pass”, resultRaw: true)
  • Rule “1” failed (result: “fail”, resultRaw: false)
  • eval_result: “FAIL” indicates overall non-compliance
  • eval_result_reason: “1 policy methods failed” identifies failure count

Interpretation: This device has the first SNMP configuration but is missing the second required configuration. Rule 1 failed because snmp-server host 1.1.1.1 NOTCOMMUNITY doesn’t exist in the device configuration. The device requires remediation.
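
Because each rule result is keyed by its numeric index alongside the top-level eval_result fields, failed rules can be pulled out of the result output programmatically. A minimal sketch, assuming the JSON has been copied from the Show Result Output dialog:

import json

# Result output copied from the Show Result Output dialog (the failing example above).
result_output = """
{
  "0": {"policyMethod": "must_match_single_string",
        "policyString": "snmp-server host 1.1.1.1 TESTCOMMUNITY",
        "result": "pass", "resultRaw": true, "configId": 9},
  "1": {"policyMethod": "must_match_single_string",
        "policyString": "snmp-server host 1.1.1.1 NOTCOMMUNITY",
        "result": "fail", "resultRaw": false, "configId": 9},
  "eval_result": "FAIL",
  "eval_result_raw": false,
  "eval_result_reason": "1 policy methods failed"
}
"""

data = json.loads(result_output)

# Numeric keys are rule entries; everything else is the overall evaluation.
failed_rules = [(index, rule["policyString"])
                for index, rule in data.items()
                if index.isdigit() and rule["result"] == "fail"]

print(data["eval_result"])                      # FAIL
for index, policy_string in failed_rules:
    print(f"Rule {index} failed: missing '{policy_string}'")

Extracting failures this way feeds directly into the failure list used in the remediation workflow later in this guide.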

Example when the configuration file is not found:

{
  "result": "NOT EVALUATED - CONFIG NOT FOUND"
}

Interpretation: The configuration file for the specified command doesn’t exist for this device. This isn’t a policy failure; it’s a data availability issue.

Common causes:

  • Device backup failed or hasn’t completed
  • Command specified in assignment doesn’t match command in device command group
  • Command not enabled or not executing during backups
  • Configuration file was deleted or purged

Resolution: Verify the device is backing up successfully and the command is executing before investigating policy compliance.

Calculation: (Devices passing / Total devices) × 100

Example: Assignment with 100 devices where 85 pass:

  • Compliance rate: 85%
  • 15 devices require remediation

Good compliance: >95% indicates strong configuration management and limited drift.

Moderate compliance: 80-95% suggests some configuration drift requiring attention.

Poor compliance: <80% indicates significant issues requiring immediate remediation and root cause analysis.
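
The calculation and the bands above translate into a short sketch (the band boundaries are the ones suggested here; substitute your own targets):

def compliance_rate(passing, total):
    """(Devices passing / Total devices) x 100."""
    return (passing / total) * 100 if total else 0.0

def classify(rate):
    if rate > 95:
        return "good"
    if rate >= 80:
        return "moderate"
    return "poor"

rate = compliance_rate(passing=85, total=100)
print(f"{rate:.0f}% -> {classify(rate)}")   # 85% -> moderate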

Monitor compliance percentages over time to identify:

Improving trends: Percentage increasing indicates successful remediation efforts and reduced configuration drift.

Declining trends: Percentage decreasing suggests:

  • New devices added without proper configuration
  • Configuration changes violating policies
  • Policy definition not matching current standards
  • Backup or evaluation issues

Stable low compliance: Persistent low compliance may indicate:

  • Policy definition issues (too strict, doesn’t match reality)
  • Systematic configuration problems requiring large-scale remediation
  • Devices don’t support the configurations the policy requires

Systematic failures: All or most devices failing same rule indicates:

  • Policy definition error (rule doesn’t match actual config format)
  • Legitimate widespread non-compliance requiring remediation campaign
  • Command output format changed (vendor firmware update)

Isolated failures: Few devices failing while most pass indicates:

  • Legitimate configuration drift on specific devices
  • Devices with unique requirements outside policy scope
  • Recent configuration changes not yet compliant

Alternating failures: Devices passing/failing different rules suggests:

  • Multiple configuration variations across device population
  • Need for more flexible policy (wildcards, regex)
  • Legacy vs. modern configuration standards

From compliance results, identify devices showing FAIL status.

Prioritization criteria:

  • Critical devices (core routers, firewalls) remediate first
  • Security-related failures take precedence over operational ones
  • Devices with multiple failures vs. single failures
  • Production vs. lab devices

Export failure list: Document devices requiring remediation for tracking and assignment.
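
As one way to apply the criteria above, the sketch below orders a failure list for remediation. The device records and the role and security fields are hypothetical examples, not fields rConfig exports by default.

# Hypothetical failure list; in practice this would come from an export.
failed_devices = [
    {"name": "Router-Core-01", "role": "core",     "security_failure": True,  "failed_rules": 1},
    {"name": "Switch-Lab-07",  "role": "lab",      "security_failure": False, "failed_rules": 3},
    {"name": "FW-Edge-02",     "role": "firewall", "security_failure": True,  "failed_rules": 2},
]

ROLE_PRIORITY = {"core": 0, "firewall": 0, "distribution": 1, "access": 2, "lab": 3}

def priority(device):
    # Lower tuples sort first: critical role, then security impact, then more failures.
    return (ROLE_PRIORITY.get(device["role"], 2),
            0 if device["security_failure"] else 1,
            -device["failed_rules"])

for device in sorted(failed_devices, key=priority):
    print(device["name"])   # FW-Edge-02, Router-Core-01, Switch-Lab-07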

For each failed device, review rule-level results and JSON output to determine:

What failed: Which specific policy rules didn’t pass

Why it failed: Configuration missing, incorrect, or in wrong format

Impact: Security vs. operational vs. cosmetic non-compliance

Scope: Single device issue vs. systematic problem

Example analysis:

Device: Router-Core-01
Status: FAIL
Failed rule: “snmp-server host 192.168.1.10 public”
Reason: SNMP community string is “public” instead of the required secure community
Impact: Security vulnerability (weak SNMP community)
Scope: Isolated to this device (other routers pass)
Action: Change SNMP community string to a secure value

Investigate why non-compliance exists:

Configuration never set: Device built without proper standards application

Configuration changed: Device was compliant but the configuration was modified

Configuration drift: Gradual changes over time moved device out of compliance

Policy mismatch: Policy doesn’t reflect actual operational requirements

Device limitation: Device doesn’t support required configuration

Root cause determines remediation approach:

  • Never set → Apply configuration
  • Changed → Restore previous configuration or apply correct configuration
  • Drift → Re-apply standards and investigate change control process
  • Policy mismatch → Update policy definition, not device
  • Device limitation → Exception documentation, policy scope adjustment

Individual device remediation:

  • Log into device
  • Apply required configuration changes
  • Verify changes don’t disrupt operations
  • Save configuration

Bulk remediation (multiple devices, same issue):

  • Create configuration snippet in rConfig
  • Test snippet on single device
  • Deploy snippet to affected devices via Snippet Deployment
  • Monitor for issues

Automated remediation (future state):

  • Script configuration corrections
  • Integrate with change management workflow
  • Schedule during maintenance windows
  • Validate post-change

Apply configuration changes to bring devices into compliance.

Best practices:

  • Test on lab or non-critical device first
  • Schedule changes during maintenance windows for critical devices
  • Have rollback plan ready
  • Document changes in change management system
  • Notify relevant teams before changes

Configuration application methods:

  • Manual CLI configuration
  • rConfig Snippet Deployment
  • Configuration management tools (Ansible, Puppet)
  • Vendor-specific management platforms

After applying changes, verify compliance:

  1. Trigger device backup: Ensure new configuration captured in rConfig
  2. Run compliance check manually: Re-evaluate device against policy
  3. Review results: Verify device now shows PASS status
  4. Check rule details: Confirm previously failed rules now pass
  5. Document resolution: Record what was changed and why

If still failing after remediation:

  • Verify the configuration was actually applied and saved
  • Verify configuration syntax matches policy definition exactly
  • Check for case sensitivity or whitespace issues
  • Confirm backup captured new configuration
  • Re-run backup if necessary before re-checking compliance

Remediation documentation should include:

  • Devices remediated
  • Configuration changes applied
  • Date and time of changes
  • Administrator who performed remediation
  • Verification that compliance restored
  • Any exceptions or special considerations

Tracking metrics:

  • Time to remediate per device
  • Compliance improvement percentage
  • Recurring failures (same devices/rules failing repeatedly)
  • Root cause patterns

Main view provides at-a-glance dashboard of compliance posture:

Key metrics displayed:

  • Total assignments
  • Overall compliance percentage
  • Devices evaluated
  • Assignments with failures

Visual indicators: Color-coded status (green/red/gray) enables quick identification of problem areas.

Export compliance results for:

  • External reporting to management
  • Audit documentation
  • Integration with CMDB or ticketing systems
  • Historical analysis in BI tools
  • Compliance trend tracking

Export methods:

  • CSV export from compliance results page
  • API access to compliance data
  • Database queries for custom reports
  • Scheduled report generation
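
As an example of working with a CSV export, the sketch below summarizes results per assignment for an external report. The file name and the assignment and status column names are assumptions about the export layout; adjust them to match the actual file.

import csv
from collections import Counter, defaultdict

per_assignment = defaultdict(Counter)
with open("compliance_export.csv", newline="") as f:   # assumed export file name
    for row in csv.DictReader(f):
        per_assignment[row["assignment"]][row["status"]] += 1

for assignment, counts in per_assignment.items():
    total = sum(counts.values())
    rate = counts.get("PASS", 0) / total * 100 if total else 0
    print(f"{assignment}: {rate:.1f}% compliant, "
          f"{counts.get('FAIL', 0)} failing, "
          f"{total - counts.get('PASS', 0) - counts.get('FAIL', 0)} not evaluated")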

Automate compliance reporting:

Weekly summary reports: Email to operations team with compliance percentages and failure counts.

Monthly management reports: Executive summary of compliance trends, improvement areas, persistent issues.

Audit reports: Comprehensive compliance documentation for regulatory audits with historical trend data.

Alert-based reports: Immediate notification when compliance drops below threshold or critical devices fail.

Daily: Review failed devices from automated compliance checks, address critical failures promptly.

Weekly: Analyze compliance trends, identify patterns in failures, prioritize remediation efforts.

Monthly: Generate management reports, review policy effectiveness, update policies as needed.

Quarterly: Comprehensive compliance audit, policy definition review, remediation workflow optimization.

Set compliance thresholds: Define acceptable compliance percentages (e.g., >95% for production).

Alert on threshold violations: Automated alerts when compliance drops below acceptable levels.
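
A minimal sketch of such a threshold check is shown below. The thresholds are examples, and the notification is just a logging placeholder; wire it to whatever alerting mechanism your environment uses (email, webhook, ticketing).

import logging

# Example thresholds per environment; adjust to your own targets.
THRESHOLDS = {"production": 95.0, "lab": 80.0}

def check_threshold(environment, rate, notify=logging.warning):
    target = THRESHOLDS.get(environment, 95.0)
    if rate < target:
        notify("Compliance for %s is %.1f%%, below the %.1f%% threshold",
               environment, rate, target)
        return False
    return True

logging.basicConfig(level=logging.INFO)
check_threshold("production", 91.3)   # logs a warning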

Monitor compliance trends: Identify declining compliance before it becomes critical.

Track remediation velocity: Measure how quickly failures are resolved.

Review policy effectiveness: Are policies catching actual non-compliance or generating false positives?

Update policies as standards evolve: Ensure policies reflect current organizational requirements.

Remove obsolete policies: Clean up policies no longer relevant to compliance needs.

Test policy changes: Validate policy updates before broad application to avoid widespread false failures.

Establish clear ownership: Who is responsible for remediating different device types or failure types?

Define SLAs: Maximum time to remediate failures based on severity (critical vs. informational).

Track remediation: Maintain visibility into remediation progress and blockers.

Prevent recurrence: Identify why failures occurred and implement preventive measures.

Document exceptions: Some devices may legitimately deviate from policies—document why.

Maintain remediation history: Track what was changed, when, and by whom.

Record policy decisions: Why specific policies exist, what compliance they enforce.

Create runbooks: Step-by-step remediation procedures for common failures.

Symptom: Every device in the assignment fails the same policy rule.

Possible causes:

  • Policy definition syntax error or incorrect string
  • Case sensitivity issue
  • Extra/missing whitespace in policy string
  • Command output format different than policy expects

Resolution:

  1. Review policy definition carefully
  2. Compare policy string to actual device configuration
  3. Test policy against a known-good device manually (see the sketch after this list)
  4. Adjust policy definition if needed
  5. Re-run compliance check after policy update
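
For step 3, a quick way to test a must_match_single_string rule locally is to check the policy string against the device's backed-up configuration. The sketch below assumes the configuration has been saved as router-core-01.cfg (a hypothetical file name) and reuses the example policy string from earlier in this guide:

# Mirror the must_match_single_string behaviour with a plain substring test.
policy_string = "snmp-server host 1.1.1.1 TESTCOMMUNITY"

with open("router-core-01.cfg") as f:   # hypothetical saved configuration file
    config = f.read()

print("pass" if policy_string in config else "fail")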

Symptom: Same device passes one run, fails next run without configuration changes.

Possible causes:

  • Policy definition was modified between runs
  • Dynamic content in configuration (timestamps, uptime)
  • Backup timing issues (evaluating partial backup)
  • Database/file system issues

Resolution:

  1. Check policy definition change history
  2. Review device configuration history for changes
  3. Verify policy doesn’t evaluate dynamic content
  4. Ensure backups complete before compliance checks run
  5. Check database and file system integrity

Symptom: Some devices show “NOT EVALUATED - CONFIG NOT FOUND” while others in same assignment evaluate normally.

Cause: Configuration files missing for affected devices.

Resolution:

  1. Verify affected devices backing up successfully
  2. Check backup logs for errors on those devices
  3. Confirm command exists in device command groups
  4. Ensure command enabled and executing
  5. Run manual backup for affected devices
  6. Re-run compliance check after successful backups

Symptom: No compliance results appear, last execution timestamp not updating.

Possible causes:

  • Scheduled task not configured or disabled
  • Horizon queue not processing jobs
  • Assignment disabled
  • Compliance job failing silently

Resolution:

  1. Verify scheduled task exists and is enabled
  2. Check Horizon is running and processing jobs
  3. Confirm assignment enabled status
  4. Review Horizon failed jobs for compliance job failures
  5. Check application logs for errors
  6. Run manual compliance check to validate functionality

Symptom: Devices show PASS status but manual review indicates non-compliance.

Possible causes:

  • Policy definition too lenient (wildcards matching too broadly)
  • Wrong command selected (evaluating wrong configuration)
  • Policy method doesn’t match intent
  • Cached results (viewing old results)

Resolution:

  1. Review policy definition for correctness
  2. Verify command selection matches policy requirements
  3. Confirm policy method appropriate for validation
  4. Force new compliance check to eliminate caching
  5. Test policy against known non-compliant device

Status | Meaning | Action Required
PASS | All rules passed | None - device compliant
FAIL | One or more rules failed | Investigate and remediate
NOT EVALUATED | Config file not found | Fix backup/command issues

Field | Description
policyMethod | Validation method used
comment | Description from policy definition
policyString | Configuration string evaluated
result | “pass” or “fail”
resultRaw | true (pass) or false (fail)
configId | Configuration file reference
eval_result | Overall result (“PASS” or “FAIL”)
eval_result_reason | Human-readable result explanation

  • Daily: Address critical failures
  • Weekly: Analyze trends, prioritize remediation
  • Monthly: Management reporting
  • Quarterly: Comprehensive audit, policy review

Policy Compliance Results provide comprehensive visibility into network device configuration compliance, enabling organizations to identify non-compliant devices, understand specific configuration violations, and systematically remediate issues to maintain configuration standards across the infrastructure.

Multi-level visibility: Compliance reporting spans from high-level assignment summaries through device-level status to granular rule-level results, providing appropriate detail at each organizational level.

Actionable information: JSON result output shows exactly which configurations passed or failed, eliminating guesswork and enabling targeted remediation efforts.

Three result states: PASS (compliant), FAIL (non-compliant), and NOT EVALUATED (data issue) clearly distinguish compliance status from data availability problems.

Remediation workflow: Systematic approach from identifying failures through verifying fixes ensures consistent, documented compliance restoration.

Continuous monitoring: Regular review of compliance results identifies trends, patterns, and recurring issues enabling proactive configuration management rather than reactive troubleshooting.

With comprehensive compliance reporting and systematic remediation workflows, rConfig enables organizations to maintain consistent configuration standards across networks of any size, supporting regulatory compliance, security baselines, and operational excellence.