
TODO: Cove Data Protection Integration

Date: 2026-02-10
Status: Research phase
Priority: Medium


🎯 Goal

Integrate Cove Data Protection (formerly N-able Backup / SolarWinds Backup) into Backupchecks for backup status monitoring.

Challenge: Cove does NOT work with email notifications like other backup systems (Veeam, Synology, NAKIVO). We need to find an alternative method to import backup status information.


🔍 Research Questions

1. API Availability

  • Does Cove Data Protection have a public API?
  • What authentication method does the API use? (API key, OAuth, basic auth?)
  • Which endpoints are available for backup status?
  • Is there rate limiting on the API?
  • Documentation URL: ?

2. Data Structure

  • What information can we retrieve per backup job?
    • Job name
    • Status (success/warning/failed)
    • Start/end time
    • Backup type
    • Client/device name
    • Error messages
    • Objects/files backed up
  • Is there a webhook system available?
  • How often should the API be polled?
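The per-job fields listed above could be captured in a small record type as soon as the API shape is known. A normalization sketch follows, under the assumption of a hypothetical JSON payload — the raw field names (`PluginName`, `StartTime`, etc.) are guesses, not the documented Cove API:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class CoveJobRun:
    """Normalized backup-run record mirroring the fields listed above."""
    job_name: str
    status: str                      # "success" | "warning" | "failed"
    started_at: datetime
    ended_at: Optional[datetime]
    backup_type: str
    device_name: str
    error_messages: list = field(default_factory=list)
    objects_backed_up: int = 0

# Hypothetical status mapping -- the real Cove status values must come
# from the API documentation once found.
STATUS_MAP = {"Completed": "success", "CompletedWithErrors": "warning", "Failed": "failed"}

def normalize(raw: dict) -> CoveJobRun:
    """Map an assumed raw API payload onto the normalized record."""
    return CoveJobRun(
        job_name=raw["PluginName"],                  # assumed field name
        status=STATUS_MAP.get(raw["Status"], "failed"),
        started_at=datetime.fromtimestamp(raw["StartTime"]),
        ended_at=datetime.fromtimestamp(raw["EndTime"]) if raw.get("EndTime") else None,
        backup_type=raw.get("BackupType", "unknown"),
        device_name=raw["DeviceName"],
        error_messages=raw.get("Errors", []),
        objects_backed_up=raw.get("ObjectCount", 0),
    )
```

Whatever the real field names turn out to be, keeping a single `normalize()` boundary means the rest of the importer never sees raw Cove payloads.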

3. Multi-Tenancy

  • Does Cove support multi-tenant setups? (MSP use case)
  • Can we monitor multiple customers/partners from a single account?
  • How are permissions/access managed?

4. Integration Strategy

  • Option A: Scheduled Polling

    • A cron job periodically calls the API
    • Parse results into JobRun records
    • Pro: Simple, consistent with the current flow
    • Con: Delay between the backup and its registration in the system
  • Option B: Webhook/Push

    • Cove sends notifications to our endpoint
    • Pro: Real-time updates
    • Con: Requires an external endpoint, security considerations
  • Option C: Email Forwarding

    • If Cove turns out to have email support after all (hidden setting?)
    • Pro: Reuses the existing email import flow
    • Con: Possibly not available
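For Option B, the security consideration mostly boils down to verifying that incoming requests really come from Cove. A framework-agnostic sketch of a receiver — the HMAC signature scheme and the payload fields (`job`, `status`) are pure assumptions, since it is not yet known whether Cove offers webhooks at all:

```python
import hashlib
import hmac
import json

WEBHOOK_SECRET = b"change-me"  # would live in SystemSettings, not in source code

def verify_signature(body: bytes, signature_hex: str) -> bool:
    """Constant-time check of an assumed HMAC-SHA256 signature header."""
    expected = hmac.new(WEBHOOK_SECRET, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_hex)

def handle_webhook(body: bytes, signature_hex: str) -> dict:
    """Return a minimal status record, or raise on a bad signature."""
    if not verify_signature(body, signature_hex):
        raise PermissionError("invalid webhook signature")
    payload = json.loads(body)
    # Field names are hypothetical until the real payload is documented.
    return {"job_name": payload["job"], "status": payload["status"]}
```

The handler is deliberately decoupled from any web framework so the same logic works whether the endpoint ends up in the existing backend or a separate receiver.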

📋 Technical Considerations

Database Model

Current JobRun model expects:

  • mail_message_id (FK) - how do we adapt this for API-sourced runs?
  • Possible new field: source_type ("email" vs "api")
  • Possible new field: external_id (Cove job ID)

Parser System

Current parser system works with email content. For API:

  • New "parser" concept for API responses?
  • Or direct JobRun creation without parser layer?

Architecture Options

Option 1: Extend Email Import System

API Poller → Pseudo-MailMessage → Existing Parser → JobRun
  • Pro: Reuse existing flow
  • Con: Hacky, email fields have no meaning

Option 2: Parallel Import System

API Poller → API Parser → JobRun (direct)
  • Pro: Clean separation, no email dependency
  • Con: Logic duplication

Option 3: Unified Import Layer

Email Import →
              → Common Processor → JobRun
API Import  →
  • Pro: Future-proof, scalable
  • Con: Larger refactor
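Option 3 could be shaped as a small source interface that both importers implement, with one common processor behind it. A sketch, assuming for illustration that a JobRun can be represented as a plain dict:

```python
from typing import Iterable, Protocol

class ImportSource(Protocol):
    """Anything that can yield raw job-run payloads (email, API, ...)."""
    source_type: str

    def fetch(self) -> Iterable[dict]: ...

def process(source: ImportSource) -> list[dict]:
    """Common processor: tag each payload with its origin.

    A real implementation would create JobRun records here instead of
    returning dicts.
    """
    runs = []
    for raw in source.fetch():
        runs.append({**raw, "source_type": source.source_type})
    return runs

class ApiSource:
    """Minimal stand-in for the future Cove API importer."""
    source_type = "api"

    def __init__(self, payloads: list[dict]):
        self.payloads = payloads

    def fetch(self) -> Iterable[dict]:
        return iter(self.payloads)
```

An `EmailSource` wrapping the existing mail import would implement the same two members, which is what makes the refactor in Option 3 larger but future-proof.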

🔧 Implementation Steps (After Research)

Phase 1: API Research & POC

  1. Research Cove API documentation
  2. Test API authentication
  3. Test data retrieval (1 backup job)
  4. Map Cove data to the Backupchecks model
  5. Proof of concept script (standalone)

Phase 2: Database Changes

  1. Decide: extend MailMessage model or new source type?
  2. Migration: add source_type field to JobRun
  3. Migration: add external_id field to JobRun
  4. Update constraints/validations
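Steps 2–4 amount to two new columns plus a uniqueness guarantee on the remote ID. A sqlite3 sketch of the intended schema change — the real change would go through the project's migration tooling, and the table layout here is simplified to the relevant columns:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE job_run (id INTEGER PRIMARY KEY, job_name TEXT)")

# Phase 2 migrations: tag each run with its origin and remote identifier.
conn.execute("ALTER TABLE job_run ADD COLUMN source_type TEXT NOT NULL DEFAULT 'email'")
conn.execute("ALTER TABLE job_run ADD COLUMN external_id TEXT")

# Constraint: an API-sourced run may be imported only once. The partial
# index leaves email-sourced rows (external_id IS NULL) unconstrained.
conn.execute(
    "CREATE UNIQUE INDEX idx_job_run_external "
    "ON job_run (source_type, external_id) WHERE external_id IS NOT NULL"
)
```

Defaulting `source_type` to `'email'` means all existing rows stay valid without a data backfill.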

Phase 3: Import Mechanism

  1. New file: containers/backupchecks/src/backend/app/cove_importer.py
  2. API client for Cove
  3. Data transformation to JobRun format
  4. Error handling & retry logic
  5. Logging & audit trail
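For step 4, a small retry wrapper with exponential backoff is usually enough; a generic sketch (the attempt count, delays, and the set of retryable exceptions are placeholders to be tuned against the real API client):

```python
import time

def with_retries(fn, attempts=3, base_delay=1.0, sleep=time.sleep):
    """Call fn(), retrying transient failures with exponential backoff.

    The injectable `sleep` makes the backoff schedule testable without
    actually waiting.
    """
    for attempt in range(attempts):
        try:
            return fn()
        except (ConnectionError, TimeoutError):
            if attempt == attempts - 1:
                raise  # out of attempts: surface the error to the caller
            sleep(base_delay * 2 ** attempt)  # 1s, 2s, 4s, ...
```

Non-transient errors (authentication failures, malformed payloads) should deliberately not be retried; they belong in the logging and audit trail instead.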

Phase 4: Scheduling

  1. Cron job/scheduled task for polling (every 15 minutes?)
  2. Or: webhook endpoint if Cove supports it
  3. Rate limiting & throttling
  4. Duplicate detection (avoid double imports)
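Step 4 can be made idempotent by keying every import on the run's `external_id`; a storage-agnostic sketch in which an in-memory dict stands in for the database (the real implementation would rely on the unique constraint from Phase 2):

```python
def import_runs(store: dict, payloads: list[dict]) -> int:
    """Insert runs keyed by external_id; already-seen runs are skipped.

    Returns the number of newly imported runs, so repeated polling
    cycles over the same time window cannot create duplicate JobRuns.
    """
    imported = 0
    for raw in payloads:
        key = raw["external_id"]  # assumed to be Cove's stable run ID
        if key not in store:
            store[key] = raw
            imported += 1
    return imported
```

This only works if Cove exposes a stable per-run identifier, which is exactly the `external_id` question from the database-model section above.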

Phase 5: UI Updates

  1. Job Details: indication that job is from API (not email)
  2. No "Download EML" button for API-sourced runs
  3. Possibly different metadata display

📚 References

Cove Data Protection

Similar Integrations

Other backup systems that use APIs:

  • Veeam: Has both email and REST API
  • Acronis: REST API available
  • MSP360: API for management

Resources

  • API documentation (still to be found)
  • SDK/Client libraries available?
  • Community/forum for integration questions?
  • Example code/integrations?

Open Questions

  1. Performance: How many Cove jobs do we need to monitor? (impact on polling frequency)
  2. Historical Data: Can we retrieve old backup runs, or only new ones?
  3. Filtering: Can we apply filters (only failed jobs, specific clients)?
  4. Authentication: Where do we store Cove API credentials? (SystemSettings?)
  5. Multi-Account: Do we support multiple Cove accounts? (MSP scenario)

🎯 Success Criteria

Minimum Viable Product (MVP)

  • Backup runs from Cove are automatically imported
  • Status (success/warning/failed) displayed correctly
  • Job name and timestamp available
  • Visible in Daily Jobs & Run Checks
  • Errors and warnings are shown

Nice to Have

  • Real-time import (webhook instead of polling)
  • Backup object details (individual files/folders)
  • Retry history
  • Storage usage metrics
  • Multi-tenant support

🚀 Next Steps

  1. Research first! - Start with API documentation investigation
  2. Create POC script (standalone, outside Backupchecks)
  3. Document findings in this file
  4. Decide which architecture option (1, 2, or 3)
  5. Only then start implementation

Status: Waiting on API research completion.


📝 Notes

  • This TODO document should be updated after each research step
  • Add API examples as soon as available
  • Document edge cases and limitations
  • Consider security implications (API key storage, rate limits, etc.)