Compare commits


152 Commits

Author SHA1 Message Date
588f788e31 Auto-commit local changes before build (2026-02-10 11:49:48) 2026-02-10 11:49:48 +01:00
a919610d68 HOTFIX: Fix duplicate tickets in Run Checks popup
Critical bug: Same ticket appeared multiple times in popup
(e.g., T20260127.0061 showed 8 times).

Root Cause:
The JOIN with ticket_scopes/remark_scopes created duplicate rows
when a ticket had multiple scopes (Cartesian product).

Changes:
- Removed unnecessary JOIN ticket_scopes from tickets query
- Removed unnecessary JOIN remark_scopes from remarks query
- Added DISTINCT to prevent any duplicate rows
- Changed COALESCE(ts.resolved_at, t.resolved_at) to t.resolved_at
  (ticket_scopes JOIN removed, only ticket resolution matters)

Result: Each ticket/remark now appears exactly once in popup.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-10 11:47:28 +01:00
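The Cartesian-product effect this hotfix describes can be reproduced with a minimal sqlite3 sketch (the schema and names here are illustrative, not the application's actual tables):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE tickets (id INTEGER PRIMARY KEY, code TEXT);
    CREATE TABLE ticket_scopes (ticket_id INTEGER, scope TEXT);
    INSERT INTO tickets VALUES (1, 'T20260127.0061');
    -- one ticket with multiple scopes: a JOIN multiplies the rows
    INSERT INTO ticket_scopes VALUES (1, 'job-a'), (1, 'job-b'), (1, 'job-c');
""")

# Buggy shape: the JOIN yields one row per scope (3 rows for 1 ticket)
buggy = con.execute(
    "SELECT t.code FROM tickets t "
    "JOIN ticket_scopes ts ON ts.ticket_id = t.id").fetchall()

# Fixed shape: drop the unneeded JOIN; DISTINCT guards against any remainder
fixed = con.execute("SELECT DISTINCT t.code FROM tickets t").fetchall()

print(len(buggy), len(fixed))  # 3 1
```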
da9ed8402e Merge branch 'v20260210-01-autotask-ticket-propagation-fix' into main
Release v0.1.26 - Ticket system bug fixes

This release resolves critical ticket system display issues where
resolved tickets were incorrectly appearing on new runs across
multiple pages.

Key changes:
- Fixed ticket propagation logic (4 locations)
- Transitioned from date-based to link-based queries
- Fixed Run Checks popup showing resolved tickets
- Updated Settings Maintenance UI text
- Test email generation reduced to 1 per button

Total commits: 10
Branch: v20260210-01-autotask-ticket-propagation-fix

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-10 11:36:01 +01:00
8bef63c18a Release v0.1.26 on branch v20260210-01-autotask-ticket-propagation-fix (bump type 1) 2026-02-10 11:35:27 +01:00
7385ecf94c Update v0.1.26 changelogs with UI text fix
Added the Settings Maintenance page text update to the official
v0.1.26 release notes before the first build.

Changes:
- Updated docs/changelog.md with User Interface subsection
- Updated changelog.py with User Interface subsection
- Both now document the test email text change (3→1, Veeam only)

All three changelogs (changelog.md, changelog.py, changelog-claude.md)
now include this UI improvement in v0.1.26.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-10 11:34:24 +01:00
f62c19ddf8 Update Settings Maintenance test email text
The UI text still mentioned "3 emails simulating Veeam, Synology,
and NAKIVO" but the actual behavior changed to 1 Veeam email per button.

Changes:
- Updated description: Now states "1 Veeam Backup Job email" per button
- Updated button labels: "emails (3)" → "email (1)"
- Clarified that only Veeam test emails are generated

This matches the actual implementation that was changed earlier.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-10 11:26:37 +01:00
1064bc8d86 Release v0.1.26 - Ticket system bug fixes
Prepared official release documentation for v0.1.26 consolidating
all ticket system bug fixes from 2026-02-10.

Changes:
- Updated docs/changelog.md with v0.1.26 release notes
  - Detailed root cause analysis of date-based logic issues
  - Complete list of fixed pages (4 locations)
  - Before/after behavior explanation
  - Testing and troubleshooting section
- Updated changelog.py with v0.1.26 entry for website display
  - Same content structured for Python data format
- Updated changelog-claude.md with release reference

Release Focus:
- Complete transition from date-based to link-based ticket queries
- Fixed resolved tickets appearing on new runs (4 pages affected)
- Preserved audit trail for historical runs
- Consistent behavior across entire application

Ready for production deployment.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-10 11:22:46 +01:00
5e617cb6a9 Improve changelog clarity for debug logging context
The previous changelog entry lacked context about why debug logging
was added and what it did. Future readers need this information.

Changes:
- Restored full debug logging description in Changed section
- Marked as "LATER REMOVED" for clarity
- Expanded Removed section with full context about purpose
- Now clear: logging was temporary troubleshooting tool

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-10 11:17:28 +01:00
d467c060dc Remove debug logging from ticket linking function
The ticket propagation issues have been resolved. Debug logging
is no longer needed in production code.

Changes:
- Removed AuditLog debug logging from link_open_internal_tickets_to_run
- Preserved debug logging code in backupchecks-system.md for future use
- Updated changelog to document removal

The debug code is available in the technical documentation if
troubleshooting is needed again in the future.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-10 11:14:13 +01:00
0d9159ef6f Fix Run Checks popup showing resolved tickets
The Run Checks popup modal was still showing resolved tickets
for runs to which they were never actually linked. This was the last
remaining location using date-based ticket logic.

Root cause:
The /api/job-runs/<run_id>/alerts endpoint used the old date-based
logic that showed all tickets scoped to the job if active_from_date
was before the run date. This ignored whether the ticket was actually
linked to that specific run.

Changes:
- Replaced date-based query with explicit ticket_job_runs join
- Replaced date-based query with explicit remark_job_runs join
- Now only returns tickets/remarks actually linked to this run
- Removed unused run_date, job_id, ui_tz query parameters
- Simplified queries: no timezone conversions, no date comparisons

Result: Resolved tickets no longer appear in popup unless they were
linked to that run when they were still open. Completes transition
from date-based to explicit-link ticket system across entire UI.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-10 11:07:28 +01:00
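The link-based replacement can be sketched as follows (simplified schema; the real endpoint also handles remarks via remark_job_runs):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE tickets (id INTEGER PRIMARY KEY, code TEXT, resolved_at TEXT);
    CREATE TABLE ticket_job_runs (ticket_id INTEGER, run_id INTEGER);
    -- ticket 1 was linked to run 100 while open, then resolved
    INSERT INTO tickets VALUES (1, 'T1', '2026-02-10');
    INSERT INTO ticket_job_runs VALUES (1, 100);
""")

def alerts_for_run(run_id):
    # Explicit-link query: only tickets actually linked to this run.
    # No run_date, timezone conversion, or date comparison is needed.
    return con.execute(
        "SELECT t.code FROM tickets t "
        "JOIN ticket_job_runs tjr ON tjr.ticket_id = t.id "
        "WHERE tjr.run_id = ?", (run_id,)).fetchall()

print(alerts_for_run(100))  # [('T1',)] -- audit trail preserved
print(alerts_for_run(101))  # []        -- resolved ticket absent from new run
```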
5b940e34f2 Fix Run Checks page showing resolved ticket indicators
The Run Checks main page has ticket/remark indicators (🎫/💬) that
use queries to check if active tickets/remarks exist for each job.
These queries still used the old date-based logic.

Changes:
- Removed date-based OR clause from ticket indicator query
- Removed date-based OR clause from remark indicator query
- Simplified parameters (removed ui_tz from ticket query)
- Now consistent with Job Details and linking behavior

Result: Run Checks indicators no longer show for resolved tickets,
matching the behavior across all pages.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-10 10:57:14 +01:00
43502ae6f3 Fix: Show resolved tickets only on runs where they were linked
Previous fix removed ALL resolved tickets from display, breaking
audit trail. Users need to see which tickets were associated with
historical runs, even after resolution.

Solution: Two-source ticket display
1. Direct links (ticket_job_runs): Always show, even if resolved
   - Preserves audit trail
   - Shows tickets that were explicitly linked to runs
2. Active window (ticket_scopes): Only show unresolved
   - Prevents resolved tickets from appearing on NEW runs
   - Uses active_from_date without date-based resolved logic

Changes:
- Added direct_ticket_links map to fetch linked tickets per run
- Query ticket_job_runs for audit trail tickets
- Modified ticket_codes building to use both sources
- Removed date-based resolved_date comparison (resd >= rd)

Result:
- Run 1 with ticket → ticket resolved → ticket still visible on Run 1
- Run 2 created → ticket NOT shown on Run 2 (correctly filtered)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-10 10:53:45 +01:00
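The two-source approach above can be sketched like this (heavily simplified: the real active-window query also filters by job scope and active_from_date):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE tickets (id INTEGER PRIMARY KEY, code TEXT, resolved_at TEXT);
    CREATE TABLE ticket_job_runs (ticket_id INTEGER, run_id INTEGER);
    -- a resolved ticket that was linked to run 1 while still open
    INSERT INTO tickets VALUES (1, 'T1', '2026-02-10');
    INSERT INTO ticket_job_runs VALUES (1, 1);
""")

def ticket_codes(run_id):
    # Source 1: direct links -- always shown, even after resolution
    direct = {r[0] for r in con.execute(
        "SELECT t.code FROM tickets t "
        "JOIN ticket_job_runs tjr ON tjr.ticket_id = t.id "
        "WHERE tjr.run_id = ?", (run_id,))}
    # Source 2: active window -- unresolved tickets only, so resolved
    # tickets never attach to new runs
    active = {r[0] for r in con.execute(
        "SELECT code FROM tickets WHERE resolved_at IS NULL")}
    return direct | active

print(ticket_codes(1))  # {'T1'}  -- still visible on the historical run
print(ticket_codes(2))  # set()  -- filtered from the new run
```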
a9cae0f8f5 Fix Job Details page showing resolved tickets
The Job Details page used the same date-based logic that was causing
resolved tickets to appear for runs on the same day as the resolve date.

The linking was already fixed in ticketing_utils.py, but the display
query in routes_jobs.py still used the old logic, causing a mismatch:
- New runs were correctly NOT linked to resolved tickets
- But the UI still SHOWED resolved tickets due to the display query

Changes:
- Removed date-based OR clause from tickets query (line 201-204)
- Removed date-based OR clause from remarks query (line 239-242)
- Simplified query parameters (removed min_date and ui_tz)
- Now both linking AND display use consistent logic: resolved = hidden

Result: Resolved tickets and remarks no longer appear in Job Details
or any other view, matching the expected behavior.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-10 10:29:43 +01:00
1b5effc5d2 Reduce test email generation from 3 to 1 per status
The user requested a simpler test scenario with just 1 email per
status instead of 3, making testing and debugging easier.

Changes:
- Success: 1 email instead of 3
- Warning: 1 email instead of 3
- Error: 1 email instead of 3
- Each button now creates exactly 1 test mail
- Kept the most recent email (2026-02-09) from each set

This makes it easier to test ticket linking behavior without having
to deal with multiple runs per test cycle.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-10 10:14:32 +01:00
c1aeee2a8c Always log ticket linking attempts, not just when tickets found
Previous debug code only logged when tickets were found, making it
impossible to verify that the function was being called at all.

Changes:
- Move logging outside the if rows: block
- Always create audit log entry for every run import
- Log "No open tickets found" when rows is empty
- Use commit() instead of flush() to ensure persistence
- Add exception logging to catch any errors in debug code
- New event_type "ticket_link_error" for debug code failures

Now every email import will create a ticket_link_debug entry showing:
- Whether the function was called
- How many tickets were found (0 or more)
- Details of each ticket if found

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-10 10:10:23 +01:00
aea6a866c9 Change debug logging to write to AuditLog table
Flask logger output was not visible in Portainer logs or Logging page.
Changed to write debug info to audit_logs table instead, which is
visible on the Logging page in the UI.

Changes:
- Debug entries use event_type "ticket_link_debug"
- User field set to "system"
- Details field contains ticket info (one per line)
- Visible on Settings → Logging page

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-10 10:06:03 +01:00
c228d6db19 Add debug logging for ticket linking investigation
User reports that resolved internal tickets are still being linked to
new runs, even though Autotask tickets correctly stop linking. Added
debug logging to understand what the query is finding.

Changes:
- Query now returns resolved_at values for both ticket and scope
- Added logger.info statements showing found tickets and their status
- This will help diagnose whether tickets are truly resolved in DB

Temporary debug code to be removed after issue is identified.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-10 10:00:35 +01:00
88b267b8bd Remove date-based logic from ticket propagation
The ticket linking query had date-based logic that considered tickets
"open" for runs if:
- The ticket was unresolved, OR
- The resolved date >= run date

This caused resolved tickets to still link to new runs, which was
unexpected behavior. User confirmed tickets should ONLY link to new
runs if they are genuinely unresolved, regardless of dates.

Changes:
- Simplified query to only find tickets where resolved_at IS NULL
- Removed OR clause with date comparison
- Removed ui_tz parameter (no longer needed)
- Simplified Strategy 1 code (no extra resolved check needed)

Now tickets cleanly stop linking to new runs as soon as they are
resolved, for both internal and Autotask tickets.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-10 09:55:58 +01:00
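The before/after effect of dropping the OR clause can be illustrated in a minimal sketch (table and column names are simplified stand-ins):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE tickets (id INTEGER PRIMARY KEY, code TEXT, resolved_at TEXT);
    INSERT INTO tickets VALUES (1, 'T-old', '2026-02-09'),
                               (2, 'T-open', NULL);
""")

run_date = "2026-02-09"

# Old date-based logic: a resolved ticket still counted as "open" for any
# run dated on or before its resolve date
old = con.execute(
    "SELECT code FROM tickets WHERE resolved_at IS NULL OR resolved_at >= ?",
    (run_date,)).fetchall()

# New logic: a ticket links to new runs only while genuinely unresolved
new = con.execute(
    "SELECT code FROM tickets WHERE resolved_at IS NULL").fetchall()

print([r[0] for r in old])  # ['T-old', 'T-open']
print([r[0] for r in new])  # ['T-open']
```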
4f208aedd0 Auto-commit local changes before build (2026-02-10 09:41:33) 2026-02-10 09:41:33 +01:00
caff435f96 Fix Autotask propagation to also check resolved status
The previous fix only checked if tickets were deleted, but Autotask
tickets can also be resolved (which is tracked via the internal Ticket
table, not the JobRun table).

Updated Strategy 2 to:
1. Find most recent non-deleted Autotask ticket
2. Check if its internal ticket is resolved
3. Only propagate if ticket is not deleted AND not resolved

This ensures tickets stop propagating when they are resolved in Autotask
(synced via PSA polling), matching the expected behavior.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-10 09:40:27 +01:00
f3b1b56b6a Fix Autotask ticket propagation to new runs
When a new run is created, Autotask tickets were not being propagated
if the associated internal ticket was resolved. This caused users to
have to manually re-link tickets on each new run.

The previous implementation relied on finding an open internal ticket
first, then using its ticket code to find a matching Autotask-linked run.
If the internal ticket was resolved, the Autotask propagation would fail.

This commit implements a two-strategy approach:
1. Strategy 1: Use internal ticket code (existing logic, improved error handling)
2. Strategy 2: Direct Autotask propagation - find most recent non-deleted
   Autotask ticket for the job, independent of internal ticket status

Now Autotask tickets remain linked across runs regardless of internal
ticket resolution status, matching the behavior of internal tickets.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-10 09:32:23 +01:00
596fc94e69 Merge branch 'v20260209-08-veeam-vbo365-not-started' into main 2026-02-09 17:26:15 +01:00
49f24595c3 Merge branch 'v20260209-07-synology-drive-health-parser' into main 2026-02-09 17:26:04 +01:00
fd3f3765c3 Merge branch 'v20260209-06-synology-firmware-update-parser' into main 2026-02-09 17:25:30 +01:00
2a03ff0764 Merge branch 'v20260209-05-responsive-navbar-fix' into main 2026-02-09 17:25:19 +01:00
d7f6de7c23 Release v0.1.25 on branch v20260209-08-veeam-vbo365-not-started (bump type 1) 2026-02-09 17:21:39 +01:00
57196948a7 Add v0.1.25 to website changelog
Update changelog.md and changelog.py with comprehensive v0.1.25 release notes
consolidating all changes from 2026-02-09:

Sections:
- Parser Enhancements: Synology (Drive Health, DSM Updates, ABB Skipped) and
  Veeam (Job Not Started)
- Maintenance Improvements: Orphaned Jobs Cleanup, Test Email Generation
- Data Privacy: Parser Registry Cleanup, Autotask Title Simplification
- Bug Fixes: Responsive Navbar Overlap Fix

This release focuses on parser coverage expansion and system maintenance
capabilities while improving data privacy practices.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-09 17:17:47 +01:00
acb9fbb0ea Auto-commit local changes before build (2026-02-09 17:13:17) 2026-02-09 17:13:17 +01:00
3b48cd401a Add Veeam parser support for "Job did not start on schedule" error notifications
Extend Veeam parser to recognize and handle error notifications when a backup
job fails to start on its scheduled time. This commonly occurs when proxy
servers are offline or other infrastructure issues prevent job execution.

Features:
- Detects "Job did not start on schedule" pattern in subject line
- Extracts backup type from subject (e.g., "Veeam Backup for Microsoft 365")
- Extracts job name from subject after colon (e.g., "Backup MDS at Work")
- Reads error message from plain text body (handles base64 UTF-16 encoding)
- Sets overall_status to "Error" for failed-to-start jobs
- Example message: "Proxy server was offline at the time the job was scheduled to run."

This handles VBO365 and other Veeam backup types that send plain text error
notifications instead of the usual HTML formatted reports.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-09 17:12:53 +01:00
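The extraction steps above can be sketched as follows. The subject shape (backup type before the colon, job name after it) follows the examples in the message, but the literal layout is an assumption, not the exact Veeam notification format:

```python
import re

NOT_STARTED_RE = re.compile(r"job did not start on schedule", re.I)

def parse_not_started(subject, body):
    """Return a parse result for 'did not start' notifications, else None."""
    if not NOT_STARTED_RE.search(subject):
        return None
    # Backup type before the first colon, job name after it
    backup_type, _, rest = subject.partition(":")
    job_name = NOT_STARTED_RE.sub("", rest).strip(" -")
    return {
        "backup_type": backup_type.strip(),
        "job_name": job_name,
        "overall_status": "Error",
        "message": body.strip(),
    }

result = parse_not_started(
    "Veeam Backup for Microsoft 365: Backup MDS at Work "
    "- Job did not start on schedule",
    "Proxy server was offline at the time the job was scheduled to run.")
print(result["backup_type"])  # Veeam Backup for Microsoft 365
print(result["job_name"])     # Backup MDS at Work
```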
21f5c01148 Auto-commit local changes before build (2026-02-09 17:02:31) 2026-02-09 17:02:31 +01:00
f539d62daf Add Synology monthly drive health report parser
Add parser for Synology monthly drive health reports with support for both
Dutch and English notifications. Reports are classified as informational
and excluded from schedule learning and reporting logic.

Features:
- Recognizes Dutch ("Maandelijks schijfintegriteitsrapport", "Gezond") and
  English ("Monthly Drive Health Report", "Healthy") variants
- Extracts hostname from subject or body ("Van/From NAS-HOSTNAME")
- Automatic status detection: Healthy/Gezond/No problem detected → Success,
  otherwise → Warning
- Backup type: "Health Report", Job name: "Monthly Drive Health"
- Added registry entry (order 237) for /parsers page visibility

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-09 17:02:08 +01:00
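The bilingual detection and status mapping can be sketched like this; the pattern lists cover only the variants named above and the real parser may be broader:

```python
import re

HEALTH_SUBJECT_RE = re.compile(
    r"Maandelijks schijfintegriteitsrapport|Monthly Drive Health Report", re.I)
HEALTHY_RE = re.compile(r"Healthy|Gezond|No problem detected", re.I)
HOST_RE = re.compile(r"\b(?:Van|From)\s+(\S+)", re.I)

def parse_drive_health(subject, body):
    if not HEALTH_SUBJECT_RE.search(subject):
        return None
    # Hostname from subject first, then body ("Van/From NAS-HOSTNAME")
    host = HOST_RE.search(subject) or HOST_RE.search(body)
    return {
        "backup_type": "Health Report",
        "job_name": "Monthly Drive Health",
        "hostname": host.group(1) if host else None,
        # Healthy/Gezond/No problem detected -> Success, otherwise Warning
        "status": "Success" if HEALTHY_RE.search(body) else "Warning",
    }

result = parse_drive_health(
    "Monthly Drive Health Report from NAS-HOSTNAME",
    "All drives: Healthy")
print(result)
```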
b6a85d1c8e Extend Synology DSM update parser with automatic installation announcement patterns
Add detection patterns for DSM update notifications that announce automatic
installation ("belangrijke DSM-update", "kritieke oplossingen", "wordt
automatisch geïnstalleerd", "is beschikbaar op"). This is the fourth variant
of DSM update notifications now handled by the same Updates parser job.

All changes maintain backward compatibility by extending existing pattern lists.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-09 16:45:23 +01:00
5549323ff2 Extend Synology Active Backup for Business parser for skipped tasks
Extended the parser to recognize backup tasks that were skipped/ignored
because a previous backup was still running. These are treated as Warning
status for monitoring purposes.

Changes:
- Extended _ABB_COMPLETED_RE regex to match "genegeerd" (NL) and "skipped"/"ignored" (EN)
- Added "van deze taak" pattern for Dutch phrasing variations
- Added status detection for skipped tasks (Warning with "Skipped" message)
- All existing patterns remain functional (backward compatible)
- Updated changelog

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-09 16:25:27 +01:00
b4aa7ef2f6 Extend Synology Updates parser for new DSM update available notifications
Extended the parser to recognize DSM update available notifications in addition
to update cancelled and package out-of-date notifications. All variants fall
under same Updates job type for unified monitoring.

Changes:
- Added "new DSM update", "Auto Update has detected", "new version of DSM", "Update & Restore" to detection patterns
- Extended hostname extraction regex to match "detected on HOSTNAME"
- Now recognizes three notification types: update cancelled, packages out-of-date, update available
- All existing patterns remain functional (backward compatible)
- Updated changelog

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-09 16:03:11 +01:00
9576d1047e Extend Synology Updates parser to recognize out-of-date package notifications
Extended the parser to recognize both DSM update cancelled notifications AND
out-of-date package notifications under the same "Updates" job type, as they
can appear together in combined notifications.

Changes:
- Added "Packages on", "out-of-date", "Package Center" to detection patterns
- Extended hostname extraction regex to match "Packages on HOSTNAME" and "running on HOSTNAME"
- Both notification types now fall under same job (backup_type: Updates)
- All existing patterns remain functional (backward compatible)
- Updated changelog

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-09 15:58:08 +01:00
1e3a64a78a Auto-commit local changes before build (2026-02-09 15:46:49) 2026-02-09 15:46:49 +01:00
a05dbab574 Extend Synology DSM update parser with additional detection patterns
Extended the parser to recognize more email variants for Synology DSM
automatic update cancelled notifications while maintaining backward
compatibility with existing patterns.

Changes:
- Added "Automatische DSM-update" and "DSM-update op" to detection patterns
- Extended hostname extraction regex to match "DSM-update op HOSTNAME" and "DSM update on HOSTNAME"
- All existing patterns remain functional (backward compatible)
- Updated changelog

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-09 15:41:20 +01:00
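The extended hostname extraction can be sketched with a single regex covering both the Dutch ("DSM-update op HOSTNAME") and English ("DSM update on HOSTNAME") variants; the pattern is illustrative, not the parser's exact regex:

```python
import re

HOST_RE = re.compile(r"DSM[ -]update (?:op|on)\s+([A-Za-z0-9_.-]+)", re.I)

subjects = [
    "Automatische DSM-update op NAS-HOSTNAME geannuleerd",
    "Automatic DSM update on NAS-HOSTNAME cancelled",
]
hosts = [HOST_RE.search(s).group(1) for s in subjects]
print(hosts)  # ['NAS-HOSTNAME', 'NAS-HOSTNAME']
```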
0827fddaa5 Add Synology DSM update parser and remove customer names from registry
Added parser registry entry for Synology DSM automatic update cancelled
notifications. These are informational messages that don't participate in
schedule learning or reporting logic.

Also removed real customer names from parser registry examples to prevent
customer information from being stored in the codebase. Replaced with
generic placeholders like NAS-HOSTNAME, SERVER-HOSTNAME, VM-HOSTNAME.

Changes:
- Added synology_dsm_update parser entry in registry.py (order 236)
- Parser matches on DSM-update/DSM update in subject and automatic/automatische in body
- Returns backup_software: Synology, backup_type: Updates, informational status
- Replaced customer names in NTFS Auditing example (bouter.nl → example.local)
- Replaced customer names in QNAP example (BETSIES-NAS01 → NAS-HOSTNAME)
- Replaced customer names in NAKIVO example (kuiperbv.nl → VM-HOSTNAME)
- Updated changelog

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-09 15:36:40 +01:00
61b8e97e34 Auto-commit local changes before build (2026-02-09 15:14:18) 2026-02-09 15:14:18 +01:00
9197c311f2 Fix responsive navbar overlapping content on smaller screens
Added dynamic padding adjustment that measures the actual navbar height and
applies it to the main content padding-top. This prevents the navbar from
overlapping page content when it becomes taller on narrow screens.

Changes:
- Removed fixed padding-top: 80px from main content
- Added id="main-content" to main element for JavaScript targeting
- Added JavaScript function that measures navbar.offsetHeight
- Function applies dynamic padding-top with 20px buffer for spacing
- Triggers on: page load, window load, window resize (debounced), navbar collapse toggle
- Includes fallback to 80px if measurement fails
- Updated changelog

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-09 15:11:47 +01:00
26848998e1 Fix test email generator to use correct Veeam format and consistent job name
Changed test emails to use proper Veeam Backup Job format that matches parser
expectations. All test emails now use the same job name "Test-Backup-Job" so
they appear as different runs of the same job, enabling proper status testing.

Changes:
- Switched from multiple backup software to Veeam only for simplicity
- Fixed subject format to: Veeam Backup Job "Test-Backup-Job" finished with Success/WARNING/Failed
- Fixed body format to include: Backup job: Test-Backup-Job
- All 3 emails per set use same job name but different dates
- Added realistic VM objects (VM-APP01, VM-DB01, VM-WEB01) with status details
- Each set shows different failure scenarios for testing
- Updated changelog description

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-09 14:54:42 +01:00
19ef9dc32a Update test email generator with fixed sets and separate buttons
Changed from configurable count input to three separate buttons for
success, warning, and error test emails. Each button generates exactly
3 emails with consistent data for reproducible testing.

Changes:
- Updated routes_settings.py to use fixed email sets instead of random data
- Changed route from /settings/test-emails/generate to /settings/test-emails/generate/<status_type>
- Created three predefined email sets (success, warning, error) with fixed content
- Updated settings.html UI to show three separate buttons instead of count input
- Each set contains 3 emails simulating Veeam, Synology, and NAKIVO backups
- Updated changelog with detailed description

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-09 14:44:19 +01:00
e187bc3fa5 Auto-commit local changes before build (2026-02-09 14:37:36) 2026-02-09 14:37:36 +01:00
96092517b4 Add test email generator for testing and development
Added feature to generate test emails in inbox for testing purposes:
- Simulates backup notifications from Veeam, Synology, and NAKIVO
- Configurable count (1-50 emails)
- Random job names, statuses, and timestamps
- Emails are parser-compatible for testing inbox approval workflow
- Useful for testing orphaned jobs cleanup and other maintenance ops
- Admin-only feature in Settings → Maintenance

Templates include:
- Veeam: Various job statuses with detailed backup info
- Synology: Backup task notifications
- NAKIVO: Job completion reports

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-09 14:37:02 +01:00
08437aff7f Fix audit logging call for orphaned jobs deletion
Added missing 'message' parameter to _log_admin_event call and
converted details dict to JSON string to match the function signature.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-09 14:31:34 +01:00
710aba97e4 Fix foreign key constraint: delete mail_objects before mails
Added deletion of mail_objects before deleting mail_messages to
avoid foreign key constraint violation. The mail_objects table
has a foreign key to mail_messages.

Complete deletion order:
1. Clean up auxiliary tables
2. Unlink mails from jobs
3. Delete mail_objects
4. Delete jobs (cascades to runs)
5. Delete mails

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-09 14:26:53 +01:00
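The child-before-parent ordering can be demonstrated with sqlite3 and foreign keys enabled (simplified schema; the auxiliary-table cleanup of step 1 is omitted from the sketch):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")
con.executescript("""
    CREATE TABLE jobs (id INTEGER PRIMARY KEY);
    CREATE TABLE mail_messages (
        id INTEGER PRIMARY KEY,
        job_id INTEGER REFERENCES jobs(id));
    CREATE TABLE mail_objects (
        id INTEGER PRIMARY KEY,
        mail_id INTEGER REFERENCES mail_messages(id));
    INSERT INTO jobs VALUES (1);
    INSERT INTO mail_messages VALUES (10, 1);
    INSERT INTO mail_objects VALUES (100, 10);
""")

# Deleting mails first fails: mail_objects still references them
blocked = False
try:
    con.execute("DELETE FROM mail_messages")
except sqlite3.IntegrityError:
    blocked = True

# Correct order: unlink, then delete children before parents
con.execute("UPDATE mail_messages SET job_id = NULL")  # 2. unlink mails
con.execute("DELETE FROM mail_objects")                # 3. mail_objects
con.execute("DELETE FROM jobs")                        # 4. jobs (and runs)
con.execute("DELETE FROM mail_messages")               # 5. mails last
print("blocked first attempt:", blocked)
```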
ff4942272f Fix foreign key constraint: unlink mails from jobs before deletion
Added UPDATE to set mail_messages.job_id = NULL before deleting jobs
to avoid foreign key constraint violation. The mail_messages table
has a foreign key to jobs, so we must unlink them first.

Complete correct order:
1. Clean up auxiliary tables
2. Unlink mails from jobs (SET job_id = NULL)
3. Delete jobs (cascades to runs)
4. Delete mails

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-09 14:25:13 +01:00
f332e61288 Fix foreign key constraint error when deleting orphaned jobs
Moved mail deletion to after job deletion to avoid foreign key
constraint violations. The job_runs have a foreign key to
mail_messages, so jobs (and their cascaded runs) must be deleted
first before the mails can be deleted.

Correct order:
1. Clean up auxiliary tables (ticket_job_runs, remark_job_runs, etc)
2. Delete jobs (cascades to runs via ORM)
3. Delete mails (no more foreign key references)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-09 14:14:17 +01:00
82fff08ebb Remove redundant Step 1 text from maintenance card
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-09 14:11:52 +01:00
e932fdf30a Remove direct delete button, enforce preview step
Removed 'Delete orphaned jobs' button from maintenance page to
enforce verification workflow. Users must now:
1. Click 'Preview orphaned jobs' to see the list
2. Verify which jobs will be deleted
3. Click 'Delete All' on the preview page

This prevents accidental deletion without verification.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-09 14:10:33 +01:00
7f8dffa3ae Add preview page for orphaned jobs before deletion
Added verification step before deleting orphaned jobs:
- New GET endpoint /settings/jobs/orphaned to preview the list
- Shows detailed table with job name, backup software/type, customer ID,
  run count, and email count
- "Preview orphaned jobs" button on maintenance page
- Delete button on preview page shows exact count
- Summary shows total jobs, runs, and emails to be deleted

This allows admins to verify which jobs will be deleted before
taking the destructive action.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-09 14:07:12 +01:00
fec28b2bfa Auto-commit local changes before build (2026-02-09 13:49:21) 2026-02-09 13:49:21 +01:00
91062bdb0d Update .last-branch 2026-02-09 13:49:02 +01:00
ff316d653a Auto-commit local changes before build (2026-02-09 13:46:55) 2026-02-09 13:46:55 +01:00
60c7e89dc2 Add cleanup orphaned jobs maintenance option
Added new maintenance option in Settings → Maintenance to delete
jobs that are no longer linked to an existing customer (customer_id
is NULL or customer doesn't exist).

Features:
- Finds all jobs without valid customer link
- Deletes jobs, runs, and related emails permanently
- Cleans up auxiliary tables (ticket_job_runs, remark_job_runs,
  scopes, overrides)
- Provides feedback on deleted items count
- Logs action to audit log

Use case: When customers are removed, their jobs and emails should
be completely removed from the database.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-09 13:46:07 +01:00
b0efa7f21d Remove customer name from Autotask ticket titles
Changed ticket title format from:
  [Backupchecks] Customer Name - Job Name - Status
To:
  [Backupchecks] Job Name - Status

Customer information is already available in the ticket's company
field, making it redundant in the title and causing unnecessarily
long ticket titles.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-09 13:34:21 +01:00
b7875dbf55 Merge v20260209-01-fix-ticket-description into main (v0.1.24) 2026-02-09 13:16:04 +01:00
d400534069 Merge v20260207-02-wiki-documentation into main (v0.1.23) 2026-02-09 13:15:58 +01:00
28f094f80b Merge branches v20260203-01 through v20260205-13 into main
This commit consolidates all development work from the following branch series:
- v20260203-* (13 branches): Initial Autotask integration, graph config UI improvements
- v20260204-* (3 branches): Dashboard redirect setting, additional refinements
- v20260205-* (13 branches): Autotask resolution improvements, changelog restructuring

Key features merged:
- Autotask PSA integration with ticket creation, resolution, and search
- Graph/mail configuration UI improvements with credential testing
- Daily dashboard redirect setting (optional navigation control)
- Changelog restructuring with improved Python structure
- Various bug fixes and UI enhancements

All functionality has been consolidated from the final state of branch
v20260205-13-changelog-python-structure to preserve working features.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-06 13:41:08 +01:00
7693af9306 Merge branch 'v20260204-03-dashboard-redirect-setting' into main 2026-02-06 13:32:36 +01:00
5ed4c41b80 Merge branch 'v20260204-02-performance-optimizations' into main 2026-02-06 13:23:31 +01:00
a910cc4abc Merge branch 'v20260204-01-add-readme-changelog' into main 2026-02-06 13:23:31 +01:00
e12755321a Merge branch 'v20260203-13-autotask-resolution-item-wrapper' into main 2026-02-06 13:23:31 +01:00
240f8b5c90 Merge branch 'v20260203-12-autotask-resolution-v1-casing-fix' into main 2026-02-06 13:23:31 +01:00
02d7bdd5b8 Merge branch 'v20260203-11-autotask-resolution-get-put-required-fields' into main 2026-02-06 13:23:31 +01:00
753c14bb4e Merge branch 'v20260203-10-autotask-resolution-field-aliases' into main 2026-02-06 13:23:31 +01:00
ce245f7d49 Merge branch 'v20260203-09-autotask-resolution-from-note' into main 2026-02-06 13:23:31 +01:00
34ac317607 Merge branch 'v20260203-08-autotask-ticketnote-timezone-suffix' into main 2026-02-06 13:23:31 +01:00
3b087540cb Merge branch 'v20260203-07-autotask-notes-endpoint-fix' into main 2026-02-06 13:23:31 +01:00
e5123952b2 Merge branch 'v20260203-06-autotask-ticketnotes-child-endpoint' into main 2026-02-06 13:23:31 +01:00
4bbde92c8d Merge branch 'v20260203-04-autotask-resolve-user-note' into main 2026-02-06 13:23:31 +01:00
7b3b89f50c Merge branch 'v20260203-03-autotask-resolve-note-verify' into main 2026-02-06 13:23:31 +01:00
52cd75e420 Merge branch 'v20260203-02-autotask-resolve-button-enabled' into main 2026-02-06 13:23:31 +01:00
83d8d85f30 Merge branch 'v20260203-01-autotask-resolve-note' into main 2026-02-06 13:23:31 +01:00
0ddeaf1896 Add migration for performance indexes
The indexes defined in models.py __table_args__ are not automatically
created by the custom migration system. Added migrate_performance_indexes()
to explicitly create the indexes at startup.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-04 21:54:27 +01:00
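The migration described in this commit can be sketched as follows. This is a minimal illustration using stdlib sqlite3 rather than the app's SQLAlchemy stack, and the index names and columns are hypothetical stand-ins for the real `__table_args__` definitions in models.py:

```python
import sqlite3

# Hypothetical DDL list; the real index names/columns live in models.py
# __table_args__ and are not reproduced here.
PERFORMANCE_INDEXES = [
    "CREATE INDEX IF NOT EXISTS ix_job_runs_job_id ON job_runs (job_id)",
    "CREATE INDEX IF NOT EXISTS ix_mail_messages_job_id ON mail_messages (job_id)",
]

def migrate_performance_indexes(conn: sqlite3.Connection) -> None:
    """Create any missing performance indexes at startup.

    IF NOT EXISTS makes the migration idempotent, so it is safe to run
    on every boot, matching the custom migration system's style.
    """
    for ddl in PERFORMANCE_INDEXES:
        conn.execute(ddl)
    conn.commit()
```

Running it twice is harmless, which is exactly why explicit startup creation works where ORM-declared indexes would otherwise be silently skipped on existing databases.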
89b9dd0264 Auto-commit local changes before build (2026-02-04 21:46:27) 2026-02-04 21:46:27 +01:00
f91c081456 Performance optimizations for slow storage environments
- Add database indexes on frequently queried FK columns (JobRun, MailMessage,
  MailObject, TicketScope, RemarkScope)
- Fix N+1 query in override recomputation by batch loading jobs
- Optimize Daily Jobs page with batch queries:
  - Batch load all today's runs in single query
  - Batch infer weekly/monthly schedules for all jobs
  - Batch load ticket/remark indicators

These changes reduce query count by 80-90% on pages like Daily Jobs and Run Checks,
significantly improving performance on systems with slower storage.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-04 21:44:14 +01:00
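The batch-query pattern this commit applies can be sketched in isolation: instead of issuing one runs query per job (the N+1 shape), fetch all of today's runs in a single query and group them in memory. The function below is illustrative, not the actual Daily Jobs code:

```python
from collections import defaultdict

def group_runs_by_job(rows):
    """Group (job_id, run) pairs from one bulk query into a dict.

    One query plus an in-memory pass replaces N per-job queries, which
    is where the 80-90% query-count reduction on list pages comes from.
    """
    runs_by_job = defaultdict(list)
    for job_id, run in rows:
        runs_by_job[job_id].append(run)
    return dict(runs_by_job)
```

The page code then does a dictionary lookup per job rather than a round trip to the database, which matters most on slow storage.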
39bdd49fd0 Add README documentation and Claude changelog
- Fill README.md with comprehensive project documentation
- Add docs/changelog-claude.md for tracking Claude Code changes

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-04 21:30:53 +01:00
5ec64e6a13 Auto-commit local changes before build (2026-02-03 17:16:24) 2026-02-03 17:16:24 +01:00
55c6f7ddd6 Auto-commit local changes before build (2026-02-03 16:50:56) 2026-02-03 16:50:56 +01:00
2667e44830 Auto-commit local changes before build (2026-02-03 16:39:32) 2026-02-03 16:39:32 +01:00
04f6041fe6 Auto-commit local changes before build (2026-02-03 16:17:17) 2026-02-03 16:17:17 +01:00
494f792c0d Auto-commit local changes before build (2026-02-03 16:06:14) 2026-02-03 16:06:14 +01:00
bb804f9a1e Auto-commit local changes before build (2026-02-03 14:29:19) 2026-02-03 14:29:19 +01:00
a4a6a60d45 Auto-commit local changes before build (2026-02-03 14:13:08) 2026-02-03 14:13:08 +01:00
ddc6eaa12a Auto-commit local changes before build (2026-02-03 13:59:08) 2026-02-03 13:59:08 +01:00
f6216b8803 Auto-commit local changes before build (2026-02-03 13:12:53) 2026-02-03 13:12:53 +01:00
fb2651392c Auto-commit local changes before build (2026-02-03 12:28:38) 2026-02-03 12:28:38 +01:00
e3303681e1 Auto-commit local changes before build (2026-02-03 11:06:35) 2026-02-03 11:06:35 +01:00
3c7f4c7926 Auto-commit local changes before build (2026-02-03 10:31:44) 2026-02-03 10:31:44 +01:00
3400af58d7 Auto-commit local changes before build (2026-01-20 13:42:54) 2026-01-20 13:42:54 +01:00
67fb063267 Auto-commit local changes before build (2026-01-20 13:32:55) 2026-01-20 13:32:55 +01:00
ae1865dab3 Auto-commit local changes before build (2026-01-20 13:22:37) 2026-01-20 13:22:37 +01:00
92c67805e5 Auto-commit local changes before build (2026-01-20 13:10:45) 2026-01-20 13:10:45 +01:00
fc0cf1ef96 Auto-commit local changes before build (2026-01-20 12:52:16) 2026-01-20 12:52:16 +01:00
899863a0de Auto-commit local changes before build (2026-01-20 10:44:53) 2026-01-20 10:44:53 +01:00
e4e069a6b3 Auto-commit local changes before build (2026-01-20 10:34:23) 2026-01-20 10:34:23 +01:00
dfca88d3bd Auto-commit local changes before build (2026-01-20 10:28:38) 2026-01-20 10:28:38 +01:00
5c0e1b08aa Auto-commit local changes before build (2026-01-20 10:07:44) 2026-01-20 10:07:44 +01:00
4b506986a6 Auto-commit local changes before build (2026-01-20 09:16:35) 2026-01-20 09:16:35 +01:00
5131d24751 Auto-commit local changes before build (2026-01-20 08:49:15) 2026-01-20 08:49:15 +01:00
63526be592 Auto-commit local changes before build (2026-01-19 16:27:38) 2026-01-19 16:27:38 +01:00
b56cdacf6b Auto-commit local changes before build (2026-01-19 15:59:26) 2026-01-19 15:59:26 +01:00
4b3b6162a0 Auto-commit local changes before build (2026-01-19 15:47:15) 2026-01-19 15:47:15 +01:00
a7a61fdd64 Auto-commit local changes before build (2026-01-19 15:40:00) 2026-01-19 15:40:00 +01:00
8407bf45ab Auto-commit local changes before build (2026-01-19 15:30:36) 2026-01-19 15:30:36 +01:00
0cabd2e0fc Auto-commit local changes before build (2026-01-19 15:10:00) 2026-01-19 15:10:00 +01:00
0c5dee307f Auto-commit local changes before build (2026-01-19 14:50:02) 2026-01-19 14:50:02 +01:00
0500491621 Auto-commit local changes before build (2026-01-19 14:23:56) 2026-01-19 14:23:56 +01:00
890553f23e Auto-commit local changes before build (2026-01-19 14:18:29) 2026-01-19 14:18:29 +01:00
c5ff1e11a3 Auto-commit local changes before build (2026-01-19 14:06:42) 2026-01-19 14:06:42 +01:00
c595c165ed Auto-commit local changes before build (2026-01-19 13:50:00) 2026-01-19 13:50:00 +01:00
d272d12d24 Auto-commit local changes before build (2026-01-19 13:31:09) 2026-01-19 13:31:09 +01:00
2887a021ba Auto-commit local changes before build (2026-01-19 13:20:41) 2026-01-19 13:20:41 +01:00
d5e3734b35 Auto-commit local changes before build (2026-01-19 13:15:08) 2026-01-19 13:15:08 +01:00
07e6630a89 Auto-commit local changes before build (2026-01-19 12:53:24) 2026-01-19 12:53:24 +01:00
dabec03f91 Auto-commit local changes before build (2026-01-19 11:51:58) 2026-01-19 11:51:58 +01:00
36deb77806 Auto-commit local changes before build (2026-01-19 11:20:52) 2026-01-19 11:20:52 +01:00
82bdebb721 Auto-commit local changes before build (2026-01-19 11:11:08) 2026-01-19 11:11:08 +01:00
f8a57efee0 Auto-commit local changes before build (2026-01-16 16:24:35) 2026-01-16 16:24:35 +01:00
46cc5b10ab Auto-commit local changes before build (2026-01-16 16:15:43) 2026-01-16 16:15:43 +01:00
4c18365753 Auto-commit local changes before build (2026-01-16 15:39:16) 2026-01-16 15:39:16 +01:00
4def0aad46 Auto-commit local changes before build (2026-01-16 15:38:11) 2026-01-16 15:38:11 +01:00
9025d70b8e Auto-commit local changes before build (2026-01-16 14:13:31) 2026-01-16 14:13:31 +01:00
ef8d12065b Auto-commit local changes before build (2026-01-16 13:44:34) 2026-01-16 13:44:34 +01:00
25d1962f7b Auto-commit local changes before build (2026-01-16 13:31:20) 2026-01-16 13:31:20 +01:00
487f923064 Auto-commit local changes before build (2026-01-16 13:17:06) 2026-01-16 13:17:06 +01:00
f780bbc399 Auto-commit local changes before build (2026-01-16 12:56:34) 2026-01-16 12:56:34 +01:00
b46b7fbc21 Auto-commit local changes before build (2026-01-16 12:28:07) 2026-01-16 12:28:07 +01:00
9399082231 Auto-commit local changes before build (2026-01-16 10:29:40) 2026-01-16 10:29:40 +01:00
8a16ff010f Auto-commit local changes before build (2026-01-16 10:07:17) 2026-01-16 10:07:17 +01:00
748769afc0 Auto-commit local changes before build (2026-01-16 10:01:42) 2026-01-16 10:01:42 +01:00
abb6780744 Auto-commit local changes before build (2026-01-16 09:04:12) 2026-01-16 09:04:12 +01:00
83a29a7a3c Auto-commit local changes before build (2026-01-15 16:31:32) 2026-01-15 16:31:32 +01:00
66f5a57fe0 Auto-commit local changes before build (2026-01-15 16:17:26) 2026-01-15 16:17:26 +01:00
473044bd67 Auto-commit local changes before build (2026-01-15 16:02:52) 2026-01-15 16:02:52 +01:00
afd45cc568 Auto-commit local changes before build (2026-01-15 15:19:37) 2026-01-15 15:19:37 +01:00
3564bcf62f Auto-commit local changes before build (2026-01-15 15:05:42) 2026-01-15 15:05:42 +01:00
49fd29a6f2 Auto-commit local changes before build (2026-01-15 14:36:50) 2026-01-15 14:36:50 +01:00
49f6d41715 Auto-commit local changes before build (2026-01-15 14:24:54) 2026-01-15 14:24:54 +01:00
186807b098 Auto-commit local changes before build (2026-01-15 14:14:29) 2026-01-15 14:14:29 +01:00
c68b401709 Auto-commit local changes before build (2026-01-15 14:08:59) 2026-01-15 14:08:59 +01:00
5b9b6f4c38 Auto-commit local changes before build (2026-01-15 13:45:53) 2026-01-15 13:45:53 +01:00
981d65c274 Auto-commit local changes before build (2026-01-15 12:44:01) 2026-01-15 12:44:01 +01:00
1a2ca59d16 Auto-commit local changes before build (2026-01-15 12:31:08) 2026-01-15 12:31:08 +01:00
83d487a206 Auto-commit local changes before build (2026-01-15 11:52:52) 2026-01-15 11:52:52 +01:00
490ab1ae34 Auto-commit local changes before build (2026-01-15 11:10:13) 2026-01-15 11:10:13 +01:00
1a64627a4e Auto-commit local changes before build (2026-01-15 10:40:40) 2026-01-15 10:40:40 +01:00
d5fdc9a8d9 Auto-commit local changes before build (2026-01-15 10:21:30) 2026-01-15 10:21:30 +01:00
f6310da575 Auto-commit local changes before build (2026-01-15 10:12:09) 2026-01-15 10:12:09 +01:00
48e7830957 Auto-commit local changes before build (2026-01-15 09:37:33) 2026-01-15 09:37:33 +01:00
777a9b4b31 Auto-commit local changes before build (2026-01-13 17:16:20) 2026-01-13 17:16:20 +01:00
17 changed files with 1147 additions and 82 deletions


@@ -1 +1 @@
v20260209-01-fix-ticket-description
main


@@ -3,6 +3,157 @@ Changelog data structure for Backupchecks
"""
CHANGELOG = [
{
"version": "v0.1.26",
"date": "2026-02-10",
"summary": "This critical bug fix release resolves ticket system display issues where resolved tickets were incorrectly appearing on new runs across multiple pages. The ticket system has been completely transitioned from date-based logic to explicit link-based queries, ensuring resolved tickets stop appearing immediately after resolution while preserving audit trail for historical runs.",
"sections": [
{
"title": "Bug Fixes",
"type": "bugfix",
"subsections": [
{
"subtitle": "Ticket System - Resolved Ticket Display Issues",
"changes": [
"Root Cause: Multiple pages used legacy date-based logic (active_from_date <= run_date AND resolved_at >= run_date) instead of checking explicit ticket_job_runs links",
"Impact: Resolved tickets kept appearing on ALL runs between active_from_date and resolved_at, even runs created after resolution",
"Fixed: Ticket Linking (ticketing_utils.py) - Autotask tickets now propagate to new runs using independent strategy that checks for most recent non-deleted and non-resolved Autotask ticket",
"Fixed: Internal tickets no longer link to new runs after resolution - removed date-based 'open' logic, now only links if COALESCE(ts.resolved_at, t.resolved_at) IS NULL",
"Fixed: Job Details Page - Implemented two-source ticket display: direct links (ticket_job_runs) always shown for audit trail, active window (ticket_scopes) only shown if unresolved",
"Fixed: Run Checks Main Page - Ticket/remark indicators (🎫/💬) now only show for genuinely unresolved tickets, removed date-based logic from existence queries",
"Fixed: Run Checks Popup Modal - Replaced date-based queries in /api/job-runs/<run_id>/alerts with explicit JOIN queries (ticket_job_runs, remark_job_runs)",
"Fixed: Run Checks Popup - Removed unused parameters (run_date, job_id, ui_tz) as they are no longer needed with link-based queries",
"Testing: Temporarily added debug logging to link_open_internal_tickets_to_run (wrote to AuditLog with event_type 'ticket_link_debug'), removed after successful resolution",
"Result: Resolved tickets stop appearing immediately after resolution, consistent behavior across all pages, audit trail preserved for historical runs",
"Result: All queries now use explicit link-based logic with no date comparisons"
]
},
{
"subtitle": "Test Email Generation",
"changes": [
"Reduced test email generation from 3 emails per status to 1 email per status for simpler testing",
"Each button now creates exactly 1 test mail instead of 3"
]
},
{
"subtitle": "User Interface",
"changes": [
"Updated Settings → Maintenance page text for test email generation to match actual behavior",
"Changed description from '3 emails simulating Veeam, Synology, and NAKIVO' to '1 Veeam Backup Job email'",
"Updated button labels from '(3)' to '(1)' on all test email generation buttons"
]
}
]
}
]
},
{
"version": "v0.1.25",
"date": "2026-02-09",
"summary": "This release focuses on parser improvements and maintenance enhancements, adding support for new notification types across Synology and Veeam backup systems while improving system usability with orphaned job cleanup and test email generation features.",
"sections": [
{
"title": "Parser Enhancements",
"type": "feature",
"subsections": [
{
"subtitle": "Synology Parsers",
"changes": [
"Monthly Drive Health Reports: New parser for Synology NAS drive health notifications with Dutch and English support",
"Supports 'Maandelijks schijfintegriteitsrapport' (Dutch) and 'Monthly Drive Health Report' (English) variants",
"Automatic status detection: Healthy/Gezond/No problem detected → Success, otherwise → Warning",
"Extracts hostname from subject or body pattern (Van/From NAS-HOSTNAME)",
"Backup type: 'Health Report', Job name: 'Monthly Drive Health' (informational only, excluded from schedule learning)",
"DSM Update Notifications - Extended Coverage: Added 4 new detection patterns for automatic installation announcements",
"New patterns: 'belangrijke DSM-update', 'kritieke oplossingen', 'wordt automatisch geïnstalleerd', 'is beschikbaar op'",
"Now recognizes 4 notification types: update cancelled, packages out-of-date, new update available, automatic installation scheduled",
"All patterns added to existing lists maintaining full backward compatibility",
"Active Backup for Business - Skipped Tasks: Extended parser to recognize skipped/ignored backup tasks",
"Detects Dutch ('genegeerd') and English ('skipped', 'ignored') status indicators as Warning status",
"Common scenario: Backup skipped because previous backup still running"
]
},
{
"subtitle": "Veeam Parsers",
"changes": [
"Job Not Started Errors: New detection for 'Job did not start on schedule' error notifications",
"Recognizes VBO365 and other Veeam backup types that send plain text error notifications",
"Extracts backup type from subject (e.g., 'Veeam Backup for Microsoft 365')",
"Extracts job name from subject after colon (e.g., 'Backup MDS at Work')",
"Reads error message from plain text body (handles base64 UTF-16 encoding)",
"Sets overall_status to 'Error' for failed-to-start jobs",
"Example message: 'Proxy server was offline at the time the job was scheduled to run.'"
]
}
]
},
{
"title": "Maintenance Improvements",
"type": "feature",
"subsections": [
{
"subtitle": "Orphaned Jobs Cleanup",
"changes": [
"Added 'Cleanup orphaned jobs' option in Settings → Maintenance",
"Removes jobs without valid customer links (useful when customers are deleted)",
"Permanently deletes job records along with all associated emails and job runs",
"'Preview orphaned jobs' button shows detailed list before deletion with email and run counts",
"Safety verification step to prevent accidental deletion"
]
},
{
"subtitle": "Test Email Generation",
"changes": [
"Added 'Generate test emails' feature in Settings → Maintenance",
"Three separate buttons to create fixed test email sets: Success, Warning, Error",
"Each set contains exactly 3 Veeam Backup Job emails with same job name 'Test-Backup-Job'",
"Different dates, objects, and statuses for reproducible testing scenarios",
"Proper status flow testing (success → warning → error progression)"
]
}
]
},
{
"title": "Data Privacy",
"type": "improvement",
"subsections": [
{
"subtitle": "Parser Registry Cleanup",
"changes": [
"Replaced real customer names in parser registry examples with generic placeholders",
"Affected parsers: NTFS Auditing, QNAP Firmware Update, NAKIVO",
"Example format now uses: NAS-HOSTNAME, SERVER-HOSTNAME, VM-HOSTNAME, example.local",
"Ensures no customer information in codebase or version control"
]
},
{
"subtitle": "Autotask Integration",
"changes": [
"Removed customer name from Autotask ticket title for concise display",
"Format changed from '[Backupchecks] Customer - Job Name - Status' to '[Backupchecks] Job Name - Status'",
"Reduces redundancy (customer already visible in ticket company field)"
]
}
]
},
{
"title": "Bug Fixes",
"type": "bugfix",
"subsections": [
{
"subtitle": "User Interface",
"changes": [
"Fixed responsive navbar overlapping page content on smaller screens",
"Implemented dynamic padding adjustment using JavaScript",
"Measures actual navbar height on page load, window resize, and navbar collapse toggle",
"Automatically adjusts main content padding-top to prevent overlap",
"Debounced resize handler for performance"
]
}
]
}
]
},
{
"version": "v0.1.24",
"date": "2026-02-09",


@@ -16,33 +16,27 @@ def api_job_run_alerts(run_id: int):
tickets = []
remarks = []
# Tickets active for this job on this run date (including resolved-on-day)
# Tickets linked to this specific run
# Only show tickets that were explicitly linked via ticket_job_runs
try:
rows = (
db.session.execute(
text(
"""
SELECT t.id,
SELECT DISTINCT t.id,
t.ticket_code,
t.description,
t.start_date,
COALESCE(ts.resolved_at, t.resolved_at) AS resolved_at,
t.resolved_at,
t.active_from_date
FROM tickets t
JOIN ticket_scopes ts ON ts.ticket_id = t.id
WHERE ts.job_id = :job_id
AND t.active_from_date <= :run_date
AND (
COALESCE(ts.resolved_at, t.resolved_at) IS NULL
OR ((COALESCE(ts.resolved_at, t.resolved_at) AT TIME ZONE 'UTC' AT TIME ZONE :ui_tz)::date) >= :run_date
)
JOIN ticket_job_runs tjr ON tjr.ticket_id = t.id
WHERE tjr.job_run_id = :run_id
ORDER BY t.start_date DESC
"""
),
{
"job_id": job.id if job else None,
"run_date": run_date,
"ui_tz": _get_ui_timezone_name(),
"run_id": run_id,
},
)
.mappings()
@@ -71,31 +65,22 @@ def api_job_run_alerts(run_id: int):
except Exception as exc:
return jsonify({"status": "error", "message": str(exc) or "Failed to load tickets."}), 500
# Remarks active for this job on this run date (including resolved-on-day)
# Remarks linked to this specific run
# Only show remarks that were explicitly linked via remark_job_runs
try:
rows = (
db.session.execute(
text(
"""
SELECT r.id, r.body, r.start_date, r.resolved_at, r.active_from_date
SELECT DISTINCT r.id, r.body, r.start_date, r.resolved_at, r.active_from_date
FROM remarks r
JOIN remark_scopes rs ON rs.remark_id = r.id
WHERE rs.job_id = :job_id
AND COALESCE(
r.active_from_date,
((r.start_date AT TIME ZONE 'UTC' AT TIME ZONE :ui_tz)::date)
) <= :run_date
AND (
r.resolved_at IS NULL
OR ((r.resolved_at AT TIME ZONE 'UTC' AT TIME ZONE :ui_tz)::date) >= :run_date
)
JOIN remark_job_runs rjr ON rjr.remark_id = r.id
WHERE rjr.job_run_id = :run_id
ORDER BY r.start_date DESC
"""
),
{
"job_id": job.id if job else None,
"run_date": run_date,
"ui_tz": _get_ui_timezone_name(),
"run_id": run_id,
},
)
.mappings()
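The duplicate-row bug these hunks fix is easy to reproduce in isolation: joining a scopes table multiplies each ticket by its number of matching scope rows (a Cartesian product), while the link-table query with DISTINCT yields exactly one row per ticket. A self-contained sqlite3 sketch with a simplified, hypothetical schema:

```python
import sqlite3

def ticket_codes_via_scopes(conn, job_id):
    # Old shape: the ticket_scopes JOIN returns one row per scope row,
    # so a ticket with several scopes appears several times.
    rows = conn.execute(
        "SELECT t.ticket_code FROM tickets t "
        "JOIN ticket_scopes ts ON ts.ticket_id = t.id WHERE ts.job_id = ?",
        (job_id,),
    ).fetchall()
    return [r[0] for r in rows]

def ticket_codes_via_links(conn, run_id):
    # New shape: explicit run links plus DISTINCT, one row per ticket.
    rows = conn.execute(
        "SELECT DISTINCT t.ticket_code FROM tickets t "
        "JOIN ticket_job_runs tjr ON tjr.ticket_id = t.id "
        "WHERE tjr.job_run_id = ?",
        (run_id,),
    ).fetchall()
    return [r[0] for r in rows]
```

Dropping the unneeded scopes JOIN (rather than only adding DISTINCT) also simplifies the resolved check to `t.resolved_at` alone, since only ticket-level resolution matters in the popup.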


@@ -8,6 +8,35 @@ from ..database import db
from ..models import SystemSettings
def _get_or_create_settings_local():
"""Return SystemSettings, creating a default row if missing.
This module should not depend on star-imported helpers for settings.
Mixed deployments (partial container updates) can otherwise raise a
NameError on /customers when the shared helper is not present.
"""
settings = SystemSettings.query.first()
if settings is None:
settings = SystemSettings(
auto_import_enabled=False,
auto_import_interval_minutes=15,
auto_import_max_items=50,
manual_import_batch_size=50,
auto_import_cutoff_date=datetime.utcnow().date(),
ingest_eml_retention_days=7,
)
db.session.add(settings)
db.session.commit()
return settings
# Explicit imports for robustness across mixed deployments.
from datetime import datetime
from ..database import db
from ..models import SystemSettings
def _get_or_create_settings_local():
"""Return SystemSettings, creating a default row if missing.


@@ -168,23 +168,61 @@ def job_detail(job_id: int):
.all()
)
# Tickets: mark runs that fall within the ticket active window
# Tickets: mark runs that fall within the ticket active window OR have direct links
ticket_rows = []
ticket_open_count = 0
ticket_total_count = 0
# Map of run_id -> list of directly linked ticket codes (for audit trail)
direct_ticket_links = {}
remark_rows = []
remark_open_count = 0
remark_total_count = 0
run_dates = []
run_date_map = {}
run_ids = []
for r in runs:
rd = _to_amsterdam_date(r.run_at) or _to_amsterdam_date(datetime.utcnow())
run_date_map[r.id] = rd
run_ids.append(r.id)
if rd:
run_dates.append(rd)
# Get directly linked tickets for these runs (audit trail - show even if resolved)
if run_ids:
try:
rows = (
db.session.execute(
text(
"""
SELECT tjr.job_run_id, t.ticket_code, t.resolved_at
FROM ticket_job_runs tjr
JOIN tickets t ON t.id = tjr.ticket_id
WHERE tjr.job_run_id = ANY(:run_ids)
"""
),
{"run_ids": run_ids},
)
.mappings()
.all()
)
for rr in rows:
run_id = rr.get("job_run_id")
code = (rr.get("ticket_code") or "").strip()
resolved_at = rr.get("resolved_at")
if run_id not in direct_ticket_links:
direct_ticket_links[run_id] = []
direct_ticket_links[run_id].append({
"ticket_code": code,
"resolved_at": resolved_at,
"is_direct_link": True
})
except Exception:
pass
# Get active (unresolved) tickets for future runs
if run_dates:
min_date = min(run_dates)
max_date = max(run_dates)
@@ -198,14 +236,10 @@ def job_detail(job_id: int):
JOIN ticket_scopes ts ON ts.ticket_id = t.id
WHERE ts.job_id = :job_id
AND t.active_from_date <= :max_date
AND (
COALESCE(ts.resolved_at, t.resolved_at) IS NULL
OR ((COALESCE(ts.resolved_at, t.resolved_at) AT TIME ZONE 'UTC' AT TIME ZONE :ui_tz)::date) >= :min_date
)
AND COALESCE(ts.resolved_at, t.resolved_at) IS NULL
"""
),
{"job_id": job.id, "min_date": min_date,
"ui_tz": _get_ui_timezone_name(), "max_date": max_date},
{"job_id": job.id, "max_date": max_date},
)
.mappings()
.all()
@@ -214,7 +248,12 @@ def job_detail(job_id: int):
active_from = rr.get("active_from_date")
resolved_at = rr.get("resolved_at")
resolved_date = _to_amsterdam_date(resolved_at) if resolved_at else None
ticket_rows.append({"active_from_date": active_from, "resolved_date": resolved_date, "ticket_code": rr.get("ticket_code")})
ticket_rows.append({
"active_from_date": active_from,
"resolved_date": resolved_date,
"ticket_code": rr.get("ticket_code"),
"is_direct_link": False
})
except Exception:
ticket_rows = []
@@ -240,14 +279,10 @@ def job_detail(job_id: int):
r.active_from_date,
((r.start_date AT TIME ZONE 'UTC' AT TIME ZONE :ui_tz)::date)
) <= :max_date
AND (
r.resolved_at IS NULL
OR ((r.resolved_at AT TIME ZONE 'UTC' AT TIME ZONE :ui_tz)::date) >= :min_date
)
AND r.resolved_at IS NULL
"""
),
{"job_id": job.id, "min_date": min_date,
"ui_tz": _get_ui_timezone_name(), "max_date": max_date},
{"job_id": job.id, "max_date": max_date},
)
.mappings()
.all()
@@ -341,11 +376,22 @@ def job_detail(job_id: int):
ticket_codes = []
remark_items = []
# First: add directly linked tickets (audit trail - always show)
if r.id in direct_ticket_links:
for tlink in direct_ticket_links[r.id]:
code = tlink.get("ticket_code", "")
if code and code not in ticket_codes:
ticket_codes.append(code)
has_ticket = True
# Second: add active window tickets (only unresolved)
if rd and ticket_rows:
for tr in ticket_rows:
if tr.get("is_direct_link"):
continue # Skip, already added above
af = tr.get("active_from_date")
resd = tr.get("resolved_date")
if af and af <= rd and (resd is None or resd >= rd):
# Only check active_from, resolved tickets already filtered by query
if af and af <= rd:
has_ticket = True
code = (tr.get("ticket_code") or "").strip()
if code and code not in ticket_codes:
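The two-source rule implemented in this hunk can be stated compactly: directly linked tickets (ticket_job_runs) form the audit trail and are always shown, resolved or not, while active-window tickets are shown only while unresolved and only once the run date reaches active_from_date. A hedged sketch with illustrative names and simplified dict inputs:

```python
import datetime

def visible_ticket_codes(direct_links, window_tickets, run_date):
    """Merge both ticket sources for one run, deduplicating codes."""
    codes = []
    for t in direct_links:  # audit trail: always shown, even if resolved
        code = (t.get("ticket_code") or "").strip()
        if code and code not in codes:
            codes.append(code)
    for t in window_tickets:  # active window: query already filters resolved
        af = t.get("active_from_date")
        code = (t.get("ticket_code") or "").strip()
        if af and af <= run_date and code and code not in codes:
            codes.append(code)
    return codes
```

Because the window query now filters on `COALESCE(ts.resolved_at, t.resolved_at) IS NULL`, the per-run loop only has to compare `active_from_date` against the run date.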


@@ -1068,14 +1068,11 @@ def run_checks_page():
JOIN ticket_scopes ts ON ts.ticket_id = t.id
WHERE ts.job_id = :job_id
AND t.active_from_date <= :run_date
AND (
COALESCE(ts.resolved_at, t.resolved_at) IS NULL
OR ((COALESCE(ts.resolved_at, t.resolved_at) AT TIME ZONE 'UTC' AT TIME ZONE :ui_tz)::date) >= :run_date
)
AND COALESCE(ts.resolved_at, t.resolved_at) IS NULL
LIMIT 1
"""
),
{"job_id": job_id, "run_date": today_local, "ui_tz": ui_tz},
{"job_id": job_id, "run_date": today_local},
).first()
has_active_ticket = bool(t_exists)
@@ -1090,10 +1087,7 @@ def run_checks_page():
r.active_from_date,
((r.start_date AT TIME ZONE 'UTC' AT TIME ZONE :ui_tz)::date)
) <= :run_date
AND (
r.resolved_at IS NULL
OR ((r.resolved_at AT TIME ZONE 'UTC' AT TIME ZONE :ui_tz)::date) >= :run_date
)
AND r.resolved_at IS NULL
LIMIT 1
"""
),
@@ -1464,7 +1458,7 @@ def api_run_checks_create_autotask_ticket():
}
)
subject = f"[Backupchecks] {customer.name} - {job.job_name or ''} - {status_display}"
subject = f"[Backupchecks] {job.job_name or ''} - {status_display}"
description = _compose_autotask_ticket_description(
settings=settings,
job=job,


@@ -124,6 +124,309 @@ def settings_jobs_delete_all():
return redirect(url_for("main.settings"))
@main_bp.route("/settings/jobs/orphaned", methods=["GET"])
@login_required
@roles_required("admin")
def settings_jobs_orphaned():
"""Show list of orphaned jobs for verification before deletion."""
# Find jobs without valid customer
orphaned_jobs = Job.query.outerjoin(Customer, Job.customer_id == Customer.id).filter(
db.or_(
Job.customer_id.is_(None),
Customer.id.is_(None)
)
).order_by(Job.job_name.asc()).all()
# Build list with details
jobs_list = []
for job in orphaned_jobs:
run_count = JobRun.query.filter_by(job_id=job.id).count()
mail_count = JobRun.query.filter_by(job_id=job.id).filter(JobRun.mail_message_id.isnot(None)).count()
jobs_list.append({
"id": job.id,
"job_name": job.job_name or "Unnamed",
"backup_software": job.backup_software or "-",
"backup_type": job.backup_type or "-",
"customer_id": job.customer_id,
"run_count": run_count,
"mail_count": mail_count,
})
return render_template(
"main/settings_orphaned_jobs.html",
orphaned_jobs=jobs_list,
)
@main_bp.route("/settings/jobs/delete-orphaned", methods=["POST"])
@login_required
@roles_required("admin")
def settings_jobs_delete_orphaned():
"""Delete jobs that have no customer (customer_id is NULL or customer does not exist).
Also deletes all related emails from the database since the customer is gone.
"""
try:
# Find jobs without valid customer
orphaned_jobs = Job.query.outerjoin(Customer, Job.customer_id == Customer.id).filter(
db.or_(
Job.customer_id.is_(None),
Customer.id.is_(None)
)
).all()
if not orphaned_jobs:
flash("No orphaned jobs found.", "info")
return redirect(url_for("main.settings", section="maintenance"))
job_count = len(orphaned_jobs)
mail_count = 0
run_count = 0
# Collect mail message ids and run ids for cleanup
mail_message_ids = []
run_ids = []
job_ids = [job.id for job in orphaned_jobs]
for job in orphaned_jobs:
for run in job.runs:
if run.id is not None:
run_ids.append(run.id)
run_count += 1
if run.mail_message_id:
mail_message_ids.append(run.mail_message_id)
# Helper function for safe SQL execution
def _safe_execute(stmt, params):
try:
db.session.execute(stmt, params)
except Exception:
pass
# Clean up auxiliary tables that may not have ON DELETE CASCADE
if run_ids:
from sqlalchemy import text, bindparam
_safe_execute(
text("DELETE FROM ticket_job_runs WHERE job_run_id IN :run_ids").bindparams(
bindparam("run_ids", expanding=True)
),
{"run_ids": run_ids},
)
_safe_execute(
text("DELETE FROM remark_job_runs WHERE job_run_id IN :run_ids").bindparams(
bindparam("run_ids", expanding=True)
),
{"run_ids": run_ids},
)
if job_ids:
from sqlalchemy import text, bindparam
# Clean up scopes
_safe_execute(
text("DELETE FROM ticket_scopes WHERE job_id IN :job_ids").bindparams(
bindparam("job_ids", expanding=True)
),
{"job_ids": job_ids},
)
_safe_execute(
text("DELETE FROM remark_scopes WHERE job_id IN :job_ids").bindparams(
bindparam("job_ids", expanding=True)
),
{"job_ids": job_ids},
)
# Clean up overrides
_safe_execute(
text("DELETE FROM overrides WHERE job_id IN :job_ids").bindparams(
bindparam("job_ids", expanding=True)
),
{"job_ids": job_ids},
)
# Unlink mails from jobs before deleting jobs
# mail_messages.job_id references jobs.id
_safe_execute(
text("UPDATE mail_messages SET job_id = NULL WHERE job_id IN :job_ids").bindparams(
bindparam("job_ids", expanding=True)
),
{"job_ids": job_ids},
)
# Delete mail_objects before deleting mails
# mail_objects.mail_message_id references mail_messages.id
if mail_message_ids:
from sqlalchemy import text, bindparam
_safe_execute(
text("DELETE FROM mail_objects WHERE mail_message_id IN :mail_ids").bindparams(
bindparam("mail_ids", expanding=True)
),
{"mail_ids": mail_message_ids},
)
# Delete all orphaned jobs (runs/objects are cascaded via ORM relationships)
for job in orphaned_jobs:
db.session.delete(job)
# Now delete related mails permanently (customer is gone)
# This must happen AFTER deleting jobs/runs to avoid foreign key constraint violations
if mail_message_ids:
mail_count = len(mail_message_ids)
MailMessage.query.filter(MailMessage.id.in_(mail_message_ids)).delete(synchronize_session=False)
db.session.commit()
flash(
f"Deleted {job_count} orphaned job(s), {run_count} run(s), and {mail_count} email(s).",
"success"
)
_log_admin_event(
event_type="maintenance_delete_orphaned_jobs",
message=f"Deleted {job_count} orphaned jobs, {run_count} runs, and {mail_count} emails",
details=json.dumps({
"jobs_deleted": job_count,
"runs_deleted": run_count,
"mails_deleted": mail_count,
}),
)
except Exception as exc:
db.session.rollback()
print(f"[settings-jobs] Failed to delete orphaned jobs: {exc}")
flash("Failed to delete orphaned jobs.", "danger")
return redirect(url_for("main.settings", section="maintenance"))
@main_bp.route("/settings/test-emails/generate/<status_type>", methods=["POST"])
@login_required
@roles_required("admin")
def settings_generate_test_emails(status_type):
"""Generate test emails in inbox for testing parsers and orphaned jobs cleanup.
Fixed sets for consistent testing and reproducibility.
"""
try:
from datetime import datetime, timedelta
# Fixed test email sets per status type (Veeam only for consistent testing)
# Single email per status for simpler testing
email_sets = {
"success": [
{
"from_address": "veeam@test.local",
"subject": 'Veeam Backup Job "Test-Backup-Job" finished with Success',
"body": """Backup job: Test-Backup-Job
Session details:
Start time: 2026-02-09 01:00:00
End time: 2026-02-09 02:15:00
Total size: 150 GB
Duration: 01:15:00
Processing VM-APP01
Success
Processing VM-DB01
Success
Processing VM-WEB01
Success
All backup operations completed without issues.""",
},
],
"warning": [
{
"from_address": "veeam@test.local",
"subject": 'Veeam Backup Job "Test-Backup-Job" finished with WARNING',
"body": """Backup job: Test-Backup-Job
Session details:
Start time: 2026-02-09 01:00:00
End time: 2026-02-09 02:30:00
Total size: 148 GB
Duration: 01:30:00
Processing VM-APP01
Success
Processing VM-DB01
Warning
Warning: Low free space on target datastore
Processing VM-WEB01
Success
Backup completed but some files were skipped.""",
},
],
"error": [
{
"from_address": "veeam@test.local",
"subject": 'Veeam Backup Job "Test-Backup-Job" finished with Failed',
"body": """Backup job: Test-Backup-Job
Session details:
Start time: 2026-02-09 01:00:00
End time: 2026-02-09 01:15:00
Total size: 0 GB
Duration: 00:15:00
Processing VM-APP01
Failed
Error: Cannot create snapshot: VSS error 0x800423f4
Processing VM-DB01
Success
Processing VM-WEB01
Success
Backup failed. Please check the logs for details.""",
},
],
}
if status_type not in email_sets:
flash("Invalid status type.", "danger")
return redirect(url_for("main.settings", section="maintenance"))
emails = email_sets[status_type]
created_count = 0
now = datetime.utcnow()
for email_data in emails:
mail = MailMessage(
from_address=email_data["from_address"],
subject=email_data["subject"],
text_body=email_data["body"],
html_body=f"<pre>{email_data['body']}</pre>",
received_at=now - timedelta(hours=created_count),
location="inbox",
job_id=None,
)
db.session.add(mail)
created_count += 1
db.session.commit()
flash(f"Generated {created_count} {status_type} test email(s) in inbox.", "success")
_log_admin_event(
event_type="maintenance_generate_test_emails",
message=f"Generated {created_count} {status_type} test emails",
details=json.dumps({"status_type": status_type, "count": created_count}),
)
except Exception as exc:
db.session.rollback()
print(f"[settings-test] Failed to generate test emails: {exc}")
flash("Failed to generate test emails.", "danger")
return redirect(url_for("main.settings", section="maintenance"))
@main_bp.route("/settings/objects/backfill", methods=["POST"])
@login_required
@roles_required("admin")


@@ -50,13 +50,13 @@ PARSER_DEFINITIONS = [
},
"description": "Parses NTFS Auditing file audit report mails (attachment-based HTML reports).",
"example": {
"subject": "Bouter btr-dc001.bouter.nl file audits → 6 ↑ 12",
"from_address": "auditing@bouter.nl",
"subject": "SERVER-HOSTNAME file audits → 6 ↑ 12",
"from_address": "auditing@example.local",
"body_snippet": "(empty body, HTML report in attachment)",
"parsed_result": {
"backup_software": "NTFS Auditing",
"backup_type": "Audit",
"job_name": "btr-dc001.bouter.nl file audits",
"job_name": "SERVER-HOSTNAME file audits",
"objects": [],
},
},
@@ -73,16 +73,68 @@ PARSER_DEFINITIONS = [
},
"description": "Parses QNAP Notification Center firmware update notifications (informational; excluded from reporting and missing logic).",
"example": {
"subject": "[Info][Firmware Update] Notification from your device: BETSIES-NAS01",
"subject": "[Info][Firmware Update] Notification from your device: NAS-HOSTNAME",
"from_address": "notifications@customer.tld",
"body_snippet": "NAS Name: BETSIES-NAS01\n...\nMessage: ...",
"body_snippet": "NAS Name: NAS-HOSTNAME\n...\nMessage: ...",
"parsed_result": {
"backup_software": "QNAP",
"backup_type": "Firmware Update",
"job_name": "Firmware Update",
"overall_status": "Warning",
"objects": [
{"name": "BETSIES-NAS01", "status": "Warning", "error_message": None}
{"name": "NAS-HOSTNAME", "status": "Warning", "error_message": None}
],
},
},
},
{
"name": "synology_dsm_update",
"backup_software": "Synology",
"backup_types": ["Updates"],
"order": 236,
"enabled": True,
"match": {
"subject_contains_any": ["DSM-update", "DSM update"],
"body_contains_any": ["automatische DSM-update", "automatic DSM update", "Automatic update of DSM"],
},
"description": "Parses Synology DSM automatic update cancelled notifications (informational; excluded from reporting and missing logic).",
"example": {
"subject": "Synology NAS-HOSTNAME - Automatische DSM-update op NAS-HOSTNAME is geannuleerd door het systeem",
"from_address": "backup@example.local",
"body_snippet": "Het systeem heeft de automatische DSM-update op NAS-HOSTNAME geannuleerd...",
"parsed_result": {
"backup_software": "Synology",
"backup_type": "Updates",
"job_name": "Synology Automatic Update",
"overall_status": "Warning",
"objects": [
{"name": "NAS-HOSTNAME", "status": "Warning"}
],
},
},
},
{
"name": "synology_drive_health",
"backup_software": "Synology",
"backup_types": ["Health Report"],
"order": 237,
"enabled": True,
"match": {
"subject_contains_any": ["schijfintegriteitsrapport", "Drive Health Report"],
"body_contains_any": ["health of the drives", "integriteitsrapport van de schijven"],
},
"description": "Parses Synology monthly drive health reports (informational; excluded from reporting and missing logic).",
"example": {
"subject": "[NAS-HOSTNAME] Monthly Drive Health Report on NAS-HOSTNAME - Healthy",
"from_address": "nas@example.local",
"body_snippet": "The following is your monthly report regarding the health of the drives on NAS-HOSTNAME. No problem detected with the drives in DSM.",
"parsed_result": {
"backup_software": "Synology",
"backup_type": "Health Report",
"job_name": "Monthly Drive Health",
"overall_status": "Success",
"objects": [
{"name": "NAS-HOSTNAME", "status": "Success"}
],
},
},
@@ -383,16 +435,16 @@ PARSER_DEFINITIONS = [
},
"description": "Parses NAKIVO Backup & Replication reports for VMware backup jobs.",
"example": {
"subject": '"exchange01.kuiperbv.nl" job: Successful',
"subject": '"VM-HOSTNAME" job: Successful',
"from_address": "NAKIVO Backup & Replication <administrator@customer.local>",
"body_snippet": "Job Run Report... Backup job for VMware ... Successful",
"parsed_result": {
"backup_software": "NAKIVO",
"backup_type": "Backup job for VMware",
"job_name": "exchange01.kuiperbv.nl",
"job_name": "VM-HOSTNAME",
"objects": [
{
"name": "exchange01.kuiperbv.nl",
"name": "VM-HOSTNAME",
"status": "Success",
"error_message": "",
}


@@ -18,10 +18,23 @@ DSM_UPDATE_CANCELLED_PATTERNS = [
"Automatische update van DSM is geannuleerd",
"Automatic DSM update was cancelled",
"Automatic update of DSM was cancelled",
"Automatische DSM-update",
"DSM-update op",
"Packages on",
"out-of-date",
"Package Center",
"new DSM update",
"Auto Update has detected",
"new version of DSM",
"Update & Restore",
"belangrijke DSM-update",
"kritieke oplossingen",
"wordt automatisch geïnstalleerd",
"is beschikbaar op",
]
_DSM_UPDATE_CANCELLED_HOST_RE = re.compile(
r"\b(?:geannuleerd\s+op|cancelled\s+on)\s+(?P<host>[A-Za-z0-9._-]+)\b",
r"\b(?:geannuleerd\s+op|cancelled\s+on|DSM-update\s+op|DSM\s+update\s+on|Packages\s+on|running\s+on|detected\s+on)\s+(?P<host>[A-Za-z0-9._-]+)\b",
re.I,
)
@@ -60,6 +73,75 @@ def _parse_synology_dsm_update_cancelled(subject: str, text: str) -> Tuple[bool,
return True, result, objects
# --- Synology Drive Health Report (informational, excluded from reporting) ---
DRIVE_HEALTH_PATTERNS = [
"schijfintegriteitsrapport",
"Drive Health Report",
"Monthly Drive Health",
"health of the drives",
"integriteitsrapport van de schijven",
]
_DRIVE_HEALTH_SUBJECT_RE = re.compile(
r"\b(?:schijfintegriteitsrapport\s+over|Drive\s+Health\s+Report\s+on)\s+(?P<host>[A-Za-z0-9._-]+)",
re.I,
)
_DRIVE_HEALTH_FROM_RE = re.compile(r"\b(?:Van|From)\s+(?P<host>[A-Za-z0-9._-]+)\b", re.I)
_DRIVE_HEALTH_STATUS_HEALTHY_RE = re.compile(
r"\b(?:Gezond|Healthy|geen\s+problemen\s+gedetecteerd|No\s+problem\s+detected)\b",
re.I,
)
def _is_synology_drive_health(subject: str, text: str) -> bool:
haystack = f"{subject}\n{text}".lower()
return any(p.lower() in haystack for p in DRIVE_HEALTH_PATTERNS)
def _parse_synology_drive_health(subject: str, text: str) -> Tuple[bool, Dict, List[Dict]]:
haystack = f"{subject}\n{text}"
host = ""
# Try to extract hostname from subject first
m = _DRIVE_HEALTH_SUBJECT_RE.search(subject or "")
if m:
host = (m.group("host") or "").strip()
# Fallback: extract from body "Van/From NAS-NAME"
if not host:
m = _DRIVE_HEALTH_FROM_RE.search(text or "")
if m:
host = (m.group("host") or "").strip()
# Determine status based on health indicators
overall_status = "Success"
overall_message = "Healthy"
if not _DRIVE_HEALTH_STATUS_HEALTHY_RE.search(haystack):
# If we don't find healthy indicators, mark as Warning
overall_status = "Warning"
overall_message = "Drive health issue detected"
# Informational job: show in Run Checks, but do not participate in schedules / reporting.
result: Dict = {
"backup_software": "Synology",
"backup_type": "Health Report",
"job_name": "Monthly Drive Health",
"overall_status": overall_status,
"overall_message": overall_message + (f" ({host})" if host else ""),
}
objects: List[Dict] = []
if host:
objects.append({"name": host, "status": overall_status})
return True, result, objects
_BR_RE = re.compile(r"<\s*br\s*/?\s*>", re.I)
_TAG_RE = re.compile(r"<[^>]+>")
_WS_RE = re.compile(r"[\t\r\f\v ]+")
@@ -176,12 +258,14 @@ _ABB_SUBJECT_RE = re.compile(r"\bactive\s+backup\s+for\s+business\b", re.I)
# Examples (NL):
# "De back-uptaak vSphere-Task-1 op KANTOOR-NEW is voltooid."
# "Virtuele machine back-uptaak vSphere-Task-1 op KANTOOR-NEW is gedeeltelijk voltooid."
# "back-uptaak vSphere-Task-1 op KANTOOR-NEW is genegeerd"
# Examples (EN):
# "The backup task vSphere-Task-1 on KANTOOR-NEW has completed."
# "Virtual machine backup task vSphere-Task-1 on KANTOOR-NEW partially completed."
# "backup task vSphere-Task-1 on KANTOOR-NEW was skipped"
_ABB_COMPLETED_RE = re.compile(
r"\b(?:virtuele\s+machine\s+)?(?:de\s+)?back-?up\s*taak\s+(?P<job>.+?)\s+op\s+(?P<host>[A-Za-z0-9._-]+)\s+is\s+(?P<status>voltooid|gedeeltelijk\s+voltooid)\b"
r"|\b(?:virtual\s+machine\s+)?(?:the\s+)?back-?up\s+task\s+(?P<job_en>.+?)\s+on\s+(?P<host_en>[A-Za-z0-9._-]+)\s+(?:is\s+)?(?P<status_en>completed|finished|has\s+completed|partially\s+completed)\b",
r"\b(?:virtuele\s+machine\s+)?(?:de\s+)?back-?up\s*(?:taak|job)\s+(?:van\s+deze\s+taak\s+)?(?P<job>.+?)\s+op\s+(?P<host>[A-Za-z0-9._-]+)\s+is\s+(?P<status>voltooid|gedeeltelijk\s+voltooid|genegeerd)\b"
r"|\b(?:virtual\s+machine\s+)?(?:the\s+)?back-?up\s+(?:task|job)\s+(?P<job_en>.+?)\s+on\s+(?P<host_en>[A-Za-z0-9._-]+)\s+(?:is\s+|was\s+)?(?P<status_en>completed|finished|has\s+completed|partially\s+completed|skipped|ignored)\b",
re.I,
)
@@ -233,6 +317,11 @@ def _parse_active_backup_for_business(subject: str, text: str) -> Tuple[bool, Di
overall_status = "Warning"
overall_message = "Partially completed"
# "genegeerd" / "skipped" / "ignored" should be treated as Warning
if "genegeerd" in status_raw or "skipped" in status_raw or "ignored" in status_raw:
overall_status = "Warning"
overall_message = "Skipped"
# Explicit failure wording overrides everything
if _ABB_FAILED_RE.search(haystack):
overall_status = "Error"
@@ -489,6 +578,12 @@ def try_parse_synology(msg: MailMessage) -> Tuple[bool, Dict, List[Dict]]:
if ok:
return True, result, objects
# Drive Health Report (informational; no schedule; excluded from reporting)
if _is_synology_drive_health(subject, text):
ok, result, objects = _parse_synology_drive_health(subject, text)
if ok:
return True, result, objects
# DSM Account Protection (informational; no schedule)
if _is_synology_account_protection(subject, text):
ok, result, objects = _parse_account_protection(subject, text)


@@ -1177,6 +1177,38 @@ def try_parse_veeam(msg: MailMessage) -> Tuple[bool, Dict, List[Dict]]:
}
return True, result, []
# Job did not start on schedule: special error notification (no objects, plain text body).
# Example subject: "[Veeam Backup for Microsoft 365] Job did not start on schedule: Backup MDS at Work"
subject_lower = subject.lower()
if 'job did not start on schedule' in subject_lower:
# Extract backup type from subject (e.g., "Veeam Backup for Microsoft 365")
backup_type = None
for candidate in VEEAM_BACKUP_TYPES:
if candidate.lower() in subject_lower:
backup_type = candidate
break
if not backup_type:
backup_type = "Backup Job"
# Extract job name after the colon (e.g., "Backup MDS at Work")
job_name = None
m_job = re.search(r'job did not start on schedule:\s*(.+)$', subject, re.IGNORECASE)
if m_job:
job_name = (m_job.group(1) or '').strip()
# Get overall message from text_body (can be base64 encoded)
text_body = (getattr(msg, 'text_body', None) or '').strip()
overall_message = text_body if text_body else 'Job did not start on schedule'
result = {
'backup_software': 'Veeam',
'backup_type': backup_type,
'job_name': job_name or 'Unknown Job',
'overall_status': 'Error',
'overall_message': overall_message,
}
return True, result, []
# Configuration Job detection (may not have object details)
subj_lower = subject.lower()
is_config_job = ('backup configuration job' in subj_lower) or ('configuration backup for' in html_lower)


@@ -170,27 +170,23 @@ def link_open_internal_tickets_to_run(*, run: JobRun, job: Job) -> None:
ui_tz = _get_ui_timezone_name()
run_date = _to_ui_date(getattr(run, "run_at", None)) or _to_ui_date(datetime.utcnow())
# Find open tickets scoped to this job for the run date window.
# This matches the logic used by Job Details and Run Checks indicators.
# Find open (unresolved) tickets scoped to this job.
rows = []
try:
rows = (
db.session.execute(
text(
"""
SELECT t.id, t.ticket_code
SELECT t.id, t.ticket_code, t.resolved_at, ts.resolved_at as scope_resolved_at
FROM tickets t
JOIN ticket_scopes ts ON ts.ticket_id = t.id
WHERE ts.job_id = :job_id
AND t.active_from_date <= :run_date
AND (
COALESCE(ts.resolved_at, t.resolved_at) IS NULL
OR ((COALESCE(ts.resolved_at, t.resolved_at) AT TIME ZONE 'UTC' AT TIME ZONE :ui_tz)::date) >= :run_date
)
AND COALESCE(ts.resolved_at, t.resolved_at) IS NULL
ORDER BY t.start_date DESC, t.id DESC
"""
),
{"job_id": int(job.id), "run_date": run_date, "ui_tz": ui_tz},
{"job_id": int(job.id), "run_date": run_date},
)
.fetchall()
)
@@ -201,7 +197,7 @@ def link_open_internal_tickets_to_run(*, run: JobRun, job: Job) -> None:
return
# Link all open tickets to this run (idempotent)
for tid, _code in rows:
for tid, code, t_resolved, ts_resolved in rows:
if not TicketJobRun.query.filter_by(ticket_id=int(tid), job_run_id=int(run.id)).first():
db.session.add(TicketJobRun(ticket_id=int(tid), job_run_id=int(run.id), link_source="inherit"))
@@ -213,20 +209,49 @@ def link_open_internal_tickets_to_run(*, run: JobRun, job: Job) -> None:
except Exception:
pass
# Strategy 1: Use internal ticket code to find matching Autotask-linked run
# The query above only returns unresolved tickets, so we can safely propagate.
try:
# Use the newest ticket code to find a matching prior Autotask-linked run.
newest_code = (rows[0][1] or "").strip()
if not newest_code:
return
# rows format: (tid, code, t_resolved, ts_resolved)
newest_code = (rows[0][1] or "").strip() if rows else ""
if newest_code:
prior = (
JobRun.query.filter(JobRun.job_id == job.id)
.filter(JobRun.autotask_ticket_id.isnot(None))
.filter(JobRun.autotask_ticket_number == newest_code)
.order_by(JobRun.id.desc())
.first()
)
if prior and getattr(prior, "autotask_ticket_id", None):
run.autotask_ticket_id = prior.autotask_ticket_id
run.autotask_ticket_number = prior.autotask_ticket_number
run.autotask_ticket_created_at = getattr(prior, "autotask_ticket_created_at", None)
run.autotask_ticket_created_by_user_id = getattr(prior, "autotask_ticket_created_by_user_id", None)
return
except Exception:
pass
# Strategy 2: Direct Autotask propagation (independent of internal ticket status)
# Find the most recent non-deleted, non-resolved Autotask ticket for this job.
try:
prior = (
JobRun.query.filter(JobRun.job_id == job.id)
.filter(JobRun.autotask_ticket_id.isnot(None))
.filter(JobRun.autotask_ticket_number == newest_code)
.filter(JobRun.autotask_ticket_deleted_at.is_(None))
.order_by(JobRun.id.desc())
.first()
)
if prior and getattr(prior, "autotask_ticket_id", None):
# Check if the internal ticket is resolved (Autotask tickets are resolved via internal Ticket)
ticket_number = (getattr(prior, "autotask_ticket_number", None) or "").strip()
if ticket_number:
internal_ticket = Ticket.query.filter_by(ticket_code=ticket_number).first()
if internal_ticket and getattr(internal_ticket, "resolved_at", None):
# Ticket is resolved, don't propagate
return
# Ticket is not deleted and not resolved, propagate it
run.autotask_ticket_id = prior.autotask_ticket_id
run.autotask_ticket_number = prior.autotask_ticket_number
run.autotask_ticket_created_at = getattr(prior, "autotask_ticket_created_at", None)


@@ -197,7 +197,7 @@
</div>
</nav>
<main class="{% block main_class %}container content-container{% endblock %}" style="padding-top: 80px;">
<main class="{% block main_class %}container content-container{% endblock %}" id="main-content">
{% with messages = get_flashed_messages(with_categories=true) %}
{% if messages %}
<div class="mb-3">
@@ -216,6 +216,58 @@
<script src="https://cdn.jsdelivr.net/npm/bootstrap@5.3.3/dist/js/bootstrap.bundle.min.js"></script>
<script>
// Dynamic navbar height adjustment
(function () {
function adjustContentPadding() {
try {
var navbar = document.querySelector('.navbar.fixed-top');
var mainContent = document.getElementById('main-content');
if (!navbar || !mainContent) return;
// Get actual navbar height
var navbarHeight = navbar.offsetHeight;
// Add small buffer (20px) for visual spacing
var paddingTop = navbarHeight + 20;
// Apply padding to main content
mainContent.style.paddingTop = paddingTop + 'px';
} catch (e) {
// Fallback to 80px if something goes wrong
var mainContent = document.getElementById('main-content');
if (mainContent) {
mainContent.style.paddingTop = '80px';
}
}
}
// Run on page load
if (document.readyState === 'loading') {
document.addEventListener('DOMContentLoaded', adjustContentPadding);
} else {
adjustContentPadding();
}
// Run after navbar is fully rendered
window.addEventListener('load', adjustContentPadding);
// Run on window resize
var resizeTimeout;
window.addEventListener('resize', function () {
clearTimeout(resizeTimeout);
resizeTimeout = setTimeout(adjustContentPadding, 100);
});
// Run when navbar collapse is toggled
var navbarCollapse = document.getElementById('navbarNav');
if (navbarCollapse) {
navbarCollapse.addEventListener('shown.bs.collapse', adjustContentPadding);
navbarCollapse.addEventListener('hidden.bs.collapse', adjustContentPadding);
}
})();
</script>
<script>
(function () {
function isOverflowing(el) {


@@ -549,6 +549,36 @@
</div>
</div>
<div class="col-12 col-lg-6">
<div class="card h-100 border-warning">
<div class="card-header bg-warning">Cleanup orphaned jobs</div>
<div class="card-body">
<p class="mb-3">Delete jobs that are no longer linked to an existing customer. Related emails and runs will be <strong>permanently deleted</strong> from the database.</p>
<a href="{{ url_for('main.settings_jobs_orphaned') }}" class="btn btn-warning">Preview orphaned jobs</a>
</div>
</div>
</div>
<div class="col-12 col-lg-6">
<div class="card h-100 border-info">
<div class="card-header bg-info text-white">Generate test emails</div>
<div class="card-body">
<p class="mb-3">Generate Veeam test emails in the inbox for testing parsers and maintenance operations. Each button creates 1 Veeam Backup Job email with the specified status.</p>
<div class="d-flex flex-column gap-2">
<form method="post" action="{{ url_for('main.settings_generate_test_emails', status_type='success') }}">
<button type="submit" class="btn btn-success w-100">Generate success email (1)</button>
</form>
<form method="post" action="{{ url_for('main.settings_generate_test_emails', status_type='warning') }}">
<button type="submit" class="btn btn-warning w-100">Generate warning email (1)</button>
</form>
<form method="post" action="{{ url_for('main.settings_generate_test_emails', status_type='error') }}">
<button type="submit" class="btn btn-danger w-100">Generate error email (1)</button>
</form>
</div>
</div>
</div>
</div>
<div class="col-12 col-lg-6">
<div class="card h-100 border-danger">
<div class="card-header bg-danger text-white">Jobs maintenance</div>


@@ -0,0 +1,87 @@
{% extends "layout/base.html" %}
{% block title %}Orphaned Jobs Preview{% endblock %}
{% block content %}
<div class="container-fluid py-4">
<div class="d-flex justify-content-between align-items-center mb-4">
<div>
<h2>Orphaned Jobs Preview</h2>
<p class="text-muted mb-0">Jobs without a valid customer link</p>
</div>
<a href="{{ url_for('main.settings', section='maintenance') }}" class="btn btn-outline-secondary">Back to Settings</a>
</div>
{% if orphaned_jobs %}
<div class="alert alert-warning">
<strong>⚠️ Warning:</strong> Found {{ orphaned_jobs|length }} orphaned job(s). Review the list below before deleting.
</div>
<div class="card mb-4">
<div class="card-header d-flex justify-content-between align-items-center">
<span>Orphaned Jobs List</span>
<form method="post" action="{{ url_for('main.settings_jobs_delete_orphaned') }}" onsubmit="return confirm('Delete all {{ orphaned_jobs|length }} orphaned jobs and their emails? This cannot be undone.');">
<button type="submit" class="btn btn-sm btn-danger">Delete All ({{ orphaned_jobs|length }} jobs)</button>
</form>
</div>
<div class="card-body p-0">
<div class="table-responsive">
<table class="table table-hover mb-0">
<thead>
<tr>
<th>Job Name</th>
<th>Backup Software</th>
<th>Backup Type</th>
<th>Customer ID</th>
<th class="text-end">Runs</th>
<th class="text-end">Emails</th>
</tr>
</thead>
<tbody>
{% for job in orphaned_jobs %}
<tr>
<td>{{ job.job_name }}</td>
<td>{{ job.backup_software }}</td>
<td>{{ job.backup_type }}</td>
<td>
{% if job.customer_id %}
<span class="badge bg-danger">{{ job.customer_id }} (deleted)</span>
{% else %}
<span class="badge bg-secondary">NULL</span>
{% endif %}
</td>
<td class="text-end">{{ job.run_count }}</td>
<td class="text-end">{{ job.mail_count }}</td>
</tr>
{% endfor %}
</tbody>
<tfoot>
<tr class="table-light">
<td colspan="4"><strong>Total</strong></td>
<td class="text-end"><strong>{{ orphaned_jobs|sum(attribute='run_count') }}</strong></td>
<td class="text-end"><strong>{{ orphaned_jobs|sum(attribute='mail_count') }}</strong></td>
</tr>
</tfoot>
</table>
</div>
</div>
</div>
<div class="alert alert-info">
<strong>What will be deleted:</strong>
<ul class="mb-0">
<li>{{ orphaned_jobs|length }} job(s)</li>
<li>{{ orphaned_jobs|sum(attribute='run_count') }} job run(s)</li>
<li>{{ orphaned_jobs|sum(attribute='mail_count') }} email(s)</li>
<li>All related data (backup objects, ticket/remark links, scopes, overrides)</li>
</ul>
</div>
{% else %}
<div class="alert alert-success">
<strong>✅ No orphaned jobs found.</strong>
<p class="mb-0">All jobs are properly linked to existing customers.</p>
</div>
{% endif %}
</div>
{% endblock %}


@@ -2,10 +2,48 @@
This file documents all changes made to this project via Claude Code.
## [2026-02-10]
### Fixed
- Fixed Autotask ticket not being automatically linked to new runs when internal ticket is resolved by implementing independent Autotask propagation strategy (now checks for most recent non-deleted and non-resolved Autotask ticket on job regardless of internal ticket status, ensuring PSA ticket reference persists across runs until explicitly resolved or deleted)
- Fixed internal and Autotask tickets being linked to new runs even after being resolved by removing date-based "open" logic from ticket query (tickets now only link to new runs if they are genuinely unresolved, not based on run date comparisons)
- Fixed Job Details page showing resolved tickets for ALL runs by implementing two-source ticket display: directly linked tickets (via ticket_job_runs) are always shown for audit trail, while active window tickets (via scope query) are only shown if unresolved, preserving historical ticket links while preventing resolved tickets from appearing on new runs
- Fixed Run Checks page showing resolved ticket indicators by removing date-based logic from ticket/remark existence queries (tickets and remarks now only show indicators if genuinely unresolved)
- Fixed Run Checks popup showing resolved tickets for runs where they were never linked by replacing date-based ticket/remark queries in `/api/job-runs/<run_id>/alerts` endpoint with explicit link-based queries (now only shows tickets/remarks that were actually linked to the specific run via ticket_job_runs/remark_job_runs tables, completing the transition from date-based to explicit-link ticket system)
- **HOTFIX**: Fixed Run Checks popup showing duplicate tickets (same ticket repeated multiple times) by removing unnecessary JOIN with ticket_scopes/remark_scopes tables and adding DISTINCT to prevent duplicate rows (root cause: tickets with multiple scopes created multiple result rows for same ticket via Cartesian product)
### Changed
- Added debug logging to ticket linking function to troubleshoot resolved ticket propagation issues (writes to AuditLog table with event_type "ticket_link_debug", visible on Logging page, logs EVERY run import to show whether tickets were found and their resolved_at status, uses commit instead of flush to ensure persistence) - **LATER REMOVED** after ticket system was fixed
- Reduced test email generation from 3 emails per status to 1 email per status for simpler testing (each button now creates exactly 1 test mail instead of 3)
- Updated Settings Maintenance page text to reflect that test emails are Veeam only and 1 per button (changed from "3 emails simulating Veeam, Synology, and NAKIVO" to "1 Veeam Backup Job email" per status button)
### Removed
- Removed debug logging from ticket linking function after successfully resolving all ticket propagation issues (the logging was temporarily added to troubleshoot why resolved tickets kept appearing on new runs, wrote to AuditLog with event_type "ticket_link_debug" showing ticket_id, code, resolved_at status for every run import, debug code preserved in backupchecks-system.md documentation for future use if similar issues arise)
### Release
- **v0.1.26** - Official release consolidating all ticket system bug fixes from 2026-02-10 (see docs/changelog.md and changelog.py for customer-facing release notes)
## [2026-02-09]
### Added
- Extended Veeam parser to recognize "Job did not start on schedule" error notifications for Veeam Backup for Microsoft 365 (and other Veeam backup types) with job name extraction from subject and error message from plain text body (proxy server offline, scheduled run failed)
- Added parser for Synology monthly drive health reports (backup software: Synology, backup type: Health Report, job name: Monthly Drive Health, informational only, no schedule learning) with support for both Dutch and English notifications ("schijfintegriteitsrapport"/"Drive Health Report") and automatic status detection (Healthy/Gezond → Success, problems → Warning)
- Added "Cleanup orphaned jobs" maintenance option in Settings → Maintenance to permanently delete jobs without valid customer links, together with their associated emails/runs, from the database (useful when customers are removed)
- Added "Preview orphaned jobs" button to show detailed list of jobs to be deleted with run/email counts before confirming deletion (verification step for safety)
- Added "Generate test emails" feature in Settings → Maintenance with three separate buttons to create fixed test email sets (success/warning/error) in inbox for testing parsers and maintenance operations (each set contains exactly 3 Veeam Backup Job emails with the same job name "Test-Backup-Job" and different dates/objects/statuses for reproducible testing and proper status flow testing)
- Added parser registry entry for Synology DSM automatic update cancelled notifications (backup software: Synology, backup type: Updates, informational only, no schedule learning)
- Extended Synology DSM update parser with additional detection patterns ("Automatische DSM-update", "DSM-update op", "Packages on", "out-of-date", "Package Center", "new DSM update", "Auto Update has detected", "Update & Restore", "belangrijke DSM-update", "kritieke oplossingen", "wordt automatisch geïnstalleerd", "is beschikbaar op") and hostname extraction regex to recognize DSM update cancelled, out-of-date packages, new update available, and automatic installation announcements under same Updates job type while maintaining backward compatibility with existing patterns
- Extended Synology Active Backup for Business parser to recognize skipped/ignored backup tasks ("genegeerd", "skipped", "ignored") as Warning status when backup was skipped due to previous backup still running
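The pattern-based detection described in the DSM update entries above can be sketched as follows. This is a simplified illustration, not the production parser: the pattern list is a subset, and the sample notification text is made up from the documented phrasing.

```python
import re

# Subset of the detection patterns (illustrative, not the full production list).
DSM_UPDATE_PATTERNS = [
    "Automatische DSM-update",
    "new DSM update",
    "out-of-date",
]

# Hostname follows a connector phrase such as "cancelled on" / "DSM-update op".
_HOST_RE = re.compile(
    r"\b(?:geannuleerd\s+op|cancelled\s+on|DSM-update\s+op|Packages\s+on)"
    r"\s+(?P<host>[A-Za-z0-9._-]+)\b",
    re.I,
)

def detect_dsm_update(subject: str, body: str) -> tuple[bool, str]:
    """Return (matched, hostname) for a DSM update notification."""
    haystack = f"{subject}\n{body}"
    if not any(p.lower() in haystack.lower() for p in DSM_UPDATE_PATTERNS):
        return False, ""
    m = _HOST_RE.search(haystack)
    return True, (m.group("host") if m else "")

matched, host = detect_dsm_update(
    "Synology NAS - DSM update",
    "De automatische DSM-update op NAS-HOSTNAME is geannuleerd.",
)
```

Substring matching keeps the check cheap and tolerant of Dutch/English variants; the single hostname regex covers all connector phrases so new patterns only need a new alternative, not a new extraction path.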
### Changed
- Updated `docs/changelog.md` with comprehensive v0.1.25 release notes consolidating all changes from 2026-02-09 (Parser Enhancements for Synology and Veeam, Maintenance Improvements, Data Privacy, Bug Fixes)
- Updated `containers/backupchecks/src/backend/app/changelog.py` with v0.1.25 entry in Python structure for website display (4 sections with subsections matching changelog.md content)
- Removed customer name from Autotask ticket title to keep titles concise (format changed from "[Backupchecks] Customer - Job Name - Status" to "[Backupchecks] Job Name - Status")
- Replaced real customer names in parser registry examples with generic placeholders (NTFS Auditing, QNAP Firmware Update, NAKIVO) to prevent customer information in codebase
### Fixed
- Fixed Autotask ticket description being set to NULL when resolving tickets via `update_ticket_resolution_safe` by adding "description" to the optional_fields list, ensuring the original description is preserved during PUT operations
- Fixed responsive navbar overlapping page content on smaller screens by implementing dynamic padding adjustment (JavaScript measures actual navbar height and adjusts main content padding-top automatically on page load, window resize, and navbar collapse toggle events)
### Changed
- Updated `docs/changelog.md` with comprehensive v0.1.23 release notes consolidating all changes from 2026-02-06 through 2026-02-08 (Documentation System, Audit Logging, Timezone-Aware Display, Autotask Improvements, Environment Identification, Bug Fixes)


@@ -1,3 +1,149 @@
## v0.1.26
This critical bug-fix release resolves ticket system display issues where resolved tickets were incorrectly appearing on new runs across multiple pages. The ticket system has been fully transitioned from date-based logic to explicit link-based queries, ensuring resolved tickets stop appearing immediately after resolution while preserving the audit trail for historical runs.
### Bug Fixes
**Ticket System - Resolved Ticket Display Issues:**
*Root Cause:*
- Multiple pages used legacy date-based logic to determine if tickets should be displayed
- Queries checked if `active_from_date <= run_date` and `resolved_at >= run_date` instead of checking explicit `ticket_job_runs` links
- Result: Resolved tickets kept appearing on ALL runs between active_from_date and resolved_at, even runs created after resolution
- Impact: Users saw resolved tickets on new runs, creating confusion about which issues were actually active
*Fixed Pages and Queries:*
1. **Ticket Linking (ticketing_utils.py)**
- Fixed Autotask tickets not propagating to new runs after internal ticket resolution
- Implemented independent Autotask propagation strategy: checks for most recent non-deleted and non-resolved Autotask ticket on job regardless of internal ticket status
- Fixed internal tickets being linked to new runs after resolution by removing date-based "open" logic from ticket query
- Tickets now only link to new runs if `COALESCE(ts.resolved_at, t.resolved_at) IS NULL` (genuinely unresolved)
2. **Job Details Page (routes_job_details.py)**
- Fixed resolved tickets appearing on ALL runs for a job
- Implemented two-source ticket display for proper audit trail:
- Direct links via `ticket_job_runs` → always shown (preserves historical context)
- Active window via `ticket_scopes` → only shown if unresolved
- Result: Old runs keep their ticket references, new runs don't get resolved tickets
3. **Run Checks Main Page (routes_run_checks.py)**
- Fixed ticket/remark indicators (🎫/💬) showing for jobs with resolved tickets
- Removed date-based logic from indicator existence queries
- Now only shows indicators if `COALESCE(ts.resolved_at, t.resolved_at) IS NULL` (genuinely unresolved)
4. **Run Checks Popup Modal (routes_api.py)**
- Fixed popup showing resolved tickets for runs where they were never linked
- Replaced date-based queries in `/api/job-runs/<run_id>/alerts` endpoint with explicit JOIN queries
- Tickets query: Now uses `JOIN ticket_job_runs WHERE job_run_id = :run_id`
- Remarks query: Now uses `JOIN remark_job_runs WHERE job_run_id = :run_id`
- Removed unused parameters: `run_date`, `job_id`, `ui_tz` (no longer needed)
- Result: Only shows tickets/remarks that were actually linked to that specific run
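The link-based query shape (and the Cartesian-product bug the HOTFIX removed) can be illustrated with an in-memory SQLite sketch. Table and column names mirror the ones above; the data is invented for demonstration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tickets (id INTEGER PRIMARY KEY, ticket_code TEXT, resolved_at TEXT);
CREATE TABLE ticket_scopes (ticket_id INTEGER, job_id INTEGER);
CREATE TABLE ticket_job_runs (ticket_id INTEGER, job_run_id INTEGER);
""")
# One ticket with three scopes, linked exactly once to run 1.
conn.execute("INSERT INTO tickets VALUES (1, 'T20260127.0061', NULL)")
conn.executemany("INSERT INTO ticket_scopes VALUES (1, ?)", [(10,), (11,), (12,)])
conn.execute("INSERT INTO ticket_job_runs VALUES (1, 1)")

# Buggy shape: the extra JOIN on ticket_scopes multiplies rows (one per scope).
buggy = conn.execute("""
    SELECT t.ticket_code
    FROM tickets t
    JOIN ticket_job_runs tjr ON tjr.ticket_id = t.id
    JOIN ticket_scopes ts ON ts.ticket_id = t.id
    WHERE tjr.job_run_id = 1
""").fetchall()

# Fixed shape: no scope JOIN, plus DISTINCT as a safety net.
fixed = conn.execute("""
    SELECT DISTINCT t.ticket_code
    FROM tickets t
    JOIN ticket_job_runs tjr ON tjr.ticket_id = t.id
    WHERE tjr.job_run_id = 1
""").fetchall()
```

Here `buggy` returns three rows for the same ticket (one per scope), while `fixed` returns exactly one, matching the behavior described in the HOTFIX.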
*Testing & Troubleshooting:*
- Temporarily added debug logging to `link_open_internal_tickets_to_run` function
- Wrote to AuditLog table with event_type "ticket_link_debug" for troubleshooting
- Logged ticket_id, code, resolved_at status for every run import
- Debug logging removed after successful resolution (code preserved in documentation)
**Test Email Generation:**
- Reduced test email generation from 3 emails per status to 1 email per status
- Each button now creates exactly 1 test mail instead of 3 for simpler testing
**User Interface:**
- Updated Settings → Maintenance page text for test email generation
- Changed description from "3 emails simulating Veeam, Synology, and NAKIVO" to "1 Veeam Backup Job email"
- Updated button labels from "(3)" to "(1)" to match actual behavior
*Result:*
- ✅ Resolved tickets stop appearing immediately after resolution
- ✅ Consistent behavior across all pages (Job Details, Run Checks, Run Checks popup)
- ✅ Audit trail preserved: old runs keep their historical ticket links
- ✅ Clear distinction: new runs only show currently active (unresolved) tickets
- ✅ All queries now use explicit link-based logic (no date comparisons)
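The two-source display rule from item 2 boils down to a union of "directly linked" and "scoped and unresolved" tickets. A minimal sketch with plain dicts (the real implementation queries the database):

```python
def tickets_for_run(direct_links, scoped_tickets):
    """direct_links: tickets already linked to this run (always shown, audit trail).
    scoped_tickets: tickets in the job's active window (shown only if unresolved)."""
    shown = {t["code"]: t for t in direct_links}
    for t in scoped_tickets:
        if t["resolved_at"] is None:
            shown.setdefault(t["code"], t)
    return list(shown.values())

resolved = {"code": "T1", "resolved_at": "2026-02-10"}

# An old run with a direct link keeps its resolved ticket (audit trail)...
old_run = tickets_for_run(direct_links=[resolved], scoped_tickets=[resolved])

# ...while a new run without a direct link shows nothing.
new_run = tickets_for_run(direct_links=[], scoped_tickets=[resolved])
```

The dict keyed by ticket code also deduplicates a ticket that appears in both sources, which is why old runs show each ticket once while new runs stay clean.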
## v0.1.25
This release focuses on parser improvements and maintenance enhancements, adding support for new notification types across Synology and Veeam backup systems while improving system usability with orphaned job cleanup and test email generation features.
### Parser Enhancements
**Synology Parsers:**
- **Monthly Drive Health Reports**: New parser for Synology NAS drive health notifications
- Supports both Dutch ("Maandelijks schijfintegriteitsrapport", "Gezond") and English ("Monthly Drive Health Report", "Healthy") variants
- Automatic status detection: Healthy/Gezond/No problem detected → Success, otherwise → Warning
- Extracts hostname from subject or body pattern (Van/From NAS-HOSTNAME)
- Backup type: "Health Report", Job name: "Monthly Drive Health"
- Informational only (excluded from schedule learning and reporting logic)
- Registry entry added (order 237) for /parsers page visibility
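The bilingual detection above can be sketched like this; the function name and return shape are illustrative, while the subject phrases, healthy markers, and `Van/From` hostname pattern come from the notes:

```python
import re

HEALTH_SUBJECTS = ("Maandelijks schijfintegriteitsrapport",
                   "Monthly Drive Health Report")
HEALTHY_MARKERS = ("Gezond", "Healthy", "No problem detected")

def parse_drive_health(subject, body):
    """Return a parsed record for drive-health mails, else None."""
    if not any(s in subject for s in HEALTH_SUBJECTS):
        return None
    status = "Success" if any(m in body for m in HEALTHY_MARKERS) else "Warning"
    # Hostname follows "Van"/"From" in the body (simplified pattern).
    m = re.search(r"(?:Van|From)\s+(\S+)", body)
    return {
        "hostname": m.group(1) if m else None,
        "backup_type": "Health Report",
        "job_name": "Monthly Drive Health",
        "status": status,
    }
```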
- **DSM Update Notifications - Extended Coverage**: Added support for additional DSM update notification variants
- New patterns: "belangrijke DSM-update", "kritieke oplossingen", "wordt automatisch geïnstalleerd", "is beschikbaar op"
- Now recognizes 4 different notification types under same job:
1. Automatic update cancelled
2. Packages out-of-date warnings
3. New update available announcements
4. Automatic installation scheduled notifications
- All patterns added to existing lists maintaining full backward compatibility
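Appending to the existing pattern lists might look like the sketch below; the list and function names are assumptions, only the four Dutch phrases are taken from the notes:

```python
# Patterns added in this release (appended, so existing matches keep working).
DSM_UPDATE_PATTERNS = [
    "belangrijke DSM-update",
    "kritieke oplossingen",
    "wordt automatisch geïnstalleerd",
    "is beschikbaar op",
]

def is_dsm_update_notification(text):
    """True if the mail text matches any known DSM update phrase."""
    return any(pattern in text for pattern in DSM_UPDATE_PATTERNS)
```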
- **Active Backup for Business - Skipped Tasks**: Extended parser to recognize skipped/ignored backup tasks
- Detects Dutch ("genegeerd") and English ("skipped", "ignored") status indicators
- Status mapping: Skipped/Ignored → Warning with "Skipped" message
- Common scenario: Backup skipped because previous backup still running
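The skipped-task mapping reduces to a small helper; the function name and tuple return are illustrative, the markers and the Warning/"Skipped" mapping are from the notes:

```python
SKIPPED_MARKERS = ("genegeerd", "skipped", "ignored")

def map_skipped_status(status_text):
    """Map skipped/ignored task text to (status, message), else None."""
    # Typical case: backup skipped because the previous backup is still running.
    if any(marker in status_text.lower() for marker in SKIPPED_MARKERS):
        return ("Warning", "Skipped")
    return None
```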
**Veeam Parsers:**
- **Job Not Started Errors**: New detection for "Job did not start on schedule" error notifications
- Recognizes VBO365 and other Veeam backup types that send plain text error notifications
- Extracts backup type from subject (e.g., "Veeam Backup for Microsoft 365")
- Extracts job name from subject after colon (e.g., "Backup MDS at Work")
- Reads error message from plain text body (handles base64 UTF-16 encoding)
- Sets overall_status to "Error" for failed-to-start jobs
- Example message: "Proxy server was offline at the time the job was scheduled to run."
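Subject splitting and the base64 UTF-16 body handling can be sketched as follows; the function shape is an assumption, the example subject and error message are quoted from the notes:

```python
import base64

def parse_job_not_started(subject, raw_body, transfer_encoding="base64"):
    """Parse a 'Job did not start on schedule' plain-text notification.

    Subject format, per the examples above:
        'Veeam Backup for Microsoft 365: Backup MDS at Work'
    """
    backup_type, _, job_name = subject.partition(":")
    body = raw_body
    if transfer_encoding == "base64":
        # These notifications arrive base64-encoded UTF-16 plain text.
        body = base64.b64decode(raw_body).decode("utf-16")
    return {
        "backup_type": backup_type.strip(),
        "job_name": job_name.strip(),
        "overall_status": "Error",   # failed-to-start jobs are errors
        "message": body.strip(),
    }
```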
### Maintenance Improvements
**Orphaned Jobs Cleanup:**
- Added "Cleanup orphaned jobs" option in Settings → Maintenance
- Removes jobs without valid customer links (useful when customers are deleted)
- Permanently deletes job records along with all associated emails and job runs
- "Preview orphaned jobs" button shows detailed list before deletion
- Displays job information with email and run counts
- Safety verification step to prevent accidental deletion
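The preview-then-delete flow can be sketched with a minimal schema; table and column names here are assumptions based on the description (the real feature also deletes associated emails and job runs, which this sketch omits):

```python
import sqlite3

# Illustrative schema: one orphaned job whose customer was deleted.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE jobs (id INTEGER PRIMARY KEY, customer_id INTEGER, name TEXT);
INSERT INTO customers VALUES (1, 'CUSTOMER-PLACEHOLDER');
INSERT INTO jobs VALUES (10, 1, 'Linked job'), (11, 99, 'Orphaned job');
""")

def preview_orphaned_jobs(conn):
    """List jobs whose customer link points at a deleted customer."""
    return conn.execute("""
        SELECT j.id, j.name
        FROM jobs j
        LEFT JOIN customers c ON c.id = j.customer_id
        WHERE c.id IS NULL
    """).fetchall()

def cleanup_orphaned_jobs(conn):
    """Permanently delete the previewed orphans; returns deleted ids."""
    orphan_ids = [row[0] for row in preview_orphaned_jobs(conn)]
    conn.executemany("DELETE FROM jobs WHERE id = ?",
                     [(job_id,) for job_id in orphan_ids])
    return orphan_ids
```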
**Test Email Generation:**
- Added "Generate test emails" feature in Settings → Maintenance
- Three separate buttons to create fixed test email sets for parser testing:
- Success emails (3 emails with success status)
- Warning emails (3 emails with warning status)
- Error emails (3 emails with error status)
- Each set contains exactly 3 Veeam Backup Job emails with:
- Same job name "Test-Backup-Job" for consistency
- Different dates, objects, and statuses
- Reproducible testing scenarios
- Proper status flow testing (success → warning → error progression)
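Generating one of those fixed sets might look like this; the field names are assumptions, while the job name, source, and 3-per-status count come from the notes (v0.1.26 later reduced the count to one Veeam email per button):

```python
from datetime import date, timedelta

def generate_test_emails(status, count=3):
    """Build a reproducible set of Veeam test emails for one status."""
    base = date(2026, 1, 1)  # fixed base date keeps the set reproducible
    return [
        {
            "job_name": "Test-Backup-Job",      # same name for consistency
            "source": "Veeam Backup Job",
            "status": status,
            "date": (base + timedelta(days=i)).isoformat(),
            "object": f"VM-{i + 1}",            # different object per email
        }
        for i in range(count)
    ]
```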
### Data Privacy
**Parser Registry Cleanup:**
- Replaced real customer names in parser registry examples with generic placeholders
- Affected parsers: NTFS Auditing, QNAP Firmware Update, NAKIVO
- Example format now uses: NAS-HOSTNAME, SERVER-HOSTNAME, VM-HOSTNAME, example.local
- Ensures no customer information remains in the codebase or version control
**Autotask Integration:**
- Removed customer name from Autotask ticket title for concise display
- Format changed from "[Backupchecks] Customer - Job Name - Status" to "[Backupchecks] Job Name - Status"
- Reduces redundancy (customer already visible in ticket company field)
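The title change is a one-line format fix; the function name is illustrative, both formats are quoted from the notes:

```python
def autotask_ticket_title(job_name, status):
    # Old format: f"[Backupchecks] {customer} - {job_name} - {status}"
    # The customer is dropped; it is already shown in the ticket's company field.
    return f"[Backupchecks] {job_name} - {status}"
```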
### Bug Fixes
**User Interface:**
- Fixed responsive navbar overlapping page content on smaller screens
- Implemented dynamic padding adjustment using JavaScript
- Measures actual navbar height on page load, window resize, and navbar collapse toggle
- Automatically adjusts main content padding-top to prevent overlap
- Debounced resize handler for performance
## v0.1.24
### Bug Fixes