Merge branches v20260203-01 through v20260205-13 into main

This commit consolidates all development work from the following branch series:
- v20260203-* (13 branches): Initial Autotask integration, graph config UI improvements
- v20260204-* (3 branches): Dashboard redirect setting, additional refinements
- v20260205-* (13 branches): Autotask resolution improvements, changelog restructuring

Key features merged:
- Autotask PSA integration with ticket creation, resolution, and search
- Graph/mail configuration UI improvements with credential testing
- Daily dashboard redirect setting (optional navigation control)
- Changelog restructuring with improved Python structure
- Various bug fixes and UI enhancements

All functionality has been consolidated from the final state of branch
v20260205-13-changelog-python-structure to preserve working features.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Ivo Oskamp 2026-02-06 13:41:08 +01:00
parent 7693af9306
commit 28f094f80b
20 changed files with 2204 additions and 1373 deletions


@@ -1 +1 @@
-v20260204-03-dashboard-redirect-setting
+v20260205-13-changelog-python-structure

README.md

@@ -1,121 +0,0 @@
# BackupChecks
A backup monitoring and compliance application designed for Managed Service Providers (MSPs) and IT departments.
## Features
### Mail Ingestion & Parsing
- Automated email import from Microsoft Graph API (Office 365)
- Supports 11 backup software platforms:
  - Veeam (including SOBR capacity monitoring)
  - Synology
  - QNAP
  - Nakivo
  - Syncovery
  - Boxafe
  - R-Drive
  - 3CX
  - NTFS Auditing
  - And more
- Intelligent email parsing extracts backup metadata
- Raw EML storage for debugging with configurable retention
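The parser layer can be pictured as a small registry that routes each incoming mail to a platform-specific parser. The sketch below is illustrative only; `PARSERS` and `detect_parser` are assumed names, not the application's actual API.

```python
# Illustrative sketch of subject-based parser dispatch; the registry and
# function names are assumptions, not the application's real code.
from typing import Callable, Optional

# Each entry: (backup software name, predicate over the mail subject).
PARSERS: list[tuple[str, Callable[[str], bool]]] = [
    ("Veeam", lambda s: "veeam" in s.lower()),
    ("Synology", lambda s: "synology" in s.lower() or "hyper backup" in s.lower()),
    ("QNAP", lambda s: "qnap" in s.lower()),
]

def detect_parser(subject: str) -> Optional[str]:
    """Return the first registered parser whose predicate matches the subject."""
    for name, matches in PARSERS:
        if matches(subject):
            return name
    return None  # unmatched mails would stay in the inbox for manual review
```

A registry like this keeps the Parsers overview page in sync automatically as parsers are added or removed, instead of maintaining a hardcoded list.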
### Backup Job Tracking
- Dashboard with daily backup job status summaries
- Expected vs. actual job runs based on schedules
- Missed backup detection
- Status tracking: Success, Warning, Error, Missed
- Timezone-aware calculations (defaults to Europe/Amsterdam)
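Missed-run detection of this kind can be sketched as a tolerance-window check around each scheduled time (later releases mention a ±1 hour tolerance). The function and field names below are assumptions, not the application's actual implementation.

```python
# Minimal sketch of missed-run detection with a +/- 1 hour tolerance window,
# using the application's default timezone. Names are illustrative.
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

TZ = ZoneInfo("Europe/Amsterdam")  # application default
TOLERANCE = timedelta(hours=1)

def is_missed(expected: datetime, actual_runs: list[datetime]) -> bool:
    """A scheduled run is missed if no actual run falls inside the window."""
    window_start = expected - TOLERANCE
    window_end = expected + TOLERANCE
    return not any(window_start <= run <= window_end for run in actual_runs)

expected = datetime(2026, 2, 5, 2, 0, tzinfo=TZ)
runs = [datetime(2026, 2, 5, 2, 40, tzinfo=TZ)]
# 02:40 is within one hour of the 02:00 schedule, so this run is not missed.
```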
### Run Checks & Review Workflow
- Manual review interface for backup failures
- Mark runs as "reviewed"
- Approval workflow for backup email processing
- Storage capacity monitoring
- Autotask PSA ticket integration
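The review views group backup objects by severity so failures surface first. A minimal sketch of that ordering (Errors, then Warnings, then everything else, alphabetical within each group); the names here are illustrative, not the app's code:

```python
# Severity-first ordering for object lists: Errors, then Warnings, then all
# other statuses, alphabetical (A-Z) within each group. Illustrative sketch.
SEVERITY_ORDER = {"Error": 0, "Warning": 1}

def sort_objects(objects: list[dict]) -> list[dict]:
    return sorted(
        objects,
        key=lambda o: (SEVERITY_ORDER.get(o["status"], 2), o["name"].lower()),
    )

items = [
    {"name": "VM-B", "status": "Success"},
    {"name": "VM-A", "status": "Error"},
    {"name": "NAS", "status": "Warning"},
]
ordered = [o["name"] for o in sort_objects(items)]  # VM-A, NAS, VM-B
```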
### Override Rules
- Exception rules at global or job-level
- Matching criteria: status, error message patterns
- Validity windows for temporary overrides
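Override evaluation combines all three criteria: the run's status, an error-text comparison (later releases add contains/exact/starts-with/ends-with match modes), and a validity window. The sketch below uses assumed names; the real models differ.

```python
# Sketch of override-rule evaluation: status match, error-text match mode,
# and a validity window. Class and field names are illustrative.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Override:
    status: str                    # e.g. "Warning"
    error_text: str                # pattern compared against the run's message
    match_mode: str = "contains"   # contains | exact | starts_with | ends_with
    valid_from: Optional[date] = None
    valid_until: Optional[date] = None

def applies(rule: Override, status: str, message: str, on: date) -> bool:
    if status != rule.status:
        return False
    if rule.valid_from and on < rule.valid_from:
        return False
    if rule.valid_until and on > rule.valid_until:
        return False
    return {
        "contains": rule.error_text in message,
        "exact": message == rule.error_text,
        "starts_with": message.startswith(rule.error_text),
        "ends_with": message.endswith(rule.error_text),
    }[rule.match_mode]
```

Temporary overrides simply set `valid_until`, after which the rule stops matching without needing to be deleted.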
### Multi-Customer Management
- Organize jobs by customer
- Autotask company mapping
- Customer-scoped reporting and permissions
### Tickets & Remarks
- Internal ticket system for backup issues
- Automatic linking to affected job runs
- Scope-based ticket resolution
- Feedback/feature request board
### Reporting & Analytics
- Snapshot-based reporting with configurable periods
- Historical success rates and trend analysis
- CSV export functionality
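A CSV export along these lines reduces to serializing per-job aggregates; the column names below are assumptions for illustration, not the report's actual schema.

```python
# Sketch of a CSV export of per-job success rates, in the spirit of the
# reporting feature above. Column names are illustrative.
import csv
import io

def export_success_rates(rows: list[dict]) -> str:
    """Render job statistics as CSV text (write to a real file in practice)."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["customer", "job", "runs", "success_rate"])
    writer.writeheader()
    for row in rows:
        writer.writerow(row)
    return buf.getvalue()

csv_text = export_success_rates(
    [{"customer": "Acme", "job": "Nightly VM backup", "runs": 30, "success_rate": "96.7%"}]
)
```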
### Autotask Integration
- Create PSA tickets from failed backup runs
- Link internal tickets to Autotask
- Deep links back to BackupChecks
### User Management
- Role-based access control (admin, viewer, custom roles)
- Theme preferences (light/dark/auto)
- In-app news/announcements
## Technology Stack
**Backend:**
- Flask 3.0.3 (Python)
- SQLAlchemy with PostgreSQL 16
- Flask-Login for authentication
- Gunicorn server
**Frontend:**
- Jinja2 templates (server-side rendering)
- Bootstrap-based responsive UI
- JavaScript/AJAX for dynamic interactions
**Infrastructure:**
- Docker containerized
- PostgreSQL database
- Adminer for database management
## Project Structure
```
backupchecks/
├── containers/backupchecks/
│   ├── Dockerfile
│   ├── requirements.txt
│   └── src/
│       ├── backend/app/
│       │   ├── main.py          # Entry point
│       │   ├── models.py        # Database models
│       │   ├── parsers/         # Backup software parsers
│       │   ├── integrations/    # Autotask integration
│       │   └── main/            # Route handlers
│       ├── templates/           # Jinja2 templates
│       └── static/              # CSS, JS, assets
├── deploy/
│   └── backupchecks-stack.yml   # Docker Compose
└── docs/                        # Documentation
```
## Deployment
The application runs as a Docker stack in Portainer.
## Configuration
Key settings are managed via the web interface under Settings:
- Mail import configuration (Microsoft Graph API)
- Autotask integration credentials
- Timezone settings
- User management
## License
See [LICENSE](LICENSE) file.


@@ -0,0 +1,870 @@
"""
Changelog data structure for Backupchecks
"""
CHANGELOG = [
{
"version": "v0.1.22",
"date": "2026-02-05",
"summary": "This major release introduces comprehensive Autotask PSA integration, enabling seamless ticket management, customer company mapping, and automated ticket lifecycle handling directly from Backupchecks. The integration includes extensive settings configuration, robust API client implementation, intelligent ticket linking across job runs, and conditional ticket status updates based on time entries.",
"sections": [
{
"title": "Autotask Integration Core Features",
"type": "feature",
"subsections": [
{
"subtitle": "Settings and Configuration",
"changes": [
"Complete Autotask integration settings in Settings → Integrations",
"Environment selection (Sandbox/Production) with automatic zone discovery",
"API authentication with fallback support for different tenant configurations",
"Tracking identifier (Integration Code) configuration for ticket attribution",
"Connection testing and diagnostics",
"Reference data synchronization (queues, sources, priorities, statuses)",
"Configurable ticket defaults (queue, source, status, priority)",
"Autotask integration and automatic mail import can now be properly disabled after being enabled (fixed unchecked checkbox processing)"
]
},
{
"subtitle": "Customer Company Mapping",
"changes": [
"Explicit Autotask company mapping for customers using ID-based linkage",
"Company search with auto-suggestions when opening unmapped customers",
"Automatically populates search box with customer name and displays matching Autotask companies",
"Mapping status tracking (ok/renamed/missing/invalid)",
"Bulk mapping refresh for all customers",
"Clear search boxes when opening modals for better user experience"
]
},
{
"subtitle": "Ticket Creation and Management",
"changes": [
"Create Autotask tickets directly from Run Checks page",
"Automatic ticket number assignment and storage",
"Link existing Autotask tickets to job runs",
"Cross-company ticket search for overarching infrastructure issues (search by ticket number finds tickets across all companies)",
"Ticket propagation to all active runs of the same job",
"Internal ticket registration for legacy compatibility (Tickets, Tickets/Remarks, Job Details)",
"Real-time ticket status polling and updates",
"Deleted ticket detection and audit tracking (deletion date/time and deleted-by resource information)"
]
},
{
"subtitle": "Ticket Resolution and Status Management",
"changes": [
"Conditional ticket status updates based on time entries:",
" - Tickets without time entries: automatically closed (status 5 - Complete)",
" - Tickets with time entries: remain open for time tracking continuation",
"Dynamic confirmation messages indicating closure behavior based on time entry presence",
"Safe resolution updates preserving stabilizing fields (issueType, subIssueType, source)",
"Resolution field mirroring from internal ticket notes",
"Ticket notes created via `/Tickets/{id}/Notes` endpoint with timezone-aware timestamps",
"Deleted ticket handling with complete audit trail"
]
},
{
"subtitle": "Technical Implementation",
"changes": [
"Full-featured Autotask REST API client (`integrations/autotask/client.py`)",
"Zone information discovery and endpoint resolution",
"Robust authentication handling with header-based fallback for sandbox environments",
"Picklist-based reference data retrieval (queues, sources, priorities, statuses)",
"Entity metadata parsing with tenant-specific field detection",
"Database migrations for Autotask linkage fields across SystemSettings, Customer, JobRun, and Ticket models",
"Ticketing utilities for internal/external ticket synchronization",
"Comprehensive API contract documentation (`docs/autotask_rest_api.md`)",
"Functional design living document for integration architecture"
]
}
]
},
{
"title": "User Interface Improvements",
"type": "improvement",
"changes": [
"Search boxes now clear automatically when opening modals (Run Checks Link existing, Customer mapping)",
"Auto-search for similar company names when mapping unmapped customers",
"Cross-company ticket search when using ticket numbers (e.g., \"T20260205.0001\")",
"Dynamic confirmation messages for ticket resolution based on time entries",
"Improved visibility of Autotask ticket information in Run Checks",
"Status labels displayed instead of numeric codes in ticket lists",
"\"Deleted in PSA\" status display with deletion audit information",
"\"Resolved by PSA (Autotask)\" differentiation from Backupchecks-driven resolution"
]
},
{
"title": "Bug Fixes and Stability",
"type": "fixed",
"changes": [
"Fixed Autotask REST API base URL casing (ATServicesRest/V1.0)",
"Fixed reference data retrieval using correct picklist endpoints",
"Fixed authentication fallback for sandbox-specific behavior",
"Fixed company name display from nested API responses",
"Fixed ticket ID normalization and response unwrapping (itemId handling)",
"Fixed TicketJobRun linkage for legacy ticket behavior",
"Fixed unchecked checkbox processing for enable/disable toggles (Autotask integration, automatic mail import)",
"Fixed ticket resolution updates to preserve exact field values from GET response",
"Fixed picklist field detection for tenant-specific metadata",
"Fixed migration stability with idempotent column checks",
"Fixed settings page crash with local helper functions",
"Fixed Run Checks modal stacking and Bootstrap 4/5 compatibility",
"Fixed JavaScript errors (renderModal → renderRun)",
"Fixed indentation errors preventing application startup",
"Fixed ticket propagation to ensure all active runs receive ticket linkage",
"Fixed polling to use read-only operations without state mutation"
]
},
{
"title": "Documentation",
"type": "documentation",
"changes": [
"Added comprehensive Autotask REST API contract documentation (`docs/autotask_rest_api.md`)",
"Created functional design living document for integration architecture",
"Documented ticket lifecycle, status management, and time entry considerations",
"Added changelog tracking for Claude Code changes (`docs/changelog-claude.md`)"
]
}
]
},
{
"version": "v0.1.21",
"date": "2026-01-20",
"summary": "This release focuses on improving correctness, consistency, and access control across core application workflows, with particular attention to changelog rendering, browser-specific mail readability, Run Checks visibility, role-based access restrictions, override flexibility, and VSPC object linking reliability. The goal is to ensure predictable behavior, clearer diagnostics, and safer administration across both day-to-day operations and complex multi-entity reports.",
"sections": [
{
"title": "Changelog Rendering and Documentation Accuracy",
"type": "improvement",
"changes": [
"Updated the Changelog route to render remote Markdown content instead of plain text",
"Enabled full Markdown parsing so headings, lists, links, and code blocks are displayed correctly",
"Ensured the changelog always fetches the latest version directly from the source repository at request time",
"Removed legacy plain-text rendering to prevent loss of structure and formatting"
]
},
{
"title": "Mail Rendering and Browser Compatibility",
"type": "improvement",
"changes": [
"Forced a light color scheme for embedded mail content to prevent Microsoft Edge from applying automatic dark mode styling",
"Added explicit `color-scheme` and `forced-color-adjust` rules so original mail CSS is respected",
"Ensured consistent mail readability across Edge and Firefox",
"Applied these fixes consistently across Inbox, Deleted Inbox, Job Details, Run Checks, Daily Jobs, and Admin All Mail views"
]
},
{
"title": "Run Checks Visibility and Consistency",
"type": "improvement",
"changes": [
"Added support for displaying the overall remark (overall_message) directly on the Run Checks page",
"Ensured consistency between Run Checks and Job Details, where the overall remark was already available",
"Improved operator visibility of high-level run context without requiring navigation to job details"
]
},
{
"title": "Initial Setup and User Existence Safeguards",
"type": "fixed",
"changes": [
"Fixed an incorrect redirect to the \"Initial admin setup\" page when users already exist",
"Changed setup detection logic from \"admin user exists\" to \"any user exists\"",
"Ensured existing environments always show the login page instead of allowing a new initial admin to be created",
"Prevented direct access to the initial setup route when at least one user is present"
]
},
{
"title": "Role-Based Access Control and Menu Restrictions",
"type": "improvement",
"changes": [
"Restricted the Reporter role to only access Dashboard, Reports, Changelog, and Feedback",
"Updated menu rendering to fully hide unauthorized menu items for Reporter users",
"Adjusted route access to ensure Feedback pages remain accessible for the Reporter role",
"Improved overall consistency between visible navigation and backend access rules"
]
},
{
"title": "Override Matching Flexibility and Maintainability",
"type": "feature",
"changes": [
"Added configurable error text matching modes for overrides: contains, exact, starts with, and ends with",
"Updated override evaluation logic to apply the selected match mode across run remarks and object error messages",
"Extended the overrides UI with a match type selector and improved edit support for existing overrides",
"Added a database migration to create and backfill the `overrides.match_error_mode` field for existing records"
]
},
{
"title": "Job Deletion Stability",
"type": "fixed",
"changes": [
"Fixed an error that occurred during job deletion",
"Corrected backend deletion logic to prevent runtime exceptions",
"Ensured related records are handled safely to avoid constraint or reference errors during removal"
]
},
{
"title": "VSPC Object Linking and Normalization",
"type": "fixed",
"changes": [
"Fixed VSPC company name normalization so detection and object prefixing behave consistently",
"Ensured filtered object persistence respects the UNIQUE(customer_id, object_name) constraint",
"Correctly update `last_seen` timestamps for existing objects",
"Added automatic object persistence routing for VSPC per-company runs, ensuring objects are linked to the correct customer and job",
"Improved auto-approval for VSPC Active Alarms summaries with per-company run creation and case-insensitive object matching",
"Added best-effort retroactive processing to automatically link older inbox messages once company mappings are approved"
]
},
{
"title": "VSPC Normalization Bug Fixes and Backward Compatibility",
"type": "fixed",
"changes": [
"Removed duplicate definitions of VSPC Active Alarms company extraction logic that caused inconsistent normalization",
"Ensured a single, consistent normalization path is used when creating jobs and linking objects",
"Improved object linking so real objects (e.g. HV01, USB Disk) are reliably associated with their jobs",
"Restored automatic re-linking for both new and historical VSPC mails",
"Added backward-compatible matching to prevent existing VSPC jobs from breaking due to earlier inconsistent company naming"
]
}
]
},
{
"version": "v0.1.20",
"date": "2026-01-15",
"summary": "This release delivers a comprehensive set of improvements focused on parser correctness, data consistency, and clearer operator workflows across Inbox handling, Run Checks, and administrative tooling. The main goal of these changes is to ensure that backup notifications are parsed reliably, presented consistently, and handled through predictable and auditable workflows, even for complex or multi-entity reports.",
"sections": [
{
"title": "Mail Parsing and Data Integrity",
"type": "improvement",
"changes": [
"Fixed Veeam Backup for Microsoft 365 parsing where the overall summary message was not consistently stored",
"Improved extraction of overall detail messages so permission and role warnings are reliably captured",
"Ensured the extracted overall message is always available across Job Details, Run Checks, and reporting views",
"Added decoding of HTML entities in parsed object fields (name, type, status, error message) before storage, ensuring characters such as ampersands are displayed correctly",
"Improved robustness of parsing logic to prevent partial or misleading data from being stored when mails contain mixed or malformed content"
]
},
{
"title": "Object Classification and Sorting",
"type": "improvement",
"changes": [
"Updated object list sorting to improve readability and prioritization",
"Objects are now grouped by severity in a fixed order: Errors first, then Warnings, followed by all other statuses",
"Within each severity group, objects are sorted alphabetically (A-Z)",
"Applied the same sorting logic consistently across Inbox, Job Details, Run Checks, Daily Jobs, and the Admin All Mail view",
"Improved overall run status determination by reliably deriving the worst detected object state"
]
},
{
"title": "Parsers Overview and Maintainability",
"type": "improvement",
"changes": [
"Refactored the Parsers overview page to use the central parser registry instead of a static, hardcoded list",
"All available parsers are now displayed automatically, ensuring the page stays in sync as parsers are added or removed",
"Removed hardcoded parser definitions from templates to improve long-term maintainability",
"Fixed a startup crash in the parsers route caused by an invalid absolute import by switching to a package-relative import",
"Prevented Gunicorn worker boot failures and Bad Gateway errors during application initialization"
]
},
{
"title": "User Management and Feedback Handling",
"type": "feature",
"changes": [
"Added support for editing user roles directly from the User Management interface",
"Implemented backend logic to update existing role assignments without requiring user deletion",
"Ensured role changes are applied immediately and reflected correctly in permissions and access control",
"Updated feedback listings to show only Open items by default",
"Ensured Resolved items are always sorted to the bottom when viewing all feedback",
"Preserved existing filtering, searching, and user-controlled sorting behavior"
]
},
{
"title": "UI Improvements and Usability Enhancements",
"type": "improvement",
"changes": [
"Introduced reusable ellipsis handling for long detail fields to prevent layout overlap",
"Added click-to-expand behavior for truncated fields, with double-click support to expand and select all text",
"Added automatic tooltips showing the full value when a field is truncated",
"Removed the redundant \"Objects\" heading above objects tables to reduce visual clutter",
"Applied truncation and expansion behavior consistently across Inbox, Deleted Mail, Run Checks, Daily Jobs, Job Detail views, and Admin All Mail",
"Reset expanded ellipsis fields when Bootstrap modals or offcanvas components are opened or closed to prevent state leakage",
"Fixed layout issues where the Objects table could overlap mail content in the Run Checks popup"
]
},
{
"title": "Veeam Cloud Connect and VSPC Parsing",
"type": "improvement",
"changes": [
"Improved the Veeam Cloud Connect Report parser by combining User and Repository Name into a single object identifier",
"Excluded \"TOTAL\" rows from object processing",
"Correctly classified red rows as Errors and yellow/orange rows as Warnings",
"Ensured overall status is set to Error when one or more objects are in error state",
"Added support for Veeam Service Provider Console daily alarm summary emails",
"Implemented per-company object aggregation and derived overall status from the worst detected state",
"Improved detection of VSPC Active Alarms emails to prevent incorrect fallback to other Veeam parsers",
"Fixed a SyntaxError in the VSPC parser that caused application startup failures"
]
},
{
"title": "VSPC Company Mapping Workflow",
"type": "feature",
"changes": [
"Introduced a dedicated company-mapping popup for VSPC Active Alarms summary reports",
"Enabled manual mapping of companies found in mails to existing customers",
"Implemented per-company job and run creation using the format \"Active alarms summary | <Company>\"",
"Disabled the standard approval flow for this report type and replaced it with a dedicated mapping workflow",
"Required all detected companies to be mapped before full approval, while still allowing partial approvals",
"Prevented duplicate run creation on repeated approvals",
"Improved visibility and usability of the mapping popup with scroll support for large company lists",
"Ensured only alarms belonging to the selected company are attached to the corresponding run"
]
},
{
"title": "NTFS Auditing and Synology ABB Enhancements",
"type": "improvement",
"changes": [
"Added full parser support for NTFS Auditing reports",
"Improved hostname and FQDN extraction from subject lines, supporting multiple subject formats and prefixes",
"Ensured consistent job name generation as \"<hostname> file audits\"",
"Set overall status to Warning when detected change counts are greater than zero",
"Improved Synology Active Backup for Business parsing to detect partially completed jobs as Warning",
"Added support for localized completion messages and subject variants",
"Improved per-device object extraction and ensured specific device statuses take precedence over generic listings"
]
},
{
"title": "Workflow Simplification and Cleanup",
"type": "improvement",
"changes": [
"Removed the \"Mark success (override)\" button from the Run Checks popup",
"Prevented creation of unintended overrides when marking individual runs as successful",
"Simplified override handling so Run Checks actions no longer affect override administration",
"Ensured firmware update notifications (QNAP) are treated as informational warnings and excluded from missing-run detection and reporting"
]
}
]
},
{
"version": "v0.1.19",
"date": "2026-01-10",
"summary": "This release delivers a broad set of improvements focused on reliability, transparency, and operational control across mail processing, administrative auditing, and Run Checks workflows.",
"sections": [
{
"title": "Mail Import Reliability and Data Integrity",
"type": "improvement",
"changes": [
"Updated the mail import flow so messages are only moved to the processed folder after a successful database store and commit",
"Prevented Graph emails from being moved when parsing, storing, or committing data fails",
"Added explicit commit and rollback handling to guarantee database consistency",
"Improved logging around import, commit, and rollback failures"
]
},
{
"title": "Administrative Mail Auditing and Visibility",
"type": "feature",
"changes": [
"Introduced an admin-only \"All Mail\" audit page",
"Implemented pagination with a fixed page size of 50 items",
"Added always-visible search filters (From, Subject, Backup, Type, Job name, date range)",
"Added \"Only unlinked\" filter to identify messages not associated with any job"
]
},
{
"title": "Run Checks Usability and Control",
"type": "improvement",
"changes": [
"Added copy-to-clipboard icon next to ticket numbers",
"Introduced manual \"Success (override)\" action for Operators and Admins",
"Updated UI indicators for overridden runs with blue success status",
"Improved mail rendering with fallback to text bodies and EML extraction"
]
},
{
"title": "Parser Enhancements",
"type": "improvement",
"changes": [
"Added parser support for 3CX SSL Certificate notification emails",
"Added detection for Synology DSM automatic update cancellation messages"
]
}
]
},
{
"version": "v0.1.18",
"date": "2026-01-05",
"summary": "This release focuses on improving ticket reuse, scoping, and visibility across jobs, runs, and history views.",
"sections": [
{
"title": "Ticket Linking and Reuse",
"type": "improvement",
"changes": [
"Updated ticket linking logic to allow the same ticket number across multiple jobs and runs",
"Prevented duplicate ticket creation errors when reusing existing ticket codes",
"Ensured existing tickets are consistently reused and linked"
]
},
{
"title": "Job History Enhancements",
"type": "feature",
"changes": [
"Added Tickets and Remarks section to Job History mail popup",
"Enabled viewing and managing tickets/remarks directly from Job History",
"Aligned ticket handling with Run Checks popup behavior"
]
}
]
},
{
"version": "v0.1.17",
"date": "2025-12-30",
"summary": "This release focuses on improving job normalization, ticket and remark handling, UI usability, and the robustness of run and object detection.",
"sections": [
{
"title": "Job Normalization and Aggregation",
"type": "improvement",
"changes": [
"Veeam job names now normalized to prevent duplicates (Combined/Full suffixes merged)",
"Added support for archiving inactive jobs"
]
},
{
"title": "Inbox and Bulk Operations",
"type": "feature",
"changes": [
"Introduced multi-select inbox functionality for Operator and Admin roles",
"Added bulk \"Delete selected\" action with validation and audit logging"
]
},
{
"title": "Tickets and Remarks",
"type": "improvement",
"changes": [
"Ticket creation now uses user-provided codes with strict validation",
"Editing of tickets/remarks disabled; must be resolved and recreated",
"Removed ticket description fields to prevent inconsistent data"
]
}
]
},
{
"version": "v0.1.16",
"date": "2025-12-25",
"summary": "This release significantly expands and stabilizes the reporting functionality, focusing on configurability, correctness, and richer output formats.",
"sections": [
{
"title": "Reporting Enhancements",
"type": "feature",
"changes": [
"Reports now job-aggregated instead of object-level",
"Full report lifecycle management added",
"Advanced reporting foundations with configurable definitions",
"Multiple export formats: CSV, HTML, and PDF",
"Extensive column selection with drag-and-drop ordering",
"Job-level aggregated metrics and success rate charts"
]
}
]
},
{
"version": "v0.1.15",
"date": "2025-12-20",
"summary": "This release focused on improving operational clarity and usability by strengthening dashboard guidance and introducing reporting foundation.",
"sections": [
{
"title": "Dashboard and User Guidance",
"type": "improvement",
"changes": [
"Added comprehensive explanatory section to Dashboard",
"Implemented automatic redirection to Dashboard on first daily visit",
"Refactored Settings area into clearly separated sections"
]
},
{
"title": "Dashboard News",
"type": "feature",
"changes": [
"Added per-user Dashboard News section with read/unread tracking",
"Full admin management of news items"
]
},
{
"title": "Run Checks Multi-Select",
"type": "improvement",
"changes": [
"Added Shift-click multi-selection for efficient bulk review",
"Fixed edge cases with selection and checkbox synchronization"
]
}
]
},
{
"version": "v0.1.14",
"date": "2025-12-15",
"summary": "Focused on improving sorting, parsing, and override functionality.",
"sections": [
{
"title": "Daily Jobs Sorting",
"type": "improvement",
"changes": [
"Consistent multi-level sort: Customer → Backup Software → Type → Job Name",
"Fixed backend ordering to ensure server-side consistency"
]
},
{
"title": "Overrides Configuration",
"type": "improvement",
"changes": [
"Replaced free-text inputs with dropdowns for Backup Software and Type",
"Made newly created overrides apply immediately and retroactively",
"Added full support for editing existing overrides"
]
},
{
"title": "Overrides UI Indicators",
"type": "feature",
"changes": [
"Introduced blue status indicator for runs with overrides applied",
"Added persistent override reporting metadata to job runs"
]
}
]
},
{
"version": "v0.1.13",
"date": "2025-12-10",
"summary": "Focused on improving visibility and consistency of Tickets and Remarks.",
"sections": [
{
"title": "Tickets and Remarks Visibility",
"type": "improvement",
"changes": [
"Added clear visual indicators for active Tickets and Remarks in Run Checks",
"Enhanced Job Details to display actual ticket numbers and remark messages",
"Improved navigation with direct \"Job page\" links"
]
},
{
"title": "Missed Run Detection",
"type": "improvement",
"changes": [
"Now includes ±1 hour tolerance window",
"Respects configured UI timezone"
]
}
]
},
{
"version": "v0.1.12",
"date": "2025-12-05",
"summary": "Dashboard improvements, inbox soft-delete, and enhanced parser support.",
"sections": [
{
"title": "Dashboard and UI",
"type": "improvement",
"changes": [
"Corrected dashboard counters for Expected, Missed, and Success (override) statuses",
"Fixed layout issues and improved label wrapping",
"Extended Job History with weekday labels and review metadata"
]
},
{
"title": "Inbox Soft-Delete",
"type": "feature",
"changes": [
"Introduced soft-delete for Inbox messages",
"Added Admin-only \"Deleted mails\" page with audit details",
"Added popup previews for deleted mails"
]
},
{
"title": "Parser Enhancements",
"type": "improvement",
"changes": [
"Improved Veeam parsing (Health Check, License Key)",
"Added Synology support (Active Backup, R-Sync, Account Protection)",
"Added R-Drive Image and Syncovery parsers"
]
}
]
},
{
"version": "v0.1.11",
"date": "2025-11-30",
"summary": "Major stability fixes and introduction of Run Checks page.",
"sections": [
{
"title": "Stability and Bug Fixes",
"type": "fixed",
"changes": [
"Fixed multiple page crashes caused by missing imports",
"Resolved Jinja2 template errors and SQL/runtime issues"
]
},
{
"title": "Run Checks Page",
"type": "feature",
"changes": [
"Introduced new Run Checks page to review job runs independently",
"Displays all unreviewed runs with no time-based filtering",
"Supports bulk review actions and per-job review via popups",
"Added admin-only features for audit and review management"
]
},
{
"title": "Timezone Support",
"type": "feature",
"changes": [
"Added configurable timezone setting in Settings",
"Updated all frontend date/time rendering to use configured timezone"
]
}
]
},
{
"version": "v0.1.10",
"date": "2025-11-25",
"summary": "Performance improvements and batch processing for large datasets.",
"sections": [
{
"title": "Performance and Stability",
"type": "improvement",
"changes": [
"Reworked Re-parse all to process in controlled batches",
"Added execution time guards to prevent timeouts",
"Optimized job-matching queries and database operations"
]
},
{
"title": "Job Matching and Parsing",
"type": "improvement",
"changes": [
"Fixed approved job imports to persist from_address",
"Improved Veeam Backup Job parsing with multi-line warnings/errors",
"Fixed regressions in backup object detection and storage"
]
},
{
"title": "Tickets and Overrides",
"type": "improvement",
"changes": [
"Introduced run-date scoped ticket activity",
"Implemented scoping for remarks",
"Improved override handling with immediate application"
]
}
]
},
{
"version": "v0.1.9",
"date": "2025-11-20",
"summary": "Changelog system improvements and code refactoring.",
"sections": [
{
"title": "Changelog System",
"type": "improvement",
"changes": [
"Migrated to structured, non-markdown format",
"Simplified rendering logic",
"Standardized formatting across all versions"
]
},
{
"title": "Code Refactoring",
"type": "improvement",
"changes": [
"Refactored large routes.py into multiple smaller modules",
"Introduced shared routes module for common imports",
"Fixed NameError issues after refactoring"
]
}
]
},
{
"version": "v0.1.8",
"date": "2025-11-15",
"summary": "Consistent job matching and auto-approval across all mail processing flows.",
"sections": [
{
"title": "Job Matching Improvements",
"type": "improvement",
"changes": [
"Introduced single shared job-matching helper based on full unique key",
"Updated manual inbox approval to reuse existing jobs",
"Aligned inbox Re-parse all auto-approve logic",
"Fixed automatic mail import auto-approve"
]
}
]
},
{
"version": "v0.1.7",
"date": "2025-11-10",
"summary": "Export/import functionality and parser enhancements.",
"sections": [
{
"title": "Job Export and Import",
"type": "feature",
"changes": [
"Introduced export and import functionality for approved jobs",
"Import process automatically creates missing customers",
"Updates existing jobs based on unique identity"
]
},
{
"title": "Parser Enhancements",
"type": "improvement",
"changes": [
"Improved Boxafe parsing (Shared Drives, Domain Accounts)",
"Added Synology Hyper Backup Dutch support",
"Added Veeam SOBR and Health Check support"
]
}
]
},
{
"version": "v0.1.6",
"date": "2025-11-05",
"summary": "Auto-approve fixes and centralized changelog.",
"sections": [
{
"title": "Bug Fixes",
"type": "fixed",
"changes": [
"Corrected auto-approve logic for automatic mail imports",
"Fixed Re-parse all to respect approved status",
"Fixed multiple Jinja2 template syntax errors"
]
},
{
"title": "Changelog Page",
"type": "feature",
"changes": [
"Introduced centralized Changelog page",
"Added to main navigation"
]
}
]
},
{
"version": "v0.1.5",
"date": "2025-10-30",
"summary": "Microsoft Graph restoration and application reset functionality.",
"sections": [
{
"title": "Microsoft Graph",
"type": "fixed",
"changes": [
"Restored Graph folder retrieval (fixed import error)",
"Fixed automatic mail importer signal-based timeout issues",
"Implemented missing backend logic for automatic imports"
]
},
{
"title": "Application Reset",
"type": "feature",
"changes": [
"Added Application Reset option in Settings",
"Full backend support for complete data wipe",
"Confirmation step to prevent accidental resets"
]
}
]
},
{
"version": "v0.1.4",
"date": "2025-10-25",
"summary": "Database migration stability and object parsing improvements.",
"sections": [
{
"title": "Database Stability",
"type": "fixed",
"changes": [
"Stabilized migrations by running in separate transaction scopes",
"Resolved backend startup 502 errors",
"Eliminated ResourceClosedError exceptions"
]
},
{
"title": "Object Parsing",
"type": "improvement",
"changes": [
"Aligned manual imports with Re-parse all behavior",
"Ensured consistent object detection across all import paths",
"Hardened against Microsoft Graph timeouts"
]
}
]
},
{
"version": "v0.1.3",
"date": "2025-10-20",
"summary": "Logging persistence and UI improvements.",
"sections": [
{
"title": "Logging",
"type": "fixed",
"changes": [
"Fixed logging persistence to database",
"Added pagination (20 entries per page)",
"Extended view to show all available log fields"
]
},
{
"title": "Jobs and Daily Jobs",
"type": "improvement",
"changes": [
"Standardized default sorting",
"Persisted Daily Jobs start date setting",
"Improved table readability and layout"
]
},
{
"title": "Tickets and Remarks",
"type": "feature",
"changes": [
"Added database schema for persistent tickets",
"Implemented Tickets page with tabbed navigation",
"Added indicators in Daily Jobs for active tickets/remarks"
]
}
]
},
{
"version": "v0.1.2",
"date": "2025-10-15",
"summary": "Parser support expansion and in-app logging system.",
"sections": [
{
"title": "Parser Support",
"type": "improvement",
"changes": [
"Extended Synology Hyper Backup parser (Strato HiDrive support)",
"Improved handling of successful runs without objects"
]
},
{
"title": "Administration",
"type": "feature",
"changes": [
"Introduced admin-only \"Delete all jobs\" action",
"Ensured related mails moved back to Inbox on job deletion",
"Fixed foreign key constraint issues"
]
},
{
"title": "Logging System",
"type": "feature",
"changes": [
"Moved to in-app AdminLog-based logging",
"Detailed logging per imported/auto-approved email",
"Summary logging at end of import runs"
]
},
{
"title": "Object Persistence",
"type": "improvement",
"changes": [
"Restored persistence after manual approval",
"Added maintenance action to backfill missing object links",
"Centralized object persistence logic"
]
}
]
}
]


@@ -558,14 +558,17 @@ class AutotaskClient:
return {"id": tid}
def update_ticket_resolution_safe(self, ticket_id: int, resolution_text: str) -> Dict[str, Any]:
"""Safely update the Ticket 'resolution' field without changing status.
"""Safely update the Ticket 'resolution' field with conditional status update.
Autotask Tickets require a full PUT update; therefore we must:
- GET /Tickets/{id} to retrieve current stabilising fields (including classification/routing)
- PUT /Tickets with those stabilising fields unchanged, and only update 'resolution'
- Query time entries for the ticket
- PUT /Tickets with stabilising fields and conditional status
Status logic (per API contract section 9):
- If NO time entries exist: set status to 5 (Complete)
- If time entries exist: keep current status unchanged
IMPORTANT:
- GET /Tickets/{id} returns the ticket object under the 'item' envelope in most tenants.
@@ -599,42 +602,65 @@ def update_ticket_resolution_safe(self, ticket_id: int, resolution_text: str) ->
if not isinstance(ticket, dict) or not ticket:
raise AutotaskError("Autotask did not return a ticket object.")
def _pick(d: Dict[str, Any], keys: List[str]) -> Any:
def _pick(d: Dict[str, Any], keys: List[str]) -> tuple[bool, Any]:
"""Pick first available field from possible field names.
Returns tuple: (found, value)
- found=True if field exists (even if value is None)
- found=False if field doesn't exist in dict
This allows us to distinguish between "field missing" vs "field is null",
which is critical for Autotask PUT payloads that require exact values.
"""
for k in keys:
if k in d and d.get(k) not in (None, ""):
return d.get(k)
return None
if k in d:
return (True, d[k])
return (False, None)
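The found/value distinction above can be illustrated with a minimal standalone version of the helper (the ticket dict and field names below are made up for illustration):

```python
from typing import Any, Dict, List, Tuple

def pick(d: Dict[str, Any], keys: List[str]) -> Tuple[bool, Any]:
    """Return (found, value): found=True if any key exists, even when its value is None."""
    for k in keys:
        if k in d:
            return (True, d[k])
    return (False, None)

ticket = {"issueType": None, "status": 8}

# Field present but null: found=True, value=None -> copy as-is into the PUT payload.
print(pick(ticket, ["issueType", "issueTypeID"]))        # (True, None)
# Field genuinely missing: found=False -> treat as a validation error.
print(pick(ticket, ["subIssueType", "subIssueTypeID"]))  # (False, None)
```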
# Required stabilising fields for safe resolution updates (validated via Postman tests)
resolved_issue_type = _pick(ticket, ["issueType", "issueTypeID", "issueTypeId"])
resolved_sub_issue_type = _pick(ticket, ["subIssueType", "subIssueTypeID", "subIssueTypeId"])
resolved_source = _pick(ticket, ["source", "sourceID", "sourceId"])
resolved_status = _pick(ticket, ["status", "statusID", "statusId"])
# Required stabilising fields for safe resolution updates (validated via Postman tests).
# Field names are camelCase as per API contract (docs/autotask_rest_api.md section 2.1).
# We must copy the EXACT values from GET response to PUT payload, even if null.
found_id, ticket_id = _pick(ticket, ["id"])
found_issue_type, resolved_issue_type = _pick(ticket, ["issueType", "issueTypeID", "issueTypeId"])
found_sub_issue_type, resolved_sub_issue_type = _pick(ticket, ["subIssueType", "subIssueTypeID", "subIssueTypeId"])
found_source, resolved_source = _pick(ticket, ["source", "sourceID", "sourceId"])
found_status, resolved_status = _pick(ticket, ["status", "statusID", "statusId"])
# Validate that required fields exist in the response
missing: List[str] = []
if _pick(ticket, ["id"]) in (None, ""):
if not found_id or ticket_id in (None, ""):
missing.append("id")
if resolved_issue_type in (None, ""):
missing.append("issueType")
if resolved_sub_issue_type in (None, ""):
missing.append("subIssueType")
if resolved_source in (None, ""):
missing.append("source")
if resolved_status in (None, ""):
if not found_status or resolved_status in (None, ""):
missing.append("status")
if not found_issue_type:
missing.append("issueType")
if not found_sub_issue_type:
missing.append("subIssueType")
if not found_source:
missing.append("source")
if missing:
raise AutotaskError(
"Cannot safely update ticket resolution because required fields are missing: " + ", ".join(missing)
)
# Check for time entries as per API contract section 9
# If no time entries exist, we can set status to 5 (Complete)
# If time entries exist, status remains unchanged
time_entries = self.query_time_entries_by_ticket_id(int(ticket_id))
has_time_entries = len(time_entries) > 0
# Determine final status based on time entry check
# Status 5 = Complete (sets completedDate and resolvedDateTime)
final_status = resolved_status if has_time_entries else 5
# Build payload with exact values from GET response (including null if that's what we got)
payload: Dict[str, Any] = {
"id": int(ticket.get("id")),
"id": int(ticket_id),
"issueType": resolved_issue_type,
"subIssueType": resolved_sub_issue_type,
"source": resolved_source,
# Keep status unchanged
"status": resolved_status,
"status": final_status,
"resolution": str(resolution_text or ""),
}
@@ -650,7 +676,7 @@ def update_ticket_resolution_safe(self, ticket_id: int, resolution_text: str) ->
]
for f in optional_fields:
if f in ticket:
payload[f] = ticket.get(f)
payload[f] = ticket[f]
return self.update_ticket(payload)
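The conditional-status rule used above can be isolated into a small pure function (status id 5 = Complete, as stated in the contract excerpt; the helper name is illustrative):

```python
COMPLETE_STATUS = 5  # Autotask 'Complete' (sets completedDate and resolvedDateTime)

def resolve_final_status(current_status: int, time_entries: list) -> int:
    """Per API contract section 9: close the ticket only when no time entries exist."""
    if time_entries:
        return current_status  # keep the ticket's status; someone logged work on it
    return COMPLETE_STATUS

print(resolve_final_status(8, []))           # 5 -> ticket completes
print(resolve_final_status(8, [{"id": 1}]))  # 8 -> status unchanged
```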
@@ -723,6 +749,7 @@ def update_ticket_resolution_safe(self, ticket_id: int, resolution_text: str) ->
return items[0]
return {}
def get_ticket_note(self, note_id: int) -> Dict[str, Any]:
"""Retrieve a TicketNote by ID via GET /TicketNotes/{id}."""
@@ -942,3 +969,63 @@ def update_ticket_resolution_safe(self, ticket_id: int, resolution_text: str) ->
if limit and isinstance(limit, int) and limit > 0:
return items[: int(limit)]
return items
def query_tickets_by_number(
self,
ticket_number: str,
*,
exclude_status_ids: Optional[List[int]] = None,
limit: int = 10,
) -> List[Dict[str, Any]]:
"""Query Tickets by ticket number across all companies.
Uses POST /Tickets/query.
This is useful for linking overarching issues that span multiple companies.
"""
tnum = (ticket_number or "").strip()
if not tnum:
return []
flt: List[Dict[str, Any]] = [
{"op": "eq", "field": "ticketNumber", "value": tnum},
]
ex: List[int] = []
for x in exclude_status_ids or []:
try:
v = int(x)
except Exception:
continue
if v > 0:
ex.append(v)
if ex:
flt.append({"op": "notIn", "field": "status", "value": ex})
data = self._request("POST", "Tickets/query", json_body={"filter": flt})
items = self._as_items_list(data)
# Respect limit if tenant returns more.
if limit and isinstance(limit, int) and limit > 0:
return items[: int(limit)]
return items
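For reference, the filter body that `query_tickets_by_number` posts to `Tickets/query` can be built in isolation (the ticket number and status ids below are made up):

```python
def build_ticket_number_filter(ticket_number, exclude_status_ids=None):
    """Mirror of the filter construction in query_tickets_by_number."""
    flt = [{"op": "eq", "field": "ticketNumber", "value": ticket_number}]
    # Keep only positive integer status ids, as the method above does.
    ex = [int(x) for x in (exclude_status_ids or []) if int(x) > 0]
    if ex:
        flt.append({"op": "notIn", "field": "status", "value": ex})
    return {"filter": flt}

print(build_ticket_number_filter("T20260205.0042", exclude_status_ids=[5, 20]))
```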
def query_time_entries_by_ticket_id(self, ticket_id: int) -> List[Dict[str, Any]]:
"""Query TimeEntries for a specific ticket.
Uses POST /TimeEntries/query as per API contract section 6.
Returns list of time entry items. Empty list if no time entries exist.
"""
try:
tid = int(ticket_id)
except Exception:
tid = 0
if tid <= 0:
return []
payload = {"filter": [{"op": "eq", "field": "ticketID", "value": tid}]}
data = self._request("POST", "TimeEntries/query", json_body=payload)
return self._as_items_list(data)


@@ -1,48 +1,13 @@
from .routes_shared import * # noqa: F401,F403
import markdown
GITEA_CHANGELOG_RAW_URL = (
"https://gitea.oskamp.info/ivooskamp/backupchecks/raw/branch/main/docs/changelog.md"
)
from ..changelog import CHANGELOG
@main_bp.route("/changelog")
@login_required
@roles_required("admin", "operator", "reporter", "viewer")
def changelog_page():
changelog_md = ""
changelog_html = ""
error = None
try:
resp = requests.get(
GITEA_CHANGELOG_RAW_URL,
timeout=10,
headers={"Accept": "text/plain, text/markdown; q=0.9, */*; q=0.1"},
)
if resp.status_code != 200:
raise RuntimeError(f"HTTP {resp.status_code}")
changelog_md = resp.text or ""
changelog_html = markdown.markdown(
changelog_md,
extensions=[
"fenced_code",
"tables",
"sane_lists",
"toc",
],
output_format="html5",
)
except Exception as exc: # pragma: no cover
error = f"Unable to load changelog from Gitea ({GITEA_CHANGELOG_RAW_URL}): {exc}"
return render_template(
"main/changelog.html",
changelog_md=changelog_md,
changelog_html=changelog_html,
changelog_error=error,
changelog_source_url=GITEA_CHANGELOG_RAW_URL,
changelog_versions=CHANGELOG,
)
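The `CHANGELOG` import replaces the remote-markdown fetch with a structured Python constant whose shape mirrors the JSON earlier in this commit (a sketch; only the keys visible there are assumed):

```python
# changelog.py -- structured changelog entries, newest first, rendered by changelog.html
CHANGELOG = [
    {
        "version": "v0.1.10",
        "date": "2025-11-25",
        "summary": "Performance improvements and batch processing for large datasets.",
        "sections": [
            {
                "title": "Performance and Stability",
                "type": "improvement",
                "changes": [
                    "Reworked Re-parse all to process in controlled batches",
                ],
            },
        ],
    },
]
```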


@@ -5,212 +5,6 @@ from .routes_shared import _format_datetime, _get_or_create_settings, _apply_ove
# A job is only marked Missed after the latest expected time plus this grace.
MISSED_GRACE_WINDOW = timedelta(hours=1)
# Job types that should never participate in schedule inference
_SKIP_SCHEDULE_TYPES = {
("veeam", "license key"),
("synology", "account protection"),
("synology", "updates"),
("qnap", "firmware update"),
("syncovery", "syncovery"),
}
def _batch_infer_schedules(job_ids: list[int], tz) -> dict[int, dict]:
"""Batch infer weekly schedules for multiple jobs in a single query.
Returns dict of job_id -> {weekday: [times]} schedule maps.
"""
MIN_OCCURRENCES = 3
if not job_ids:
return {}
# Load all historical runs for schedule inference in one query
try:
runs = (
JobRun.query
.filter(
JobRun.job_id.in_(job_ids),
JobRun.run_at.isnot(None),
JobRun.missed.is_(False),
JobRun.mail_message_id.isnot(None),
)
.order_by(JobRun.job_id, JobRun.run_at.desc())
.limit(len(job_ids) * 500) # ~500 runs per job max
.all()
)
except Exception:
runs = []
# Group runs by job_id
runs_by_job: dict[int, list] = {jid: [] for jid in job_ids}
for r in runs:
if r.job_id in runs_by_job and len(runs_by_job[r.job_id]) < 500:
runs_by_job[r.job_id].append(r)
# Process each job's runs
result = {}
for job_id in job_ids:
job_runs = runs_by_job.get(job_id, [])
schedule = {i: [] for i in range(7)}
if not job_runs:
result[job_id] = schedule
continue
counts = {i: {} for i in range(7)}
for r in job_runs:
if not r.run_at:
continue
dt = r.run_at
if tz is not None:
try:
if dt.tzinfo is None:
dt = dt.replace(tzinfo=datetime_module.timezone.utc).astimezone(tz)
else:
dt = dt.astimezone(tz)
except Exception:
pass
wd = dt.weekday()
minute_bucket = (dt.minute // 15) * 15
tstr = f"{dt.hour:02d}:{minute_bucket:02d}"
counts[wd][tstr] = int(counts[wd].get(tstr, 0)) + 1
for wd in range(7):
keep = [t for t, c in counts[wd].items() if int(c) >= MIN_OCCURRENCES]
schedule[wd] = sorted(keep)
result[job_id] = schedule
return result
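Schedule inference above groups run times into 15-minute buckets so small start-time jitter still counts toward the same slot; the bucketing step in isolation:

```python
def bucket_time(hour: int, minute: int) -> str:
    """Round a run time down to its 15-minute bucket, e.g. 03:37 -> '03:30'."""
    minute_bucket = (minute // 15) * 15
    return f"{hour:02d}:{minute_bucket:02d}"

print(bucket_time(3, 37))   # '03:30'
print(bucket_time(22, 0))   # '22:00'
print(bucket_time(22, 59))  # '22:45'
```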
def _batch_infer_monthly_schedules(job_ids: list[int], tz) -> dict[int, dict | None]:
"""Batch infer monthly schedules for multiple jobs.
Returns dict of job_id -> monthly schedule dict or None.
"""
MIN_OCCURRENCES = 3
if not job_ids:
return {}
# Load runs for monthly inference
try:
runs = (
JobRun.query
.filter(
JobRun.job_id.in_(job_ids),
JobRun.run_at.isnot(None),
JobRun.missed.is_(False),
JobRun.mail_message_id.isnot(None),
)
.order_by(JobRun.job_id, JobRun.run_at.asc())
.limit(len(job_ids) * 500)
.all()
)
except Exception:
runs = []
# Group runs by job_id
runs_by_job: dict[int, list] = {jid: [] for jid in job_ids}
for r in runs:
if r.job_id in runs_by_job and len(runs_by_job[r.job_id]) < 500:
runs_by_job[r.job_id].append(r)
result = {}
for job_id in job_ids:
job_runs = runs_by_job.get(job_id, [])
if len(job_runs) < MIN_OCCURRENCES:
result[job_id] = None
continue
# Convert to local time
local_dts = []
for r in job_runs:
if not r.run_at:
continue
dt = r.run_at
if tz is not None:
try:
if dt.tzinfo is None:
dt = dt.replace(tzinfo=datetime_module.timezone.utc).astimezone(tz)
else:
dt = dt.astimezone(tz)
except Exception:
pass
local_dts.append(dt)
if len(local_dts) < MIN_OCCURRENCES:
result[job_id] = None
continue
# Cadence heuristic
local_dts_sorted = sorted(local_dts)
gaps = []
for i in range(1, len(local_dts_sorted)):
try:
delta_days = (local_dts_sorted[i] - local_dts_sorted[i - 1]).total_seconds() / 86400.0
if delta_days > 0:
gaps.append(delta_days)
except Exception:
continue
if gaps:
gaps_sorted = sorted(gaps)
median_gap = gaps_sorted[len(gaps_sorted) // 2]
if median_gap < 20.0:
result[job_id] = None
continue
# Count day-of-month occurrences
dom_counts = {}
time_counts_by_dom = {}
for dt in local_dts:
dom = int(dt.day)
dom_counts[dom] = int(dom_counts.get(dom, 0)) + 1
minute_bucket = (dt.minute // 15) * 15
tstr = f"{int(dt.hour):02d}:{int(minute_bucket):02d}"
if dom not in time_counts_by_dom:
time_counts_by_dom[dom] = {}
time_counts_by_dom[dom][tstr] = int(time_counts_by_dom[dom].get(tstr, 0)) + 1
best_dom = None
best_dom_count = 0
for dom, c in dom_counts.items():
if int(c) >= MIN_OCCURRENCES and int(c) > best_dom_count:
best_dom = int(dom)
best_dom_count = int(c)
if best_dom is None:
result[job_id] = None
continue
time_counts = time_counts_by_dom.get(best_dom) or {}
keep_times = [t for t, c in time_counts.items() if int(c) >= MIN_OCCURRENCES]
if not keep_times:
best_t = None
best_c = 0
for t, c in time_counts.items():
if int(c) > best_c:
best_t = t
best_c = int(c)
if best_t:
keep_times = [best_t]
keep_times = sorted(set(keep_times))
if not keep_times:
result[job_id] = None
continue
result[job_id] = {"day_of_month": int(best_dom), "times": keep_times}
return result
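The cadence heuristic above rejects monthly inference when the median gap between runs is under 20 days; a standalone sketch of that check:

```python
def looks_monthly(gap_days: list, threshold: float = 20.0) -> bool:
    """Median-gap heuristic from _batch_infer_monthly_schedules."""
    if not gap_days:
        return True  # no gaps to contradict a monthly cadence; matches the code path above
    gaps_sorted = sorted(gap_days)
    median_gap = gaps_sorted[len(gaps_sorted) // 2]
    return median_gap >= threshold

print(looks_monthly([30.1, 29.9, 31.0]))  # True  -> monthly cadence plausible
print(looks_monthly([1.0, 1.0, 2.0]))     # False -> daily job, skip monthly inference
```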
@main_bp.route("/daily-jobs")
@login_required
@roles_required("admin", "operator", "viewer")
@@ -236,6 +30,8 @@ def daily_jobs():
missed_start_date = getattr(settings, "daily_jobs_start_date", None)
# Day window: treat run_at as UTC-naive timestamps stored in UTC (existing behavior)
# Note: if your DB stores local-naive timestamps, this still works because the same logic
# is used consistently in schedule inference and details.
if tz:
local_midnight = datetime(
year=target_date.year,
@@ -278,7 +74,6 @@ def daily_jobs():
weekday_idx = target_date.weekday() # 0=Mon..6=Sun
# Load all non-archived jobs with customer eagerly loaded
jobs = (
Job.query.join(Customer, isouter=True)
.filter(Job.archived.is_(False))
@@ -286,112 +81,18 @@ def daily_jobs():
.all()
)
# Filter out job types that should skip schedule inference
eligible_jobs = []
for job in jobs:
bs = (job.backup_software or '').strip().lower()
bt = (job.backup_type or '').strip().lower()
if (bs, bt) not in _SKIP_SCHEDULE_TYPES:
eligible_jobs.append(job)
job_ids = [j.id for j in eligible_jobs]
# Batch load all today's runs for all jobs in one query
all_runs_today = []
if job_ids:
try:
all_runs_today = (
JobRun.query
.filter(
JobRun.job_id.in_(job_ids),
JobRun.run_at >= start_of_day,
JobRun.run_at < end_of_day,
)
.order_by(JobRun.job_id, JobRun.run_at.asc())
.all()
)
except Exception:
all_runs_today = []
# Group runs by job_id
runs_by_job: dict[int, list] = {jid: [] for jid in job_ids}
for r in all_runs_today:
if r.job_id in runs_by_job:
runs_by_job[r.job_id].append(r)
# Batch infer weekly schedules
schedule_maps = _batch_infer_schedules(job_ids, tz)
# For jobs without weekly schedule, batch infer monthly
jobs_needing_monthly = [
jid for jid in job_ids
if not (schedule_maps.get(jid, {}).get(weekday_idx) or [])
]
monthly_schedules = _batch_infer_monthly_schedules(jobs_needing_monthly, tz) if jobs_needing_monthly else {}
# Batch load ticket indicators
job_has_ticket: dict[int, bool] = {jid: False for jid in job_ids}
job_has_remark: dict[int, bool] = {jid: False for jid in job_ids}
if job_ids:
try:
ticket_job_ids = db.session.execute(
text(
"""
SELECT DISTINCT ts.job_id
FROM tickets t
JOIN ticket_scopes ts ON ts.ticket_id = t.id
WHERE ts.job_id = ANY(:job_ids)
AND t.active_from_date <= :target_date
AND (
t.resolved_at IS NULL
OR ((t.resolved_at AT TIME ZONE 'UTC' AT TIME ZONE 'Europe/Amsterdam')::date) >= :target_date
)
"""
),
{"job_ids": job_ids, "target_date": target_date},
).scalars().all()
for jid in ticket_job_ids:
job_has_ticket[jid] = True
except Exception:
pass
try:
remark_job_ids = db.session.execute(
text(
"""
SELECT DISTINCT rs.job_id
FROM remarks r
JOIN remark_scopes rs ON rs.remark_id = r.id
WHERE rs.job_id = ANY(:job_ids)
AND COALESCE(
r.active_from_date,
((r.start_date AT TIME ZONE 'UTC' AT TIME ZONE 'Europe/Amsterdam')::date)
) <= :target_date
AND (
r.resolved_at IS NULL
OR ((r.resolved_at AT TIME ZONE 'UTC' AT TIME ZONE 'Europe/Amsterdam')::date) >= :target_date
)
"""
),
{"job_ids": job_ids, "target_date": target_date},
).scalars().all()
for jid in remark_job_ids:
job_has_remark[jid] = True
except Exception:
pass
rows = []
for job in eligible_jobs:
schedule_map = schedule_maps.get(job.id, {})
for job in jobs:
schedule_map = _infer_schedule_map_from_runs(job.id)
expected_times = schedule_map.get(weekday_idx) or []
# If no weekly schedule, try monthly
# If no weekly schedule is inferred (e.g. monthly jobs), try monthly inference.
if not expected_times:
monthly = monthly_schedules.get(job.id)
monthly = _infer_monthly_schedule_from_runs(job.id)
if monthly:
dom = int(monthly.get("day_of_month") or 0)
mtimes = monthly.get("times") or []
# For months shorter than dom, treat the last day of month as the scheduled day.
try:
import calendar as _calendar
last_dom = _calendar.monthrange(target_date.year, target_date.month)[1]
@@ -404,14 +105,69 @@ def daily_jobs():
if not expected_times:
continue
runs_for_day = runs_by_job.get(job.id, [])
runs_for_day = (
JobRun.query.filter(
JobRun.job_id == job.id,
JobRun.run_at >= start_of_day,
JobRun.run_at < end_of_day,
)
.order_by(JobRun.run_at.asc())
.all()
)
run_count = len(runs_for_day)
customer_name = job.customer.name if job.customer else ""
# Use pre-loaded ticket/remark indicators
has_active_ticket = job_has_ticket.get(job.id, False)
has_active_remark = job_has_remark.get(job.id, False)
# Ticket/Remark indicators for this job on this date
# Tickets: active-from date should apply to subsequent runs until resolved.
has_active_ticket = False
has_active_remark = False
try:
t_exists = db.session.execute(
text(
"""
SELECT 1
FROM tickets t
JOIN ticket_scopes ts ON ts.ticket_id = t.id
WHERE ts.job_id = :job_id
AND t.active_from_date <= :target_date
AND (
t.resolved_at IS NULL
OR ((t.resolved_at AT TIME ZONE 'UTC' AT TIME ZONE 'Europe/Amsterdam')::date) >= :target_date
)
LIMIT 1
"""
),
{"job_id": job.id, "target_date": target_date},
).first()
has_active_ticket = bool(t_exists)
r_exists = db.session.execute(
text(
"""
SELECT 1
FROM remarks r
JOIN remark_scopes rs ON rs.remark_id = r.id
WHERE rs.job_id = :job_id
AND COALESCE(
r.active_from_date,
((r.start_date AT TIME ZONE 'UTC' AT TIME ZONE 'Europe/Amsterdam')::date)
) <= :target_date
AND (
r.resolved_at IS NULL
OR ((r.resolved_at AT TIME ZONE 'UTC' AT TIME ZONE 'Europe/Amsterdam')::date) >= :target_date
)
LIMIT 1
"""
),
{"job_id": job.id, "target_date": target_date},
).first()
has_active_remark = bool(r_exists)
except Exception:
has_active_ticket = False
has_active_remark = False
# We show a single row per job for today.
last_remark_excerpt = ""


@@ -1576,6 +1576,11 @@ def api_run_checks_autotask_existing_tickets():
"""List open (non-terminal) Autotask tickets for the selected run's customer.
Phase 2.2: used by the Run Checks modal to link an existing PSA ticket.
Search behaviour:
- Always searches tickets for the customer's company
- If search term looks like a ticket number (starts with T + digits), also searches
across all companies to enable linking overarching issues
"""
try:
@@ -1640,20 +1645,43 @@ def api_run_checks_autotask_existing_tickets():
# Best-effort; list will still work without labels.
pass
# First: query tickets for this customer's company
tickets = client.query_tickets_for_company(
int(customer.autotask_company_id),
search=q,
exclude_status_ids=sorted(AUTOTASK_TERMINAL_STATUS_IDS),
limit=75,
)
# Second: if search looks like a ticket number, also search across all companies
# This allows linking overarching issues that span multiple companies
cross_company_tickets = []
if q and q.upper().startswith("T") and any(ch.isdigit() for ch in q):
try:
cross_company_tickets = client.query_tickets_by_number(
q,
exclude_status_ids=sorted(AUTOTASK_TERMINAL_STATUS_IDS),
limit=10,
)
except Exception:
# Best-effort; main company query already succeeded
pass
except Exception as exc:
return jsonify({"status": "error", "message": f"Autotask ticket lookup failed: {exc}"}), 400
# Combine and deduplicate results
seen_ids = set()
items = []
for t in tickets or []:
def add_ticket(t):
if not isinstance(t, dict):
continue
return
tid = t.get("id")
if tid in seen_ids:
return
seen_ids.add(tid)
tnum = (t.get("ticketNumber") or t.get("number") or "")
title = (t.get("title") or "")
st = t.get("status")
@@ -1672,6 +1700,14 @@ def api_run_checks_autotask_existing_tickets():
}
)
# Add company tickets first (primary results)
for t in tickets or []:
add_ticket(t)
# Then add cross-company tickets (secondary results for ticket number search)
for t in cross_company_tickets or []:
add_ticket(t)
# Sort: newest-ish first. Autotask query ordering isn't guaranteed, so we provide a stable sort.
items.sort(key=lambda x: (x.get("ticketNumber") or ""), reverse=True)
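The combine-and-deduplicate step can be sketched with plain dicts (ticket ids and numbers below are made up):

```python
def merge_ticket_lists(primary, secondary):
    """Company results first, then cross-company hits; drop duplicates by id; sort newest number first."""
    seen_ids = set()
    items = []
    for t in list(primary) + list(secondary):
        if not isinstance(t, dict) or t.get("id") in seen_ids:
            continue
        seen_ids.add(t.get("id"))
        items.append(t)
    # Autotask query ordering isn't guaranteed, so provide a stable sort.
    items.sort(key=lambda x: (x.get("ticketNumber") or ""), reverse=True)
    return items

company = [{"id": 1, "ticketNumber": "T20260201.0001"}]
cross = [{"id": 1, "ticketNumber": "T20260201.0001"}, {"id": 2, "ticketNumber": "T20260205.0007"}]
print(merge_ticket_lists(company, cross))  # duplicate id 1 kept once; T20260205.0007 sorts first
```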
@@ -1814,8 +1850,12 @@ def api_run_checks_autotask_link_existing_ticket():
def api_run_checks_autotask_resolve_note():
"""Post a user-visible 'should be resolved' update to an existing Autotask ticket.
This step does NOT close the ticket in Autotask.
Status update behaviour (per API contract section 9):
- If NO time entries exist: ticket is closed (status 5 = Complete)
- If time entries exist: ticket remains open
Primary behaviour: create a Ticket note via POST /Tickets/{id}/Notes so the message is clearly visible.
Then updates the ticket resolution field which triggers the conditional status update.
Fallback behaviour: if TicketNote create is not supported (HTTP 404), append the marker text
to the Ticket description via PUT /Tickets and verify persistence.
"""
@@ -1847,6 +1887,19 @@ def api_run_checks_autotask_resolve_note():
if ticket_id <= 0:
return jsonify({"status": "error", "message": "Run has an invalid Autotask ticket id."}), 400
try:
client = _build_autotask_client_from_settings()
except Exception as exc:
return jsonify({"status": "error", "message": f"Autotask client setup failed: {exc}"}), 400
# Check for time entries to determine ticket closure status
# Per API contract section 9: ticket closes only if no time entries exist
try:
time_entries = client.query_time_entries_by_ticket_id(ticket_id)
has_time_entries = len(time_entries) > 0
except Exception:
has_time_entries = False # Assume no time entries if query fails
tz_name = _get_ui_timezone_name()
tz = _get_ui_timezone()
now_utc = datetime.utcnow().replace(tzinfo=timezone.utc)
@@ -1855,19 +1908,20 @@ def api_run_checks_autotask_resolve_note():
actor = (getattr(current_user, "email", None) or getattr(current_user, "username", None) or "operator")
ticket_number = str(getattr(run, "autotask_ticket_number", "") or "").strip()
# Build dynamic message based on time entry check
marker = "[Backupchecks] Marked as resolved in Backupchecks"
if has_time_entries:
status_note = "(ticket remains open in Autotask due to existing time entries)"
else:
status_note = "(ticket will be closed in Autotask)"
body = (
f"{marker} (ticket remains open in Autotask).\n"
f"{marker} {status_note}.\n"
f"Time: {now} ({tz_name})\n"
f"By: {actor}\n"
+ (f"Ticket: {ticket_number}\n" if ticket_number else "")
)
try:
client = _build_autotask_client_from_settings()
except Exception as exc:
return jsonify({"status": "error", "message": f"Autotask client setup failed: {exc}"}), 400
# 1) Preferred: create an explicit TicketNote (user-visible update)
try:
note_payload = {

View File

@@ -408,6 +408,7 @@ def settings():
if request.method == "POST":
autotask_form_touched = any(str(k).startswith("autotask_") for k in (request.form or {}).keys())
import_form_touched = any(str(k).startswith("auto_import_") or str(k).startswith("manual_import_") or str(k).startswith("ingest_eml_") for k in (request.form or {}).keys())
# NOTE: The Settings UI has multiple tabs with separate forms.
# Only update values that are present in the submitted form, to avoid
@@ -505,7 +506,9 @@ def settings():
settings.daily_jobs_start_date = None
# Import configuration
if "auto_import_enabled" in request.form:
# Checkbox: only update when any import field is present (form was submitted)
# Unchecked checkboxes are not sent by browsers, so check import_form_touched
if import_form_touched:
settings.auto_import_enabled = bool(request.form.get("auto_import_enabled"))
if "auto_import_interval_minutes" in request.form:
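The unchecked-checkbox pitfall the comment describes can be demonstrated without Flask (the form dicts and setting name below are illustrative):

```python
# Browsers omit unchecked checkboxes from the POST body entirely, so
# `"auto_import_enabled" in form` cannot distinguish "box unchecked" from
# "a different settings tab was submitted". Guard on any field belonging
# to the same form instead.
def apply_auto_import(settings: dict, form: dict) -> None:
    import_form_touched = any(
        k.startswith(("auto_import_", "manual_import_", "ingest_eml_")) for k in form
    )
    if import_form_touched:
        settings["auto_import_enabled"] = bool(form.get("auto_import_enabled"))

settings = {"auto_import_enabled": True}
apply_auto_import(settings, {"autotask_api_user": "x"})              # Autotask tab submitted
print(settings)  # {'auto_import_enabled': True} -> untouched
apply_auto_import(settings, {"auto_import_interval_minutes": "15"})  # import tab, box unchecked
print(settings)  # {'auto_import_enabled': False} -> correctly disabled
```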


@@ -534,18 +534,13 @@ def _recompute_override_flags_for_runs(job_ids: list[int] | None = None, start_a
except Exception:
runs = []
# Batch load all jobs to avoid N+1 queries
job_ids = {run.job_id for run in runs if run.job_id}
jobs_by_id = {}
if job_ids:
try:
jobs_by_id = {j.id: j for j in Job.query.filter(Job.id.in_(job_ids)).all()}
except Exception:
jobs_by_id = {}
updated = 0
for run in runs:
job = jobs_by_id.get(run.job_id)
job = None
try:
job = Job.query.get(run.job_id)
except Exception:
job = None
if not job:
continue


@@ -172,6 +172,7 @@ def migrate_system_settings_ui_timezone() -> None:
except Exception as exc:
print(f"[migrations] Failed to migrate system_settings.ui_timezone: {exc}")
def migrate_system_settings_autotask_integration() -> None:
"""Add Autotask integration columns to system_settings if missing."""
@@ -248,8 +249,138 @@ def migrate_customers_autotask_company_mapping() -> None:
print(f"[migrations] Failed to migrate customers autotask company mapping columns: {exc}")
def migrate_tickets_resolved_origin() -> None:
"""Add resolved_origin column to tickets if missing.
This column stores the origin of the resolution (psa | backupchecks).
"""
table = "tickets"
column = "resolved_origin"
try:
engine = db.get_engine()
except Exception as exc:
print(f"[migrations] Could not get engine for tickets resolved_origin migration: {exc}")
return
try:
if _column_exists(table, column):
print("[migrations] tickets.resolved_origin already exists.")
return
with engine.begin() as conn:
conn.execute(text(f'ALTER TABLE "{table}" ADD COLUMN {column} VARCHAR(32)'))
print("[migrations] migrate_tickets_resolved_origin completed.")
except Exception as exc:
print(f"[migrations] Failed to migrate tickets.resolved_origin: {exc}")
def migrate_job_runs_autotask_ticket_fields() -> None:
"""Add Autotask ticket linkage fields to job_runs if missing.
Columns:
- job_runs.autotask_ticket_id (INTEGER NULL)
- job_runs.autotask_ticket_number (VARCHAR(64) NULL)
- job_runs.autotask_ticket_created_at (TIMESTAMP NULL)
- job_runs.autotask_ticket_created_by_user_id (INTEGER NULL, FK users.id)
"""
table = "job_runs"
try:
engine = db.get_engine()
except Exception as exc:
print(f"[migrations] Could not get engine for job_runs Autotask ticket migration: {exc}")
return
try:
with engine.begin() as conn:
existing = _get_table_columns(conn, table)
if "autotask_ticket_id" not in existing:
print("[migrations] Adding job_runs.autotask_ticket_id column...")
conn.execute(text(f'ALTER TABLE "{table}" ADD COLUMN autotask_ticket_id INTEGER'))
if "autotask_ticket_number" not in existing:
print("[migrations] Adding job_runs.autotask_ticket_number column...")
conn.execute(text(f'ALTER TABLE "{table}" ADD COLUMN autotask_ticket_number VARCHAR(64)'))
if "autotask_ticket_created_at" not in existing:
print("[migrations] Adding job_runs.autotask_ticket_created_at column...")
conn.execute(text(f'ALTER TABLE "{table}" ADD COLUMN autotask_ticket_created_at TIMESTAMP'))
if "autotask_ticket_created_by_user_id" not in existing:
print("[migrations] Adding job_runs.autotask_ticket_created_by_user_id column...")
conn.execute(text(f'ALTER TABLE "{table}" ADD COLUMN autotask_ticket_created_by_user_id INTEGER'))
print("[migrations] migrate_job_runs_autotask_ticket_fields completed.")
except Exception as exc:
print(f"[migrations] Failed to migrate job_runs Autotask ticket fields: {exc}")
def migrate_job_runs_autotask_ticket_deleted_fields() -> None:
"""Add Autotask deleted ticket tracking fields to job_runs if missing.
Columns:
- job_runs.autotask_ticket_deleted_at (TIMESTAMP NULL)
- job_runs.autotask_ticket_deleted_by_resource_id (INTEGER NULL)
"""
table = "job_runs"
try:
engine = db.get_engine()
except Exception as exc:
print(f"[migrations] Could not get engine for job_runs Autotask deleted fields migration: {exc}")
return
try:
with engine.begin() as conn:
existing = _get_table_columns(conn, table)
if "autotask_ticket_deleted_at" not in existing:
print("[migrations] Adding job_runs.autotask_ticket_deleted_at column...")
conn.execute(text(f'ALTER TABLE "{table}" ADD COLUMN autotask_ticket_deleted_at TIMESTAMP'))
if "autotask_ticket_deleted_by_resource_id" not in existing:
print("[migrations] Adding job_runs.autotask_ticket_deleted_by_resource_id column...")
conn.execute(text(f'ALTER TABLE "{table}" ADD COLUMN autotask_ticket_deleted_by_resource_id INTEGER'))
print("[migrations] migrate_job_runs_autotask_ticket_deleted_fields completed.")
except Exception as exc:
print(f"[migrations] Failed to migrate job_runs Autotask deleted fields: {exc}")
def migrate_job_runs_autotask_ticket_deleted_by_name_fields() -> None:
"""Add Autotask deleted ticket by-name fields to job_runs if missing.
Columns:
- job_runs.autotask_ticket_deleted_by_first_name (VARCHAR(255) NULL)
- job_runs.autotask_ticket_deleted_by_last_name (VARCHAR(255) NULL)
"""
table = "job_runs"
try:
engine = db.get_engine()
except Exception as exc:
print(f"[migrations] Could not get engine for job_runs Autotask deleted by-name migration: {exc}")
return
try:
with engine.begin() as conn:
existing = _get_table_columns(conn, table)
if "autotask_ticket_deleted_by_first_name" not in existing:
print("[migrations] Adding job_runs.autotask_ticket_deleted_by_first_name column...")
conn.execute(text(f'ALTER TABLE "{table}" ADD COLUMN autotask_ticket_deleted_by_first_name VARCHAR(255)'))
if "autotask_ticket_deleted_by_last_name" not in existing:
print("[migrations] Adding job_runs.autotask_ticket_deleted_by_last_name column...")
conn.execute(text(f'ALTER TABLE "{table}" ADD COLUMN autotask_ticket_deleted_by_last_name VARCHAR(255)'))
print("[migrations] migrate_job_runs_autotask_ticket_deleted_by_name_fields completed.")
except Exception as exc:
print(f"[migrations] Failed to migrate job_runs Autotask deleted by-name fields: {exc}")
def migrate_mail_messages_columns() -> None:
@@ -935,147 +1066,6 @@ def run_migrations() -> None:
print("[migrations] All migrations completed.")
def migrate_job_runs_autotask_ticket_fields() -> None:
"""Add Autotask ticket linkage fields to job_runs if missing.
Columns:
- job_runs.autotask_ticket_id (INTEGER NULL)
- job_runs.autotask_ticket_number (VARCHAR(64) NULL)
- job_runs.autotask_ticket_created_at (TIMESTAMP NULL)
- job_runs.autotask_ticket_created_by_user_id (INTEGER NULL, FK users.id)
"""
table = "job_runs"
try:
engine = db.get_engine()
except Exception as exc:
print(f"[migrations] Could not get engine for job_runs Autotask ticket migration: {exc}")
return
try:
with engine.begin() as conn:
cols = _get_table_columns(conn, table)
if not cols:
print("[migrations] job_runs table not found; skipping migrate_job_runs_autotask_ticket_fields.")
return
if "autotask_ticket_id" not in cols:
print("[migrations] Adding job_runs.autotask_ticket_id column...")
conn.execute(text('ALTER TABLE "job_runs" ADD COLUMN autotask_ticket_id INTEGER'))
if "autotask_ticket_number" not in cols:
print("[migrations] Adding job_runs.autotask_ticket_number column...")
conn.execute(text('ALTER TABLE "job_runs" ADD COLUMN autotask_ticket_number VARCHAR(64)'))
if "autotask_ticket_created_at" not in cols:
print("[migrations] Adding job_runs.autotask_ticket_created_at column...")
conn.execute(text('ALTER TABLE "job_runs" ADD COLUMN autotask_ticket_created_at TIMESTAMP'))
if "autotask_ticket_created_by_user_id" not in cols:
print("[migrations] Adding job_runs.autotask_ticket_created_by_user_id column...")
conn.execute(text('ALTER TABLE "job_runs" ADD COLUMN autotask_ticket_created_by_user_id INTEGER'))
try:
conn.execute(
text(
'ALTER TABLE "job_runs" '
'ADD CONSTRAINT job_runs_autotask_ticket_created_by_user_id_fkey '
'FOREIGN KEY (autotask_ticket_created_by_user_id) REFERENCES users(id) '
'ON DELETE SET NULL'
)
)
except Exception as exc:
print(
f"[migrations] Could not add FK job_runs_autotask_ticket_created_by_user_id -> users.id (continuing): {exc}"
)
conn.execute(text('CREATE INDEX IF NOT EXISTS idx_job_runs_autotask_ticket_id ON "job_runs" (autotask_ticket_id)'))
except Exception as exc:
print(f"[migrations] migrate_job_runs_autotask_ticket_fields failed (continuing): {exc}")
return
print("[migrations] migrate_job_runs_autotask_ticket_fields completed.")
def migrate_job_runs_autotask_ticket_deleted_fields() -> None:
"""Add Autotask deleted ticket audit fields to job_runs if missing.
Columns:
- job_runs.autotask_ticket_deleted_at (TIMESTAMP NULL)
- job_runs.autotask_ticket_deleted_by_resource_id (INTEGER NULL)
"""
table = "job_runs"
try:
engine = db.get_engine()
except Exception as exc:
print(f"[migrations] Could not get engine for job_runs Autotask ticket deleted fields migration: {exc}")
return
try:
with engine.begin() as conn:
cols = _get_table_columns(conn, table)
if not cols:
print("[migrations] job_runs table not found; skipping migrate_job_runs_autotask_ticket_deleted_fields.")
return
if "autotask_ticket_deleted_at" not in cols:
print("[migrations] Adding job_runs.autotask_ticket_deleted_at column...")
conn.execute(text('ALTER TABLE "job_runs" ADD COLUMN autotask_ticket_deleted_at TIMESTAMP'))
if "autotask_ticket_deleted_by_resource_id" not in cols:
print("[migrations] Adding job_runs.autotask_ticket_deleted_by_resource_id column...")
conn.execute(text('ALTER TABLE "job_runs" ADD COLUMN autotask_ticket_deleted_by_resource_id INTEGER'))
conn.execute(text('CREATE INDEX IF NOT EXISTS idx_job_runs_autotask_ticket_deleted_by_resource_id ON "job_runs" (autotask_ticket_deleted_by_resource_id)'))
conn.execute(text('CREATE INDEX IF NOT EXISTS idx_job_runs_autotask_ticket_deleted_at ON "job_runs" (autotask_ticket_deleted_at)'))
except Exception as exc:
print(f"[migrations] migrate_job_runs_autotask_ticket_deleted_fields failed (continuing): {exc}")
return
print("[migrations] migrate_job_runs_autotask_ticket_deleted_fields completed.")
def migrate_job_runs_autotask_ticket_deleted_by_name_fields() -> None:
"""Add Autotask deleted-by name audit fields to job_runs if missing.
Columns:
- job_runs.autotask_ticket_deleted_by_first_name (VARCHAR(255) NULL)
- job_runs.autotask_ticket_deleted_by_last_name (VARCHAR(255) NULL)
"""
table = "job_runs"
try:
engine = db.get_engine()
except Exception as exc:
print(f"[migrations] Could not get engine for job_runs Autotask deleted-by name fields migration: {exc}")
return
try:
with engine.begin() as conn:
cols = _get_table_columns(conn, table)
if not cols:
print("[migrations] job_runs table not found; skipping migrate_job_runs_autotask_ticket_deleted_by_name_fields.")
return
if "autotask_ticket_deleted_by_first_name" not in cols:
print("[migrations] Adding job_runs.autotask_ticket_deleted_by_first_name column...")
conn.execute(text('ALTER TABLE "job_runs" ADD COLUMN autotask_ticket_deleted_by_first_name VARCHAR(255)'))
if "autotask_ticket_deleted_by_last_name" not in cols:
print("[migrations] Adding job_runs.autotask_ticket_deleted_by_last_name column...")
conn.execute(text('ALTER TABLE "job_runs" ADD COLUMN autotask_ticket_deleted_by_last_name VARCHAR(255)'))
conn.execute(text('CREATE INDEX IF NOT EXISTS idx_job_runs_autotask_ticket_deleted_by_first_name ON "job_runs" (autotask_ticket_deleted_by_first_name)'))
conn.execute(text('CREATE INDEX IF NOT EXISTS idx_job_runs_autotask_ticket_deleted_by_last_name ON "job_runs" (autotask_ticket_deleted_by_last_name)'))
except Exception as exc:
print(f"[migrations] migrate_job_runs_autotask_ticket_deleted_by_name_fields failed (continuing): {exc}")
return
print("[migrations] migrate_job_runs_autotask_ticket_deleted_by_name_fields completed.")
def migrate_jobs_archiving() -> None:
"""Add archiving columns to jobs if missing.
@@ -1434,34 +1424,6 @@ def migrate_tickets_active_from_date() -> None:
def migrate_tickets_resolved_origin() -> None:
"""Add tickets.resolved_origin column if missing.
Used to show whether a ticket was resolved by PSA polling or manually inside Backupchecks.
"""
table = "tickets"
try:
engine = db.get_engine()
except Exception as exc:
print(f"[migrations] Could not get engine for tickets resolved_origin migration: {exc}")
return
try:
with engine.begin() as conn:
cols = _get_table_columns(conn, table)
if not cols:
print("[migrations] tickets table not found; skipping migrate_tickets_resolved_origin.")
return
if "resolved_origin" not in cols:
print("[migrations] Adding tickets.resolved_origin column...")
conn.execute(text('ALTER TABLE "tickets" ADD COLUMN resolved_origin VARCHAR(32)'))
except Exception as exc:
print(f"[migrations] tickets resolved_origin migration failed (continuing): {exc}")
print("[migrations] migrate_tickets_resolved_origin completed.")
def migrate_mail_messages_overall_message() -> None:
"""Add overall_message column to mail_messages if missing."""
table = "mail_messages"


@@ -253,12 +253,6 @@ class Job(db.Model):
class JobRun(db.Model):
__tablename__ = "job_runs"
__table_args__ = (
db.Index("idx_job_run_job_id", "job_id"),
db.Index("idx_job_run_job_id_run_at", "job_id", "run_at"),
db.Index("idx_job_run_job_id_reviewed_at", "job_id", "reviewed_at"),
db.Index("idx_job_run_mail_message_id", "mail_message_id"),
)
id = db.Column(db.Integer, primary_key=True)
@@ -297,8 +291,6 @@ class JobRun(db.Model):
autotask_ticket_deleted_by_first_name = db.Column(db.String(255), nullable=True)
autotask_ticket_deleted_by_last_name = db.Column(db.String(255), nullable=True)
created_at = db.Column(db.DateTime, default=datetime.utcnow, nullable=False)
updated_at = db.Column(
db.DateTime, default=datetime.utcnow, onupdate=datetime.utcnow, nullable=False
@@ -310,7 +302,6 @@ class JobRun(db.Model):
)
reviewed_by = db.relationship("User", foreign_keys=[reviewed_by_user_id])
autotask_ticket_created_by = db.relationship("User", foreign_keys=[autotask_ticket_created_by_user_id])
@@ -352,11 +343,6 @@ class JobObject(db.Model):
class MailMessage(db.Model):
__tablename__ = "mail_messages"
__table_args__ = (
db.Index("idx_mail_message_job_id", "job_id"),
db.Index("idx_mail_message_location", "location"),
db.Index("idx_mail_message_job_id_location", "job_id", "location"),
)
id = db.Column(db.Integer, primary_key=True)
@@ -416,9 +402,6 @@ class MailMessage(db.Model):
class MailObject(db.Model):
__tablename__ = "mail_objects"
__table_args__ = (
db.Index("idx_mail_object_mail_message_id", "mail_message_id"),
)
id = db.Column(db.Integer, primary_key=True)
mail_message_id = db.Column(db.Integer, db.ForeignKey("mail_messages.id"), nullable=False)
@@ -453,10 +436,6 @@ class Ticket(db.Model):
class TicketScope(db.Model):
__tablename__ = "ticket_scopes"
__table_args__ = (
db.Index("idx_ticket_scope_ticket_id", "ticket_id"),
db.Index("idx_ticket_scope_job_id", "job_id"),
)
id = db.Column(db.Integer, primary_key=True)
ticket_id = db.Column(db.Integer, db.ForeignKey("tickets.id"), nullable=False)
scope_type = db.Column(db.String(32), nullable=False)
@@ -498,10 +477,6 @@ class Remark(db.Model):
class RemarkScope(db.Model):
__tablename__ = "remark_scopes"
__table_args__ = (
db.Index("idx_remark_scope_remark_id", "remark_id"),
db.Index("idx_remark_scope_job_id", "job_id"),
)
id = db.Column(db.Integer, primary_key=True)
remark_id = db.Column(db.Integer, db.ForeignKey("remarks.id"), nullable=False)
scope_type = db.Column(db.String(32), nullable=False)


@@ -0,0 +1,212 @@
/* Changelog specific styling */
/* Navigation sidebar */
.changelog-nav {
padding: 1rem;
background: var(--bs-body-bg);
border-radius: 0.5rem;
border: 1px solid var(--bs-border-color);
}
.changelog-nav .changelog-nav-link {
padding: 0.15rem 0.5rem !important;
margin-bottom: 0.15rem !important;
border-radius: 0.25rem;
color: var(--bs-body-color);
text-decoration: none;
transition: all 0.15s ease-in-out;
font-size: 0.85rem !important;
line-height: 1.1 !important;
display: block;
}
.changelog-nav .changelog-nav-link span {
font-size: 0.7rem !important;
margin-top: 0;
line-height: 1 !important;
display: block;
opacity: 0.7;
}
.changelog-nav-link:hover {
background: var(--bs-tertiary-bg);
color: var(--bs-primary);
}
.changelog-nav-link:active,
.changelog-nav-link.active {
background: var(--bs-primary);
color: white;
}
/* Version cards */
.changelog-version-card {
border: 1px solid var(--bs-border-color);
box-shadow: 0 2px 4px rgba(0, 0, 0, 0.05);
transition: box-shadow 0.2s ease-in-out;
scroll-margin-top: 80px;
}
.changelog-version-card:hover {
box-shadow: 0 4px 8px rgba(0, 0, 0, 0.1);
}
.changelog-version-card .card-header {
padding: 1.25rem 1.5rem;
background: linear-gradient(135deg, var(--bs-primary) 0%, var(--bs-primary-dark, #0056b3) 100%);
}
.changelog-version-card .card-body {
padding: 1.5rem;
}
/* Summary section */
.changelog-summary {
padding: 1rem;
background: var(--bs-light);
border-left: 4px solid var(--bs-primary);
border-radius: 0.375rem;
}
[data-bs-theme="dark"] .changelog-summary {
background: var(--bs-dark);
}
.changelog-summary .lead {
margin-bottom: 0;
font-size: 1rem;
line-height: 1.6;
}
/* Section styling */
.changelog-section {
border-bottom: 1px solid var(--bs-border-color);
padding-bottom: 1.5rem;
}
.changelog-section:last-child {
border-bottom: none;
padding-bottom: 0;
}
/* Type badges */
.changelog-badge-feature {
background: linear-gradient(135deg, #28a745 0%, #20c997 100%);
color: white;
font-weight: 600;
padding: 0.4rem 0.8rem;
font-size: 0.875rem;
}
.changelog-badge-improvement {
background: linear-gradient(135deg, #17a2b8 0%, #20c997 100%);
color: white;
font-weight: 600;
padding: 0.4rem 0.8rem;
font-size: 0.875rem;
}
.changelog-badge-fixed {
background: linear-gradient(135deg, #dc3545 0%, #fd7e14 100%);
color: white;
font-weight: 600;
padding: 0.4rem 0.8rem;
font-size: 0.875rem;
}
.changelog-badge-added {
background: linear-gradient(135deg, #007bff 0%, #6610f2 100%);
color: white;
font-weight: 600;
padding: 0.4rem 0.8rem;
font-size: 0.875rem;
}
.changelog-badge-removed {
background: linear-gradient(135deg, #6c757d 0%, #495057 100%);
color: white;
font-weight: 600;
padding: 0.4rem 0.8rem;
font-size: 0.875rem;
}
.changelog-badge-changed {
background: linear-gradient(135deg, #ffc107 0%, #ff9800 100%);
color: #212529;
font-weight: 600;
padding: 0.4rem 0.8rem;
font-size: 0.875rem;
}
.changelog-badge-documentation {
background: linear-gradient(135deg, #6f42c1 0%, #e83e8c 100%);
color: white;
font-weight: 600;
padding: 0.4rem 0.8rem;
font-size: 0.875rem;
}
/* Subsection styling */
.changelog-subsection {
margin-left: 0.5rem;
}
.changelog-subsection h4 {
font-weight: 600;
margin-bottom: 0.5rem;
}
/* List styling */
.changelog-list {
list-style-type: none;
padding-left: 0;
margin-bottom: 0;
}
.changelog-list li {
padding: 0.4rem 0 0.4rem 1.75rem;
position: relative;
line-height: 1.6;
}
.changelog-list li::before {
content: "●";
position: absolute;
left: 0.5rem;
color: var(--bs-primary);
font-weight: bold;
}
.changelog-list li:hover {
background: var(--bs-tertiary-bg);
border-radius: 0.25rem;
}
/* Nested lists (indented items) */
.changelog-list li:has(+ li) {
margin-bottom: 0.25rem;
}
/* Responsive adjustments */
@media (max-width: 767.98px) {
.changelog-version-card .card-header {
padding: 1rem;
}
.changelog-version-card .card-body {
padding: 1rem;
}
.changelog-summary {
padding: 0.75rem;
}
.changelog-list li {
font-size: 0.95rem;
}
}
/* Smooth scrolling */
html {
scroll-behavior: smooth;
}


@@ -12,6 +12,7 @@
<link rel="stylesheet" href="{{ url_for('static', filename='css/layout.css') }}" />
<link rel="stylesheet" href="{{ url_for('static', filename='css/status-text.css') }}" />
<link rel="icon" type="image/x-icon" href="{{ url_for('static', filename='favicon.ico') }}" />
{% block head %}{% endblock %}
<script>
(function () {


@@ -1,32 +1,97 @@
{% extends 'layout/base.html' %}
{% block head %}
{{ super() }}
<link rel="stylesheet" href="{{ url_for('static', filename='css/changelog.css') }}" />
{% endblock %}
{% block content %}
<div class="d-flex align-items-center justify-content-between mb-3">
<div class="row">
<!-- Sidebar with version navigation -->
<div class="col-lg-3 col-md-4 d-none d-md-block">
<div class="changelog-nav sticky-top" style="top: 80px;">
<h6 class="text-body-secondary text-uppercase mb-3">Versions</h6>
<nav class="nav flex-column">
{% for version_data in changelog_versions %}
<a class="nav-link changelog-nav-link" href="#{{ version_data.version }}">
{{ version_data.version }}
<span class="text-body-secondary small d-block">{{ version_data.date }}</span>
</a>
{% endfor %}
</nav>
</div>
</div>
<!-- Main content -->
<div class="col-lg-9 col-md-8">
<div class="d-flex align-items-center justify-content-between mb-4">
<div>
<h1 class="h3 mb-1">Changelog</h1>
<div class="text-body-secondary">Loaded live from the repository.</div>
<div class="text-body-secondary">Release history and updates</div>
</div>
</div>
{% if changelog_versions %}
{% for version_data in changelog_versions %}
<div class="card changelog-version-card mb-4" id="{{ version_data.version }}">
<div class="card-header bg-primary text-white">
<div class="d-flex align-items-center justify-content-between">
<div>
<h2 class="h4 mb-0">{{ version_data.version }}</h2>
</div>
{% if changelog_source_url %}
<div class="text-end">
<a class="btn btn-sm btn-outline-secondary" href="{{ changelog_source_url }}" target="_blank" rel="noopener">
View source
</a>
<span class="badge bg-light text-dark">{{ version_data.date }}</span>
</div>
{% endif %}
</div>
{% if changelog_error %}
<div class="alert alert-warning" role="alert">
{{ changelog_error }}
</div>
{% endif %}
<div class="card">
<div class="card-body">
{% if changelog_html %}
<div class="markdown-content">{{ changelog_html | safe }}</div>
{% if version_data.summary %}
<div class="changelog-summary mb-4">
<p class="lead">{{ version_data.summary }}</p>
</div>
{% endif %}
{% for section in version_data.sections %}
<div class="changelog-section mb-4">
<h3 class="h5 mb-3">
{% if section.type %}
<span class="badge changelog-badge-{{ section.type }}">{{ section.title }}</span>
{% else %}
<div class="text-body-secondary">No changelog content available.</div>
{{ section.title }}
{% endif %}
</h3>
{% if section.subsections %}
{% for subsection in section.subsections %}
<div class="changelog-subsection mb-3">
{% if subsection.subtitle %}
<h4 class="h6 text-body-secondary mb-2">{{ subsection.subtitle }}</h4>
{% endif %}
{% if subsection.changes %}
<ul class="changelog-list">
{% for change in subsection.changes %}
<li>{{ change }}</li>
{% endfor %}
</ul>
{% endif %}
</div>
{% endfor %}
{% elif section.changes %}
<ul class="changelog-list">
{% for change in section.changes %}
<li>{{ change }}</li>
{% endfor %}
</ul>
{% endif %}
</div>
{% endfor %}
</div>
</div>
{% endfor %}
{% else %}
<div class="alert alert-info" role="alert">
No changelog entries available.
</div>
{% endif %}
</div>
</div>


@@ -370,6 +370,7 @@
if (atResults) {
clearResults();
}
if (atSearchInput) atSearchInput.value = '';
setSelectedCompanyId(null);
setMsg("", false);
@@ -379,13 +380,21 @@
var atStatus = btn.getAttribute("data-autotask-mapping-status") || "";
var atLast = btn.getAttribute("data-autotask-last-sync-at") || "";
renderCurrentMapping(atCompanyId, atCompanyName, atStatus, atLast);
// Auto-search for similar companies if not yet mapped
if (!atCompanyId && name && atSearchInput) {
atSearchInput.value = name;
performAutotaskSearch(name);
}
}
});
});
if (atSearchBtn && atSearchInput && atResults) {
atSearchBtn.addEventListener("click", async function () {
var q = (atSearchInput.value || "").trim();
// Reusable Autotask search function
async function performAutotaskSearch(query) {
if (!atResults) return;
var q = (query || "").trim();
if (!q) {
setMsg("Enter a search term.", true);
return;
@@ -439,6 +448,12 @@
atResults.innerHTML = "<div class=\"text-muted small\">No results.</div>";
setMsg(e && e.message ? e.message : "Search failed.", true);
}
}
if (atSearchBtn && atSearchInput && atResults) {
atSearchBtn.addEventListener("click", async function () {
var q = (atSearchInput.value || "").trim();
await performAutotaskSearch(q);
});
}


@@ -1134,6 +1134,7 @@ table.addEventListener('change', function (e) {
btnAutotaskLink.addEventListener('click', function () {
if (!currentRunId) { alert('Select a run first.'); return; }
if (atlStatus) atlStatus.textContent = '';
if (atlSearch) atlSearch.value = '';
renderAtlRows([]);
// Show the existing Run Checks popup first, then switch to the Autotask popup.
// This prevents the main popup from breaking due to stacked modal backdrops.
@@ -1191,7 +1192,7 @@ table.addEventListener('change', function (e) {
btnAutotaskResolveNote.addEventListener('click', function () {
if (!currentRunId) { alert('Select a run first.'); return; }
clearStatus();
if (!confirm('Add an update to the existing Autotask ticket that it should be resolved?\n\nThis will NOT close the ticket in Autotask.')) return;
if (!confirm('Add an update to the existing Autotask ticket that it should be resolved?\n\nThe ticket will be closed (status Complete) if there are no time entries.\nIf time entries exist, the ticket will remain open.')) return;
if (atStatus) atStatus.textContent = 'Posting update...';
btnAutotaskResolveNote.disabled = true;
apiJson('/api/run-checks/autotask-resolve-note', {

docs/autotask_rest_api.md (new file, 325 lines)

@@ -0,0 +1,325 @@
# Autotask REST API Postman Test Contract
## Reference Sources
Primary external reference used during testing:
- https://github.com/AutotaskDevelopment/REST-Postman/blob/main/2020.10.29%20-%20Autotask%20Collecction%20-%20Must%20fill%20in%20variables%20on%20collection.postman_collection.json
This document combines:
- Empirically validated Postman test results
- Official Autotask Postman collection references (where applicable)
---
## Purpose
This document captures **validated Autotask REST API behaviour** based on hands-on Postman testing.
This is an **authoritative test contract**:
- General-purpose
- Product-agnostic
- Based on proven results, not assumptions
If implementation deviates from this document, **the document is correct and the code is wrong**.
---
## 0. Base URLs
### Sandbox
https://webservices19.autotask.net/ATServicesRest/V1.0
### Production
https://webservices19.autotask.net/ATServicesRest/V1.0
Notes:
- `ATServicesRest` is case-sensitive
- Other casing variants are invalid
---
## 0.1 Global Invariants (Do Not Violate)
- TicketID (numeric) is the only authoritative identifier for single-ticket operations
- TicketNumber is display-only and must be resolved to TicketID first
- PATCH is not supported for Tickets
- PUT /Tickets is always a **full update**, never partial
- Never guess fields or values
---
## 1. Ticket Lookup
### Resolve TicketNumber → TicketID
Endpoint:
POST /Tickets/query
Filter:
- field: ticketNumber
- op: eq
- value: <TicketNumber>
Result:
- items[0].id → TicketID
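The lookup above can be sketched as a pair of small helpers. The payload shape follows the filter fields listed here; the function names and the decoding of the HTTP response into a dict are illustrative assumptions, not part of the tested contract:

```python
def build_ticket_number_query(ticket_number: str) -> dict:
    """Payload for POST /Tickets/query that resolves a TicketNumber."""
    return {
        "filter": [
            {"field": "ticketNumber", "op": "eq", "value": ticket_number},
        ]
    }

def extract_ticket_id(response_json: dict):
    """Return items[0].id from the query response, or None if no match."""
    items = response_json.get("items") or []
    return items[0]["id"] if items else None
```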
---
## 2. Authoritative Ticket Retrieval
Endpoint:
GET /Tickets/{TicketID}
Always required before any update.
### 2.1 Response Envelope (Critical)
Validated response shape for single-ticket retrieval:
- The ticket object is returned under the `item` wrapper
Example shape:
- `issueType` is at `item.issueType`
- `subIssueType` is at `item.subIssueType`
- `source` is at `item.source`
Implementation rule:
- Always read stabilising fields from `item.*` in the GET response.
- Do **not** read these fields from the response root.
Note:
- PUT payloads are **not** wrapped in `item`. They use the plain ticket object fields at the request body root.
Commonly required stabilising fields:
- id
- ticketNumber
- companyID
- queueID
- title
- priority
- status
- dueDateTime
- ticketCategory
- issueType
- subIssueType
- source
- organizationalLevelAssociationID
- completedDate
- resolvedDateTime
- lastTrackedModificationDateTime
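A minimal sketch of the item-wrapper rule, assuming the GET response has already been decoded to a dict; the helper name is illustrative:

```python
STABILISING_FIELDS = (
    "id", "ticketNumber", "companyID", "queueID", "title", "priority",
    "status", "dueDateTime", "ticketCategory", "issueType",
    "subIssueType", "source", "organizationalLevelAssociationID",
    "completedDate", "resolvedDateTime", "lastTrackedModificationDateTime",
)

def read_stabilising_fields(get_response: dict) -> dict:
    # Single-ticket GET wraps the ticket in "item"; read from there,
    # never from the response root. Exact values (including None) are kept.
    item = get_response["item"]
    return {field: item.get(field) for field in STABILISING_FIELDS}
```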
---
## 3. Ticket Status Picklist (ID → Label)
Endpoint:
GET /Tickets/entityInformation/fields
Validated behaviour:
- `status` is an integer picklist
- Picklist values (ID → label) are returned inline
- Status IDs and semantics are tenant-dependent
Do not assume lifecycle meaning based on label alone.
---
## 4. Ticket Update Behaviour
### PATCH
- Not supported
- Returns error indicating unsupported HTTP method
### PUT /Tickets
Validated behaviour:
- Full update required
- Missing fields cause hard failures
- Partial updates are rejected
Implementation rule:
- Always copy required fields from a fresh GET
- Change only the intended field(s)
---
## 5. Status Semantics (Validated Example)
Observed in test tenant:
- Status = 8 (label: "Completed")
  - Status updates
  - completedDate = null
  - resolvedDateTime = null
- Status = 5 (label: "Complete")
  - Status updates
  - completedDate populated
  - resolvedDateTime populated
Conclusion:
- Resolution timestamps depend on status ID, not label
- Validate per tenant before relying on timestamps
---
## 6. Time Entry Existence Check (Decisive)
Entity:
TimeEntries
Endpoint:
POST /TimeEntries/query
Filter:
- field: ticketID
- op: eq
- value: <TicketID>
Decision:
- count = 0 → no time entries
- count > 0 → time entries exist
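The query filter and the decision rule can be sketched as follows; the function names are illustrative and the HTTP call itself is omitted:

```python
def build_time_entry_query(ticket_id: int) -> dict:
    """Payload for POST /TimeEntries/query filtered on one ticket."""
    return {"filter": [{"field": "ticketID", "op": "eq", "value": ticket_id}]}

def has_time_entries(query_response: dict) -> bool:
    # count = 0 -> no time entries; count > 0 -> time entries exist.
    return len(query_response.get("items") or []) > 0
```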
---
## 7. Ticket Notes via REST: Capability vs Endpoint Reality
Although entityInformation reports TicketNote as creatable, **entity-level create does not work**.
### Non-working endpoints
- POST /TicketNotes → 404
- POST /TicketNote → 404
### Working endpoint (only supported method)
POST /Tickets/{TicketID}/Notes
Required fields:
- Title
- Description
- NoteType
- Publish
Result:
- Note is created and immediately visible
Query endpoint works:
- POST /TicketNotes/query
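A sketch of the working note-creation shape. The four required fields come from the contract above; concrete NoteType and Publish values are tenant-dependent, so they are passed in rather than assumed, and the base URL in the usage is a placeholder:

```python
def build_ticket_note(title: str, description: str,
                      note_type: int, publish: int) -> dict:
    # Required fields per this section; NoteType/Publish values vary
    # per tenant and must be supplied by the caller.
    return {
        "Title": title,
        "Description": description,
        "NoteType": note_type,
        "Publish": publish,
    }

def ticket_notes_url(base_url: str, ticket_id: int) -> str:
    # Only POST /Tickets/{TicketID}/Notes works; entity-level create 404s.
    return f"{base_url}/Tickets/{ticket_id}/Notes"
```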
---
## 8. Resolution Field Update (Validated Test)
### Test scope
This section documents the **explicit Postman tests** performed to validate how the `resolution` field can be updated safely.
### Field characteristics (validated)
- name: resolution
- dataType: string
- max length: 32000
- isReadOnly: false
### Critical constraint (proven)
Because **PATCH is not supported** and **PUT /Tickets is a full update**, the resolution field **cannot** be updated in isolation.
Sending an incomplete payload results in unintended changes to:
- classification
- routing
- status
- organizational structure
### Validated update pattern
1. Retrieve current ticket state
   - GET /Tickets/{TicketID}
   - Read fields from `item.*` (see section 2.1)
2. Construct PUT payload by copying current values of stabilising fields
   (explicitly validated in tests):
   - id
   - companyID
   - queueID
   - title
   - priority
   - status
   - dueDateTime
   - ticketCategory
   - issueType
   - subIssueType
   - source
   - organizationalLevelAssociationID
3. Change **only** the `resolution` field
4. Execute update
   - PUT /Tickets
### Test result
- Resolution text becomes visible in the Autotask UI
- No unintended changes occur
This behaviour was reproduced consistently and is considered authoritative.
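The validated pattern above can be sketched as a payload builder; the helper name is illustrative, and the field list mirrors the stabilising fields enumerated in this section:

```python
STABILISING_FIELDS = (
    "id", "companyID", "queueID", "title", "priority", "status",
    "dueDateTime", "ticketCategory", "issueType", "subIssueType",
    "source", "organizationalLevelAssociationID",
)

def build_resolution_update(get_response: dict, resolution: str) -> dict:
    # Copy the stabilising fields verbatim from the GET "item" wrapper
    # (including None values), then change only the resolution field.
    payload = {f: get_response["item"].get(f) for f in STABILISING_FIELDS}
    payload["resolution"] = resolution
    return payload
```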
---
## 9. Ticket Resolution Workflow (Validated Tests)
This section captures the **end-to-end resolution tests** performed via Postman.
### Test 1: Resolution without status change
Steps:
1. GET /Tickets/{TicketID}
2. POST /Tickets/{TicketID}/Notes
3. PUT /Tickets (update `resolution` only)
Result:
- Resolution text is updated
- Ticket status remains unchanged
- completedDate and resolvedDateTime remain null
Conclusion:
- Resolution text alone does **not** resolve a ticket
---
### Test 2: Conditional resolution based on time entries
Steps:
1. GET /Tickets/{TicketID}
2. POST /Tickets/{TicketID}/Notes
3. PUT /Tickets (update `resolution`)
4. POST /TimeEntries/query (filter by ticketID)
Decision logic (validated):
- If **no time entries exist**:
  - PUT /Tickets with status = 5
  - completedDate is set
  - resolvedDateTime is set
- If **time entries exist**:
  - Status is NOT changed
  - Ticket remains open in Autotask
### Key conclusions
- Notes, resolution, and status are independent operations
- Status 5 is the only validated status that sets resolution timestamps
- Status changes must always be explicit and conditional
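The decision logic from Test 2 reduces to one explicit, conditional rule; status ID 5 is the value validated in the test tenant and must be re-validated per tenant:

```python
COMPLETE_STATUS = 5  # validated in the test tenant; status IDs are tenant-dependent

def decide_new_status(current_status: int, time_entry_count: int) -> int:
    # Close (status 5) only when no time entries exist;
    # otherwise keep the current status so the ticket remains open.
    return COMPLETE_STATUS if time_entry_count == 0 else current_status
```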
---
## Non-Negotiable Implementation Rules
- Always GET before PUT
- Never guess stabilising fields
- Never use PATCH
- Never change status implicitly
- Notes and resolution must be explicit
---
## 10. API Contract Summary
Hard rules for code:
- Always resolve TicketNumber → TicketID via POST /Tickets/query
- Always GET /Tickets/{TicketID} before any update
- Never attempt PATCH for Ticket updates
- Use PUT /Tickets with a full, stabilised payload
- Validate per tenant which status values set completedDate/resolvedDateTime
- Check time entries via POST /TimeEntries/query when status decisions depend on them
- Create ticket notes only via /Tickets/{TicketID}/Notes
If code deviates from this document, **the document is correct and the code is wrong**.


@@ -2,6 +2,111 @@
This file documents all changes made to this project via Claude Code.
## [2026-02-05]
### Added
- Redesigned changelog system to use Python-based structure instead of Markdown:
- Created `app/changelog.py` with structured changelog data (21 versions from v0.1.22 to v0.1.2)
- Each version contains: version number, date, summary, and structured sections
- Sections include: title, type (feature/improvement/fixed/documentation), and list of changes
- Removed Gitea dependency for changelog rendering - now fully self-contained
- No external dependencies, faster loading, always available
- New changelog.html template with modern design:
- Sidebar navigation with all versions for quick jumping between releases
- Sticky sidebar that remains visible during scrolling
- Bootstrap cards for each version with gradient blue headers
- Color-coded type badges for sections:
- Green gradient: Feature
- Blue gradient: Improvement
- Red gradient: Fixed
- Purple gradient: Documentation
- Responsive design (sidebar hidden on mobile devices)
- Summary section with blue left border highlight
- Click-to-expand sections with smooth animations
- Created `static/css/changelog.css` with comprehensive styling:
- Modern gradients for badges and headers
- Dark mode support via CSS variables
- Hover effects on navigation links and list items
- Smooth scrolling to version anchors
- Compact spacing optimizations (reduced padding, margins, font sizes)
- CSS specificity enhancements with !important flags to override Bootstrap defaults
- Added `{% block head %}` to base.html template to allow pages to inject custom CSS
### Changed
- Updated `routes_changelog.py` to load data from `changelog.py` instead of fetching from Gitea
- Simplified changelog route - removed markdown parsing and external HTTP requests
- Removed dependency on `markdown` library for changelog rendering
- Template now receives structured Python data instead of HTML string
### Fixed
- Fixed module import path in routes_changelog.py (changed from `from app.changelog` to `from ..changelog`)
- Fixed dictionary key conflict - renamed `items` to `changes` to avoid collision with dict.items() method
- Added missing `{% block head %}` in base.html that prevented custom CSS from loading
### Technical Details
- Changelog data structure uses dictionaries with keys: version, date, summary, sections
- Sections contain: title, type, subsections (optional), changes
- Subsections contain: subtitle, changes
- All list items use "changes" key instead of "items" to avoid Python reserved method conflicts
- CSS uses !important flags and increased specificity (.changelog-nav .changelog-nav-link) to override Bootstrap
- Compact spacing achieved with: 0.15rem padding, 0.15rem margins, 0.85rem/0.7rem font sizes, 1.1/1.0 line heights
### Added
- Autotask customer mapping now auto-searches for similar company names when opening unmapped customers:
- Automatically populates search box with customer name
- Displays matching Autotask companies as suggestions
- Speeds up mapping process by eliminating manual search for most customers
- Autotask "Link existing ticket" now supports cross-company ticket search:
- Added `query_tickets_by_number()` to search tickets by number across all companies
- When searching with a ticket number (e.g., "T20260205.0001"), results include:
- Tickets from the customer's company (primary results)
- Matching tickets from other companies (for overarching issues)
- Enables linking tickets for multi-company infrastructure issues
### Changed
- Autotask resolve confirmation and note messages now correctly indicate ticket closure status:
- Frontend confirmation dialog explains conditional closure based on time entries
- Backend route checks time entries before creating note and generates dynamic message:
- "ticket will be closed in Autotask" when no time entries exist
- "ticket remains open in Autotask due to existing time entries" when time entries exist
- Route docstring updated to reflect conditional status update behaviour
### Added
- Autotask conditional ticket status update based on time entries (API contract section 9):
- `query_time_entries_by_ticket_id()` - Query time entries for a ticket via POST /TimeEntries/query
- `update_ticket_resolution_safe()` now checks for time entries and conditionally sets status:
- If NO time entries exist: sets status to 5 (Complete) with completedDate and resolvedDateTime
- If time entries exist: keeps current status unchanged (ticket remains open)
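The conditional-close decision above can be sketched as a pure function. The time entries would come from POST /TimeEntries/query in the real client; here they are passed in as a plain list:

```python
from datetime import datetime, timezone

def build_resolution_payload(ticket, time_entries):
    """Return the fields to PUT back to Autotask. Status 5 (Complete),
    completedDate and resolvedDateTime are only set when the ticket has
    no time entries; otherwise the current status is kept unchanged."""
    payload = {"id": ticket["id"], "status": ticket["status"]}
    if not time_entries:
        now = datetime.now(timezone.utc).isoformat()
        payload.update(status=5, completedDate=now, resolvedDateTime=now)
    return payload
```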
### Fixed
- Automatic mail import can now be disabled in Settings after being enabled (fixed unchecked checkbox not being processed)
- Autotask "Link existing" search box now clears when opening the modal instead of retaining previous search text
- Autotask customer mapping search box now clears when opening the edit modal instead of retaining previous search text
- Autotask ticket resolution update now correctly preserves exact field values from GET response in PUT payload.
The `issueType`, `subIssueType`, and `source` fields are copied with their exact values (including null)
from the GET response, as required by Autotask API. Previously these fields were being skipped or modified.
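The field-preservation rule can be sketched like this, assuming the GET and PUT payloads are plain dicts:

```python
# Autotask requires these fields to be echoed back exactly as returned
# by GET, including null (None) values; the keys must never be skipped.
PRESERVE_FIELDS = ("issueType", "subIssueType", "source")

def copy_preserved_fields(get_response, put_payload):
    for field in PRESERVE_FIELDS:
        put_payload[field] = get_response.get(field)
    return put_payload
```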
### Added
- Restored Autotask PSA integration from branch `v20260203-13-autotask-resolution-item-wrapper`:
- `integrations/autotask/client.py` - Autotask REST API client with full support for:
- Zone information discovery
- Ticket CRUD operations (create, get, update)
- Ticket notes via `/Tickets/{id}/Notes` endpoint
- Safe resolution updates preserving stabilizing fields
- Query support for companies, tickets, time entries, deleted ticket logs
- Reference data retrieval (queues, ticket sources, priorities, statuses)
- `ticketing_utils.py` - Utilities for internal ticket management and Autotask linkage
- Database migrations for Autotask fields:
- `SystemSettings`: Autotask connection settings, defaults, and cached reference data
- `Customer`: Autotask company mapping fields
- `JobRun`: Autotask ticket linkage and deletion tracking fields
- `Ticket`: Resolution origin tracking
- Settings UI for Autotask configuration (connection test, reference data sync)
- Run Checks integration for Autotask ticket creation
- Customers page with Autotask company mapping
- Documentation files for Autotask integration design and implementation
- Added `docs/autotask_rest_api.md` - Validated Autotask REST API contract based on Postman testing
## [2026-02-04]
### Added
## v20260115-01-autotask-settings
### Changes:
- Added initial Autotask integration settings structure to Backupchecks.
- Introduced new system settings for Autotask configuration, including an enable toggle, environment selection, credentials, tracking identifier, and Backupchecks base URL.
- Prepared data model and persistence layer to store Autotask-related configuration.
- Laid groundwork for future validation and integration logic without enabling ticket creation or customer mapping.
- Ensured changes are limited to configuration foundations only, keeping Phase 1 scope intact.
## v20260115-02-autotask-settings-migration-fix
### Changes:
- Fixed Autotask system settings migration so it is always executed during application startup.
- Added safe, idempotent column existence checks to prevent startup failures on re-deployments.
- Ensured all Autotask-related system_settings columns are created before being queried.
- Prevented aborted database transactions caused by missing columns during settings initialization.
- Improved overall stability of the Settings page when Autotask integration is enabled.
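An idempotent column-existence check of the kind described above can be sketched with SQLite (the application uses its own database layer; table and column names here are examples):

```python
import sqlite3

def ensure_column(conn, table, column, ddl_type):
    """Add a column only if it does not exist yet, so the migration is
    safe to re-run on every application startup."""
    cols = [row[1] for row in conn.execute(f"PRAGMA table_info({table})")]
    if column not in cols:
        conn.execute(f"ALTER TABLE {table} ADD COLUMN {column} {ddl_type}")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE system_settings (id INTEGER PRIMARY KEY)")
ensure_column(conn, "system_settings", "autotask_enabled", "INTEGER")
ensure_column(conn, "system_settings", "autotask_enabled", "INTEGER")  # no-op
```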
## v20260115-03-autotask-settings-ui
### Changes:
- Added visible Autotask configuration section under Settings → Integrations.
- Implemented form fields for enabling Autotask integration, environment selection, API credentials, tracking identifier, and Backupchecks base URL.
- Wired Autotask settings to SystemSettings for loading and saving configuration values.
- Added Diagnostics & Reference Data section with actions for testing the Autotask connection and refreshing reference data.
- Kept all functionality strictly within Phase 1 scope without introducing ticket or customer logic.
## v20260115-04-autotask-reference-data-fix
### Changes:
- Fixed Autotask API client to use correct endpoints for reference data instead of invalid `/query` routes.
- Implemented proper retrieval of Autotask Queues and Ticket Sources via collection endpoints.
- Added dynamic retrieval of Autotask Priorities using ticket entity metadata and picklist values.
- Cached queues, ticket sources, and priorities in system settings for safe reuse in the UI.
- Updated Autotask settings UI to use dropdowns backed by live Autotask reference data.
- Improved “Test connection” to validate authentication and reference data access reliably.
- Fixed admin event logging to prevent secondary exceptions during error handling.
## v20260115-05-autotask-queues-picklist-fix
### Changes:
- Reworked Autotask reference data retrieval to use Ticket entity picklists instead of non-existent top-level resources.
- Retrieved Queues via the Tickets.queueID picklist to ensure compatibility with all Autotask tenants.
- Retrieved Ticket Sources via the Tickets.source picklist instead of a direct collection endpoint.
- Kept Priority retrieval fully dynamic using the Tickets.priority picklist.
- Normalized picklist values so IDs and display labels are handled consistently in settings dropdowns.
- Fixed Autotask connection test to rely on picklist availability, preventing false 404 errors.
## v20260115-06-autotask-auth-fallback
### Changes:
- Improved Autotask authentication handling to support sandbox-specific behavior.
- Implemented automatic fallback authentication flow when initial Basic Auth returns HTTP 401.
- Added support for header-based authentication using UserName and Secret headers alongside the Integration Code.
- Extended authentication error diagnostics to include selected environment and resolved Autotask zone information.
- Increased reliability of Autotask connection testing across different tenants and sandbox configurations.
## v20260115-07-autotask-picklist-field-detect
### Changes:
- Improved detection of Autotask Ticket entity picklist fields to handle tenant-specific field naming.
- Added fallback matching logic based on field name and display label for picklist fields.
- Fixed queue picklist resolution when fields are not named exactly `queue` or `queueid`.
- Applied the same robust detection logic to ticket priority picklist retrieval.
- Prevented connection test failures caused by missing or differently named metadata fields.
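The tolerant field matching can be sketched as below; the field metadata shape (`name`, `label`, `isPickList`) is an assumption based on the entityInformation responses described in these notes:

```python
# Match a picklist field by name OR display label, case-insensitively,
# so tenants that name the queue field "queue", "queueID", or only
# expose a "Queue" label are all handled.
def find_picklist_field(fields, candidates):
    wanted = {c.lower() for c in candidates}
    for f in fields:
        if not f.get("isPickList"):
            continue
        name = (f.get("name") or "").lower()
        label = (f.get("label") or "").lower()
        if name in wanted or label in wanted:
            return f
    return None
```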
## v20260115-08-autotask-entityinfo-fields-shape-fix
### Changes:
- Fixed parsing of Autotask entityInformation responses to correctly read field metadata from the `fields` attribute.
- Extended metadata normalization to support different response shapes returned by Autotask.
- Improved picklist value handling to support both inline picklist values and URL-based retrieval.
- Resolved failures in queue, source, and priority picklist detection caused by empty or misparsed field metadata.
- Stabilized Autotask connection testing across sandbox environments with differing metadata formats.
## v20260115-09-autotask-customer-company-mapping
- Added explicit Autotask company mapping to customers using ID-based linkage.
- Extended customer data model with Autotask company ID, cached company name, mapping status, and last sync timestamp.
- Implemented Autotask company search and lookup endpoints for customer mapping.
- Added mapping status handling to detect renamed, missing, or invalid Autotask companies.
- Updated Customers UI to allow searching, selecting, refreshing, and clearing Autotask company mappings.
- Ensured mappings remain stable when Autotask company names change and block future ticket actions when mappings are invalid.
## v20260115-10-autotask-customers-settings-helper-fix
- Fixed /customers crash caused by missing _get_or_create_settings by removing reliance on shared star-imported helpers.
- Added a local SystemSettings get-or-create helper in customers routes to prevent runtime NameError in mixed/partial deployments.
- Added explicit imports for SystemSettings, db, and datetime to keep the Customers page stable across versions.
## v20260115-11-autotask-companyname-unwrap
- Fixed Autotask company name being shown as "Unknown" by correctly unwrapping nested Autotask API responses.
- Improved company lookup handling to support different response shapes (single item and collection wrappers).
- Ensured the cached Autotask company name is stored and displayed consistently after mapping and refresh.
## v20260115-12-autotask-customers-refreshall-mappings
- Added a “Refresh all Autotask mappings” button on the Customers page to validate all mapped customers in one action.
- Implemented a new backend endpoint to refresh mapping status for all customers with an Autotask Company ID and return a status summary (ok/renamed/missing/invalid).
- Updated the Customers UI to call the refresh-all endpoint, show a short result summary, and reload to reflect updated mapping states.
## v20260115-14-autotask-runchecks-ticket-migration-fix
- Fixed missing database helper used by the Autotask ticket fields migration for job runs.
- Corrected the job_runs migration to ensure Autotask ticket columns are created reliably and committed properly.
- Resolved Run Checks errors caused by incomplete database migrations after introducing Autotask ticket support.
## v20260115-15-autotask-default-ticket-status-setting
- Added “Default Ticket Status” dropdown to Autotask settings (Ticket defaults).
- Implemented retrieval and caching of Autotask ticket statuses as reference data for dropdown usage.
- Extended reference data refresh to include Ticket Statuses and updated diagnostics counters accordingly.
- Added database column for cached ticket statuses and included it in migrations for existing installations.
## v20260115-16-autotask-ticket-create-response-fix
- Fixed Autotask ticket creation handling for tenants that return a lightweight or empty POST /Tickets response.
- Added support for extracting the created ticket ID from itemId/id fields and from the Location header.
- Added a follow-up GET /Tickets/{id} to always retrieve the full created ticket object (ensuring ticketNumber/id are available).
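The id-extraction chain for lightweight create responses can be sketched as below. Field names follow the changelog (`itemId`, `id`, the `Location` header); the exact header parsing is an assumption:

```python
# Try body fields first, then the Location header; return None so the
# caller can fall back to a Tickets/query lookup on TrackingIdentifier.
def extract_ticket_id(body, headers):
    for key in ("itemId", "id"):
        if body.get(key) is not None:
            return int(body[key])
    location = headers.get("Location", "")
    if location:
        tail = location.rstrip("/").rsplit("/", 1)[-1]
        if tail.isdigit():
            return int(tail)
    return None
```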
## v20260115-17-autotask-ticket-create-trackingid-lookup
- Reworked Autotask ticket creation flow to no longer rely on POST /Tickets response data for returning an ID.
- Added deterministic fallback lookup using Tickets/query filtered by TrackingIdentifier (and CompanyID when available).
- Ensured the created ticket is reliably retrieved via follow-up GET /Tickets/{id} so ticketNumber/id can always be stored.
- Eliminated false-negative ticket creation errors when Autotask returns an empty body and no Location header.
## v20260115-19-autotask-ticket-create-debug-logging
- Added optional verbose Autotask ticket creation logging (guarded by BACKUPCHECKS_AUTOTASK_DEBUG=1).
- Introduced per-request correlation IDs and included them in ticket creation error messages for log tracing.
- Logged POST /Tickets response characteristics (status, headers, body preview) to diagnose tenants returning incomplete create responses.
- Logged fallback Tickets/query lookup payload and result shape to pinpoint why deterministic lookup fails.
## v20260116-01-autotask-ticket-id-normalization
### Changes:
- Normalized Autotask GET /Tickets/{id} API responses by unwrapping the returned "item" object.
- Ensured the ticket data is returned as a flat object so existing logic can reliably read the ticket id.
- Enabled correct retrieval of the Autotask ticketNumber via a follow-up GET after ticket creation.
- Prevented false error messages where ticket creation succeeded but no ticket id was detected.
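The unwrapping logic can be sketched as a small normaliser that accepts both the single-item wrapper and a collection wrapper:

```python
# Return a flat ticket dict whether the Autotask response wraps it in
# "item", wraps it in an "items" collection, or returns it flat.
def unwrap_ticket(response):
    if "item" in response:
        return response["item"]
    if response.get("items"):
        return response["items"][0]
    return response
```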
## v20260116-02-runchecks-autotask-create-refresh
### Changes:
- Fixed a JavaScript error in the Run Checks view where a non-existent renderModal() function was called after creating an Autotask ticket.
- Replaced the renderModal() call with renderRun() to properly refresh the Run Checks modal state.
- Ensured the Autotask ticket status is updated in the UI without throwing a frontend error.
## v20260116-03-autotask-ticket-linking-visibility
### Changes:
- Ensured Autotask tickets created via Run Checks are stored as internal Ticket records instead of only external references.
- Linked created Autotask tickets to the corresponding Job Run so they appear in Tickets/Remarks.
- Added proper ticket association to Job Details, matching the behaviour of manually entered tickets.
- Updated the Run Checks view to show the ticket indicator when an Autotask ticket is linked to a run.
## v20260116-04-runchecks-autotask-ticket-polling
### Changes:
- Added read-only Autotask ticket polling triggered on Run Checks page load
- Introduced backend endpoint to poll only relevant active Autotask tickets linked to visible runs
- Implemented ticket ID deduplication to minimize Autotask API calls
- Ensured polling is best-effort and does not block Run Checks rendering
- Added client support for bulk ticket queries with per-ticket fallback
- Updated Run Checks UI to display polled PSA ticket status without modifying run state
- Explicitly prevented any ticket mutation, resolution, or Backupchecks state changes
## v20260116-05-autotask-ticket-create-link-all-open-runs
### Changes:
- Fixed Autotask ticket creation to link the newly created ticket to all relevant open runs of the same job
- Aligned automatic ticket creation behaviour with existing manual ticket linking logic
- Ensured ticket linkage is applied consistently across runs until the ticket is resolved
- Prevented Phase 2.1 polling from being blocked by incomplete ticket-run associations
- No changes made to polling logic, resolution logic, or PSA state interpretation
## v20260116-06-runchecks-polling-merge-fix
### Changes:
- Restored Phase 2.1 read-only Autotask polling logic after ticket-creation fix overwrote Run Checks routes
- Merged polling endpoint and UI polling trigger with updated ticket-linking behaviour
- Ensured polled PSA ticket status is available again on the Run Checks page
- No changes made to ticket creation logic, resolution handling, or Backupchecks run state
## v20260116-07-autotask-ticket-link-all-runs-ticketjobrun-fix
### Changes:
- Fixed Autotask ticket creation linking so the internal TicketJobRun associations are created for all relevant open runs of the same job
- Ensured ticket numbers and ticket presence are consistently visible per run (Run Checks and Job Details), not only for the selected run
- Made the list of runs to link deterministic by collecting run IDs first, then applying both run field updates and internal ticket linking across that stable set
- No changes made to polling logic or PSA status interpretation
## v20260116-08-autotask-ticket-backfill-ticketjobrun
- Fixed inconsistent ticket linking when creating Autotask tickets from the Run Checks page.
- Ensured that newly created Autotask tickets are linked to all related job runs, not only the selected run.
- Backfilled ticket-to-run associations so tickets appear correctly in the Tickets overview.
- Corrected Job Details visibility so open runs linked to the same ticket now display the ticket number consistently.
- Aligned Run Checks, Tickets, and Job Details views to use the same ticket-jobrun linkage logic.
## v20260116-09-autotask-ticket-propagate-active-runs
- Updated ticket propagation logic so Autotask tickets are linked to all active job runs (non-Reviewed) visible on the Run Checks page.
- Ensured ticket remarks and ticket-jobrun entries are created for each active run, not only the initially selected run.
- Implemented automatic ticket inheritance for newly incoming runs of the same job while the ticket remains unresolved.
- Stopped ticket propagation once the ticket or job is marked as Resolved to prevent incorrect linking to closed incidents.
- Aligned Run Checks, Tickets overview, and Job Details to consistently reflect ticket presence across all active runs.
## v20260116-10-autotask-ticket-sync-internal-ticketjobrun
- Aligned Autotask ticket creation with the legacy manual ticket workflow by creating or updating an internal Ticket record using the Autotask ticket number.
- Ensured a one-to-one mapping between Autotask tickets and internal Backupchecks tickets.
- Linked the internal Ticket to all active (non-Reviewed) job runs by creating or backfilling TicketJobRun relations.
- Restored visibility of Autotask-created tickets in Tickets, Tickets/Remarks, and Job Details pages.
- Implemented idempotent behavior so repeated ticket creation or re-polling does not create duplicate tickets or links.
- Prepared the ticket model for future scenarios where Autotask integration can be disabled and tickets can be managed manually again.
## v20260116-11-autotask-ticket-sync-legacy
- Restored legacy internal ticket workflow for Autotask-created tickets by ensuring internal Ticket records are created when missing.
- Implemented automatic creation and linking of TicketJobRun records for all active job_runs (reviewed_at IS NULL) that already contain Autotask ticket data.
- Ensured 1:1 mapping between an Autotask ticket and a single internal Ticket, identical to manual ticket behavior.
- Added inheritance logic so newly created job_runs automatically link to an existing open internal Ticket until it is resolved.
- Aligned Autotask ticket creation and polling paths with the legacy manual ticket creation flow, without changing any UI behavior.
- Ensured solution works consistently with Autotask integration enabled or disabled by relying exclusively on internal Ticket and TicketJobRun structures.
## v20260119-04-autotask-ticket-registration
### Changes:
- Implemented reliable Autotask ticket number retrieval by enforcing a post-create GET on the created ticket, avoiding incomplete create responses.
- Added automatic creation or reuse of an internal Ticket based on the Autotask ticket number to preserve legacy ticket behavior.
- Ensured idempotent linking of the internal Ticket to all open job runs (reviewed_at IS NULL) for the same job, matching manual ticket functionality.
- Propagated Autotask ticket references (autotask_ticket_id and autotask_ticket_number) to all related open runs when a ticket is created.
- Added repair/propagation logic so runs that already have an Autotask ticket ID but lack internal linking are corrected automatically.
- Guaranteed that future runs for the same job inherit the existing Autotask and internal ticket associations.
## v20260119-05-autotask-create-itemid
### Changes:
- Updated Autotask ticket creation handling to treat a POST response containing only {"itemId": <id>} as a successful ticket creation.
- Normalized the create response so the returned itemId is mapped internally to a ticket id, ensuring the existing follow-up GET /Tickets/{id} flow is always executed.
- Fixed erroneous failure condition where ticket creation was rejected because Autotask did not return a full ticket object.
- Restored compatibility with Autotask's documented behavior for ticket creation responses.
## v20260119-06-runchecks-renderRun-fix
### Changes:
- Fixed JavaScript error in Run Checks where a non-existent renderModal() function was called after creating an Autotask ticket.
- Replaced the invalid renderModal() call with renderRun() to correctly refresh the run state and UI.
- Prevented UI failure after successful Autotask ticket creation while preserving backend behavior.
## v20260119-07-autotask-propagate-ticket-to-all-runs
### Changes:
- Fixed ticket propagation logic so Autotask ticket numbers are applied to all open runs (reviewed_at IS NULL), not only the most recent run.
- Ensured runs that already had an autotask_ticket_id but were missing the autotask_ticket_number are now correctly updated.
- Restored legacy behavior where all active runs for the same job consistently display the linked ticket in Tickets, Tickets/Remarks, and Job Details.
- Prevented partial ticket linkage that caused only the newest run to show the ticket number.
## v20260119-08-autotask-disable-toggle-persist
### Changes:
- Fixed persistence of the “Enable Autotask integration” setting so disabling the integration is correctly saved.
- Updated form handling to explicitly set the Autotask enabled flag when the checkbox is unchecked, instead of implicitly keeping the previous value.
- Prevented the Autotask integration from being automatically re-enabled after saving settings.
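The underlying HTML quirk is that unchecked checkboxes are simply absent from the POST body, so the flag must be derived from the key's presence rather than read only when it exists. A minimal sketch (`form` stands in for Flask's `request.form`):

```python
# Explicitly resolve the enabled flag on every save: checked boxes
# submit "on"; an absent key means the box was unchecked.
def parse_autotask_enabled(form):
    return form.get("autotask_enabled") == "on"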
## v20260119-09-autotask-disabled-legacy-ticket-ui
### Changes:
- Restored the legacy manual ticket registration UI when the Autotask integration is disabled.
- Updated Run Checks to switch the ticket creation interface based solely on the autotask_enabled setting.
- Hidden the Autotask ticket creation section entirely when the integration is turned off.
- Re-enabled the original legacy ticket creation flow to allow correct Ticket and TicketJobRun linking without Autotask.
## v20260119-10-runchecks-renderRun-alias
### Changes:
- Fixed remaining JavaScript references to the non-existent renderModal() function in the Run Checks flow.
- Ensured consistent use of renderRun() when toggling the Autotask integration on and off.
- Prevented UI errors when re-enabling the Autotask integration after it was disabled.
## v20260119-03-autotask-ticket-state-sync
### Changes:
- Implemented Phase 2: read-only PSA-driven ticket state synchronisation.
- Added targeted polling on Run Checks load for runs with an Autotask Ticket ID and no reviewed_at timestamp.
- Introduced authoritative fallback logic using GET Tickets/{TicketID} when tickets are missing from active list queries.
- Mapped Autotask status ID 5 (Completed) to automatic resolution of all linked active runs.
- Marked resolved runs explicitly as "Resolved by PSA" without modifying Autotask data.
- Ensured multi-run consistency: one Autotask ticket correctly resolves all associated active job runs.
- Preserved internal Ticket and TicketJobRun integrity to maintain legacy Tickets, Remarks, and Job Details behaviour.
## v20260119-04-autotask-psa-resolved-ui-recreate-ticket
### Changes:
- Added explicit UI indication when an Autotask ticket is resolved by PSA ("Resolved by PSA (Autotask)").
- Differentiated resolution origin between PSA-driven resolution and Backupchecks-driven resolution.
- Re-enabled ticket creation when an existing Autotask ticket was resolved by PSA, allowing operators to create a new ticket if the previous one was closed incorrectly.
- Updated Autotask ticket panel to reflect resolved state without blocking further actions.
- Extended backend validation to allow ticket re-creation after PSA-resolved tickets while preserving historical ticket links.
- Ensured legacy Tickets, Remarks, and Job Details behaviour remains intact.
## v20260119-14-fix-routes-runchecks-syntax
### Changes:
- Fixed a Python SyntaxError in routes_run_checks.py caused by an unmatched closing parenthesis.
- Removed an extra closing bracket introduced during the Autotask PSA resolved / recreate ticket changes.
- Restored successful Gunicorn worker startup and backend application boot.
- No functional or behavioural changes beyond resolving the syntax error.
## v20260119-15-fix-migrations-autotask-phase2
### Changes:
- Restored the missing `_get_table_columns()` helper function required by multiple database migrations.
- Fixed Autotask-related migrations that introduced the `resolved_origin` and Autotask job_run fields.
- Ensured all migrations run inside a safe transaction context so failures always trigger a rollback.
- Prevented database sessions from remaining in an aborted state after a failed migration.
- Resolved runtime database errors on the Run Checks page caused by earlier migration failures.
## v20260119-16-fix-runchecks-render-modal
### Changes:
- Fixed a JavaScript runtime error on the Run Checks page where `renderModal` was referenced but not defined.
- Replaced the obsolete `renderModal(...)` call with the correct Run Checks rendering function.
- Restored proper Run Checks page rendering without breaking existing ticket or modal behaviour.
## v20260119-17-fix-autotask-postcreate-ticketnumber-internal-linking
### Changes:
- Enforced mandatory post-create retrieval (GET Tickets/{TicketID}) after Autotask ticket creation to reliably obtain the Ticket Number.
- Persisted the retrieved Ticket Number to all active (unreviewed) runs of the same job when missing.
- Restored automatic creation and repair of internal Ticket records once the Ticket Number is known.
- Restored TicketJobRun linking so Autotask-created tickets appear correctly in Tickets, Remarks, and Job Details.
- Prevented UI state where a ticket was shown as “created” without a Ticket Number or internal ticket linkage.
## v20260119-18-fix-legacy-ticketnumber-sync
### Changes:
- Restored legacy ticket number compatibility by aligning internal Ticket activation timing with the original run date.
- Set internal Ticket `active_from_date` based on the earliest associated run timestamp instead of the current date.
- Ensured legacy ticket visibility and numbering work correctly for historical runs across Tickets, Remarks, Job Details, and Run Checks indicators.
- Applied the same logic during post-create processing and Phase 2 polling repair to keep legacy behaviour consistent and idempotent.
## v20260120-01-autotask-deleted-ticket-detection
### Changes:
- Added detection of deleted Autotask tickets using DeletedTicketLogs.
- Implemented fallback deleted detection via GET /Tickets/{id} when DeletedTicketLogs is unavailable.
- Stored deleted ticket metadata on job runs:
- autotask_ticket_deleted_at
- autotask_ticket_deleted_by_resource_id
- Marked internal tickets as resolved when the linked Autotask ticket is deleted (audit-safe handling).
- Updated Run Checks to display “Deleted in PSA” status.
- No changes made to Job Details view.
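The two-step detection can be sketched as below; the DeletedTicketLogs field names (`deletedDateTime`, `deletedByResourceID`) follow these notes, and the 404 fallback is an assumption about how a deleted ticket surfaces on GET /Tickets/{id}:

```python
# Prefer DeletedTicketLogs entries; fall back to a 404 on the direct
# ticket GET when the logs endpoint is unavailable.
def detect_deleted(deleted_logs, get_status_code):
    if deleted_logs:
        entry = deleted_logs[0]
        return {
            "deleted": True,
            "deleted_at": entry.get("deletedDateTime"),
            "deleted_by": entry.get("deletedByResourceID"),
        }
    if get_status_code == 404:
        return {"deleted": True, "deleted_at": None, "deleted_by": None}
    return {"deleted": False}
```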
## v20260120-01-autotask-deleted-ticket-audit
### Changes:
- Extended deleted ticket audit data by resolving deletedByResourceID to resource details.
- Stored additional audit fields on job runs:
- autotask_ticket_deleted_by_first_name
- autotask_ticket_deleted_by_last_name
- Persisted deletion date and time from Autotask DeletedTicketLogs.
- Updated Run Checks to display:
- Deleted at (date/time)
- Deleted by (first name + last name, with resource ID as fallback)
- Ensured resource lookup is executed only when a delete is detected to minimize API usage.
- No changes made to Job Details view; data is stored for future reporting use.
## v20260120-03-autotask-deletedby-name-runlink
### Changes:
- Extended deleted ticket audit handling by resolving DeletedByResourceID to resource details.
- Stored deleted-by audit information on job runs:
- autotask_ticket_deleted_by_first_name
- autotask_ticket_deleted_by_last_name
- Updated Run Checks UI to display:
- “Deleted by: <First name> <Last name>”
- Fallback to “Deleted by resource ID” when name data is unavailable.
- Ensured deletion date/time continues to be shown in Run Checks.
- Restored legacy ticket behavior by automatically linking new job runs to existing internal tickets (TicketJobRun).
- Ensured Autotask-linked tickets are inherited by new runs when an open ticket already exists for the job.
- No changes made to Job Details view; audit data is stored for future reporting.
## v20260120-04-autotask-deletedby-name-runlink-fix
### Changes:
- Fixed an IndentationError in mail_importer.py that prevented the application from booting.
- Added idempotent database migration for deleted-by name audit fields on job_runs:
- autotask_ticket_deleted_by_first_name
- autotask_ticket_deleted_by_last_name
- Extended Autotask client with GET /Resources/{id} support to resolve deletedByResourceID.
- Persisted deleted-by first/last name on job runs when a DeletedTicketLogs entry is detected.
- Updated Run Checks to display “Deleted by: <First name> <Last name>” with resource ID as fallback.
- Restored legacy behavior by linking newly created job runs to any open internal tickets (TicketJobRun inherit) during mail import.
## v20260120-05-autotask-indent-fix
- Fixed an IndentationError in routes_inbox.py that prevented Gunicorn from starting.
- Corrected the indentation of db.session.flush() to restore valid Python syntax.
- No functional or logical changes were made.
## v20260120-06-routes-inbox-indent-fix
### Changes:
- Fixed multiple indentation and syntax errors in routes_inbox.py.
- Corrected misaligned db.session.flush() calls to ensure proper transaction handling.
- Repaired indentation of link_open_internal_tickets_to_run logic to prevent runtime exceptions.
- Restored application startup stability by resolving Python IndentationError issues.
## v20260120-07-autotask-psa-resolution-handling
- Added support for linking existing Autotask tickets (Phase 2.2) using Autotask REST queries.
- Implemented ticket listing by company with exclusion of terminal tickets (status != Complete).
- Added search support for existing tickets by exact ticketNumber and by title (contains).
- Implemented authoritative validation of selected Autotask tickets via GET /Tickets/{id}.
- Defined terminal ticket detection based on:
- status == Complete (5)
- OR completedDate is set
- OR resolvedDateTime is set.
- Ensured terminal Autotask tickets automatically resolve the corresponding internal Backupchecks ticket.
- Preserved legacy internal Ticket and TicketJobRun creation/linking so Tickets overview, Tickets/Remarks, and Job Details continue to function identically to manually linked tickets.
- Ensured resolution timestamps are derived from Autotask (resolvedDateTime / completedDate) instead of using current time.
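The terminal-ticket rule defined above translates directly into a small predicate:

```python
# A ticket is terminal when its status is Complete (5) OR either
# resolution timestamp is set, whichever comes first.
COMPLETE_STATUS = 5

def is_terminal(ticket):
    return (
        ticket.get("status") == COMPLETE_STATUS
        or ticket.get("completedDate") is not None
        or ticket.get("resolvedDateTime") is not None
    )
```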
## v20260120-08-runchecks-link-existing-autotask-ticket
- Added Phase 2.2 “Link existing Autotask ticket” flow in Run Checks:
- New UI button “Link existing” next to “Create” in the Run Checks modal.
- Added modal popup with search + refresh and a selectable list of non-terminal tickets.
- Added backend API endpoints:
- GET /api/run-checks/autotask-existing-tickets (list/search tickets for mapped company, excluding terminal statuses)
- POST /api/run-checks/autotask-link-existing-ticket (link selected ticket to run and create/update internal Ticket + TicketJobRun links)
- Extended Autotask client with a ticket query helper to support listing/searching tickets for a company.
- Improved internal ticket resolve handling:
- Do not overwrite resolved_origin when already set; keep “psa” origin when resolved by PSA.
## v20260120-09-runchecks-modal-sequence-fix
- Fixed Run Checks popup behavior by preventing stacked Bootstrap modals.
- Restored correct modal sequence:
- The standard Run Checks modal opens first as before.
- The Autotask popup is opened only after explicitly selecting an Autotask action.
- Ensured the Run Checks modal is temporarily hidden when the Autotask popup opens.
- Automatically reopens the Run Checks modal when the Autotask popup is closed.
- Prevented broken backdrops, focus loss, and non-responsive popups caused by multiple active modals.
## v20260120-10-runchecks-bootstrap-compat-fix
- Fixed Run Checks page crash caused by referencing the Bootstrap 5 global "bootstrap" object when it is not available.
- Added Bootstrap 4/5 compatible modal helpers (show/hide/hidden event) using jQuery modal API when needed.
- Updated Run Checks modal opening and Autotask link modal flow to use the compatibility helpers.
- Restored normal Run Checks popup behavior (click handlers execute again because the page no longer errors on load).
## v20260120-11-runchecks-autotask-status-label
- Updated the “Link existing Autotask ticket” list to display the status label instead of the numeric status code.
- Added a safe fallback chain so the UI shows:
statusLabel (API) -> status_label (legacy) -> numeric status.
## v20260203-01-autotask-resolve-note
- Added a Resolve button to the Autotask ticket section on the Run Checks page.
- Resolve action does NOT close the Autotask ticket.
- Implemented functionality to add a note/update to the existing Autotask ticket indicating it is marked as resolved from Backupchecks.
- Added backend API endpoint to handle the resolve-note action.
- Extended Autotask client with a helper to update existing tickets via PUT.
## v20260203-02-autotask-resolve-button-enabled
- Fixed an issue where the Autotask Resolve button was incorrectly disabled.
- Updated UI disable logic so only the Create action is disabled when an active Autotask ticket exists.
- Ensured the Resolve button remains clickable for existing linked Autotask tickets.
## v20260203-03-autotask-resolve-note-verify
- Fixed download issue by re-packaging the changed files.
- Improved Resolve action feedback so status messages remain visible until completion.
- Added backend verification step to confirm the Autotask update is actually persisted.
- Return a clear error when Autotask accepts the request but does not store the update.
- Prevented false-positive “resolved” messages when no ticket update exists.
## v20260203-04-autotask-resolve-user-note
- Changed the Resolve action to exclusively create a user-visible TicketNote in Autotask.
- Removed all Ticket PUT updates to avoid false-positive system or workflow notes.
- Ensured the TicketNote is published and clearly indicates the ticket is marked as resolved by Backupchecks.
- Updated backend validation to only return success when the TicketNote is successfully created.
- Aligned frontend success messaging with actual TicketNote creation in Autotask.
## v20260203-06-autotask-ticketnotes-child-endpoint
- Updated the Resolve action to create ticket notes using the Autotask child endpoint POST /Tickets/{TicketID}/Notes.
- Removed usage of the unsupported POST /TicketNotes endpoint.
- Ensured the created note is user-visible in Autotask and clearly marks the ticket as resolved by Backupchecks.
## v20260203-07-autotask-notes-endpoint-fix
- Fixed Autotask ticket note creation using the POST /Tickets/{TicketID}/Notes endpoint.
- Updated response handling to support empty or non-JSON success responses without JSON parsing errors.
- Improved backend error handling so Autotask write errors return valid JSON instead of breaking the request.
- Made the ticket note payload more robust to support tenant-specific field requirements.
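A sketch of what the note creation and tolerant response handling described above could look like. The URL layout follows the child endpoint named in the changelog; the payload fields (`noteType`, `publish`) and their values are assumptions about tenant-specific requirements, not the actual implementation.

```python
import json


def note_url(base_url: str, ticket_id: int) -> str:
    """Build the child-endpoint URL: POST /Tickets/{TicketID}/Notes."""
    return f"{base_url.rstrip('/')}/Tickets/{ticket_id}/Notes"


def build_note_payload(title: str, body: str) -> dict:
    return {
        "title": title,
        "description": body,
        "noteType": 1,   # assumption: a general note type
        "publish": 1,    # assumption: 1 = visible to all (user-visible)
    }


def parse_write_response(status_code: int, body: str):
    """Treat any 2xx with an empty or non-JSON body as success."""
    if not 200 <= status_code < 300:
        raise RuntimeError(f"Autotask write failed: HTTP {status_code}")
    try:
        return json.loads(body) if body.strip() else {}
    except json.JSONDecodeError:
        return {}
```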
## v20260203-08-autotask-ticketnote-timezone-suffix
- Updated Autotask ticket note timestamps to use the configured Backupchecks timezone instead of UTC.
- Added a timezone suffix to the timestamp in the ticket note (e.g. Europe/Amsterdam).
- Ensured all user-visible timestamps written to Autotask follow the timezone setting from Backupchecks.
## v20260203-09-autotask-resolution-from-note
- When posting an Autotask “marked as resolved” note, the same text is now also written to the Ticket resolution field.
- Resolution updates follow the validated safe pattern: GET the current Ticket first, then PUT with stabilising fields while keeping the status unchanged.
- Added verification to ensure the resolution text is persisted after the update.
## v20260203-10-autotask-resolution-field-aliases
- Fix: Resolution PUT now always reuses the latest classification/routing fields from GET /Tickets/{id}, including support for common field-name variants (*ID/*Id).
- Prevents failure when issueType/subIssueType/source are not present under the expected keys after ticket creation or later changes.
- Keeps ticket status unchanged while updating resolution, per validated Postman contract.
## v20260203-12-autotask-resolution-v1-casing-fix
- Fixed Autotask REST base URL casing: ATServicesRest and V1.0 are now used exactly as required by the validated Postman contract.
- Ticket GET for resolution updates now uses the authoritative endpoint GET /Tickets/{TicketID} on .../ATServicesRest/V1.0, ensuring issueType/subIssueType/source are retrieved before PUT.
- Resolution update continues to keep ticket status unchanged and only writes the resolution field.
## v20260203-13-autotask-resolution-item-wrapper
- Fix: Resolution update now always reads stabilising fields from GET /Tickets/{id} response under item.* (issueType, subIssueType, source, status).
- Added a dedicated safe helper to update the Ticket resolution field via GET+PUT while keeping status unchanged.
- The resolve-note action now mirrors the same note text into the Ticket resolution field (both in the normal path and the 404 fallback path).
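The safe GET+PUT pattern that the v20260203-10 through -13 entries converge on can be sketched as pure payload logic: unwrap the `item.*` wrapper, reuse the stabilising classification fields (accepting the `*ID`/`*Id` name variants), and write only the resolution while leaving status untouched. Field names here are illustrative assumptions.

```python
STABILISING_FIELDS = ("issueType", "subIssueType", "source", "status")


def unwrap_item(response: dict) -> dict:
    """GET /Tickets/{id} returns the ticket under an ``item`` key."""
    return response.get("item", response)


def pick_field(ticket: dict, name: str):
    """Look up a field under its common casing variants (x, xID, xId)."""
    for candidate in (name, f"{name}ID", f"{name}Id"):
        if candidate in ticket:
            return ticket[candidate]
    return None


def build_resolution_put(get_response: dict, resolution_text: str) -> dict:
    """Build a PUT body that changes only the resolution field."""
    ticket = unwrap_item(get_response)
    payload = {"id": ticket["id"], "resolution": resolution_text}
    # Re-send the stabilising fields exactly as read, status included,
    # so the update does not change classification or ticket status.
    for field in STABILISING_FIELDS:
        value = pick_field(ticket, field)
        if value is not None:
            payload[field] = value
    return payload
```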
---
## v0.1.22
This major release introduces comprehensive Autotask PSA integration, enabling seamless ticket management, customer company mapping, and automated ticket lifecycle handling directly from Backupchecks. The integration includes extensive settings configuration, robust API client implementation, intelligent ticket linking across job runs, and conditional ticket status updates based on time entries.
### Autotask Integration Core Features
**Settings and Configuration:**
- Complete Autotask integration settings in Settings → Integrations
- Environment selection (Sandbox/Production) with automatic zone discovery
- API authentication with fallback support for different tenant configurations
- Tracking identifier (Integration Code) configuration for ticket attribution
- Connection testing and diagnostics
- Reference data synchronization (queues, sources, priorities, statuses)
- Configurable ticket defaults (queue, source, status, priority)
- Autotask integration and automatic mail import can now be properly disabled after being enabled (fixed unchecked checkbox processing)
**Customer Company Mapping:**
- Explicit Autotask company mapping for customers using ID-based linkage
- Company search with auto-suggestions when opening unmapped customers
- Automatically populates search box with customer name and displays matching Autotask companies
- Mapping status tracking (ok/renamed/missing/invalid)
- Bulk mapping refresh for all customers
- Clear search boxes when opening modals for better user experience
**Ticket Creation and Management:**
- Create Autotask tickets directly from Run Checks page
- Automatic ticket number assignment and storage
- Link existing Autotask tickets to job runs
- Cross-company ticket search for overarching infrastructure issues (search by ticket number finds tickets across all companies)
- Ticket propagation to all active runs of the same job
- Internal ticket registration for legacy compatibility (Tickets, Tickets/Remarks, Job Details)
- Real-time ticket status polling and updates
- Deleted ticket detection and audit tracking (deletion date/time and deleted-by resource information)
**Ticket Resolution and Status Management:**
- Conditional ticket status updates based on time entries:
- Tickets without time entries: automatically closed (status 5 - Complete)
- Tickets with time entries: remain open for time tracking continuation
- Dynamic confirmation messages indicating closure behavior based on time entry presence
- Safe resolution updates preserving stabilizing fields (issueType, subIssueType, source)
- Resolution field mirroring from internal ticket notes
- Ticket notes created via `/Tickets/{id}/Notes` endpoint with timezone-aware timestamps
- Deleted ticket handling with complete audit trail
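The conditional closure rule above reduces to a small decision function; a hedged sketch, assuming status 5 means Complete and with illustrative message texts:

```python
STATUS_COMPLETE = 5  # assumption: Autotask "Complete" status code


def resolve_ticket_status(current_status: int, time_entry_count: int):
    """Return (new_status, confirmation_message) for a resolve action."""
    if time_entry_count == 0:
        # No time entries: the ticket can be closed outright.
        return STATUS_COMPLETE, "Ticket will be closed (no time entries)."
    # Time entries exist: keep the ticket open for time tracking.
    plural = "entry" if time_entry_count == 1 else "entries"
    return current_status, (
        f"Ticket stays open: {time_entry_count} time {plural} present."
    )
```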
**Technical Implementation:**
- Full-featured Autotask REST API client (`integrations/autotask/client.py`)
- Zone information discovery and endpoint resolution
- Robust authentication handling with header-based fallback for sandbox environments
- Picklist-based reference data retrieval (queues, sources, priorities, statuses)
- Entity metadata parsing with tenant-specific field detection
- Database migrations for Autotask linkage fields across SystemSettings, Customer, JobRun, and Ticket models
- Ticketing utilities for internal/external ticket synchronization
- Comprehensive API contract documentation (`docs/autotask_rest_api.md`)
- Functional design living document for integration architecture
### User Interface Improvements
- Search boxes now clear automatically when opening modals (Run Checks Link existing, Customer mapping)
- Auto-search for similar company names when mapping unmapped customers
- Cross-company ticket search when using ticket numbers (e.g., "T20260205.0001")
- Dynamic confirmation messages for ticket resolution based on time entries
- Improved visibility of Autotask ticket information in Run Checks
- Status labels displayed instead of numeric codes in ticket lists
- "Deleted in PSA" status display with deletion audit information
- "Resolved by PSA (Autotask)" differentiation from Backupchecks-driven resolution
### Bug Fixes and Stability
- Fixed Autotask REST API base URL casing (ATServicesRest/V1.0)
- Fixed reference data retrieval using correct picklist endpoints
- Fixed authentication fallback for sandbox-specific behavior
- Fixed company name display from nested API responses
- Fixed ticket ID normalization and response unwrapping (itemId handling)
- Fixed TicketJobRun linkage for legacy ticket behavior
- Fixed unchecked checkbox processing for enable/disable toggles (Autotask integration, automatic mail import)
- Fixed ticket resolution updates to preserve exact field values from GET response
- Fixed picklist field detection for tenant-specific metadata
- Fixed migration stability with idempotent column checks
- Fixed settings page crash with local helper functions
- Fixed Run Checks modal stacking and Bootstrap 4/5 compatibility
- Fixed JavaScript errors (renderModal → renderRun)
- Fixed indentation errors preventing application startup
- Fixed ticket propagation to ensure all active runs receive ticket linkage
- Fixed polling to use read-only operations without state mutation
### Documentation
- Added comprehensive Autotask REST API contract documentation (`docs/autotask_rest_api.md`)
- Created functional design living document for integration architecture
- Documented ticket lifecycle, status management, and time entry considerations
- Added changelog tracking for Claude Code changes (`docs/changelog-claude.md`)
---
## v0.1.21