Compare commits


61 Commits

Author SHA1 Message Date
dde2ccbb5d Fix Cove test script: parse Settings array format from API response
API returns Settings as list of single-key dicts, not a flat dict.
Also fixes AccountId display and status summary parsing.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-23 09:58:19 +01:00
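The Settings shape described in this commit can be flattened with a few lines of Python (illustrative only; the function name and exact payload shape are assumptions, not the script's actual code):

```python
def flatten_settings(settings):
    """Collapse a list of single-key dicts, as the API returns Settings,
    into one flat dict: [{"A": "1"}, {"B": "2"}] -> {"A": "1", "B": "2"}."""
    flat = {}
    for entry in settings:
        flat.update(entry)
    return flat
```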
a30d51bed0 Fix Cove test script: remove partner field from login, use confirmed columns
Login requires only username + password (no partner field).
Updated column set matches confirmed working columns from Postman testing.
Added per-datasource output and 28-day color bar display.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-23 09:52:18 +01:00
6d086a883f Add Cove API test script and update documentation with N-able support findings
- Add standalone cove_api_test.py to verify new D9Fxx/D10Fxx/D11Fxx column codes
- D02/D03 confirmed as legacy by N-able support; D9/D10/D11 should work
- Document session status codes (F00) and timestamp fields (F09/F15/F18)
- Update TODO and knowledge docs with breakthrough status

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-23 09:08:41 +01:00
f35ec25163 Update technical notes for 3CX and remark alerts behavior 2026-02-19 14:22:32 +01:00
6f2f7b593b Auto-commit local changes before build (2026-02-19 14:15:08) 2026-02-19 14:15:08 +01:00
38f0f8954e Fix remark visibility in run alerts 2026-02-19 14:14:44 +01:00
2ee5db8882 Auto-commit local changes before build (2026-02-19 13:45:13) 2026-02-19 13:45:13 +01:00
ea244193e0 Hide non-backup 3CX informational jobs from Run Checks 2026-02-19 13:44:46 +01:00
c6ff104767 Auto-commit local changes before build (2026-02-19 13:28:33) 2026-02-19 13:28:33 +01:00
441f5a8e50 Handle 3CX update mails as informational runs 2026-02-19 13:27:52 +01:00
3c629bb664 Polish changelog wording for 2026-02-16 and 2026-02-19 2026-02-19 13:04:49 +01:00
e0e8ed2b0d Auto-commit local changes before build (2026-02-19 12:57:34) 2026-02-19 12:57:34 +01:00
53b028ef78 Add optional Autotask ID import toggle 2026-02-19 12:56:45 +01:00
1fb99dc6e7 Update technical notes for search remarks and filters 2026-02-16 16:59:33 +01:00
f2c0d0b36a Auto-commit local changes before build (2026-02-16 16:58:07) 2026-02-16 16:58:07 +01:00
652da5e117 Add remarks to global search results 2026-02-16 16:57:51 +01:00
c8e7491c94 Add Daily Jobs note to search results 2026-02-16 16:54:26 +01:00
e5da01cfbb Auto-commit local changes before build (2026-02-16 16:50:14) 2026-02-16 16:50:14 +01:00
b46010dbc2 Forward global search filters to overview pages 2026-02-16 16:49:47 +01:00
f90b2bdcf6 Keep search pagination at current section 2026-02-16 16:32:23 +01:00
fcbf67aeb3 Update technical notes for latest search improvements 2026-02-16 16:28:49 +01:00
2beba3bc9d Auto-commit local changes before build (2026-02-16 16:27:14) 2026-02-16 16:27:14 +01:00
ded71cb50f Improve daily jobs search metadata and modal link 2026-02-16 16:26:56 +01:00
dc3eb2f73c Auto-commit local changes before build (2026-02-16 16:19:53) 2026-02-16 16:19:53 +01:00
8a8f957c9f Add per-section pagination to global search 2026-02-16 16:19:26 +01:00
8c29f527c6 Document search template crash fix 2026-02-16 16:10:31 +01:00
fcce3b8854 Fix search template section items iteration 2026-02-16 16:09:22 +01:00
79933c2ecd Update technical notes for global search 2026-02-16 16:08:29 +01:00
d84d2142ec Auto-commit local changes before build (2026-02-16 16:06:20) 2026-02-16 16:06:20 +01:00
7476ebcbe3 Add role-aware global grouped search 2026-02-16 16:05:47 +01:00
189dc4ed37 Update technical notes for customer jobs filter 2026-02-16 15:26:51 +01:00
f4384086f2 Auto-commit local changes before build (2026-02-16 15:15:10) 2026-02-16 15:15:10 +01:00
dca117ed79 Add customer-to-jobs filtering navigation 2026-02-16 15:12:10 +01:00
ecdb331c9b Update technical documentation with detailed system knowledge
Enhanced technical-notes-codex.md with comprehensive details from Claude's
system knowledge document, including:

Ticketing & Autotask:
- Detailed two-ticket system explanation (internal vs Autotask)
- Complete ticket propagation strategies (Strategy 1 & 2)
- Where ticket linking is called (email-based, missed runs)
- Display logic with two-source approach
- Resolved vs Deleted distinction
- All critical rules and anti-patterns

Database Models:
- Complete model listing
- Foreign key relationships and critical deletion order
- Key model fields documentation

UI & UX:
- Detailed navbar behavior
- Status badge color coding
- Complete ticket copy functionality with three-tier fallback
- Checkbox autocomplete behavior

Parser Architecture:
- Parser types (Informational vs Regular)
- Synology Updates parser example
- Schedule learning behavior

Recent Changes:
- Documented 2026-02-13 fixes (missed run ticket linking, checkbox autoselect)
- Documented 2026-02-12 fixes (Run Checks modal, Edge copy button)
- Documented 2026-02-10 changes (screenshot support, link-based system)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-13 13:36:03 +01:00
084c91945a Convert technical notes to English 2026-02-13 13:20:19 +01:00
d2cdd34541 Add internal technical notes document 2026-02-13 13:18:08 +01:00
b5cf91d5f2 Fix checkboxes auto-selecting after page reload on Inbox and Run Checks
Added autocomplete="off" attribute to all checkboxes to prevent browser from
automatically restoring checkbox states after page reload.

Changes:
- Inbox page: Added autocomplete="off" to select-all and row checkboxes
- Run Checks page: Added autocomplete="off" to select-all and row checkboxes

This fixes the issue where after deleting items, the browser would automatically
re-select the same number of checkboxes that were previously selected, causing
unwanted selections on the reloaded page.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-13 11:00:21 +01:00
385aeb901c Auto-commit local changes before build (2026-02-13 10:53:29) 2026-02-13 10:53:29 +01:00
6468cbbc74 Fix Autotask and internal tickets not linking to missed runs
Added ticket linking to missed runs by calling link_open_internal_tickets_to_run
after creating missed JobRun records in _ensure_missed_runs_for_job function.

Changes:
- Added import for link_open_internal_tickets_to_run in routes_run_checks.py
- Added db.session.flush() and ticket linking call after creating weekly missed runs
- Added db.session.flush() and ticket linking call after creating monthly missed runs
- Ensures missed runs receive same ticket propagation as email-based runs

This fixes the issue where missed runs were not showing linked internal tickets
or Autotask tickets, while error/warning runs from emails were working correctly.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-13 10:52:00 +01:00
0e1e7e053d Document Autotask internal ticket linking fix in changelog
Fixed issue where Autotask internal tickets were not being linked to new runs.
This resolves the problem identified on 2026-02-11.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-13 10:39:03 +01:00
bd72f91598 Auto-commit local changes before build (2026-02-12 13:10:24) 2026-02-12 13:10:24 +01:00
2e0baa4e35 Fix copy ticket button not working in Edge on Job Details page
Moved clipboard functions (copyToClipboard, fallbackCopy, showCopyFeedback)
inside IIFE scope for proper closure access. Edge browser is stricter than
Firefox about scope resolution - functions must be in same scope as event
listeners that call them.

Previously these functions were in global scope while event listeners were
in IIFE scope, which worked in Firefox but failed silently in Edge.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-12 11:52:32 +01:00
9dee9c300a Auto-commit local changes before build (2026-02-12 11:11:59) 2026-02-12 11:11:59 +01:00
c5cf07f4e5 Fix tickets not showing in Run Checks modal detail view
Extended /api/job-runs/<run_id>/alerts endpoint to include both:
- Tickets explicitly linked to run via ticket_job_runs (audit trail)
- Tickets linked to job via ticket_scopes (active on run date)

Previously only ticket_job_runs was queried, causing newly created
tickets to not appear in the Meldingen section of the Run Checks modal.
They would only appear after being resolved (which creates a
ticket_job_runs entry). Now both sources are queried and duplicates
are prevented.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-12 10:53:00 +01:00
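The two-source merge with duplicate prevention described in this commit can be sketched as follows (hypothetical shapes — the real endpoint queries SQLAlchemy models via ticket_job_runs and ticket_scopes, not plain dicts):

```python
def merge_ticket_sources(linked, scoped):
    """Combine tickets linked via ticket_job_runs (audit trail) with tickets
    active via ticket_scopes, keeping the first occurrence of each ticket id."""
    seen = set()
    merged = []
    for ticket in linked + scoped:
        if ticket["id"] not in seen:
            seen.add(ticket["id"])
            merged.append(ticket)
    return merged
```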
91755c6e85 Add N-able support ticket email template to Cove TODO
Added ready-to-send email template for requesting expanded API access:
- Complete email with subject line
- Detailed explanation of current limitations
- Specific requests (MSP-level access, status fields, timestamps, errors)
- Technical details and test results reference
- Professional business justification (MSP use case)
- Alternative contact methods listed

User can copy-paste this email on Thursday to contact N-able support.

Template requests:
1. MSP-level API user creation
2. Access to restricted column codes (status, timestamps, errors)
3. Documentation of column code meanings
4. Alternative integration methods if API expansion not possible

Ready for action on Thursday.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-10 17:26:19 +01:00
6674d40f4b Major update: Cove API tested - critical limitations discovered
Added comprehensive API test results document (with ChatGPT assistance):
- docs/cove_data_protection_api_calls_known_info.md

Key findings from live API testing:
- API works: JSON-RPC 2.0 at https://api.backup.management/jsonapi
- Authentication: Login method → visa token
- Method tested: EnumerateAccountStatistics (limited success)

CRITICAL LIMITATIONS DISCOVERED:
- Security error 13501 blocks most useful columns
- No backup status fields (success/failed/warning) accessible
- No error messages (D02Fxx/D03Fxx ranges blocked)
- No reliable backup timestamps
- No detailed run history
- API users are customer-scoped (not MSP-level)
- EnumerateAccounts method always fails (security block)

Working columns (allow-list only):
- I1 (account ID), I14 (storage bytes), I18 (hostname)
- D01F00-D01F07, D09F00 (numeric metrics, semantics unclear)

Impact on Backupchecks:
- Current API access INSUFFICIENT for backup monitoring
- Cannot determine if backups succeeded or failed
- No error messages to show users
- Core Backupchecks functionality not achievable with current API

Added decision matrix with 4 options:
A. Implement metrics-only (low value, storage usage only)
B. Request expanded access from N-able (requires vendor cooperation)
C. Explore alternative methods (webhooks, reports, email)
D. Defer integration until better API access available

Recommendation: Option B or C before implementing anything
- Contact N-able support for MSP-level API user + expanded columns
- OR investigate if Cove has webhook/reporting alternatives

This represents a significant blocker for Cove integration.
Full integration requires either vendor cooperation or alternative approach.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-10 16:55:31 +01:00
32e68d7209 Update Cove TODO: Add complete API documentation links
Major discovery - found comprehensive JSON API documentation on N-able site!

Added documentation sections:
- Core API docs: login, authentication, construct API calls
- Key endpoints: enumerate-customers, enumerate-devices, enumerate-device-statistics
- Reference docs: API column codes, schema documentation
- Architecture and security guides

Key findings:
- API docs located in "unused" folder but still functional
- JSON API structure (likely JSON-RPC or custom format)
- Three critical endpoints identified for backup monitoring:
  1. enumerate-customers (list all customers)
  2. enumerate-devices (list backup devices)
  3. enumerate-device-statistics (backup job results - KEY ENDPOINT!)

Updated status:
- Marked API documentation as found
- Changed next action from "find docs" to "read auth docs and test"
- Updated Phase 1 to start with reading login/auth documentation

Next steps:
1. Read login.htm to understand token authentication
2. Read construct-a-call.htm to understand request format
3. Read enumerate-device-statistics.htm - likely contains backup status data
4. Test in Postman with documented format

Documentation base URL:
https://documentation.n-able.com/covedataprotection/USERGUIDE/documentation/Content/unused/service-management/json-api/

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-10 15:48:35 +01:00
23e59ab459 Update Cove TODO: Add comprehensive Postman testing instructions
Replaced curl examples with detailed Postman testing guide:
- Step-by-step Postman setup instructions
- Two authentication methods to test (Bearer Token vs X-API-Key)
- Multiple base URLs to try (api.backup.management, backup.management)
- Expected response codes and what they mean (200, 401, 403, 404)
- Endpoint discovery list (accounts, customers, devices, jobs)
- Tips for finding API documentation

Added Postman best practices:
- Create Cove API collection
- Use environment variables (cove_token, cove_base_url)
- Save response examples
- Check rate limit headers
- Export collection to JSON

Added structured template for documenting test results:
- Working configuration (base URL, auth method)
- Available endpoints table
- Key response fields mapping to Backupchecks
- Pagination and rate limiting details
- Location to save Postman collection export

Ready for immediate API testing with Postman!

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-10 15:44:24 +01:00
b2992acc56 Update Cove TODO: API user created, add testing instructions
Major progress update:
- API user successfully created in Cove portal
- Credentials: SuperUser role, top-level customer access, token generated
- Portal URL identified: https://backup.management
- API user management: https://backup.management/#/api-users

Added comprehensive testing section:
- Likely API base URLs to test (api.backup.management, backup.management/api)
- Step-by-step Phase 1 testing instructions
- Multiple curl command examples for authentication testing
- Different auth header formats to try (Bearer, X-API-Key)
- Common endpoints to discover (accounts, customers, devices)
- POC Python script template

Next steps:
1. Test API authentication with curl commands
2. Find working API base URL and auth method
3. Discover available endpoints
4. Document API response format
5. Create POC script for data retrieval

Status: Ready for immediate API testing!

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-10 15:42:11 +01:00
200dd23285 Update Cove TODO: API exists but activation method unknown
Added critical information from user:
- Confirmed: Cove Data Protection HAS API access (documented)
- Problem: Location/method to enable API access is unknown

Changes:
- Added Phase 0: API Access Activation (critical first step)
- Marked API availability as confirmed
- Added checklist for finding API activation in admin portal
- Listed possible admin portal locations to check
- Added support channel suggestions if activation unclear
- Updated current status section with latest info

Next action: Investigate Cove admin portal or contact support for
API activation instructions.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-10 15:38:08 +01:00
d1023f9e52 Translate Cove Data Protection TODO to English
Changed TODO document language from Dutch to English to align with
project documentation standards (all code and docs in English).

No content changes, only translation.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-10 15:33:34 +01:00
1de1b032e7 Add TODO for Cove Data Protection integration
Created comprehensive TODO document for integrating Cove Data Protection
(formerly N-able Backup) into Backupchecks.

Key challenges:
- Cove does not use email notifications like other backup systems
- Need to research API availability and authentication methods
- Must determine optimal integration strategy (polling vs webhooks)

Document includes:
- Research questions (API availability, data structure, multi-tenancy)
- Three architecture options for integration
- Implementation phases (research, database, import, scheduling, UI)
- Success criteria and open questions
- References section for documentation links

Status: Research phase - waiting on API documentation investigation

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-10 15:32:12 +01:00
661a5783cf Auto-commit local changes before build (2026-02-10 15:27:46) 2026-02-10 15:27:46 +01:00
dfe86a6ed1 Update changelog with copy ticket button improvements
Added documentation for:
- Copy ticket button on Job Details page
- Cross-browser clipboard copy fix (Edge no longer requires manual popup)
- Three-tier fallback mechanism for clipboard operations

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-10 15:04:38 +01:00
35ec337c54 Add copy ticket button to Job Details and improve cross-browser copy functionality
Changes:
- Added copy ticket button (⧉) next to ticket numbers in Job Details modal
- Implemented robust cross-browser clipboard copy mechanism:
  1. Modern navigator.clipboard API (works in HTTPS contexts)
  2. Legacy document.execCommand('copy') fallback (works in older browsers)
  3. Prompt fallback as last resort
- Applied improved copy function to both Run Checks and Job Details pages
- Copy now works directly in all browsers (Firefox, Edge, Chrome) without popup

This eliminates the manual copy step in Edge that previously required a popup.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-10 15:04:21 +01:00
c777728c91 Update changelog with comprehensive screenshot feature documentation
Added detailed documentation for screenshot attachment support in Feedback
system, including:
- File validation using imghdr (header inspection, not just extensions)
- Admin access control for deleted item attachments
- Automatic CASCADE delete behavior
- Enhanced admin deleted items view with permanent delete
- UI improvements for deleted item display (opacity + background)
- Security considerations for non-admin users

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-10 13:51:54 +01:00
0510613708 Fix: Allow admins to view screenshots of deleted feedback items
Two fixes:
1. Improved deleted item row styling (opacity + background)
2. Allow feedback_attachment route to serve images from deleted items (admin only)

Before: Screenshots shown as links only (2026-02-10_13_29_39.png)
After: Screenshots shown as images/thumbnails

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-10 13:46:24 +01:00
fc99f17db3 Add admin view for deleted feedback items + permanent delete
User request: Allow admins to view deleted items and permanently
delete them (hard delete) to clean up database and remove screenshots.

Features:
1. Admin-only "Show deleted" checkbox on feedback list
2. Deleted items shown with gray background + "Deleted" badge
3. Permanent delete button (only for soft-deleted items)
4. Hard delete removes item + all attachments from database
5. Admins can view detail pages of deleted items

Backend (routes_feedback.py):
- Added show_deleted parameter (admin only)
- Modified feedback_page query to optionally include deleted items
- Added deleted_at, deleted_by to query results
- Modified feedback_detail to allow admins to view deleted items
- New route: feedback_permanent_delete (hard delete)
  - Only works on already soft-deleted items (safety check)
  - Uses db.session.delete() - CASCADE removes attachments
  - Shows attachment count in confirmation message

Frontend:
- feedback.html:
  - "Show deleted items" checkbox (auto-submits form)
  - Deleted items: gray background (table-secondary)
  - Shows deleted timestamp
  - "Permanent Delete" button in Actions column
  - Confirmation dialog warns about permanent deletion
- feedback_detail.html:
  - "Deleted" badge in header
  - Actions sidebar shows warning + "Permanent Delete" button
  - Normal actions (resolve/delete) hidden for deleted items

Benefits:
- Audit trail preserved with soft delete
- Database can be cleaned up later by removing old deleted items
- Screenshots (BYTEA) don't accumulate forever
- Two-stage safety: soft delete → permanent delete

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-10 13:40:53 +01:00
1a506c0713 Fix: Add FeedbackAttachment to routes_shared imports
Missing import caused NameError when creating feedback with screenshots.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-10 13:30:47 +01:00
85798a07ae Auto-commit local changes before build (2026-02-10 13:29:10) 2026-02-10 13:29:10 +01:00
451ce1ab22 Add screenshot attachment support to Feedback/Bug system
User request: Allow screenshots to be attached to bug reports
and feature requests for better documentation and reproduction.

Database:
- New model: FeedbackAttachment (file_data BYTEA, filename, mime_type, file_size)
- Links to feedback_item_id (required) and feedback_reply_id (optional)
- Migration: auto-creates table with indexes on startup
- Cascading deletes when item or reply is deleted

Backend (routes_feedback.py):
- Helper function: _validate_image_file() for security
  - Validates file type using imghdr (not just extension)
  - Enforces size limit (5MB per file)
  - Secure filename handling with werkzeug
  - Allowed: PNG, JPG, GIF, WEBP
- Updated feedback_new: accepts multiple file uploads
- Updated feedback_reply: accepts multiple file uploads
- Updated feedback_detail: fetches attachments for item + replies
- New route: /feedback/attachment/<id> to serve images

Frontend:
- feedback_new.html: file input with multiple selection
- feedback_detail.html:
  - Shows item screenshots as clickable thumbnails (max 300x200)
  - Shows reply screenshots as clickable thumbnails (max 200x150)
  - File upload in reply form
  - All images open full-size in new tab

Security:
- Access control: only authenticated users with feedback roles
- Image type verification using imghdr (header inspection)
- File size limit enforced (5MB)
- Secure filename sanitization
- Deleted items hide their attachments (404)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-10 13:28:41 +01:00
49 changed files with 3897 additions and 101 deletions

.gitignore (7 changes)

@@ -1,2 +1,9 @@
 # Claude Code confidential files
 .claude/
+# Codex local workspace files
+.codex/
+# Python cache artifacts
+__pycache__/
+*.pyc

@@ -1 +1 @@
-main
+v20260219-03-fix-remark-visibility


@@ -0,0 +1,772 @@
# TODO: Cove Data Protection Integration
**Date:** 2026-02-10
**Status:** Research phase
**Priority:** Medium
---
## 🎯 Goal
Integrate Cove Data Protection (formerly N-able Backup / SolarWinds Backup) into Backupchecks for backup status monitoring.
**Challenge:** Cove does NOT work with email notifications like other backup systems (Veeam, Synology, NAKIVO). We need to find an alternative method to import backup status information.
---
## 🔍 Research Questions
### 1. API Availability
- [x] Does Cove Data Protection have a public API? **YES - Confirmed in documentation**
- [ ] **CRITICAL:** How to enable/activate API access? (settings location, admin portal?)
- [ ] What authentication method does the API use? (API key, OAuth, basic auth?)
- [ ] Which endpoints are available for backup status?
- [ ] Is there rate limiting on the API?
- [ ] Documentation URL: ?
- [ ] Is API access available in all Cove subscription tiers or only specific plans?
### 2. Data Structure
- [ ] What information can we retrieve per backup job?
- Job name
- Status (success/warning/failed)
- Start/end time
- Backup type
- Client/device name
- Error messages
- Objects/files backed up
- [ ] Is there a webhook system available?
- [ ] How often should the API be polled?
### 3. Multi-Tenancy
- [ ] Does Cove support multi-tenant setups? (MSP use case)
- [ ] Can we monitor multiple customers/partners from 1 account?
- [ ] How are permissions/access managed?
### 4. Integration Strategy
- [ ] **Option A: Scheduled Polling**
- Cronjob that periodically calls API
- Parse results to JobRun records
- Pro: Simple, consistent with current flow
- Con: Delay between backup and registration in system
- [ ] **Option B: Webhook/Push**
- Cove sends notifications to our endpoint
- Pro: Real-time updates
- Con: Requires external endpoint, security considerations
- [ ] **Option C: Email Forwarding**
- If Cove has email support after all (hidden setting?)
- Pro: Reuses existing email import flow
- Con: Possibly not available
---
## 📋 Technical Considerations
### Database Model
Current JobRun model expects:
- `mail_message_id` (FK) - how do we adapt this for API-sourced runs?
- Possible new field: `source_type` ("email" vs "api")
- Possible new field: `external_id` (Cove job ID)
### Parser System
Current parser system works with email content. For API:
- New "parser" concept for API responses?
- Or direct JobRun creation without parser layer?
### Architecture Options
**Option 1: Extend Email Import System**
```
API Poller → Pseudo-MailMessage → Existing Parser → JobRun
```
- Pro: Reuse existing flow
- Con: Hacky, email fields have no meaning
**Option 2: Parallel Import System**
```
API Poller → API Parser → JobRun (direct)
```
- Pro: Clean separation, no email dependency
- Con: Logic duplication
**Option 3: Unified Import Layer**
```
Email Import ──┐
               ├─→ Common Processor → JobRun
API Import  ───┘
```
- Pro: Future-proof, scalable
- Con: Larger refactor
---
## 🔧 Implementation Steps (After Research)
### Phase 0: API Access Activation (FIRST!)
**Critical step before any development can begin:**
1. [ ] **Find API activation location**
- Check Cove admin portal/dashboard
- Look in: Settings → API / Integrations / Developer section
- Check: Account settings, Company settings, Partner settings
- Search documentation for: "API activation", "API access", "enable API"
2. [ ] **Generate API credentials**
- API key generation
- Client ID / Client Secret (if OAuth)
- Note: which user/role can generate API keys?
3. [ ] **Document API base URL**
- Production API endpoint
- Sandbox/test environment (if available)
- Regional endpoints (EU vs US?)
4. [ ] **Document API authentication flow**
- Header format (Bearer token, API key in header, query param?)
- Token expiration and refresh
- Rate limit headers to watch
5. [ ] **Find API documentation portal**
- Developer documentation URL
- Interactive API explorer (Swagger/OpenAPI?)
- Code examples/SDKs
- Support channels for API questions
**Resources to check:**
- Cove admin portal: https://backup.management (or similar)
- N-able partner portal
- Cove knowledge base / support docs
- Contact Cove support for API access instructions
### Phase 1: API Research & POC
**Step 1: Read Authentication Documentation** ✅ DOCUMENTATION FOUND!
- [x] API documentation located
- [ ] **Read:** https://documentation.n-able.com/covedataprotection/USERGUIDE/documentation/Content/unused/service-management/json-api/login.htm
- [ ] **Read:** https://documentation.n-able.com/covedataprotection/USERGUIDE/documentation/Content/unused/service-management/json-api/construct-a-call.htm
- [ ] Document API base URL from docs
- [ ] Document authentication flow (likely JSON-RPC style based on "construct-a-call")
- [ ] Note any required request format (headers, body structure)
**Step 2: Test Authentication**
- [ ] Determine token format (Bearer token? API key header? Query param?)
- [ ] Common authentication patterns to test:
```bash
# Option 1: Bearer token
curl -H "Authorization: Bearer YOUR_TOKEN" https://api.example.com/endpoint
# Option 2: API Key header
curl -H "X-API-Key: YOUR_TOKEN" https://api.example.com/endpoint
# Option 3: Custom header
curl -H "X-Auth-Token: YOUR_TOKEN" https://api.example.com/endpoint
```
- [ ] Test with simple endpoint (e.g., `/api/v1/status`, `/api/accounts`, `/api/devices`)
**Step 3: Discover Available Endpoints**
- [ ] Find API documentation/reference
- [ ] Look for OpenAPI/Swagger spec
- [ ] Key endpoints we need:
- List customers/accounts
- List backup devices/jobs
- Get backup job history
- Get backup job status/details
- Get backup run results (success/failed/warnings)
**Step 4: Test Data Retrieval**
- [ ] Test listing customers (verify top-level access works)
- [ ] Test listing backup jobs for one customer
- [ ] Test retrieving details for one backup job
- [ ] Document response format (JSON structure)
- [ ] Save example API responses for reference
**Step 5: Proof of Concept Script**
1. [ ] Create standalone Python script (outside Backupchecks)
2. [ ] Test authentication and data retrieval
3. [ ] Parse API response to extract key fields
4. [ ] Mapping of Cove data → Backupchecks JobRun model
5. [ ] Document findings in this TODO
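A minimal POC sketch for the steps above, based on the findings documented elsewhere in this file (JSON-RPC 2.0 at https://api.backup.management/jsonapi, Login method → visa token). The parameter names and the placement of the visa field are assumptions to verify against login.htm and construct-a-call.htm:

```python
import json
import urllib.request

API_URL = "https://api.backup.management/jsonapi"

def build_rpc_payload(method, params, visa=None):
    """Assemble a JSON-RPC 2.0 request body; the visa token is attached to
    calls made after Login (field placement is an assumption)."""
    payload = {"jsonrpc": "2.0", "id": "1", "method": method, "params": params}
    if visa is not None:
        payload["visa"] = visa
    return payload

def rpc_call(method, params, visa=None):
    """POST the payload to the JSON API and decode the response."""
    body = json.dumps(build_rpc_payload(method, params, visa)).encode()
    req = urllib.request.Request(
        API_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Usage sketch (requires valid credentials):
#   visa = rpc_call("Login", {"username": USER, "password": PASS})["visa"]
#   stats = rpc_call("EnumerateAccountStatistics", {...}, visa=visa)
```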
### Phase 2: Database Changes
1. [ ] Decide: extend MailMessage model or new source type?
2. [ ] Migration: add `source_type` field to JobRun
3. [ ] Migration: add `external_id` field to JobRun
4. [ ] Update constraints/validations
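The two proposed columns could be added as below (sqlite3 is used only to illustrate the shape; the real app presumably runs its own startup migrations, and the table/column names are taken from the items above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE job_run (id INTEGER PRIMARY KEY)")
# New fields proposed above: where the run came from, and the Cove job id.
conn.execute(
    "ALTER TABLE job_run ADD COLUMN source_type TEXT NOT NULL DEFAULT 'email'"
)
conn.execute("ALTER TABLE job_run ADD COLUMN external_id TEXT")

# Existing email-sourced rows keep the default; API rows would set both fields.
conn.execute("INSERT INTO job_run DEFAULT VALUES")
row = conn.execute("SELECT source_type, external_id FROM job_run").fetchone()
```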
### Phase 3: Import Mechanism
1. [ ] New file: `containers/backupchecks/src/backend/app/cove_importer.py`
2. [ ] API client for Cove
3. [ ] Data transformation to JobRun format
4. [ ] Error handling & retry logic
5. [ ] Logging & audit trail
### Phase 4: Scheduling
1. [ ] Cronjob/scheduled task for polling (every 15 min?)
2. [ ] Or: webhook endpoint if Cove supports it
3. [ ] Rate limiting & throttling
4. [ ] Duplicate detection (avoid double imports)
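Duplicate detection (item 4) could be as simple as filtering on the proposed `external_id` before creating JobRun records — illustrative only; the field name assumes the Phase 2 schema change:

```python
def filter_new_runs(api_runs, known_external_ids):
    """Drop API results whose Cove job id was already imported."""
    return [
        run for run in api_runs
        if run["external_id"] not in known_external_ids
    ]
```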
### Phase 5: UI Updates
1. [ ] Job Details: indication that job is from API (not email)
2. [ ] No "Download EML" button for API-sourced runs
3. [ ] Possibly different metadata display
---
## 📚 References
### Cove Data Protection
- **Product name:** Cove Data Protection (formerly N-able Backup, SolarWinds Backup)
- **Website:** https://www.n-able.com/products/cove-data-protection
- **API Documentation Base:** https://documentation.n-able.com/covedataprotection/USERGUIDE/documentation/Content/unused/service-management/json-api/
### JSON API Documentation (Found!)
**Core Documentation:**
- 📘 **API Home:** https://documentation.n-able.com/covedataprotection/USERGUIDE/documentation/Content/unused/service-management/json-api/home.htm
- 🔑 **Login/Authentication:** https://documentation.n-able.com/covedataprotection/USERGUIDE/documentation/Content/unused/service-management/json-api/login.htm
- 🔧 **Construct API Calls:** https://documentation.n-able.com/covedataprotection/USERGUIDE/documentation/Content/unused/service-management/json-api/construct-a-call.htm
**Key Endpoints for Backupchecks:**
- 👥 **Enumerate Customers:** https://documentation.n-able.com/covedataprotection/USERGUIDE/documentation/Content/unused/service-management/json-api/enumerate-customers.htm
- 💻 **Enumerate Devices:** https://documentation.n-able.com/covedataprotection/USERGUIDE/documentation/Content/unused/service-management/json-api/enumerate-devices.htm
- 📊 **Enumerate Device Statistics:** https://documentation.n-able.com/covedataprotection/USERGUIDE/documentation/Content/unused/service-management/json-api/enumerate-device-statistics.htm
**Reference:**
- 📋 **API Column Codes:** https://documentation.n-able.com/covedataprotection/USERGUIDE/documentation/Content/unused/service-management/json-api/API-column-codes.htm
- 📋 **Legacy Column Codes:** https://documentation.n-able.com/covedataprotection/USERGUIDE/documentation/Content/unused/service-management/json-api/API-column-codes-legacy.htm
- 📐 **Schema Documentation:** https://documentation.n-able.com/covedataprotection/USERGUIDE/documentation/Content/unused/service-management/json-api/how-to-schema.htm
**Other Resources:**
- 🏗️ **Architecture Guide:** https://documentation.n-able.com/covedataprotection/USERGUIDE/documentation/Content/Architecture-and-Security/Cove-Architecture-Guide.htm
- 🔒 **Security Guide:** https://documentation.n-able.com/covedataprotection/USERGUIDE/documentation/Content/Architecture-and-Security/Cove-Security-Guide.htm
**Note:** API docs are in the "unused" folder - likely legacy but still functional!
### Similar Integrations
Other backup systems that use APIs:
- Veeam: Has both email and REST API
- Acronis: REST API available
- MSP360: API for management
### Resources
- [x] API documentation (found - see the JSON API Documentation section above)
- [ ] SDK/Client libraries available?
- [ ] Community/forum for integration questions?
- [ ] Example code/integrations?
---
## ❓ Open Questions
1. **Performance:** How many Cove jobs do we need to monitor? (impact on polling frequency)
2. **Historical Data:** Can we retrieve old backup runs, or only new ones?
3. **Filtering:** Can we apply filters (only failed jobs, specific clients)?
4. **Authentication:** Where do we store Cove API credentials? (SystemSettings?)
5. **Multi-Account:** Do we support multiple Cove accounts? (MSP scenario)
---
## 🎯 Success Criteria
### Minimum Viable Product (MVP)
- [ ] Backup runs from Cove are automatically imported
- [ ] Status (success/warning/failed) displayed correctly
- [ ] Job name and timestamp available
- [ ] Visible in Daily Jobs & Run Checks
- [ ] Errors and warnings are shown
### Nice to Have
- [ ] Real-time import (webhook instead of polling)
- [ ] Backup object details (individual files/folders)
- [ ] Retry history
- [ ] Storage usage metrics
- [ ] Multi-tenant support
---
## ⚠️ Critical Limitations Discovered (2026-02-10)
### What the API CAN provide:
- ✅ Account/device identifiers (I1)
- ✅ Storage usage metrics (I14 - bytes used)
- ✅ Computer/hostname (I18)
- ✅ Numeric metrics (D01F00-D01F07, D09F00)
- ✅ Basic partner metadata
### What the API CANNOT provide (security restrictions):
- ❌ **Last backup timestamp** - No reliable date/time fields accessible
- ❌ **Backup status** (success/failed/warning) - No explicit status fields
- ❌ **Error messages** - All D02Fxx/D03Fxx ranges blocked
- ❌ **Backup run history** - No detailed run information
- ❌ **Cross-customer aggregation** - API users are customer-scoped
- ❌ **Device enumeration** - EnumerateAccounts method blocked (error 13501)
### Root Cause
**Security error 13501** ("Operation failed because of security reasons") occurs when:
- Any restricted column code is requested in EnumerateAccountStatistics
- EnumerateAccounts method is called (always fails)
- This applies even with SuperUser + SecurityOfficer roles
**Column restrictions are per-tenant and not documented.** The allow-list is extremely limited.
### Impact on Backupchecks Integration
**Current API access is insufficient for backup monitoring** because:
1. No way to determine if a backup succeeded or failed
2. No error messages to display to users
3. No timestamps to track backup frequency
4. Cannot import backup "runs" in a meaningful way
**Possible with current API:**
- Storage usage dashboard only
- Device inventory list
- But NOT backup status monitoring (core Backupchecks function)
---
## 🔀 Decision Point: Integration Feasibility
### Option A: Implement Metrics-Only Integration
**Pros:**
- Can display storage usage per device
- Simple implementation
- Works with current API access
**Cons:**
- Does NOT meet core Backupchecks requirement (backup status monitoring)
- No success/failure tracking
- No alerting on backup issues
- Limited value compared to email-based systems
**Effort:** Low (2-3 days)
**Value:** Low (storage metrics only, no backup monitoring)
### Option B: Request Expanded API Access from N-able ⭐ RECOMMENDED
**Contact N-able support and request:**
1. MSP-level API user capability (cross-customer access)
2. Access to restricted column codes:
- Backup timestamps (last successful backup)
- Status fields (success/warning/failed)
- Error message fields (D02Fxx/D03Fxx)
- Session/run history fields
**Pros:**
- Could enable full backup monitoring if granted
- Proper integration matching other backup systems
**Cons:**
- May require vendor cooperation
- No guarantee N-able will grant access
- Possible additional licensing costs?
- Timeline uncertain (support ticket process)
**Effort:** Unknown (depends on N-able response)
**Value:** High (if successful)
---
### 📧 Support Ticket Template (Ready to Send)
**To:** N-able Cove Data Protection Support
**Subject:** API Access Request - Backup Monitoring Integration
**Email Body:**
```
Hello N-able Support Team,
We are developing a backup monitoring solution for MSPs and are integrating
with Cove Data Protection via the JSON-RPC API for our customers.
Current Situation:
- We have successfully authenticated with the API
- API endpoint: https://api.backup.management/jsonapi
- API user management: https://backup.management/#/api-users
- Method tested: EnumerateAccountStatistics
- Role: SuperUser + SecurityOfficer
Current Limitations (Blocking Integration):
We are encountering "Operation failed because of security reasons (error 13501)"
when attempting to access essential backup monitoring data:
1. Backup Status Fields
- Cannot determine if backups succeeded, failed, or completed with warnings
- Need access to status/result columns
2. Timestamp Information
- Cannot access last backup date/time
- Need reliable timestamp fields to track backup frequency
3. Error Messages
- D02Fxx and D03Fxx column ranges are blocked
- Cannot retrieve error details to show users what went wrong
4. API User Scope
- API users are customer-scoped only
- Need MSP-level API user capability for cross-customer monitoring
Impact:
Without access to these fields, we can only retrieve storage usage metrics,
which is insufficient for backup status monitoring - the core requirement
for our MSP customers.
Request:
Can you please:
1. Enable MSP-level API user creation for cross-customer access
2. Grant access to restricted column codes containing:
- Backup status (success/failed/warning)
- Last backup timestamps
- Error messages and details
- Session/run history
3. Provide documentation on the semantic meaning of column codes (especially
D01F00-D01F07 and D09F00 which currently work)
4. OR suggest an alternative integration method if expanded API access is
not available (webhooks, reporting API, email notifications, etc.)
Technical Details:
- Our test results are documented at:
docs/cove_data_protection_api_calls_known_info.md (can provide upon request)
- Safe columns identified: I1, I14, I18, D01F00-D01F07, D09F00
- Restricted columns: Entire D02Fxx and D03Fxx ranges
Use Case:
We need this integration to provide our MSP customers with centralized backup
monitoring across multiple backup vendors (Veeam, Synology, NAKIVO, and Cove).
Without proper API access, Cove customers cannot benefit from our monitoring
solution.
Please advise on the best path forward for enabling comprehensive backup
monitoring via the Cove API.
Thank you for your assistance.
Best regards,
[Your Name]
[Company Name]
[Contact Information]
```
**Alternative Contact Methods:**
- N-able Partner Portal support ticket
- Cove support email (if available)
- N-able account manager (if assigned)
---
### Option C: Alternative Integration Methods
Explore if Cove has:
1. **Reporting API** (separate from JSON-RPC)
2. **Webhook system** (push notifications for backup events)
3. **Email notifications** (if available, use existing email parser)
4. **Export/CSV reports** (scheduled export that can be imported)
**Effort:** Medium (research required)
**Value:** Unknown
### Option D: Defer Integration
**Wait until:**
- Customer requests Cove support specifically
- N-able improves API capabilities
- Alternative integration method discovered
**Pros:**
- No wasted effort on limited implementation
- Focus on systems with better API support
**Cons:**
- Cove customers cannot use Backupchecks
- Competitive disadvantage if other MSPs support Cove
---
## 🎯 Recommended Next Steps
### Immediate (This Week)
1. **Decision:** Choose Option A, B, C, or D above
2. **If Option B (contact N-able):**
- Open support ticket with N-able
- Reference API user creation at https://backup.management/#/api-users
- Explain need for expanded column access for monitoring solution
- Attach findings from `/docker/develop/cove_data_protection_api_calls_known_info.md`
- Ask specifically for:
- MSP-level API user creation
- Access to backup status/timestamp columns
- Documentation of column codes semantics
3. **If Option C (alternative methods):**
- Check Cove portal for webhook/reporting settings
- Search N-able docs for "reporting API", "webhooks", "notifications"
- Test if email notifications can be enabled per customer
### Long Term (Future)
- Monitor N-able API changelog for improvements
- Check if other MSPs have found workarounds
- Consider partnering with N-able for integration
---
## 🚀 Next Steps
### Immediate Actions (Ready to Start!)
**1. Read API Documentation** ✅ FOUND!
Priority reading order:
1. **Start here:** [Login/Auth](https://documentation.n-able.com/covedataprotection/USERGUIDE/documentation/Content/unused/service-management/json-api/login.htm) - How to authenticate with your token
2. **Then read:** [Construct a Call](https://documentation.n-able.com/covedataprotection/USERGUIDE/documentation/Content/unused/service-management/json-api/construct-a-call.htm) - Request format
3. **Key endpoint:** [Enumerate Device Statistics](https://documentation.n-able.com/covedataprotection/USERGUIDE/documentation/Content/unused/service-management/json-api/enumerate-device-statistics.htm) - This likely has backup job data!
**What to extract from docs:**
- API base URL/endpoint
- Request format (JSON-RPC? REST? POST body structure?)
- How to use the token in requests
- Response format examples
- Which fields contain backup status/results
**2. Quick API Test with Postman** (can be done now with token!)
### Postman Setup Instructions
**Step 1: Create New Request**
1. Open Postman
2. Click "New" → "HTTP Request"
3. Name it "Cove API - Test Authentication"
**Step 2: Configure Request**
- **Method:** GET
- **URL:** Try these in order:
1. `https://api.backup.management/api/accounts`
2. `https://backup.management/api/accounts`
3. `https://api.backup.management/api/customers`
**Step 3: Add Authentication (try both methods)**
**Option A: Bearer Token**
- Go to "Authorization" tab
- Type: "Bearer Token"
- Token: `YOUR_TOKEN` (paste token from backup.management)
**Option B: API Key in Header**
- Go to "Headers" tab
- Add header:
- Key: `X-API-Key`
- Value: `YOUR_TOKEN`
**Step 4: Send Request and Analyze Response**
**Expected Results:**
- ✅ **200 OK** → Success! API works, save this configuration
- Copy the JSON response → we'll analyze structure
- Note which URL and auth method worked
- Check for pagination info in response
- ❌ **401 Unauthorized** → Wrong auth method
- Try other authentication option (Bearer vs X-API-Key)
- Check if token was copied correctly
- ❌ **404 Not Found** → Wrong endpoint URL
- Try alternative base URL (api.backup.management vs backup.management)
- Try different endpoint (/api/customers, /api/devices)
- ❌ **403 Forbidden** → Token works but insufficient permissions
- Verify API user has SuperUser role
- Check customer scope selection
**Step 5: Discover Available Endpoints**
Once authentication works, try these endpoints:
```
GET /api/accounts
GET /api/customers
GET /api/devices
GET /api/jobs
GET /api/statistics
GET /api/sessions
```
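The same sweep can be scripted instead of clicked through by hand. A throwaway probe sketch, assuming the candidate base URLs from Step 2 and both auth header styles from Step 3 (none of these URLs or header names are confirmed yet):

```python
import urllib.request
import urllib.error

# Unverified candidates from the steps above.
CANDIDATE_BASES = ["https://api.backup.management", "https://backup.management"]
CANDIDATE_PATHS = ["/api/accounts", "/api/customers", "/api/devices",
                   "/api/jobs", "/api/statistics", "/api/sessions"]

def auth_headers(token: str, style: str) -> dict:
    """Build headers for the two auth styles from Step 3 (Bearer vs X-API-Key)."""
    if style == "bearer":
        return {"Authorization": f"Bearer {token}"}
    return {"X-API-Key": token}

def probe(token: str) -> None:
    """Try every base/path/auth combination and print the HTTP status of each."""
    for base in CANDIDATE_BASES:
        for path in CANDIDATE_PATHS:
            for style in ("bearer", "x-api-key"):
                req = urllib.request.Request(base + path, headers=auth_headers(token, style))
                try:
                    with urllib.request.urlopen(req, timeout=15) as resp:
                        status = resp.status
                except urllib.error.HTTPError as exc:
                    status = exc.code  # 401/403/404 are still useful signals
                except OSError:
                    status = None  # DNS/connection failure
                print(f"{status}  {style:9s}  {base + path}")
```

Any 200 (or even 401/403) tells you which base URL is live; record the winning combination in Step 7.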
For each successful endpoint, save:
- The request in Postman collection
- Example response in TODO or separate file
- Note any query parameters (page, limit, filter, etc.)
**Step 6: Look for API Documentation**
Try these URLs in browser or Postman:
- `https://api.backup.management/swagger`
- `https://api.backup.management/docs`
- `https://api.backup.management/api-docs`
- `https://backup.management/api/documentation`
**Step 7: Document Findings**
After successful testing, document in this TODO:
- ✅ Working API base URL
- ✅ Correct authentication method (Bearer vs header)
- ✅ List of available endpoints discovered
- ✅ JSON response structure examples
- ✅ Any pagination/filtering patterns
- ✅ Rate limits (check response headers: X-RateLimit-*)
### Postman Tips for This Project
**Save Everything:**
- Create a "Cove API" collection in Postman
- Save all working requests
- Export collection to JSON for documentation
**Use Variables:**
- Create Postman environment "Cove Production"
- Add variable: `cove_token` = your token
- Add variable: `cove_base_url` = working base URL
- Use `{{cove_token}}` and `{{cove_base_url}}` in requests
**Check Response Headers:**
- Look for `X-RateLimit-Limit` (API call limits)
- Look for `X-RateLimit-Remaining` (calls left)
- Look for `Link` header (pagination)
**Save Response Examples:**
- For each endpoint, save example response
- Use Postman's "Save Response" feature
- Or copy JSON to separate file for reference
**3. Document Findings**
**After successful Postman testing, update this TODO with:**
```markdown
## ✅ API Testing Results (Add after testing)
### Working Configuration
- **Base URL:** [fill in]
- **Authentication:** Bearer Token / X-API-Key header (circle one)
- **Token Location:** Authorization header / X-API-Key header (circle one)
### Available Endpoints Discovered
| Endpoint | Method | Purpose | Response Fields |
|----------|--------|---------|-----------------|
| /api/accounts | GET | List accounts | [list key fields] |
| /api/customers | GET | List customers | [list key fields] |
| /api/devices | GET | List backup devices | [list key fields] |
| /api/jobs | GET | List backup jobs | [list key fields] |
### Key Response Fields for Backupchecks Integration
From backup job/session endpoint:
- Job ID: `[field name]`
- Job Name: `[field name]`
- Status: `[field name]` (values: success/warning/failed)
- Start Time: `[field name]`
- End Time: `[field name]`
- Customer/Device: `[field name]`
- Error Messages: `[field name]`
- Backup Objects: `[field name or nested path]`
### Pagination
- Method: [Link headers / page parameter / cursor / none]
- Page size: [default and max]
- Total count: [available in response?]
### Rate Limiting
- Limit: [X requests per Y time]
- Headers: [X-RateLimit-* header names]
### API Documentation URL
- [URL if found, or "Not found" if unavailable]
```
**Save Postman Collection:**
- Export collection as JSON
- Save to: `/docker/develop/backupchecks/docs/cove-api-postman-collection.json`
- Or share Postman workspace link in this TODO
**4. Create POC Script**
Once API works, create standalone Python test script:
```python
import requests

# Quick POC to retrieve Cove backup data.
# Placeholder values - replace with the real token and base URL once confirmed.
token = "YOUR_TOKEN"
base_url = "https://api.example.com"

headers = {
    "Authorization": f"Bearer {token}",
    "Content-Type": "application/json",
}

# Get list of customers
response = requests.get(f"{base_url}/api/customers", headers=headers, timeout=30)
response.raise_for_status()  # fail loudly on 4xx/5xx
print(response.json())
```
**5. Plan Integration**
Based on the POC results, decide on an architecture approach and start implementation
**Status:** Ready for API testing - token available!
---
## 📝 Notes
- This TODO document should be updated after each research step
- Add API examples as soon as available
- Document edge cases and limitations
- Consider security implications (API key storage, rate limits, etc.)
### Current Status (2026-02-23) 🎉 BREAKTHROUGH
- ✅ **Confirmed:** Cove Data Protection HAS API access (mentioned in documentation)
- ✅ **Found:** API user creation location in Cove portal
- ✅ **Created:** API user with SuperUser role and token
- ✅ **Found:** Complete JSON API documentation (N-able docs site)
- ✅ **Tested:** API authentication and multiple methods (with ChatGPT assistance)
- ✅ **BLOCKER RESOLVED:** N-able support (Andrew Robinson) confirmed D02/D03 are legacy!
- Use D10/D11 instead of D02/D03
- No MSP-level restrictions; all users have the same access
- New docs: https://developer.n-able.com/n-able-cove/docs/
- ✅ **New column codes identified:**
- D9F00 = Last Session Status (2=Failed, 5=Completed, 8=CompletedWithErrors)
- D9F09 = Last Successful Session Timestamp
- D9F15 = Last Session Timestamp
- D9F06 = Error Count
- 🔄 **Next step:** Run `cove_api_test.py` to verify new column codes work
- 📋 **After test:** Implement full integration in Backupchecks website
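Assuming the D9F00 codes verify, translating the raw session status into the success/warning/failed model used elsewhere in this document is a small lookup. A sketch (the 2/5/8 values come from the N-able support findings above; mapping CompletedWithErrors to "warning" is our assumption, and any other value is treated as unknown until tested):

```python
# Map Cove D9F00 (Last Session Status) values to Backupchecks statuses.
# 2/5/8 were named by N-able support; everything else is "unknown"
# until verified with cove_api_test.py.
COVE_SESSION_STATUS = {
    2: "failed",
    5: "success",
    8: "warning",  # CompletedWithErrors (assumption: treat as warning)
}

def map_session_status(raw) -> str:
    """Translate a raw D9F00 value (int or numeric string) to a status label."""
    try:
        code = int(raw)
    except (TypeError, ValueError):
        return "unknown"
    return COVE_SESSION_STATUS.get(code, "unknown")
```

The API may return the value as a string, so the helper coerces before the lookup.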
### Test Results Summary (see docs/cove_data_protection_api_calls_known_info.md)
- **Endpoint:** https://api.backup.management/jsonapi (JSON-RPC 2.0)
- **Authentication:** Login method → visa token → include in all subsequent calls
- **Working method:** EnumerateAccountStatistics (with limited columns)
- **Blocked method:** EnumerateAccounts (security error 13501)
- **Safe columns:** I1, I14, I18, D01F00-D01F07, D09F00
- **Restricted columns:** D02Fxx, D03Fxx ranges (cause entire request to fail)
- **Scope limitation:** API users are customer-scoped, not MSP-level
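The confirmed request flow (Login, then the visa token on every subsequent call) can be sketched with the standard library only. The builders below encode what testing confirmed; the exact query shape for EnumerateAccountStatistics (PartnerId, StartRecordNumber, RecordsCount, Columns) and the top-level placement of the visa field are assumptions to verify against cove_api_test.py and the developer docs:

```python
import json
import urllib.request

API_URL = "https://api.backup.management/jsonapi"
SAFE_COLUMNS = ["I1", "I14", "I18", "D01F00", "D09F00"]  # subset of the confirmed-safe set

def build_login_payload(username: str, password: str) -> dict:
    """JSON-RPC Login request. Per testing, only username + password are needed."""
    return {
        "jsonrpc": "2.0",
        "id": "1",
        "method": "Login",
        "params": {"username": username, "password": password},
    }

def build_statistics_payload(visa: str, partner_id: int) -> dict:
    """EnumerateAccountStatistics request carrying the visa token."""
    return {
        "jsonrpc": "2.0",
        "id": "2",
        "visa": visa,  # assumption: visa rides at the top level of each call
        "method": "EnumerateAccountStatistics",
        "params": {
            "query": {  # assumption: verify this shape against the docs
                "PartnerId": partner_id,
                "StartRecordNumber": 0,
                "RecordsCount": 100,
                "Columns": SAFE_COLUMNS,
            }
        },
    }

def call(payload: dict) -> dict:
    """POST a JSON-RPC payload to the endpoint and return the decoded response."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        data = json.loads(resp.read().decode("utf-8"))
    if "error" in data:
        raise RuntimeError(f"Cove API error: {data['error']}")
    return data
```

Keeping payload construction separate from the network call makes the shapes easy to inspect before a single request is sent.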
### API Credentials (Created)
- **Authentication:** Token-based
- **Role:** SuperUser (full access)
- **Scope:** Top-level customer (access to all sub-customers)
- **Token:** Generated (store securely!)
- **Portal URL:** https://backup.management
- **API User Management:** https://backup.management/#/api-users
**IMPORTANT:** Store token in secure location (password manager) - cannot be retrieved again if lost!
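One simple way to honor that rule in the eventual POC script is to keep the token out of the source entirely and read it from the environment (the variable name `COVE_API_TOKEN` is only a suggestion):

```python
import os

def get_cove_token() -> str:
    """Read the Cove API token from the environment instead of hardcoding it."""
    token = os.environ.get("COVE_API_TOKEN", "").strip()
    if not token:
        raise RuntimeError(
            "COVE_API_TOKEN is not set; export it from your password manager first."
        )
    return token
```

Failing loudly when the variable is missing beats silently sending an empty token.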
### Likely API Base URLs to Test
Based on portal URL `backup.management`:
1. `https://api.backup.management` (most common pattern)
2. `https://backup.management/api`
3. `https://api.backup.management/jsonapi` (some backup systems use this)
4. Check API user page for hints or documentation links
### Possible Admin Portal Locations
Check these sections in Cove dashboard:
- Settings → API Keys / Developer
- Settings → Integrations
- Account → API Access
- Partner Portal → API Management
- Company Settings → Advanced → API
### Support Channels
If API activation is not obvious:
- Cove support ticket: Ask "How do I enable API access for backup monitoring?"
- N-able partner support (if MSP)
- Check Cove community forums
- Review onboarding documentation for API mentions

View File

@@ -26,5 +26,6 @@ from . import routes_feedback  # noqa: F401
 from . import routes_api  # noqa: F401
 from . import routes_reporting_api  # noqa: F401
 from . import routes_user_settings  # noqa: F401
+from . import routes_search  # noqa: F401
 __all__ = ["main_bp", "roles_required"]

View File

@@ -16,9 +16,11 @@ def api_job_run_alerts(run_id: int):
     tickets = []
     remarks = []
-    # Tickets linked to this specific run
-    # Only show tickets that were explicitly linked via ticket_job_runs
+    # Tickets linked to this run:
+    # 1. Explicitly linked via ticket_job_runs (audit trail when resolved)
+    # 2. Linked to the job via ticket_scopes (active on run date)
     try:
+        # First, get tickets explicitly linked to this run via ticket_job_runs
         rows = (
             db.session.execute(
                 text(
@@ -43,7 +45,11 @@ def api_job_run_alerts(run_id: int):
             .all()
         )
+        ticket_ids_seen = set()
         for r in rows:
+            ticket_id = int(r.get("id"))
+            ticket_ids_seen.add(ticket_id)
             resolved_at = r.get("resolved_at")
             resolved_same_day = False
             if resolved_at and run_date:
@@ -52,7 +58,62 @@ def api_job_run_alerts(run_id: int):
             tickets.append(
                 {
-                    "id": int(r.get("id")),
+                    "id": ticket_id,
+                    "ticket_code": r.get("ticket_code") or "",
+                    "description": r.get("description") or "",
+                    "start_date": _format_datetime(r.get("start_date")),
+                    "active_from_date": str(r.get("active_from_date")) if r.get("active_from_date") else "",
+                    "resolved_at": _format_datetime(r.get("resolved_at")) if r.get("resolved_at") else "",
+                    "active": bool(active_now),
+                    "resolved_same_day": bool(resolved_same_day),
+                }
+            )
+        # Second, get tickets linked to the job via ticket_scopes
+        # These are tickets that apply to the whole job (not just a specific run)
+        rows = (
+            db.session.execute(
+                text(
+                    """
+                    SELECT DISTINCT t.id,
+                           t.ticket_code,
+                           t.description,
+                           t.start_date,
+                           t.resolved_at,
+                           t.active_from_date
+                    FROM tickets t
+                    JOIN ticket_scopes ts ON ts.ticket_id = t.id
+                    WHERE ts.job_id = :job_id
+                      AND t.active_from_date <= :run_date
+                      AND COALESCE(ts.resolved_at, t.resolved_at) IS NULL
+                    ORDER BY t.start_date DESC
+                    """
+                ),
+                {
+                    "job_id": job.id if job else 0,
+                    "run_date": run_date,
+                },
+            )
+            .mappings()
+            .all()
+        )
+        for r in rows:
+            ticket_id = int(r.get("id"))
+            # Skip if already added via ticket_job_runs
+            if ticket_id in ticket_ids_seen:
+                continue
+            ticket_ids_seen.add(ticket_id)
+            resolved_at = r.get("resolved_at")
+            resolved_same_day = False
+            if resolved_at and run_date:
+                resolved_same_day = _to_amsterdam_date(resolved_at) == run_date
+            active_now = r.get("resolved_at") is None
+            tickets.append(
+                {
+                    "id": ticket_id,
                     "ticket_code": r.get("ticket_code") or "",
                     "description": r.get("description") or "",
                     "start_date": _format_datetime(r.get("start_date")),
@@ -65,9 +126,13 @@ def api_job_run_alerts(run_id: int):
     except Exception as exc:
         return jsonify({"status": "error", "message": str(exc) or "Failed to load tickets."}), 500
-    # Remarks linked to this specific run
-    # Only show remarks that were explicitly linked via remark_job_runs
+    # Remarks linked to this run:
+    # 1. Explicitly linked via remark_job_runs (audit trail when resolved)
+    # 2. Linked to the job via remark_scopes (active on run date)
     try:
+        remark_ids_seen = set()
+        # First, remarks explicitly linked to this run.
         rows = (
             db.session.execute(
                 text(
@@ -88,6 +153,9 @@ def api_job_run_alerts(run_id: int):
         )
         for rr in rows:
+            remark_id = int(rr.get("id"))
+            remark_ids_seen.add(remark_id)
             body = (rr.get("body") or "").strip()
             if len(body) > 180:
                 body = body[:177] + "..."
@@ -101,7 +169,64 @@ def api_job_run_alerts(run_id: int):
             remarks.append(
                 {
-                    "id": int(rr.get("id")),
+                    "id": remark_id,
+                    "body": body,
+                    "start_date": _format_datetime(rr.get("start_date")) if rr.get("start_date") else "-",
+                    "active_from_date": str(rr.get("active_from_date")) if rr.get("active_from_date") else "",
+                    "resolved_at": _format_datetime(rr.get("resolved_at")) if rr.get("resolved_at") else "",
+                    "active": bool(active_now),
+                    "resolved_same_day": bool(resolved_same_day),
+                }
+            )
+        # Second, active job-level remarks from scope (not yet explicitly linked to this run).
+        ui_tz = _get_ui_timezone_name()
+        rows = (
+            db.session.execute(
+                text(
+                    """
+                    SELECT DISTINCT r.id, r.body, r.start_date, r.resolved_at, r.active_from_date
+                    FROM remarks r
+                    JOIN remark_scopes rs ON rs.remark_id = r.id
+                    WHERE rs.job_id = :job_id
+                      AND COALESCE(
+                              r.active_from_date,
+                              ((r.start_date AT TIME ZONE 'UTC' AT TIME ZONE :ui_tz)::date)
+                          ) <= :run_date
+                      AND r.resolved_at IS NULL
+                    ORDER BY r.start_date DESC
+                    """
+                ),
+                {
+                    "job_id": job.id if job else 0,
+                    "run_date": run_date,
+                    "ui_tz": ui_tz,
+                },
+            )
+            .mappings()
+            .all()
+        )
+        for rr in rows:
+            remark_id = int(rr.get("id"))
+            if remark_id in remark_ids_seen:
+                continue
+            remark_ids_seen.add(remark_id)
+            body = (rr.get("body") or "").strip()
+            if len(body) > 180:
+                body = body[:177] + "..."
+            resolved_at = rr.get("resolved_at")
+            resolved_same_day = False
+            if resolved_at and run_date:
+                resolved_same_day = _to_amsterdam_date(resolved_at) == run_date
+            active_now = resolved_at is None or (not resolved_same_day)
+            remarks.append(
+                {
+                    "id": remark_id,
                     "body": body,
                     "start_date": _format_datetime(rr.get("start_date")) if rr.get("start_date") else "-",
                     "active_from_date": str(rr.get("active_from_date")) if rr.get("active_from_date") else "",

View File

@@ -63,7 +63,27 @@ def _get_or_create_settings_local():
 @login_required
 @roles_required("admin", "operator", "viewer")
 def customers():
-    items = Customer.query.order_by(Customer.name.asc()).all()
+    q = (request.args.get("q") or "").strip()
+
+    def _patterns(raw: str) -> list[str]:
+        out = []
+        for tok in [t.strip() for t in (raw or "").split() if t.strip()]:
+            p = tok.replace("\\", "\\\\")
+            p = p.replace("%", "\\%").replace("_", "\\_")
+            p = p.replace("*", "%")
+            if not p.startswith("%"):
+                p = "%" + p
+            if not p.endswith("%"):
+                p = p + "%"
+            out.append(p)
+        return out
+
+    query = Customer.query
+    if q:
+        for pat in _patterns(q):
+            query = query.filter(func.coalesce(Customer.name, "").ilike(pat, escape="\\"))
+    items = query.order_by(Customer.name.asc()).all()
     settings = _get_or_create_settings_local()
     autotask_enabled = bool(getattr(settings, "autotask_enabled", False))
@@ -105,6 +125,7 @@ def customers():
         can_manage=can_manage,
         autotask_enabled=autotask_enabled,
         autotask_configured=autotask_configured,
+        q=q,
     )
@@ -484,6 +505,7 @@ def customers_export():
 @roles_required("admin", "operator")
 def customers_import():
     file = request.files.get("file")
+    include_autotask_ids = bool(request.form.get("include_autotask_ids"))
     if not file or not getattr(file, "filename", ""):
         flash("No file selected.", "warning")
         return redirect(url_for("main.customers"))
@@ -520,6 +542,7 @@ def customers_import():
     # Detect Autotask columns (backwards compatible - these are optional)
     autotask_id_idx = None
     autotask_name_idx = None
+    if include_autotask_ids:
         if "autotask_company_id" in header:
             autotask_id_idx = header.index("autotask_company_id")
         if "autotask_company_name" in header:
@@ -561,7 +584,7 @@ def customers_import():
                 if active_val is not None:
                     existing.active = active_val
                 # Update Autotask mapping if provided in CSV
-                if autotask_company_id is not None:
+                if include_autotask_ids and autotask_company_id is not None:
                     existing.autotask_company_id = autotask_company_id
                     existing.autotask_company_name = autotask_company_name
                     existing.autotask_mapping_status = None  # Will be resynced
@@ -579,7 +602,10 @@ def customers_import():
     try:
         db.session.commit()
-        flash(f"Import finished. Created: {created}, Updated: {updated}, Skipped: {skipped}.", "success")
+        flash(
+            f"Import finished. Created: {created}, Updated: {updated}, Skipped: {skipped}. Autotask IDs imported: {'yes' if include_autotask_ids else 'no'}.",
+            "success",
+        )
         # Audit logging
         import json
@@ -588,6 +614,7 @@ def customers_import():
             f"Imported customers from CSV",
             details=json.dumps({
                 "format": "CSV",
+                "include_autotask_ids": include_autotask_ids,
                 "created": created,
                 "updated": updated,
                 "skipped": skipped
@@ -599,5 +626,3 @@ def customers_import():
         flash("Failed to import customers.", "danger")
     return redirect(url_for("main.customers"))

View File

@@ -9,6 +9,21 @@ MISSED_GRACE_WINDOW = timedelta(hours=1)
 @login_required
 @roles_required("admin", "operator", "viewer")
 def daily_jobs():
+    q = (request.args.get("q") or "").strip()
+
+    def _patterns(raw: str) -> list[str]:
+        out = []
+        for tok in [t.strip() for t in (raw or "").split() if t.strip()]:
+            p = tok.replace("\\", "\\\\")
+            p = p.replace("%", "\\%").replace("_", "\\_")
+            p = p.replace("*", "%")
+            if not p.startswith("%"):
+                p = "%" + p
+            if not p.endswith("%"):
+                p = p + "%"
+            out.append(p)
+        return out
+
     # Determine target date (default: today) in Europe/Amsterdam
     date_str = request.args.get("date")
     try:
@@ -74,10 +89,21 @@ def daily_jobs():
     weekday_idx = target_date.weekday()  # 0=Mon..6=Sun
-    jobs = (
+    jobs_query = (
         Job.query.join(Customer, isouter=True)
         .filter(Job.archived.is_(False))
         .filter(db.or_(Customer.id.is_(None), Customer.active.is_(True)))
+    )
+    if q:
+        for pat in _patterns(q):
+            jobs_query = jobs_query.filter(
+                (func.coalesce(Customer.name, "").ilike(pat, escape="\\"))
+                | (func.coalesce(Job.backup_software, "").ilike(pat, escape="\\"))
+                | (func.coalesce(Job.backup_type, "").ilike(pat, escape="\\"))
+                | (func.coalesce(Job.job_name, "").ilike(pat, escape="\\"))
+            )
+    jobs = (
+        jobs_query
         .order_by(Customer.name.asc().nullslast(), Job.backup_software.asc(), Job.backup_type.asc(), Job.job_name.asc())
         .all()
     )
@@ -306,7 +332,7 @@ def daily_jobs():
     )
     target_date_str = target_date.strftime("%Y-%m-%d")
-    return render_template("main/daily_jobs.html", rows=rows, target_date_str=target_date_str)
+    return render_template("main/daily_jobs.html", rows=rows, target_date_str=target_date_str, q=q)
 
 
 @main_bp.route("/daily-jobs/details")

View File

@@ -1,5 +1,53 @@
 from .routes_shared import *  # noqa: F401,F403
 from .routes_shared import _format_datetime
+from werkzeug.utils import secure_filename
+import imghdr
+
+# Allowed image extensions and max file size
+ALLOWED_EXTENSIONS = {'png', 'jpg', 'jpeg', 'gif', 'webp'}
+MAX_FILE_SIZE = 5 * 1024 * 1024  # 5 MB
+
+
+def _validate_image_file(file):
+    """Validate uploaded image file.
+
+    Returns (is_valid, error_message, mime_type)
+    """
+    if not file or not file.filename:
+        return False, "No file selected", None
+    # Check file size
+    file.seek(0, 2)  # Seek to end
+    size = file.tell()
+    file.seek(0)  # Reset to beginning
+    if size > MAX_FILE_SIZE:
+        return False, f"File too large (max {MAX_FILE_SIZE // (1024*1024)}MB)", None
+    if size == 0:
+        return False, "Empty file", None
+    # Check extension
+    filename = secure_filename(file.filename)
+    if '.' not in filename:
+        return False, "File must have an extension", None
+    ext = filename.rsplit('.', 1)[1].lower()
+    if ext not in ALLOWED_EXTENSIONS:
+        return False, f"Only images allowed ({', '.join(ALLOWED_EXTENSIONS)})", None
+    # Verify it's actually an image by reading header
+    file_data = file.read()
+    file.seek(0)
+    image_type = imghdr.what(None, h=file_data)
+    if image_type is None:
+        return False, "Invalid image file", None
+    mime_type = f"image/{image_type}"
+    return True, None, mime_type
+
+
 @main_bp.route("/feedback")
@ -21,7 +69,14 @@ def feedback_page():
if sort not in ("votes", "newest", "updated"): if sort not in ("votes", "newest", "updated"):
sort = "votes" sort = "votes"
where = ["fi.deleted_at IS NULL"] # Admin-only: show deleted items
show_deleted = False
if get_active_role() == "admin":
show_deleted = request.args.get("show_deleted", "0") in ("1", "true", "yes", "on")
where = []
if not show_deleted:
where.append("fi.deleted_at IS NULL")
params = {"user_id": int(current_user.id)}
if item_type:
@ -58,6 +113,8 @@ def feedback_page():
fi.status,
fi.created_at,
fi.updated_at,
fi.deleted_at,
fi.deleted_by_user_id,
u.username AS created_by,
COALESCE(v.vote_count, 0) AS vote_count,
EXISTS (
@ -95,6 +152,8 @@ def feedback_page():
"created_by": r["created_by"] or "-",
"vote_count": int(r["vote_count"] or 0),
"user_voted": bool(r["user_voted"]),
"is_deleted": bool(r["deleted_at"]),
"deleted_at": _format_datetime(r["deleted_at"]) if r["deleted_at"] else "",
}
)
@ -105,6 +164,7 @@ def feedback_page():
status=status,
q=q,
sort=sort,
show_deleted=show_deleted,
)
@ -135,6 +195,31 @@ def feedback_new():
created_by_user_id=int(current_user.id),
)
db.session.add(item)
db.session.flush() # Get item.id for attachments
# Handle file uploads (multiple files allowed)
files = request.files.getlist('screenshots')
for file in files:
if file and file.filename:
is_valid, error_msg, mime_type = _validate_image_file(file)
if not is_valid:
db.session.rollback()
flash(f"Screenshot error: {error_msg}", "danger")
return redirect(url_for("main.feedback_new"))
filename = secure_filename(file.filename)
file_data = file.read()
attachment = FeedbackAttachment(
feedback_item_id=item.id,
feedback_reply_id=None,
filename=filename,
file_data=file_data,
mime_type=mime_type,
file_size=len(file_data),
)
db.session.add(attachment)
db.session.commit()
flash("Feedback item created.", "success")
@ -148,7 +233,8 @@ def feedback_new():
@roles_required("admin", "operator", "reporter", "viewer")
def feedback_detail(item_id: int):
item = FeedbackItem.query.get_or_404(item_id)
# Allow admins to view deleted items
if item.deleted_at is not None and get_active_role() != "admin":
abort(404)
vote_count = (
@ -174,6 +260,15 @@ def feedback_detail(item_id: int):
resolved_by = User.query.get(item.resolved_by_user_id)
resolved_by_name = resolved_by.username if resolved_by else ""
# Get attachments for the main item (not linked to a reply)
item_attachments = (
FeedbackAttachment.query.filter(
FeedbackAttachment.feedback_item_id == item.id,
FeedbackAttachment.feedback_reply_id.is_(None),
)
.order_by(FeedbackAttachment.created_at.asc())
.all()
)
replies = (
FeedbackReply.query.filter(FeedbackReply.feedback_item_id == item.id)
@ -181,6 +276,25 @@ def feedback_detail(item_id: int):
.all()
)
# Get attachments for each reply
reply_ids = [r.id for r in replies]
reply_attachments_list = []
if reply_ids:
reply_attachments_list = (
FeedbackAttachment.query.filter(
FeedbackAttachment.feedback_reply_id.in_(reply_ids)
)
.order_by(FeedbackAttachment.created_at.asc())
.all()
)
# Map reply_id -> list of attachments
reply_attachments_map = {}
for att in reply_attachments_list:
if att.feedback_reply_id not in reply_attachments_map:
reply_attachments_map[att.feedback_reply_id] = []
reply_attachments_map[att.feedback_reply_id].append(att)
reply_user_ids = sorted({int(r.user_id) for r in replies})
reply_users = (
User.query.filter(User.id.in_(reply_user_ids)).all() if reply_user_ids else []
@ -196,6 +310,8 @@ def feedback_detail(item_id: int):
user_voted=bool(user_voted),
replies=replies,
reply_user_map=reply_user_map,
item_attachments=item_attachments,
reply_attachments_map=reply_attachments_map,
)
@main_bp.route("/feedback/<int:item_id>/reply", methods=["POST"])
@ -222,6 +338,31 @@ def feedback_reply(item_id: int):
created_at=datetime.utcnow(),
)
db.session.add(reply)
db.session.flush() # Get reply.id for attachments
# Handle file uploads (multiple files allowed)
files = request.files.getlist('screenshots')
for file in files:
if file and file.filename:
is_valid, error_msg, mime_type = _validate_image_file(file)
if not is_valid:
db.session.rollback()
flash(f"Screenshot error: {error_msg}", "danger")
return redirect(url_for("main.feedback_detail", item_id=item.id))
filename = secure_filename(file.filename)
file_data = file.read()
attachment = FeedbackAttachment(
feedback_item_id=item.id,
feedback_reply_id=reply.id,
filename=filename,
file_data=file_data,
mime_type=mime_type,
file_size=len(file_data),
)
db.session.add(attachment)
db.session.commit()
flash("Reply added.", "success")
@ -308,3 +449,60 @@ def feedback_delete(item_id: int):
flash("Feedback item deleted.", "success")
return redirect(url_for("main.feedback_page"))
@main_bp.route("/feedback/<int:item_id>/permanent-delete", methods=["POST"])
@login_required
@roles_required("admin")
def feedback_permanent_delete(item_id: int):
"""Permanently delete a feedback item and all its attachments from the database.
This is a hard delete - the item and all associated data will be removed permanently.
Only available for items that are already soft-deleted.
"""
item = FeedbackItem.query.get_or_404(item_id)
# Only allow permanent delete on already soft-deleted items
if item.deleted_at is None:
flash("Item must be deleted first before permanent deletion.", "warning")
return redirect(url_for("main.feedback_detail", item_id=item.id))
# Get attachment count for feedback message
attachment_count = FeedbackAttachment.query.filter_by(feedback_item_id=item.id).count()
# Hard delete - CASCADE will automatically delete:
# - feedback_votes
# - feedback_replies
# - feedback_attachments (via replies CASCADE)
# - feedback_attachments (direct, via item CASCADE)
db.session.delete(item)
db.session.commit()
flash(f"Feedback item permanently deleted ({attachment_count} screenshot(s) removed).", "success")
return redirect(url_for("main.feedback_page", show_deleted="1"))
@main_bp.route("/feedback/attachment/<int:attachment_id>")
@login_required
@roles_required("admin", "operator", "reporter", "viewer")
def feedback_attachment(attachment_id: int):
"""Serve a feedback attachment image."""
attachment = FeedbackAttachment.query.get_or_404(attachment_id)
# Check if the feedback item is deleted - allow admins to view
item = FeedbackItem.query.get(attachment.feedback_item_id)
if not item:
abort(404)
if item.deleted_at is not None and get_active_role() != "admin":
abort(404)
# Serve the image
from flask import send_file
import io
return send_file(
io.BytesIO(attachment.file_data),
mimetype=attachment.mime_type,
as_attachment=False,
download_name=attachment.filename,
)

View File

@ -9,12 +9,28 @@ from ..ticketing_utils import link_open_internal_tickets_to_run
import time
import re
import html as _html
from sqlalchemy import cast, String
@main_bp.route("/inbox")
@login_required
@roles_required("admin", "operator", "viewer")
def inbox():
q = (request.args.get("q") or "").strip()
def _patterns(raw: str) -> list[str]:
out = []
for tok in [t.strip() for t in (raw or "").split() if t.strip()]:
p = tok.replace("\\", "\\\\")
p = p.replace("%", "\\%").replace("_", "\\_")
p = p.replace("*", "%")
if not p.startswith("%"):
p = "%" + p
if not p.endswith("%"):
p = p + "%"
out.append(p)
return out
try:
page = int(request.args.get("page", "1"))
except ValueError:
@ -28,6 +44,18 @@ def inbox():
# Use location column if available; otherwise just return all
if hasattr(MailMessage, "location"):
query = query.filter(MailMessage.location == "inbox")
if q:
for pat in _patterns(q):
query = query.filter(
(func.coalesce(MailMessage.from_address, "").ilike(pat, escape="\\"))
| (func.coalesce(MailMessage.subject, "").ilike(pat, escape="\\"))
| (cast(MailMessage.received_at, String).ilike(pat, escape="\\"))
| (func.coalesce(MailMessage.backup_software, "").ilike(pat, escape="\\"))
| (func.coalesce(MailMessage.backup_type, "").ilike(pat, escape="\\"))
| (func.coalesce(MailMessage.job_name, "").ilike(pat, escape="\\"))
| (func.coalesce(MailMessage.parse_result, "").ilike(pat, escape="\\"))
| (cast(MailMessage.parsed_at, String).ilike(pat, escape="\\"))
)
total_items = query.count()
total_pages = max(1, math.ceil(total_items / per_page)) if total_items else 1
@ -79,6 +107,7 @@ def inbox():
customers=customer_rows,
can_bulk_delete=(get_active_role() in ("admin", "operator")),
is_admin=(get_active_role() == "admin"),
q=q,
)

View File

@ -13,12 +13,56 @@ from .routes_shared import (
@login_required
@roles_required("admin", "operator", "viewer")
def jobs():
selected_customer_id = None
selected_customer_name = ""
q = (request.args.get("q") or "").strip()
customer_id_raw = (request.args.get("customer_id") or "").strip()
if customer_id_raw:
try:
selected_customer_id = int(customer_id_raw)
except ValueError:
selected_customer_id = None
def _patterns(raw: str) -> list[str]:
out = []
for tok in [t.strip() for t in (raw or "").split() if t.strip()]:
p = tok.replace("\\", "\\\\")
p = p.replace("%", "\\%").replace("_", "\\_")
p = p.replace("*", "%")
if not p.startswith("%"):
p = "%" + p
if not p.endswith("%"):
p = p + "%"
out.append(p)
return out
base_query = (
Job.query
.filter(Job.archived.is_(False))
.outerjoin(Customer, Customer.id == Job.customer_id)
)
if selected_customer_id is not None:
base_query = base_query.filter(Job.customer_id == selected_customer_id)
selected_customer = Customer.query.filter(Customer.id == selected_customer_id).first()
if selected_customer is not None:
selected_customer_name = selected_customer.name or ""
else:
# Default listing hides jobs for inactive customers.
base_query = base_query.filter(db.or_(Customer.id.is_(None), Customer.active.is_(True)))
if q:
for pat in _patterns(q):
base_query = base_query.filter(
(func.coalesce(Customer.name, "").ilike(pat, escape="\\"))
| (func.coalesce(Job.backup_software, "").ilike(pat, escape="\\"))
| (func.coalesce(Job.backup_type, "").ilike(pat, escape="\\"))
| (func.coalesce(Job.job_name, "").ilike(pat, escape="\\"))
)
# Join with customers for display
jobs = (
base_query
.add_columns( .add_columns(
Job.id,
Job.backup_software,
@ -54,6 +98,9 @@ def jobs():
"main/jobs.html",
jobs=rows,
can_manage_jobs=can_manage_jobs,
selected_customer_id=selected_customer_id,
selected_customer_name=selected_customer_name,
q=q,
)

View File

@ -11,6 +11,16 @@ _OVERRIDE_DEFAULT_START_AT = datetime(1970, 1, 1)
def overrides():
can_manage = get_active_role() in ("admin", "operator")
can_delete = get_active_role() == "admin"
q = (request.args.get("q") or "").strip()
def _match_query(text: str, raw_query: str) -> bool:
hay = (text or "").lower()
tokens = [t.strip() for t in (raw_query or "").split() if t.strip()]
for tok in tokens:
needle = tok.lower().replace("*", "")
if needle and needle not in hay:
return False
return True
overrides_q = Override.query.order_by(Override.level.asc(), Override.start_at.desc()).all()
@ -92,16 +102,31 @@ def overrides():
rows = []
for ov in overrides_q:
scope_text = _describe_scope(ov)
start_text = _format_datetime(ov.start_at)
end_text = _format_datetime(ov.end_at) if ov.end_at else ""
comment_text = ov.comment or ""
if q:
full_text = " | ".join([
ov.level or "",
scope_text,
start_text,
end_text,
comment_text,
])
if not _match_query(full_text, q):
continue
rows.append(
{
"id": ov.id,
"level": ov.level or "",
"scope": scope_text,
"start_at": start_text,
"end_at": end_text,
"active": bool(ov.active),
"treat_as_success": bool(ov.treat_as_success),
"comment": comment_text,
"match_status": ov.match_status or "",
"match_error_contains": ov.match_error_contains or "",
"match_error_mode": getattr(ov, "match_error_mode", None) or "",
@ -122,6 +147,7 @@ def overrides():
jobs_for_select=jobs_for_select,
backup_software_options=backup_software_options,
backup_type_options=backup_type_options,
q=q,
)
@ -398,4 +424,3 @@ def overrides_toggle(override_id: int):
flash("Override status updated.", "success")
return redirect(url_for("main.overrides"))

View File

@ -1,6 +1,6 @@
from .routes_shared import * # noqa: F401,F403
from sqlalchemy import text, cast, String
import json
import csv
import io
@ -101,12 +101,33 @@ def api_reports_list():
if err is not None:
return err
q = (request.args.get("q") or "").strip()
def _patterns(raw: str) -> list[str]:
out = []
for tok in [t.strip() for t in (raw or "").split() if t.strip()]:
p = tok.replace("\\", "\\\\")
p = p.replace("%", "\\%").replace("_", "\\_")
p = p.replace("*", "%")
if not p.startswith("%"):
p = "%" + p
if not p.endswith("%"):
p = p + "%"
out.append(p)
return out
query = db.session.query(ReportDefinition)
if q:
for pat in _patterns(q):
query = query.filter(
(func.coalesce(ReportDefinition.name, "").ilike(pat, escape="\\"))
| (func.coalesce(ReportDefinition.report_type, "").ilike(pat, escape="\\"))
| (func.coalesce(ReportDefinition.output_format, "").ilike(pat, escape="\\"))
| (cast(ReportDefinition.period_start, String).ilike(pat, escape="\\"))
| (cast(ReportDefinition.period_end, String).ilike(pat, escape="\\"))
)
rows = query.order_by(ReportDefinition.created_at.desc()).limit(200).all()
return {
"items": [
{

View File

@ -1,6 +1,7 @@
from .routes_shared import * # noqa: F401,F403
from datetime import date, timedelta
from .routes_reporting_api import build_report_columns_meta, build_report_job_filters_meta
from sqlalchemy import cast, String
def get_default_report_period():
"""Return default report period (last 7 days)."""
@ -52,13 +53,33 @@ def _build_report_item(r):
@main_bp.route("/reports")
@login_required
def reports():
q = (request.args.get("q") or "").strip()
def _patterns(raw: str) -> list[str]:
out = []
for tok in [t.strip() for t in (raw or "").split() if t.strip()]:
p = tok.replace("\\", "\\\\")
p = p.replace("%", "\\%").replace("_", "\\_")
p = p.replace("*", "%")
if not p.startswith("%"):
p = "%" + p
if not p.endswith("%"):
p = p + "%"
out.append(p)
return out
# Pre-render items so the page is usable even if JS fails to load/execute.
query = db.session.query(ReportDefinition)
if q:
for pat in _patterns(q):
query = query.filter(
(func.coalesce(ReportDefinition.name, "").ilike(pat, escape="\\"))
| (func.coalesce(ReportDefinition.report_type, "").ilike(pat, escape="\\"))
| (func.coalesce(ReportDefinition.output_format, "").ilike(pat, escape="\\"))
| (cast(ReportDefinition.period_start, String).ilike(pat, escape="\\"))
| (cast(ReportDefinition.period_end, String).ilike(pat, escape="\\"))
)
rows = query.order_by(ReportDefinition.created_at.desc()).limit(200).all()
items = [_build_report_item(r) for r in rows]
period_start, period_end = get_default_report_period()
@ -70,6 +91,7 @@ def reports():
job_filters_meta=build_report_job_filters_meta(),
default_period_start=period_start.isoformat(),
default_period_end=period_end.isoformat(),
q=q,
)

View File

@ -38,11 +38,19 @@ from ..models import (
TicketScope,
User,
)
from ..ticketing_utils import link_open_internal_tickets_to_run
AUTOTASK_TERMINAL_STATUS_IDS = {5}
def _is_hidden_3cx_non_backup(backup_software: str | None, backup_type: str | None) -> bool:
"""Hide non-backup 3CX informational jobs from Run Checks."""
bs = (backup_software or "").strip().lower()
bt = (backup_type or "").strip().lower()
return bs == "3cx" and bt in {"update", "ssl certificate"}
def _ensure_internal_ticket_for_autotask(
*,
ticket_number: str,
@ -725,6 +733,8 @@ def _ensure_missed_runs_for_job(job: Job, start_from: date, end_inclusive: date)
mail_message_id=None,
)
db.session.add(miss)
db.session.flush() # Ensure miss.id is available for ticket linking
link_open_internal_tickets_to_run(run=miss, job=job)
inserted += 1
d = d + timedelta(days=1)
@ -806,6 +816,8 @@ def _ensure_missed_runs_for_job(job: Job, start_from: date, end_inclusive: date)
mail_message_id=None,
)
db.session.add(miss)
db.session.flush() # Ensure miss.id is available for ticket linking
link_open_internal_tickets_to_run(run=miss, job=job)
inserted += 1
# Next month
@ -825,6 +837,21 @@ def _ensure_missed_runs_for_job(job: Job, start_from: date, end_inclusive: date)
def run_checks_page():
"""Run Checks page: list jobs that have runs to review (including generated missed runs)."""
q = (request.args.get("q") or "").strip()
def _patterns(raw: str) -> list[str]:
out = []
for tok in [t.strip() for t in (raw or "").split() if t.strip()]:
p = tok.replace("\\", "\\\\")
p = p.replace("%", "\\%").replace("_", "\\_")
p = p.replace("*", "%")
if not p.startswith("%"):
p = "%" + p
if not p.endswith("%"):
p = p + "%"
out.append(p)
return out
include_reviewed = False
if get_active_role() == "admin":
include_reviewed = request.args.get("include_reviewed", "0") in ("1", "true", "yes", "on")
@ -850,6 +877,8 @@ def run_checks_page():
today_local = _to_amsterdam_date(datetime.utcnow()) or datetime.utcnow().date()
for job in jobs:
if _is_hidden_3cx_non_backup(getattr(job, "backup_software", None), getattr(job, "backup_type", None)):
continue
last_rev = last_reviewed_map.get(int(job.id))
if last_rev:
start_date = _to_amsterdam_date(last_rev) or settings_start
@ -884,6 +913,14 @@ def run_checks_page():
.outerjoin(Customer, Customer.id == Job.customer_id)
.filter(Job.archived.is_(False))
)
if q:
for pat in _patterns(q):
base = base.filter(
(func.coalesce(Customer.name, "").ilike(pat, escape="\\"))
| (func.coalesce(Job.backup_software, "").ilike(pat, escape="\\"))
| (func.coalesce(Job.backup_type, "").ilike(pat, escape="\\"))
| (func.coalesce(Job.job_name, "").ilike(pat, escape="\\"))
)
# Runs to show in the overview: unreviewed (or all if admin toggle enabled)
run_filter = []
@ -956,7 +993,7 @@ def run_checks_page():
Job.id.asc(),
)
rows = [r for r in q.limit(2000).all() if not _is_hidden_3cx_non_backup(r.backup_software, r.backup_type)]
# Ensure override flags are up-to-date for the runs shown in this overview.
# The Run Checks modal computes override status on-the-fly, but the overview
@ -1131,6 +1168,7 @@ def run_checks_page():
is_admin=(get_active_role() == "admin"),
include_reviewed=include_reviewed,
autotask_enabled=autotask_enabled,
q=q,
)
@ -1151,6 +1189,15 @@ def run_checks_details():
include_reviewed = request.args.get("include_reviewed", "0") in ("1", "true", "yes", "on")
job = Job.query.get_or_404(job_id)
if _is_hidden_3cx_non_backup(getattr(job, "backup_software", None), getattr(job, "backup_type", None)):
job_payload = {
"id": job.id,
"customer_name": job.customer.name if job.customer else "",
"backup_software": job.backup_software or "",
"backup_type": job.backup_type or "",
"job_name": job.job_name or "",
}
return jsonify({"status": "ok", "job": job_payload, "runs": [], "message": "This 3CX informational type is hidden from Run Checks."})
q = JobRun.query.filter(JobRun.job_id == job.id)
if not include_reviewed:

View File

@ -0,0 +1,963 @@
from .routes_shared import * # noqa: F401,F403
from .routes_shared import (
_apply_overrides_to_run,
_format_datetime,
_get_or_create_settings,
_get_ui_timezone,
_infer_monthly_schedule_from_runs,
_infer_schedule_map_from_runs,
)
from sqlalchemy import and_, cast, func, or_, String
import math
SEARCH_LIMIT_PER_SECTION = 10
SEARCH_SECTION_KEYS = [
"inbox",
"customers",
"jobs",
"daily_jobs",
"run_checks",
"tickets",
"remarks",
"overrides",
"reports",
]
def _is_section_allowed(section: str) -> bool:
role = get_active_role()
allowed = {
"inbox": {"admin", "operator", "viewer"},
"customers": {"admin", "operator", "viewer"},
"jobs": {"admin", "operator", "viewer"},
"daily_jobs": {"admin", "operator", "viewer"},
"run_checks": {"admin", "operator"},
"tickets": {"admin", "operator", "viewer"},
"remarks": {"admin", "operator", "viewer"},
"overrides": {"admin", "operator", "viewer"},
"reports": {"admin", "operator", "viewer", "reporter"},
}
return role in allowed.get(section, set())
def _build_patterns(raw_query: str) -> list[str]:
tokens = [t.strip() for t in (raw_query or "").split() if t.strip()]
patterns: list[str] = []
for token in tokens:
p = token.replace("\\", "\\\\")
p = p.replace("%", "\\%").replace("_", "\\_")
p = p.replace("*", "%")
if not p.startswith("%"):
p = f"%{p}"
if not p.endswith("%"):
p = f"{p}%"
patterns.append(p)
return patterns
def _contains_all_terms(columns: list, patterns: list[str]):
if not patterns or not columns:
return None
term_filters = []
for pattern in patterns:
per_term = [col.ilike(pattern, escape="\\") for col in columns]
term_filters.append(or_(*per_term))
return and_(*term_filters)
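The expression built above is an AND across search terms of an OR across columns: a row matches when every term is found in at least one column. A pure-Python analogue of that semantics (hypothetical helper, plain substring matching instead of SQL `LIKE` patterns):

```python
# Pure-Python analogue (for illustration) of _contains_all_terms():
# a row matches when EVERY search term is found in AT LEAST ONE of
# its column values, case-insensitively.
def contains_all_terms(columns: list[str], terms: list[str]) -> bool:
    return all(
        any(term.lower() in (col or "").lower() for col in columns)
        for term in terms
    )

row = ["Acme B.V.", "Veeam", "Agent Backup", "Daily VM"]
print(contains_all_terms(row, ["veeam", "acme"]))  # True: each term hits a column
print(contains_all_terms(row, ["veeam", "cove"]))  # False: "cove" matches nothing
```

This is why multi-token searches narrow results: adding a term can only remove rows, never add them.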
def _parse_page(value: str | None) -> int:
try:
page = int((value or "").strip())
except Exception:
page = 1
return page if page > 0 else 1
def _paginate_query(query, page: int, order_by_cols: list):
total = query.count()
total_pages = max(1, math.ceil(total / SEARCH_LIMIT_PER_SECTION)) if total else 1
current_page = min(max(page, 1), total_pages)
rows = (
query.order_by(*order_by_cols)
.offset((current_page - 1) * SEARCH_LIMIT_PER_SECTION)
.limit(SEARCH_LIMIT_PER_SECTION)
.all()
)
return total, current_page, total_pages, rows
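The paging arithmetic in `_paginate_query` can be checked on its own: the page count rounds up, an empty result still reports one page, and the requested page is clamped into `[1, total_pages]`. A sketch with the same constant (`clamp_page` is a hypothetical name):

```python
import math

SEARCH_LIMIT_PER_SECTION = 10  # mirrors the module constant above

# The clamping logic extracted from _paginate_query(): compute the
# page count (rounding up, minimum 1) and clamp the requested page
# into the valid range.
def clamp_page(total: int, page: int) -> tuple[int, int]:
    total_pages = max(1, math.ceil(total / SEARCH_LIMIT_PER_SECTION)) if total else 1
    current_page = min(max(page, 1), total_pages)
    return current_page, total_pages

print(clamp_page(23, 99))  # (3, 3): 23 rows -> 3 pages, page 99 clamped to 3
print(clamp_page(0, 5))    # (1, 1): empty result still reports one page
```

Clamping before computing the `OFFSET` means an out-of-range `page` query parameter can never produce an empty page or a negative offset.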
def _enrich_paging(section: dict, total: int, current_page: int, total_pages: int) -> None:
section["total"] = int(total or 0)
section["current_page"] = int(current_page or 1)
section["total_pages"] = int(total_pages or 1)
section["has_prev"] = section["current_page"] > 1
section["has_next"] = section["current_page"] < section["total_pages"]
section["prev_url"] = ""
section["next_url"] = ""
def _build_inbox_results(patterns: list[str], page: int) -> dict:
section = {
"key": "inbox",
"title": "Inbox",
"view_all_url": url_for("main.inbox"),
"total": 0,
"items": [],
"current_page": 1,
"total_pages": 1,
"has_prev": False,
"has_next": False,
"prev_url": "",
"next_url": "",
}
if not _is_section_allowed("inbox"):
return section
query = MailMessage.query
if hasattr(MailMessage, "location"):
query = query.filter(MailMessage.location == "inbox")
match_expr = _contains_all_terms(
[
func.coalesce(MailMessage.from_address, ""),
func.coalesce(MailMessage.subject, ""),
cast(MailMessage.received_at, String),
func.coalesce(MailMessage.backup_software, ""),
func.coalesce(MailMessage.backup_type, ""),
func.coalesce(MailMessage.job_name, ""),
func.coalesce(MailMessage.parse_result, ""),
cast(MailMessage.parsed_at, String),
],
patterns,
)
if match_expr is not None:
query = query.filter(match_expr)
total, current_page, total_pages, rows = _paginate_query(
query,
page,
[MailMessage.received_at.desc().nullslast(), MailMessage.id.desc()],
)
_enrich_paging(section, total, current_page, total_pages)
for msg in rows:
parsed_flag = bool(getattr(msg, "parsed_at", None) or (msg.parse_result or ""))
section["items"].append(
{
"title": msg.subject or f"Message #{msg.id}",
"subtitle": f"{msg.from_address or '-'} | {_format_datetime(msg.received_at)}",
"meta": f"{msg.backup_software or '-'} / {msg.backup_type or '-'} / {msg.job_name or '-'} | Parsed: {'Yes' if parsed_flag else 'No'}",
"link": url_for("main.inbox"),
}
)
return section
def _build_customers_results(patterns: list[str], page: int) -> dict:
section = {
"key": "customers",
"title": "Customers",
"view_all_url": url_for("main.customers"),
"total": 0,
"items": [],
"current_page": 1,
"total_pages": 1,
"has_prev": False,
"has_next": False,
"prev_url": "",
"next_url": "",
}
if not _is_section_allowed("customers"):
return section
query = Customer.query
match_expr = _contains_all_terms([func.coalesce(Customer.name, "")], patterns)
if match_expr is not None:
query = query.filter(match_expr)
total, current_page, total_pages, rows = _paginate_query(
query,
page,
[Customer.name.asc()],
)
_enrich_paging(section, total, current_page, total_pages)
for c in rows:
try:
job_count = c.jobs.count()
except Exception:
job_count = 0
section["items"].append(
{
"title": c.name or f"Customer #{c.id}",
"subtitle": f"Jobs: {job_count}",
"meta": "Active" if c.active else "Inactive",
"link": url_for("main.jobs", customer_id=c.id),
}
)
return section
def _build_jobs_results(patterns: list[str], page: int) -> dict:
section = {
"key": "jobs",
"title": "Jobs",
"view_all_url": url_for("main.jobs"),
"total": 0,
"items": [],
"current_page": 1,
"total_pages": 1,
"has_prev": False,
"has_next": False,
"prev_url": "",
"next_url": "",
}
if not _is_section_allowed("jobs"):
return section
query = (
db.session.query(
Job.id.label("job_id"),
Job.backup_software.label("backup_software"),
Job.backup_type.label("backup_type"),
Job.job_name.label("job_name"),
Customer.name.label("customer_name"),
)
.select_from(Job)
.outerjoin(Customer, Customer.id == Job.customer_id)
.filter(Job.archived.is_(False))
.filter(db.or_(Customer.id.is_(None), Customer.active.is_(True)))
)
match_expr = _contains_all_terms(
[
func.coalesce(Customer.name, ""),
func.coalesce(Job.backup_software, ""),
func.coalesce(Job.backup_type, ""),
func.coalesce(Job.job_name, ""),
],
patterns,
)
if match_expr is not None:
query = query.filter(match_expr)
total, current_page, total_pages, rows = _paginate_query(
query,
page,
[
Customer.name.asc().nullslast(),
Job.backup_software.asc(),
Job.backup_type.asc(),
Job.job_name.asc(),
],
)
_enrich_paging(section, total, current_page, total_pages)
for row in rows:
section["items"].append(
{
"title": row.job_name or f"Job #{row.job_id}",
"subtitle": f"{row.customer_name or '-'} | {row.backup_software or '-'} / {row.backup_type or '-'}",
"meta": "",
"link": url_for("main.job_detail", job_id=row.job_id),
}
)
return section
def _build_daily_jobs_results(patterns: list[str], page: int) -> dict:
section = {
"key": "daily_jobs",
"title": "Daily Jobs",
"view_all_url": url_for("main.daily_jobs"),
"total": 0,
"items": [],
"current_page": 1,
"total_pages": 1,
"has_prev": False,
"has_next": False,
"prev_url": "",
"next_url": "",
}
if not _is_section_allowed("daily_jobs"):
return section
try:
tz = _get_ui_timezone()
except Exception:
tz = None
try:
target_date = datetime.now(tz).date() if tz else datetime.utcnow().date()
except Exception:
target_date = datetime.utcnow().date()
settings = _get_or_create_settings()
missed_start_date = getattr(settings, "daily_jobs_start_date", None)
if tz:
local_midnight = datetime(
year=target_date.year,
month=target_date.month,
day=target_date.day,
hour=0,
minute=0,
second=0,
tzinfo=tz,
)
start_of_day = local_midnight.astimezone(datetime_module.timezone.utc).replace(tzinfo=None)
end_of_day = (local_midnight + timedelta(days=1)).astimezone(datetime_module.timezone.utc).replace(tzinfo=None)
else:
start_of_day = datetime(
year=target_date.year,
month=target_date.month,
day=target_date.day,
hour=0,
minute=0,
second=0,
)
end_of_day = start_of_day + timedelta(days=1)
def _to_local(dt_utc):
if not dt_utc or not tz:
return dt_utc
try:
if dt_utc.tzinfo is None:
dt_utc = dt_utc.replace(tzinfo=datetime_module.timezone.utc)
return dt_utc.astimezone(tz)
except Exception:
return dt_utc
def _bucket_15min(dt_utc):
d = _to_local(dt_utc)
if not d:
return None
minute_bucket = (d.minute // 15) * 15
return f"{d.hour:02d}:{minute_bucket:02d}"
def _is_success_status(value: str) -> bool:
s = (value or "").strip().lower()
if not s:
return False
return ("success" in s) or ("override" in s)
query = (
db.session.query(
Job.id.label("job_id"),
Job.job_name.label("job_name"),
Job.backup_software.label("backup_software"),
Job.backup_type.label("backup_type"),
Customer.name.label("customer_name"),
)
.select_from(Job)
.outerjoin(Customer, Customer.id == Job.customer_id)
.filter(Job.archived.is_(False))
.filter(db.or_(Customer.id.is_(None), Customer.active.is_(True)))
)
match_expr = _contains_all_terms(
[
func.coalesce(Customer.name, ""),
func.coalesce(Job.backup_software, ""),
func.coalesce(Job.backup_type, ""),
func.coalesce(Job.job_name, ""),
],
patterns,
)
if match_expr is not None:
query = query.filter(match_expr)
total, current_page, total_pages, rows = _paginate_query(
query,
page,
[
Customer.name.asc().nullslast(),
Job.backup_software.asc(),
Job.backup_type.asc(),
Job.job_name.asc(),
],
)
_enrich_paging(section, total, current_page, total_pages)
for row in rows:
expected_times = (_infer_schedule_map_from_runs(row.job_id).get(target_date.weekday()) or [])
if not expected_times:
monthly = _infer_monthly_schedule_from_runs(row.job_id)
if monthly:
try:
dom = int(monthly.get("day_of_month") or 0)
except Exception:
dom = 0
mtimes = monthly.get("times") or []
try:
import calendar as _calendar
last_dom = _calendar.monthrange(target_date.year, target_date.month)[1]
except Exception:
last_dom = target_date.day
scheduled_dom = dom if (dom and dom <= last_dom) else last_dom
if target_date.day == scheduled_dom:
expected_times = list(mtimes)
runs_for_day = (
JobRun.query.filter(
JobRun.job_id == row.job_id,
JobRun.run_at >= start_of_day,
JobRun.run_at < end_of_day,
)
.order_by(JobRun.run_at.asc())
.all()
)
run_count = len(runs_for_day)
last_status = "-"
expected_display = expected_times[-1] if expected_times else "-"
if run_count > 0:
last_run = runs_for_day[-1]
try:
job_obj = Job.query.get(int(row.job_id))
status_display, _override_applied, _override_level, _ov_id, _ov_reason = _apply_overrides_to_run(job_obj, last_run)
if getattr(last_run, "missed", False):
last_status = status_display or "Missed"
else:
last_status = status_display or (last_run.status or "-")
except Exception:
last_status = last_run.status or "-"
expected_display = _bucket_15min(last_run.run_at) or expected_display
else:
try:
today_local = datetime.now(tz).date() if tz else datetime.utcnow().date()
except Exception:
today_local = datetime.utcnow().date()
if target_date > today_local:
last_status = "Expected"
elif target_date == today_local:
last_status = "Expected"
else:
if missed_start_date and target_date < missed_start_date:
last_status = "-"
else:
last_status = "Missed"
success_text = "Yes" if _is_success_status(last_status) else "No"
section["items"].append(
{
"title": row.job_name or f"Job #{row.job_id}",
"subtitle": f"{row.customer_name or '-'} | {row.backup_software or '-'} / {row.backup_type or '-'}",
"meta": f"Expected: {expected_display} | Successful: {success_text} | Runs: {run_count}",
"link": url_for("main.daily_jobs", date=target_date.strftime("%Y-%m-%d"), open_job_id=row.job_id),
}
)
return section
def _build_run_checks_results(patterns: list[str], page: int) -> dict:
section = {
"key": "run_checks",
"title": "Run Checks",
"view_all_url": url_for("main.run_checks_page"),
"total": 0,
"items": [],
"current_page": 1,
"total_pages": 1,
"has_prev": False,
"has_next": False,
"prev_url": "",
"next_url": "",
}
if not _is_section_allowed("run_checks"):
return section
agg = (
db.session.query(
JobRun.job_id.label("job_id"),
func.count(JobRun.id).label("run_count"),
)
.filter(JobRun.reviewed_at.is_(None))
.group_by(JobRun.job_id)
.subquery()
)
query = (
db.session.query(
Job.id.label("job_id"),
Job.job_name.label("job_name"),
Job.backup_software.label("backup_software"),
Job.backup_type.label("backup_type"),
Customer.name.label("customer_name"),
agg.c.run_count.label("run_count"),
)
.select_from(Job)
.join(agg, agg.c.job_id == Job.id)
.outerjoin(Customer, Customer.id == Job.customer_id)
.filter(Job.archived.is_(False))
)
match_expr = _contains_all_terms(
[
func.coalesce(Customer.name, ""),
func.coalesce(Job.backup_software, ""),
func.coalesce(Job.backup_type, ""),
func.coalesce(Job.job_name, ""),
],
patterns,
)
if match_expr is not None:
query = query.filter(match_expr)
total, current_page, total_pages, rows = _paginate_query(
query,
page,
[
Customer.name.asc().nullslast(),
Job.backup_software.asc().nullslast(),
Job.backup_type.asc().nullslast(),
Job.job_name.asc().nullslast(),
],
)
_enrich_paging(section, total, current_page, total_pages)
for row in rows:
section["items"].append(
{
"title": row.job_name or f"Job #{row.job_id}",
"subtitle": f"{row.customer_name or '-'} | {row.backup_software or '-'} / {row.backup_type or '-'}",
"meta": f"Unreviewed runs: {int(row.run_count or 0)}",
"link": url_for("main.run_checks_page"),
}
)
return section
def _build_tickets_results(patterns: list[str], page: int) -> dict:
section = {
"key": "tickets",
"title": "Tickets",
"view_all_url": url_for("main.tickets_page"),
"total": 0,
"items": [],
"current_page": 1,
"total_pages": 1,
"has_prev": False,
"has_next": False,
"prev_url": "",
"next_url": "",
}
if not _is_section_allowed("tickets"):
return section
query = (
db.session.query(Ticket)
.select_from(Ticket)
.outerjoin(TicketScope, TicketScope.ticket_id == Ticket.id)
.outerjoin(Customer, Customer.id == TicketScope.customer_id)
.outerjoin(Job, Job.id == TicketScope.job_id)
)
match_expr = _contains_all_terms(
[
func.coalesce(Ticket.ticket_code, ""),
func.coalesce(Customer.name, ""),
func.coalesce(TicketScope.scope_type, ""),
func.coalesce(TicketScope.backup_software, ""),
func.coalesce(TicketScope.backup_type, ""),
func.coalesce(TicketScope.job_name_match, ""),
func.coalesce(Job.job_name, ""),
],
patterns,
)
if match_expr is not None:
query = query.filter(match_expr)
query = query.distinct()
total, current_page, total_pages, rows = _paginate_query(
query,
page,
[Ticket.start_date.desc().nullslast()],
)
_enrich_paging(section, total, current_page, total_pages)
for t in rows:
customer_display = "-"
scope_summary = "-"
try:
scope_rows = (
db.session.query(
TicketScope.scope_type.label("scope_type"),
TicketScope.backup_software.label("backup_software"),
TicketScope.backup_type.label("backup_type"),
Customer.name.label("customer_name"),
)
.select_from(TicketScope)
.outerjoin(Customer, Customer.id == TicketScope.customer_id)
.filter(TicketScope.ticket_id == t.id)
.all()
)
customer_names = []
for s in scope_rows:
cname = getattr(s, "customer_name", None)
if cname and cname not in customer_names:
customer_names.append(cname)
if customer_names:
customer_display = customer_names[0]
if len(customer_names) > 1:
customer_display = f"{customer_display} +{len(customer_names)-1}"
if scope_rows:
s = scope_rows[0]
bits = []
if getattr(s, "scope_type", None):
bits.append(str(getattr(s, "scope_type")))
if getattr(s, "backup_software", None):
bits.append(str(getattr(s, "backup_software")))
if getattr(s, "backup_type", None):
bits.append(str(getattr(s, "backup_type")))
scope_summary = " / ".join(bits) if bits else "-"
except Exception:
customer_display = "-"
scope_summary = "-"
section["items"].append(
{
"title": t.ticket_code or f"Ticket #{t.id}",
"subtitle": f"{customer_display} | {scope_summary}",
"meta": _format_datetime(t.start_date),
"link": url_for("main.ticket_detail", ticket_id=t.id),
}
)
return section
def _build_remarks_results(patterns: list[str], page: int) -> dict:
section = {
"key": "remarks",
"title": "Remarks",
"view_all_url": url_for("main.tickets_page", tab="remarks"),
"total": 0,
"items": [],
"current_page": 1,
"total_pages": 1,
"has_prev": False,
"has_next": False,
"prev_url": "",
"next_url": "",
}
if not _is_section_allowed("remarks"):
return section
query = (
db.session.query(Remark)
.select_from(Remark)
.outerjoin(RemarkScope, RemarkScope.remark_id == Remark.id)
.outerjoin(Customer, Customer.id == RemarkScope.customer_id)
.outerjoin(Job, Job.id == RemarkScope.job_id)
)
match_expr = _contains_all_terms(
[
func.coalesce(Remark.title, ""),
func.coalesce(Remark.body, ""),
func.coalesce(Customer.name, ""),
func.coalesce(RemarkScope.scope_type, ""),
func.coalesce(RemarkScope.backup_software, ""),
func.coalesce(RemarkScope.backup_type, ""),
func.coalesce(RemarkScope.job_name_match, ""),
func.coalesce(Job.job_name, ""),
cast(Remark.start_date, String),
cast(Remark.resolved_at, String),
],
patterns,
)
if match_expr is not None:
query = query.filter(match_expr)
query = query.distinct()
total, current_page, total_pages, rows = _paginate_query(
query,
page,
[Remark.start_date.desc().nullslast()],
)
_enrich_paging(section, total, current_page, total_pages)
for r in rows:
customer_display = "-"
scope_summary = "-"
try:
scope_rows = (
db.session.query(
RemarkScope.scope_type.label("scope_type"),
RemarkScope.backup_software.label("backup_software"),
RemarkScope.backup_type.label("backup_type"),
Customer.name.label("customer_name"),
)
.select_from(RemarkScope)
.outerjoin(Customer, Customer.id == RemarkScope.customer_id)
.filter(RemarkScope.remark_id == r.id)
.all()
)
customer_names = []
for s in scope_rows:
cname = getattr(s, "customer_name", None)
if cname and cname not in customer_names:
customer_names.append(cname)
if customer_names:
customer_display = customer_names[0]
if len(customer_names) > 1:
customer_display = f"{customer_display} +{len(customer_names)-1}"
if scope_rows:
s = scope_rows[0]
bits = []
if getattr(s, "scope_type", None):
bits.append(str(getattr(s, "scope_type")))
if getattr(s, "backup_software", None):
bits.append(str(getattr(s, "backup_software")))
if getattr(s, "backup_type", None):
bits.append(str(getattr(s, "backup_type")))
scope_summary = " / ".join(bits) if bits else "-"
except Exception:
customer_display = "-"
scope_summary = "-"
preview = (r.title or r.body or "").strip()
if len(preview) > 80:
preview = preview[:77] + "..."
section["items"].append(
{
"title": preview or f"Remark #{r.id}",
"subtitle": f"{customer_display} | {scope_summary}",
"meta": _format_datetime(r.start_date),
"link": url_for("main.remark_detail", remark_id=r.id),
}
)
return section
def _build_overrides_results(patterns: list[str], page: int) -> dict:
section = {
"key": "overrides",
"title": "Existing overrides",
"view_all_url": url_for("main.overrides"),
"total": 0,
"items": [],
"current_page": 1,
"total_pages": 1,
"has_prev": False,
"has_next": False,
"prev_url": "",
"next_url": "",
}
if not _is_section_allowed("overrides"):
return section
query = (
db.session.query(
Override.id.label("id"),
Override.level.label("level"),
Override.backup_software.label("backup_software"),
Override.backup_type.label("backup_type"),
Override.object_name.label("object_name"),
Override.start_at.label("start_at"),
Override.end_at.label("end_at"),
Override.comment.label("comment"),
Customer.name.label("customer_name"),
Job.job_name.label("job_name"),
)
.select_from(Override)
.outerjoin(Job, Job.id == Override.job_id)
.outerjoin(Customer, Customer.id == Job.customer_id)
)
match_expr = _contains_all_terms(
[
func.coalesce(Override.level, ""),
func.coalesce(Customer.name, ""),
func.coalesce(Override.backup_software, ""),
func.coalesce(Override.backup_type, ""),
func.coalesce(Job.job_name, ""),
func.coalesce(Override.object_name, ""),
cast(Override.start_at, String),
cast(Override.end_at, String),
func.coalesce(Override.comment, ""),
],
patterns,
)
if match_expr is not None:
query = query.filter(match_expr)
total, current_page, total_pages, rows = _paginate_query(
query,
page,
[Override.level.asc(), Override.start_at.desc()],
)
_enrich_paging(section, total, current_page, total_pages)
for row in rows:
scope_bits = []
if row.customer_name:
scope_bits.append(row.customer_name)
if row.backup_software:
scope_bits.append(row.backup_software)
if row.backup_type:
scope_bits.append(row.backup_type)
if row.job_name:
scope_bits.append(row.job_name)
if row.object_name:
scope_bits.append(f"object: {row.object_name}")
scope_text = " / ".join(scope_bits) if scope_bits else "All jobs"
section["items"].append(
{
"title": (row.level or "override").capitalize(),
"subtitle": scope_text,
"meta": f"From {_format_datetime(row.start_at)} to {_format_datetime(row.end_at) if row.end_at else '-'} | {row.comment or ''}",
"link": url_for("main.overrides"),
}
)
return section
def _build_reports_results(patterns: list[str], page: int) -> dict:
section = {
"key": "reports",
"title": "Reports",
"view_all_url": url_for("main.reports"),
"total": 0,
"items": [],
"current_page": 1,
"total_pages": 1,
"has_prev": False,
"has_next": False,
"prev_url": "",
"next_url": "",
}
if not _is_section_allowed("reports"):
return section
query = ReportDefinition.query
match_expr = _contains_all_terms(
[
func.coalesce(ReportDefinition.name, ""),
func.coalesce(ReportDefinition.report_type, ""),
cast(ReportDefinition.period_start, String),
cast(ReportDefinition.period_end, String),
func.coalesce(ReportDefinition.output_format, ""),
],
patterns,
)
if match_expr is not None:
query = query.filter(match_expr)
total, current_page, total_pages, rows = _paginate_query(
query,
page,
[ReportDefinition.created_at.desc()],
)
_enrich_paging(section, total, current_page, total_pages)
can_edit = get_active_role() in ("admin", "operator", "reporter")
for r in rows:
section["items"].append(
{
"title": r.name or f"Report #{r.id}",
"subtitle": f"{r.report_type or '-'} | {r.output_format or '-'}",
"meta": f"{_format_datetime(r.period_start)} -> {_format_datetime(r.period_end)}",
"link": (url_for("main.reports_edit", report_id=r.id) if can_edit else url_for("main.reports")),
}
)
return section
@main_bp.route("/search")
@login_required
def search_page():
query = (request.args.get("q") or "").strip()
patterns = _build_patterns(query)
requested_pages = {
key: _parse_page(request.args.get(f"p_{key}"))
for key in SEARCH_SECTION_KEYS
}
sections = []
if patterns:
sections.append(_build_inbox_results(patterns, requested_pages["inbox"]))
sections.append(_build_customers_results(patterns, requested_pages["customers"]))
sections.append(_build_jobs_results(patterns, requested_pages["jobs"]))
sections.append(_build_daily_jobs_results(patterns, requested_pages["daily_jobs"]))
sections.append(_build_run_checks_results(patterns, requested_pages["run_checks"]))
sections.append(_build_tickets_results(patterns, requested_pages["tickets"]))
sections.append(_build_remarks_results(patterns, requested_pages["remarks"]))
sections.append(_build_overrides_results(patterns, requested_pages["overrides"]))
sections.append(_build_reports_results(patterns, requested_pages["reports"]))
else:
sections = [
{"key": "inbox", "title": "Inbox", "view_all_url": url_for("main.inbox"), "total": 0, "items": [], "current_page": 1, "total_pages": 1, "has_prev": False, "has_next": False, "prev_url": "", "next_url": ""},
{"key": "customers", "title": "Customers", "view_all_url": url_for("main.customers"), "total": 0, "items": [], "current_page": 1, "total_pages": 1, "has_prev": False, "has_next": False, "prev_url": "", "next_url": ""},
{"key": "jobs", "title": "Jobs", "view_all_url": url_for("main.jobs"), "total": 0, "items": [], "current_page": 1, "total_pages": 1, "has_prev": False, "has_next": False, "prev_url": "", "next_url": ""},
{"key": "daily_jobs", "title": "Daily Jobs", "view_all_url": url_for("main.daily_jobs"), "total": 0, "items": [], "current_page": 1, "total_pages": 1, "has_prev": False, "has_next": False, "prev_url": "", "next_url": ""},
{"key": "run_checks", "title": "Run Checks", "view_all_url": url_for("main.run_checks_page"), "total": 0, "items": [], "current_page": 1, "total_pages": 1, "has_prev": False, "has_next": False, "prev_url": "", "next_url": ""},
{"key": "tickets", "title": "Tickets", "view_all_url": url_for("main.tickets_page"), "total": 0, "items": [], "current_page": 1, "total_pages": 1, "has_prev": False, "has_next": False, "prev_url": "", "next_url": ""},
{"key": "remarks", "title": "Remarks", "view_all_url": url_for("main.tickets_page", tab="remarks"), "total": 0, "items": [], "current_page": 1, "total_pages": 1, "has_prev": False, "has_next": False, "prev_url": "", "next_url": ""},
{"key": "overrides", "title": "Existing overrides", "view_all_url": url_for("main.overrides"), "total": 0, "items": [], "current_page": 1, "total_pages": 1, "has_prev": False, "has_next": False, "prev_url": "", "next_url": ""},
{"key": "reports", "title": "Reports", "view_all_url": url_for("main.reports"), "total": 0, "items": [], "current_page": 1, "total_pages": 1, "has_prev": False, "has_next": False, "prev_url": "", "next_url": ""},
]
visible_sections = [s for s in sections if _is_section_allowed(s["key"])]
current_pages = {
s["key"]: int(s.get("current_page", 1) or 1)
for s in sections
}
def _build_search_url(page_overrides: dict[str, int]) -> str:
args = {"q": query}
for key in SEARCH_SECTION_KEYS:
args[f"p_{key}"] = int(page_overrides.get(key, current_pages.get(key, 1)))
return url_for("main.search_page", **args)
for s in visible_sections:
key = s["key"]
cur = int(s.get("current_page", 1) or 1)
if query:
if key == "inbox":
s["view_all_url"] = url_for("main.inbox", q=query)
elif key == "customers":
s["view_all_url"] = url_for("main.customers", q=query)
elif key == "jobs":
s["view_all_url"] = url_for("main.jobs", q=query)
elif key == "daily_jobs":
s["view_all_url"] = url_for("main.daily_jobs", q=query)
elif key == "run_checks":
s["view_all_url"] = url_for("main.run_checks_page", q=query)
elif key == "tickets":
s["view_all_url"] = url_for("main.tickets_page", q=query)
elif key == "remarks":
s["view_all_url"] = url_for("main.tickets_page", tab="remarks", q=query)
elif key == "overrides":
s["view_all_url"] = url_for("main.overrides", q=query)
elif key == "reports":
s["view_all_url"] = url_for("main.reports", q=query)
if s.get("has_prev"):
prev_pages = dict(current_pages)
prev_pages[key] = cur - 1
s["prev_url"] = _build_search_url(prev_pages)
if s.get("has_next"):
next_pages = dict(current_pages)
next_pages[key] = cur + 1
s["next_url"] = _build_search_url(next_pages)
total_hits = sum(int(s.get("total", 0) or 0) for s in visible_sections)
return render_template(
"main/search.html",
query=query,
sections=visible_sections,
total_hits=total_hits,
limit_per_section=SEARCH_LIMIT_PER_SECTION,
)
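The `_bucket_15min` helper in `_build_daily_jobs_results` above rounds each run's local time down to a 15-minute display slot. A minimal standalone sketch of the same rounding arithmetic (timezone conversion omitted):

```python
from datetime import datetime

def bucket_15min(d: datetime) -> str:
    # Integer division rounds the minute down to the nearest 15-minute
    # boundary -- the same (d.minute // 15) * 15 arithmetic as _bucket_15min.
    minute_bucket = (d.minute // 15) * 15
    return f"{d.hour:02d}:{minute_bucket:02d}"

print(bucket_15min(datetime(2026, 2, 19, 13, 44)))  # 13:30
```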

View File

@@ -585,6 +585,7 @@ def settings_jobs_export():
 @roles_required("admin")
 def settings_jobs_import():
     upload = request.files.get("jobs_file")
+    include_autotask_ids = bool(request.form.get("include_autotask_ids"))
     if not upload or not upload.filename:
         flash("No import file was provided.", "danger")
         return redirect(url_for("main.settings", section="general"))
@@ -621,14 +622,17 @@ def settings_jobs_import():
         if not cust_name:
             continue
-        # Read Autotask fields (backwards compatible - optional)
-        autotask_company_id = cust_item.get("autotask_company_id")
-        autotask_company_name = cust_item.get("autotask_company_name")
+        autotask_company_id = None
+        autotask_company_name = None
+        if include_autotask_ids:
+            # Read Autotask fields (backwards compatible - optional)
+            autotask_company_id = cust_item.get("autotask_company_id")
+            autotask_company_name = cust_item.get("autotask_company_name")
         existing_customer = Customer.query.filter_by(name=cust_name).first()
         if existing_customer:
-            # Update Autotask mapping if provided
-            if autotask_company_id is not None:
+            # Update Autotask mapping only when explicitly allowed by import option.
+            if include_autotask_ids and autotask_company_id is not None:
                 existing_customer.autotask_company_id = autotask_company_id
                 existing_customer.autotask_company_name = autotask_company_name
                 existing_customer.autotask_mapping_status = None  # Will be resynced
@@ -747,7 +751,7 @@ def settings_jobs_import():
     db.session.commit()
     flash(
-        f"Import completed. Customers created: {created_customers}, updated: {updated_customers}. Jobs created: {created_jobs}, updated: {updated_jobs}.",
+        f"Import completed. Customers created: {created_customers}, updated: {updated_customers}. Jobs created: {created_jobs}, updated: {updated_jobs}. Autotask IDs imported: {'yes' if include_autotask_ids else 'no'}.",
         "success",
     )
@@ -758,6 +762,7 @@ def settings_jobs_import():
         details=json.dumps({
             "format": "JSON",
             "schema": payload.get("schema"),
+            "include_autotask_ids": include_autotask_ids,
             "customers_created": created_customers,
             "customers_updated": updated_customers,
             "jobs_created": created_jobs,

View File

@@ -52,6 +52,7 @@ from ..models import (
     FeedbackItem,
     FeedbackVote,
     FeedbackReply,
+    FeedbackAttachment,
     NewsItem,
     NewsRead,
     ReportDefinition,
@@ -678,6 +679,10 @@ def _infer_schedule_map_from_runs(job_id: int):
             return schedule
         if bs == 'qnap' and bt == 'firmware update':
             return schedule
+        if bs == '3cx' and bt == 'update':
+            return schedule
+        if bs == '3cx' and bt == 'ssl certificate':
+            return schedule
         if bs == 'syncovery' and bt == 'syncovery':
             return schedule
     except Exception:
@@ -993,4 +998,3 @@ def _next_ticket_code(now_utc: datetime) -> str:
         seq = 1
     return f"{prefix}{seq:04d}"
-

View File

@@ -28,16 +28,32 @@ def tickets_page():
     if tab == "tickets":
         query = Ticket.query
+        joined_scope = False
         if active_only:
             query = query.filter(Ticket.resolved_at.is_(None))
         if q:
             like_q = f"%{q}%"
+            query = (
+                query
+                .outerjoin(TicketScope, TicketScope.ticket_id == Ticket.id)
+                .outerjoin(Customer, Customer.id == TicketScope.customer_id)
+                .outerjoin(Job, Job.id == TicketScope.job_id)
+            )
+            joined_scope = True
             query = query.filter(
                 (Ticket.ticket_code.ilike(like_q))
                 | (Ticket.description.ilike(like_q))
+                | (Customer.name.ilike(like_q))
+                | (TicketScope.scope_type.ilike(like_q))
+                | (TicketScope.backup_software.ilike(like_q))
+                | (TicketScope.backup_type.ilike(like_q))
+                | (TicketScope.job_name_match.ilike(like_q))
+                | (Job.job_name.ilike(like_q))
             )
+            query = query.distinct()
         if customer_id or backup_software or backup_type:
-            query = query.join(TicketScope, TicketScope.ticket_id == Ticket.id)
+            if not joined_scope:
+                query = query.join(TicketScope, TicketScope.ticket_id == Ticket.id)
             if customer_id:
                 query = query.filter(TicketScope.customer_id == customer_id)
@@ -322,4 +338,3 @@ def ticket_detail(ticket_id: int):
         scopes=scopes,
         runs=runs,
     )
-
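The `query.distinct()` added to the tickets search above is needed because outer-joining `TicketScope` multiplies rows: a ticket with several scopes matches the join once per scope. A plain-Python illustration with hypothetical data:

```python
# Hypothetical ticket/scope rows illustrating the one-to-many join.
tickets = [{"id": 1, "code": "T-0001"}]
scopes = [
    {"ticket_id": 1, "backup_software": "Veeam"},
    {"ticket_id": 1, "backup_software": "3CX"},
]

# The join emits one combined row per matching scope, so one ticket
# with two scopes appears twice in the result.
joined = [
    (t["id"], s["backup_software"])
    for t in tickets
    for s in scopes
    if s["ticket_id"] == t["id"]
]

# DISTINCT over the ticket columns collapses the duplicates again,
# which is what query.distinct() does in the search query.
distinct_ticket_ids = sorted({ticket_id for ticket_id, _ in joined})
```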

View File

@@ -1095,6 +1095,7 @@ def run_migrations() -> None:
     migrate_object_persistence_tables()
     migrate_feedback_tables()
     migrate_feedback_replies_table()
+    migrate_feedback_attachments_table()
     migrate_tickets_active_from_date()
     migrate_tickets_resolved_origin()
     migrate_remarks_active_from_date()
@@ -1446,6 +1447,49 @@ def migrate_feedback_replies_table() -> None:
     print("[migrations] Feedback replies table ensured.")
+
+
+def migrate_feedback_attachments_table() -> None:
+    """Ensure feedback attachments table exists.
+
+    Table:
+    - feedback_attachments (screenshots/images for feedback items and replies)
+    """
+    engine = db.get_engine()
+    with engine.begin() as conn:
+        conn.execute(
+            text(
+                """
+                CREATE TABLE IF NOT EXISTS feedback_attachments (
+                    id SERIAL PRIMARY KEY,
+                    feedback_item_id INTEGER NOT NULL REFERENCES feedback_items(id) ON DELETE CASCADE,
+                    feedback_reply_id INTEGER REFERENCES feedback_replies(id) ON DELETE CASCADE,
+                    filename VARCHAR(255) NOT NULL,
+                    file_data BYTEA NOT NULL,
+                    mime_type VARCHAR(64) NOT NULL,
+                    file_size INTEGER NOT NULL,
+                    created_at TIMESTAMP NOT NULL DEFAULT NOW()
+                );
+                """
+            )
+        )
+        conn.execute(
+            text(
+                """
+                CREATE INDEX IF NOT EXISTS idx_feedback_attachments_item
+                ON feedback_attachments (feedback_item_id);
+                """
+            )
+        )
+        conn.execute(
+            text(
+                """
+                CREATE INDEX IF NOT EXISTS idx_feedback_attachments_reply
+                ON feedback_attachments (feedback_reply_id);
+                """
+            )
+        )
+    print("[migrations] Feedback attachments table ensured.")
+
+
 def migrate_tickets_active_from_date() -> None:
     """Ensure tickets.active_from_date exists and is populated.

View File

@@ -567,6 +567,23 @@ class FeedbackReply(db.Model):
     created_at = db.Column(db.DateTime, default=datetime.utcnow, nullable=False)
+
+
+class FeedbackAttachment(db.Model):
+    __tablename__ = "feedback_attachments"
+
+    id = db.Column(db.Integer, primary_key=True)
+    feedback_item_id = db.Column(
+        db.Integer, db.ForeignKey("feedback_items.id", ondelete="CASCADE"), nullable=False
+    )
+    feedback_reply_id = db.Column(
+        db.Integer, db.ForeignKey("feedback_replies.id", ondelete="CASCADE"), nullable=True
+    )
+    filename = db.Column(db.String(255), nullable=False)
+    file_data = db.Column(db.LargeBinary, nullable=False)
+    mime_type = db.Column(db.String(64), nullable=False)
+    file_size = db.Column(db.Integer, nullable=False)
+    created_at = db.Column(db.DateTime, default=datetime.utcnow, nullable=False)
+
+
 class NewsItem(db.Model):
     __tablename__ = "news_items"

View File

@@ -24,6 +24,10 @@ def try_parse_3cx(msg: MailMessage) -> Tuple[bool, Dict, List[Dict]]:
     - SSL Certificate Renewal (informational)
       Subject: '3CX Notification: SSL Certificate Renewal - <host>'
       Body contains an informational message about the renewal.
+    - Update Successful (informational)
+      Subject: '3CX Notification: Update Successful - <host>'
+      Body confirms update completion and healthy services.
     """
     subject = (msg.subject or "").strip()
     if not subject:
@@ -38,11 +42,16 @@ def try_parse_3cx(msg: MailMessage) -> Tuple[bool, Dict, List[Dict]]:
         subject,
         flags=re.IGNORECASE,
     )
+    m_update = re.match(
+        r"^3CX Notification:\s*Update Successful\s*-\s*(.+)$",
+        subject,
+        flags=re.IGNORECASE,
+    )
-    if not m_backup and not m_ssl:
+    if not m_backup and not m_ssl and not m_update:
         return False, {}, []
-    job_name = (m_backup or m_ssl).group(1).strip()
+    job_name = (m_backup or m_ssl or m_update).group(1).strip()
     body = (getattr(msg, "text_body", None) or getattr(msg, "body", None) or "")
     if not body:
@@ -60,6 +69,17 @@ def try_parse_3cx(msg: MailMessage) -> Tuple[bool, Dict, List[Dict]]:
         }
         return True, result, []
+    # Update successful: store as tracked informational run
+    if m_update:
+        result = {
+            "backup_software": "3CX",
+            "backup_type": "Update",
+            "job_name": job_name,
+            "overall_status": "Success",
+            "overall_message": body or None,
+        }
+        return True, result, []
+
     # Backup complete
     backup_file = None
     m_file = re.search(r"^\s*Backup\s+name\s*:\s*(.+?)\s*$", body, flags=re.IGNORECASE | re.MULTILINE)
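The new "Update Successful" branch captures the host from the subject and records it as the job name. A standalone sketch using the exact pattern from the diff (the helper function name here is illustrative):

```python
import re

# Pattern copied from try_parse_3cx for the new informational mail type.
UPDATE_RE = re.compile(
    r"^3CX Notification:\s*Update Successful\s*-\s*(.+)$",
    re.IGNORECASE,
)

def parse_update_subject(subject: str):
    # group(1) is the host portion after the dash; it becomes the job name
    # of the tracked informational run. Returns None for non-matching mails.
    m = UPDATE_RE.match((subject or "").strip())
    return m.group(1).strip() if m else None

print(parse_update_subject("3CX Notification: Update Successful - pbx.example.com"))  # pbx.example.com
```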

View File

@@ -157,6 +157,18 @@
               </li>
             {% endif %}
           </ul>
+          <form method="get" action="{{ url_for('main.search_page') }}" class="d-flex me-3 mb-2 mb-lg-0" role="search" autocomplete="off">
+            <input
+              class="form-control form-control-sm me-2"
+              type="search"
+              name="q"
+              placeholder="Search"
+              aria-label="Search"
+              value="{{ request.args.get('q','') if request.path == url_for('main.search_page') else '' }}"
+              style="min-width: 220px;"
+            />
+            <button class="btn btn-outline-secondary btn-sm" type="submit">Search</button>
+          </form>
           <span class="navbar-text me-3">
             <a class="text-decoration-none" href="{{ url_for('main.user_settings') }}">
               {{ current_user.username }} ({{ active_role }})

View File

@@ -15,6 +15,10 @@
   <form method="post" action="{{ url_for('main.customers_import') }}" enctype="multipart/form-data" class="d-flex align-items-center gap-2 mb-0">
     <input type="file" name="file" accept=".csv,text/csv" class="form-control form-control-sm" required style="max-width: 420px;" />
+    <div class="form-check mb-0">
+      <input class="form-check-input" type="checkbox" value="1" id="include_autotask_ids_customers" name="include_autotask_ids" />
+      <label class="form-check-label small" for="include_autotask_ids_customers">Include Autotask IDs</label>
+    </div>
     <button type="submit" class="btn btn-outline-secondary btn-sm" style="white-space: nowrap;">Import CSV</button>
   </form>
@@ -45,7 +49,11 @@
   {% if customers %}
     {% for c in customers %}
       <tr>
-        <td>{{ c.name }}</td>
+        <td>
+          <a href="{{ url_for('main.jobs', customer_id=c.id) }}" class="link-primary text-decoration-none">
+            {{ c.name }}
+          </a>
+        </td>
         <td>
           {% if c.active %}
             <span class="badge bg-success">Active</span>

View File

@ -4,6 +4,9 @@
<h2 class="mb-3">Daily Jobs</h2> <h2 class="mb-3">Daily Jobs</h2>
<form method="get" class="row g-3 mb-3"> <form method="get" class="row g-3 mb-3">
{% if q %}
<input type="hidden" name="q" value="{{ q }}" />
{% endif %}
<div class="col-auto"> <div class="col-auto">
<label for="dj_date" class="form-label">Date</label> <label for="dj_date" class="form-label">Date</label>
<input <input
@ -771,9 +774,43 @@ if (tStatus) tStatus.textContent = '';
}); });
} }
function autoOpenJobFromQuery() {
try {
var params = new URLSearchParams(window.location.search || "");
var openJobId = (params.get("open_job_id") || "").trim();
if (!openJobId) {
return;
}
var rows = document.querySelectorAll(".daily-job-row");
var targetRow = null;
rows.forEach(function (row) {
if ((row.getAttribute("data-job-id") || "") === openJobId) {
targetRow = row;
}
});
if (!targetRow) {
return;
}
targetRow.click();
params.delete("open_job_id");
var nextQuery = params.toString();
var nextUrl = window.location.pathname + (nextQuery ? ("?" + nextQuery) : "");
if (window.history && window.history.replaceState) {
window.history.replaceState({}, document.title, nextUrl);
}
} catch (e) {
// no-op
}
}
document.addEventListener("DOMContentLoaded", function () {
bindInlineCreateForms();
attachDailyJobsHandlers();
autoOpenJobFromQuery();
});
})();
</script>

View File

@ -34,6 +34,16 @@
<div class="col-6 col-md-3">
<button class="btn btn-outline-secondary" type="submit">Apply</button>
</div>
{% if active_role == 'admin' %}
<div class="col-12">
<div class="form-check">
<input class="form-check-input" type="checkbox" name="show_deleted" value="1" id="show_deleted" {% if show_deleted %}checked{% endif %} onchange="this.form.submit()">
<label class="form-check-label" for="show_deleted">
Show deleted items
</label>
</div>
</div>
{% endif %}
</form>
<div class="table-responsive">
@ -46,6 +56,9 @@
<th style="width: 160px;">Component</th>
<th style="width: 120px;">Status</th>
<th style="width: 170px;">Created</th>
{% if active_role == 'admin' and show_deleted %}
<th style="width: 140px;">Actions</th>
{% endif %}
</tr>
</thead>
<tbody>
@ -56,20 +69,30 @@
{% endif %}
{% for i in items %}
<tr {% if i.is_deleted %}style="opacity: 0.6; background-color: var(--bs-secondary-bg);"{% endif %}>
<td>
{% if not i.is_deleted %}
<form method="post" action="{{ url_for('main.feedback_vote', item_id=i.id) }}">
<input type="hidden" name="ref" value="list" />
<button type="submit" class="btn btn-sm {% if i.user_voted %}btn-success{% else %}btn-outline-secondary{% endif %}">
+ {{ i.vote_count }}
</button>
</form>
{% else %}
<span class="text-muted">+ {{ i.vote_count }}</span>
{% endif %}
</td>
<td>
<a href="{{ url_for('main.feedback_detail', item_id=i.id) }}">{{ i.title }}</a>
{% if i.is_deleted %}
<span class="badge text-bg-dark ms-2">Deleted</span>
{% endif %}
{% if i.created_by %}
<div class="text-muted" style="font-size: 0.85rem;">by {{ i.created_by }}</div>
{% endif %}
{% if i.is_deleted and i.deleted_at %}
<div class="text-muted" style="font-size: 0.85rem;">Deleted {{ i.deleted_at|local_datetime }}</div>
{% endif %}
</td>
<td>
{% if i.item_type == 'bug' %}
@ -90,6 +113,15 @@
<div>{{ i.created_at|local_datetime }}</div>
<div class="text-muted" style="font-size: 0.85rem;">Updated {{ i.updated_at|local_datetime }}</div>
</td>
{% if active_role == 'admin' and show_deleted %}
<td>
{% if i.is_deleted %}
<form method="post" action="{{ url_for('main.feedback_permanent_delete', item_id=i.id) }}" onsubmit="return confirm('Permanently delete this item and all screenshots? This cannot be undone!');">
<button type="submit" class="btn btn-sm btn-danger">Permanent Delete</button>
</form>
{% endif %}
</td>
{% endif %}
</tr>
{% endfor %}
</tbody>

View File

@ -15,6 +15,9 @@
{% else %}
<span class="badge text-bg-warning">Open</span>
{% endif %}
{% if item.deleted_at %}
<span class="badge text-bg-dark">Deleted</span>
{% endif %}
<span class="ms-2">by {{ created_by_name }}</span>
</div>
</div>
@ -29,6 +32,23 @@
<div class="mb-2"><strong>Component:</strong> {{ item.component }}</div>
{% endif %}
<div style="white-space: pre-wrap;">{{ item.description }}</div>
{% if item_attachments %}
<div class="mt-3">
<strong>Screenshots:</strong>
<div class="d-flex flex-wrap gap-2 mt-2">
{% for att in item_attachments %}
<a href="{{ url_for('main.feedback_attachment', attachment_id=att.id) }}" target="_blank">
<img src="{{ url_for('main.feedback_attachment', attachment_id=att.id) }}"
alt="{{ att.filename }}"
class="img-thumbnail"
style="max-height: 200px; max-width: 300px; cursor: pointer;"
title="Click to view full size" />
</a>
{% endfor %}
</div>
</div>
{% endif %}
</div>
<div class="card-footer d-flex justify-content-between align-items-center">
<div class="text-muted" style="font-size: 0.9rem;">
@ -63,6 +83,22 @@
</span>
</div>
<div style="white-space: pre-wrap;">{{ r.message }}</div>
{% if r.id in reply_attachments_map %}
<div class="mt-2">
<div class="d-flex flex-wrap gap-2">
{% for att in reply_attachments_map[r.id] %}
<a href="{{ url_for('main.feedback_attachment', attachment_id=att.id) }}" target="_blank">
<img src="{{ url_for('main.feedback_attachment', attachment_id=att.id) }}"
alt="{{ att.filename }}"
class="img-thumbnail"
style="max-height: 150px; max-width: 200px; cursor: pointer;"
title="Click to view full size" />
</a>
{% endfor %}
</div>
</div>
{% endif %}
</div>
{% endfor %}
</div>
@ -76,10 +112,15 @@
<div class="card-body">
<h5 class="card-title mb-3">Add reply</h5>
{% if item.status == 'open' %}
<form method="post" action="{{ url_for('main.feedback_reply', item_id=item.id) }}" enctype="multipart/form-data">
<div class="mb-2">
<textarea class="form-control" name="message" rows="4" required></textarea>
</div>
<div class="mb-2">
<label class="form-label">Screenshots (optional)</label>
<input type="file" name="screenshots" class="form-control" multiple accept="image/png,image/jpeg,image/jpg,image/gif,image/webp" />
<div class="form-text">You can attach multiple screenshots (PNG, JPG, GIF, WEBP, max 5MB each)</div>
</div>
<button type="submit" class="btn btn-primary">Post reply</button>
</form>
{% else %}
@ -95,6 +136,16 @@
<h2 class="h6">Actions</h2>
{% if active_role == 'admin' %}
{% if item.deleted_at %}
{# Item is deleted - show permanent delete option #}
<div class="alert alert-warning mb-2" style="font-size: 0.9rem;">
This item is deleted.
</div>
<form method="post" action="{{ url_for('main.feedback_permanent_delete', item_id=item.id) }}" onsubmit="return confirm('Permanently delete this item and all screenshots? This cannot be undone!');">
<button type="submit" class="btn btn-danger w-100">Permanent Delete</button>
</form>
{% else %}
{# Item is not deleted - show normal actions #}
{% if item.status == 'resolved' %}
<form method="post" action="{{ url_for('main.feedback_resolve', item_id=item.id) }}" class="mb-2">
<input type="hidden" name="action" value="reopen" />
@ -110,6 +161,7 @@
<form method="post" action="{{ url_for('main.feedback_delete', item_id=item.id) }}" onsubmit="return confirm('Delete this item?');">
<button type="submit" class="btn btn-danger w-100">Delete</button>
</form>
{% endif %}
{% else %}
<div class="text-muted">Only administrators can resolve or delete items.</div>
{% endif %}

View File

@ -6,7 +6,7 @@
<a class="btn btn-outline-secondary" href="{{ url_for('main.feedback_page') }}">Back</a>
</div>
<form method="post" enctype="multipart/form-data" class="card">
<div class="card-body">
<div class="row g-3">
<div class="col-12 col-md-3">
@ -28,6 +28,11 @@
<label class="form-label">Component (optional)</label>
<input type="text" name="component" class="form-control" />
</div>
<div class="col-12">
<label class="form-label">Screenshots (optional)</label>
<input type="file" name="screenshots" class="form-control" multiple accept="image/png,image/jpeg,image/jpg,image/gif,image/webp" />
<div class="form-text">You can attach multiple screenshots (PNG, JPG, GIF, WEBP, max 5MB each)</div>
</div>
</div>
</div>
<div class="card-footer d-flex justify-content-end">

View File

@ -14,12 +14,12 @@
<div class="d-flex justify-content-between align-items-center my-2">
<div>
{% if has_prev %}
<a class="btn btn-outline-secondary btn-sm" href="{{ url_for('main.inbox', page=page-1, q=q) }}">Previous</a>
{% else %}
<button class="btn btn-outline-secondary btn-sm" disabled>Previous</button>
{% endif %}
{% if has_next %}
<a class="btn btn-outline-secondary btn-sm ms-2" href="{{ url_for('main.inbox', page=page+1, q=q) }}">Next</a>
{% else %}
<button class="btn btn-outline-secondary btn-sm ms-2" disabled>Next</button>
{% endif %}
@ -73,7 +73,7 @@
<tr>
{% if can_bulk_delete %}
<th scope="col" style="width: 34px;">
<input class="form-check-input" type="checkbox" id="inbox_select_all" autocomplete="off" />
</th>
{% endif %}
<th scope="col">From</th>
@ -93,7 +93,7 @@
<tr class="inbox-row" data-message-id="{{ row.id }}" style="cursor: pointer;">
{% if can_bulk_delete %}
<td onclick="event.stopPropagation();">
<input class="form-check-input inbox_row_cb" type="checkbox" value="{{ row.id }}" autocomplete="off" />
</td>
{% endif %}
<td>{{ row.from_address }}</td>

View File

@ -287,6 +287,60 @@
(function () {
var currentRunId = null;
// Cross-browser copy to clipboard function
function copyToClipboard(text, button) {
// Method 1: Modern Clipboard API (works in most browsers with HTTPS)
if (navigator.clipboard && navigator.clipboard.writeText) {
navigator.clipboard.writeText(text)
.then(function () {
showCopyFeedback(button);
})
.catch(function () {
// Fallback to method 2 if clipboard API fails
fallbackCopy(text, button);
});
} else {
// Method 2: Legacy execCommand method
fallbackCopy(text, button);
}
}
function fallbackCopy(text, button) {
var textarea = document.createElement('textarea');
textarea.value = text;
textarea.style.position = 'fixed';
textarea.style.opacity = '0';
textarea.style.top = '0';
textarea.style.left = '0';
document.body.appendChild(textarea);
textarea.focus();
textarea.select();
try {
var successful = document.execCommand('copy');
if (successful) {
showCopyFeedback(button);
} else {
// If execCommand fails, use prompt as last resort
window.prompt('Copy ticket number:', text);
}
} catch (err) {
// If all else fails, show prompt
window.prompt('Copy ticket number:', text);
}
document.body.removeChild(textarea);
}
function showCopyFeedback(button) {
if (!button) return;
var original = button.textContent;
button.textContent = '✓';
setTimeout(function () {
button.textContent = original;
}, 800);
}
function apiJson(url, opts) {
opts = opts || {};
opts.headers = opts.headers || {};
@ -319,12 +373,14 @@
html += '<div class="mb-2"><strong>Tickets</strong><div class="mt-1">';
tickets.forEach(function (t) {
var status = t.resolved_at ? 'Resolved' : 'Active';
var ticketCode = (t.ticket_code || '').toString();
html += '<div class="mb-2 border rounded p-2" data-alert-type="ticket" data-id="' + t.id + '">' +
'<div class="d-flex align-items-start justify-content-between gap-2">' +
'<div class="flex-grow-1 min-w-0">' +
'<div class="text-truncate">' +
'<span class="me-1" title="Ticket">🎫</span>' +
'<span class="fw-semibold">' + escapeHtml(ticketCode) + '</span>' +
'<button type="button" class="btn btn-sm btn-outline-secondary ms-2 py-0 px-1" title="Copy ticket number" data-action="copy-ticket" data-code="' + escapeHtml(ticketCode) + '"></button>' +
'<span class="ms-2 badge ' + (t.resolved_at ? 'bg-secondary' : 'bg-warning text-dark') + '">' + status + '</span>' +
'</div>' +
'</div>' +
@ -371,7 +427,16 @@
ev.preventDefault();
var action = btn.getAttribute('data-action');
var id = btn.getAttribute('data-id');
if (!action) return;
if (action === 'copy-ticket') {
var code = btn.getAttribute('data-code') || '';
if (!code) return;
copyToClipboard(code, btn);
return;
}
if (!id) return;
if (action === 'resolve-ticket') {
if (!confirm('Mark ticket as resolved?')) return;
apiJson('/api/tickets/' + encodeURIComponent(id) + '/resolve', {method: 'POST', body: '{}'})

View File

@ -2,6 +2,16 @@
{% block content %}
<h2 class="mb-3">Jobs</h2>
{% if selected_customer_id %}
<div class="alert alert-info d-flex justify-content-between align-items-center py-2" role="alert">
<span>
Filtered on customer:
<strong>{{ selected_customer_name or ('#' ~ selected_customer_id) }}</strong>
</span>
<a href="{{ url_for('main.jobs') }}" class="btn btn-sm btn-outline-primary">Clear filter</a>
</div>
{% endif %}
<div class="table-responsive">
<table class="table table-sm table-hover align-middle">
<thead class="table-light">

View File

@ -422,7 +422,10 @@ function loadRawData() {
function loadReports() {
setTableLoading('Loading…');
var params = new URLSearchParams(window.location.search || '');
var q = (params.get('q') || '').trim();
var apiUrl = '/api/reports' + (q ? ('?q=' + encodeURIComponent(q)) : '');
fetch(apiUrl, { credentials: 'same-origin' })
.then(function (r) { return r.json(); })
.then(function (data) {
renderTable((data && data.items) ? data.items : []);

View File

@ -48,7 +48,7 @@
<thead class="table-light">
<tr>
<th scope="col" style="width: 34px;">
<input class="form-check-input" type="checkbox" id="rc_select_all" autocomplete="off" />
</th>
<th scope="col">Customer</th>
<th scope="col">Backup</th>
@ -63,7 +63,7 @@
{% for r in rows %}
<tr class="rc-job-row" data-job-id="{{ r.job_id }}" style="cursor: pointer;">
<td onclick="event.stopPropagation();">
<input class="form-check-input rc_row_cb" type="checkbox" value="{{ r.job_id }}" autocomplete="off" />
</td>
<td>{{ r.customer_name }}</td>
<td>{{ r.backup_software }}</td>
@ -447,6 +447,60 @@ function escapeHtml(s) {
.replace(/'/g, "&#39;");
}
// Cross-browser copy to clipboard function
function copyToClipboard(text, button) {
// Method 1: Modern Clipboard API (works in most browsers with HTTPS)
if (navigator.clipboard && navigator.clipboard.writeText) {
navigator.clipboard.writeText(text)
.then(function () {
showCopyFeedback(button);
})
.catch(function () {
// Fallback to method 2 if clipboard API fails
fallbackCopy(text, button);
});
} else {
// Method 2: Legacy execCommand method
fallbackCopy(text, button);
}
}
function fallbackCopy(text, button) {
var textarea = document.createElement('textarea');
textarea.value = text;
textarea.style.position = 'fixed';
textarea.style.opacity = '0';
textarea.style.top = '0';
textarea.style.left = '0';
document.body.appendChild(textarea);
textarea.focus();
textarea.select();
try {
var successful = document.execCommand('copy');
if (successful) {
showCopyFeedback(button);
} else {
// If execCommand fails, use prompt as last resort
window.prompt('Copy ticket number:', text);
}
} catch (err) {
// If all else fails, show prompt
window.prompt('Copy ticket number:', text);
}
document.body.removeChild(textarea);
}
function showCopyFeedback(button) {
if (!button) return;
var original = button.textContent;
button.textContent = '✓';
setTimeout(function () {
button.textContent = original;
}, 800);
}
function getSelectedJobIds() {
var cbs = table.querySelectorAll('tbody .rc_row_cb');
var ids = [];
@ -840,20 +894,7 @@ table.addEventListener('change', function (e) {
if (action === 'copy-ticket') {
var code = btn.getAttribute('data-code') || '';
if (!code) return;
copyToClipboard(code, btn);
navigator.clipboard.writeText(code)
.then(function () {
var original = btn.textContent;
btn.textContent = '✓';
setTimeout(function () { btn.textContent = original; }, 800);
})
.catch(function () {
// Fallback: select/copy via prompt
window.prompt('Copy ticket number:', code);
});
} else {
window.prompt('Copy ticket number:', code);
}
return;
}

View File

@ -0,0 +1,75 @@
{% extends "layout/base.html" %}
{% block content %}
<h2 class="mb-3">Search</h2>
{% if query %}
<p class="text-muted mb-3">
Query: <strong>{{ query }}</strong> | Total hits: <strong>{{ total_hits }}</strong>
</p>
{% else %}
<div class="alert alert-secondary py-2">
Enter a search term in the top navigation bar.
</div>
{% endif %}
{% for section in sections %}
<div class="card mb-3" id="search-section-{{ section['key'] }}" style="scroll-margin-top: 96px;">
<div class="card-header d-flex justify-content-between align-items-center">
<span>{{ section['title'] }} ({{ section['total'] }})</span>
<a href="{{ section['view_all_url'] }}" class="btn btn-sm btn-outline-secondary">Open {{ section['title'] }}</a>
</div>
{% if section['key'] == 'daily_jobs' %}
<div class="px-3 py-2 small text-muted border-bottom">
Note: the Daily Jobs page itself only shows results for the selected day; search results can also include matches for jobs on other days.
</div>
{% endif %}
<div class="card-body p-0">
{% if section['items'] %}
<div class="table-responsive">
<table class="table table-sm mb-0 align-middle">
<thead class="table-light">
<tr>
<th>Result</th>
<th>Details</th>
<th>Meta</th>
</tr>
</thead>
<tbody>
{% for item in section['items'] %}
<tr>
<td>
{% if item.link %}
<a href="{{ item.link }}">{{ item.title }}</a>
{% else %}
{{ item.title }}
{% endif %}
</td>
<td>{{ item.subtitle }}</td>
<td>{{ item.meta }}</td>
</tr>
{% endfor %}
</tbody>
</table>
</div>
{% else %}
<div class="p-3 text-muted">No results in this section.</div>
{% endif %}
</div>
{% if section['total_pages'] > 1 %}
<div class="card-footer d-flex justify-content-between align-items-center small">
<span class="text-muted">
Page {{ section['current_page'] }} of {{ section['total_pages'] }} ({{ section['total'] }} results)
</span>
<div class="d-flex gap-2">
{% if section['has_prev'] %}
<a class="btn btn-sm btn-outline-secondary" href="{{ section['prev_url'] }}#search-section-{{ section['key'] }}">Previous</a>
{% endif %}
{% if section['has_next'] %}
<a class="btn btn-sm btn-outline-secondary" href="{{ section['next_url'] }}#search-section-{{ section['key'] }}">Next</a>
{% endif %}
</div>
</div>
{% endif %}
</div>
{% endfor %}
{% endblock %}

View File

@ -528,8 +528,16 @@
<div class="col-md-4 d-flex align-items-end">
<button type="submit" class="btn btn-primary w-100">Import jobs</button>
</div>
<div class="col-12">
<div class="form-check">
<input class="form-check-input" type="checkbox" value="1" id="include_autotask_ids_jobs" name="include_autotask_ids" />
<label class="form-check-label" for="include_autotask_ids_jobs">
Include Autotask IDs from import file
</label>
</div>
</div>
<div class="col-md-8">
<div class="form-text">Use a JSON export created by this application. Leave Autotask IDs unchecked for sandbox/development environments with a different Autotask database.</div>
</div>
</div>
</form> </form>

cove_api_test.py Normal file
View File

@ -0,0 +1,310 @@
#!/usr/bin/env python3
"""
Cove Data Protection API Test Script
=======================================
Verified working via Postman (2026-02-23). Uses confirmed column codes.
Usage:
python3 cove_api_test.py --username "api-user" --password "secret"
Or via environment variables:
COVE_USERNAME="api-user" COVE_PASSWORD="secret" python3 cove_api_test.py
Optional:
--url API endpoint (default: https://api.backup.management/jsonapi)
--records Max records to fetch (default: 50)
"""
import argparse
import json
import os
import sys
from datetime import datetime, timezone
import requests
API_URL = "https://api.backup.management/jsonapi"
# Session status codes (F00 / F15 / F09)
SESSION_STATUS = {
1: "In process",
2: "Failed",
3: "Aborted",
5: "Completed",
6: "Interrupted",
7: "NotStarted",
8: "CompletedWithErrors",
9: "InProgressWithFaults",
10: "OverQuota",
11: "NoSelection",
12: "Restarted",
}
# Backupchecks status mapping
STATUS_MAP = {
1: "Warning", # In process
2: "Error", # Failed
3: "Error", # Aborted
5: "Success", # Completed
6: "Error", # Interrupted
7: "Warning", # NotStarted
8: "Warning", # CompletedWithErrors
9: "Warning", # InProgressWithFaults
10: "Error", # OverQuota
11: "Warning", # NoSelection
12: "Warning", # Restarted
}
# Confirmed working columns (verified via Postman 2026-02-23)
COLUMNS = [
"I1", "I18", "I8", "I78",
"D09F00", "D09F09", "D09F15", "D09F08",
"D1F00", "D1F15",
"D10F00", "D10F15",
"D11F00", "D11F15",
"D19F00", "D19F15",
"D20F00", "D20F15",
"D5F00", "D5F15",
"D23F00", "D23F15",
]
# Datasource labels
DATASOURCE_LABELS = {
"D09": "Total",
"D1": "Files & Folders",
"D2": "System State",
"D10": "VssMsSql (SQL Server)",
"D11": "VssSharePoint",
"D19": "M365 Exchange",
"D20": "M365 OneDrive",
"D5": "M365 SharePoint",
"D23": "M365 Teams",
}
def _post(url: str, payload: dict, timeout: int = 30) -> dict:
headers = {"Content-Type": "application/json"}
resp = requests.post(url, json=payload, headers=headers, timeout=timeout)
resp.raise_for_status()
return resp.json()
def login(url: str, username: str, password: str) -> tuple[str, int]:
"""Authenticate and return (visa, partner_id)."""
payload = {
"jsonrpc": "2.0",
"id": "jsonrpc",
"method": "Login",
"params": {
"username": username,
"password": password,
},
}
data = _post(url, payload)
if "error" in data:
raise RuntimeError(f"Login failed: {data['error']}")
visa = data.get("visa")
if not visa:
raise RuntimeError(f"No visa token in response: {data}")
result = data.get("result", {})
partner_id = result.get("PartnerId") or result.get("result", {}).get("PartnerId")
if not partner_id:
raise RuntimeError(f"Could not find PartnerId in response: {data}")
return visa, int(partner_id)
def enumerate_statistics(url: str, visa: str, partner_id: int, columns: list[str], records: int = 50) -> dict:
payload = {
"jsonrpc": "2.0",
"visa": visa,
"id": "jsonrpc",
"method": "EnumerateAccountStatistics",
"params": {
"query": {
"PartnerId": partner_id,
"StartRecordNumber": 0,
"RecordsCount": records,
"Columns": columns,
}
},
}
return _post(url, payload)
def fmt_ts(value) -> str:
if not value:
return "(none)"
try:
ts = int(value)
if ts == 0:
return "(none)"
dt = datetime.fromtimestamp(ts, tz=timezone.utc)
return dt.strftime("%Y-%m-%d %H:%M UTC")
except (ValueError, TypeError, OSError):
return str(value)
def fmt_status(value) -> str:
if value is None:
return "(none)"
try:
code = int(value)
bc = STATUS_MAP.get(code, "?")
label = SESSION_STATUS.get(code, "Unknown")
return f"{code} ({label}) → {bc}"
except (ValueError, TypeError):
return str(value)
def fmt_colorbar(value: str) -> str:
if not value:
return "(none)"
icons = {"5": "✅", "8": "⚠️", "2": "❌", "1": "🔄", "0": "·"}
return "".join(icons.get(c, c) for c in str(value))
def print_header(title: str) -> None:
print()
print("=" * 70)
print(f" {title}")
print("=" * 70)
def run(url: str, username: str, password: str, records: int, debug: bool = False) -> None:
print_header("Cove Data Protection API Test")
print(f" URL: {url}")
print(f" Username: {username}")
# Login
print_header("Step 1: Login")
visa, partner_id = login(url, username, password)
print(f" ✅ Login OK")
print(f" PartnerId: {partner_id}")
print(f" Visa: {visa[:40]}...")
# Fetch statistics
print_header("Step 2: EnumerateAccountStatistics")
print(f" Columns: {', '.join(COLUMNS)}")
print(f" Records: max {records}")
data = enumerate_statistics(url, visa, partner_id, COLUMNS, records)
if debug:
print(f"\n RAW response (first 2000 chars):")
print(json.dumps(data, indent=2)[:2000])
if "error" in data:
err = data["error"]
print(f" ❌ FAILED with error {err.get('code')}: {err.get('message')}")
print(f" Data: {err.get('data')}")
sys.exit(1)
result = data.get("result")
if result is None:
print(" ⚠️ result is null; raw response:")
print(json.dumps(data, indent=2)[:1000])
sys.exit(0)
if debug:
print(f"\n result type: {type(result).__name__}")
if isinstance(result, dict):
print(f" result keys: {list(result.keys())}")
# Unwrap possible nested result
if isinstance(result, dict) and "result" in result:
result = result["result"]
# Result can be a list directly or wrapped in Accounts key
accounts = result if isinstance(result, list) else result.get("Accounts", []) if isinstance(result, dict) else []
total = len(accounts)
print(f" ✅ SUCCESS {total} account(s) returned")
# Per-account output
print_header(f"Step 3: Account Details ({total} total)")
for i, acc in enumerate(accounts):
# Settings is a list of single-key dicts: [{"D09F00": "5"}, {"I1": "name"}, ...]
# Flatten to a single dict for easy lookup.
s: dict = {}
for item in acc.get("Settings", []):
s.update(item)
account_id = acc.get("AccountId", "?")
device_name = s.get("I1", "(no name)")
computer = s.get("I18") or "(M365 tenant)"
customer = s.get("I8", "")
active_ds = s.get("I78", "")
print(f"\n [{i+1}/{total}] {device_name} (AccountId: {account_id})")
print(f" Computer : {computer}")
print(f" Customer : {customer}")
print(f" Datasrc : {active_ds}")
# Total (D09)
print(f" Total:")
print(f" Status : {fmt_status(s.get('D09F00'))}")
print(f" Last session: {fmt_ts(s.get('D09F15'))}")
print(f" Last success: {fmt_ts(s.get('D09F09'))}")
print(f" 28-day bar : {fmt_colorbar(s.get('D09F08'))}")
# Per-datasource (only if present in response)
ds_pairs = [
("D1", "D1F00", "D1F15"),
("D10", "D10F00", "D10F15"),
("D11", "D11F00", "D11F15"),
("D19", "D19F00", "D19F15"),
("D20", "D20F00", "D20F15"),
("D5", "D5F00", "D5F15"),
("D23", "D23F00", "D23F15"),
]
for ds_code, f00_col, f15_col in ds_pairs:
f00 = s.get(f00_col)
f15 = s.get(f15_col)
if f00 is None and f15 is None:
continue
label = DATASOURCE_LABELS.get(ds_code, ds_code)
print(f" {label}:")
print(f" Status : {fmt_status(f00)}")
print(f" Last session: {fmt_ts(f15)}")
# Summary
print_header("Summary")
status_counts: dict[str, int] = {}
for acc in accounts:
flat: dict = {}
for item in acc.get("Settings", []):
flat.update(item)
raw = flat.get("D09F00")
bc = STATUS_MAP.get(int(raw), "Unknown") if raw is not None else "No data"
status_counts[bc] = status_counts.get(bc, 0) + 1
for status, count in sorted(status_counts.items()):
icon = {"Success": "✅", "Warning": "⚠️", "Error": "❌"}.get(status, " ")
print(f" {icon} {status}: {count}")
print(f"\n Total accounts: {total}")
print()
def main() -> None:
parser = argparse.ArgumentParser(description="Test Cove Data Protection API")
parser.add_argument("--url", default=os.environ.get("COVE_URL", API_URL))
parser.add_argument("--username", default=os.environ.get("COVE_USERNAME", ""))
parser.add_argument("--password", default=os.environ.get("COVE_PASSWORD", ""))
parser.add_argument("--records", type=int, default=50, help="Max accounts to fetch")
parser.add_argument("--debug", action="store_true", help="Print raw API responses")
args = parser.parse_args()
if not args.username or not args.password:
print("Error: --username and --password are required.")
print("Or set COVE_USERNAME and COVE_PASSWORD environment variables.")
sys.exit(1)
run(args.url, args.username, args.password, args.records, args.debug)
if __name__ == "__main__":
main()

This file documents all changes made to this project via Claude Code.
## [2026-02-23]
### Added
- `cove_api_test.py` standalone Python test script to verify Cove Data Protection API column codes
- Tests D9Fxx (Total), D10Fxx (VssMsSql), D11Fxx (VssSharePoint), and D1Fxx (Files&Folders)
- Displays backup status (F00), timestamps (F09/F15/F18), error counts (F06) per account
- Accepts credentials via CLI args or environment variables
- Summary output showing which column sets work
- Updated `docs/cove_data_protection_api_calls_known_info.md` with N-able support feedback:
- D02/D03 are legacy; use D10/D11 or D9 (Total) instead
- All users have the same API access (no MSP-level restriction)
- Session status codes documented (D9F00: 2=Failed, 5=Completed, 8=CompletedWithErrors, etc.)
- Updated `TODO-cove-data-protection.md` with breakthrough status and next steps
## [2026-02-19]
### Added
- Explicit `Include Autotask IDs` import option in the Approved Jobs JSON import form (Settings -> Maintenance)
- Explicit `Include Autotask IDs` import option in the Customers CSV import form
### Changed
- Approved Jobs import now only applies `autotask_company_id` and `autotask_company_name` when the import option is checked
- Customers CSV import now only applies Autotask mapping fields when the import option is checked
- Import success and audit output now includes whether Autotask IDs were imported
- 3CX parser now recognizes `3CX Notification: Update Successful - <host>` as an informational run with `backup_software: 3CX`, `backup_type: Update`, and `overall_status: Success`, and excludes this type from schedule inference (no Expected/Missed generation)
- Run Checks now hides only non-backup 3CX informational types (`Update`, `SSL Certificate`), while other backup software/types remain visible
- Restored remark visibility in Run Checks and Job Details alerts by loading remarks from both sources: explicit run links (`remark_job_runs`) and active job scopes (`remark_scopes`) with duplicate prevention
## [2026-02-16]
### Added
- Customer-to-jobs navigation by making customer names clickable on the Customers page (`/jobs?customer_id=<id>`)
- Jobs page customer filter context UI with an active filter banner and a "Clear filter" action
- Global search page (`/search`) with grouped results for Inbox, Customers, Jobs, Daily Jobs, Run Checks, Tickets, Existing overrides, and Reports
- Navbar search form to trigger global search from all authenticated pages
- Dedicated Remarks section in global search results (with paging and detail links), so remark records are searchable alongside tickets
### Changed
- `/jobs` route now accepts optional `customer_id` and returns only jobs for that customer when provided
- Default Jobs listing keeps inactive-customer filtering only when no `customer_id` filter is applied
- Updated `docs/technical-notes-codex.md` with a new "Last updated" date, Customers->Jobs navigation notes, and test build/push validation snapshot
- Search matching is now case-insensitive with wildcard support (`*`) and automatic contains behavior (`*term*`) per search term
- Global search visibility now only includes sections accessible to the currently active role
- Updated `docs/technical-notes-codex.md` with a dedicated Global Grouped Search section (route/UI/behavior/access rules) and latest test build digest for `v20260216-02-global-search`
- Global search now supports per-section pagination (previous/next), so results beyond the first 10 can be browsed per section while preserving current query/state
- Daily Jobs search result metadata now includes expected run time, success indicator, and run count for the selected day
- Daily Jobs search result links now open the same Daily Jobs modal flow via `open_job_id` (instead of only navigating to the overview page)
- Updated `docs/technical-notes-codex.md` with search pagination query params, Daily Jobs modal-open search behavior, and latest successful test-build digest
- Search pagination buttons now preserve scroll position by linking back to the active section anchor after page navigation
- "Open <section>" behavior now passes `q` into destination pages and applies page-level filtering, so opened overviews reflect the same search term
- Filtering support on Inbox, Customers, Jobs, Daily Jobs, Run Checks, Tickets, Overrides, and Reports now accepts wildcard-enabled `q` terms from search
- Reports frontend loading (`/api/reports`) now forwards URL `q` so client-side refresh keeps the same filtered result set
- Daily Jobs search section UI now shows an explicit English note that the Daily Jobs page itself is day-scoped while search matches can reflect jobs across other days
- Updated `docs/technical-notes-codex.md` to include remarks in grouped search sections, `p_remarks` pagination key, q-forwarding to overview pages, and latest test-build digest
### Fixed
- `/search` page crash (`TypeError: 'builtin_function_or_method' object is not iterable`) by replacing Jinja dict access from `section.items` to `section['items']` in `templates/main/search.html`
## [2026-02-13]
### Added
- Added internal technical reference document `docs/technical-notes-codex.md` with repository structure, application architecture, processing flow, parser system rules, ticketing/Autotask constraints, feedback attachment notes, deployment/build workflow, and operational attention points
### Changed
- Changed `docs/technical-notes-codex.md` language from Dutch to English to align with project language rules for documentation
### Fixed
- Fixed Autotask tickets and internal tickets not being linked to missed runs by calling `link_open_internal_tickets_to_run` after creating missed JobRun records in `_ensure_missed_runs_for_job` (both weekly and monthly schedules), ensuring missed runs now receive the same ticket propagation as email-based runs
- Fixed checkboxes being automatically re-selected after delete actions on Inbox and Run Checks pages by adding `autocomplete="off"` attribute to all checkboxes, preventing browser from restoring previous checkbox states after page reload
## [2026-02-12]
### Fixed
- Fixed tickets not being displayed in Run Checks modal detail view (Meldingen section) by extending `/api/job-runs/<run_id>/alerts` endpoint to include both run-specific tickets (via ticket_job_runs) and job-level tickets (via ticket_scopes), ensuring newly created tickets are visible immediately in the modal instead of only after being resolved
- Fixed copy ticket button not working in Edge browser on Job Details page by moving clipboard functions (copyToClipboard, fallbackCopy, showCopyFeedback) inside IIFE scope for proper closure access (Edge is stricter than Firefox about scope resolution)
## [2026-02-10]
### Added
- Added screenshot attachment support to Feedback/Bug system (user request: allow screenshots for bugs/features)
- New database model: `FeedbackAttachment` with file_data (BYTEA), filename, mime_type, file_size
- Upload support on feedback creation form (multiple files, PNG/JPG/GIF/WEBP, max 5MB each)
- Upload support on reply forms (attach screenshots when replying)
- Inline image display on feedback detail page (thumbnails with click-to-view-full-size)
- Screenshot display for both main feedback items and replies
- File validation: image type verification using imghdr (not just extension), size limits, secure filename handling
- New route: `/feedback/attachment/<id>` to serve images (access-controlled, admins can view deleted item attachments)
- Database migration: auto-creates `feedback_attachments` table with indexes on startup
- Automatic CASCADE delete: removing feedback item or reply automatically removes associated attachments
- Added admin-only deleted items view and permanent delete functionality to Feedback system
- "Show deleted items" checkbox on feedback list page (admin only)
- Deleted items shown with reduced opacity + background color and "Deleted" badge
- Permanent delete action removes item + all attachments from database (hard delete with CASCADE)
- Attachment count shown in deletion confirmation message
- Admins can view detail pages of deleted items including their screenshots
- Two-stage delete: soft delete (audit trail) → permanent delete (database cleanup)
- Prevents accidental permanent deletion (requires item to be soft-deleted first)
- Security: non-admin users cannot view deleted items or their attachments (404 response)
- Added copy ticket button (⧉) to Job Details page modal for quickly copying ticket numbers to clipboard (previously only available on Run Checks page)
### Fixed
- Fixed cross-browser clipboard copy functionality for ticket numbers (previously required manual copy popup in Edge browser)
- Implemented three-tier fallback mechanism: modern Clipboard API → legacy execCommand('copy') → prompt fallback
- Copy button now works directly in all browsers (Firefox, Edge, Chrome) without requiring user interaction
- Applied improved copy mechanism to both Run Checks and Job Details pages
- Fixed Autotask ticket not being automatically linked to new runs when internal ticket is resolved by implementing independent Autotask propagation strategy (now checks for most recent non-deleted and non-resolved Autotask ticket on job regardless of internal ticket status, ensuring PSA ticket reference persists across runs until explicitly resolved or deleted)
- Fixed internal and Autotask tickets being linked to new runs even after being resolved by removing date-based "open" logic from ticket query (tickets now only link to new runs if they are genuinely unresolved, not based on run date comparisons)
- Fixed Job Details page showing resolved tickets for ALL runs by implementing two-source ticket display: directly linked tickets (via ticket_job_runs) are always shown for audit trail, while active window tickets (via scope query) are only shown if unresolved, preserving historical ticket links while preventing resolved tickets from appearing on new runs

# Cove Data Protection (N-able Backup) Known Information on API Calls
Date: 2026-02-10 (updated 2026-02-23)
Status: Pending re-test with corrected column codes
## ⚠️ Important Update (2026-02-23)
**N-able support (Andrew Robinson, Applications Engineer) confirmed:**
1. **D02 and D03 are legacy column codes**; use **D10 and D11** instead.
2. **There is no MSP-level restriction**; all API users have the same access level.
3. New documentation: https://developer.n-able.com/n-able-cove/docs/getting-started
4. Column code reference: https://developer.n-able.com/n-able-cove/docs/column-codes
**Impact:** The security error 13501 was caused by using legacy D02Fxx/D03Fxx codes.
Using D9Fxx (Total aggregate), D10Fxx (VssMsSql), D11Fxx (VssSharePoint) should work.
**Key newly available columns (pending re-test):**
- `D9F00` = Last Session Status (2=Failed, 5=Completed, 8=CompletedWithErrors, etc.)
- `D9F06` = Last Session Errors Count
- `D9F09` = Last Successful Session Timestamp (Unix)
- `D9F12` = Session Duration
- `D9F15` = Last Session Timestamp (Unix)
- `D9F17` = Last Completed Session Status
- `D9F18` = Last Completed Session Timestamp (Unix)
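These timestamp fields arrive as Unix epoch seconds and need formatting before display. A minimal sketch of an `fmt_ts`-style helper (the name mirrors the test script, but this implementation is an assumption, rendering in UTC):

```python
from datetime import datetime, timezone

def fmt_ts(value):
    """Render a Unix-epoch field (F09/F15/F18) as a UTC timestamp; '-' when absent."""
    if value in (None, "", 0, "0"):
        return "-"
    try:
        ts = int(value)
    except (TypeError, ValueError):
        return str(value)  # pass through anything unexpectedly non-numeric
    return datetime.fromtimestamp(ts, tz=timezone.utc).strftime("%Y-%m-%d %H:%M:%S")
```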
**Session status codes (F00):**
1=In process, 2=Failed, 3=Aborted, 5=Completed, 6=Interrupted,
7=NotStarted, 8=CompletedWithErrors, 9=InProgressWithFaults,
10=OverQuota, 11=NoSelection, 12=Restarted
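These codes translate directly into a lookup table. An assumed sketch of such a `STATUS_MAP` and formatter; the test script's actual map may instead collapse codes into coarser Success/Warning/Error buckets for its summary:

```python
# Assumed mapping of F00 session status codes to labels, per the list above.
STATUS_MAP = {
    1: "In process",
    2: "Failed",
    3: "Aborted",
    5: "Completed",
    6: "Interrupted",
    7: "NotStarted",
    8: "CompletedWithErrors",
    9: "InProgressWithFaults",
    10: "OverQuota",
    11: "NoSelection",
    12: "Restarted",
}

def fmt_status(raw):
    """Translate a raw F00 value (int or numeric string) into a label."""
    if raw is None:
        return "No data"
    try:
        return STATUS_MAP.get(int(raw), f"Unknown ({raw})")
    except (TypeError, ValueError):
        return f"Unknown ({raw})"
```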
**Test script:** `cove_api_test.py` in the project root; run it to verify the new column codes.
---
## Summary of original findings (2026-02-10)
API access to Cove Data Protection via JSON-RPC **works**, but was **heavily restricted**
because legacy column codes (D02Fxx, D03Fxx) were being used. Now resolved.
Previous error:
```
Operation failed because of security reasons (error 13501)
```
---
## Authentication model (confirmed)
- Endpoint: https://api.backup.management/jsonapi
- Protocol: JSON-RPC 2.0
- Method: POST only
- Authentication flow:
1. Login method is called
2. Response returns a **visa** token (top-level field)
3. The visa **must be included in every subsequent call**
4. Cove may return a new visa in later responses (token chaining)
### Login request (working)
```json
{
"jsonrpc": "2.0",
"method": "Login",
"params": {
"partner": "<EXACT customer/partner name>",
"username": "<api login name>",
"password": "<password>"
},
"id": "1"
}
```
### Login response structure (important)
```json
{
"result": {
"result": {
"PartnerId": <number>,
"Name": "<login name>",
"Flags": ["SecurityOfficer","NonInteractive"]
}
},
"visa": "<visa token>"
}
```
Notes:
- `visa` is **not** inside `result`, but at top level
- `PartnerId` is found at `result.result.PartnerId`
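Putting the flow together, a minimal stdlib-only login sketch (field locations taken from the examples above; the later commit notes indicate login may work with only username + password, so `partner` is treated as optional here):

```python
import json
import urllib.request

API_URL = "https://api.backup.management/jsonapi"

def build_login_payload(username, password, partner=None):
    """Build the JSON-RPC Login request; 'partner' only included if supplied."""
    params = {"username": username, "password": password}
    if partner:
        params["partner"] = partner
    return {"jsonrpc": "2.0", "method": "Login", "params": params, "id": "1"}

def extract_session(response_json):
    """The visa sits at the TOP level of the response, not inside 'result';
    PartnerId is nested at result.result.PartnerId."""
    visa = response_json.get("visa")
    partner_id = response_json.get("result", {}).get("result", {}).get("PartnerId")
    return visa, partner_id

def login(username, password, partner=None):
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_login_payload(username, password, partner)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return extract_session(json.load(resp))
```

Remember to carry the returned visa into every subsequent call, and to replace it whenever a response supplies a new one (token chaining).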
---
## API user scope (critical finding)
- API users are **always bound to a single Partner (customer)** unless created at MSP/root level
- In this environment, it is **not possible to create an MSP-level API user**
- All testing was therefore done with **customer-scoped API users**
Impact:
- Cross-customer enumeration is impossible
- Only data belonging to the linked customer can be queried
- Some enumerate/reporting calls are blocked regardless of role
---
## EnumerateAccountStatistics what works and what does not
### Method
```json
{
"jsonrpc": "2.0",
"method": "EnumerateAccountStatistics",
"visa": "<visa>",
"params": {
"query": {
"PartnerId": <partner_id>,
"SelectionMode": "Merged",
"StartRecordNumber": 0,
"RecordsCount": 50,
"Columns": [ ... ]
}
}
}
```
### Mandatory behavior
- **Columns are required**; omitting them returns `result: null`
- The API behaves as an **allowlist**:
- If *any* requested column is restricted, the **entire call fails** with error 13501
### Confirmed working (safe) column set
The following column set works reliably:
- I1 → account / device / tenant identifier
- I14 → used storage (bytes)
- I18 → computer name (if applicable)
- D01F00–D01F07 → numeric metrics (exact semantics TBD)
- D09F00 → numeric status/category code
Example (validated working):
```json
"Columns": [
"I1","I14","I18",
"D01F00","D01F01","D01F02","D01F03",
"D01F04","D01F05","D01F06","D01F07",
"D09F00"
]
```
### Confirmed restricted (cause security error 13501)
- Entire D02Fxx range
- Entire D03Fxx range
- Broad I-ranges (e.g. I1–I10 batches)
- Many individually tested I-codes not in the safe set
Even adding **one restricted code** causes the entire call to fail.
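A sketch of a call that stays inside the allowlist, plus the `Settings` flattening the response needs (per the 2026-02-23 fix, the API returns `Settings` as a list of single-key dicts, not a flat dict):

```python
# Confirmed-safe column set from the section above; adding even one
# restricted code (e.g. anything in D02Fxx/D03Fxx) fails the whole call.
SAFE_COLUMNS = [
    "I1", "I14", "I18",
    "D01F00", "D01F01", "D01F02", "D01F03",
    "D01F04", "D01F05", "D01F06", "D01F07",
    "D09F00",
]

def build_stats_payload(visa, partner_id, records=50):
    """EnumerateAccountStatistics request restricted to the validated allowlist."""
    return {
        "jsonrpc": "2.0",
        "method": "EnumerateAccountStatistics",
        "visa": visa,
        "params": {
            "query": {
                "PartnerId": partner_id,
                "SelectionMode": "Merged",
                "StartRecordNumber": 0,
                "RecordsCount": records,
                "Columns": SAFE_COLUMNS,
            }
        },
    }

def flatten_settings(account):
    """'Settings' arrives as a list of single-key dicts; merge before lookups."""
    flat = {}
    for item in account.get("Settings", []):
        flat.update(item)
    return flat
```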
---
## EnumerateAccounts
- Method consistently fails with `Operation failed because of security reasons`
- This applies even with:
- SuperUser role
- SecurityOfficer flag enabled
Conclusion:
- EnumerateAccounts is **not usable** in this tenant for customer-scoped API users
---
## Other tested methods
- EnumerateStatistics → Method not found
- GetPartnerInfo → works only for basic partner metadata (not statistics)
---
## Practical implications for BackupChecks
What **is possible**:
- Enumerate accounts implicitly via EnumerateAccountStatistics
- Identify devices/accounts via AccountId + I1/I18
- Collect storage usage (I14)
- Collect numeric status/metrics via D01Fxx and D09F00
What is **not possible (via this API scope)**:
- Reliable last backup timestamp
- Explicit success / failure / warning text
- Error messages
- Enumerating devices via EnumerateAccounts
- Cross-customer aggregation
### Suggested internal model mapping
- Customer
- external_id = PartnerId
- Job
- external_id = AccountId
- display_name = I1
- hostname = I18 (if present)
- Run (limited)
- metrics only (bytes, counters)
- status must be **derived heuristically** from numeric fields (if possible)
---
## Open questions / next steps
1. Confirm official meaning of:
- D01F00–D01F07
- D09F00
2. Investigate whether:
- A token-based (non-JSON-RPC) reporting endpoint exists
- N-able support can enable additional reporting columns
- An MSP-level API user can be provisioned by N-able
3. Decide whether Cove integration in BackupChecks will be:
- Metrics-only (no run result semantics)
- Or require vendor cooperation for expanded API access

# Technical Notes (Internal)
Last updated: 2026-02-19
## Purpose
Internal technical snapshot of the `backupchecks` repository for faster onboarding, troubleshooting, and change impact analysis.
## Repository Overview
- Application: Flask web app with SQLAlchemy and Flask-Migrate.
- Runtime: Containerized (Docker), deployed via Docker Compose stack.
- Primary source code location: `containers/backupchecks/src`.
- The project also contains extensive functional documentation in `docs/` and multiple roadmap TODO files at repository root.
## Main Structure
- `containers/backupchecks/Dockerfile`: Python 3.12-slim image, starts `gunicorn` with `backend.app:create_app()`.
- `containers/backupchecks/requirements.txt`: Flask stack + PostgreSQL driver + reporting libraries (`reportlab`, `Markdown`).
- `containers/backupchecks/src/backend/app`: backend domain logic, routes, parsers, models, migrations.
- `containers/backupchecks/src/templates`: Jinja templates for auth/main/documentation pages.
- `containers/backupchecks/src/static`: CSS, images, favicon.
- `deploy/backupchecks-stack.yml`: compose stack with `backupchecks`, `postgres`, `adminer`.
- `build-and-push.sh`: release/test build script with version bumping, tags, and image push.
- `docs/`: functional design, changelogs, migration notes, API notes.
## Application Architecture (Current Observation)
- Factory pattern: `create_app()` in `containers/backupchecks/src/backend/app/__init__.py`.
- Blueprints:
- `auth_bp` for authentication.
- `main_bp` for core functionality.
- `doc_bp` for internal documentation pages.
- Database initialization at startup:
- `db.create_all()`
- `run_migrations()`
- Background task:
- `start_auto_importer(app)` starts the automatic mail importer thread.
- Health endpoint:
- `GET /health` returns `{ "status": "ok" }`.
## Functional Processing Flow
- Import:
- Email is fetched via Microsoft Graph API.
- Parse:
- Parser selection through registry + software-specific parser implementations.
- Approve:
- New jobs first appear in Inbox for initial customer assignment.
- Auto-process:
- Subsequent emails for known jobs automatically create `JobRun` records.
- Monitor:
- Runs appear in Daily Jobs and Run Checks.
- Review:
- Manual review removes items from the unreviewed operational queue.
## Configuration and Runtime
- Config is built from environment variables in `containers/backupchecks/src/backend/app/config.py`.
- Important variables:
- `APP_SECRET_KEY`
- `APP_ENV`
- `APP_PORT`
- `POSTGRES_DB`
- `POSTGRES_USER`
- `POSTGRES_PASSWORD`
- `DB_HOST`
- `DB_PORT`
- Database URI pattern:
- `postgresql+psycopg2://<user>:<pass>@<host>:<port>/<db>`
- Default timezone in config: `Europe/Amsterdam`.
## Data Model (High-level)
File: `containers/backupchecks/src/backend/app/models.py`
- Auth/users:
- `User` with role(s), active role in session.
- System settings:
- `SystemSettings` with Graph/mail settings, import settings, UI timezone, dashboard policy, sandbox flag.
- Autotask configuration and cache fields are present.
- Logging:
- `AuditLog` (legacy alias `AdminLog`).
- Domain:
- `Customer`, `Job`, `JobRun`, `Override`
- `MailMessage`, `MailObject`
- `Ticket`, `TicketScope`, `TicketJobRun`
- `Remark`, `RemarkScope`, `RemarkJobRun`
- `FeedbackItem`, `FeedbackVote`, `FeedbackReply`, `FeedbackAttachment`
### Foreign Key Relationships & Deletion Order
Critical deletion order to avoid constraint violations:
1. Clean auxiliary tables (ticket_job_runs, remark_job_runs, scopes, overrides)
2. Unlink mails from jobs (UPDATE mail_messages SET job_id = NULL)
3. Delete mail_objects
4. Delete jobs (cascades to job_runs)
5. Delete mails
### Key Model Fields
**MailMessage model:**
- `from_address` (NOT `sender`!) - sender email
- `subject` - email subject
- `text_body` - plain text content
- `html_body` - HTML content
- `received_at` - timestamp
- `location` - inbox/processed/deleted
- `job_id` - link to Job (nullable)
**Job model:**
- `customer_id` - FK to Customer
- `job_name` - parsed from email
- `backup_software` - e.g., "Veeam", "Synology"
- `backup_type` - e.g., "Backup Job", "Active Backup"
## Parser Architecture
- Folder: `containers/backupchecks/src/backend/app/parsers/`
- Two layers:
- `registry.py`:
- matching/documentation/visibility on `/parsers`.
- examples must stay generic (no customer names).
- parser files (`veeam.py`, `synology.py`, etc.):
- actual detection and parsing logic.
- return structured output: software, type, job name, status, objects.
- Practical rule:
- extend patterns by adding, not replacing (backward compatibility).
### Parser Types
**Informational Parsers:**
- DSM Updates, Account Protection, Firmware Updates
- Set appropriate backup_type (e.g., "Updates", "Firmware Update")
- Do NOT participate in schedule learning
- Usually still visible in Run Checks for awareness
- Exception: non-backup 3CX informational types (`Update`, `SSL Certificate`) are hidden from Run Checks
**Regular Parsers:**
- Backup jobs (Veeam, Synology Active Backup, NAKIVO, etc.)
- Participate in schedule learning (daily/weekly/monthly detection)
- Generate missed runs when expected runs don't occur
**Example: Synology Updates Parser (synology.py)**
- Handles multiple update notification types under same job:
- DSM automatic update cancelled
- Packages out-of-date
- Combined notifications (DSM + packages)
- Detection patterns:
- DSM: "Automatische DSM-update", "DSM-update op", "automatic DSM update"
- Packages: "Packages on", "out-of-date", "Package Center"
- Hostname extraction from multiple patterns
- Returns: backup_type "Updates", job_name "Synology Automatic Update"
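The non-backup 3CX exception mentioned above follows the same informational-parser shape. A hedged sketch of the subject detection (the regex and the use of the hostname as job name are assumptions; the output fields follow the 2026-02-19 changelog entry):

```python
import re

# Assumed pattern for "3CX Notification: Update Successful - <host>" subjects.
THREECX_UPDATE = re.compile(r"^3CX Notification: Update Successful - (?P<host>.+)$")

def parse_3cx_update(subject):
    """Classify a 3CX update mail as an informational run (excluded from
    schedule inference); returns None if the subject does not match."""
    m = THREECX_UPDATE.match(subject.strip())
    if not m:
        return None
    return {
        "backup_software": "3CX",
        "backup_type": "Update",
        "overall_status": "Success",
        "job_name": m.group("host"),   # hypothetical field choice
        "informational": True,
    }
```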
## Ticketing and Autotask (Critical Rules)
### Two Ticket Types
1. **Internal Tickets** (tickets table)
- Created manually or via Autotask integration
- Stored in `tickets` table with `ticket_code` (e.g., "T20250123.0001")
- Linked to runs via `ticket_job_runs` many-to-many table
- Scoped to jobs via `ticket_scopes` table
- Have `resolved_at` field for resolution tracking
- **Auto-propagation**: Automatically linked to new runs via `link_open_internal_tickets_to_run`
2. **Autotask Tickets** (job_runs columns)
- Created via Run Checks modal → "Create Autotask Ticket"
- Stored directly in JobRun columns: `autotask_ticket_id`, `autotask_ticket_number`, etc.
- When created, also creates matching internal ticket for legacy UI compatibility
- Have `autotask_ticket_deleted_at` field for deletion tracking
- Resolution tracked via matching internal ticket's `resolved_at` field
- **Auto-propagation**: Linked to new runs via two-strategy approach
### Ticket Propagation to New Runs
When a new JobRun is created (via email import OR missed run generation), `link_open_internal_tickets_to_run` ensures:
**Strategy 1: Internal ticket linking**
- Query finds tickets where: `COALESCE(ts.resolved_at, t.resolved_at) IS NULL`
- Creates `ticket_job_runs` links automatically
- Tickets remain visible until explicitly resolved
- **NO date-based logic** - resolved = immediately hidden from new runs
**Strategy 2: Autotask ticket propagation (independent)**
1. Check if internal ticket code exists → find matching Autotask run → copy ticket info
2. If no match, directly search for most recent Autotask ticket on job where:
- `autotask_ticket_deleted_at IS NULL` (not deleted in PSA)
- Internal ticket `resolved_at IS NULL` (not resolved in PSA)
3. Copy `autotask_ticket_id`, `autotask_ticket_number`, `created_at`, `created_by_user_id` to new run
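Strategy 2's fallback selection (steps 2-3) can be sketched as a pure function over a job's prior runs. Field names mirror the `JobRun` columns above, with `internal_resolved_at` standing in for the matching internal ticket's `resolved_at`; this is a sketch, not the app's actual code:

```python
def pick_autotask_source(runs):
    """From a job's prior runs, pick the most recent one whose Autotask ticket
    is neither deleted in the PSA nor resolved via its internal ticket."""
    candidates = [
        r for r in runs
        if r.get("autotask_ticket_id") is not None
        and r.get("autotask_ticket_deleted_at") is None
        and r.get("internal_resolved_at") is None
    ]
    if not candidates:
        return None  # nothing to propagate to the new run
    return max(candidates, key=lambda r: r["created_at"])
```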
### Where Ticket Linking is Called
`link_open_internal_tickets_to_run` is invoked in three locations:
1. **Email-based runs**: `routes_inbox.py` and `mail_importer.py` - after creating JobRun from parsed email
2. **Missed runs**: `routes_run_checks.py` in `_ensure_missed_runs_for_job` - after creating missed JobRun records
- Weekly schedule: After creating weekly missed run (with flush to get run.id)
- Monthly schedule: After creating monthly missed run (with flush to get run.id)
- **Critical**: Without this call, missed runs don't get ticket propagation!
### Display Logic - Link-Based System
All pages use **explicit link-based queries** (no date-based logic):
**Job Details Page:**
- **Two sources** for ticket display:
1. Direct links (`ticket_job_runs WHERE job_run_id = X`) → always show (audit trail)
2. Active window (`ticket_scopes WHERE job_id = Y AND resolved_at IS NULL`) → only unresolved
- Result: Old runs keep their ticket references, new runs don't get resolved tickets
**Run Checks Main Page (Indicators 🎫):**
- Query: `ticket_scopes JOIN tickets WHERE job_id = X AND resolved_at IS NULL`
- Only shows indicator if unresolved tickets exist for the job
**Run Checks Popup Modal:**
- API: `/api/job-runs/<run_id>/alerts`
- **Two-source ticket display**:
1. Direct links: `tickets JOIN ticket_job_runs WHERE job_run_id = X`
2. Job-level scope: `tickets JOIN ticket_scopes WHERE job_id = Y AND resolved_at IS NULL AND active_from_date <= run_date`
- Prevents duplicates by tracking seen ticket IDs
- Shows newly created tickets immediately (via scope) without waiting for resolve action
- **Two-source remark display**:
1. Direct links: `remarks JOIN remark_job_runs WHERE job_run_id = X`
2. Job-level scope: `remarks JOIN remark_scopes WHERE job_id = Y AND resolved_at IS NULL AND active_from_date <= run_date` (with timezone-safe fallback from `start_date`)
- Prevents duplicates by tracking seen remark IDs
### Resolved vs Deleted
- **Resolved**: Ticket completed in Autotask (tracked in internal `tickets.resolved_at`)
- Stops propagating to new runs
- Ticket still exists in PSA
- Synced via PSA polling
- **Deleted**: Ticket removed from Autotask (tracked in `job_runs.autotask_ticket_deleted_at`)
- Also stops propagating
- Ticket no longer exists in PSA
- Rare operation
### Critical Rules
- ❌ **NEVER** use date-based resolved logic: `resolved_at >= run_date` OR `active_from_date <= run_date`
- ✅ Only show tickets that are ACTUALLY LINKED via `ticket_job_runs` table
- ✅ Resolved tickets stop linking immediately when resolved
- ✅ Old links preserved for audit trail (visible on old runs)
- ✅ All queries must use explicit JOIN to link tables
- ✅ Consistency: All pages use same "resolved = NULL" logic
- ✅ **CRITICAL**: Preserve description field during Autotask updates - must include "description" in optional_fields list
## UI and UX Notes
### Navbar
- Fixed-top positioning
- Collapses on mobile (hamburger menu)
- Dynamic padding adjustment via JavaScript (measures navbar height, adjusts main content padding-top)
- Role-based menu items (Admin sees more than Operator/Viewer)
### Status Badges
- Success: Green
- Warning: Yellow/Orange
- Failed/Error: Red
- Override applied: Blue badge
- Reviewed: Checkmark indicator
### Ticket Copy Functionality
- Copy button (⧉) available on both Run Checks and Job Details pages
- Allows quick copying of ticket numbers to clipboard
- Cross-browser compatible with three-tier fallback mechanism:
1. **Modern Clipboard API**: `navigator.clipboard.writeText()` - works in modern browsers with HTTPS
2. **Legacy execCommand**: `document.execCommand('copy')` - fallback for older browsers and Edge
3. **Prompt fallback**: `window.prompt()` - last resort if clipboard access fails
- Visual feedback: button changes to ✓ checkmark for 800ms after successful copy
- Implementation uses hidden textarea for execCommand method to ensure compatibility
- No user interaction required in modern browsers (direct copy)
### Checkbox Behavior
- All checkboxes on Inbox and Run Checks pages use `autocomplete="off"`
- Prevents browser from auto-selecting checkboxes after page reload
- Fixes issue where deleting items would cause same number of new items to be selected
### Customers to Jobs Navigation (2026-02-16)
- Customers page links each customer name to filtered Jobs view:
- `GET /jobs?customer_id=<customer_id>`
- Jobs route behavior:
- Accepts optional `customer_id` query parameter in `routes_jobs.py`.
- If set: returns jobs for that customer only.
- If not set: keeps default filter that hides jobs linked to inactive customers.
- Jobs UI behavior:
- Shows active filter banner with selected customer name.
- Provides "Clear filter" action back to unfiltered `/jobs`.
- Templates touched:
- `templates/main/customers.html`
- `templates/main/jobs.html`
### Global Grouped Search (2026-02-16)
- New route:
- `GET /search` in `main/routes_search.py`
- New UI:
- Navbar search form in `templates/layout/base.html`
- Grouped result page in `templates/main/search.html`
- Search behavior:
- Case-insensitive matching (`ILIKE`).
- `*` wildcard is supported and translated to SQL `%`.
- Automatic contains behavior is applied per term (`*term*`) when wildcard not explicitly set.
- Multi-term queries use AND across terms and OR across configured columns within each section.
- Per-section pagination is supported via query params: `p_inbox`, `p_customers`, `p_jobs`, `p_daily_jobs`, `p_run_checks`, `p_tickets`, `p_remarks`, `p_overrides`, `p_reports`.
- Pagination keeps search state for all sections while browsing one section.
- "Open <section>" links pass `q` to destination overview pages so page-level filtering matches the search term.
- Grouped sections:
- Inbox, Customers, Jobs, Daily Jobs, Run Checks, Tickets, Remarks, Existing overrides, Reports.
- Daily Jobs search result details:
- Meta now includes expected run time, success indicator, and run count for the selected day.
- Link now opens Daily Jobs with modal auto-open using `open_job_id` query parameter (same modal flow as clicking a row in Daily Jobs).
- Access control:
- Search results are role-aware and only show sections/data the active role can access.
- `run_checks` results are restricted to `admin`/`operator`.
- `reports` supports `admin`/`operator`/`viewer`/`reporter`.
- Current performance strategy:
- Per-section limit (`SEARCH_LIMIT_PER_SECTION = 10`), with total count per section.
- No schema migration required for V1.
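The per-term wildcard/contains translation described under "Search behavior" might look like this sketch (the escaping of LIKE metacharacters is an assumption, not confirmed from the source):

```python
def term_to_ilike(term):
    """Translate one search term into an ILIKE pattern: plain terms get
    automatic contains behavior ('term' -> '%term%'), and explicit '*'
    wildcards map to SQL '%'."""
    if "*" not in term:
        term = f"*{term}*"
    # Escape SQL LIKE metacharacters in the literal part, then map '*' -> '%'.
    escaped = term.replace("\\", "\\\\").replace("%", r"\%").replace("_", r"\_")
    return escaped.replace("*", "%")
```

Multi-term queries would then AND these patterns across terms, OR-ing across each section's configured columns.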
## Feedback Module with Screenshots
- Models: `FeedbackItem`, `FeedbackVote`, `FeedbackReply`, `FeedbackAttachment`.
- Attachments:
- multiple uploads, type validation, per-file size limits, storage in database (BYTEA).
## Validation Snapshot
- 2026-02-16: Test build + push succeeded via `update-and-build.sh t`.
- Pushed image: `gitea.oskamp.info/ivooskamp/backupchecks:dev`.
- 2026-02-16: Test build + push succeeded on branch `v20260216-02-global-search`.
- Pushed image digest: `sha256:6996675b9529426fe2ad58b5f353479623f3ebe24b34552c17ad0421d8a7ee0f`.
- 2026-02-16: Additional test build + push cycles succeeded on `v20260216-02-global-search`.
- Latest pushed image digest: `sha256:8ec8bfcbb928e282182fa223ce8bf7f92112d20e79f4a8602d015991700df5d7`.
- 2026-02-16: Additional test build + push cycles succeeded after search enhancements.
- Latest pushed image digest: `sha256:b36b5cdd4bc7c4dadedca0534f1904a6e12b5b97abc4f12bc51e42921976f061`.
- Delete strategy:
- soft delete by default,
- permanent delete only for admins and only after soft delete.
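The two-stage delete policy above can be sketched as follows; the field and role names are illustrative stand-ins, not the app's actual models:

```python
# Sketch of the delete policy: soft delete by default; permanent delete
# only for admins, and only for items that were already soft-deleted.
class DeleteError(Exception):
    pass

def soft_delete(item):
    item["deleted"] = True          # default path: mark, don't destroy

def permanent_delete(item, role):
    if role != "admin":
        raise DeleteError("only admins may permanently delete")
    if not item.get("deleted"):
        raise DeleteError("item must be soft-deleted first")
    item["purged"] = True           # stand-in for the actual row removal
```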
## Deployment and Operations
- Stack exposes:
- app on `8080`
- adminer on `8081`
- PostgreSQL persistent volume:
- `/docker/appdata/backupchecks/backupchecks-postgres:/var/lib/postgresql/data`
- `deploy/backupchecks-stack.yml` also contains example `.env` variables at the bottom.
## Build/Release Flow
File: `build-and-push.sh`
- Bump options:
- `1` patch, `2` minor, `3` major, `t` test.
- Release build:
- update `version.txt`
- commit + tag + push
- docker push of `:<version>`, `:dev`, `:latest`
- Test build:
- only `:dev`
- no commit/tag.
- Services are discovered under `containers/*` with Dockerfile-per-service.
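The bump options above can be sketched in Python for clarity (the real `build-and-push.sh` is a shell script; this only mirrors its version arithmetic):

```python
# Illustrative version-bump logic matching the documented options:
# 1 = patch, 2 = minor, 3 = major, t = test build (no version change).
def bump(version, option):
    major, minor, patch = (int(p) for p in version.split("."))
    if option == "1":        # patch release
        return f"{major}.{minor}.{patch + 1}"
    if option == "2":        # minor release resets patch
        return f"{major}.{minor + 1}.0"
    if option == "3":        # major release resets minor and patch
        return f"{major + 1}.0.0"
    if option == "t":        # test build: only :dev is pushed, no commit/tag
        return version
    raise ValueError(f"unknown bump option: {option}")
```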
## Technical Observations / Attention Points
- `README.md`, `LICENSE`, and `docs/architecture.md` are currently empty; there is no quick-start or entry-point documentation.
- `deploy/backupchecks-stack.yml` contains hardcoded example values (`Changeme`), which is a risk if the stack is deployed without proper secrets management.
- The app performs DB initialization + migrations at startup; for larger schema changes this can impact startup time/robustness.
- There is significant parser and ticketing complexity; route changes carry regression risk without targeted testing.
- For Autotask update calls, the `description` field must be explicitly preserved to prevent unintended NULL overwrite.
- Security hygiene remains important:
- no customer names in parser examples/source,
- no hardcoded credentials.
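The Autotask description-preservation rule above can be sketched as follows; `build_update_payload` is a hypothetical helper, not the app's actual code, and the real client calls are omitted:

```python
# Illustrates the rule that the description field must be explicitly
# carried into Autotask update payloads, so a partial update cannot
# overwrite it with NULL. The helper name and dict shapes are assumed.
def build_update_payload(existing_ticket, changes):
    payload = dict(changes)
    # Preserve the existing description unless the caller changes it on purpose.
    if "description" not in payload:
        payload["description"] = existing_ticket.get("description", "")
    return payload
```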
## Quick References
- App entrypoint: `containers/backupchecks/src/backend/app/main.py`
- App factory: `containers/backupchecks/src/backend/app/__init__.py`
- Config: `containers/backupchecks/src/backend/app/config.py`
- Models: `containers/backupchecks/src/backend/app/models.py`
- Parsers: `containers/backupchecks/src/backend/app/parsers/registry.py`
- Ticketing utilities: `containers/backupchecks/src/backend/app/ticketing_utils.py`
- Run Checks routes: `containers/backupchecks/src/backend/app/main/routes_run_checks.py`
- Compose stack: `deploy/backupchecks-stack.yml`
- Build script: `build-and-push.sh`
## Recent Changes
### 2026-02-19
- **Added 3CX Update parser support**: `threecx.py` now recognizes subject `3CX Notification: Update Successful - <host>` and stores it as informational with:
- `backup_software = 3CX`
- `backup_type = Update`
- `overall_status = Success`
- **3CX informational schedule behavior**:
- `3CX / Update` and `3CX / SSL Certificate` are excluded from schedule inference in `routes_shared.py` (no Expected/Missed generation).
- **Run Checks visibility scope (3CX-only)**:
- Run Checks now hides only non-backup 3CX informational jobs (`Update`, `SSL Certificate`).
- Other backup software/types remain visible and unchanged.
- **Fixed remark visibility mismatch**:
- `/api/job-runs/<run_id>/alerts` now loads remarks from two sources:
1. `remark_job_runs` (explicit run links),
2. `remark_scopes` (active job-scoped remarks),
- duplicates between the two sources are prevented by remark ID.
- This resolves cases where the remark indicator appeared but remarks were not shown in Run Checks modal or Job Details modal.
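The two-source loading with duplicate prevention can be sketched as follows; the query helpers are omitted and the dict shape is illustrative (in the app these are lookups against `remark_job_runs` and `remark_scopes`):

```python
# Sketch of merging run-linked remarks with active job-scoped remarks,
# keeping the first occurrence of each remark ID so the same remark is
# never shown twice.
def merge_remarks(run_linked, job_scoped):
    seen = set()
    merged = []
    for remark in list(run_linked) + list(job_scoped):
        if remark["id"] in seen:
            continue                  # duplicate prevention by remark ID
        seen.add(remark["id"])
        merged.append(remark)
    return merged
```

Run-linked remarks are listed first, so an explicitly linked copy wins over a scope-derived duplicate.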
### 2026-02-13
- **Fixed missed runs ticket propagation**: Added `link_open_internal_tickets_to_run` calls in `_ensure_missed_runs_for_job` (`routes_run_checks.py`) after creating both weekly and monthly missed `JobRun` records. Previously only email-based runs received ticket linking, so missed runs did not show internal or Autotask tickets. A `db.session.flush()` before linking is required to ensure `run.id` is available.
- **Fixed checkbox auto-selection**: Added `autocomplete="off"` to all checkboxes on Inbox and Run Checks pages. Prevents browser from automatically re-selecting checkboxes after page reload following delete actions.
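The flush-before-link requirement from the missed-runs fix can be sketched as follows; the session and model here are fakes standing in for the app's SQLAlchemy objects, since a new row has no primary key until the session is flushed:

```python
# Demonstrates why linking must happen after flush: the fake session
# simulates SQLAlchemy assigning primary keys on flush().
class FakeSession:
    def __init__(self):
        self._next_id = 1
        self.added = []

    def add(self, obj):
        self.added.append(obj)

    def flush(self):
        # Simulates the database assigning primary keys on flush.
        for obj in self.added:
            if obj.get("id") is None:
                obj["id"] = self._next_id
                self._next_id += 1

def create_missed_run_and_link(session, link_tickets):
    run = {"id": None, "status": "Missed"}
    session.add(run)
    session.flush()            # without this, run["id"] would still be None
    link_tickets(run["id"])    # stand-in for link_open_internal_tickets_to_run
    return run
```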
### 2026-02-12
- **Fixed Run Checks modal ticket display**: Implemented two-source display logic (`ticket_job_runs` + `ticket_scopes`). Previously tickets were shown only after resolution (once the `ticket_job_runs` entry was created); they now appear immediately upon creation via the scope query.
- **Fixed copy button in Edge**: Moved clipboard functions inside IIFE scope for proper closure access (Edge is stricter than Firefox about scope resolution).
### 2026-02-10
- **Added screenshot support to Feedback system**: Multiple file upload, inline display, two-stage delete (soft delete for audit trail, permanent delete for cleanup).
- **Completed transition to link-based ticket system**: All pages now use JOIN queries, no date-based logic. Added cross-browser copy ticket functionality with three-tier fallback mechanism to both Run Checks and Job Details pages.