Conversation

@hongquanli
Contributor

This pull request refactors and improves how OME-TIFF image saving is handled in the control software, with a focus on separating acquisition-wide metadata from per-image metadata and making the OME-TIFF writing process more robust and maintainable. The changes introduce a new AcquisitionInfo dataclass, update job processing and worker logic to use it, and enhance metadata handling for OME-TIFF files. Additionally, file locking is now done using the cross-platform filelock library.

The PR (using Google Antigravity) addresses @ianohara's comments on #339.

OME-TIFF job and metadata refactor:

  • Introduced the AcquisitionInfo dataclass to encapsulate acquisition-wide metadata (e.g., channel names, experiment path, time increments, physical sizes), separating it from per-image CaptureInfo. All OME-TIFF related functions and jobs now use this new class for metadata (a sketch of the dataclass follows this list).
  • Added a new SaveOMETiffJob class, which requires AcquisitionInfo to be injected by the JobRunner. The OME-TIFF save logic now uses both CaptureInfo and AcquisitionInfo for validation and metadata initialization.
  • Refactored utils_ome_tiff_writer.py to require AcquisitionInfo for all OME-TIFF metadata operations, and replaced hardcoded metadata keys with constants for improved readability and maintainability.
  • Updated metadata handling functions to use acquisition-wide values (e.g., channel names, physical sizes) from AcquisitionInfo rather than from each CaptureInfo.
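
A minimal sketch of the new dataclass, using the field list shown in the review hunks further down; the docstring and example values here are illustrative, not taken from the PR:

from dataclasses import dataclass
from typing import List, Optional


@dataclass
class AcquisitionInfo:
    """Acquisition-wide metadata shared by every image in one acquisition.

    Per-image details (stage position, z index, capture time, channel
    configuration) stay in CaptureInfo; JobRunner injects this object
    into SaveOMETiffJob instances at dispatch time.
    """

    total_time_points: int
    total_z_levels: int
    total_channels: int
    channel_names: List[str]
    experiment_path: Optional[str] = None
    time_increment_s: Optional[float] = None
    physical_size_z_um: Optional[float] = None
    physical_size_x_um: Optional[float] = None
    physical_size_y_um: Optional[float] = None


# Illustrative values only:
acq_info = AcquisitionInfo(
    total_time_points=10,
    total_z_levels=5,
    total_channels=2,
    channel_names=["DAPI", "GFP"],
    experiment_path="/data/experiment_001",
    time_increment_s=60.0,
    physical_size_z_um=1.5,
    physical_size_x_um=0.325,
    physical_size_y_um=0.325,
)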

Job runner and worker logic improvements:

  • The JobRunner class now accepts and stores an optional AcquisitionInfo, and automatically injects it into SaveOMETiffJob instances so that every job has access to acquisition-wide metadata (see the sketch after this list).
  • The multipoint worker (multi_point_worker.py) constructs a single AcquisitionInfo at initialization and passes it to the JobRunner and OME-TIFF jobs, simplifying job creation and metadata consistency.
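
A hedged sketch of this flow, pieced together from the dispatch code and tests quoted later in the thread (capture_info and image are assumed to come from the acquisition loop, and the JobImage constructor is taken from a review suggestion, so the exact signatures may differ):

# Worker side: one AcquisitionInfo per acquisition, handed to the runner.
runner = JobRunner(acquisition_info=acq_info)

# Per-image job; acquisition-wide metadata is no longer passed here.
job = SaveOMETiffJob(
    capture_info=capture_info,
    capture_image=JobImage(image_array=image),
)

# dispatch() injects acq_info into the job before it is pickled onto the
# worker queue; a runner created without acquisition_info raises ValueError.
runner.dispatch(job)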

File locking and platform compatibility:

  • Replaced the previous fcntl-based file locking mechanism with the cross-platform filelock library for metadata file access, improving compatibility and reliability (sketched below).
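
A minimal sketch of the filelock-based helper, mirroring the code shown in the review hunks below (the context parameter and the FILE_LOCK_TIMEOUT_SECONDS constant come from later commits in this thread; treat the exact signature as an assumption):

from contextlib import contextmanager

from filelock import FileLock, Timeout as FileLockTimeout

FILE_LOCK_TIMEOUT_SECONDS = 10


@contextmanager
def _acquire_file_lock(lock_path: str, context: str = ""):
    # filelock works on Linux, macOS and Windows, unlike the fcntl-based version.
    lock = FileLock(lock_path, timeout=FILE_LOCK_TIMEOUT_SECONDS)
    try:
        with lock:
            yield
    except FileLockTimeout as exc:
        context_msg = f" (writing to: {context})" if context else ""
        raise TimeoutError(
            f"Failed to acquire file lock '{lock_path}' within {FILE_LOCK_TIMEOUT_SECONDS} seconds{context_msg}. "
            f"Another process may be holding the lock."
        ) from exc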

Code cleanup and removal of duplication:

  • Removed duplicated acquisition metadata fields from CaptureInfo, which are now provided by AcquisitionInfo.
  • Cleaned up conditional logic for job selection in the worker, ensuring OME-TIFF jobs are used only when appropriate.

These changes make the OME-TIFF saving process more robust, modular, and maintainable, and improve the overall reliability of metadata handling and file access.

hongquanli and others added 6 commits November 29, 2025 01:24
- JobRunner now accepts and stores AcquisitionInfo
- Ensures single source of truth for acquisition metadata
- Prevents accidental divergence across jobs in an acquisition
- SaveOMETiffJob accesses acquisition_info via runner injection
- Updated utils_ome_tiff_writer.py to accept AcquisitionInfo separately
- All tests pass
- Remove unsafe lock file deletion outside critical section (filelock handles cleanup)
- Convert SaveOMETiffJob.acquisition_info to proper dataclass field
- Add cleanup_stale_metadata_files() for orphaned temp files from crashes
- Use metadata key constants consistently in job_processing.py
- Remove redundant validation (acq_info None checks for non-Optional fields)
- Document imports inside functions (circular deps, lazy loading)
- Add missing type hint for piezo_z_um
- Remove duplicate line in multi_point_worker.py
- Add tests for JobRunner injection path and stale metadata cleanup

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

Copilot AI left a comment

Pull request overview

This PR refactors OME-TIFF image saving by introducing a new AcquisitionInfo dataclass to separate acquisition-wide metadata from per-image metadata, improving modularity and maintainability. The changes enhance metadata handling, switch to cross-platform file locking, and clean up duplicated fields.

Key changes:

  • Introduced AcquisitionInfo dataclass for acquisition-wide metadata (channel names, physical sizes, time increments) and removed these fields from CaptureInfo
  • Created SaveOMETiffJob class that requires AcquisitionInfo injection by JobRunner, ensuring consistent metadata access
  • Replaced platform-specific fcntl file locking with cross-platform filelock library

Reviewed changes

Copilot reviewed 5 out of 5 changed files in this pull request and generated 3 comments.

Summary per file:

  • software/control/core/job_processing.py: Introduced AcquisitionInfo dataclass, created SaveOMETiffJob class with acquisition_info injection, replaced fcntl with filelock library, and updated JobRunner to inject acquisition_info
  • software/control/core/utils_ome_tiff_writer.py: Refactored functions to require AcquisitionInfo, added metadata key constants, and implemented stale metadata file cleanup
  • software/control/core/multi_point_worker.py: Constructed AcquisitionInfo at initialization and passed it to JobRunner, removed duplicate acquisition metadata from CaptureInfo creation
  • software/tests/test_ome_tiff_saving.py: Removed dependency stubs, updated test to use SaveOMETiffJob and AcquisitionInfo, added tests for acquisition_info injection and stale metadata cleanup
  • software/setup_22.04.sh: Added filelock and lxml_html_clean to pip dependencies

Comment on lines 252 to 253
# Create a fake stale metadata file
old_metadata_path = os.path.join(tempfile.gettempdir(), "ome_teststale123_metadata.json")

Copilot AI Dec 30, 2025

The temporary directory created by tempfile.TemporaryDirectory() is not being used. The test creates tmp_dir but then uses tempfile.gettempdir() instead. Either remove the unused tmp_dir context manager or use it as intended if you want to isolate the test files.

Suggested change
-# Create a fake stale metadata file
-old_metadata_path = os.path.join(tempfile.gettempdir(), "ome_teststale123_metadata.json")
+# Create a fake stale metadata file in the temporary directory
+old_metadata_path = os.path.join(tmp_dir, "ome_teststale123_metadata.json")

Comment on lines 94 to 102
 def _acquire_file_lock(lock_path: str):
-    lock_file = open(lock_path, "w")
+    lock = FileLock(lock_path, timeout=10)
     try:
-        if fcntl is not None:
-            fcntl.flock(lock_file, fcntl.LOCK_EX)
-        yield
-    finally:
-        if fcntl is not None:
-            fcntl.flock(lock_file, fcntl.LOCK_UN)
-        lock_file.close()
+        with lock:
+            yield
+    except FileLockTimeout:
+        raise TimeoutError(
+            f"Failed to acquire file lock '{lock_path}' within 10 seconds. Another process may be holding the lock."
+        )

Copilot AI Dec 30, 2025

The _acquire_file_lock function creates a FileLock instance before entering the try block, but if there's an exception during lock acquisition, the lock object may not be properly cleaned up. Consider moving the lock creation inside the try block or use a finally block to ensure cleanup. Additionally, the context manager should not need to catch FileLockTimeout and re-raise as TimeoutError - the filelock library's timeout mechanism will automatically raise an exception if the lock cannot be acquired within the timeout period.

pip3 install qtpy pyserial pandas imageio crc==1.3.0 lxml numpy tifffile scipy napari pyreadline3
pip3 install opencv-python-headless opencv-contrib-python-headless
-pip3 install napari[all] scikit-image dask_image ome_zarr aicsimageio basicpy pytest pytest-qt pytest-xvfb gitpython matplotlib pydantic_xml pyvisa hidapi
+pip3 install napari[all] scikit-image dask_image ome_zarr aicsimageio basicpy pytest pytest-qt pytest-xvfb gitpython matplotlib pydantic_xml pyvisa hidapi filelock lxml_html_clean

Copilot AI Dec 30, 2025

The pip3 install invocation on this line installs many third-party Python packages from PyPI without pinning versions or verifying integrity, which exposes this installation script to supply chain attacks if any of those packages are compromised upstream. Because this script is the primary deployment path for the control software, a malicious update to one of these unpinned dependencies could result in arbitrary code execution on any system running the script. To reduce this risk, pin each dependency to vetted versions and install from a version-locked, integrity-checked requirements file rather than directly from mutable latest releases.

- Remove unused tmp_dir context manager in test_stale_metadata_cleanup
- Add try/finally for test cleanup in case of failure
- Preserve exception chain in _acquire_file_lock using 'raise from'
- Add docstring to _acquire_file_lock

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

Copilot AI left a comment

Pull request overview

Copilot reviewed 5 out of 5 changed files in this pull request and generated 3 comments.



def metadata_temp_path(acq_info: "AcquisitionInfo", info: "CaptureInfo", base_name: str) -> str:
    base_identifier = acq_info.experiment_path or info.save_directory
    key = f"{base_identifier}:{base_name}"
    digest = hashlib.sha1(key.encode("utf-8")).hexdigest()

Copilot AI Dec 30, 2025

The SHA-1 hash function is used here for creating file identifiers. While SHA-1 is deprecated for cryptographic purposes due to collision vulnerabilities, it's acceptable for non-security uses like generating temporary file names. However, consider using SHA-256 or another modern hash function for better future-proofing, or explicitly document that this is for non-cryptographic use only.

Suggested change
-    digest = hashlib.sha1(key.encode("utf-8")).hexdigest()
+    digest = hashlib.sha256(key.encode("utf-8")).hexdigest()

Comment on lines 349 to 352
pattern = os.path.join(temp_dir, "ome_*_metadata.json")
current_time = time.time()

for metadata_path in glob.glob(pattern):

Copilot AI Dec 30, 2025

The glob pattern "ome_*_metadata.json" is very broad and could potentially match metadata files from other OME-TIFF related applications or processes running on the same system. Consider making the pattern more specific by including a unique application identifier or namespace prefix (e.g., "squid_ome_*_metadata.json" or "ome_squid_*_metadata.json") to avoid accidentally removing files from other processes.

hongquanli and others added 2 commits December 30, 2025 12:11
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Add application-specific prefix to avoid accidentally removing metadata
files from other OME-TIFF applications sharing the same temp directory.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

Copilot AI left a comment

Pull request overview

Copilot reviewed 5 out of 5 changed files in this pull request and generated 1 comment.



Comment on lines 259 to 261
# Note: lock file cleanup is handled by filelock library



Copilot AI Dec 30, 2025

The comment states that "lock file cleanup is handled by filelock library", but this is misleading. The filelock library does NOT automatically clean up lock files - it leaves them on disk after use. This is why the cleanup_stale_metadata_files function at line 361-367 in utils_ome_tiff_writer.py explicitly attempts to remove lock files alongside metadata files. The comment should be updated to clarify that lock file cleanup happens during stale metadata cleanup, not automatically by the filelock library.

Suggested change
-# Note: lock file cleanup is handled by filelock library
+# Note: filelock does not remove lock files; stale lock/metadata files are cleaned up
+# by ome_tiff_writer.cleanup_stale_metadata_files.

hongquanli and others added 2 commits December 30, 2025 12:19
- Use SHA-256 instead of SHA-1 for metadata filename hashing (future-proofing)
- Fix misleading comment about filelock cleanup behavior

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
SHA-1 is appropriate for non-cryptographic filename generation.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

Copilot AI left a comment

Pull request overview

Copilot reviewed 5 out of 5 changed files in this pull request and generated 3 comments.



Comment on lines 292 to 296
# Clean up stale metadata files from previous crashed acquisitions
if acquisition_info is not None:
    removed = ome_tiff_writer.cleanup_stale_metadata_files()
    if removed:
        self._log.info(f"Cleaned up {len(removed)} stale OME-TIFF metadata files")

Copilot AI Dec 30, 2025

The cleanup of stale metadata files should only be performed when SaveOMETiffJob instances are being used. Currently, cleanup runs whenever acquisition_info is not None, even if OME-TIFF saving isn't being used. This could remove metadata files from other acquisitions that happen to use OME-TIFF. Consider moving the cleanup to only run when FILE_SAVING_OPTION is OME_TIFF, or add a flag parameter to control when cleanup runs.
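
A hedged sketch of gating the cleanup behind an explicit flag, matching the cleanup_stale_ome_files parameter that a later commit in this thread adds (the constructor body and logger are assumptions; ome_tiff_writer refers to the project's utils_ome_tiff_writer module):

import logging
import multiprocessing

from control.core import utils_ome_tiff_writer as ome_tiff_writer

log = logging.getLogger(__name__)


class JobRunner(multiprocessing.Process):
    def __init__(self, acquisition_info=None, cleanup_stale_ome_files=False):
        super().__init__()
        self._acquisition_info = acquisition_info
        # Only touch stale OME-TIFF metadata when OME-TIFF saving is actually in use.
        if cleanup_stale_ome_files:
            removed = ome_tiff_writer.cleanup_stale_metadata_files()
            if removed:
                log.info(f"Cleaned up {len(removed)} stale OME-TIFF metadata files")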

Comment on lines 234 to 244
# Create JobRunner with acquisition_info and dispatch
runner = JobRunner(acquisition_info=acquisition_info)
runner.dispatch(job)

# Verify acquisition_info was injected
assert job.acquisition_info is not None
assert job.acquisition_info.total_time_points == 1
assert job.acquisition_info.channel_names == ["DAPI"]

# Clean up - don't actually start the runner process
runner._shutdown_event.set()

Copilot AI Dec 30, 2025

The test creates a JobRunner with acquisition_info but never starts the runner's background process. This means the cleanup code on line 294 in JobRunner.__init__() is never actually tested in this test case, since it runs in the main process during initialization. The test should verify that cleanup actually runs, perhaps by mocking or checking that the cleanup was called.
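
A hedged sketch of verifying the cleanup call directly (the later revision of this test, quoted further down in the thread, does essentially this with unittest.mock.patch):

from unittest.mock import patch

with patch("control.core.job_processing.ome_tiff_writer.cleanup_stale_metadata_files") as mock_cleanup:
    mock_cleanup.return_value = []
    runner = JobRunner(acquisition_info=acquisition_info, cleanup_stale_ome_files=True)
    try:
        mock_cleanup.assert_called_once()
    finally:
        runner._shutdown_event.set()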

        target_dtype = np.dtype(metadata["dtype"])
        if (
            not metadata.get(ome_tiff_writer.CHANNEL_NAMES_KEY)
            and self.acquisition_info

Copilot AI Dec 30, 2025

The check for self.acquisition_info on line 214 is redundant. The run() method already validates that acquisition_info is not None on line 177-178, so by the time execution reaches line 214, self.acquisition_info is guaranteed to be non-None. This condition can be simplified to just check if channel_names exist.

Suggested change
-            and self.acquisition_info

- Remove redundant acquisition_info None check (already validated in run())
- Add cleanup_stale_ome_files parameter to JobRunner to control when cleanup runs
- Only run cleanup when OME-TIFF saving is actually being used
- Add test_job_runner_cleanup_flag to verify cleanup behavior

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

Copilot AI left a comment

Pull request overview

Copilot reviewed 5 out of 5 changed files in this pull request and generated 3 comments.



Comment on lines 362 to 369
        if os.path.exists(lock_path):
            try:
                os.remove(lock_path)
                removed.append(lock_path)
            except OSError:
                pass  # Lock file may be held by another process
    except OSError:
        pass  # File may have been removed by another process

Copilot AI Dec 30, 2025

The cleanup logic has a potential race condition. If a file is removed by another process between the check on line 362 and the removal attempt on line 364, the OSError exception is caught and silently ignored. However, if the metadata file is removed by another process between lines 356-358, the outer OSError catch on line 368 will also silently ignore it. While this is acceptable for the metadata file, the lock file removal should ideally check if the metadata file still exists before attempting to remove its lock file, or the logic should be more explicit about which errors are expected.

Suggested change
-        if os.path.exists(lock_path):
-            try:
-                os.remove(lock_path)
-                removed.append(lock_path)
-            except OSError:
-                pass  # Lock file may be held by another process
-    except OSError:
-        pass  # File may have been removed by another process
+        try:
+            os.remove(lock_path)
+            removed.append(lock_path)
+        except FileNotFoundError:
+            # Lock file may have been removed by another process
+            pass
+    except FileNotFoundError:
+        # Metadata file may have been removed by another process
+        pass

Comment on lines 274 to 303
    def dispatch(self, job: Job):
        # Inject acquisition_info into SaveOMETiffJob instances
        if isinstance(job, SaveOMETiffJob) and self._acquisition_info is not None:
            job.acquisition_info = self._acquisition_info

Copilot AI Dec 30, 2025

The AcquisitionInfo instance stored in _acquisition_info is mutated in the main process's dispatch method (line 303), but JobRunner is a multiprocessing.Process. When the process is started, the _acquisition_info is pickled and sent to the child process, so mutations in the parent process's dispatch method do not affect the child process. However, since dispatch modifies the job object (not _acquisition_info) before putting it in the queue, and the job is then pickled when placed in the queue, this should work correctly. This is not a bug, but the design could be clearer. Consider documenting that the injection happens before the job is serialized for the queue.


    def dispatch(self, job: Job):
        # Inject acquisition_info into SaveOMETiffJob instances
        if isinstance(job, SaveOMETiffJob) and self._acquisition_info is not None:

Copilot AI Dec 30, 2025

The injection logic only sets acquisition_info when self._acquisition_info is not None, but SaveOMETiffJob.run() requires it to be set (line 177-178 in job_processing.py). If a SaveOMETiffJob is dispatched to a JobRunner that was initialized without acquisition_info, the job will fail at runtime with a ValueError. Consider either: (1) raising an error immediately in dispatch if attempting to dispatch a SaveOMETiffJob without acquisition_info, or (2) documenting that SaveOMETiffJob instances must only be dispatched to JobRunner instances that have acquisition_info set.

Suggested change
-        if isinstance(job, SaveOMETiffJob) and self._acquisition_info is not None:
+        if isinstance(job, SaveOMETiffJob):
+            if self._acquisition_info is None:
+                raise ValueError(
+                    "Cannot dispatch SaveOMETiffJob: JobRunner was initialized without acquisition_info."
+                )

- Use FileNotFoundError for explicit race condition handling in cleanup
- Add early validation in dispatch() for SaveOMETiffJob without acquisition_info
- Document that injection happens before job serialization

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

Copilot AI left a comment

Pull request overview

Copilot reviewed 5 out of 5 changed files in this pull request and generated 1 comment.



Comment on lines 26 to 36
@dataclass
class AcquisitionInfo:
    total_time_points: int
    total_z_levels: int
    total_channels: int
    channel_names: List[str]
    experiment_path: Optional[str] = None
    time_increment_s: Optional[float] = None
    physical_size_z_um: Optional[float] = None
    physical_size_x_um: Optional[float] = None
    physical_size_y_um: Optional[float] = None

Copilot AI Dec 31, 2025

The AcquisitionInfo dataclass lacks documentation explaining its purpose and how it differs from CaptureInfo. Consider adding a docstring that explains: (1) this holds acquisition-wide metadata shared across all images in a multi-dimensional acquisition, (2) how it relates to CaptureInfo (which holds per-image metadata), and (3) that it's injected into SaveOMETiffJob by JobRunner.


@hongquanli left a comment

Code Review: Refactor AcquisitionInfo to JobRunner Context

Overview

This PR refactors OME-TIFF image saving by introducing a new AcquisitionInfo dataclass that separates acquisition-wide metadata from per-image CaptureInfo. It also replaces fcntl-based file locking with the cross-platform filelock library.


Positives

  • Clean separation of concerns: Moving acquisition-wide metadata (channel names, time points, physical sizes) into AcquisitionInfo eliminates redundant data passing through every CaptureInfo instance
  • Cross-platform compatibility: Replacing fcntl with filelock library enables Windows support
  • Improved maintainability: Replacing hardcoded string keys with constants (DTYPE_KEY, SHAPE_KEY, etc.) reduces typo risks and improves IDE support
  • Good test coverage: New tests verify AcquisitionInfo injection, stale metadata cleanup, and the cleanup flag behavior
  • Proper cleanup mechanism: Added cleanup_stale_metadata_files() to handle leftover metadata from crashed acquisitions

Issues & Suggestions

1. Duplicate Import (Bug)

File: job_processing.py:25 and job_processing.py:39

from control.core import utils_ome_tiff_writer as ome_tiff_writer  # line 25
...
from . import utils_ome_tiff_writer as ome_tiff_writer  # line 39

The same module is imported twice with the same alias. Remove one of these.

2. Missing skip_saving Logic

File: multi_point_worker.py:175-181

job_classes = []
use_ome_tiff = FILE_SAVING_OPTION == FileSavingOption.OME_TIFF
if use_ome_tiff:
    job_classes.append(SaveOMETiffJob)
else:
    job_classes.append(SaveImageJob)

The skip_saving check was removed. Previously the code had:

job_classes = [] if self.skip_saving else [SaveImageJob]

Now a save job is always added even when skip_saving=True. This appears to be a regression.
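
A hedged sketch of a job-selection block that keeps the skip_saving guard while still honoring the OME-TIFF option (names mirror the snippets above; the actual restored code may differ):

job_classes = []
if not self.skip_saving:
    if FILE_SAVING_OPTION == FileSavingOption.OME_TIFF:
        job_classes.append(SaveOMETiffJob)
    else:
        job_classes.append(SaveImageJob)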

3. Lock File Cleanup Not Handled

File: job_processing.py:265-266

The comment notes that filelock doesn't remove lock files, but the cleanup only happens at JobRunner initialization. Consider:

  • Cleaning up lock files along with metadata files in cleanup_stale_metadata_files()
  • The function does attempt lock file cleanup, but only for stale files - immediate lock file cleanup after completion would be cleaner

4. Hardcoded Timeout Value

File: job_processing.py:105

lock = FileLock(lock_path, timeout=10)

Consider making this configurable or at least a named constant for easier tuning.

5. Error Message Could Include More Context

File: job_processing.py:110-112

raise TimeoutError(
    f"Failed to acquire file lock '{lock_path}' within 10 seconds..."
)

Consider including the current file being processed for easier debugging.

6. Potential Race Condition in Cleanup

File: utils_ome_tiff_writer.py:352-364

The cleanup logic checks mtime then removes, but another process could modify the file between check and removal. The broad exception handling mitigates this, but a more atomic approach could use try/remove/check-age-on-error.

7. Missing Dependency in setup_22.04.sh

The filelock package was added, but verify it's also added to any requirements.txt or pyproject.toml files if they exist.


Minor Suggestions

  • Type annotation inconsistency: piezo_z_um: Optional[float] on line 147 adds explicit type annotation where surrounding code doesn't - keep consistent style
  • Lazy import comment: The "Lazy import" comments for xml.etree.ElementTree are helpful but could note why (reducing import time on module load)

Security Considerations

  • File operations use proper error handling
  • Lock timeout prevents indefinite hangs
  • No obvious security issues

Summary

Recommendation: Request Changes

The refactoring design is sound and improves code quality, but the skip_saving regression (point #2) needs to be fixed before merging. The duplicate import (point #1) should also be addressed.

🤖 Generated with Claude Code

1. Remove duplicate import of utils_ome_tiff_writer in job_processing.py

2. Restore skip_saving logic in multi_point_worker.py - save jobs were
   being added even when skip_saving=True

3. Add lock file cleanup after successful OME-TIFF acquisition completion
   to prevent accumulation of stale .lock files

4. Extract hardcoded file lock timeout (10s) to FILE_LOCK_TIMEOUT_SECONDS
   constant for easier configuration

5. Improve error messages in _acquire_file_lock to include context about
   which output file is being written when lock acquisition fails

6. Improve race condition handling in cleanup_stale_metadata_files by
   using os.stat() for more atomic file age checking and clearer control
   flow with early continue for non-stale files

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

Copilot AI left a comment

Pull request overview

Copilot reviewed 5 out of 5 changed files in this pull request and generated 11 comments.



        # The job object is pickled when placed in the queue, so injection must happen here.
        if isinstance(job, SaveOMETiffJob):
            if self._acquisition_info is None:
                raise ValueError("Cannot dispatch SaveOMETiffJob: JobRunner was initialized without acquisition_info.")

Copilot AI Dec 31, 2025

The error message says "JobRunner was initialized without acquisition_info" but doesn't provide guidance on how to fix the issue. Consider adding information about when acquisition_info is required (i.e., when saving to OME-TIFF format) to help users understand the requirement.

Suggested change
-                raise ValueError("Cannot dispatch SaveOMETiffJob: JobRunner was initialized without acquisition_info.")
+                raise ValueError(
+                    "Cannot dispatch SaveOMETiffJob: JobRunner was initialized without acquisition_info. "
+                    "acquisition_info is required when saving to OME-TIFF format; initialize JobRunner with an "
+                    "AcquisitionInfo instance when using OME-TIFF saving."
+                )

Comment on lines 28 to 38
@dataclass
class AcquisitionInfo:
    total_time_points: int
    total_z_levels: int
    total_channels: int
    channel_names: List[str]
    experiment_path: Optional[str] = None
    time_increment_s: Optional[float] = None
    physical_size_z_um: Optional[float] = None
    physical_size_x_um: Optional[float] = None
    physical_size_y_um: Optional[float] = None

Copilot AI Dec 31, 2025

The AcquisitionInfo dataclass lacks documentation. Consider adding a docstring that explains the purpose of this class (acquisition-wide metadata for OME-TIFF files), what each field represents, and when/how it should be used. This would improve maintainability and help future developers understand the design.

Comment on lines 9 to 14
 from typing import Optional, Generic, TypeVar, List, Dict, Any
 from uuid import uuid4

-try:
-    import fcntl
-except ImportError:  # pragma: no cover - platform without fcntl
-    fcntl = None

 from dataclasses import dataclass, field
+from filelock import FileLock, Timeout as FileLockTimeout
 from typing import ClassVar, Dict, List, Optional, Tuple, Union

Copilot AI Dec 31, 2025

Duplicate import of typing types. Line 9 imports Optional, Generic, TypeVar, List, Dict, Any from typing, and line 14 imports ClassVar, Dict, List, Optional, Tuple, Union from typing. This creates duplicate imports of Dict, List, and Optional. Consider consolidating these into a single import statement.

Comment on lines 276 to 280
# Clean up lock file after successful completion.
# Note: This runs inside the lock context, so the lock is still held.
# The lock file will be released when we exit the context manager.
# We schedule cleanup outside the lock by storing the path.
# Clean up lock file after successful acquisition completion

Copilot AI Dec 31, 2025

The comment says "This runs inside the lock context, so the lock is still held" and "We schedule cleanup outside the lock by storing the path", but no path is actually being stored. The cleanup code at lines 280-286 runs outside the lock context. Consider removing or clarifying this misleading comment.

Suggested change
-# Clean up lock file after successful completion.
-# Note: This runs inside the lock context, so the lock is still held.
-# The lock file will be released when we exit the context manager.
-# We schedule cleanup outside the lock by storing the path.
-# Clean up lock file after successful acquisition completion
+# At this point, the OME-TIFF and its metadata have been finalized
+# while holding the file lock. The lock will be released when we
+# exit the context manager below.
+# After releasing the lock, clean up the lock file once acquisition is complete

Comment on lines 354 to 364
    for metadata_path in glob.glob(pattern):
        try:
            # Get file stats atomically and check age
            file_stat = os.stat(metadata_path)
            file_mtime = file_stat.st_mtime
            if current_time - file_mtime <= max_age_seconds:
                continue  # File is not stale, skip it

            # File is stale, attempt removal
            os.remove(metadata_path)
            removed.append(metadata_path)

Copilot AI Dec 31, 2025

The cleanup function uses glob.glob() to find metadata files and then removes them if they're stale. However, there's a race condition: between the glob and the os.stat/os.remove calls, another process could create a new file with the same name. While the FileNotFoundError is handled, newly created files could be incorrectly identified as stale if they match the timestamp check. Consider adding additional validation, such as checking if the file is currently locked before removal.

Comment on lines 265 to 269
 ome_tiff_writer.write_metadata(metadata_path, metadata)

-if metadata["saved_count"] >= metadata["expected_count"]:
-    metadata["completed"] = True
+if metadata[ome_tiff_writer.SAVED_COUNT_KEY] >= metadata[ome_tiff_writer.EXPECTED_COUNT_KEY]:
+    metadata[ome_tiff_writer.COMPLETED_KEY] = True
 ome_tiff_writer.write_metadata(metadata_path, metadata)

Copilot AI Dec 31, 2025

The metadata is written twice when the acquisition completes: once at line 265 (with saved_count updated) and again at line 269 (with completed flag set to True). Consider combining these writes into a single operation by setting the completed flag before the first write, reducing I/O operations.
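
A hedged sketch of the combined write (set the completed flag first, then call write_metadata once; the surrounding saved_count update is assumed):

# After updating SAVED_COUNT_KEY for the image that was just written:
if metadata[ome_tiff_writer.SAVED_COUNT_KEY] >= metadata[ome_tiff_writer.EXPECTED_COUNT_KEY]:
    metadata[ome_tiff_writer.COMPLETED_KEY] = True
ome_tiff_writer.write_metadata(metadata_path, metadata)  # single write per saved image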

Comment on lines +118 to +123
except FileLockTimeout as exc:
context_msg = f" (writing to: {context})" if context else ""
raise TimeoutError(
f"Failed to acquire file lock '{lock_path}' within {FILE_LOCK_TIMEOUT_SECONDS} seconds{context_msg}. "
f"Another process may be holding the lock."
) from exc

Copilot AI Dec 31, 2025

The FileLockTimeout exception is being caught and re-raised as TimeoutError. However, the original exception type information should be preserved using from exc (which is correctly done). The issue is that TimeoutError is a built-in Python exception, and some callers might expect FileLockTimeout. Consider either using a custom exception type or documenting this behavior clearly in the function docstring.

Comment on lines 331 to 332
# Default threshold for considering metadata files stale (24 hours)
STALE_METADATA_THRESHOLD_SECONDS = 24 * 60 * 60

Copilot AI Dec 31, 2025

The constant STALE_METADATA_THRESHOLD_SECONDS is set to 24 hours (86400 seconds), which is used as the default threshold for considering metadata files stale. This seems reasonable, but for active acquisitions that run longer than 24 hours, metadata files could be incorrectly removed. Consider documenting this limitation or increasing the default threshold.

Comment on lines 27 to 28
@pytest.mark.parametrize("shape", [(64, 48), (32, 32)])
def test_ome_tiff_memmap_roundtrip(shape: tuple[int, int]) -> None:

Copilot AI Dec 31, 2025

The removed stub function _ensure_dependency_stubs suggests that the tests previously worked without certain dependencies (cv2, git). Now that the stubs are removed, the tests may fail if these dependencies are not installed. Ensure that all test dependencies (including cv2/opencv and gitpython) are properly listed in test requirements or that the tests gracefully handle missing dependencies.


    def run(self) -> bool:
        if self.acquisition_info is None:
            raise ValueError("SaveOMETiffJob requires acquisition_info to be set by JobRunner")

Copilot AI Dec 31, 2025

The error message "SaveOMETiffJob requires acquisition_info to be set by JobRunner" could be more specific about what went wrong. Consider including information about whether the JobRunner was initialized without acquisition_info or if dispatch was called incorrectly, to help with debugging.

Suggested change
-            raise ValueError("SaveOMETiffJob requires acquisition_info to be set by JobRunner")
+            raise ValueError(
+                "SaveOMETiffJob.run() was called with acquisition_info=None. "
+                "This job is intended to be executed via JobRunner.dispatch(), "
+                "which must inject a non-None acquisition_info. "
+                "This usually indicates that JobRunner was initialized or called "
+                "without acquisition_info, or that SaveOMETiffJob was run directly "
+                "without going through JobRunner.dispatch()."
+            )

hongquanli and others added 2 commits December 31, 2025 02:19
- Consolidate duplicate typing imports into single import statement
- Add comprehensive docstring to AcquisitionInfo dataclass explaining
  its purpose, relationship to CaptureInfo, and usage pattern
- Combine double metadata write on completion into single write by
  setting completed flag before the write operation
- Fix misleading comment about lock file cleanup scheduling
- Simplify lock file cleanup logic with cleaner try-except pattern
- Improve error messages in SaveOMETiffJob.run() and JobRunner.dispatch()
  to provide actionable guidance on fixing the issue

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Replace time-based stale metadata detection with lock-based approach:
- Attempt to acquire file lock with zero timeout (non-blocking)
- If lock acquired, file is not in active use and safe to remove
- If lock held, skip the file (active acquisition)

This is more robust than time thresholds because:
- Works for any acquisition duration (even multi-week time-lapses)
- Works for any interval between image saves
- No arbitrary threshold to configure (a sketch of this approach follows this commit message)

Also addresses Copilot review feedback:
- Consolidate duplicate typing imports
- Add AcquisitionInfo docstring
- Combine double metadata write into single write
- Fix misleading comments about lock cleanup
- Improve error messages with actionable guidance

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
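
A hedged sketch of the lock-based check described in the commit message above, using filelock's non-blocking acquire (the glob prefix, lock-path naming, and function signature are assumptions):

import glob
import os

from filelock import FileLock, Timeout as FileLockTimeout


def cleanup_stale_metadata_files(temp_dir: str) -> list:
    """Remove metadata files whose lock can be acquired immediately, i.e. files not in active use."""
    removed = []
    for metadata_path in glob.glob(os.path.join(temp_dir, "squid_ome_*_metadata.json")):
        lock_path = metadata_path + ".lock"
        try:
            # timeout=0 makes a single, non-blocking attempt to take the lock.
            with FileLock(lock_path).acquire(timeout=0):
                os.remove(metadata_path)
                removed.append(metadata_path)
        except (FileLockTimeout, FileNotFoundError):
            # Lock held by an active acquisition, or the file is already gone.
            continue
    return removed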

Copilot AI left a comment

Pull request overview

Copilot reviewed 5 out of 5 changed files in this pull request and generated 4 comments.



Comment on lines 306 to 312
        # Clean up lock file after lock is released (only when acquisition completed)
        # Uses try-except to handle race conditions with other processes
        try:
            if not os.path.exists(metadata_path):
                os.remove(lock_path)
        except OSError:
            pass  # Lock file may be held by another process, already removed, or recreated

Copilot AI Dec 31, 2025

The lock file cleanup logic may have a race condition. The lock is released when exiting the context manager at line 238, but then at line 309 the code checks if the metadata file exists and removes the lock file. Between releasing the lock and removing the lock file, another process could acquire the lock for a new acquisition using the same lock path. Removing the lock file in that scenario could interfere with the other process. Consider removing the lock file inside the context manager (before releasing the lock) when the acquisition is complete, or ensuring that lock file removal is more robust against concurrent access.

                os.remove(lock_path)
                removed.append(lock_path)
            except OSError:
                pass  # Lock file may already be removed or held

Copilot AI Dec 31, 2025

The cleanup function attempts to remove a lock file that may still be held by the filelock library. At line 373, after successfully removing the metadata file and releasing the lock (line 355), the code tries to remove the lock file. However, on some systems, the filelock library may still hold a file handle to the lock file even after the context manager exits, which could cause the os.remove(lock_path) call to fail with an OSError on Windows. The try-except block catches OSError, so this is handled, but consider adding a comment explaining why the OSError is expected and acceptable here, as it might help future maintainers understand this behavior.

Suggested change
-                pass  # Lock file may already be removed or held
+                # On some platforms (notably Windows), filelock may still hold a handle to
+                # the lock file briefly even after the context manager exits, causing
+                # os.remove(lock_path) to fail with a permission-related OSError. Since
+                # this lock file is only a best-effort cleanup artifact and not required
+                # for correctness, such errors are safe to ignore.
+                pass

Comment on lines 194 to 216
experiment_path="/tmp/test",
time_increment_s=1.0,
physical_size_z_um=1.0,
physical_size_x_um=0.5,
physical_size_y_um=0.5,
)

channel = ChannelMode(
id="1",
name="DAPI",
exposure_time=10.0,
analog_gain=1.0,
illumination_source=1,
illumination_intensity=5.0,
z_offset=0.0,
)

capture_info = CaptureInfo(
position=squid.abc.Pos(x_mm=0.0, y_mm=0.0, z_mm=0.0, theta_rad=None),
z_index=0,
capture_time=time.time(),
configuration=channel,
save_directory="/tmp/test",

Copilot AI Dec 31, 2025

The test uses a hardcoded path /tmp/test for experiment_path and save_directory, which may not exist on all systems (particularly Windows where /tmp is not a standard location). While this test doesn't actually write files to these paths in this specific scenario, it could cause confusion or issues in the future. Consider using tempfile.gettempdir() or a similar cross-platform approach for creating test paths.
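
A hedged sketch of the cross-platform variant (matching the follow-up commit that switches the tests to tempfile.gettempdir()):

import os
import tempfile

# Cross-platform replacement for the hardcoded "/tmp/test" used for
# experiment_path and save_directory in the test fixture:
test_dir = os.path.join(tempfile.gettempdir(), "test")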

 from datetime import datetime
 from contextlib import contextmanager
-from typing import Optional, Generic, TypeVar, List, Dict, Any
+from typing import Any, ClassVar, Dict, Generic, List, Optional, Tuple, TypeVar, Union

Copilot AI Dec 31, 2025

Import of 'Any' is not used.

Suggested change
-from typing import Any, ClassVar, Dict, Generic, List, Optional, Tuple, TypeVar, Union
+from typing import ClassVar, Dict, Generic, List, Optional, Tuple, TypeVar, Union

- Remove unused 'Any' import from typing
- Add comment explaining OSError handling for Windows filelock behavior
- Use tempfile.gettempdir() instead of hardcoded /tmp/test in tests
- Expand race condition comment to explain why lock cleanup is safe

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

Copilot AI left a comment

Pull request overview

Copilot reviewed 5 out of 5 changed files in this pull request and generated 5 comments.

Comments suppressed due to low confidence (1)

software/control/core/job_processing.py:272

  • The error handling for index validation at lines 267-272 could provide more helpful error messages by including the actual values. For example, instead of "Time point index out of range for OME stack", consider: f"Time point index {time_point} out of range [0, {shape[0]}) for OME stack". This would help users debug issues more quickly by showing the problematic values.
            if not (0 <= time_point < shape[0]):
                raise ValueError("Time point index out of range for OME stack")
            if not (0 <= z_index < shape[1]):
                raise ValueError("Z index out of range for OME stack")
            if not (0 <= channel_index < shape[2]):
                raise ValueError("Channel index out of range for OME stack")


Comment on lines +306 to +314
# Clean up lock file after lock is released (only when acquisition completed).
# Race condition note: Between releasing the lock and this cleanup, another process
# could theoretically acquire the same lock path. However:
# 1. We only attempt removal if metadata_path is gone (acquisition completed)
# 2. If another process holds the lock, os.remove fails with OSError (caught below)
# 3. This is best-effort cleanup; stale locks are also cleaned by cleanup_stale_metadata_files
try:
if not os.path.exists(metadata_path):
os.remove(lock_path)

Copilot AI Dec 31, 2025

The lock file cleanup logic at lines 312-316 is only attempted when the acquisition completes, but stale lock files could remain if a process crashes before completion. While the comment mentions that cleanup_stale_metadata_files handles this, consider also attempting lock file removal even for incomplete acquisitions, as long as the lock is successfully released. This would reduce the accumulation of stale lock files over time.

Suggested change
-# Clean up lock file after lock is released (only when acquisition completed).
-# Race condition note: Between releasing the lock and this cleanup, another process
-# could theoretically acquire the same lock path. However:
-# 1. We only attempt removal if metadata_path is gone (acquisition completed)
-# 2. If another process holds the lock, os.remove fails with OSError (caught below)
-# 3. This is best-effort cleanup; stale locks are also cleaned by cleanup_stale_metadata_files
-try:
-    if not os.path.exists(metadata_path):
-        os.remove(lock_path)
+# Clean up lock file after the lock is released (best-effort, regardless of completion).
+# Race condition note: Between releasing the lock and this cleanup, another process
+# could theoretically acquire the same lock path. However:
+# 1. If another process holds or has recreated the lock, os.remove fails with OSError (caught below)
+# 2. This is best-effort cleanup; stale locks are also cleaned by cleanup_stale_metadata_files
+try:
+    os.remove(lock_path)

class SaveOMETiffJob(Job):
"""Job for saving images to OME-TIFF format.
The acquisition_info field is injected by JobRunner.dispatch() before the job runs.

Copilot AI Dec 31, 2025

The SaveOMETiffJob class lacks a comprehensive docstring. While it mentions that acquisition_info is injected by JobRunner, it should also document the behavior of the run() method, the purpose of capture_info and capture_image fields inherited from Job, and provide an example or more detail about the OME-TIFF saving process. This would help future maintainers understand the class's responsibilities.

Suggested change
The acquisition_info field is injected by JobRunner.dispatch() before the job runs.
This job writes a single captured image into an OME-TIFF dataset on disk,
using metadata from both :class:`CaptureInfo` (per-frame metadata) and
:class:`AcquisitionInfo` (acquisition-wide metadata).
Lifecycle and behavior
----------------------
``SaveOMETiffJob`` is typically created with a populated ``capture_info``
and ``capture_image`` (inherited from :class:`Job`), and then dispatched
via :meth:`JobRunner.dispatch`. ``JobRunner.dispatch`` injects the
corresponding :class:`AcquisitionInfo` instance into ``acquisition_info``
before calling :meth:`run`.
The :meth:`run` method:
* Validates that ``acquisition_info`` is not ``None`` and raises
:class:`ValueError` if it has not been set (for example, when the job is
run directly instead of via ``JobRunner.dispatch``).
* Calls :meth:`_save_ome_tiff` with the in-memory image array obtained from
:meth:`Job.image_array` and the associated :class:`CaptureInfo`.
* Returns ``True`` on successful completion. No value is returned from
``_save_ome_tiff`` itself; its responsibility is side-effectful writing
of OME-TIFF data and metadata to disk.
Fields inherited from :class:`Job`
----------------------------------
``capture_info``
A :class:`CaptureInfo` instance describing the context of this frame
(stage position, Z index, time point, channel configuration, output
directory and file identifier, region/FOV indices, etc.). This
information is used by the OME-TIFF writer helpers to determine where
in the dataset (e.g., which file, series, Z/T index) this image
belongs.
``capture_image``
A :class:`JobImage` wrapper around the actual image data. For
``SaveOMETiffJob``, :meth:`Job.image_array` is expected to return a
NumPy array that is compatible with the downstream OME-TIFF writer
(e.g., a 2D grayscale or 3D multi-channel image).
OME-TIFF saving details
-----------------------
Internally, :meth:`_save_ome_tiff` delegates to helpers in
:mod:`control.core.utils_ome_tiff_writer` to:
* Validate consistency between the image data, :class:`CaptureInfo`, and
:class:`AcquisitionInfo` (shape, dtype, channel ordering, indices, etc.).
* Determine the appropriate output folder and file name for the OME-TIFF
dataset (potentially shared by many frames).
* Create or append to the OME-TIFF file, updating both pixel data and the
OME-XML metadata so that downstream tools (e.g., ImageJ/FIJI, napari,
Bio-Formats) can correctly interpret the dataset as a multi-dimensional
image (X, Y, Z, C, T, and possibly multiple positions).
Example
-------
A typical usage pattern (simplified) is::
job = SaveOMETiffJob(
capture_info=capture_info,
capture_image=JobImage(image_array=image),
)
# JobRunner is responsible for setting job.acquisition_info
job_runner.dispatch(job) # will eventually call job.run()
When running ``SaveOMETiffJob`` directly (without ``JobRunner``), callers
must manually set ``acquisition_info`` before invoking :meth:`run`:::
job.acquisition_info = acquisition_info
job.run()

Comment on lines 236 to 246
runner.dispatch(job)

# Verify acquisition_info was injected
assert job.acquisition_info is not None
assert job.acquisition_info.total_time_points == 1
assert job.acquisition_info.channel_names == ["DAPI"]

# Clean up - don't actually start the runner process
runner._shutdown_event.set()



Copilot AI Dec 31, 2025

The test creates a JobRunner without starting it, but doesn't properly clean up the multiprocessing resources. The runner has an _input_queue and _output_queue that are multiprocessing.Queue objects which should be properly closed and joined. Consider calling runner.shutdown() or explicitly closing the queues to ensure proper cleanup and avoid resource leaks in the test suite.

Suggested change
-    runner.dispatch(job)
-    # Verify acquisition_info was injected
-    assert job.acquisition_info is not None
-    assert job.acquisition_info.total_time_points == 1
-    assert job.acquisition_info.channel_names == ["DAPI"]
-    # Clean up - don't actually start the runner process
-    runner._shutdown_event.set()
+    try:
+        runner.dispatch(job)
+        # Verify acquisition_info was injected
+        assert job.acquisition_info is not None
+        assert job.acquisition_info.total_time_points == 1
+        assert job.acquisition_info.channel_names == ["DAPI"]
+    finally:
+        # Clean up - don't actually start the runner process
+        runner.shutdown()

Comment on lines 285 to 293
        mock_cleanup.assert_not_called()
        runner._shutdown_event.set()

    # Test that cleanup IS called when flag is True
    with patch("control.core.job_processing.ome_tiff_writer.cleanup_stale_metadata_files") as mock_cleanup:
        mock_cleanup.return_value = []
        runner = JobRunner(acquisition_info=acquisition_info, cleanup_stale_ome_files=True)
        mock_cleanup.assert_called_once()
        runner._shutdown_event.set()

Copilot AI Dec 31, 2025

Similar to the previous test, this test creates JobRunner instances but doesn't properly clean up multiprocessing resources. The test should call runner.shutdown() or explicitly close the queues after setting the shutdown event to ensure proper cleanup and avoid resource leaks.

Suggested change
-        mock_cleanup.assert_not_called()
-        runner._shutdown_event.set()
-    # Test that cleanup IS called when flag is True
-    with patch("control.core.job_processing.ome_tiff_writer.cleanup_stale_metadata_files") as mock_cleanup:
-        mock_cleanup.return_value = []
-        runner = JobRunner(acquisition_info=acquisition_info, cleanup_stale_ome_files=True)
-        mock_cleanup.assert_called_once()
-        runner._shutdown_event.set()
+        try:
+            mock_cleanup.assert_not_called()
+        finally:
+            runner._shutdown_event.set()
+            runner.shutdown()
+    # Test that cleanup IS called when flag is True
+    with patch("control.core.job_processing.ome_tiff_writer.cleanup_stale_metadata_files") as mock_cleanup:
+        mock_cleanup.return_value = []
+        runner = JobRunner(acquisition_info=acquisition_info, cleanup_stale_ome_files=True)
+        try:
+            mock_cleanup.assert_called_once()
+        finally:
+            runner._shutdown_event.set()
+            runner.shutdown()

 def metadata_for_imwrite(metadata: Dict[str, Any]) -> Dict[str, Any]:
-    channel_names = metadata.get("channel_names") or []
+    channel_names = metadata.get(CHANNEL_NAMES_KEY) or []
     meta: Dict[str, Any] = {"axes": "TZCYX"}

Copilot AI Dec 31, 2025

The "axes" key is hardcoded as a string literal at line 161, but there is a constant AXES_KEY defined for this purpose at line 30. For consistency with the rest of the codebase, use the constant instead of the string literal: meta: Dict[str, Any] = {AXES_KEY: "TZCYX"}.

Suggested change
-    meta: Dict[str, Any] = {"axes": "TZCYX"}
+    meta: Dict[str, Any] = {AXES_KEY: "TZCYX"}

- Use AXES_KEY constant instead of hardcoded "axes" string literal
- Add try/finally blocks in tests for proper cleanup on failure
- Keep _shutdown_event.set() (not shutdown()) since process isn't started

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

Copilot AI left a comment

Pull request overview

Copilot reviewed 5 out of 5 changed files in this pull request and generated no new comments.


