Expose floor data types #35
Conversation
Signed-off-by: Sandro Cantarella <sandro@Sandros-Mac-mini.fritz.box>
Signed-off-by: Sandro Cantarella <sandro@79f3d049-9006-400d-9954-2e5dcd250fa9.fritz.box>
Signed-off-by: SCA075 <82227818+sca075@users.noreply.github.com>
- Add FloorData and TrimsData to imports from config.types
- Add both types to `__all__` list for public API
- Enables MQTT Vacuum Camera to import these types for multi-floor support
Walkthrough

Adds a comprehensive test suite with fixtures and unit tests for the valetudo_map_parser library, updates package exports to include the FloorData and TrimsData types, removes the OptimizedElementMapGenerator class and enable_snapshots handling, bumps the version to 0.1.14, and introduces multiple analysis and profiling utility scripts. Also adds a Renovate configuration for multi-repo maintenance.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
Pre-merge checks and finishing touches: ✅ Passed checks (3 passed)
Actionable comments posted: 16
🧹 Nitpick comments (47)
tests/debug_manage_rooms.py (1)
13-16: Add an `if __name__ == "__main__":` guard.

The script executes immediately when imported, which can cause unexpected behavior. Wrap the execution block in a main guard to prevent this.

Additionally, consider printing the traceback to `stderr` instead of `stdout`, as error output conventionally goes to stderr. However, if you're intentionally piping debug output, the current approach may be acceptable.

🔎 Proposed fix

```diff
-try:
-    asyncio.run(main())
-except Exception:
-    traceback.print_exc(file=sys.stdout)
+if __name__ == "__main__":
+    try:
+        asyncio.run(main())
+    except Exception:
+        traceback.print_exc()  # Defaults to stderr
```

tests/rand_rooms_test.py (7)
20-21: Consider using pytest fixtures or package installation instead of sys.path manipulation.

While `sys.path.insert` works for standalone script execution, it can cause import conflicts and isn't portable across different working directories. Consider installing the package in editable mode (`pip install -e .`) or using pytest's `conftest.py` for path setup.
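A minimal sketch of the `conftest.py` approach; the directory layout (an `SCR` package root one level above the test directory) is an assumption taken from the paths listed in this review:

```python
# Hypothetical conftest.py fragment: put the package root on sys.path once,
# so individual test modules can drop their own sys.path.insert calls.
# The "SCR" layout is an assumption, not confirmed by this review.
import sys
from pathlib import Path

REPO_ROOT = Path(__file__).resolve().parent.parent
SRC_DIR = REPO_ROOT / "SCR"

if str(SRC_DIR) not in sys.path:
    sys.path.insert(0, str(SRC_DIR))
```

With this in place, pytest applies the path setup before collecting any test module in the directory.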
110-111: Use lazy logging format instead of an f-string.

F-strings are evaluated eagerly even if the log level is not enabled, which wastes resources.

🔎 Proposed fix

```diff
-        _LOGGER.warning(f"Error calculating convex hull: {e}")
+        _LOGGER.warning("Error calculating convex hull: %s", e)
```
125-197: Unused method; consider removing or integrating.

The `_process_segment_data` method is defined but never called. The `async_extract_room_properties` method duplicates this logic inline (lines 238-278). Consider either:

- Removing this method if it's not needed, or
- Refactoring `async_extract_room_properties` to use this method to avoid code duplication.

🔎 Proposed refactor using the existing method

```diff
         # Process each segment
         if unsorted_id and self.segment_data and self.outlines:
             for idx, segment_id in enumerate(unsorted_id):
-                # Extract points from segment data
-                points = []
-                for x, y, _ in self.segment_data[idx]:
-                    points.append((int(x), int(y)))
-
-                if not points:
-                    continue
-
-                # Use convex hull to get the outline
-                outline = self.convex_hull_outline(points)
-                if not outline:
-                    continue
-
-                # Scale coordinates by pixel_size
-                scaled_outline = [
-                    (int(x * pixel_size), int(y * pixel_size)) for x, y in outline
-                ]
-
-                # Calculate center point
-                xs, ys = zip(*outline)
-                x_min, x_max = min(xs), max(xs)
-                y_min, y_max = min(ys), max(ys)
-                center_x = int(((x_min + x_max) * pixel_size) // 2)
-                center_y = int(((y_min + y_max) * pixel_size) // 2)
-
-                # Create room data
-                room_id = str(segment_id)
-                room_data = {
-                    "number": segment_id,
-                    "outline": scaled_outline,
-                    "name": f"Room {segment_id}",
-                    "x": center_x,
-                    "y": center_y,
-                }
+                room_id, room_data = await self._process_segment_data(
+                    self.segment_data[idx], segment_id, pixel_size
+                )
+                if room_id is None:
+                    continue

                 # Update room name from destinations if available
```
232-233: Variable shadowing: `room_data` is reused with different types.

The variable `room_data` is first assigned as a list (line 232), then shadowed as a dict inside the loop (line 265). This reduces readability and can lead to bugs. Consider renaming one of them.

🔎 Proposed fix

```diff
         # Process destinations JSON to get room names
         dest_json = destinations
-        room_data = dest_json.get("rooms", [])
-        room_id_to_data = {room["id"]: room for room in room_data}
+        rooms_list = dest_json.get("rooms", [])
+        room_id_to_data = {room["id"]: room for room in rooms_list}
```

Also applies to: 265-271
294-295: Specify an explicit encoding when opening files.

Opening files without an explicit encoding uses the system default, which can vary across platforms and cause inconsistent behavior.

🔎 Proposed fix

```diff
-    with open(test_file_path, "r") as file:
+    with open(test_file_path, "r", encoding="utf-8") as file:
         test_data = json.load(file)
```
319-357: Add assertions for automated testing.

This test function logs results but has no assertions, making it useful only for manual verification. Consider adding assertions to validate expected behavior:

🔎 Example assertions

```diff
     if room_properties:
         _LOGGER.info(
             f"Successfully extracted {len(room_properties)} rooms: {room_properties}"
         )
+        # Add assertions for automated testing
+        assert len(room_properties) > 0, "Expected at least one room"
         for room_id, props in room_properties.items():
             _LOGGER.info(f"Room {room_id}: {props['name']}")
             _LOGGER.info(f"  Outline points: {len(props['outline'])}")
             _LOGGER.info(f"  Center: ({props['x']}, {props['y']})")
+            assert "outline" in props, f"Room {room_id} missing outline"
+            assert "x" in props and "y" in props, f"Room {room_id} missing center"
     else:
-        _LOGGER.warning("No room properties extracted")
+        pytest.fail("No room properties extracted")
```
360-372: Simplify with `asyncio.run()`.

The manual event loop management can be replaced with the cleaner `asyncio.run()` API (available since Python 3.7).

🔎 Proposed fix

```diff
-def __main__():
-    """Main function."""
-    loop = asyncio.new_event_loop()
-    asyncio.set_event_loop(loop)
-
-    try:
-        loop.run_until_complete(test_rand_rooms_handler())
-    finally:
-        loop.close()
+def __main__():
+    """Main function."""
+    asyncio.run(test_rand_rooms_handler())


 if __name__ == "__main__":
     __main__()
```

tests/analyze_segment_walls.py (4)
21-33: Consider extracting to a shared test utility module.

The `load_test_data()` function is duplicated across multiple test files (`rand_rooms_test.py`, `rooms_test.py`, and now this file) with minor variations. Extracting it to a shared test utilities module would improve maintainability.
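One possible shape for the shared helper; the module location (e.g. a `tests/utils/` package) and the `base_dir` parameter are assumptions for illustration, not part of the reviewed code:

```python
# Hypothetical shared loader replacing the per-file load_test_data copies.
import json
from pathlib import Path


def load_test_data(filename, base_dir=None):
    """Load a JSON fixture; defaults to the directory containing this module."""
    base = Path(base_dir) if base_dir is not None else Path(__file__).resolve().parent
    with open(base / filename, "r", encoding="utf-8") as f:
        return json.load(f)
```

Each test file would then import this single helper instead of redefining its own variant.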
99-134: Consider optimizing adjacency detection for large datasets.

The current nested loop has O(n×m) complexity. For large segment and wall datasets, this could become slow. Consider using a spatial data structure approach:

🔎 Example optimization using set-based lookups

```python
def find_adjacent_pixels(
    segment_pixels: List[List[int]], wall_pixels: List[List[int]]
) -> List[Tuple[List[int], List[int]]]:
    """Find segment pixels that are adjacent to wall pixels."""
    adjacent_pairs = []

    # Expand wall pixels into a set for O(1) lookup
    wall_coords_set = set()
    for x, y, length in wall_pixels:
        for i in range(length):
            wall_coords_set.add((x + i, y))

    # Expand segment pixels and check adjacency
    segment_coords = []
    for x, y, length in segment_pixels:
        for i in range(length):
            segment_coords.append((x + i, y))

    # For each segment pixel, check its neighbors
    for sx, sy in segment_coords:
        found_adjacent = False
        for dx in [-1, 0, 1]:
            for dy in [-1, 0, 1]:
                if dx == 0 and dy == 0:
                    continue
                neighbor = (sx + dx, sy + dy)
                if neighbor in wall_coords_set:
                    adjacent_pairs.append(((sx, sy), neighbor))
                    found_adjacent = True
                    break  # or remove to capture all adjacent walls
            if found_adjacent:
                break  # or remove to capture all adjacent walls

    return adjacent_pairs
```

This reduces complexity from O(n×m) to O(n+m).
205-211: Document the impact of limiting adjacent pairs to 100.

The code limits the saved adjacent pairs to 100 to avoid large files. While this is reasonable for an analysis script, consider adding a log message when pairs are truncated so users are aware that not all relationships are being saved.

🔎 Suggested enhancement

```diff
     # Save adjacent pairs
     adjacent_pairs_path = os.path.join(
         output_dir, f"segment_{segment_id}_adjacent_walls.json"
     )
     with open(adjacent_pairs_path, "w", encoding="utf-8") as f:
         # Convert tuples to lists for JSON serialization
+        total_pairs = len(adjacent_pairs)
+        pairs_to_save = adjacent_pairs[:100]
         serializable_pairs = [
             {"segment": list(segment), "wall": list(wall)}
-            for segment, wall in adjacent_pairs[
-                :100
-            ]  # Limit to 100 pairs to avoid huge files
+            for segment, wall in pairs_to_save
         ]
         json.dump(serializable_pairs, f, indent=2)

-    _LOGGER.info(f"Adjacent pairs data saved to {adjacent_pairs_path}")
+    if total_pairs > 100:
+        _LOGGER.info(
+            f"Adjacent pairs data saved to {adjacent_pairs_path} "
+            f"(showing 100 of {total_pairs} pairs)"
+        )
+    else:
+        _LOGGER.info(f"Adjacent pairs data saved to {adjacent_pairs_path}")
```
218-223: Consider accepting command-line arguments for flexibility.

The script currently hardcodes `segment_id = 1`. For a more flexible analysis utility, consider accepting the segment ID as a command-line argument.

🔎 Example enhancement

```python
import argparse

if __name__ == "__main__":
    parser = argparse.ArgumentParser(
        description="Analyze segment-wall relationships"
    )
    parser.add_argument(
        "--segment-id",
        type=int,
        default=1,
        help="Segment ID to analyze (default: 1)",
    )
    args = parser.parse_args()

    try:
        analyze_segment_wall_relationship(args.segment_id)
    except Exception as e:
        _LOGGER.error(f"Error analyzing segment-wall relationship: {e}", exc_info=True)
```

tests/test_mvcrender.py (2)
14-21: Remove unnecessary semicolons.

Python doesn't require semicolons as statement terminators. These appear to be remnants from another language style.

🔎 Proposed fix

```diff
-        self.offset_top = 0;
-        self.offset_down = 0;
-        self.offset_left = 0;
+        self.offset_top = 0
+        self.offset_down = 0
+        self.offset_left = 0
         self.offset_right = 0
-        self.vacuum_state = "cleaning";
+        self.vacuum_state = "cleaning"
         self.image_auto_zoom = True
-        self.image_ref_width = 0;
+        self.image_ref_width = 0
         self.image_ref_height = 0
```
45-59: Consider renaming or restructuring; this is a script, not a pytest test.

This file executes as a top-level script with `print()` statements rather than using pytest assertions. Since it's named `test_mvcrender.py`, pytest will attempt to collect it but find no test functions. Consider either:

- Renaming to `profile_mvcrender.py` or `benchmark_mvcrender.py`
- Or wrapping in a `main()` function with an `if __name__ == "__main__":` guard

tests/analyze_room12.py (2)
77-81: Move the `scipy.ndimage` import to the top of the file.

The `scipy.ndimage` import inside the conditional block is unconventional. Moving it to the top with other imports improves readability and ensures import errors are caught early.

🔎 Proposed fix

At the top of the file (after line 10):

```python
from scipy import ndimage
```

Then remove line 79:

```diff
     # Check if there's a small isolated region
     # Count connected components
-    from scipy import ndimage
```
100-112: Consider documenting the magic numbers for the target region.

The hardcoded pixel boundaries (3350, 3345, 2540, 2535) are specific to the "tiny outline" being investigated. A brief comment explaining what these values represent would help future readers understand the debugging context.
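For instance, the bounds could become named constants with the debugging context spelled out; the constant and function names below are invented for illustration, only the values come from the script:

```python
# Bounding box, in map pixels, of the tiny isolated outline under
# investigation (see review comment). Names are illustrative, not taken
# from analyze_room12.py.
TARGET_REGION_X_MIN = 3345
TARGET_REGION_X_MAX = 3350
TARGET_REGION_Y_MIN = 2535
TARGET_REGION_Y_MAX = 2540


def in_target_region(x, y):
    """Return True when (x, y) falls inside the debugged region."""
    return (
        TARGET_REGION_X_MIN <= x <= TARGET_REGION_X_MAX
        and TARGET_REGION_Y_MIN <= y <= TARGET_REGION_Y_MAX
    )
```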
new_tests/TEST_RESULTS_SUMMARY.md (1)
46-46: Markdown style inconsistency: use asterisks for bold.

Per static analysis, the document uses underscores (`__`) for emphasis on line 46, but asterisks (`**`) are preferred for consistency.

🔎 Proposed fix

```diff
 #### config/test_status_text.py (4/14 tests passing)
 - Translations: Dictionary exists, multiple languages
-- **10 Failures**: StatusText.__init__() signature mismatch (doesn't accept 'hass' parameter)
+- **10 Failures**: StatusText.\_\_init\_\_() signature mismatch (doesn't accept 'hass' parameter)
```

Note: `__init__` should be escaped as `\_\_init\_\_` to avoid being interpreted as bold markup.

tests/test_rand_to_hypfer_compression.py (1)
70-218: Consider converting to a pytest test for better integration.

The test logic is solid with good verification, but it's structured as a standalone script. Converting to pytest would enable:

- Integration with the test suite in `new_tests/`
- Fixtures for test data (rand.json)
- Parameterized testing across multiple segments
- Better CI/CD integration
Example pytest structure
```python
import pytest
from pathlib import Path


@pytest.fixture
def rand_data():
    """Load rand.json test data."""
    test_data_path = Path(__file__).parent / "rand.json"
    with open(test_data_path, "r") as f:
        return json.load(f)


def test_compress_segment_20(rand_data):
    """Test compression of segment 20 from rand.json."""
    # Extract test data
    image_data = rand_data["image"]
    # ... rest of test logic

    # Verify compression
    assert reconstructed == pixel_indices
    assert len(compressed) < len(pixel_indices)
```

tests/analyze_room_connections.py (1)
104-150: Consider optimizing distance calculations for larger rooms.

The brute-force nested loop approach (O(n×m)) works for analysis scripts but could be slow for large rooms. Using `scipy.spatial.distance.cdist` would be significantly faster.

Optimized approach using scipy

```python
from scipy.spatial import distance

if len(room2_indices[0]) > 0 and len(room7_indices[0]) > 0:
    # Stack coordinates into (N, 2) arrays
    room2_coords = np.column_stack((room2_indices[1], room2_indices[0]))
    room7_coords = np.column_stack((room7_indices[1], room7_indices[0]))

    # Compute all pairwise distances efficiently
    distances = distance.cdist(room2_coords, room7_coords)

    # Find minimum and its indices
    min_idx = np.unravel_index(distances.argmin(), distances.shape)
    min_distance = distances[min_idx]
    closest_point_room2 = tuple(room2_coords[min_idx[0]])
    closest_point_room7 = tuple(room7_coords[min_idx[1]])

    _LOGGER.info(f"Minimum distance between Room 2 and Room 7: {min_distance}")
    # ... rest of logging
```

This approach is ~100x faster for rooms with thousands of pixels.
tests/test_hypfer_profiling.py (1)
254-254: Use a conventional function name.

The function is named `__main__()` instead of the conventional `main()`. While functional, this is unconventional and may confuse readers.

Proposed fix

```diff
-def __main__():
+def main():
     # Enable comprehensive profiling
     test = TestHypferImageHandler(enable_profiling=True)
     # ... rest of function


 if __name__ == "__main__":
-    __main__()
+    main()
```

tests/convert_rand_to_hypfer.py (2)
11-38: Consider consolidating duplicate compression logic.

The `compress_pixels` function is very similar to `compress_rand_to_hypfer` in `tests/test_rand_to_hypfer_compression.py` (lines 14-68). Consider extracting this to a shared utility module to avoid code duplication.

Suggested refactor

Create a new file `tests/utils/compression.py`:

```python
"""Shared compression utilities for Rand256 to Hypfer conversion."""


def compress_pixels(pixel_indices, image_width, image_height, image_top=0, image_left=0):
    """Convert Rand256 pixel indices to Hypfer compressed format.

    Args:
        pixel_indices: List of pixel indices from Rand256 format
        image_width: Width of the image
        image_height: Height of the image
        image_top: Top offset (default 0)
        image_left: Left offset (default 0)

    Returns:
        Flat list in Hypfer format: [x, y, length, x, y, length, ...]
    """
    # ... implementation
```

Then import in both files:

```python
from tests.utils.compression import compress_pixels
```
92-96: Make pixel_size configurable rather than hardcoded.

Line 94 hardcodes `pixel_size = 5`. While this is standard for many vacuums, making it configurable would improve flexibility.

Proposed enhancement

```diff
-def convert_rand_to_hypfer(rand_json_path, output_path):
+def convert_rand_to_hypfer(rand_json_path, output_path, pixel_size=5):
     """Convert complete Rand256 JSON to Hypfer format.
+
+    Args:
+        rand_json_path: Path to input Rand256 JSON
+        output_path: Path for output Hypfer JSON
+        pixel_size: Size of each pixel in mm (default 5)
     """
     # Load Rand256 JSON
     with open(rand_json_path, 'r') as f:
         rand_data = json.load(f)

     # Extract image data
     image = rand_data["image"]
     dimensions = image["dimensions"]
     position = image["position"]
     segments_data = image["segments"]

     image_width = dimensions["width"]
     image_height = dimensions["height"]
     image_top = position["top"]
     image_left = position["left"]

-    # Calculate total map size (Hypfer uses absolute coordinates)
-    # Assuming pixelSize = 5 (standard for most vacuums)
-    pixel_size = 5
+    # Calculate total map size (Hypfer uses absolute coordinates)
     map_size_x = (image_width + image_left) * pixel_size
     map_size_y = (image_height + image_top) * pixel_size
```

new_tests/README.md (1)
7-21: Add a language specifier to the fenced code block.

The directory structure block should specify a language for proper rendering. You can use `text` or `plaintext` for directory trees.

🔎 Proposed fix

````diff
-```
+```text
 new_tests/
 ├── conftest.py          # Pytest fixtures and configuration
 ...
````

new_tests/config/test_colors.py (2)
106-112: Strengthen the assertion in the mismatched-lengths test.

According to the implementation in `colors.py`, `add_alpha_to_rgb` returns an empty list `[]` when lengths don't match. The current assertion only verifies it's a list, but doesn't confirm the expected empty-list behavior.

🔎 Proposed fix

```diff
     def test_add_alpha_to_rgb_mismatched_lengths(self):
         """Test adding alpha to RGB colors with mismatched lengths."""
         alpha_channels = [255.0, 128.0]
         rgb_colors = [(255, 0, 0), (0, 255, 0), (0, 0, 255)]
         result = ColorsManagement.add_alpha_to_rgb(alpha_channels, rgb_colors)
-        # Should handle mismatch gracefully
-        assert isinstance(result, list)
+        # Should return empty list when lengths mismatch
+        assert result == []
```
114-122: Consider verifying the specific None alpha handling behavior.

The test checks that the result is a valid RGBA tuple, but the comment "likely default to 255" indicates uncertainty. Consider verifying the actual expected behavior for `None` alpha values to make this test more definitive.

new_tests/IMPLEMENTATION_SUMMARY.md (2)
162-165: Hardcoded local path reduces portability.

The usage examples contain machine-specific paths (`/Users/sandro/PycharmProjects/...`) which won't work for other contributors. Consider using relative paths or environment-agnostic instructions.

````diff
 ### Run All Tests

 ```bash
-cd /Users/sandro/PycharmProjects/Python-package-valetudo-map-parser
-.venv/bin/python -m pytest new_tests/
+cd Python-package-valetudo-map-parser
+python -m pytest new_tests/
```
````

33-54: Add a language specifier to the fenced code block.

Per markdownlint MD040, fenced code blocks should specify a language for proper syntax highlighting and accessibility.

````diff
-```
+```text
 new_tests/
 ├── __init__.py
````

new_tests/config/test_shared.py (1)
41-47: Consider using the public API for test setup.

The test accesses the private `_battery_state` attribute directly. If the implementation changes, this test would break. Consider whether there's a public method to set up the "fully charged" state, or document this as intentional white-box testing.

new_tests/integration/test_basic_integration.py (1)
141-154: RoomStore singleton instances persist across tests.
`RoomStore` uses a class-level `_instances` dict that persists between test runs. The test at lines 152-153 assumes `RoomStore("vacuum_1")` returns the same instance created earlier, but this could fail if a previous test already created a `"vacuum_1"` instance with different data.

Consider adding a fixture to clear `RoomStore._instances` before each test to ensure isolation:

```python
@pytest.fixture(autouse=True)
def cleanup_room_store():
    yield
    RoomStore._instances.clear()
```

new_tests/config/test_types.py (1)
173-252: Singleton stores need cleanup between tests to ensure isolation.
`RoomStore`, `UserLanguageStore`, and `SnapshotStore` maintain class-level state that persists between tests. Tests like `test_different_vacuum_ids` (lines 182-186) and `test_get_all_instances` (lines 243-251) create instances that could affect subsequent tests.

Consider adding cleanup fixtures:

```python
@pytest.fixture(autouse=True)
def cleanup_stores():
    yield
    RoomStore._instances.clear()
    UserLanguageStore._instance = None
    UserLanguageStore._initialized = False
    SnapshotStore._instance = None
```

new_tests/config/test_drawable.py (1)
42-52: Consider testing the boundary of `is_enabled` for unknown elements.

The test verifies all known `DrawableElement` values are enabled by default. Consider adding a test for `is_enabled` behavior with an invalid/unknown element code to document edge-case handling.
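A self-contained stand-in sketching the edge case worth pinning down; the real test would call `DrawingConfig.is_enabled`, and the fallback-to-False behavior shown here is an assumption to verify, not something this review confirms:

```python
# Minimal stub standing in for DrawingConfig, purely to illustrate the
# unknown-element test shape; the real class may behave differently.
class StubDrawingConfig:
    def __init__(self):
        self._enabled = {1: True, 2: True}  # known element codes

    def is_enabled(self, element_code):
        # Assumed behavior: unknown codes fall back to False, no KeyError
        return self._enabled.get(element_code, False)


def test_is_enabled_unknown_element():
    config = StubDrawingConfig()
    assert config.is_enabled(1) is True
    assert config.is_enabled(9999) is False  # unknown element code


test_is_enabled_unknown_element()
```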
93-105: Hardcoded payload paths limit reusability.

The script has hardcoded paths to specific test files. Consider accepting file paths as command-line arguments for flexibility:

```python
def main():
    """Compare payload files."""
    if len(sys.argv) > 1:
        payloads = [(f, f) for f in sys.argv[1:]]
    else:
        payloads = [
            ("tests/map_data_20250728_185945.bin", "Payload 1"),
            # ... existing defaults
        ]
```
1-7: Consider moving this utility script out of the tests/ directory.

This is an analysis/debugging utility rather than an automated test. Placing it in `tests/` may cause test discovery tools to attempt running it. Consider moving it to a `scripts/` or `tools/` directory.

tests/test_room_store.py (2)
15-39: Dynamic module loading works but is fragile.

This approach to loading modules via `importlib` is complex. Consider using pytest with proper package installation (`pip install -e .`) or a simpler `sys.path` manipulation. The current approach assumes a specific directory structure and may break if files are moved.
49-136: Test isolation concern: singleton state persists between test functions.

When running both test functions sequentially, `RoomStore._instances` retains entries from `test_room_store_singleton` when `test_room_store_edge_cases` runs. This means assertions like line 129 (`assert len(all_instances) >= 3`) pass due to accumulated state rather than isolated test behavior.

Consider adding cleanup between tests or using pytest fixtures with autouse cleanup (as done in `new_tests/conftest.py` at lines 153-158).

🔎 Proposed fix: add cleanup between tests

```diff
+def cleanup_room_store():
+    """Clear RoomStore singleton instances."""
+    RoomStore._instances.clear()
+
+
 def test_room_store_singleton():
     """Test that RoomStore maintains singleton per vacuum_id."""
+    cleanup_room_store()  # Ensure clean state
     _LOGGER.info("=" * 60)
```

tests/test_parser.py (2)
103-105: Broad exception handling masks specific errors.

Catching bare `Exception` hides the specific error type, making debugging harder. Consider catching more specific exceptions or at least logging the exception type.

🔎 Proposed fix

```diff
-    except Exception as e:
-        print(f"❌ ERROR in rand25 parser: {e}")
+    except Exception as e:
+        print(f"❌ ERROR in rand25 parser ({type(e).__name__}): {e}")
         results["rand25"] = {"success": False, "error": str(e)}
```

Also applies to: 142-144

198-205: Hardcoded output path may fail when run from different directories.

The output file path `"tests/test_rand256.json"` assumes the script is run from the repository root. Consider using a path relative to the script location.

🔎 Proposed fix

```diff
     # Save results to JSON file
-    output_file = "tests/test_rand256.json"
+    output_file = current_dir / "test_rand256.json"
     try:
-        with open(output_file, "w") as f:
+        with open(str(output_file), "w") as f:
             json.dump(results, f, indent=2, default=str)
```

tests/VALETUDO_MAP_PARSER_TYPES_USAGE_REPORT.md (2)
226-260: Minor markdown formatting: tables should be surrounded by blank lines.

Per markdownlint (MD058), tables should have blank lines before and after them for better rendering compatibility. This affects lines 226, 232, 238, 244, 251, and 256.

337-348: Fenced code block should specify a language.

The directory structure code block lacks a language specifier. Add a language like `text` or `plaintext` for proper rendering.

🔎 Proposed fix

````diff
-```
+```text
 valetudo_map_parser/
 ├── __init__.py  (get_default_font_path)
````

tests/debug_binary.py (1)
162-196: Debug script lacks flexibility for payload file location.

The hardcoded filename `"map_data_20250728_194519.bin"` assumes the file exists in the current working directory. Consider accepting a command-line argument or searching multiple paths like `test_parser.py` does.

tests/test_parser_comparison.py (2)
18-21: Duplicate `load_payload` function exists in test_parser.py.

This function is identical to the one in `tests/test_parser.py` (lines 19-25). Consider extracting it to a shared utility module to avoid duplication.
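The shared module could be as small as the following; the `tests/utils/` location and the exact helper signature are assumptions for illustration:

```python
# Hypothetical shared helper: one load_payload for both parser scripts.
from pathlib import Path


def load_payload(path):
    """Read a raw binary map payload from disk."""
    return Path(path).read_bytes()
```

Both `test_parser.py` and `test_parser_comparison.py` would then import this single definition instead of keeping private copies.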
196-202: Unnecessary directory existence check for the current directory.

The check `if not os.path.exists(payload_dir)` where `payload_dir = "."` will never fail, since the current directory always exists.

🔎 Proposed fix

```diff
 def main():
     """Main profiling function."""
     payload_dir = "."
     runs = 5  # Number of runs for profiling
-
-    if not os.path.exists(payload_dir):
-        print(f"Payload directory {payload_dir} doesn't exist.")
-        print("Run your vacuum first to generate payload files.")
-        return
```

new_tests/conftest.py (2)
16-17: Unused import: `RoomProperty`.

`RoomProperty` is imported but never used in this file. The `sample_room_data` fixture uses plain dicts that match the `RoomProperty` structure but doesn't require the import.

🔎 Proposed fix

```diff
-from valetudo_map_parser.config.types import RoomProperty, RoomStore
+from valetudo_map_parser.config.types import RoomStore
```
86-94: Device info fixture uses a set, which may cause issues with JSON serialization.

The `identifiers` field uses a Python set: `{("mqtt_vacuum_camera", "test_vacuum")}`. Sets are not JSON-serializable. If this fixture is used in contexts requiring JSON serialization, it will fail.

🔎 Proposed fix if JSON serialization is needed

```diff
 def device_info():
     """Return sample device info dictionary."""
     return {
-        "identifiers": {("mqtt_vacuum_camera", "test_vacuum")},
+        "identifiers": [["mqtt_vacuum_camera", "test_vacuum"]],
         "name": "Test Vacuum",
         "manufacturer": "Valetudo",
         "model": "Test Model",
     }
```

tests/rooms_test.py (3)
38-68: Duplicate `RoomStore` implementation.

This `RoomStore` class duplicates the one in `SCR/valetudo_map_parser/config/types.py`. Having two implementations can lead to inconsistencies. Consider importing from the canonical location instead.
253-261: Fragile pattern: checking `'mask_offset' in locals()`.

Using `locals()` to check variable existence and then `del mask_offset` is fragile and error-prone. If the control flow changes, `mask_offset` might not be deleted and could affect subsequent iterations.

🔎 Proposed fix: use explicit variable initialization

```diff
+    mask_offset = None  # Initialize before the if/else block
+
     if not pixels:
         # Skip if no pixels
         mask = np.zeros((1, 1), dtype=np.uint8)
     else:
         # ... existing code ...
         # Store the offset for later use
-        mask_offset = (min_x, min_y)
+        mask_offset = (min_x, min_y)
     else:
         mask = np.zeros((1, 1), dtype=np.uint8)

     # Time contour extraction
     start = time.time()

     # Extract contour from the mask
-    if "mask_offset" in locals():
+    if mask_offset is not None:
         # If we're using a local mask, adjust coordinates
         outline = convex_hull_outline(mask)
         if outline:
             offset_x, offset_y = mask_offset
             outline = [(x + offset_x, y + offset_y) for (x, y) in outline]
-        del mask_offset
+        mask_offset = None  # Reset for next iteration
```
274-276: Redundant list comprehension does nothing.

The comprehension `[(x, y) for (x, y) in outline]` creates a new list identical to `outline`. It can be removed.

🔎 Proposed fix

```diff
     # Use coordinates as-is without flipping Y coordinates
     # This prevents the large Y values caused by height - 1 - y transformation
-    outline = [(x, y) for (x, y) in outline]
+    # outline is already in correct format
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (13)
- `tests/map_data_20250728_185945.bin` is excluded by `!**/*.bin`
- `tests/map_data_20250728_193950.bin` is excluded by `!**/*.bin`
- `tests/map_data_20250728_194519.bin` is excluded by `!**/*.bin`
- `tests/map_data_20250728_204538.bin` is excluded by `!**/*.bin`
- `tests/map_data_20250728_204552.bin` is excluded by `!**/*.bin`
- `tests/map_data_20250729_084141.bin` is excluded by `!**/*.bin`
- `tests/map_data_20251002_165115.bin` is excluded by `!**/*.bin`
- `tests/map_data_20251002_165249.bin` is excluded by `!**/*.bin`
- `tests/map_data_20251002_165250.bin` is excluded by `!**/*.bin`
- `tests/map_data_20251002_165251.bin` is excluded by `!**/*.bin`
- `tests/map_data_20251002_165252.bin` is excluded by `!**/*.bin`
- `tests/map_data_20251002_165253.bin` is excluded by `!**/*.bin`
- `tests/map_data_20251003_134705.bin` is excluded by `!**/*.bin`
📒 Files selected for processing (46)
- `.github/workflows/renovate.json` (1 hunks)
- `SCR/valetudo_map_parser/__init__.py` (5 hunks)
- `SCR/valetudo_map_parser/config/optimized_element_map.py` (0 hunks)
- `SCR/valetudo_map_parser/config/shared.py` (0 hunks)
- `SCR/valetudo_map_parser/rand256_handler.py` (1 hunks)
- `new_tests/FIXES_APPLIED.md` (1 hunks)
- `new_tests/IMPLEMENTATION_SUMMARY.md` (1 hunks)
- `new_tests/README.md` (1 hunks)
- `new_tests/TEST_RESULTS_SUMMARY.md` (1 hunks)
- `new_tests/__init__.py` (1 hunks)
- `new_tests/config/__init__.py` (1 hunks)
- `new_tests/config/test_colors.py` (1 hunks)
- `new_tests/config/test_drawable.py` (1 hunks)
- `new_tests/config/test_shared.py` (1 hunks)
- `new_tests/config/test_status_text.py` (1 hunks)
- `new_tests/config/test_types.py` (1 hunks)
- `new_tests/conftest.py` (1 hunks)
- `new_tests/handlers/__init__.py` (1 hunks)
- `new_tests/integration/__init__.py` (1 hunks)
- `new_tests/integration/test_basic_integration.py` (1 hunks)
- `new_tests/pytest.ini` (1 hunks)
- `new_tests/test_map_data.py` (1 hunks)
- `pyproject.toml` (1 hunks)
- `tests/PROFILING_README.md` (1 hunks)
- `tests/RAND_TO_HYPFER_COMPRESSION_RESULTS.md` (1 hunks)
- `tests/VALETUDO_MAP_PARSER_TYPES_USAGE_REPORT.md` (1 hunks)
- `tests/analyze_room12.py` (1 hunks)
- `tests/analyze_room_connections.py` (1 hunks)
- `tests/analyze_segment_walls.py` (1 hunks)
- `tests/benchmark_margins.py` (1 hunks)
- `tests/compare_payloads.py` (1 hunks)
- `tests/convert_rand_to_hypfer.py` (1 hunks)
- `tests/debug_binary.py` (1 hunks)
- `tests/debug_manage_rooms.py` (1 hunks)
- `tests/glossyhardtofindnarwhal.json` (1 hunks)
- `tests/profiling_requirements.txt` (1 hunks)
- `tests/rand_rooms_test.py` (1 hunks)
- `tests/rooms_test.py` (1 hunks)
- `tests/test_floor_data.py` (1 hunks)
- `tests/test_hypfer_profiling.py` (1 hunks)
- `tests/test_mvcrender.py` (1 hunks)
- `tests/test_parser.py` (1 hunks)
- `tests/test_parser_comparison.py` (1 hunks)
- `tests/test_rand_to_hypfer_compression.py` (1 hunks)
- `tests/test_room_store.py` (1 hunks)
- `tests/test_status_text_performance.py` (1 hunks)
💤 Files with no reviewable changes (2)
- SCR/valetudo_map_parser/config/shared.py
- SCR/valetudo_map_parser/config/optimized_element_map.py
🧰 Additional context used
🧠 Learnings (2)
📚 Learning: 2025-12-19T10:03:44.321Z
Learnt from: sca075
Repo: sca075/Python-package-valetudo-map-parser PR: 29
File: SCR/valetudo_map_parser/hypfer_draw.py:438-462
Timestamp: 2025-12-19T10:03:44.321Z
Learning: In SCR/valetudo_map_parser/hypfer_draw.py, the `_check_room_with_corners` method correctly sets `self.img_h.robot_in_room` before validation. This state mutation pattern during room iteration is intentional and works correctly with the coordinate system used by the Hypfer handler, similar to the Rand256 handler pattern.
Applied to files:

- `tests/convert_rand_to_hypfer.py`
- `new_tests/integration/test_basic_integration.py`
- `tests/rand_rooms_test.py`
📚 Learning: 2025-12-19T06:48:40.379Z
Learnt from: sca075
Repo: sca075/Python-package-valetudo-map-parser PR: 29
File: SCR/valetudo_map_parser/rand256_handler.py:445-461
Timestamp: 2025-12-19T06:48:40.379Z
Learning: In SCR/valetudo_map_parser/rand256_handler.py, the bounding box comparison `self.robot_in_room["right"] <= int(robot_x) <= self.robot_in_room["left"]` is correct for the Rand256 coordinate system, which uses inverted or special axis conventions compared to standard Cartesian coordinates. The "left" value can be larger than "right" in this coordinate system.
Applied to files:
tests/rand_rooms_test.py
🧬 Code graph analysis (22)
`tests/test_room_store.py` (2)
- `tests/rooms_test.py` (4): `RoomStore` (38-68), `get_rooms` (54-55), `set_rooms` (57-58), `get_all_instances` (67-68)
- `SCR/valetudo_map_parser/config/types.py` (1): `room_names` (156-177)

`new_tests/config/test_types.py` (3)
- `tests/test_floor_data.py` (6): `FloorData` (31-43), `TrimsData` (8-26), `to_dict` (25-26), `to_dict` (42-43), `from_dict` (36-40), `from_list` (16-23)
- `tests/rooms_test.py` (5): `RoomStore` (38-68), `get_rooms` (54-55), `set_rooms` (57-58), `get_rooms_count` (60-64), `get_all_instances` (67-68)
- `SCR/valetudo_map_parser/config/types.py` (11): `SnapshotStore` (235-272), `TrimCropData` (64-103), `UserLanguageStore` (186-231), `to_list` (91-93), `set_user_language` (202-205), `get_user_language` (207-210), `get_all_languages` (212-217), `async_set_snapshot_save_data` (252-257), `async_get_snapshot_save_data` (259-262), `async_set_vacuum_json` (269-272), `async_get_vacuum_json` (264-267)

`new_tests/config/test_colors.py` (1)
- `SCR/valetudo_map_parser/config/colors.py` (8): `ColorsManagement` (222-411), `DefaultColors` (162-219), `SupportedColor` (141-159), `room_key` (158-159), `get_rgba` (216-219), `add_alpha_to_rgb` (238-291), `initialize_user_colors` (360-375), `initialize_rooms_colors` (377-393)

`new_tests/config/test_drawable.py` (1)
- `SCR/valetudo_map_parser/config/drawable_elements.py` (5): `DrawableElement` (21-52), `DrawingConfig` (55-290), `is_enabled` (171-173), `get_property` (182-188), `set_property` (175-180)

`tests/analyze_segment_walls.py` (2)
- `tests/rand_rooms_test.py` (1): `load_test_data` (287-298)
- `tests/rooms_test.py` (1): `load_test_data` (71-130)

`tests/test_parser_comparison.py` (1)
- `SCR/valetudo_map_parser/config/rand256_parser.py` (1): `parse_data` (517-531)

`new_tests/config/test_status_text.py` (1)
- `SCR/valetudo_map_parser/config/status_text/status_text.py` (2): `StatusText` (19-123), `get_status_text` (108-123)

`tests/analyze_room_connections.py` (1)
- `tests/analyze_room12.py` (1): `main` (20-114)

`tests/compare_payloads.py` (1)
- `SCR/valetudo_map_parser/config/rand256_parser.py` (2): `RRMapParser` (10-531), `parse_data` (517-531)

`new_tests/test_map_data.py` (1)
- `SCR/valetudo_map_parser/map_data.py` (10): `HyperMapData` (745-849), `ImageData` (150-434), `RandImageData` (437-741), `find_layers` (235-280), `find_points_entities` (283-308), `find_paths_entities` (311-328), `find_zone_entities` (331-347), `get_obstacles` (194-232), `get_rrm_image_size` (659-667), `get_rrm_segments_ids` (734-741)

`new_tests/integration/test_basic_integration.py` (2)
- `SCR/valetudo_map_parser/config/shared.py` (2): `CameraSharedManager` (250-354), `get_instance` (343-348)
- `tests/rooms_test.py` (2): `RoomStore` (38-68), `get_rooms` (54-55)

`tests/debug_binary.py` (1)
- `SCR/valetudo_map_parser/config/rand256_parser.py` (2): `RRMapParser` (10-531), `parse_data` (517-531)

`tests/test_hypfer_profiling.py` (2)
- `SCR/valetudo_map_parser/config/shared.py` (2): `CameraSharedManager` (250-354), `get_instance` (343-348)
- `SCR/valetudo_map_parser/hypfer_handler.py` (1): `HypferMapImageHandler` (37-470)

`tests/test_rand_to_hypfer_compression.py` (1)
- `tests/convert_rand_to_hypfer.py` (1): `main` (273-314)

`tests/test_parser.py` (1)
- `SCR/valetudo_map_parser/config/rand256_parser.py` (2): `RRMapParser` (10-531), `parse_data` (517-531)

`tests/benchmark_margins.py` (1)
- `SCR/valetudo_map_parser/config/utils.py` (1): `BaseHandler` (55-686)

`tests/test_status_text_performance.py` (2)
- `SCR/valetudo_map_parser/config/status_text/status_text.py` (2): `StatusText` (19-123), `get_status_text` (108-123)
- `SCR/valetudo_map_parser/config/shared.py` (1): `vacuum_bat_charged` (130-142)

`tests/analyze_room12.py` (1)
- `tests/analyze_room_connections.py` (1): `main` (23-198)

`tests/test_mvcrender.py` (1)
- `tests/benchmark_margins.py` (1): `DummyHandler` (11-36)

`new_tests/config/test_shared.py` (1)
- `SCR/valetudo_map_parser/config/shared.py` (10): `CameraSharedManager` (250-354), `vacuum_bat_charged` (130-142), `_compose_obstacle_links` (145-167), `update_user_colors` (169-171), `update_rooms_colors` (177-179), `reset_trims` (185-188), `batch_update` (190-193), `batch_get` (195-197), `generate_attributes` (199-227), `get_instance` (343-348)

`new_tests/conftest.py` (4)
- `tests/test_floor_data.py` (1): `CameraShared` (47-52)
- `SCR/valetudo_map_parser/config/shared.py` (2): `CameraSharedManager` (250-354), `get_instance` (343-348)
- `tests/rooms_test.py` (1): `RoomProperty` (27-32)
- `SCR/valetudo_map_parser/config/types.py` (1): `clear` (336-343)

`SCR/valetudo_map_parser/__init__.py` (2)
- `tests/test_floor_data.py` (2): `FloorData` (31-43), `TrimsData` (8-26)
- `SCR/valetudo_map_parser/config/types.py` (2): `FloorData` (347-363), `TrimsData` (289-343)
🪛 LanguageTool
new_tests/TEST_RESULTS_SUMMARY.md
[uncategorized] ~94-~94: If this is a compound adjective that modifies the following noun, use a hyphen.
Context: ... documentation ## Recommendations ### High Priority Fixes 1. Fix StatusText constructor sig...
(EN_COMPOUND_ADJECTIVE_INTERNAL)
[grammar] ~116-~116: Use a hyphen to join words.
Context: ...ase coverage** with edge cases and error handling tests 4. Run with coverage ...
(QB_NEW_EN_HYPHEN)
tests/VALETUDO_MAP_PARSER_TYPES_USAGE_REPORT.md
[style] ~325-~325: To make your text as clear as possible to all readers, do not use this foreign term. Possible alternatives are “not publicly”, “privately”, “secretly”, “confidentially”, and “behind closed doors”. If you mean the English word ‘camera’, a pronoun or a determiner is probably missing before the word.
Context: ...m Impact:** - ColorsManagement - Used in camera.py - RRMapParser - Used in decompress...
(IN_CAMERA)
new_tests/IMPLEMENTATION_SUMMARY.md
[grammar] ~117-~117: Ensure spelling is correct
Context: ...oint generation ## Test Data Used ### Hypfer Vacuum (JSON) - test.json - Main test...
(QB_NEW_EN_ORTHOGRAPHY_ERROR_IDS_1)
tests/PROFILING_README.md
[grammar] ~45-~45: Ensure spelling is correct
Context: ...h cd tests python test_rand.py ### Hypfer Vacuum Profilingbash cd tests pytho...
(QB_NEW_EN_ORTHOGRAPHY_ERROR_IDS_1)
tests/RAND_TO_HYPFER_COMPRESSION_RESULTS.md
[grammar] ~18-~18: Ensure spelling is correct
Context: ...60, 30361, 30362, ...]` ### Compressed Hypfer Format - Compressed values: 543 val...
(QB_NEW_EN_ORTHOGRAPHY_ERROR_IDS_1)
new_tests/README.md
[grammar] ~105-~105: Ensure spelling is correct
Context: ... - Test one thing per test method - Use parametrized tests for testing multiple inputs - Cle...
(QB_NEW_EN_ORTHOGRAPHY_ERROR_IDS_1)
🪛 markdownlint-cli2 (0.18.1)
new_tests/TEST_RESULTS_SUMMARY.md
46-46: Strong style
Expected: asterisk; Actual: underscore
(MD050, strong-style)
46-46: Strong style
Expected: asterisk; Actual: underscore
(MD050, strong-style)
tests/VALETUDO_MAP_PARSER_TYPES_USAGE_REPORT.md
226-226: Tables should be surrounded by blank lines
(MD058, blanks-around-tables)
232-232: Tables should be surrounded by blank lines
(MD058, blanks-around-tables)
238-238: Tables should be surrounded by blank lines
(MD058, blanks-around-tables)
244-244: Tables should be surrounded by blank lines
(MD058, blanks-around-tables)
251-251: Tables should be surrounded by blank lines
(MD058, blanks-around-tables)
256-256: Tables should be surrounded by blank lines
(MD058, blanks-around-tables)
337-337: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
372-372: Emphasis used instead of a heading
(MD036, no-emphasis-as-heading)
new_tests/IMPLEMENTATION_SUMMARY.md
33-33: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
new_tests/README.md
7-7: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🔇 Additional comments (43)
tests/debug_manage_rooms.py (2)
1-4: LGTM! The shebang and module docstring are appropriate for a debug utility script.

6-10: Imports are valid. All imports are correct. The `tests.manage_rooms` module exists and exports an `async def main()` coroutine, making the import of `asyncio` appropriate for running it.

`tests/rand_rooms_test.py` (3)

40-67: LGTM! The class initialization is clean with proper docstrings and type hints. The optional `DrawingConfig` parameter provides good flexibility for controlling room processing.

68-72: LGTM! The chunking logic is correct and handles edge cases properly (empty list, chunk size larger than data).

301-316: LGTM! The hardcoded test destinations data provides a clear fixture for testing room name resolution. The structure matches expected destination format.
tests/analyze_segment_walls.py (1)
127-132: Clarify the intended behavior of adjacency detection. The code uses two design patterns that should be explicitly documented:

1. **8-connectivity (diagonal adjacency)**: The condition `abs(sx - wx) <= 1 and abs(sy - wy) <= 1` treats diagonally adjacent pixels as neighbors. Confirm whether this is the intended definition of "adjacent" for your use case, or if 4-connectivity (orthogonal neighbors only) is needed.
2. **Single wall pairing**: The `break` at line 132 means each segment pixel is paired with at most the first adjacent wall pixel found. If a segment pixel touches multiple walls, only one pair is recorded. Verify whether this is intentional or if all adjacent pairs should be captured.

Add inline comments to document whichever behavior is intended.
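The difference between the two adjacency definitions can be sketched in isolation. This is an illustrative snippet, not the project's code; the function names are hypothetical:

```python
# Illustrative sketch: 8-connectivity (diagonals count) vs. 4-connectivity
# (orthogonal neighbors only) for a pair of (x, y) pixel coordinates.

def adjacent_8(a: tuple[int, int], b: tuple[int, int]) -> bool:
    """Chebyshev distance <= 1: diagonals count as neighbors."""
    return a != b and abs(a[0] - b[0]) <= 1 and abs(a[1] - b[1]) <= 1

def adjacent_4(a: tuple[int, int], b: tuple[int, int]) -> bool:
    """Manhattan distance == 1: only orthogonal neighbors count."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1]) == 1

# A diagonal pair is adjacent under 8-connectivity but not 4-connectivity.
print(adjacent_8((0, 0), (1, 1)))  # True
print(adjacent_4((0, 0), (1, 1)))  # False
```

Whichever definition the script intends, writing it as a named predicate like this makes the choice explicit at the call site.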
new_tests/__init__.py (1)
1-2: LGTM! Standard test package initialization with appropriate docstring.
new_tests/config/__init__.py (1)
1-2: LGTM! Standard test package initialization with appropriate docstring.
pyproject.toml (1)
3-3: Verify the beta version designation is appropriate. The version bump to `0.1.14b0` uses a beta suffix. Since this PR exposes new public types (`FloorData` and `TrimsData`) to the API, consider whether:

- A beta pre-release is appropriate, or
- A minor version bump (e.g., `0.2.0`) better reflects the expanded public API surface

Beta versions are typically used for pre-release testing, but adding public types is a standard minor version change in semantic versioning.
new_tests/integration/__init__.py (1)
1-2: LGTM! Standard integration test package initialization.
SCR/valetudo_map_parser/rand256_handler.py (1)
546-546: LGTM! No functional changes detected in this file.
tests/profiling_requirements.txt (1)
1-13: LGTM! The profiling dependencies are well-documented with clear installation instructions and appropriate version constraints for development tooling.
new_tests/pytest.ini (1)
1-10: LGTM! Standard pytest configuration with appropriate asyncio support settings for async test execution.
.github/workflows/renovate.json (1)
2-2: Consider whether full dry-run mode is intentional. The configuration sets `"dryRun": "full"`, which means Renovate will not create any actual pull requests - it will only log what it would do. This is useful for testing but should be removed (`"dryRun": null`, or omit the field) when you want Renovate to actively manage dependencies.

`tests/test_mvcrender.py` (1)
251-265: LGTM! The memory leak analysis logic with tiered thresholds (10KB/100KB per iteration) is reasonable for detecting leaks while accounting for Python overhead.
tests/analyze_room12.py (1)
20-41: LGTM! The main function structure with proper error handling, logging, and early return pattern is well implemented. The script follows the same pattern as `analyze_room_connections.py`.

`new_tests/FIXES_APPLIED.md` (1)
1-186: LGTM! Well-structured documentation of the test fixes. The before/after code examples, categorized fixes, and final test count summary provide clear context for the changes made.
tests/glossyhardtofindnarwhal.json (1)
1-1: LGTM! Valid ValetudoMap test fixture with comprehensive coverage of map layers (floor, wall, segments) and entities (path, charger, robot position, no-go areas, virtual walls). Good addition for testing map parsing functionality.
tests/test_rand_to_hypfer_compression.py (1)
14-68: LGTM! Solid run-length encoding implementation. The compression logic correctly identifies consecutive horizontal pixel runs and produces the Hypfer format. Edge cases (empty input, final run) are handled properly.
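The run-length idea being reviewed can be sketched in a few lines. This is an illustrative reimplementation working on flat pixel indices, not the project's code, and it deliberately ignores the exact on-wire layout of the Hypfer format:

```python
def compress_runs(pixels: list[int]) -> list[tuple[int, int]]:
    """Collapse sorted pixel indices into (start, run_length) pairs."""
    runs: list[tuple[int, int]] = []
    for p in sorted(pixels):
        if runs and p == runs[-1][0] + runs[-1][1]:
            start, length = runs[-1]
            runs[-1] = (start, length + 1)  # extend the current run
        else:
            runs.append((p, 1))  # start a new run
    return runs

# Consecutive indices collapse into a single (start, count) pair.
print(compress_runs([30359, 30360, 30361, 30362, 40000]))
# [(30359, 4), (40000, 1)]
```

An empty input yields an empty list, and the final run is emitted naturally because runs are extended in place rather than flushed at the end.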
tests/RAND_TO_HYPFER_COMPRESSION_RESULTS.md (1)
1-79: LGTM! Clear and comprehensive documentation. The compression results are well-documented with concrete metrics, implementation strategies, and next steps. The structure makes it easy to understand the problem, solution, and impact.
Note: The LanguageTool warning about "Hypfer" spelling is a false positive - it's a proper noun referring to the Hypfer vacuum map format.
new_tests/handlers/__init__.py (1)
1-2: LGTM! Simple package initializer correctly sets up the namespace for handler tests.
tests/PROFILING_README.md (1)
1-152: LGTM! Well-structured profiling documentation covering memory, CPU, and system profiling with clear usage examples and customization options. The static analysis hint about "Hypfer" is a false positive—it's a proper name for the vacuum firmware format.
new_tests/README.md (1)
23-110: LGTM! Clear documentation with helpful examples for running tests, good coverage descriptions, and well-defined best practices.
new_tests/config/test_colors.py (3)
8-28: LGTM! Good coverage of `SupportedColor` enum values and the `room_key` static method with representative index values (0, 5, 15).

31-82: LGTM! Comprehensive tests for `DefaultColors` covering RGB format validation, room colors, alpha values, and RGBA conversion including the unknown key fallback.

129-153: LGTM! Good tests for `initialize_user_colors` and `initialize_rooms_colors` that verify return types and RGBA tuple structure.

`SCR/valetudo_map_parser/__init__.py` (3)
13-26: LGTM! FloorData and TrimsData correctly exposed. The new types are properly imported from `config.types` and integrated into the existing import structure. This enables external projects to use these types for multi-floor support.

1-2: Version bump to 0.1.14 noted. The version increment appropriately accompanies the public API expansion.
165-177: Verify whether test-local class definitions should import from the newly exposed public API. The codebase contains duplicate definitions of `FloorData` and `TrimsData`: simplified versions in the test file (lines 8, 31) and more complete versions in the main module (lines 289, 347). While the test versions are intentionally simplified—marked as "Replicate FloorData for testing"—now that both classes have been added to the `__all__` list in the public API, the test file should import these from the module rather than maintaining separate definitions. This avoids duplication and ensures tests stay synchronized with any future API changes.

`new_tests/config/test_shared.py` (2)
1-171: Good test coverage for CameraShared and CameraSharedManager. The tests comprehensively cover initialization, state transitions (battery/charging), obstacle link composition with edge cases, color management, trims reset, and async batch operations. The documentation of non-singleton behavior for CameraSharedManager is accurate and helpful.
143-157: No changes needed. The test correctly demonstrates that CameraSharedManager creates independent instances without requiring fixture cleanup for `_instances` persistence, since each manager maintains its own isolated instance dictionary. Likely an incorrect or invalid review comment.
new_tests/integration/test_basic_integration.py (1)
14-66: Good integration test structure for Hypfer vacuum type. Tests cover the essential integration path: manager creation, handler initialization, image generation, and state verification. The defensive checks for calibration points (accepting both `None` and `list`) appropriately handle cases where image generation may have partial success.
144-170: Good test coverage for FloorData - aligns with PR objectives. The tests for `FloorData` properly validate initialization, `from_dict`, and `to_dict` methods, including the nested `TrimsData` structure. This ensures the newly exposed public API types work correctly for the MQTT Vacuum Camera integration use case mentioned in the PR description.
78-92: Test expectation for "Ready" appears correct. The test correctly sets up a "docked at 100%" state. Per the `vacuum_bat_charged()` logic, at 100% battery while docked, `_battery_state` becomes "charged" (not "charging"), so `vacuum_bat_charged()` returns `False`, triggering the "Ready" message in `_docked_ready()`.
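The state machine described above can be sketched in isolation. The function and state names here are illustrative stand-ins, not the library's actual API:

```python
def battery_state(vacuum_state: str, battery_level: int) -> str:
    """Docked at 100% -> 'charged'; docked below 100% -> 'charging'."""
    if vacuum_state == "docked":
        return "charged" if battery_level >= 100 else "charging"
    return "discharging"

def bat_charged(state: str) -> bool:
    # True only while actively charging; a fully charged docked
    # vacuum reads as "Ready" rather than "Charging".
    return state == "charging"

assert not bat_charged(battery_state("docked", 100))  # "Ready" branch
assert bat_charged(battery_state("docked", 80))       # "Charging" branch
```

Modeling the transition this way makes the test expectation explicit: 100% while docked is a terminal "charged" state, not a charging one.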
126-168: Good async image creation tests with numpy. The tests comprehensively verify `create_empty_image` produces correctly shaped RGBA arrays, handles different background colors, and test `from_json_to_image` for pixel drawing. The shape assertion `(height, width, 4)` correctly reflects numpy's row-major ordering.
260-262: Standalone execution pattern is appropriate for debugging. The `if __name__ == "__main__":` block allows running tests standalone, which is useful for development. However, these tests could also be structured to work with pytest by removing the logging setup that conflicts with pytest's capture.
15-32: Well-implemented hex dump utility. The `hex_dump` function correctly handles edge cases (data shorter than requested length, non-printable characters) and produces properly formatted output.
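A minimal version of such a utility might look like the following. This is an illustrative sketch, not the `debug_binary.py` implementation:

```python
def hex_dump(data: bytes, length: int = 16) -> str:
    """Format up to `length` bytes as hex plus a printable-ASCII column."""
    chunk = data[:length]  # tolerate data shorter than requested
    hex_part = " ".join(f"{b:02x}" for b in chunk)
    # Replace non-printable bytes with '.' in the ASCII column.
    ascii_part = "".join(chr(b) if 32 <= b < 127 else "." for b in chunk)
    return f"{hex_part:<{length * 3}} {ascii_part}"

print(hex_dump(b"rr\x00\x01"))
```

Slicing with `data[:length]` is what makes short inputs safe, and the `32 <= b < 127` check is the usual printable-ASCII guard.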
1-8: Clean test structure with proper imports. Good use of pytest and clear organization into test classes by the module being tested.

161-165: Test uses fixture `hypfer_json_data` defined in conftest.py. This correctly references the fixture from `new_tests/conftest.py` (lines 46-50). The test structure is appropriate.

144-148: Test assertion is correct. The function returns an empty list `[]` when called with an empty dict, not `None`. The `get_rrm_segments_ids` implementation uses chained `.get()` calls with default values, which prevents a `KeyError` from being raised with an empty dict input. The `KeyError` exception handler would only be triggered in cases of malformed data structures, not with the empty dict tested here. Likely an incorrect or invalid review comment.
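The chained-`.get()` pattern described above can be illustrated in isolation. The payload structure shown here is assumed for illustration, not taken from the actual Rand256 schema:

```python
def get_segment_ids(json_data: dict) -> list[int]:
    """Chained .get() calls let missing keys fall through to a default."""
    return (
        json_data.get("image", {})   # missing "image" -> {}
        .get("segments", {})          # missing "segments" -> {}
        .get("id", [])                # missing "id" -> []
    )

# An empty dict falls through every level and returns [], never raising.
assert get_segment_ids({}) == []
assert get_segment_ids({"image": {"segments": {"id": [1, 2]}}}) == [1, 2]
```

Each `.get()` supplies a default of the type the next lookup expects, which is why no intermediate step can raise `KeyError`.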
tests/test_parser_comparison.py (1)
58-65: Statistical calculations are correctly implemented. Good handling of edge case where `len(times) <= 1` would cause `statistics.stdev` to raise an error.
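The guard being praised can be sketched as follows; the function name is illustrative, not the test file's actual helper:

```python
import statistics

def timing_stats(times: list[float]) -> dict:
    """statistics.stdev needs at least two samples; guard the single-sample case."""
    return {
        "mean": statistics.mean(times),
        "stdev": statistics.stdev(times) if len(times) > 1 else 0.0,
    }

print(timing_stats([1.0]))        # stdev falls back to 0.0
print(timing_stats([1.0, 3.0]))   # stdev computed normally
```

Without the `len(times) > 1` check, `statistics.stdev` raises `StatisticsError` on a single sample.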
153-158: Excellent test isolation with autouse singleton cleanup. This autouse fixture ensures `RoomStore._instances` is cleared after each test, preventing state leakage between tests. This is the correct pattern for singleton cleanup.
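The failure mode this fixture prevents can be shown with a minimal stand-in for the registry (class and key names are hypothetical, not the library's):

```python
class Store:
    """Minimal per-key singleton registry, like the RoomStore pattern."""
    _instances: dict[str, "Store"] = {}

    def __new__(cls, key: str):
        if key not in cls._instances:
            cls._instances[key] = super().__new__(cls)
        return cls._instances[key]

a = Store("vacuum_1")
b = Store("vacuum_1")
assert a is b                  # same key -> same cached instance

Store._instances.clear()       # what an autouse fixture would do after each test
c = Store("vacuum_1")
assert c is not a              # fresh instance after cleanup
```

Without the `clear()` between tests, the first test to construct a `Store("vacuum_1")` would silently leak its state into every later test that uses the same key.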
137-153: Convex hull implementation is correct and handles edge cases well. Good handling of cases with fewer than 3 points and proper conversion of numpy types to Python integers.
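For reference, a monotone-chain convex hull with the same small-input guard might look like this. It is an illustrative sketch, not the `rooms_test.py` code (which reportedly works with numpy types):

```python
def convex_hull(points: list[tuple[int, int]]) -> list[tuple[int, int]]:
    """Andrew's monotone chain; with fewer than 3 unique points, return them as-is."""
    pts = sorted(set(points))
    if len(pts) < 3:
        return pts

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower: list[tuple[int, int]] = []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    upper: list[tuple[int, int]] = []
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]  # counter-clockwise hull, no duplicate endpoints

print(convex_hull([(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)]))
# [(0, 0), (2, 0), (2, 2), (0, 2)]
```

The interior point `(1, 1)` is dropped, and the `len(pts) < 3` early return mirrors the edge-case handling the review highlights.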
| "repositories": [ | ||
| "renovatebot/github-action", | ||
| "renovate-tests/cocoapods1", | ||
| "renovate-tests/gomod1" |
Fix the repositories list - it references incorrect/test repositories.
The repositories array lists test and example repositories that don't match this project:
- `renovatebot/github-action`
- `renovate-tests/cocoapods1`
- `renovate-tests/gomod1`
This configuration will not manage dependencies for the current repository (sca075/Python-package-valetudo-map-parser).
🔎 Proposed fix
"repositories": [
- "renovatebot/github-action",
- "renovate-tests/cocoapods1",
- "renovate-tests/gomod1"
+ "sca075/Python-package-valetudo-map-parser"
],📝 Committable suggestion
🤖 Prompt for AI Agents
In .github/workflows/renovate.json around lines 9 to 12, the repositories array
contains unrelated test/example repos; replace that list with the correct
repository reference for this project (e.g.
"sca075/Python-package-valetudo-map-parser" or the full GitHub path
"github.com/sca075/Python-package-valetudo-map-parser"), or remove the
repositories field entirely so Renovate defaults to the current repository;
update the JSON accordingly and ensure the resulting value is a valid JSON array
of repository strings.
```python
@pytest.mark.asyncio
async def test_get_status_text_different_languages(self, camera_shared, test_image):
    """Test status text in different languages."""
    camera_shared.vacuum_state = "docked"
    camera_shared.vacuum_battery = 100
    camera_shared.vacuum_connection = True
    camera_shared.show_vacuum_state = True

    status_text = StatusText(camera_shared)

    for lang_code in translations.keys():
        camera_shared.user_language = lang_code
        text, size = await status_text.get_status_text(test_image)
        assert isinstance(text, list)
        assert len(text) > 0
```
StatusText language is cached at initialization - test may not verify language switching.
`StatusText.__init__` caches `self._language` and `self._lang_map` at construction time. Changing `camera_shared.user_language` after creating the StatusText instance won't affect the cached language. The test should create a new StatusText instance for each language:
async def test_get_status_text_different_languages(self, camera_shared, test_image):
"""Test status text in different languages."""
camera_shared.vacuum_state = "docked"
camera_shared.vacuum_battery = 100
camera_shared.vacuum_connection = True
camera_shared.show_vacuum_state = True
- status_text = StatusText(camera_shared)
-
for lang_code in translations.keys():
camera_shared.user_language = lang_code
+ status_text = StatusText(camera_shared)
text, size = await status_text.get_status_text(test_image)
assert isinstance(text, list)
        assert len(text) > 0

🤖 Prompt for AI Agents
In new_tests/config/test_status_text.py around lines 156 to 170, the test
creates one StatusText instance and then mutates camera_shared.user_language per
iteration, but StatusText caches language at __init__, so changing
camera_shared.user_language after construction has no effect; update the test to
construct a fresh StatusText inside the loop for each lang_code (set
camera_shared.user_language then create StatusText and call get_status_text) so
the instance picks up the new language before invoking get_status_text.
```python
@pytest.fixture
def event_loop():
    """Create an event loop for async tests."""
    loop = asyncio.new_event_loop()
    yield loop
    loop.close()
```
🧩 Analysis chain
🏁 Script executed:

```shell
# Find and examine pytest configuration and conftest
find new_tests -name "pytest.ini" -o -name "conftest.py" -o -name "pyproject.toml" 2>/dev/null | head -20
```

Repository: sca075/Python-package-valetudo-map-parser
Length of output: 127

🏁 Script executed:

```shell
cat -n new_tests/pytest.ini
```

Repository: sca075/Python-package-valetudo-map-parser
Length of output: 360

🏁 Script executed:

```shell
cat -n new_tests/conftest.py | head -150 | tail -20
```

Repository: sca075/Python-package-valetudo-map-parser
Length of output: 711
Remove the custom event_loop fixture; pytest-asyncio in auto mode automatically manages event loops.

Since pytest.ini configures `asyncio_mode = auto` with `asyncio_default_fixture_loop_scope = function`, redefinition of the `event_loop` fixture is deprecated. Users requiring a class-scoped or module-scoped asyncio event loop should mark the corresponding class or module with `asyncio_event_loop`. In auto mode, pytest-asyncio automatically adds the `asyncio` marker to all asynchronous test functions and takes ownership of all async fixtures. The custom fixture at lines 145-150 is unnecessary and should be removed. Remove the re-implemented `event_loop` fixture. For all tests and fixtures affected by it, configure the loop scope using either the `pytest.mark.asyncio(loop_scope="…")` marker for async tests or `@pytest_asyncio.fixture(loop_scope="…")` for async fixtures.
🤖 Prompt for AI Agents
In new_tests/conftest.py around lines 145 to 150, remove the custom event_loop
fixture (the asyncio.new_event_loop / yield / loop.close block) because
pytest-asyncio is configured in auto mode and manages event loops; delete that
fixture definition and update any tests or fixtures that relied on it to use
pytest.mark.asyncio(loop_scope="...") for async tests or
@pytest_asyncio.fixture(loop_scope="...") for async fixtures (or mark
modules/classes with asyncio_event_loop for broader scope) so loop scope is set
via the supported pytest-asyncio mechanisms instead of redefining event_loop.
```markdown
#### config/test_shared.py (14/15 tests passing)
- CameraShared: Initialization, battery charging logic, obstacle links, color management, trims, batch operations
- CameraSharedManager: Different vacuum IDs, instance retrieval
- **1 Failure**: Singleton behavior test (CameraSharedManager doesn't implement strict singleton per vacuum_id)

#### config/test_drawable.py (10/14 tests passing)
- DrawableElement: All element codes, uniqueness
- DrawingConfig: Initialization, properties, room properties
- Drawable: Empty image creation, JSON to image conversion
- **4 Failures**: Missing methods (disable, enable, toggle) in DrawingConfig

### ⚠️ Partially Passing Modules

#### config/test_colors.py (15/17 tests passing)
- SupportedColor: All color values and room keys
- DefaultColors: RGB colors, room colors, alpha values, RGBA conversion
- ColorsManagement: Initialization, alpha to RGB conversion, color cache
- **2 Failures**: initialize_user_colors and initialize_rooms_colors return False instead of dict

#### config/test_status_text.py (4/14 tests passing)
- Translations: Dictionary exists, multiple languages
- **10 Failures**: StatusText.__init__() signature mismatch (doesn't accept 'hass' parameter)

#### test_map_data.py (16/24 tests passing)
- ImageData: find_layers, find_points_entities, find_paths_entities, find_zone_entities
- RandImageData: Image size, segment IDs
- HyperMapData: Initialization
- **8 Failures**: Missing methods (get_robot_position, get_charger_position, get_go_to_target, get_currently_cleaned_zones, get_obstacles)

#### integration/test_basic_integration.py (2/7 tests passing)
- Multiple vacuum instances with different IDs
- Room store per vacuum
- **5 Failures**: Image generation returns tuple instead of Image, calibration points not set, close() method issues
```
Inconsistency: Document shows failing tests but header claims 100% pass rate.
Lines 8-9 state "Passing Tests: 131 (100%) ✅" and "Failing Tests: 0", but sections below (lines 28-29, 34-35, 42-47, 52-53, 57-58) describe specific test failures. This document appears to show an intermediate state before fixes were applied.
Consider updating to reflect the final state (after fixes in FIXES_APPLIED.md) or clarifying this represents the "before" state.
🧰 Tools
🪛 markdownlint-cli2 (0.18.1)
46-46: Strong style
Expected: asterisk; Actual: underscore
(MD050, strong-style)
46-46: Strong style
Expected: asterisk; Actual: underscore
(MD050, strong-style)
🤖 Prompt for AI Agents
In new_tests/TEST_RESULTS_SUMMARY.md around lines 25 to 58, the header claims
"Passing Tests: 131 (100%)" and "Failing Tests: 0" while the detailed sections
below list numerous failures, producing an inconsistent report; update the
summary to reflect the actual failing/passing counts or clearly label the
detailed sections as a "before fixes" snapshot and add a corresponding "after
fixes" summary (or link to FIXES_APPLIED.md) so the header and detailed failure
list are consistent and unambiguous.
```python
async def optimized_image_margins(
    image_array: np.ndarray, detect_colour: tuple
) -> tuple[int, int, int, int]:
    """Optimized implementation using scipy.ndimage"""
    # Create a binary mask where True = non-background pixels
    mask = ~np.all(image_array == list(detect_colour), axis=2)

    # Use scipy.ndimage.find_objects to efficiently find the bounding box
    labeled_mask = mask.astype(np.int8)  # Convert to int8 (smallest integer type)
    objects = ndimage.find_objects(labeled_mask)

    if not objects:  # No objects found
        return 0, 0, image_array.shape[1], image_array.shape[0]

    # Extract the bounding box coordinates from the slice objects
    y_slice, x_slice = objects[0]
    min_y, max_y = y_slice.start, y_slice.stop - 1
    min_x, max_x = x_slice.start, x_slice.stop - 1

    return min_y, min_x, max_x, max_y
```
Fix incorrect usage of `ndimage.find_objects`.

Lines 61-62 use `ndimage.find_objects` on a binary mask converted to int8, but `find_objects` requires a labeled array (the output of `ndimage.label`). The current code will not work correctly.
Proposed fix
async def optimized_image_margins(
image_array: np.ndarray, detect_colour: tuple
) -> tuple[int, int, int, int]:
"""Optimized implementation using scipy.ndimage"""
# Create a binary mask where True = non-background pixels
mask = ~np.all(image_array == list(detect_colour), axis=2)
- # Use scipy.ndimage.find_objects to efficiently find the bounding box
- labeled_mask = mask.astype(np.int8) # Convert to int8 (smallest integer type)
- objects = ndimage.find_objects(labeled_mask)
+ # Label the mask and find bounding boxes
+ labeled_mask, num_features = ndimage.label(mask)
+ objects = ndimage.find_objects(labeled_mask)
if not objects: # No objects found
return 0, 0, image_array.shape[1], image_array.shape[0]
# Extract the bounding box coordinates from the slice objects
y_slice, x_slice = objects[0]
min_y, max_y = y_slice.start, y_slice.stop - 1
min_x, max_x = x_slice.start, x_slice.stop - 1
    return min_y, min_x, max_x, max_y

Alternatively, for a single connected region, use `np.where` to find the bounding box directly without labeling:
async def optimized_image_margins(
image_array: np.ndarray, detect_colour: tuple
) -> tuple[int, int, int, int]:
"""Optimized implementation using boolean indexing"""
mask = ~np.all(image_array == list(detect_colour), axis=2)
y_indices, x_indices = np.where(mask)
if len(y_indices) == 0: # No objects found
return 0, 0, image_array.shape[1], image_array.shape[0]
min_y, max_y = y_indices.min(), y_indices.max()
min_x, max_x = x_indices.min(), x_indices.max()
return min_y, min_x, max_x, max_y🤖 Prompt for AI Agents
In tests/benchmark_margins.py around lines 53 to 72, ndimage.find_objects is
being called on a raw int8 binary mask but it requires a labeled array (output
of ndimage.label), so the code is incorrect; replace the find_objects usage by
computing the boolean mask and then using np.where to get y and x indices, check
if any indices exist and return the full-image bounds if none, otherwise compute
min_y/max_y and min_x/max_x from the indices and return them in the same order
(min_y, min_x, max_x, max_y).
```python
# Test 1: AutoCrop with rotation (most complex operation)
print(f"\n{'=' * 70}")
print("Test 1: AutoCrop with rotation ({ITERATIONS} iterations)")
print(f"{'=' * 70}")
```
Bug: Missing f-string prefix causes {ITERATIONS} to print literally.
Line 73 is missing the f prefix, so {ITERATIONS} won't be interpolated and will print as literal text instead of 100.
🔎 Proposed fix
# Test 1: AutoCrop with rotation (most complex operation)
print(f"\n{'=' * 70}")
-print("Test 1: AutoCrop with rotation ({ITERATIONS} iterations)")
+print(f"Test 1: AutoCrop with rotation ({ITERATIONS} iterations)")
print(f"{'=' * 70}")
🤖 Prompt for AI Agents
In tests/test_mvcrender.py around lines 71 to 74, the print on line 73 is
missing the f-string prefix so the literal "{ITERATIONS}" is printed instead of
the variable value; fix by adding the leading f to that string (i.e., change the
second print to an f-string) so ITERATIONS is interpolated, and run the test to
confirm the output now shows the numeric iteration count.
from backups.new_rand256_parser import RRMapParser as Rand256Parser
from backups.rand256_parser_backup import RRMapParser as Rand25Parser
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Verify the backups directory structure and module existence
echo "=== Looking for backups directory ==="
find . -type d -name "backups" 2>/dev/null || echo "No backups directory found"
echo ""
echo "=== Checking for Python files in backups directories ==="
fd -t f -e py . backups 2>/dev/null || echo "No Python files found in backups"
echo ""
echo "=== Checking directory structure with 'ls' if backups exists ==="
if [ -d "backups" ]; then
ls -la backups/
fi
echo ""
echo "=== Examining test_parser_comparison.py imports ==="
head -30 tests/test_parser_comparison.py 2>/dev/null || echo "File not found"

Repository: sca075/Python-package-valetudo-map-parser
Length of output: 1181
🏁 Script executed:
#!/bin/bash
# Check if backups directory exists within SCR
echo "=== Checking SCR directory structure ==="
if [ -d "SCR" ]; then
echo "SCR directory exists"
ls -la SCR/ | head -20
echo ""
echo "=== Looking for backups within SCR ==="
if [ -d "SCR/backups" ]; then
echo "SCR/backups exists:"
ls -la SCR/backups/
else
echo "SCR/backups does NOT exist"
fi
echo ""
echo "=== Looking for any parser-related files in SCR ==="
find SCR -type f -name "*parser*" -o -name "*rand*" 2>/dev/null | head -20
else
echo "SCR directory does NOT exist"
fi
echo ""
echo "=== Full directory structure at root ==="
ls -la | grep -E "^d" | head -20

Repository: sca075/Python-package-valetudo-map-parser
Length of output: 1082
🏁 Script executed:
#!/bin/bash
# Check if backups directory exists anywhere in the repository
echo "=== Searching for 'backups' directory anywhere ==="
find . -type d -name "backups" 2>/dev/null
echo ""
echo "=== Checking if test_parser_comparison.py is in pytest discovery ==="
# Check if file exists
if [ -f "tests/test_parser_comparison.py" ]; then
echo "File exists at tests/test_parser_comparison.py"
# Check for test functions or classes
grep -E "^def test_|^class Test" tests/test_parser_comparison.py | head -10
echo ""
echo "=== Checking file size and content structure ==="
wc -l tests/test_parser_comparison.py
echo ""
echo "=== Checking if there are conditional imports or try/except blocks ==="
grep -A 2 -B 2 "from backups" tests/test_parser_comparison.py
fi
echo ""
echo "=== Checking pyproject.toml or pytest.ini for test discovery config ==="
if [ -f "pytest.ini" ]; then cat pytest.ini; fi
if [ -f "pyproject.toml" ]; then grep -A 10 "\[tool.pytest" pyproject.toml; fi
if [ -f "setup.cfg" ]; then grep -A 10 "\[tool:pytest\]" setup.cfg; fi

Repository: sca075/Python-package-valetudo-map-parser
Length of output: 742
Missing backups modules will cause import failure.
The imports from backups.new_rand256_parser and from backups.rand256_parser_backup reference modules that don't exist in the repository. The backups directory is not present in the SCR/ path where these imports are resolved. This will cause the test to fail at import time when pytest attempts to collect the test function.
Either restore the missing backup modules to SCR/backups/ or remove/refactor these imports if they're no longer needed.
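If keeping the comparison optional is preferred over restoring the files, one standard-library sketch (module names mirror the failing imports; `optional_parser` is a hypothetical helper, not part of the repo) resolves the parsers lazily so a missing module no longer breaks collection:

```python
import importlib


def optional_parser(module_name: str, attr: str = "RRMapParser"):
    """Return the named parser class, or None if the module is missing."""
    try:
        module = importlib.import_module(module_name)
    except ImportError:
        return None
    return getattr(module, attr, None)


# Both backup modules are absent in this repo, so these resolve to None
# and the comparison test can bail out instead of failing at import time.
Rand256Parser = optional_parser("backups.new_rand256_parser")
Rand25Parser = optional_parser("backups.rand256_parser_backup")
print(Rand256Parser is None)  # True when the backups package is absent
```

With pytest as the runner, `pytest.importorskip("backups.new_rand256_parser")` at module top achieves the same effect by skipping the whole file.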
🤖 Prompt for AI Agents
In tests/test_parser_comparison.py around lines 14 to 16, the test imports from
backups.new_rand256_parser and backups.rand256_parser_backup which don't exist
and will cause pytest import-time failures; either restore those modules under
SCR/backups/ with the expected RRMapParser exports, or change the test to import
the available parser implementations (or use a conditional/local stub) so
imports succeed during collection — update the import lines to reference
existing modules or add the missing backup files exporting RRMapParser.
def test_parsers():
    """Test both parsers with the saved map data."""
    # Look for the map data file
    payload_file = "map_data_20250728_185945.bin"

    # Try different possible locations
    possible_paths = [
        payload_file,
        f"tests/{payload_file}",
        f"../{payload_file}",
        f"/tmp/vacuum_payloads/{payload_file}",
        "tests/map_data_20250728_185945.bin",
    ]

    payload_path = None
    for path in possible_paths:
        if os.path.exists(path):
            payload_path = path
            break

    if not payload_path:
        print(f"Could not find payload file: {payload_file}")
        print("Tried these locations:")
        for path in possible_paths:
            print(f"  - {path}")
        return

    # Load the payload
    try:
        payload = load_payload(payload_path)
    except Exception as e:
        print(f"Error loading payload: {e}")
        return
Function returns None implicitly on errors, causing downstream issues.
When the payload file is not found (line 53) or loading fails (line 60), the function returns None implicitly. This causes issues in main() at line 222-228 where results could be None but the code tries to iterate over it.
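The failure mode is easy to reproduce in isolation; a tiny sketch (hypothetical names, not the test file's real ones) shows why an empty dict is the safer sentinel:

```python
def find_results(found: bool):
    """Return parser results, or an empty dict when nothing was found."""
    if not found:
        return {}  # a bare `return` here would yield None instead
    return {"parser_a": 12, "parser_b": 12}


# Iterating is safe whether or not anything was found...
for name, count in find_results(False).items():
    print(name, count)  # loop body never runs in the empty case

# ...whereas None would raise AttributeError on .items()
total = sum(find_results(True).values())
print(total)  # 24
```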
🔎 Proposed fix
    if not payload_path:
        print(f"Could not find payload file: {payload_file}")
        print("Tried these locations:")
        for path in possible_paths:
            print(f"  - {path}")
-        return
+        return {}

    # Load the payload
    try:
        payload = load_payload(payload_path)
    except Exception as e:
        print(f"Error loading payload: {e}")
-        return
+        return {}

print("\n3. FUNCTION LIST OVERHEAD TEST")
print("-" * 80)

# Test if the function list causes overhead
shared = create_mock_shared()
status_text = StatusText(shared)

# Measure function list size
import sys
func_list_size = sys.getsizeof(status_text.compose_functions)
func_ref_size = sum(sys.getsizeof(f) for f in status_text.compose_functions)

print(f"Function list size: {func_list_size} bytes")
print(f"Function references: {func_ref_size} bytes")
print(f"Total overhead: {func_list_size + func_ref_size} bytes (~{(func_list_size + func_ref_size)/1024:.2f} KB)")
print(f"Number of functions: {len(status_text.compose_functions)}")
Fix attribute access to use correct private member.
Line 115 accesses status_text.compose_functions, but based on the code snippets from SCR/valetudo_map_parser/config/status_text/status_text.py, the attribute is _compose_functions (private with underscore prefix).
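Where the attribute name might differ between library versions, a defensive lookup is another option (a sketch with a stand-in class, not the library's real StatusText):

```python
class StatusText:
    """Stand-in: the real class stores its compose list under a private name."""

    def __init__(self):
        self._compose_functions = [str.upper, str.lower]


status_text = StatusText()

# Prefer a public `compose_functions` if one ever exists, otherwise fall
# back to the current private `_compose_functions`; default to [].
funcs = getattr(status_text, "compose_functions", None) or getattr(
    status_text, "_compose_functions", []
)
print(len(funcs))  # 2
```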
Proposed fix
-func_list_size = sys.getsizeof(status_text.compose_functions)
-func_ref_size = sum(sys.getsizeof(f) for f in status_text.compose_functions)
+func_list_size = sys.getsizeof(status_text._compose_functions)
+func_ref_size = sum(sys.getsizeof(f) for f in status_text._compose_functions)
-print(f"Number of functions: {len(status_text.compose_functions)}")
+print(f"Number of functions: {len(status_text._compose_functions)}")
🤖 Prompt for AI Agents
In tests/test_status_text_performance.py around lines 106 to 122, the test
accesses status_text.compose_functions but the actual attribute on StatusText is
named _compose_functions; update all uses in this block to reference
status_text._compose_functions (and adjust any variable names/prints
accordingly) so the test reads the correct private member without causing an
AttributeError.
# Test just the compose function loop overhead
shared = create_mock_shared(battery=67, room={"in_room": "Kitchen"})
status_text_obj = StatusText(shared)
lang_map = {}

# Measure just the function loop (without translation)
start = time.perf_counter()
for _ in range(10000):
    status_text = [f"{shared.file_name}: cleaning"]
    for func in status_text_obj.compose_functions:
        status_text = func(status_text, lang_map)
end = time.perf_counter()
loop_time = (end - start) * 1000

# Measure inline if/else (equivalent logic)
start = time.perf_counter()
for _ in range(10000):
    status_text = [f"{shared.file_name}: cleaning"]
    # Inline all the checks
    if not shared.vacuum_connection:
        status_text = [f"{shared.file_name}: Disconnected"]
    if shared.vacuum_state == "docked" and shared.vacuum_bat_charged():
        status_text.append(" \u00b7 ")
        status_text.append(f"⚡\u03de {shared.vacuum_battery}%")
    if shared.vacuum_state == "docked" and not shared.vacuum_bat_charged():
        status_text.append(" \u00b7 ")
        status_text.append(f"\u03de Ready.")
    if shared.current_room:
        in_room = shared.current_room.get("in_room")
        if in_room:
            status_text.append(f" ({in_room})")
    if shared.vacuum_state != "docked":
        status_text.append(" \u00b7 ")
        status_text.append(f"\u03de {shared.vacuum_battery}%")
end = time.perf_counter()
inline_time = (end - start) * 1000
Fix incorrect function call signatures.
Lines 135-136 call compose functions with func(status_text, lang_map), but based on the relevant code snippets from status_text.py, these methods only accept self and current_state parameters, not lang_map. This will cause a TypeError at runtime.
The actual signature is: def _mqtt_disconnected(self, current_state: list[str]) -> list[str]
Proposed fix
# Measure just the function loop (without translation)
start = time.perf_counter()
for _ in range(10000):
    status_text = [f"{shared.file_name}: cleaning"]
-    for func in status_text_obj.compose_functions:
-        status_text = func(status_text, lang_map)
+    for func in status_text_obj._compose_functions:
+        status_text = func(status_text)
end = time.perf_counter()
loop_time = (end - start) * 1000

Committable suggestion skipped: line range outside the PR's diff.
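This kind of arity mismatch can be caught before the hot loop runs; a sketch with a stand-in method (the real compose methods live in status_text.py) using inspect.signature:

```python
import inspect


class StatusText:
    """Stand-in mirroring the one-parameter compose-method shape."""

    def _mqtt_disconnected(self, current_state):
        return current_state


func = StatusText()._mqtt_disconnected

# On a bound method `self` is already bound, so one parameter remains.
params = inspect.signature(func).parameters
assert list(params) == ["current_state"]

# Correct arity works; passing an extra lang_map argument would raise.
result = func(["state"])
print(result)  # ['state']
try:
    func(["state"], {})
except TypeError as exc:
    print("TypeError:", exc)
```

A one-line assertion like this in the benchmark setup fails fast with a clear message instead of raising 10,000 iterations into the timing loop.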
🤖 Prompt for AI Agents
In tests/test_status_text_performance.py around lines 126 to 161, the compose
function loop is calling each compose function with two arguments
(func(status_text, lang_map)) but the functions are instance methods that accept
only the current_state parameter (self already bound), so remove the extra
lang_map argument and call each function with just the current_state (e.g.,
func(status_text)) to match the signature and avoid the TypeError.
Exposes `FloorData` and `TrimsData` types in the library's public API to enable the MQTT Vacuum Camera integration to import these types for multi-floor support implementation.

Changes

- Add `FloorData` and `TrimsData` to imports from `config.types`
- Add both types to the `__all__` list for the public API

Purpose

These types are needed by the MQTT Vacuum Camera Home Assistant integration to implement multi-floor support with persistent auto-crop trims per floor.
Summary by CodeRabbit
New Features
- Exposed `FloorData` and `TrimsData` types for public use.

Chores

- Version bump to 0.1.14.