### Describe your issue

The current implementation of task persistence in `lib/app/utils/home_path/impl/data.dart` uses non-atomic, synchronous file operations without any locking mechanism, which introduces severe race conditions and data-loss scenarios.
Specifically, the `_mergeTasks()` method:

1. Reads the entire file into memory
2. Truncates the file (`writeAsStringSync('')`)
3. Rewrites tasks line by line using multiple append operations

This approach is unsafe under concurrent access or unexpected crashes.
### Why this is critical

- The file is explicitly truncated before the rewrite
- Multiple independent write operations widen the failure window
- No locking → race conditions between sync and local updates
- A crash between the truncate and the rewrite → permanent data loss

This impacts:

- Sync operations
- Local task creation
- Background updates
### Steps to reproduce

1. Launch the app
2. Trigger a sync operation (fetch tasks from the server)
3. At the same time, add a new task locally
4. Observe the `.task/all.data` file

Result:

- The file may be left partially written or completely empty
- Some or all tasks disappear permanently
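The race can also be simulated in isolation: two unsynchronized writers performing the same read-truncate-append sequence against one file, standing in for a sync operation and a local task creation running concurrently. This is a demo sketch, not project code; the file path and task names are made up.

```dart
import 'dart:io';

// Mimics the unsafe pattern in _mergeTasks(): read, truncate, then
// append line by line, with no lock held across the sequence.
Future<void> unsafeMerge(File file, List<String> newTasks) async {
  final existing = file.existsSync() ? await file.readAsLines() : <String>[];
  final merged = {...existing, ...newTasks};
  await file.writeAsString(''); // truncate first, as _mergeTasks does
  for (final task in merged) {
    // Yield between appends so the two writers can interleave.
    await Future<void>.delayed(Duration.zero);
    await file.writeAsString('$task\n', mode: FileMode.append);
  }
}

Future<void> main() async {
  final file = File('all.data.demo');
  await file.writeAsString('task-1\ntask-2\n');

  // A "sync" and a "local add" running concurrently with no lock.
  await Future.wait([
    unsafeMerge(file, ['task-from-server']),
    unsafeMerge(file, ['task-added-locally']),
  ]);

  // Depending on interleaving, lines may be missing or duplicated.
  print(await file.readAsString());
}
```

The outcome is nondeterministic by nature: whichever writer truncates second can wipe out lines the other has already appended.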
### What was the expected result?

- Task file updates should be atomic and crash-safe
- Concurrent operations should not corrupt data
- No data loss should occur under any circumstance
### Code reference

Problematic implementation:

```dart
void _mergeTasks(List<String> tasks) {
  var lines = File('${home.path}/.task/all.data')
      .readAsLinesSync(); // NOT ATOMIC: no lock is held
  var taskMap = { /* build map from lines + tasks */ };
  File('${home.path}/.task/all.data')
      .writeAsStringSync(''); // TRUNCATE: a crash here loses everything
  for (var task in taskMap.values) {
    File('${home.path}/.task/all.data').writeAsStringSync(
      '$task\n',
      mode: FileMode.append, // one write per task widens the failure window
    );
  }
}
```
### Root cause

- No file locking mechanism
- Non-atomic write strategy
- Multiple I/O operations instead of a single transaction
- No crash recovery or backup strategy
Key improvements:

- Use a mutex/lock (e.g., the `synchronized` package)
- Write to a temporary file first
- Replace the original file with an atomic rename
- Maintain a backup file for recovery
- Replace the multiple appends with a single batch write
High-level approach:

```dart
await lock.synchronized(() async {
  final tasks = _readAllTasks();
  final updated = merge(tasks, newTasks);
  await tempFile.writeAsString(updated.join('\n')); // single batch write
  if (mainFile.existsSync()) {
    await mainFile.copy(backupFile.path); // keep a recovery copy
  }
  await tempFile.rename(mainFile.path); // atomic replace
});
```
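The steps above can be fleshed out into a self-contained sketch. Everything here is illustrative: `Lock` comes from the third-party `synchronized` package, while `TaskStore`, the file names, and the newline-delimited format are assumptions, not the project's actual API.

```dart
import 'dart:io';

import 'package:synchronized/synchronized.dart';

/// Hypothetical store sketching the lock + temp-file + rename strategy.
class TaskStore {
  TaskStore(this.directory);

  final Directory directory;
  final Lock _lock = Lock(); // serializes sync and local updates

  File get _dataFile => File('${directory.path}/all.data');
  File get _tempFile => File('${directory.path}/all.data.tmp');
  File get _backupFile => File('${directory.path}/all.data.bak');

  Future<void> mergeTasks(List<String> newTasks) {
    return _lock.synchronized(() async {
      // 1. Read the current state (empty list if the file is missing).
      final existing =
          _dataFile.existsSync() ? await _dataFile.readAsLines() : <String>[];

      // 2. Merge in memory; duplicates are dropped via the set spread.
      final merged = <String>{...existing, ...newTasks}.toList();

      // 3. Single batch write to a temp file, flushed to disk.
      await _tempFile.writeAsString('${merged.join('\n')}\n', flush: true);

      // 4. Keep a backup of the last good file for crash recovery.
      if (_dataFile.existsSync()) {
        await _dataFile.copy(_backupFile.path);
      }

      // 5. Atomic replace: readers see either the old file or the new
      //    one, never a half-written file.
      await _tempFile.rename(_dataFile.path);
    });
  }
}
```

Note that the rename is only atomic when the temp file lives on the same filesystem as the data file, which is why the sketch creates it in the same directory.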
### Impact

- Prevents total data loss
- Eliminates race conditions
- Ensures crash-safe persistence
- Improves the reliability of sync and local updates
### Put here any screenshots or videos (optional)

No response

### How can we contact you (optional)

No response

### Would you like to work on this issue?

Yes

### By submitting this issue, I have confirmed that: