The application employs an Offline-First strategy: the local database serves as the primary source of truth, and data is synchronized with remote services (such as the CategoryApiService) using a Background Sync pattern. When new apps are detected, they are queued for categorization, ensuring that data eventually reaches a consistent state even with intermittent connectivity.
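The queue-and-sync flow can be sketched roughly as follows; the class and method names here are illustrative, not the app's actual API. Detection only records locally, and a background pass drains the queue when connectivity allows:

```kotlin
// Minimal sketch of a Background Sync queue for newly detected apps.
// Local state is the source of truth; the network is best-effort.
class CategorySyncQueue {
    private val pending = ArrayDeque<String>()

    // Called when an app install is detected: enqueue and return immediately,
    // so detection never blocks on the network.
    fun enqueue(packageName: String) {
        if (packageName !in pending) pending.addLast(packageName)
    }

    // Called by the background worker when connectivity is available.
    // Returns the packages to categorize; on failure the caller re-enqueues.
    fun drain(): List<String> {
        val batch = pending.toList()
        pending.clear()
        return batch
    }
}
```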
The data layer is abstracted through several specialized repositories:
AppCategoryRepository: Manages the classification of installed apps (e.g., Social, Productivity). It handles batch API requests and respects user-defined overrides.
LimitsRepository: Manages usage constraints and “Snooze” logic for restricted apps.
ScrollDataRepository: Aggregates raw usage events into meaningful metrics like “Time Used Today.”
JourneysRepository: Tracks the user's journey through the app, persisting chronological interactions to the user_journey table.
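As an example of the kind of aggregation ScrollDataRepository performs, "Time Used Today" can be computed by folding raw foreground sessions into a single duration. The UsageSession type and function name below are assumptions for illustration:

```kotlin
import java.time.Duration
import java.time.Instant

// Illustrative session record: one contiguous foreground interval for an app.
data class UsageSession(val packageName: String, val start: Instant, val end: Instant)

// Sum today's usage, ignoring sessions that ended before midnight and
// clipping sessions that span the midnight boundary.
fun timeUsedToday(sessions: List<UsageSession>, startOfDay: Instant): Duration =
    sessions
        .filter { it.end.isAfter(startOfDay) }          // drop yesterday's sessions
        .map { maxOf(it.start, startOfDay) to it.end }  // clip at start of day
        .fold(Duration.ZERO) { acc, (s, e) -> acc + Duration.between(s, e) }
```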
user_journey: Stores granular interaction events. Indexed by date_string and timestamp for fast timeline reconstruction.
notifications: A flattened table tracking notification lifecycle (Posted, Dismissed, Batched).
app_categories: Maps package names to categories. Includes a is_user_categorized flag to prevent remote syncs from overwriting manual user adjustments.
limit_outcomes: Tracks the success or failure of daily limits, including snooze_count for behavioral analysis.
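The tables above map to SQL roughly as follows. Column types here are assumptions for illustration; Room generates the actual schema from the entity classes:

```sql
-- Approximate DDL for two of the tables described above.
CREATE TABLE app_categories (
    package_name        TEXT    NOT NULL PRIMARY KEY,
    category            TEXT    NOT NULL,
    is_user_categorized INTEGER NOT NULL DEFAULT 0  -- guards manual overrides from remote syncs
);

CREATE TABLE user_journey (
    id          INTEGER PRIMARY KEY AUTOINCREMENT,
    date_string TEXT    NOT NULL,
    timestamp   INTEGER NOT NULL,
    event_type  TEXT    NOT NULL
);
-- Indexed for fast timeline reconstruction by day.
CREATE INDEX index_user_journey_date_string ON user_journey(date_string);
```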
Indexing: Critical columns like package_name, date_string, and session_id are annotated with @Index to prevent full table scans during UI rendering.
Batching: The AppCategoryRepository chunks package synchronization into groups of 100 to optimize network overhead and reduce API roundtrips.
Migration Strategy: Uses a robust Migration system (currently at v44) to evolve the schema without data loss. It includes a fallbackToDestructiveMigrationFrom(42) safety net to handle incompatible development builds.
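The chunking behavior can be sketched as a small helper; syncInBatches and the send callback are illustrative names, not the app's actual API:

```kotlin
// Split package names into groups of 100 and issue one request per group,
// concatenating the results. This bounds request size and reduces roundtrips.
fun <T> syncInBatches(packages: List<String>, send: (List<String>) -> List<T>): List<T> =
    packages.chunked(100).flatMap(send)
```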
Database: Room persistence with multiple DAOs (e.g., UserJourneyDao, AppCategoryDao).
Key-Value Store: Used for lightweight metadata like PREFS_APP_CATEGORY.
Conflict Resolution: The system uses OnConflictStrategy.REPLACE (upsert) for most metadata, but implements custom logic in the Repository layer to protect user-modified data from being overwritten by server syncs.
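The repository-layer guard can be sketched as a merge function applied before writing a server result; AppCategory here is an illustrative local model, not the app's actual entity:

```kotlin
// Illustrative row model mirroring the app_categories table.
data class AppCategory(
    val packageName: String,
    val category: String,
    val isUserCategorized: Boolean = false,
)

// A server-provided category is applied only when the stored row was not
// user-categorized; otherwise the manual override wins.
fun mergeServerCategory(local: AppCategory?, server: AppCategory): AppCategory =
    if (local != null && local.isUserCategorized) local
    else server.copy(isUserCategorized = false)
```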
The application strictly separates concerns using Kotlin Coroutines:
@IoDispatcher: All disk and network operations are offloaded to Dispatchers.IO to prevent UI jank.
@ApplicationScope: Long-running observers (like the App Metadata Observer) use a SupervisorJob tied to the application lifecycle, ensuring they aren’t cancelled when a specific screen is closed.
Flow-based Updates: Repositories expose Flow or StateFlow to the UI, allowing for real-time updates as the underlying database changes.
Race Condition Mitigation: The AppCategoryRepository uses debounce(1000L) when observing app installs to handle “burst” events (e.g., multiple updates from the Play Store) without triggering redundant API calls.
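The collapsing rule behind debounce(1000L) can be illustrated with a pure function over a recorded event list. The real code uses kotlinx.coroutines Flow.debounce; this sketch only shows which events in a burst survive the window:

```kotlin
// Given (timestampMillis, event) pairs in arrival order, keep an event only
// if no newer event arrives within the debounce window -- so a burst of
// Play Store updates collapses into its final event.
fun <T> debounced(events: List<Pair<Long, T>>, windowMillis: Long = 1000L): List<T> {
    val out = mutableListOf<T>()
    for ((i, e) in events.withIndex()) {
        val next = events.getOrNull(i + 1)
        if (next == null || next.first - e.first >= windowMillis) out += e.second
    }
    return out
}
```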