chore: establish governance baseline and migration workflow
commit 7fb28e659f (parent dfaab1dfcb)

335 .github/copilot-instructions.md (vendored)
@@ -1,324 +1,21 @@
# Costco Grocery List - AI Agent Instructions

# Copilot Compatibility Instructions

## Architecture Overview

## Precedence

- Source of truth: `PROJECT_INSTRUCTIONS.md` (repo root).
- Agent workflow constraints: `AGENTS.md` (repo root).
- Bugfix protocol: `DEBUGGING_INSTRUCTIONS.md` (repo root).

This is a full-stack grocery list management app with **role-based access control (RBAC)**:

- **Backend**: Node.js + Express + PostgreSQL (port 5000)
- **Frontend**: React 19 + TypeScript + Vite (port 3000/5173)
- **Deployment**: Docker Compose with separate dev/prod configurations

If any guidance in this file conflicts with the root instruction files, follow the root instruction files.

## Mobile-First Design Principles

## Current stack note

This repository is currently:

- Backend: Express (`backend/`)
- Frontend: React + Vite (`frontend/`)

**CRITICAL**: All UI components MUST be designed for both mobile and desktop from the start.

Apply architecture intent from `PROJECT_INSTRUCTIONS.md` using the current stack mapping in:

- `docs/AGENTIC_CONTRACT_MAP.md`

**Responsive Design Requirements**:

- Use relative units (`rem`, `em`, `%`, `vh/vw`) over fixed pixels where possible
- Implement mobile breakpoints: `480px`, `768px`, `1024px`
- Test layouts at: 320px (small phone), 375px (phone), 768px (tablet), 1024px+ (desktop)
- Avoid horizontal scrolling on mobile devices
- Touch targets minimum 44x44px for mobile usability
- Use `max-width` with `margin: 0 auto` for content containers
- Stack elements vertically on mobile; use flexbox/grid for larger screens
- Hide/collapse navigation into hamburger menus on mobile
- Ensure modals/dropdowns work well on small screens

**Common Patterns**:

```css
/* Mobile-first approach */
.container {
  padding: 1rem;
  max-width: 100%;
}

@media (min-width: 768px) {
  .container {
    padding: 2rem;
    max-width: 800px;
    margin: 0 auto;
  }
}
```
### Key Design Patterns

**Dual RBAC System** - Two separate role hierarchies:

**1. System Roles** (`users.role` column):
- `system_admin`: Access to the Admin Panel for system-wide management (stores, users)
- `user`: Regular system user (default for new registrations)
- Defined in [backend/models/user.model.js](backend/models/user.model.js)
- Used for Admin Panel access control

**2. Household Roles** (`household_members.role` column):
- `admin`: Can manage household members, change roles, delete the household
- `user`: Can add/edit items and mark them as bought (standard member permissions)
- Defined per household membership
- Used for household-level permissions (item management, member management)

**Important**: Always distinguish between the system role and the household role:
- **System role**: From `AuthContext` or `req.user.role` - controls Admin Panel access
- **Household role**: From `activeHousehold.role` or `household_members.role` - controls household operations
**Middleware chain pattern** for protected routes:

```javascript
// System-level protection
router.get("/stores", auth, requireRole("system_admin"), controller.getAllStores);

// Household-level checks done in the controller
router.post("/lists/:householdId/items", auth, controller.addItem);
```

- `auth` middleware extracts the JWT from the `Authorization: Bearer <token>` header
- `requireRole` checks the system role only
- Household role checks happen in controllers using `household.model.js` methods

**Frontend route protection**:
- `<PrivateRoute>`: Requires authentication; redirects to `/login` if no token
- `<RoleGuard allowed={[ROLES.SYSTEM_ADMIN]}>`: Requires the `system_admin` role for the Admin Panel
- Household permissions: Check `activeHousehold.role` in components (not route-level)
- Example in [frontend/src/App.jsx](frontend/src/App.jsx)

**Multi-Household Architecture**:
- Users can belong to multiple households
- Each household has its own grocery lists, stores, and item classifications
- `HouseholdContext` manages the active household selection
- All list operations are scoped to the active household
## Database Schema

**PostgreSQL server runs externally** - not in Docker Compose. The connection is configured in [backend/.env](backend/.env) via standard environment variables.

**Core Tables**:

**users** - System users
- `id` (PK), `username`, `password` (bcrypt), `name`, `display_name`
- `role`: `system_admin` | `user` (legacy default: `viewer`)
- System-level authentication and authorization
**households** - Household entities
- `id` (PK), `name`, `invite_code`, `created_by`, `created_at`
- Each household is independent, with its own lists and members

**household_members** - Junction table (users ↔ households)
- `id` (PK), `household_id` (FK), `user_id` (FK), `role`, `joined_at`
- `role`: `admin` | `user` (household-level permissions)
- One user can belong to multiple households with different roles
**items** - Master item catalog
- `id` (PK), `name`, `default_image`, `default_image_mime_type`, `usage_count`
- Shared across all households; case-insensitive unique names

**stores** - Store definitions (system-wide)
- `id` (PK), `name`, `default_zones` (JSONB array)
- Managed by `system_admin` in the Admin Panel

**household_stores** - Stores available to each household
- `id` (PK), `household_id` (FK), `store_id` (FK), `is_default`
- Links households to the stores they use
**household_lists** - Grocery list items per household
- `id` (PK), `household_id` (FK), `store_id` (FK), `item_id` (FK)
- `quantity`, `bought`, `custom_image`, `custom_image_mime_type`
- `added_by`, `modified_on`
- Scoped to the household + store combination

**household_list_history** - Tracks quantity contributions
- `id` (PK), `household_list_id` (FK), `quantity`, `added_by`, `added_on`
- Multi-contributor tracking (who added how much)
**household_item_classifications** - Item classifications per household/store
- `id` (PK), `household_id`, `store_id`, `item_id`
- `item_type`, `item_group`, `zone`, `confidence`, `source`
- Household-specific overrides of the global classifications

**item_classification** - Global item classifications
- `id` (PK), `item_type`, `item_group`, `zone`, `confidence`, `source`
- System-wide defaults for item categorization

**Legacy Tables** (deprecated, may still exist):
- `grocery_list`, `grocery_history` - Old single-household implementation
**Important patterns**:
- No formal migration system - schema changes are manual SQL
- Items use case-insensitive matching (`ILIKE`) to prevent duplicates
- JOINs with `ARRAY_AGG` for multi-contributor queries (see [backend/models/list.model.v2.js](backend/models/list.model.v2.js))
- All list operations require a `household_id` parameter for scoping
- Image storage: `bytea` columns for images, with separate MIME type columns
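
The case-insensitive matching pattern can be sketched as a parameterized query object (an illustrative helper - the real query text lives in the item model, so treat the function name and column list as assumptions):

```javascript
// Illustrative sketch of the case-insensitive lookup pattern used to
// prevent duplicate items. Parameterized ($1) to avoid SQL injection.
function findItemByNameQuery(name) {
  return {
    text: "SELECT id, name FROM items WHERE name ILIKE $1",
    values: [name], // "Milk", "milk", and "MILK" all match the same row
  };
}
```

A model would pass this object to `pool.query(...)`; `ILIKE` makes the uniqueness check case-insensitive without needing a separate lowercased column.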

## Development Workflow

### Local Development

```bash
# Start all services with hot-reload against the LOCAL database
docker-compose -f docker-compose.dev.yml up

# Backend runs nodemon (watches backend/*.js)
# Frontend runs the Vite dev server with HMR on port 3000
```

**Key dev setup details**:
- Volume mounts preserve `node_modules` in containers while syncing source code
- Backend uses `Dockerfile` (standard) with an `npm run dev` override
- Frontend uses `Dockerfile.dev` with `CHOKIDAR_USEPOLLING=true` for file watching
- Both connect to the **external PostgreSQL server** (configured in `backend/.env`)
- No database container in compose - the DB is managed separately
### Production Build

```bash
# Local production build (for testing)
docker-compose -f docker-compose.prod.yml up --build

# Actual production uses pre-built images
docker-compose up  # Pulls from the private registry
```

### CI/CD Pipeline (Gitea Actions)

See [.gitea/workflows/deploy.yml](.gitea/workflows/deploy.yml) for the full workflow:

**Build stage** (on push to `main`):
1. Run backend tests (`npm test --if-present`)
2. Build the backend image with tags `:latest` and `:<commit-sha>`
3. Build the frontend image with tags `:latest` and `:<commit-sha>`
4. Push both images to the private registry

**Deploy stage**:
1. SSH to the production server
2. Upload `docker-compose.yml` to the deployment directory
3. Pull the latest images and restart containers with `docker compose up -d`
4. Prune old images

**Notify stage**:
- Sends deployment status via webhook

**Required secrets**:
- `REGISTRY_USER`, `REGISTRY_PASS`: Docker registry credentials
- `DEPLOY_HOST`, `DEPLOY_USER`, `DEPLOY_KEY`: SSH deployment credentials

### Backend Scripts
- `npm run dev`: Start with nodemon
- `npm run build`: esbuild compilation + copy public assets to `dist/`
- `npm test`: Run Jest tests (currently no tests exist)

### Frontend Scripts
- `npm run dev`: Vite dev server (port 5173)
- `npm run build`: TypeScript compilation + Vite production build
### Docker Configurations

**docker-compose.yml** (production):
- Pulls pre-built images from the private registry
- Backend on port 5000, frontend on port 3000 (nginx serves on port 80)
- Requires `backend.env` and `frontend.env` files

**docker-compose.dev.yml** (local development):
- Builds images locally from `Dockerfile`/`Dockerfile.dev`
- Volume mounts for hot-reload: `./backend:/app` and `./frontend:/app`
- Named volumes preserve `node_modules` between rebuilds
- Backend uses `backend/.env` directly
- Frontend uses `Dockerfile.dev` with polling enabled for cross-platform compatibility

**docker-compose.prod.yml** (local production testing):
- Builds images locally using the production Dockerfiles
- Backend: Standard Node.js server
- Frontend: Multi-stage build with nginx serving static files

## Configuration & Environment

**Backend** ([backend/.env](backend/.env)):
- Database connection variables (host, user, password, database name)
- `JWT_SECRET`: Token signing key
- `ALLOWED_ORIGINS`: Comma-separated CORS whitelist (supports static origins + `192.168.*.*` IP ranges)
- `PORT`: Server port (default 5000)
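
The `ALLOWED_ORIGINS` behavior can be sketched as a pure origin check (a hypothetical helper - the real validation lives in `backend/app.js`, and the example origins below are placeholders):

```javascript
// Hypothetical sketch of dynamic CORS origin validation: accept origins that
// either appear verbatim in the whitelist or come from a 192.168.*.* host.
function isAllowedOrigin(origin, allowedOrigins) {
  if (allowedOrigins.includes(origin)) return true;
  // e.g. http://192.168.1.42:3000 - any port on a local 192.168.*.* address
  return /^https?:\/\/192\.168\.\d{1,3}\.\d{1,3}(:\d+)?$/.test(origin);
}

const allowed = "https://app.example.com,http://localhost:3000".split(",");
isAllowedOrigin("http://192.168.1.42:3000", allowed); // true
isAllowedOrigin("https://evil.example.net", allowed); // false
```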

**Frontend** (environment variables):
- `VITE_API_URL`: Backend base URL

**Config accessed via**:
- Backend: `process.env.VAR_NAME`
- Frontend: `import.meta.env.VITE_VAR_NAME` (see [frontend/src/config.ts](frontend/src/config.ts))
## Authentication Flow

1. User logs in → backend returns `{token, userId, role, username}` ([backend/controllers/auth.controller.js](backend/controllers/auth.controller.js))
   - `role` is the **system role** (`system_admin` or `user`)
2. Frontend stores it in `localStorage` and `AuthContext` ([frontend/src/context/AuthContext.jsx](frontend/src/context/AuthContext.jsx))
3. `HouseholdContext` loads the user's households and sets the active household
   - The active household includes `household.role` (the **household role**)
4. An Axios interceptor auto-attaches the `Authorization: Bearer <token>` header ([frontend/src/api/axios.js](frontend/src/api/axios.js))
5. Backend validates the JWT on protected routes ([backend/middleware/auth.js](backend/middleware/auth.js))
   - Sets `req.user = { id, role, username }` with the **system role**
6. Controllers check household membership/role using [backend/models/household.model.js](backend/models/household.model.js)
7. On a 401 "Invalid or expired token" response, the frontend clears storage and redirects to login
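
Step 7 can be sketched as a small predicate (an illustrative function, not the actual interceptor code in `frontend/src/api/axios.js` - the message check is an assumption):

```javascript
// Hypothetical sketch of the interceptor's 401 handling. Returns true when
// stored credentials should be cleared and the user redirected to login.
function shouldForceLogout(status, body) {
  // Only the explicit invalid/expired-token 401 triggers a logout + redirect.
  return status === 401 && /invalid or expired token/i.test(body?.message ?? "");
}

shouldForceLogout(401, { message: "Invalid or expired token" }); // true
shouldForceLogout(400, { message: "Invalid payload" });          // false
```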

## Critical Conventions

### Security Practices
- **Never expose credentials**: Do not hardcode or document actual values for `JWT_SECRET`, database passwords, API keys, or any sensitive configuration
- **No infrastructure details**: Avoid documenting specific IP addresses, domain names, deployment paths, or server locations in code or documentation
- **Environment variables**: Reference `.env` files conceptually - never include actual contents
- **Secrets in CI/CD**: Document that secrets are required, not their values
- **Code review**: Scan all changes for accidentally committed credentials before pushing
### Backend
- **No SQL injection**: Always use parameterized queries (`$1`, `$2`, etc.) with [backend/db/pool.js](backend/db/pool.js)
- **Password hashing**: Use `bcryptjs` for hashing (see [backend/controllers/auth.controller.js](backend/controllers/auth.controller.js))
- **CORS**: Dynamic origin validation in [backend/app.js](backend/app.js) allows configured origins + local IPs
- **Error responses**: Return JSON with a `{message: "..."}` structure

### Frontend
- **Mixed JSX/TSX**: Some components are `.jsx` (JavaScript), others `.tsx` (TypeScript) - maintain existing file extensions
- **API calls**: Use the centralized `api` instance from [frontend/src/api/axios.js](frontend/src/api/axios.js), not raw axios
- **Role checks**: Access the role from `AuthContext`; compare with constants from [frontend/src/constants/roles.js](frontend/src/constants/roles.js)
- **Navigation**: Use React Router's `<Navigate>` for redirects, not `window.location` (except in the interceptor)
## Common Tasks

**Add a new protected route**:
1. Backend: Add the route with the `auth` middleware (+ `requireRole(...)` if a system role check is needed)
2. Frontend: Add the route in [frontend/src/App.jsx](frontend/src/App.jsx) wrapped in `<PrivateRoute>` (and `<RoleGuard>` for the Admin Panel)

**Access user info in a backend controller**:

```javascript
const { id, role } = req.user; // Set by the auth middleware (system role)
const userId = req.user.id;
```

**Check household permissions in a backend controller**:

```javascript
const householdRole = await household.getUserRole(householdId, userId);
if (!householdRole) return res.status(403).json({ message: "Not a member of this household" });
if (householdRole !== 'admin') return res.status(403).json({ message: "Household admin required" });
```
**Check household permissions in the frontend**:

```javascript
const { activeHousehold } = useContext(HouseholdContext);
const householdRole = activeHousehold?.role; // 'admin' or 'user'

// Households have no viewer role, so any member may manage items
const canManageItems = Boolean(householdRole);

// Admin-only actions
const canManageMembers = householdRole === 'admin';
```

**Query grocery items with contributors**:

Use the JOIN pattern in [backend/models/list.model.v2.js](backend/models/list.model.v2.js) - it aggregates user names via the `household_list_history` table.
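
A sketch of that aggregation as a query object (column names are assumptions; the authoritative query is in `list.model.v2.js`):

```javascript
// Illustrative version of the multi-contributor JOIN; real column names may differ.
const contributorsQuery = {
  text: `
    SELECT hl.id, i.name,
           ARRAY_AGG(u.display_name) AS contributors,
           SUM(hlh.quantity)         AS total_quantity
    FROM household_lists hl
    JOIN items i                    ON i.id = hl.item_id
    JOIN household_list_history hlh ON hlh.household_list_id = hl.id
    JOIN users u                    ON u.id = hlh.added_by
    WHERE hl.household_id = $1
    GROUP BY hl.id, i.name`,
  values: [], // caller supplies [householdId]
};
```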

## Testing

**Backend**:
- Jest configured at the root level ([package.json](package.json))
- Currently **no test files exist** - the testing infrastructure needs development
- CI/CD runs `npm test --if-present`, which passes when no tests are found
- Focus area: API endpoint testing (use `supertest` with Express)

**Frontend**:
- ESLint only (see [frontend/eslint.config.js](frontend/eslint.config.js))
- No test runner configured
- Manual testing workflow in use

**To add backend tests**:
1. Create a `backend/__tests__/` directory
2. Use the Jest + Supertest pattern for API tests
3. Mock database calls or use a test database

## Safety reminders
- External DB only (`DATABASE_URL`); no DB-container assumptions.
- No cron/worker additions unless explicitly approved.
- Never log secrets, receipt bytes, or full invite codes.
53 AGENTS.md (Normal file)
@@ -0,0 +1,53 @@
# AGENTS.md - Fiddy (External DB)

## Authority
- Source of truth: `PROJECT_INSTRUCTIONS.md` (repo root). If there is a conflict, follow it.
- Bugfix protocol: `DEBUGGING_INSTRUCTIONS.md` (repo root).
- Do not implement features unless required to fix the bug.

## Non-negotiables
- External DB: `DATABASE_URL` points to on-prem Postgres (NOT a container).
- Dev/Prod share the schema via migrations in `packages/db/migrations`.
- No cron/worker jobs. Fixes must work without background tasks.
- Server-side RBAC only. Client checks are UX only.
## Security / logging (hard rules)
- Never log secrets (passwords/tokens/cookies).
- Never log receipt bytes.
- Never log full invite codes; logs/audit store the last4 only.
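
The last4 rule can be enforced with a tiny masking helper applied before anything reaches a logger (a sketch; the helper name is an assumption):

```javascript
// Sketch of invite-code masking for logs/audit: only the last 4 characters survive.
function maskInviteCode(code) {
  const last4 = String(code).slice(-4);
  return `****${last4}`;
}

maskInviteCode("AB12-CD34-EF56"); // "****EF56"
```

Routing every invite-related log line through a helper like this makes the rule hard to violate by accident.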

## Non-regression contracts
- Sessions are DB-backed (`sessions` table) and cookies are HttpOnly.
- Receipt images are stored in `receipts` (`bytea`).
- Entries list endpoints must NEVER return receipt bytes.
- API responses must include `request_id`; audit logs must include `request_id`.

## Architecture boundaries (follow existing patterns; do not invent)
1) API routes: `app/api/**/route.ts`
   - Thin: parse/validate + call a service, return JSON.
2) Server services: `lib/server/*`
   - Own DB + authz. Must include `import "server-only";`.
3) Client wrappers: `lib/client/*`
   - Typed fetch + error normalization; always send credentials.
4) Hooks: `hooks/use-*.ts`
   - Primary UI-facing API layer; components avoid raw `fetch()`.

## Next.js dynamic route params (required)
- In `app/api/**/[param]/route.ts`, treat `context.params` as async:
  - `const { id } = await context.params;`
## Working style
- Scan the repo first; do not guess file names or patterns.
- Make the smallest change that resolves the issue.
- Keep touched files free of TS warnings and lint errors.
- Add/update tests when API behavior changes (include negative cases).
- Keep text encoding clean (no mojibake).

## Response icon legend
Use the same status icons defined in `PROJECT_INSTRUCTIONS.md` section "Agent Response Legend (required)":
- `🔄` in progress
- `✅` completed
- `🧪` verification/test result
- `⚠️` risk/blocker/manual action
- `❌` failure
- `🧭` recommendation/next step
48 DEBUGGING_INSTRUCTIONS.md (Normal file)
@@ -0,0 +1,48 @@
# Debugging Instructions - Fiddy

## Scope and authority
- This file is required for bugfix work.
- `PROJECT_INSTRUCTIONS.md` remains the source of truth for global project rules.
- For debugging tasks, ship the smallest safe fix that resolves the verified issue.
## Required bugfix workflow
1. Reproduce:
   - Capture the exact route/page, inputs, actor role, and expected vs actual behavior.
   - Record a concrete repro sequence before changing code.
2. Localize:
   - Identify the failing boundary (route/controller/model/service/client wrapper/hook/UI).
   - Confirm whether the failure is validation, authorization, data, or rendering.
3. Fix minimally:
   - Modify only the layers needed to resolve the bug.
   - Do not introduce parallel mechanisms for the same state flow.
4. Verify:
   - Re-run the repro.
   - Run lint/tests for touched areas.
   - Confirm no regression against the contracts in `PROJECT_INSTRUCTIONS.md`.

## Guardrails while debugging
- External DB only:
  - Use `DATABASE_URL`.
  - Never add a DB container for a fix.
- No background jobs:
  - Do not add cron, workers, or polling daemons.
- Security:
  - Never log secrets, receipt bytes, or full invite codes.
  - Invite logs/audit may include only the last4.
- Authorization:
  - Enforce RBAC server-side; client checks are UX only.
## Contract-specific debug checks
- Auth:
  - Sessions must remain DB-backed and cookie-based (HttpOnly).
- Receipts:
  - List endpoints must never include receipt bytes.
  - Byte retrieval must go through the dedicated endpoint only.
- Request IDs/audit:
  - Ensure `request_id` appears in responses and in the audit trail for affected paths.
## Evidence to include with every bugfix
- Root cause summary (one short paragraph).
- Changed files list with rationale.
- Verification steps performed and their outcome.
- Any residual risk, fallback, or operator action.
201 PROJECT_INSTRUCTIONS.md (Normal file)
@@ -0,0 +1,201 @@
# Project Instructions - Fiddy (External DB)

## 1) Core expectation
This project connects to an **external Postgres instance (on-prem server)**. Dev and Prod must share the **same schema** through **migrations**.

## 2) Authority & doc order
1) **PROJECT_INSTRUCTIONS.md** (this file) is the source of truth.
2) **DEBUGGING_INSTRUCTIONS.md** (repo root) is required for bugfix work.
3) Other instruction files (e.g. `.github/copilot-instructions.md`) must not conflict with this doc.

If anything conflicts, follow **this** doc.
---

## 3) Non-negotiables (hard rules)

### External DB + migrations
- `DATABASE_URL` points to **on-prem Postgres** (**NOT** a container).
- Dev/Prod share the schema via migrations in `packages/db/migrations`.
- Active migration runbook: `docs/DB_MIGRATION_WORKFLOW.md` (active set + status commands).

### No background jobs
- **No cron/worker jobs.** Any fix must work without background tasks.

### Security / logging
- **Never log secrets** (passwords, tokens, session cookies).
- **Never log receipt bytes.**
- **Never log full invite codes** - logs/audit store the **last4 only**.

### Server-side authorization only
- **Server-side RBAC only.** Client checks are UX only and must not be trusted.
---

## 4) Non-regression contracts (do not break)

### Auth
- Custom email/password auth.
- Sessions are **DB-backed** and stored in the `sessions` table.
- Session cookies are **HttpOnly**.

### Receipts
- Receipt images are stored in the Postgres `bytea` table `receipts`.
- **Entries list endpoints must never return receipt image bytes.**
- Receipt bytes are fetched only via a **separate endpoint** when inspecting a single item.

### Request IDs + audit
- The API must generate a **`request_id`** and return it in responses.
- Audit logs must include `request_id`.
- Audit logs must never store full invite codes (store the **last4 only**).
---

## 5) Architecture contract (Backend <-> Client <-> Hooks <-> UI)

### No-assumptions rule (required)
Before making structural changes, first scan the repo and identify:
- where `app/`, `components/`, `features/`, `hooks/`, `lib/` live
- existing API routes and helpers
- patterns already in use

Do not invent files/endpoints/conventions. If something is missing, add it **minimally** and **consistently**.

### Single mechanism rule (required)
For any cross-component state propagation concern, keep **one** canonical mechanism only:
- Context **OR** custom events **OR** cache invalidation

Do not keep old and new mechanisms in parallel. Remove superseded utilities/imports/files in the same PR.
### Layering (hard boundaries)
For every domain (auth, groups, entries, receipts, etc.) follow this flow:

1) **API Route Handlers** - `app/api/.../route.ts`
   - Thin: parse/validate input, call a server service, return JSON.
   - No direct DB queries in route files unless there is no existing server service.

2) **Server Services (DB + authorization)** - `lib/server/*`
   - Own all DB access and authorization helpers.
   - Server-only modules must include: `import "server-only";`
   - Prefer small domain modules: `lib/server/auth.ts`, `lib/server/groups.ts`, `lib/server/entries.ts`, `lib/server/receipts.ts`, `lib/server/session.ts`.

3) **Client API Wrappers** - `lib/client/*`
   - Typed fetch helpers only (no React state).
   - Centralize fetch + error normalization.
   - Always send credentials (cookies) and never trust client-side RBAC.

4) **Hooks (UI-facing API layer)** - `hooks/use-*.ts`
   - Hooks are the primary interface for components/pages to call APIs.
   - Components should not call `fetch()` directly unless there is a strong reason.
### API conventions
- Prefer a consistent JSON error shape:
  - `{ error: { code: string, message: string }, request_id?: string }`
- Validate inputs at the route boundary (shape/type); authorize in server services.
- Mirror the existing REST style used in the project.
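
The error shape above can be produced by one small helper so every route stays consistent (a sketch; the helper name is an assumption):

```javascript
// Sketch of the canonical JSON error shape from the API conventions above.
function apiError(code, message, requestId) {
  const body = { error: { code, message } };
  if (requestId) body.request_id = requestId; // request_id is optional in the shape
  return body;
}

apiError("NOT_A_MEMBER", "User is not a member of this group", "req-123");
```

Routes would return this body with the appropriate HTTP status, keeping `code` machine-readable and `message` human-readable.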

### Next.js route params checklist (required)
For `app/api/**/[param]/route.ts`:
- Treat `context.params` as **async** and `await` it before reading properties.
- Example: `const { id } = await context.params;`

### Frontend structure preference
- Prefer a domain-first structure: `features/<domain>/...` + `shared/...`.
- Use `components/*` only for compatibility shims during migrations (remove them after imports are migrated).

### Maintainability thresholds (refactor triggers)
- Component files > **400 lines** should be split into container/presentational parts.
- Hook files > **150 lines** should extract helper functions/services.
- Functions with more than **3 nested branches** should be extracted.
---

## 6) Decisions / constraints (Group Settings)
- Add a `GROUP_OWNER` role to the group roles; migrate existing groups so the first admin becomes owner.
- Join policy default is `NOT_ACCEPTING`. Policies: `NOT_ACCEPTING`, `AUTO_ACCEPT`, `APPROVAL_REQUIRED`.
- Both the owner and admins can approve join requests and manage invite links.
- Invite links:
  - TTL limited to 1-7 days.
  - Settings are immutable after creation (policy, single-use, etc.).
  - Single-use does not override approval-required.
  - Expired links are retained and can be revived.
  - Single-use links are deleted after successful use.
  - Revive resets `used_at` and `revoked_at`, refreshes `expires_at`, and creates a new audit event.
- No cron/worker jobs for now (auto ownership transfer and invite rotation are paused).
- Group role icons must be consistent: owner, admin, member.
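
The revive semantics can be sketched as a pure state transition (field names come from the bullets above; the function name and TTL parameter are assumptions):

```javascript
// Sketch of "revive" per the rules above: clears used_at/revoked_at,
// refreshes expires_at within the 1-7 day TTL bound, and emits a new
// audit event. Illustrative only; the real service may differ.
function reviveInvite(invite, ttlDays, now = new Date()) {
  const clamped = Math.min(7, Math.max(1, ttlDays)); // TTL limited to 1-7 days
  const expiresAt = new Date(now.getTime() + clamped * 24 * 60 * 60 * 1000);
  return {
    invite: { ...invite, used_at: null, revoked_at: null, expires_at: expiresAt.toISOString() },
    auditEvent: { type: "invite_revived", at: now.toISOString() },
  };
}
```

Keeping revive as a single transition makes it easy to assert that usage state is cleared and an audit event is produced together.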

---

## 7) Do first (vertical slice)
1) DB migrate command + schema
2) Register/Login/Logout (custom sessions)
3) Protected dashboard page
4) Group create/join + group switcher (approval-based joins + optional join disable)
5) Entries CRUD (no receipt bytes in lists)
6) Receipt upload/download endpoints
7) Settings + Reports

---
## 8) Definition of done
- Works via `docker-compose.dev.yml` with the external DB
- Migrations applied via `npm run db:migrate`
- Tests + lint pass
- RBAC enforced server-side
- No large files
- No TypeScript warnings or lint errors in touched files
- No new cron/worker dependencies unless explicitly approved
- No orphaned utilities/hooks/contexts after refactors
- No duplicate mechanisms for the same state flow
- Text encoding remains clean in user-facing strings/docs

---
## 9) Desktop + mobile UX checklist (required)
- Touch: long-press affordance for item-level actions when there is no visible button.
- Mouse: hover affordance on interactive rows/cards.
- Tap targets remain >= 40px on mobile.
- Modal overlays must close on outside click/tap.
- Use bubble notifications for main actions (create/update/delete/join).
- Add Playwright UI tests for new UI features and critical flows.

---

## 10) Tests (required)
- Add/update tests for API behavior changes (auth, groups, entries, receipts).
- Include negative cases where applicable:
  - unauthorized
  - not-a-member
  - invalid input

---
## 11) Agent Response Legend (required)
Use emoji/icons in agent progress and final responses so status is obvious at a glance.

Legend:
- `🔄` in progress
- `✅` completed
- `🧪` test/lint/verification result
- `📄` documentation update
- `🗄️` database or migration change
- `🚀` deploy/release step
- `⚠️` risk, blocker, or manual operator action needed
- `❌` failed command or unsuccessful attempt
- `ℹ️` informational context
- `🧭` recommendation or next-step option

Usage rules:
- Include at least one status icon in each substantive agent response.
- Use one icon per bullet/line; avoid icon spam.
- Keep icon meanings consistent with this legend.

---
## 12) Commit Discipline (required)
- Commit in small, logical slices (no broad mixed-purpose commits).
- Each commit must:
  - follow Conventional Commits style (`feat:`, `fix:`, `docs:`, `refactor:`, `test:`, `chore:`)
  - include only the files related to that slice
  - exclude secrets, credentials, and generated noise
- Run verification before committing when applicable (lint/tests/build or targeted checks for touched areas).
- Prefer frequent checkpoint commits during agentic work rather than one large end-state commit.
- If a rule or contract changes, commit the docs first (or in the same atomic slice as the enforcing code).
@@ -5,7 +5,7 @@ const User = require("../models/user.model");
 exports.register = async (req, res) => {
   let { username, password, name } = req.body;
   username = username.toLowerCase();
-  console.log(`🆕 Registration attempt for ${name} => username:${username}, password:${password}`);
+  console.log(`Registration attempt for ${name} => username:${username}`);

   try {
     const hash = await bcrypt.hash(password, 10);
@@ -30,7 +30,7 @@ exports.login = async (req, res) => {

   const valid = await bcrypt.compare(password, user.password);
   if (!valid) {
-    console.log(`⛔ Login attempt for user ${username} with password ${password}`);
+    console.log(`Invalid login attempt for user ${username}`);
     return res.status(401).json({ message: "Invalid credentials" });
   }
docs/AGENTIC_CONTRACT_MAP.md (new file, 49 lines)
@ -0,0 +1,49 @@
# Agentic Contract Map (Current Stack)

This file maps `PROJECT_INSTRUCTIONS.md` architecture intent to the current repository stack.

## Current stack
- Backend: Express (`backend/`)
- Frontend: React + Vite (`frontend/`)

## Contract mapping

### API Route Handlers (`app/api/**/route.ts` intent)
Current equivalent:
- `backend/routes/*.js`
- `backend/controllers/*.js`

Expectation:
- Keep these thin for parsing/validation and response shape.
- Delegate DB and authorization-heavy logic to model/service layers.
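
As a sketch of that split (the model API here, `listModel.findByHousehold`, is a hypothetical stand-in, not the repo's actual model):

```javascript
// Thin handler: parse and validate input, shape the response; the injected
// model owns the SQL. Handler factories like this are also easy to unit test.
function makeGetEntries(listModel) {
  return async (req, res) => {
    const householdId = Number(req.params.householdId);
    if (!Number.isInteger(householdId)) {
      return res.status(400).json({ message: "Invalid household id" });
    }
    const rows = await listModel.findByHousehold(householdId); // DB access stays in the model layer
    return res.status(200).json({ entries: rows });
  };
}
```

In a route file this would be wired roughly as `router.get("/households/:householdId/entries", makeGetEntries(ListModel))` (wiring shown for illustration only).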

### Server Services (`lib/server/*` intent)
Current equivalent:
- `backend/models/*.js`
- `backend/middleware/*.js`
- `backend/db/*`

Expectation:
- Concentrate DB access and authorization logic in these backend layers.
- Avoid raw DB usage directly in route files unless no service/model exists.

### Client Wrappers (`lib/client/*` intent)
Current equivalent:
- `frontend/src/api/*.js`

Expectation:
- Centralize fetch/axios calls and error normalization here.
- Always send credentials/authorization headers as required.
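
A minimal sketch of such a wrapper (function and field names are illustrative; the repo's actual `frontend/src/api` modules may differ):

```javascript
// One normalized error shape for every API call site.
function normalizeError(status, body) {
  return {
    status,
    message: (body && body.message) || `Request failed (${status})`,
  };
}

// Single entry point for JSON API calls: always sends credentials,
// always throws the normalized error shape on non-2xx responses.
async function apiFetch(path, options = {}) {
  const res = await fetch(path, {
    credentials: "include",
    headers: { "Content-Type": "application/json", ...(options.headers || {}) },
    ...options,
  });
  const body = await res.json().catch(() => null);
  if (!res.ok) throw normalizeError(res.status, body);
  return body;
}
```

Components then call `apiFetch("/api/...")` and catch one error shape, rather than repeating fetch plumbing per call site.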

### Hooks (`hooks/use-*.ts` intent)
Current equivalent:
- `frontend/src/context/*`
- `frontend/src/utils/*` for route guards

Expectation:
- Keep components free of direct raw network calls where possible.
- Favor one canonical state propagation mechanism per concern.
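
One way to read "one canonical mechanism per concern" is a single subscribable store that a context provider wraps. This framework-free sketch is illustrative only, not the repo's context implementation:

```javascript
// Minimal store: one source of truth per concern, with subscription-based
// propagation that a React context/provider could expose to components.
function createStore(initial) {
  let state = initial;
  const listeners = new Set();
  return {
    getState: () => state,
    setState(partial) {
      state = { ...state, ...partial };
      listeners.forEach((fn) => fn(state));
    },
    subscribe(fn) {
      listeners.add(fn);
      return () => listeners.delete(fn); // unsubscribe handle
    },
  };
}
```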

## Notes
- This map does not force a framework migration.
- It defines how to apply the contract consistently in the existing codebase.
docs/DB_MIGRATION_WORKFLOW.md (new file, 50 lines)
@ -0,0 +1,50 @@
# DB Migration Workflow (External Postgres)

This project uses an external on-prem Postgres database. Migration files are canonical in:

- `packages/db/migrations`

## Preconditions
- `DATABASE_URL` is set and points to the on-prem Postgres instance.
- `psql` is installed and available in PATH.
- You are in the repo root.

## Commands
- Apply pending migrations:
  - `npm run db:migrate`
- Show migration status:
  - `npm run db:migrate:status`
- Fail if pending migrations exist:
  - `npm run db:migrate:verify`

## Active migration set
Migration files are applied in lexicographic filename order from `packages/db/migrations`.

Current baseline files:
- `add_display_name_column.sql`
- `add_image_columns.sql`
- `add_modified_on_column.sql`
- `add_notes_column.sql`
- `create_item_classification_table.sql`
- `multi_household_architecture.sql`

## Tracking table
Applied migrations are recorded in:

- `schema_migrations(filename text unique, applied_at timestamptz)`
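
The selection logic behind `db:migrate` can be pictured as follows. The actual `scripts/db-migrate.js` is not shown in this commit; this sketch only mirrors the contract documented above:

```javascript
// Given the files on disk and the filenames already recorded in
// schema_migrations, return what still needs to run, in order.
function pendingMigrations(filesOnDisk, appliedFilenames) {
  const applied = new Set(appliedFilenames);
  return filesOnDisk
    .filter((name) => name.endsWith(".sql"))
    .sort() // lexicographic filename order, as documented above
    .filter((name) => !applied.has(name));
}
```

Each returned file would then be fed to `psql` and, on success, recorded in `schema_migrations`.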

## Expected operator flow
1. Check status:
   - `npm run db:migrate:status`
2. Apply pending:
   - `npm run db:migrate`
3. Verify clean state:
   - `npm run db:migrate:verify`

## Troubleshooting
- `DATABASE_URL is required`:
  - Export/set `DATABASE_URL` in your environment.
- `psql executable was not found in PATH`:
  - Install PostgreSQL client tools and retry.
- SQL failure:
  - Fix migration SQL and rerun; only successful files are recorded in `schema_migrations`.
@ -39,7 +39,10 @@ Historical documentation of completed features. Useful for reference but not act
 These files remain at the project root for easy access:

 - **[../README.md](../README.md)** - Project overview and quick start
-- **[../.github/copilot-instructions.md](../.github/copilot-instructions.md)** - AI assistant instructions (architecture, RBAC, conventions)
+- **[../PROJECT_INSTRUCTIONS.md](../PROJECT_INSTRUCTIONS.md)** - Canonical project constraints and delivery contract
+- **[../AGENTS.md](../AGENTS.md)** - Agent behavior and guardrails
+- **[../DEBUGGING_INSTRUCTIONS.md](../DEBUGGING_INSTRUCTIONS.md)** - Required bugfix workflow
+- **[../.github/copilot-instructions.md](../.github/copilot-instructions.md)** - Copilot compatibility shim to root instructions

 ---

@ -51,9 +54,9 @@ These files remain at the project root for easy access:

 **Working on mobile UI?** → Check [MOBILE_RESPONSIVE_AUDIT.md](guides/MOBILE_RESPONSIVE_AUDIT.md)

-**Need architecture context?** → Read [../.github/copilot-instructions.md](../.github/copilot-instructions.md)
+**Need architecture context?** → Read [AGENTIC_CONTRACT_MAP.md](AGENTIC_CONTRACT_MAP.md) and [../PROJECT_INSTRUCTIONS.md](../PROJECT_INSTRUCTIONS.md)

-**Running migrations?** → Follow [MIGRATION_GUIDE.md](migration/MIGRATION_GUIDE.md)
+**Running migrations?** → Follow [DB_MIGRATION_WORKFLOW.md](DB_MIGRATION_WORKFLOW.md)

 ---

@ -1,4 +1,9 @@
 {
+  "scripts": {
+    "db:migrate": "node scripts/db-migrate.js",
+    "db:migrate:status": "node scripts/db-migrate-status.js",
+    "db:migrate:verify": "node scripts/db-migrate-verify.js"
+  },
   "devDependencies": {
     "cross-env": "^10.1.0",
     "jest": "^30.2.0",

packages/db/migrations/README.md (new file, 9 lines)
@ -0,0 +1,9 @@
# Migration Directory

This directory is the canonical location for SQL migrations.

- Use `npm run db:migrate` to apply pending migrations.
- Use `npm run db:migrate:status` to view applied/pending migrations.
- Use `npm run db:migrate:verify` to fail when pending migrations exist.

Do not place new canonical migrations under `backend/migrations`.
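
The `db:migrate:verify` contract above amounts to "fail loudly when anything is pending". Sketched below (the real `scripts/db-migrate-verify.js` is not part of this commit):

```javascript
// Throw (which becomes a non-zero exit in a script) when any migration
// is still pending; otherwise report a clean state.
function verifyNoPending(pendingFilenames) {
  if (pendingFilenames.length > 0) {
    throw new Error(`Pending migrations: ${pendingFilenames.join(", ")}`);
  }
  return "clean";
}
```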
packages/db/migrations/add_display_name_column.sql (new file, 10 lines)
@ -0,0 +1,10 @@
-- Add display_name column to users table
-- This allows users to have a friendly name separate from their username

ALTER TABLE users
ADD COLUMN IF NOT EXISTS display_name VARCHAR(100);

-- Set display_name to name for existing users (as default)
UPDATE users
SET display_name = name
WHERE display_name IS NULL;
packages/db/migrations/add_image_columns.sql (new file, 20 lines)
@ -0,0 +1,20 @@
-- Database Migration: Add Image Support
-- Run by the migration runner (psql) against the PostgreSQL database.

-- Add image columns to grocery_list table
ALTER TABLE grocery_list
ADD COLUMN IF NOT EXISTS item_image BYTEA,
ADD COLUMN IF NOT EXISTS image_mime_type VARCHAR(50);

-- Optional: add an index for faster queries when filtering by items with images
CREATE INDEX IF NOT EXISTS idx_grocery_list_has_image ON grocery_list ((item_image IS NOT NULL));

-- To verify (in psql): \d grocery_list
-- You should see the new columns item_image and image_mime_type.
packages/db/migrations/add_modified_on_column.sql (new file, 8 lines)
@ -0,0 +1,8 @@
-- Add modified_on column to grocery_list table
ALTER TABLE grocery_list
ADD COLUMN modified_on TIMESTAMP DEFAULT NOW();

-- Set modified_on to NOW() for existing records
UPDATE grocery_list
SET modified_on = NOW()
WHERE modified_on IS NULL;
packages/db/migrations/add_notes_column.sql (new file, 7 lines)
@ -0,0 +1,7 @@
-- Add notes column to household_lists table
-- This allows users to add custom notes/descriptions to list items

ALTER TABLE household_lists
ADD COLUMN IF NOT EXISTS notes TEXT;

COMMENT ON COLUMN household_lists.notes IS 'Optional user notes/description for the item';
packages/db/migrations/create_item_classification_table.sql (new file, 29 lines)
@ -0,0 +1,29 @@
-- Migration: Create item_classification table
-- This table stores classification data for items in the grocery_list table
-- Each row in grocery_list can have ONE corresponding classification row

CREATE TABLE IF NOT EXISTS item_classification (
    id INTEGER PRIMARY KEY REFERENCES grocery_list(id) ON DELETE CASCADE,
    item_type VARCHAR(50) NOT NULL,
    item_group VARCHAR(100) NOT NULL,
    zone VARCHAR(100),
    confidence DECIMAL(3,2) DEFAULT 1.0 CHECK (confidence >= 0 AND confidence <= 1),
    source VARCHAR(20) DEFAULT 'user' CHECK (source IN ('user', 'ml', 'default')),
    created_at TIMESTAMP DEFAULT NOW(),
    updated_at TIMESTAMP DEFAULT NOW()
);

-- Index for faster lookups by type
CREATE INDEX IF NOT EXISTS idx_item_classification_type ON item_classification(item_type);

-- Index for zone-based queries
CREATE INDEX IF NOT EXISTS idx_item_classification_zone ON item_classification(zone);

-- Comments
COMMENT ON TABLE item_classification IS 'Stores classification metadata for grocery list items';
COMMENT ON COLUMN item_classification.id IS 'Foreign key to grocery_list.id (one-to-one relationship)';
COMMENT ON COLUMN item_classification.item_type IS 'High-level category (produce, meat, dairy, etc.)';
COMMENT ON COLUMN item_classification.item_group IS 'Subcategory within item_type (filtered by type)';
COMMENT ON COLUMN item_classification.zone IS 'Store zone/location (optional)';
COMMENT ON COLUMN item_classification.confidence IS 'Confidence score 0-1 (1.0 for user-provided, lower for ML-predicted)';
COMMENT ON COLUMN item_classification.source IS 'Source of classification: user, ml, or default';
packages/db/migrations/multi_household_architecture.sql (new file, 397 lines)
@ -0,0 +1,397 @@
-- ============================================================================
-- Multi-Household & Multi-Store Architecture Migration
-- ============================================================================
-- This migration transforms the single-list app into a multi-tenant system
-- supporting multiple households, each with multiple stores.
--
-- IMPORTANT: Backup your database before running this migration!
--   pg_dump grocery_list > backup_$(date +%Y%m%d).sql
--
-- Migration Strategy:
--   1. Create new tables
--   2. Create "Main Household" for existing users
--   3. Migrate existing data to new structure
--   4. Update roles (keep users.role for system admin)
--   5. Verify data integrity
--   6. (Manual step) Drop old tables after verification
-- ============================================================================

BEGIN;

-- ============================================================================
-- STEP 1: CREATE NEW TABLES
-- ============================================================================

-- Households table
CREATE TABLE IF NOT EXISTS households (
    id SERIAL PRIMARY KEY,
    name VARCHAR(100) NOT NULL,
    created_at TIMESTAMP DEFAULT NOW(),
    created_by INTEGER REFERENCES users(id) ON DELETE SET NULL,
    invite_code VARCHAR(20) UNIQUE NOT NULL,
    code_expires_at TIMESTAMP
);

CREATE INDEX idx_households_invite_code ON households(invite_code);
COMMENT ON TABLE households IS 'Household groups (families, roommates, etc.)';
COMMENT ON COLUMN households.invite_code IS 'Unique code for inviting users to join household';

-- Store types table
CREATE TABLE IF NOT EXISTS stores (
    id SERIAL PRIMARY KEY,
    name VARCHAR(50) NOT NULL UNIQUE,
    default_zones JSONB,
    created_at TIMESTAMP DEFAULT NOW()
);

COMMENT ON TABLE stores IS 'Store types/chains (Costco, Target, Walmart, etc.)';
COMMENT ON COLUMN stores.default_zones IS 'JSON array of default zone names for this store type';

-- User-Household membership with per-household roles
CREATE TABLE IF NOT EXISTS household_members (
    id SERIAL PRIMARY KEY,
    household_id INTEGER REFERENCES households(id) ON DELETE CASCADE,
    user_id INTEGER REFERENCES users(id) ON DELETE CASCADE,
    role VARCHAR(20) NOT NULL CHECK (role IN ('admin', 'user')),
    joined_at TIMESTAMP DEFAULT NOW(),
    UNIQUE(household_id, user_id)
);

CREATE INDEX idx_household_members_user ON household_members(user_id);
CREATE INDEX idx_household_members_household ON household_members(household_id);
COMMENT ON TABLE household_members IS 'User membership in households with per-household roles';
COMMENT ON COLUMN household_members.role IS 'admin: full control, user: standard member';

-- Household-Store relationship
CREATE TABLE IF NOT EXISTS household_stores (
    id SERIAL PRIMARY KEY,
    household_id INTEGER REFERENCES households(id) ON DELETE CASCADE,
    store_id INTEGER REFERENCES stores(id) ON DELETE CASCADE,
    is_default BOOLEAN DEFAULT FALSE,
    added_at TIMESTAMP DEFAULT NOW(),
    UNIQUE(household_id, store_id)
);

CREATE INDEX idx_household_stores_household ON household_stores(household_id);
COMMENT ON TABLE household_stores IS 'Which stores each household shops at';

-- Master item catalog (shared across all households)
CREATE TABLE IF NOT EXISTS items (
    id SERIAL PRIMARY KEY,
    name VARCHAR(255) NOT NULL UNIQUE,
    default_image BYTEA,
    default_image_mime_type VARCHAR(50),
    created_at TIMESTAMP DEFAULT NOW(),
    usage_count INTEGER DEFAULT 0
);

CREATE INDEX idx_items_name ON items(name);
CREATE INDEX idx_items_usage_count ON items(usage_count DESC);
COMMENT ON TABLE items IS 'Master item catalog shared across all households';
COMMENT ON COLUMN items.usage_count IS 'Popularity metric for suggestions';

-- Household-specific grocery lists (per store)
CREATE TABLE IF NOT EXISTS household_lists (
    id SERIAL PRIMARY KEY,
    household_id INTEGER REFERENCES households(id) ON DELETE CASCADE,
    store_id INTEGER REFERENCES stores(id) ON DELETE CASCADE,
    item_id INTEGER REFERENCES items(id) ON DELETE CASCADE,
    quantity INTEGER NOT NULL DEFAULT 1,
    bought BOOLEAN DEFAULT FALSE,
    custom_image BYTEA,
    custom_image_mime_type VARCHAR(50),
    added_by INTEGER REFERENCES users(id) ON DELETE SET NULL,
    modified_on TIMESTAMP DEFAULT NOW(),
    UNIQUE(household_id, store_id, item_id)
);

CREATE INDEX idx_household_lists_household_store ON household_lists(household_id, store_id);
CREATE INDEX idx_household_lists_bought ON household_lists(household_id, store_id, bought);
CREATE INDEX idx_household_lists_modified ON household_lists(modified_on DESC);
COMMENT ON TABLE household_lists IS 'Grocery lists scoped to household + store combination';

-- Household-specific item classifications (per store)
CREATE TABLE IF NOT EXISTS household_item_classifications (
    id SERIAL PRIMARY KEY,
    household_id INTEGER REFERENCES households(id) ON DELETE CASCADE,
    store_id INTEGER REFERENCES stores(id) ON DELETE CASCADE,
    item_id INTEGER REFERENCES items(id) ON DELETE CASCADE,
    item_type VARCHAR(50),
    item_group VARCHAR(100),
    zone VARCHAR(100),
    confidence DECIMAL(3,2) DEFAULT 1.0 CHECK (confidence >= 0 AND confidence <= 1),
    source VARCHAR(20) DEFAULT 'user' CHECK (source IN ('user', 'ml', 'default')),
    created_at TIMESTAMP DEFAULT NOW(),
    updated_at TIMESTAMP DEFAULT NOW(),
    UNIQUE(household_id, store_id, item_id)
);

CREATE INDEX idx_household_classifications ON household_item_classifications(household_id, store_id);
CREATE INDEX idx_household_classifications_type ON household_item_classifications(item_type);
CREATE INDEX idx_household_classifications_zone ON household_item_classifications(zone);
COMMENT ON TABLE household_item_classifications IS 'Item classifications scoped to household + store';

-- History tracking
CREATE TABLE IF NOT EXISTS household_list_history (
    id SERIAL PRIMARY KEY,
    household_list_id INTEGER REFERENCES household_lists(id) ON DELETE CASCADE,
    quantity INTEGER NOT NULL,
    added_by INTEGER REFERENCES users(id) ON DELETE SET NULL,
    added_on TIMESTAMP DEFAULT NOW()
);

CREATE INDEX idx_household_history_list ON household_list_history(household_list_id);
CREATE INDEX idx_household_history_user ON household_list_history(added_by);
CREATE INDEX idx_household_history_date ON household_list_history(added_on DESC);
COMMENT ON TABLE household_list_history IS 'Tracks who added items and when';

-- ============================================================================
-- STEP 2: CREATE DEFAULT HOUSEHOLD AND STORE
-- ============================================================================

-- Create default household for existing users
INSERT INTO households (name, created_by, invite_code)
SELECT
    'Main Household',
    (SELECT id FROM users WHERE role = 'admin' LIMIT 1),     -- First admin as creator
    'MAIN' || LPAD(FLOOR(RANDOM() * 1000000)::TEXT, 6, '0')  -- 'MAIN' plus a random 6-digit suffix
WHERE NOT EXISTS (SELECT 1 FROM households WHERE name = 'Main Household');

-- Create default Costco store
INSERT INTO stores (name, default_zones)
VALUES (
    'Costco',
    '{
        "zones": [
            "Entrance & Seasonal",
            "Fresh Produce",
            "Meat & Seafood",
            "Dairy & Refrigerated",
            "Deli & Prepared Foods",
            "Bakery & Bread",
            "Frozen Foods",
            "Beverages",
            "Snacks & Candy",
            "Pantry & Dry Goods",
            "Health & Beauty",
            "Household & Cleaning",
            "Other"
        ]
    }'::jsonb
)
ON CONFLICT (name) DO NOTHING;

-- Link default household to default store
INSERT INTO household_stores (household_id, store_id, is_default)
SELECT
    (SELECT id FROM households WHERE name = 'Main Household'),
    (SELECT id FROM stores WHERE name = 'Costco'),
    TRUE
WHERE NOT EXISTS (
    SELECT 1 FROM household_stores
    WHERE household_id = (SELECT id FROM households WHERE name = 'Main Household')
);

-- ============================================================================
-- STEP 3: MIGRATE USERS TO HOUSEHOLD MEMBERS
-- ============================================================================

-- Add all existing users to Main Household
-- Old admins become household admins, others become standard users
INSERT INTO household_members (household_id, user_id, role)
SELECT
    (SELECT id FROM households WHERE name = 'Main Household'),
    id,
    CASE
        WHEN role = 'admin' THEN 'admin'
        ELSE 'user'
    END
FROM users
WHERE NOT EXISTS (
    SELECT 1 FROM household_members hm
    WHERE hm.user_id = users.id
    AND hm.household_id = (SELECT id FROM households WHERE name = 'Main Household')
);

-- ============================================================================
-- STEP 4: MIGRATE ITEMS TO MASTER CATALOG
-- ============================================================================

-- Extract unique items from grocery_list into master items table
INSERT INTO items (name, default_image, default_image_mime_type, created_at, usage_count)
SELECT
    LOWER(TRIM(item_name)) as name,
    item_image,
    image_mime_type,
    MIN(modified_on) as created_at,
    COUNT(*) as usage_count
FROM grocery_list
WHERE NOT EXISTS (
    SELECT 1 FROM items WHERE LOWER(items.name) = LOWER(TRIM(grocery_list.item_name))
)
GROUP BY LOWER(TRIM(item_name)), item_image, image_mime_type
ON CONFLICT (name) DO NOTHING;

-- ============================================================================
-- STEP 5: MIGRATE GROCERY_LIST TO HOUSEHOLD_LISTS
-- ============================================================================

-- Migrate current list to household_lists
INSERT INTO household_lists (
    household_id,
    store_id,
    item_id,
    quantity,
    bought,
    custom_image,
    custom_image_mime_type,
    added_by,
    modified_on
)
SELECT
    (SELECT id FROM households WHERE name = 'Main Household'),
    (SELECT id FROM stores WHERE name = 'Costco'),
    i.id,
    gl.quantity,
    gl.bought,
    CASE WHEN gl.item_image != i.default_image THEN gl.item_image ELSE NULL END,       -- Only store if different
    CASE WHEN gl.item_image != i.default_image THEN gl.image_mime_type ELSE NULL END,
    gl.added_by,
    gl.modified_on
FROM grocery_list gl
JOIN items i ON LOWER(i.name) = LOWER(TRIM(gl.item_name))
WHERE NOT EXISTS (
    SELECT 1 FROM household_lists hl
    WHERE hl.household_id = (SELECT id FROM households WHERE name = 'Main Household')
    AND hl.store_id = (SELECT id FROM stores WHERE name = 'Costco')
    AND hl.item_id = i.id
)
ON CONFLICT (household_id, store_id, item_id) DO NOTHING;

-- ============================================================================
-- STEP 6: MIGRATE ITEM_CLASSIFICATION TO HOUSEHOLD_ITEM_CLASSIFICATIONS
-- ============================================================================

-- Migrate classifications
INSERT INTO household_item_classifications (
    household_id,
    store_id,
    item_id,
    item_type,
    item_group,
    zone,
    confidence,
    source,
    created_at,
    updated_at
)
SELECT
    (SELECT id FROM households WHERE name = 'Main Household'),
    (SELECT id FROM stores WHERE name = 'Costco'),
    i.id,
    ic.item_type,
    ic.item_group,
    ic.zone,
    ic.confidence,
    ic.source,
    ic.created_at,
    ic.updated_at
FROM item_classification ic
JOIN grocery_list gl ON ic.id = gl.id
JOIN items i ON LOWER(i.name) = LOWER(TRIM(gl.item_name))
WHERE NOT EXISTS (
    SELECT 1 FROM household_item_classifications hic
    WHERE hic.household_id = (SELECT id FROM households WHERE name = 'Main Household')
    AND hic.store_id = (SELECT id FROM stores WHERE name = 'Costco')
    AND hic.item_id = i.id
)
ON CONFLICT (household_id, store_id, item_id) DO NOTHING;

-- ============================================================================
-- STEP 7: MIGRATE GROCERY_HISTORY TO HOUSEHOLD_LIST_HISTORY
-- ============================================================================

-- Migrate history records
INSERT INTO household_list_history (household_list_id, quantity, added_by, added_on)
SELECT
    hl.id,
    gh.quantity,
    gh.added_by,
    gh.added_on
FROM grocery_history gh
JOIN grocery_list gl ON gh.list_item_id = gl.id
JOIN items i ON LOWER(i.name) = LOWER(TRIM(gl.item_name))
JOIN household_lists hl ON hl.item_id = i.id
    AND hl.household_id = (SELECT id FROM households WHERE name = 'Main Household')
    AND hl.store_id = (SELECT id FROM stores WHERE name = 'Costco')
WHERE NOT EXISTS (
    SELECT 1 FROM household_list_history hlh
    WHERE hlh.household_list_id = hl.id
    AND hlh.added_by = gh.added_by
    AND hlh.added_on = gh.added_on
);

-- ============================================================================
-- STEP 8: UPDATE USER ROLES (SYSTEM-WIDE)
-- ============================================================================

-- Update system roles: admin → system_admin, others → user
UPDATE users
SET role = 'system_admin'
WHERE role = 'admin';

UPDATE users
SET role = 'user'
WHERE role IN ('editor', 'viewer');

-- ============================================================================
-- VERIFICATION QUERIES
-- ============================================================================

-- Run these to verify migration success:

-- Check household created
-- SELECT * FROM households;

-- Check all users added to household
-- SELECT u.username, u.role as system_role, hm.role as household_role
-- FROM users u
-- JOIN household_members hm ON u.id = hm.user_id
-- ORDER BY u.id;

-- Check items migrated
-- SELECT COUNT(*) as total_items FROM items;
-- SELECT COUNT(*) as original_items FROM (SELECT DISTINCT item_name FROM grocery_list) sub;

-- Check lists migrated
-- SELECT COUNT(*) as new_lists FROM household_lists;
-- SELECT COUNT(*) as old_lists FROM grocery_list;

-- Check classifications migrated
-- SELECT COUNT(*) as new_classifications FROM household_item_classifications;
-- SELECT COUNT(*) as old_classifications FROM item_classification;

-- Check history migrated
-- SELECT COUNT(*) as new_history FROM household_list_history;
-- SELECT COUNT(*) as old_history FROM grocery_history;

-- ============================================================================
-- MANUAL STEPS AFTER VERIFICATION
-- ============================================================================

-- After verifying data integrity, uncomment and run these to clean up:

-- DROP TABLE IF EXISTS grocery_history CASCADE;
-- DROP TABLE IF EXISTS item_classification CASCADE;
-- DROP TABLE IF EXISTS grocery_list CASCADE;

COMMIT;

-- ============================================================================
-- ROLLBACK (if something goes wrong)
-- ============================================================================

-- ROLLBACK;

-- Then restore from backup:
-- psql -U your_user -d grocery_list < backup_YYYYMMDD.sql

@ -1,80 +1,21 @@
@echo off
REM Multi-Household Migration Runner (Windows)
REM This script handles the complete migration process with safety checks
setlocal

setlocal enabledelayedexpansion

REM Database configuration
set DB_USER=postgres
set DB_HOST=192.168.7.112
set DB_NAME=grocery
set PGPASSWORD=Asdwed123A.

set BACKUP_DIR=backend\migrations\backups
set TIMESTAMP=%date:~-4%%date:~-10,2%%date:~-7,2%_%time:~0,2%%time:~3,2%%time:~6,2%
set TIMESTAMP=%TIMESTAMP: =0%
set BACKUP_FILE=%BACKUP_DIR%\backup_%TIMESTAMP%.sql

echo ================================================
echo Multi-Household Architecture Migration
echo ================================================
echo.

REM Create backup directory
if not exist "%BACKUP_DIR%" mkdir "%BACKUP_DIR%"

REM Step 1: Backup (SKIPPED - using database template copy)
echo [1/5] Backup: SKIPPED (using 'grocery' database copy)
echo.

REM Step 2: Show current stats
echo [2/5] Current database statistics:
psql -h %DB_HOST% -U %DB_USER% -d %DB_NAME% -c "SELECT 'Users' as table_name, COUNT(*) as count FROM users UNION ALL SELECT 'Grocery Items', COUNT(*) FROM grocery_list UNION ALL SELECT 'Classifications', COUNT(*) FROM item_classification UNION ALL SELECT 'History Records', COUNT(*) FROM grocery_history;"
echo.

REM Step 3: Confirm
echo [3/5] Ready to run migration
echo Database: %DB_NAME% on %DB_HOST%
echo Backup: %BACKUP_FILE%
echo.
set /p CONFIRM="Continue with migration? (yes/no): "
if /i not "%CONFIRM%"=="yes" (
    echo Migration cancelled.
    exit /b 0
if "%DATABASE_URL%"=="" (
    echo DATABASE_URL is required. Aborting.
    exit /b 1
)
echo.

REM Step 4: Run migration
echo [4/5] Running migration script...
psql -h %DB_HOST% -U %DB_USER% -d %DB_NAME% -f backend\migrations\multi_household_architecture.sql
if %errorlevel% neq 0 (
    echo [ERROR] Migration failed! Rolling back...
    echo Restoring from backup: %BACKUP_FILE%
    psql -h %DB_HOST% -U %DB_USER% -d %DB_NAME% < "%BACKUP_FILE%"
    exit /b 1
)
echo [OK] Migration completed successfully
echo.
echo Checking migration status...
call npm run db:migrate:status
if errorlevel 1 exit /b 1

REM Step 5: Verification
echo [5/5] Verifying migration...
psql -h %DB_HOST% -U %DB_USER% -d %DB_NAME% -c "SELECT id, name, invite_code FROM households;"
psql -h %DB_HOST% -U %DB_USER% -d %DB_NAME% -c "SELECT u.id, u.username, u.role as system_role, hm.role as household_role FROM users u LEFT JOIN household_members hm ON u.id = hm.user_id ORDER BY u.id LIMIT 10;"
psql -h %DB_HOST% -U %DB_USER% -d %DB_NAME% -c "SELECT 'Items' as metric, COUNT(*)::text as count FROM items UNION ALL SELECT 'Household Lists', COUNT(*)::text FROM household_lists UNION ALL SELECT 'Classifications', COUNT(*)::text FROM household_item_classifications UNION ALL SELECT 'History Records', COUNT(*)::text FROM household_list_history;"
echo.
echo Applying pending migrations...
call npm run db:migrate
if errorlevel 1 exit /b 1

echo ================================================
echo Migration Complete!
echo ================================================
echo.
echo Next Steps:
echo 1. Review verification results above
echo 2. Test the application
echo 3. If issues found, rollback with:
echo    psql -h %DB_HOST% -U %DB_USER% -d %DB_NAME% ^< %BACKUP_FILE%
echo 4. If successful, proceed to Sprint 2 (Backend API)
echo.
echo Backup location: %BACKUP_FILE%
echo.
echo Final migration status...
call npm run db:migrate:status
if errorlevel 1 exit /b 1

pause
echo Done.

run-migration.sh
@@ -1,146 +1,24 @@
 #!/bin/bash

 # Multi-Household Migration Runner
 # This script handles the complete migration process with safety checks
+set -euo pipefail
-set -e  # Exit on error
-
-# Colors for output
-RED='\033[0;31m'
-GREEN='\033[0;32m'
-YELLOW='\033[1;33m'
-BLUE='\033[0;34m'
-NC='\033[0m' # No Color
-
-# Database configuration (from .env)
-DB_USER="postgres"
-DB_HOST="192.168.7.112"
-DB_NAME="grocery"
-export PGPASSWORD="Asdwed123A."
-
-BACKUP_DIR="./backend/migrations/backups"
-TIMESTAMP=$(date +%Y%m%d_%H%M%S)
-BACKUP_FILE="${BACKUP_DIR}/backup_${TIMESTAMP}.sql"
-
-echo -e "${BLUE}╔════════════════════════════════════════════════╗${NC}"
-echo -e "${BLUE}║     Multi-Household Architecture Migration     ║${NC}"
-echo -e "${BLUE}╚════════════════════════════════════════════════╝${NC}"
-echo ""
-
-# Create backup directory if it doesn't exist
-mkdir -p "$BACKUP_DIR"
-
-# Step 1: Backup
-echo -e "${YELLOW}[1/5] Creating database backup...${NC}"
-pg_dump -h "$DB_HOST" -U "$DB_USER" -d "$DB_NAME" > "$BACKUP_FILE"
-if [ $? -eq 0 ]; then
-    echo -e "${GREEN}✓ Backup created: $BACKUP_FILE${NC}"
-else
-    echo -e "${RED}✗ Backup failed!${NC}"
-    exit 1
-fi
-echo ""
-
-# Step 2: Show current stats
-echo -e "${YELLOW}[2/5] Current database statistics:${NC}"
-psql -h "$DB_HOST" -U "$DB_USER" -d "$DB_NAME" -c "
-SELECT
-    'Users' as table_name, COUNT(*) as count FROM users
-UNION ALL
-SELECT 'Grocery Items', COUNT(*) FROM grocery_list
-UNION ALL
-SELECT 'Classifications', COUNT(*) FROM item_classification
-UNION ALL
-SELECT 'History Records', COUNT(*) FROM grocery_history;
-"
-echo ""
-
-# Step 3: Confirm
-echo -e "${YELLOW}[3/5] Ready to run migration${NC}"
-echo -e "Database: ${BLUE}$DB_NAME${NC} on ${BLUE}$DB_HOST${NC}"
-echo -e "Backup: ${GREEN}$BACKUP_FILE${NC}"
-echo ""
-read -p "Continue with migration? (yes/no): " -r
-echo ""
-if [[ ! $REPLY =~ ^[Yy]es$ ]]; then
-    echo -e "${RED}Migration cancelled.${NC}"
-    exit 1
-fi
+if ! command -v node >/dev/null 2>&1; then
+  echo "node is required."
+  exit 1
+fi
-
-# Step 4: Run migration
-echo -e "${YELLOW}[4/5] Running migration script...${NC}"
-psql -h "$DB_HOST" -U "$DB_USER" -d "$DB_NAME" -f backend/migrations/multi_household_architecture.sql
-if [ $? -eq 0 ]; then
-    echo -e "${GREEN}✓ Migration completed successfully${NC}"
-else
-    echo -e "${RED}✗ Migration failed! Rolling back...${NC}"
-    echo -e "${YELLOW}Restoring from backup: $BACKUP_FILE${NC}"
-    psql -h "$DB_HOST" -U "$DB_USER" -d "$DB_NAME" < "$BACKUP_FILE"
-    exit 1
-fi
+if [ -z "${DATABASE_URL:-}" ]; then
+  echo "DATABASE_URL is required. Aborting."
+  exit 1
+fi
-echo ""
-
-# Step 5: Verification
-echo -e "${YELLOW}[5/5] Verifying migration...${NC}"
-psql -h "$DB_HOST" -U "$DB_USER" -d "$DB_NAME" << 'EOF'
-\echo ''
-\echo '=== Household Created ==='
-SELECT id, name, invite_code FROM households;
+echo "Checking migration status..."
+npm run db:migrate:status
-
-\echo ''
-\echo '=== User Roles ==='
-SELECT u.id, u.username, u.role as system_role, hm.role as household_role
-FROM users u
-LEFT JOIN household_members hm ON u.id = hm.user_id
-ORDER BY u.id
-LIMIT 10;
+echo "Applying pending migrations..."
+npm run db:migrate
-
-\echo ''
-\echo '=== Migration Counts ==='
-SELECT
-    'Items (Master Catalog)' as metric, COUNT(*)::text as count FROM items
-UNION ALL
-SELECT 'Household Lists', COUNT(*)::text FROM household_lists
-UNION ALL
-SELECT 'Classifications', COUNT(*)::text FROM household_item_classifications
-UNION ALL
-SELECT 'History Records', COUNT(*)::text FROM household_list_history
-UNION ALL
-SELECT 'Household Members', COUNT(*)::text FROM household_members
-UNION ALL
-SELECT 'Stores', COUNT(*)::text FROM stores;
+echo "Final migration status..."
+npm run db:migrate:status
-
-\echo ''
-\echo '=== Data Integrity Checks ==='
-\echo 'Users without household membership (should be 0):'
-SELECT COUNT(*) FROM users u
-LEFT JOIN household_members hm ON u.id = hm.user_id
-WHERE hm.id IS NULL;
-
-\echo ''
-\echo 'Lists without valid items (should be 0):'
-SELECT COUNT(*) FROM household_lists hl
-LEFT JOIN items i ON hl.item_id = i.id
-WHERE i.id IS NULL;
-
-\echo ''
-\echo 'History without valid lists (should be 0):'
-SELECT COUNT(*) FROM household_list_history hlh
-LEFT JOIN household_lists hl ON hlh.household_list_id = hl.id
-WHERE hl.id IS NULL;
-EOF
-
-echo ""
-echo -e "${GREEN}╔════════════════════════════════════════════════╗${NC}"
-echo -e "${GREEN}║              Migration Complete!               ║${NC}"
-echo -e "${GREEN}╚════════════════════════════════════════════════╝${NC}"
-echo ""
-echo -e "${BLUE}Next Steps:${NC}"
-echo -e "1. Review verification results above"
-echo -e "2. Test the application"
-echo -e "3. If issues found, rollback with:"
-echo -e "   ${YELLOW}psql -h $DB_HOST -U $DB_USER -d $DB_NAME < $BACKUP_FILE${NC}"
-echo -e "4. If successful, proceed to Sprint 2 (Backend API)"
-echo ""
-echo -e "${YELLOW}Backup location: $BACKUP_FILE${NC}"
-echo ""
+echo "Done."
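The preflight check at the top of the rewritten script can be exercised on its own. This is a minimal standalone sketch of the same guard pattern; the function name and the connection URL below are placeholders for illustration, not values from the repository:

```shell
# Standalone sketch of the DATABASE_URL preflight guard used by
# run-migration.sh (check_database_url and the URL are placeholders).
check_database_url() {
  if [ -z "${DATABASE_URL:-}" ]; then
    echo "DATABASE_URL is required. Aborting."
    return 1
  fi
  echo "preflight ok"
}

# With the variable unset, the guard reports the missing value.
unset DATABASE_URL
check_database_url || true

# With any value exported, the guard passes.
export DATABASE_URL="postgres://localhost:5432/grocery"
check_database_url
```

The `${DATABASE_URL:-}` expansion keeps the check safe under `set -u`, which the rewritten script enables via `set -euo pipefail`.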
scripts/db-migrate-common.js (new file, 108 lines)
@@ -0,0 +1,108 @@
"use strict";

const fs = require("fs");
const path = require("path");
const { spawnSync } = require("child_process");

const migrationsDir = path.resolve(
  __dirname,
  "..",
  "packages",
  "db",
  "migrations"
);

function ensureDatabaseUrl() {
  const databaseUrl = process.env.DATABASE_URL;
  if (!databaseUrl) {
    throw new Error("DATABASE_URL is required.");
  }
  return databaseUrl;
}

function ensurePsql() {
  const result = spawnSync("psql", ["--version"], { stdio: "pipe" });
  if (result.error || result.status !== 0) {
    throw new Error("psql executable was not found in PATH.");
  }
}

function ensureMigrationsDir() {
  if (!fs.existsSync(migrationsDir)) {
    throw new Error(`Migrations directory not found: ${migrationsDir}`);
  }
}

function getMigrationFiles() {
  ensureMigrationsDir();
  return fs
    .readdirSync(migrationsDir)
    .filter((file) => file.endsWith(".sql"))
    .sort((a, b) => a.localeCompare(b));
}

function runPsql(databaseUrl, args) {
  const result = spawnSync("psql", [databaseUrl, ...args], {
    stdio: "pipe",
    encoding: "utf8",
  });
  if (result.status !== 0) {
    const stderr = (result.stderr || "").trim();
    const stdout = (result.stdout || "").trim();
    const details = [stderr, stdout].filter(Boolean).join("\n");
    throw new Error(details || "psql command failed");
  }
  return result.stdout || "";
}

function escapeSqlLiteral(value) {
  return value.replace(/'/g, "''");
}

function ensureSchemaMigrationsTable(databaseUrl) {
  runPsql(databaseUrl, [
    "-v",
    "ON_ERROR_STOP=1",
    "-c",
    "CREATE TABLE IF NOT EXISTS schema_migrations (filename TEXT PRIMARY KEY, applied_at TIMESTAMPTZ NOT NULL DEFAULT NOW());",
  ]);
}

function getAppliedMigrations(databaseUrl) {
  const output = runPsql(databaseUrl, [
    "-At",
    "-v",
    "ON_ERROR_STOP=1",
    "-c",
    "SELECT filename FROM schema_migrations ORDER BY filename ASC;",
  ]);
  return new Set(
    output
      .split(/\r?\n/)
      .map((line) => line.trim())
      .filter(Boolean)
  );
}

function applyMigration(databaseUrl, filename) {
  const fullPath = path.join(migrationsDir, filename);
  runPsql(databaseUrl, ["-v", "ON_ERROR_STOP=1", "-f", fullPath]);
  runPsql(databaseUrl, [
    "-v",
    "ON_ERROR_STOP=1",
    "-c",
    `INSERT INTO schema_migrations (filename) VALUES ('${escapeSqlLiteral(
      filename
    )}') ON CONFLICT DO NOTHING;`,
  ]);
}

module.exports = {
  applyMigration,
  ensureDatabaseUrl,
  ensurePsql,
  ensureSchemaMigrationsTable,
  getAppliedMigrations,
  getMigrationFiles,
  migrationsDir,
};
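The ordering and escaping helpers in `db-migrate-common.js` are pure functions, so they can be sanity-checked without a database. This sketch re-states that logic inline (rather than importing the module) against made-up filenames:

```javascript
// Re-statement of two pure helpers from scripts/db-migrate-common.js:
// migrations apply in lexicographic filename order, and filenames are
// escaped before being interpolated into the schema_migrations INSERT.
const escapeSqlLiteral = (value) => value.replace(/'/g, "''");

// Made-up directory listing: non-.sql entries are filtered out.
const entries = ["002_roles.sql", "README.md", "001_init.sql", "003_o'brien.sql"];
const migrationFiles = entries
  .filter((file) => file.endsWith(".sql"))
  .sort((a, b) => a.localeCompare(b));

console.log(migrationFiles.join(" "));
// → 001_init.sql 002_roles.sql 003_o'brien.sql
console.log(escapeSqlLiteral("003_o'brien.sql"));
// → 003_o''brien.sql
```

Doubling single quotes is the standard SQL literal escape, which keeps a filename like `003_o'brien.sql` from breaking the recorded `INSERT` statement.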
scripts/db-migrate-status.js (new file, 42 lines)
@@ -0,0 +1,42 @@
"use strict";

const {
  ensureDatabaseUrl,
  ensurePsql,
  ensureSchemaMigrationsTable,
  getAppliedMigrations,
  getMigrationFiles,
} = require("./db-migrate-common");

function main() {
  if (process.argv.includes("--help")) {
    console.log("Usage: npm run db:migrate:status");
    process.exit(0);
  }

  const databaseUrl = ensureDatabaseUrl();
  ensurePsql();
  ensureSchemaMigrationsTable(databaseUrl);

  const files = getMigrationFiles();
  const applied = getAppliedMigrations(databaseUrl);

  let pendingCount = 0;
  for (const file of files) {
    const status = applied.has(file) ? "APPLIED" : "PENDING";
    if (status === "PENDING") pendingCount += 1;
    console.log(`${status} ${file}`);
  }

  console.log("");
  console.log(`Total: ${files.length}`);
  console.log(`Applied: ${files.length - pendingCount}`);
  console.log(`Pending: ${pendingCount}`);
}

try {
  main();
} catch (error) {
  console.error(error.message);
  process.exit(1);
}
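The status, verify, and migrate scripts all reduce to the same set difference between on-disk migration files and rows in `schema_migrations`. A sketch with an assumed in-memory applied set in place of the psql query (filenames are made up):

```javascript
// Sketch of the pending-migration computation shared by the status,
// verify, and migrate scripts, with the database replaced by a Set.
const files = ["001_init.sql", "002_roles.sql", "003_lists.sql"]; // on disk
const applied = new Set(["001_init.sql"]); // rows in schema_migrations

const pending = files.filter((file) => !applied.has(file));

console.log(`Total: ${files.length}`);                     // Total: 3
console.log(`Applied: ${files.length - pending.length}`);  // Applied: 1
console.log(`Pending: ${pending.length}`);                 // Pending: 2
```

From this shared result, status prints counts, verify exits non-zero when `pending` is non-empty, and migrate applies each pending file in order.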
scripts/db-migrate-verify.js (new file, 41 lines)
@@ -0,0 +1,41 @@
"use strict";

const {
  ensureDatabaseUrl,
  ensurePsql,
  ensureSchemaMigrationsTable,
  getAppliedMigrations,
  getMigrationFiles,
} = require("./db-migrate-common");

function main() {
  if (process.argv.includes("--help")) {
    console.log("Usage: npm run db:migrate:verify");
    process.exit(0);
  }

  const databaseUrl = ensureDatabaseUrl();
  ensurePsql();
  ensureSchemaMigrationsTable(databaseUrl);

  const files = getMigrationFiles();
  const applied = getAppliedMigrations(databaseUrl);
  const pending = files.filter((file) => !applied.has(file));

  if (pending.length > 0) {
    console.error("Pending migrations detected:");
    for (const file of pending) {
      console.error(`- ${file}`);
    }
    process.exit(1);
  }

  console.log("Migration verification passed. No pending migrations.");
}

try {
  main();
} catch (error) {
  console.error(error.message);
  process.exit(1);
}
scripts/db-migrate.js (new file, 44 lines)
@@ -0,0 +1,44 @@
"use strict";

const {
  applyMigration,
  ensureDatabaseUrl,
  ensurePsql,
  ensureSchemaMigrationsTable,
  getAppliedMigrations,
  getMigrationFiles,
} = require("./db-migrate-common");

function main() {
  if (process.argv.includes("--help")) {
    console.log("Usage: npm run db:migrate");
    process.exit(0);
  }

  const databaseUrl = ensureDatabaseUrl();
  ensurePsql();
  ensureSchemaMigrationsTable(databaseUrl);

  const files = getMigrationFiles();
  const applied = getAppliedMigrations(databaseUrl);
  const pending = files.filter((file) => !applied.has(file));

  if (pending.length === 0) {
    console.log("No pending migrations.");
    return;
  }

  for (const file of pending) {
    console.log(`Applying: ${file}`);
    applyMigration(databaseUrl, file);
  }

  console.log(`Applied ${pending.length} migration(s).`);
}

try {
  main();
} catch (error) {
  console.error(error.message);
  process.exit(1);
}