docker compose
All checks were successful
Build and Push Docker Image / build-and-push (push) Successful in 2m51s

This commit is contained in:
Dmitri 2026-04-24 20:27:12 +02:00
parent 3a51cf863e
commit e93a65ae9c
Signed by: kanopo
GPG Key ID: 759ADD40E3132AC7
10 changed files with 707 additions and 270 deletions

.dockerignore (new file, +12)

@@ -0,0 +1,12 @@
.git/
.gitea/
bruno/
target/
.dockerignore
.env
.gitignore
BACKEND_BLUEPRINT.md
Dockerfile
LICENSE
compose.prod.yaml
compose.yaml

.gitignore (vendored, +1)

@@ -1,3 +1,4 @@
.env
/target
/logs
postgres-data

BACKEND_BLUEPRINT.md

@@ -1,51 +1,137 @@
# Rhythm Backend Blueprint (Rust/Axum)
## IDEAS
- Instead of Elasticsearch, use Postgres `tsvector`/`tsquery` for smooth issue searching
- YouTrack competitor: issue tracker with organizations, projects, issues, and project-level RBAC
---
## Chosen Stack
- **Language:** Rust (edition 2024)
- **Framework:** Axum
- **Runtime:** Tokio
- **Migrations:** sqlx built-in migrations
- **DB Access:** sqlx (compile-time checked queries, hand-written repositories)
- **PostgreSQL Driver/Pool:** sqlx::PgPool
- **Logging:** tracing + tracing-subscriber + tracing-tree (structured, hierarchical)
- **Auth:** jsonwebtoken (HS256) + argon2 (password hashing)
- **API Docs:** utoipa (code-first OpenAPI generation)
- **Health Probes:** liveness/readiness endpoints
- **Deployment:** Docker Compose on self-hosted hardware
---
## Decisions Made
1. **JWT signing:** HS256 (shared secret from env, can migrate to RS256 later)
2. **Token model:** Access + Refresh (short-lived access ~15min, long-lived refresh ~7d, refresh tokens stored hashed in DB)
3. **Roles:** Project-level roles (`admin`, `developer`, `reporter`) + Org-level roles (`owner`, `admin`, `member`)
4. **OpenAPI workflow:** Code-first with utoipa (auto-generate spec from Rust handlers/models)
5. **DB access:** sqlx with hand-written repositories (compile-time checked, no ORM)
6. **First entities:** Users → Organizations → Projects → Issues
7. **Sprint stages:** Custom per project (project admin defines board columns like Todo → In Review → Done)
8. **Assignees:** Multiple assignees per issue (join table)
9. **Issue content features:** Tags, comments, issue relations, time tracking
10. **Plugin system:** Lua scripting with cron + event-driven triggers (Phase 7)
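
Decision #3 can be sketched as plain Rust enums. This is an illustrative sketch, not the actual `models/role.rs`: the assumption is that deriving `Ord` on a privilege-ordered declaration is enough to express the "developer+" checks used throughout the endpoint list.

```rust
// Sketch of decision #3. Assumption: variants are declared in ascending
// privilege order, so deriving Ord makes "role or higher" a comparison.
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
pub enum ProjectRole {
    Reporter,  // create issues, comment, view
    Developer, // plus assign, update status/stage, log time
    Admin,     // plus delete issues, manage members and stages
}

/// A "developer+" guard reduces to a single comparison.
pub fn has_at_least(role: ProjectRole, required: ProjectRole) -> bool {
    role >= required
}
```

A `RequireRole`-style extractor could call `has_at_least(member_role, required)` before letting the handler run.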
---
## Architecture Direction
### Project Structure
```
src/
├── main.rs # Entrypoint, server, graceful shutdown
├── config.rs # Config (exists, extend)
├── logging.rs # Logging (exists)
├── errors.rs # Unified error types → Axum responses
├── state.rs # AppState (PgPool, config, etc.)
├── routes.rs # Router composition (/api/v1/...)
├── auth/
│ ├── jwt.rs # HS256 token creation/validation
│ ├── hash.rs # argon2 password hashing
│ ├── handlers.rs # register, login, refresh
│ ├── models.rs # auth DTOs
│ └── service.rs # auth business logic
├── middleware/
│ ├── auth.rs # JWT extraction layer
│ └── rbac.rs # Project-level role guard
├── models/
│ ├── user.rs
│ ├── org.rs
│ ├── project.rs
│ ├── issue.rs
│ ├── comment.rs
│ ├── tag.rs
│ ├── sprint.rs
│ ├── stage.rs
│ ├── time_entry.rs
│ └── role.rs # OrgRole, ProjectRole enums
├── handlers/
│ ├── health.rs
│ ├── orgs.rs
│ ├── projects.rs
│ ├── issues.rs
│ ├── comments.rs
│ ├── tags.rs
│ ├── sprints.rs
│ ├── stages.rs
│ └── time_entries.rs
├── services/
│ ├── org.rs
│ ├── project.rs
│ ├── issue.rs
│ ├── comment.rs
│ ├── tag.rs
│ ├── sprint.rs
│ ├── stage.rs
│ └── time_entry.rs
└── db/
├── mod.rs # Pool setup, migration runner
└── repos/
├── users.rs
├── orgs.rs
├── projects.rs
├── issues.rs
├── comments.rs
├── tags.rs
├── sprints.rs
├── stages.rs
├── time_entries.rs
├── memberships.rs
└── refresh_tokens.rs
migrations/
├── 001_create_users.sql
├── 002_create_organizations.sql
├── 003_create_memberships.sql
├── 004_create_projects.sql
├── 005_create_stages.sql
├── 006_create_issues.sql
├── 007_create_issue_assignees.sql
├── 008_create_tags.sql
├── 009_create_comments.sql
├── 010_create_issue_relations.sql
├── 011_create_time_entries.sql
├── 012_create_sprints.sql
├── 013_create_refresh_tokens.sql
└── 014_add_tsvector_search.sql
```
### Layering Principles
- Handlers stay thin — they extract, validate, call service, respond
- Service layer owns business logic + DB transactions
- Repositories own SQL queries (sqlx compile-time checked)
- No transaction logic in handlers
- AppState shared via Axum's State extractor
### Transaction Strategy
- Service layer owns DB transactions
- Repositories accept either `&PgPool` or `&mut Transaction<Postgres>`
- sqlx compile-time query checking ensures SQL correctness at build time
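
The "either pool or transaction" seam can be sketched std-only. `Exec`, `Pool`, and `Tx` below are stand-ins invented for illustration; in the real code this role is played by sqlx's `Executor`/`PgExecutor` traits, which `&PgPool` and `&mut Transaction` both implement.

```rust
// Std-only sketch: one repo function serves both auto-commit and
// in-transaction call sites via a shared trait.
pub trait Exec {
    fn run(&mut self, sql: &str) -> String;
}

pub struct Pool;
impl Exec for Pool {
    fn run(&mut self, sql: &str) -> String {
        format!("pool: {sql}") // auto-commit path
    }
}

#[derive(Default)]
pub struct Tx {
    pub log: Vec<String>, // statements buffered until commit
}
impl Exec for Tx {
    fn run(&mut self, sql: &str) -> String {
        self.log.push(sql.to_string());
        format!("tx: {sql}")
    }
}

// Repository function: unaware of whether it runs inside a transaction.
pub fn touch_issue(exec: &mut impl Exec, id: u32) -> String {
    exec.run(&format!("UPDATE issues SET updated_at = now() WHERE id = {id}"))
}
```

The service layer decides which executor to pass, so transaction boundaries never leak into handlers or repositories.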
---
@@ -55,131 +141,588 @@
- REST JSON API
- Possible future WebSocket support for interactive features
- Versioning: `/api/v1`
### Health Endpoints
- `GET /api/v1/health/live` - process is alive
- `GET /api/v1/health/ready` - DB ping succeeds (and optionally migration version check)
### Auth Endpoints
- `POST /api/v1/auth/register` - create account (email + password)
- `POST /api/v1/auth/login` - get access + refresh tokens
- `POST /api/v1/auth/refresh` - rotate refresh token
### Organization Endpoints
- `POST /api/v1/orgs` - create org (creator becomes owner)
- `GET /api/v1/orgs` - list user's orgs
- `GET /api/v1/orgs/{org_slug}` - get org
- `PATCH /api/v1/orgs/{org_slug}` - update org (owner/admin)
- `DELETE /api/v1/orgs/{org_slug}` - delete org (owner)
### Project Endpoints
- `POST /api/v1/orgs/{org_slug}/projects` - create project (org owner/admin)
- `GET /api/v1/orgs/{org_slug}/projects` - list projects
- `GET /api/v1/orgs/{org_slug}/projects/{project_slug}` - get project
- `PATCH /api/v1/orgs/{org_slug}/projects/{project_slug}` - update project (project admin)
- `DELETE /api/v1/orgs/{org_slug}/projects/{project_slug}` - delete project (project admin)
### Issue Endpoints
- `POST /api/v1/orgs/{org_slug}/projects/{project_slug}/issues` - create issue (reporter+)
- `GET /api/v1/orgs/{org_slug}/projects/{project_slug}/issues` - list issues (with `?q=` search, `?tag=`, `?assignee=`, `?status=`)
- `GET /api/v1/orgs/{org_slug}/projects/{project_slug}/issues/{number}` - get issue
- `PATCH /api/v1/orgs/{org_slug}/projects/{project_slug}/issues/{number}` - update issue (developer+)
- `DELETE /api/v1/orgs/{org_slug}/projects/{project_slug}/issues/{number}` - delete issue (admin)
### Issue Sub-resource Endpoints
**Tags:**
- `GET /api/v1/orgs/{org}/projects/{proj}/tags` - list project tags
- `POST /api/v1/orgs/{org}/projects/{proj}/tags` - create tag (developer+)
- `DELETE /api/v1/orgs/{org}/projects/{proj}/tags/{tag_id}` - delete tag (admin)
- `PUT /api/v1/orgs/{org}/projects/{proj}/issues/{num}/tags` - set issue tags (developer+)
**Comments:**
- `GET /api/v1/orgs/{org}/projects/{proj}/issues/{num}/comments` - list comments
- `POST /api/v1/orgs/{org}/projects/{proj}/issues/{num}/comments` - add comment (reporter+)
- `PATCH /api/v1/orgs/{org}/projects/{proj}/issues/{num}/comments/{id}` - edit comment (author only)
- `DELETE /api/v1/orgs/{org}/projects/{proj}/issues/{num}/comments/{id}` - delete comment (author or admin)
**Assignees:**
- `GET /api/v1/orgs/{org}/projects/{proj}/issues/{num}/assignees` - list assignees
- `PUT /api/v1/orgs/{org}/projects/{proj}/issues/{num}/assignees` - set assignees (developer+)
**Relations:**
- `GET /api/v1/orgs/{org}/projects/{proj}/issues/{num}/relations` - list relations
- `POST /api/v1/orgs/{org}/projects/{proj}/issues/{num}/relations` - add relation (developer+)
- `DELETE /api/v1/orgs/{org}/projects/{proj}/issues/{num}/relations/{id}` - remove relation (developer+)
**Time Tracking:**
- `GET /api/v1/orgs/{org}/projects/{proj}/issues/{num}/time-entries` - list time entries
- `POST /api/v1/orgs/{org}/projects/{proj}/issues/{num}/time-entries` - log time (developer+)
- `DELETE /api/v1/orgs/{org}/projects/{proj}/issues/{num}/time-entries/{id}` - delete entry (author or admin)
### Sprint Endpoints
- `POST /api/v1/orgs/{org}/projects/{proj}/sprints` - create sprint (developer+)
- `GET /api/v1/orgs/{org}/projects/{proj}/sprints` - list sprints
- `GET /api/v1/orgs/{org}/projects/{proj}/sprints/{id}` - get sprint
- `PATCH /api/v1/orgs/{org}/projects/{proj}/sprints/{id}` - update sprint (developer+)
- `PATCH /api/v1/orgs/{org}/projects/{proj}/sprints/{id}/start` - start sprint (admin)
- `PATCH /api/v1/orgs/{org}/projects/{proj}/sprints/{id}/complete` - complete sprint (admin)
- `GET /api/v1/orgs/{org}/projects/{proj}/sprints/{id}/board` - sprint board (issues grouped by stage)
### Stage Endpoints (project board columns)
- `POST /api/v1/orgs/{org}/projects/{proj}/stages` - create stage (admin)
- `PATCH /api/v1/orgs/{org}/projects/{proj}/stages` - reorder stages (admin)
- `DELETE /api/v1/orgs/{org}/projects/{proj}/stages/{id}` - delete stage (admin)
### Logging
- tracing with hierarchical layer (dev: pretty console, prod: JSON + file with rotation)
- Correlation/request ID in middleware
- Structured error logging from middleware and service boundaries
---
## Database Schema
```sql
-- Enums
CREATE TYPE org_role AS ENUM ('owner', 'admin', 'member');
CREATE TYPE project_role AS ENUM ('admin', 'developer', 'reporter');
CREATE TYPE issue_status AS ENUM ('open', 'in_progress', 'resolved', 'closed');
CREATE TYPE issue_priority AS ENUM ('critical', 'major', 'normal', 'minor');
CREATE TYPE issue_type AS ENUM ('bug', 'feature', 'task', 'improvement');
CREATE TYPE issue_relation_type AS ENUM ('blocks', 'is_blocked_by', 'duplicates', 'is_duplicated_by', 'relates_to', 'clones', 'is_cloned_by');
CREATE TYPE sprint_status AS ENUM ('planned', 'active', 'completed');
CREATE TYPE trigger_type AS ENUM ('event', 'schedule', 'manual');
-- Users
CREATE TABLE users (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
email TEXT NOT NULL UNIQUE,
password_hash TEXT NOT NULL,
display_name TEXT,
avatar_url TEXT,
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
-- Organizations
CREATE TABLE organizations (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
name TEXT NOT NULL,
slug TEXT NOT NULL UNIQUE,
description TEXT,
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
-- Org memberships (owner/admin/member)
CREATE TABLE org_memberships (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
user_id UUID NOT NULL REFERENCES users(id) ON DELETE CASCADE,
org_id UUID NOT NULL REFERENCES organizations(id) ON DELETE CASCADE,
role org_role NOT NULL DEFAULT 'member',
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
UNIQUE(user_id, org_id)
);
-- Projects
CREATE TABLE projects (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
name TEXT NOT NULL,
slug TEXT NOT NULL,
description TEXT,
org_id UUID NOT NULL REFERENCES organizations(id) ON DELETE CASCADE,
created_by UUID NOT NULL REFERENCES users(id),
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT now(),
UNIQUE(org_id, slug)
);
-- Project memberships (admin/developer/reporter)
CREATE TABLE project_memberships (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
user_id UUID NOT NULL REFERENCES users(id) ON DELETE CASCADE,
project_id UUID NOT NULL REFERENCES projects(id) ON DELETE CASCADE,
role project_role NOT NULL DEFAULT 'reporter',
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
UNIQUE(user_id, project_id)
);
-- Stages (custom board columns per project)
CREATE TABLE stages (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
name TEXT NOT NULL,
position INT NOT NULL,
project_id UUID NOT NULL REFERENCES projects(id) ON DELETE CASCADE,
UNIQUE(project_id, position)
);
-- Issues
CREATE TABLE issues (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
number BIGINT NOT NULL,
title TEXT NOT NULL,
description TEXT,
status issue_status NOT NULL DEFAULT 'open',
priority issue_priority NOT NULL DEFAULT 'normal',
issue_type issue_type NOT NULL DEFAULT 'task',
project_id UUID NOT NULL REFERENCES projects(id) ON DELETE CASCADE,
reporter_id UUID NOT NULL REFERENCES users(id),
stage_id UUID REFERENCES stages(id) ON DELETE SET NULL,
sprint_id UUID REFERENCES sprints(id) ON DELETE SET NULL,
search_vector TSVECTOR GENERATED ALWAYS AS (
setweight(to_tsvector('english', coalesce(title, '')), 'A') ||
setweight(to_tsvector('english', coalesce(description, '')), 'B')
) STORED,
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT now(),
UNIQUE(project_id, number)
);
-- Multiple assignees per issue
CREATE TABLE issue_assignees (
issue_id UUID NOT NULL REFERENCES issues(id) ON DELETE CASCADE,
user_id UUID NOT NULL REFERENCES users(id) ON DELETE CASCADE,
PRIMARY KEY (issue_id, user_id)
);
-- Tags (per project)
CREATE TABLE tags (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
name TEXT NOT NULL,
color TEXT,
project_id UUID NOT NULL REFERENCES projects(id) ON DELETE CASCADE,
UNIQUE(project_id, name)
);
CREATE TABLE issue_tags (
issue_id UUID NOT NULL REFERENCES issues(id) ON DELETE CASCADE,
tag_id UUID NOT NULL REFERENCES tags(id) ON DELETE CASCADE,
PRIMARY KEY (issue_id, tag_id)
);
-- Comments
CREATE TABLE comments (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
issue_id UUID NOT NULL REFERENCES issues(id) ON DELETE CASCADE,
author_id UUID NOT NULL REFERENCES users(id),
body TEXT NOT NULL,
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
-- Related issues
CREATE TABLE issue_relations (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
from_issue_id UUID NOT NULL REFERENCES issues(id) ON DELETE CASCADE,
to_issue_id UUID NOT NULL REFERENCES issues(id) ON DELETE CASCADE,
relation_type issue_relation_type NOT NULL,
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
UNIQUE(from_issue_id, to_issue_id, relation_type)
);
-- Time tracking
CREATE TABLE time_entries (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
issue_id UUID NOT NULL REFERENCES issues(id) ON DELETE CASCADE,
user_id UUID NOT NULL REFERENCES users(id),
seconds_spent INTEGER NOT NULL CHECK (seconds_spent > 0),
description TEXT,
logged_at TIMESTAMPTZ NOT NULL DEFAULT now(),
created_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
-- Sprints
CREATE TABLE sprints (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
name TEXT NOT NULL,
project_id UUID NOT NULL REFERENCES projects(id) ON DELETE CASCADE,
starts_at TIMESTAMPTZ NOT NULL,
ends_at TIMESTAMPTZ NOT NULL,
status sprint_status NOT NULL DEFAULT 'planned',
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
-- Refresh tokens
CREATE TABLE refresh_tokens (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
user_id UUID NOT NULL REFERENCES users(id) ON DELETE CASCADE,
token_hash TEXT NOT NULL UNIQUE,
expires_at TIMESTAMPTZ NOT NULL,
created_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
-- Scripts (for Lua plugin system)
CREATE TABLE scripts (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
name TEXT NOT NULL,
source TEXT NOT NULL,
trigger_type trigger_type NOT NULL,
trigger_config JSONB,
project_id UUID REFERENCES projects(id) ON DELETE CASCADE,
org_id UUID NOT NULL REFERENCES organizations(id) ON DELETE CASCADE,
enabled BOOLEAN NOT NULL DEFAULT true,
created_by UUID NOT NULL REFERENCES users(id),
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
CREATE TABLE script_revisions (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
script_id UUID NOT NULL REFERENCES scripts(id) ON DELETE CASCADE,
source TEXT NOT NULL,
created_by UUID NOT NULL REFERENCES users(id),
created_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
CREATE TABLE script_execution_logs (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
script_id UUID NOT NULL REFERENCES scripts(id) ON DELETE CASCADE,
status TEXT NOT NULL,
output TEXT,
error TEXT,
duration_ms INTEGER,
triggered_by TEXT NOT NULL,
created_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
-- Indexes
CREATE INDEX idx_issues_search ON issues USING GIN(search_vector);
CREATE INDEX idx_issues_project_sprint ON issues(project_id, sprint_id);
CREATE INDEX idx_issues_project_stage ON issues(project_id, stage_id);
CREATE INDEX idx_time_entries_issue ON time_entries(issue_id);
CREATE INDEX idx_comments_issue ON comments(issue_id);
```
**Note:** The `issues` table references `sprints(id)` via `sprint_id`. In practice, the `sprints` table migration must run before `issues` (or `sprint_id` is added in a later migration). The migration files are ordered to handle this.
---
## RBAC Model
### Org-Level Roles
| Role | Capabilities |
|---------|-------------|
| owner | Delete org, transfer ownership, manage members, create/delete projects |
| admin | Manage members, create/delete projects, update org settings |
| member | View org and its projects, join projects |
### Project-Level Roles
| Role | Capabilities |
|-----------|-------------|
| admin | Update/delete project, manage project members, delete issues, manage stages, start/complete sprints |
| developer | Create issues, assign issues, update issue status/priority/stage, log time, add tags, add relations |
| reporter | Create issues, comment, view issues |
### RBAC Implementation
- `RequireRole<ProjectRole>` and `RequireRole<OrgRole>` Axum extractors
- Extract `CurrentUser` from JWT middleware
- Query membership for the org/project (from path params)
- Reject if not a member or insufficient role (401/403)
- Org members automatically get `reporter` role in org's projects unless explicitly assigned
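
The membership fallback rule above can be sketched as a pure function. Types and names are illustrative; the real extractor would resolve them from path params and a membership query.

```rust
// Sketch of effective-role resolution: explicit project membership wins,
// a plain org member falls back to `reporter`, a non-member gets None
// (which the guard turns into 403).
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
pub enum ProjectRole { Reporter, Developer, Admin }

pub fn effective_role(
    project_role: Option<ProjectRole>,
    is_org_member: bool,
) -> Option<ProjectRole> {
    project_role.or(if is_org_member {
        Some(ProjectRole::Reporter)
    } else {
        None
    })
}
```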
---
## Implementation Phases
### Phase 1: Foundation (Axum + DB + Health)
**Dependencies:** `axum`, `tokio`, `sqlx` (postgres, runtime-tokio, tls-rustls, migrate, uuid, chrono), `serde`/`serde_json`, `uuid`, `chrono`
**Tasks:**
1. Extend `Config` with `jwt_secret`, `db_url` parsing, `server_port`
2. Create `AppState` struct holding `PgPool` and `Config`
3. Set up `sqlx::PgPool` with `sqlx::migrate!().run()` on startup
4. Create health endpoints: `GET /api/v1/health/live` and `GET /api/v1/health/ready`
5. Wire up Axum router with `routes()`, shared state, and graceful shutdown via `tokio::signal`
6. Create first migration: `users` table
### Phase 2: Auth (JWT + Register/Login/Refresh)
**Additional dependencies:** `jsonwebtoken`, `argon2`, `validator`
**Tasks:**
1. Create `users` migration (if not done in Phase 1)
2. Create `refresh_tokens` migration
3. Implement `auth::hash` — argon2 password hashing/verification
4. Implement `auth::jwt` — HS256 access + refresh token creation/validation
5. Implement `db::repos::users` — create, get by email, get by id
6. Implement `db::repos::refresh_tokens` — store, find, delete
7. Implement `auth::service` — register, login, refresh logic
8. Implement `auth::handlers``POST /api/v1/auth/register`, `/login`, `/refresh`
9. Implement `auth::models` — request/response DTOs with `validator` checks
10. Create `errors.rs` — unified `AppError` enum → `IntoResponse`
11. Create `middleware::auth` — JWT extraction, inject `CurrentUser`
12. Protect routes with auth layer
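
The unified error type from task 10 might look like the sketch below. The status mapping is shown as a plain function for illustration; the real `errors.rs` would implement Axum's `IntoResponse` instead of returning a bare number.

```rust
// Sketch of AppError and its HTTP status mapping (assumed variants).
#[derive(Debug)]
pub enum AppError {
    Validation(String),
    Unauthorized,
    Forbidden,
    NotFound,
    Internal(String),
}

pub fn status_code(err: &AppError) -> u16 {
    match err {
        AppError::Validation(_) => 422, // failed validator checks
        AppError::Unauthorized => 401,  // missing/invalid JWT
        AppError::Forbidden => 403,     // insufficient role
        AppError::NotFound => 404,
        AppError::Internal(_) => 500,
    }
}
```

This matches the Phase 7 goal of consistent error responses (validation → 422, auth → 401/403).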
### Phase 3: RBAC Layer
**Tasks:**
1. Create `organization_memberships` and `project_memberships` migrations
2. Define `OrgRole` and `ProjectRole` as SQL enums + Rust enums
3. Implement `db::repos::memberships` — add/remove/check, get role
4. Create `RequireRole<ProjectRole>` and `RequireRole<OrgRole>` Axum extractors
5. Wire role guards into project/issue routes
### Phase 4: Core Domain (Organizations → Projects → Issues)
**Tasks:**
1. **Organizations** — CRUD at `POST/GET/PATCH/DELETE /api/v1/orgs`
- Only org owner/admin can update/delete
- Org creator becomes owner automatically
- Slug auto-generated from name
2. **Projects** — CRUD at `/api/v1/orgs/{org_slug}/projects`
- Project membership inherits from org membership
- Org owner/admin can create projects
- Project admin can update project settings
3. **Issues** — CRUD at `/api/v1/orgs/{org_slug}/projects/{project_slug}/issues`
- reporter+ can create issues
- developer+ can assign/update status
- admin can delete issues
- `tsvector` column for full-text search on title + description
- `GET .../issues?q=search+terms` uses `tsquery`
4. **Pagination** — cursor-based pagination on all list endpoints
5. **utoipa** — add `#[derive(OpenApi)]` annotations, expose `GET /api/v1/docs/openapi.json`
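
Slug auto-generation (Phase 4, task 1) could be as simple as the sketch below. The exact algorithm is an assumption: lowercase, keep ASCII alphanumerics, collapse everything else into single dashes.

```rust
// Hypothetical slugify for org/project names.
pub fn slugify(name: &str) -> String {
    let mut slug = String::new();
    let mut prev_dash = true; // suppress a leading dash
    for c in name.chars() {
        if c.is_ascii_alphanumeric() {
            slug.push(c.to_ascii_lowercase());
            prev_dash = false;
        } else if !prev_dash {
            slug.push('-');
            prev_dash = true;
        }
    }
    slug.trim_end_matches('-').to_string()
}
```

Uniqueness is still enforced by the DB (`UNIQUE(org_id, slug)`); on conflict the service could append a numeric suffix.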
### Phase 5: Issue Rich Content (Tags, Comments, Relations, Time Tracking, Assignees)
**Tasks:**
1. **Tags**`tags` + `issue_tags` tables
- Create/list/delete tags per project (developer+ to create, admin to delete)
- Set issue tags (developer+)
- Filter issues by tag: `GET .../issues?tag=bug`
2. **Comments**`comments` table
- List/create comments on issues (reporter+)
- Edit own comment, delete own comment or admin
3. **Issue Relations**`issue_relations` table with relation types
- Add/remove relations (developer+)
- Types: blocks, is_blocked_by, duplicates, is_duplicated_by, relates_to, clones, is_cloned_by
4. **Time Tracking**`time_entries` table
- Log time on issues (developer+)
- Delete own entry or admin
- Aggregate total time spent on issue (computed from entries)
5. **Multiple Assignees**`issue_assignees` join table
- Replace single `assignee_id` with join table
- Set/replace assignees on issue (developer+)
- Filter issues by assignee: `GET .../issues?assignee={user_id}`
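
Since each relation is stored once (`from_issue_id` → `to_issue_id`), viewing it from the other issue means deriving the inverse type. A sketch over the relation types listed in task 3:

```rust
// Inverse mapping for issue relations; relates_to is its own inverse.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum RelationType {
    Blocks, IsBlockedBy,
    Duplicates, IsDuplicatedBy,
    RelatesTo,
    Clones, IsClonedBy,
}

pub fn inverse(rel: RelationType) -> RelationType {
    use RelationType::*;
    match rel {
        Blocks => IsBlockedBy,
        IsBlockedBy => Blocks,
        Duplicates => IsDuplicatedBy,
        IsDuplicatedBy => Duplicates,
        RelatesTo => RelatesTo, // symmetric
        Clones => IsClonedBy,
        IsClonedBy => Clones,
    }
}
```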
### Phase 6: Sprints & Stages
**Tasks:**
1. **Stages**`stages` table (custom board columns per project)
- Create/reorder/delete stages (project admin)
- Default stages created with new project (e.g. Todo, In Progress, Done)
- Issue `stage_id` tracks which column it's in
2. **Sprints**`sprints` table
- Sprint CRUD endpoints
- Sprint state transitions: `planned``active``completed`
- Issue-sprint assignment via `sprint_id` on issues (developer+)
3. **Sprint Board**`GET .../sprints/{id}/board`
- Returns issues grouped by stage within the sprint
- Moving an issue between stages updates `stage_id`
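
The sprint state transitions in task 2 form a tiny state machine; only forward transitions are legal. A minimal sketch:

```rust
// planned → active → completed; everything else is rejected by the service.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum SprintStatus { Planned, Active, Completed }

pub fn can_transition(from: SprintStatus, to: SprintStatus) -> bool {
    use SprintStatus::*;
    matches!((from, to), (Planned, Active) | (Active, Completed))
}
```

The `/start` and `/complete` endpoints would call this check before updating `sprints.status`.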
### Phase 7: Polish & Hardening
**Tasks:**
1. CORS middleware (configurable origins from env)
2. Request ID middleware (propagate through logging)
3. Rate limiting (e.g., `tower-governor`)
4. Comprehensive error responses (validation → 422, auth → 401/403)
5. Integration tests with test DB containers (`testcontainers`)
6. Update Dockerfile and compose.yaml for production-ready build
### Phase 8: Plugin System (Lua Scripting)
**Additional dependencies:** `mlua` (lua54, vendored, async, serialize), `cron` (cron expression parsing)
**Concept:** Users write Lua scripts triggered by events or schedules. Scripts get a sandboxed API to automate workflows (e.g., auto-migrate tasks when a sprint ends, auto-assign issues, send notifications).
**Example user scripts:**
```lua
-- Auto-resolve in-progress tasks when sprint ends
on_event("sprint.ended", function(ctx)
local issues = ctx.project:issues({ status = "in_progress", sprint = ctx.sprint.id })
for _, issue in ipairs(issues) do
issue:update({ status = "resolved", comment = "Auto-resolved: sprint ended" })
end
end)
```
```lua
-- Every Friday at 6pm, notify about stale issues
on_schedule("0 18 * * 5", function(ctx)
local stale = ctx.project:issues({ status = "open", updated_before = "7d" })
for _, issue in ipairs(stale) do
issue:add_comment("This issue has been inactive for 7 days.")
end
end)
```
**Trigger models (cron + event-driven only):**
| Trigger | Example | Implementation |
|---------|---------|----------------|
| Event-driven | `on_event("issue.created", ...)` | Hook into service layer, emit events after mutations |
| Scheduled (cron) | `on_schedule("0 18 * * 5", ...)` | Cron expressions, background tokio task |
**Available events:**
- `issue.created`, `issue.updated`, `issue.deleted`
- `issue.status_changed`, `issue.assigned`, `issue.unassigned`
- `comment.created`, `comment.deleted`
- `sprint.started`, `sprint.ended`, `sprint.completed`
- `tag.created`, `tag.removed`
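
A std-only sketch of the event-hook shape: the service layer emits named events and registered handlers run. In the real Phase 8 implementation the handlers would be sandboxed Lua callbacks dispatched through mlua on separate tokio tasks; this `EventBus` is just an illustration of the registration/emit contract.

```rust
use std::collections::HashMap;

// Handlers take a payload and return an observable result string.
pub struct EventBus {
    handlers: HashMap<String, Vec<Box<dyn Fn(&str) -> String>>>,
}

impl EventBus {
    pub fn new() -> Self {
        Self { handlers: HashMap::new() }
    }

    // Mirrors on_event("issue.created", ...).
    pub fn on(&mut self, event: &str, f: impl Fn(&str) -> String + 'static) {
        self.handlers.entry(event.to_string()).or_default().push(Box::new(f));
    }

    // Called by the service layer after a mutation commits.
    pub fn emit(&self, event: &str, payload: &str) -> Vec<String> {
        self.handlers
            .get(event)
            .map(|hs| hs.iter().map(|h| h(payload)).collect())
            .unwrap_or_default()
    }
}
```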
**Sandboxed Lua API surface:**
- `ctx.project:issues(filter)` — query issues
- `ctx.project:sprints()` — list sprints
- `issue:update(fields)` — mutate issue
- `issue:add_comment(body)` — add comment
- `ctx.user` — current user info
**Safety model:**
- No filesystem, no network access
- Execution timeout (e.g., 5s max per script)
- Memory limit via `mlua` Lua state options
- Scripts run in separate tokio tasks, never block the API
**Script storage:**
- `scripts` table in Postgres (org_id, project_id, name, source, trigger_type, trigger_config, enabled)
- `script_revisions` table for revision history / audit
- `script_execution_logs` table for past runs, errors, output
**Codebase location:**
```
src/
├── scripting/
│ ├── mod.rs # Script engine, mlua setup, sandbox config
│ ├── api.rs # Lua API bindings (ctx.project, ctx.issue, etc.)
│ ├── scheduler.rs # Cron-based trigger runner
│ ├── hooks.rs # Event emission from service layer
│ └── models.rs # Script DB model + DTOs
```
**Tasks:**
1. Add `scripts`, `script_revisions`, and `script_execution_logs` tables to DB
2. Implement `scripting::mod` — mlua sandbox setup, script compilation/validation
3. Implement `scripting::api` — expose safe Lua API (ctx.project, ctx.issue, ctx.sprint)
4. Implement `scripting::hooks` — event emission from service layer
5. Implement `scripting::scheduler` — cron expression parsing + background tokio task runner
6. Script CRUD endpoints at `/api/v1/orgs/{org_slug}/projects/{project_slug}/scripts`
7. Script execution endpoint: `POST .../scripts/{id}/run` (for manual trigger / testing)
8. Script execution log endpoint: `GET .../scripts/{id}/logs`
9. Admin controls: enable/disable scripts per org/project, execution timeout config
---
## Implementation Order (within each feature)
1. **Migration** → write and run the SQL migration
2. **Repository** → write the sqlx queries + Rust repo struct
3. **Model** → define the Rust domain model + DTOs
4. **Service** → business logic (validation, authorization rules)
5. **Handler** → thin Axum handler calling service
6. **Route** → wire into the router with appropriate middleware
7. **Test** → write a test for the endpoint
---
## Testing Approach
### Phase 1 (Recommended Start)
- Unit tests for pure service logic (no DB, mock repositories)
- Integration tests for repositories with real Postgres via Docker
### Repository Testing
- Define trait interfaces for repositories
- For service unit tests, provide mock implementations
- Focus unit tests on business rules, branching, and error mapping
- Keep SQL correctness in integration tests against real Postgres
- This split gives fast unit tests plus high-confidence DB integration coverage
### Phase 2
- HTTP handler tests with Axum test helpers
- Auth middleware tests
- RBAC extractor tests
### Phase 3
- Minimal end-to-end happy path tests
- Load/stress testing
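
The trait-based repository seam used for service unit tests can be sketched like this. `UserRepo` and its method are illustrative names, not the real API:

```rust
// A repository trait lets service tests swap in an in-memory fake.
pub trait UserRepo {
    fn email_taken(&self, email: &str) -> bool;
}

pub struct FakeUserRepo {
    pub taken: Vec<String>,
}

impl UserRepo for FakeUserRepo {
    fn email_taken(&self, email: &str) -> bool {
        self.taken.iter().any(|e| e == email)
    }
}

// Service logic under test: registration rejects duplicate emails.
pub fn validate_registration(repo: &impl UserRepo, email: &str) -> Result<(), String> {
    if repo.email_taken(email) {
        return Err("email already registered".to_string());
    }
    Ok(())
}
```

The sqlx-backed implementation of the same trait is then covered by the Postgres integration tests instead.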
---
## OpenAPI + Frontend Type Generation ## OpenAPI + Frontend Type Generation
- Keep spec in repo at `api/openapi/openapi.yaml` - utoipa annotations on handlers and models
- Generate frontend TypeScript types from OpenAPI (e.g. `openapi-typescript`) - Auto-generate OpenAPI spec at build time
- Optionally serve Swagger UI from backend - Expose at `GET /api/v1/docs/openapi.json`
- Generate frontend TypeScript types from OpenAPI (e.g., `openapi-typescript`)
- Optionally serve Swagger UI from backend via `utoipa-swagger-ui`
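For reference, the served document is a standard OpenAPI 3 spec. A hand-written fragment of what it might contain (the title, version, and response descriptions are assumptions; the login path matches the Bruno requests in this repo):

```json
{
  "openapi": "3.1.0",
  "info": { "title": "rhythm-backend", "version": "0.1.0" },
  "paths": {
    "/api/v1/auth/login": {
      "post": {
        "responses": {
          "200": { "description": "JWT issued" },
          "401": { "description": "Invalid credentials" }
        }
      }
    }
  }
}
```

`openapi-typescript` consumes exactly this JSON to emit the frontend types.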
---

## Deployment
- Docker Compose for app + postgres
- Healthcheck in compose targets readiness endpoint
- Env-based configuration (`.env`, `.env.example`)
- Multi-stage Dockerfile (rust:alpine builder → alpine runtime)
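Sketched against an assumed readiness path (`/api/v1/health/ready` is illustrative; BusyBox `wget` ships in the Alpine runtime image), the compose wiring for the app healthcheck could look like:

```yaml
services:
  api-prod:
    # ... image, env, depends_on as in compose.yaml ...
    healthcheck:
      test: ["CMD", "wget", "-qO-", "http://localhost:8080/api/v1/health/ready"]
      interval: 10s
      timeout: 3s
      retries: 5
```

With this in place, other services can use `depends_on: { api-prod: { condition: service_healthy } }`.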
---
## Pending Decisions
1. JWT signing:
- HS256 shared secret (simple)
- RS256 keypair (better long-term)
2. Token model:
- Access token only
- Access + refresh token
3. Initial roles:
- USER / ADMIN
- USER / MODERATOR / ADMIN
4. OpenAPI workflow:
- Contract-first (spec first)
- Code-first annotations
5. CORS policy:
- Allowed frontend origins in dev/prod
6. Schema strategy:
- Single schema (`public`) confirmation
7. Initial bounded contexts:
- e.g. `auth`, `users`, `rooms` (or your domain names)
---
- This file is intentionally a working draft.
- We will refine decisions and turn this into a concrete implementation checklist.
- Originally started as Go/Gin, pivoted to Rust/Axum for the Rust learning journey.

Dockerfile:

FROM rust:1.95.0-alpine3.22 AS builder
WORKDIR /app

ARG DB_URL
ARG APP_ENV

COPY . .
RUN cargo build --release

# Small runtime image
FROM alpine:3.22.4
WORKDIR /app
COPY --from=builder /app/target/release/rhythm-backend /app/executable

ENTRYPOINT ["/app/executable"]
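The replaced Go Dockerfile cached dependencies before copying sources; a similar pattern exists for cargo. As a hedged sketch (the dummy-`main` trick is illustrative, not part of the committed file), dependency caching could look like:

```dockerfile
FROM rust:1.95.0-alpine3.22 AS builder
WORKDIR /app

# Cache the dependency graph: compile once against a dummy main,
# so source edits don't invalidate the compiled-deps layer
COPY Cargo.toml Cargo.lock ./
RUN mkdir src && echo "fn main() {}" > src/main.rs \
    && cargo build --release \
    && rm -rf src

COPY . .
# touch main.rs so cargo rebuilds the binary, not the dependencies
RUN touch src/main.rs && cargo build --release
```

Without this, every source change re-downloads and re-compiles all crates, which dominates the build time.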

bruno/.gitignore (deleted):
# Secrets
.env*
# Dependencies
node_modules
# OS files
.DS_Store
Thumbs.db

Bruno request "login user" (deleted):

info:
  name: login user
  type: http
  seq: 1
http:
  method: POST
  url: "{{host}}/api/v1/auth/login"
  body:
    type: json
    data: |-
      {
        "email": "dmo@dmo.dmo",
        "password": "password12345"
      }
  auth: inherit
runtime:
  variables:
    - name: host
      value: http://localhost:8080
settings:
  encodeUrl: true
  timeout: 0
  followRedirects: true
  maxRedirects: 5

Bruno collection "rhythm" (deleted):

opencollection: 1.0.0
info:
  name: rhythm
  bundled: false
extensions:
  bruno:
    ignore:
      - node_modules
      - .git

Bruno request "register user" (deleted):

info:
  name: register user
  type: http
  seq: 2
http:
  method: POST
  url: "{{host}}/api/v1/auth/register"
  body:
    type: json
    data: |-
      {
        "email": "dmo@dmo.dmo",
        "password": "password12345"
      }
  auth: inherit
runtime:
  variables:
    - name: host
      value: http://localhost:8080
settings:
  encodeUrl: true
  timeout: 0
  followRedirects: true
  maxRedirects: 5

compose.prod.yaml (deleted):

services:
  api-prod:
    image: git.kanopo.dev/rhythm/rhythm-backend:latest
    restart: unless-stopped
    container_name: rhythm-api-prod
    ports:
      - "8080:8080"
    environment:
      DB_HOST: db-prod
    env_file:
      - ".env"
    depends_on:
      db-prod:
        condition: service_healthy
    profiles:
      - prod

  db-prod:
    image: postgres:18.0-alpine
    restart: unless-stopped
    container_name: rhythm-db-prod
    ports:
      - "${DB_PORT_PROD:-5433}:5432"
    environment:
      POSTGRES_USER: ${DB_USERNAME}
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_DB: ${DB_NAME}
    volumes:
      - ./.data/postgres-prod:/var/lib/postgresql/data
    profiles:
      - prod
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${DB_USERNAME} -d ${DB_NAME}"]
      interval: 5s
      timeout: 3s
      retries: 10

compose.yaml (new version):

services:
  db-dev:
    image: postgres:18.0-alpine
    restart: unless-stopped
    # … (middle of the db-dev service unchanged; elided in the diff view) …
      timeout: 3s
      retries: 10

  api-prod:
    image: git.kanopo.dev/rhythm/rhythm-backend:latest
    container_name: rhythm-api-prod
    restart: unless-stopped
    environment:
      DB_HOST: db-prod
    env_file:
      - ".env"
    depends_on:
      db-prod:
        condition: service_healthy
    profiles:
      - prod

  db-prod:
    image: postgres:18.0-alpine
    restart: unless-stopped
    container_name: rhythm-db-prod
    environment:
      POSTGRES_USER: ${DB_USERNAME}
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_DB: rhythm-prod
    volumes:
      - ./postgres-data:/var/lib/postgresql/data
    profiles:
      - prod
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${DB_USERNAME} -d rhythm-prod"]
      interval: 5s
      timeout: 3s
      retries: 10