All checks were successful
Build and Push Docker Image / build-and-push (push) Successful in 1m25s

This commit is contained in: go
parent f79d830cf2
commit cb5d273951
(deleted file)
@@ -1,8 +0,0 @@
-.git
-.gitignore
-target
-.env
-.data
-compose.yaml
-README.md
-LICENSE
BACKEND_BLUEPRINT.md (new file, 142 lines)
@@ -0,0 +1,142 @@

# Go Backend Blueprint (Iteration Draft)

## Chosen Stack

- **Framework:** Gin
- **Migrations:** Goose (SQL-only)
- **DB Access:** SQLC (no ORM), package per bounded context
- **PostgreSQL Driver/Pool:** pgx/v5 + pgxpool
- **Logging:** Uber Zap (structured logs)
- **Health Probes:** liveness/readiness endpoints
- **API Docs:** OpenAPI (for frontend TypeScript type generation)
- **Deployment:** Docker Compose on self-hosted hardware

---
## Architecture Direction

### Layering

- `cmd/api` - application entrypoint and dependency wiring
- `internal/http` - Gin router, handlers, middleware
- `internal/service` - business logic + transaction boundaries
- `internal/db/<context>` - SQLC-generated code by bounded context
- `internal/store` - shared DB/Tx helpers
- `internal/auth` - JWT validation + role guards
- `internal/config` - env configuration loading
- `migrations/` - Goose SQL migration files
- `api/openapi/` - OpenAPI spec + generated artifacts
### Transaction Strategy

- Handlers stay thin.
- Service layer owns DB transactions.
- SQLC queries are called with either pool or tx using `DBTX` interfaces.
- No transaction logic in handlers.
---

## API and Runtime

### API Shape

- REST JSON API
- Possible future WebSocket support for interactive features
- Suggested versioning: `/api/v1`
### Health Endpoints

- `GET /health/live` - process is alive
- `GET /health/ready` - DB ping succeeds (and optionally migration version check)
### Logging

- Zap JSON logs
- Correlation/request ID in middleware
- Structured error logging from middleware and service boundaries
---

## Database and Migrations

### Goose

- SQL-only migrations
- Keep up/down migration scripts
- Optionally run on startup in non-prod; always run as a required step in CI/CD deploys
### SQLC

- Generate one package per bounded context (similar to Spring repository modules)
- Keep SQL in context directories (`query.sql`, `models.sql` style)
- Service layer composes multiple repositories when needed
---

## Testing Approach (Beginner-Friendly)

### Phase 1 (Recommended Start)

- Unit tests for pure service logic (no DB)
- Integration tests for SQLC repositories with real Postgres via Docker

### Phase 2

- HTTP handler tests with `httptest`
- Auth middleware tests

### Phase 3

- Minimal end-to-end happy path tests

---
## OpenAPI + Frontend Type Generation

- Keep spec in repo at `api/openapi/openapi.yaml`
- Generate frontend TypeScript types from OpenAPI (e.g. `openapi-typescript`)
- Optionally serve Swagger UI from backend

---
## Deployment

- Docker Compose for app + postgres
- Healthcheck in compose should target readiness endpoint
- Env-based configuration (`.env`, `.env.example`)

---
## Pending Decisions

1. JWT signing:
   - HS256 shared secret (simple)
   - RS256 keypair (better long-term)

2. Token model:
   - Access token only
   - Access + refresh token

3. Initial roles:
   - USER / ADMIN
   - USER / MODERATOR / ADMIN

4. OpenAPI workflow:
   - Contract-first (spec first)
   - Code-first annotations

5. CORS policy:
   - Allowed frontend origins in dev/prod

6. Schema strategy:
   - Single schema (`public`) confirmation

7. Initial bounded contexts:
   - e.g. `auth`, `users`, `rooms` (or your domain names)

---
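For pending decision 1, "HS256 shared secret (simple)" amounts to HMAC-SHA256 over two base64url segments, which the stdlib can do without a JWT library. A minimal sketch of the signing side only; the claims, secret, and function name are illustrative:

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/base64"
	"encoding/json"
	"fmt"
)

// signHS256 builds a compact JWT signed with HMAC-SHA256 (the HS256 option):
// base64url(header) + "." + base64url(claims) + "." + base64url(signature).
func signHS256(claims map[string]any, secret []byte) (string, error) {
	enc := base64.RawURLEncoding
	header, _ := json.Marshal(map[string]string{"alg": "HS256", "typ": "JWT"})
	payload, err := json.Marshal(claims)
	if err != nil {
		return "", err
	}
	signingInput := enc.EncodeToString(header) + "." + enc.EncodeToString(payload)
	mac := hmac.New(sha256.New, secret)
	mac.Write([]byte(signingInput))
	return signingInput + "." + enc.EncodeToString(mac.Sum(nil)), nil
}

func main() {
	tok, err := signHS256(map[string]any{"sub": "user-1", "role": "USER"}, []byte("dev-only-secret"))
	if err != nil {
		panic(err)
	}
	fmt.Println(tok)
}
```

RS256 swaps the HMAC for an RSA signature over the same signing input, which is what makes key rotation and public-key verification possible without sharing the secret.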
## Iteration Notes

- This file is intentionally a working draft.
- We will refine decisions and turn this into a concrete implementation checklist.
Cargo.lock (generated, 2539 lines): file diff suppressed because it is too large
Cargo.toml (deleted, 19 lines)
@@ -1,19 +0,0 @@
[package]
name = "rhythm_backend"
version = "0.1.0"
edition = "2024"

[dependencies]
argon2 = "0.5.3"
axum = "0.8.9"
chrono = { version = "0.4.42", features = ["serde"] }
dotenvy = "0.15.7"
serde = { version = "1.0.228", features = ["derive"] }
serde_json = "1.0.149"
sqlx = { version = "0.8.6", features = ["runtime-tokio-rustls", "postgres", "uuid", "chrono", "migrate"] }
thiserror = "2.0.18"
tokio = { version = "1.52.0", features = ["macros", "rt-multi-thread"] }
tracing = "0.1.44"
tower-http = { version = "0.6.6", features = ["trace"] }
tracing-subscriber = { version = "0.3.23", features = ["env-filter"] }
uuid = { version = "1.18.1", features = ["serde", "v4"] }
Dockerfile (modified)
@@ -1,22 +1,19 @@
-FROM rust:1.88-bookworm AS builder
+FROM golang:1.26-bookworm AS builder
+WORKDIR /src
+
+# Cache deps first
+COPY go.mod go.sum ./
+RUN go mod download
+
+# Copy source and build
+COPY . .
+RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 \
+    go build -trimpath -ldflags="-s -w" -o /out/api ./cmd/api
+
+# Small runtime image
+FROM gcr.io/distroless/static-debian12
 WORKDIR /app
+COPY --from=builder /out/api /app/api
-COPY Cargo.toml Cargo.lock ./
-COPY src ./src
-
-RUN cargo build --release
-
-FROM debian:bookworm-slim AS runtime
-
-WORKDIR /app
-
-RUN apt-get update \
-    && apt-get install -y --no-install-recommends ca-certificates \
-    && rm -rf /var/lib/apt/lists/*
-
-COPY --from=builder /app/target/release/rhythm_backend /app/rhythm_backend
-
 EXPOSE 8080
+USER nonroot:nonroot
-ENTRYPOINT ["/app/rhythm_backend"]
+ENTRYPOINT ["/app/api"]
Makefile (new file, 17 lines)
@@ -0,0 +1,17 @@
.PHONY: help run test fmt

help:
	@echo "Targets:"
	@echo "  run  - run API"
	@echo "  test - run tests"
	@echo "  fmt  - format Go code"

run:
	go run ./cmd/api

test:
	go test ./...

fmt:
	go fmt ./...
README.md (deleted, 65 lines)
@@ -1,65 +0,0 @@
# Rhythm Backend

High-performance Rust backend for a lightweight issue tracking platform focused on team execution.

## Main Objective

Build a simple, reliable issue tracker where teams can:

- Create and manage projects
- Add users to projects with role-based permissions
- Create, update, and track issues with practical metadata
- Organize work through Kanban and Scrum views

The first version prioritizes clean CRUD, clear access control, and fast text search on plain-text issue data.
Encryption and advanced collaboration features can be added in later iterations.

## How To Start

1. Lock the V1 scope
   - Entities: `users`, `projects`, `project_members`, `issues`
   - Roles: `owner`, `maintainer`, `member`, `viewer`
   - Issue fields: `title`, `description`, `status`, `priority`, `type`, `assignee_id`, `reporter_id`, optional `sprint_id`, timestamps
   - Keep Kanban and Scrum as filtered endpoints first (not separate DB models)

2. Bootstrap the Rust service
   - Stack: `axum`, `tokio`, `sqlx`, `postgres`, `tracing`
   - Suggested modules: `http/`, `service/`, `repo/`, `model/`, `auth/`, `db/`
   - First route: `GET /health`

3. Build the database first
   - Migrations for `users`, `projects`, `project_members`, `issues`
   - Add unique constraint on `(project_id, user_id)` in memberships
   - Add indexes for issue listing and filtering
   - Add `tsvector` + `GIN` index for title/description full-text search

4. Implement CRUD vertical slices
   - Projects: create, read, update
   - Memberships: add member, update role, remove member
   - Issues: create, list with filters + pagination, update, delete/soft-delete

5. Add board endpoints
   - Kanban endpoint grouped by status
   - Scrum endpoints for sprint + backlog views
   - Keep these as query/aggregation logic on top of `issues`

6. Harden the service
   - Add authentication (JWT or session)
   - Enforce role checks per project
   - Add request validation and consistent error responses
   - Add integration tests for permissions and issue lifecycle
   - Add slow-query logging and tune key SQL with `EXPLAIN ANALYZE`

## Technical Direction

- Language: Rust
- API: REST
- Database: PostgreSQL
- Search: PostgreSQL full-text search (`tsvector` + `GIN` indexes)
- Scale target: under 1M issues

## Notes

- Keep handlers thin, business rules in services, SQL in repository layer
- Prefer cursor pagination over deep offset pagination
- Add external search only when Postgres full-text search becomes limiting
(modified request file)
@@ -1,11 +1,18 @@
 info:
-  name: Untitled
+  name: register user
   type: http
   seq: 1

 http:
   method: POST
   url: "{{host}}/api/auth/register"
+  body:
+    type: json
+    data: |-
+      {
+        "email": "dmo@dmo.dmo",
+        "password": "password12345"
+      }
   auth: inherit

 runtime:
cmd/api/main.go (new file, 12 lines)
@@ -0,0 +1,12 @@
package main

import (
	"log"

	"git.kanopo.dev/rhythm/rhythm-backend/internal/config"
)

func main() {
	cfg := config.Load()
	log.Println(cfg.DbUrl)
}
(deleted file)
@@ -1,95 +0,0 @@
# SQLx Migration Guide

This project uses SQLx migrations and runs them on application startup.

## 1) Install SQLx CLI

### Prerequisites
- Rust toolchain installed (`rustup`)
- PostgreSQL reachable from your machine
- `DATABASE_URL` available

### Install command
```bash
cargo install sqlx-cli --no-default-features --features rustls,postgres
```

Verify installation:
```bash
sqlx --version
```

## 2) Configure `DATABASE_URL`

Set your database connection string:

```bash
export DATABASE_URL=postgres://DB_USERNAME:DB_PASSWORD@DB_HOST:DB_PORT/DB_NAME
```

Example:
```bash
export DATABASE_URL=postgres://rhythm:rhythm@localhost:5432/rhythm_db
```

## 3) Create a new migration

Create a migration file:

```bash
sqlx migrate add create_users
```

This generates a timestamped SQL file under `migrations/`, for example:
- `migrations/20260416103000_create_users.sql`

If you want reversible migrations (up/down):
```bash
sqlx migrate add -r create_users
```

## 4) Run migrations manually

Apply pending migrations:

```bash
sqlx migrate run
```

Show migration status:

```bash
sqlx migrate info
```

Revert the most recent migration (only for reversible migrations):

```bash
sqlx migrate revert
```

## 5) Startup migrations (application)

Add this after creating the database pool in `src/main.rs`:

```rust
sqlx::migrate!("./migrations").run(&pool).await?;
```

Behavior:
- Pending migrations are applied on every startup.
- Applied migrations are tracked in the `_sqlx_migrations` table.
- If a migration fails, the app fails fast and does not start listening.

## 6) Team workflow

- Create a new migration for every schema change.
- Commit migration files to git.
- Do not modify migration files that are already applied in shared environments.
- Add a new migration to evolve schema safely.

## 7) Production notes

- Running migrations on startup in production is supported.
- In multi-instance deployments, one instance may apply migrations while others wait/retry according to orchestration settings.
- Prefer backward-compatible migrations for rolling deployments.
go.mod (new file, 5 lines)
@@ -0,0 +1,5 @@
module git.kanopo.dev/rhythm/rhythm-backend

go 1.26.2

require github.com/joho/godotenv v1.5.1
go.sum (new file, 2 lines)
@@ -0,0 +1,2 @@
github.com/joho/godotenv v1.5.1 h1:7eLL/+HRGLY0ldzfGMeQkb7vMd0as4CfYvUVzLqw0N0=
github.com/joho/godotenv v1.5.1/go.mod h1:f4LDr5Voq0i2e/R5DDNOoa2zzDfwtkZa6DnEwAbqwq4=
internal/config/config.go (new file, 41 lines)
@@ -0,0 +1,41 @@
package config

import (
	"fmt"
	"log"
	"os"

	_ "github.com/joho/godotenv/autoload"
)

type Config struct {
	DbUrl string
}

func Load() Config {
	var dbUrl string
	{
		username := getEnv("DB_USERNAME")
		password := getEnv("DB_PASSWORD")
		name := getEnv("DB_NAME")
		port := getEnv("DB_PORT")
		host := getEnv("DB_HOST")

		// postgres://admin:admin@localhost:5432/admin_db
		dbUrl = fmt.Sprintf("postgres://%v:%v@%v:%v/%v?sslmode=disable", username, password, host, port, name)
	}

	cfg := Config{
		DbUrl: dbUrl,
	}

	return cfg
}

func getEnv(key string) string {
	v := os.Getenv(key)
	if v == "" {
		log.Fatalf("the env variable %v is not defined and the application cannot operate without it", key)
	}
	return v
}
(deleted migration)
@@ -1,9 +0,0 @@
-- Add migration script here

CREATE TABLE USERS (
    ID UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    EMAIL VARCHAR(255) NOT NULL UNIQUE,
    PASSWORD VARCHAR(255) NOT NULL,
    CREATED_AT TIMESTAMPTZ NOT NULL DEFAULT NOW(),
    UPDATED_AT TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
migrations/20260419141635_create_users_table.sql (new file, 11 lines)
@@ -0,0 +1,11 @@
-- +goose Up
create table users (
    id uuid primary key default gen_random_uuid(),
    email varchar(255) not null unique,
    password varchar(255) not null,
    created_at timestamptz not null default now(),
    updated_at timestamptz not null default now()
);

-- +goose Down
drop table if exists users;
sqlc.yaml (new file, 12 lines)
@@ -0,0 +1,12 @@
version: "2"
cloud:
  organization: ""
  project: ""
  hostname: ""
servers: []
sql: []
overrides:
  go: null
plugins: []
rules: []
options: {}
(deleted file)
@@ -1,5 +0,0 @@
#[derive(Clone)]
pub struct AppState {
    #[allow(dead_code)]
    pub db: sqlx::PgPool,
}
(deleted file)
@@ -1,52 +0,0 @@
use std::env;

use thiserror::Error;

#[derive(Debug, Clone)]
pub struct Config {
    pub db_user: String,
    pub db_password: String,
    pub db_host: String,
    pub db_port: u16,
    pub db_name: String,
}

#[derive(Debug, Error)]
pub enum ConfigError {
    #[error("missing env var: {0}")]
    MissingVar(&'static str),
    #[error("invalid DB_PORT: {0}")]
    InvalidPort(String),
}

impl Config {
    pub fn from_env() -> Result<Self, ConfigError> {
        dotenvy::dotenv().ok();

        let db_user =
            env::var("DB_USERNAME").map_err(|_| ConfigError::MissingVar("DB_USERNAME"))?;
        let db_password =
            env::var("DB_PASSWORD").map_err(|_| ConfigError::MissingVar("DB_PASSWORD"))?;
        let db_host = env::var("DB_HOST").map_err(|_| ConfigError::MissingVar("DB_HOST"))?;
        let db_name = env::var("DB_NAME").map_err(|_| ConfigError::MissingVar("DB_NAME"))?;
        let db_port_raw = env::var("DB_PORT").map_err(|_| ConfigError::MissingVar("DB_PORT"))?;
        let db_port = db_port_raw
            .parse::<u16>()
            .map_err(|_| ConfigError::InvalidPort(db_port_raw))?;

        Ok(Self {
            db_user,
            db_password,
            db_host,
            db_port,
            db_name,
        })
    }

    pub fn database_url(&self) -> String {
        format!(
            "postgres://{}:{}@{}:{}/{}",
            self.db_user, self.db_password, self.db_host, self.db_port, self.db_name
        )
    }
}
(deleted file)
@@ -1,9 +0,0 @@
use sqlx::{PgPool, postgres::PgPoolOptions};

pub async fn create_pool(database_url: &str) -> Result<PgPool, crate::error::AppError> {
    PgPoolOptions::new()
        .max_connections(10)
        .connect(database_url)
        .await
        .map_err(crate::error::AppError::from)
}
(deleted file)
@@ -1,3 +0,0 @@
pub mod database;
pub mod model;
pub mod user_repo;
(deleted file)
@@ -1 +0,0 @@
pub mod user;
(deleted file)
@@ -1 +0,0 @@
pub mod user_dto;
(deleted file)
@@ -1,8 +0,0 @@
#[derive(Debug, Clone, sqlx::FromRow)]
pub struct User {
    pub id: uuid::Uuid,
    pub email: String,
    pub password: String,
    pub created_at: chrono::DateTime<chrono::Utc>,
    pub updated_at: chrono::DateTime<chrono::Utc>,
}
(deleted file)
@@ -1,42 +0,0 @@
use crate::db::model::user::user_dto::User;
use crate::error::AppError;
use sqlx::{Postgres, Transaction};

pub async fn email_exists(
    tx: &mut Transaction<'_, Postgres>,
    email: &str,
) -> Result<bool, AppError> {
    let existing = sqlx::query_scalar::<_, bool>(
        r#"
        SELECT TRUE
        FROM users
        WHERE email = $1
        LIMIT 1
        "#,
    )
    .bind(email)
    .fetch_optional(&mut **tx)
    .await?;

    Ok(existing.unwrap_or(false))
}

pub async fn create_user_in_tx(
    tx: &mut Transaction<'_, Postgres>,
    email: &str,
    password_hash: &str,
) -> Result<User, AppError> {
    let user = sqlx::query_as::<_, User>(
        r#"
        INSERT INTO users (email, password)
        VALUES ($1, $2)
        RETURNING id, email, password, created_at, updated_at
        "#,
    )
    .bind(email)
    .bind(password_hash)
    .fetch_one(&mut **tx)
    .await?;

    Ok(user)
}
src/error.rs (deleted, 44 lines)
@@ -1,44 +0,0 @@
use axum::{
    http::StatusCode,
    response::{IntoResponse, Response},
    Json,
};
use serde::Serialize;
use thiserror::Error;

#[derive(Debug, Error)]
pub enum AppError {
    #[error(transparent)]
    Config(#[from] crate::config::ConfigError),
    #[error(transparent)]
    Db(#[from] sqlx::Error),
    #[error(transparent)]
    Io(#[from] std::io::Error),
    #[error(transparent)]
    Migration(#[from] sqlx::migrate::MigrateError),
    #[error("validation error: {0}")]
    Validation(String),
    #[error("conflict: {0}")]
    Conflict(String),
}

#[derive(Serialize)]
struct ErrorBody {
    error: String,
}

impl IntoResponse for AppError {
    fn into_response(self) -> Response {
        let status = match self {
            Self::Validation(_) => StatusCode::BAD_REQUEST,
            Self::Conflict(_) => StatusCode::CONFLICT,
            _ => StatusCode::INTERNAL_SERVER_ERROR,
        };

        let body = Json(ErrorBody {
            error: self.to_string(),
        });

        (status, body).into_response()
    }
}
(deleted file)
@@ -1,10 +0,0 @@
use crate::app_state::AppState;
use axum::Router;

use crate::http::{auth_router, health_router};

pub fn router() -> Router<AppState> {
    Router::new()
        .nest("/health", health_router::router())
        .nest("/auth", auth_router::router())
}
(deleted file)
@@ -1,25 +0,0 @@
use crate::http::model::register_user_req::RegisterUserReq;
use crate::http::model::register_user_res::RegisterUserRes;
use crate::service::auth_service;
use axum::{Json, Router, extract::State, routing::post};

use crate::app_state::AppState;
use crate::error::AppError;

pub fn router() -> Router<AppState> {
    Router::new()
        .route("/register", post(register))
        .route("/login", post(login))
}

async fn register(
    State(state): State<AppState>,
    Json(body): Json<RegisterUserReq>,
) -> Result<Json<RegisterUserRes>, AppError> {
    let response = auth_service::register(&state.db, body).await?;
    Ok(Json(response))
}

async fn login() -> &'static str {
    "login not implemented yet"
}
(deleted file)
@@ -1,17 +0,0 @@
use axum::{Json, Router, routing::get};
use serde::Serialize;

use crate::app_state::AppState;

#[derive(Serialize)]
struct HealthResponse {
    status: &'static str,
}

pub fn router() -> Router<AppState> {
    Router::new().route("/", get(get_health))
}

async fn get_health() -> Json<HealthResponse> {
    Json(HealthResponse { status: "ok" })
}
(deleted file)
@@ -1,4 +0,0 @@
pub mod api_router;
pub mod auth_router;
mod health_router;
pub mod model;
(deleted file)
@@ -1,2 +0,0 @@
pub mod register_user_req;
pub mod register_user_res;
(deleted file)
@@ -1,7 +0,0 @@
use serde::Deserialize;

#[derive(Deserialize)]
pub struct RegisterUserReq {
    pub email: String,
    pub password: String,
}
(deleted file)
@@ -1,7 +0,0 @@
use serde::Serialize;

#[derive(Serialize)]
pub struct RegisterUserRes {
    pub id: uuid::Uuid,
    pub email: String,
}
src/main.rs (deleted, 48 lines)
@@ -1,48 +0,0 @@
mod app_state;
mod config;
mod db;
mod error;
mod http;
mod service;

use crate::db::database;
use crate::error::AppError;
use crate::http::api_router;
use axum::Router;
use sqlx::migrate;
use tower_http::trace::{DefaultOnRequest, DefaultOnResponse, TraceLayer};
use tracing::{Level, info};
use tracing_subscriber::{EnvFilter, fmt};

fn init_tracing() {
    let filter = EnvFilter::try_from_default_env()
        .unwrap_or_else(|_| EnvFilter::new("info,tower_http=info,sqlx=warn"));

    fmt().with_env_filter(filter).init();
}

#[tokio::main]
async fn main() -> Result<(), AppError> {
    init_tracing();

    let cfg = config::Config::from_env()?;
    let pool = database::create_pool(&cfg.database_url()).await?;
    info!("database connection established");

    migrate!().run(&pool).await?;

    let state = app_state::AppState { db: pool };
    let app = Router::new()
        .nest("/api", api_router::router())
        .with_state(state)
        .layer(
            TraceLayer::new_for_http()
                .on_request(DefaultOnRequest::new().level(Level::INFO))
                .on_response(DefaultOnResponse::new().level(Level::INFO)),
        );

    let listener = tokio::net::TcpListener::bind("0.0.0.0:8080").await?;
    info!("server listening on 0.0.0.0:8080");
    axum::serve(listener, app).await?;
    Ok(())
}
(deleted file)
@@ -1,86 +0,0 @@
use argon2::{
    Argon2,
    password_hash::{PasswordHasher, SaltString, rand_core::OsRng},
};
use std::time::{Duration, Instant, SystemTime, UNIX_EPOCH};
use tokio::time::sleep;

use crate::db::user_repo;
use crate::error::AppError;
use crate::http::model::{register_user_req::RegisterUserReq, register_user_res::RegisterUserRes};

pub async fn register(
    pool: &sqlx::PgPool,
    req: RegisterUserReq,
) -> Result<RegisterUserRes, AppError> {
    let started = Instant::now();
    let result = register_inner(pool, req).await;
    apply_obfuscation_delay(started).await;
    result
}

async fn register_inner(
    pool: &sqlx::PgPool,
    req: RegisterUserReq,
) -> Result<RegisterUserRes, AppError> {
    let email = req.email.trim().to_lowercase();
    if email.is_empty() {
        return Err(AppError::Validation("email is required".to_string()));
    }

    if req.password.len() < 8 {
        return Err(AppError::Validation(
            "password must be at least 8 characters".to_string(),
        ));
    }

    let mut tx = pool.begin().await?;
    if user_repo::email_exists(&mut tx, &email).await? {
        return Err(AppError::Validation(
            "invalid registration request".to_string(),
        ));
    }

    let salt = SaltString::generate(&mut OsRng);
    let argon2 = Argon2::default();
    let password_hash = argon2
        .hash_password(req.password.as_bytes(), &salt)
        .map_err(|e| AppError::Validation(format!("invalid password: {e}")))?
        .to_string();

    let user = match user_repo::create_user_in_tx(&mut tx, &email, &password_hash).await {
        Ok(user) => user,
        Err(AppError::Db(sqlx::Error::Database(db_err)))
            if db_err.code().as_deref() == Some("23505") =>
        {
            return Err(AppError::Validation(
                "invalid registration request".to_string(),
            ));
        }
        Err(err) => return Err(err),
    };

    tx.commit().await?;

    Ok(RegisterUserRes {
        id: user.id,
        email: user.email,
    })
}

async fn apply_obfuscation_delay(started: Instant) {
    let min_ms = 120;
    let max_ms = 320;

    let nanos = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .map(|d| d.subsec_nanos())
        .unwrap_or(0);
    let jitter = (nanos as u64) % (max_ms - min_ms + 1);
    let target = Duration::from_millis(min_ms + jitter);

    let elapsed = started.elapsed();
    if target > elapsed {
        sleep(target - elapsed).await;
    }
}
(deleted file)
@@ -1 +0,0 @@
pub mod auth_service;