
Architectural Requirements and Design Document

4.3.1 Architectural Structural Design & Requirements


This is covered in further detail in the Architecture document.

System Architecture

Layers & Responsibilities

  • Presentation (React) — Guides the flows: import spec → discover endpoints → create/start scan → view results.
  • Access Layer — API Gateway (Node/Express) — Single REST surface, input validation, JWT auth, basic rate limiting, and brokering to the engine.
  • Service Layer (Python Engine, microkernel) — Long-running TCP server that executes JSON commands; hosts modular OWASP API Top‑10 checks and report generation.
  • Persistence Layer (Supabase/Postgres) — Users, APIs, endpoints, scans, scan results, tags, flags.

Interfaces & Contracts

  • UI → API (REST over HTTPS): JSON requests/responses, JWT in Authorization: Bearer <token>, idempotent GET, pagination/limits on list endpoints.
  • API ↔ Engine (TCP JSON): one request/response message { "command": "<name>", "data": { ... } } on TCP 127.0.0.1:9011; bounded message sizes; timeouts; the server closes the connection after replying (see the sketch after this list).
  • API/Engine → DB (Supabase HTTPS): parameterised queries, role‑restricted service key, minimal over‑fetch, index‑aware queries, pagination.
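
To make the API ↔ Engine contract concrete, below is a minimal Python sketch of the gateway side sending one command and reading the single reply. The send_command helper, the 5 s timeout, the reply-size bound, and the half-close used to mark end-of-request are assumptions for illustration; only the message shape and the 127.0.0.1:9011 address come from the contract above.

```python
import json
import socket

ENGINE_ADDR = ("127.0.0.1", 9011)   # engine address from the contract above
MAX_REPLY_BYTES = 1_048_576         # assumed upper bound on reply size

def send_command(command: str, data: dict, timeout: float = 5.0) -> dict:
    """Send one {"command", "data"} request and return the parsed JSON reply."""
    request = json.dumps({"command": command, "data": data}).encode("utf-8")
    with socket.create_connection(ENGINE_ADDR, timeout=timeout) as sock:
        sock.sendall(request)
        sock.shutdown(socket.SHUT_WR)        # assumed framing: half-close marks end of request
        chunks, received = [], 0
        while received < MAX_REPLY_BYTES:
            chunk = sock.recv(4096)
            if not chunk:                    # engine closes the socket after its reply
                break
            chunks.append(chunk)
            received += len(chunk)
    return json.loads(b"".join(chunks))

# Example: reply = send_command("connection.test", {})
```

In the real system this call sits behind the Node/Express gateway; the Python sketch only illustrates the wire protocol.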

Representative Surfaces

  • Express routes (examples):
    /api/auth/signup|login|logout|profile|google-login • /api/apis (CRUD) • /api/import • /api/endpoints • /api/endpoints/details • /api/endpoints/tags/add|remove|replace • /api/endpoints/flags/add|remove • /api/tags • /api/scan/create|start|status|progress|results|list|stop • /api/scans/schedule • /api/dashboard/overview • /api/reports/*
  • Engine commands (examples):
    apis.get_all|create|update|delete|import_url • endpoints.list|details|tags.add|tags.remove|tags.replace|flags.add|flags.remove • scan.create|start|status|progress|results|list|stop • scans.schedule.get|create_or_update|delete • templates.list|details|use • tags.list • connection.test • user.profile.get|update • user.settings.get|update
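
To show how the microkernel engine could route these commands to pluggable handler modules, here is a minimal Python dispatch sketch. The registry, decorator, handler bodies, and error shape are assumptions; only the command names are taken from the list above.

```python
from typing import Callable, Dict

# Registry mapping command names (e.g. "connection.test", "scan.status") to handlers.
HANDLERS: Dict[str, Callable[[dict], dict]] = {}

def command(name: str):
    """Decorator registering a handler module for one engine command."""
    def register(fn: Callable[[dict], dict]) -> Callable[[dict], dict]:
        HANDLERS[name] = fn
        return fn
    return register

@command("connection.test")
def connection_test(data: dict) -> dict:
    return {"status": "ok"}

@command("scan.status")
def scan_status(data: dict) -> dict:
    # A real handler would read the scan row from Postgres; this shape is hypothetical.
    return {"status": "ok", "scan_id": data.get("scan_id"), "state": "running"}

def dispatch(message: dict) -> dict:
    """Route one {"command", "data"} message to its registered handler."""
    handler = HANDLERS.get(message.get("command", ""))
    if handler is None:
        return {"status": "error", "error": f"unknown command: {message.get('command')}"}
    return handler(message.get("data", {}))
```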

Cross‑Cutting Concerns

Validation (schema & file‑type/size checks for imports) • Observability (structured logs, correlation id) • Configuration via .env • Security (JWT, rate limit, least privilege, no secrets in code, CORS restricted).
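
As a concrete example of the import-validation concern, the sketch below enforces file-type and size checks before parsing an uploaded JSON/YAML spec. The 5 MiB limit, function name, and error messages are assumptions rather than the project's actual guard.

```python
import json
from pathlib import Path

import yaml  # PyYAML, assumed available for YAML specs

MAX_IMPORT_BYTES = 5 * 1024 * 1024             # assumed ceiling on uploaded spec size
ALLOWED_SUFFIXES = {".json", ".yaml", ".yml"}  # imports restricted to JSON/YAML

def load_spec(path: str) -> dict:
    """Validate extension and size, then parse the spec into a dict."""
    spec_path = Path(path)
    if spec_path.suffix.lower() not in ALLOWED_SUFFIXES:
        raise ValueError(f"unsupported spec type: {spec_path.suffix}")
    if spec_path.stat().st_size > MAX_IMPORT_BYTES:
        raise ValueError("spec exceeds the import size limit")
    text = spec_path.read_text(encoding="utf-8")
    if spec_path.suffix.lower() == ".json":
        return json.loads(text)
    return yaml.safe_load(text)
```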


Quantified quality requirements:

| Attribute | Scope | Target (Quantified) | Measurement |
| --- | --- | --- | --- |
| Latency (reads) | /api/** read endpoints | p95 ≤ 1.5 s, p99 ≤ 3.0 s @ 50 VUs; p95 ≤ 2.5 s @ 100 VUs | JMeter aggregate (p95/p99) |
| Throughput | Mixed plan | ≥ 7 req/s sustained for ≥ 3 min @ 100 VUs | JMeter “Throughput” |
| Scan duration | 100 endpoints, “Balanced” profile | ≤ 180 s end‑to‑end (create → start → results) | API + engine timers |
| Reliability | API at nominal load | Error rate < 5% (excl. deliberate 401/403 negative tests) | JMeter “Error %” |
| Security | All mutations | JWT on selected routes; auth rate‑limited; imports restricted to JSON/YAML; secrets via env | Code + tests |
| Maintainability | API + Engine | Unit test coverage ≥ 60% lines; CI runs unit + integration smoke | Jest/Pytest + CI |
| Usability | UI flows | Import spec and start a scan in ≤ 3 steps; clear labels/tooltips on each step | Heuristic eval/demo |
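
The latency and reliability targets above are read from JMeter's aggregate report; as a hedged illustration, the same percentiles can be recomputed from an exported results CSV using the standard elapsed and success columns (the file name below is hypothetical, and pandas is assumed to be available).

```python
import pandas as pd

# JMeter's CSV results log records per-sample latency in "elapsed" (milliseconds)
# and a per-sample "success" flag.
results = pd.read_csv("results_100vu.csv")            # hypothetical export path

p95 = results["elapsed"].quantile(0.95) / 1000.0      # seconds
p99 = results["elapsed"].quantile(0.99) / 1000.0
error_rate = results["success"].astype(str).str.lower().ne("true").mean()

print(f"p95={p95:.2f}s  p99={p99:.2f}s  error_rate={error_rate:.1%}")
```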

4.3.3 Non‑Functional Testing (Method, Evidence, Reflection)


This is covered in further detail in the Deployment document.

Deployment Diagram

Assumptions: API (Node) and Engine (Python) co‑located; engine bound to 127.0.0.1:9011; Supabase remote; default socket & HTTP timeouts unless stated.

[Figure: Performance tests report]

[Figure: Aggregate and load with 50 users]

[Figure: Aggregate and load with 100 users]

[Figure: Aggregate report under 100 users]

Endpoint highlights (100 users, averages)

  • /api/auth/login: 241 ms (100% errors in that scripted run due to deliberate invalid creds).
  • /api/apis: 12.6 s (heaviest list; p95 ~60.6 s).
  • /api/scan/list: 4.3 s; /api/endpoints/details: 4.3 s.

| Target | Observation | Pass? | Actions |
| --- | --- | --- | --- |
| p95 ≤ 1.5 s (reads) @ 50 VUs | Several reads exceed budget; /api/apis dominates | Fail | Add server‑side pagination, field projection, per‑user cache; DB indexes on apis(user_id), endpoints(api_id), scans(api_id, created_at desc); pre‑compute dashboard aggregates (see the pagination sketch after this table). |
| p95 ≤ 2.5 s @ 100 VUs | Exceeded on hot paths | Fail | Same as above; move heavy aggregates to background/cache; stream/chunk large responses. |
| ≥ 7 req/s @ 100 VUs | ~11.2 req/s sustained | Pass | Separate negative tests from baseline to show true error rate. |
| Scan ≤ 180 s (100 endpoints) | Meets locally | Pass (local) | Increase engine concurrency; reuse pooled HTTP sessions; async gather of endpoint probes. |
| Error rate < 5% (nominal) | Inflated by negative tests/import guards | Conditional | Run negative plan separately; keep strict import checks but offload large files to background processing. |
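
As referenced in the first row above, a server-side pagination and field-projection sketch for the /api/apis hot path could look like the following, assuming the supabase-py client; the environment-variable names, projected columns, and page size are hypothetical.

```python
import os

from supabase import create_client  # supabase-py, assumed to be the Python DB client

supabase = create_client(os.environ["SUPABASE_URL"], os.environ["SUPABASE_SERVICE_KEY"])

PAGE_SIZE = 25  # assumed default page size

def list_apis(user_id: str, page: int = 0) -> list[dict]:
    """Return one page of a user's APIs with projected columns instead of SELECT *."""
    start = page * PAGE_SIZE
    end = start + PAGE_SIZE - 1
    response = (
        supabase.table("apis")
        .select("id, name, created_at")   # projected columns are assumptions
        .eq("user_id", user_id)           # served by the apis(user_id) index
        .order("created_at", desc=True)
        .range(start, end)                # inclusive range, maps to LIMIT/OFFSET
        .execute()
    )
    return response.data
```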
Key issues and mitigations:

  • Hot path slowness on large lists → paginate + cache + index; precompute summaries.
  • Long‑running scans → progress polling/streaming; async engine probes (see the sketch after this list); aligned timeouts.
  • False positives → configurable scan profiles; rules tuning; per‑test evidence in reports.
  • Operational drift → CI runs unit/integration tests; nightly smoke with a small scan profile.
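
The async-probe mitigation noted above could be sketched with asyncio and aiohttp as below, reusing one pooled session and gathering endpoint probes concurrently; the concurrency cap, timeout, and result shape are assumptions, not the engine's actual implementation.

```python
import asyncio

import aiohttp

MAX_CONCURRENCY = 10  # assumed cap on simultaneous probes

async def probe(session: aiohttp.ClientSession, sem: asyncio.Semaphore, url: str) -> dict:
    """Issue one probe over the shared session and record status and latency."""
    async with sem:
        start = asyncio.get_running_loop().time()
        try:
            async with session.get(url, timeout=aiohttp.ClientTimeout(total=10)) as resp:
                await resp.read()
                return {"url": url, "status": resp.status,
                        "elapsed_s": asyncio.get_running_loop().time() - start}
        except (aiohttp.ClientError, asyncio.TimeoutError) as exc:
            return {"url": url, "error": repr(exc)}

async def probe_all(urls: list[str]) -> list[dict]:
    """Probe all endpoints concurrently over a single pooled HTTP session."""
    sem = asyncio.Semaphore(MAX_CONCURRENCY)
    async with aiohttp.ClientSession() as session:
        return await asyncio.gather(*(probe(session, sem, u) for u in urls))

# results = asyncio.run(probe_all(endpoint_urls))
```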

To reproduce the performance evidence:

  1. Start API (Node) and Engine (Python); engine on 127.0.0.1:9011.
  2. Set Supabase env vars and frontend URL; seed if required.
  3. Open JMeter → load API Load Test plan.
  4. Execute 10, 50, 100 user thread groups for ≥3 minutes each.
  5. Export Summary/Aggregate CSVs; capture screenshots into docs/perf/.

Appendix B — Traceability (for assessors)

  • Routes (examples): /api/auth/*, /api/apis CRUD, /api/import, /api/endpoints, /api/endpoints/details, /api/endpoints/tags/*, /api/endpoints/flags/*, /api/tags, /api/scan/*, /api/scans/schedule, /api/dashboard/overview, /api/reports/*.
  • Engine commands (examples): apis.*, endpoints.*, scan.*, scans.schedule.*, templates.*, tags.list, connection.test, user.profile.*, user.settings.*.