Repository Structure
Monorepo Management with Mise
The repository is organized as a polyglot monorepo managed with mise (short for "mise-en-place"). Mise handles:
- Tool version management: Python 3.11, Java 21 (Temurin), Node.js, Gradle, Terraform, Buf, and more, all pinned in the root `mise.toml`
- Environment variables: automatic loading of `.env` files from `configs/` based on the `ENVIRONMENT` variable (`local`, `dev`, `prod`)
- Task orchestration: each service and shared module defines its own `mise.toml` with service-specific tasks (`build`, `lint`, `test`, `dev`), while the root `mise.toml` provides aggregator tasks (`test:all`, `lint:all`, `format:all`)
- Virtual environment auto-activation: Python services automatically create and activate `.venv` directories via `uv`
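A root `mise.toml` covering the responsibilities above might look roughly like this. This is an illustrative sketch: only the Python and Java versions come from the text, the other pins, the env-file templating, and the aggregator task wiring are assumptions.

```toml
# Root mise.toml — illustrative sketch; exact pins and task wiring may differ.
[tools]
python = "3.11"
java = "temurin-21"
node = "latest"      # version pin not stated in the text
gradle = "latest"
terraform = "latest"
buf = "latest"

[env]
# Load the matching env file from configs/ based on ENVIRONMENT;
# the templating syntax shown here is an assumption.
_.file = "configs/{{ env.ENVIRONMENT }}.env"

[tasks."lint:all"]
# Aggregator task fanning out to per-service lint tasks.
run = "mise run '//services/...:lint'"
```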
mise.toml # Root: tool versions, env vars, aggregator tasks
├── services/ingestion/mise.toml # Python: dev, lint, format, build
├── services/state_manager/mise.toml # Java: dev, lint, format, build, test
├── services/path_optimizer/mise.toml # Python: start, lint
├── services/simulators/mise.toml # Python: run, lint
├── services/visualizer/mise.toml # TypeScript: dev, build
├── shared/proto/mise.toml # Protobuf: lint, format, generate, breaking
└── infra/local/mise.toml # Docker Compose: up, down, logs
Tasks are invoked using the monorepo path syntax:
- Root tasks: `mise run <task>` (e.g., `mise run lint:all`)
- Service tasks: `mise //<path>:<task>` (e.g., `mise //services/ingestion:lint`, `mise //services/state_manager:build`)
Directory Layout
dronefleet-optimizer/
├── services/ # Microservices (each independently deployable)
│ ├── ingestion/ # Python/FastAPI — HTTP gateway, Pub/Sub publisher
│ ├── state_manager/ # Java/Spring Boot — Event processing, Firestore persistence
│ ├── path_optimizer/ # Python/OR-Tools — VRP solver, batch optimization
│ ├── simulators/ # Python — Synthetic telemetry & order generation
│ └── visualizer/ # TypeScript/SolidJS — Real-time map dashboard
│
├── shared/ # Cross-service shared definitions
│ ├── proto/ # Protobuf source of truth (.proto files + Buf config)
│ ├── java/ # Generated Java models (betterproto/protobuf)
│ ├── python/ # Generated Python models + shared utilities
│ └── ts/ # Generated TypeScript models
│
├── libs/ # Reusable internal libraries
│ ├── python/
│ │ ├── config/ # Shared Python configuration (pydantic-settings)
│ │ ├── logging/ # Structured logging setup (structlog, JSON)
│ │ └── messaging/ # Message publisher abstraction (Factory + Adapter)
│ ├── java/
│ │ ├── config/ # Shared Java configuration
│ │ └── logging/ # Java logging setup (Slf4j, JSON)
│ └── ts/
│ ├── config/ # Shared TypeScript configuration
│ └── logging/ # TypeScript logging setup
│
├── configs/ # Environment-specific configuration files
│ ├── local.env # Local dev with emulators (PUBSUB_EMULATOR_HOST, etc.)
│ ├── dev.env # GCP dev environment (real Pub/Sub, Firestore)
│ └── prod.env # GCP production environment
│
├── infra/
│ ├── local/ # Docker Compose for local emulators (Pub/Sub, Firestore)
│ └── terraform/ # IaC: modules for Cloud Run, Pub/Sub, Firestore, IAM
│ ├── environments/dev/ # Dev environment Terraform config
│ ├── environments/prod/ # Prod environment Terraform config
│ └── modules/ # Reusable Terraform modules
│
├── tests/ # Cross-service tests
│ ├── unit/
│ ├── integration/
│ └── e2e/
│
└── docs/ # Documentation and architecture diagrams
Shared Models via Protocol Buffers + Buf
All data models shared across services are defined as Protocol Buffers (.proto files) in shared/proto/dronefleet/v1/. This is the single source of truth for:
- Drone, Order, Mission, Warehouse entities
- Event messages (telemetry, decisions)
- Enum definitions (DroneStatus, OrderStatus, OrderPriority, WaypointType)
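For a sense of what these definitions look like, here is a minimal sketch of one entity and one enum. The message fields, field numbers, and enum values shown are illustrative assumptions, not the actual schema.

```protobuf
// shared/proto/dronefleet/v1/drone.proto — illustrative sketch only.
syntax = "proto3";

package dronefleet.v1;

// Enum values here are hypothetical; only the enum name appears in the text.
enum DroneStatus {
  DRONE_STATUS_UNSPECIFIED = 0;
  DRONE_STATUS_IDLE = 1;
  DRONE_STATUS_IN_FLIGHT = 2;
}

// Field names and numbers are hypothetical.
message Drone {
  string id = 1;
  DroneStatus status = 2;
}
```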
The Buf CLI manages the protobuf workflow:
- `buf lint`: Enforces consistent proto style
- `buf format`: Auto-formats `.proto` files
- `buf generate`: Generates typed code for Java, Python, and TypeScript simultaneously
- `buf breaking`: Detects breaking schema changes against the `main` branch (run in CI)
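The three-language generation is driven by a `buf.gen.yaml`. A sketch along these lines would produce the layout described below; the specific plugins and output paths are assumptions, not the repo's actual config.

```yaml
# buf.gen.yaml — illustrative sketch; actual plugins/paths may differ.
version: v2
plugins:
  - remote: buf.build/protocolbuffers/java
    out: ../java
  - remote: buf.build/protocolbuffers/python
    out: ../python
  - remote: buf.build/bufbuild/es
    out: ../ts
```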
Generated code is placed in shared/java/, shared/python/, and shared/ts/. This approach ensures that:
- All services share an identical, strongly-typed contract — no drift between a Python DTO and a Java DTO
- Schema versioning and breaking change detection are automated
- Migration to gRPC or binary serialization is possible in the future with minimal effort
The trade-off is slightly more complex tooling, but the generation and CI checks are fully automated via mise //shared/proto:generate and the CI pipeline.
Messaging Library: Factory + Adapter Pattern
The libs/python/messaging/ library abstracts the message bus implementation using a Factory + Adapter design pattern:
libs/python/messaging/src/dronefleet_messaging/
├── base_publisher.py # Abstract base class (MessagePublisher)
├── factory.py # PublisherFactory — selects implementation
└── publisher/
├── pubsub_publisher.py # Google Cloud Pub/Sub adapter
└── kafka_publisher.py # Apache Kafka adapter (on-premise option)
The PublisherFactory reads the DEPLOYMENT_STRATEGY environment variable and instantiates the appropriate publisher:
- `on_cloud`: Uses `PubSubPublisher`, which connects to GCP Pub/Sub (or the Pub/Sub emulator when `PUBSUB_EMULATOR_HOST` is set)
- `on_premise`: Uses `KafkaPublisher`, which connects to a Kafka cluster (for hypothetical on-premise deployments)
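The pattern can be sketched as follows. Class names mirror the listing above, but the adapter bodies are stand-in stubs (the real adapters would wrap the Google Cloud Pub/Sub and Kafka client libraries), and the registry-based factory is one plausible shape, not necessarily the actual implementation.

```python
# Factory + Adapter sketch of the messaging library; adapter bodies are stubs.
import os
from abc import ABC, abstractmethod


class MessagePublisher(ABC):
    """Abstract base: every adapter exposes the same publish() contract."""

    @abstractmethod
    def publish(self, topic: str, payload: bytes) -> None: ...


class PubSubPublisher(MessagePublisher):
    def publish(self, topic: str, payload: bytes) -> None:
        # Real adapter would use google.cloud.pubsub_v1.PublisherClient,
        # which honors PUBSUB_EMULATOR_HOST automatically when it is set.
        print(f"[pubsub] {topic}: {len(payload)} bytes")


class KafkaPublisher(MessagePublisher):
    def publish(self, topic: str, payload: bytes) -> None:
        # Real adapter would use a Kafka producer client.
        print(f"[kafka] {topic}: {len(payload)} bytes")


class PublisherFactory:
    """Selects an adapter from the DEPLOYMENT_STRATEGY environment variable."""

    _registry: dict[str, type[MessagePublisher]] = {
        "on_cloud": PubSubPublisher,
        "on_premise": KafkaPublisher,
    }

    @classmethod
    def create(cls) -> MessagePublisher:
        strategy = os.environ.get("DEPLOYMENT_STRATEGY", "on_cloud")
        try:
            return cls._registry[strategy]()
        except KeyError:
            raise ValueError(f"Unknown DEPLOYMENT_STRATEGY: {strategy!r}")
```

Because callers depend only on `MessagePublisher`, switching buses is purely a matter of configuration, which is what makes the three deployment modes below possible without code changes.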
This separation allows the system to run in three modes without any code changes:
| Mode | Strategy | Infrastructure | Use Case |
|---|---|---|---|
| Local sandbox | `on_cloud` | Docker Compose + GCP emulators (Pub/Sub on `localhost:8085`, Firestore on `localhost:8080`) | Day-to-day development, zero cost |
| GCP Dev/Prod | `on_cloud` | Real GCP Pub/Sub + Firestore | Deployed environments (`dev.env`, `prod.env`) |
| On-premise | `on_premise` | Self-managed Kafka cluster | Hypothetical enterprise deployment |
The environment configuration files in configs/ are loaded by mise and injected as environment variables. In CI/CD (e.g., cd-dev.yml), these same variables are set via the deployment workflow to configure services for the target GCP environment.
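Putting the pieces together, a `configs/local.env` consistent with the values mentioned above might look like this. Only the emulator host ports come from the text; the exact variable names and any other entries are assumptions.

```shell
# configs/local.env — illustrative sketch; variable names are assumptions.
ENVIRONMENT=local
DEPLOYMENT_STRATEGY=on_cloud
PUBSUB_EMULATOR_HOST=localhost:8085
FIRESTORE_EMULATOR_HOST=localhost:8080
```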