Real artifact appendix

Walk the real docs when the room gives you time.

The deck is the compressed version. This page is the longer cut: the brainstorm that framed the work, the plan that sequenced it, and the solution doc that captured a hard-earned infrastructure lesson. If the audience wants more than slide abstractions, this is the place to open.

Use it live

  • Open this when you want to show the artifacts behind the talk, not just talk about them.
  • The sections are ordered to match the compound loop: brainstorm, plan, then reusable solution.
  • Everything here is a local snapshot inside the talk repo, so it will render the same way in production.

Jump to section

Scroll continuously or hop straight to the artifact you want to walk through.

Brainstorm

Paylocity reporting sync brainstorm

This is the cleanest example of pre-plan thinking in the repo. It compares a standalone repo, a monorepo MVP, and a more ambitious near-real-time first pass before implementation begins.

3 min read

Plan

Paylocity time-entry reporting sync plan

This is the strongest full plan artifact in the repo right now. It turns a reporting ask into a real delivery contract: repo placement, runtime choice, Terraform boundaries, migration ownership, and phased rollout.

16 min read

Solution

RBAC bootstrap hardening solution

This one is useful because it captures a real infrastructure lesson with a clean permanent fix. The deploy contract was asking Terraform to look up Entra data it did not actually need.

3 min read

Paylocity Reporting Sync Foundation

What We're Building

An internal reporting ingestion foundation that pulls Paylocity time-entry data into a Monti-owned Azure SQL schema, with enough auditability to trust the sync and enough structure to feed Power BI without querying Paylocity directly.

The immediate business outcome is narrow: ship time-entry pull this week. That means the first slice should cover the minimum reference data needed to contextualize time entries, the time-entry ingestion path itself, sync tracking tables, and validation queries. It should not try to turn into a full data platform, generalized warehouse, or multi-source reporting system yet.

This should be treated as a Monti platform workload, not a standalone side project. The current spec is a useful functional reference, but the implementation home should follow the monorepo strategy already established in monti-platform.

Why This Approach

ADR 001 already commits Monti to a monorepo for infrastructure, applications, and services. The repo also already has Azure SQL, Key Vault, monitoring, and environment-root patterns. A separate monti-reporting-db repository would create another lifecycle to manage for a one-operator team and would break the repo's "agents can reason across the full system" rule.

The delivery target also argues for a thinner first release. Time-entry sync plus validation solves the stated reporting problem. Employee sync, webhooks, broader warehouse modeling, and more elaborate runtime flows are useful, but they are not required to prove business value this week.

Approaches Considered

1. Standalone Repo, Full Spec As Written

  • Create a dedicated monti-reporting-db repository with Terraform, SQL migrations, and Azure Functions exactly as laid out in the spec.
  • Pros: clean isolation, fastest path if treated as a one-off build.
  • Cons: conflicts with ADR 001, duplicates repo and CI overhead, makes platform-wide reasoning and shared module reuse harder.
  • Best for: a separate team or product boundary, which Monti does not have today.

2. Monorepo, Polling-First Reporting MVP (Recommended)

  • Bring this workload into monti-platform as a new integration service with Terraform-managed Azure resources, Azure SQL reporting tables, and a timer-based sync for time entries.
  • Pros: aligned with repo strategy, simplest path to shipping this week, keeps infra and service ownership together, leaves room to add richer dimensions later without re-platforming.
  • Cons: data freshness is limited to the sync cadence, and some employee context may need to be deferred or represented minimally at first.
  • Best for: immediate reporting needs and a small team that should optimize for clarity and maintainability.

3. Monorepo, Near-Real-Time Sync From Day One

  • Use the monorepo, but include both the timer-based time-entry sync and webhook/event-driven updates in the first release for fresher downstream reporting.
  • Pros: fresher data and a stronger long-term sync architecture from the start.
  • Cons: depends on webhook or event capability, adds operational complexity, and raises delivery risk before the basic reporting feed is stable.
  • Best for: confirmed near-real-time requirements rather than a week-one reporting baseline.

Recommendation

Use Approach 2 for the first implementation pass.

The right first outcome is a trustworthy time-entry reporting feed inside monti-platform. Design the schema and sync audit tables so richer employee dimensions or webhook-driven freshness can be added later, but do not make first delivery depend on them.

Key Decisions (Current)

  • Home: implement this inside monti-platform, not as a new standalone repo.
  • Resource model: all Azure resources remain Terraform-managed and follow existing environment and module conventions.
  • Runtime boundary: treat the sync as a backend integration service, not an end-user app.
  • Data ownership: keep an app-owned Azure SQL reporting schema instead of querying Paylocity directly from Power BI.
  • First-release scope: time-entry sync, sync observability, and validation.
  • Deferred scope: employee sync, generalized warehouse work, broader master-data consolidation, and optional webhook-driven freshness.

Resolved Questions

  • Should this live outside the monorepo? No. Existing repo strategy says new services belong in monti-platform.
  • Do we need a broader reporting platform before starting? No. The first problem is one source system and one reporting store.
  • Should sync observability be part of v1? Yes. SyncRun, SyncEvent, and SyncWatermark are part of the core product because trust in the feed matters as much as the feed itself.
  • Should Power BI read Paylocity directly? No. Power BI should read Monti-owned tables in Azure SQL.
  • Should week-one scope include employee sync or webhook-driven updates? No. Week one is explicitly limited to time-entry pull plus validation, and the monorepo approach remains the preferred home.

Next Steps

  1. Convert this brainstorm into a plan that maps the workload into repo paths, Terraform environments, schema/migration ownership, testing, and rollout.

feat: Deliver Paylocity time-entry reporting sync foundation

Overview

Build the first Monti-owned reporting sync for Paylocity time-entry data inside monti-platform. The week-one outcome is narrow on purpose:

  • a Terraform-managed Azure runtime in this monorepo
  • an Azure SQL schema for time-entry reporting and sync observability
  • a timer-based ingestion path that pulls Paylocity company punch-detail data
  • validation queries that support first Power BI usage

This plan intentionally does not implement the full standalone spec as written. The spec is useful source material, but the chosen scope is smaller and the implementation home is the monorepo.

Problem Statement

Monti needs reporting-grade time-entry data from Paylocity this week, but no production sync exists yet. The provided standalone spec assumes:

  • a separate monti-reporting-db repository
  • employee sync and webhook support in the first slice
  • Linux Consumption Azure Functions with Python 3.11
  • a direct SQL-auth connection-string pattern as the primary database path

Those assumptions no longer fit the repo strategy or current platform guidance well enough:

  • ADR 001 keeps new workloads in the monorepo
  • the brainstorm fixed week-one scope to time-entry only
  • the repo already has Azure environment-root and shared-module conventions
  • Microsoft currently recommends Flex Consumption for new Function Apps, while Linux Consumption is a legacy plan

Delivery Decisions

  • Keep this workload in monti-platform.
  • Treat it as a backend integration service, not an end-user app.
  • Limit week-one scope to Paylocity time-entry pull, sync observability, and validation.
  • Defer employee sync, webhooks, and broader warehouse modeling until after the first reporting slice is stable.
  • Provision infrastructure with Terraform Cloud, but keep schema migration execution and Function code deployment out of Terraform itself.
  • Follow the repo's existing split-ownership deployment model: Terraform Cloud for infra, and GitHub Actions for service code deploys after the first dev smoke path is proven.
  • Use Paylocity OAuth2 client credentials authentication over TLS 1.2 with short-lived bearer tokens.
  • Default the sync cadence to hourly for week one, with an environment-configurable schedule.
  • Use only dev and prod environments for now.
  • Start with dev only in practice. Do not bootstrap or deploy prod before the first successful dev sync and data validation.

Research Summary

Internal Findings

  • The repo already defines Azure module and environment-root patterns under infrastructure/modules/ and infrastructure/environments/.
  • services/README.md expects each backend integration to live in its own subdirectory with a short README covering purpose, interfaces, and deployment.
  • infrastructure/modules/azure_sql/main.tf is still a placeholder, so this work must turn that module into a real contract before any environment root can use it for this service.
  • Recent learnings show that Azure bootstrap work in this repo must account for:
    • Key Vault RBAC propagation delays
    • Terraform runner permissions on freshly created vaults
    • region and quota assumptions that fail during first apply
    • defaults that match real operator constraints instead of idealized ones

External Findings

  • Paylocity’s developer portal still exposes the Company Punch Details flow in the most recent API documentation version surfaced during research (v2026-02-17), and it is a three-step pattern:
    1. create a company punch-detail operation
    2. poll the operation status
    3. fetch the resulting data pages
  • The old employee punch-detail path is explicitly marked deprecated in current docs, which reinforces choosing the company-level punch-detail flow for this reporting workload.
  • Paylocity’s current auth docs specify OAuth2 client credentials authentication over TLS 1.2, with bearer tokens retrieved from environment-specific /token endpoints and expiring after 3600 seconds.
  • As of March 23, 2026, Microsoft documents Linux Consumption as a legacy Functions hosting plan, notes that no new language-stack support has been added there since September 30, 2025, and recommends Flex Consumption for new serverless Function Apps.
  • Microsoft also recommends managed identities for Azure Functions connections to Azure SQL.
  • Microsoft’s Python Functions guidance specifically calls out pyodbc connection-collision problems and points to mssql-python as Microsoft’s official Python driver for SQL Server and Azure SQL; that driver is still new enough that the plan should keep a fallback path available.

Proposed Solution

Implement a new backend integration service in the monorepo that runs on Azure Functions and persists reporting-grade time-entry data into Azure SQL.

Repository Placement

Create a new service root:

services/paylocity-time-entry-sync/
├── README.md
├── host.json
├── requirements.txt
├── local.settings.json.example
├── function_app.py
├── shared/
│   ├── paylocity_client.py
│   ├── db.py
│   └── hashing.py
├── sql/
│   ├── 001_create_reference_tables.sql
│   ├── 002_create_time_entry_tables.sql
│   ├── 003_create_sync_tables.sql
│   └── 004_create_reporting_views.sql
└── tests/
    ├── fixtures/
    ├── test_paylocity_client.py
    ├── test_time_entry_mapping.py
    └── test_watermark_logic.py

Key differences from the standalone spec:

  • service-owned code and SQL live under services/, not in a separate repo
  • the function app uses the Python v2 programming model
  • week-one scope omits employee and webhook function directories
  • SQL is narrowed to the smallest model that supports time-entry reporting and replay-safe syncs

Deployment And Migration Ownership

Use this explicit split:

  • Terraform Cloud provisions Azure resources only.
  • A service-owned migration runner applies SQL files in order and records applied versions in a lightweight schema-history table.
  • GitHub Actions deploys the Function App package for dev.

Do not use Terraform null_resource or provisioners to apply application schema. That would blur infrastructure and application ownership and create drift risk later in prod.

Infrastructure Placement

Create two environment roots with a shared-database boundary:

infrastructure/environments/monti-reporting/
├── dev/
└── prod/

infrastructure/environments/paylocity-time-entry-sync/
├── dev/
└── prod/

Only dev/ and prod/ are planned for now. Create and use prod/ only after dev is proven.

Expected Terraform shape for dev:

  • monti-reporting/dev
    • resource group
    • Key Vault for shared reporting database secrets
    • Azure SQL logical server + database via a completed azure_sql module
    • firewall rules for Azure-hosted callers and any approved BI/operator clients
  • paylocity-time-entry-sync/dev
    • resource group
    • storage account for Functions runtime and deployment package
    • Application Insights / Log Analytics
    • Key Vault for Paylocity credentials
    • Azure Function App via a new reusable module for Flex Consumption hosting
    • role assignments for Function managed identity and deploy identities to access required Azure resources

Runtime Choice

Use:

  • Azure Functions
  • Python 3.12
  • Python v2 programming model
  • Flex Consumption hosting for new deployment work

Default database client direction:

  • first choice: mssql-python
  • fallback: pyodbc + ODBC Driver 18 with singleton connection handling if mssql-python packaging or runtime stability proves weak in Flex Consumption during the dev spike

Do not plan on Linux Consumption for this new workload unless a later, explicit exception is documented in an ADR.

Technical Approach

Phase 0: Confirm the External Contract Before Coding Deeply

The standalone spec assumes a simple time-entry GET endpoint. Current Paylocity docs indicate the actual reporting flow is the company punch-detail operation pattern. Before schema and runtime work gets too deep, validate these inputs against a real client tenant or sandbox:

  • company IDs and facility mapping
  • actual company punch-detail request and response shape
  • the source record identifier to use for idempotency
  • which fields are consistently present for week-one reporting
  • whether the punch payload already carries enough facility/cost-center context to avoid a second reference-data sync immediately

This phase is short but mandatory because it prevents building the wrong table keys.

It should also end with two concrete decisions before broad implementation starts:

  • confirm the exact Paylocity token endpoint and scopes for the target tenant environment
  • validate whether mssql-python behaves reliably enough in the chosen Functions hosting model, or whether the service should switch immediately to the documented pyodbc fallback

Phase 1: Establish the Monorepo Service and Terraform Contracts

1. Add service scaffolding

  • create services/paylocity-time-entry-sync/
  • add a README describing the service purpose, inputs, outputs, and deployment model
  • create a minimal local development contract: requirements.txt, host.json, local.settings.json.example

2. Create the infrastructure root

  • add infrastructure/environments/paylocity-time-entry-sync/dev/
  • mirror the repo’s existing environment-root conventions:
    • main.tf
    • locals.tf
    • variables.tf
    • outputs.tf
    • providers.tf
    • terraform.tf
    • example.tfvars
    • README.md

3. Reuse and extend modules deliberately

  • complete infrastructure/modules/azure_sql/ with real resources and outputs
  • add a reusable Function App module for Flex Consumption instead of embedding one-off resources directly in the environment root
  • reuse storage, key_vault, monitoring, and role_assignments where they already fit
  • avoid pulling in entra_app and app_service_auth because this is not a user-facing web app

4. Capture non-trivial decisions in an ADR

Add an ADR for the reporting-sync runtime and hosting shape, covering:

  • monorepo placement
  • time-entry-only first slice
  • Azure Functions Flex Consumption
  • the chosen SQL connection/auth strategy

5. Establish the deployment workflow contract

Follow the requisition-workflow ownership pattern:

  • Terraform Cloud owns environment provisioning
  • GitHub Actions owns application package deployment

Week-one decision:

  • first dev code deployment may be manual if needed to prove the packaging/runtime path quickly
  • once that path is proven, standardize on a workflow such as .github/workflows/deploy-paylocity-time-entry-sync-dev.yml

That workflow should:

  • run Python tests
  • run the migration runner against dev
  • package the Function App source
  • deploy to Azure using OIDC-backed credentials

Phase 2: Define the Minimal Reporting Schema

Week one does not need the full employee model from the original spec.

Use a deliberately smaller schema:

  • Company
    • static or manually seeded company/facility mapping for the configured Paylocity companies
  • TimeEntry
    • reporting-grade normalized time-entry records
  • SyncRun
    • one row per sync attempt, per company
  • SyncWatermark
    • one row per company/data type to control incremental pulls

Defer for later:

  • Employee
  • EmployeePosition
  • SyncEvent
  • webhook-specific tables

Intentional narrowing note:

  • the brainstorm treated SyncEvent as part of broader core observability, but this week-one plan defers the table because there is no inbound webhook/event stream in scope yet
  • run-level observability lives in SyncRun, while step-level troubleshooting lives in structured logs and Application Insights for the first slice

SQL migration strategy

Use repo-owned versioned SQL files plus a migration runner, not manual ad hoc execution.

Expected pattern:

  • SQL files remain under services/paylocity-time-entry-sync/sql/
  • a small runner applies them in lexical order
  • the runner creates and updates a schema-history table such as SchemaMigration
  • the same runner can be invoked:
    • locally for development
    • from GitHub Actions before Function App deploy
    • later in prod with the same operational pattern

This keeps migration ownership explicit and avoids inventing a new release process for every environment.
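The runner described above can be sketched in a few lines. This is a minimal illustration, not the repo's actual runner: the `SchemaMigration` table name follows the plan, but the DB-API details are shown with sqlite semantics (`executescript`); a real Azure SQL runner would split files on `GO` batch separators before executing.

```python
import sqlite3
from pathlib import Path


def run_migrations(conn, sql_dir: str) -> list[str]:
    """Apply *.sql files in lexical order, recording applied versions.

    `conn` is any DB-API connection; SchemaMigration is the
    schema-history table this plan calls for.
    """
    cur = conn.cursor()
    cur.execute(
        "CREATE TABLE IF NOT EXISTS SchemaMigration ("
        "  Version TEXT PRIMARY KEY,"
        "  AppliedAt TEXT DEFAULT CURRENT_TIMESTAMP)"
    )
    cur.execute("SELECT Version FROM SchemaMigration")
    applied = {row[0] for row in cur.fetchall()}

    newly_applied = []
    for path in sorted(Path(sql_dir).glob("*.sql")):  # lexical order
        if path.name in applied:
            continue  # replay-safe: versions already recorded are skipped
        cur.executescript(path.read_text())  # sqlite-specific; split GO batches for Azure SQL
        cur.execute("INSERT INTO SchemaMigration (Version) VALUES (?)", (path.name,))
        newly_applied.append(path.name)
    conn.commit()
    return newly_applied
```

Because applied versions are recorded before the next file runs, invoking the runner twice is a no-op on the second pass, which is exactly the property the local/CI/prod reuse depends on.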

Company seed strategy

The Company table should be seeded by versioned SQL in the repo, not by portal edits or one-off manual SQL sessions.

Expected pattern:

  • include a seed file such as sql/seed/001_seed_companies.sql
  • engineering updates confirmed company IDs and facility metadata through a PR
  • the migration runner applies the seed file as part of the initial schema bootstrap
  • the service README and environment-root README list the required inputs and where they come from

Suggested initial TimeEntry contract

The exact columns should follow the real payload, but the week-one plan should expect at least:

  • CompanyId
  • SourceEntryKey
  • PaylocityEmployeeId
  • EntryDate
  • ClockIn
  • ClockOut
  • TotalHours
  • RegularHours
  • OvertimeHours
  • DepartmentCode
  • CostCenterCode
  • EarningsCode
  • Shift
  • SourcePayloadHash
  • CreatedAt
  • UpdatedAt

Important modeling rule:

  • the uniqueness contract must be driven by the actual punch-detail payload, not by the standalone spec’s placeholder assumptions
  • if Paylocity does not expose a stable single-field entry ID, compute SourceEntryKey from a deterministic combination of source fields and document that choice
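If the composite-key path is needed, the derivation can look like the sketch below. The field names here are placeholders until Phase 0 confirms the real payload; the point is determinism and an unambiguous separator.

```python
import hashlib


def source_entry_key(record: dict) -> str:
    """Derive a deterministic SourceEntryKey from punch-detail fields.

    The field list is an assumption; replace it with whatever Phase 0
    confirms uniquely identifies a punch record in the real payload.
    """
    parts = [
        str(record.get("companyId", "")),
        str(record.get("employeeId", "")),
        str(record.get("entryDate", "")),
        str(record.get("clockIn", "")),
    ]
    # A fixed separator keeps the key stable and collision-resistant
    # across runs, regardless of field contents.
    return hashlib.sha256("|".join(parts).encode("utf-8")).hexdigest()
```

The same function would live in shared/hashing.py alongside the SourcePayloadHash computation, and its chosen field list should be documented next to the uniqueness contract.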

ERD

erDiagram
  COMPANY ||--o{ TIME_ENTRY : scopes
  COMPANY ||--o{ SYNC_RUN : syncs
  COMPANY ||--o{ SYNC_WATERMARK : tracks

  COMPANY {
    string CompanyId PK
    string CompanyName
    string FacilityCode
    string FacilityName
    string TimeZone
    bool IsActive
  }

  TIME_ENTRY {
    bigint TimeEntryId PK
    string CompanyId FK
    string SourceEntryKey UK
    string PaylocityEmployeeId
    date EntryDate
    datetime ClockIn
    datetime ClockOut
    decimal TotalHours
    decimal RegularHours
    decimal OvertimeHours
    string DepartmentCode
    string CostCenterCode
    string EarningsCode
    string Shift
    string SourcePayloadHash
  }

  SYNC_RUN {
    int SyncRunId PK
    string CompanyId FK
    string RunType
    datetime StartedAt
    datetime CompletedAt
    string Status
    int RecordsProcessed
    int RecordsInserted
    int RecordsUpdated
    int RecordsSkipped
    string ErrorMessage
  }

  SYNC_WATERMARK {
    int WatermarkId PK
    string CompanyId FK
    string DataType
    datetime LastSyncTimestamp
    int LastSyncRunId FK
    datetime UpdatedAt
  }

Phase 3: Implement the Time-Entry Sync Path

Build only the timer-based path for week one.

Trigger design

  • one timer-triggered sync function
  • runs on a defined schedule in dev
  • iterates configured company IDs
  • records a separate SyncRun per company

Paylocity client behavior

The client should implement the company punch-detail flow, not a guessed direct endpoint:

  1. submit the company punch-detail operation for the desired window
  2. poll operation status until completion or timeout
  3. fetch result pages
  4. respect pagination and retry/backoff behavior
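The four steps above can be sketched with an injected transport so the flow stays unit-testable. The paths, operation states, and response shapes below are assumptions to validate in Phase 0 against Paylocity's real API, not confirmed contract details.

```python
import time


def fetch_punch_details(http, company_id: str, start: str, end: str,
                        poll_interval: float = 2.0, timeout: float = 300.0) -> list[dict]:
    """Sketch of the three-step company punch-detail flow.

    `http` is an injected callable: http(method, path, **kwargs) -> dict.
    All paths and response keys are placeholders pending Phase 0.
    """
    # 1. submit the operation for the requested window
    op = http("POST", f"/companies/{company_id}/punchdetails",
              json={"start": start, "end": end})
    op_id = op["operationId"]

    # 2. poll status until completion, failure, or timeout
    deadline = time.monotonic() + timeout
    while True:
        status = http("GET", f"/operations/{op_id}")
        if status["state"] == "completed":
            break
        if status["state"] == "failed" or time.monotonic() > deadline:
            raise RuntimeError(f"punch-detail operation {op_id}: {status['state']}")
        time.sleep(poll_interval)

    # 3. fetch result pages until an empty page signals the end
    rows, page = [], 1
    while True:
        batch = http("GET", f"/operations/{op_id}/data", params={"page": page})
        if not batch["records"]:
            return rows
        rows.extend(batch["records"])
        page += 1
```

Injecting `http` rather than binding to a concrete requests session keeps retry/backoff policy in one wrapper and lets tests/fixtures exercise timeout and failure paths without network access.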

Paylocity auth contract

Use the auth flow documented by Paylocity:

  • OAuth2 client credentials
  • TLS 1.2
  • short-lived bearer token in the Authorization header
  • refresh token before the 3600-second expiry window is hit

Expected configuration shape:

  • PAYLOCITY_CLIENT_ID from Key Vault
  • PAYLOCITY_CLIENT_SECRET from Key Vault
  • PAYLOCITY_TOKEN_URL as an environment-specific app setting
  • PAYLOCITY_BASE_URL as an environment-specific app setting

Known token endpoints from the current docs:

  • testing: https://dc1demogwext.paylocity.com/public/security/v1/token
  • production: https://dc1prodgwext.paylocity.com/public/security/v1/token
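A small expiry-aware cache keeps the "refresh before the 3600-second window" rule in one place. The 60-second safety margin below is this sketch's assumption, not a Paylocity requirement; `fetch_token` stands in for the real client-credentials POST to the /token endpoint.

```python
import time


class TokenCache:
    """Cache a client-credentials bearer token, refreshing before expiry.

    `fetch_token` is an injected callable returning the token response
    shape {"access_token": ..., "expires_in": 3600}.
    """

    def __init__(self, fetch_token, margin_seconds: int = 60, clock=time.monotonic):
        self._fetch = fetch_token
        self._margin = margin_seconds
        self._clock = clock
        self._token = None
        self._expires_at = 0.0

    def get(self) -> str:
        # Refresh when no token is held or we are inside the safety margin.
        if self._token is None or self._clock() >= self._expires_at - self._margin:
            resp = self._fetch()
            self._token = resp["access_token"]
            self._expires_at = self._clock() + resp["expires_in"]
        return self._token
```

The injected clock makes expiry behavior testable without real waits, and the same object can be shared by every request the sync makes within one run.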

The implementation must also:

  • normalize company-local date ranges before request construction
  • keep rate limiting and retry policy explicit
  • capture enough logging to debug partial or empty syncs
  • check for API version or deprecation signals during integration testing
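The first bullet, normalizing company-local date ranges, is easy to get subtly wrong. A minimal sketch using the standard-library zoneinfo module, assuming the half-open [start, end) window convention and that the time zone comes from the Company table:

```python
from datetime import date, datetime, time, timedelta, timezone
from zoneinfo import ZoneInfo


def company_day_window(day: date, company_tz: str) -> tuple[datetime, datetime]:
    """Convert a company-local calendar day into an explicit UTC window.

    `company_tz` is an IANA name from the Company table's TimeZone
    column; the [start, end) convention is this sketch's assumption.
    """
    tz = ZoneInfo(company_tz)
    start_local = datetime.combine(day, time.min, tzinfo=tz)
    end_local = start_local + timedelta(days=1)
    # Convert to UTC so request construction and watermark comparisons
    # never mix naive and zone-local timestamps.
    return (start_local.astimezone(timezone.utc),
            end_local.astimezone(timezone.utc))
```

Keeping the conversion in one helper means DST transitions are handled by the zone database rather than by ad hoc offset arithmetic scattered through the client.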

Watermark behavior

Week-one logic should be explicit:

  • first successful run backfills a small approved window, defaulting to the previous 7 completed days
  • subsequent runs use SyncWatermark
  • the watermark advances only after the company run completes successfully
  • partial failure in one company does not advance that company’s watermark
  • no-data responses still record a successful SyncRun

Cadence and backfill decisions:

  • default schedule: hourly at minute 5 using Azure Functions NCRONTAB, for example 0 5 * * * *
  • make the schedule environment-configurable so later environments can tighten or relax freshness
  • the initial 7-day backfill is a configurable default, not a hard-coded business rule
  • rationale: 7 days is enough to validate current reporting usefulness and replay safety without turning the first sync into an unnecessarily large historical import

Insert/update behavior

The sync must be idempotent:

  • rerunning the same date range must not create duplicate rows
  • changed source records should update the existing row when the chosen source key matches
  • unchanged rows should be counted as skipped where practical
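One way to keep those three outcomes explicit is to classify a batch before touching the database, comparing SourcePayloadHash values per SourceEntryKey. This is a sketch of the decision logic only; the actual write path (for example a T-SQL MERGE) is a separate choice.

```python
def plan_upserts(incoming: list[dict],
                 existing_hashes: dict[str, str]) -> dict[str, list[dict]]:
    """Split a batch into insert/update/skip by key and payload hash.

    `existing_hashes` maps SourceEntryKey -> SourcePayloadHash for rows
    already in TimeEntry; field names follow this plan's contract.
    """
    plan = {"insert": [], "update": [], "skip": []}
    for rec in incoming:
        key = rec["SourceEntryKey"]
        if key not in existing_hashes:
            plan["insert"].append(rec)
        elif existing_hashes[key] != rec["SourcePayloadHash"]:
            plan["update"].append(rec)  # source changed: update in place
        else:
            plan["skip"].append(rec)    # unchanged: counted as skipped
    return plan
```

The counts of each bucket map directly onto the RecordsInserted/RecordsUpdated/RecordsSkipped columns in SyncRun, so observability falls out of the same pass.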

Phase 4: Validate the Reporting Output

Validation is part of the product, not cleanup work.

Data validation

Create SQL queries that confirm:

  • time-entry counts by company and day
  • total hours by facility for the recent reporting window
  • recent sync-run status and counts
  • watermark movement by company
  • duplicate-key absence for the chosen SourceEntryKey
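The duplicate-key check in particular is cheap to automate alongside the SQL queries. A small helper, assuming rows are fetched with a plain SELECT over TimeEntry:

```python
from collections import Counter


def duplicate_entry_keys(rows: list[dict]) -> list[str]:
    """Return SourceEntryKey values that appear more than once.

    An empty result confirms the idempotency contract held for the
    validated window; any hits identify exactly which keys to inspect.
    """
    counts = Counter(row["SourceEntryKey"] for row in rows)
    return sorted(key for key, n in counts.items() if n > 1)
```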

Operational validation

After the first dev deployment:

  • run one manual sync for a narrow window
  • compare output to Paylocity UI or an approved export for the same companies and dates
  • confirm Application Insights captures the run and error paths
  • confirm re-running the same window is safe

Reporting handoff

Only after the SQL output validates cleanly:

  • connect Power BI to Azure SQL
  • define the first reporting query or view for daily hours by facility

SpecFlow Findings To Carry Into Implementation

The spec-flow pass surfaced several gaps that the plan must close:

  1. Endpoint shape is not the one assumed in the standalone spec. The plan must use Paylocity’s current company punch-detail operation flow.

  2. Idempotency key is not yet proven. Schema freeze depends on confirming the actual source record identifier or deriving a stable composite key from sample payloads.

  3. Time-zone boundaries matter. Paylocity’s company-level punch-detail filtering uses date/time windows that need explicit handling per company/facility time zone.

  4. Week-one scope does not guarantee employee dimension data. Any reporting that depends on employee enrichment must be deferred or handled with denormalized fields already present in the punch data.

  5. Partial failures need a predictable recovery contract. The plan must define timeout, retry, watermark, and replay behavior before coding.

  6. Dev region must stay explicit if Azure regional capacity blocks the default. dev now uses centralus so the Function App and shared reporting database stay co-located after eastus2 Azure SQL provisioning was blocked for this subscription. If region support changes again, the repo defaults must be updated instead of relying on hidden workspace overrides.

Alternative Approaches Considered

1. Standalone repository and full original spec

Rejected for now because it conflicts with ADR 001, expands scope unnecessarily, and would duplicate infrastructure and workflow overhead for a single small-team service.

2. Monorepo but full employee plus webhook scope in week one

Rejected for now because the user explicitly narrowed week-one scope to time-entry pull only. It also adds dependency on webhook enablement and more schema work before the first reporting slice proves value.

Acceptance Criteria

Functional Requirements

  • A new service exists at services/paylocity-time-entry-sync/ with a clear README and local development contract.
  • A dedicated dev environment root exists under infrastructure/environments/paylocity-time-entry-sync/dev/.
  • The Azure SQL module is implemented as a real reusable module, not a placeholder.
  • A reusable Azure Function App module exists for this service’s hosting model.
  • The dev environment can provision the service’s Azure resources through Terraform.
  • The timer-based sync can ingest Paylocity time-entry data for configured company IDs into Azure SQL.
  • Re-running the same date range does not create duplicate time-entry rows.
  • SyncRun and SyncWatermark correctly reflect successful, empty, and failed per-company runs.
  • Validation queries exist and reconcile the first approved test window against Paylocity.

Non-Functional Requirements

  • The design follows monorepo conventions and keeps all infrastructure Terraform-managed.
  • The runtime choice uses currently supported Azure Functions guidance for new deployments.
  • Azure secrets and identities follow least-privilege defaults.
  • The implementation does not require user-facing auth components that do not apply to this backend service.
  • The service records enough telemetry and logs to debug sync failures without attaching a debugger to production code.

Quality Gates

  • terraform fmt -recursive infrastructure
  • terraform validate for the new environment root
  • tflint --chdir=infrastructure --recursive
  • Python unit tests for client, mapping, and watermark behavior
  • First dev deployment completes successfully
  • First manual sync completes successfully for an approved date window

Dependencies And Prerequisites

  • confirmed Paylocity company IDs and facility mapping
  • Paylocity API credentials for the target environment
  • at least one sample punch-detail payload or successful sandbox call
  • confirmed Flex Consumption region support for the chosen Azure region
  • GitHub environment and OIDC identity for dev Function App deploys
  • Azure subscription access to create:
    • Function App resources
    • storage account
    • Key Vault
    • Azure SQL
  • Terraform Cloud workspace bootstrap for the new service environment

Risks And Mitigations

  • Risk: real Paylocity punch-detail payload differs from the standalone spec assumptions
    • Impact: schema churn or wrong uniqueness contract
    • Mitigation: make payload confirmation Phase 0 and do not freeze table keys before it
  • Risk: Flex Consumption is unavailable in the preferred region
    • Impact: infra plan blocked or region drift
    • Mitigation: check supported regions first and document any deviation immediately
  • Risk: mssql-python proves immature or unstable in the chosen runtime
    • Impact: delayed first deploy
    • Mitigation: test it in Phase 0 and fall back quickly to pyodbc + ODBC Driver 18 if needed
  • Risk: Key Vault and RBAC bootstrap timing breaks first apply
    • Impact: failed Terraform run
    • Mitigation: reuse the repo’s proven runner-permission and RBAC propagation patterns
  • Risk: scope creep pulls employee sync into week one
    • Impact: delayed reporting outcome
    • Mitigation: keep employee and webhook work explicitly out of the first implementation slice
  • Risk: SQL authentication becomes the fastest but weakest path
    • Impact: long-term credential debt
    • Mitigation: prefer managed identity; if a temporary SQL secret is required, log it as a follow-up hardening task

Documentation Plan

  • add an ADR for runtime/hosting and repo placement decisions
  • add the service README
  • add environment-root README notes for bootstrap and deployment
  • capture any Azure or Paylocity integration surprises in docs/solutions/ immediately after they are resolved

References & Research

Internal References

  • docs/brainstorms/2026-03-23-paylocity-reporting-sync-brainstorm.md
  • docs/decisions/001-monorepo-strategy.md
  • services/README.md
  • infrastructure/modules/README.md
  • infrastructure/modules/azure_sql/main.tf
  • infrastructure/environments/requisition-workflow/README.md
  • docs/solutions/integration-issues/requisition-workflow-dev-apply-quota-and-key-vault-rbac-failures-20260319.md
  • docs/solutions/integration-issues/requisition-workflow-bootstrap-auth-and-naming-hardening-20260319.md
  • docs/solutions/integration-issues/azure-ad-oidc-bootstrap-permission-failures-20260316.md
  • docs/solutions/workflow-issues/terraform-lint-module-stubs-requisition-workflow-20260319.md
  • /Users/matt.hartman/Downloads/monti-reporting-db-build-spec.md

External References

  • Paylocity Developer Portal, latest API reference surface reviewed on 2026-03-23:
    • Company punch-detail operation: https://developer.paylocity.com/integrations/reference/post_apihub_time_v2_companies_companyid_punchdetails
    • Authentication: https://developer.paylocity.com/integrations/reference/authentication
  • Microsoft Learn:
    • Azure Functions Consumption plan hosting (legacy notice): https://learn.microsoft.com/en-us/azure/azure-functions/consumption-plan
    • Create and manage Function Apps in Flex Consumption: https://learn.microsoft.com/en-in/azure/azure-functions/flex-consumption-how-to
    • Azure SQL bindings for Azure Functions: https://learn.microsoft.com/en-us/azure/azure-functions/functions-bindings-azure-sql
    • Troubleshoot Python Function Apps in Azure Functions: https://learn.microsoft.com/en-us/azure/azure-functions/recover-python-functions
  • Microsoft mssql-python release notes:
    • https://github.com/microsoft/mssql-python/releases

Next Steps

  1. Review the plan for naming and scope alignment.
  2. If accepted, convert the plan into execution work focused on:
    • environment root and module contracts
    • schema definition
    • timer-based sync implementation
    • dev validation

Integration Issue: Paylocity Deploy RBAC Needed Principal IDs, Not Entra Lookups

Problem

The Paylocity service environments originally accepted GitHub deploy app client IDs through deployer_client_ids. Terraform then looked up each enterprise application with data.azuread_service_principal so it could assign Azure roles.

That worked only if the Terraform Cloud workspace identity had Microsoft Entra directory-read permissions. During the first production bootstrap, it did not. The plan failed before any Azure resources could be created, even though the actual Azure RBAC work only needed the enterprise application's service principal object ID.

Environment

  • Repository: monti-platform
  • Affected roots:
    • infrastructure/environments/paylocity-time-entry-sync/dev
    • infrastructure/environments/paylocity-time-entry-sync/prod
  • Failed workspace: paylocity-time-entry-sync-prod
  • Failing deploy identity:
    • GitHub OIDC app client ID: 9088cb5b-3d91-483d-8972-d0a5e39fec1a
    • Enterprise application object ID used for the fix: 11821858-d65a-45f6-8d0b-6107110786e0

Symptoms

Terraform failed during plan with:

Error: Listing service principals for filter "appId eq '9088cb5b-3d91-483d-8972-d0a5e39fec1a'"
unexpected status 403 (403 Forbidden) with error:
Authorization_RequestDenied: Insufficient privileges to complete the operation.

The failure happened before resource creation because the azuread provider lookup ran at plan time.

What Didn't Work

Original contract: pass GitHub deploy app client IDs into deployer_client_ids.

  • Why it failed: Azure RBAC assignments need a principal object ID, so Terraform had to query Microsoft Entra first. That made the whole environment depend on directory-read permissions the workspace identity did not have.
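For contrast, the broken shape looked roughly like this. This is a sketch, not code copied from the repo: the data-source usage and local names are illustrative, and the azuread provider attribute for an appId lookup differs between provider versions.

```hcl
# Hypothetical sketch of the original contract; names are illustrative.
# Looking up a service principal by app (client) ID forces the Terraform
# runner to hold Microsoft Entra directory-read permissions at plan time.
data "azuread_service_principal" "deployers" {
  for_each  = toset(var.deployer_client_ids)
  client_id = each.value # "application_id" in older azuread provider versions
}

locals {
  # The object IDs Azure RBAC actually needs, derived via the Graph lookup.
  deployer_object_ids = [
    for sp in data.azuread_service_principal.deployers : sp.object_id
  ]
}
```

Every plan, even a no-op plan, had to execute this Graph query first, which is why the 403 surfaced before any Azure resources were touched.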

Dev workaround reused in prod: clear the deployer list and grant roles out of band.

  • Why it failed: it kept production bootstrap partially manual and broke the goal of managing deploy RBAC through Terraform where possible.

Solution

Change the environment contract to accept enterprise application service principal object IDs directly and drop the azuread provider from these roots.

1. Replace deployer_client_ids with deployer_principal_ids

The variables now take the object IDs Azure RBAC actually needs:

variable "deployer_principal_ids" {
  description = "Service principal object IDs for deployment identities ..."
  type        = list(string)
  default     = []
}
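One optional hardening, not part of the original fix: a validation block can reject malformed entries early. Note the limitation up front: an app client ID is also a GUID, so a shape check cannot catch that particular mix-up, only garbage input such as display names or empty strings.

```hcl
# Hypothetical hardening sketch; the description text is illustrative.
variable "deployer_principal_ids" {
  description = "Service principal object IDs for deployment identities"
  type        = list(string)
  default     = []

  validation {
    condition = alltrue([
      for id in var.deployer_principal_ids :
      can(regex("^[0-9a-fA-F]{8}(-[0-9a-fA-F]{4}){3}-[0-9a-fA-F]{12}$", id))
    ])
    error_message = "Each deployer_principal_ids entry must be a GUID (service principal object ID)."
  }
}
```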

2. Remove the Entra lookup

The roots no longer query data.azuread_service_principal at plan time. Role assignments now use the provided object IDs directly:

[
  for principal_id in var.deployer_principal_ids : {
    scope                = local.reporting_environment.sql_server_id
    role_definition_name = "SQL Server Contributor"
    principal_id         = principal_id
  }
]

That eliminated the need for the azuread provider in these environment roots.
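For completeness, a list shaped like the one above would typically feed azurerm_role_assignment through for_each. This is a sketch with assumed local and resource names, not the repo's actual wiring; the composite key keeps each assignment stable if an identity ever holds more than one role at the same scope.

```hcl
# Hypothetical consumer of the assignment list above; names are illustrative.
resource "azurerm_role_assignment" "deployers" {
  for_each = {
    for a in local.deployer_role_assignments :
    "${a.principal_id}-${a.role_definition_name}" => a
  }

  scope                = each.value.scope
  role_definition_name = each.value.role_definition_name
  principal_id         = each.value.principal_id
}
```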

3. Look up the enterprise application object ID once, outside Terraform

Use Azure CLI once during bootstrap:

az ad sp show --id 9088cb5b-3d91-483d-8972-d0a5e39fec1a --query id -o tsv

Then store that object ID in the Terraform Cloud workspace variable:

deployer_principal_ids = ["11821858-d65a-45f6-8d0b-6107110786e0"]

Why This Works

The broken design mixed two separate concerns:

  1. identifying the GitHub workload identity in Microsoft Entra
  2. granting Azure RBAC on real Azure resources

Terraform only needed the second input. Because the original contract forced Terraform to derive that object ID from the client ID at plan time, the environment inherited an unnecessary dependency on Microsoft Entra directory reads.

Using service principal object IDs directly keeps the RBAC model reproducible in Terraform without requiring the Terraform runner to have extra Graph privileges.

Prevention

  • For Azure RBAC assignments, prefer storing principal object IDs directly when the set of identities is small and stable.
  • Avoid the azuread provider in environment roots unless a real directory read is part of the product contract, not just bootstrap convenience.
  • If an identity is created outside Terraform, record both the app client ID and the enterprise application object ID during bootstrap.
  • Treat Graph-read requirements as a separate privilege boundary from Azure RBAC.

Related Issues