
BA Genie App - Developer Wiki

Deployment

DEV Deployment

Trigger: Push to main branch

Workflow: .github/workflows/deploy-dev.yml

Process:

  • Triggered automatically on push to main
  • Runs SST deploy with stage dev
  • After a successful deployment, runs the Prisma database migration

Stage: dev
Environment Name: dev
URL: https://dev.{ROOT_DOMAIN}

Production Deployment

Trigger: Push a tag matching pattern prod-*

Workflow: .github/workflows/deploy-prod.yml

Process:

  1. Database Migration - Runs first (Prisma migrations)
  2. SST Deploy - Main application deployment
  3. Verification - Waits 30 seconds, then verifies the deployment is responding

Tag Pattern: prod-YY.MM.DD-XX

  • YY - Year (2 digits)
  • MM - Month (2 digits)
  • DD - Day (2 digits)
  • XX - Release number for that day (starting from 01)

Examples:

  • prod-25.10.20-01 - First release on October 20, 2025
  • prod-25.10.20-02 - Second release on October 20, 2025
  • prod-25.12.05-01 - First release on December 5, 2025
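The tag pattern can be validated and parsed with a small helper (an illustrative sketch only; the workflow itself matches tags with a glob pattern):

```typescript
// Illustrative helper (not part of the repo): validates and parses
// production tags of the form prod-YY.MM.DD-XX.
const PROD_TAG = /^prod-(\d{2})\.(\d{2})\.(\d{2})-(\d{2})$/;

function parseProdTag(
  tag: string
): { year: number; month: number; day: number; release: number } | null {
  const m = PROD_TAG.exec(tag);
  if (!m) return null;
  return {
    year: 2000 + Number(m[1]),
    month: Number(m[2]),
    day: Number(m[3]),
    release: Number(m[4]),
  };
}
```

For example, parseProdTag("prod-25.10.20-02") yields the second release of October 20, 2025, while a non-matching tag returns null.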

How to deploy to production:

  1. Create a release via GitHub UI (recommended):
     • Go to Repository → Releases → Draft a new release
     • Click Choose a tag → Type the new tag (e.g., prod-25.12.05-01) → Create new tag
     • Set the release title (e.g., Production Release 25.12.05-01)
     • Add release notes describing the changes
     • Click Publish release

  2. Or via the command line:

    # Create and push the tag
    git tag prod-25.12.05-01
    git push origin prod-25.12.05-01
    
    # Optionally create a GitHub release afterwards via CLI
    gh release create prod-25.12.05-01 --title "Production Release 25.12.05-01" --notes "Release notes here"
    

Stage: production
Environment Name: prod
URL: https://production.{ROOT_DOMAIN}

Preview/Branch Deployment

Trigger: Pull Request (opened, synchronize, reopened)

Workflow: .github/workflows/deploy-preview.yml

Process:

  • Generates a URL-safe slug from the branch name
  • Deploys to a unique stage: review-{branch-slug}

Stage: review-{slug}
Environment Name: review/{slug}
URL: https://review-{slug}.{ROOT_DOMAIN}

Note: Preview environments are automatically created for each PR and use the same secrets as DEV.
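Deriving a URL-safe slug from a branch name might look roughly like this (a sketch of the idea only; the exact rules live in deploy-preview.yml and may differ):

```typescript
// Hypothetical sketch: turn a branch name into a URL-safe slug for
// review-{slug} stages. The real workflow's rules may differ.
function branchSlug(branch: string, maxLength = 20): string {
  return branch
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-") // collapse non-alphanumerics to hyphens
    .replace(/^-+|-+$/g, "")     // trim leading/trailing hyphens
    .slice(0, maxLength)
    .replace(/-+$/g, "");        // re-trim if truncation ended on a hyphen
}
```

For example, a branch named feature/My_New-Thing would deploy to the review-feature-my-new-thing stage.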


Environment Variables

How Environment Variables Flow to Code

┌─────────────────────────────────────────────────────────────────────────────┐
│                              GitHub                                          │
│  ┌─────────────────────┐     ┌─────────────────────┐                        │
│  │   Repository        │     │   Repository        │                        │
│  │   Secrets           │     │   Variables         │                        │
│  │   (sensitive)       │     │   (non-sensitive)   │                        │
│  └──────────┬──────────┘     └──────────┬──────────┘                        │
└─────────────┼───────────────────────────┼───────────────────────────────────┘
              │                           │
              ▼                           ▼
┌─────────────────────────────────────────────────────────────────────────────┐
│                    GitHub Actions Workflow                                   │
│                    (.github/workflows/_sst-deploy.yml)                       │
│                                                                              │
│  env:                                                                        │
│    AUTH_SECRET: ${{ secrets.AUTH_SECRET }}      ◄── from secrets            │
│    AWS_REGION: ${{ vars.AWS_REGION }}           ◄── from variables          │
│    ROOT_DOMAIN: ${{ vars.ROOT_DOMAIN }}                                      │
└──────────────────────────────┬──────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────────────────────┐
│                         SST Deploy (sst.config.ts)                           │
│                                                                              │
│  process.env.AUTH_SECRET      ◄── available via process.env                 │
│  process.env.ROOT_DOMAIN                                                     │
│                                                                              │
│  const environment = {                                                       │
│    DATABASE_URL,                                                             │
│    AUTH_SECRET: process.env.AUTH_SECRET,                                     │
│    ...                                                                       │
│  };                                                                          │
└──────────────────────────────┬──────────────────────────────────────────────┘
              ┌────────────────┼────────────────┐
              ▼                ▼                ▼
┌─────────────────┐  ┌─────────────────┐  ┌─────────────────┐
│  Lambda         │  │  Next.js App    │  │  Step Functions │
│  Functions      │  │                 │  │                 │
│                 │  │  environment:   │  │                 │
│  environment:   │  │    {...}        │  │  (invokes       │
│    {...}        │  │                 │  │   Lambdas)      │
└─────────────────┘  └─────────────────┘  └─────────────────┘

Workflow File Configuration (.github/workflows/_sst-deploy.yml):

# Secrets are passed from calling workflow
secrets:
  AUTH_SECRET: { required: false }
  AUTH_COGNITO_SECRET: { required: false }
  # ...

# Environment variables are set for the job
env:
  # From GitHub Variables (non-sensitive)
  AWS_ACCOUNT_ID: ${{ vars.AWS_ACCOUNT_ID }}
  ROOT_DOMAIN: ${{ vars.ROOT_DOMAIN }}
  AWS_REGION: ${{ vars.AWS_REGION }}

  # From GitHub Secrets (sensitive)
  AUTH_SECRET: ${{ secrets.AUTH_SECRET }}
  AUTH_COGNITO_SECRET: ${{ secrets.AUTH_COGNITO_SECRET }}
  # ...

SST Config (sst.config.ts):

// Environment object passed to Lambda functions and Next.js
const environment = {
  DATABASE_URL,
  AUTH_SECRET: process.env.AUTH_SECRET,
  ROOT_DOMAIN: process.env.ROOT_DOMAIN,
  // ... other env vars
};

// Passed to Lambda functions
new sst.aws.Function('MyLambda', {
  environment,  // ◄── env vars available in Lambda
});

// Passed to Next.js app
new sst.aws.Nextjs('BAGenieApp', {
  environment: {
    ...environment,
    // Additional Next.js specific vars
  },
});

GitHub Secrets

GitHub has two levels of secrets:

  • Repository Secrets - Available to all workflows (used by DEV & Preview)
  • Environment Secrets - Scoped to a specific environment (used by PROD)

Important: Environment secrets override Repository secrets with the same name. This allows PROD to use different values than DEV/Preview.
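This override behaviour can be modeled as a spread merge in which environment-scoped values win (an illustration with placeholder values, not GitHub's actual implementation):

```typescript
// Model of GitHub's secret resolution: environment secrets shadow
// repository secrets of the same name. All values are placeholders.
const repositorySecrets = {
  AUTH_SECRET: "dev-secret",
  AZURE_OPENAI_API_KEY: "shared-key",
};
const prodEnvironmentSecrets = {
  AUTH_SECRET: "prod-secret",
};

// Environment secrets spread last, so they take precedence.
const effectiveProdSecrets = { ...repositorySecrets, ...prodEnvironmentSecrets };
```

Here AUTH_SECRET resolves to the environment value while AZURE_OPENAI_API_KEY, which exists only at the repository level, is shared by all environments.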

Repository Secrets

Settings → Secrets and variables → Actions → Repository secrets

Secret | Used By | Description
AUTH_SECRET | DEV, Preview | NextAuth.js secret for session encryption
AUTH_COGNITO_SECRET | DEV, Preview | AWS Cognito client secret
AZURE_OPENAI_API_KEY | DEV, Preview, PROD | Azure OpenAI API key
RECALLAI_API_KEY | DEV, Preview | Recall.ai API key for bot functionality
RECALLAI_WEBHOOK_SECRET | DEV, Preview | Webhook secret for Recall.ai callbacks

Environment Secrets - prod

Settings → Environments → prod → Environment secrets

Secret | Description
AUTH_SECRET | Production NextAuth.js secret (overrides repository)
AUTH_COGNITO_SECRET | Production Cognito client secret (overrides repository)
AUTH_MICROSOFT_ENTRA_ID_SECRET | Microsoft Entra ID OAuth secret (PROD only)
RECALLAI_API_KEY | Production Recall.ai API key (overrides repository)
RECALLAI_WEBHOOK_SECRET | Production webhook secret (overrides repository)

Note: AZURE_OPENAI_API_KEY is only in Repository secrets and is shared across all environments.

GitHub Variables

These are non-sensitive configuration values stored in GitHub Variables:

Variable | Description
AWS_ACCOUNT_ID | AWS Account ID for deployment
ROOT_DOMAIN | Base domain (e.g., ba-genie.dev.bytemethod.ai)
AWS_REGION | AWS Region (e.g., us-east-1)
AUTH_COGNITO_ID | Cognito User Pool App Client ID
AUTH_COGNITO_ISSUER | Cognito Issuer URL
RECALLAI_REGION | Recall.ai region
AUTH_MICROSOFT_ENTRA_ID_ID | Microsoft Entra ID Client ID (PROD only)
AUTH_MICROSOFT_ENTRA_ID_ISSUER | Microsoft Entra ID Issuer URL (PROD only)

Debugging & Logs

Bot Workflow (Step Function)

Resource Name Pattern: {app-name}-{stage}-BotSFN
Example: ba-genie-app-dev-BotSFN

How to find logs in AWS Console:

  1. Go to AWS Console → Step Functions
  2. Search for BotSFN in the state machines list
  3. Select the state machine matching your stage (e.g., ba-genie-app-dev-BotSFN)
  4. Click the Executions tab to see all executions
  5. Click a specific execution to see:
     • The execution graph with step-by-step status
     • Input/Output for each step
     • Error details if the execution failed

Bot SFN Flow:

Start Bot → Process Recording → Process Transcription → Convert Transcription → Generate Meeting Notes

Related Lambda Functions (check CloudWatch Logs):

  • StartBotLambda - Starts the bot and waits for the recording
  • ProcessRecordingLambda - Processes the recording and waits for the transcript
  • ProcessTranscriptionLambda - Converts the transcription to the BA Genie format
  • ConvertTranscriptionLambda - Converts the transcription to an input file

Document Processing (Step Function)

Resource Name Pattern: ba-genie-app-{stage}-ProcessDocumentsSFN
Example: ba-genie-app-dev-ProcessDocumentsSFN

How to find logs in AWS Console:

  1. Go to AWS Console → Step Functions
  2. Search for ProcessDocumentsSFN in the state machines list
  3. Select the state machine matching your stage
  4. Click the Executions tab

Document Processing SFN Flow:

Init → Execute Service (waits for task token) → Success Action → Success
                 ↓ (on error)
            Failure Action → Failure

Related Lambda Functions (check CloudWatch Logs):

  • InitLambda - Initializes document processing
  • SuccessActionLambda - Handles successful processing
  • FailureActionLambda - Handles failures
  • MeetingNotesLambda - Generates meeting notes
  • RequirementsBacklogLambda - Generates the requirements backlog
  • DiscoveryDocLambda - Generates discovery documents
  • ProcessFlowLambda - Generates process flows
  • DocumentChunkingLambda - Chunks large documents

To find Lambda logs:

  1. Go to AWS Console → CloudWatch → Log Groups
  2. Search for /aws/lambda/{function-name} (e.g., /aws/lambda/ba-genie-app-dev-InitLambda)

Email Invite Handler

Resource Name Pattern: {app-name}-{stage}-EmailReceivedSnsHandler
Example: ba-genie-app-dev-EmailReceivedSnsHandler

How to find logs in AWS Console:

  1. Go to AWS Console → CloudWatch → Log Groups
  2. Search for EmailReceivedSnsHandler
  3. Select the log group matching your stage

Email Processing Flow:

Email received → SES → S3 (EmailInviteS3) → SNS (EmailReceivedSNS) → Lambda (processEmailInvite) → Bot SFN

Related Resources:

  • S3 Bucket: EmailInviteS3 - Stores incoming emails (auto-deleted after 7 days)
  • SNS Topic: EmailReceivedSNS - Triggers on new email
  • Lambda Handler: functions/email.processEmailInvite


Local Development

Testing Step Functions with SST Dev

How it works:

  1. SST deploys infrastructure to AWS (including Step Functions)
  2. Lambda functions are proxied to your local machine
  3. Step Functions execute in AWS but invoke your local Lambda code
  4. Logs appear in your terminal in real time

Steps:

  1. Start SST Dev:

    npx sst dev

  2. Trigger the Step Function:
     • Bot SFN: Send an email invite or trigger via the UI
     • Document Processing SFN: Upload a document via the UI

  3. View logs: Lambda execution logs appear in your terminal

  4. View the Step Function execution in the AWS Console:
     • Go to AWS Console → Step Functions
     • Find your state machine (e.g., ba-genie-app-{your-username}-BotSFN)
     • Click the execution to see the graph, input/output for each step, and errors

Useful Commands:

# Start dev mode
npx sst dev

Prerequisites:

  • AWS credentials configured with the bagenie profile (see sst.config.ts)
  • Copy .env.example to .env.local and fill in the required values
  • Set STAGE in .env.local to your name/nickname (e.g., STAGE=daniel) - this helps identify your resources in the AWS Console (e.g., ba-genie-app-daniel-BotSFN)
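Resource names throughout this wiki follow the {app-name}-{stage}-{resource} pattern. As a sketch (SST derives the real names; this just documents the convention):

```typescript
// Illustrative only: how stage-qualified resource names are composed.
// SST generates the actual physical names from the app and stage config.
function resourceName(app: string, stage: string, resource: string): string {
  return `${app}-${stage}-${resource}`;
}
```

With STAGE=daniel, resourceName("ba-genie-app", "daniel", "BotSFN") gives ba-genie-app-daniel-BotSFN, which is the name to search for in the AWS Console.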


Versions/Features

Baseline v1 - Core Bot & Document Generation

Overview

The foundational release of BA Genie includes automated meeting bot invitations, recording/transcription processing, and intelligent document generation from meeting content.

Features

1. Bot Invitation & Meeting Capture

Email-Based Bot Invitations

  • Users forward calendar invites to genie.bot@{stage}.{ROOT_DOMAIN}
  • The system parses ICS files to extract meeting details (time, link, participants)
  • Automatically schedules a Recall.ai bot to join meetings (Zoom, Google Meet, Teams)
  • Supports both immediate and future meeting scheduling

Manual Meeting Start

  • Direct meeting URL input via the UI
  • Instant bot deployment to ongoing meetings
  • Real-time meeting validation

Architecture Flow:

User forwards calendar invite
SES receives email → S3 storage
SNS triggers Lambda (processEmailInvite)
Parse ICS with node-ical
Create Meeting record in database
Start BotSFN Step Function
Recall.ai bot joins meeting at scheduled time
Records video/audio + transcription
Webhook delivers recording & transcript to S3

Key Components:

  • Email Handler (functions/email.ts): Parses calendar invites, manages the meeting lifecycle (create/update/cancel)
  • Bot Lambda (functions/bot.ts): Starts the Recall.ai bot with a task token, configures recording settings
  • BotSFN (infra/bot.ts): Orchestrates the entire bot workflow with Step Functions
  • Webhook Handler (app/api/webhook/route.ts): Receives Recall.ai events (recording.done, transcript.done)
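The ICS parsing step can be illustrated with a minimal sketch. Note the real handler uses the node-ical library; this regex-based version only shows which fields the email handler needs (start time, meeting link, summary):

```typescript
// Minimal illustrative sketch of ICS field extraction. NOT the real
// implementation (functions/email.ts uses node-ical); it only shows
// the fields of interest. Handles optional parameters like DTSTART;TZID=...
function extractIcsFields(
  ics: string
): { summary?: string; dtstart?: string; location?: string } {
  const pick = (key: string) =>
    ics.match(new RegExp(`^${key}[^:]*:(.*)$`, "m"))?.[1]?.trim();
  return {
    summary: pick("SUMMARY"),
    dtstart: pick("DTSTART"),
    location: pick("LOCATION"),
  };
}
```

In practice the extracted meeting link and start time drive whether the bot is dispatched immediately or scheduled for a future meeting.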

Supported Meeting Platforms:

  • Zoom
  • Google Meet
  • Microsoft Teams


2. Process Flow Document Generation

Purpose: Automatically generates visual process flow diagrams and documentation from meeting transcripts.

Generator: ProcessFlowGenerator (lib/services/generators/process-flow.ts)

Input: Meeting transcripts with business process discussions

Output: A Markdown document with:

  • Process overview and objectives
  • Step-by-step workflow descriptions
  • Decision points and branching logic
  • Actor/role descriptions
  • System interactions
  • Exception handling flows

AI Model: Uses Bedrock Claude to analyze conversations and extract structured process flows

Storage: Generated documents saved to ProjectsS3 bucket at projects/{projectId}/process-flow-{timestamp}.md

Lambda Handler: ProcessFlowLambda (functions/document-processing.ts)

Trigger: User-initiated via project UI after meeting notes are available


3. Discovery Document Generation

Purpose: Creates comprehensive discovery documentation capturing requirements, context, and business needs discussed in meetings.

Generator: DiscoveryGenerator (lib/services/generators/discovery.ts)

Input: Meeting transcripts and existing project context

Output: A structured Markdown document with:

  • Business context and background
  • Stakeholder identification
  • Problem statements
  • Solution requirements
  • Constraints and assumptions
  • Success criteria
  • Open questions and risks

AI Model: Bedrock Claude analyzes transcripts to extract discovery information

Storage: Saved to ProjectsS3 at projects/{projectId}/discovery-doc-{timestamp}.md

Lambda Handler: DiscoveryDocLambda (functions/document-processing.ts)

Trigger: User-initiated via project UI


4. Requirements Backlog Generation

Purpose: Automatically extracts and organizes user stories, epics, and technical requirements from meeting discussions.

Generator: RequirementsBacklogGenerator (lib/services/generators/requirements-backlog.ts)

Input: Meeting transcripts and project documentation

Output: A Markdown document with:

  • Epics (high-level features)
  • User stories with acceptance criteria
  • Technical requirements
  • Non-functional requirements
  • Dependencies between requirements
  • Priority indicators

Structure:

{
  "epics": [
    {
      "epicName": "Epic Title",
      "description": "...",
      "userStories": [
        {
          "storyTitle": "As a user...",
          "acceptanceCriteria": ["..."],
          "priority": "high"
        }
      ]
    }
  ]
}

AI Model: Bedrock Claude with specialized prompts for requirement extraction

Storage: Saved to ProjectsS3 at projects/{projectId}/requirements-backlog-{timestamp}.md

Lambda Handler: RequirementsBacklogLambda (functions/document-processing.ts)

Trigger: User-initiated via project UI


Step Function Flow (ProcessDocumentsSFN)

Init
Execute Service (waits for task token)
  - MeetingNotesLambda / ProcessFlowLambda / DiscoveryDocLambda / RequirementsBacklogLambda
Success Action
  - Update database
  - Notify via WebSocket
Success

(On Error)
Failure Action
  - Log error
  - Notify user
Failure

Common Features Across All Generators:

  • Streaming AI responses with progress updates via WebSocket
  • S3 storage with timestamped filenames
  • Database records linking documents to projects
  • Error handling with detailed logging
  • Retry logic for transient failures


Resource Names (v1)

  • Bot Step Function: ba-genie-app-{stage}-BotSFN
  • Document Processing SFN: ba-genie-app-{stage}-ProcessDocumentsSFN
  • Email Handler Lambda: ba-genie-app-{stage}-EmailReceivedSnsHandler
  • Process Flow Lambda: ba-genie-app-{stage}-ProcessFlowLambda
  • Discovery Doc Lambda: ba-genie-app-{stage}-DiscoveryDocLambda
  • Requirements Lambda: ba-genie-app-{stage}-RequirementsBacklogLambda

v2 - Video Processing & Manual Transcripts

Overview

Version 2 expands input capabilities by supporting direct video/audio uploads and manual transcript uploads, eliminating the dependency on live meeting bots for all workflows.

New Features

1. Video/Audio Upload Processing

Purpose: Process pre-recorded meetings or videos without requiring a live bot to join.

Supported Formats:

  • Video: MP4, MOV, AVI, WebM
  • Audio: MP3, WAV, M4A, OGG

Workflow:

User uploads video/audio file via UI
File stored in MeetingsS3
VideoUploadSFN triggered
ProcessVideoUploadLambda
  - Generates presigned S3 URL (12hr expiry)
  - Submits to external transcription API
  - Polls for completion (max 30 min timeout)
Receives transcription with speaker diarization
Converts to BA Genie transcript format
Stores in S3 as JSON
ConvertTranscription → MeetingNotes → Email flow

Lambda Handler: functions/video-processing.ts

Key Features:

  • Speaker Diarization: Automatically identifies and labels different speakers
  • Timestamp Accuracy: Preserves precise timestamps for each utterance
  • Long Video Support: 12-hour presigned URLs for processing lengthy recordings
  • Polling Mechanism: Checks transcription status every 30 seconds with exponential backoff
  • Chunk Processing: Handles large videos by processing them in chunks
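The polling cadence can be sketched as a capped exponential backoff schedule (an illustration with assumed base interval, growth factor, and per-wait cap; the production handler's exact intervals may differ):

```typescript
// Illustrative only: compute polling delays (in seconds) with exponential
// backoff, stopping so the cumulative wait stays within the overall timeout
// (30 minutes for the transcription poller described above).
function backoffSchedule(
  baseSeconds: number,
  factor: number,
  maxTotalSeconds: number
): number[] {
  const delays: number[] = [];
  let next = baseSeconds;
  let total = 0;
  while (total + next <= maxTotalSeconds) {
    delays.push(next);
    total += next;
    next = Math.min(next * factor, 300); // assumed cap: 5 minutes per wait
  }
  return delays;
}
```

With a 30-second base and a factor of 2, the waits grow (30, 60, 120, 240 seconds, ...) and then plateau at the cap until the 30-minute budget is spent.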

External API Integration:

  • Uses a custom transcription service (configured via TRANSCRIPTION_API_KEY_HASHED)
  • Supports multi-chunk processing for videos longer than 1 hour
  • Returns structured JSON with speaker labels and timestamps

Step Function: VideoUploadSFN (infra/bot.ts)

ProcessVideoUpload → ConvertTranscription → GenerateMeetingNotes → GenerateSummary → SendEmail

Error Handling:

  • Validates file size and format before processing
  • Retries failed API calls up to 3 times
  • Provides detailed error messages to users via WebSocket
  • Automatically cleans up failed processing artifacts


2. Manual Transcript Upload Processing

Purpose: Allow users to upload existing transcripts in various formats without needing video/audio files.

Supported Formats:

  • Plain text (.txt)
  • JSON (.json) - BA Genie or generic transcript format
  • WebVTT (.vtt) - Video subtitle format
  • SubRip (.srt) - Subtitle format
  • DOCX (.docx) - Word documents (basic text extraction)

Workflow:

User uploads transcript file via UI
File stored in MeetingsS3
ManualUploadSFN triggered
ProcessManualUploadLambda
  - Detects file format from extension
  - Parses content based on format
  - Converts to BA Genie Transcription format
Stores standardized JSON in S3
ConvertTranscription → MeetingNotes → Email flow

Lambda Handler: functions/manual-upload.ts

Format-Specific Parsing:

Plain Text (.txt):

  • Splits by double newlines or speaker patterns
  • Detects speaker labels (e.g., "John:", "Speaker 1:")
  • Assigns sequential timestamps
  • Fallback: treats the entire content as a single speaker

JSON (.json):

// Supported structure:
{
  "transcription": [
    {
      "timestamp": 0,
      "text": "Hello world",
      "speaker": "John Doe"
    }
  ]
}
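A defensive check of this shape might look like the following sketch (a hypothetical helper; the real parser in functions/manual-upload.ts may accept more variants):

```typescript
interface UploadedBlock {
  timestamp: number;
  text: string;
  speaker?: string;
}

// Sketch: narrow unknown parsed JSON to the supported transcription shape
// before processing. Only timestamp and text are required per block.
function isTranscriptionJson(
  value: unknown
): value is { transcription: UploadedBlock[] } {
  if (typeof value !== "object" || value === null) return false;
  const t = (value as { transcription?: unknown }).transcription;
  return (
    Array.isArray(t) &&
    t.every(
      (b) =>
        typeof b === "object" &&
        b !== null &&
        typeof (b as UploadedBlock).timestamp === "number" &&
        typeof (b as UploadedBlock).text === "string"
    )
  );
}
```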

WebVTT (.vtt):

00:00:10.000 --> 00:00:15.000
Speaker 1: This is the text

Parsed into timestamped blocks with speaker labels

SubRip (.srt):

1
00:00:10,000 --> 00:00:15,000
Speaker 1: This is the text

Converts SRT timestamps to seconds

DOCX (.docx):

  • Uses the mammoth library for text extraction
  • Strips formatting, keeps plain text
  • Processes the result as plain text with speaker detection

Speaker Detection Algorithm:

  1. Searches for patterns: Name:, Speaker N:, [Name]
  2. Extracts the speaker label and dialogue
  3. Maintains speaker consistency across blocks
  4. Falls back to "Unknown Speaker" if no pattern is found
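The detection steps above can be sketched for a single block as follows (illustrative; the real parser's pattern details may differ, e.g. to avoid false positives on URLs containing colons):

```typescript
// Sketch of speaker-label extraction for one block of text.
// Recognizes "[Name]" and "Name:" / "Speaker N:" prefixes; anything
// else falls back to "Unknown Speaker".
function detectSpeaker(block: string): { speaker: string; text: string } {
  const bracket = block.match(/^\s*\[([^\]]+)\]\s*(.*)$/s);
  if (bracket) return { speaker: bracket[1], text: bracket[2].trim() };

  const colon = block.match(/^\s*([A-Za-z][A-Za-z0-9 ._-]{0,40}):\s*(.*)$/s);
  if (colon) return { speaker: colon[1].trim(), text: colon[2].trim() };

  return { speaker: "Unknown Speaker", text: block.trim() };
}
```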

Timestamp Handling:

  • If the source has no timestamps: generates sequential timestamps (0, 10, 20, 30...)
  • If timestamps exist: preserves the original timing
  • Converts various time formats (HH:MM:SS, MM:SS.SSS, seconds) to a uniform seconds format
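Normalizing those time formats into seconds might look like this (a sketch, not the exact production parser):

```typescript
// Sketch: convert "HH:MM:SS", "MM:SS.SSS", SRT-style "HH:MM:SS,mmm",
// or a bare seconds value into a number of seconds. Each colon-separated
// field contributes at a factor of 60 over the one to its right.
function toSeconds(time: string): number {
  const normalized = time.replace(",", "."); // SRT uses a comma before ms
  const parts = normalized.split(":").map(Number);
  if (parts.some(Number.isNaN)) throw new Error(`Unparseable time: ${time}`);
  return parts.reduce((total, part) => total * 60 + part, 0);
}
```

So the SRT timestamp 00:00:15,000 and the bare value 15 both normalize to 15 seconds.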

Step Function: ManualUploadSFN (infra/bot.ts)

ProcessManualUpload → ConvertTranscription → GenerateMeetingNotes → GenerateSummary → SendEmail

Validation:

  • Minimum content length check (prevents empty files)
  • Format validation (ensures parseable content)
  • Speaker detection quality check
  • Warns users if transcript quality is low


Unified Transcription Format

All processing methods (Bot, Video, Manual) convert to this standard format:

interface Transcription {
  transcription: TranscriptionBlock[];
}

interface TranscriptionBlock {
  timestamp: number;        // seconds from start
  text: string;             // spoken text
  speaker?: string;         // speaker identifier
  speakerIndex?: number;    // numeric speaker ID
}

This standardization enables:

  • Consistent meeting notes generation
  • Uniform document processing
  • A shared processing pipeline (ConvertTranscription → MeetingNotes)


Resource Names (v2)

  • Video Upload SFN: ba-genie-app-{stage}-VideoUploadSFN
  • Manual Upload SFN: ba-genie-app-{stage}-ManualUploadSFN
  • Video Processing Lambda: ba-genie-app-{stage}-ProcessVideoUploadLambda
  • Manual Upload Lambda: ba-genie-app-{stage}-ProcessManualUploadLambda
  • Convert Transcription Lambda: ba-genie-app-{stage}-ConvertTranscriptionLambda

v3 - Email, Meeting Summary, and Delete Capability

Overview

After meeting notes are generated, the system automatically:

  1. Generates an AI-powered summary of the meeting notes using Bedrock Claude
  2. Sends an email to the user with the summary in the body and the meeting notes as a DOCX attachment

Architecture Flow

Meeting Notes Generated
Generate Meeting Summary Lambda
  - Fetches meeting notes from S3
  - Generates AI summary using Bedrock Claude
  - Stores metadata in S3
Send Meeting Email Lambda
  - Fetches metadata from S3
  - Converts MD to DOCX
  - Sends email via SES with attachment

Components

Lambda Functions

functions/meeting-summary.ts

  • Handler: generateSummary
  • Input: { meetingId: string }
  • Process:
    • Fetches the meeting from the database (includes the user's email)
    • Gets the meeting notes location from meeting.data.externalData.meetingNotes
    • Fetches the notes content from S3
    • Calls Bedrock Claude (us.anthropic.claude-sonnet-4-20250514-v1:0) to generate a 3-5 bullet point summary
    • Stores email metadata in S3 at meetings/{meetingId}/email-metadata.json
  • Timeout: 5 minutes
  • Permissions:
    • S3: GetObject, PutObject
    • Bedrock: InvokeModel, InvokeModelWithResponseStream

functions/send-meeting-email.ts

  • Handler: sendEmail
  • Input: { meetingId: string }
  • Process:
    • Fetches the email metadata from S3
    • Downloads the meeting notes from S3
    • Converts Markdown to DOCX (if not already available)
    • Sends the email via SES using sendMeetingNotesEmail()
    • Updates the meeting record with emailSent data
  • Timeout: 2 minutes
  • Permissions:
    • S3: GetObject
    • SES: SendEmail, SendRawEmail

Step Functions Integration

All three Step Functions include the email flow at the end:

  • BotSFN: startBot → processRecording → processTranscription → convertTranscription → meetingNotes → generateMeetingSummary → sendMeetingEmail
  • ManualUploadSFN: processManualUpload → convertTranscription → meetingNotes → generateMeetingSummary → sendMeetingEmail
  • VideoUploadSFN: processVideoUpload → convertTranscription → meetingNotes → generateMeetingSummary → sendMeetingEmail

Data Flow Through Step Functions

meetingNotesSfnInvoke (nested SFN)
  output: { meetingId }
generateMeetingSummaryLambdaInvoke
  payload: { meetingId }
  → Stores: meetings/{meetingId}/email-metadata.json
  output: { meetingId, success }
sendMeetingEmailLambdaInvoke
  payload: { meetingId }
  → Reads: meetings/{meetingId}/email-metadata.json
  output: { success, messageId }

Email Service

lib/services/email-sender.ts

  • Function: sendMeetingNotesEmail()
  • Sender: noreply@ba-genie.itsaiplatform.com
  • Format: MIME multipart with DOCX attachment
  • Region: us-east-1 (SES)

AWS SES Setup

Domain Verification

  1. Add a domain identity in the SES Console for ba-genie.itsaiplatform.com
  2. Add DNS records:
     • DKIM: 3 CNAME records (provided by SES)
     • SPF: TXT record v=spf1 include:amazonses.com ~all
     • DMARC: TXT record v=DMARC1; p=quarantine; rua=mailto:dmarc@ba-genie.itsaiplatform.com

Sandbox vs Production Mode

Sandbox Mode (Development):

  • Both the sender AND the recipient must be verified
  • Verify recipient emails: SES Console → Verified Identities → Create Identity → Email
  • Limited sending quota

Production Mode:

  • Request production access in the SES Console to remove sandbox restrictions
  • Can send to any email address
  • Higher sending quotas

Email Metadata Structure (S3)

Stored at s3://meetings-bucket/meetings/{meetingId}/email-metadata.json:

{
  "meetingId": "uuid",
  "summary": "• Point 1\n• Point 2\n• Point 3",
  "notesBucket": "bucket-name",
  "notesKey": "meetings/uuid/meeting-notes.md",
  "recipientEmail": "user@example.com",
  "meetingTitle": "Meeting Subject",
  "generatedAt": "2026-02-23T10:00:00.000Z"
}

Debugging Email Flow

Check if email was sent:

# View Lambda logs
aws logs tail /aws/lambda/ba-genie-app-{stage}-SendMeetingEmailLambdaFunction --follow

# Check SES sending statistics
aws ses get-send-statistics --region us-east-1

# Manually trigger email resend
aws lambda invoke \
  --function-name ba-genie-app-{stage}-SendMeetingEmailLambdaFunction \
  --region us-east-1 \
  --payload '{"meetingId": "your-meeting-id"}' \
  --cli-binary-format raw-in-base64-out \
  /tmp/response.json

Common Issues:

  1. Email not received:
     • Check SES Console → Sending Statistics for delivery status
     • Verify the recipient email is verified (sandbox mode)
     • Check the spam/junk folder
     • Check that the sender domain matches the verified domain (noreply@ba-genie.itsaiplatform.com)
     • Check CloudWatch logs for errors

  2. Summary generation fails:
     • Check Bedrock model access: us.anthropic.claude-sonnet-4-20250514-v1:0
     • Verify the meeting notes exist in S3
     • Check CloudWatch logs: /aws/lambda/ba-genie-app-{stage}-GenerateMeetingSummaryLambdaFunction

  3. Step Function fails:
     • Check the Step Function execution graph in the AWS Console
     • Verify that meetingId flows through all steps
     • Check individual Lambda CloudWatch logs

Resource Names:

  • Generate Summary Lambda: ba-genie-app-{stage}-GenerateMeetingSummaryLambdaFunction
  • Send Email Lambda: ba-genie-app-{stage}-SendMeetingEmailLambdaFunction
  • CloudWatch Logs: /aws/lambda/ba-genie-app-{stage}-{FunctionName}


Delete Capability

Purpose: Provide administrative control and workspace management through comprehensive delete functionality across the application.

Supported Delete Operations:

1. Delete Meetings

  • Single/Bulk: Delete individual meetings or multiple selected meetings
  • Action: deleteMeetings() in lib/actions/meetings.ts
  • Scope: Hard delete from the database
  • Cleanup:
    • Deletes meeting records
    • Deletes associated S3 objects (meeting notes, transcripts, summaries)
    • Removes meeting-project associations
    • Deletes related documents (if the meeting is the sole source)
    • Deletes document chunks for affected documents
  • UI: Meeting table row actions and bulk actions toolbar
  • Trigger: User-initiated via the three-dot menu or bulk selection

2. Delete Projects

  • Action: deleteProject() in lib/actions/projects.ts
  • Scope: Soft delete (sets the deletedAt timestamp)
  • Cleanup:
    • Soft deletes the project
    • Soft deletes all project documents
    • Soft deletes all document chunks
    • Soft deletes all meeting-project associations
  • Authorization: Only the project owner can delete
  • UI: Project card dropdown menu and context menu
  • Trigger: User-initiated via the three-dot menu or right-click

3. Delete Project Documents (Inputs/Outputs)

  • Single Delete: deleteProjectDocument() in lib/actions/documents.ts
  • Bulk Delete: deleteProjectDocuments() in lib/actions/documents.ts
  • Scope: Soft delete for database records
  • Cleanup:
    • Soft deletes document records
    • Soft deletes associated document chunks
    • Deletes S3 files (only for non-meeting documents)
    • Unlinks the meeting from the project if the document source is a meeting
  • UI: Project document table row actions and bulk actions
  • Trigger: User-initiated for both input and output documents

Feature Flag Control:

// lib/utils/constants.ts
export const FRONTEND_DELETE_ENABLED = process.env.NEXT_PUBLIC_ENABLE_DELETE === 'true';

Delete Flow Architecture:

User initiates delete
DeleteConfirmationModal shown
User confirms deletion
Server action invoked (deleteMeetings/deleteProject/deleteProjectDocument)
Database transaction
  - Update deletedAt timestamps (soft delete)
  - OR hard delete records (meetings)
S3 cleanup (if applicable)
  - Delete objects via DeleteObjectCommand/DeleteObjectsCommand
Cascade deletions
  - Document chunks
  - Associations (meeting-project)
Revalidate paths
Toast notification to user

Key Components:

DeleteConfirmationModal (components/modals/delete-confirmation-modal.tsx):

  • Reusable confirmation dialog
  • Shows an entity-specific title and message
  • Displays a loading state during deletion
  • Used across meetings, projects, and documents

Meeting Deletion Details:

  • Collects S3 objects from meeting data:
    • externalData.meetingNotes (meeting notes file)
    • externalData.markdownTranscription (transcript file)
    • externalData.meetingSummary (summary file)
  • Groups S3 objects by bucket for batch deletion
  • Finds and deletes documents where the meeting is the sole source
  • Hard deletes from the database (not a soft delete)
  • Removes meeting-project associations
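Grouping the collected S3 objects by bucket for batch deletion can be sketched as follows (illustrative; the real action then issues one DeleteObjectsCommand per bucket via the AWS SDK):

```typescript
interface S3Ref {
  bucket: string;
  key: string;
}

// Sketch: batch S3 references by bucket so each bucket gets a single
// bulk-delete call instead of one request per object.
function groupByBucket(objects: S3Ref[]): Map<string, string[]> {
  const groups = new Map<string, string[]>();
  for (const { bucket, key } of objects) {
    const keys = groups.get(bucket) ?? [];
    keys.push(key);
    groups.set(bucket, keys);
  }
  return groups;
}
```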

Project Deletion Details:

  • Verifies user ownership before deletion
  • Performs cascading soft deletes in a transaction:
    1. Document chunks for all project documents
    2. All documents in the project
    3. Meeting-project associations
    4. The project record itself
  • Does NOT delete S3 files (preserves data)
  • Uses timestamps for an audit trail

Document Deletion Details:

  • Distinguishes between meeting-sourced and user-uploaded documents
  • Only deletes S3 files for user-uploaded documents
  • Meeting-sourced documents: S3 files are retained, only the DB record is soft deleted
  • Unlinks the meeting from the project if the document is meeting-derived
  • Updates the project activity timestamp

Best Practices:

  • Always show a confirmation modal before deletion
  • Provide clear feedback via toast notifications
  • Use transactions for multi-step deletions
  • Revalidate paths after a successful deletion
  • Log deletion operations for debugging
  • Distinguish between soft delete (audit trail) and hard delete (cleanup)

Debugging:

# Check deleted records (soft delete)
# Query Prisma with where: { deletedAt: { not: null } }

# Check S3 cleanup
aws s3 ls s3://{bucket-name}/meetings/{meetingId}/

# View deletion logs
aws logs tail /aws/lambda/ba-genie-app-{stage}-{FunctionName} --follow

Security:

  • All delete operations require authentication
  • Project deletion requires ownership verification
  • Server-side validation prevents unauthorized deletions
  • A feature flag allows disabling the delete UI per environment


v1.4 - Project Management Integration (Azure DevOps & Jira)

Overview

Version 1.4 introduces bidirectional integration with Azure DevOps (ADO) and Jira, enabling teams to seamlessly import existing work items, manage backlogs with AI assistance, and export enhanced requirements back to their project management tools.

Features

1. Azure DevOps Integration

Purpose: Connect BA Genie with Azure DevOps to import epics/features/user stories and export enhanced backlogs.

Configuration (/settings/integrations): - Organization Name: Your ADO organization (e.g., mycompany) - Project Name: Target ADO project - Personal Access Token (PAT): Token with Work Items (Read, Write) permissions - Test Connection: Validates credentials before saving

API Integration (lib/integrations/azure-devops/):

- Client (client.ts): Core ADO REST API wrapper
  - Authentication via PAT (Base64 encoded)
  - Work item CRUD operations
  - Area path and iteration management
  - Query support
- Pull (pull.ts): Import work items from ADO
  - Fetches epics, features, and user stories
  - Preserves hierarchy (parent-child relationships)
  - Handles orphaned stories (no parent epic)
  - Maps ADO fields to BA Genie format
- Push (push.ts): Export backlogs to ADO
  - Bulk create/update work items
  - Maintains epic→story hierarchy
  - Syncs story points and acceptance criteria
  - Tracks sync status (synced, modified_locally, modified_externally)
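The PAT authentication mentioned above boils down to a Basic auth header with an empty username. A minimal sketch (the helper name `adoAuthHeader` is illustrative, not the client's actual API):

```typescript
// ADO accepts a PAT as Basic auth with an empty username:
// "Basic " + base64(":" + pat)
function adoAuthHeader(pat: string): string {
  const encoded = Buffer.from(`:${pat}`).toString("base64");
  return `Basic ${encoded}`;
}
```

The resulting string would be sent as the `Authorization` header on requests to `https://dev.azure.com/{org}/...`.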

Supported Work Item Types: - Epic - Feature - User Story - Task (child of User Story)

Field Mapping:

ADO                    → BA Genie
─────────────────────────────────
Title                  → title
Description            → description
Acceptance Criteria    → acceptanceCriteria[]
Story Points           → storyPoints
Assigned To            → assignedTo
State                  → status
Area Path              → tags
Iteration Path         → sprint
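A sketch of part of the mapping above, using ADO's standard field reference names (`System.Title`, `Microsoft.VSTS.Scheduling.StoryPoints`, etc.). Only a subset of the table's rows is shown, and the `BacklogItemDraft` shape and line-splitting of acceptance criteria are assumptions:

```typescript
// Subset of an ADO work item's fields payload, keyed by reference name.
interface AdoFields {
  "System.Title": string;
  "System.Description"?: string;
  "Microsoft.VSTS.Common.AcceptanceCriteria"?: string;
  "Microsoft.VSTS.Scheduling.StoryPoints"?: number;
  "System.State"?: string;
  "System.IterationPath"?: string;
}

// Hypothetical BA Genie-side item shape.
interface BacklogItemDraft {
  title: string;
  description?: string;
  acceptanceCriteria: string[];
  storyPoints?: number;
  status?: string;
  sprint?: string;
}

// Map ADO reference names to the BA Genie item shape; the AC text is split
// into one entry per non-empty line.
function mapAdoToItem(fields: AdoFields): BacklogItemDraft {
  const acRaw = fields["Microsoft.VSTS.Common.AcceptanceCriteria"] ?? "";
  return {
    title: fields["System.Title"],
    description: fields["System.Description"],
    acceptanceCriteria: acRaw.split("\n").map((l) => l.trim()).filter(Boolean),
    storyPoints: fields["Microsoft.VSTS.Scheduling.StoryPoints"],
    status: fields["System.State"],
    sprint: fields["System.IterationPath"],
  };
}
```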

Import Workflow:

User clicks "Import from Azure DevOps"
/api/integrations/azure-devops/items
  - Fetches all epics/features/user stories
  - Groups by epic hierarchy
Import Dialog shows selectable epics
  - Master checkbox for select all/deselect all
  - Shows story count per epic
User selects desired epics → Import
/api/backlogs (POST)
  - Creates new backlog with imported structure
  - Stores externalId for sync tracking
Navigate to backlog detail page

Export Workflow:

User clicks "Export" → Select "Azure DevOps"
/api/integrations/azure-devops/export
  - Reads backlog structure
  - Creates/updates ADO work items via API
  - Maintains epic→story hierarchy
  - Syncs acceptance criteria and story points
Updates syncStatus in database
  - synced: Successfully exported
  - modified_locally: Changed after export
Toast notification with export summary


2. Jira Integration

Purpose: Integrate with Jira Cloud to import issues and export enhanced backlogs with intelligent field detection.

Configuration (/settings/integrations): - Site URL: Jira instance URL (e.g., https://mycompany.atlassian.net) - Email: Jira account email - API Token: Generate from Atlassian Account Settings - Project Key: Target Jira project (e.g., PROJ) - Custom Field Mapping: Configure Story Points and Acceptance Criteria fields

API Integration (lib/integrations/jira/):

- Client (client.ts): Jira Cloud REST API v3 wrapper
  - Basic auth (email:token)
  - Issue CRUD operations
  - Custom field detection
  - Atlassian Document Format (ADF) support
- Pull (pull.ts): Import issues from Jira
  - Fetches project structure and metadata
  - Auto-detects custom fields:
    - Story Points: Number field (schema type validation)
    - Acceptance Criteria: Text field (string/textarea types)
  - Multi-signal field detection:
    1. Schema type validation (primary)
    2. Field name pattern matching (secondary)
    3. Custom type string analysis (tertiary)
  - Handles epic→story hierarchy
  - Parses Jira ADF (Atlassian Document Format) for rich text
- Push (push.ts): Export backlogs to Jira
  - Bulk create/update issues
  - Writes acceptance criteria to a dedicated field (if mapped)
  - Strips embedded AC from the description to avoid duplication
  - Converts AC to ADF bulletList format
  - Handles both mapped and fallback scenarios

Supported Issue Types: - Epic - Story - Task - Bug (imported as user story)

Field Detection (pull.ts):

// Story Points Detection
- Schema type: number
- Custom type: float, number
- Name patterns: "story point", "points", "estimate"

// Acceptance Criteria Detection
- Schema type: string
- Custom type: textarea, textfield
- Name patterns: "acceptance criteria", "acceptance", "criteria"
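The multi-signal detection could look roughly like this. The `JiraFieldMeta` shape and the exact precedence between signals are assumptions based on the description above:

```typescript
// Minimal view of a Jira field's metadata as returned by field discovery.
interface JiraFieldMeta {
  id: string;          // e.g. "customfield_10016" (illustrative)
  name: string;
  schemaType?: string; // "number", "string", ...
  customType?: string; // vendor type string, e.g. ends with ":float" or ":textarea"
}

// Detect the Story Points field: schema type is the primary signal,
// name patterns the secondary, custom type string the tertiary.
function detectStoryPointsField(fields: JiraFieldMeta[]): JiraFieldMeta | undefined {
  const patterns = ["story point", "points", "estimate"];
  const nameMatches = (f: JiraFieldMeta) =>
    patterns.some((p) => f.name.toLowerCase().includes(p));
  const typeMatches = (f: JiraFieldMeta) =>
    /float|number/.test(f.customType ?? "");
  return (
    fields.find((f) => f.schemaType === "number" && nameMatches(f)) ??
    fields.find((f) => f.schemaType === "number") ??
    fields.find(nameMatches) ??
    fields.find(typeMatches)
  );
}
```

An analogous function for Acceptance Criteria would test for `string` schema types and the "acceptance criteria" name patterns instead.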

Custom Field Mapping UI (/settings/integrations): - Dropdown selectors for Story Points and Acceptance Criteria - Auto-populated with detected custom fields - Manual override capability - Persistent storage in IntegrationCredential.structure - Visual confirmation (green/amber indicators) - Info note about Jira screen layout configuration

Atlassian Document Format (ADF):

{
  "version": 1,
  "type": "doc",
  "content": [
    {
      "type": "bulletList",
      "content": [
        {
          "type": "listItem",
          "content": [{ "type": "paragraph", "content": [{ "type": "text", "text": "AC item" }] }]
        }
      ]
    }
  ]
}
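Generating that structure from an `acceptanceCriteria` array is mechanical; a sketch (function name illustrative):

```typescript
// Build the ADF document shown above: one bulletList with one listItem
// per acceptance criterion.
function acToAdf(criteria: string[]): object {
  return {
    version: 1,
    type: "doc",
    content: [
      {
        type: "bulletList",
        content: criteria.map((text) => ({
          type: "listItem",
          content: [
            { type: "paragraph", content: [{ type: "text", text }] },
          ],
        })),
      },
    ],
  };
}
```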

Import Workflow:

User clicks "Import from Jira"
/api/integrations/jira/items
  - Fetches all epics and stories
  - Auto-detects custom fields
  - Groups by epic hierarchy
Import Dialog with master checkbox
  - Select all / Deselect all functionality
  - Shows story count per epic
User selects epics → Import
/api/backlogs (POST)
  - Creates backlog with Jira structure
  - Stores externalId (issue key) for sync
Navigate to backlog detail page

Export Workflow:

User clicks "Export" → Select "Jira"
/api/integrations/jira/export
  - Reads custom field mappings from cached structure
  - If Acceptance Criteria field mapped:
    * Strips embedded AC from description
    * Writes AC to dedicated field as ADF
  - If not mapped:
    * Falls back to embedding AC in description
  - Bulk creates/updates Jira issues
Updates syncStatus and externalId
Toast notification with summary

Important Notes:
- Screen Layout Configuration: Custom fields must be added to the Story issue type's screen layout in Jira Project Settings to be visible in the Jira UI (data is still written correctly via the API regardless)
- Field Persistence: Field mappings are cached in the IntegrationCredential.structure JSON
- Fallback Behavior: If custom fields are not detected, the export falls back to embedding AC in the description


3. Backlog Management & AI Enhancement

Backlog Views (/projects/[id]/backlog):

View Modes:

- Grid View: 3-column responsive card layout
  - Color-coded accent bars (green=AI, blue=Jira, purple=ADO)
  - Source icons and badges
  - Item count and date stamps
  - Hover effects with delete button
- List View: Horizontal row layout
  - Colored icon blocks
  - Inline badges and metadata
  - Clean single row per backlog
- Compact View: Dense table format
  - Color dot indicators
  - Maximum density for large backlogs
  - Dividers between rows

View Toggle: Icon buttons (Grid/List/Compact) with tooltips

Backlog Detail Page (/projects/[id]/backlog/[backlogId]):

Core Capabilities:

1. Auto-Allocate Story Points
- Purpose: AI-powered batch estimation for all user stories
- Configuration:
  - Point series: Fibonacci (1,2,3,5,8,13,21) / Linear (1-10) / Power of Two (1,2,4,8,16)
  - Hours per point: Customizable effort mapping
  - AI Model: Claude Sonnet 4
- Process:

User clicks "Auto-Allocate Points"
Modal shows series and hours-per-point selectors
User clicks "Auto-Allocate"
/api/backlogs/[backlogId]/auto-allocate-points (POST)
  - Fetches all user stories
  - Sends to AI for batch estimation
  - AI evaluates: technical complexity, AC count, scope, dependencies
  - Returns point estimates with reasoning
Updates story_points and ai_suggested_points for all stories
Toast shows count of updated stories
Page refreshes with new estimates
Key Features:
- Independent estimation (the AI doesn't parrot existing values)
- Always overwrites existing points (explicit user action)
- Preserves AI suggestions in a separate field for reference
- Validates points are within the selected series
- Provides reasoning for each estimate
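One way to validate AI estimates against the selected series is to snap each value to the nearest allowed point; this is a sketch of that idea, not necessarily how the app enforces the constraint:

```typescript
// Snap a raw estimate to the nearest value in the configured point series;
// with reduce starting from the first element, ties resolve to the lower value.
function snapToSeries(estimate: number, series: number[]): number {
  return series.reduce((best, p) =>
    Math.abs(p - estimate) < Math.abs(best - estimate) ? p : best
  );
}

const FIBONACCI = [1, 2, 3, 5, 8, 13, 21];
```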

2. Individual Story Enhancement - Make More Detailed: Expands description with clarity and context - Add Acceptance Criteria: Generates comprehensive AC from existing content - Suggest Story Points: AI estimates complexity using sibling context - Generate Tasks: Breaks story into implementation tasks - Custom Prompt: User-defined enhancements

Enhancement Dialog:

User clicks sparkle icon on story
Modal shows enhancement options (radio buttons)
User selects type → "Enhance with AI"
/api/backlogs/[backlogId]/items/[itemId]/enhance (POST)
  - Fetches story context
  - Calls AI with enhancement-specific prompt
  - For suggest_points: includes sibling stories for relative sizing
Updates story fields (title, description, AC, points)
  - Always overwrites when user explicitly requested
  - Marks as modified_locally if synced
  - Recomputes content hash
Toast notification + page refresh

3. Backlog Export
- CSV Export: Download backlog as spreadsheet
- Integration Export: Push to ADO or Jira
- Bulk Operations: Create/update work items in batches
- Sync Status Tracking:
  - synced: Exported and unchanged
  - modified_locally: Changed after export
  - modified_externally: Changed in external system
  - not_synced: Never exported
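One plausible way to derive these statuses from content hashes (the inputs and precedence shown here are assumptions, not the app's actual logic):

```typescript
type SyncStatus = "synced" | "modified_locally" | "modified_externally" | "not_synced";

// Derive a sync status from the hash captured at last export, the current
// local hash, and (when re-importing) the hash of the external copy.
function deriveSyncStatus(
  exportedHash: string | null,
  localHash: string,
  externalHash?: string
): SyncStatus {
  if (exportedHash === null) return "not_synced";
  if (externalHash !== undefined && externalHash !== exportedHash) {
    return "modified_externally";
  }
  return localHash === exportedHash ? "synced" : "modified_locally";
}
```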

4. Epic and Story Management - Add Epic: Create new epics with description - Add Story: Create user stories under epics - Drag & Drop: Reorder stories within epics - Edit: Inline editing for title, description, AC, points - Delete: Remove epics or stories (with confirmation) - Collapse/Expand: Toggle epic sections

5. Story Point Assignment - Manual Selection: Dropdown with valid point values - AI Suggestion: Click button to get AI estimate - Visual Indicators: Shows assigned vs suggested points - Series Constraint: Dropdown limited to configured series


4. Database Schema

Integration Credentials:

model IntegrationCredential {
  id         String   @id @default(cuid())
  userId     String
  platform   String   // 'azure_devops', 'jira'
  credentials Json    // Encrypted credentials
  structure  Json?    // Cached structure (field IDs, custom fields)
  createdAt  DateTime @default(now())
  updatedAt  DateTime @updatedAt
}

Backlog Items:

model BacklogItem {
  id                 String   @id @default(cuid())
  backlogId          String
  itemType           String   // 'epic', 'user_story', 'task'
  title              String
  description        String?
  acceptanceCriteria Json?    // string[]
  storyPoints        Int?
  aiSuggestedPoints  Int?     // AI recommendation
  externalId         String?  // ADO ID or Jira key
  syncStatus         String?  // 'synced', 'modified_locally', etc.
  contentHash        String?  // For change detection
  order              Int      @default(0)
  parentItemId       String?
  assignedTo         String?
  tags               Json?    // string[]
  isDeleted          Boolean  @default(false)
  createdAt          DateTime @default(now())
  updatedAt          DateTime @updatedAt
}
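A sketch of how `contentHash` might be computed over the sync-relevant fields; the exact field set and canonicalization are assumptions:

```typescript
import { createHash } from "node:crypto";

// Hash the fields that matter for sync detection; any change to these fields
// produces a new hash, which is how a re-export or re-import can detect drift.
function contentHash(item: {
  title: string;
  description?: string | null;
  acceptanceCriteria?: string[];
  storyPoints?: number | null;
}): string {
  const canonical = JSON.stringify([
    item.title,
    item.description ?? "",
    item.acceptanceCriteria ?? [],
    item.storyPoints ?? null,
  ]);
  return createHash("sha256").update(canonical).digest("hex");
}
```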


5. API Routes

Integration Management: - GET /api/integrations/status - Check connected platforms - POST /api/integrations/azure-devops/credentials - Save ADO credentials - POST /api/integrations/jira/credentials - Save Jira credentials with field mappings - DELETE /api/integrations/{platform}/credentials - Disconnect integration

Work Item Import: - GET /api/integrations/azure-devops/items - Fetch ADO work items with hierarchy - GET /api/integrations/jira/items - Fetch Jira issues with auto-detected fields

Backlog Export:
- POST /api/integrations/azure-devops/export - Export backlog to ADO
  - Bulk create/update with area path and iteration
  - Returns success count and failures
- POST /api/integrations/jira/export - Export backlog to Jira
  - Uses mapped custom field IDs
  - Converts AC to ADF format
  - Returns created/updated issue keys

AI Enhancement:
- POST /api/backlogs/[backlogId]/auto-allocate-points - Batch story point estimation
  - Input: series, hoursPerPoint, validPoints
  - Output: estimates with reasoning
- POST /api/backlogs/[backlogId]/items/[itemId]/enhance - Individual story enhancement
  - Input: type (make_detailed, add_acceptance_criteria, suggest_points, etc.)
  - Output: enhanced story fields


6. Key Components

Frontend (app/(app)/(project-detail)/projects/[id]/backlog/): - page.tsx - Backlog list with Grid/List/Compact views - [backlogId]/page.tsx - Backlog detail with epic/story management

Integration Settings (app/(app)/settings/integrations/page.tsx): - ADO connection form with test button - Jira connection form with custom field mapping UI - Field mapping dropdowns (Story Points, Acceptance Criteria) - Visual indicators (green=connected, amber=not detected) - Info boxes for configuration guidance

Backend Integration Code (lib/integrations/): - azure-devops/ - ADO client, pull, push, types - jira/ - Jira client, pull, push, types - Shared patterns for authentication, error handling, field mapping

AI Services (lib/helpers/llm.ts): - callPrompt() - Bedrock Claude invocation - Enhanced prompts for independent estimation - Sibling context for relative sizing - Detailed evaluation factors (complexity, scope, dependencies)


7. Configuration Guide

Azure DevOps Setup: 1. Generate PAT: ADO → User Settings → Personal Access Tokens 2. Scope: Work Items (Read, Write) 3. Copy token to BA Genie settings 4. Enter organization and project name 5. Test connection → Save

Jira Setup: 1. Generate API Token: Atlassian Account → Security → API Tokens 2. Copy token to BA Genie settings 3. Enter site URL (https://yoursite.atlassian.net) 4. Enter email and project key 5. Fetch structure to auto-detect custom fields 6. Map Story Points and Acceptance Criteria fields 7. Add custom fields to Story screen layout in Jira Project Settings

Custom Field Mapping (Jira):

Jira Project Settings → Issue Types → Story → Edit Fields
Drag custom fields from available list onto layout
Save changes
→ Fields now visible in Jira UI


8. Best Practices

Field Detection: - Always fetch structure after adding new custom fields in Jira - Use descriptive field names containing "story point" or "acceptance criteria" - Test field mapping with small export before bulk operations

Story Point AI Estimation: - Run auto-allocate for consistent baseline across all stories - Use individual suggestions for iterative refinement - Review AI reasoning to understand complexity factors - Adjust hours-per-point mapping based on team velocity

Sync Management: - Export regularly to keep external systems updated - Monitor syncStatus to identify local changes - Re-import to detect external changes - Use content hash for change detection

Performance: - Import selectively (don't import entire project if not needed) - Use compact view for large backlogs (20+ items) - Batch operations for bulk updates
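Batching bulk updates is a simple chunking step; for example:

```typescript
// Split work items into fixed-size chunks so each bulk create/update call
// stays within the external API's batch limit.
function chunk<T>(items: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}
```

Each chunk would then map to one bulk request against ADO or Jira.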


9. Troubleshooting

ADO Connection Failed: - Verify PAT has correct permissions (Read, Write) - Check organization and project names (case-sensitive) - Ensure PAT hasn't expired (check ADO settings)

Jira Connection Failed: - Verify API token is valid (regenerate if needed) - Check site URL format (https://yoursite.atlassian.net) - Confirm email matches Atlassian account - Ensure project key exists and is accessible

Custom Fields Not Detected (Jira): - Click "Fetch Structure" again after adding fields in Jira - Verify field type (number for points, text for AC) - Check field name contains relevant keywords - Manually select from dropdown if auto-detection fails

Export Failures: - Check integration credentials are still valid - Verify required fields are populated (title, description) - Check external system quotas/rate limits - Review CloudWatch logs for error details

Auto-Allocate Shows "0 stories updated": - ~~Previous issue: COALESCE prevented overwriting existing points~~ - Fixed in v1.4: Auto-allocate now always overwrites points - Verify stories exist and aren't deleted - Check AI model access (Bedrock permissions)

Story Points Not Visible in Jira UI: - Data is written correctly via API - Solution: Add custom field to Story screen layout in Jira Project Settings


Resource Names (v1.4)

  • ADO Client: lib/integrations/azure-devops/client.ts
  • Jira Client: lib/integrations/jira/client.ts
  • Auto-Allocate API: /api/backlogs/[backlogId]/auto-allocate-points
  • Enhanced Backlog Pages: app/(app)/(project-detail)/projects/[id]/backlog/
  • Integration Settings: app/(app)/settings/integrations/page.tsx

Quick Reference

| Resource | AWS Console Path | Search Term |
| --- | --- | --- |
| Bot SFN | Step Functions → State machines | BotSFN |
| Document Processing SFN | Step Functions → State machines | ProcessDocumentsSFN |
| Email Handler | CloudWatch → Log Groups | EmailReceivedSnsHandler |
| Any Lambda | CloudWatch → Log Groups | /aws/lambda/{stage}-{name} |
| S3 Buckets | S3 → Buckets | {stage}- prefix |

Troubleshooting

Step Function Stuck

  • Check execution in AWS Console → Step Functions
  • Look for tasks waiting for task token
  • Check corresponding Lambda logs in CloudWatch

Email Not Processing

  • Check Lambda logs for EmailReceivedSnsHandler
  • Check SNS topic subscription
  • Verify S3 bucket received the email
  • Check SES → Email Receiving for rule status

Document Processing Failed

  • Check Step Function execution graph
  • Look at the failed step's input/output
  • Check Lambda logs for the specific Lambda that failed
  • Look for timeout issues (default 15 minutes for processing Lambdas)