Core Concepts
Understanding the annotation-driven testing approach.
The Problem
Traditional Playwright tests lack metadata that connects them to business requirements:
```typescript
// ❌ No traceability
test('user can create token', async ({ page }) => {
// Which feature area is this?
// What user journey does it validate?
// How do we track trends over time?
});
```

When tests fail, you need to manually trace back to understand:
- What feature is affected?
- Is this a critical user journey?
- Has this test been flaky before?
The Solution: Annotation-Driven Testing
Every test is annotated with metadata linking it to user journeys and business requirements:
```typescript
// ✅ Full traceability
test('user can create token', async ({ page }, testInfo) => {
pushTestAnnotations(testInfo, {
journeyId: 'ACCOUNT_SECURITY_CREATE-TOKEN',
testCaseId: 'ACCOUNT_SECURITY_CREATE-TOKEN_VALID',
});
// Now we know: feature area, journey, and specific scenario
});
```
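Under the hood, a helper like this only needs to record the metadata somewhere reporters can see it; a natural home is Playwright's built-in testInfo.annotations array. The sketch below is illustrative only: the function name and the annotation type strings are assumptions, not necessarily what the library actually does.

```typescript
import type { TestInfo } from '@playwright/test';

// Illustrative sketch: one way such a helper could record metadata.
// The real pushTestAnnotations may use different annotation type names.
function pushTestAnnotationsSketch(
  testInfo: TestInfo,
  annotations: { journeyId: string; testCaseId: string }
): void {
  // Playwright annotations are { type, description } pairs that show up in
  // reports and are readable from custom reporters.
  testInfo.annotations.push({ type: 'journeyId', description: annotations.journeyId });
  testInfo.annotations.push({ type: 'testCaseId', description: annotations.testCaseId });
}
```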
The Three-Level Hierarchy

Annotations follow a Suite → Journey → Test Case hierarchy:
```
Test Suite: ACCOUNT_SECURITY                            (Product/feature area - set in beforeEach)
├── Journey: ACCOUNT_SECURITY_VIEW-TOKENS               (User flow - one per test file)
│   ├── Test Case: ACCOUNT_SECURITY_VIEW-TOKENS_VALID   (Specific scenario - unique per test)
│   └── Test Case: ACCOUNT_SECURITY_VIEW-TOKENS_EMPTY
└── Journey: ACCOUNT_SECURITY_CREATE-TOKEN
    └── Test Case: ACCOUNT_SECURITY_CREATE-TOKEN_VALID
```

Test Suite (testSuiteName)
Scope: Product or feature area (1 suite : N journeys)
Set in: beforeEach at the describe level
Examples:
- ACCOUNT_SECURITY
- USER_MANAGEMENT
- DATA_PIPELINES
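The suite-level annotation is applied once per describe block. A minimal sketch, assuming pushTestAnnotations is exported by @lytics/playwright-annotations and accepts testSuiteName on its own; adjust the import and the call to match your setup:

```typescript
import { test } from '@playwright/test';
// Assumed import path; adjust to wherever your annotation helper lives.
import { pushTestAnnotations } from '@lytics/playwright-annotations';

test.describe('Account security tokens', () => {
  // Runs before every test in this describe block, so each test inherits the suite.
  test.beforeEach(async ({}, testInfo) => {
    pushTestAnnotations(testInfo, { testSuiteName: 'ACCOUNT_SECURITY' });
  });

  // test() blocks follow, each adding its own journeyId and testCaseId.
});
```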
Journey (journeyId)
Scope: A specific user flow (1 journey : N test cases)
Format: {testSuiteName}_{JOURNEY-NAME}
Set in: Each test() function
Examples:
- ACCOUNT_SECURITY_VIEW-TOKENS
- USER_MANAGEMENT_CREATE-USER
- DATA_PIPELINES_CREATE-JOB
Test Case (testCaseId)
Scope: A specific scenario (unique per test)
Format: {journeyId}_{SCENARIO}
Set in: Each test() function (unique per test)
Examples:
- ACCOUNT_SECURITY_VIEW-TOKENS_VALID
- ACCOUNT_SECURITY_VIEW-TOKENS_EMPTY
- USER_MANAGEMENT_CREATE-USER_INVALID
Test Case IDs must be globally unique and stable. They're used for trend analysis and tracking test history over time.
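Putting the three levels together, a spec file might look roughly like this (same assumptions as the suite sketch above; the test titles and comments are placeholders):

```typescript
import { test } from '@playwright/test';
import { pushTestAnnotations } from '@lytics/playwright-annotations'; // assumed import path

test.describe('View tokens', () => {
  // Suite level: shared by every test in the file.
  test.beforeEach(async ({}, testInfo) => {
    pushTestAnnotations(testInfo, { testSuiteName: 'ACCOUNT_SECURITY' });
  });

  test('lists existing tokens', async ({ page }, testInfo) => {
    pushTestAnnotations(testInfo, {
      journeyId: 'ACCOUNT_SECURITY_VIEW-TOKENS',
      testCaseId: 'ACCOUNT_SECURITY_VIEW-TOKENS_VALID', // journeyId + unique scenario suffix
    });
    // ... drive the UI and assert on the token list
  });

  test('shows the empty state when no tokens exist', async ({ page }, testInfo) => {
    pushTestAnnotations(testInfo, {
      journeyId: 'ACCOUNT_SECURITY_VIEW-TOKENS',
      testCaseId: 'ACCOUNT_SECURITY_VIEW-TOKENS_EMPTY',
    });
    // ... assert on the empty state
  });
});
```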
Naming Conventions
Flexible Delimiters
Hyphens and underscores are normalized for validation:
```typescript
// All of these are valid and equivalent for hierarchy validation:
testSuiteName: "HOME_DASHBOARD"
journeyId: "HOME-DASHBOARD_RECIPE" // Hyphen variant OK
testCaseId: "HOME-DASHBOARD_RECIPE_VALID"
```
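One way to picture the normalization is to collapse both delimiters to a single character before comparing prefixes. This is an illustrative sketch, not the library's actual implementation:

```typescript
// Illustrative: treat '-' and '_' as equivalent when checking hierarchy.
function normalizeId(id: string): string {
  return id.toUpperCase().replace(/[-_]/g, '_');
}

// "HOME-DASHBOARD_RECIPE" and "HOME_DASHBOARD" compare as matching prefixes:
const journeyMatchesSuite = normalizeId('HOME-DASHBOARD_RECIPE')
  .startsWith(normalizeId('HOME_DASHBOARD')); // true
```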
Recommended Patterns

| Level | Pattern | Example |
|---|---|---|
| Suite | PRODUCT_FEATURE | ACCOUNT_SECURITY |
| Journey | {SUITE}_{ACTION} | ACCOUNT_SECURITY_VIEW-TOKENS |
| Test Case | {JOURNEY}_{SCENARIO} | ACCOUNT_SECURITY_VIEW-TOKENS_VALID |
Common scenario suffixes:
- _VALID → Happy path
- _INVALID → Validation errors
- _EMPTY → Empty state
- _ERROR → Error handling
- _EDGE → Edge cases
Validation Rules
The annotation framework enforces these rules:
| Rule | Level | Description |
|---|---|---|
| Required | Error | All three annotations must be present |
| Hierarchy | Error | testCaseId must start with journeyId |
| Uniqueness | Error | testCaseId cannot equal journeyId (must add suffix) |
| Consistency | Warning | journeyId should start with testSuiteName |
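Roughly, the checks amount to something like the following. This is an illustrative sketch of the rules table, not the framework's own validator:

```typescript
interface ContentstackAnnotations {
  testSuiteName: string;
  journeyId: string;
  testCaseId: string;
}

// Mirrors the rules table above: required fields, hierarchy, uniqueness, consistency.
function checkHierarchy(a: ContentstackAnnotations): { errors: string[]; warnings: string[] } {
  const norm = (id: string) => id.toUpperCase().replace(/[-_]/g, '_');
  const errors: string[] = [];
  const warnings: string[] = [];

  if (!a.testSuiteName || !a.journeyId || !a.testCaseId) {
    errors.push('testSuiteName, journeyId, and testCaseId are all required');
  }
  if (!norm(a.testCaseId).startsWith(norm(a.journeyId))) {
    errors.push('testCaseId must start with journeyId');
  }
  if (norm(a.testCaseId) === norm(a.journeyId)) {
    errors.push('testCaseId must add a scenario suffix to journeyId');
  }
  if (!norm(a.journeyId).startsWith(norm(a.testSuiteName))) {
    warnings.push('journeyId should start with testSuiteName');
  }
  return { errors, warnings };
}
```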
Benefits
1. Traceability
Link tests directly to requirements:
- Test → Journey → Feature → Requirement
- Quickly identify what's affected when tests fail
2. Observability
Build dashboards showing:
- Pass rates by suite, journey, or test case
- Trends over time
- Flaky test patterns
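One way to feed such a dashboard is a small custom Playwright reporter that groups results by the annotation IDs. The sketch below assumes the metadata lands in Playwright's annotations under the type 'journeyId'; adjust the lookup to match how your helper stores it, and replace the console output with a push to your metrics store.

```typescript
import type { Reporter, TestCase, TestResult } from '@playwright/test/reporter';

// Sketch: aggregate pass/fail/flaky counts per journeyId from test annotations.
class JourneyStatsReporter implements Reporter {
  private stats = new Map<string, { passed: number; failed: number; flaky: number }>();

  onTestEnd(test: TestCase, result: TestResult): void {
    const journey = test.annotations.find((a) => a.type === 'journeyId')?.description;
    if (!journey) return;

    const entry = this.stats.get(journey) ?? { passed: 0, failed: 0, flaky: 0 };
    if (result.status === 'passed' && result.retry > 0) {
      entry.flaky += 1; // a pass after at least one retry is counted as flaky here
    } else if (result.status === 'passed') {
      entry.passed += 1;
    } else if (result.status === 'failed' || result.status === 'timedOut') {
      entry.failed += 1;
    }
    this.stats.set(journey, entry);
  }

  onEnd(): void {
    // Replace with a write to your metrics store or dashboard of choice.
    console.table(Object.fromEntries(this.stats));
  }
}

export default JourneyStatsReporter;
```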
3. Coverage Tracking
Identify gaps:
- Which journeys have no tests?
- Which scenarios are missing?
- What's the coverage per feature area?
4. Trend Analysis
Track stability:
- Is this test consistently failing?
- When did it start failing?
- Is it flaky (passes after retry)?
Custom Schemas
The Contentstack conventions are a reference implementation. You can define your own schema:
```typescript
import { test } from '@playwright/test';
import { pushAnnotations, createValidator } from '@lytics/playwright-annotations';
import type { TestAnnotations } from '@lytics/playwright-annotations';
// Define your own schema
interface MyTeamAnnotations extends TestAnnotations {
epic: string;
story: string;
severity: 'blocker' | 'critical' | 'major' | 'minor';
}
// Create a validator
const validateMyAnnotations = createValidator<MyTeamAnnotations>({
rules: [
(a) => ({
valid: Boolean(a.epic && a.story),
errors: !a.epic || !a.story ? ['epic and story are required'] : [],
warnings: [],
}),
]
});
// Use in tests
test('my test', async ({}, testInfo) => {
pushAnnotations(testInfo, {
epic: 'AUTH-2023',
story: 'AUTH-2023-01',
severity: 'critical'
});
});
```

See Annotations → Custom Schema for more details.