Mobile Testing Protocol - YT2AI Bookmarklet

Test Environment Setup

Required Devices

  • Primary: Android phone/tablet with Chrome 90+
  • Secondary: Android device with different screen size
  • Optional: Android Chrome Dev/Beta for compatibility testing

Test Preparation

  1. Clear browser data before testing session
  2. Ensure stable internet (WiFi preferred for baseline)
  3. Install the debug build (dist/bookmarklet-debug.js) for detailed logging
  4. Enable USB remote debugging so the device can be inspected from desktop Chrome DevTools (chrome://inspect)
  5. Prepare test YouTube videos (see Test Data section)

Test Categories

1. Installation Testing

Test Case 1.1: Bookmark Creation

Objective: Verify bookmarklet can be installed as bookmark

Steps:

  1. Open Chrome on Android device
  2. Navigate to any webpage
  3. Tap the star icon in the address bar to bookmark the page
  4. Tap "Edit" on the bookmark
  5. Replace URL with bookmarklet code from dist/bookmarklet-debug.js
  6. Name bookmark "YT2AI Test"
  7. Tap "Save"

Expected Results:

  • Bookmark saves without error
  • Bookmark appears in bookmark list
  • URL field contains the complete javascript: code (not truncated)

Pass Criteria:

  • Bookmark creation successful
  • JavaScript code properly saved
  • Bookmark accessible from menu
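
For reference, a bookmarklet is just a javascript: URI stored in the bookmark's URL field, wrapped in an immediately-invoked function so it runs when the bookmark is tapped. The real code to paste comes from dist/bookmarklet-debug.js; the sketch below only illustrates the general shape so you can recognize a correctly saved bookmark. Note that some browsers strip the javascript: prefix when pasting, so confirm it is still present after saving.

```javascript
// Illustrative shape of a bookmarklet URL only; not the actual YT2AI code.
// The entire script lives on one line in the bookmark's URL field.
javascript:(function(){alert('Bookmarklet executed on '+location.hostname);})();
```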

Test Case 1.2: Installation Validation

Objective: Confirm proper bookmarklet installation

Steps:

  1. Navigate to https://youtube.com
  2. Tap the "YT2AI Test" bookmark
  3. Observe console logs (if DevTools available)

Expected Results:

  • Bookmarklet executes without JavaScript errors
  • Console shows "[YT2AI:INFO] Bookmarklet initialized"
  • Error message about the current page not being a video page (expected, since the homepage is not a watch page)

Pass Criteria:

  • No JavaScript console errors
  • Initialization message appears
  • Error handling works correctly

2. YouTube Context Detection

Test Case 2.1: Standard Video Detection

Objective: Verify video ID extraction from standard YouTube URLs

Test Data:

  • https://www.youtube.com/watch?v=dQw4w9WgXcQ (Rick Roll)
  • https://www.youtube.com/watch?v=jNQXAC9IVRw (Me at the zoo)

Steps:

  1. Navigate to test video URL
  2. Wait for page load
  3. Tap bookmarklet
  4. Check initialization overlay

Expected Results:

  • "YT2AI Ready!" success message
  • Video ID displayed correctly
  • Environment info shows mobile detection

Pass Criteria:

  • Video ID extracted correctly
  • Success overlay appears
  • Mobile environment detected
  • Android platform detected
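
Conceptually, the behaviour under test is reading the v query parameter from a standard watch URL. The sketch below is an illustration of that check, not the bookmarklet's actual extractor.

```javascript
// Minimal sketch: pull the 11-character video ID out of a standard watch URL.
// Illustration of the behaviour under test, not the real YT2AI extraction code.
function getWatchVideoId(url) {
  const parsed = new URL(url);
  if (!parsed.hostname.endsWith('youtube.com') || parsed.pathname !== '/watch') {
    return null; // not a standard watch page
  }
  return parsed.searchParams.get('v');
}

console.log(getWatchVideoId('https://www.youtube.com/watch?v=dQw4w9WgXcQ')); // "dQw4w9WgXcQ"
```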

Test Case 2.2: YouTube Shorts Detection

Objective: Test Shorts URL format support

Test Data:

  • Find active YouTube Shorts video
  • URL format: https://www.youtube.com/shorts/VIDEO_ID

Steps:

  1. Navigate to Shorts video
  2. Tap bookmarklet
  3. Verify video detection

Expected Results:

  • Video ID extracted from Shorts URL
  • Normal initialization flow

Pass Criteria:

  • Shorts URL recognized
  • Video ID extracted
  • No format-specific errors
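
Shorts URLs carry the video ID in the path rather than the query string, which is the format difference this test exercises. Again, the sketch is illustrative only.

```javascript
// Minimal sketch: Shorts URLs put the 11-character video ID in the path
// segment after /shorts/ instead of in the ?v= query parameter.
function getShortsVideoId(url) {
  const match = new URL(url).pathname.match(/^\/shorts\/([A-Za-z0-9_-]{11})/);
  return match ? match[1] : null;
}

console.log(getShortsVideoId('https://www.youtube.com/shorts/dQw4w9WgXcQ')); // "dQw4w9WgXcQ"
```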

Test Case 2.3: Invalid Context Handling

Objective: Test error handling for non-video pages

Test Data:

  • https://www.youtube.com (homepage)
  • https://www.google.com (non-YouTube)

Steps:

  1. Navigate to invalid URL
  2. Tap bookmarklet
  3. Observe error handling

Expected Results:

  • Appropriate error messages
  • Mobile-friendly error display
  • Clear instructions for resolution

Pass Criteria:

  • Error messages display correctly
  • Mobile-optimized error UI
  • Touch-friendly close button (44px+)

3. Mobile UI Testing

Test Case 3.1: Overlay Responsiveness

Objective: Verify mobile-optimized overlay display

Steps:

  1. Navigate to valid YouTube video
  2. Tap bookmarklet to trigger overlay
  3. Test different orientations (portrait/landscape)
  4. Test different screen sizes if available

Expected Results:

  • Overlay centers properly on screen
  • Text readable without zooming
  • Touch targets minimum 44px
  • Responsive to orientation changes

Pass Criteria:

  • Overlay properly centered
  • Text readable on small screens
  • Touch targets accessible
  • Orientation changes handled
  • No horizontal scrolling required
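
The properties checked here (centering, capped width, readable text, 44px touch targets) correspond roughly to styling like the sketch below. The real overlay implementation may differ; this only makes the acceptance thresholds concrete.

```javascript
// Rough sketch of the mobile overlay characteristics being verified:
// fixed full-screen backdrop, centered panel capped at 90vw, 16px text,
// and a close button meeting the 44px touch-target minimum. Not the real code.
function showOverlaySketch(message) {
  const backdrop = document.createElement('div');
  backdrop.style.cssText =
    'position:fixed;inset:0;background:rgba(0,0,0,.6);' +
    'display:flex;align-items:center;justify-content:center;z-index:2147483647;';

  const panel = document.createElement('div');
  panel.style.cssText =
    'background:#fff;border-radius:8px;padding:16px;max-width:90vw;width:320px;font-size:16px;';

  const close = document.createElement('button');
  close.textContent = 'Close';
  close.style.cssText = 'min-width:44px;min-height:44px;margin-top:12px;';
  close.addEventListener('click', () => backdrop.remove());

  panel.append(message, close);
  backdrop.append(panel);
  document.body.append(backdrop);
  return backdrop;
}
```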

Test Case 3.2: Touch Interaction

Objective: Test touch-optimized interactions

Steps:

  1. Trigger overlay display
  2. Test close button with finger tap
  3. Test backdrop tap to close
  4. Test with different finger sizes/angles

Expected Results:

  • Close button responds to touch
  • Backdrop tap closes overlay
  • No accidental triggers
  • Smooth touch feedback

Pass Criteria:

  • Close button touch responsive
  • Backdrop touch closes overlay
  • Touch targets not too sensitive
  • Visual feedback on interaction
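
The backdrop-tap behaviour can be checked against a pattern like the one below (extending the overlay sketch from Test Case 3.1): the handler should only dismiss when the tap lands on the backdrop itself, not on content inside the panel.

```javascript
// Illustrative backdrop-tap-to-close wiring; taps that originate inside the
// panel bubble up but are ignored because event.target is not the backdrop.
function enableBackdropDismiss(backdrop) {
  backdrop.addEventListener('click', (event) => {
    if (event.target === backdrop) {
      backdrop.remove();
    }
  });
}
```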

Test Case 3.3: Loading States

Objective: Verify loading indicators work on mobile

Steps:

  1. Navigate to long video (20+ minutes)
  2. Tap bookmarklet
  3. Observe loading progression
  4. Test with slower network if possible

Expected Results:

  • Loading overlay displays immediately
  • Progress indication visible
  • User can cancel operation
  • Timeout handling works

Pass Criteria:

  • Loading state immediately visible
  • Progress indication clear
  • Cancel option available
  • Appropriate timeout (30s mobile)
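
The cancel option and 30-second mobile timeout referenced above can be exercised against logic like the following AbortController-based sketch; the shipped implementation may differ, and the URL argument is a placeholder.

```javascript
// Sketch of a cancellable request with a 30s default timeout (illustrative only).
async function fetchWithTimeout(url, timeoutMs = 30000) {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    const response = await fetch(url, { signal: controller.signal });
    if (!response.ok) throw new Error('HTTP ' + response.status);
    return await response.text();
  } finally {
    clearTimeout(timer); // always clear the timer, whether we succeeded, failed, or aborted
  }
}
```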

4. Network Condition Testing

Test Case 4.1: WiFi Performance

Objective: Baseline performance on stable connection

Steps:

  1. Connect device to stable WiFi
  2. Test with medium-length video (5-10 minutes)
  3. Measure extraction time
  4. Verify success rate

Expected Results:

  • Extraction completes within 30 seconds
  • High success rate (85%+)
  • Stable network detection

Pass Criteria:

  • Extraction time < 30 seconds
  • Success on multiple attempts
  • Network quality detected correctly

Test Case 4.2: Mobile Network Performance

Objective: Test on cellular data connection

Steps:

  1. Switch to mobile data (3G/4G/5G)
  2. Test same videos as WiFi test
  3. Compare performance and reliability
  4. Test timeout handling

Expected Results:

  • Longer extraction times acceptable
  • Adaptive timeout handling
  • Graceful degradation on slow connections

Pass Criteria:

  • Works on cellular connections
  • Timeout appropriately extended
  • Network condition detected
  • Error handling for slow connections
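
On Chrome for Android, network-condition detection is typically based on the Network Information API (navigator.connection). The sketch below shows one way an adaptive timeout could be chosen; it is an assumption about the approach, not necessarily how YT2AI implements it.

```javascript
// Sketch: choose a longer timeout on slower connections via the
// Network Information API (Chrome-only; navigator.connection may be undefined).
function pickTimeoutMs() {
  const type = navigator.connection && navigator.connection.effectiveType;
  switch (type) {
    case 'slow-2g':
    case '2g':
      return 60000; // very slow links: allow a full minute
    case '3g':
      return 45000;
    default:
      return 30000; // 4g, WiFi-class, or API unavailable
  }
}
```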

Test Case 4.3: Poor Network Handling

Objective: Test error handling in poor network conditions

Setup:

  • Use network throttling if available
  • Test in areas with poor signal
  • Simulate network interruption

Steps:

  1. Simulate slow network (2G speed)
  2. Attempt subtitle extraction
  3. Test network interruption during extraction
  4. Verify error handling and recovery

Expected Results:

  • Appropriate timeout extensions
  • Clear error messages for network issues
  • Retry mechanisms function
  • Graceful failure handling

Pass Criteria:

  • Slow network handling appropriate
  • Network errors clearly communicated
  • Retry logic functions
  • No infinite loading states
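
A reasonable reference model for the retry behaviour being tested is a bounded number of attempts with backoff, as sketched below. The attempt count and delays are illustrative assumptions, not the project's actual values.

```javascript
// Sketch of bounded retry with exponential backoff (illustrative values only).
async function withRetry(operation, maxAttempts = 3) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await operation();
    } catch (error) {
      if (attempt === maxAttempts) throw error; // surface the final failure to the UI
      const delayMs = 1000 * 2 ** (attempt - 1); // 1s, 2s, 4s, ...
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}
```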

5. Subtitle Extraction Testing

Test Case 5.1: Standard Video Extraction

Objective: Test subtitle extraction from typical videos

Test Videos:

  • TED Talks (usually have good auto-captions)
  • Educational content (Khan Academy, etc.)
  • Popular videos with confirmed subtitles

Steps:

  1. Navigate to test video
  2. Tap bookmarklet
  3. Wait for extraction completion
  4. Verify subtitle content quality

Expected Results:

  • Subtitles extracted successfully
  • Content formatted appropriately
  • Clipboard copy functions
  • Claude.ai integration works

Pass Criteria:

  • Subtitles extracted successfully
  • Content properly formatted
  • Clipboard functionality works
  • Claude.ai tab opens (if available)
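
Clipboard behaviour on mobile Chrome depends on the async Clipboard API being available in a secure context (https). The check this test performs corresponds roughly to the sketch below; the fallback path is an assumption, not confirmed project behaviour.

```javascript
// Sketch: copy extracted subtitles to the clipboard, with a legacy fallback
// for contexts where the async Clipboard API is unavailable (illustrative only).
async function copyToClipboard(text) {
  if (navigator.clipboard && window.isSecureContext) {
    await navigator.clipboard.writeText(text);
    return true;
  }
  // Legacy fallback: hidden textarea plus execCommand (deprecated but still widely supported).
  const textarea = document.createElement('textarea');
  textarea.value = text;
  textarea.style.cssText = 'position:fixed;opacity:0;';
  document.body.appendChild(textarea);
  textarea.select();
  const ok = document.execCommand('copy');
  textarea.remove();
  return ok;
}
```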

Test Case 5.2: No Subtitles Handling

Objective: Test handling of videos without subtitles

Test Videos:

  • Music videos (often no auto-captions)
  • Very new videos (captions not generated yet)
  • Non-English videos without English captions

Steps:

  1. Navigate to video without English subtitles
  2. Tap bookmarklet
  3. Observe error handling

Expected Results:

  • Clear "No subtitles found" message
  • User-friendly explanation
  • Suggestion for alternative videos

Pass Criteria:

  • Clear error message displayed
  • Mobile-friendly error UI
  • Helpful guidance provided

Test Case 5.3: CAPTCHA Handling

Objective: Test CAPTCHA scenario handling

Note: CAPTCHA scenarios are difficult to trigger reliably

Steps:

  1. Attempt multiple extractions rapidly
  2. If CAPTCHA appears, follow guided instructions
  3. Test CAPTCHA completion flow

Expected Results:

  • CAPTCHA guidance clear on mobile
  • Mobile-friendly CAPTCHA interface
  • Recovery after CAPTCHA completion

Pass Criteria:

  • CAPTCHA guidance mobile-optimized
  • Clear instructions provided
  • Recovery flow functions properly

6. Error Handling & Recovery

Test Case 6.1: API Service Downtime

Objective: Test handling when subtitle API is unavailable

Steps:

  1. Block requests to downloadyoutubesubtitles.com (if possible)
  2. Attempt extraction
  3. Verify error handling

Expected Results:

  • Service unavailable message
  • Retry suggestion
  • No system crash

Pass Criteria:

  • Service error clearly communicated
  • Retry mechanism available
  • Graceful degradation

Test Case 6.2: Memory Pressure

Objective: Test behavior under memory constraints

Steps:

  1. Open multiple browser tabs
  2. Use memory-intensive pages
  3. Attempt bookmarklet operation
  4. Monitor for memory-related failures

Expected Results:

  • Bookmarklet functions despite memory pressure
  • Appropriate cleanup on completion
  • No memory leaks

Pass Criteria:

  • Functions under memory pressure
  • Cleanup properly executed
  • No browser crashes
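
"Cleanup properly executed" means the bookmarklet removes everything it injected and cancels pending work when it finishes. The teardown sketched below is hypothetical (including the yt2ai- id prefix) and only illustrates what to look for in the DOM after completion.

```javascript
// Hypothetical teardown sketch: remove injected overlay nodes and cancel
// in-flight requests. The "yt2ai-" id prefix is an assumption, not the real one.
function cleanupSketch(abortController) {
  document.querySelectorAll('[id^="yt2ai-"]').forEach((node) => node.remove());
  if (abortController) abortController.abort();
}
```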

7. Cross-Device Compatibility

Test Case 7.1: Different Screen Sizes

Objective: Test across various Android screen sizes

Test Devices:

  • Small phone (5" or less)
  • Standard phone (5-6")
  • Large phone/phablet (6"+)
  • Tablet (7-10")

Steps:

  1. Install bookmarklet on each device
  2. Test core functionality
  3. Verify UI scaling

Expected Results:

  • UI scales appropriately
  • Touch targets remain accessible
  • Text remains readable

Pass Criteria:

  • UI scales correctly on all sizes
  • Touch targets remain 44px+
  • Readability maintained

Test Case 7.2: Different Chrome Versions

Objective: Test compatibility across Chrome versions

Test Versions:

  • Chrome stable (current)
  • Chrome beta (if available)
  • Older Chrome (90-95 range)

Steps:

  1. Install bookmarklet in different versions
  2. Test core functionality
  3. Check for version-specific issues

Expected Results:

  • Core functionality works across versions
  • No version-specific errors
  • Graceful degradation if needed

Pass Criteria:

  • Works on target Chrome versions (90+)
  • No version-specific errors
  • Feature detection handles differences
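
"Feature detection handles differences" can be spot-checked by confirming newer APIs are guarded before use. The sketch below lists the kind of checks involved; the exact feature set the bookmarklet guards is an assumption.

```javascript
// Sketch: feature checks relevant to the Chrome 90+ target range (illustrative).
const features = {
  abortController: typeof AbortController !== 'undefined',
  asyncClipboard: Boolean(navigator.clipboard && navigator.clipboard.writeText),
  networkInfo: 'connection' in navigator,
};
console.log('[feature check]', features);
```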

8. Performance Testing

Test Case 8.1: Memory Usage

Objective: Monitor memory consumption during operation

Tools:

  • Chrome DevTools Memory tab (if available)
  • Android system memory monitoring

Steps:

  1. Measure baseline memory usage
  2. Execute bookmarklet
  3. Monitor peak memory usage
  4. Verify cleanup effectiveness

Expected Results:

  • Peak memory < 10MB
  • Memory cleaned up after operation
  • No memory leaks

Pass Criteria:

  • Peak memory within limits
  • Memory properly released
  • No persistent memory leaks
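
When DevTools aren't available, a coarse in-page measurement is possible through the non-standard, Chrome-only performance.memory object. The sketch below logs heap usage before and after a run; treat the values as approximate.

```javascript
// Sketch: coarse heap-usage snapshot via the non-standard performance.memory
// object (Chrome-only; undefined in other browsers).
function heapUsedMB() {
  return performance.memory
    ? (performance.memory.usedJSHeapSize / (1024 * 1024)).toFixed(1)
    : 'n/a';
}

console.log('Heap before run:', heapUsedMB(), 'MB');
// ...execute the bookmarklet, then compare:
console.log('Heap after run:', heapUsedMB(), 'MB');
```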

Test Case 8.2: Execution Speed

Objective: Measure bookmarklet performance

Steps:

  1. Use performance monitoring (if available)
  2. Measure initialization time
  3. Measure extraction time for various video lengths
  4. Compare across different devices

Expected Results:

  • Initialization < 100ms
  • Extraction time reasonable for video length
  • Consistent performance across devices

Pass Criteria:

  • Initialization time < 100ms
  • Extraction time appropriate
  • Performance consistent
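
Where a performance monitor isn't attached, timings can be captured manually with performance.now(). In the sketch below, runExtraction is a placeholder for whatever operation is being measured, not a real project function.

```javascript
// Sketch: manual timing wrapper; runExtraction() is a placeholder callback.
async function timed(label, runExtraction) {
  const start = performance.now();
  try {
    return await runExtraction();
  } finally {
    console.log('[timing] ' + label + ': ' + (performance.now() - start).toFixed(0) + ' ms');
  }
}
```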

Test Data

Test YouTube Videos

Reliable Test Videos (Usually have subtitles)

  • Rick Astley - Never Gonna Give You Up: dQw4w9WgXcQ
  • Me at the zoo (First YouTube video): jNQXAC9IVRw
  • TED Talk sample: Find current TED Talk with confirmed subtitles

Edge Case Videos

  • Very long video: Find 1+ hour educational content
  • Very short video: Find <30 second clips
  • No subtitles: Music videos or very new uploads

Test Scenarios

Network Conditions

  1. Optimal: Strong WiFi, high bandwidth
  2. Mobile: 4G/5G cellular data
  3. Slow: 3G or throttled connection
  4. Interrupted: Connection drops during extraction

Device Conditions

  1. Fresh browser: Cleared cache and data
  2. Memory pressure: Multiple tabs open
  3. Battery saver: Low power mode enabled
  4. Background apps: Multiple apps running

Test Reporting

Test Results Template

## Test Session: [Date]

### Environment
- **Device:** [Device model and Android version]
- **Chrome Version:** [Version number]
- **Network:** [WiFi/Mobile/Throttled]
- **Bookmarklet Version:** [Version/commit hash]

### Test Results
- **Passed:** [X/Y] test cases
- **Failed:** [List of failed test cases]
- **Blocked:** [Test cases that couldn't be executed]

### Issues Discovered
1. **Issue:** [Description]
   - **Severity:** [Critical/High/Medium/Low]
   - **Steps to reproduce:** [Steps]
   - **Expected vs Actual:** [Comparison]

### Performance Notes
- **Average extraction time:** [Seconds]
- **Memory usage:** [Peak MB]
- **Success rate:** [Percentage]

### Recommendations
- [List of improvements or fixes needed]

Issue Severity Levels

Critical

  • Bookmarklet doesn't execute
  • JavaScript errors prevent functionality
  • Security vulnerabilities

High

  • Core functionality broken
  • Mobile UI completely unusable
  • Data corruption or loss

Medium

  • Feature partially broken
  • UI issues affecting usability
  • Performance significantly degraded

Low

  • Minor UI inconsistencies
  • Performance slightly affected
  • Non-critical feature issues

Automation Opportunities

Future Automation

While manual testing is essential for mobile UX, consider automating:

  • Regression testing of core functionality
  • Performance monitoring across builds
  • Cross-version compatibility checks
  • API integration validation

Current Manual Requirements

These aspects require human testing:

  • Touch interaction quality
  • Visual design on various screens
  • User experience flow
  • Real network condition handling

Remember: Mobile testing is critical for this project. Real device testing cannot be substituted with desktop browser simulation.