
Conversation

@macneale4
Contributor

This PR contains the business logic for conjoining archives. It is not accessible to users in any way at this point: just the implementation and unit tests using in-memory buffers. Materializing to disk and integrating with the GC process are still required.

macneale4 and others added 13 commits July 16, 2025 23:27
Added a conjoinAll method to archiveWriter that takes multiple archiveReader instances
and combines them into a single archive. The method is currently a stub, with input
validation and error handling in place.

Added comprehensive test that validates the expected behavior:
- Creates two test archives with different chunk prefixes
- Tests that combined reader contains all chunks from both archives
- Verifies data integrity is maintained through the conjunction process
- Includes helper function to reduce test code duplication
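The PR diff itself isn't shown here, but a validation-first stub of the kind the commit describes might look like the following minimal sketch. The type, field, and function names are illustrative assumptions, not the actual Dolt API:

```go
package main

import (
	"errors"
	"fmt"
)

// archiveReader is a stand-in for the real reader type; only what the
// sketch needs is modeled here.
type archiveReader struct {
	chunkCount int
}

// conjoinAll sketches the stub shape: validate inputs up front, with the
// actual merge work still to come.
func conjoinAll(readers []archiveReader) error {
	if len(readers) == 0 {
		return errors.New("conjoinAll: no archives to conjoin")
	}
	// ... merge data spans and rebuild the index here ...
	return nil
}

func main() {
	fmt.Println(conjoinAll(nil) != nil)                          // validation rejects empty input
	fmt.Println(conjoinAll([]archiveReader{{chunkCount: 2}}) == nil) // non-empty input passes the stub
}
```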

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <[email protected]>
…ence

- Add full implementation of conjoinAll method that merges multiple archive readers
- Includes context parameter, data span sorting, and memory-efficient io.Copy
- Uses indexFinalizeFlushArchive for proper archive finalization and file persistence
- Fix createTestArchive to properly compress chunk data before writing byte spans
- Remove redundant readerAtAdapter type in favor of existing readerAtWithStatsBridge
- All tests pass including new TestArchiveConjoinAll validation
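The memory-efficient merging the commit describes, copying each source archive's data span wholesale with io.Copy while tracking where it lands in the combined block, can be sketched roughly as below. concatSpans and its signature are illustrative, not the real implementation:

```go
package main

import (
	"bytes"
	"fmt"
	"io"
)

// concatSpans streams each source data span into one combined buffer with
// io.Copy (no per-chunk buffering), recording the offset where each span
// begins so index entries can later be rebased onto the combined block.
func concatSpans(spans [][]byte) ([]uint64, []byte, error) {
	var combined bytes.Buffer
	offsets := make([]uint64, 0, len(spans))
	var off uint64
	for _, s := range spans {
		offsets = append(offsets, off)
		n, err := io.Copy(&combined, bytes.NewReader(s))
		if err != nil {
			return nil, nil, err
		}
		off += uint64(n)
	}
	return offsets, combined.Bytes(), nil
}

func main() {
	offsets, data, _ := concatSpans([][]byte{[]byte("aaaa"), []byte("bb")})
	fmt.Println(offsets, len(data)) // [0 4] 6
}
```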

- Change conjoinAll return type from (archiveReader, error) to just error
- Replace indexFinalizeFlushArchive with indexFinalize for in-memory completion
- Remove file persistence logic to make method suitable for unit testing
- Update test to create archive reader from in-memory bytes using provided pattern
- Add documentation clarifying the method completes archive writing in memory only

This makes conjoinAll more appropriate for unit tests that don't need disk materialization.

…ure duplicate handling

- Add TestArchiveConjoinAllDuplicateChunk test that expects successful deduplication
- Create createTestArchiveWithHashes helper for precise hash control
- Create createArchiveWithDuplicates helper for mixed duplicate/unique scenarios
- Test verifies combined archive contains all expected chunks after deduplication
- Currently fails as expected since conjoinAll errors on duplicates
- Test is correctly structured for future implementation of duplicate handling

The helper functions will enable easy expansion to 10+ archives with complex duplicate patterns.

- Update conjoinAll to allow duplicate chunks instead of erroring
- Each chunk gets its own index entry pointing to its actual data location
- Enhanced TestArchiveConjoinAllDuplicateChunk with more comprehensive test cases
- Validate that duplicate chunks appear twice in the index as expected
- Maintain performance by writing entire data blocks from each archive
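The "one index entry per occurrence" policy described above can be sketched as follows. indexEntry and the helper names are hypothetical stand-ins for whatever the archive index actually uses:

```go
package main

import "fmt"

// indexEntry is a stand-in for the real index record: a chunk hash plus the
// location of its bytes within the combined data block.
type indexEntry struct {
	hash   string
	offset uint64
	length uint64
}

// appendEntry never rejects a hash it has already seen; each occurrence
// keeps its own entry pointing at its actual data location.
func appendEntry(entries []indexEntry, h string, off, ln uint64) []indexEntry {
	return append(entries, indexEntry{hash: h, offset: off, length: ln})
}

// countHash tallies how many index entries carry a given hash, which is what
// a duplicate-handling test would assert on.
func countHash(entries []indexEntry, h string) int {
	n := 0
	for _, e := range entries {
		if e.hash == h {
			n++
		}
	}
	return n
}

func main() {
	var idx []indexEntry
	idx = appendEntry(idx, "abc123", 0, 10)
	idx = appendEntry(idx, "def456", 10, 5)
	idx = appendEntry(idx, "abc123", 15, 10) // duplicate hash, distinct location
	fmt.Println(len(idx), countHash(idx, "abc123")) // 3 2
}
```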

- Add reconstructHashFromPrefixAndSuffix helper function in archive_reader.go
- Update archive_writer.go to use the helper function
- Remove duplicate hash reconstruction logic while maintaining functionality
- All tests continue to pass

- Refactor createTestArchive to delegate to createTestArchiveWithHashes
- Remove ~35 lines of duplicate archive creation code
- Maintain same functionality with cleaner test helper hierarchy
- All tests continue to pass

Enhanced createTestArchiveWithHashes to support mixed compression types
and consolidated duplicate archive creation code. Fixed variable shadowing
issue where 'chunks' parameter was shadowing the chunks package name.

Added TestArchiveConjoinAllComprehensive that tests complex scenarios:
- 10 initial archives with mixed compression and chunk sizes (10-250 bytes)
- Shared chunks duplicated across archives with mixed Snappy/zstd compression
- Nested conjoin operations (first conjoin 10 archives, then conjoin result with 3 more)
- Single validation loop testing all expected chunks and hashes
- Proper buffer sizing to handle large amounts of data

…n verification

Added precise chunk count verification and comprehensive iteration testing:
- Replaced approximate assertions with exact chunk count verification
- Added iteration testing to verify each chunk ref is visited exactly once
- Verify duplicate chunks appear the correct number of times during iteration
- Ensure total iteration count equals chunk count for complete validation

@coffeegoddd
Contributor

@coffeegoddd DOLT

comparing_percentages
100.000000 to 100.000000
version result total
a885a7a ok 5937457
version total_tests
a885a7a 5937457
correctness_percentage
100.0

@coffeegoddd
Contributor

@macneale4 DOLT

comparing_percentages
100.000000 to 100.000000
version result total
031ba30 ok 5937457
version total_tests
031ba30 5937457
correctness_percentage
100.0

@macneale4 macneale4 closed this Jul 28, 2025