
How version 0.8.0b2 works in detail


This document provides a comprehensive explanation of the feldman_vss.py script, version 0.8.0b2, a Python implementation of Feldman's Verifiable Secret Sharing (VSS) scheme. This version builds upon previous iterations, focusing on enhanced post-quantum security, efficiency, robustness, and memory safety. The document details the code, cryptographic principles, design decisions, and security considerations.

I. Introduction and Design Philosophy

The feldman_vss.py script implements Feldman's VSS with a strong focus on post-quantum security, moving beyond the vulnerabilities of discrete logarithm-based schemes. Version 0.8.0b2 introduces several key improvements:

  • Memory Safety: Introduces a MemoryMonitor class and check_memory_safety function to proactively prevent excessive memory allocation, mitigating potential denial-of-service vulnerabilities related to large inputs or malicious data.
  • Enhanced Hash-Based Commitments: Refines the commitment scheme with more robust domain separation and deterministic encoding, ensuring consistent commitment values across different platforms and execution environments.
  • Improved Byzantine Fault Tolerance: Enhances the share refreshing protocol with better detection and handling of Byzantine (malicious) participants, including adaptive quorum-based Byzantine detection and optimized verification.
  • Zero-Knowledge Proofs: Adds support for creating and verifying zero-knowledge proofs of polynomial knowledge, allowing the dealer to prove knowledge of the secret polynomial without revealing the coefficients.
  • Serialization/Deserialization Enhancements: Improves the serialization and deserialization of commitment data and proofs, including checksums for integrity verification and handling of extra entropy for low-entropy secrets.
  • Integration with Pedersen VSS: Provides helper functions to combine Feldman VSS with Pedersen VSS, offering both binding and hiding properties.
  • Configurable Parameters: Adds the VSSConfig class for customizing security and performance parameters.
  • Deterministic Hashing: Guarantees deterministic commitment generation across platforms and executions.
  • Thread-Safe LRU Cache: Uses SafeLRUCache for thread-safe caching of expensive exponentiation results.
  • Status: The library should be considered experimental until the known vulnerabilities are addressed.

The core design goals remain consistent with previous versions:

  1. Uncompromising Post-Quantum Security: The script relies exclusively on hash-based commitments (BLAKE3 or SHA3-256) for post-quantum security, avoiding any reliance on the hardness of the discrete logarithm problem.

  2. Provable Security: Adherence to rigorous security principles, using zero-knowledge proofs, secure serialization, and countermeasures against timing, fault injection, and Byzantine behavior.

  3. Efficiency and Performance:

    • Batch Verification: batch_verify_shares for optimized verification of multiple shares.
    • Precomputation: CyclicGroup precomputes powers of the generator.
    • LRU Caching: SafeLRUCache stores exponentiation results.
    • Optimized Multi-exponentiation: efficient_multi_exp for efficient computation of multiple exponentiations.
    • msgpack Serialization: Compact and efficient binary serialization.
  4. Robustness and Fault Tolerance:

    • Input Validation: Extensive validation to prevent misuse.
    • Error Handling: Detailed error handling with custom exceptions and sanitizable error messages (sanitize_errors).
    • Fault Injection Countermeasures: secure_redundant_execution attempts to mitigate fault injection (but is not foolproof in Python).
    • Byzantine Fault Tolerance: Enhanced version of Chen & Lindell's Protocol 5 in refresh_shares for secure share refreshing, even with Byzantine participants.
  5. Flexibility and Configurability: The VSSConfig class allows customization of security parameters.

  6. Integration and Extensibility:

    • Shamir Secret Sharing Compatibility: Designed to work with Shamir Secret Sharing. create_vss_from_shamir creates a compatible FeldmanVSS instance.
    • Pedersen VSS Integration: Helper functions (integrate_with_pedersen, create_dual_commitment_proof, verify_dual_commitments) to combine Feldman VSS with Pedersen VSS.

II. Detailed Class and Function Explanations

This section provides a detailed explanation of each class and function, including their purpose, implementation, and security considerations. Significant changes and additions in version 0.8.0b2 are highlighted.

A. Exception and Warning Classes:

  • SecurityWarning: Warning for potentially insecure configurations.
  • SecurityError: Exception for security-related issues.
  • ParameterError: Exception for invalid parameters.
  • VerificationError: Exception for share verification failures.
  • SerializationError: Exception for serialization/deserialization errors.

These classes provide structured error handling and categorize different types of issues.
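
A minimal sketch of how this hierarchy could be declared is shown below; the base classes chosen here are illustrative assumptions, not the library's actual definitions.

    # Illustrative sketch only; the base classes are assumptions.
    class SecurityWarning(UserWarning):
        """Warning for potentially insecure configurations."""

    class SecurityError(Exception):
        """Raised for security-related issues."""

    class ParameterError(ValueError):
        """Raised for invalid parameters."""

    class VerificationError(Exception):
        """Raised when share verification fails."""

    class SerializationError(Exception):
        """Raised for serialization/deserialization errors."""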

B. VSSConfig (Dataclass)

  • Purpose: Holds configuration parameters for the VSS scheme.
  • Fields:
    • prime_bits: Size (in bits) of the prime modulus (default: 4096).
    • safe_prime: Whether to use a safe prime (default: True).
    • secure_serialization: Whether to use secure serialization (default: True).
    • use_blake3: Whether to use BLAKE3 (default: True).
    • cache_size: Size of the LRU cache (default: 128).
    • sanitize_errors: Whether to sanitize error messages (default: True).
  • __post_init__: Enforces a minimum prime_bits value of MIN_PRIME_BITS (4096) and issues a warning if a smaller value is provided.
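
A condensed sketch of such a dataclass, using the field names and defaults listed above; the warning type and message are simplified assumptions (the library issues its own SecurityWarning).

    import warnings
    from dataclasses import dataclass

    MIN_PRIME_BITS = 4096  # post-quantum minimum enforced by the library

    # Condensed sketch, not the library's exact class.
    @dataclass
    class VSSConfig:
        prime_bits: int = 4096
        safe_prime: bool = True
        secure_serialization: bool = True
        use_blake3: bool = True
        cache_size: int = 128
        sanitize_errors: bool = True

        def __post_init__(self):
            # Warn and raise the value when a caller asks for a weaker prime.
            if self.prime_bits < MIN_PRIME_BITS:
                warnings.warn(
                    f"prime_bits={self.prime_bits} is below {MIN_PRIME_BITS}; raising it.",
                    UserWarning,
                )
                self.prime_bits = MIN_PRIME_BITS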

C. SafeLRUCache

  • Purpose: Thread-safe Least Recently Used (LRU) cache for storing exponentiation results.
  • Fields:
    • capacity: Maximum number of items.
    • cache: OrderedDict storing items.
    • lock: threading.RLock for thread safety.
  • Methods:
    • get(key): Retrieves an item, moving it to the most recently used position.
    • put(key, value): Adds an item, evicting the least recently used if necessary.
    • clear(): Clears the cache.
    • __len__(): Returns the number of items.
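
The same behavior can be sketched in a few lines with collections.OrderedDict and an RLock; this illustrates the interface above and is not the library's exact implementation.

    import threading
    from collections import OrderedDict

    class SafeLRUCache:
        def __init__(self, capacity: int):
            self.capacity = capacity
            self.cache = OrderedDict()
            self.lock = threading.RLock()

        def get(self, key):
            with self.lock:
                if key not in self.cache:
                    return None
                self.cache.move_to_end(key)  # mark as most recently used
                return self.cache[key]

        def put(self, key, value):
            with self.lock:
                if key in self.cache:
                    self.cache.move_to_end(key)
                self.cache[key] = value
                if len(self.cache) > self.capacity:
                    self.cache.popitem(last=False)  # evict least recently used

        def clear(self):
            with self.lock:
                self.cache.clear()

        def __len__(self):
            with self.lock:
                return len(self.cache)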

D. CyclicGroup

  • Purpose: Represents the cyclic group for cryptographic operations. Uses gmpy2 for all arithmetic.

  • Fields:

    • prime: Prime modulus (gmpy2.mpz).
    • generator: Group generator (gmpy2.mpz).
    • cached_powers: SafeLRUCache for exponentiation results.
    • _precompute_exponent_length: Bit length of the prime.
    • _precompute_window_size: Window size for precomputation.
    • _precomputed_powers: Dictionary storing precomputed powers of the generator.
  • __init__:

    • Takes optional prime, generator, prime_bits, use_safe_prime, and cache_size.
    • Selects or generates a prime (safe prime preferred).
    • Finds a generator if not provided.
    • Initializes the cached_powers cache.
    • Precomputes powers of the generator.
  • Static Methods:

    • _is_probable_prime(n, k=40): Miller-Rabin primality test.
    • _is_safe_prime(p): Checks if p is a safe prime.
  • Instance Methods:

    • _generate_prime(bits): Generates a random prime.
    • _generate_safe_prime(bits): Generates a safe prime (very slow).
    • _is_generator(g): Checks if g is a generator.
    • _find_generator(): Finds a generator.
    • _precompute_powers(): Precomputes powers of the generator (multi-level windowing).
    • exp(base, exponent): Exponentiation with optimizations (precomputation, caching, gmpy2.powmod).
    • _exp_with_precomputation(exponent): Exponentiation using precomputed powers.
    • mul(a, b): Modular multiplication.
    • secure_random_element(): Generates a cryptographically secure random element.
    • clear_cache(): Clears the cache.
    • hash_to_group(data): Hashes data to a group element uniformly (rejection sampling).
      • Domain Separation: Uses a domain separation prefix.
      • Multiple Rounds: Makes multiple attempts.
      • Extra Security Bytes: Minimizes bias.
      • Strict Rejection Sampling: Only accepts valid values.
      • BLAKE3 or SHA3-256: Uses blake3 (if available) or hashlib.sha3_256.
      • Raises SecurityError: On failure after many attempts.
    • _enhanced_encode_for_hash(*args, context="FeldmanVSS"): Encodes values for hashing (type tagging, length-prefixing, deterministic integer encoding).
    • efficient_multi_exp(bases, exponents): Efficient multi-exponentiation.
    • secure_exp(base, exponent): Constant-time exponentiation (uses gmpy2.powmod).
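
A hedged usage sketch of this interface follows. The import path is assumed, the call shapes are taken from the method list above, and constructing a 4096-bit safe-prime group may be expensive unless the library ships precomputed primes.

    # Hypothetical usage; the import path and keyword names come from this page
    # and have not been verified against the code.
    from feldman_vss import CyclicGroup

    group = CyclicGroup(prime_bits=4096, use_safe_prime=True)

    # Exponentiation benefits from precomputed generator powers and the LRU cache.
    x = group.secure_random_element()
    gx = group.exp(group.generator, x)

    # Variant intended for secret exponents (constant-time-oriented).
    gx_secure = group.secure_exp(group.generator, x)

    # Modular multiplication and hashing arbitrary bytes onto the group.
    product = group.mul(gx, gx_secure)
    h = group.hash_to_group(b"transcript data")

    # Multi-exponentiation: product of base_i ** exp_i mod p in one pass.
    combined = group.efficient_multi_exp([gx, gx_secure], [3, 5])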

E. constant_time_compare(a, b)

  • Purpose: Attempts constant-time comparison.
  • Implementation: Converts inputs to bytes, pads, XORs, and checks for 0.
  • Security Note: Does not guarantee constant-time in Python.
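
A simplified version of the pad-XOR-accumulate approach described above might look as follows (nonnegative integers assumed; as noted, CPython still gives no real constant-time guarantee).

    def constant_time_compare(a, b) -> bool:
        # Normalize both inputs to bytes.
        if isinstance(a, int):
            a = a.to_bytes((a.bit_length() + 7) // 8 or 1, "big")
        if isinstance(b, int):
            b = b.to_bytes((b.bit_length() + 7) // 8 or 1, "big")
        if isinstance(a, str):
            a = a.encode("utf-8")
        if isinstance(b, str):
            b = b.encode("utf-8")

        # Record whether the lengths differ, then pad so the loop length is fixed.
        result = len(a) ^ len(b)
        length = max(len(a), len(b))
        a = a.rjust(length, b"\x00")
        b = b.rjust(length, b"\x00")

        # Accumulate every byte difference instead of returning early.
        for x, y in zip(a, b):
            result |= x ^ y
        return result == 0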

F. compute_checksum(data: bytes) -> int

  • Purpose: Calculates an integer checksum of serialized data.
  • Implementation: Hashes the input with blake3 (if available) or hashlib.sha3_256 and derives the checksum from the digest.
  • Security: Provides integrity checks for serialized data.
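
A minimal sketch of this fallback pattern; deriving the checksum from a 16-byte prefix of the digest is an assumption made for illustration.

    import hashlib

    def compute_checksum(data: bytes) -> int:
        # Prefer BLAKE3 when the optional dependency is installed.
        try:
            import blake3
            digest = blake3.blake3(data).digest()
        except ImportError:
            digest = hashlib.sha3_256(data).digest()
        # Derive an integer from a fixed-size prefix of the digest
        # (the 16-byte truncation here is an assumption).
        return int.from_bytes(digest[:16], "big")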

G. secure_redundant_execution(func, *args, sanitize_error_func=None, function_name=None, context=None, **kwargs)

  • Purpose: Attempts to mitigate fault injection.
  • Implementation: Executes func multiple times, shuffles order, adds delays, compares results.
  • Security Note: Not a perfect defense in Python.
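
Conceptually, the technique looks like the sketch below: run the function several times in shuffled order with small random delays and require all results to agree. The copy count, delay range, and error type here are illustrative assumptions; the real function routes failures through its sanitized error machinery.

    import secrets
    import time

    def secure_redundant_execution(func, *args, copies: int = 3, **kwargs):
        # Shuffle the execution order with a cryptographically strong source.
        order = list(range(copies))
        for i in range(copies - 1, 0, -1):
            j = secrets.randbelow(i + 1)
            order[i], order[j] = order[j], order[i]

        results = [None] * copies
        for idx in order:
            time.sleep(secrets.randbelow(1000) / 1_000_000)  # up to ~1 ms of jitter
            results[idx] = func(*args, **kwargs)

        # Every copy must agree, otherwise assume a fault was injected.
        if any(r != results[0] for r in results[1:]):
            raise RuntimeError("inconsistent results during redundant execution")
        return results[0]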

H. estimate_mpz_size(n)

  • Purpose: Estimates the memory required for a gmpy2.mpz number, given either the number itself or its bit length.
  • Implementation: Calculates the number of limbs required based on the bit length and then adds an estimated overhead for the gmpy2 object. The limb size is detected (8 bytes on 64-bit systems, 4 bytes on 32-bit).
  • Use Case: Used by MemoryMonitor to track memory usage.
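
A back-of-the-envelope version of this limb-based estimate, taking a bit length as input; the fixed overhead constant is an assumption.

    import sys

    def estimate_mpz_size(bit_length: int) -> int:
        # Limb size depends on the platform word size (8 bytes on 64-bit, 4 on 32-bit).
        limb_bytes = 8 if sys.maxsize > 2**32 else 4
        limbs = (bit_length + limb_bytes * 8 - 1) // (limb_bytes * 8)
        overhead = 64  # rough allowance for the gmpy2 object header
        return limbs * limb_bytes + overhead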

I. estimate_mpz_operation_memory(op_type, a_bits, b_bits=None)

  • Purpose: Estimates the memory required for various gmpy2.mpz operations (addition, multiplication, exponentiation, modulo).
  • Implementation: Calculates the approximate bit length of the result based on the operation type and the bit lengths of the operands. Then, it uses estimate_mpz_size to estimate the memory usage.
  • Use Case: Used by check_memory_safety to determine if an operation is likely to exceed memory limits.

J. estimate_exp_result_size(base_bits, exponent)

  • Purpose: Estimates the bit length of the result of an exponentiation (base^exponent).
  • Implementation: Provides a reasonable estimate, considering both the base's bit length and the exponent. For very large exponents, it caps the result to prevent extremely large estimates.
  • Use Case: Used by check_memory_safety when checking the safety of exponentiation operations.

K. get_system_memory()

  • Purpose: Gets the available system memory.
  • Implementation: Tries to use psutil to get the available memory. If psutil is not available, it returns a conservative estimate (1GB).
  • Use Case: Used by MemoryMonitor to determine the maximum allowed memory usage.
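
The psutil-with-fallback pattern described above is essentially the following sketch.

    def get_system_memory() -> int:
        try:
            import psutil
            return psutil.virtual_memory().available  # bytes currently available
        except ImportError:
            return 1 * 1024**3  # conservative 1 GB fallback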

L. check_memory_safety(operation, *args, max_size_mb=1024, reject_unknown=False)

  • Purpose: Checks if an operation can be performed safely without exceeding memory limits. This is a critical addition for preventing denial-of-service vulnerabilities.
  • Implementation:
    • Takes the operation type ("exp", "mul", "polynomial", "matrix", "polynomial_eval", etc.) and the arguments to the operation.
    • Estimates the memory required for the operation based on the operation type and the sizes of the arguments.
    • Compares the estimated memory usage to a maximum allowed size (max_size_mb, defaulting to 1024 MB).
    • Returns True if the operation is likely safe, False otherwise.
    • Handles unknown operations conservatively (can be configured to reject them).
    • Logs warnings and errors during the estimation process.
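
A hedged usage sketch: the operation-type string and max_size_mb keyword follow the signature above, but the import path and the exact argument layout per operation type are assumptions.

    # Hypothetical usage; the arguments for "exp" are assumed to be the operands.
    from feldman_vss import check_memory_safety

    base, exponent = 3, 2**20

    # Returns True when the estimated result stays within max_size_mb.
    if not check_memory_safety("exp", base, exponent, max_size_mb=1024):
        raise MemoryError("estimated memory use for base**exponent exceeds the limit")
    result = pow(base, exponent)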

M. MemoryMonitor

  • Purpose: Tracks estimated memory usage to prevent exceeding limits. This is a new class in version 0.8.0b2, addressing memory safety concerns.
  • Fields:
    • max_memory_mb: Maximum allowed memory usage (in MB).
    • current_usage: Current estimated usage (in bytes).
    • peak_usage: Peak usage recorded (in bytes).
  • Methods:
    • __init__(max_memory_mb=1024): Initializes with a maximum memory limit.
    • check_allocation(size_bytes): Checks if an allocation would exceed limits without modifying the tracker.
    • allocate(size_bytes): Tracks a memory allocation, raising MemoryError if it would exceed the limit.
    • release(size_bytes): Tracks memory release.
    • get_usage_stats(): Returns usage statistics (current, peak, max, percentages).
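
A usage sketch of the interface listed above; the import path and the allocate/release pairing around a computation are assumptions about intended usage.

    from feldman_vss import MemoryMonitor, estimate_mpz_size

    monitor = MemoryMonitor(max_memory_mb=1024)

    needed = estimate_mpz_size(4096)       # estimated bytes for a 4096-bit mpz
    if monitor.check_allocation(needed):   # non-mutating pre-check
        monitor.allocate(needed)           # raises MemoryError if over the limit
        try:
            pass                           # ... perform the large-integer work ...
        finally:
            monitor.release(needed)        # return the estimate to the tracker

    print(monitor.get_usage_stats())       # current, peak, max, percentages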

N. FeldmanVSS

  • Purpose: Main class implementing Feldman Verifiable Secret Sharing.

  • Fields:

    • field: Finite field for polynomial arithmetic.
    • config: VSSConfig instance.
    • group: CyclicGroup instance.
    • generator: Generator of the cyclic group.
    • hash_algorithm: Hash function (BLAKE3 or SHA3-256).
    • _byzantine_evidence: Stores Byzantine evidence.
  • __init__:

    • Takes field, config, and group.
    • Creates a CyclicGroup if not provided.
    • Initializes hash_algorithm.
  • Methods:

    • _sanitize_error(self, message, detailed_message=None): Sanitizes error messages based on config.sanitize_errors.
    • _raise_sanitized_error(self, error_class, message, detailed_message=None): Raises an error with a sanitized message.
    • _compute_hash_commitment_single(self, value, randomizer, index, context=None, extra_entropy=None): Computes a single hash-based commitment.
      • Deterministic Encoding: Uses self.group._enhanced_encode_for_hash for robust encoding.
      • Hashing: Uses the chosen hash algorithm.
      • Modular Reduction: Reduces the hash modulo the prime.
    • _compute_hash_commitment(self, value, randomizer, index, context=None, extra_entropy=None): Computes a hash-based commitment with redundant execution.
    • _compute_combined_randomizer(self, randomizers, x): Calculates the combined randomizer.
    • _compute_expected_commitment(self, commitments, x): Calculates the expected commitment.
    • _verify_hash_based_commitment(self, value, combined_randomizer, x, expected_commitment, context=None, extra_entropy=None): Verifies a hash-based commitment (using constant_time_compare).
    • create_commitments(self, coefficients, context=None): Creates hash-based commitments (calls create_enhanced_commitments); a simplified standalone sketch of the commitment step appears after this method list.
    • create_enhanced_commitments(self, coefficients, context=None): Creates enhanced commitments, handling low-entropy secrets.
      • Entropy Check: Checks if the secret has low entropy.
      • Extra Entropy: Adds extra entropy if needed.
      • Commitment Calculation: Uses _compute_hash_commitment.
      • Returns: (commitment, randomizer, extra_entropy) tuples.
    • _verify_share_hash_based_single(self, x, y, commitments): Single share verification.
    • verify_share(self, share_x, share_y, commitments): Verifies a share (with redundant execution).
    • batch_verify_shares(self, shares, commitments): Efficiently verifies multiple shares.
      • Small Batches: Calls verify_share.
      • Large Batches: Optimized approach (precomputation, batch processing, constant-time logic).
    • serialize_commitments(self, commitments): Serializes commitments.
      • Data Structure: Creates a dictionary with version, timestamp, generator, prime, commitments, hash_based flag.
      • msgpack Serialization: Uses msgpack.
      • Checksum: Computes a checksum.
      • Wrapper: Creates a wrapper with data and checksum.
      • Base64 Encoding: URL-safe base64.
    • deserialize_commitments(self, data): Deserializes commitments.
      • Base64 Decoding: URL-safe base64.
      • msgpack Deserialization: Unpacks the wrapper.
      • Checksum Verification: Verifies the checksum (constant_time_compare).
      • Data Extraction: Extracts data.
      • Validation: Extensive validation (version, structure, prime, generator, commitment values).
      • Reconstruction: Reconstructs commitment tuples.
      • Returns: Commitments, generator, prime, timestamp, hash_based flag.
    • verify_share_from_serialized(self, share_x, share_y, serialized_commitments): Verifies a share against serialized commitments.
    • clear_cache(): Clears caches.
    • __del__: Destructor (clears caches, wipes sensitive data).
    • refresh_shares(self, shares, threshold, total_shares, original_commitments=None, participant_ids=None): Refreshes shares (enhanced Chen & Lindell's Protocol 5).
    • _refresh_shares_additive(self, shares, threshold, total_shares, participant_ids): Core logic for additive share refreshing (Byzantine fault tolerance).
      1. Zero-Sharing Generation:
        • Each participant creates a sharing of zero.
        • Deterministic seeds for verification.
        • random.Random() is seeded with a cryptographically strong seed, keeping the zero-sharing deterministic for verification while remaining unpredictable.
        • Commitments to zero-sharing polynomial coefficients.
        • Zero commitment verification.
      2. Enhanced Verification:
        • Echo Broadcast: For consistency verification.
        • Byzantine Detection: _detect_byzantine_behavior.
        • Adaptive Quorum: Adapts to the threat level.
        • Batch Verification: batch_verify_shares.
        • Invalidity Evidence: _generate_invalidity_evidence.
        • Collusion Detection: _enhanced_collusion_detection.
      3. New Share Calculation:
        • Adds verified zero-shares to the original share.
        • _secure_sum_shares for constant-time summation.
      4. New Commitment Calculation:
        • Reconstructs coefficients from new shares.
        • Creates new commitments.
      5. Verification Data: Returns detailed verification data.
    • _secure_sum_shares(self, shares_dict, modulus): Constant-time summation of shares.
    • _get_original_share_value(self, participant_id, shares): Safely retrieves an original share.
    • _determine_security_threshold(self, base_threshold, verified_count, total_parties, invalid_parties): Calculates an adaptive security threshold.
    • _detect_collusion_patterns(self, invalid_shares_detected, party_ids): Detects potential collusion.
    • _create_invalidity_proof(self, party_id, participant_id, share, commitments): Creates proof of invalid share.
    • _generate_refresh_consistency_proof(self, participant_id, original_y, sum_zero_shares, new_y, verified_shares): Generates proof of correct refreshing.
    • _process_echo_consistency(self, zero_commitments, zero_sharings, participant_ids): Echo-consistency protocol (detects equivocation).
    • _calculate_optimal_batch_size(self, num_participants, num_shares): Determines optimal batch size.
    • _prepare_verification_batches(self, zero_sharings, zero_commitments, participant_ids, batch_size): Groups shares for batch verification.
    • _process_verification_batches(self, verification_batches): Processes verification batches (potentially in parallel).
    • _get_share_value_from_results(self, party_id, p_id, zero_sharings): Retrieves a share value.
    • _generate_invalidity_evidence(self, party_id, p_id, zero_sharings, zero_commitments, verification_proofs, share_verification, echo_consistency): Generates evidence for invalid shares.
    • _enhanced_collusion_detection(self, invalid_shares_detected, party_ids, echo_consistency): Enhanced collusion detection.
    • create_polynomial_proof(self, coefficients, commitments): Creates a zero-knowledge proof of polynomial knowledge.
      • Blinding Factors: Generates random blinding factors.
      • Blinding Commitments: Creates commitments to blinding factors.
      • Fiat-Shamir Transform: Generates a non-interactive challenge.
      • Responses: Computes responses.
      • Returns: Proof data.
    • verify_polynomial_proof(self, proof, commitments): Verifies a zero-knowledge proof.
      • Input Validation: Thorough validation.
      • Response Verification: Verifies each coefficient's proof.
      • Constant-Time Comparison: Uses constant_time_compare.
      • Returns: True if valid, False otherwise.
    • _detect_byzantine_behavior(self, party_id, commitments, shares, consistency_results=None): Detects Byzantine behavior (inconsistent shares, invalid commitments, equivocation).
    • detect_byzantine_party(self, party_id, commitments, shares, consistency_results=None): Public method to detect Byzantine behavior.
    • _evaluate_polynomial(self, coefficients, x): Evaluates a polynomial (Horner's method).
    • _reconstruct_polynomial_coefficients(self, x_values, y_values, threshold): Reconstructs coefficients (Lagrange interpolation).
      • Vandermonde Matrix: Creates a Vandermonde matrix.
      • Secure Gaussian Elimination: Solves the system using _secure_matrix_solve.
    • _secure_matrix_solve(self, matrix, vector, prime=None): Solves linear equations (Gaussian elimination with side-channel resistance).
      • Forward Elimination: Finds pivots and scales rows.
      • Secure Pivot Selection: Uses _find_secure_pivot.
      • Modular Inverse: Uses gmpy2.invert.
      • Constant-Time Operations: Attempts constant-time behavior where possible.
    • _find_secure_pivot(self, matrix, col, n): Finds a pivot (side-channel resistant).
    • create_commitments_with_proof(self, coefficients, context=None): Creates commitments and a proof in one operation.
    • verify_commitments_with_proof(self, commitments, proof): Verifies commitments and proof.
    • serialize_commitments_with_proof(self, commitments, proof): Serializes commitments and proof.
    • deserialize_commitments_with_proof(self, data): Deserializes commitments and proof.
    • verify_share_with_proof(self, share_x, share_y, serialized_data): Verifies a share and proof.
    • _verify_challenge_consistency(self, proof: ProofDict, commitments: CommitmentList) -> bool: Verifies challenge consistency.
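
To make the commitment pipeline above concrete, the following standalone sketch mirrors the deterministic-encode, hash, reduce-mod-p commitment step and the Horner-style polynomial evaluation. It uses a demo-sized prime and a simplified encoding; it illustrates the idea and is not the library's _compute_hash_commitment or _enhanced_encode_for_hash.

    import hashlib
    import secrets

    # Demo-sized prime only; the library uses a (safe) prime of at least 4096 bits.
    PRIME = 2**127 - 1

    def encode_for_hash(*values, context: str = "FeldmanVSS") -> bytes:
        # Deterministic, length-prefixed encoding (simplified stand-in for
        # _enhanced_encode_for_hash, which also adds type tagging).
        out = context.encode("utf-8")
        for v in values:
            if isinstance(v, int):
                v = v.to_bytes((v.bit_length() + 7) // 8 or 1, "big")
            elif isinstance(v, str):
                v = v.encode("utf-8")
            out += len(v).to_bytes(4, "big") + v
        return out

    def hash_commitment(value: int, randomizer: int, index: int) -> int:
        # Hash the encoded triple and reduce modulo the group prime.
        digest = hashlib.sha3_256(encode_for_hash(value, randomizer, index)).digest()
        return int.from_bytes(digest, "big") % PRIME

    def create_commitments(coefficients):
        # One (commitment, randomizer) pair per polynomial coefficient.
        result = []
        for i, coeff in enumerate(coefficients):
            r = secrets.randbelow(PRIME)
            result.append((hash_commitment(coeff, r, i), r))
        return result

    def evaluate_polynomial(coefficients, x):
        # Horner's method, as used for share generation.
        acc = 0
        for coeff in reversed(coefficients):
            acc = (acc * x + coeff) % PRIME
        return acc

    # Dealer: degree-2 polynomial whose constant term is the secret (threshold 3).
    secret = 1234567
    coefficients = [secret, secrets.randbelow(PRIME), secrets.randbelow(PRIME)]
    commitments = create_commitments(coefficients)
    share_for_participant_1 = (1, evaluate_polynomial(coefficients, 1))

In the library itself, a participant's share is later checked by recombining the stored randomizers (_compute_combined_randomizer), computing the expected commitment for the share's x value (_compute_expected_commitment), and comparing with constant_time_compare.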

III. Factory Functions and Integration Helpers

  • get_feldman_vss(field, **kwargs): Creates a FeldmanVSS instance (post-quantum secure).
  • create_vss_from_shamir(shamir_instance): Creates a FeldmanVSS compatible with a ShamirSecretSharing instance.
  • integrate_with_pedersen(feldman_vss, pedersen_vss, shares, coefficients): Integrates Feldman VSS with Pedersen VSS.
  • create_dual_commitment_proof(feldman_vss, pedersen_vss, coefficients, feldman_commitments, pedersen_commitments): Creates a proof that Feldman and Pedersen commitments are to the same coefficients.
  • verify_dual_commitments(feldman_vss, pedersen_vss, feldman_commitments, pedersen_commitments, proof): Verifies the dual commitment proof.

IV. Security Analysis and Known Vulnerabilities

A. Strengths:

  • Post-Quantum Security: Hash-based commitments.
  • Provable Security (in principle): Based on established protocols.
  • Efficiency Optimizations: Batch verification, precomputation, caching, multi-exponentiation.
  • Robustness Features: Input validation, error handling, fault injection/Byzantine countermeasures.
  • Serialization with Integrity: Checksums.
  • Integration with Pedersen VSS: Binding and hiding.
  • Share Refreshing: Secure protocol.
  • Memory Safety: MemoryMonitor and check_memory_safety.

B. Known Vulnerabilities (Explicitly Acknowledged):

  1. Timing Side-Channels in Matrix Operations: _find_secure_pivot and _secure_matrix_solve are vulnerable in Python.
  2. Non-Constant-Time Comparison: constant_time_compare is not truly constant-time in Python.

C. Other Potential Concerns:

  • secure_redundant_execution Limitations: Not a perfect defense in Python.
  • Pure Python Implementation: Reliance on Python for critical operations is a limitation.

V. Conclusion and Future Directions

Version 0.8.0b2 of feldman_vss.py provides a significantly enhanced implementation of Feldman's VSS, with a strong emphasis on post-quantum security, memory safety, and robustness. The addition of memory monitoring, improved Byzantine fault tolerance, and zero-knowledge proofs makes it a more secure and practical solution for secret sharing.

However, the known vulnerabilities related to timing side-channels (due to the pure Python implementation) remain a critical concern. The script acknowledges these limitations and should be considered experimental until addressed. The planned resolution is to integrate with Rust components for security-critical operations in future versions. This will provide much stronger guarantees of constant-time execution.
