How version 0.7.5-beta works in detail


Overall Purpose

The FeldmanVSS class implements Feldman's Verifiable Secret Sharing (VSS) scheme with a strong emphasis on post-quantum security. It builds upon Shamir's Secret Sharing, adding a layer of verifiability that allows participants to confirm that their shares are consistent with a publicly known commitment, without revealing the secret itself. The post-quantum security comes from the use of hash-based commitments (using BLAKE3 or SHA3-256) instead of relying solely on discrete logarithm assumptions.

Class Structure and Key Attributes

  • __init__(self, field, config=None, group=None): The constructor initializes the VSS scheme.
    • field: This is an object representing the finite field over which the polynomial arithmetic will be performed. It must have a .prime attribute, which is the prime modulus of the field. This is typically inherited from a corresponding Shamir Secret Sharing implementation.
    • config: A VSSConfig object (or None, in which case a default configuration is used). This controls several important parameters:
      • prime_bits: The size (in bits) of the prime modulus. The default is 4096, providing a good level of post-quantum security.
      • safe_prime: A boolean indicating whether to use a safe prime (where (p-1)/2 is also prime). Safe primes are strongly recommended for enhanced security, as they make certain attacks more difficult. Defaults to True.
      • secure_serialization: Whether to use a secure serialization format that includes checksums. Defaults to True.
      • use_blake3: Whether to prefer the BLAKE3 hash function (if available). BLAKE3 is generally faster and considered more secure than SHA3-256. Defaults to True.
      • cache_size: The size of the LRU cache used for exponentiation.
    • group: A CyclicGroup object. This handles the group operations (exponentiation, multiplication, etc.) needed for the commitment scheme. If None is provided, a CyclicGroup is created internally using the parameters from config.
    • generator: Stores the generator of the cyclic group. This is used for creating commitments.
    • _commitment_cache: A cache to store intermediate values during verification, improving efficiency.
    • hash_algorithm: Selects either blake3.blake3 or hashlib.sha3_256 based on availability and the config.use_blake3 setting.
    • _byzantine_evidence: A dictionary to store the evidence during the execution of _process_echo_consistency.
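
Putting the constructor together, a minimal usage sketch (the module name feldman_vss is an assumption for illustration; FeldmanVSS, VSSConfig, the config fields, and the .prime requirement come from this page):

```python
from feldman_vss import FeldmanVSS, VSSConfig  # module name is an assumption

def make_vss(field):
    # `field` must expose a `.prime` attribute, typically coming from a
    # companion Shamir Secret Sharing implementation.
    config = VSSConfig(
        prime_bits=4096,   # default: 4096-bit prime modulus
        safe_prime=True,   # default: (p-1)/2 must also be prime
        use_blake3=True,   # default: prefer BLAKE3 over SHA3-256
    )
    return FeldmanVSS(field, config=config)  # group is created internally
```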

Core Methods

  1. create_commitments(self, coefficients):

    • Purpose: Creates commitments to the coefficients of the secret polynomial. These commitments are public and allow verification of shares without revealing the secret.
    • How it works: It calls create_enhanced_commitments to handle the core logic.
    • Input: coefficients: A list of coefficients [a_0, a_1, ..., a_{k-1}], where a_0 is the secret.
    • Output: A list of tuples. Each tuple contains:
      • The commitment (a hash).
      • A randomizer used in the commitment calculation.
      • Optionally, extra entropy (for low-entropy secrets).
  2. create_enhanced_commitments(self, coefficients, context=None):

    • Purpose: Creates hash-based commitments, specifically addressing the potential weakness of low-entropy secrets. This is a crucial enhancement for post-quantum security.
    • How it works (see the sketch after this item):
      • Converts coefficients to integers modulo the field prime.
      • Checks if the secret (the first coefficient) might have low entropy (bit length less than 256).
      • For each coefficient:
        • Generates a secure randomizer (r_i).
        • If the secret might have low entropy, generates extra entropy (secrets.token_bytes(32)).
        • Calls _compute_hash_commitment to compute the actual hash-based commitment.
        • Stores the commitment, randomizer, and extra entropy (if any) in a tuple.
    • Inputs:
      • coefficients: The polynomial coefficients.
      • context: An optional string for domain separation (to prevent collisions if the same values are used in different contexts).
    • Outputs: A list of (commitment, randomizer, extra_entropy) tuples.
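
A condensed sketch of the loop just described (an illustrative restatement, not the library's code; the method names and the low-entropy threshold are taken from the steps above):

```python
import secrets

def enhanced_commitments_sketch(vss, coefficients, context=None):
    prime = vss.field.prime
    coeffs = [c % prime for c in coefficients]       # reduce mod field prime
    low_entropy = coeffs[0].bit_length() < 256       # secret may be guessable
    result = []
    for i, coeff in enumerate(coeffs):
        r_i = secrets.randbelow(prime)               # secure randomizer
        extra = secrets.token_bytes(32) if low_entropy else None
        c_i = vss._compute_hash_commitment(coeff, r_i, i, context, extra)
        result.append((c_i, r_i, extra))
    return result
```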
  3. _compute_hash_commitment_single(self, value, randomizer, index, context=None, extra_entropy=None):

    • Purpose: This is the core of the commitment scheme. It computes a single hash-based commitment.
    • How It Works (detailed explanation; a sketch follows this item):
      1. Deterministic Encoding: Converts value and randomizer to gmpy2.mpz objects to ensure consistent, platform-independent behavior. It then converts these to fixed-length byte strings using .to_bytes(byte_length, 'big'), where byte_length is determined by the prime's bit length. This is critical for security.
      2. Element Preparation: Creates a list elements containing:
        • VSS_VERSION: The version string of the VSS implementation.
        • "COMMIT": A fixed domain separator.
        • context or "polynomial": A context string (defaults to "polynomial").
        • The byte representation of the value.
        • The byte representation of the randomizer.
        • extra_entropy (if provided).
      3. Enhanced Encoding: Uses the method _enhanced_encode_for_hash to serialize the elements unambiguously, so that distinct element sequences cannot produce the same byte string (and hence the same hash).
      4. Hashing: Uses either BLAKE3 (if available and requested) or SHA3-256 to hash the encoded data.
      5. Modulo Reduction: Takes the hash output modulo the group's prime to ensure the commitment is within the group.
    • Inputs:
      • value: The value to commit to.
      • randomizer: A random value.
      • index: The index of the coefficient (not used in the hash calculation itself, but kept for API compatibility with older versions).
      • context: An optional context string.
      • extra_entropy: Optional extra entropy (for low-entropy secrets).
    • Outputs: The hash commitment (an integer).
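
The encoding-and-hashing flow of steps 1 to 5, compressed into a standalone sketch. The exact framing done by _enhanced_encode_for_hash is not spelled out on this page, so the plain concatenation below (and the version-string placeholder) are simplifying assumptions:

```python
import hashlib

def hash_commitment_sketch(value, randomizer, prime, context=None, extra_entropy=None):
    byte_length = (prime.bit_length() + 7) // 8           # fixed-length encoding
    elements = [
        b"VSS_VERSION",                                   # placeholder version string
        b"COMMIT",                                        # fixed domain separator
        (context or "polynomial").encode(),
        int(value % prime).to_bytes(byte_length, "big"),  # deterministic big-endian
        int(randomizer % prime).to_bytes(byte_length, "big"),
    ]
    if extra_entropy:
        elements.append(extra_entropy)
    # The real code uses _enhanced_encode_for_hash and BLAKE3 when available;
    # plain concatenation with SHA3-256 stands in here.
    digest = hashlib.sha3_256(b"".join(elements)).digest()
    return int.from_bytes(digest, "big") % prime          # reduce into the group
```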
  4. _compute_hash_commitment(self, value, randomizer, index, context=None, extra_entropy=None):

    • Purpose: Adds fault injection resistance to the commitment calculation.
    • How it works: Calls secure_redundant_execution to execute _compute_hash_commitment_single multiple times and verify that the results are identical. This helps prevent attackers from injecting faults to leak information.
    • Inputs/Outputs: Same as _compute_hash_commitment_single.
  5. verify_share(self, share_x, share_y, commitments):

    • Purpose: Verifies that a given share (share_x, share_y) is valid with respect to the provided commitments.
    • How it works:
      • Input validation.
      • Calls secure_redundant_execution to perform the verification redundantly, protecting against fault injection attacks. The actual verification logic is inside _verify_share_hash_based_single.
    • Inputs:
      • share_x: The x-coordinate of the share.
      • share_y: The y-coordinate of the share (the share value).
      • commitments: The list of commitments.
    • Outputs: True if the share is valid, False otherwise.
  6. _verify_share_hash_based_single(self, x, y, commitments):

    • Purpose: Performs the core logic of verifying a single share against hash-based commitments.
    • How it Works:
      1. Extract Randomizers: Gets the randomizers from the commitments list.
      2. Compute Combined Randomizer: Calculates r_combined, which is the randomizer that should have been used for the share at point x. This is done using _compute_combined_randomizer.
      3. Compute Expected Commitment: Calculates expected_commitment, which is the commitment value that should correspond to the share at point x. This is done using _compute_expected_commitment.
      4. Extract Extra Entropy: Retrieves extra_entropy from the first commitment (if present). This is only used for the constant term of the polynomial.
      5. Verify: Calls _verify_hash_based_commitment to perform the actual hash comparison.
    • Inputs/Outputs: Same as the public verify_share method.
  7. _verify_hash_based_commitment(self, value, combined_randomizer, x, expected_commitment, context=None, extra_entropy=None):

    • Purpose: Verifies a single hash-based commitment.
    • How it works:
      • Computes the hash commitment of the provided value and combined_randomizer.
      • Compares the computed commitment with the expected_commitment using constant_time_compare to prevent timing attacks.
    • Inputs:
      • value: The value being verified.
      • combined_randomizer: The combined randomizer for the share.
      • x: The x-coordinate of the share.
      • expected_commitment: The expected commitment value.
      • context: The optional context string.
      • extra_entropy: Optional extra entropy.
    • Outputs: True if the commitment is valid, False otherwise.
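
This method is essentially recompute-and-compare, as sketched below (a direct restatement of the two bullets above; constant_time_compare is the module-level helper documented later on this page):

```python
def verify_commitment_sketch(vss, value, combined_randomizer, x,
                             expected_commitment, context=None, extra_entropy=None):
    # Recompute the commitment for the claimed value, then compare in
    # constant time so the mismatch position cannot leak through timing.
    computed = vss._compute_hash_commitment(
        value, combined_randomizer, x, context, extra_entropy)
    return constant_time_compare(computed, expected_commitment)
```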
  8. _compute_combined_randomizer(self, randomizers, x):

    • Purpose: Computes the combined randomizer used for a share at a specific x value. This is part of evaluating the "randomizer polynomial" at x.
    • How it works: Evaluates a polynomial with the randomizers as coefficients at the point x.
    • Inputs:
      • randomizers: A list of randomizers (one for each coefficient).
      • x: The point at which to evaluate.
    • Outputs: The combined randomizer.
  9. _compute_expected_commitment(self, commitments, x):

    • Purpose: Computes the expected commitment value for a share at a specific x value. This is part of evaluating the "commitment polynomial" at x.
    • How it works: Evaluates a polynomial where the coefficients are the commitment values at the point x.
    • Inputs:
      • commitments: The list of commitments.
      • x: The point at which to evaluate.
    • Outputs: The expected commitment value.
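
Both helpers reduce to the same operation: evaluating a polynomial whose coefficients are the randomizers (or the commitment values) at the point x. A sketch, assuming Horner's method (the page does not state which evaluation scheme the implementation uses):

```python
def evaluate_at_sketch(coeffs, x, prime):
    # Horner's method: a_0 + x*(a_1 + x*(a_2 + ...)), reduced mod the prime.
    result = 0
    for coeff in reversed(coeffs):
        result = (result * x + coeff) % prime
    return result

# r_combined          = evaluate_at_sketch(randomizers, x, prime)
# expected_commitment = evaluate_at_sketch([c for c, *_ in commitments], x, prime)
```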
  10. batch_verify_shares(self, shares, commitments):

    • Purpose: Efficiently verifies multiple shares against the same set of commitments. This is much faster than verifying each share individually.
    • How it works (see the sketch after this item):
      • Input validation.
      • Handles small batches (fewer than 5 shares) by calling the standard verify_share on each.
      • For larger batches:
        • Precomputes powers of x for each share to avoid redundant calculations.
        • Calculates the combined randomizer and expected commitment for each share.
        • Processes shares in batches (default size 32), calling _verify_hash_based_commitment for each.
      • Uses constant-time boolean operations (all_valid &= is_valid) to combine results without leaking timing information.
    • Inputs:
      • shares: A list of (x, y) share tuples.
      • commitments: The list of commitments.
    • Outputs: A tuple:
      • all_valid: True if all shares are valid, False otherwise.
      • results: A dictionary mapping share indices to their individual verification results (True or False).
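
The core of the batch path, minus the power precomputation and batching machinery, looks roughly like this (a sketch built from the method names documented above):

```python
def batch_verify_sketch(vss, shares, commitments):
    randomizers = [r for _, r, *_ in commitments]
    extra = commitments[0][2] if len(commitments[0]) > 2 else None
    all_valid, results = True, {}
    for idx, (x, y) in enumerate(shares):
        r_combined = vss._compute_combined_randomizer(randomizers, x)
        expected = vss._compute_expected_commitment(commitments, x)
        is_valid = vss._verify_hash_based_commitment(
            y, r_combined, x, expected, extra_entropy=extra)
        results[idx] = is_valid
        all_valid &= is_valid   # no short-circuit: timing stays uniform
    return all_valid, results
```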
  11. serialize_commitments(self, commitments):

    • Purpose: Serializes the commitment data into a string for storage or transmission. This includes a checksum for integrity verification.
    • How it works:
      • Input validation.
      • Creates a dictionary containing:
        • version: The VSS version.
        • timestamp: The current timestamp.
        • generator: The group generator.
        • prime: The group prime.
        • commitments: A list of (commitment, randomizer, extra_entropy) tuples, converted to integers and hex strings as needed.
        • hash_based: True (to indicate this is a hash-based commitment).
      • Packs the dictionary using msgpack.
      • Computes a checksum of the packed data using compute_checksum.
      • Creates a wrapper dictionary containing the packed data and the checksum.
      • Packs the wrapper using msgpack.
      • Encodes the result using URL-safe base64.
    • Inputs: commitments: The list of commitments.
    • Outputs: A base64-encoded string.
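
The pack, checksum, wrap, pack, encode pipeline in miniature (the wrapper key names here are illustrative assumptions; compute_checksum is the module-level helper described later on this page):

```python
import base64
import msgpack

def serialize_sketch(payload: dict) -> str:
    packed = msgpack.packb(payload)                   # inner commitment dict
    wrapper = {"data": packed,                        # assumed key names
               "checksum": compute_checksum(packed)}  # integrity tag
    return base64.urlsafe_b64encode(msgpack.packb(wrapper)).decode("ascii")
```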
  12. deserialize_commitments(self, data):

    • Purpose: Deserializes commitment data and verifies its integrity.
    • How it works:
      • Input validation.
      • Decodes the base64-encoded string.
      • Unpacks the checksum wrapper using msgpack.
      • Verifies the checksum using compute_checksum and constant_time_compare. This is a critical security check to detect tampering.
      • Unpacks the actual commitment data using msgpack.
      • Validates the VSS version.
      • Validates the structure of the deserialized data.
      • Validates that the prime is actually prime (and a safe prime, if required).
      • Validates that the generator is a valid generator for the group.
      • Validates that the commitment and randomizer values are within the correct range.
      • Reconstructs the (commitment, randomizer, extra_entropy) tuples, converting values back to gmpy2.mpz objects as needed.
    • Inputs: data: The serialized commitment data.
    • Outputs: A tuple: (commitments, generator, prime, timestamp, is_hash_based).
  13. verify_share_from_serialized(self, share_x, share_y, serialized_commitments):

    • Purpose: Verifies a share against serialized commitment data.
    • How It Works:
      • Deserializes the commitments using deserialize_commitments.
      • Creates a temporary CyclicGroup and FeldmanVSS instance using the deserialized parameters.
      • Calls verify_share on the temporary FeldmanVSS instance.
    • Inputs:
      • share_x: x-coordinate of share.
      • share_y: y-coordinate of share.
      • serialized_commitments: Serialized commitments.
    • Outputs: True if the share is valid, False otherwise.
  14. refresh_shares(self, shares, threshold, total_shares, original_commitments=None, participant_ids=None):

    • Purpose: Implements a secure share refreshing protocol. This allows participants to generate new shares for the same secret without needing to reconstruct the secret. This is essential for long-term security, as it prevents an attacker who compromises shares over time from eventually learning the secret. This implementation uses an optimized version of Chen & Lindell's Protocol 5.
    • How it works:
      • Input validation.
      • Calls the internal method _refresh_shares_additive to perform the core logic.
    • Inputs:
      • shares: A dictionary mapping participant IDs to their shares ({id: (x, y)}).
      • threshold: The secret sharing threshold.
      • total_shares: The total number of shares.
      • original_commitments: (Optional) The original commitments (not used in the core logic, but can be included for external verification).
      • participant_ids: (Optional) A list of participant IDs. If not provided, numeric IDs are used.
    • Outputs: A tuple: (new_shares, new_commitments, verification_data).
  15. _refresh_shares_additive(self, shares, threshold, total_shares, participant_ids):

    • Purpose: This is the heart of the share refreshing implementation. It's an optimized version of Chen & Lindell's Protocol 5, designed for asynchronous environments and with improved Byzantine fault tolerance.
    • How it works (high-level overview; a sketch of step 3 follows this item):
      1. Zero Sharing: Each participant creates a Shamir sharing of zero with a threshold of t. This is done using a deterministic random number generator seeded with a master secret and the participant's ID. This determinism is crucial for verification.
      2. Verification: Participants exchange these zero shares and verify them against commitments. This step includes:
        • Echo Broadcast: A mechanism to ensure consistency and detect if a party is sending different shares to different participants (equivocation).
        • Byzantine Detection: Identifies parties that are behaving maliciously (e.g., sending invalid shares, equivocating).
        • Batch Verification: Uses batch_verify_shares for efficiency.
        • Adaptive Quorum: Adjusts the required number of valid shares based on the observed level of Byzantine behavior.
      3. Share Combination: Each participant adds the verified zero shares they received to their original share. Since the zero shares sum to zero, this results in a new sharing of the same secret.
      4. New Commitments: New commitments are created for the refreshed shares.
      5. Verification data: Returns a comprehensive set of verification data, including information about any detected Byzantine behavior, and proofs of correct refreshing.
    • Key Optimizations and Security Features:
      • Asynchronous Operation: Designed to work reliably even with network delays and parties being temporarily offline.
      • Reduced Communication: Uses deterministic randomness and verification to minimize the amount of data that needs to be exchanged.
      • Improved Byzantine Fault Tolerance: Uses echo broadcast, adaptive quorum-based detection, and detailed evidence collection to identify and exclude malicious parties.
      • Efficient Verification: Leverages batch verification techniques.
      • Constant-Time Operations: Uses _secure_sum_shares to perform summations in constant time, preventing timing attacks.
      • Collusion Detection: Includes _enhanced_collusion_detection to identify potential collusion among malicious parties.
      • Cryptographic Proofs: Generates proofs to demonstrate the validity of the share refreshing process.
    • Inputs/Outputs: Same as refresh_shares.
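
Step 3 ("Share Combination") is the arithmetic heart of the protocol and fits in a few lines. A sketch (the real implementation routes this through _secure_sum_shares for constant-time summation):

```python
def combine_refreshed_share_sketch(original_y, verified_zero_shares, prime):
    # Each zero sharing encodes the secret 0, so adding the verified zero
    # shares re-randomizes the share without changing the shared secret.
    new_y = original_y
    for zero_y in verified_zero_shares:
        new_y = (new_y + zero_y) % prime
    return new_y
```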

The remaining private methods are helpers used within _refresh_shares_additive to support this process:

  • _secure_sum_shares(self, shares_dict, modulus): Performs a constant-time summation of shares.
  • _get_original_share_value(self, participant_id, shares): Safely retrieves a participant's original share.
  • _determine_security_threshold(self, base_threshold, verified_count, total_parties, invalid_parties): Calculates the adaptive security threshold.
  • _detect_collusion_patterns(self, invalid_shares_detected, party_ids): (Deprecated) Basic collusion detection.
  • _create_invalidity_proof(self, party_id, participant_id, share, commitments): Creates a proof that a share is invalid.
  • _generate_refresh_consistency_proof(self, participant_id, original_y, sum_zero_shares, new_y, verified_shares): Creates a proof of correct share refreshing.
  • _process_echo_consistency(self, zero_commitments, zero_sharings, participant_ids): Implements the echo consistency protocol.
  • _calculate_optimal_batch_size(self, num_participants, num_shares): Determines the best batch size for verification.
  • _prepare_verification_batches(self, zero_sharings, zero_commitments, participant_ids, batch_size): Groups shares for batch verification.
  • _process_verification_batches(self, verification_batches): Processes verification batches (potentially in parallel).
  • _get_share_value_from_results(self, party_id, p_id, zero_sharings): Retrieves a share value.
  • _generate_invalidity_evidence(self, party_id, p_id, zero_sharings, zero_commitments, verification_proofs, share_verification, echo_consistency): Generates evidence for invalid shares.
  • _enhanced_collusion_detection(self, invalid_shares_detected, party_ids, echo_consistency): Improved collusion detection.
  • _evaluate_polynomial(self, coefficients, x): Evaluates a polynomial at point x.
  • _reconstruct_polynomial_coefficients(self, x_values, y_values, threshold): Reconstructs polynomial coefficients using Lagrange interpolation.
  • _secure_matrix_solve(self, matrix, vector, prime=None): Solves a linear system using Gaussian elimination, with countermeasures against side-channel attacks.
  • _find_secure_pivot(self, matrix, col, n): Finds a pivot for Gaussian elimination in a way that resists timing attacks.
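
For a concrete feel of the last two helpers, here is textbook Lagrange interpolation evaluated at x = 0 (recovering the constant term). The real _reconstruct_polynomial_coefficients recovers all coefficients and adds the side-channel countermeasures listed above; this sketch shows only the underlying math:

```python
def lagrange_at_zero_sketch(x_values, y_values, prime):
    secret = 0
    for i, (x_i, y_i) in enumerate(zip(x_values, y_values)):
        num, den = 1, 1
        for j, x_j in enumerate(x_values):
            if i != j:
                num = (num * -x_j) % prime          # product of (0 - x_j)
                den = (den * (x_i - x_j)) % prime   # product of (x_i - x_j)
        term = y_i * num * pow(den, -1, prime)      # Lagrange basis at x = 0
        secret = (secret + term) % prime
    return secret
```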
  16. create_polynomial_proof(self, coefficients, commitments):

    • Purpose: Creates a zero-knowledge proof of knowledge (ZK-PoK) of the polynomial coefficients. This allows a prover to convince a verifier that they know the coefficients without revealing them.
    • How it works (simplified):
      1. Blinding Factors: Generates random "blinding factors."
      2. Blinding Commitments: Creates commitments to these blinding factors.
      3. Challenge: Generates a non-interactive challenge using the Fiat-Shamir transform. This involves hashing a combination of public information (generator, prime, commitments, blinding commitments, timestamp).
      4. Responses: Computes responses based on the blinding factors, challenge, and the original coefficients.
      5. Proof Structure: Returns a dictionary containing the blinding commitments, challenge, responses, randomizers used for the original commitments, randomizers used for the blinding commitments, and a timestamp.
    • Inputs:
      • coefficients: The polynomial coefficients.
      • commitments: The commitments to the coefficients.
    • Outputs: A dictionary representing the proof.
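
The five steps above follow the familiar sigma-protocol shape, made non-interactive with Fiat-Shamir. A schematic sketch; the transcript encoding, the response formula, and the dictionary keys are assumptions rather than the library's exact schema:

```python
import hashlib
import secrets
import time

def polynomial_proof_sketch(vss, coefficients, commitments):
    prime = vss.field.prime
    # 1. Blinding factors and their randomizers.
    blindings = [secrets.randbelow(prime) for _ in coefficients]
    blind_rands = [secrets.randbelow(prime) for _ in coefficients]
    # 2. Commitments to the blinding factors.
    blind_commits = [vss._compute_hash_commitment(b, r, i)
                     for i, (b, r) in enumerate(zip(blindings, blind_rands))]
    # 3. Fiat-Shamir challenge over the public transcript.
    timestamp = int(time.time())
    transcript = repr((vss.generator, prime, commitments,
                       blind_commits, timestamp)).encode()
    challenge = int.from_bytes(hashlib.sha3_256(transcript).digest(), "big") % prime
    # 4. Responses mix blinding factors, challenge, and coefficients.
    responses = [(b + challenge * a) % prime
                 for b, a in zip(blindings, coefficients)]
    # 5. Assumed proof structure.
    return {"blinding_commitments": blind_commits, "challenge": challenge,
            "responses": responses, "blinding_randomizers": blind_rands,
            "timestamp": timestamp}
```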
  17. verify_polynomial_proof(self, proof, commitments):

    • Purpose: Verifies a ZK-PoK of polynomial coefficients.
    • How it works (simplified):
      1. Input Validation: Checks that the proof and commitments have the expected structure.
      2. Verification Loop: Iterates through the responses in the proof. For each response:
        • Computes the commitment implied by the response and its randomizer.
        • Combines the blinding commitment with the original commitment and the challenge to obtain the expected commitment.
        • Compares the computed and expected commitments using a constant-time comparison.
      3. Result: Returns True if all checks pass, False otherwise.
    • Inputs:
      • proof: The proof data.
      • commitments: The commitments to the coefficients.
    • Outputs: True if the proof is valid, False otherwise.
  18. create_commitments_with_proof(self, coefficients, context=None):

    • Purpose: Creates commitments and generates proof in one combined operation.
    • How it works:
      • Input validation.
      • Creates commitments using create_commitments.
      • Generates zero-knowledge proof using create_polynomial_proof.
    • Inputs:
      • coefficients: The polynomial coefficients.
      • context: The optional context string.
    • Outputs: A tuple: (commitments, proof).
  19. verify_commitments_with_proof(self, commitments, proof):

    • Purpose: Verifies that a zero-knowledge proof demonstrates knowledge of the polynomial coefficients.
    • How it works:
      • Input validation.
      • Calls verify_polynomial_proof.
    • Inputs:
      • commitments: List of commitments.
      • proof: The proof data structure from create_polynomial_proof.
    • Outputs: True if the proof is valid, False otherwise.
  20. serialize_commitments_with_proof(self, commitments, proof):

    • Purpose: Serializes commitments and associated proof for storage or transmission.
    • How it works:
      • Input validation.
      • Serializes the commitments using the existing serialize_commitments method logic.
      • Processes the proof data for serialization by converting the necessary values to integers.
      • Creates a dictionary with the serialized commitment information, along with the proof and the has_proof and hash_based flags.
      • Packs the dictionary using msgpack and encodes the result using URL-safe base64.
    • Inputs:
      • commitments: The list of commitments.
      • proof: The proof data structure.
    • Outputs: A base64-encoded string.
  21. deserialize_commitments_with_proof(self, data):

    • Purpose: Deserializes commitment data that includes a zero-knowledge proof.
    • How it works:
      • Input validation.
      • Decodes base64 encoded string.
      • Deserializes commitments using the existing deserialize_commitments method.
      • Checks whether proof data is present and extracts it.
      • Validates proof structure.
      • Reconstructs the proof with proper structure.
    • Inputs:
      • data: Serialized data.
    • Outputs: A tuple: (commitments, proof, generator, prime, timestamp).
  22. verify_share_with_proof(self, share_x, share_y, serialized_data):

    • Purpose: Comprehensive verification of a share against serialized commitments and proof.
    • How it works:
      • Input validation.
      • Deserializes the commitments and proof.
      • Creates a temporary CyclicGroup and FeldmanVSS instance.
      • Verifies both the share (verify_share) and the proof (verify_commitments_with_proof).
    • Inputs:
      • share_x: x-coordinate of share.
      • share_y: y-coordinate of share.
      • serialized_data: Serialized commitments with proof.
    • Outputs: A tuple of two booleans: share validity and proof validity.
  23. detect_byzantine_party(self, party_id, commitments, shares, consistency_results=None):

    • Purpose: A public method to detect Byzantine behavior from a specific party.
    • How it Works:
      • Calls the private method _detect_byzantine_behavior.
    • Inputs:
      • party_id: ID of the party.
      • commitments: Commitments from the party.
      • shares: Shares distributed by the party.
      • consistency_results: (Optional) Results from echo consistency checks.
    • Outputs:
      • Tuple (is_byzantine, evidence_details).
  24. _detect_byzantine_behavior(self, party_id, commitments, shares, consistency_results=None):

    • Purpose: Detects various types of Byzantine (malicious) behavior.
    • How it works:
      1. Commitment Check: Checks if the commitments are valid (e.g., not empty, correctly formatted). For hash-based commitments, it verifies that the first commitment corresponds to the value 0 (since this is a sharing of zero).
      2. Share Consistency: Checks if the shares distributed by the party are consistent with the commitments.
      3. Equivocation Check: Uses the _byzantine_evidence (collected during echo consistency checks) to see if the party sent different shares to different participants.
    • Inputs:
      • party_id: The ID of the party being checked.
      • commitments: The commitments made by the party.
      • shares: The shares distributed by the party.
      • consistency_results: (Optional) Results from echo consistency checks.
    • Outputs: A tuple: (is_byzantine, evidence). is_byzantine is a boolean indicating whether Byzantine behavior was detected. evidence is a dictionary containing details about the detected misbehavior.
  25. clear_cache(self): Clears the internal caches (_commitment_cache and the CyclicGroup's cached_powers). This is important for managing memory, especially in long-running processes.

  26. __del__(self): Destructor that clears caches and attempts to delete sensitive data (like the generator).

Helper Functions (Outside the Class)

  • constant_time_compare(a, b): Compares two values (integers, strings, or bytes) in constant time. This is essential to prevent timing attacks, where an attacker could learn information about secret values by measuring how long the comparison takes.
  • compute_checksum(data: bytes) -> int: Computes a checksum of the given data, used for integrity checks during serialization.
  • secure_redundant_execution(func: Callable, *args, **kwargs) -> Any: Executes a function multiple times and checks if the results are identical. This helps detect and mitigate fault injection attacks.
  • get_feldman_vss(field, **kwargs): Factory function for creating a FeldmanVSS instance.
  • create_vss_from_shamir(shamir_instance): Creates a FeldmanVSS instance from a ShamirSecretSharing instance.
  • integrate_with_pedersen(feldman_vss, pedersen_vss, shares, coefficients): Integrates Feldman VSS with Pedersen VSS.
  • create_dual_commitment_proof(feldman_vss, pedersen_vss, coefficients, feldman_commitments, pedersen_commitments): Creates a zero-knowledge proof relating the Feldman and Pedersen commitments.
  • verify_dual_commitments(feldman_vss, pedersen_vss, feldman_commitments, pedersen_commitments, proof): Verifies such a dual-commitment proof.
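
As one example of how such a helper can be built, a constant-time comparison sketch (the library's own version is pure Python and may differ; this one normalizes inputs to bytes and defers to hmac.compare_digest):

```python
import hmac

def constant_time_compare_sketch(a, b) -> bool:
    def to_bytes(v):
        if isinstance(v, int):
            return v.to_bytes(max((v.bit_length() + 7) // 8, 1), "big")
        return v.encode() if isinstance(v, str) else bytes(v)
    x, y = to_bytes(a), to_bytes(b)
    # hmac.compare_digest runs in time independent of where the inputs
    # differ; revealing a length mismatch is standard for such helpers.
    return len(x) == len(y) and hmac.compare_digest(x, y)
```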

Security Considerations and Potential Vulnerabilities (Addressed in the Code)

  • Post-Quantum Security: The use of hash-based commitments makes the scheme resistant to attacks from quantum computers.
  • Low-Entropy Secrets: The create_enhanced_commitments method addresses the vulnerability of low-entropy secrets by adding extra random entropy.
  • Timing Attacks:
    • constant_time_compare is used throughout to prevent timing attacks during comparisons.
    • secure_exp in CyclicGroup uses constant-time exponentiation.
    • _secure_matrix_solve and _find_secure_pivot are designed to be side-channel resistant during polynomial reconstruction.
  • Fault Injection Attacks:
    • secure_redundant_execution is used in critical operations (commitment creation, share verification) to detect and mitigate fault injection.
  • Byzantine Behavior: The share refreshing protocol (_refresh_shares_additive) includes extensive mechanisms to detect and handle malicious participants, including:
    • Echo broadcast for consistency checks.
    • Adaptive quorum-based Byzantine detection.
    • Detailed evidence collection for invalid shares and equivocation.
    • Collusion detection.
  • Deterministic Hashing: The code uses deterministic byte encoding for integers in hash calculations to ensure that commitments are consistent across different platforms and executions.
  • Safe Primes: Uses safe primes by default.
  • Thread Safety: Uses SafeLRUCache for exponentiation.

Potential Vulnerabilities (Acknowledged but Not Fully Addressed - Beta Version)

The docstring clearly identifies these, which is excellent for transparency:

  1. Timing Side-Channels (in Pure Python): While functions like constant_time_compare, _secure_matrix_solve, and _find_secure_pivot aim for constant-time operation, they are written in pure Python. The Python interpreter, garbage collection, and underlying hardware can introduce timing variations that might leak information. The ideal solution is to use a well-vetted cryptographic library or implement these functions in a lower-level language (e.g., C).
  2. secure_redundant_execution Assumptions: This function assumes the provided function is strictly deterministic and has no side effects. If there's any non-determinism, the redundant executions might produce different results, leading to a false positive SecurityError.
  3. Bias in hash_to_group: The rejection sampling in hash_to_group has a fallback to modular reduction, introducing a slight statistical bias. While likely negligible for large primes, it's a theoretical weakness.
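
To make point 3 concrete, here is what rejection sampling with a modular-reduction fallback typically looks like (a generic sketch; the hash choice, counter encoding, and attempt limit are assumptions, not the library's code):

```python
import hashlib

def hash_to_group_sketch(data: bytes, prime: int, max_attempts: int = 1000) -> int:
    byte_length = (prime.bit_length() + 7) // 8
    candidate = 0
    for counter in range(max_attempts):
        digest = hashlib.shake_256(data + counter.to_bytes(4, "big")).digest(byte_length)
        candidate = int.from_bytes(digest, "big")
        if candidate < prime:
            return candidate        # accepted: uniform over [0, prime)
    return candidate % prime        # fallback: introduces a slight bias
```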

False-Positive Vulnerabilities (Explained in Docstring)

  • Use of random.Random(): The docstring explains why using random.Random() seeded with cryptographically strong material is secure in the specific context of _refresh_shares_additive.
