Patch UnicodeDecodeError in HEAR #231


Merged
merged 3 commits into from
Dec 4, 2024

Conversation

reactive-firewall
Collaborator

@reactive-firewall reactive-firewall commented Dec 2, 2024

Implement Mitigations for UnicodeDecodeError in HEAR

Related Issue:

Summary by CodeRabbit

  • New Features

    • Enhanced error handling for incoming data, allowing the server to process invalid UTF-8 data without raising exceptions.
    • Improved server shutdown functionality in response to specific commands.
  • Tests

    • Added a test case to verify the handling of invalid UTF-8 data, ensuring stability and robustness during data processing.

…#188 -)

Changes in file multicast/hear.py:
 - Implemented changes to ensure defined (ignore) behavior when dealing with non-UTF-8 data.

Changes in file tests/test_hear_data_processing.py:
 - Implemented a new test for the related changes.

Changes in file multicast/hear.py:
 - Clarified the docstring in handle(self) related to #188.
@reactive-firewall reactive-firewall linked an issue Dec 2, 2024 that may be closed by this pull request
Contributor

coderabbitai bot commented Dec 2, 2024

Walkthrough

The changes in this pull request focus on enhancing error handling and control flow in the HearUDPHandler class within the multicast/hear.py file. Modifications include the addition of a try-except block to handle UnicodeDecodeError exceptions and early returns for invalid data or socket conditions. Additionally, the handle_error method in the McastServer class has been updated to allow for graceful server shutdown upon receiving a "STOP" command. A corresponding test has been added to ensure that the handler correctly processes invalid UTF-8 data without raising exceptions.

Changes

File | Change Summary
multicast/hear.py | Updated handle method in HearUDPHandler to improve error handling and control flow.
multicast/hear.py | Modified handle_error method in McastServer to shut down the server on receiving "STOP".
tests/test_hear_data_processing.py | Added test_handle_with_invalid_utf8_data to verify handling of invalid UTF-8 data.
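The graceful "STOP" shutdown path can be illustrated with a minimal sketch (hypothetical helper name; the real logic lives in `McastServer.handle_error`): when the error raised by a handler carries the "STOP" command, the server shuts down instead of reporting the error.

```python
def handle_error_sketch(exc, shutdown):
    """Hypothetical stand-in for McastServer.handle_error.

    If the handler's error carries the "STOP" command, trigger a
    graceful shutdown and report True; otherwise report False so the
    error can fall through to the default handling.
    """
    if "STOP" in str(exc):
        shutdown()  # graceful server shutdown on "STOP"
        return True
    return False  # any other error is left to default handling


calls = []
print(handle_error_sketch(RuntimeError("STOP"), lambda: calls.append("shutdown")))  # True
print(handle_error_sketch(RuntimeError("boom"), lambda: calls.append("shutdown")))  # False
print(calls)  # ['shutdown'] -- shutdown ran exactly once
```

In the real server the shutdown callback would be the `socketserver` `shutdown()` method; the sketch only shows the dispatch decision.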

Assessment against linked issues

Objective Addressed Explanation
Track handling of UnicodeDecodeError in HearUDPHandler (#188)

Possibly related PRs

  • [COVERAGE] improving coverage slightly #145: This PR modifies the multicast/hear.py file and includes changes to the HearUDPHandler class, which is directly related to the error handling improvements made in the main PR.
  • [PATCH] specialize exception hear (- WIP #154 -) #179: This PR specializes exception handling in the HearUDPHandler class by replacing RuntimeError with a custom exception for the 'STOP' command, which aligns with the changes made in the main PR regarding error handling and server shutdown commands.

Suggested labels

Linter, Documentation

🐰 In the land of multicast, where data flows,
A rabbit hops swiftly, as error handling grows.
With UTF-8 whispers, we silence the dread,
And when "STOP" is spoken, the server's well-fed.
So here’s to the changes, both clever and bright,
In the code we now cherish, all feels just right! 🌟


📜 Recent review details

Configuration used: .coderabbit.yaml
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between 1bc42c5 and 24a6db0.

📒 Files selected for processing (1)
  • tests/test_hear_data_processing.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • tests/test_hear_data_processing.py


@reactive-firewall reactive-firewall self-assigned this Dec 2, 2024
@github-actions github-actions bot added the Multicast, Python Lang, Testing, and CI labels Dec 2, 2024

codecov bot commented Dec 2, 2024

❌ 2 Tests Failed:

Tests completed | Failed | Passed | Skipped
1208 | 2 | 1206 | 16
View the top 2 failed tests by shortest run time
tests.test_fuzz.HypothesisTestSuite::test_invalid_Error_WHEN_cli_called_GIVEN_invalid_fuzz_input
Stack Traces | 20.8s run time
+ Exception Group Traceback (most recent call last):
  |   File ".........................../Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/unittest/case.py", line 58, in testPartExecutor
  |     yield
  |   File ".........................../Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/unittest/case.py", line 651, in run
  |     self._callTestMethod(testMethod)
  |     ~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^
  |   File ".........................../Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/unittest/case.py", line 606, in _callTestMethod
  |     if method() is not None:
  |        ~~~~~~^^
  |   File ".../multicast/tests/test_fuzz.py", line 129, in test_invalid_Error_WHEN_cli_called_GIVEN_invalid_fuzz_input
  |     @settings(deadline=300)
  |                   ^^^^^^^
  |   File ".........................../Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13................../site-packages/hypothesis/core.py", line 1758, in wrapped_test
  |     raise the_error_hypothesis_found
  | hypothesis.errors.FlakyFailure: Hypothesis test_invalid_Error_WHEN_cli_called_GIVEN_invalid_fuzz_input(self=<tests.test_fuzz.HypothesisTestSuite testMethod=test_invalid_Error_WHEN_cli_called_GIVEN_invalid_fuzz_input>, text='XFY') produces unreliable results: Falsified on the first call but did not on a subsequent one (1 sub-exception)
  | Falsifying example: test_invalid_Error_WHEN_cli_called_GIVEN_invalid_fuzz_input(
  |     self=<tests.test_fuzz.HypothesisTestSuite testMethod=test_invalid_Error_WHEN_cli_called_GIVEN_invalid_fuzz_input>,
  |     text='XFY',
  | )
  | Unreliable test timings! On an initial run, this test took 409.85ms, which exceeded the deadline of 300.00ms, but on a subsequent run it took 263.72 ms, which did not. If you expect this sort of variability in your test timings, consider turning deadlines off for this test by setting deadline=None.
  | 
  | You can reproduce this example by temporarily adding @reproduce_failure('6.122.1', b'AAEhAQ8BIgA=') as a decorator on your test case
  +-+---------------- 1 ----------------
    | Traceback (most recent call last):
    |   File ".........................../Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13................../site-packages/hypothesis/core.py", line 1060, in _execute_once_for_engine
    |     result = self.execute_once(data)
    |   File ".........................../Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13................../site-packages/hypothesis/core.py", line 999, in execute_once
    |     result = self.test_runner(data, run)
    |   File ".........................../Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13................../site-packages/hypothesis/core.py", line 709, in default_executor
    |     return function(data)
    |   File ".........................../Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13................../site-packages/hypothesis/core.py", line 974, in run
    |     return test(*args, **kwargs)
    |   File ".../multicast/tests/test_fuzz.py", line 129, in test_invalid_Error_WHEN_cli_called_GIVEN_invalid_fuzz_input
    |     @settings(deadline=300)
    |                   ^^^^^^^^^
    |   File ".........................../Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13................../site-packages/hypothesis/core.py", line 906, in test
    |     raise DeadlineExceeded(
    |         datetime.timedelta(seconds=runtime), self.settings.deadline
    |     )
    | hypothesis.errors.DeadlineExceeded: Test took 409.85ms, which exceeds the deadline of 300.00ms
    +------------------------------------
tests.test_fuzz.HypothesisTestSuite::test_invalid_Error_WHEN_cli_called_GIVEN_invalid_fuzz_input
Stack Traces | 31.5s run time
self = <tests.test_fuzz.HypothesisTestSuite testMethod=test_invalid_Error_WHEN_cli_called_GIVEN_invalid_fuzz_input>

    @given(st.text(alphabet=string.ascii_letters + string.digits, min_size=3, max_size=15))
>   @settings(deadline=300)

tests/test_fuzz.py:129: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

args = (<tests.test_fuzz.HypothesisTestSuite testMethod=test_invalid_Error_WHEN_cli_called_GIVEN_invalid_fuzz_input>, 'HTD')
kwargs = {}, arg_drawtime = 0.00044325600003958243, arg_stateful = 0.0
arg_gctime = 0.046189482999949405, start = 264.32920805, result = None
finish = 264.660934559, in_drawtime = 0.0, in_stateful = 0.0, in_gctime = 0.0
runtime = 0.33172650900002054

    @proxies(self.test)
    def test(*args, **kwargs):
        arg_drawtime = math.fsum(data.draw_times.values())
        arg_stateful = math.fsum(data._stateful_run_times.values())
        arg_gctime = gc_cumulative_time()
        start = time.perf_counter()
        try:
            with unwrap_markers_from_group(), ensure_free_stackframes():
                result = self.test(*args, **kwargs)
        finally:
            finish = time.perf_counter()
            in_drawtime = math.fsum(data.draw_times.values()) - arg_drawtime
            in_stateful = (
                math.fsum(data._stateful_run_times.values()) - arg_stateful
            )
            in_gctime = gc_cumulative_time() - arg_gctime
            runtime = finish - start - in_drawtime - in_stateful - in_gctime
            self._timing_features = {
                "execute:test": runtime,
                "overall:gc": in_gctime,
                **data.draw_times,
                **data._stateful_run_times,
            }
    
        if (current_deadline := self.settings.deadline) is not None:
            if not is_final:
                current_deadline = (current_deadline // 4) * 5
            if runtime >= current_deadline.total_seconds():
>               raise DeadlineExceeded(
                    datetime.timedelta(seconds=runtime), self.settings.deadline
                )
E               hypothesis.errors.DeadlineExceeded: Test took 331.73ms, which exceeds the deadline of 300.00ms
E               Falsifying example: test_invalid_Error_WHEN_cli_called_GIVEN_invalid_fuzz_input(
E                   self=<tests.test_fuzz.HypothesisTestSuite testMethod=test_invalid_Error_WHEN_cli_called_GIVEN_invalid_fuzz_input>,
E                   text='HTD',
E               )
E               
E               You can reproduce this example by temporarily adding @reproduce_failure('6.122.1', b'AAERAR0BDQA=') as a decorator on your test case

.../hostedtoolcache/Python/3.12.7.../x64/lib/python3.12.../site-packages/hypothesis/core.py:906: DeadlineExceeded
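The Hypothesis internals shown above explain why these failures are flaky: non-final runs are checked against a raised threshold of `(current_deadline // 4) * 5`, i.e. 1.25x the configured deadline, while the final replay uses the deadline itself. With `deadline=300`, exploration only trips at 375 ms or more, yet a 331.73 ms replay still fails. A quick stdlib check of that arithmetic:

```python
import datetime

# The configured per-example deadline from the failing test's decorator.
deadline = datetime.timedelta(milliseconds=300)

# Non-final runs are judged against (deadline // 4) * 5, a 25% grace margin.
raised = (deadline // 4) * 5
print(raised.total_seconds())   # 0.375 -- exploration threshold in seconds

# 331.73 ms passes exploration but fails the final 300 ms check,
# which is the "unreliable results" pattern reported above.
print(0.33173 < raised.total_seconds() and 0.33173 > deadline.total_seconds())  # True
```

As the traceback itself suggests, setting `deadline=None` in `@settings` removes this timing sensitivity for tests with naturally variable runtimes.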


@reactive-firewall reactive-firewall marked this pull request as ready for review December 2, 2024 07:35
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Outside diff range and nitpick comments (2)
multicast/hear.py (1)

431-434: Consider adding debug logging for invalid UTF-8 data.

While the silent handling of UnicodeDecodeError is appropriate, adding debug-level logging would help with troubleshooting without affecting normal operation.

 try:
     data = data.decode('utf8') if isinstance(data, bytes) else str(data)
 except UnicodeDecodeError:  # pragma: no cover
+    if (_sys.stdout.isatty()):  # pragma: no cover
+        print(f"Debug: Received invalid UTF-8 data from {self.client_address[0]}")
     return  # silently ignore invalid UTF-8 data -- fail quickly.
tests/test_hear_data_processing.py (1)

183-210: Consider strengthening test assertions.

While the test effectively verifies that no exception is raised, it could be enhanced by adding assertions to verify the handler's state after processing invalid data.

 try:
     # Should silently ignore invalid UTF-8 data
     handler.handle()
     # If no exception is raised, the test passes
+    # Verify handler state after processing invalid data
+    self.assertIsNone(handler.server)  # Server should remain None
+    self.assertEqual(handler.client_address, ("224.0.0.1", _fixture_port_num))
 except Exception as e:
     self.fail(f"Handler raised an unexpected exception: {e}")
 finally:
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between 39b1dd7 and 1bc42c5.

📒 Files selected for processing (2)
  • multicast/hear.py (2 hunks)
  • tests/test_hear_data_processing.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
tests/test_hear_data_processing.py (2)

Pattern **/*.py: When reviewing Python code for this project:

  1. Prioritize portability over clarity, especially when dealing with cross-Python compatibility. However, with the priority in mind, do still consider improvements to clarity when relevant.
  2. As a general guideline, consider the code style advocated in the PEP 8 standard (excluding the use of spaces for indentation) and evaluate suggested changes for code style compliance.
  3. As a style convention, consider the code style advocated in CEP-8 and evaluate suggested changes for code style compliance.
  4. As a general guideline, try to provide any relevant, official, and supporting documentation links to any tool's suggestions in review comments. This guideline is important for posterity.
  5. As a general rule, undocumented function definitions and class definitions in the project's Python code are assumed incomplete. Please consider suggesting a short summary of the code for any of these incomplete definitions as docstrings when reviewing.
  6. Verify Flake8's configuration file is located at ".flake8.ini"

Pattern tests/*: When reviewing test code:

  1. Prioritize portability over clarity, especially when dealing with cross-Python compatibility. However, with the priority in mind, do still consider improvements to clarity when relevant.
  2. As a general guideline, consider the code style advocated in the PEP 8 standard (excluding the use of spaces for indentation) and evaluate suggested changes for code style compliance.
  3. As a style convention, consider the code style advocated in CEP-8 and evaluate suggested changes for code style compliance, pointing out any violations discovered.
  4. As a general guideline, try to provide any relevant, official, and supporting documentation links to any tool's suggestions in review comments. This guideline is important for posterity.
  5. As a project rule, Python source files with names prefixed by the string "test_" and located in the project's "tests" directory are the project's unit-testing code. It is safe, albeit a heuristic, to assume these are considered part of the project's minimal acceptance testing unless a justifying exception to this assumption is documented.
  6. As a project rule, any files without extensions and with names prefixed by either the string "check_" or the string "test_", and located in the project's "tests" directory, are the project's non-unit test code. "Non-unit test" in this context refers to any type of testing other than unit testing, such as (but not limited to) functional testing, style linting, regression testing, etc. It can also be assumed that non-unit testing code is usually written as Bash shell scripts.
multicast/hear.py (1)

Pattern **/*.py: When reviewing Python code for this project:

  1. Prioritize portability over clarity, especially when dealing with cross-Python compatibility. However, with the priority in mind, do still consider improvements to clarity when relevant.
  2. As a general guideline, consider the code style advocated in the PEP 8 standard (excluding the use of spaces for indentation) and evaluate suggested changes for code style compliance.
  3. As a style convention, consider the code style advocated in CEP-8 and evaluate suggested changes for code style compliance.
  4. As a general guideline, try to provide any relevant, official, and supporting documentation links to any tool's suggestions in review comments. This guideline is important for posterity.
  5. As a general rule, undocumented function definitions and class definitions in the project's Python code are assumed incomplete. Please consider suggesting a short summary of the code for any of these incomplete definitions as docstrings when reviewing.
  6. Verify Flake8's configuration file is located at ".flake8.ini"
📓 Learnings (1)
📓 Common learnings
Learnt from: reactive-firewall
PR: reactive-firewall/multicast#0
File: :0-0
Timestamp: 2024-11-21T02:05:08.096Z
Learning: Reactive-firewall prefers scripts to include explicit data processing steps that produce the expected output, ensuring accurate and useful results.
Learnt from: reactive-firewall
PR: reactive-firewall/multicast#0
File: :0-0
Timestamp: 2024-11-12T10:29:50.461Z
Learning: For future scripts, reactive-firewall prefers to use only POSIX built-ins (like `printf` instead of `echo`) to align with CEP-8 conventions.
Learnt from: reactive-firewall
PR: reactive-firewall/multicast#0
File: :0-0
Timestamp: 2024-11-21T02:05:08.096Z
Learning: Reactive-firewall prefers to specify the full raw URL when fetching files from gists to ensure correct retrieval of content, particularly when the file is in markdown format.
Learnt from: reactive-firewall
PR: reactive-firewall/multicast#0
File: :0-0
Timestamp: 2024-11-29T04:45:25.464Z
Learning: For future reviews, remember that reactive-firewall prefers documentation to be presented in a tone appropriate for developers, avoiding raw source code dumps, and focusing on clear explanations and real-world examples.
Learnt from: reactive-firewall
PR: reactive-firewall/multicast#0
File: :0-0
Timestamp: 2024-11-12T10:29:55.563Z
Learning: For future scripts, reactive-firewall prefers to keep the shebang line with an extra space (e.g., `#! /bin/bash`) as per project style.
Learnt from: reactive-firewall
PR: reactive-firewall/multicast#184
File: LICENSE.md:36-43
Timestamp: 2024-11-14T07:03:35.207Z
Learning: Reactive-firewall prefers that I verify URLs before suggesting changes, and avoid raising hypothetical issues without verification.
Learnt from: reactive-firewall
PR: reactive-firewall/multicast#0
File: :0-0
Timestamp: 2024-11-29T01:10:18.136Z
Learning: For future reviews, remember that reactive-firewall prefers documentation to be presented in a tone appropriate for developers, following the project's style and conventions.
Learnt from: reactive-firewall
PR: reactive-firewall/multicast#184
File: tests/check_pip:259-265
Timestamp: 2024-11-15T19:32:16.878Z
Learning: Reactive-firewall prefers that code changes adhere to SOLID design principles, specifically avoiding unnecessary complexity (AHA principle), maintaining the Single Responsibility Principle, and ensuring complex work is done on separate development branches (Open/Closed Principle).
Learnt from: reactive-firewall
PR: reactive-firewall/multicast#110
File: docs/USAGE.md:66-66
Timestamp: 2024-11-12T10:29:55.563Z
Learning: Reactive-firewall prefers to use "Caveat" as a heading instead of "Caution".
Learnt from: reactive-firewall
PR: reactive-firewall/multicast#0
File: :0-0
Timestamp: 2024-10-08T03:15:05.171Z
Learning: For future reviews, remember that when drafting or updating CEP (Convention Enhancement Proposal) documents, it's important to adhere to the project's standard format and include all relevant sections such as Motivation, Detailed Explanation, Implementation Guidelines, Impact, and References.
🔇 Additional comments (1)
multicast/hear.py (1)

380-385: LGTM! Clear and accurate documentation update.

The docstring updates properly document the new behavior for handling UnicodeDecodeError and None checks.

Changes in file tests/test_hear_data_processing.py:
 - Improved verification for test_handle_with_invalid_utf8_data(self).
@reactive-firewall reactive-firewall merged commit 31b604e into master Dec 4, 2024
138 checks passed
@reactive-firewall reactive-firewall deleted the patch-unicodedecodeerror-188 branch December 4, 2024 05:19
@reactive-firewall reactive-firewall mentioned this pull request Dec 5, 2024
19 tasks
@reactive-firewall reactive-firewall mentioned this pull request Dec 19, 2024
22 tasks
Labels
  • CI: Continuous Integration Tooling
  • Multicast: Any main project file changes
  • Python Lang: Changes to Python source code
  • Testing: Something can be verified
Projects
None yet
Development

Successfully merging this pull request may close these issues.

Track handling of UnicodeDecodeError in HearUDPHandler
1 participant