
File descriptor leak using multiprocessing.Queue #131794

Open
@gduvalsc

Description


Bug report

Bug description:

"Too many open files" can occur with multiprocessing.Queue, even if the protocol used to work with the queue is correct.

The file descriptor leak can be demonstrated with two small programs:

File "ko.py"

import os
import multiprocessing

pid = os.getpid()
cmd = f'lsof -p {pid} | grep -i pipe | wc -l'
cmdfull = f'lsof -p {pid} | grep -i pipe'
queues = {}

for i in range(100):
    queues[i] = multiprocessing.Queue()
    os.system(cmd)
    queues[i].close()        # closed without ever being written to
    queues[i].join_thread()
    os.system(cmd)           # the pipe count keeps growing

os.system(cmdfull)

File "ok.py"

import os
import multiprocessing

pid = os.getpid()
cmd = f'lsof -p {pid} | grep -i pipe | wc -l'
cmdfull = f'lsof -p {pid} | grep -i pipe'
queues = {}

for i in range(100):
    queues[i] = multiprocessing.Queue()
    queues[i].put('Hello')   # a single put() before close()
    os.system(cmd)
    queues[i].close()
    queues[i].join_thread()
    os.system(cmd)           # the pipe count stays stable

os.system(cmdfull)
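The same growth can be observed without lsof by probing file descriptors directly from Python. The helper below is not part of the original report; it is a hypothetical, portable sketch that counts the descriptors open in the current process by calling os.fstat() on each candidate fd:

```python
import os
import multiprocessing

def count_open_fds(max_fd=1024):
    # Probe each candidate descriptor with os.fstat(); an OSError
    # (EBADF) means that fd is not open in this process.
    n = 0
    for fd in range(max_fd):
        try:
            os.fstat(fd)
        except OSError:
            continue
        n += 1
    return n

before = count_open_fds()
q = multiprocessing.Queue()   # allocates a pipe internally
print('fds after creation:', count_open_fds() - before)
q.close()
q.join_thread()
# On affected versions this difference does not return to zero
# when the queue was never written to.
print('fds after close():', count_open_fds() - before)
```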

A queue that remains empty is not abnormal (e.g., an error queue when no errors occur).

The workaround is to create a class extending multiprocessing.Queue with a close() method that puts an extra message into the queue before performing the actual close.
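One possible shape of that workaround is sketched below. The class name and the use of None as the dummy item are illustrative, not from the report; consumers would need to ignore the dummy item. The idea is that the first put() starts the queue's feeder thread, whose shutdown path releases the underlying pipe descriptors:

```python
import multiprocessing
import multiprocessing.queues

class DrainingQueue(multiprocessing.queues.Queue):
    """multiprocessing.Queue whose close() first puts a dummy item.

    Putting an item starts the queue's internal feeder thread, and the
    feeder thread's shutdown path closes the underlying pipe, so a
    queue that was never written to no longer leaks its descriptors.
    """

    def __init__(self, maxsize=0, *, ctx=None):
        # multiprocessing.queues.Queue requires an explicit context.
        super().__init__(maxsize, ctx=ctx or multiprocessing.get_context())

    def close(self):
        try:
            self.put(None)   # dummy message: forces the feeder thread to start
        except ValueError:
            pass             # queue already closed
        super().close()
```

Usage mirrors a plain queue: create, optionally put/get, then close() and join_thread(); the dummy item only matters for queues that would otherwise stay empty.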

This issue has been observed on Linux and macOS, but it does not appear to be specific to any particular operating system.

CPython versions tested on:

3.12

Operating systems tested on:

Linux
