Suppose some library has a persistent error condition that causes it to create a Socket object, allocate a socket from the operating system, and then throw an exception. Does the file descriptor (and the underlying socket) get released? Well, it depends.
If the garbage collector manages to run, then yes. But the GC tracks memory, not file descriptors. So if the error condition causes retries, the application can burn through file descriptors very quickly without ever allocating enough memory to trigger a collection. Other requests for sockets or open files then start crapping out with EMFILE. This could even cause a deadlock: if the rest of the system needs to open files or sockets to get any work done, and can't, then no more dirty objects get created, so the garbage collector never runs to free up those file handles.
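The pattern is easiest to see in miniature. Here is a minimal Python sketch of both halves: a version that leaks the descriptor when an exception escapes, and one that releases it on every path. The function names (`fragile`, `robust`) and the `make_request` callback are mine, for illustration only; the real bug lives in library code we don't control.

```python
import socket

def fragile(make_request):
    # The failure mode described above: the OS descriptor is
    # allocated, then an exception escapes before close() runs.
    # Nothing frees the fd until the GC happens to collect 's'.
    s = socket.socket()          # descriptor allocated here
    make_request(s)              # if this raises, the fd leaks
    s.close()

def robust(make_request):
    # try/finally releases the descriptor on every path,
    # independent of when (or whether) the GC runs.
    s = socket.socket()
    try:
        make_request(s)
    finally:
        s.close()
```

In a retry loop, `fragile` is the version that exhausts the descriptor table; `robust` holds at most one descriptor at a time no matter how often the request fails.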
I wish this were only theoretical, but we have encountered this bug in two entirely different pieces of code. One was mainly our fault and not too difficult to fix. The current incarnation is deep within a library. We might be able to limit the rate of retries (which would be good anyway), but I'm not sure whether that fixes the problem or merely defers it.
(Python could in theory have the same problem, but in practice it seems to clean up open subprocesses, file handles, etc. pretty rapidly; CPython's reference counting releases most objects as soon as the last reference goes away, without waiting for a full collection. Maybe I just haven't pushed it hard enough. It also has the 'with' construct to handle explicit cleanup.)
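For completeness, here is what that explicit cleanup looks like. Python's socket objects are context managers, so `with` closes the descriptor deterministically when the block exits, even via an exception, with no dependence on the GC at all. The function name is mine, and the raised error just stands in for the library's failure:

```python
import socket

def with_cleanup():
    # 'with' guarantees s.close() runs when the block exits,
    # even when the body raises; the descriptor never waits
    # on garbage collection.
    try:
        with socket.socket() as s:
            raise RuntimeError("persistent error condition")
    except RuntimeError:
        pass
    return s  # already closed by the context manager
```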