File: //lib64/python3.8/__pycache__/asyncore.cpython-38.opt-1.pyc
"""Basic infrastructure for asynchronous socket service clients and servers.

There are only two ways to have a program on a single processor do "more
than one thing at a time". Multi-threaded programming is the simplest and
most popular way to do it, but there is another very different technique,
that lets you have nearly all the advantages of multi-threading, without
actually using multiple threads. It's really only practical if your program
is largely I/O bound. If your program is CPU bound, then pre-emptive
scheduled threads are probably what you really need. Network servers are
rarely CPU-bound, however.
If your operating system supports the select() system call in its I/O
library (and nearly all do), then you can use it to juggle multiple
communication channels at once; doing other work while your I/O is taking
place in the "background." Although this strategy can seem strange and
complex, especially at first, it is in many ways easier to understand and
control than multi-threaded programming. The module documented here solves
many of the difficult problems for you, making the task of building
sophisticated high-performance network servers and clients a snap.
"""

import select
import socket
import sys
import time
import warnings

import os
from errno import EALREADY, EINPROGRESS, EWOULDBLOCK, ECONNRESET, EINVAL, \
     ENOTCONN, ESHUTDOWN, EISCONN, EBADF, ECONNABORTED, EPIPE, EAGAIN, \
     errorcode

_DISCONNECTED = frozenset({ECONNRESET, ENOTCONN, ESHUTDOWN, ECONNABORTED,
                           EPIPE, EBADF})

try:
    socket_map
except NameError:
    socket_map = {}


def _strerror(err):
    try:
        return os.strerror(err)
    except (ValueError, OverflowError, NameError):
        if err in errorcode:
            return errorcode[err]
        return "Unknown error %s" % err


class ExitNow(Exception):
    pass


_reraised_exceptions = (ExitNow, KeyboardInterrupt, SystemExit)


def read(obj):
    try:
        obj.handle_read_event()
    except _reraised_exceptions:
        raise
    except:
        obj.handle_error()