1:mod:`multiprocessing` --- Process-based parallelism 2==================================================== 3 4.. module:: multiprocessing 5 :synopsis: Process-based parallelism. 6 7**Source code:** :source:`Lib/multiprocessing/` 8 9-------------- 10 11Introduction 12------------ 13 14:mod:`multiprocessing` is a package that supports spawning processes using an 15API similar to the :mod:`threading` module. The :mod:`multiprocessing` package 16offers both local and remote concurrency, effectively side-stepping the 17:term:`Global Interpreter Lock` by using subprocesses instead of threads. Due 18to this, the :mod:`multiprocessing` module allows the programmer to fully 19leverage multiple processors on a given machine. It runs on both Unix and 20Windows. 21 22The :mod:`multiprocessing` module also introduces APIs which do not have 23analogs in the :mod:`threading` module. A prime example of this is the 24:class:`~multiprocessing.pool.Pool` object which offers a convenient means of 25parallelizing the execution of a function across multiple input values, 26distributing the input data across processes (data parallelism). The following 27example demonstrates the common practice of defining such functions in a module 28so that child processes can successfully import that module. This basic example 29of data parallelism using :class:`~multiprocessing.pool.Pool`, :: 30 31 from multiprocessing import Pool 32 33 def f(x): 34 return x*x 35 36 if __name__ == '__main__': 37 with Pool(5) as p: 38 print(p.map(f, [1, 2, 3])) 39 40will print to standard output :: 41 42 [1, 4, 9] 43 44 45The :class:`Process` class 46~~~~~~~~~~~~~~~~~~~~~~~~~~ 47 48In :mod:`multiprocessing`, processes are spawned by creating a :class:`Process` 49object and then calling its :meth:`~Process.start` method. :class:`Process` 50follows the API of :class:`threading.Thread`. A trivial example of a 51multiprocess program is :: 52 53 from multiprocessing import Process 54 55 def f(name): 56 print('hello', name) 57 58 if __name__ == '__main__': 59 p = Process(target=f, args=('bob',)) 60 p.start() 61 p.join() 62 63To show the individual process IDs involved, here is an expanded example:: 64 65 from multiprocessing import Process 66 import os 67 68 def info(title): 69 print(title) 70 print('module name:', __name__) 71 print('parent process:', os.getppid()) 72 print('process id:', os.getpid()) 73 74 def f(name): 75 info('function f') 76 print('hello', name) 77 78 if __name__ == '__main__': 79 info('main line') 80 p = Process(target=f, args=('bob',)) 81 p.start() 82 p.join() 83 84For an explanation of why the ``if __name__ == '__main__'`` part is 85necessary, see :ref:`multiprocessing-programming`. 86 87 88 89Contexts and start methods 90~~~~~~~~~~~~~~~~~~~~~~~~~~ 91 92.. _multiprocessing-start-methods: 93 94Depending on the platform, :mod:`multiprocessing` supports three ways 95to start a process. These *start methods* are 96 97 *spawn* 98 The parent process starts a fresh python interpreter process. The 99 child process will only inherit those resources necessary to run 100 the process objects :meth:`~Process.run` method. In particular, 101 unnecessary file descriptors and handles from the parent process 102 will not be inherited. Starting a process using this method is 103 rather slow compared to using *fork* or *forkserver*. 104 105 Available on Unix and Windows. The default on Windows. 106 107 *fork* 108 The parent process uses :func:`os.fork` to fork the Python 109 interpreter. The child process, when it begins, is effectively 110 identical to the parent process. All resources of the parent are 111 inherited by the child process. Note that safely forking a 112 multithreaded process is problematic. 113 114 Available on Unix only. The default on Unix. 115 116 *forkserver* 117 When the program starts and selects the *forkserver* start method, 118 a server process is started. From then on, whenever a new process 119 is needed, the parent process connects to the server and requests 120 that it fork a new process. The fork server process is single 121 threaded so it is safe for it to use :func:`os.fork`. No 122 unnecessary resources are inherited. 123 124 Available on Unix platforms which support passing file descriptors 125 over Unix pipes. 126 127.. versionchanged:: 3.4 128 *spawn* added on all unix platforms, and *forkserver* added for 129 some unix platforms. 130 Child processes no longer inherit all of the parents inheritable 131 handles on Windows. 132 133On Unix using the *spawn* or *forkserver* start methods will also 134start a *semaphore tracker* process which tracks the unlinked named 135semaphores created by processes of the program. When all processes 136have exited the semaphore tracker unlinks any remaining semaphores. 137Usually there should be none, but if a process was killed by a signal 138there may be some "leaked" semaphores. (Unlinking the named semaphores 139is a serious matter since the system allows only a limited number, and 140they will not be automatically unlinked until the next reboot.) 141 142To select a start method you use the :func:`set_start_method` in 143the ``if __name__ == '__main__'`` clause of the main module. For 144example:: 145 146 import multiprocessing as mp 147 148 def foo(q): 149 q.put('hello') 150 151 if __name__ == '__main__': 152 mp.set_start_method('spawn') 153 q = mp.Queue() 154 p = mp.Process(target=foo, args=(q,)) 155 p.start() 156 print(q.get()) 157 p.join() 158 159:func:`set_start_method` should not be used more than once in the 160program. 161 162Alternatively, you can use :func:`get_context` to obtain a context 163object. Context objects have the same API as the multiprocessing 164module, and allow one to use multiple start methods in the same 165program. :: 166 167 import multiprocessing as mp 168 169 def foo(q): 170 q.put('hello') 171 172 if __name__ == '__main__': 173 ctx = mp.get_context('spawn') 174 q = ctx.Queue() 175 p = ctx.Process(target=foo, args=(q,)) 176 p.start() 177 print(q.get()) 178 p.join() 179 180Note that objects related to one context may not be compatible with 181processes for a different context. In particular, locks created using 182the *fork* context cannot be passed to processes started using the 183*spawn* or *forkserver* start methods. 184 185A library which wants to use a particular start method should probably 186use :func:`get_context` to avoid interfering with the choice of the 187library user. 188 189.. warning:: 190 191 The ``'spawn'`` and ``'forkserver'`` start methods cannot currently 192 be used with "frozen" executables (i.e., binaries produced by 193 packages like **PyInstaller** and **cx_Freeze**) on Unix. 194 The ``'fork'`` start method does work. 195 196 197Exchanging objects between processes 198~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 199 200:mod:`multiprocessing` supports two types of communication channel between 201processes: 202 203**Queues** 204 205 The :class:`Queue` class is a near clone of :class:`queue.Queue`. For 206 example:: 207 208 from multiprocessing import Process, Queue 209 210 def f(q): 211 q.put([42, None, 'hello']) 212 213 if __name__ == '__main__': 214 q = Queue() 215 p = Process(target=f, args=(q,)) 216 p.start() 217 print(q.get()) # prints "[42, None, 'hello']" 218 p.join() 219 220 Queues are thread and process safe. 221 222**Pipes** 223 224 The :func:`Pipe` function returns a pair of connection objects connected by a 225 pipe which by default is duplex (two-way). For example:: 226 227 from multiprocessing import Process, Pipe 228 229 def f(conn): 230 conn.send([42, None, 'hello']) 231 conn.close() 232 233 if __name__ == '__main__': 234 parent_conn, child_conn = Pipe() 235 p = Process(target=f, args=(child_conn,)) 236 p.start() 237 print(parent_conn.recv()) # prints "[42, None, 'hello']" 238 p.join() 239 240 The two connection objects returned by :func:`Pipe` represent the two ends of 241 the pipe. Each connection object has :meth:`~Connection.send` and 242 :meth:`~Connection.recv` methods (among others). Note that data in a pipe 243 may become corrupted if two processes (or threads) try to read from or write 244 to the *same* end of the pipe at the same time. Of course there is no risk 245 of corruption from processes using different ends of the pipe at the same 246 time. 247 248 249Synchronization between processes 250~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 251 252:mod:`multiprocessing` contains equivalents of all the synchronization 253primitives from :mod:`threading`. For instance one can use a lock to ensure 254that only one process prints to standard output at a time:: 255 256 from multiprocessing import Process, Lock 257 258 def f(l, i): 259 l.acquire() 260 try: 261 print('hello world', i) 262 finally: 263 l.release() 264 265 if __name__ == '__main__': 266 lock = Lock() 267 268 for num in range(10): 269 Process(target=f, args=(lock, num)).start() 270 271Without using the lock output from the different processes is liable to get all 272mixed up. 273 274 275Sharing state between processes 276~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 277 278As mentioned above, when doing concurrent programming it is usually best to 279avoid using shared state as far as possible. This is particularly true when 280using multiple processes. 281 282However, if you really do need to use some shared data then 283:mod:`multiprocessing` provides a couple of ways of doing so. 284 285**Shared memory** 286 287 Data can be stored in a shared memory map using :class:`Value` or 288 :class:`Array`. For example, the following code :: 289 290 from multiprocessing import Process, Value, Array 291 292 def f(n, a): 293 n.value = 3.1415927 294 for i in range(len(a)): 295 a[i] = -a[i] 296 297 if __name__ == '__main__': 298 num = Value('d', 0.0) 299 arr = Array('i', range(10)) 300 301 p = Process(target=f, args=(num, arr)) 302 p.start() 303 p.join() 304 305 print(num.value) 306 print(arr[:]) 307 308 will print :: 309 310 3.1415927 311 [0, -1, -2, -3, -4, -5, -6, -7, -8, -9] 312 313 The ``'d'`` and ``'i'`` arguments used when creating ``num`` and ``arr`` are 314 typecodes of the kind used by the :mod:`array` module: ``'d'`` indicates a 315 double precision float and ``'i'`` indicates a signed integer. These shared 316 objects will be process and thread-safe. 317 318 For more flexibility in using shared memory one can use the 319 :mod:`multiprocessing.sharedctypes` module which supports the creation of 320 arbitrary ctypes objects allocated from shared memory. 321 322**Server process** 323 324 A manager object returned by :func:`Manager` controls a server process which 325 holds Python objects and allows other processes to manipulate them using 326 proxies. 327 328 A manager returned by :func:`Manager` will support types 329 :class:`list`, :class:`dict`, :class:`~managers.Namespace`, :class:`Lock`, 330 :class:`RLock`, :class:`Semaphore`, :class:`BoundedSemaphore`, 331 :class:`Condition`, :class:`Event`, :class:`Barrier`, 332 :class:`Queue`, :class:`Value` and :class:`Array`. For example, :: 333 334 from multiprocessing import Process, Manager 335 336 def f(d, l): 337 d[1] = '1' 338 d['2'] = 2 339 d[0.25] = None 340 l.reverse() 341 342 if __name__ == '__main__': 343 with Manager() as manager: 344 d = manager.dict() 345 l = manager.list(range(10)) 346 347 p = Process(target=f, args=(d, l)) 348 p.start() 349 p.join() 350 351 print(d) 352 print(l) 353 354 will print :: 355 356 {0.25: None, 1: '1', '2': 2} 357 [9, 8, 7, 6, 5, 4, 3, 2, 1, 0] 358 359 Server process managers are more flexible than using shared memory objects 360 because they can be made to support arbitrary object types. Also, a single 361 manager can be shared by processes on different computers over a network. 362 They are, however, slower than using shared memory. 363 364 365Using a pool of workers 366~~~~~~~~~~~~~~~~~~~~~~~ 367 368The :class:`~multiprocessing.pool.Pool` class represents a pool of worker 369processes. It has methods which allows tasks to be offloaded to the worker 370processes in a few different ways. 371 372For example:: 373 374 from multiprocessing import Pool, TimeoutError 375 import time 376 import os 377 378 def f(x): 379 return x*x 380 381 if __name__ == '__main__': 382 # start 4 worker processes 383 with Pool(processes=4) as pool: 384 385 # print "[0, 1, 4,..., 81]" 386 print(pool.map(f, range(10))) 387 388 # print same numbers in arbitrary order 389 for i in pool.imap_unordered(f, range(10)): 390 print(i) 391 392 # evaluate "f(20)" asynchronously 393 res = pool.apply_async(f, (20,)) # runs in *only* one process 394 print(res.get(timeout=1)) # prints "400" 395 396 # evaluate "os.getpid()" asynchronously 397 res = pool.apply_async(os.getpid, ()) # runs in *only* one process 398 print(res.get(timeout=1)) # prints the PID of that process 399 400 # launching multiple evaluations asynchronously *may* use more processes 401 multiple_results = [pool.apply_async(os.getpid, ()) for i in range(4)] 402 print([res.get(timeout=1) for res in multiple_results]) 403 404 # make a single worker sleep for 10 secs 405 res = pool.apply_async(time.sleep, (10,)) 406 try: 407 print(res.get(timeout=1)) 408 except TimeoutError: 409 print("We lacked patience and got a multiprocessing.TimeoutError") 410 411 print("For the moment, the pool remains available for more work") 412 413 # exiting the 'with'-block has stopped the pool 414 print("Now the pool is closed and no longer available") 415 416Note that the methods of a pool should only ever be used by the 417process which created it. 418 419.. note:: 420 421 Functionality within this package requires that the ``__main__`` module be 422 importable by the children. This is covered in :ref:`multiprocessing-programming` 423 however it is worth pointing out here. This means that some examples, such 424 as the :class:`multiprocessing.pool.Pool` examples will not work in the 425 interactive interpreter. For example:: 426 427 >>> from multiprocessing import Pool 428 >>> p = Pool(5) 429 >>> def f(x): 430 ... return x*x 431 ... 432 >>> p.map(f, [1,2,3]) 433 Process PoolWorker-1: 434 Process PoolWorker-2: 435 Process PoolWorker-3: 436 Traceback (most recent call last): 437 Traceback (most recent call last): 438 Traceback (most recent call last): 439 AttributeError: 'module' object has no attribute 'f' 440 AttributeError: 'module' object has no attribute 'f' 441 AttributeError: 'module' object has no attribute 'f' 442 443 (If you try this it will actually output three full tracebacks 444 interleaved in a semi-random fashion, and then you may have to 445 stop the master process somehow.) 446 447 448Reference 449--------- 450 451The :mod:`multiprocessing` package mostly replicates the API of the 452:mod:`threading` module. 453 454 455:class:`Process` and exceptions 456~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 457 458.. class:: Process(group=None, target=None, name=None, args=(), kwargs={}, \ 459 *, daemon=None) 460 461 Process objects represent activity that is run in a separate process. The 462 :class:`Process` class has equivalents of all the methods of 463 :class:`threading.Thread`. 464 465 The constructor should always be called with keyword arguments. *group* 466 should always be ``None``; it exists solely for compatibility with 467 :class:`threading.Thread`. *target* is the callable object to be invoked by 468 the :meth:`run()` method. It defaults to ``None``, meaning nothing is 469 called. *name* is the process name (see :attr:`name` for more details). 470 *args* is the argument tuple for the target invocation. *kwargs* is a 471 dictionary of keyword arguments for the target invocation. If provided, 472 the keyword-only *daemon* argument sets the process :attr:`daemon` flag 473 to ``True`` or ``False``. If ``None`` (the default), this flag will be 474 inherited from the creating process. 475 476 By default, no arguments are passed to *target*. 477 478 If a subclass overrides the constructor, it must make sure it invokes the 479 base class constructor (:meth:`Process.__init__`) before doing anything else 480 to the process. 481 482 .. versionchanged:: 3.3 483 Added the *daemon* argument. 484 485 .. method:: run() 486 487 Method representing the process's activity. 488 489 You may override this method in a subclass. The standard :meth:`run` 490 method invokes the callable object passed to the object's constructor as 491 the target argument, if any, with sequential and keyword arguments taken 492 from the *args* and *kwargs* arguments, respectively. 493 494 .. method:: start() 495 496 Start the process's activity. 497 498 This must be called at most once per process object. It arranges for the 499 object's :meth:`run` method to be invoked in a separate process. 500 501 .. method:: join([timeout]) 502 503 If the optional argument *timeout* is ``None`` (the default), the method 504 blocks until the process whose :meth:`join` method is called terminates. 505 If *timeout* is a positive number, it blocks at most *timeout* seconds. 506 Note that the method returns ``None`` if its process terminates or if the 507 method times out. Check the process's :attr:`exitcode` to determine if 508 it terminated. 509 510 A process can be joined many times. 511 512 A process cannot join itself because this would cause a deadlock. It is 513 an error to attempt to join a process before it has been started. 514 515 .. attribute:: name 516 517 The process's name. The name is a string used for identification purposes 518 only. It has no semantics. Multiple processes may be given the same 519 name. 520 521 The initial name is set by the constructor. If no explicit name is 522 provided to the constructor, a name of the form 523 'Process-N\ :sub:`1`:N\ :sub:`2`:...:N\ :sub:`k`' is constructed, where 524 each N\ :sub:`k` is the N-th child of its parent. 525 526 .. method:: is_alive 527 528 Return whether the process is alive. 529 530 Roughly, a process object is alive from the moment the :meth:`start` 531 method returns until the child process terminates. 532 533 .. attribute:: daemon 534 535 The process's daemon flag, a Boolean value. This must be set before 536 :meth:`start` is called. 537 538 The initial value is inherited from the creating process. 539 540 When a process exits, it attempts to terminate all of its daemonic child 541 processes. 542 543 Note that a daemonic process is not allowed to create child processes. 544 Otherwise a daemonic process would leave its children orphaned if it gets 545 terminated when its parent process exits. Additionally, these are **not** 546 Unix daemons or services, they are normal processes that will be 547 terminated (and not joined) if non-daemonic processes have exited. 548 549 In addition to the :class:`threading.Thread` API, :class:`Process` objects 550 also support the following attributes and methods: 551 552 .. attribute:: pid 553 554 Return the process ID. Before the process is spawned, this will be 555 ``None``. 556 557 .. attribute:: exitcode 558 559 The child's exit code. This will be ``None`` if the process has not yet 560 terminated. A negative value *-N* indicates that the child was terminated 561 by signal *N*. 562 563 .. attribute:: authkey 564 565 The process's authentication key (a byte string). 566 567 When :mod:`multiprocessing` is initialized the main process is assigned a 568 random string using :func:`os.urandom`. 569 570 When a :class:`Process` object is created, it will inherit the 571 authentication key of its parent process, although this may be changed by 572 setting :attr:`authkey` to another byte string. 573 574 See :ref:`multiprocessing-auth-keys`. 575 576 .. attribute:: sentinel 577 578 A numeric handle of a system object which will become "ready" when 579 the process ends. 580 581 You can use this value if you want to wait on several events at 582 once using :func:`multiprocessing.connection.wait`. Otherwise 583 calling :meth:`join()` is simpler. 584 585 On Windows, this is an OS handle usable with the ``WaitForSingleObject`` 586 and ``WaitForMultipleObjects`` family of API calls. On Unix, this is 587 a file descriptor usable with primitives from the :mod:`select` module. 588 589 .. versionadded:: 3.3 590 591 .. method:: terminate() 592 593 Terminate the process. On Unix this is done using the ``SIGTERM`` signal; 594 on Windows :c:func:`TerminateProcess` is used. Note that exit handlers and 595 finally clauses, etc., will not be executed. 596 597 Note that descendant processes of the process will *not* be terminated -- 598 they will simply become orphaned. 599 600 .. warning:: 601 602 If this method is used when the associated process is using a pipe or 603 queue then the pipe or queue is liable to become corrupted and may 604 become unusable by other process. Similarly, if the process has 605 acquired a lock or semaphore etc. then terminating it is liable to 606 cause other processes to deadlock. 607 608 .. method:: kill() 609 610 Same as :meth:`terminate()` but using the ``SIGKILL`` signal on Unix. 611 612 .. versionadded:: 3.7 613 614 .. method:: close() 615 616 Close the :class:`Process` object, releasing all resources associated 617 with it. :exc:`ValueError` is raised if the underlying process 618 is still running. Once :meth:`close` returns successfully, most 619 other methods and attributes of the :class:`Process` object will 620 raise :exc:`ValueError`. 621 622 .. versionadded:: 3.7 623 624 Note that the :meth:`start`, :meth:`join`, :meth:`is_alive`, 625 :meth:`terminate` and :attr:`exitcode` methods should only be called by 626 the process that created the process object. 627 628 Example usage of some of the methods of :class:`Process`: 629 630 .. doctest:: 631 632 >>> import multiprocessing, time, signal 633 >>> p = multiprocessing.Process(target=time.sleep, args=(1000,)) 634 >>> print(p, p.is_alive()) 635 <Process(Process-1, initial)> False 636 >>> p.start() 637 >>> print(p, p.is_alive()) 638 <Process(Process-1, started)> True 639 >>> p.terminate() 640 >>> time.sleep(0.1) 641 >>> print(p, p.is_alive()) 642 <Process(Process-1, stopped[SIGTERM])> False 643 >>> p.exitcode == -signal.SIGTERM 644 True 645 646.. exception:: ProcessError 647 648 The base class of all :mod:`multiprocessing` exceptions. 649 650.. exception:: BufferTooShort 651 652 Exception raised by :meth:`Connection.recv_bytes_into()` when the supplied 653 buffer object is too small for the message read. 654 655 If ``e`` is an instance of :exc:`BufferTooShort` then ``e.args[0]`` will give 656 the message as a byte string. 657 658.. exception:: AuthenticationError 659 660 Raised when there is an authentication error. 661 662.. exception:: TimeoutError 663 664 Raised by methods with a timeout when the timeout expires. 665 666Pipes and Queues 667~~~~~~~~~~~~~~~~ 668 669When using multiple processes, one generally uses message passing for 670communication between processes and avoids having to use any synchronization 671primitives like locks. 672 673For passing messages one can use :func:`Pipe` (for a connection between two 674processes) or a queue (which allows multiple producers and consumers). 675 676The :class:`Queue`, :class:`SimpleQueue` and :class:`JoinableQueue` types 677are multi-producer, multi-consumer :abbr:`FIFO (first-in, first-out)` 678queues modelled on the :class:`queue.Queue` class in the 679standard library. They differ in that :class:`Queue` lacks the 680:meth:`~queue.Queue.task_done` and :meth:`~queue.Queue.join` methods introduced 681into Python 2.5's :class:`queue.Queue` class. 682 683If you use :class:`JoinableQueue` then you **must** call 684:meth:`JoinableQueue.task_done` for each task removed from the queue or else the 685semaphore used to count the number of unfinished tasks may eventually overflow, 686raising an exception. 687 688Note that one can also create a shared queue by using a manager object -- see 689:ref:`multiprocessing-managers`. 690 691.. note:: 692 693 :mod:`multiprocessing` uses the usual :exc:`queue.Empty` and 694 :exc:`queue.Full` exceptions to signal a timeout. They are not available in 695 the :mod:`multiprocessing` namespace so you need to import them from 696 :mod:`queue`. 697 698.. note:: 699 700 When an object is put on a queue, the object is pickled and a 701 background thread later flushes the pickled data to an underlying 702 pipe. This has some consequences which are a little surprising, 703 but should not cause any practical difficulties -- if they really 704 bother you then you can instead use a queue created with a 705 :ref:`manager <multiprocessing-managers>`. 706 707 (1) After putting an object on an empty queue there may be an 708 infinitesimal delay before the queue's :meth:`~Queue.empty` 709 method returns :const:`False` and :meth:`~Queue.get_nowait` can 710 return without raising :exc:`queue.Empty`. 711 712 (2) If multiple processes are enqueuing objects, it is possible for 713 the objects to be received at the other end out-of-order. 714 However, objects enqueued by the same process will always be in 715 the expected order with respect to each other. 716 717.. warning:: 718 719 If a process is killed using :meth:`Process.terminate` or :func:`os.kill` 720 while it is trying to use a :class:`Queue`, then the data in the queue is 721 likely to become corrupted. This may cause any other process to get an 722 exception when it tries to use the queue later on. 723 724.. warning:: 725 726 As mentioned above, if a child process has put items on a queue (and it has 727 not used :meth:`JoinableQueue.cancel_join_thread 728 <multiprocessing.Queue.cancel_join_thread>`), then that process will 729 not terminate until all buffered items have been flushed to the pipe. 730 731 This means that if you try joining that process you may get a deadlock unless 732 you are sure that all items which have been put on the queue have been 733 consumed. Similarly, if the child process is non-daemonic then the parent 734 process may hang on exit when it tries to join all its non-daemonic children. 735 736 Note that a queue created using a manager does not have this issue. See 737 :ref:`multiprocessing-programming`. 738 739For an example of the usage of queues for interprocess communication see 740:ref:`multiprocessing-examples`. 741 742 743.. function:: Pipe([duplex]) 744 745 Returns a pair ``(conn1, conn2)`` of 746 :class:`~multiprocessing.connection.Connection` objects representing the 747 ends of a pipe. 748 749 If *duplex* is ``True`` (the default) then the pipe is bidirectional. If 750 *duplex* is ``False`` then the pipe is unidirectional: ``conn1`` can only be 751 used for receiving messages and ``conn2`` can only be used for sending 752 messages. 753 754 755.. class:: Queue([maxsize]) 756 757 Returns a process shared queue implemented using a pipe and a few 758 locks/semaphores. When a process first puts an item on the queue a feeder 759 thread is started which transfers objects from a buffer into the pipe. 760 761 The usual :exc:`queue.Empty` and :exc:`queue.Full` exceptions from the 762 standard library's :mod:`queue` module are raised to signal timeouts. 763 764 :class:`Queue` implements all the methods of :class:`queue.Queue` except for 765 :meth:`~queue.Queue.task_done` and :meth:`~queue.Queue.join`. 766 767 .. method:: qsize() 768 769 Return the approximate size of the queue. Because of 770 multithreading/multiprocessing semantics, this number is not reliable. 771 772 Note that this may raise :exc:`NotImplementedError` on Unix platforms like 773 Mac OS X where ``sem_getvalue()`` is not implemented. 774 775 .. method:: empty() 776 777 Return ``True`` if the queue is empty, ``False`` otherwise. Because of 778 multithreading/multiprocessing semantics, this is not reliable. 779 780 .. method:: full() 781 782 Return ``True`` if the queue is full, ``False`` otherwise. Because of 783 multithreading/multiprocessing semantics, this is not reliable. 784 785 .. method:: put(obj[, block[, timeout]]) 786 787 Put obj into the queue. If the optional argument *block* is ``True`` 788 (the default) and *timeout* is ``None`` (the default), block if necessary until 789 a free slot is available. If *timeout* is a positive number, it blocks at 790 most *timeout* seconds and raises the :exc:`queue.Full` exception if no 791 free slot was available within that time. Otherwise (*block* is 792 ``False``), put an item on the queue if a free slot is immediately 793 available, else raise the :exc:`queue.Full` exception (*timeout* is 794 ignored in that case). 795 796 .. method:: put_nowait(obj) 797 798 Equivalent to ``put(obj, False)``. 799 800 .. method:: get([block[, timeout]]) 801 802 Remove and return an item from the queue. If optional args *block* is 803 ``True`` (the default) and *timeout* is ``None`` (the default), block if 804 necessary until an item is available. If *timeout* is a positive number, 805 it blocks at most *timeout* seconds and raises the :exc:`queue.Empty` 806 exception if no item was available within that time. Otherwise (block is 807 ``False``), return an item if one is immediately available, else raise the 808 :exc:`queue.Empty` exception (*timeout* is ignored in that case). 809 810 .. method:: get_nowait() 811 812 Equivalent to ``get(False)``. 813 814 :class:`multiprocessing.Queue` has a few additional methods not found in 815 :class:`queue.Queue`. These methods are usually unnecessary for most 816 code: 817 818 .. method:: close() 819 820 Indicate that no more data will be put on this queue by the current 821 process. The background thread will quit once it has flushed all buffered 822 data to the pipe. This is called automatically when the queue is garbage 823 collected. 824 825 .. method:: join_thread() 826 827 Join the background thread. This can only be used after :meth:`close` has 828 been called. It blocks until the background thread exits, ensuring that 829 all data in the buffer has been flushed to the pipe. 830 831 By default if a process is not the creator of the queue then on exit it 832 will attempt to join the queue's background thread. The process can call 833 :meth:`cancel_join_thread` to make :meth:`join_thread` do nothing. 834 835 .. method:: cancel_join_thread() 836 837 Prevent :meth:`join_thread` from blocking. In particular, this prevents 838 the background thread from being joined automatically when the process 839 exits -- see :meth:`join_thread`. 840 841 A better name for this method might be 842 ``allow_exit_without_flush()``. It is likely to cause enqueued 843 data to lost, and you almost certainly will not need to use it. 844 It is really only there if you need the current process to exit 845 immediately without waiting to flush enqueued data to the 846 underlying pipe, and you don't care about lost data. 847 848 .. note:: 849 850 This class's functionality requires a functioning shared semaphore 851 implementation on the host operating system. Without one, the 852 functionality in this class will be disabled, and attempts to 853 instantiate a :class:`Queue` will result in an :exc:`ImportError`. See 854 :issue:`3770` for additional information. The same holds true for any 855 of the specialized queue types listed below. 856 857.. class:: SimpleQueue() 858 859 It is a simplified :class:`Queue` type, very close to a locked :class:`Pipe`. 860 861 .. method:: empty() 862 863 Return ``True`` if the queue is empty, ``False`` otherwise. 864 865 .. method:: get() 866 867 Remove and return an item from the queue. 868 869 .. method:: put(item) 870 871 Put *item* into the queue. 872 873 874.. class:: JoinableQueue([maxsize]) 875 876 :class:`JoinableQueue`, a :class:`Queue` subclass, is a queue which 877 additionally has :meth:`task_done` and :meth:`join` methods. 878 879 .. method:: task_done() 880 881 Indicate that a formerly enqueued task is complete. Used by queue 882 consumers. For each :meth:`~Queue.get` used to fetch a task, a subsequent 883 call to :meth:`task_done` tells the queue that the processing on the task 884 is complete. 885 886 If a :meth:`~queue.Queue.join` is currently blocking, it will resume when all 887 items have been processed (meaning that a :meth:`task_done` call was 888 received for every item that had been :meth:`~Queue.put` into the queue). 889 890 Raises a :exc:`ValueError` if called more times than there were items 891 placed in the queue. 892 893 894 .. method:: join() 895 896 Block until all items in the queue have been gotten and processed. 897 898 The count of unfinished tasks goes up whenever an item is added to the 899 queue. The count goes down whenever a consumer calls 900 :meth:`task_done` to indicate that the item was retrieved and all work on 901 it is complete. When the count of unfinished tasks drops to zero, 902 :meth:`~queue.Queue.join` unblocks. 903 904 905Miscellaneous 906~~~~~~~~~~~~~ 907 908.. function:: active_children() 909 910 Return list of all live children of the current process. 911 912 Calling this has the side effect of "joining" any processes which have 913 already finished. 914 915.. function:: cpu_count() 916 917 Return the number of CPUs in the system. 918 919 This number is not equivalent to the number of CPUs the current process can 920 use. The number of usable CPUs can be obtained with 921 ``len(os.sched_getaffinity(0))`` 922 923 May raise :exc:`NotImplementedError`. 924 925 .. seealso:: 926 :func:`os.cpu_count` 927 928.. function:: current_process() 929 930 Return the :class:`Process` object corresponding to the current process. 931 932 An analogue of :func:`threading.current_thread`. 933 934.. function:: freeze_support() 935 936 Add support for when a program which uses :mod:`multiprocessing` has been 937 frozen to produce a Windows executable. (Has been tested with **py2exe**, 938 **PyInstaller** and **cx_Freeze**.) 939 940 One needs to call this function straight after the ``if __name__ == 941 '__main__'`` line of the main module. For example:: 942 943 from multiprocessing import Process, freeze_support 944 945 def f(): 946 print('hello world!') 947 948 if __name__ == '__main__': 949 freeze_support() 950 Process(target=f).start() 951 952 If the ``freeze_support()`` line is omitted then trying to run the frozen 953 executable will raise :exc:`RuntimeError`. 954 955 Calling ``freeze_support()`` has no effect when invoked on any operating 956 system other than Windows. In addition, if the module is being run 957 normally by the Python interpreter on Windows (the program has not been 958 frozen), then ``freeze_support()`` has no effect. 959 960.. function:: get_all_start_methods() 961 962 Returns a list of the supported start methods, the first of which 963 is the default. The possible start methods are ``'fork'``, 964 ``'spawn'`` and ``'forkserver'``. On Windows only ``'spawn'`` is 965 available. On Unix ``'fork'`` and ``'spawn'`` are always 966 supported, with ``'fork'`` being the default. 967 968 .. versionadded:: 3.4 969 970.. function:: get_context(method=None) 971 972 Return a context object which has the same attributes as the 973 :mod:`multiprocessing` module. 974 975 If *method* is ``None`` then the default context is returned. 976 Otherwise *method* should be ``'fork'``, ``'spawn'``, 977 ``'forkserver'``. :exc:`ValueError` is raised if the specified 978 start method is not available. 979 980 .. versionadded:: 3.4 981 982.. function:: get_start_method(allow_none=False) 983 984 Return the name of start method used for starting processes. 985 986 If the start method has not been fixed and *allow_none* is false, 987 then the start method is fixed to the default and the name is 988 returned. If the start method has not been fixed and *allow_none* 989 is true then ``None`` is returned. 990 991 The return value can be ``'fork'``, ``'spawn'``, ``'forkserver'`` 992 or ``None``. ``'fork'`` is the default on Unix, while ``'spawn'`` is 993 the default on Windows. 994 995 .. versionadded:: 3.4 996 997.. function:: set_executable() 998 999 Sets the path of the Python interpreter to use when starting a child process. 1000 (By default :data:`sys.executable` is used). Embedders will probably need to 1001 do some thing like :: 1002 1003 set_executable(os.path.join(sys.exec_prefix, 'pythonw.exe')) 1004 1005 before they can create child processes. 1006 1007 .. versionchanged:: 3.4 1008 Now supported on Unix when the ``'spawn'`` start method is used. 1009 1010.. function:: set_start_method(method) 1011 1012 Set the method which should be used to start child processes. 1013 *method* can be ``'fork'``, ``'spawn'`` or ``'forkserver'``. 1014 1015 Note that this should be called at most once, and it should be 1016 protected inside the ``if __name__ == '__main__'`` clause of the 1017 main module. 1018 1019 .. versionadded:: 3.4 1020 1021.. note:: 1022 1023 :mod:`multiprocessing` contains no analogues of 1024 :func:`threading.active_count`, :func:`threading.enumerate`, 1025 :func:`threading.settrace`, :func:`threading.setprofile`, 1026 :class:`threading.Timer`, or :class:`threading.local`. 1027 1028 1029Connection Objects 1030~~~~~~~~~~~~~~~~~~ 1031 1032.. currentmodule:: multiprocessing.connection 1033 1034Connection objects allow the sending and receiving of picklable objects or 1035strings. They can be thought of as message oriented connected sockets. 1036 1037Connection objects are usually created using 1038:func:`Pipe <multiprocessing.Pipe>` -- see also 1039:ref:`multiprocessing-listeners-clients`. 1040 1041.. class:: Connection 1042 1043 .. method:: send(obj) 1044 1045 Send an object to the other end of the connection which should be read 1046 using :meth:`recv`. 1047 1048 The object must be picklable. Very large pickles (approximately 32 MiB+, 1049 though it depends on the OS) may raise a :exc:`ValueError` exception. 1050 1051 .. method:: recv() 1052 1053 Return an object sent from the other end of the connection using 1054 :meth:`send`. Blocks until there is something to receive. Raises 1055 :exc:`EOFError` if there is nothing left to receive 1056 and the other end was closed. 1057 1058 .. method:: fileno() 1059 1060 Return the file descriptor or handle used by the connection. 1061 1062 .. method:: close() 1063 1064 Close the connection. 1065 1066 This is called automatically when the connection is garbage collected. 1067 1068 .. method:: poll([timeout]) 1069 1070 Return whether there is any data available to be read. 1071 1072 If *timeout* is not specified then it will return immediately. If 1073 *timeout* is a number then this specifies the maximum time in seconds to 1074 block. If *timeout* is ``None`` then an infinite timeout is used. 1075 1076 Note that multiple connection objects may be polled at once by 1077 using :func:`multiprocessing.connection.wait`. 1078 1079 .. method:: send_bytes(buffer[, offset[, size]]) 1080 1081 Send byte data from a :term:`bytes-like object` as a complete message. 1082 1083 If *offset* is given then data is read from that position in *buffer*. If 1084 *size* is given then that many bytes will be read from buffer. Very large 1085 buffers (approximately 32 MiB+, though it depends on the OS) may raise a 1086 :exc:`ValueError` exception 1087 1088 .. method:: recv_bytes([maxlength]) 1089 1090 Return a complete message of byte data sent from the other end of the 1091 connection as a string. Blocks until there is something to receive. 1092 Raises :exc:`EOFError` if there is nothing left 1093 to receive and the other end has closed. 1094 1095 If *maxlength* is specified and the message is longer than *maxlength* 1096 then :exc:`OSError` is raised and the connection will no longer be 1097 readable. 1098 1099 .. versionchanged:: 3.3 1100 This function used to raise :exc:`IOError`, which is now an 1101 alias of :exc:`OSError`. 1102 1103 1104 .. method:: recv_bytes_into(buffer[, offset]) 1105 1106 Read into *buffer* a complete message of byte data sent from the other end 1107 of the connection and return the number of bytes in the message. Blocks 1108 until there is something to receive. Raises 1109 :exc:`EOFError` if there is nothing left to receive and the other end was 1110 closed. 1111 1112 *buffer* must be a writable :term:`bytes-like object`. If 1113 *offset* is given then the message will be written into the buffer from 1114 that position. Offset must be a non-negative integer less than the 1115 length of *buffer* (in bytes). 1116 1117 If the buffer is too short then a :exc:`BufferTooShort` exception is 1118 raised and the complete message is available as ``e.args[0]`` where ``e`` 1119 is the exception instance. 1120 1121 .. versionchanged:: 3.3 1122 Connection objects themselves can now be transferred between processes 1123 using :meth:`Connection.send` and :meth:`Connection.recv`. 1124 1125 .. versionadded:: 3.3 1126 Connection objects now support the context management protocol -- see 1127 :ref:`typecontextmanager`. :meth:`~contextmanager.__enter__` returns the 1128 connection object, and :meth:`~contextmanager.__exit__` calls :meth:`close`. 1129 1130For example: 1131 1132.. doctest:: 1133 1134 >>> from multiprocessing import Pipe 1135 >>> a, b = Pipe() 1136 >>> a.send([1, 'hello', None]) 1137 >>> b.recv() 1138 [1, 'hello', None] 1139 >>> b.send_bytes(b'thank you') 1140 >>> a.recv_bytes() 1141 b'thank you' 1142 >>> import array 1143 >>> arr1 = array.array('i', range(5)) 1144 >>> arr2 = array.array('i', [0] * 10) 1145 >>> a.send_bytes(arr1) 1146 >>> count = b.recv_bytes_into(arr2) 1147 >>> assert count == len(arr1) * arr1.itemsize 1148 >>> arr2 1149 array('i', [0, 1, 2, 3, 4, 0, 0, 0, 0, 0]) 1150 1151 1152.. warning:: 1153 1154 The :meth:`Connection.recv` method automatically unpickles the data it 1155 receives, which can be a security risk unless you can trust the process 1156 which sent the message. 1157 1158 Therefore, unless the connection object was produced using :func:`Pipe` you 1159 should only use the :meth:`~Connection.recv` and :meth:`~Connection.send` 1160 methods after performing some sort of authentication. See 1161 :ref:`multiprocessing-auth-keys`. 1162 1163.. warning:: 1164 1165 If a process is killed while it is trying to read or write to a pipe then 1166 the data in the pipe is likely to become corrupted, because it may become 1167 impossible to be sure where the message boundaries lie. 1168 1169 1170Synchronization primitives 1171~~~~~~~~~~~~~~~~~~~~~~~~~~ 1172 1173.. currentmodule:: multiprocessing 1174 1175Generally synchronization primitives are not as necessary in a multiprocess 1176program as they are in a multithreaded program. See the documentation for 1177:mod:`threading` module. 1178 1179Note that one can also create synchronization primitives by using a manager 1180object -- see :ref:`multiprocessing-managers`. 1181 1182.. class:: Barrier(parties[, action[, timeout]]) 1183 1184 A barrier object: a clone of :class:`threading.Barrier`. 1185 1186 .. versionadded:: 3.3 1187 1188.. class:: BoundedSemaphore([value]) 1189 1190 A bounded semaphore object: a close analog of 1191 :class:`threading.BoundedSemaphore`. 1192 1193 A solitary difference from its close analog exists: its ``acquire`` method's 1194 first argument is named *block*, as is consistent with :meth:`Lock.acquire`. 1195 1196 .. note:: 1197 On Mac OS X, this is indistinguishable from :class:`Semaphore` because 1198 ``sem_getvalue()`` is not implemented on that platform. 1199 1200.. class:: Condition([lock]) 1201 1202 A condition variable: an alias for :class:`threading.Condition`. 1203 1204 If *lock* is specified then it should be a :class:`Lock` or :class:`RLock` 1205 object from :mod:`multiprocessing`. 1206 1207 .. versionchanged:: 3.3 1208 The :meth:`~threading.Condition.wait_for` method was added. 1209 1210.. class:: Event() 1211 1212 A clone of :class:`threading.Event`. 1213 1214 1215.. class:: Lock() 1216 1217 A non-recursive lock object: a close analog of :class:`threading.Lock`. 1218 Once a process or thread has acquired a lock, subsequent attempts to 1219 acquire it from any process or thread will block until it is released; 1220 any process or thread may release it. The concepts and behaviors of 1221 :class:`threading.Lock` as it applies to threads are replicated here in 1222 :class:`multiprocessing.Lock` as it applies to either processes or threads, 1223 except as noted. 1224 1225 Note that :class:`Lock` is actually a factory function which returns an 1226 instance of ``multiprocessing.synchronize.Lock`` initialized with a 1227 default context. 1228 1229 :class:`Lock` supports the :term:`context manager` protocol and thus may be 1230 used in :keyword:`with` statements. 1231 1232 .. method:: acquire(block=True, timeout=None) 1233 1234 Acquire a lock, blocking or non-blocking. 1235 1236 With the *block* argument set to ``True`` (the default), the method call 1237 will block until the lock is in an unlocked state, then set it to locked 1238 and return ``True``. Note that the name of this first argument differs 1239 from that in :meth:`threading.Lock.acquire`. 1240 1241 With the *block* argument set to ``False``, the method call does not 1242 block. If the lock is currently in a locked state, return ``False``; 1243 otherwise set the lock to a locked state and return ``True``. 1244 1245 When invoked with a positive, floating-point value for *timeout*, block 1246 for at most the number of seconds specified by *timeout* as long as 1247 the lock can not be acquired. Invocations with a negative value for 1248 *timeout* are equivalent to a *timeout* of zero. Invocations with a 1249 *timeout* value of ``None`` (the default) set the timeout period to 1250 infinite. Note that the treatment of negative or ``None`` values for 1251 *timeout* differs from the implemented behavior in 1252 :meth:`threading.Lock.acquire`. The *timeout* argument has no practical 1253 implications if the *block* argument is set to ``False`` and is thus 1254 ignored. Returns ``True`` if the lock has been acquired or ``False`` if 1255 the timeout period has elapsed. 1256 1257 1258 .. method:: release() 1259 1260 Release a lock. This can be called from any process or thread, not only 1261 the process or thread which originally acquired the lock. 1262 1263 Behavior is the same as in :meth:`threading.Lock.release` except that 1264 when invoked on an unlocked lock, a :exc:`ValueError` is raised. 1265 1266 1267.. class:: RLock() 1268 1269 A recursive lock object: a close analog of :class:`threading.RLock`. A 1270 recursive lock must be released by the process or thread that acquired it. 1271 Once a process or thread has acquired a recursive lock, the same process 1272 or thread may acquire it again without blocking; that process or thread 1273 must release it once for each time it has been acquired. 1274 1275 Note that :class:`RLock` is actually a factory function which returns an 1276 instance of ``multiprocessing.synchronize.RLock`` initialized with a 1277 default context. 1278 1279 :class:`RLock` supports the :term:`context manager` protocol and thus may be 1280 used in :keyword:`with` statements. 1281 1282 1283 .. method:: acquire(block=True, timeout=None) 1284 1285 Acquire a lock, blocking or non-blocking. 1286 1287 When invoked with the *block* argument set to ``True``, block until the 1288 lock is in an unlocked state (not owned by any process or thread) unless 1289 the lock is already owned by the current process or thread. The current 1290 process or thread then takes ownership of the lock (if it does not 1291 already have ownership) and the recursion level inside the lock increments 1292 by one, resulting in a return value of ``True``. Note that there are 1293 several differences in this first argument's behavior compared to the 1294 implementation of :meth:`threading.RLock.acquire`, starting with the name 1295 of the argument itself. 1296 1297 When invoked with the *block* argument set to ``False``, do not block. 1298 If the lock has already been acquired (and thus is owned) by another 1299 process or thread, the current process or thread does not take ownership 1300 and the recursion level within the lock is not changed, resulting in 1301 a return value of ``False``. If the lock is in an unlocked state, the 1302 current process or thread takes ownership and the recursion level is 1303 incremented, resulting in a return value of ``True``. 1304 1305 Use and behaviors of the *timeout* argument are the same as in 1306 :meth:`Lock.acquire`. Note that some of these behaviors of *timeout* 1307 differ from the implemented behaviors in :meth:`threading.RLock.acquire`. 1308 1309 1310 .. method:: release() 1311 1312 Release a lock, decrementing the recursion level. If after the 1313 decrement the recursion level is zero, reset the lock to unlocked (not 1314 owned by any process or thread) and if any other processes or threads 1315 are blocked waiting for the lock to become unlocked, allow exactly one 1316 of them to proceed. If after the decrement the recursion level is still 1317 nonzero, the lock remains locked and owned by the calling process or 1318 thread. 1319 1320 Only call this method when the calling process or thread owns the lock. 1321 An :exc:`AssertionError` is raised if this method is called by a process 1322 or thread other than the owner or if the lock is in an unlocked (unowned) 1323 state. Note that the type of exception raised in this situation 1324 differs from the implemented behavior in :meth:`threading.RLock.release`. 1325 1326 1327.. class:: Semaphore([value]) 1328 1329 A semaphore object: a close analog of :class:`threading.Semaphore`. 1330 1331 A solitary difference from its close analog exists: its ``acquire`` method's 1332 first argument is named *block*, as is consistent with :meth:`Lock.acquire`. 1333 1334.. note:: 1335 1336 On Mac OS X, ``sem_timedwait`` is unsupported, so calling ``acquire()`` with 1337 a timeout will emulate that function's behavior using a sleeping loop. 1338 1339.. note:: 1340 1341 If the SIGINT signal generated by :kbd:`Ctrl-C` arrives while the main thread is 1342 blocked by a call to :meth:`BoundedSemaphore.acquire`, :meth:`Lock.acquire`, 1343 :meth:`RLock.acquire`, :meth:`Semaphore.acquire`, :meth:`Condition.acquire` 1344 or :meth:`Condition.wait` then the call will be immediately interrupted and 1345 :exc:`KeyboardInterrupt` will be raised. 1346 1347 This differs from the behaviour of :mod:`threading` where SIGINT will be 1348 ignored while the equivalent blocking calls are in progress. 1349 1350.. note:: 1351 1352 Some of this package's functionality requires a functioning shared semaphore 1353 implementation on the host operating system. Without one, the 1354 :mod:`multiprocessing.synchronize` module will be disabled, and attempts to 1355 import it will result in an :exc:`ImportError`. See 1356 :issue:`3770` for additional information. 1357 1358 1359Shared :mod:`ctypes` Objects 1360~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 1361 1362It is possible to create shared objects using shared memory which can be 1363inherited by child processes. 1364 1365.. function:: Value(typecode_or_type, *args, lock=True) 1366 1367 Return a :mod:`ctypes` object allocated from shared memory. By default the 1368 return value is actually a synchronized wrapper for the object. The object 1369 itself can be accessed via the *value* attribute of a :class:`Value`. 1370 1371 *typecode_or_type* determines the type of the returned object: it is either a 1372 ctypes type or a one character typecode of the kind used by the :mod:`array` 1373 module. *\*args* is passed on to the constructor for the type. 1374 1375 If *lock* is ``True`` (the default) then a new recursive lock 1376 object is created to synchronize access to the value. If *lock* is 1377 a :class:`Lock` or :class:`RLock` object then that will be used to 1378 synchronize access to the value. If *lock* is ``False`` then 1379 access to the returned object will not be automatically protected 1380 by a lock, so it will not necessarily be "process-safe". 1381 1382 Operations like ``+=`` which involve a read and write are not 1383 atomic. So if, for instance, you want to atomically increment a 1384 shared value it is insufficient to just do :: 1385 1386 counter.value += 1 1387 1388 Assuming the associated lock is recursive (which it is by default) 1389 you can instead do :: 1390 1391 with counter.get_lock(): 1392 counter.value += 1 1393 1394 Note that *lock* is a keyword-only argument. 1395 1396.. function:: Array(typecode_or_type, size_or_initializer, *, lock=True) 1397 1398 Return a ctypes array allocated from shared memory. By default the return 1399 value is actually a synchronized wrapper for the array. 1400 1401 *typecode_or_type* determines the type of the elements of the returned array: 1402 it is either a ctypes type or a one character typecode of the kind used by 1403 the :mod:`array` module. If *size_or_initializer* is an integer, then it 1404 determines the length of the array, and the array will be initially zeroed. 1405 Otherwise, *size_or_initializer* is a sequence which is used to initialize 1406 the array and whose length determines the length of the array. 1407 1408 If *lock* is ``True`` (the default) then a new lock object is created to 1409 synchronize access to the value. If *lock* is a :class:`Lock` or 1410 :class:`RLock` object then that will be used to synchronize access to the 1411 value. If *lock* is ``False`` then access to the returned object will not be 1412 automatically protected by a lock, so it will not necessarily be 1413 "process-safe". 1414 1415 Note that *lock* is a keyword only argument. 1416 1417 Note that an array of :data:`ctypes.c_char` has *value* and *raw* 1418 attributes which allow one to use it to store and retrieve strings. 1419 1420 1421The :mod:`multiprocessing.sharedctypes` module 1422>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 1423 1424.. module:: multiprocessing.sharedctypes 1425 :synopsis: Allocate ctypes objects from shared memory. 1426 1427The :mod:`multiprocessing.sharedctypes` module provides functions for allocating 1428:mod:`ctypes` objects from shared memory which can be inherited by child 1429processes. 1430 1431.. note:: 1432 1433 Although it is possible to store a pointer in shared memory remember that 1434 this will refer to a location in the address space of a specific process. 1435 However, the pointer is quite likely to be invalid in the context of a second 1436 process and trying to dereference the pointer from the second process may 1437 cause a crash. 1438 1439.. function:: RawArray(typecode_or_type, size_or_initializer) 1440 1441 Return a ctypes array allocated from shared memory. 1442 1443 *typecode_or_type* determines the type of the elements of the returned array: 1444 it is either a ctypes type or a one character typecode of the kind used by 1445 the :mod:`array` module. If *size_or_initializer* is an integer then it 1446 determines the length of the array, and the array will be initially zeroed. 1447 Otherwise *size_or_initializer* is a sequence which is used to initialize the 1448 array and whose length determines the length of the array. 1449 1450 Note that setting and getting an element is potentially non-atomic -- use 1451 :func:`Array` instead to make sure that access is automatically synchronized 1452 using a lock. 1453 1454.. function:: RawValue(typecode_or_type, *args) 1455 1456 Return a ctypes object allocated from shared memory. 1457 1458 *typecode_or_type* determines the type of the returned object: it is either a 1459 ctypes type or a one character typecode of the kind used by the :mod:`array` 1460 module. *\*args* is passed on to the constructor for the type. 1461 1462 Note that setting and getting the value is potentially non-atomic -- use 1463 :func:`Value` instead to make sure that access is automatically synchronized 1464 using a lock. 1465 1466 Note that an array of :data:`ctypes.c_char` has ``value`` and ``raw`` 1467 attributes which allow one to use it to store and retrieve strings -- see 1468 documentation for :mod:`ctypes`. 1469 1470.. function:: Array(typecode_or_type, size_or_initializer, *, lock=True) 1471 1472 The same as :func:`RawArray` except that depending on the value of *lock* a 1473 process-safe synchronization wrapper may be returned instead of a raw ctypes 1474 array. 1475 1476 If *lock* is ``True`` (the default) then a new lock object is created to 1477 synchronize access to the value. If *lock* is a 1478 :class:`~multiprocessing.Lock` or :class:`~multiprocessing.RLock` object 1479 then that will be used to synchronize access to the 1480 value. If *lock* is ``False`` then access to the returned object will not be 1481 automatically protected by a lock, so it will not necessarily be 1482 "process-safe". 1483 1484 Note that *lock* is a keyword-only argument. 1485 1486.. function:: Value(typecode_or_type, *args, lock=True) 1487 1488 The same as :func:`RawValue` except that depending on the value of *lock* a 1489 process-safe synchronization wrapper may be returned instead of a raw ctypes 1490 object. 1491 1492 If *lock* is ``True`` (the default) then a new lock object is created to 1493 synchronize access to the value. If *lock* is a :class:`~multiprocessing.Lock` or 1494 :class:`~multiprocessing.RLock` object then that will be used to synchronize access to the 1495 value. If *lock* is ``False`` then access to the returned object will not be 1496 automatically protected by a lock, so it will not necessarily be 1497 "process-safe". 1498 1499 Note that *lock* is a keyword-only argument. 1500 1501.. function:: copy(obj) 1502 1503 Return a ctypes object allocated from shared memory which is a copy of the 1504 ctypes object *obj*. 1505 1506.. function:: synchronized(obj[, lock]) 1507 1508 Return a process-safe wrapper object for a ctypes object which uses *lock* to 1509 synchronize access. If *lock* is ``None`` (the default) then a 1510 :class:`multiprocessing.RLock` object is created automatically. 1511 1512 A synchronized wrapper will have two methods in addition to those of the 1513 object it wraps: :meth:`get_obj` returns the wrapped object and 1514 :meth:`get_lock` returns the lock object used for synchronization. 1515 1516 Note that accessing the ctypes object through the wrapper can be a lot slower 1517 than accessing the raw ctypes object. 1518 1519 .. versionchanged:: 3.5 1520 Synchronized objects support the :term:`context manager` protocol. 1521 1522 1523The table below compares the syntax for creating shared ctypes objects from 1524shared memory with the normal ctypes syntax. (In the table ``MyStruct`` is some 1525subclass of :class:`ctypes.Structure`.) 1526 1527==================== ========================== =========================== 1528ctypes sharedctypes using type sharedctypes using typecode 1529==================== ========================== =========================== 1530c_double(2.4) RawValue(c_double, 2.4) RawValue('d', 2.4) 1531MyStruct(4, 6) RawValue(MyStruct, 4, 6) 1532(c_short * 7)() RawArray(c_short, 7) RawArray('h', 7) 1533(c_int * 3)(9, 2, 8) RawArray(c_int, (9, 2, 8)) RawArray('i', (9, 2, 8)) 1534==================== ========================== =========================== 1535 1536 1537Below is an example where a number of ctypes objects are modified by a child 1538process:: 1539 1540 from multiprocessing import Process, Lock 1541 from multiprocessing.sharedctypes import Value, Array 1542 from ctypes import Structure, c_double 1543 1544 class Point(Structure): 1545 _fields_ = [('x', c_double), ('y', c_double)] 1546 1547 def modify(n, x, s, A): 1548 n.value **= 2 1549 x.value **= 2 1550 s.value = s.value.upper() 1551 for a in A: 1552 a.x **= 2 1553 a.y **= 2 1554 1555 if __name__ == '__main__': 1556 lock = Lock() 1557 1558 n = Value('i', 7) 1559 x = Value(c_double, 1.0/3.0, lock=False) 1560 s = Array('c', b'hello world', lock=lock) 1561 A = Array(Point, [(1.875,-6.25), (-5.75,2.0), (2.375,9.5)], lock=lock) 1562 1563 p = Process(target=modify, args=(n, x, s, A)) 1564 p.start() 1565 p.join() 1566 1567 print(n.value) 1568 print(x.value) 1569 print(s.value) 1570 print([(a.x, a.y) for a in A]) 1571 1572 1573.. highlight:: none 1574 1575The results printed are :: 1576 1577 49 1578 0.1111111111111111 1579 HELLO WORLD 1580 [(3.515625, 39.0625), (33.0625, 4.0), (5.640625, 90.25)] 1581 1582.. highlight:: python3 1583 1584 1585.. _multiprocessing-managers: 1586 1587Managers 1588~~~~~~~~ 1589 1590Managers provide a way to create data which can be shared between different 1591processes, including sharing over a network between processes running on 1592different machines. A manager object controls a server process which manages 1593*shared objects*. Other processes can access the shared objects by using 1594proxies. 1595 1596.. function:: multiprocessing.Manager() 1597 1598 Returns a started :class:`~multiprocessing.managers.SyncManager` object which 1599 can be used for sharing objects between processes. The returned manager 1600 object corresponds to a spawned child process and has methods which will 1601 create shared objects and return corresponding proxies. 1602 1603.. module:: multiprocessing.managers 1604 :synopsis: Share data between process with shared objects. 1605 1606Manager processes will be shutdown as soon as they are garbage collected or 1607their parent process exits. The manager classes are defined in the 1608:mod:`multiprocessing.managers` module: 1609 1610.. class:: BaseManager([address[, authkey]]) 1611 1612 Create a BaseManager object. 1613 1614 Once created one should call :meth:`start` or ``get_server().serve_forever()`` to ensure 1615 that the manager object refers to a started manager process. 1616 1617 *address* is the address on which the manager process listens for new 1618 connections. If *address* is ``None`` then an arbitrary one is chosen. 1619 1620 *authkey* is the authentication key which will be used to check the 1621 validity of incoming connections to the server process. If 1622 *authkey* is ``None`` then ``current_process().authkey`` is used. 1623 Otherwise *authkey* is used and it must be a byte string. 1624 1625 .. method:: start([initializer[, initargs]]) 1626 1627 Start a subprocess to start the manager. If *initializer* is not ``None`` 1628 then the subprocess will call ``initializer(*initargs)`` when it starts. 1629 1630 .. method:: get_server() 1631 1632 Returns a :class:`Server` object which represents the actual server under 1633 the control of the Manager. The :class:`Server` object supports the 1634 :meth:`serve_forever` method:: 1635 1636 >>> from multiprocessing.managers import BaseManager 1637 >>> manager = BaseManager(address=('', 50000), authkey=b'abc') 1638 >>> server = manager.get_server() 1639 >>> server.serve_forever() 1640 1641 :class:`Server` additionally has an :attr:`address` attribute. 1642 1643 .. method:: connect() 1644 1645 Connect a local manager object to a remote manager process:: 1646 1647 >>> from multiprocessing.managers import BaseManager 1648 >>> m = BaseManager(address=('127.0.0.1', 50000), authkey=b'abc') 1649 >>> m.connect() 1650 1651 .. method:: shutdown() 1652 1653 Stop the process used by the manager. This is only available if 1654 :meth:`start` has been used to start the server process. 1655 1656 This can be called multiple times. 1657 1658 .. method:: register(typeid[, callable[, proxytype[, exposed[, method_to_typeid[, create_method]]]]]) 1659 1660 A classmethod which can be used for registering a type or callable with 1661 the manager class. 1662 1663 *typeid* is a "type identifier" which is used to identify a particular 1664 type of shared object. This must be a string. 1665 1666 *callable* is a callable used for creating objects for this type 1667 identifier. If a manager instance will be connected to the 1668 server using the :meth:`connect` method, or if the 1669 *create_method* argument is ``False`` then this can be left as 1670 ``None``. 1671 1672 *proxytype* is a subclass of :class:`BaseProxy` which is used to create 1673 proxies for shared objects with this *typeid*. If ``None`` then a proxy 1674 class is created automatically. 1675 1676 *exposed* is used to specify a sequence of method names which proxies for 1677 this typeid should be allowed to access using 1678 :meth:`BaseProxy._callmethod`. (If *exposed* is ``None`` then 1679 :attr:`proxytype._exposed_` is used instead if it exists.) In the case 1680 where no exposed list is specified, all "public methods" of the shared 1681 object will be accessible. (Here a "public method" means any attribute 1682 which has a :meth:`~object.__call__` method and whose name does not begin 1683 with ``'_'``.) 1684 1685 *method_to_typeid* is a mapping used to specify the return type of those 1686 exposed methods which should return a proxy. It maps method names to 1687 typeid strings. (If *method_to_typeid* is ``None`` then 1688 :attr:`proxytype._method_to_typeid_` is used instead if it exists.) If a 1689 method's name is not a key of this mapping or if the mapping is ``None`` 1690 then the object returned by the method will be copied by value. 1691 1692 *create_method* determines whether a method should be created with name 1693 *typeid* which can be used to tell the server process to create a new 1694 shared object and return a proxy for it. By default it is ``True``. 1695 1696 :class:`BaseManager` instances also have one read-only property: 1697 1698 .. attribute:: address 1699 1700 The address used by the manager. 1701 1702 .. versionchanged:: 3.3 1703 Manager objects support the context management protocol -- see 1704 :ref:`typecontextmanager`. :meth:`~contextmanager.__enter__` starts the 1705 server process (if it has not already started) and then returns the 1706 manager object. :meth:`~contextmanager.__exit__` calls :meth:`shutdown`. 1707 1708 In previous versions :meth:`~contextmanager.__enter__` did not start the 1709 manager's server process if it was not already started. 1710 1711.. class:: SyncManager 1712 1713 A subclass of :class:`BaseManager` which can be used for the synchronization 1714 of processes. Objects of this type are returned by 1715 :func:`multiprocessing.Manager`. 1716 1717 Its methods create and return :ref:`multiprocessing-proxy_objects` for a 1718 number of commonly used data types to be synchronized across processes. 1719 This notably includes shared lists and dictionaries. 1720 1721 .. method:: Barrier(parties[, action[, timeout]]) 1722 1723 Create a shared :class:`threading.Barrier` object and return a 1724 proxy for it. 1725 1726 .. versionadded:: 3.3 1727 1728 .. method:: BoundedSemaphore([value]) 1729 1730 Create a shared :class:`threading.BoundedSemaphore` object and return a 1731 proxy for it. 1732 1733 .. method:: Condition([lock]) 1734 1735 Create a shared :class:`threading.Condition` object and return a proxy for 1736 it. 1737 1738 If *lock* is supplied then it should be a proxy for a 1739 :class:`threading.Lock` or :class:`threading.RLock` object. 1740 1741 .. versionchanged:: 3.3 1742 The :meth:`~threading.Condition.wait_for` method was added. 1743 1744 .. method:: Event() 1745 1746 Create a shared :class:`threading.Event` object and return a proxy for it. 1747 1748 .. method:: Lock() 1749 1750 Create a shared :class:`threading.Lock` object and return a proxy for it. 1751 1752 .. method:: Namespace() 1753 1754 Create a shared :class:`Namespace` object and return a proxy for it. 1755 1756 .. method:: Queue([maxsize]) 1757 1758 Create a shared :class:`queue.Queue` object and return a proxy for it. 1759 1760 .. method:: RLock() 1761 1762 Create a shared :class:`threading.RLock` object and return a proxy for it. 1763 1764 .. method:: Semaphore([value]) 1765 1766 Create a shared :class:`threading.Semaphore` object and return a proxy for 1767 it. 1768 1769 .. method:: Array(typecode, sequence) 1770 1771 Create an array and return a proxy for it. 1772 1773 .. method:: Value(typecode, value) 1774 1775 Create an object with a writable ``value`` attribute and return a proxy 1776 for it. 1777 1778 .. method:: dict() 1779 dict(mapping) 1780 dict(sequence) 1781 1782 Create a shared :class:`dict` object and return a proxy for it. 1783 1784 .. method:: list() 1785 list(sequence) 1786 1787 Create a shared :class:`list` object and return a proxy for it. 1788 1789 .. versionchanged:: 3.6 1790 Shared objects are capable of being nested. For example, a shared 1791 container object such as a shared list can contain other shared objects 1792 which will all be managed and synchronized by the :class:`SyncManager`. 1793 1794.. class:: Namespace 1795 1796 A type that can register with :class:`SyncManager`. 1797 1798 A namespace object has no public methods, but does have writable attributes. 1799 Its representation shows the values of its attributes. 1800 1801 However, when using a proxy for a namespace object, an attribute beginning 1802 with ``'_'`` will be an attribute of the proxy and not an attribute of the 1803 referent: 1804 1805 .. doctest:: 1806 1807 >>> manager = multiprocessing.Manager() 1808 >>> Global = manager.Namespace() 1809 >>> Global.x = 10 1810 >>> Global.y = 'hello' 1811 >>> Global._z = 12.3 # this is an attribute of the proxy 1812 >>> print(Global) 1813 Namespace(x=10, y='hello') 1814 1815 1816Customized managers 1817>>>>>>>>>>>>>>>>>>> 1818 1819To create one's own manager, one creates a subclass of :class:`BaseManager` and 1820uses the :meth:`~BaseManager.register` classmethod to register new types or 1821callables with the manager class. For example:: 1822 1823 from multiprocessing.managers import BaseManager 1824 1825 class MathsClass: 1826 def add(self, x, y): 1827 return x + y 1828 def mul(self, x, y): 1829 return x * y 1830 1831 class MyManager(BaseManager): 1832 pass 1833 1834 MyManager.register('Maths', MathsClass) 1835 1836 if __name__ == '__main__': 1837 with MyManager() as manager: 1838 maths = manager.Maths() 1839 print(maths.add(4, 3)) # prints 7 1840 print(maths.mul(7, 8)) # prints 56 1841 1842 1843Using a remote manager 1844>>>>>>>>>>>>>>>>>>>>>> 1845 1846It is possible to run a manager server on one machine and have clients use it 1847from other machines (assuming that the firewalls involved allow it). 1848 1849Running the following commands creates a server for a single shared queue which 1850remote clients can access:: 1851 1852 >>> from multiprocessing.managers import BaseManager 1853 >>> from queue import Queue 1854 >>> queue = Queue() 1855 >>> class QueueManager(BaseManager): pass 1856 >>> QueueManager.register('get_queue', callable=lambda:queue) 1857 >>> m = QueueManager(address=('', 50000), authkey=b'abracadabra') 1858 >>> s = m.get_server() 1859 >>> s.serve_forever() 1860 1861One client can access the server as follows:: 1862 1863 >>> from multiprocessing.managers import BaseManager 1864 >>> class QueueManager(BaseManager): pass 1865 >>> QueueManager.register('get_queue') 1866 >>> m = QueueManager(address=('foo.bar.org', 50000), authkey=b'abracadabra') 1867 >>> m.connect() 1868 >>> queue = m.get_queue() 1869 >>> queue.put('hello') 1870 1871Another client can also use it:: 1872 1873 >>> from multiprocessing.managers import BaseManager 1874 >>> class QueueManager(BaseManager): pass 1875 >>> QueueManager.register('get_queue') 1876 >>> m = QueueManager(address=('foo.bar.org', 50000), authkey=b'abracadabra') 1877 >>> m.connect() 1878 >>> queue = m.get_queue() 1879 >>> queue.get() 1880 'hello' 1881 1882Local processes can also access that queue, using the code from above on the 1883client to access it remotely:: 1884 1885 >>> from multiprocessing import Process, Queue 1886 >>> from multiprocessing.managers import BaseManager 1887 >>> class Worker(Process): 1888 ... def __init__(self, q): 1889 ... self.q = q 1890 ... super(Worker, self).__init__() 1891 ... def run(self): 1892 ... self.q.put('local hello') 1893 ... 1894 >>> queue = Queue() 1895 >>> w = Worker(queue) 1896 >>> w.start() 1897 >>> class QueueManager(BaseManager): pass 1898 ... 1899 >>> QueueManager.register('get_queue', callable=lambda: queue) 1900 >>> m = QueueManager(address=('', 50000), authkey=b'abracadabra') 1901 >>> s = m.get_server() 1902 >>> s.serve_forever() 1903 1904.. _multiprocessing-proxy_objects: 1905 1906Proxy Objects 1907~~~~~~~~~~~~~ 1908 1909A proxy is an object which *refers* to a shared object which lives (presumably) 1910in a different process. The shared object is said to be the *referent* of the 1911proxy. Multiple proxy objects may have the same referent. 1912 1913A proxy object has methods which invoke corresponding methods of its referent 1914(although not every method of the referent will necessarily be available through 1915the proxy). In this way, a proxy can be used just like its referent can: 1916 1917.. doctest:: 1918 1919 >>> from multiprocessing import Manager 1920 >>> manager = Manager() 1921 >>> l = manager.list([i*i for i in range(10)]) 1922 >>> print(l) 1923 [0, 1, 4, 9, 16, 25, 36, 49, 64, 81] 1924 >>> print(repr(l)) 1925 <ListProxy object, typeid 'list' at 0x...> 1926 >>> l[4] 1927 16 1928 >>> l[2:5] 1929 [4, 9, 16] 1930 1931Notice that applying :func:`str` to a proxy will return the representation of 1932the referent, whereas applying :func:`repr` will return the representation of 1933the proxy. 1934 1935An important feature of proxy objects is that they are picklable so they can be 1936passed between processes. As such, a referent can contain 1937:ref:`multiprocessing-proxy_objects`. This permits nesting of these managed 1938lists, dicts, and other :ref:`multiprocessing-proxy_objects`: 1939 1940.. doctest:: 1941 1942 >>> a = manager.list() 1943 >>> b = manager.list() 1944 >>> a.append(b) # referent of a now contains referent of b 1945 >>> print(a, b) 1946 [<ListProxy object, typeid 'list' at ...>] [] 1947 >>> b.append('hello') 1948 >>> print(a[0], b) 1949 ['hello'] ['hello'] 1950 1951Similarly, dict and list proxies may be nested inside one another:: 1952 1953 >>> l_outer = manager.list([ manager.dict() for i in range(2) ]) 1954 >>> d_first_inner = l_outer[0] 1955 >>> d_first_inner['a'] = 1 1956 >>> d_first_inner['b'] = 2 1957 >>> l_outer[1]['c'] = 3 1958 >>> l_outer[1]['z'] = 26 1959 >>> print(l_outer[0]) 1960 {'a': 1, 'b': 2} 1961 >>> print(l_outer[1]) 1962 {'c': 3, 'z': 26} 1963 1964If standard (non-proxy) :class:`list` or :class:`dict` objects are contained 1965in a referent, modifications to those mutable values will not be propagated 1966through the manager because the proxy has no way of knowing when the values 1967contained within are modified. However, storing a value in a container proxy 1968(which triggers a ``__setitem__`` on the proxy object) does propagate through 1969the manager and so to effectively modify such an item, one could re-assign the 1970modified value to the container proxy:: 1971 1972 # create a list proxy and append a mutable object (a dictionary) 1973 lproxy = manager.list() 1974 lproxy.append({}) 1975 # now mutate the dictionary 1976 d = lproxy[0] 1977 d['a'] = 1 1978 d['b'] = 2 1979 # at this point, the changes to d are not yet synced, but by 1980 # updating the dictionary, the proxy is notified of the change 1981 lproxy[0] = d 1982 1983This approach is perhaps less convenient than employing nested 1984:ref:`multiprocessing-proxy_objects` for most use cases but also 1985demonstrates a level of control over the synchronization. 1986 1987.. note:: 1988 1989 The proxy types in :mod:`multiprocessing` do nothing to support comparisons 1990 by value. So, for instance, we have: 1991 1992 .. doctest:: 1993 1994 >>> manager.list([1,2,3]) == [1,2,3] 1995 False 1996 1997 One should just use a copy of the referent instead when making comparisons. 1998 1999.. class:: BaseProxy 2000 2001 Proxy objects are instances of subclasses of :class:`BaseProxy`. 2002 2003 .. method:: _callmethod(methodname[, args[, kwds]]) 2004 2005 Call and return the result of a method of the proxy's referent. 2006 2007 If ``proxy`` is a proxy whose referent is ``obj`` then the expression :: 2008 2009 proxy._callmethod(methodname, args, kwds) 2010 2011 will evaluate the expression :: 2012 2013 getattr(obj, methodname)(*args, **kwds) 2014 2015 in the manager's process. 2016 2017 The returned value will be a copy of the result of the call or a proxy to 2018 a new shared object -- see documentation for the *method_to_typeid* 2019 argument of :meth:`BaseManager.register`. 2020 2021 If an exception is raised by the call, then is re-raised by 2022 :meth:`_callmethod`. If some other exception is raised in the manager's 2023 process then this is converted into a :exc:`RemoteError` exception and is 2024 raised by :meth:`_callmethod`. 2025 2026 Note in particular that an exception will be raised if *methodname* has 2027 not been *exposed*. 2028 2029 An example of the usage of :meth:`_callmethod`: 2030 2031 .. doctest:: 2032 2033 >>> l = manager.list(range(10)) 2034 >>> l._callmethod('__len__') 2035 10 2036 >>> l._callmethod('__getitem__', (slice(2, 7),)) # equivalent to l[2:7] 2037 [2, 3, 4, 5, 6] 2038 >>> l._callmethod('__getitem__', (20,)) # equivalent to l[20] 2039 Traceback (most recent call last): 2040 ... 2041 IndexError: list index out of range 2042 2043 .. method:: _getvalue() 2044 2045 Return a copy of the referent. 2046 2047 If the referent is unpicklable then this will raise an exception. 2048 2049 .. method:: __repr__ 2050 2051 Return a representation of the proxy object. 2052 2053 .. method:: __str__ 2054 2055 Return the representation of the referent. 2056 2057 2058Cleanup 2059>>>>>>> 2060 2061A proxy object uses a weakref callback so that when it gets garbage collected it 2062deregisters itself from the manager which owns its referent. 2063 2064A shared object gets deleted from the manager process when there are no longer 2065any proxies referring to it. 2066 2067 2068Process Pools 2069~~~~~~~~~~~~~ 2070 2071.. module:: multiprocessing.pool 2072 :synopsis: Create pools of processes. 2073 2074One can create a pool of processes which will carry out tasks submitted to it 2075with the :class:`Pool` class. 2076 2077.. class:: Pool([processes[, initializer[, initargs[, maxtasksperchild [, context]]]]]) 2078 2079 A process pool object which controls a pool of worker processes to which jobs 2080 can be submitted. It supports asynchronous results with timeouts and 2081 callbacks and has a parallel map implementation. 2082 2083 *processes* is the number of worker processes to use. If *processes* is 2084 ``None`` then the number returned by :func:`os.cpu_count` is used. 2085 2086 If *initializer* is not ``None`` then each worker process will call 2087 ``initializer(*initargs)`` when it starts. 2088 2089 *maxtasksperchild* is the number of tasks a worker process can complete 2090 before it will exit and be replaced with a fresh worker process, to enable 2091 unused resources to be freed. The default *maxtasksperchild* is ``None``, which 2092 means worker processes will live as long as the pool. 2093 2094 *context* can be used to specify the context used for starting 2095 the worker processes. Usually a pool is created using the 2096 function :func:`multiprocessing.Pool` or the :meth:`Pool` method 2097 of a context object. In both cases *context* is set 2098 appropriately. 2099 2100 Note that the methods of the pool object should only be called by 2101 the process which created the pool. 2102 2103 .. versionadded:: 3.2 2104 *maxtasksperchild* 2105 2106 .. versionadded:: 3.4 2107 *context* 2108 2109 .. note:: 2110 2111 Worker processes within a :class:`Pool` typically live for the complete 2112 duration of the Pool's work queue. A frequent pattern found in other 2113 systems (such as Apache, mod_wsgi, etc) to free resources held by 2114 workers is to allow a worker within a pool to complete only a set 2115 amount of work before being exiting, being cleaned up and a new 2116 process spawned to replace the old one. The *maxtasksperchild* 2117 argument to the :class:`Pool` exposes this ability to the end user. 2118 2119 .. method:: apply(func[, args[, kwds]]) 2120 2121 Call *func* with arguments *args* and keyword arguments *kwds*. It blocks 2122 until the result is ready. Given this blocks, :meth:`apply_async` is 2123 better suited for performing work in parallel. Additionally, *func* 2124 is only executed in one of the workers of the pool. 2125 2126 .. method:: apply_async(func[, args[, kwds[, callback[, error_callback]]]]) 2127 2128 A variant of the :meth:`apply` method which returns a result object. 2129 2130 If *callback* is specified then it should be a callable which accepts a 2131 single argument. When the result becomes ready *callback* is applied to 2132 it, that is unless the call failed, in which case the *error_callback* 2133 is applied instead. 2134 2135 If *error_callback* is specified then it should be a callable which 2136 accepts a single argument. If the target function fails, then 2137 the *error_callback* is called with the exception instance. 2138 2139 Callbacks should complete immediately since otherwise the thread which 2140 handles the results will get blocked. 2141 2142 .. method:: map(func, iterable[, chunksize]) 2143 2144 A parallel equivalent of the :func:`map` built-in function (it supports only 2145 one *iterable* argument though). It blocks until the result is ready. 2146 2147 This method chops the iterable into a number of chunks which it submits to 2148 the process pool as separate tasks. The (approximate) size of these 2149 chunks can be specified by setting *chunksize* to a positive integer. 2150 2151 Note that it may cause high memory usage for very long iterables. Consider 2152 using :meth:`imap` or :meth:`imap_unordered` with explicit *chunksize* 2153 option for better efficiency. 2154 2155 .. method:: map_async(func, iterable[, chunksize[, callback[, error_callback]]]) 2156 2157 A variant of the :meth:`.map` method which returns a result object. 2158 2159 If *callback* is specified then it should be a callable which accepts a 2160 single argument. When the result becomes ready *callback* is applied to 2161 it, that is unless the call failed, in which case the *error_callback* 2162 is applied instead. 2163 2164 If *error_callback* is specified then it should be a callable which 2165 accepts a single argument. If the target function fails, then 2166 the *error_callback* is called with the exception instance. 2167 2168 Callbacks should complete immediately since otherwise the thread which 2169 handles the results will get blocked. 2170 2171 .. method:: imap(func, iterable[, chunksize]) 2172 2173 A lazier version of :meth:`.map`. 2174 2175 The *chunksize* argument is the same as the one used by the :meth:`.map` 2176 method. For very long iterables using a large value for *chunksize* can 2177 make the job complete **much** faster than using the default value of 2178 ``1``. 2179 2180 Also if *chunksize* is ``1`` then the :meth:`!next` method of the iterator 2181 returned by the :meth:`imap` method has an optional *timeout* parameter: 2182 ``next(timeout)`` will raise :exc:`multiprocessing.TimeoutError` if the 2183 result cannot be returned within *timeout* seconds. 2184 2185 .. method:: imap_unordered(func, iterable[, chunksize]) 2186 2187 The same as :meth:`imap` except that the ordering of the results from the 2188 returned iterator should be considered arbitrary. (Only when there is 2189 only one worker process is the order guaranteed to be "correct".) 2190 2191 .. method:: starmap(func, iterable[, chunksize]) 2192 2193 Like :meth:`map` except that the elements of the *iterable* are expected 2194 to be iterables that are unpacked as arguments. 2195 2196 Hence an *iterable* of ``[(1,2), (3, 4)]`` results in ``[func(1,2), 2197 func(3,4)]``. 2198 2199 .. versionadded:: 3.3 2200 2201 .. method:: starmap_async(func, iterable[, chunksize[, callback[, error_callback]]]) 2202 2203 A combination of :meth:`starmap` and :meth:`map_async` that iterates over 2204 *iterable* of iterables and calls *func* with the iterables unpacked. 2205 Returns a result object. 2206 2207 .. versionadded:: 3.3 2208 2209 .. method:: close() 2210 2211 Prevents any more tasks from being submitted to the pool. Once all the 2212 tasks have been completed the worker processes will exit. 2213 2214 .. method:: terminate() 2215 2216 Stops the worker processes immediately without completing outstanding 2217 work. When the pool object is garbage collected :meth:`terminate` will be 2218 called immediately. 2219 2220 .. method:: join() 2221 2222 Wait for the worker processes to exit. One must call :meth:`close` or 2223 :meth:`terminate` before using :meth:`join`. 2224 2225 .. versionadded:: 3.3 2226 Pool objects now support the context management protocol -- see 2227 :ref:`typecontextmanager`. :meth:`~contextmanager.__enter__` returns the 2228 pool object, and :meth:`~contextmanager.__exit__` calls :meth:`terminate`. 2229 2230 2231.. class:: AsyncResult 2232 2233 The class of the result returned by :meth:`Pool.apply_async` and 2234 :meth:`Pool.map_async`. 2235 2236 .. method:: get([timeout]) 2237 2238 Return the result when it arrives. If *timeout* is not ``None`` and the 2239 result does not arrive within *timeout* seconds then 2240 :exc:`multiprocessing.TimeoutError` is raised. If the remote call raised 2241 an exception then that exception will be reraised by :meth:`get`. 2242 2243 .. method:: wait([timeout]) 2244 2245 Wait until the result is available or until *timeout* seconds pass. 2246 2247 .. method:: ready() 2248 2249 Return whether the call has completed. 2250 2251 .. method:: successful() 2252 2253 Return whether the call completed without raising an exception. Will 2254 raise :exc:`AssertionError` if the result is not ready. 2255 2256The following example demonstrates the use of a pool:: 2257 2258 from multiprocessing import Pool 2259 import time 2260 2261 def f(x): 2262 return x*x 2263 2264 if __name__ == '__main__': 2265 with Pool(processes=4) as pool: # start 4 worker processes 2266 result = pool.apply_async(f, (10,)) # evaluate "f(10)" asynchronously in a single process 2267 print(result.get(timeout=1)) # prints "100" unless your computer is *very* slow 2268 2269 print(pool.map(f, range(10))) # prints "[0, 1, 4,..., 81]" 2270 2271 it = pool.imap(f, range(10)) 2272 print(next(it)) # prints "0" 2273 print(next(it)) # prints "1" 2274 print(it.next(timeout=1)) # prints "4" unless your computer is *very* slow 2275 2276 result = pool.apply_async(time.sleep, (10,)) 2277 print(result.get(timeout=1)) # raises multiprocessing.TimeoutError 2278 2279 2280.. _multiprocessing-listeners-clients: 2281 2282Listeners and Clients 2283~~~~~~~~~~~~~~~~~~~~~ 2284 2285.. module:: multiprocessing.connection 2286 :synopsis: API for dealing with sockets. 2287 2288Usually message passing between processes is done using queues or by using 2289:class:`~Connection` objects returned by 2290:func:`~multiprocessing.Pipe`. 2291 2292However, the :mod:`multiprocessing.connection` module allows some extra 2293flexibility. It basically gives a high level message oriented API for dealing 2294with sockets or Windows named pipes. It also has support for *digest 2295authentication* using the :mod:`hmac` module, and for polling 2296multiple connections at the same time. 2297 2298 2299.. function:: deliver_challenge(connection, authkey) 2300 2301 Send a randomly generated message to the other end of the connection and wait 2302 for a reply. 2303 2304 If the reply matches the digest of the message using *authkey* as the key 2305 then a welcome message is sent to the other end of the connection. Otherwise 2306 :exc:`~multiprocessing.AuthenticationError` is raised. 2307 2308.. function:: answer_challenge(connection, authkey) 2309 2310 Receive a message, calculate the digest of the message using *authkey* as the 2311 key, and then send the digest back. 2312 2313 If a welcome message is not received, then 2314 :exc:`~multiprocessing.AuthenticationError` is raised. 2315 2316.. function:: Client(address[, family[, authkey]]) 2317 2318 Attempt to set up a connection to the listener which is using address 2319 *address*, returning a :class:`~Connection`. 2320 2321 The type of the connection is determined by *family* argument, but this can 2322 generally be omitted since it can usually be inferred from the format of 2323 *address*. (See :ref:`multiprocessing-address-formats`) 2324 2325 If *authkey* is given and not None, it should be a byte string and will be 2326 used as the secret key for an HMAC-based authentication challenge. No 2327 authentication is done if *authkey* is None. 2328 :exc:`~multiprocessing.AuthenticationError` is raised if authentication fails. 2329 See :ref:`multiprocessing-auth-keys`. 2330 2331.. class:: Listener([address[, family[, backlog[, authkey]]]]) 2332 2333 A wrapper for a bound socket or Windows named pipe which is 'listening' for 2334 connections. 2335 2336 *address* is the address to be used by the bound socket or named pipe of the 2337 listener object. 2338 2339 .. note:: 2340 2341 If an address of '0.0.0.0' is used, the address will not be a connectable 2342 end point on Windows. If you require a connectable end-point, 2343 you should use '127.0.0.1'. 2344 2345 *family* is the type of socket (or named pipe) to use. This can be one of 2346 the strings ``'AF_INET'`` (for a TCP socket), ``'AF_UNIX'`` (for a Unix 2347 domain socket) or ``'AF_PIPE'`` (for a Windows named pipe). Of these only 2348 the first is guaranteed to be available. If *family* is ``None`` then the 2349 family is inferred from the format of *address*. If *address* is also 2350 ``None`` then a default is chosen. This default is the family which is 2351 assumed to be the fastest available. See 2352 :ref:`multiprocessing-address-formats`. Note that if *family* is 2353 ``'AF_UNIX'`` and address is ``None`` then the socket will be created in a 2354 private temporary directory created using :func:`tempfile.mkstemp`. 2355 2356 If the listener object uses a socket then *backlog* (1 by default) is passed 2357 to the :meth:`~socket.socket.listen` method of the socket once it has been 2358 bound. 2359 2360 If *authkey* is given and not None, it should be a byte string and will be 2361 used as the secret key for an HMAC-based authentication challenge. No 2362 authentication is done if *authkey* is None. 2363 :exc:`~multiprocessing.AuthenticationError` is raised if authentication fails. 2364 See :ref:`multiprocessing-auth-keys`. 2365 2366 .. method:: accept() 2367 2368 Accept a connection on the bound socket or named pipe of the listener 2369 object and return a :class:`~Connection` object. 2370 If authentication is attempted and fails, then 2371 :exc:`~multiprocessing.AuthenticationError` is raised. 2372 2373 .. method:: close() 2374 2375 Close the bound socket or named pipe of the listener object. This is 2376 called automatically when the listener is garbage collected. However it 2377 is advisable to call it explicitly. 2378 2379 Listener objects have the following read-only properties: 2380 2381 .. attribute:: address 2382 2383 The address which is being used by the Listener object. 2384 2385 .. attribute:: last_accepted 2386 2387 The address from which the last accepted connection came. If this is 2388 unavailable then it is ``None``. 2389 2390 .. versionadded:: 3.3 2391 Listener objects now support the context management protocol -- see 2392 :ref:`typecontextmanager`. :meth:`~contextmanager.__enter__` returns the 2393 listener object, and :meth:`~contextmanager.__exit__` calls :meth:`close`. 2394 2395.. function:: wait(object_list, timeout=None) 2396 2397 Wait till an object in *object_list* is ready. Returns the list of 2398 those objects in *object_list* which are ready. If *timeout* is a 2399 float then the call blocks for at most that many seconds. If 2400 *timeout* is ``None`` then it will block for an unlimited period. 2401 A negative timeout is equivalent to a zero timeout. 2402 2403 For both Unix and Windows, an object can appear in *object_list* if 2404 it is 2405 2406 * a readable :class:`~multiprocessing.connection.Connection` object; 2407 * a connected and readable :class:`socket.socket` object; or 2408 * the :attr:`~multiprocessing.Process.sentinel` attribute of a 2409 :class:`~multiprocessing.Process` object. 2410 2411 A connection or socket object is ready when there is data available 2412 to be read from it, or the other end has been closed. 2413 2414 **Unix**: ``wait(object_list, timeout)`` almost equivalent 2415 ``select.select(object_list, [], [], timeout)``. The difference is 2416 that, if :func:`select.select` is interrupted by a signal, it can 2417 raise :exc:`OSError` with an error number of ``EINTR``, whereas 2418 :func:`wait` will not. 2419 2420 **Windows**: An item in *object_list* must either be an integer 2421 handle which is waitable (according to the definition used by the 2422 documentation of the Win32 function ``WaitForMultipleObjects()``) 2423 or it can be an object with a :meth:`fileno` method which returns a 2424 socket handle or pipe handle. (Note that pipe handles and socket 2425 handles are **not** waitable handles.) 2426 2427 .. versionadded:: 3.3 2428 2429 2430**Examples** 2431 2432The following server code creates a listener which uses ``'secret password'`` as 2433an authentication key. It then waits for a connection and sends some data to 2434the client:: 2435 2436 from multiprocessing.connection import Listener 2437 from array import array 2438 2439 address = ('localhost', 6000) # family is deduced to be 'AF_INET' 2440 2441 with Listener(address, authkey=b'secret password') as listener: 2442 with listener.accept() as conn: 2443 print('connection accepted from', listener.last_accepted) 2444 2445 conn.send([2.25, None, 'junk', float]) 2446 2447 conn.send_bytes(b'hello') 2448 2449 conn.send_bytes(array('i', [42, 1729])) 2450 2451The following code connects to the server and receives some data from the 2452server:: 2453 2454 from multiprocessing.connection import Client 2455 from array import array 2456 2457 address = ('localhost', 6000) 2458 2459 with Client(address, authkey=b'secret password') as conn: 2460 print(conn.recv()) # => [2.25, None, 'junk', float] 2461 2462 print(conn.recv_bytes()) # => 'hello' 2463 2464 arr = array('i', [0, 0, 0, 0, 0]) 2465 print(conn.recv_bytes_into(arr)) # => 8 2466 print(arr) # => array('i', [42, 1729, 0, 0, 0]) 2467 2468The following code uses :func:`~multiprocessing.connection.wait` to 2469wait for messages from multiple processes at once:: 2470 2471 import time, random 2472 from multiprocessing import Process, Pipe, current_process 2473 from multiprocessing.connection import wait 2474 2475 def foo(w): 2476 for i in range(10): 2477 w.send((i, current_process().name)) 2478 w.close() 2479 2480 if __name__ == '__main__': 2481 readers = [] 2482 2483 for i in range(4): 2484 r, w = Pipe(duplex=False) 2485 readers.append(r) 2486 p = Process(target=foo, args=(w,)) 2487 p.start() 2488 # We close the writable end of the pipe now to be sure that 2489 # p is the only process which owns a handle for it. This 2490 # ensures that when p closes its handle for the writable end, 2491 # wait() will promptly report the readable end as being ready. 2492 w.close() 2493 2494 while readers: 2495 for r in wait(readers): 2496 try: 2497 msg = r.recv() 2498 except EOFError: 2499 readers.remove(r) 2500 else: 2501 print(msg) 2502 2503 2504.. _multiprocessing-address-formats: 2505 2506Address Formats 2507>>>>>>>>>>>>>>> 2508 2509* An ``'AF_INET'`` address is a tuple of the form ``(hostname, port)`` where 2510 *hostname* is a string and *port* is an integer. 2511 2512* An ``'AF_UNIX'`` address is a string representing a filename on the 2513 filesystem. 2514 2515* An ``'AF_PIPE'`` address is a string of the form 2516 :samp:`r'\\\\.\\pipe\\{PipeName}'`. To use :func:`Client` to connect to a named 2517 pipe on a remote computer called *ServerName* one should use an address of the 2518 form :samp:`r'\\\\{ServerName}\\pipe\\{PipeName}'` instead. 2519 2520Note that any string beginning with two backslashes is assumed by default to be 2521an ``'AF_PIPE'`` address rather than an ``'AF_UNIX'`` address. 2522 2523 2524.. _multiprocessing-auth-keys: 2525 2526Authentication keys 2527~~~~~~~~~~~~~~~~~~~ 2528 2529When one uses :meth:`Connection.recv <Connection.recv>`, the 2530data received is automatically 2531unpickled. Unfortunately unpickling data from an untrusted source is a security 2532risk. Therefore :class:`Listener` and :func:`Client` use the :mod:`hmac` module 2533to provide digest authentication. 2534 2535An authentication key is a byte string which can be thought of as a 2536password: once a connection is established both ends will demand proof 2537that the other knows the authentication key. (Demonstrating that both 2538ends are using the same key does **not** involve sending the key over 2539the connection.) 2540 2541If authentication is requested but no authentication key is specified then the 2542return value of ``current_process().authkey`` is used (see 2543:class:`~multiprocessing.Process`). This value will be automatically inherited by 2544any :class:`~multiprocessing.Process` object that the current process creates. 2545This means that (by default) all processes of a multi-process program will share 2546a single authentication key which can be used when setting up connections 2547between themselves. 2548 2549Suitable authentication keys can also be generated by using :func:`os.urandom`. 2550 2551 2552Logging 2553~~~~~~~ 2554 2555Some support for logging is available. Note, however, that the :mod:`logging` 2556package does not use process shared locks so it is possible (depending on the 2557handler type) for messages from different processes to get mixed up. 2558 2559.. currentmodule:: multiprocessing 2560.. function:: get_logger() 2561 2562 Returns the logger used by :mod:`multiprocessing`. If necessary, a new one 2563 will be created. 2564 2565 When first created the logger has level :data:`logging.NOTSET` and no 2566 default handler. Messages sent to this logger will not by default propagate 2567 to the root logger. 2568 2569 Note that on Windows child processes will only inherit the level of the 2570 parent process's logger -- any other customization of the logger will not be 2571 inherited. 2572 2573.. currentmodule:: multiprocessing 2574.. function:: log_to_stderr() 2575 2576 This function performs a call to :func:`get_logger` but in addition to 2577 returning the logger created by get_logger, it adds a handler which sends 2578 output to :data:`sys.stderr` using format 2579 ``'[%(levelname)s/%(processName)s] %(message)s'``. 2580 2581Below is an example session with logging turned on:: 2582 2583 >>> import multiprocessing, logging 2584 >>> logger = multiprocessing.log_to_stderr() 2585 >>> logger.setLevel(logging.INFO) 2586 >>> logger.warning('doomed') 2587 [WARNING/MainProcess] doomed 2588 >>> m = multiprocessing.Manager() 2589 [INFO/SyncManager-...] child process calling self.run() 2590 [INFO/SyncManager-...] created temp directory /.../pymp-... 2591 [INFO/SyncManager-...] manager serving at '/.../listener-...' 2592 >>> del m 2593 [INFO/MainProcess] sending shutdown message to manager 2594 [INFO/SyncManager-...] manager exiting with exitcode 0 2595 2596For a full table of logging levels, see the :mod:`logging` module. 2597 2598 2599The :mod:`multiprocessing.dummy` module 2600~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2601 2602.. module:: multiprocessing.dummy 2603 :synopsis: Dumb wrapper around threading. 2604 2605:mod:`multiprocessing.dummy` replicates the API of :mod:`multiprocessing` but is 2606no more than a wrapper around the :mod:`threading` module. 2607 2608 2609.. _multiprocessing-programming: 2610 2611Programming guidelines 2612---------------------- 2613 2614There are certain guidelines and idioms which should be adhered to when using 2615:mod:`multiprocessing`. 2616 2617 2618All start methods 2619~~~~~~~~~~~~~~~~~ 2620 2621The following applies to all start methods. 2622 2623Avoid shared state 2624 2625 As far as possible one should try to avoid shifting large amounts of data 2626 between processes. 2627 2628 It is probably best to stick to using queues or pipes for communication 2629 between processes rather than using the lower level synchronization 2630 primitives. 2631 2632Picklability 2633 2634 Ensure that the arguments to the methods of proxies are picklable. 2635 2636Thread safety of proxies 2637 2638 Do not use a proxy object from more than one thread unless you protect it 2639 with a lock. 2640 2641 (There is never a problem with different processes using the *same* proxy.) 2642 2643Joining zombie processes 2644 2645 On Unix when a process finishes but has not been joined it becomes a zombie. 2646 There should never be very many because each time a new process starts (or 2647 :func:`~multiprocessing.active_children` is called) all completed processes 2648 which have not yet been joined will be joined. Also calling a finished 2649 process's :meth:`Process.is_alive <multiprocessing.Process.is_alive>` will 2650 join the process. Even so it is probably good 2651 practice to explicitly join all the processes that you start. 2652 2653Better to inherit than pickle/unpickle 2654 2655 When using the *spawn* or *forkserver* start methods many types 2656 from :mod:`multiprocessing` need to be picklable so that child 2657 processes can use them. However, one should generally avoid 2658 sending shared objects to other processes using pipes or queues. 2659 Instead you should arrange the program so that a process which 2660 needs access to a shared resource created elsewhere can inherit it 2661 from an ancestor process. 2662 2663Avoid terminating processes 2664 2665 Using the :meth:`Process.terminate <multiprocessing.Process.terminate>` 2666 method to stop a process is liable to 2667 cause any shared resources (such as locks, semaphores, pipes and queues) 2668 currently being used by the process to become broken or unavailable to other 2669 processes. 2670 2671 Therefore it is probably best to only consider using 2672 :meth:`Process.terminate <multiprocessing.Process.terminate>` on processes 2673 which never use any shared resources. 2674 2675Joining processes that use queues 2676 2677 Bear in mind that a process that has put items in a queue will wait before 2678 terminating until all the buffered items are fed by the "feeder" thread to 2679 the underlying pipe. (The child process can call the 2680 :meth:`Queue.cancel_join_thread <multiprocessing.Queue.cancel_join_thread>` 2681 method of the queue to avoid this behaviour.) 2682 2683 This means that whenever you use a queue you need to make sure that all 2684 items which have been put on the queue will eventually be removed before the 2685 process is joined. Otherwise you cannot be sure that processes which have 2686 put items on the queue will terminate. Remember also that non-daemonic 2687 processes will be joined automatically. 2688 2689 An example which will deadlock is the following:: 2690 2691 from multiprocessing import Process, Queue 2692 2693 def f(q): 2694 q.put('X' * 1000000) 2695 2696 if __name__ == '__main__': 2697 queue = Queue() 2698 p = Process(target=f, args=(queue,)) 2699 p.start() 2700 p.join() # this deadlocks 2701 obj = queue.get() 2702 2703 A fix here would be to swap the last two lines (or simply remove the 2704 ``p.join()`` line). 2705 2706Explicitly pass resources to child processes 2707 2708 On Unix using the *fork* start method, a child process can make 2709 use of a shared resource created in a parent process using a 2710 global resource. However, it is better to pass the object as an 2711 argument to the constructor for the child process. 2712 2713 Apart from making the code (potentially) compatible with Windows 2714 and the other start methods this also ensures that as long as the 2715 child process is still alive the object will not be garbage 2716 collected in the parent process. This might be important if some 2717 resource is freed when the object is garbage collected in the 2718 parent process. 2719 2720 So for instance :: 2721 2722 from multiprocessing import Process, Lock 2723 2724 def f(): 2725 ... do something using "lock" ... 2726 2727 if __name__ == '__main__': 2728 lock = Lock() 2729 for i in range(10): 2730 Process(target=f).start() 2731 2732 should be rewritten as :: 2733 2734 from multiprocessing import Process, Lock 2735 2736 def f(l): 2737 ... do something using "l" ... 2738 2739 if __name__ == '__main__': 2740 lock = Lock() 2741 for i in range(10): 2742 Process(target=f, args=(lock,)).start() 2743 2744Beware of replacing :data:`sys.stdin` with a "file like object" 2745 2746 :mod:`multiprocessing` originally unconditionally called:: 2747 2748 os.close(sys.stdin.fileno()) 2749 2750 in the :meth:`multiprocessing.Process._bootstrap` method --- this resulted 2751 in issues with processes-in-processes. This has been changed to:: 2752 2753 sys.stdin.close() 2754 sys.stdin = open(os.open(os.devnull, os.O_RDONLY), closefd=False) 2755 2756 Which solves the fundamental issue of processes colliding with each other 2757 resulting in a bad file descriptor error, but introduces a potential danger 2758 to applications which replace :func:`sys.stdin` with a "file-like object" 2759 with output buffering. This danger is that if multiple processes call 2760 :meth:`~io.IOBase.close()` on this file-like object, it could result in the same 2761 data being flushed to the object multiple times, resulting in corruption. 2762 2763 If you write a file-like object and implement your own caching, you can 2764 make it fork-safe by storing the pid whenever you append to the cache, 2765 and discarding the cache when the pid changes. For example:: 2766 2767 @property 2768 def cache(self): 2769 pid = os.getpid() 2770 if pid != self._pid: 2771 self._pid = pid 2772 self._cache = [] 2773 return self._cache 2774 2775 For more information, see :issue:`5155`, :issue:`5313` and :issue:`5331` 2776 2777The *spawn* and *forkserver* start methods 2778~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2779 2780There are a few extra restriction which don't apply to the *fork* 2781start method. 2782 2783More picklability 2784 2785 Ensure that all arguments to :meth:`Process.__init__` are picklable. 2786 Also, if you subclass :class:`~multiprocessing.Process` then make sure that 2787 instances will be picklable when the :meth:`Process.start 2788 <multiprocessing.Process.start>` method is called. 2789 2790Global variables 2791 2792 Bear in mind that if code run in a child process tries to access a global 2793 variable, then the value it sees (if any) may not be the same as the value 2794 in the parent process at the time that :meth:`Process.start 2795 <multiprocessing.Process.start>` was called. 2796 2797 However, global variables which are just module level constants cause no 2798 problems. 2799 2800Safe importing of main module 2801 2802 Make sure that the main module can be safely imported by a new Python 2803 interpreter without causing unintended side effects (such a starting a new 2804 process). 2805 2806 For example, using the *spawn* or *forkserver* start method 2807 running the following module would fail with a 2808 :exc:`RuntimeError`:: 2809 2810 from multiprocessing import Process 2811 2812 def foo(): 2813 print('hello') 2814 2815 p = Process(target=foo) 2816 p.start() 2817 2818 Instead one should protect the "entry point" of the program by using ``if 2819 __name__ == '__main__':`` as follows:: 2820 2821 from multiprocessing import Process, freeze_support, set_start_method 2822 2823 def foo(): 2824 print('hello') 2825 2826 if __name__ == '__main__': 2827 freeze_support() 2828 set_start_method('spawn') 2829 p = Process(target=foo) 2830 p.start() 2831 2832 (The ``freeze_support()`` line can be omitted if the program will be run 2833 normally instead of frozen.) 2834 2835 This allows the newly spawned Python interpreter to safely import the module 2836 and then run the module's ``foo()`` function. 2837 2838 Similar restrictions apply if a pool or manager is created in the main 2839 module. 2840 2841 2842.. _multiprocessing-examples: 2843 2844Examples 2845-------- 2846 2847Demonstration of how to create and use customized managers and proxies: 2848 2849.. literalinclude:: ../includes/mp_newtype.py 2850 :language: python3 2851 2852 2853Using :class:`~multiprocessing.pool.Pool`: 2854 2855.. literalinclude:: ../includes/mp_pool.py 2856 :language: python3 2857 2858 2859An example showing how to use queues to feed tasks to a collection of worker 2860processes and collect the results: 2861 2862.. literalinclude:: ../includes/mp_workers.py 2863