1:mod:`multiprocessing` --- Process-based parallelism
2====================================================
3
4.. module:: multiprocessing
5   :synopsis: Process-based parallelism.
6
7**Source code:** :source:`Lib/multiprocessing/`
8
9--------------
10
11Introduction
12------------
13
14:mod:`multiprocessing` is a package that supports spawning processes using an
15API similar to the :mod:`threading` module.  The :mod:`multiprocessing` package
16offers both local and remote concurrency, effectively side-stepping the
17:term:`Global Interpreter Lock <global interpreter lock>` by using
18subprocesses instead of threads.  Due
19to this, the :mod:`multiprocessing` module allows the programmer to fully
20leverage multiple processors on a given machine.  It runs on both Unix and
21Windows.
22
23The :mod:`multiprocessing` module also introduces APIs which do not have
24analogs in the :mod:`threading` module.  A prime example of this is the
25:class:`~multiprocessing.pool.Pool` object which offers a convenient means of
26parallelizing the execution of a function across multiple input values,
27distributing the input data across processes (data parallelism).  The following
28example demonstrates the common practice of defining such functions in a module
29so that child processes can successfully import that module.  This basic example
30of data parallelism using :class:`~multiprocessing.pool.Pool`, ::
31
32   from multiprocessing import Pool
33
34   def f(x):
35       return x*x
36
37   if __name__ == '__main__':
38       with Pool(5) as p:
39           print(p.map(f, [1, 2, 3]))
40
41will print to standard output ::
42
43   [1, 4, 9]
44
45
46The :class:`Process` class
47~~~~~~~~~~~~~~~~~~~~~~~~~~
48
49In :mod:`multiprocessing`, processes are spawned by creating a :class:`Process`
50object and then calling its :meth:`~Process.start` method.  :class:`Process`
51follows the API of :class:`threading.Thread`.  A trivial example of a
52multiprocess program is ::
53
54   from multiprocessing import Process
55
56   def f(name):
57       print('hello', name)
58
59   if __name__ == '__main__':
60       p = Process(target=f, args=('bob',))
61       p.start()
62       p.join()
63
64To show the individual process IDs involved, here is an expanded example::
65
66    from multiprocessing import Process
67    import os
68
69    def info(title):
70        print(title)
71        print('module name:', __name__)
72        print('parent process:', os.getppid())
73        print('process id:', os.getpid())
74
75    def f(name):
76        info('function f')
77        print('hello', name)
78
79    if __name__ == '__main__':
80        info('main line')
81        p = Process(target=f, args=('bob',))
82        p.start()
83        p.join()
84
85For an explanation of why the ``if __name__ == '__main__'`` part is
86necessary, see :ref:`multiprocessing-programming`.
87
88
89
90Contexts and start methods
91~~~~~~~~~~~~~~~~~~~~~~~~~~
92
93.. _multiprocessing-start-methods:
94
95Depending on the platform, :mod:`multiprocessing` supports three ways
96to start a process.  These *start methods* are
97
98  *spawn*
    The parent process starts a fresh Python interpreter process.  The
100    child process will only inherit those resources necessary to run
101    the process object's :meth:`~Process.run` method.  In particular,
102    unnecessary file descriptors and handles from the parent process
103    will not be inherited.  Starting a process using this method is
104    rather slow compared to using *fork* or *forkserver*.
105
106    Available on Unix and Windows.  The default on Windows and macOS.
107
108  *fork*
109    The parent process uses :func:`os.fork` to fork the Python
110    interpreter.  The child process, when it begins, is effectively
111    identical to the parent process.  All resources of the parent are
112    inherited by the child process.  Note that safely forking a
113    multithreaded process is problematic.
114
115    Available on Unix only.  The default on Unix.
116
117  *forkserver*
118    When the program starts and selects the *forkserver* start method,
119    a server process is started.  From then on, whenever a new process
120    is needed, the parent process connects to the server and requests
121    that it fork a new process.  The fork server process is single
122    threaded so it is safe for it to use :func:`os.fork`.  No
123    unnecessary resources are inherited.
124
125    Available on Unix platforms which support passing file descriptors
126    over Unix pipes.
127
128.. versionchanged:: 3.8
129
130   On macOS, the *spawn* start method is now the default.  The *fork* start
131   method should be considered unsafe as it can lead to crashes of the
132   subprocess. See :issue:`33725`.
133
134.. versionchanged:: 3.4
   *spawn* added on all Unix platforms, and *forkserver* added for
   some Unix platforms.
   Child processes no longer inherit all of the parent's inheritable
   handles on Windows.
139
On Unix, using the *spawn* or *forkserver* start methods will also
141start a *resource tracker* process which tracks the unlinked named
142system resources (such as named semaphores or
143:class:`~multiprocessing.shared_memory.SharedMemory` objects) created
by processes of the program.  When all processes
have exited, the resource tracker unlinks any remaining tracked objects.
146Usually there should be none, but if a process was killed by a signal
147there may be some "leaked" resources.  (Neither leaked semaphores nor shared
148memory segments will be automatically unlinked until the next reboot. This is
149problematic for both objects because the system allows only a limited number of
150named semaphores, and shared memory segments occupy some space in the main
151memory.)
152
To select a start method, use :func:`set_start_method` in
the ``if __name__ == '__main__'`` clause of the main module.  For
155example::
156
157       import multiprocessing as mp
158
159       def foo(q):
160           q.put('hello')
161
162       if __name__ == '__main__':
163           mp.set_start_method('spawn')
164           q = mp.Queue()
165           p = mp.Process(target=foo, args=(q,))
166           p.start()
167           print(q.get())
168           p.join()
169
170:func:`set_start_method` should not be used more than once in the
171program.
172
173Alternatively, you can use :func:`get_context` to obtain a context
174object.  Context objects have the same API as the multiprocessing
175module, and allow one to use multiple start methods in the same
176program. ::
177
178       import multiprocessing as mp
179
180       def foo(q):
181           q.put('hello')
182
183       if __name__ == '__main__':
184           ctx = mp.get_context('spawn')
185           q = ctx.Queue()
186           p = ctx.Process(target=foo, args=(q,))
187           p.start()
188           print(q.get())
189           p.join()
190
191Note that objects related to one context may not be compatible with
192processes for a different context.  In particular, locks created using
193the *fork* context cannot be passed to processes started using the
194*spawn* or *forkserver* start methods.
195
196A library which wants to use a particular start method should probably
197use :func:`get_context` to avoid interfering with the choice of the
198library user.
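
For example, a library might create its worker processes from a private
context instead of calling :func:`set_start_method`.  The following is a
minimal sketch; ``run_in_worker`` is an illustrative helper name, not part
of the :mod:`multiprocessing` API::

   import multiprocessing as mp

   # A private context: it does not affect the start method chosen by
   # the library user for the rest of the program.
   _ctx = mp.get_context('spawn')

   def run_in_worker(target, *args):
       p = _ctx.Process(target=target, args=args)
       p.start()
       p.join()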
199
200.. warning::
201
202   The ``'spawn'`` and ``'forkserver'`` start methods cannot currently
203   be used with "frozen" executables (i.e., binaries produced by
204   packages like **PyInstaller** and **cx_Freeze**) on Unix.
205   The ``'fork'`` start method does work.
206
207
208Exchanging objects between processes
209~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
210
211:mod:`multiprocessing` supports two types of communication channel between
212processes:
213
214**Queues**
215
216   The :class:`Queue` class is a near clone of :class:`queue.Queue`.  For
217   example::
218
219      from multiprocessing import Process, Queue
220
221      def f(q):
222          q.put([42, None, 'hello'])
223
224      if __name__ == '__main__':
225          q = Queue()
226          p = Process(target=f, args=(q,))
227          p.start()
228          print(q.get())    # prints "[42, None, 'hello']"
229          p.join()
230
231   Queues are thread and process safe.
232
233**Pipes**
234
235   The :func:`Pipe` function returns a pair of connection objects connected by a
236   pipe which by default is duplex (two-way).  For example::
237
238      from multiprocessing import Process, Pipe
239
240      def f(conn):
241          conn.send([42, None, 'hello'])
242          conn.close()
243
244      if __name__ == '__main__':
245          parent_conn, child_conn = Pipe()
246          p = Process(target=f, args=(child_conn,))
247          p.start()
248          print(parent_conn.recv())   # prints "[42, None, 'hello']"
249          p.join()
250
251   The two connection objects returned by :func:`Pipe` represent the two ends of
252   the pipe.  Each connection object has :meth:`~Connection.send` and
253   :meth:`~Connection.recv` methods (among others).  Note that data in a pipe
254   may become corrupted if two processes (or threads) try to read from or write
255   to the *same* end of the pipe at the same time.  Of course there is no risk
256   of corruption from processes using different ends of the pipe at the same
257   time.
258
259
260Synchronization between processes
261~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
262
263:mod:`multiprocessing` contains equivalents of all the synchronization
264primitives from :mod:`threading`.  For instance one can use a lock to ensure
265that only one process prints to standard output at a time::
266
267   from multiprocessing import Process, Lock
268
269   def f(l, i):
270       l.acquire()
271       try:
272           print('hello world', i)
273       finally:
274           l.release()
275
276   if __name__ == '__main__':
277       lock = Lock()
278
279       for num in range(10):
280           Process(target=f, args=(lock, num)).start()
281
Without using the lock, output from the different processes is liable to get
all mixed up.
284
285
286Sharing state between processes
287~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
288
289As mentioned above, when doing concurrent programming it is usually best to
290avoid using shared state as far as possible.  This is particularly true when
291using multiple processes.
292
293However, if you really do need to use some shared data then
294:mod:`multiprocessing` provides a couple of ways of doing so.
295
296**Shared memory**
297
298   Data can be stored in a shared memory map using :class:`Value` or
299   :class:`Array`.  For example, the following code ::
300
301      from multiprocessing import Process, Value, Array
302
303      def f(n, a):
304          n.value = 3.1415927
305          for i in range(len(a)):
306              a[i] = -a[i]
307
308      if __name__ == '__main__':
309          num = Value('d', 0.0)
310          arr = Array('i', range(10))
311
312          p = Process(target=f, args=(num, arr))
313          p.start()
314          p.join()
315
316          print(num.value)
317          print(arr[:])
318
319   will print ::
320
321      3.1415927
322      [0, -1, -2, -3, -4, -5, -6, -7, -8, -9]
323
324   The ``'d'`` and ``'i'`` arguments used when creating ``num`` and ``arr`` are
325   typecodes of the kind used by the :mod:`array` module: ``'d'`` indicates a
326   double precision float and ``'i'`` indicates a signed integer.  These shared
327   objects will be process and thread-safe.
328
329   For more flexibility in using shared memory one can use the
330   :mod:`multiprocessing.sharedctypes` module which supports the creation of
331   arbitrary ctypes objects allocated from shared memory.
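
   For example, a minimal sketch that shares a ctypes structure between
   processes (the ``Point`` structure and ``move`` function are purely
   illustrative)::

      from multiprocessing import Process
      from multiprocessing.sharedctypes import Value
      from ctypes import Structure, c_double

      class Point(Structure):
          _fields_ = [('x', c_double), ('y', c_double)]

      def move(point):
          point.x += 1.0
          point.y -= 1.0

      if __name__ == '__main__':
          point = Value(Point, 1.0, 2.0)   # lock=True by default
          p = Process(target=move, args=(point,))
          p.start()
          p.join()
          print(point.x, point.y)          # prints "2.0 1.0"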
332
333**Server process**
334
335   A manager object returned by :func:`Manager` controls a server process which
336   holds Python objects and allows other processes to manipulate them using
337   proxies.
338
339   A manager returned by :func:`Manager` will support types
340   :class:`list`, :class:`dict`, :class:`~managers.Namespace`, :class:`Lock`,
341   :class:`RLock`, :class:`Semaphore`, :class:`BoundedSemaphore`,
342   :class:`Condition`, :class:`Event`, :class:`Barrier`,
343   :class:`Queue`, :class:`Value` and :class:`Array`.  For example, ::
344
345      from multiprocessing import Process, Manager
346
347      def f(d, l):
348          d[1] = '1'
349          d['2'] = 2
350          d[0.25] = None
351          l.reverse()
352
353      if __name__ == '__main__':
354          with Manager() as manager:
355              d = manager.dict()
356              l = manager.list(range(10))
357
358              p = Process(target=f, args=(d, l))
359              p.start()
360              p.join()
361
362              print(d)
363              print(l)
364
365   will print ::
366
367       {0.25: None, 1: '1', '2': 2}
368       [9, 8, 7, 6, 5, 4, 3, 2, 1, 0]
369
370   Server process managers are more flexible than using shared memory objects
371   because they can be made to support arbitrary object types.  Also, a single
372   manager can be shared by processes on different computers over a network.
373   They are, however, slower than using shared memory.
374
375
376Using a pool of workers
377~~~~~~~~~~~~~~~~~~~~~~~
378
379The :class:`~multiprocessing.pool.Pool` class represents a pool of worker
processes.  It has methods which allow tasks to be offloaded to the worker
381processes in a few different ways.
382
383For example::
384
385   from multiprocessing import Pool, TimeoutError
386   import time
387   import os
388
389   def f(x):
390       return x*x
391
392   if __name__ == '__main__':
393       # start 4 worker processes
394       with Pool(processes=4) as pool:
395
396           # print "[0, 1, 4,..., 81]"
397           print(pool.map(f, range(10)))
398
399           # print same numbers in arbitrary order
400           for i in pool.imap_unordered(f, range(10)):
401               print(i)
402
403           # evaluate "f(20)" asynchronously
404           res = pool.apply_async(f, (20,))      # runs in *only* one process
405           print(res.get(timeout=1))             # prints "400"
406
407           # evaluate "os.getpid()" asynchronously
408           res = pool.apply_async(os.getpid, ()) # runs in *only* one process
409           print(res.get(timeout=1))             # prints the PID of that process
410
411           # launching multiple evaluations asynchronously *may* use more processes
412           multiple_results = [pool.apply_async(os.getpid, ()) for i in range(4)]
413           print([res.get(timeout=1) for res in multiple_results])
414
415           # make a single worker sleep for 10 secs
416           res = pool.apply_async(time.sleep, (10,))
417           try:
418               print(res.get(timeout=1))
419           except TimeoutError:
420               print("We lacked patience and got a multiprocessing.TimeoutError")
421
422           print("For the moment, the pool remains available for more work")
423
424       # exiting the 'with'-block has stopped the pool
425       print("Now the pool is closed and no longer available")
426
427Note that the methods of a pool should only ever be used by the
428process which created it.
429
430.. note::
431
432   Functionality within this package requires that the ``__main__`` module be
433   importable by the children. This is covered in :ref:`multiprocessing-programming`
434   however it is worth pointing out here. This means that some examples, such
435   as the :class:`multiprocessing.pool.Pool` examples will not work in the
436   interactive interpreter. For example::
437
438      >>> from multiprocessing import Pool
439      >>> p = Pool(5)
440      >>> def f(x):
441      ...     return x*x
442      ...
443      >>> with p:
444      ...   p.map(f, [1,2,3])
445      Process PoolWorker-1:
446      Process PoolWorker-2:
447      Process PoolWorker-3:
448      Traceback (most recent call last):
449      Traceback (most recent call last):
450      Traceback (most recent call last):
451      AttributeError: 'module' object has no attribute 'f'
452      AttributeError: 'module' object has no attribute 'f'
453      AttributeError: 'module' object has no attribute 'f'
454
455   (If you try this it will actually output three full tracebacks
456   interleaved in a semi-random fashion, and then you may have to
457   stop the parent process somehow.)
458
459
460Reference
461---------
462
463The :mod:`multiprocessing` package mostly replicates the API of the
464:mod:`threading` module.
465
466
467:class:`Process` and exceptions
468~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
469
470.. class:: Process(group=None, target=None, name=None, args=(), kwargs={}, \
471                   *, daemon=None)
472
473   Process objects represent activity that is run in a separate process. The
474   :class:`Process` class has equivalents of all the methods of
475   :class:`threading.Thread`.
476
477   The constructor should always be called with keyword arguments. *group*
478   should always be ``None``; it exists solely for compatibility with
479   :class:`threading.Thread`.  *target* is the callable object to be invoked by
480   the :meth:`run()` method.  It defaults to ``None``, meaning nothing is
481   called. *name* is the process name (see :attr:`name` for more details).
482   *args* is the argument tuple for the target invocation.  *kwargs* is a
483   dictionary of keyword arguments for the target invocation.  If provided,
484   the keyword-only *daemon* argument sets the process :attr:`daemon` flag
485   to ``True`` or ``False``.  If ``None`` (the default), this flag will be
486   inherited from the creating process.
487
488   By default, no arguments are passed to *target*.
489
490   If a subclass overrides the constructor, it must make sure it invokes the
491   base class constructor (:meth:`Process.__init__`) before doing anything else
492   to the process.
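
   For example, a minimal sketch of such a subclass, which stores extra state
   on the instance and overrides :meth:`run` (the ``Counter`` name is
   illustrative)::

      from multiprocessing import Process

      class Counter(Process):
          def __init__(self, limit):
              super().__init__()          # invoke the base constructor first
              self.limit = limit

          def run(self):
              for i in range(self.limit):
                  print('count', i)

      if __name__ == '__main__':
          p = Counter(3)
          p.start()
          p.join()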
493
494   .. versionchanged:: 3.3
495      Added the *daemon* argument.
496
497   .. method:: run()
498
499      Method representing the process's activity.
500
501      You may override this method in a subclass.  The standard :meth:`run`
502      method invokes the callable object passed to the object's constructor as
503      the target argument, if any, with sequential and keyword arguments taken
504      from the *args* and *kwargs* arguments, respectively.
505
506   .. method:: start()
507
508      Start the process's activity.
509
510      This must be called at most once per process object.  It arranges for the
511      object's :meth:`run` method to be invoked in a separate process.
512
513   .. method:: join([timeout])
514
515      If the optional argument *timeout* is ``None`` (the default), the method
516      blocks until the process whose :meth:`join` method is called terminates.
517      If *timeout* is a positive number, it blocks at most *timeout* seconds.
518      Note that the method returns ``None`` if its process terminates or if the
519      method times out.  Check the process's :attr:`exitcode` to determine if
520      it terminated.
521
522      A process can be joined many times.
523
524      A process cannot join itself because this would cause a deadlock.  It is
525      an error to attempt to join a process before it has been started.
526
527   .. attribute:: name
528
529      The process's name.  The name is a string used for identification purposes
530      only.  It has no semantics.  Multiple processes may be given the same
531      name.
532
533      The initial name is set by the constructor.  If no explicit name is
534      provided to the constructor, a name of the form
535      'Process-N\ :sub:`1`:N\ :sub:`2`:...:N\ :sub:`k`' is constructed, where
536      each N\ :sub:`k` is the N-th child of its parent.
537
   .. method:: is_alive()
539
540      Return whether the process is alive.
541
542      Roughly, a process object is alive from the moment the :meth:`start`
543      method returns until the child process terminates.
544
545   .. attribute:: daemon
546
547      The process's daemon flag, a Boolean value.  This must be set before
548      :meth:`start` is called.
549
550      The initial value is inherited from the creating process.
551
552      When a process exits, it attempts to terminate all of its daemonic child
553      processes.
554
555      Note that a daemonic process is not allowed to create child processes.
556      Otherwise a daemonic process would leave its children orphaned if it gets
557      terminated when its parent process exits. Additionally, these are **not**
558      Unix daemons or services, they are normal processes that will be
559      terminated (and not joined) if non-daemonic processes have exited.
560
561   In addition to the  :class:`threading.Thread` API, :class:`Process` objects
562   also support the following attributes and methods:
563
564   .. attribute:: pid
565
566      Return the process ID.  Before the process is spawned, this will be
567      ``None``.
568
569   .. attribute:: exitcode
570
571      The child's exit code.  This will be ``None`` if the process has not yet
572      terminated.  A negative value *-N* indicates that the child was terminated
573      by signal *N*.
574
575   .. attribute:: authkey
576
577      The process's authentication key (a byte string).
578
579      When :mod:`multiprocessing` is initialized the main process is assigned a
580      random string using :func:`os.urandom`.
581
582      When a :class:`Process` object is created, it will inherit the
583      authentication key of its parent process, although this may be changed by
584      setting :attr:`authkey` to another byte string.
585
586      See :ref:`multiprocessing-auth-keys`.
587
588   .. attribute:: sentinel
589
590      A numeric handle of a system object which will become "ready" when
591      the process ends.
592
593      You can use this value if you want to wait on several events at
594      once using :func:`multiprocessing.connection.wait`.  Otherwise
595      calling :meth:`join()` is simpler.
596
597      On Windows, this is an OS handle usable with the ``WaitForSingleObject``
598      and ``WaitForMultipleObjects`` family of API calls.  On Unix, this is
599      a file descriptor usable with primitives from the :mod:`select` module.
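
      For example, a minimal sketch that waits on the sentinels of several
      processes at once::

         from multiprocessing import Process
         from multiprocessing.connection import wait
         import time

         if __name__ == '__main__':
             procs = [Process(target=time.sleep, args=(d,)) for d in (0.5, 1.0)]
             for p in procs:
                 p.start()
             # Returns as soon as at least one of the processes has ended.
             ready = wait([p.sentinel for p in procs])
             for p in procs:
                 p.join()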
600
601      .. versionadded:: 3.3
602
603   .. method:: terminate()
604
605      Terminate the process.  On Unix this is done using the ``SIGTERM`` signal;
606      on Windows :c:func:`TerminateProcess` is used.  Note that exit handlers and
607      finally clauses, etc., will not be executed.
608
609      Note that descendant processes of the process will *not* be terminated --
610      they will simply become orphaned.
611
612      .. warning::
613
614         If this method is used when the associated process is using a pipe or
615         queue then the pipe or queue is liable to become corrupted and may
         become unusable by other processes.  Similarly, if the process has
617         acquired a lock or semaphore etc. then terminating it is liable to
618         cause other processes to deadlock.
619
620   .. method:: kill()
621
622      Same as :meth:`terminate()` but using the ``SIGKILL`` signal on Unix.
623
624      .. versionadded:: 3.7
625
626   .. method:: close()
627
628      Close the :class:`Process` object, releasing all resources associated
629      with it.  :exc:`ValueError` is raised if the underlying process
630      is still running.  Once :meth:`close` returns successfully, most
631      other methods and attributes of the :class:`Process` object will
632      raise :exc:`ValueError`.
633
634      .. versionadded:: 3.7
635
   Note that the :meth:`start`, :meth:`join`, :meth:`is_alive` and
   :meth:`terminate` methods and the :attr:`exitcode` attribute should only be
   used by the process that created the process object.
639
640   Example usage of some of the methods of :class:`Process`:
641
642   .. doctest::
643      :options: +ELLIPSIS
644
645       >>> import multiprocessing, time, signal
646       >>> p = multiprocessing.Process(target=time.sleep, args=(1000,))
647       >>> print(p, p.is_alive())
648       <Process ... initial> False
649       >>> p.start()
650       >>> print(p, p.is_alive())
651       <Process ... started> True
652       >>> p.terminate()
653       >>> time.sleep(0.1)
654       >>> print(p, p.is_alive())
655       <Process ... stopped exitcode=-SIGTERM> False
656       >>> p.exitcode == -signal.SIGTERM
657       True
658
659.. exception:: ProcessError
660
661   The base class of all :mod:`multiprocessing` exceptions.
662
663.. exception:: BufferTooShort
664
665   Exception raised by :meth:`Connection.recv_bytes_into()` when the supplied
666   buffer object is too small for the message read.
667
668   If ``e`` is an instance of :exc:`BufferTooShort` then ``e.args[0]`` will give
669   the message as a byte string.
670
671.. exception:: AuthenticationError
672
673   Raised when there is an authentication error.
674
675.. exception:: TimeoutError
676
677   Raised by methods with a timeout when the timeout expires.
678
679Pipes and Queues
680~~~~~~~~~~~~~~~~
681
682When using multiple processes, one generally uses message passing for
683communication between processes and avoids having to use any synchronization
684primitives like locks.
685
686For passing messages one can use :func:`Pipe` (for a connection between two
687processes) or a queue (which allows multiple producers and consumers).
688
689The :class:`Queue`, :class:`SimpleQueue` and :class:`JoinableQueue` types
690are multi-producer, multi-consumer :abbr:`FIFO (first-in, first-out)`
691queues modelled on the :class:`queue.Queue` class in the
692standard library.  They differ in that :class:`Queue` lacks the
693:meth:`~queue.Queue.task_done` and :meth:`~queue.Queue.join` methods introduced
694into Python 2.5's :class:`queue.Queue` class.
695
696If you use :class:`JoinableQueue` then you **must** call
697:meth:`JoinableQueue.task_done` for each task removed from the queue or else the
698semaphore used to count the number of unfinished tasks may eventually overflow,
699raising an exception.
700
701Note that one can also create a shared queue by using a manager object -- see
702:ref:`multiprocessing-managers`.
703
704.. note::
705
706   :mod:`multiprocessing` uses the usual :exc:`queue.Empty` and
707   :exc:`queue.Full` exceptions to signal a timeout.  They are not available in
708   the :mod:`multiprocessing` namespace so you need to import them from
709   :mod:`queue`.
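
   For example, a minimal sketch of handling a timeout on a blocking
   ``get()`` call::

      import queue                       # provides the Empty exception
      from multiprocessing import Queue

      q = Queue()
      try:
          item = q.get(timeout=0.1)      # raises queue.Empty after the timeout
      except queue.Empty:
          item = None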
710
711.. note::
712
713   When an object is put on a queue, the object is pickled and a
714   background thread later flushes the pickled data to an underlying
715   pipe.  This has some consequences which are a little surprising,
716   but should not cause any practical difficulties -- if they really
717   bother you then you can instead use a queue created with a
718   :ref:`manager <multiprocessing-managers>`.
719
720   (1) After putting an object on an empty queue there may be an
721       infinitesimal delay before the queue's :meth:`~Queue.empty`
722       method returns :const:`False` and :meth:`~Queue.get_nowait` can
723       return without raising :exc:`queue.Empty`.
724
725   (2) If multiple processes are enqueuing objects, it is possible for
726       the objects to be received at the other end out-of-order.
727       However, objects enqueued by the same process will always be in
728       the expected order with respect to each other.
729
730.. warning::
731
732   If a process is killed using :meth:`Process.terminate` or :func:`os.kill`
733   while it is trying to use a :class:`Queue`, then the data in the queue is
734   likely to become corrupted.  This may cause any other process to get an
735   exception when it tries to use the queue later on.
736
737.. warning::
738
739   As mentioned above, if a child process has put items on a queue (and it has
740   not used :meth:`JoinableQueue.cancel_join_thread
741   <multiprocessing.Queue.cancel_join_thread>`), then that process will
742   not terminate until all buffered items have been flushed to the pipe.
743
744   This means that if you try joining that process you may get a deadlock unless
745   you are sure that all items which have been put on the queue have been
746   consumed.  Similarly, if the child process is non-daemonic then the parent
747   process may hang on exit when it tries to join all its non-daemonic children.
748
749   Note that a queue created using a manager does not have this issue.  See
750   :ref:`multiprocessing-programming`.
751
752For an example of the usage of queues for interprocess communication see
753:ref:`multiprocessing-examples`.
754
755
756.. function:: Pipe([duplex])
757
758   Returns a pair ``(conn1, conn2)`` of
759   :class:`~multiprocessing.connection.Connection` objects representing the
760   ends of a pipe.
761
762   If *duplex* is ``True`` (the default) then the pipe is bidirectional.  If
763   *duplex* is ``False`` then the pipe is unidirectional: ``conn1`` can only be
764   used for receiving messages and ``conn2`` can only be used for sending
765   messages.
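
   For example, a minimal sketch of a one-way pipe, where ``conn2`` does the
   sending::

      from multiprocessing import Pipe

      conn1, conn2 = Pipe(duplex=False)
      conn2.send('ping')
      print(conn1.recv())   # prints "ping"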
766
767
768.. class:: Queue([maxsize])
769
770   Returns a process shared queue implemented using a pipe and a few
771   locks/semaphores.  When a process first puts an item on the queue a feeder
772   thread is started which transfers objects from a buffer into the pipe.
773
774   The usual :exc:`queue.Empty` and :exc:`queue.Full` exceptions from the
775   standard library's :mod:`queue` module are raised to signal timeouts.
776
777   :class:`Queue` implements all the methods of :class:`queue.Queue` except for
778   :meth:`~queue.Queue.task_done` and :meth:`~queue.Queue.join`.
779
780   .. method:: qsize()
781
782      Return the approximate size of the queue.  Because of
783      multithreading/multiprocessing semantics, this number is not reliable.
784
785      Note that this may raise :exc:`NotImplementedError` on Unix platforms like
      macOS where ``sem_getvalue()`` is not implemented.
787
788   .. method:: empty()
789
790      Return ``True`` if the queue is empty, ``False`` otherwise.  Because of
791      multithreading/multiprocessing semantics, this is not reliable.
792
793   .. method:: full()
794
795      Return ``True`` if the queue is full, ``False`` otherwise.  Because of
796      multithreading/multiprocessing semantics, this is not reliable.
797
798   .. method:: put(obj[, block[, timeout]])
799
800      Put obj into the queue.  If the optional argument *block* is ``True``
801      (the default) and *timeout* is ``None`` (the default), block if necessary until
802      a free slot is available.  If *timeout* is a positive number, it blocks at
803      most *timeout* seconds and raises the :exc:`queue.Full` exception if no
804      free slot was available within that time.  Otherwise (*block* is
805      ``False``), put an item on the queue if a free slot is immediately
806      available, else raise the :exc:`queue.Full` exception (*timeout* is
807      ignored in that case).
808
809      .. versionchanged:: 3.8
810         If the queue is closed, :exc:`ValueError` is raised instead of
811         :exc:`AssertionError`.
812
813   .. method:: put_nowait(obj)
814
815      Equivalent to ``put(obj, False)``.
816
817   .. method:: get([block[, timeout]])
818
      Remove and return an item from the queue.  If the optional argument
      *block* is ``True`` (the default) and *timeout* is ``None`` (the default),
      block if necessary until an item is available.  If *timeout* is a positive
      number, it blocks at most *timeout* seconds and raises the
      :exc:`queue.Empty` exception if no item was available within that time.
      Otherwise (*block* is ``False``), return an item if one is immediately
      available, else raise the :exc:`queue.Empty` exception (*timeout* is
      ignored in that case).
826
827      .. versionchanged:: 3.8
828         If the queue is closed, :exc:`ValueError` is raised instead of
829         :exc:`OSError`.
830
831   .. method:: get_nowait()
832
833      Equivalent to ``get(False)``.
834
835   :class:`multiprocessing.Queue` has a few additional methods not found in
836   :class:`queue.Queue`.  These methods are usually unnecessary for most
837   code:
838
839   .. method:: close()
840
841      Indicate that no more data will be put on this queue by the current
842      process.  The background thread will quit once it has flushed all buffered
843      data to the pipe.  This is called automatically when the queue is garbage
844      collected.
845
846   .. method:: join_thread()
847
848      Join the background thread.  This can only be used after :meth:`close` has
849      been called.  It blocks until the background thread exits, ensuring that
850      all data in the buffer has been flushed to the pipe.
851
852      By default if a process is not the creator of the queue then on exit it
853      will attempt to join the queue's background thread.  The process can call
854      :meth:`cancel_join_thread` to make :meth:`join_thread` do nothing.
855
856   .. method:: cancel_join_thread()
857
858      Prevent :meth:`join_thread` from blocking.  In particular, this prevents
859      the background thread from being joined automatically when the process
860      exits -- see :meth:`join_thread`.
861
862      A better name for this method might be
863      ``allow_exit_without_flush()``.  It is likely to cause enqueued
      data to be lost, and you almost certainly will not need to use it.
865      It is really only there if you need the current process to exit
866      immediately without waiting to flush enqueued data to the
867      underlying pipe, and you don't care about lost data.
868
869   .. note::
870
871      This class's functionality requires a functioning shared semaphore
872      implementation on the host operating system. Without one, the
873      functionality in this class will be disabled, and attempts to
874      instantiate a :class:`Queue` will result in an :exc:`ImportError`. See
875      :issue:`3770` for additional information.  The same holds true for any
876      of the specialized queue types listed below.
877
878.. class:: SimpleQueue()
879
880   It is a simplified :class:`Queue` type, very close to a locked :class:`Pipe`.
881
882   .. method:: close()
883
884      Close the queue: release internal resources.
885
      A queue must not be used anymore after it is closed.  For example, the
      :meth:`get`, :meth:`put` and :meth:`empty` methods must no longer be
      called.
889
890      .. versionadded:: 3.9
891
892   .. method:: empty()
893
894      Return ``True`` if the queue is empty, ``False`` otherwise.
895
896   .. method:: get()
897
898      Remove and return an item from the queue.
899
900   .. method:: put(item)
901
902      Put *item* into the queue.
903
904
905.. class:: JoinableQueue([maxsize])
906
907   :class:`JoinableQueue`, a :class:`Queue` subclass, is a queue which
908   additionally has :meth:`task_done` and :meth:`join` methods.
909
910   .. method:: task_done()
911
912      Indicate that a formerly enqueued task is complete. Used by queue
913      consumers.  For each :meth:`~Queue.get` used to fetch a task, a subsequent
914      call to :meth:`task_done` tells the queue that the processing on the task
915      is complete.
916
917      If a :meth:`~queue.Queue.join` is currently blocking, it will resume when all
918      items have been processed (meaning that a :meth:`task_done` call was
919      received for every item that had been :meth:`~Queue.put` into the queue).
920
921      Raises a :exc:`ValueError` if called more times than there were items
922      placed in the queue.
923
924
925   .. method:: join()
926
927      Block until all items in the queue have been gotten and processed.
928
929      The count of unfinished tasks goes up whenever an item is added to the
930      queue.  The count goes down whenever a consumer calls
931      :meth:`task_done` to indicate that the item was retrieved and all work on
932      it is complete.  When the count of unfinished tasks drops to zero,
933      :meth:`~queue.Queue.join` unblocks.
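
   For example, a minimal sketch in which the main process blocks in
   :meth:`join` until a worker has acknowledged every item (the ``worker``
   function is illustrative)::

      from multiprocessing import JoinableQueue, Process

      def worker(q):
          while True:
              item = q.get()
              print('processed', item)
              q.task_done()

      if __name__ == '__main__':
          q = JoinableQueue()
          Process(target=worker, args=(q,), daemon=True).start()
          for i in range(3):
              q.put(i)
          q.join()    # returns once task_done() has been called for every item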
934
935
936Miscellaneous
937~~~~~~~~~~~~~
938
939.. function:: active_children()
940
   Return a list of all live children of the current process.
942
943   Calling this has the side effect of "joining" any processes which have
944   already finished.
945
946.. function:: cpu_count()
947
948   Return the number of CPUs in the system.
949
950   This number is not equivalent to the number of CPUs the current process can
951   use.  The number of usable CPUs can be obtained with
   ``len(os.sched_getaffinity(0))``.
953
954   May raise :exc:`NotImplementedError`.
955
956   .. seealso::
957      :func:`os.cpu_count`
958
959.. function:: current_process()
960
961   Return the :class:`Process` object corresponding to the current process.
962
963   An analogue of :func:`threading.current_thread`.
964
965.. function:: parent_process()
966
967   Return the :class:`Process` object corresponding to the parent process of
968   the :func:`current_process`. For the main process, ``parent_process`` will
969   be ``None``.
970
971   .. versionadded:: 3.8
972
973.. function:: freeze_support()
974
975   Add support for when a program which uses :mod:`multiprocessing` has been
976   frozen to produce a Windows executable.  (Has been tested with **py2exe**,
977   **PyInstaller** and **cx_Freeze**.)
978
979   One needs to call this function straight after the ``if __name__ ==
980   '__main__'`` line of the main module.  For example::
981
982      from multiprocessing import Process, freeze_support
983
984      def f():
985          print('hello world!')
986
987      if __name__ == '__main__':
988          freeze_support()
989          Process(target=f).start()
990
991   If the ``freeze_support()`` line is omitted then trying to run the frozen
992   executable will raise :exc:`RuntimeError`.
993
994   Calling ``freeze_support()`` has no effect when invoked on any operating
995   system other than Windows.  In addition, if the module is being run
996   normally by the Python interpreter on Windows (the program has not been
997   frozen), then ``freeze_support()`` has no effect.
998
999.. function:: get_all_start_methods()
1000
1001   Returns a list of the supported start methods, the first of which
1002   is the default.  The possible start methods are ``'fork'``,
1003   ``'spawn'`` and ``'forkserver'``.  On Windows only ``'spawn'`` is
1004   available.  On Unix ``'fork'`` and ``'spawn'`` are always
1005   supported, with ``'fork'`` being the default.
1006
1007   .. versionadded:: 3.4
1008
1009.. function:: get_context(method=None)
1010
1011   Return a context object which has the same attributes as the
1012   :mod:`multiprocessing` module.
1013
1014   If *method* is ``None`` then the default context is returned.
   Otherwise *method* should be ``'fork'``, ``'spawn'`` or
   ``'forkserver'``.  :exc:`ValueError` is raised if the specified
1017   start method is not available.
1018
1019   .. versionadded:: 3.4
1020
1021.. function:: get_start_method(allow_none=False)
1022
   Return the name of the start method used for starting processes.
1024
1025   If the start method has not been fixed and *allow_none* is false,
1026   then the start method is fixed to the default and the name is
1027   returned.  If the start method has not been fixed and *allow_none*
1028   is true then ``None`` is returned.
1029
1030   The return value can be ``'fork'``, ``'spawn'``, ``'forkserver'``
1031   or ``None``.  ``'fork'`` is the default on Unix, while ``'spawn'`` is
1032   the default on Windows.
1033
1034   .. versionadded:: 3.4
1035
.. function:: set_executable(executable)
1037
1038   Sets the path of the Python interpreter to use when starting a child process.
1039   (By default :data:`sys.executable` is used).  Embedders will probably need to
   do something like ::
1041
1042      set_executable(os.path.join(sys.exec_prefix, 'pythonw.exe'))
1043
1044   before they can create child processes.
1045
1046   .. versionchanged:: 3.4
1047      Now supported on Unix when the ``'spawn'`` start method is used.
1048
1049.. function:: set_start_method(method)
1050
1051   Set the method which should be used to start child processes.
1052   *method* can be ``'fork'``, ``'spawn'`` or ``'forkserver'``.
1053
1054   Note that this should be called at most once, and it should be
1055   protected inside the ``if __name__ == '__main__'`` clause of the
1056   main module.
1057
1058   .. versionadded:: 3.4
1059
1060.. note::
1061
1062   :mod:`multiprocessing` contains no analogues of
1063   :func:`threading.active_count`, :func:`threading.enumerate`,
1064   :func:`threading.settrace`, :func:`threading.setprofile`,
1065   :class:`threading.Timer`, or :class:`threading.local`.
1066
1067
1068Connection Objects
1069~~~~~~~~~~~~~~~~~~
1070
1071.. currentmodule:: multiprocessing.connection
1072
1073Connection objects allow the sending and receiving of picklable objects or
1074strings.  They can be thought of as message oriented connected sockets.
1075
1076Connection objects are usually created using
1077:func:`Pipe <multiprocessing.Pipe>` -- see also
1078:ref:`multiprocessing-listeners-clients`.
1079
1080.. class:: Connection
1081
1082   .. method:: send(obj)
1083
1084      Send an object to the other end of the connection which should be read
1085      using :meth:`recv`.
1086
1087      The object must be picklable.  Very large pickles (approximately 32 MiB+,
1088      though it depends on the OS) may raise a :exc:`ValueError` exception.
1089
1090   .. method:: recv()
1091
1092      Return an object sent from the other end of the connection using
1093      :meth:`send`.  Blocks until there is something to receive.  Raises
1094      :exc:`EOFError` if there is nothing left to receive
1095      and the other end was closed.
1096
1097   .. method:: fileno()
1098
1099      Return the file descriptor or handle used by the connection.
1100
1101   .. method:: close()
1102
1103      Close the connection.
1104
1105      This is called automatically when the connection is garbage collected.
1106
1107   .. method:: poll([timeout])
1108
1109      Return whether there is any data available to be read.
1110
1111      If *timeout* is not specified then it will return immediately.  If
1112      *timeout* is a number then this specifies the maximum time in seconds to
1113      block.  If *timeout* is ``None`` then an infinite timeout is used.
1114
1115      Note that multiple connection objects may be polled at once by
1116      using :func:`multiprocessing.connection.wait`.
1117
1118   .. method:: send_bytes(buffer[, offset[, size]])
1119
1120      Send byte data from a :term:`bytes-like object` as a complete message.
1121
1122      If *offset* is given then data is read from that position in *buffer*.  If
1123      *size* is given then that many bytes will be read from buffer.  Very large
1124      buffers (approximately 32 MiB+, though it depends on the OS) may raise a
      :exc:`ValueError` exception.
1126
1127   .. method:: recv_bytes([maxlength])
1128
1129      Return a complete message of byte data sent from the other end of the
1130      connection as a string.  Blocks until there is something to receive.
1131      Raises :exc:`EOFError` if there is nothing left
1132      to receive and the other end has closed.
1133
1134      If *maxlength* is specified and the message is longer than *maxlength*
1135      then :exc:`OSError` is raised and the connection will no longer be
1136      readable.
1137
1138      .. versionchanged:: 3.3
1139         This function used to raise :exc:`IOError`, which is now an
1140         alias of :exc:`OSError`.
1141
1142
1143   .. method:: recv_bytes_into(buffer[, offset])
1144
1145      Read into *buffer* a complete message of byte data sent from the other end
1146      of the connection and return the number of bytes in the message.  Blocks
1147      until there is something to receive.  Raises
1148      :exc:`EOFError` if there is nothing left to receive and the other end was
1149      closed.
1150
1151      *buffer* must be a writable :term:`bytes-like object`.  If
1152      *offset* is given then the message will be written into the buffer from
1153      that position.  Offset must be a non-negative integer less than the
1154      length of *buffer* (in bytes).
1155
1156      If the buffer is too short then a :exc:`BufferTooShort` exception is
1157      raised and the complete message is available as ``e.args[0]`` where ``e``
1158      is the exception instance.
1159
1160   .. versionchanged:: 3.3
1161      Connection objects themselves can now be transferred between processes
1162      using :meth:`Connection.send` and :meth:`Connection.recv`.
1163
1164   .. versionadded:: 3.3
1165      Connection objects now support the context management protocol -- see
1166      :ref:`typecontextmanager`.  :meth:`~contextmanager.__enter__` returns the
1167      connection object, and :meth:`~contextmanager.__exit__` calls :meth:`close`.
1168
1169For example:
1170
1171.. doctest::
1172
1173    >>> from multiprocessing import Pipe
1174    >>> a, b = Pipe()
1175    >>> a.send([1, 'hello', None])
1176    >>> b.recv()
1177    [1, 'hello', None]
1178    >>> b.send_bytes(b'thank you')
1179    >>> a.recv_bytes()
1180    b'thank you'
1181    >>> import array
1182    >>> arr1 = array.array('i', range(5))
1183    >>> arr2 = array.array('i', [0] * 10)
1184    >>> a.send_bytes(arr1)
1185    >>> count = b.recv_bytes_into(arr2)
1186    >>> assert count == len(arr1) * arr1.itemsize
1187    >>> arr2
1188    array('i', [0, 1, 2, 3, 4, 0, 0, 0, 0, 0])
1189
1190
1191.. warning::
1192
1193    The :meth:`Connection.recv` method automatically unpickles the data it
1194    receives, which can be a security risk unless you can trust the process
1195    which sent the message.
1196
1197    Therefore, unless the connection object was produced using :func:`Pipe` you
1198    should only use the :meth:`~Connection.recv` and :meth:`~Connection.send`
1199    methods after performing some sort of authentication.  See
1200    :ref:`multiprocessing-auth-keys`.
1201
1202.. warning::
1203
1204    If a process is killed while it is trying to read or write to a pipe then
1205    the data in the pipe is likely to become corrupted, because it may become
1206    impossible to be sure where the message boundaries lie.
1207
1208
1209Synchronization primitives
1210~~~~~~~~~~~~~~~~~~~~~~~~~~
1211
1212.. currentmodule:: multiprocessing
1213
1214Generally synchronization primitives are not as necessary in a multiprocess
program as they are in a multithreaded program.  See the documentation for
the :mod:`threading` module.
1217
1218Note that one can also create synchronization primitives by using a manager
1219object -- see :ref:`multiprocessing-managers`.
1220
1221.. class:: Barrier(parties[, action[, timeout]])
1222
1223   A barrier object: a clone of :class:`threading.Barrier`.
1224
1225   .. versionadded:: 3.3
1226
1227.. class:: BoundedSemaphore([value])
1228
1229   A bounded semaphore object: a close analog of
1230   :class:`threading.BoundedSemaphore`.
1231
1232   A solitary difference from its close analog exists: its ``acquire`` method's
1233   first argument is named *block*, as is consistent with :meth:`Lock.acquire`.
1234
1235   .. note::
      On macOS, this is indistinguishable from :class:`Semaphore` because
1237      ``sem_getvalue()`` is not implemented on that platform.
1238
1239.. class:: Condition([lock])
1240
   A condition variable: an analog of :class:`threading.Condition`.
1242
1243   If *lock* is specified then it should be a :class:`Lock` or :class:`RLock`
1244   object from :mod:`multiprocessing`.
1245
1246   .. versionchanged:: 3.3
1247      The :meth:`~threading.Condition.wait_for` method was added.
1248
1249.. class:: Event()
1250
1251   A clone of :class:`threading.Event`.
1252
1253
1254.. class:: Lock()
1255
1256   A non-recursive lock object: a close analog of :class:`threading.Lock`.
1257   Once a process or thread has acquired a lock, subsequent attempts to
1258   acquire it from any process or thread will block until it is released;
1259   any process or thread may release it.  The concepts and behaviors of
1260   :class:`threading.Lock` as it applies to threads are replicated here in
1261   :class:`multiprocessing.Lock` as it applies to either processes or threads,
1262   except as noted.
1263
1264   Note that :class:`Lock` is actually a factory function which returns an
1265   instance of ``multiprocessing.synchronize.Lock`` initialized with a
1266   default context.
1267
1268   :class:`Lock` supports the :term:`context manager` protocol and thus may be
1269   used in :keyword:`with` statements.
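
   For example, a minimal sketch using the lock as a context manager instead
   of calling :meth:`acquire` and :meth:`release` explicitly::

      from multiprocessing import Process, Lock

      def f(lock, i):
          with lock:                      # acquired on entry, released on exit
              print('hello world', i)

      if __name__ == '__main__':
          lock = Lock()
          for num in range(4):
              Process(target=f, args=(lock, num)).start()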
1270
1271   .. method:: acquire(block=True, timeout=None)
1272
1273      Acquire a lock, blocking or non-blocking.
1274
1275      With the *block* argument set to ``True`` (the default), the method call
1276      will block until the lock is in an unlocked state, then set it to locked
1277      and return ``True``.  Note that the name of this first argument differs
1278      from that in :meth:`threading.Lock.acquire`.
1279
1280      With the *block* argument set to ``False``, the method call does not
1281      block.  If the lock is currently in a locked state, return ``False``;
1282      otherwise set the lock to a locked state and return ``True``.
1283
1284      When invoked with a positive, floating-point value for *timeout*, block
1285      for at most the number of seconds specified by *timeout* as long as
1286      the lock can not be acquired.  Invocations with a negative value for
1287      *timeout* are equivalent to a *timeout* of zero.  Invocations with a
1288      *timeout* value of ``None`` (the default) set the timeout period to
1289      infinite.  Note that the treatment of negative or ``None`` values for
1290      *timeout* differs from the implemented behavior in
1291      :meth:`threading.Lock.acquire`.  The *timeout* argument has no practical
1292      implications if the *block* argument is set to ``False`` and is thus
1293      ignored.  Returns ``True`` if the lock has been acquired or ``False`` if
1294      the timeout period has elapsed.
1295
1296
1297   .. method:: release()
1298
1299      Release a lock.  This can be called from any process or thread, not only
1300      the process or thread which originally acquired the lock.
1301
1302      Behavior is the same as in :meth:`threading.Lock.release` except that
1303      when invoked on an unlocked lock, a :exc:`ValueError` is raised.
1304
1305
1306.. class:: RLock()
1307
1308   A recursive lock object: a close analog of :class:`threading.RLock`.  A
1309   recursive lock must be released by the process or thread that acquired it.
1310   Once a process or thread has acquired a recursive lock, the same process
1311   or thread may acquire it again without blocking; that process or thread
1312   must release it once for each time it has been acquired.
1313
1314   Note that :class:`RLock` is actually a factory function which returns an
1315   instance of ``multiprocessing.synchronize.RLock`` initialized with a
1316   default context.
1317
1318   :class:`RLock` supports the :term:`context manager` protocol and thus may be
1319   used in :keyword:`with` statements.
1320
1321
1322   .. method:: acquire(block=True, timeout=None)
1323
1324      Acquire a lock, blocking or non-blocking.
1325
1326      When invoked with the *block* argument set to ``True``, block until the
1327      lock is in an unlocked state (not owned by any process or thread) unless
1328      the lock is already owned by the current process or thread.  The current
1329      process or thread then takes ownership of the lock (if it does not
1330      already have ownership) and the recursion level inside the lock increments
1331      by one, resulting in a return value of ``True``.  Note that there are
1332      several differences in this first argument's behavior compared to the
1333      implementation of :meth:`threading.RLock.acquire`, starting with the name
1334      of the argument itself.
1335
1336      When invoked with the *block* argument set to ``False``, do not block.
1337      If the lock has already been acquired (and thus is owned) by another
1338      process or thread, the current process or thread does not take ownership
1339      and the recursion level within the lock is not changed, resulting in
1340      a return value of ``False``.  If the lock is in an unlocked state, the
1341      current process or thread takes ownership and the recursion level is
1342      incremented, resulting in a return value of ``True``.
1343
1344      Use and behaviors of the *timeout* argument are the same as in
1345      :meth:`Lock.acquire`.  Note that some of these behaviors of *timeout*
1346      differ from the implemented behaviors in :meth:`threading.RLock.acquire`.
1347
1348
1349   .. method:: release()
1350
1351      Release a lock, decrementing the recursion level.  If after the
1352      decrement the recursion level is zero, reset the lock to unlocked (not
1353      owned by any process or thread) and if any other processes or threads
1354      are blocked waiting for the lock to become unlocked, allow exactly one
1355      of them to proceed.  If after the decrement the recursion level is still
1356      nonzero, the lock remains locked and owned by the calling process or
1357      thread.
1358
1359      Only call this method when the calling process or thread owns the lock.
1360      An :exc:`AssertionError` is raised if this method is called by a process
1361      or thread other than the owner or if the lock is in an unlocked (unowned)
1362      state.  Note that the type of exception raised in this situation
1363      differs from the implemented behavior in :meth:`threading.RLock.release`.
1364
1365
1366.. class:: Semaphore([value])
1367
1368   A semaphore object: a close analog of :class:`threading.Semaphore`.
1369
1370   A solitary difference from its close analog exists: its ``acquire`` method's
1371   first argument is named *block*, as is consistent with :meth:`Lock.acquire`.
1372
1373.. note::
1374
   On macOS, ``sem_timedwait`` is unsupported, so calling ``acquire()`` with
1376   a timeout will emulate that function's behavior using a sleeping loop.
1377
1378.. note::
1379
1380   If the SIGINT signal generated by :kbd:`Ctrl-C` arrives while the main thread is
1381   blocked by a call to :meth:`BoundedSemaphore.acquire`, :meth:`Lock.acquire`,
1382   :meth:`RLock.acquire`, :meth:`Semaphore.acquire`, :meth:`Condition.acquire`
1383   or :meth:`Condition.wait` then the call will be immediately interrupted and
1384   :exc:`KeyboardInterrupt` will be raised.
1385
1386   This differs from the behaviour of :mod:`threading` where SIGINT will be
1387   ignored while the equivalent blocking calls are in progress.
1388
1389.. note::
1390
1391   Some of this package's functionality requires a functioning shared semaphore
1392   implementation on the host operating system. Without one, the
1393   :mod:`multiprocessing.synchronize` module will be disabled, and attempts to
1394   import it will result in an :exc:`ImportError`. See
1395   :issue:`3770` for additional information.
1396
1397
1398Shared :mod:`ctypes` Objects
1399~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1400
1401It is possible to create shared objects using shared memory which can be
1402inherited by child processes.
1403
1404.. function:: Value(typecode_or_type, *args, lock=True)
1405
1406   Return a :mod:`ctypes` object allocated from shared memory.  By default the
1407   return value is actually a synchronized wrapper for the object.  The object
1408   itself can be accessed via the *value* attribute of a :class:`Value`.
1409
1410   *typecode_or_type* determines the type of the returned object: it is either a
1411   ctypes type or a one character typecode of the kind used by the :mod:`array`
1412   module.  *\*args* is passed on to the constructor for the type.
1413
1414   If *lock* is ``True`` (the default) then a new recursive lock
1415   object is created to synchronize access to the value.  If *lock* is
1416   a :class:`Lock` or :class:`RLock` object then that will be used to
1417   synchronize access to the value.  If *lock* is ``False`` then
1418   access to the returned object will not be automatically protected
1419   by a lock, so it will not necessarily be "process-safe".
1420
1421   Operations like ``+=`` which involve a read and write are not
1422   atomic.  So if, for instance, you want to atomically increment a
1423   shared value it is insufficient to just do ::
1424
1425       counter.value += 1
1426
1427   Assuming the associated lock is recursive (which it is by default)
1428   you can instead do ::
1429
1430       with counter.get_lock():
1431           counter.value += 1
1432
1433   Note that *lock* is a keyword-only argument.
1434
1435.. function:: Array(typecode_or_type, size_or_initializer, *, lock=True)
1436
1437   Return a ctypes array allocated from shared memory.  By default the return
1438   value is actually a synchronized wrapper for the array.
1439
1440   *typecode_or_type* determines the type of the elements of the returned array:
1441   it is either a ctypes type or a one character typecode of the kind used by
1442   the :mod:`array` module.  If *size_or_initializer* is an integer, then it
1443   determines the length of the array, and the array will be initially zeroed.
1444   Otherwise, *size_or_initializer* is a sequence which is used to initialize
1445   the array and whose length determines the length of the array.
1446
1447   If *lock* is ``True`` (the default) then a new lock object is created to
1448   synchronize access to the value.  If *lock* is a :class:`Lock` or
1449   :class:`RLock` object then that will be used to synchronize access to the
1450   value.  If *lock* is ``False`` then access to the returned object will not be
1451   automatically protected by a lock, so it will not necessarily be
1452   "process-safe".
1453
   Note that *lock* is a keyword-only argument.
1455
1456   Note that an array of :data:`ctypes.c_char` has *value* and *raw*
1457   attributes which allow one to use it to store and retrieve strings.
1458
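   For example, a shared character array can be used to hold a byte string
   (an illustrative sketch; the variable names are not part of the API)::

      from multiprocessing import Array

      buf = Array('c', 20)          # array of ctypes.c_char, initially zeroed
      buf.value = b'hello'          # store a null-terminated byte string
      print(buf.value)              # b'hello'
      print(buf.raw[:6])            # b'hello\x00' (raw includes trailing NULs)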
1459
1460The :mod:`multiprocessing.sharedctypes` module
1461>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
1462
1463.. module:: multiprocessing.sharedctypes
1464   :synopsis: Allocate ctypes objects from shared memory.
1465
1466The :mod:`multiprocessing.sharedctypes` module provides functions for allocating
1467:mod:`ctypes` objects from shared memory which can be inherited by child
1468processes.
1469
1470.. note::
1471
   Although it is possible to store a pointer in shared memory, remember that
   it will refer to a location in the address space of a specific process.
   The pointer is quite likely to be invalid in the context of a second
   process, and trying to dereference it from the second process may cause a
   crash.
1477
1478.. function:: RawArray(typecode_or_type, size_or_initializer)
1479
1480   Return a ctypes array allocated from shared memory.
1481
1482   *typecode_or_type* determines the type of the elements of the returned array:
1483   it is either a ctypes type or a one character typecode of the kind used by
1484   the :mod:`array` module.  If *size_or_initializer* is an integer then it
1485   determines the length of the array, and the array will be initially zeroed.
1486   Otherwise *size_or_initializer* is a sequence which is used to initialize the
1487   array and whose length determines the length of the array.
1488
1489   Note that setting and getting an element is potentially non-atomic -- use
1490   :func:`Array` instead to make sure that access is automatically synchronized
1491   using a lock.
1492
1493.. function:: RawValue(typecode_or_type, *args)
1494
1495   Return a ctypes object allocated from shared memory.
1496
1497   *typecode_or_type* determines the type of the returned object: it is either a
1498   ctypes type or a one character typecode of the kind used by the :mod:`array`
1499   module.  *\*args* is passed on to the constructor for the type.
1500
1501   Note that setting and getting the value is potentially non-atomic -- use
1502   :func:`Value` instead to make sure that access is automatically synchronized
1503   using a lock.
1504
1505   Note that an array of :data:`ctypes.c_char` has ``value`` and ``raw``
1506   attributes which allow one to use it to store and retrieve strings -- see
1507   documentation for :mod:`ctypes`.
1508
1509.. function:: Array(typecode_or_type, size_or_initializer, *, lock=True)
1510
1511   The same as :func:`RawArray` except that depending on the value of *lock* a
1512   process-safe synchronization wrapper may be returned instead of a raw ctypes
1513   array.
1514
1515   If *lock* is ``True`` (the default) then a new lock object is created to
1516   synchronize access to the value.  If *lock* is a
1517   :class:`~multiprocessing.Lock` or :class:`~multiprocessing.RLock` object
1518   then that will be used to synchronize access to the
1519   value.  If *lock* is ``False`` then access to the returned object will not be
1520   automatically protected by a lock, so it will not necessarily be
1521   "process-safe".
1522
1523   Note that *lock* is a keyword-only argument.
1524
1525.. function:: Value(typecode_or_type, *args, lock=True)
1526
1527   The same as :func:`RawValue` except that depending on the value of *lock* a
1528   process-safe synchronization wrapper may be returned instead of a raw ctypes
1529   object.
1530
1531   If *lock* is ``True`` (the default) then a new lock object is created to
1532   synchronize access to the value.  If *lock* is a :class:`~multiprocessing.Lock` or
1533   :class:`~multiprocessing.RLock` object then that will be used to synchronize access to the
1534   value.  If *lock* is ``False`` then access to the returned object will not be
1535   automatically protected by a lock, so it will not necessarily be
1536   "process-safe".
1537
1538   Note that *lock* is a keyword-only argument.
1539
1540.. function:: copy(obj)
1541
1542   Return a ctypes object allocated from shared memory which is a copy of the
1543   ctypes object *obj*.
1544
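   A minimal illustrative sketch::

      from ctypes import c_double
      from multiprocessing.sharedctypes import copy

      local = c_double(1.5)      # an ordinary ctypes object
      shared = copy(local)       # a copy allocated from shared memory
      print(shared.value)        # 1.5
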
1545.. function:: synchronized(obj[, lock])
1546
1547   Return a process-safe wrapper object for a ctypes object which uses *lock* to
1548   synchronize access.  If *lock* is ``None`` (the default) then a
1549   :class:`multiprocessing.RLock` object is created automatically.
1550
1551   A synchronized wrapper will have two methods in addition to those of the
1552   object it wraps: :meth:`get_obj` returns the wrapped object and
1553   :meth:`get_lock` returns the lock object used for synchronization.
1554
1555   Note that accessing the ctypes object through the wrapper can be a lot slower
1556   than accessing the raw ctypes object.
1557
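   For example (an illustrative sketch; the variable names are hypothetical)::

      from ctypes import c_int
      from multiprocessing.sharedctypes import RawValue, synchronized

      raw = RawValue(c_int, 0)
      sync = synchronized(raw)       # wrapped with a freshly created RLock
      with sync.get_lock():          # the same lock returned by get_lock()
          sync.get_obj().value += 1
      print(sync.value)              # 1
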
1558   .. versionchanged:: 3.5
1559      Synchronized objects support the :term:`context manager` protocol.
1560
1561
1562The table below compares the syntax for creating shared ctypes objects from
1563shared memory with the normal ctypes syntax.  (In the table ``MyStruct`` is some
1564subclass of :class:`ctypes.Structure`.)
1565
1566==================== ========================== ===========================
1567ctypes               sharedctypes using type    sharedctypes using typecode
1568==================== ========================== ===========================
1569c_double(2.4)        RawValue(c_double, 2.4)    RawValue('d', 2.4)
1570MyStruct(4, 6)       RawValue(MyStruct, 4, 6)
1571(c_short * 7)()      RawArray(c_short, 7)       RawArray('h', 7)
1572(c_int * 3)(9, 2, 8) RawArray(c_int, (9, 2, 8)) RawArray('i', (9, 2, 8))
1573==================== ========================== ===========================
1574
1575
1576Below is an example where a number of ctypes objects are modified by a child
1577process::
1578
1579   from multiprocessing import Process, Lock
1580   from multiprocessing.sharedctypes import Value, Array
1581   from ctypes import Structure, c_double
1582
1583   class Point(Structure):
1584       _fields_ = [('x', c_double), ('y', c_double)]
1585
1586   def modify(n, x, s, A):
1587       n.value **= 2
1588       x.value **= 2
1589       s.value = s.value.upper()
1590       for a in A:
1591           a.x **= 2
1592           a.y **= 2
1593
1594   if __name__ == '__main__':
1595       lock = Lock()
1596
1597       n = Value('i', 7)
1598       x = Value(c_double, 1.0/3.0, lock=False)
1599       s = Array('c', b'hello world', lock=lock)
1600       A = Array(Point, [(1.875,-6.25), (-5.75,2.0), (2.375,9.5)], lock=lock)
1601
1602       p = Process(target=modify, args=(n, x, s, A))
1603       p.start()
1604       p.join()
1605
1606       print(n.value)
1607       print(x.value)
1608       print(s.value)
1609       print([(a.x, a.y) for a in A])
1610
1611
1612.. highlight:: none
1613
1614The results printed are ::
1615
1616    49
1617    0.1111111111111111
1618    HELLO WORLD
1619    [(3.515625, 39.0625), (33.0625, 4.0), (5.640625, 90.25)]
1620
1621.. highlight:: python3
1622
1623
1624.. _multiprocessing-managers:
1625
1626Managers
1627~~~~~~~~
1628
1629Managers provide a way to create data which can be shared between different
1630processes, including sharing over a network between processes running on
1631different machines. A manager object controls a server process which manages
1632*shared objects*.  Other processes can access the shared objects by using
1633proxies.
1634
1635.. function:: multiprocessing.Manager()
1636
1637   Returns a started :class:`~multiprocessing.managers.SyncManager` object which
1638   can be used for sharing objects between processes.  The returned manager
1639   object corresponds to a spawned child process and has methods which will
1640   create shared objects and return corresponding proxies.
1641
1642.. module:: multiprocessing.managers
   :synopsis: Share data between processes with shared objects.
1644
Manager processes will be shut down as soon as they are garbage collected or
1646their parent process exits.  The manager classes are defined in the
1647:mod:`multiprocessing.managers` module:
1648
1649.. class:: BaseManager([address[, authkey]])
1650
1651   Create a BaseManager object.
1652
   Once created, one should call :meth:`start` or ``get_server().serve_forever()`` to ensure
1654   that the manager object refers to a started manager process.
1655
1656   *address* is the address on which the manager process listens for new
1657   connections.  If *address* is ``None`` then an arbitrary one is chosen.
1658
1659   *authkey* is the authentication key which will be used to check the
1660   validity of incoming connections to the server process.  If
1661   *authkey* is ``None`` then ``current_process().authkey`` is used.
1662   Otherwise *authkey* is used and it must be a byte string.
1663
1664   .. method:: start([initializer[, initargs]])
1665
1666      Start a subprocess to start the manager.  If *initializer* is not ``None``
1667      then the subprocess will call ``initializer(*initargs)`` when it starts.
1668
1669   .. method:: get_server()
1670
1671      Returns a :class:`Server` object which represents the actual server under
1672      the control of the Manager. The :class:`Server` object supports the
1673      :meth:`serve_forever` method::
1674
1675      >>> from multiprocessing.managers import BaseManager
1676      >>> manager = BaseManager(address=('', 50000), authkey=b'abc')
1677      >>> server = manager.get_server()
1678      >>> server.serve_forever()
1679
1680      :class:`Server` additionally has an :attr:`address` attribute.
1681
1682   .. method:: connect()
1683
1684      Connect a local manager object to a remote manager process::
1685
1686      >>> from multiprocessing.managers import BaseManager
1687      >>> m = BaseManager(address=('127.0.0.1', 50000), authkey=b'abc')
1688      >>> m.connect()
1689
1690   .. method:: shutdown()
1691
1692      Stop the process used by the manager.  This is only available if
1693      :meth:`start` has been used to start the server process.
1694
1695      This can be called multiple times.
1696
1697   .. method:: register(typeid[, callable[, proxytype[, exposed[, method_to_typeid[, create_method]]]]])
1698
1699      A classmethod which can be used for registering a type or callable with
1700      the manager class.
1701
1702      *typeid* is a "type identifier" which is used to identify a particular
1703      type of shared object.  This must be a string.
1704
1705      *callable* is a callable used for creating objects for this type
1706      identifier.  If a manager instance will be connected to the
1707      server using the :meth:`connect` method, or if the
1708      *create_method* argument is ``False`` then this can be left as
1709      ``None``.
1710
1711      *proxytype* is a subclass of :class:`BaseProxy` which is used to create
1712      proxies for shared objects with this *typeid*.  If ``None`` then a proxy
1713      class is created automatically.
1714
1715      *exposed* is used to specify a sequence of method names which proxies for
1716      this typeid should be allowed to access using
1717      :meth:`BaseProxy._callmethod`.  (If *exposed* is ``None`` then
1718      :attr:`proxytype._exposed_` is used instead if it exists.)  In the case
1719      where no exposed list is specified, all "public methods" of the shared
1720      object will be accessible.  (Here a "public method" means any attribute
1721      which has a :meth:`~object.__call__` method and whose name does not begin
1722      with ``'_'``.)
1723
1724      *method_to_typeid* is a mapping used to specify the return type of those
1725      exposed methods which should return a proxy.  It maps method names to
1726      typeid strings.  (If *method_to_typeid* is ``None`` then
1727      :attr:`proxytype._method_to_typeid_` is used instead if it exists.)  If a
1728      method's name is not a key of this mapping or if the mapping is ``None``
1729      then the object returned by the method will be copied by value.
1730
1731      *create_method* determines whether a method should be created with name
1732      *typeid* which can be used to tell the server process to create a new
1733      shared object and return a proxy for it.  By default it is ``True``.
1734
1735   :class:`BaseManager` instances also have one read-only property:
1736
1737   .. attribute:: address
1738
1739      The address used by the manager.
1740
1741   .. versionchanged:: 3.3
1742      Manager objects support the context management protocol -- see
1743      :ref:`typecontextmanager`.  :meth:`~contextmanager.__enter__` starts the
1744      server process (if it has not already started) and then returns the
1745      manager object.  :meth:`~contextmanager.__exit__` calls :meth:`shutdown`.
1746
1747      In previous versions :meth:`~contextmanager.__enter__` did not start the
1748      manager's server process if it was not already started.
1749
1750.. class:: SyncManager
1751
1752   A subclass of :class:`BaseManager` which can be used for the synchronization
1753   of processes.  Objects of this type are returned by
1754   :func:`multiprocessing.Manager`.
1755
1756   Its methods create and return :ref:`multiprocessing-proxy_objects` for a
1757   number of commonly used data types to be synchronized across processes.
1758   This notably includes shared lists and dictionaries.
1759
1760   .. method:: Barrier(parties[, action[, timeout]])
1761
1762      Create a shared :class:`threading.Barrier` object and return a
1763      proxy for it.
1764
1765      .. versionadded:: 3.3
1766
1767   .. method:: BoundedSemaphore([value])
1768
1769      Create a shared :class:`threading.BoundedSemaphore` object and return a
1770      proxy for it.
1771
1772   .. method:: Condition([lock])
1773
1774      Create a shared :class:`threading.Condition` object and return a proxy for
1775      it.
1776
1777      If *lock* is supplied then it should be a proxy for a
1778      :class:`threading.Lock` or :class:`threading.RLock` object.
1779
1780      .. versionchanged:: 3.3
1781         The :meth:`~threading.Condition.wait_for` method was added.
1782
1783   .. method:: Event()
1784
1785      Create a shared :class:`threading.Event` object and return a proxy for it.
1786
1787   .. method:: Lock()
1788
1789      Create a shared :class:`threading.Lock` object and return a proxy for it.
1790
1791   .. method:: Namespace()
1792
1793      Create a shared :class:`Namespace` object and return a proxy for it.
1794
1795   .. method:: Queue([maxsize])
1796
1797      Create a shared :class:`queue.Queue` object and return a proxy for it.
1798
1799   .. method:: RLock()
1800
1801      Create a shared :class:`threading.RLock` object and return a proxy for it.
1802
1803   .. method:: Semaphore([value])
1804
1805      Create a shared :class:`threading.Semaphore` object and return a proxy for
1806      it.
1807
1808   .. method:: Array(typecode, sequence)
1809
1810      Create an array and return a proxy for it.
1811
1812   .. method:: Value(typecode, value)
1813
1814      Create an object with a writable ``value`` attribute and return a proxy
1815      for it.
1816
1817   .. method:: dict()
1818               dict(mapping)
1819               dict(sequence)
1820
1821      Create a shared :class:`dict` object and return a proxy for it.
1822
1823   .. method:: list()
1824               list(sequence)
1825
1826      Create a shared :class:`list` object and return a proxy for it.
1827
1828   .. versionchanged:: 3.6
1829      Shared objects are capable of being nested.  For example, a shared
1830      container object such as a shared list can contain other shared objects
1831      which will all be managed and synchronized by the :class:`SyncManager`.
1832
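The following sketch illustrates how the list and dictionary proxies returned
by a :class:`SyncManager` can be shared with a child process (the function and
variable names here are purely illustrative)::

   from multiprocessing import Process, Manager

   def f(d, l):
       d[1] = '1'
       d['2'] = 2
       l.reverse()

   if __name__ == '__main__':
       with Manager() as manager:
           d = manager.dict()
           l = manager.list(range(10))

           p = Process(target=f, args=(d, l))
           p.start()
           p.join()

           print(d)       # {1: '1', '2': 2}
           print(l)       # [9, 8, 7, 6, 5, 4, 3, 2, 1, 0]
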
1833.. class:: Namespace
1834
1835   A type that can register with :class:`SyncManager`.
1836
1837   A namespace object has no public methods, but does have writable attributes.
1838   Its representation shows the values of its attributes.
1839
1840   However, when using a proxy for a namespace object, an attribute beginning
1841   with ``'_'`` will be an attribute of the proxy and not an attribute of the
1842   referent:
1843
1844   .. doctest::
1845
1846    >>> manager = multiprocessing.Manager()
1847    >>> Global = manager.Namespace()
1848    >>> Global.x = 10
1849    >>> Global.y = 'hello'
1850    >>> Global._z = 12.3    # this is an attribute of the proxy
1851    >>> print(Global)
1852    Namespace(x=10, y='hello')
1853
1854
1855Customized managers
1856>>>>>>>>>>>>>>>>>>>
1857
1858To create one's own manager, one creates a subclass of :class:`BaseManager` and
1859uses the :meth:`~BaseManager.register` classmethod to register new types or
1860callables with the manager class.  For example::
1861
1862   from multiprocessing.managers import BaseManager
1863
1864   class MathsClass:
1865       def add(self, x, y):
1866           return x + y
1867       def mul(self, x, y):
1868           return x * y
1869
1870   class MyManager(BaseManager):
1871       pass
1872
1873   MyManager.register('Maths', MathsClass)
1874
1875   if __name__ == '__main__':
1876       with MyManager() as manager:
1877           maths = manager.Maths()
1878           print(maths.add(4, 3))         # prints 7
1879           print(maths.mul(7, 8))         # prints 56
1880
1881
1882Using a remote manager
1883>>>>>>>>>>>>>>>>>>>>>>
1884
1885It is possible to run a manager server on one machine and have clients use it
1886from other machines (assuming that the firewalls involved allow it).
1887
1888Running the following commands creates a server for a single shared queue which
1889remote clients can access::
1890
1891   >>> from multiprocessing.managers import BaseManager
1892   >>> from queue import Queue
1893   >>> queue = Queue()
1894   >>> class QueueManager(BaseManager): pass
1895   >>> QueueManager.register('get_queue', callable=lambda:queue)
1896   >>> m = QueueManager(address=('', 50000), authkey=b'abracadabra')
1897   >>> s = m.get_server()
1898   >>> s.serve_forever()
1899
1900One client can access the server as follows::
1901
1902   >>> from multiprocessing.managers import BaseManager
1903   >>> class QueueManager(BaseManager): pass
1904   >>> QueueManager.register('get_queue')
1905   >>> m = QueueManager(address=('foo.bar.org', 50000), authkey=b'abracadabra')
1906   >>> m.connect()
1907   >>> queue = m.get_queue()
1908   >>> queue.put('hello')
1909
1910Another client can also use it::
1911
1912   >>> from multiprocessing.managers import BaseManager
1913   >>> class QueueManager(BaseManager): pass
1914   >>> QueueManager.register('get_queue')
1915   >>> m = QueueManager(address=('foo.bar.org', 50000), authkey=b'abracadabra')
1916   >>> m.connect()
1917   >>> queue = m.get_queue()
1918   >>> queue.get()
1919   'hello'
1920
1921Local processes can also access that queue, using the code from above on the
1922client to access it remotely::
1923
1924    >>> from multiprocessing import Process, Queue
1925    >>> from multiprocessing.managers import BaseManager
1926    >>> class Worker(Process):
1927    ...     def __init__(self, q):
1928    ...         self.q = q
1929    ...         super(Worker, self).__init__()
1930    ...     def run(self):
1931    ...         self.q.put('local hello')
1932    ...
1933    >>> queue = Queue()
1934    >>> w = Worker(queue)
1935    >>> w.start()
1936    >>> class QueueManager(BaseManager): pass
1937    ...
1938    >>> QueueManager.register('get_queue', callable=lambda: queue)
1939    >>> m = QueueManager(address=('', 50000), authkey=b'abracadabra')
1940    >>> s = m.get_server()
1941    >>> s.serve_forever()
1942
1943.. _multiprocessing-proxy_objects:
1944
1945Proxy Objects
1946~~~~~~~~~~~~~
1947
1948A proxy is an object which *refers* to a shared object which lives (presumably)
1949in a different process.  The shared object is said to be the *referent* of the
1950proxy.  Multiple proxy objects may have the same referent.
1951
1952A proxy object has methods which invoke corresponding methods of its referent
1953(although not every method of the referent will necessarily be available through
1954the proxy).  In this way, a proxy can be used just like its referent can:
1955
1956.. doctest::
1957
1958   >>> from multiprocessing import Manager
1959   >>> manager = Manager()
1960   >>> l = manager.list([i*i for i in range(10)])
1961   >>> print(l)
1962   [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
1963   >>> print(repr(l))
1964   <ListProxy object, typeid 'list' at 0x...>
1965   >>> l[4]
1966   16
1967   >>> l[2:5]
1968   [4, 9, 16]
1969
1970Notice that applying :func:`str` to a proxy will return the representation of
1971the referent, whereas applying :func:`repr` will return the representation of
1972the proxy.
1973
1974An important feature of proxy objects is that they are picklable so they can be
1975passed between processes.  As such, a referent can contain
1976:ref:`multiprocessing-proxy_objects`.  This permits nesting of these managed
1977lists, dicts, and other :ref:`multiprocessing-proxy_objects`:
1978
1979.. doctest::
1980
1981   >>> a = manager.list()
1982   >>> b = manager.list()
1983   >>> a.append(b)         # referent of a now contains referent of b
1984   >>> print(a, b)
1985   [<ListProxy object, typeid 'list' at ...>] []
1986   >>> b.append('hello')
1987   >>> print(a[0], b)
1988   ['hello'] ['hello']
1989
1990Similarly, dict and list proxies may be nested inside one another::
1991
1992   >>> l_outer = manager.list([ manager.dict() for i in range(2) ])
1993   >>> d_first_inner = l_outer[0]
1994   >>> d_first_inner['a'] = 1
1995   >>> d_first_inner['b'] = 2
1996   >>> l_outer[1]['c'] = 3
1997   >>> l_outer[1]['z'] = 26
1998   >>> print(l_outer[0])
1999   {'a': 1, 'b': 2}
2000   >>> print(l_outer[1])
2001   {'c': 3, 'z': 26}
2002
2003If standard (non-proxy) :class:`list` or :class:`dict` objects are contained
2004in a referent, modifications to those mutable values will not be propagated
2005through the manager because the proxy has no way of knowing when the values
2006contained within are modified.  However, storing a value in a container proxy
2007(which triggers a ``__setitem__`` on the proxy object) does propagate through
2008the manager and so to effectively modify such an item, one could re-assign the
2009modified value to the container proxy::
2010
2011   # create a list proxy and append a mutable object (a dictionary)
2012   lproxy = manager.list()
2013   lproxy.append({})
2014   # now mutate the dictionary
2015   d = lproxy[0]
2016   d['a'] = 1
2017   d['b'] = 2
2018   # at this point, the changes to d are not yet synced, but by
2019   # updating the dictionary, the proxy is notified of the change
2020   lproxy[0] = d
2021
2022This approach is perhaps less convenient than employing nested
2023:ref:`multiprocessing-proxy_objects` for most use cases but also
2024demonstrates a level of control over the synchronization.
2025
2026.. note::
2027
2028   The proxy types in :mod:`multiprocessing` do nothing to support comparisons
2029   by value.  So, for instance, we have:
2030
2031   .. doctest::
2032
2033       >>> manager.list([1,2,3]) == [1,2,3]
2034       False
2035
2036   One should just use a copy of the referent instead when making comparisons.
2037
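   For instance, converting the proxy to an ordinary :class:`list` first
   gives the expected result (illustrative sketch)::

       >>> list(manager.list([1, 2, 3])) == [1, 2, 3]
       True
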
2038.. class:: BaseProxy
2039
2040   Proxy objects are instances of subclasses of :class:`BaseProxy`.
2041
2042   .. method:: _callmethod(methodname[, args[, kwds]])
2043
2044      Call and return the result of a method of the proxy's referent.
2045
2046      If ``proxy`` is a proxy whose referent is ``obj`` then the expression ::
2047
2048         proxy._callmethod(methodname, args, kwds)
2049
2050      will evaluate the expression ::
2051
2052         getattr(obj, methodname)(*args, **kwds)
2053
2054      in the manager's process.
2055
2056      The returned value will be a copy of the result of the call or a proxy to
2057      a new shared object -- see documentation for the *method_to_typeid*
2058      argument of :meth:`BaseManager.register`.
2059
      If an exception is raised by the call, then it is re-raised by
2061      :meth:`_callmethod`.  If some other exception is raised in the manager's
2062      process then this is converted into a :exc:`RemoteError` exception and is
2063      raised by :meth:`_callmethod`.
2064
2065      Note in particular that an exception will be raised if *methodname* has
2066      not been *exposed*.
2067
2068      An example of the usage of :meth:`_callmethod`:
2069
2070      .. doctest::
2071
2072         >>> l = manager.list(range(10))
2073         >>> l._callmethod('__len__')
2074         10
2075         >>> l._callmethod('__getitem__', (slice(2, 7),)) # equivalent to l[2:7]
2076         [2, 3, 4, 5, 6]
2077         >>> l._callmethod('__getitem__', (20,))          # equivalent to l[20]
2078         Traceback (most recent call last):
2079         ...
2080         IndexError: list index out of range
2081
2082   .. method:: _getvalue()
2083
2084      Return a copy of the referent.
2085
2086      If the referent is unpicklable then this will raise an exception.
2087
2088   .. method:: __repr__
2089
2090      Return a representation of the proxy object.
2091
2092   .. method:: __str__
2093
2094      Return the representation of the referent.
2095
2096
2097Cleanup
2098>>>>>>>
2099
2100A proxy object uses a weakref callback so that when it gets garbage collected it
2101deregisters itself from the manager which owns its referent.
2102
2103A shared object gets deleted from the manager process when there are no longer
2104any proxies referring to it.
2105
2106
2107Process Pools
2108~~~~~~~~~~~~~
2109
2110.. module:: multiprocessing.pool
2111   :synopsis: Create pools of processes.
2112
2113One can create a pool of processes which will carry out tasks submitted to it
2114with the :class:`Pool` class.
2115
.. class:: Pool([processes[, initializer[, initargs[, maxtasksperchild[, context]]]]])
2117
2118   A process pool object which controls a pool of worker processes to which jobs
2119   can be submitted.  It supports asynchronous results with timeouts and
2120   callbacks and has a parallel map implementation.
2121
2122   *processes* is the number of worker processes to use.  If *processes* is
2123   ``None`` then the number returned by :func:`os.cpu_count` is used.
2124
2125   If *initializer* is not ``None`` then each worker process will call
2126   ``initializer(*initargs)`` when it starts.
2127
2128   *maxtasksperchild* is the number of tasks a worker process can complete
2129   before it will exit and be replaced with a fresh worker process, to enable
2130   unused resources to be freed. The default *maxtasksperchild* is ``None``, which
2131   means worker processes will live as long as the pool.
2132
2133   *context* can be used to specify the context used for starting
2134   the worker processes.  Usually a pool is created using the
2135   function :func:`multiprocessing.Pool` or the :meth:`Pool` method
2136   of a context object.  In both cases *context* is set
2137   appropriately.
2138
2139   Note that the methods of the pool object should only be called by
2140   the process which created the pool.
2141
2142   .. warning::
      :class:`multiprocessing.pool.Pool` objects have internal resources that need to be
2144      properly managed (like any other resource) by using the pool as a context manager
2145      or by calling :meth:`close` and :meth:`terminate` manually. Failure to do this
2146      can lead to the process hanging on finalization.
2147
      Note that it is **not correct** to rely on the garbage collector to destroy
      the pool, as CPython does not guarantee that the finalizer of the pool will
      be called
2150      (see :meth:`object.__del__` for more information).
2151
2152   .. versionadded:: 3.2
2153      *maxtasksperchild*
2154
2155   .. versionadded:: 3.4
2156      *context*
2157
2158   .. note::
2159
2160      Worker processes within a :class:`Pool` typically live for the complete
2161      duration of the Pool's work queue. A frequent pattern found in other
2162      systems (such as Apache, mod_wsgi, etc) to free resources held by
2163      workers is to allow a worker within a pool to complete only a set
      amount of work before exiting, being cleaned up and a new
2165      process spawned to replace the old one. The *maxtasksperchild*
2166      argument to the :class:`Pool` exposes this ability to the end user.
2167
2168   .. method:: apply(func[, args[, kwds]])
2169
2170      Call *func* with arguments *args* and keyword arguments *kwds*.  It blocks
2171      until the result is ready. Given this blocks, :meth:`apply_async` is
2172      better suited for performing work in parallel. Additionally, *func*
2173      is only executed in one of the workers of the pool.
2174
2175   .. method:: apply_async(func[, args[, kwds[, callback[, error_callback]]]])
2176
2177      A variant of the :meth:`apply` method which returns a
2178      :class:`~multiprocessing.pool.AsyncResult` object.
2179
2180      If *callback* is specified then it should be a callable which accepts a
2181      single argument.  When the result becomes ready *callback* is applied to
2182      it, that is unless the call failed, in which case the *error_callback*
2183      is applied instead.
2184
2185      If *error_callback* is specified then it should be a callable which
2186      accepts a single argument.  If the target function fails, then
2187      the *error_callback* is called with the exception instance.
2188
2189      Callbacks should complete immediately since otherwise the thread which
2190      handles the results will get blocked.
2191
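      A short illustrative sketch of using *callback* and *error_callback*
      (the helper function name is hypothetical)::

         from multiprocessing import Pool

         def square(x):
             return x * x

         if __name__ == '__main__':
             with Pool(2) as pool:
                 res = pool.apply_async(square, (3,),
                                        callback=print,         # prints 9
                                        error_callback=print)   # prints the exception, if any
                 res.wait()
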
2192   .. method:: map(func, iterable[, chunksize])
2193
      A parallel equivalent of the :func:`map` built-in function (it supports only
      one *iterable* argument though; for multiple iterables see :meth:`starmap`).
2196      It blocks until the result is ready.
2197
2198      This method chops the iterable into a number of chunks which it submits to
2199      the process pool as separate tasks.  The (approximate) size of these
2200      chunks can be specified by setting *chunksize* to a positive integer.
2201
2202      Note that it may cause high memory usage for very long iterables. Consider
      using :meth:`imap` or :meth:`imap_unordered` with an explicit *chunksize*
      option for better efficiency.
2205
2206   .. method:: map_async(func, iterable[, chunksize[, callback[, error_callback]]])
2207
2208      A variant of the :meth:`.map` method which returns a
2209      :class:`~multiprocessing.pool.AsyncResult` object.
2210
2211      If *callback* is specified then it should be a callable which accepts a
2212      single argument.  When the result becomes ready *callback* is applied to
2213      it, that is unless the call failed, in which case the *error_callback*
2214      is applied instead.
2215
2216      If *error_callback* is specified then it should be a callable which
2217      accepts a single argument.  If the target function fails, then
2218      the *error_callback* is called with the exception instance.
2219
2220      Callbacks should complete immediately since otherwise the thread which
2221      handles the results will get blocked.
2222
2223   .. method:: imap(func, iterable[, chunksize])
2224
2225      A lazier version of :meth:`.map`.
2226
2227      The *chunksize* argument is the same as the one used by the :meth:`.map`
2228      method.  For very long iterables using a large value for *chunksize* can
2229      make the job complete **much** faster than using the default value of
2230      ``1``.
2231
2232      Also if *chunksize* is ``1`` then the :meth:`!next` method of the iterator
2233      returned by the :meth:`imap` method has an optional *timeout* parameter:
2234      ``next(timeout)`` will raise :exc:`multiprocessing.TimeoutError` if the
2235      result cannot be returned within *timeout* seconds.
2236
2237   .. method:: imap_unordered(func, iterable[, chunksize])
2238
2239      The same as :meth:`imap` except that the ordering of the results from the
2240      returned iterator should be considered arbitrary.  (Only when there is
2241      only one worker process is the order guaranteed to be "correct".)
2242
2243   .. method:: starmap(func, iterable[, chunksize])
2244
2245      Like :meth:`map` except that the elements of the *iterable* are expected
2246      to be iterables that are unpacked as arguments.
2247
2248      Hence an *iterable* of ``[(1,2), (3, 4)]`` results in ``[func(1,2),
2249      func(3,4)]``.
2250
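      An illustrative sketch::

         from multiprocessing import Pool

         if __name__ == '__main__':
             with Pool(2) as pool:
                 print(pool.starmap(pow, [(2, 5), (3, 2)]))   # prints [32, 9]
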
2251      .. versionadded:: 3.3
2252
2253   .. method:: starmap_async(func, iterable[, chunksize[, callback[, error_callback]]])
2254
2255      A combination of :meth:`starmap` and :meth:`map_async` that iterates over
2256      *iterable* of iterables and calls *func* with the iterables unpacked.
2257      Returns a result object.
2258
2259      .. versionadded:: 3.3
2260
2261   .. method:: close()
2262
2263      Prevents any more tasks from being submitted to the pool.  Once all the
2264      tasks have been completed the worker processes will exit.
2265
2266   .. method:: terminate()
2267
2268      Stops the worker processes immediately without completing outstanding
2269      work.  When the pool object is garbage collected :meth:`terminate` will be
2270      called immediately.
2271
2272   .. method:: join()
2273
2274      Wait for the worker processes to exit.  One must call :meth:`close` or
2275      :meth:`terminate` before using :meth:`join`.
2276
2277   .. versionadded:: 3.3
2278      Pool objects now support the context management protocol -- see
2279      :ref:`typecontextmanager`.  :meth:`~contextmanager.__enter__` returns the
2280      pool object, and :meth:`~contextmanager.__exit__` calls :meth:`terminate`.
2281
2282
2283.. class:: AsyncResult
2284
2285   The class of the result returned by :meth:`Pool.apply_async` and
2286   :meth:`Pool.map_async`.
2287
2288   .. method:: get([timeout])
2289
2290      Return the result when it arrives.  If *timeout* is not ``None`` and the
2291      result does not arrive within *timeout* seconds then
2292      :exc:`multiprocessing.TimeoutError` is raised.  If the remote call raised
2293      an exception then that exception will be reraised by :meth:`get`.
2294
2295   .. method:: wait([timeout])
2296
2297      Wait until the result is available or until *timeout* seconds pass.
2298
2299   .. method:: ready()
2300
2301      Return whether the call has completed.
2302
2303   .. method:: successful()
2304
2305      Return whether the call completed without raising an exception.  Will
2306      raise :exc:`ValueError` if the result is not ready.
2307
2308      .. versionchanged:: 3.7
2309         If the result is not ready, :exc:`ValueError` is raised instead of
2310         :exc:`AssertionError`.
2311
2312The following example demonstrates the use of a pool::
2313
2314   from multiprocessing import Pool
2315   import time
2316
2317   def f(x):
2318       return x*x
2319
2320   if __name__ == '__main__':
2321       with Pool(processes=4) as pool:         # start 4 worker processes
2322           result = pool.apply_async(f, (10,)) # evaluate "f(10)" asynchronously in a single process
2323           print(result.get(timeout=1))        # prints "100" unless your computer is *very* slow
2324
2325           print(pool.map(f, range(10)))       # prints "[0, 1, 4,..., 81]"
2326
2327           it = pool.imap(f, range(10))
2328           print(next(it))                     # prints "0"
2329           print(next(it))                     # prints "1"
2330           print(it.next(timeout=1))           # prints "4" unless your computer is *very* slow
2331
2332           result = pool.apply_async(time.sleep, (10,))
2333           print(result.get(timeout=1))        # raises multiprocessing.TimeoutError
2334
2335
2336.. _multiprocessing-listeners-clients:
2337
2338Listeners and Clients
2339~~~~~~~~~~~~~~~~~~~~~
2340
2341.. module:: multiprocessing.connection
2342   :synopsis: API for dealing with sockets.
2343
2344Usually message passing between processes is done using queues or by using
2345:class:`~Connection` objects returned by
2346:func:`~multiprocessing.Pipe`.
2347
2348However, the :mod:`multiprocessing.connection` module allows some extra
flexibility.  It gives a high-level, message-oriented API for dealing
2350with sockets or Windows named pipes.  It also has support for *digest
2351authentication* using the :mod:`hmac` module, and for polling
2352multiple connections at the same time.
2353
2354
2355.. function:: deliver_challenge(connection, authkey)
2356
2357   Send a randomly generated message to the other end of the connection and wait
2358   for a reply.
2359
2360   If the reply matches the digest of the message using *authkey* as the key
2361   then a welcome message is sent to the other end of the connection.  Otherwise
2362   :exc:`~multiprocessing.AuthenticationError` is raised.
2363
2364.. function:: answer_challenge(connection, authkey)
2365
2366   Receive a message, calculate the digest of the message using *authkey* as the
2367   key, and then send the digest back.
2368
2369   If a welcome message is not received, then
2370   :exc:`~multiprocessing.AuthenticationError` is raised.
2371
2372.. function:: Client(address[, family[, authkey]])
2373
2374   Attempt to set up a connection to the listener which is using address
2375   *address*, returning a :class:`~Connection`.
2376
   The type of the connection is determined by the *family* argument, but this can
2378   generally be omitted since it can usually be inferred from the format of
2379   *address*. (See :ref:`multiprocessing-address-formats`)
2380
2381   If *authkey* is given and not None, it should be a byte string and will be
2382   used as the secret key for an HMAC-based authentication challenge. No
2383   authentication is done if *authkey* is None.
2384   :exc:`~multiprocessing.AuthenticationError` is raised if authentication fails.
2385   See :ref:`multiprocessing-auth-keys`.
2386
2387.. class:: Listener([address[, family[, backlog[, authkey]]]])
2388
2389   A wrapper for a bound socket or Windows named pipe which is 'listening' for
2390   connections.
2391
2392   *address* is the address to be used by the bound socket or named pipe of the
2393   listener object.
2394
2395   .. note::
2396
2397      If an address of '0.0.0.0' is used, the address will not be a connectable
      end point on Windows. If you require a connectable end point,
2399      you should use '127.0.0.1'.
2400
2401   *family* is the type of socket (or named pipe) to use.  This can be one of
2402   the strings ``'AF_INET'`` (for a TCP socket), ``'AF_UNIX'`` (for a Unix
2403   domain socket) or ``'AF_PIPE'`` (for a Windows named pipe).  Of these only
2404   the first is guaranteed to be available.  If *family* is ``None`` then the
2405   family is inferred from the format of *address*.  If *address* is also
2406   ``None`` then a default is chosen.  This default is the family which is
2407   assumed to be the fastest available.  See
2408   :ref:`multiprocessing-address-formats`.  Note that if *family* is
2409   ``'AF_UNIX'`` and address is ``None`` then the socket will be created in a
2410   private temporary directory created using :func:`tempfile.mkstemp`.
2411
2412   If the listener object uses a socket then *backlog* (1 by default) is passed
2413   to the :meth:`~socket.socket.listen` method of the socket once it has been
2414   bound.
2415
2416   If *authkey* is given and not None, it should be a byte string and will be
2417   used as the secret key for an HMAC-based authentication challenge. No
2418   authentication is done if *authkey* is None.
2419   :exc:`~multiprocessing.AuthenticationError` is raised if authentication fails.
2420   See :ref:`multiprocessing-auth-keys`.
2421
2422   .. method:: accept()
2423
2424      Accept a connection on the bound socket or named pipe of the listener
2425      object and return a :class:`~Connection` object.
2426      If authentication is attempted and fails, then
2427      :exc:`~multiprocessing.AuthenticationError` is raised.
2428
2429   .. method:: close()
2430
2431      Close the bound socket or named pipe of the listener object.  This is
2432      called automatically when the listener is garbage collected.  However it
2433      is advisable to call it explicitly.
2434
2435   Listener objects have the following read-only properties:
2436
2437   .. attribute:: address
2438
2439      The address which is being used by the Listener object.
2440
2441   .. attribute:: last_accepted
2442
2443      The address from which the last accepted connection came.  If this is
2444      unavailable then it is ``None``.
2445
2446   .. versionadded:: 3.3
2447      Listener objects now support the context management protocol -- see
2448      :ref:`typecontextmanager`.  :meth:`~contextmanager.__enter__` returns the
2449      listener object, and :meth:`~contextmanager.__exit__` calls :meth:`close`.
2450
2451.. function:: wait(object_list, timeout=None)
2452
   Wait until an object in *object_list* is ready.  Returns the list of
2454   those objects in *object_list* which are ready.  If *timeout* is a
2455   float then the call blocks for at most that many seconds.  If
2456   *timeout* is ``None`` then it will block for an unlimited period.
2457   A negative timeout is equivalent to a zero timeout.
2458
2459   For both Unix and Windows, an object can appear in *object_list* if
2460   it is
2461
2462   * a readable :class:`~multiprocessing.connection.Connection` object;
2463   * a connected and readable :class:`socket.socket` object; or
2464   * the :attr:`~multiprocessing.Process.sentinel` attribute of a
2465     :class:`~multiprocessing.Process` object.
2466
2467   A connection or socket object is ready when there is data available
2468   to be read from it, or the other end has been closed.
2469
   **Unix**: ``wait(object_list, timeout)`` is almost equivalent to
   ``select.select(object_list, [], [], timeout)``.  The difference is
2472   that, if :func:`select.select` is interrupted by a signal, it can
2473   raise :exc:`OSError` with an error number of ``EINTR``, whereas
2474   :func:`wait` will not.
2475
2476   **Windows**: An item in *object_list* must either be an integer
2477   handle which is waitable (according to the definition used by the
2478   documentation of the Win32 function ``WaitForMultipleObjects()``)
2479   or it can be an object with a :meth:`fileno` method which returns a
2480   socket handle or pipe handle.  (Note that pipe handles and socket
2481   handles are **not** waitable handles.)
2482
2483   .. versionadded:: 3.3
2484
2485
2486**Examples**
2487
2488The following server code creates a listener which uses ``'secret password'`` as
2489an authentication key.  It then waits for a connection and sends some data to
2490the client::
2491
2492   from multiprocessing.connection import Listener
2493   from array import array
2494
2495   address = ('localhost', 6000)     # family is deduced to be 'AF_INET'
2496
2497   with Listener(address, authkey=b'secret password') as listener:
2498       with listener.accept() as conn:
2499           print('connection accepted from', listener.last_accepted)
2500
2501           conn.send([2.25, None, 'junk', float])
2502
2503           conn.send_bytes(b'hello')
2504
2505           conn.send_bytes(array('i', [42, 1729]))
2506
2507The following code connects to the server and receives some data from the
2508server::
2509
2510   from multiprocessing.connection import Client
2511   from array import array
2512
2513   address = ('localhost', 6000)
2514
2515   with Client(address, authkey=b'secret password') as conn:
2516       print(conn.recv())                  # => [2.25, None, 'junk', float]
2517
2518       print(conn.recv_bytes())            # => 'hello'
2519
2520       arr = array('i', [0, 0, 0, 0, 0])
2521       print(conn.recv_bytes_into(arr))    # => 8
2522       print(arr)                          # => array('i', [42, 1729, 0, 0, 0])
2523
2524The following code uses :func:`~multiprocessing.connection.wait` to
2525wait for messages from multiple processes at once::
2526
2527   import time, random
2528   from multiprocessing import Process, Pipe, current_process
2529   from multiprocessing.connection import wait
2530
2531   def foo(w):
2532       for i in range(10):
2533           w.send((i, current_process().name))
2534       w.close()
2535
2536   if __name__ == '__main__':
2537       readers = []
2538
2539       for i in range(4):
2540           r, w = Pipe(duplex=False)
2541           readers.append(r)
2542           p = Process(target=foo, args=(w,))
2543           p.start()
2544           # We close the writable end of the pipe now to be sure that
2545           # p is the only process which owns a handle for it.  This
2546           # ensures that when p closes its handle for the writable end,
2547           # wait() will promptly report the readable end as being ready.
2548           w.close()
2549
2550       while readers:
2551           for r in wait(readers):
2552               try:
2553                   msg = r.recv()
2554               except EOFError:
2555                   readers.remove(r)
2556               else:
2557                   print(msg)
2558
2559
2560.. _multiprocessing-address-formats:
2561
2562Address Formats
2563>>>>>>>>>>>>>>>
2564
2565* An ``'AF_INET'`` address is a tuple of the form ``(hostname, port)`` where
2566  *hostname* is a string and *port* is an integer.
2567
2568* An ``'AF_UNIX'`` address is a string representing a filename on the
2569  filesystem.
2570
2571* An ``'AF_PIPE'`` address is a string of the form
2572  :samp:`r'\\\\.\\pipe\\{PipeName}'`.  To use :func:`Client` to connect to a named
2573  pipe on a remote computer called *ServerName* one should use an address of the
2574  form :samp:`r'\\\\{ServerName}\\pipe\\{PipeName}'` instead.
2575
2576Note that any string beginning with two backslashes is assumed by default to be
2577an ``'AF_PIPE'`` address rather than an ``'AF_UNIX'`` address.
2578
2579
2580.. _multiprocessing-auth-keys:
2581
2582Authentication keys
2583~~~~~~~~~~~~~~~~~~~
2584
2585When one uses :meth:`Connection.recv <Connection.recv>`, the
2586data received is automatically
2587unpickled. Unfortunately unpickling data from an untrusted source is a security
2588risk. Therefore :class:`Listener` and :func:`Client` use the :mod:`hmac` module
2589to provide digest authentication.
2590
2591An authentication key is a byte string which can be thought of as a
2592password: once a connection is established both ends will demand proof
2593that the other knows the authentication key.  (Demonstrating that both
2594ends are using the same key does **not** involve sending the key over
2595the connection.)
2596
2597If authentication is requested but no authentication key is specified then the
2598return value of ``current_process().authkey`` is used (see
2599:class:`~multiprocessing.Process`).  This value will be automatically inherited by
2600any :class:`~multiprocessing.Process` object that the current process creates.
2601This means that (by default) all processes of a multi-process program will share
2602a single authentication key which can be used when setting up connections
2603between themselves.
2604
2605Suitable authentication keys can also be generated by using :func:`os.urandom`.
2606
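For example, a fresh key can be generated and installed for the current
process before any connections are set up (an illustrative sketch)::

   import os
   from multiprocessing import current_process

   current_process().authkey = os.urandom(32)   # 32 random bytes as the key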
2607
2608Logging
2609~~~~~~~
2610
2611Some support for logging is available.  Note, however, that the :mod:`logging`
2612package does not use process shared locks so it is possible (depending on the
2613handler type) for messages from different processes to get mixed up.
2614
2615.. currentmodule:: multiprocessing
2616.. function:: get_logger()
2617
2618   Returns the logger used by :mod:`multiprocessing`.  If necessary, a new one
2619   will be created.
2620
2621   When first created the logger has level :data:`logging.NOTSET` and no
2622   default handler. Messages sent to this logger will not by default propagate
2623   to the root logger.
2624
2625   Note that on Windows child processes will only inherit the level of the
2626   parent process's logger -- any other customization of the logger will not be
2627   inherited.
2628
2629.. currentmodule:: multiprocessing
2630.. function:: log_to_stderr()
2631
2632   This function performs a call to :func:`get_logger` but in addition to
2633   returning the logger created by get_logger, it adds a handler which sends
2634   output to :data:`sys.stderr` using format
2635   ``'[%(levelname)s/%(processName)s] %(message)s'``.
2636
2637Below is an example session with logging turned on::
2638
2639    >>> import multiprocessing, logging
2640    >>> logger = multiprocessing.log_to_stderr()
2641    >>> logger.setLevel(logging.INFO)
2642    >>> logger.warning('doomed')
2643    [WARNING/MainProcess] doomed
2644    >>> m = multiprocessing.Manager()
2645    [INFO/SyncManager-...] child process calling self.run()
2646    [INFO/SyncManager-...] created temp directory /.../pymp-...
2647    [INFO/SyncManager-...] manager serving at '/.../listener-...'
2648    >>> del m
2649    [INFO/MainProcess] sending shutdown message to manager
2650    [INFO/SyncManager-...] manager exiting with exitcode 0
2651
2652For a full table of logging levels, see the :mod:`logging` module.
2653
2654
2655The :mod:`multiprocessing.dummy` module
2656~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2657
2658.. module:: multiprocessing.dummy
2659   :synopsis: Dumb wrapper around threading.
2660
2661:mod:`multiprocessing.dummy` replicates the API of :mod:`multiprocessing` but is
2662no more than a wrapper around the :mod:`threading` module.
2663
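In particular, ``multiprocessing.dummy.Pool`` provides a pool of *threads*
with the same interface as :class:`multiprocessing.pool.Pool`.  A minimal
illustrative sketch::

   from multiprocessing.dummy import Pool as ThreadPool

   def work(x):
       return x * x          # stand-in for an I/O-bound task

   if __name__ == '__main__':
       with ThreadPool(4) as pool:
           print(pool.map(work, range(5)))    # prints [0, 1, 4, 9, 16]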
2664
2665.. _multiprocessing-programming:
2666
2667Programming guidelines
2668----------------------
2669
2670There are certain guidelines and idioms which should be adhered to when using
2671:mod:`multiprocessing`.
2672
2673
2674All start methods
2675~~~~~~~~~~~~~~~~~
2676
2677The following applies to all start methods.
2678
2679Avoid shared state
2680
2681    As far as possible one should try to avoid shifting large amounts of data
2682    between processes.
2683
2684    It is probably best to stick to using queues or pipes for communication
2685    between processes rather than using the lower level synchronization
2686    primitives.
2687
2688Picklability
2689
2690    Ensure that the arguments to the methods of proxies are picklable.
2691
2692Thread safety of proxies
2693
2694    Do not use a proxy object from more than one thread unless you protect it
2695    with a lock.
2696
2697    (There is never a problem with different processes using the *same* proxy.)
2698
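    A minimal sketch of guarding a proxy that is shared between threads of a
    single process (the helper names are illustrative)::

        import threading
        from multiprocessing import Manager

        def worker(proxy, guard):
            with guard:                    # serialize access to the proxy
                proxy.append(threading.get_ident())

        if __name__ == '__main__':
            manager = Manager()
            shared = manager.list()
            guard = threading.Lock()       # one local lock protecting the proxy
            threads = [threading.Thread(target=worker, args=(shared, guard))
                       for _ in range(4)]
            for t in threads:
                t.start()
            for t in threads:
                t.join()
            print(len(shared))             # prints 4
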
2699Joining zombie processes
2700
2701    On Unix when a process finishes but has not been joined it becomes a zombie.
2702    There should never be very many because each time a new process starts (or
2703    :func:`~multiprocessing.active_children` is called) all completed processes
2704    which have not yet been joined will be joined.  Also calling a finished
2705    process's :meth:`Process.is_alive <multiprocessing.Process.is_alive>` will
2706    join the process.  Even so it is probably good
2707    practice to explicitly join all the processes that you start.
2708
2709Better to inherit than pickle/unpickle
2710
2711    When using the *spawn* or *forkserver* start methods many types
2712    from :mod:`multiprocessing` need to be picklable so that child
2713    processes can use them.  However, one should generally avoid
2714    sending shared objects to other processes using pipes or queues.
2715    Instead you should arrange the program so that a process which
2716    needs access to a shared resource created elsewhere can inherit it
2717    from an ancestor process.
2718
2719Avoid terminating processes
2720
2721    Using the :meth:`Process.terminate <multiprocessing.Process.terminate>`
2722    method to stop a process is liable to
2723    cause any shared resources (such as locks, semaphores, pipes and queues)
2724    currently being used by the process to become broken or unavailable to other
2725    processes.
2726
2727    Therefore it is probably best to only consider using
2728    :meth:`Process.terminate <multiprocessing.Process.terminate>` on processes
2729    which never use any shared resources.
2730
2731Joining processes that use queues
2732
2733    Bear in mind that a process that has put items in a queue will wait before
2734    terminating until all the buffered items are fed by the "feeder" thread to
2735    the underlying pipe.  (The child process can call the
2736    :meth:`Queue.cancel_join_thread <multiprocessing.Queue.cancel_join_thread>`
2737    method of the queue to avoid this behaviour.)
2738
2739    This means that whenever you use a queue you need to make sure that all
2740    items which have been put on the queue will eventually be removed before the
2741    process is joined.  Otherwise you cannot be sure that processes which have
2742    put items on the queue will terminate.  Remember also that non-daemonic
2743    processes will be joined automatically.
2744
2745    An example which will deadlock is the following::
2746
2747        from multiprocessing import Process, Queue
2748
2749        def f(q):
2750            q.put('X' * 1000000)
2751
2752        if __name__ == '__main__':
2753            queue = Queue()
2754            p = Process(target=f, args=(queue,))
2755            p.start()
2756            p.join()                    # this deadlocks
2757            obj = queue.get()
2758
2759    A fix here would be to swap the last two lines (or simply remove the
2760    ``p.join()`` line).
2761
2762Explicitly pass resources to child processes
2763
    On Unix using the *fork* start method, a child process can make
    use of a shared resource created in a parent process by accessing
    it as a global.  However, it is better to pass the object as an
2767    argument to the constructor for the child process.
2768
2769    Apart from making the code (potentially) compatible with Windows
2770    and the other start methods this also ensures that as long as the
2771    child process is still alive the object will not be garbage
2772    collected in the parent process.  This might be important if some
2773    resource is freed when the object is garbage collected in the
2774    parent process.
2775
2776    So for instance ::
2777
2778        from multiprocessing import Process, Lock
2779
2780        def f():
2781            ... do something using "lock" ...
2782
2783        if __name__ == '__main__':
2784            lock = Lock()
2785            for i in range(10):
2786                Process(target=f).start()
2787
2788    should be rewritten as ::
2789
2790        from multiprocessing import Process, Lock
2791
2792        def f(l):
2793            ... do something using "l" ...
2794
2795        if __name__ == '__main__':
2796            lock = Lock()
2797            for i in range(10):
2798                Process(target=f, args=(lock,)).start()
2799
Beware of replacing :data:`sys.stdin` with a "file-like object"
2801
2802    :mod:`multiprocessing` originally unconditionally called::
2803
2804        os.close(sys.stdin.fileno())
2805
2806    in the :meth:`multiprocessing.Process._bootstrap` method --- this resulted
2807    in issues with processes-in-processes. This has been changed to::
2808
2809        sys.stdin.close()
2810        sys.stdin = open(os.open(os.devnull, os.O_RDONLY), closefd=False)
2811
    This solves the fundamental issue of processes colliding with each other
    resulting in a bad file descriptor error, but introduces a potential danger
    to applications which replace :data:`sys.stdin` with a "file-like object"
    with output buffering.  The danger is that if multiple processes call
    :meth:`~io.IOBase.close` on this file-like object, the same data could be
    flushed to the object multiple times, resulting in corruption.
2818
2819    If you write a file-like object and implement your own caching, you can
2820    make it fork-safe by storing the pid whenever you append to the cache,
2821    and discarding the cache when the pid changes. For example::
2822
       @property
       def cache(self):
           pid = os.getpid()
           if pid != self._pid:
               # The pid has changed, so this is a new (forked) process;
               # any data buffered by the parent must not be flushed again.
               self._pid = pid
               self._cache = []
           return self._cache
2830
    For more information, see :issue:`5155`, :issue:`5313` and :issue:`5331`.
2832
2833The *spawn* and *forkserver* start methods
2834~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2835
There are a few extra restrictions which don't apply to the *fork*
start method.
2838
2839More picklability
2840
2841    Ensure that all arguments to :meth:`Process.__init__` are picklable.
2842    Also, if you subclass :class:`~multiprocessing.Process` then make sure that
2843    instances will be picklable when the :meth:`Process.start
2844    <multiprocessing.Process.start>` method is called.
2845
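    As a minimal sketch of what to avoid (shown with the *spawn* start method
    requested explicitly), passing a lambda as the *target* will typically fail
    because lambdas cannot be pickled::

        import multiprocessing as mp

        if __name__ == '__main__':
            ctx = mp.get_context('spawn')
            p = ctx.Process(target=lambda: print('hello'))
            p.start()   # raises a pickling error while pickling the Process object
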
2846Global variables
2847
2848    Bear in mind that if code run in a child process tries to access a global
2849    variable, then the value it sees (if any) may not be the same as the value
2850    in the parent process at the time that :meth:`Process.start
2851    <multiprocessing.Process.start>` was called.
2852
    However, global variables which are just module-level constants cause no
    problems.
2855
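    A minimal sketch of the pitfall, assuming the *spawn* start method (the
    variable name is illustrative)::

        from multiprocessing import Process, set_start_method

        value = 0

        def show():
            # Under *spawn* the child re-imports this module, so it sees the
            # module-level value, not the parent's later assignment.
            print('child sees', value)

        if __name__ == '__main__':
            set_start_method('spawn')
            value = 1
            p = Process(target=show)
            p.start()
            p.join()   # the child prints "child sees 0"
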
2856Safe importing of main module
2857
    Make sure that the main module can be safely imported by a new Python
    interpreter without causing unintended side effects (such as starting a new
    process).
2861
    For example, using the *spawn* or *forkserver* start method,
    running the following module would fail with a
    :exc:`RuntimeError`::
2865
2866        from multiprocessing import Process
2867
2868        def foo():
2869            print('hello')
2870
2871        p = Process(target=foo)
2872        p.start()
2873
2874    Instead one should protect the "entry point" of the program by using ``if
2875    __name__ == '__main__':`` as follows::
2876
2877       from multiprocessing import Process, freeze_support, set_start_method
2878
2879       def foo():
2880           print('hello')
2881
2882       if __name__ == '__main__':
2883           freeze_support()
2884           set_start_method('spawn')
2885           p = Process(target=foo)
2886           p.start()
2887
2888    (The ``freeze_support()`` line can be omitted if the program will be run
2889    normally instead of frozen.)
2890
2891    This allows the newly spawned Python interpreter to safely import the module
2892    and then run the module's ``foo()`` function.
2893
2894    Similar restrictions apply if a pool or manager is created in the main
2895    module.
2896
2897
2898.. _multiprocessing-examples:
2899
2900Examples
2901--------
2902
2903Demonstration of how to create and use customized managers and proxies:
2904
2905.. literalinclude:: ../includes/mp_newtype.py
2906   :language: python3
2907
2908
2909Using :class:`~multiprocessing.pool.Pool`:
2910
2911.. literalinclude:: ../includes/mp_pool.py
2912   :language: python3
2913
2914
2915An example showing how to use queues to feed tasks to a collection of worker
2916processes and collect the results:
2917
2918.. literalinclude:: ../includes/mp_workers.py
2919