Monitor Memory usage of Python code

Memory Profiler

This is a Python module for monitoring memory consumption of a process as well as line-by-line analysis of memory consumption for Python programs. It is a pure Python module which depends on the psutil module.

Installation

Install via pip:

$ pip install -U memory_profiler

The package is also available on conda-forge.

To install from source, download the package, extract and type:

$ python setup.py install

Usage

line-by-line memory usage

The line-by-line memory usage mode is used much in the same way as line_profiler: first decorate the function you would like to profile with @profile, then run the script with specific arguments to the Python interpreter, as shown below.

In the following example, we create a simple function my_func that allocates lists a, b and then deletes b:

@profile
def my_func():
    a = [1] * (10 ** 6)
    b = [2] * (2 * 10 ** 7)
    del b
    return a

if __name__ == '__main__':
    my_func()

Execute the code passing the option -m memory_profiler to the Python interpreter to load the memory_profiler module and print the line-by-line analysis to stdout. If the file name is example.py, this results in:

$ python -m memory_profiler example.py

Output will follow:

Line #    Mem usage    Increment  Occurrences   Line Contents
============================================================
     3   38.816 MiB   38.816 MiB           1   @profile
     4                                         def my_func():
     5   46.492 MiB    7.676 MiB           1       a = [1] * (10 ** 6)
     6  199.117 MiB  152.625 MiB           1       b = [2] * (2 * 10 ** 7)
     7   46.629 MiB -152.488 MiB           1       del b
     8   46.629 MiB    0.000 MiB           1       return a

The first column represents the line number of the code that has been profiled, the second column (Mem usage) the memory usage of the Python interpreter after that line has been executed, and the third column (Increment) the difference in memory relative to the previous line. The last column (Line Contents) prints the code that has been profiled.

Decorator

A function decorator is also available. Use as follows:

from memory_profiler import profile

@profile
def my_func():
    a = [1] * (10 ** 6)
    b = [2] * (2 * 10 ** 7)
    del b
    return a

In this case the script can be run without specifying -m memory_profiler in the command line.

When using the decorator, you can specify the precision as an argument to the decorator function. Use as follows:

from memory_profiler import profile

@profile(precision=4)
def my_func():
    a = [1] * (10 ** 6)
    b = [2] * (2 * 10 ** 7)
    del b
    return a

If a Python script with the @profile decorator is run using -m memory_profiler on the command line, the precision parameter is ignored.

Time-based memory usage

Sometimes it is useful to have full memory usage reports as a function of time (not line-by-line) of external processes (be it Python scripts or not). In this case the executable mprof might be useful. Use it like:

mprof run <executable>
mprof plot

The first command runs the executable and records memory usage over time, in a file written to the current directory. Once it's done, a plot can be obtained with the second command. The name of the recorded file contains a timestamp, which allows several profiles to be kept at the same time.

Help on each mprof subcommand can be obtained with the -h flag, e.g. mprof run -h.
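
The recorded file is plain text and easy to post-process yourself. The sketch below parses out the memory samples; the format (a CMDLINE header followed by lines of the form MEM <MiB> <unix-timestamp>) is inferred from observed mprof output files, not a documented API:

```python
# Sketch: extract the MEM samples from an mprof .dat file.
# Assumed format: "CMDLINE <command>" header, then "MEM <MiB> <unix-timestamp>" lines.
def parse_mem_samples(lines):
    samples = []
    for line in lines:
        parts = line.split()
        if parts and parts[0] == "MEM":
            mem_mib, timestamp = float(parts[1]), float(parts[2])
            samples.append((timestamp, mem_mib))
    return samples

dat = """CMDLINE python example.py
MEM 1.316406 1424768150.5671
MEM 6.539062 1424768150.6675
MEM 8.812500 1424768150.7678
"""
samples = parse_mem_samples(dat.splitlines())
print(samples[0])  # (1424768150.5671, 1.316406)
```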

In the case of a Python script, using the previous command does not give you any information on which function is executed at a given time. Depending on the case, it can be difficult to identify the part of the code that is causing the highest memory usage.

Adding the profile decorator to a function and running the Python script with

mprof run <script>

will record timestamps when entering/leaving the profiled function. Running

mprof plot

afterward will plot the result, making plots (using matplotlib) similar to these:

https://camo.githubusercontent.com/3a584c7cfbae38c9220a755aa21b5ef926c1031d/68747470733a2f2f662e636c6f75642e6769746875622e636f6d2f6173736574732f313930383631382f3836313332302f63623865376337382d663563632d313165322d386531652d3539373237623636663462322e706e67

or, with mprof plot --flame (the function and timestamp names will appear on hover):

./images/flamegraph.png

A discussion of these capabilities can be found here.

Warning

If your Python file imports the memory profiler (from memory_profiler import profile), these timestamps will not be recorded. Comment out the import, leave your functions decorated, and re-run.

The available commands for mprof are:

  • mprof run: run an executable, recording memory usage
  • mprof plot: plot one of the recorded memory usage files (by default, the last one)
  • mprof list: list all recorded memory usage files in a user-friendly way
  • mprof clean: remove all recorded memory usage files
  • mprof rm: remove specific recorded memory usage files

Tracking forked child processes

In a multiprocessing context the main process will spawn child processes whose system resources are allocated separately from the parent process. This can lead to an inaccurate report of memory usage since by default only the parent process is being tracked. The mprof utility provides two mechanisms to track the usage of child processes: sum the memory of all children into the parent's usage, or track each child individually.

To create a report that combines memory usage of all the children and the parent, use the include_children flag in either the profile decorator or as a command line argument to mprof:

mprof run --include-children <script>

The second method tracks each child independently of the main process, serializing child rows by index to the output stream. Use the multiprocess flag and plot as follows:

mprof run --multiprocess <script>
mprof plot

This will create a plot using matplotlib similar to this:

https://cloud.githubusercontent.com/assets/745966/24075879/2e85b43a-0bfa-11e7-8dfe-654320dbd2ce.png

You can combine both the include_children and multiprocess flags to show the total memory of the program as well as each child individually. If using the API directly, note that the return from memory_usage will include the child memory in a nested list along with the main process memory.

Plot settings

By default, the command line call is set as the graph title. If you wish to customize it, you can use the -t option to manually set the figure title.

mprof plot -t 'Recorded memory usage'

You can also hide the function timestamps using the -n flag, such as

mprof plot -n

Trend lines and their numeric slopes can be plotted using the -s flag, such as

mprof plot -s

./images/trend_slope.png

The intended usage of the -s switch is to check the numerical slope of the labels over a significant time period:

  • >0: it might mean a memory leak.
  • ~0: if 0 or near 0, the memory usage may be considered stable.
  • <0: to be interpreted depending on the expected process memory usage patterns; it may also mean that the sampling period is too short.

The trend lines are for illustrative purposes and are plotted as (very) small dashed lines.
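
For intuition, the labeled slope can be approximated by an ordinary least-squares fit over the recorded (timestamp, MiB) samples. This is an illustrative sketch, not memory_profiler's actual implementation:

```python
# Least-squares slope of memory (MiB) over time (seconds):
# positive over a long window suggests a leak, near zero suggests stability.
def slope(samples):
    n = len(samples)
    mean_t = sum(t for t, _ in samples) / n
    mean_m = sum(m for _, m in samples) / n
    num = sum((t - mean_t) * (m - mean_m) for t, m in samples)
    den = sum((t - mean_t) ** 2 for t, _ in samples)
    return num / den

# Memory growing by ~1 MiB per second -> positive slope, possible leak.
leaky = [(0.0, 10.0), (1.0, 11.0), (2.0, 12.0), (3.0, 13.0)]
print(slope(leaky))  # 1.0
```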

Setting debugger breakpoints

It is possible to set breakpoints depending on the amount of memory used: you specify a threshold and, as soon as the program uses more memory than that threshold, execution stops and drops into the pdb debugger. To use it, decorate the function as in the previous sections with @profile and then run your script with the option -m memory_profiler --pdb-mmem=X, where X is a number representing the memory threshold in MB. For example:

$ python -m memory_profiler --pdb-mmem=100 my_script.py

will run my_script.py and step into the pdb debugger as soon as the code uses more than 100 MB in the decorated function.

API

memory_profiler exposes a number of functions to be used in third-party code.

memory_usage(proc=-1, interval=.1, timeout=None) returns the memory usage over a time interval. The first argument, proc, represents what should be monitored. This can be either the PID of a process (not necessarily a Python program), a string containing Python code to be evaluated, or a tuple (f, args, kw) containing a function and its arguments, to be evaluated as f(*args, **kw). For example,

>>> from memory_profiler import memory_usage
>>> mem_usage = memory_usage(-1, interval=.2, timeout=1)
>>> print(mem_usage)
    [7.296875, 7.296875, 7.296875, 7.296875, 7.296875]

Here I've told memory_profiler to get the memory consumption of the current process over a period of 1 second with a time interval of 0.2 seconds. As the PID I've given it -1, which is a special number (PIDs are usually positive) that means the current process; that is, I'm getting the memory usage of the current Python interpreter. Thus I'm getting around 7 MB of memory usage from a plain Python interpreter. If I try the same thing in the IPython console I get 29 MB, and in the IPython notebook it scales up to 44 MB.
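
Conceptually, memory_usage with a timeout behaves like the sampling loop below. The read_rss callable is a hypothetical placeholder standing in for the real psutil-based measurement, so treat this as a simplification rather than the library's actual code:

```python
import time

def sample_memory(read_rss, interval=0.2, timeout=1.0):
    """Collect one memory reading every `interval` seconds, for `timeout` seconds."""
    samples = []
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        samples.append(read_rss())
        time.sleep(interval)
    return samples

# With a constant fake reading, roughly timeout/interval values come back,
# matching the five 7.296875 entries in the session above.
usage = sample_memory(lambda: 7.296875, interval=0.2, timeout=1.0)
print(usage)
```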

If you'd like to get the memory consumption of a Python function, then you should specify the function and its arguments in the tuple (f, args, kw). For example:

>>> # define a simple function
>>> def f(a, n=100):
...     import time
...     time.sleep(2)
...     b = [a] * n
...     time.sleep(1)
...     return b
...
>>> from memory_profiler import memory_usage
>>> memory_usage((f, (1,), {'n' : int(1e6)}))

This will execute the code f(1, n=int(1e6)) and return the memory consumption during this execution.

REPORTING

The output can be redirected to a log file by passing an IO stream as a parameter to the decorator, like @profile(stream=fp):

>>> fp = open('memory_profiler.log', 'w+')
>>> @profile(stream=fp)
... def my_func():
...     a = [1] * (10 ** 6)
...     b = [2] * (2 * 10 ** 7)
...     del b
...     return a

For details, refer to examples/reporting_file.py.

Reporting via the logging module:

Sometimes it is very convenient to use the logging module, especially when you need a RotatingFileHandler.

The output can be redirected to the logging module simply by making use of the LogFile class of the memory_profiler module.

>>> from memory_profiler import LogFile
>>> import sys
>>> sys.stdout = LogFile('memory_profile_log')

Customized reporting:

Sending everything to the log file while running memory_profiler can be cumbersome; you can choose to keep only entries with increments by passing True to reportIncrementFlag, a parameter of the LogFile class of the memory_profiler module.

>>> from memory_profiler import LogFile
>>> import sys
>>> sys.stdout = LogFile('memory_profile_log', reportIncrementFlag=False)

For details, refer to examples/reporting_logger.py.
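
To see what increment-only filtering amounts to, here is a simplified stand-in (not the actual LogFile implementation) that drops rows whose increment column is zero before they reach the log:

```python
# Illustrative stand-in for LogFile-style filtering: a stream that forwards
# profiler rows to a sink, optionally keeping only rows whose Increment
# column is non-zero ("0.000 MiB" matches the profiler's output format).
class FilteringStream:
    def __init__(self, sink, increments_only=False):
        self.sink = sink
        self.increments_only = increments_only

    def write(self, text):
        for line in text.splitlines():
            if self.increments_only and " 0.000 MiB" in line:
                continue
            self.sink.append(line)

    def flush(self):
        pass

log = []
stream = FilteringStream(log, increments_only=True)
stream.write("     5   46.492 MiB    7.676 MiB       a = [1] * (10 ** 6)\n")
stream.write("     8   46.629 MiB    0.000 MiB       return a\n")
print(log)  # only the row with a non-zero increment survives
```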

IPython integration

After installing the module, if you use IPython, you can use the %mprun, %%mprun, %memit and %%memit magics.

For IPython 0.11+, you can use the module directly as an extension, with %load_ext memory_profiler

To activate it whenever you start IPython, edit the configuration file for your IPython profile, ~/.ipython/profile_default/ipython_config.py, to register the extension like this (If you already have other extensions, just add this one to the list):

c.InteractiveShellApp.extensions = [
    'memory_profiler',
]

(If the config file doesn't already exist, run ipython profile create in a terminal.)

It can then be used directly from IPython to obtain a line-by-line report using the %mprun or %%mprun magic command. In this case you can skip the @profile decorator and instead use the -f parameter, like this (note, however, that the function my_func must be defined in a file; it cannot have been defined interactively in the Python interpreter):

In [1]: from example import my_func, my_func_2

In [2]: %mprun -f my_func my_func()

or in cell mode:

In [3]: %%mprun -f my_func -f my_func_2
   ...: my_func()
   ...: my_func_2()

Another useful magic that we define is %memit, which is analogous to %timeit. It can be used as follows:

In [1]: %memit range(10000)
peak memory: 21.42 MiB, increment: 0.41 MiB

In [2]: %memit range(1000000)
peak memory: 52.10 MiB, increment: 31.08 MiB

or in cell mode (with setup code):

In [3]: %%memit l=range(1000000)
   ...: len(l)
   ...:
peak memory: 52.14 MiB, increment: 0.08 MiB

For more details, see the docstrings of the magics.

For IPython 0.10, you can install it by editing the IPython configuration file ~/.ipython/ipy_user_conf.py to add the following lines:

# These two lines are standard and probably already there.
import IPython.ipapi
ip = IPython.ipapi.get()

# These two are the important ones.
import memory_profiler
memory_profiler.load_ipython_extension(ip)

Frequently Asked Questions

  • Q: How accurate are the results?
  • A: This module gets the memory consumption by querying the operating system kernel about the amount of memory the current process has allocated, which might be slightly different from the amount of memory that is actually used by the Python interpreter. Also, because of how the garbage collector works in Python, the result might differ between platforms and even between runs.
  • Q: Does it work under Windows?
  • A: Yes, thanks to the psutil module.
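
The gap between what the kernel reports and what the interpreter itself allocates can be observed with the standard library's tracemalloc module, which counts bytes allocated by the interpreter rather than the process RSS that the default psutil backend reports:

```python
import tracemalloc

# Interpreter-level measurement: tracemalloc counts bytes the interpreter
# allocates, which generally differs from the OS-reported RSS because the
# allocator keeps freed blocks around and C extensions may allocate outside
# the Python memory allocator.
tracemalloc.start()
before, _ = tracemalloc.get_traced_memory()
data = [0] * 10**6          # a list of a million pointers, roughly 8 MB on 64-bit CPython
after, _ = tracemalloc.get_traced_memory()
tracemalloc.stop()

print((after - before) / 2**20, "MiB allocated at the Python level")
```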

Support, bugs & wish list

For support, please ask your question on Stack Overflow and add the *memory-profiling* tag. Send issues, proposals, etc. to GitHub's issue tracker.

If you've got questions regarding development, you can email me directly at [email protected]

http://fa.bianp.net/static/tux_memory_small.png

Development

Latest sources are available from github:

https://github.com/pythonprofilers/memory_profiler

Projects using memory_profiler

Benchy

IPython memory usage

PySpeedIT (uses a reduced version of memory_profiler)

pydio-sync (uses custom wrapper on top of memory_profiler)

Authors

This module was written by Fabian Pedregosa and Philippe Gervais inspired by Robert Kern's line profiler.

Tom added Windows support and speed improvements via the psutil module.

Victor added Python 3 support, bugfixes and general cleanup.

Vlad Niculae added the %mprun and %memit IPython magics.

Thomas Kluyver added the IPython extension.

Sagar UDAY KUMAR added Report generation feature and examples.

Dmitriy Novozhilov and Sergei Lebedev added support for tracemalloc.

Benjamin Bengfort added support for tracking the usage of individual child processes and plotting them.

Muhammad Haseeb Tariq fixed issue #152, which made the whole interpreter hang on functions that launched an exception.

Juan Luis Cano modernized the infrastructure and helped with various things.

License

BSD License, see file COPYING for full text.

Comments
  • WIP: Independent child process monitoring #118

    This pull request is a start at resolving #118 -- as mentioned in that issue, I've created a function _get_child_memory that iterates through all child processes yielding their memory. This is summed if the include_children flag is true.

    I've also updated the reference from #71 to wrap the entire yield in try/except and yield 0.0 if the race condition occurs; but this needs to be checked.

    Including this into the main mprof library is the next step. I've created a demo mpmprof which logs child and main process memory separately in the same .dat file, then plots them correctly. Here is an example:

    multiprocessing_example

    The next steps are to merge mpmprof with mprof with some flag, e.g. independent_children or something like that (suggestions welcome) to make it behave like mpmprof and to have the plot command correctly handle both cases.

    opened by bbengfort 14
  • mprof each child process independently

    Moved here as a feature request from the following SO question:

    http://stackoverflow.com/questions/38358881/how-to-profile-multiple-subprocesses-using-python-multiprocessing-and-memory-pro

    The mprof script allows you to track memory usage of a process over time, and includes a -C flag which will also sum up the memory usage of all child processes (forks) spawned by the primary process.

    Instead of summation, I would like the mprof script to include a flag that will identify each process by pid in the generated .dat file, allowing the plot command to visualize each process' memory usage independently of each other, over time.

    opened by bbengfort 13
  • Not able to plot the graph from "./mplot run"

    I've done "mprof run --python " and after that I am trying to plot the graph ("mprof plot"), but I don't see any graph being plotted.

    vikas@host:/home/vikas/memory_profiler-0.32$ ./mprof run --python ../asl
    mprof: Sampling memory every 0.1s

    running as a Python program...

    vikas@host:/home/vikas/memory_profiler-0.32$ cat mprofile_20150224005550.dat
    CMDLINE python ../asl
    MEM 1.316406 1424768150.5671
    MEM 6.539062 1424768150.6675
    MEM 8.812500 1424768150.7678
    MEM 8.812500 1424768150.8681
    MEM 8.812500 1424768150.9684

    opened by kumarvikas2605 13
  • Can the plot be scaled to show fine detail?

    I have an issue with making the plot from a mprof run legible. This is how mine looks: plot

    I'd like to be able to stretch/zoom the plot - make it much larger, so the function markers don't overlap.

    I have tried changing matplotlib's savefig.dpi and figure.figsize values, and both result in the graph being scaled, rather than the canvas being larger and the line/function markers becoming thinner and separated, and the text smaller.

    I tried a really wide figure using these settings in my matplotlibrc:

    figure.figsize   : 200, 10    # figure size in inches
    savefig.dpi      : 100
    

    but it still plotted at 1400x600.

    Do you know a way to make this possible?

    opened by pbowyer 12
  • Add support for coroutine profiling

    This PR adds support for using @profile decorator on coroutines (both generator based coroutines and ones defined with async def statements). Support for Python versions <3.4 is dropped.

    opened by d-ryzhikov 10
  • mprof disables usage of __file__

    When running any script with mprof, the __file__ in the profiled program is always the path to memory_profiler.py.

    Example

    Let's say that you profile a script that reads some data file from a relative location specified using __file__:

    # script_to_profile.py
    import os
    data_path = os.path.join(os.path.dirname(__file__), 'data.csv')
    print(data_path)
    

    executing this as python <path/to/script/>script_to_profile.py prints <path/to/script/>data.csv

    however, running mprof run <path/to/script/>script_to_profile.py prints <path/to/site-packages/>data.csv

    this makes it impossible to profile scripts that rely on specifying paths to static files using __file__

    bug 
    opened by mgfinch 9
  • large negative increment values in line profiler

    I am using memory_profiler as follows:

    import memory_profiler as mp

    an_instance = AClass()
    precision=1
    backend='psutil'
    with open("mem-profile.txt", 'w') as stream:
      prof = mp.LineProfiler(backend=backend)
      res = prof(an_instance.window)(template)
      mp.show_results(prof, stream=stream, precision=precision)
    return res
    

    However, the results seem strange: I see large negative increments, much larger in magnitude than any of the positive increments:

    Line #    Mem usage    Increment
    ================================
        51    192.2 MiB    192.2 MiB
        52                          
        53                          
        54                          
        55                          
        56                          
        57                          
        58    192.2 MiB      0.0 MiB
        59    736.4 MiB     -8.1 MiB
        60    736.4 MiB     -6.8 MiB
        61    736.4 MiB     -7.4 MiB
        62    736.4 MiB     -8.1 MiB
        63                          
        64    736.4 MiB     -7.7 MiB
        65    736.4 MiB     -8.0 MiB
        66    736.4 MiB     -7.9 MiB
        67    736.4 MiB     -8.1 MiB
        68                          
        69    736.4 MiB     -7.9 MiB
        70    736.4 MiB     -8.0 MiB
        71    736.4 MiB     -8.1 MiB
        72    736.4 MiB     -8.1 MiB
        73                          
        74    736.4 MiB     -8.1 MiB
        75    736.4 MiB     -8.1 MiB
        76    736.4 MiB     -8.1 MiB
        77                          
        78                          
        79                          
        80                          
        81    736.4 MiB    -47.7 MiB
        82    736.4 MiB  -1028.5 MiB
        83    736.4 MiB  -5789.3 MiB
        84    736.4 MiB  -4799.5 MiB
        85                          
        86    736.4 MiB  -4796.0 MiB
        87                          
        88                          
        89    736.4 MiB  -4789.0 MiB
        90                          
        91                          
        92                          
        93                          
        94    736.4 MiB  -4790.4 MiB
        95    599.3 MiB      6.2 MiB
        96    599.3 MiB     85.1 MiB
        97    599.3 MiB     48.6 MiB
        98                          
        99    599.3 MiB      0.0 MiB
       100    599.3 MiB     42.1 MiB
       101    599.3 MiB      0.0 MiB
       102    599.3 MiB      2.3 MiB
       103                          
       104    736.4 MiB  -4782.2 MiB
       105    736.4 MiB  -4767.1 MiB
       106    736.4 MiB  -4795.4 MiB
       107    736.4 MiB  -4797.3 MiB
       108                          
       109    736.4 MiB  -4605.9 MiB
       110                          
       111                          
       112                          
       113    736.4 MiB      0.0 MiB
       114    753.1 MiB   -807.8 MiB
       115    753.1 MiB -22614.2 MiB
       116    753.1 MiB -21805.3 MiB
       117    753.1 MiB -21806.2 MiB
       118    753.1 MiB -21806.2 MiB
       119    753.1 MiB -12706.5 MiB
       120                          
       121    753.1 MiB -12691.2 MiB
       122    753.1 MiB -12707.0 MiB
       123                          
       124    753.1 MiB  -9099.8 MiB
       125    753.1 MiB   -807.8 MiB
       126    753.1 MiB   -807.8 MiB
       127                          
       128    753.1 MiB   -807.8 MiB
       129    753.1 MiB   -807.8 MiB
       130    753.1 MiB   -807.8 MiB
       131    753.1 MiB   -807.8 MiB
       132                          
       133    753.1 MiB   -807.8 MiB
       134                          
       135    753.1 MiB   -807.8 MiB
       136    753.1 MiB   -807.8 MiB
       137                          
       138                          
       139    753.1 MiB   -807.7 MiB
       140    753.1 MiB   -807.7 MiB
       141                          
       142    753.1 MiB   -807.8 MiB
       143                          
       144    752.9 MiB     -0.2 MiB
       145                          
       146                          
       147                          
       148                          
       149                          
       150                          
       151                          
       152    752.9 MiB      0.0 MiB
    
    
    opened by nathantsoi 9
  • ENH added optional ``tracemalloc`` backend

    It is now possible to use tracemalloc to analyze the memory usage of Python code on Python 3.4 and above. tracemalloc allows for more precise measurements compared to psutil. However, it only works for pure Python code or for C extensions allocating memory via PyMem_Alloc.

    To use the new backend, run memory_profiler with the --backend option, e.g.

    $ python -m memory_profiler --backend=tracemalloc script.py
    

    Also, if you use memory_profiler with the imported decorator, you can specify the backend as an argument to the decorator function:

    @profile(backend='tracemalloc')
    def f(n):
        a = [0] * n
        return a
    

    The backend parameter to @profile takes priority over --backend.

    Note that using tracemalloc in mprof and IPython magic is not supported at the moment.

    opened by demiurg906 9
  • [not bug] How to run mprof if I have 2 versions of python

    Hello. Sorry if this question is silly or strange but I couldn't Google this question. On my mac I have pre-installed python 2.7.10 and installed by myself 3.5.1.

    When I use mprof run mem_test.py it runs on 2.7. How can I run it with the version 3 interpreter? I tried python3 -m mprof run mem_test.py, but got /Library/Frameworks/Python.framework/Versions/3.5/bin/python3: No module named mprof

    regards, Artem

    opened by vuklip 9
  • mprof does not work with @profile in python3

    According to:

    https://github.com/fabianp/memory_profiler/blame/master/README.rst#L153

    if you want that the timestamps for function calls are recorded, you don't need to import the profile decorator from memory_profiler, you need to comment out from your script the line:

    # from memory_profiler import profile But if I do that following the tutorial at: http://fa.bianp.net/blog/2014/plot-memory-usage-as-a-function-of-time/

    (acs@dellx) /tmp $ mprof run test1.py 
    mprof: Sampling memory every 0.1s
    Traceback (most recent call last):
      File "test1.py", line 7, in <module>
        @profile
    NameError: name 'profile' is not defined
    
    

    And if I import profile with "from memory_profiler import profile" there are no data about functions calls in the plot.

    opened by acs 8
  • Possible race condition with psutil

    Hi again,

    I think that I may have found a possible race condition when counting the memory with psutil of a process using the include_children option. The problem (I think) is in this piece of code in _get_memory:

    if include_children:
        for p in process.get_children(recursive=True):
            mem += p.get_memory_info()[0] / _TWO_20
    

    The method get_children returns a list that is iterated over to calculate the total memory. It may happen, though, that one of the child processes dies or finishes before the sum completes, resulting in an error like this:

    Reading configuration from '/pica/h1/guilc/repos/facs/tests/data/bin/fastq_screen.conf'
    Using 1 threads for searches
    Adding database phiX
    Processing /pica/h1/guilc/repos/facs/tests/data/synthetic_fastq/simngs_phiX_100.fastq
    Output file /pica/h1/guilc/repos/facs/tests/data/tmp/simngs_phiX_100_screen.txt already exists - skipping
    Processing complete
    Process MemTimer-2:
    Traceback (most recent call last):
      File "/sw/comp/python/2.7_kalkyl/lib/python2.7/multiprocessing/process.py", line 232, in _bootstrap
        self.run()
      File "/pica/h1/guilc/.virtualenvs/facs/lib/python2.7/site-packages/memory_profiler.py", line 124, in run
        include_children=self.include_children)
      File "/pica/h1/guilc/.virtualenvs/facs/lib/python2.7/site-packages/memory_profiler.py", line 52, in _get_memory
        mem += p.get_memory_info()[0] / _TWO_20
      File "/pica/h1/guilc/.virtualenvs/facs/lib/python2.7/site-packages/psutil/__init__.py", line 758, in get_memory_info
        return self._platform_impl.get_memory_info()
      File "/pica/h1/guilc/.virtualenvs/facs/lib/python2.7/site-packages/psutil/_pslinux.py", line 470, in wrapper
        raise NoSuchProcess(self.pid, self._process_name)
    NoSuchProcess: process no longer exists (pid=17442)
    

    It happens randomly, and can be solved by wrapping the sum in a try/except statement:

    if include_children:
        for p in process.get_children(recursive=True):
            try:
                mem += p.get_memory_info()[0] / _TWO_20
            except NoSuchProcess:
                pass
    

    I'm not sure that this is the best solution though... any comments/ideas? @fabianp @brainstorm

    Thanks!

    opened by guillermo-carrasco 8
  • include-children in decorator

    I am not able to create a report that combines memory usage of all the children and the parent using the profile decorator.

    It says the include_children flag does not exist.

    I tried the following:

    @profile(include_children=True)
    def myfunc():
        pass
    
    
    
    opened by sowuy 0
  • The @profile decorator will profile another decorator when chaining multiple decorators into a function.

    Stacking multiple decorators together will result in unexpected behavior. A trivial example:

    from functools import wraps
    from random import random
    import time
    
    from memory_profiler import profile
    
    def timeit(f):
        @wraps(wrapped=f)
        def wrapper(*args, **kw):
            ts = time.time()
            result = f(*args, **kw)
            te = time.time()
            print(f'func: {f.__name__} took: {te-ts:2.4f} sec')
            return result
    
        return wrapper
    
    
    @profile
    @timeit
    def to_profile(n):
        arr = [random() for i in range(n)]
        return sum(arr)
    
    
    to_profile(1_000)
    

    What I expect to see:

    Line #    Mem usage    Increment  Occurrences   Line Contents
    =============================================================
        20     48.8 MiB     48.8 MiB           1   @timeit
        21                                         @profile
        22                                         def to_profile(n):
        23     48.8 MiB      0.0 MiB        1003       arr = [random() for i in range(n)]
        24     48.8 MiB      0.0 MiB           1       return sum(arr)
    

    What we actually get:

    Line #    Mem usage    Increment  Occurrences   Line Contents
    =============================================================
         9     49.0 MiB     49.0 MiB           1       @wraps(wrapped=f)
        10                                             def wrapper(*args, **kw):
        11     49.0 MiB      0.0 MiB           1           ts = time.time()
        12     49.0 MiB      0.0 MiB           1           result = f(*args, **kw)
        13     49.0 MiB      0.0 MiB           1           te = time.time()
        14     49.0 MiB      0.0 MiB           1           print(f'func: {f.__name__} took: {te-ts:2.4f} sec')
        15     49.0 MiB      0.0 MiB           1           return result
    

    I realized while checking this that the sample timeit decorator has the same issue and is timing the @profile rather than the underlying to_profile.

    opened by stenbein 1
  • Inconsistent result between line profiler and plot

    Hi, I am trying to profile a simple function that loads a big csv file into pandas. After loading, pandas memory_usage shows that its size is 6.8GB (screenshot).

    However, if I run the profiler it says less than 1GB (screenshot).

    But, running mprof plot afterwards, the plot shows more than 6GB, although still less than pandas (screenshot).

    Any idea what is happening?

    opened by SergioG-M 0
  • fit plotting data to xlim

    Hi, I changed the plotting data to fit the xlim window, as I found that the y-axis window was not nicely rescaled.

    Related to https://github.com/pythonprofilers/memory_profiler/pull/105#issuecomment-154342342

    opened by githubfzq 0
  • How to test memory usage for a time-loop function?


    Hi there, I wrote a time-loop function in my script which looks like this:

    import numpy as np
    
    def time_loop():
        for i in range(0,10000000): # assume this is a time loop from x1 year x2 month x3 day to y1 year y2 month y3 day
            # some codes
            ....
    
    if __name__ == '__main__':
        time_loop()
    

    And I want to measure the memory usage of every line within the function time_loop. How can I do that? Is it simply the following:

    import numpy as np
    from memory_profiler import profile
    
    @profile
    def time_loop():
        for i in range(0,10000000): # assume this is a time loop from x1 year x2 month x3 day to y1 year y2 month y3 day
            # some codes
            ....
    
    if __name__ == '__main__':
        time_loop()
    

    Then python -m memory_profiler myscript.py? Thanks!

    opened by xushanthu-2014 0
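Decorating the function as shown does work, and the script is then run with python -m memory_profiler myscript.py. But line-by-line accounting over ten million iterations is very slow, and per-line numbers are hard to interpret inside a loop; sampling every N iterations is often more practical. A stdlib-only sketch of that idea (the loop body is a hypothetical stand-in for the one in the question):

```python
import tracemalloc

def time_loop(n=1_000_000, n_samples=10):
    """Run a loop and record traced memory every n // n_samples steps."""
    tracemalloc.start()
    acc = []                      # stand-in for per-time-step results
    step = n // n_samples
    samples = []
    for i in range(n):
        acc.append(i)             # stand-in for the real loop body
        if (i + 1) % step == 0:
            current, _ = tracemalloc.get_traced_memory()
            samples.append(current)
    tracemalloc.stop()
    return samples

samples = time_loop()
print(['%.1f MB' % (s / 1e6) for s in samples])
```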
  • No increment in memory usage


    Hi! From some of the previous issues it seems the behavior is a bit odd in the presence of loops. But I don't understand the lack of memory increment in my log at lines 241-246. Each of those assignments should consume a pretty substantial amount of memory. Any suggestions?

    (screenshot)

    opened by SubhankarGhosh 0
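One common cause of flat increments in a loop (a guess about this report, since the referenced log is only available as a screenshot) is that rebinding the same name frees the previous object, so net allocated memory barely moves even though each assignment is large. The stdlib tracemalloc module shows the effect:

```python
import tracemalloc

tracemalloc.start()
baseline, _ = tracemalloc.get_traced_memory()

# Each iteration allocates an ~8 MB list, but rebinding `big` frees the
# previous one, so only a single copy is ever alive at a time.
for _ in range(5):
    big = [0] * 1_000_000
    current, _ = tracemalloc.get_traced_memory()

growth = current - baseline
print(f'net growth after 5 iterations: {growth / 1e6:.1f} MB')
```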
Related projects

• Pympler: development tool to measure, monitor and analyze the memory behavior of Python objects in a running Python application. (996 stars, updated Jan 1, 2023)
• Flask Monitoring Dashboard: automatically monitor the evolving performance of Flask/Python web services. (663 stars, updated Dec 29, 2022)
• System monitor: a Python-based real-time system monitoring tool. (Sachit Yadav, 4 stars, updated Feb 11, 2022)
• Linux/OSX/FreeBSD resource monitor. (9k stars, updated Jan 8, 2023)
• Watch your Docker registry project size, then monitor it with Grafana. (Nova Kwok, 33 stars, updated Apr 5, 2022)
• Scalene: a high-performance, high-precision CPU and memory profiler for Python. (Emery Berger, 138 stars, updated Dec 30, 2022)
• pyinstrument: call stack profiler for Python; shows you why your code is slow. (Joe Rickerby, 5k stars, updated Jan 1, 2023)
• Prometheus Python Client: the official Python 2 and 3 Prometheus instrumentation library for Python applications. (Prometheus, 3.2k stars, updated Jan 7, 2023)
• psutil: cross-platform library for process and system monitoring in Python. (Giampaolo Rodola, 9k stars, updated Jan 2, 2023)
• py-spy: sampling profiler for Python programs. (Ben Frederickson, 9.5k stars, updated Jan 8, 2023)
• Yappi: Yet Another Python Profiler, thread-, coroutine- and greenlet-aware, written in C. (Sümer Cip, 1k stars, updated Jan 1, 2023)
• line_profiler and kernprof: line-by-line profiling for Python. (OpenPyUtils, 1.6k stars, updated Dec 31, 2022)
• vprof: visual profiler providing rich and interactive visualizations of running time and memory usage. (Nick Volynets, 3.9k stars, updated Dec 19, 2022)
• profiling: was an interactive continuous Python profiler; no longer maintained, its authors recommend py-spy instead. (What! Studio, 3k stars, updated Dec 27, 2022)
• pyheat: pprofile + matplotlib, Python programs profiled as a heatmap. (Vishwas B Sharma, 735 stars, updated Dec 27, 2022)
• memory_profiler: monitor memory usage of Python code. (3.7k stars, updated Dec 30, 2022)
• memory_profiler: monitor memory usage of Python code. (Fabian Pedregosa, 80 stars, updated Nov 18, 2022)
• Simple-DMA: a simple Dual Memory Architecture for classification, built from a CNN for the deep memory and linear regression for the fast memory. (1 star, updated Jan 27, 2022)
• Raspberry Pi + CircuitSetup power monitor: publish whole-home electrical usage to MQTT. (Eric Tsai, 10 stars, updated Jul 25, 2022)