AntroPy: entropy and complexity of (EEG) time-series in Python

Overview

AntroPy is a Python 3 package providing several time-efficient algorithms for computing the complexity of time-series. It can be used, for example, to extract features from EEG signals.

Documentation

Installation

pip install antropy
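AntroPy is also packaged on conda-forge (see the conda-forge comment further down), so conda users should be able to install it with:

```shell
conda install -c conda-forge antropy
```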

Dependencies

Functions

Entropy

import numpy as np
import antropy as ant
np.random.seed(1234567)
x = np.random.normal(size=3000)
# Permutation entropy
print(ant.perm_entropy(x, normalize=True))
# Spectral entropy
print(ant.spectral_entropy(x, sf=100, method='welch', normalize=True))
# Singular value decomposition entropy
print(ant.svd_entropy(x, normalize=True))
# Approximate entropy
print(ant.app_entropy(x))
# Sample entropy
print(ant.sample_entropy(x))
# Hjorth mobility and complexity
print(ant.hjorth_params(x))
# Number of zero-crossings
print(ant.num_zerocross(x))
# Lempel-Ziv complexity
print(ant.lziv_complexity('01111000011001', normalize=True))
0.9995371694290871
0.9940882825422431
0.9999110978316078
2.015221318528564
2.198595813245399
(1.4313385010057378, 1.215335712274099)
1531
1.3597696150205727

Fractal dimension

# Petrosian fractal dimension
print(ant.petrosian_fd(x))
# Katz fractal dimension
print(ant.katz_fd(x))
# Higuchi fractal dimension
print(ant.higuchi_fd(x))
# Detrended fluctuation analysis
print(ant.detrended_fluctuation(x))
1.0310643385753608
5.954272156665926
2.005040632258251
0.47903505674073327

Execution time

Here are some benchmarks computed on a MacBook Pro (2020).

import numpy as np
import antropy as ant
np.random.seed(1234567)
x = np.random.rand(1000)
# Entropy
%timeit ant.perm_entropy(x)
%timeit ant.spectral_entropy(x, sf=100)
%timeit ant.svd_entropy(x)
%timeit ant.app_entropy(x)  # Slow
%timeit ant.sample_entropy(x)  # Numba
# Fractal dimension
%timeit ant.petrosian_fd(x)
%timeit ant.katz_fd(x)
%timeit ant.higuchi_fd(x) # Numba
%timeit ant.detrended_fluctuation(x) # Numba
106 µs ± 5.49 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
138 µs ± 3.53 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
40.7 µs ± 303 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
2.44 ms ± 134 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
2.21 ms ± 35.4 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
23.5 µs ± 695 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
40.1 µs ± 2.09 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
13.7 µs ± 251 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
315 µs ± 10.7 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

Development

AntroPy was created and is maintained by Raphael Vallat. Contributions are more than welcome, so feel free to contact me, open an issue, or submit a pull request!

To see the code or report a bug, please visit the GitHub repository.

Note that this program is provided with NO WARRANTY OF ANY KIND. Always double check the results.

Acknowledgement

Several functions of AntroPy were adapted from:

All the credit goes to the authors of these excellent packages.

Comments
  • Improve performance in `_xlog2x`

    Follow up to #3

    Using np.nan_to_num is advantageous because it makes use of numpy's vectorization, instead of 'if x == 0', which applies the test pointwise.

    enhancement 
    opened by jftsang 7
  • modify the _embed function to fit the 2d input

    Modify the _embed function so that it can take a 2D array as input: pre-store the sliced signals in a list to speed up the concatenation, pre-compute the slice indices to reduce work inside the loop, and use a vectorized operation to slice all input signals at once.

    Performance (1000 time points per signal, decay=1):

    • 1e3 signals, order=3: 0.01 s
    • 1e4 signals, order=3: 0.1 s
    • 1e4 signals, order=10: 0.85 s
    • 1e5 signals, order=3: 1.11 s
    • 1e5 signals, order=10: 9.82 s
    • 5e5 signals, order=3: 67 s

    enhancement 
    opened by cheliu-computation 6
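A sketch of the kind of 2D time-delay embedding the PR describes (the helper name and exact shapes here are hypothetical, shown only to illustrate the vectorized slicing; AntroPy's actual `_embed` may differ):

```python
import numpy as np

def embed_2d(x, order=3, delay=1):
    """Time-delay embedding of a batch of signals.

    x has shape (n_signals, n_times); the result has shape
    (n_signals, n_times - (order - 1) * delay, order).
    """
    x = np.atleast_2d(x)
    n = x.shape[1] - (order - 1) * delay
    # Pre-computed slice offsets; each slice covers all signals at once
    return np.stack([x[:, i * delay : i * delay + n] for i in range(order)], axis=-1)
```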
  • Handle the limit of p = 0 in p log2 p

    This patch defines a helper function, _xlog2x(x), that calculates x * log2(x) but handles the case x == 0 by returning 0 rather than nan. This is needed if the power spectrum has any component that is exactly zero: in particular, if the f = 0 component is zero.

    opened by jftsang 6
  • RuntimeWarning in _xlogx when x has zero values

    In the version currently on GitHub, _xlogx uses numpy.where to return 0 where x == 0. However, numpy.where still applies the log function to all values of x before selecting, resulting in runtime warnings.

    To avoid those issues, I would suggest changing the code to something like

        xlogx = np.zeros_like(x)
        valid = np.nonzero(x)
        xlogx[valid] = x[valid] * np.log(x[valid]) / np.log(base)
        return xlogx
    

    This strictly applies the function to the nonzero elements of x.

    If this looks good to you I could submit a PR. Let me know.

    enhancement 
    opened by guiweber 4
  • Fixed division by zero in linear regression function (with test)

    Hi,

    Just extending the information provided in the previous PR (https://github.com/raphaelvallat/antropy/pull/20), I provide a series of screenshots about the problem I was facing when computing the detrended fluctuation of my signals.

    See below one of the segments of my signal where the method fails:

    Screenshot from 2022-11-17 08-24-32

    Results of the tests with this signal: Screenshot from 2022-11-17 08-36-07

    After the proposed solution: Screenshot from 2022-11-17 09-27-52

    I hope these new commits and test help to clarify the issue.

    Thanks, Tino

    enhancement 
    opened by Arritmic 3
  • conda-forge package

    Hello, I've added antropy to conda-forge; please let me know if you'd like to be added as a co-maintainer for the respective feedstock. It could also make sense to amend the installation instructions, WDYT?

    enhancement 
    opened by hoechenberger 3
  • Allow readonly arrays for higuchi_fd

    The current behavior of this method changes the datatype of x, as np.asarray is a wrapper for np.array with copy=False (see here).

    I believe that this is (kind of) unexpected behavior; e.g., a user would not expect the datatype to change when calculating a feature. Therefore, I suggest giving the user the option of not changing the datatype by adding a copy flag to the higuchi_fd function parameters. By default this flag is False, resulting in the same behavior as now (i.e., the datatype of x is changed).

    When benchmarking the speed of the code, I observed no real difference. Perhaps we should even remove the flag and just use np.array instead of np.asarray?

    In [11]: x = np.random.rand(10_000).astype("float32")
    
    In [12]: %timeit ant.higuchi_fd(x)
    246 µs ± 5.24 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
    
    In [13]: x = np.random.rand(10_000).astype("float32")
    
    In [14]: %timeit ant.higuchi_fd(x, copy=True)
    242 µs ± 93.4 ns per loop (mean ± std. dev. of 7 runs, 1000 loops each)
    

    PS: I really like the fast functions in this library :smile:

    enhancement 
    opened by jvdd 3
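The difference the comment hinges on can be demonstrated in a few lines (a generic NumPy illustration, not AntroPy code):

```python
import numpy as np

x = np.arange(5, dtype="float64")

# np.asarray is a no-copy passthrough when the dtype already matches...
assert np.asarray(x, dtype="float64") is x

# ...whereas np.array copies by default, leaving the caller's array alone
y = np.array(x, dtype="float64")
assert y is not x
```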
  • The most "generic" entropy measure

    Hi,

    Is there any review paper available that overviews the performance of different entropy measures which are implemented in this library for the actual electrophysiological data? Also, what would be the measure with the smallest number of non-optional parameters that is also guaranteed to work in most cases?

    Thank you!

    documentation question 
    opened by antelk 3
  • Fixed division by zero in linear regression function

    I have been facing problems when computing the detrended fluctuation analysis (DFA) using detrended_fluctuation(x) when the input array is relatively small (sub-windows of windows).

    In some cases, len(fluctuations) = 1, causing den = 0 in the linear regression function. This fix solves the issue for me, giving the expected results.

    bug enhancement 
    opened by Arritmic 2
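The failure mode is easy to reproduce with a plain least-squares slope; a guarded version along the lines of the fix might look like this (hypothetical helper, not the actual patch):

```python
import numpy as np

def fit_slope(x, y):
    """Least-squares slope that guards against a zero denominator, which
    occurs with fewer than two distinct x values (e.g. len(fluctuations) == 1
    in the DFA case above)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n = x.size
    den = n * np.sum(x**2) - np.sum(x) ** 2
    if den == 0:
        return 0.0  # slope undefined; fall back instead of dividing by zero
    return (n * np.sum(x * y) - np.sum(x) * np.sum(y)) / den
```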
  • Zero-crossings

    Hi Raph,

    Was doing some cross-checking and I have a quick question to clear up a doubt in my mind regarding the counting of the number of inversions:

    https://github.com/raphaelvallat/antropy/blob/88fea895dc464fd075f634ac81f2ae4f46b60cac/antropy/entropy.py#L908

    Shouldn't it be np.diff(np.signbit(np.diff(...))) here? I.e., counting the changes in sign of the consecutive differences, rather than the difference of the sign of the consecutive samples 🤔

    question 
    opened by DominiqueMakowski 2
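The two counting schemes in question can be compared directly (a small NumPy illustration; the toy signal and counts are for this example only):

```python
import numpy as np

x = np.array([1.0, -1.0, 0.5, 2.0, -0.5])

# Current scheme: sign changes between consecutive samples (zero-crossings).
# np.diff on a boolean array returns True where consecutive elements differ.
crossings = np.count_nonzero(np.diff(np.signbit(x)))

# Suggested scheme: sign changes of the consecutive differences,
# i.e. changes in the signal's direction rather than its sign
direction_changes = np.count_nonzero(np.diff(np.signbit(np.diff(x))))
```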
  • Error importing with 32-bit windows 7

    Hi there,

    I've been playing with antropy on my main home machine and have come to use the same code on a 32-bit Windows 7 machine, which has incurred an import error.

    Currently using Python 3.8.10 32-bit. Can this be fixed, or do I likely need to move to a 64-bit version?

    The traceback is as follows:

    Python 3.8.10 (tags/v3.8.10:3d8993a, May  3 2021, 11:34:34) [MSC v.1928 32 bit (Intel)] on win32
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import antropy
    Traceback (most recent call last):
      File "C:\Python38\lib\site-packages\numba\core\errors.py", line 776, in new_error_context
        yield
      File "C:\Python38\lib\site-packages\numba\core\lowering.py", line 235, in lower_block
        self.lower_inst(inst)
      File "C:\Python38\lib\site-packages\numba\core\lowering.py", line 380, in lower_inst
        val = self.lower_assign(ty, inst)
      File "C:\Python38\lib\site-packages\numba\core\lowering.py", line 556, in lower_assign
        return self.lower_expr(ty, value)
      File "C:\Python38\lib\site-packages\numba\core\lowering.py", line 1084, in lower_expr
        res = self.lower_call(resty, expr)
      File "C:\Python38\lib\site-packages\numba\core\lowering.py", line 815, in lower_call
        res = self._lower_call_normal(fnty, expr, signature)
      File "C:\Python38\lib\site-packages\numba\core\lowering.py", line 1055, in _lower_call_normal
        res = impl(self.builder, argvals, self.loc)
      File "C:\Python38\lib\site-packages\numba\core\base.py", line 1194, in __call__
        res = self._imp(self._context, builder, self._sig, args, loc=loc)
      File "C:\Python38\lib\site-packages\numba\core\base.py", line 1224, in wrapper
        return fn(*args, **kwargs)
      File "C:\Python38\lib\site-packages\numba\np\unsafe\ndarray.py", line 31, in codegen
        res = _empty_nd_impl(context, builder, arrty, shapes)
      File "C:\Python38\lib\site-packages\numba\np\arrayobj.py", line 3468, in _empty_nd_impl
        arrlen_mult = builder.smul_with_overflow(arrlen, s)
      File "C:\Python38\lib\site-packages\llvmlite\ir\builder.py", line 50, in wrapped
        raise ValueError("Operands must be the same type, got (%s, %s)"
    ValueError: Operands must be the same type, got (i32, i64)
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "C:\Python38\lib\site-packages\antropy\__init__.py", line 4, in <module>
        from .fractal import *
      File "C:\Python38\lib\site-packages\antropy\fractal.py", line 304, in <module>
        def _dfa(x):
      File "C:\Python38\lib\site-packages\numba\core\decorators.py", line 226, in wrapper
        disp.compile(sig)
      File "C:\Python38\lib\site-packages\numba\core\dispatcher.py", line 979, in compile
        cres = self._compiler.compile(args, return_type)
      File "C:\Python38\lib\site-packages\numba\core\dispatcher.py", line 141, in compile
        status, retval = self._compile_cached(args, return_type)
      File "C:\Python38\lib\site-packages\numba\core\dispatcher.py", line 155, in _compile_cached
        retval = self._compile_core(args, return_type)
      File "C:\Python38\lib\site-packages\numba\core\dispatcher.py", line 168, in _compile_core
        cres = compiler.compile_extra(self.targetdescr.typing_context,
      File "C:\Python38\lib\site-packages\numba\core\compiler.py", line 686, in compile_extra
        return pipeline.compile_extra(func)
      File "C:\Python38\lib\site-packages\numba\core\compiler.py", line 428, in compile_extra
        return self._compile_bytecode()
      File "C:\Python38\lib\site-packages\numba\core\compiler.py", line 492, in _compile_bytecode
        return self._compile_core()
      File "C:\Python38\lib\site-packages\numba\core\compiler.py", line 471, in _compile_core
        raise e
      File "C:\Python38\lib\site-packages\numba\core\compiler.py", line 462, in _compile_core
        pm.run(self.state)
      File "C:\Python38\lib\site-packages\numba\core\compiler_machinery.py", line 343, in run
        raise patched_exception
      File "C:\Python38\lib\site-packages\numba\core\compiler_machinery.py", line 334, in run
        self._runPass(idx, pass_inst, state)
      File "C:\Python38\lib\site-packages\numba\core\compiler_lock.py", line 35, in _acquire_compile_lock
        return func(*args, **kwargs)
      File "C:\Python38\lib\site-packages\numba\core\compiler_machinery.py", line 289, in _runPass
        mutated |= check(pss.run_pass, internal_state)
      File "C:\Python38\lib\site-packages\numba\core\compiler_machinery.py", line 262, in check
        mangled = func(compiler_state)
      File "C:\Python38\lib\site-packages\numba\core\typed_passes.py", line 396, in run_pass
        lower.lower()
      File "C:\Python38\lib\site-packages\numba\core\lowering.py", line 138, in lower
        self.lower_normal_function(self.fndesc)
      File "C:\Python38\lib\site-packages\numba\core\lowering.py", line 192, in lower_normal_function
        entry_block_tail = self.lower_function_body()
      File "C:\Python38\lib\site-packages\numba\core\lowering.py", line 221, in lower_function_body
        self.lower_block(block)
      File "C:\Python38\lib\site-packages\numba\core\lowering.py", line 235, in lower_block
        self.lower_inst(inst)
      File "C:\Python38\lib\contextlib.py", line 131, in __exit__
        self.gen.throw(type, value, traceback)
      File "C:\Python38\lib\site-packages\numba\core\errors.py", line 786, in new_error_context
        raise newerr.with_traceback(tb)
    numba.core.errors.LoweringError: Failed in nopython mode pipeline (step: native lowering)
    Operands must be the same type, got (i32, i64)
    
    File "lib\site-packages\antropy\fractal.py", line 313:
    def _dfa(x):
        <source elided>
    
        for i_n, n in enumerate(nvals):
        ^
    
    During: lowering "array.70 = call empty_func.71(size_tuple.69, func=empty_func.71, args=(Var(size_tuple.69, fractal.py:313),), kws=[], vararg=None, target=None)" at C:\Python38\lib\site-packages\antropy\fractal.py (313)
    >>>
    
    invalid 
    opened by LMBooth 2
  • modify the entropy function be able to compute vectorizly

    Hi, I have used your package to process ECG signals and it achieved good results in classifying different heart diseases. Thanks a lot!

    However, so far, these functions can only deal with one-dimensional signals like array(~, 1). May I try to modify the code so that it can process data like sklearn.preprocessing.scale(X, axis=xx)? That would be more efficient for big arrays, because we would not need to run a for loop or similar.

    My email is [email protected], welcome to discuss with me!

    enhancement 
    opened by cheliu-computation 2
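Until the functions accept 2D input natively, a common stop-gap is `np.apply_along_axis`, which hides the per-row loop (sketched here with a hypothetical stand-in 1D feature, since the exact AntroPy call is beside the point):

```python
import numpy as np

def toy_feature(x):
    """Stand-in 1D feature (hypothetical): entropy of an FFT power spectrum."""
    p = np.abs(np.fft.rfft(x)) ** 2
    p = p / p.sum()
    return -np.sum(p * np.log2(p + 1e-12))

rng = np.random.default_rng(42)
X = rng.normal(size=(8, 256))  # eight signals of 256 samples each

# One call per row, without writing the loop by hand
features = np.apply_along_axis(toy_feature, axis=1, arr=X)
```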
  • Different results of different SampEn implementations

    My own implementation:

        import math

        import numpy as np
        from scipy.spatial.distance import pdist

        def sample_entropy(signal, m, r, dist_type='chebyshev', result=None, scale=None):
            # Check errors
            if m > len(signal):
                raise ValueError('Embedding dimension must be smaller than the signal length (m<N).')
            if len(signal) != signal.size:
                raise ValueError('The signal parameter must be a [Nx1] vector.')
            if not isinstance(dist_type, str):
                raise ValueError('Distance type must be a string.')
            if dist_type not in ['braycurtis', 'canberra', 'chebyshev', 'cityblock',
                                 'correlation', 'cosine', 'dice', 'euclidean', 'hamming',
                                 'jaccard', 'jensenshannon', 'kulsinski', 'mahalanobis',
                                 'matching', 'minkowski', 'rogerstanimoto', 'russellrao',
                                 'seuclidean', 'sokalmichener', 'sokalsneath', 'sqeuclidean', 'yule']:
                raise ValueError('Distance type unknown.')

            # Useful parameters
            N = len(signal)
            sigma = np.std(signal)
            templates_m = []
            templates_m_plus_one = []
            signal = np.squeeze(signal)

            for i in range(N - m + 1):
                templates_m.append(signal[i:i + m])

            B = np.sum(pdist(templates_m, metric=dist_type) <= sigma * r)
            if B == 0:
                value = math.inf
            else:
                m += 1
                for i in range(N - m + 1):
                    templates_m_plus_one.append(signal[i:i + m])
                A = np.sum(pdist(templates_m_plus_one, metric=dist_type) <= sigma * r)

                if A == 0:
                    value = math.inf
                else:
                    A = A / len(templates_m_plus_one)
                    B = B / len(templates_m)
                    value = -np.log(A / B)

            # If A = 0 or B = 0, SampEn would return an infinite value.
            # However, the lowest non-zero conditional probability that SampEn
            # should report is A/B = 2/[(N-m-1)*(N-m)]
            if math.isinf(value):
                # Note: SampEn has the following limits:
                #   - Lower bound: 0
                #   - Upper bound: log(N-m) + log(N-m-1) - log(2)
                value = -np.log(2 / ((N - m - 1) * (N - m)))

            if result is not None:
                result[scale - 1] = value

            return value

        signal = np.random.rand(200)  # rand(200,1) in Matlab
        # parameters: m = 1; r = 0.2


    Outputs:

    • My implementation: 2.1812
    • Adapted implementation: 2.1969
    • NeuroKit2 entropy_sample function: 2.5316
    • Your implementation: 2.2431
    • A different implementation from GitHub: 1.0488

    invalid question 
    opened by dmarcos97 4
  • Speed up importing antropy

    Create a file called import.py with the single line import antropy. On my machine (Linux VM), this takes at least 10 seconds to run.

    Using pyinstrument tells me that most of the time is spent importing numba. Is there any possibility of speeding this up? Seems like this is a known issue with numba, though: see e.g. https://github.com/numba/numba/issues/4927.

    $ pyinstrument import.py 
    
      _     ._   __/__   _ _  _  _ _/_   Recorded: 16:36:28  Samples:  7842
     /_//_/// /_\ / //_// / //_'/ //     Duration: 12.368    CPU time: 11.963
    /   _/                      v3.4.1
    
    Program: import.py
    
    12.368 <module>  import.py:1
    └─ 12.368 <module>  antropy/__init__.py:2
       ├─ 6.711 <module>  antropy/fractal.py:1
       │  └─ 6.711 wrapper  numba/core/decorators.py:191
       │        [14277 frames hidden]  numba, llvmlite, contextlib, pickle, ...
       ├─ 3.034 <module>  antropy/entropy.py:1
       │  ├─ 2.390 wrapper  numba/core/decorators.py:191
       │  │     [5009 frames hidden]  numba, abc, llvmlite, inspect, contex...
       │  └─ 0.522 <module>  sklearn/__init__.py:14
       │        [374 frames hidden]  sklearn, scipy, inspect, enum, numpy,...
       └─ 2.618 <module>  antropy/utils.py:1
          ├─ 1.584 wrapper  numba/core/decorators.py:191
          │     [5027 frames hidden]  numba, abc, functools, llvmlite, insp...
          ├─ 0.895 <module>  numba/__init__.py:3
          │     [1444 frames hidden]  numba, llvmlite, pkg_resources, warni...
          └─ 0.138 <module>  numpy/__init__.py:106
                [190 frames hidden]  numpy, pathlib, urllib, collections, ...
    
    To view this report with different options, run:
        pyinstrument --load-prev 2021-06-17T16-36-28 [options]
    
    
    enhancement 
    opened by jftsang 4
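One stdlib-only workaround on the user side is a lazy module that defers the heavy import (and hence numba's compilation) until the first attribute access, following the recipe from the `importlib` documentation:

```python
import importlib.util
import sys

def lazy_import(name):
    """Import `name` lazily: the module body only runs on first attribute access."""
    spec = importlib.util.find_spec(name)
    loader = importlib.util.LazyLoader(spec.loader)
    spec.loader = loader
    module = importlib.util.module_from_spec(spec)
    sys.modules[name] = module
    loader.exec_module(module)
    return module

# e.g. ant = lazy_import("antropy")  # returns instantly; loads on first use
```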
  • Allow users to pass signal in frequency domain in spectral entropy

    Currently, antropy.spectral_entropy only allows x to be in the time domain. We should add freqs=None and psd=None as possible inputs for users who want to calculate the spectral entropy of a pre-computed power spectrum. We should also add an example of how to calculate the spectral entropy from a multitaper power spectrum.

    enhancement 
    opened by raphaelvallat 0
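What the proposed API could compute, sketched from the normalized-Shannon-entropy definition (the function name and signature here are hypothetical, not AntroPy's API):

```python
import numpy as np

def spectral_entropy_from_psd(psd, normalize=True):
    """Shannon entropy of a pre-computed power spectrum."""
    psd = np.asarray(psd, dtype=float)
    p = psd / psd.sum()
    # 0 * log2(0) is taken as 0 via the `where` mask
    se = -np.sum(p * np.log2(p, where=p > 0, out=np.zeros_like(p)))
    if normalize:
        se /= np.log2(p.size)
    return se

# Works with any PSD estimate, e.g. a plain periodogram:
x = np.random.default_rng(0).normal(size=1024)
psd = np.abs(np.fft.rfft(x)) ** 2
print(spectral_entropy_from_psd(psd))
```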
Releases (v0.1.5)
  • v0.1.5 (Dec 17, 2022)

    This is a minor release.

    What's Changed

    • Handle the limit of p = 0 in p log2 p by @jftsang in https://github.com/raphaelvallat/antropy/pull/3
    • Correlation between entropy/FD metrics for data traces from Hodgkin-Huxley model by @antelk in https://github.com/raphaelvallat/antropy/pull/5
    • Fix docstrings and rerun by @antelk in https://github.com/raphaelvallat/antropy/pull/7
    • Improve performance in _xlog2x by @jftsang in https://github.com/raphaelvallat/antropy/pull/8
    • Prevent invalid operations in xlogx by @guiweber in https://github.com/raphaelvallat/antropy/pull/11
    • Allow readonly arrays for higuchi_fd by @jvdd in https://github.com/raphaelvallat/antropy/pull/13
    • modify the _embed function to fit the 2d input by @cheliu-computation in https://github.com/raphaelvallat/antropy/pull/15
    • Fixed division by zero in linear regression function (with test) by @Arritmic in https://github.com/raphaelvallat/antropy/pull/21
    • Add conda install instructions by @raphaelvallat in https://github.com/raphaelvallat/antropy/pull/19

    New Contributors

    • @jftsang made their first contribution in https://github.com/raphaelvallat/antropy/pull/3
    • @antelk made their first contribution in https://github.com/raphaelvallat/antropy/pull/5
    • @guiweber made their first contribution in https://github.com/raphaelvallat/antropy/pull/11
    • @jvdd made their first contribution in https://github.com/raphaelvallat/antropy/pull/13
    • @cheliu-computation made their first contribution in https://github.com/raphaelvallat/antropy/pull/15
    • @Arritmic made their first contribution in https://github.com/raphaelvallat/antropy/pull/21
    • @raphaelvallat made their first contribution in https://github.com/raphaelvallat/antropy/pull/19

    Full Changelog: https://github.com/raphaelvallat/antropy/compare/v0.1.4...v0.1.5

  • v0.1.4 (Apr 1, 2021)

Owner
Raphael Vallat
French research scientist specialized in sleep and dreaming | Strong interest in stats and signal processing | Python lover