Overview

parse_it

A python library for parsing multiple types of config files, envvars and command line arguments that takes the headache out of setting app configurations.

Drone.io CI unit tests & auto PyPi push status: Build Status

Code coverage: codecov

Install

First install parse_it; for Python 3.6 & higher this is simply done using pip:

# Install from PyPi for Python version 3.6 & higher
pip install parse_it

If you're using Python 3.4 or older you will also require the backported typing package, which is installed with the following optional extra:

# Install from PyPi for Python version 3.4 & lower
pip install parse_it[typing]

How to use

# Load parse_it
from parse_it import ParseIt

# Create parse_it object.
parser = ParseIt()

# Now you can read your configuration values no matter how they are configured (cli args, envvars, json/yaml/etc files)
my_config_key = parser.read_configuration_variable("my_config_key")
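
For example, with the object above the same key could be supplied in any of the following ways (a sketch; the script name my_app.py and the file name config.json are placeholders used only for illustration):

# as a command line argument
python my_app.py --my_config_key my_value

# as an environment variable (looked up as MY_CONFIG_KEY because envvar keys are uppercased by default)
MY_CONFIG_KEY=my_value python my_app.py

# or as a key in any supported config file in the workdir, e.g. cat config.json >>>
#
# {
#   "my_config_key": "my_value"
# }
#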

By default all configuration files are assumed to be in the workdir, but you can also easily point parse_it at a different config folder and have it look in all of its subfolders recursively:

# Load parse_it
from parse_it import ParseIt

# cat /etc/my_config_folder/my_inner_conf_folder/my_config.json >>>
#
# {
#   "my_int": 123
# }
# 

# Create parse_it object that will look for the config files in "/etc/my_config_folder" and all of its subfolders
parser = ParseIt(config_location="/etc/my_config_folder", recurse=True)
my_config_key = parser.read_configuration_variable("my_int")
# my_config_key will now be an int of 123

By default parse_it will look for the configuration options in the following order & will return the first one found:

  • cli_args - command line arguments that are passed in the following format --key value
  • env_vars - environment variables, you can also use envvars as an alias for it
  • env - .env formatted files, any file ending with a .env extension in the configuration folder is assumed to be this
  • json - JSON formatted files, any file ending with a .json extension in the configuration folder is assumed to be this
  • yaml - YAML formatted files, any file ending with a .yaml extension in the configuration folder is assumed to be this
  • yml - YAML formatted files, any file ending with a .yml extension in the configuration folder is assumed to be this
  • toml - TOML formatted files, any file ending with a .toml extension in the configuration folder is assumed to be this
  • tml - TOML formatted files, any file ending with a .tml extension in the configuration folder is assumed to be this
  • hcl - HCL formatted files, any file ending with a .hcl extension in the configuration folder is assumed to be this
  • tf - HCL formatted files, any file ending with a .tf extension in the configuration folder is assumed to be this
  • conf - INI formatted files, any file ending with a .conf extension in the configuration folder is assumed to be this
  • cfg - INI formatted files, any file ending with a .cfg extension in the configuration folder is assumed to be this
  • ini - INI formatted files, any file ending with a .ini extension in the configuration folder is assumed to be this
  • xml - XML formatted files, any file ending with a .xml extension in the configuration folder is assumed to be this
  • configuration default value - every configuration value can also optionally be set with a default value
  • global default value - the parser object also has a global default value which can be set

If multiple files of the same type exist in the same folder, parse_it will look in all of them in alphabetical order before moving to the next type (see the example below).
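
As a minimal sketch of the priority order (the key name, envvar value and file contents here are placeholders), an envvar wins over a JSON file that declares the same key:

# Load parse_it
from parse_it import ParseIt

# This is just for the example
import os
os.environ["MY_KEY"] = "from_envvar"

# cat my_config.json >>>
#
# {
#   "my_key": "from_json"
# }
#

# Create parse_it object
parser = ParseIt()
my_config_key = parser.read_configuration_variable("my_key")
# my_config_key will now be a string of "from_envvar" because env_vars rank higher than json files in the default priority order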

You can decide on using your own custom order of any subset of the above options (default values excluded, they will always be last):

# Load parse_it
from parse_it import ParseIt

# Create parse_it object which will only look for envvars then yaml & yml files then json files
parser = ParseIt(config_type_priority=["env_vars", "yaml", "yml", "json"])

The global default value is None by default, but it's simple to change it if needed:

# Load parse_it
from parse_it import ParseIt

# Create parse_it object with the global default value left at its default (None)
parser = ParseIt()
my_config_key = parser.read_configuration_variable("my_undeclared_key")
# my_config_key will now be None

# Create parse_it object with a custom default value
parser = ParseIt(global_default_value="my_default_value")
my_config_key = parser.read_configuration_variable("my_undeclared_key")
# my_config_key will now be a string of "my_default_value"

parse_it will by default attempt to figure out the type of each value returned, so even in the case of envvars, cli args & INI files (which natively only provide strings) you will get properly typed ints/lists/dicts/etc:

# Load parse_it
from parse_it import ParseIt

# This is just for the example
import os
os.environ["MY_INT"] = "123"
os.environ["MY_LIST"] = "['first_item', 'second_item', 'third_item']"
os.environ["MY_DICT"] = "{'key': 'value'}"

# Create parse_it object
parser = ParseIt()
my_config_key = parser.read_configuration_variable("MY_INT")
# my_config_key will now be an int of 123
my_config_key = parser.read_configuration_variable("MY_LIST")
# my_config_key will now be a list of ['first_item', 'second_item', 'third_item']
my_config_key = parser.read_configuration_variable("MY_DICT")
# my_config_key will now be a dict of {'key': 'value'}

# you can easily disable the type estimation
parser = ParseIt(type_estimate=False)
my_config_key = parser.read_configuration_variable("MY_INT")
# my_config_key will now be a string of "123"
my_config_key = parser.read_configuration_variable("MY_LIST")
# my_config_key will now be a string of "['first_item', 'second_item', 'third_item']"
my_config_key = parser.read_configuration_variable("MY_DICT")
# my_config_key will now be a string of "{'key': 'value'}"

As the recommended syntax for envvars is to have all keys in UPPERCASE, which differs from the rest of the configuration sources, parse_it will automatically convert the requested key to ALL CAPS when looking for a matching envvar. If needed you can of course disable that feature:

# Load parse_it
from parse_it import ParseIt

# This is just for the example
import os
os.environ["MY_STRING"] = "UPPER"
os.environ["my_string"] = "lower"

# Create parse_it object
parser = ParseIt()
my_config_key = parser.read_configuration_variable("my_string")
# my_config_key will now be a string of "UPPER"

# disabling force envvar uppercase
parser = ParseIt(force_envvars_uppercase=False)
my_config_key = parser.read_configuration_variable("my_string")
# my_config_key will now be a string of "lower"

You can also easily add a prefix to all envvars (note that force_envvars_uppercase will also affect the given prefix):

# Load parse_it
from parse_it import ParseIt

# This is just for the example
import os
os.environ["PREFIX_MY_INT"] = "123"

# add a prefix to all envvars used
parser = ParseIt(envvar_prefix="prefix_")
my_config_key = parser.read_configuration_variable("my_int")
# my_config_key will now be an int of 123

You can also set a default value on a per configuration key basis:

# Load parse_it
from parse_it import ParseIt

# get a default value of the key
parser = ParseIt()
my_config_key = parser.read_configuration_variable("my_undeclared_key", default_value="my_value")
# my_config_key will now be a string of "my_value"

You can also declare a key to be required (disabled by default) so it will raise a ValueError if not declared by the user anywhere:

# Load parse_it
from parse_it import ParseIt

# will raise an error as the key is not declared anywhere and required is set to True
parser = ParseIt()
my_config_key = parser.read_configuration_variable("my_undeclared_key", required=True)
# Will raise ValueError

While generally not a good idea, sometimes you can't avoid it and will need to use a custom, non-standard file suffix. You can add a custom mapping of suffixes to any of the supported file formats as follows (note that config_type_priority should also be set to configure the priority of said custom suffix):

# Load parse_it
from parse_it import ParseIt

# Create parse_it object which will only look for envvars then the custom_yaml_suffix then standard yaml & yml files then json files
parser = ParseIt(config_type_priority=["env_vars", "custom_yaml_suffix", "yaml", "yml", "json"], custom_suffix_mapping={"yaml": ["custom_yaml_suffix"]})
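
A slightly fuller sketch of how such a file would then be read (the folder, file name and key below are placeholders for illustration only):

# cat /etc/my_config_folder/my_config.custom_yaml_suffix >>>
#
# my_int: 123
#

# Load parse_it
from parse_it import ParseIt

# Create parse_it object that maps the custom suffix to the YAML parser
parser = ParseIt(config_location="/etc/my_config_folder", config_type_priority=["env_vars", "custom_yaml_suffix", "yaml", "yml", "json"], custom_suffix_mapping={"yaml": ["custom_yaml_suffix"]})
my_config_key = parser.read_configuration_variable("my_int")
# my_config_key will now be an int of 123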

You might sometimes want to check that the end user passed a value of a specific type to your config. parse_it lets you verify that a value belongs to a given list of types by setting allowed_types, which will raise a TypeError if the value's type is not in that list; by default this is set to None so no type checking takes place:

# Load parse_it
from parse_it import ParseIt

# This is just for the example
import os
os.environ["ONLY_INTGERS_PLEASE"] = "123"

# Create parse_it object which will only look for envvars then the custom_yaml_suffix then standard yaml & yml files then json files
parser = ParseIt()

# skips the type ensuring check as it's not set so all types are accepted
my_config_key = parser.read_configuration_variable("only_intgers_please")

# the type of the variable value is in the list of allowed_types so no errors\warning\etc will be raised
my_config_key = parser.read_configuration_variable("only_intgers_please", allowed_types=[int])

# will raise a TypeError
my_config_key = parser.read_configuration_variable("only_intgers_please", allowed_types=[str, dict, list, None])

Sometimes you'll need a lot of configuration keys to share the same parse_it configuration params. Rather than looping over them yourself, you can use the read_multiple_configuration_variables function: give it a list of the configuration keys you want and it will apply the same configuration to all of them and return a dict with the key/value pairs of those configurations.

# Load parse_it
from parse_it import ParseIt

# Create parse_it object.
parser = ParseIt()

# Read multiple config keys at once, will return {"my_first_config_key": "default_value", "my_second_config_key": "default_value"} in the example below
my_config_key = parser.read_multiple_configuration_variables(["my_first_config_key", "my_second_config_key"], default_value="default_value", required=False, allowed_types=[str, list, dict, int])

You can also read a single file rather than a config directory.

# Load parse_it
from parse_it import ParseIt

# cat /etc/my_config_folder/my_config.json >>>
#
# {
#   "my_int": 123
# }
# 

# Create parse_it object that will look at a single config file, envvars & cli
parser = ParseIt(config_location="/etc/my_config_folder/my_config.json")
my_config_key = parser.read_configuration_variable("my_int")
# my_config_key will now be an int of 123

Another option is to read all configurations from all valid sources into a single dict that includes the combined results of all of them (combined meaning that only the highest-priority value of each found key is returned, and different keys from different sources are merged into a single dict). This provides less flexibility than reading the configuration variables one by one and is a tad (but just a tad) slower, but for some use cases it is simpler to use:

# Load parse_it
from parse_it import ParseIt

# Create parse_it object
parser = ParseIt()

my_config_dict = parser.read_all_configuration_variables()
# my_config_dict will now be a dict that includes the keys of all valid sources with the values of each being taken only from the highest priority source

# you can still define the "default_value", "required" & "allowed_types" when reading all configuration variables to a single dict
my_config_dict = parser.read_all_configuration_variables(default_value={"my_key": "my_default_value", "my_other_key": "my_default_value"}, required=["my_required_key","my_other_required_key"], allowed_types={"my_key": [str, list, dict, int], "my_other_key": [str, list, dict, int]})

It has also become common practice to divide envvar keys with a divider character (usually _) and nest them as subdicts; this helps when declaring complex dictionaries, as each subkey can be given its own envvar. parse_it supports this option as well by setting the envvar_divider variable when declaring the parse_it object (disabled by default):

# Load parse_it
from parse_it import ParseIt

# This is just for the example
import os
os.environ["NEST1_NEST2_NEST3"] = "123"

# Create parse_it object with an envvar_divider
parser = ParseIt(envvar_divider="_")

my_config_dict = parser.read_all_configuration_variables()
# my_config_dict will now be a dict that includes the keys of all valid sources with the values of each being taken only from the highest priority source & the envvar keys will be turned into nested subdicts.
# my_config_dict will contain the following dict: {"nest1": {"nest2": {"nest3": 123}}}
Comments
  • We are converting empty string to None type

    We are converting empty string to None type

    https://github.com/naorlivne/parse_it/blob/3a87980c4703decf8836999f40e466112b995b70/parse_it/type_estimate/type_estimate.py#L25

    Is it possible to change this part in order to get empty string read as string and not converted to None type? I've added a PR with this change applied globally, as another option it could also be optional configuration, ex.: ParseIt( empty_string_value="" | None )
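
    For illustration only (this code is an editorial addition, not part of the original issue), the behavior being discussed looks roughly like this, assuming default type estimation:

    from parse_it import ParseIt
    import os

    # This is just for the example
    os.environ["EMPTY_STRING_KEY"] = ""

    parser = ParseIt()
    value = parser.read_configuration_variable("EMPTY_STRING_KEY")
    # value is None rather than "" because type estimation currently converts empty strings to None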

    feature_request 
    opened by Johk3r 6
  • File types in folder have incorrect path when folder_path is relative

    File types in folder have incorrect path when folder_path is relative

    Expected/Wanted Behavior

    The file_types_in_folder function of the file_reader.py module shall return the correct file paths whether the folder_path parameter value is absolute or relative.

    A quick workaround is to have this piece of code inside file_types_in_folder; it works for relative and absolute paths. Replace lines 89-95 with:

    if file.endswith("." + file_type_ending):
        config_files_dict[file_type_ending].append(os.path.join(root, file))
    

    Actual Behavior

    The file_types_in_folder function in the file_reader.py module does not work correctly when the directory is relative.

    For example:

    The function returns a dictionary where the files have an incorrect path.

    def file_types_in_folder(folder_path: str, file_types_endings: list, recurse: bool = True) -> dict:
        """list all the config file types found inside the given folder based on the filename extension
            Arguments:
                folder_path -- the path of the folder to be checked
                file_types_endings -- list of file types to look for
                recurse -- if True (default) will also look in all subfolders
            Returns:
                config_files_dict -- dict of {file_type: [list_of_file_names_of_said_type]}
        """
        folder_path = strip_trailing_slash(folder_path)
        if folder_exists(folder_path) is False:
            warnings.warn("config_location " + folder_path + " does not exist, only envvars & cli args will be used")
            config_files_dict = {}
            for file_type_ending in file_types_endings:
                config_files_dict[file_type_ending] = []
        else:
            config_files_dict = {}
            for file_type_ending in file_types_endings:
                config_files_dict[file_type_ending] = []
                if recurse is True:
                    for root, subFolders, files in os.walk(folder_path, topdown=True):
                        for file in files:
                            if file.endswith("." + file_type_ending):
                                if folder_path + "/" in root:
                                    if root[0] != "/":
                                        root = root.split(folder_path + "/", 1)[1]  # <-- here is the problem: root is re-assigned after splitting it
                                    config_files_dict[file_type_ending].append(os.path.join(root, file))
                                else:
                                    config_files_dict[file_type_ending].append(file)
                else:
                    for file in os.listdir(folder_path):
                        if os.path.isfile(os.path.join(folder_path, file)) and file.endswith(file_type_ending):
                            config_files_dict[file_type_ending].append(file)
                config_files_dict[file_type_ending].sort()
        return config_files_dict
    

    Steps to Reproduce the Problem

    1. Create a nested directory
    2. Place files with the relevant extensions in each directory
    3. Parse directory with folder_path as relative path

    Specifications

    Python version: python3.8 (3.6 & higher required, lower versions may work but will not be tested against)

    parse_it version: 3.6.0 (latest)

    OS type & version: Linux/Ubuntu

    opened by theodore86 5
  • Unresolved import when using

    Unresolved import when using "from parse_it import ParseIt"

    Expected/Wanted Behavior

    I've tried adding the parse_it to my venv using pip and Poetry. They both seem to show it as being there.

    pip install parse-it
    Collecting parse-it
      Using cached parse_it-3.4.0-py3-none-any.whl (27 kB)
    Processing c:\users\donal\appdata\local\pip\cache\wheels\bc\f8\ae\bc69cb5f61393ebf9ade4cde41d1a813d35bfe78263a26f99e\dpath-2.0.1-py3-none-any.whl
    Collecting xmltodict
      Using cached xmltodict-0.12.0-py2.py3-none-any.whl (9.2 kB)
    Collecting python-dotenv
      Using cached python_dotenv-0.14.0-py2.py3-none-any.whl (17 kB)
    Requirement already satisfied: PyYAML in c:\users\donal\source\repos\tb1\venv\lib\site-packages (from parse-it) (5.3.1)
    Processing c:\users\donal\appdata\local\pip\cache\wheels\0d\c4\19\13d74440f2a571841db6b6e0a273694327498884dafb9cf978\configobj-5.0.6-py3-none-any.whl
    Processing c:\users\donal\appdata\local\pip\cache\wheels\ab\bd\51\063758347c589aaf729d573fd51ea4cd962ea5a498be1a8bcc\pyhcl-0.4.4-py3-none-any.whl
    Collecting toml
      Using cached toml-0.10.1-py2.py3-none-any.whl (19 kB)
    Requirement already satisfied: six in c:\users\donal\source\repos\tb1\venv\lib\site-packages (from configobj->parse-it) (1.15.0)
    Installing collected packages: dpath, xmltodict, python-dotenv, configobj, pyhcl, toml, parse-it
    Successfully installed configobj-5.0.6 dpath-2.0.1 parse-it-3.4.0 pyhcl-0.4.4 python-dotenv-0.14.0 toml-0.10.1 xmltodict-0.12.0

    Actual Behavior

    Unresolved import

    Steps to Reproduce the Problem

    1. pip install parse_it
    2. And/Or poetry add parse_it

    Specifications

    Python version: (3.5 & higher required, lower versions may work but will not be tested against) 3.7.64

    parse_it version: Current

    OS type & version: Windows 10 Pro. Current.

    MS Visual Studio 2019.

    bug 
    opened by LyonsDo 5
  • Add option to define subdicts via envvars

    Add option to define subdicts via envvars

    Expected/Wanted Behavior

    There should be an option to configure subdict values via envvars; for example, DICT_SUBDICT_VALUE=example should translate to "DICT"={"SUBDICT":{"VALUE":"example"}} if that option is turned on. This will allow end users to more easily configure as envvars all the params they usually pass in configuration files, as they won't be forced to give everything as one large dict to a single top-level envvar.

    The separator for this function should be _ & this function should be triggable by a TBD keyword passed to the parse_it object config & it should be by default disabled.

    Actual Behavior

    There is no way to pass values to a subdict via envvars.

    feature_request 
    opened by naorlivne 4
  • please tag releases in git

    please tag releases in git

    https://pypi.org/project/parse-it/#history

    you have quite some releases already, but github is not showing them because you never added a git tag for them.

    if you happen to have a gpg key, you can even sign your tags, so people can verify authenticity. gpg signing is optional, but nice to have.

    enhancement 
    opened by ThomasWaldmann 4
  • Read the config from the remote server

    Read the config from the remote server

    Expected/Wanted Behavior

    I read the config file from the Nacos server; the config is of str type. Can you add the ability to parse a str config as JSON or YAML?

    Actual Behavior

    import json
    import yaml
    from parse_it import ParseIt
    
    config = """
    {'db': [{'mysql': {'username': 'root',  'password': '123456', 'host': '127.0.0.1', 'port': 3306}}]}
    """
    
    config = """
    spring:
      application:
        name: server
      cloud:
        nacos:
          discovery:
            server-addr: http://localhost:8848
        sentinel:
          transport:
            dashboard: http://localhost:8080
    """
    
    try:
        config_1 = json.loads(config)  # loads (not load) since config is a str, not a file object
    except Exception as e:
        config_1 = yaml.safe_load(config)
    
    print(type(config_1))
    
    
    class QueryChainWithDict(dict):
        def __init__(self, obj):
            self.obj = obj
     
        def get(self, key):
            return self.obj.get(key)
    test_query_chain = QueryChainWithDict(config_1)
    
    print(test_query_chain.get('spring').get('application').get('name'))
        
    
    

    Steps to Reproduce the Problem

    Specifications

    Python version: (3.6 & higher required, lower versions may work but will not be tested against) 3.9

    parse_it version:

    OS type & version: mac os big sur

    opened by lookcat 3
  • Bump idna from 2.10 to 3.0

    Bump idna from 2.10 to 3.0

    Bumps idna from 2.10 to 3.0.

    Changelog

    Sourced from idna's changelog.

    3.0 (2021-01-01)

    • Python 2 is no longer supported (the 2.x branch supports Python 2, use "idna<3" in your requirements file if you need Python 2 support)
    • Support for V2 UTS 46 test vectors.
    Commits
    • a45bf88 Release v3.0
    • 82f7b70 Fix regressions from removing Python 2 support
    • 229f123 Use Github Actions for unit testing
    • d8bb757 Merge pull request #90 from kjd/fix-licensing
    • a2f5460 Merge branch 'master' into fix-licensing
    • 537aa99 Add 3-clause BSD license that Github can detect
    • 6dc4bfb Merge pull request #89 from jdufresne/py39
    • 153b5ab Use Python 3.9 release in Travis configuration
    • cc273f8 Merge pull request #88 from hugovk/add-3.9
    • 4b3e6fd Remove redundant Python 2 code
    • Additional commits viewable in compare view

    enhancement 
    opened by dependabot[bot] 3
  • Wrong estimation of type

    Wrong estimation of type

    Hello there, I'm using this awesome library in all my projects, but today I've hit a strange behavior that looks like a bug to me.

    I'm using a dictionary of unique strings in a YAML file, like:

    0201d9a59b5e
    020103fea000
    0201c6285571
    

    One of them, 020198015e97, gets read as a float. The workaround is to use type_estimate=False when creating the instance, but the estimation comes in handy quite often and I don't feel this one should be read as a number.
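
    As an editorial side note (not from the original report), the string happens to be valid scientific float notation, so Python's own float() accepts it, which is likely why the estimator treats it as a number:

    # "020198015e97" is read as 20198015 * 10 ** 97
    print(float("020198015e97"))
    # 2.0198015e+104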

    Expected/Wanted Behavior

    In [1]: from parse_it import ParseIt
       ...: import os
       ...: os.environ["TEST"] = "020198015e97"
       ...: parser = ParseIt()
       ...: parser.read_configuration_variable("TEST")
    Out[1]: "020198015e97"
    

    Actual Behavior

    In [1]: from parse_it import ParseIt
       ...: import os
       ...: os.environ["TEST"] = "020198015e97"
       ...: parser = ParseIt()
       ...: parser.read_configuration_variable("TEST")
    Out[1]: 2.0198015e+104
    

    Specifications

    Python version: 3.8.2

    parse_it version: parse-it==3.4.0

    OS type & version: latest alpine linux in docker

    THANK YOU!

    bug 
    opened by pdonorio 3
  • CLI args nested key names

    CLI args nested key names

    How to give key name for nested properties from CLI? Currently envvar_divider has a support for environment variables. Is there similar functionality exists for CLI args?

    opened by kesavkolla 3
  • Add a function to read all configuration parameters from all possible sources into a single dict

    Add a function to read all configuration parameters from all possible sources into a single dict

    Expected/Wanted Behavior

    There should be a function that, when called, runs the entire config_type_priority in reverse order and creates a single dict with the key/values of all configuration parameters.

    Said function should have an option to take a list of parameters that are required (raising an error if they are not set) & a dict of per-parameter default values.

    Actual Behavior

    There is no way of making a dict of all parameter key/value pairs that combines the end results of all config files/envvars/cli args declared in config_type_priority.

    feature_request 
    opened by naorlivne 3
  • move parse_it package into a src/ directory

    move parse_it package into a src/ directory

    that makes sure you are actually testing the package / files that got installed and not just the stuff that happens to be in cwd right now.

    most coders start without using a src/ layout and later will migrate to use one. you can save some pain by doing it early.

    https://github.com/pypa/packaging.python.org/issues/320
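
    As an editorial illustration (not the repository's actual layout), a typical src/ layout for a package like this looks roughly like:

    .
    ├── src/
    │   └── parse_it/
    │       └── __init__.py
    ├── tests/
    └── setup.py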

    enhancement wontfix 
    opened by ThomasWaldmann 3
  • Bump coverage from 6.5.0 to 7.0.1

    Bump coverage from 6.5.0 to 7.0.1

    Bumps coverage from 6.5.0 to 7.0.1.

    Changelog

    Sourced from coverage's changelog.

    Version 7.0.1 — 2022-12-23

    • When checking if a file mapping resolved to a file that exists, we weren't considering files in .whl files. This is now fixed, closing issue 1511_.

    • File pattern rules were too strict, forbidding plus signs and curly braces in directory and file names. This is now fixed, closing issue 1513_.

    • Unusual Unicode or control characters in source files could prevent reporting. This is now fixed, closing issue 1512_.

    • The PyPy wheel now installs on PyPy 3.7, 3.8, and 3.9, closing issue 1510_.

    .. _issue 1510: nedbat/coveragepy#1510
    .. _issue 1511: nedbat/coveragepy#1511
    .. _issue 1512: nedbat/coveragepy#1512
    .. _issue 1513: nedbat/coveragepy#1513

    .. _changes_7-0-0:

    Version 7.0.0 — 2022-12-18

    Nothing new beyond 7.0.0b1.

    .. _changes_7-0-0b1:

    Version 7.0.0b1 — 2022-12-03

    A number of changes have been made to file path handling, including pattern matching and path remapping with the [paths] setting (see :ref:config_paths). These changes might affect you, and require you to update your settings.

    (This release includes the changes from 6.6.0b1, since 6.6.0 was never released.)

    • Changes to file pattern matching, which might require updating your configuration:

      • Previously, * would incorrectly match directory separators, making precise matching difficult. This is now fixed, closing issue 1407_.

      • Now ** matches any number of nested directories, including none.

    • Improvements to combining data files when using the

    ... (truncated)

    Commits
    • c5cda3a docs: releases take a little bit longer now
    • 9d4226e docs: latest sample HTML report
    • 8c77758 docs: prep for 7.0.1
    • da1b282 fix: also look into .whl files for source
    • d327a70 fix: more information when mapping rules aren't working right.
    • 35e249f fix: certain strange characters caused reporting to fail. #1512
    • 152cdc7 fix: don't forbid plus signs in file names. #1513
    • 31513b4 chore: make upgrade
    • 873b059 test: don't run tests on Windows PyPy-3.9
    • 5c5caa2 build: PyPy wheel now installs on 3.7, 3.8, and 3.9. #1510
    • Additional commits viewable in compare view

    enhancement 
    opened by dependabot[bot] 0
  • Bump dpath from 2.1.0 to 2.1.3

    Bump dpath from 2.1.0 to 2.1.3

    Bumps dpath from 2.1.0 to 2.1.3.

    Release notes

    Sourced from dpath's releases.

    v2.1.3

    Commits

    • Remove trailing comma
    • Bump version
    • Merge pull request #176 by moomoohk from dpath-maintainers/bugfix/175-trailling-comma-in-deprecated-merge

    v2.1.2

    Commits

    • Support negative indexes
    • Minor improvements
    • Improve negative number check
    • Remove unnecessary negative number check
    • Fix values to work with fnmatchcase
    • Add str overload to CyclicInt
    • Simplify int handling in matching code
    • Remove test case
    • Bump version
    • Continue evaluating entire path when handling int
    • Add type hints
    • Improve CyclicInt type
    • Rename CyclicInt to SymmetricInt
    • Fix sign
    • Merge pull request #172 by moomoohk from dpath-maintainers/feature/166-negative-list-indexing

    v2.1.1

    Commits

    • Catch all exceptions in type check
    • Cast path segment to int if it's supposed to be an index
    • Remove redundant import
    • Remove ambiguity of last path segment
    • Remove bad documentation
    • Add test for int ambiguity
    • Merge pull request #169 by moomoohk from dpath-maintainers/bugfix/int-ambiguity
    Commits
    • 312a42c Merge pull request #176 from dpath-maintainers/bugfix/175-trailling-comma-in-...
    • f3303eb Bump version
    • 41c2652 Remove trailing comma
    • 45b3488 Merge pull request #172 from dpath-maintainers/feature/166-negative-list-inde...
    • eb1b2e2 Fix sign
    • 66a64a3 Rename CyclicInt to SymmetricInt
    • 317e3cd Improve CyclicInt type
    • 58407db Add type hints
    • e37fdad Continue evaluating entire path when handling int
    • 6c513fa Bump version
    • Additional commits viewable in compare view

    enhancement 
    opened by dependabot[bot] 0
  • Bump chardet from 5.0.0 to 5.1.0

    Bump chardet from 5.0.0 to 5.1.0

    Bumps chardet from 5.0.0 to 5.1.0.

    Release notes

    Sourced from chardet's releases.

    chardet 5.1.0

    Features

    • Add should_rename_legacy argument to most functions, which will rename older encodings to their more modern equivalents (e.g., GB2312 becomes GB18030) (#264, @​dan-blanchard)
    • Add capital letter sharp S and ISO-8859-15 support (#222, @​SimonWaldherr)
    • Add a prober for MacRoman encoding (#5 updated as c292b52a97e57c95429ef559af36845019b88b33, Rob Speer and @​dan-blanchard )
    • Add --minimal flag to chardetect command (#214, @​dan-blanchard)
    • Add type annotations to the project and run mypy on CI (#261, @​jdufresne)
    • Add support for Python 3.11 (#274, @​hugovk)

    Fixes

    Misc changes

    Commits

    enhancement 
    opened by dependabot[bot] 0
  • Optional UCL Support?

    Optional UCL Support?

    Support getting configuration values from UCL files.

    Specifications

    It would be nice to include support for UCL / libUCL based configuration files. These are common enough on FreeBSD that it would be useful. https://github.com/vstakhov/libucl

    feature_request 
    opened by techdragon 6
  • potential slowness when dealing with large number of configuration values

    potential slowness when dealing with large number of configuration values

    the way the code does it right now is that it does everything per configuration key:

    • a full argparse
    • checking env vars (no big issue, this is quite fast as it is a dict already)
    • a full config file read / parse

    if you have a lot of configuration keys, this will get slow.

    enhancement help wanted 
    opened by ThomasWaldmann 8
Owner
Naor Livne