A suite of utilities for converting to and working with CSV, the king of tabular file formats.

Overview
Comments
  • csvstack: handle reordered columns automatically

    I would expect that csvstack would look at headers and stack data intelligently, but it does not. It simply cats the first file together with all but the first line of remaining files.

    In particular, if column names occur in a different order, or if some file has columns that another does not, the results are not consistent with what one would expect.

    For example:

    csvlook a.csv =>
    |----+----|
    |  x | y  |
    |----+----|
    |  1 | 2  |
    |  3 | 4  |
    |  5 | 6  |
    |----+----|
    
    
    csvlook b.csv =>
    |----+-------|
    |  y | z     |
    |----+-------|
    |  8 | this  |
    |  9 | that  |
    |----+-------|
    
    csvstack a.csv b.csv | csvlook =>
    |----+-------|
    |  x | y     |
    |----+-------|
    |  1 | 2     |
    |  3 | 4     |
    |  5 | 6     |
    |  8 | this  |
    |  9 | that  |
    |----+-------|
    

    I would expect the following:

    csvstack a.csv b.csv | csvlook =>
    |----+---+-------|
    |  x | y | z     |
    |----+---+-------|
    |  1 | 2 |       |
    |  3 | 4 |       |
    |  5 | 6 |       |
    |    | 8 | this  |
    |    | 9 | that  |
    |----+---+-------|
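
    Since csvstack stops at plain concatenation, a small standard-library sketch can produce the expected union-of-headers result in the meantime (file names match the example above; output column order follows first appearance, and missing values are left blank):

    import csv, sys

    files = ["a.csv", "b.csv"]
    tables, fieldnames = [], []
    for path in files:
        with open(path, newline="") as f:
            reader = csv.DictReader(f)
            rows = list(reader)
            header = reader.fieldnames or []
        tables.append(rows)
        for name in header:
            if name not in fieldnames:
                fieldnames.append(name)

    writer = csv.DictWriter(sys.stdout, fieldnames=fieldnames, restval="")
    writer.writeheader()
    for rows in tables:
        writer.writerows(rows)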
    
    feature Normal Priority 
    opened by metasoarous 28
  • csvstat gives me "'NoneType' object has no attribute 'decimal_formats'" error for tutorial data

    Whenever I try csvstat on the tutorial data, I get the following error, and it just stops wherever it detects the error:

    $ csvstat data.csv
    
      1. "state"
    
            Type of data:          Text
            Contains null values:  False
            Unique values:         1
            Most common values:    NE (1036x)
    
      2. "county"
    
            Type of data:          Text
            Contains null values:  False
            Unique values:         35
            Most common values:    DOUGLAS (760x)
                                   DAKOTA (42x)
                                   CASS (37x)
                                   HALL (23x)
                                   LANCASTER (18x)
    
      3. "fips"
    
            Type of data:          Number
            Contains null values:  False
            Unique values:         35
    'NoneType' object has no attribute 'decimal_formats'
            Most common values:   
    

    My environment:

    OS X 10.11.6

    python version : 2.7.10

    pip version : 9.0.1

    I installed csvkit via pip yesterday (Apr 8th)

    Any idea?

    question 
    opened by suewonjp 27
  • in2csv XLS changing date columns to integers

    Hi - When we use the command in2csv <file_name.xls> > <new_file_name.csv>, we notice that our date columns are being converted to integers. This is preventing our use of the csvkit library. We have looked for solutions on Stack Overflow, but nothing has worked thus far. Are there any workarounds? We have run updates for openpyxl (openpyxl==2.2.0-b1), csvkit and pip. We are using Python 2.7.1.

    Any ideas?

    Thanks, Craig
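
    For context, XLS stores dates as serial day counts, so if the raw cell values are passed through they show up as integers. A hedged post-processing sketch, assuming the workbook uses the standard 1900 date system and using a hypothetical column name:

    import csv, sys
    from datetime import datetime, timedelta

    EXCEL_EPOCH = datetime(1899, 12, 30)    # 1900 date system, adjusted for Excel's leap-year quirk

    reader = csv.DictReader(sys.stdin)
    writer = csv.DictWriter(sys.stdout, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        serial = row["order_date"]          # hypothetical name of the affected date column
        if serial:
            row["order_date"] = (EXCEL_EPOCH + timedelta(days=float(serial))).date().isoformat()
        writer.writerow(row)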

    Normal Priority fixed: upstream 
    opened by craigtb 25
  • Add PYTHONIOENCODING to tricks.rst: UTF-8 sequences generated from csvlook cannot be encoded when the output is redirected

    I use csvlook to generate the table:

    $ csvlook some.csv
    | SolverTaskId | SolverOrderId | SrcPoiId | DstPoiId |   Weight |  Volume |            LoadDate | Amount | AmountUnit |
    | ------------ | ------------- | -------- | -------- | -------- | ------- | ------------------- | ------ | ---------- |
    |          243 |       444 587 |   29 077 |   28 951 |   138,90 |  4,040… | 2016-01-08 00:00:00 |        |            |
    ...
    

    Please note (I'm not sure it's correctly encoded in the above output) that after "444" there is a U+00A0 character (NBSP). The output is correctly encoded UTF-8.

    When I want to redirect the output to a pipe or a file, an exception is thrown:

    $ csvlook some.csv >/dev/null
    'ascii' codec can't encode character u'\xa0' in position 26: ordinal not in range(128)
    

    This clearly looks like #315; however, I'm on the master branch, installed with pip install git+git://github.com/wireservice/csvkit.git.
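
    For reference, the failure can be reproduced without csvkit: under Python 2, a redirected stdout falls back to ASCII, so any non-ASCII character such as the NBSP aborts the write. A minimal sketch of the problem and of the workaround named in the title (file names are placeholders):

    import io

    nbsp_value = u"444\u00a0587\n"                        # the NBSP-containing value from the table above
    out = io.open("/dev/null", "w", encoding="ascii")     # stand-in for the redirected, ASCII-only stdout
    try:
        out.write(nbsp_value)
    except UnicodeEncodeError as exc:
        print(exc)                                        # 'ascii' codec can't encode character u'\xa0' ...

    # Workaround: force UTF-8 for redirected output, e.g.
    #   PYTHONIOENCODING=utf-8 csvlook some.csv > some.txt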

    documentation Low Priority 
    opened by jest 19
  • csvsql: Lots of error messages when using Homebrew Python 2.7 on macOS

    I have encountered a lot of error messages when using csvsql; however, csvsql is fully functional. This can be reproduced by simply calling csvsql without parameters. I installed csvkit with pip from Homebrew Python 2.7.

    My system configuration is as follows from brew config:

    HOMEBREW_VERSION: 1.1.9
    ORIGIN: https://github.com/Homebrew/brew.git
    HEAD: 664d0c67d5947605c914c4c56ebcfaa80cb6eca0
    Last commit: 2 days ago
    Core tap ORIGIN: https://github.com/Homebrew/homebrew-core
    Core tap HEAD: 662c74e3f73c00fb55878d0c93c319cdefeb0983
    Core tap last commit: 3 hours ago
    HOMEBREW_PREFIX: /usr/local
    HOMEBREW_REPOSITORY: /usr/local/Homebrew
    HOMEBREW_CELLAR: /usr/local/Cellar
    HOMEBREW_BOTTLE_DOMAIN: https://homebrew.bintray.com
    CPU: octa-core 64-bit haswell
    Homebrew Ruby: 2.0.0-p648
    Clang: 8.0 build 800
    Git: 2.11.0 => /usr/local/bin/git
    Perl: /usr/local/bin/perl => /usr/local/Cellar/perl/5.24.0_1/bin/perl
    Python: /usr/local/bin/python => /usr/local/Cellar/python/2.7.13/Frameworks/Python.framework/Versions/2.7/bin/python2.7
    Ruby: /usr/local/bin/ruby => /usr/local/Cellar/ruby/2.4.0/bin/ruby
    Java: 1.8.0_20
    macOS: 10.12.3-x86_64
    Xcode: 8.2.1
    CLT: 8.2.0.0.1.1480973914
    X11: 2.7.11 => /opt/X11
    

    The error messages are (repeated many times):

    /usr/local/Cellar/python/2.7.13/Frameworks/Python.framework/Versions/2.7/lib/python2.7/pkgutil.py:186: ImportWarning: Not importing directory '/usr/local/lib/python2.7/site-packages/google': missing __init__.py
    /usr/local/Cellar/python/2.7.13/Frameworks/Python.framework/Versions/2.7/lib/python2.7/pkgutil.py:186: ImportWarning: Not importing directory '/usr/local/lib/python2.7/site-packages/mpl_toolkits': missing __init__.py
    /usr/local/Cellar/python/2.7.13/Frameworks/Python.framework/Versions/2.7/lib/python2.7/pkgutil.py:186: ImportWarning: Not importing directory '/usr/local/lib/python2.7/site-packages/logilab': missing __init__.py
    
    question 
    opened by chuanconggao 19
  • non ascii chars in field values causing csvcut to create new fields

    testNonASCII.zip

    Hi there, I have been using the csvcut tool. My data is CSV with a combination of ASCII and non-ASCII characters. The issue I am facing is that when there are non-ASCII characters in a field value, it causes new fields and new lines to be created. Could you please help me handle non-ASCII characters? I have attached the input and output files. The command I used is "csvcut -c Col4,preview,Col2,searchable_body,Col3 testNonASCII > result.csv". I need to carry these non-ASCII characters through to the output file without changes.

    Thanks in advance for the help.
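
    Since the attachment isn't reproduced here, a hedged diagnostic sketch (the encoding is a guess; the file name matches the command above) that flags rows whose field count no longer matches the header, which is where the extra fields and lines typically show up:

    import csv, io

    with io.open("testNonASCII", encoding="utf-8", newline="") as f:
        rows = list(csv.reader(f))

    expected = len(rows[0])
    for n, row in enumerate(rows[1:], start=2):
        if len(row) != expected:
            print("row %d has %d fields (expected %d)" % (n, len(row), expected))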

    opened by bbobbadi 17
  • csvsql keeps telling me I don't have postgres library

    I'm on OSX and am getting this msg whenever I try to run my csvsql command. Here's the full command I'm using:

    iconv -f latin1 -t UTF-8 +data/dump/${file}.txt |
    csvcut --tabs --not-columns "ibis_client","company" |
    csvsql --db "postgresql:///import_temp" --insert --table $file
    

    Here's the error:

    You don't appear to have the necessary database backend installed for connection string you're trying to use. Available backends include:
    
    Postgresql: pip install psycopg2
    MySQL:      pip install MySQL-python
    
    For details on connection strings and other backends, please see the SQLAlchemy documentation on dialects at: 
    
    http://www.sqlalchemy.org/docs/dialects/
    

    It keeps repeating that message (likely for every row that's being created).

    I do have psycopg2 installed correctly. I use Python and pip as installed via Homebrew (easier for me to manage). At first I thought this was an issue with finding the correct library paths, but everything is linked correctly (as far as I can tell). If I run which on pip and python, both point to /usr/local/bin.

    I'm also using Postgres.app which is running from the default port.

    There don't seem to be any other issues regarding this; I'm wondering if it's something I'm doing wrong :-/
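
    One common cause of this message despite psycopg2 being installed is that csvsql runs under a different interpreter than the one pip installed into. A quick diagnostic sketch, to be run with the same Python that csvsql uses:

    import sys
    print(sys.executable)                       # which interpreter this is
    try:
        import psycopg2
        print("psycopg2 found:", psycopg2.__version__)
    except ImportError as exc:
        print("psycopg2 not importable here:", exc)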

    question 
    opened by davedbase 17
  • Feature Request: csvlook to wrap long columns

    Quite often I have CSV files that I want to quickly scan which have one or two columns with very long strings (e.g. >2000 characters). csvlook will then wrap them over many lines.

    What I would suggest is to add an argument with which the user can specify a maximum length for each column to display. If, for example, I set this to 50, then for each row, all columns that contain more than 50 characters are truncated.

    At first, I thought that the option -z would do exactly that, but apparently it doesn't.
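
    Until such an option exists, a small standard-library pre-filter is one workaround (the 50-character limit and file name are placeholders):

    import csv, sys

    LIMIT = 50
    writer = csv.writer(sys.stdout)
    with open("wide.csv", newline="") as f:
        for row in csv.reader(f):
            writer.writerow([cell if len(cell) <= LIMIT else cell[:LIMIT] + "..." for cell in row])

    # usage: python truncate.py | csvlook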

    feature Low Priority 
    opened by Tabea-K 17
  • in2csv: "ImportWarning: can't resolve package from __spec__ or __package__"

    After installing csvkit using pip (under 4.14.9-2-MANJARO Linux with Python 3.6.4) and running in2csv, I got a bunch of warnings like these:

    $ in2csv
    /usr/lib/python3.6/importlib/_bootstrap.py:219: ImportWarning: can't resolve package from __spec__ or __package__, falling back on __name__ and __path__
    /usr/lib/python3.6/importlib/_bootstrap.py:219: ImportWarning: can't resolve package from __spec__ or __package__, falling back on __name__ and __path__
    /usr/lib/python3.6/importlib/_bootstrap.py:219: ImportWarning: can't resolve package from __spec__ or __package__, falling back on __name__ and __path__
    ... [about 20 of the same omitted]
    /usr/lib/python3.6/importlib/_bootstrap.py:219: ImportWarning: can't resolve package from __spec__ or __package__, falling back on __name__ and __path__
    usage: in2csv [-h] [-d DELIMITER] [-t] [-q QUOTECHAR] [-u {0,1,2,3}] [-b]
                  [-p ESCAPECHAR] [-z FIELD_SIZE_LIMIT] [-e ENCODING] [-L LOCALE]
                  [-S] [--blanks] [--date-format DATE_FORMAT]
                  [--datetime-format DATETIME_FORMAT] [-H] [-K SKIP_LINES] [-v]
                  [-l] [--zero] [-V] [-f FILETYPE] [-s SCHEMA] [-k KEY] [-n]
                  [--sheet SHEET] [--write-sheets WRITE_SHEETS] [-y SNIFF_LIMIT]
                  [-I]
                  [FILE]
    

    The script works (when provided a file), but generates these warnings every time.

    Environment:

    $ pip freeze
    agate==1.6.0
    agate-dbf==0.2.0
    agate-excel==0.2.1
    agate-sql==0.5.2
    appdirs==1.4.3
    awscli==1.14.26
    Babel==2.5.3
    Beaker==1.8.1
    beautifulsoup4==4.6.0
    bigml==4.14.0
    bigmler==3.16.0
    bleach==2.1.1
    botocore==1.8.32
    catfish==1.4.2
    certifi==2018.1.18
    chardet==3.0.4
    colorama==0.3.9
    csvkit==1.0.2
    cycler==0.10.0
    dbfread==2.0.7
    decorator==4.2.1
    docopt==0.6.2
    docutils==0.14
    entrypoints==0.2.3
    et-xmlfile==1.0.1
    future==0.16.0
    gufw==17.10.0
    html5lib==1.0b10
    idna==2.6
    ipykernel==4.6.1
    ipython==6.2.1
    ipython-genutils==0.2.0
    ipywidgets==6.0.0
    isodate==0.6.0
    jdcal==1.3
    jedi==0.11.1
    Jinja2==2.10
    jmespath==0.9.3
    joblib==0.11
    jsonschema==2.6.0
    jupyter-client==5.1.0
    jupyter-console==5.2.0
    jupyter-contrib-core==0.3.3
    jupyter-contrib-nbextensions==0.3.3
    jupyter-core==4.4.0
    jupyter-highlight-selected-word==0.1.0
    jupyter-latex-envs==1.4.0
    jupyter-nbextensions-configurator==0.2.8
    keyutils==0.5
    leather==0.3.3
    lightdm-gtk-greeter-settings==1.2.2
    louis==3.4.0
    lxml==4.1.1
    Mako==1.0.7
    MarkupSafe==1.0
    matplotlib==2.1.0
    menulibre==2.1.3
    mistune==0.8.3
    mugshot==0.3.2
    nbconvert==5.3.1
    nbformat==4.4.0
    notebook==5.2.2
    npyscreen==4.10.5
    numpy==1.13.3
    olefile==0.44
    openpyxl==2.5.0
    packaging==16.8
    pacman-mirrors==4.7.0
    pandas==0.21.0
    pandocfilters==1.4.2
    parsedatetime==2.4
    parso==0.1.1
    pexpect==4.3.0
    pickleshare==0.7.4
    Pillow==4.3.0
    prettytable==0.7.2
    prompt-toolkit==1.0.15
    psutil==5.4.1
    ptyprocess==0.5.2
    pyasn1==0.4.2
    pycairo==1.15.4
    Pygments==2.2.0
    pygobject==3.26.1
    pyparsing==2.2.0
    python-dateutil==2.6.1
    python-distutils-extra==2.39
    python-sane==2.8.3
    python-slugify==1.2.4
    pytimeparse==1.1.7
    pytz==2017.3
    pyxdg==0.25
    PyYAML==3.12
    pyzmq==16.0.3
    reportlab==3.4.0
    requests==2.18.4
    requests-toolbelt==0.8.0
    rsa==3.4.2
    ruamel.yaml==0.15.35
    s3transfer==0.1.12
    scikit-learn==0.19.1
    scipy==1.0.0
    simplegeneric==0.8.1
    six==1.11.0
    skll==1.5
    SQLAlchemy==1.2.2
    team==1.0
    terminado==0.8.1
    testpath==0.3.1
    tornado==4.5.2
    traitlets==4.3.2
    udiskie==1.7.3
    Unidecode==1.0.22
    urllib3==1.22
    virtualenv==15.1.0
    wcwidth==0.1.7
    webencodings==0.5.1
    widgetsnbextension==2.0.0
    xlrd==1.1.0
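
    The warnings are interpreter start-up noise rather than errors (the tool still works, as noted above), and Python's standard warning filters can silence them. A hedged sketch of the equivalent filter; the environment-variable form is what you would put in front of the in2csv command:

    # shell form (hypothetical invocation):
    #   PYTHONWARNINGS=ignore::ImportWarning in2csv data.xlsx
    # the same filter expressed in Python:
    import warnings
    warnings.filterwarnings("ignore", category=ImportWarning)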
    
    question 
    opened by vkostyuk 16
  • agate-sql: csvsql: generating mysql sql fails: VARCHAR requires a length on dialect mysql

    Running on Windows, Python 3.6.0.

    Trying to generate a MySQL create script and getting an error about VARCHAR. I did see the comment in issue #740 about upgrading with:

    pip install --upgrade -e git+git://github.com/wireservice/csvkit.git@master#egg=csvkit

    ..but this did not help.

    Error message:

    csvsql --no-inference -i mysql Account.csv
    (in table 'Account', column 'MasterRecordId'): VARCHAR requires a length on dialect mysql

    Anything I can do to get around this?

    fixed: upstream 
    opened by snorkelbuckle 16
  • Cannot ADD a column to a CSV file

    I was looking at all the tools in csvkit, but couldn't find any tool to add/insert an empty column to a CSV file. Surely something which should be part of csvkit.

    I'm happy to get involved, but wanted to check whether there is an easy way to do this that I have overlooked. The closest I got was the --linenumbers option of csvcut.
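
    In the meantime, a standard-library sketch that appends an empty column (the column name is a placeholder):

    import csv, sys

    reader = csv.reader(sys.stdin)
    writer = csv.writer(sys.stdout)
    header = next(reader)
    writer.writerow(header + ["new_column"])
    for row in reader:
        writer.writerow(row + [""])

    # usage: python add_column.py < data.csv > data_with_column.csv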

    opened by halloleo 16
  • csvsql does not work with simple csv file

    Command: csvsql --query 'select * from Quality_Metrics' Quality_Metrics.csv
    Output: TypeError: create() missing 1 required positional argument: 'bind'

    Python version: 3.9.14
    csvkit versions tested: 1.0.7, 1.0.6

    Head of csv file:

    Lane,SampleID,index,index2,ReadNumber,Yield,YieldQ30,QualityScoreSum,Mean Quality Score (PF),% Q30
    1,WG5021,ATTACTCG,ATAGAGGC,1,54978,49075,1881350,34.22,0.89
    1,WG5021,ATTACTCG,ATAGAGGC,2,270578,137683,7147328,26.42,0.51
    1,MK4575,ATTACTCG,CCTATCCT,1,45441,40885,1562737,34.39,0.90
    1,MK4575,ATTACTCG,CCTATCCT,2,223641,123754,6114325,27.34,0.55
    
    question 
    opened by kim-fehl 1
  • csvsql: Add option to use ON CONFLICT DO NOTHING/UPDATE

    I'm using csvsql to insert into a Postgres database, and I'd like it to ignore any errors generated by unique constraints (basically, skip importing the same data twice).

    In Postgres that would be handled like this:

    INSERT INTO table (num1, num2) VALUES (1, 1) ON CONFLICT DO NOTHING
    

    However, it doesn't look like any of the current csvsql options allow me to modify the insert statement like this, since --prefix would add it directly after INSERT and --after-insert executes a separate statement. Postgres doesn't support the 'OR IGNORE' form that MySQL does.

    Seems like a --suffix option to append something to the end of the query would be useful here, or is this a fundamental issue, since agate-sql doesn't support this?

    Are there any alternatives for ignoring unique constraint insert attempt errors? Even just a flag to ignore any error from an individual insert statement and keep trying would help here.

    Low Priority feature: upstream 
    opened by jschuur 1
  • Convert excel xlsx with merged row cells to csv

    Feature request to convert xlsx files with merged row cells to CSV. Current behavior appears to convert merged cells to empty values except for the first cell, which holds the data. Unless there is an option I don't know about.
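
    As a stopgap, a hedged openpyxl sketch (openpyxl is outside csvkit; file names are placeholders) that copies each merged range's value into every cell of that range before converting:

    from openpyxl import load_workbook

    wb = load_workbook("merged.xlsx")
    ws = wb.active
    for rng in list(ws.merged_cells.ranges):
        value = ws.cell(row=rng.min_row, column=rng.min_col).value
        ws.unmerge_cells(str(rng))
        for row in ws.iter_rows(min_row=rng.min_row, max_row=rng.max_row,
                                min_col=rng.min_col, max_col=rng.max_col):
            for cell in row:
                cell.value = value
    wb.save("unmerged.xlsx")

    # then: in2csv unmerged.xlsx > unmerged.csv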

    question 
    opened by christopinka 1
  • csvsql: Add error/warning if no input provided

    I am not getting any feedback from these commands. How could I troubleshoot further to see what is really happening?

    I tried both:

    1 - PS C:\Users\goldman> csvsql -v --db postgresql://xxx:[email protected]:5432/u2020 --query "SELECT * FROM public.cim_bts;"
    2 - PS C:\Users\goldman> csvsql -v --db postgresql+psycopg2://xxx:[email protected]:5432/u2020 --query "SELECT * FROM public.cim_bts;"

    Logs after Ctrl-C was pressed:

    PS C:\Users\goldman> csvsql -v --db postgresql://xxx:[email protected]:5432/u2020 --query "SELECT * FROM public.cim_bts;"
    
    
    Traceback (most recent call last):
      File "C:\Users\goldman\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 196, in _run_module_as_main
        return _run_code(code, main_globals, None,
      File "C:\Users\goldman\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 86, in _run_code
        exec(code, run_globals)
      File "C:\Users\goldman\AppData\Local\Programs\Python\Python310\Scripts\csvsql.exe\__main__.py", line 7, in <module>
      File "C:\Users\goldman\AppData\Local\Programs\Python\Python310\lib\site-packages\csvkit\utilities\csvsql.py", line 251, in launch_new_instance
        utility.run()
      File "C:\Users\goldman\AppData\Local\Programs\Python\Python310\lib\site-packages\csvkit\cli.py", line 127, in run
        self.main()
      File "C:\Users\goldman\AppData\Local\Programs\Python\Python310\lib\site-packages\csvkit\utilities\csvsql.py", line 144, in main
        self._failsafe_main()
      File "C:\Users\goldman\AppData\Local\Programs\Python\Python310\lib\site-packages\csvkit\utilities\csvsql.py", line 175, in _failsafe_main
        table = agate.Table.from_csv(
      File "C:\Users\goldman\AppData\Local\Programs\Python\Python310\lib\site-packages\agate\table\from_csv.py", line 69, in from_csv
        contents = six.StringIO(f.read())
    KeyboardInterrupt
    

    Environment: Windows 10 Pro 21H2 19044.1586, minikube v1.25.2, PSQL running on top of Docker 20.10.13, Python 3.10.2

    PS C:\Users\goldman> pip list
    Package             Version
    ------------------- ---------
    agate               1.6.3
    agate-dbf           0.2.2
    agate-excel         0.2.5
    agate-sql           0.5.8
    Babel               2.9.1
    certifi             2021.10.8
    charset-normalizer  2.0.12
    colorama            0.4.4
    csvkit              1.0.7
    dbfread             2.0.7
    docopt              0.6.2
    et-xmlfile          1.1.0
    future              0.18.2
    greenlet            1.1.2
    idna                3.3
    isodate             0.6.1
    leather             0.3.4
    olefile             0.46
    openpyxl            3.0.9
    parsedatetime       2.4
    pip                 22.0.3
    pomodoro-calculator 1.0.2
    psycopg2            2.9.3
    pyreadline3         3.4.1
    python-slugify      6.1.1
    pytimeparse         1.1.8
    pytz                2022.1
    requests            2.27.1
    setuptools          58.1.0
    six                 1.16.0
    SQLAlchemy          1.4.35
    text-unidecode      1.3
    urllib3             1.26.8
    xlrd                2.0.1
    
    bug 
    opened by goldman7911 1
  • csvsql: Add Ingres support

    Ingres support is available in https://github.com/wireservice/agate-sql/pull/36

    Documenting here for search reasons: csvkit database support is handled in agate-sql.

    feature Normal Priority 
    opened by clach04 0
Owner
wireservice
Home of agate, csvkit, proof, and other tools for journalists and data users.