Overview

sqlite-utils


Python CLI utility and library for manipulating SQLite databases.

Read more on my blog: sqlite-utils: a Python library and CLI tool for building SQLite databases and other entries tagged sqliteutils.

Installation

pip install sqlite-utils

Or if you use Homebrew for macOS:

brew install sqlite-utils

Using as a CLI tool

Now you can do things with the CLI utility like this:

$ sqlite-utils memory dogs.csv "select * from t"
[{"id": 1, "age": 4, "name": "Cleo"},
 {"id": 2, "age": 2, "name": "Pancakes"}]

$ sqlite-utils insert dogs.db dogs dogs.csv --csv
[####################################]  100%

$ sqlite-utils tables dogs.db --counts
[{"table": "dogs", "count": 2}]

$ sqlite-utils dogs.db "select id, name from dogs"
[{"id": 1, "name": "Cleo"},
 {"id": 2, "name": "Pancakes"}]

$ sqlite-utils dogs.db "select * from dogs" --csv
id,age,name
1,4,Cleo
2,2,Pancakes

$ sqlite-utils dogs.db "select * from dogs" --table
  id    age  name
----  -----  --------
   1      4  Cleo
   2      2  Pancakes

You can import JSON data into a new database table like this:

$ curl https://api.github.com/repos/simonw/sqlite-utils/releases \
    | sqlite-utils insert releases.db releases - --pk id

Or for data in a CSV file:

$ sqlite-utils insert dogs.db dogs dogs.csv --csv

sqlite-utils memory lets you import CSV or JSON data into an in-memory database and run SQL queries against it in a single command:

$ cat dogs.csv | sqlite-utils memory - "select name, age from stdin"

See the full CLI documentation for comprehensive coverage of many more commands.

Using as a library

You can also import sqlite_utils and use it as a Python library like this:

import sqlite_utils
db = sqlite_utils.Database("demo_database.db")
# This line creates a "dogs" table if one does not already exist:
db["dogs"].insert_all([
    {"id": 1, "age": 4, "name": "Cleo"},
    {"id": 2, "age": 2, "name": "Pancakes"}
], pk="id")
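
You can read the rows back out with db.query(), which yields dictionaries:

for row in db.query("select name, age from dogs order by age desc"):
    print(row)
# {'name': 'Cleo', 'age': 4}
# {'name': 'Pancakes', 'age': 2}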

Check out the full library documentation for everything else you can do with the Python library.

Related projects

  • Datasette: A tool for exploring and publishing data
  • csvs-to-sqlite: Convert CSV files into a SQLite database
  • db-to-sqlite: CLI tool for exporting a MySQL or PostgreSQL database as a SQLite file
  • dogsheep: A family of tools for personal analytics, built on top of sqlite-utils
Comments
  • table.transform() method for advanced alter table

    SQLite's ALTER TABLE can only do the following:

    • Rename a table
    • Rename a column
    • Add a column

    Notably, it cannot drop columns - so tricks like "add a float version of this text column, populate it, then drop the old one and rename" won't work.

    The docs here https://www.sqlite.org/lang_altertable.html#making_other_kinds_of_table_schema_changes describe a way of implementing full alters safely within a transaction, but it's fiddly.

    1. Create new table
    2. Copy data
    3. Drop old table
    4. Rename new into old

    It would be great if sqlite-utils provided an abstraction to help make these kinds of changes safely.
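
    A minimal sketch of that four-step pattern using the sqlite3 standard library, assuming a dogs table whose text age column should become a float (the full procedure in the SQLite docs has additional steps around foreign keys and triggers):

    import sqlite3

    conn = sqlite3.connect("dogs.db")
    conn.isolation_level = None  # autocommit mode; manage the transaction by hand
    conn.execute("BEGIN")
    conn.execute(
        "CREATE TABLE dogs_new (id INTEGER PRIMARY KEY, age FLOAT, name TEXT)"
    )
    conn.execute(
        "INSERT INTO dogs_new SELECT id, CAST(age AS FLOAT), name FROM dogs"
    )
    conn.execute("DROP TABLE dogs")
    conn.execute("ALTER TABLE dogs_new RENAME TO dogs")
    conn.execute("COMMIT")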

    enhancement 
    opened by simonw 26
  • Idea: import CSV to memory, run SQL, export in a single command

    I quite often load a CSV file into a SQLite DB, then do stuff with it (like export results back out again as a new CSV) without any intention of keeping the CSV file around afterwards.

    What if sqlite-utils could do this for me? Something like this:

    sqlite-utils --csv blah.csv --csv baz.csv "select * from blah join baz ..."
    
    research 
    opened by simonw 22
  • table.extract(...) method and "sqlite-utils extract" command

    One of my favourite features of csvs-to-sqlite is that it can "extract" columns into a separate lookup table - for example:

    csvs-to-sqlite big_csv_file.csv -c country output.db
    

    This will turn the country column in the resulting table into an integer foreign key against a new country table. You can see an example of what that looks like here: https://san-francisco.datasettes.com/registered-business-locations-3d50679/Business+Corridor (extracted from https://san-francisco.datasettes.com/registered-business-locations-3d50679/Registered_Business_Locations_-_San_Francisco?Business%20Corridor=1).

    I'd like to have the same capability in sqlite-utils - but with the ability to run it against an existing SQLite table rather than just against a CSV.
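
    sqlite-utils did later gain this capability as table.extract() and the sqlite-utils extract command; a minimal sketch, with illustrative database, table and column names:

    import sqlite_utils

    db = sqlite_utils.Database("output.db")
    # Replace the "country" column with an integer foreign key
    # against a newly created "country" lookup table:
    db["locations"].extract("country")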

    enhancement 
    opened by simonw 21
  • CSV files with too many values in a row cause errors

    Original title: csv.DictReader can have None as key

    In some cases csv.DictReader can end up with None as the key for unnamed columns, with a list of values as the corresponding value. sqlite_utils.utils.rows_from_file cannot handle that:

    import sqlite_utils
    from urllib.request import urlopen

    url = "https://artsdatabanken.no/Fab2018/api/export/csv"
    db = sqlite_utils.Database(":memory:")

    with urlopen(url) as fab:
        reader, _ = sqlite_utils.utils.rows_from_file(fab, encoding="utf-16le")
        db["fab2018"].insert_all(reader, pk="Id")
    

    Result:

    Traceback (most recent call last):
      File "<stdin>", line 3, in <module>
      File "/home/user/.local/pipx/venvs/sqlite-utils/lib/python3.8/site-packages/sqlite_utils/db.py", line 2924, in insert_all
        chunk = list(chunk)
      File "/home/user/.local/pipx/venvs/sqlite-utils/lib/python3.8/site-packages/sqlite_utils/db.py", line 3454, in fix_square_braces
        if any("[" in key or "]" in key for key in record.keys()):
      File "/home/user/.local/pipx/venvs/sqlite-utils/lib/python3.8/site-packages/sqlite_utils/db.py", line 3454, in <genexpr>
        if any("[" in key or "]" in key for key in record.keys()):
    TypeError: argument of type 'NoneType' is not iterable
    

    Code: https://github.com/simonw/sqlite-utils/blob/59be60c471fd7a2c4be7f75e8911163e618ff5ca/sqlite_utils/db.py#L3454

    sqlite-utils insert from command line is not affected by this issue.
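
    The rows_from_file() parameters added later in 3.27 (see the release notes below) give callers control over such rows; a minimal sketch:

    from sqlite_utils.utils import rows_from_file

    with open("data.csv", "rb") as fp:
        # With ignore_extras=True, values beyond the listed headings
        # are dropped instead of surfacing under a None key:
        rows, format_used = rows_from_file(fp, ignore_extras=True)
        for row in rows:
            print(row)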

    bug python-library 
    opened by frafra 20
  • Introspect if table is FTS4 or FTS5

    I want .search() to work against both FTS5 and FTS4 tables - but sort by rank should only work for FTS5.

    This means I need to be able to introspect and tell if a table is FTS4 or FTS5.

    Originally posted by @simonw in https://github.com/simonw/sqlite-utils/issues/192#issuecomment-722054264
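
    One way to introspect this is to read the table's schema from sqlite_master and check the USING clause of the CREATE VIRTUAL TABLE statement; a sketch, not necessarily the approach the library adopted:

    import sqlite3

    def fts_version(conn, table):
        sql = conn.execute(
            "select sql from sqlite_master where name = ?", [table]
        ).fetchone()[0].upper()
        if "USING FTS5" in sql:
            return "FTS5"
        if "USING FTS4" in sql:
            return "FTS4"
        return None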

    enhancement 
    opened by simonw 19
  • Add new spatialite helper methods

    Refs #79

    This PR adds three new SpatiaLite-related methods to Database and Table:

    • Database.init_spatialite loads the SpatiaLite extension and initializes it
    • Table.add_geometry_column adds a geometry column
    • Table.create_spatial_index creates a spatial index

    Has tests and documentation. Feedback very welcome.
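
    A sketch of how the three methods fit together (file, table and column names illustrative):

    import sqlite_utils

    db = sqlite_utils.Database("spatial.db")
    db.init_spatialite()  # load and initialize the SpatiaLite extension

    points = db["points"]
    points.create({"id": int, "name": str}, pk="id")
    points.add_geometry_column("geometry", "POINT")
    points.create_spatial_index("geometry")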

    spatialite 
    opened by eyeseast 16
  • create-index should run analyze after creating index

    SQLite's query planner depends upon ANALYZE to make good use of indices. It would be nice if ANALYZE were run as part of the create-index command.

    If data is inserted later, things can get out of date, but it would still probably be a net win.
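
    In library terms the request amounts to something like the following (db.analyze() and the --analyze CLI option shipped later; see the 3.21 release notes below):

    import sqlite_utils

    db = sqlite_utils.Database("dogs.db")
    db["dogs"].create_index(["name"])
    db.analyze()  # rebuild the statistics the query planner relies on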

    cli-tool 
    opened by fgregg 16
  • `--batch-size 1` doesn't seem to commit for every item

    I'm trying this, but it doesn't seem to write anything to the database file until I hit CTRL+C:

    heroku logs --app=simonwillisonblog --tail | grep 'measure#nginx.service' | \
      sqlite-utils insert /tmp/herokutail.db log - --import re --convert "$(cat <<EOD
        r = re.compile(r'([^\s=]+)=(?:"(.*?)"|(\S+))')
        pairs = {}
        for key, value1, value2 in r.findall(line):
            pairs[key] = value1 or value2
        return pairs
    EOD
    )" --lines --batch-size 1
    
    bug 
    opened by simonw 16
  • --lines and --text and --convert and --import

    Refs #356

    Still TODO:

    • [x] Get --lines working, with tests
    • [x] Get --text working, with tests
    • [x] Get regular JSON import working with --convert with tests
    • [x] Get --lines working with --convert with tests
    • [x] Get --text working with --convert with tests
    • [x] Get --csv and --tsv import working with --convert with tests
    • [x] Get --nl working with --convert with tests
    • [x] Documentation for all of the above
    enhancement 
    opened by simonw 15
  • "sqlite-utils convert" command to replace the separate "sqlite-transform" tool

    See https://github.com/simonw/sqlite-transform/issues/11 - I built a separate sqlite-transform tool a while ago that uses the word "transform" to mean something entirely different from sqlite-utils transform - I'd like to resolve this by merging the two tools.

    enhancement cli-tool 
    opened by simonw 15
  • The ".upsert()" method is misnamed

    This thread here is illuminating: https://stackoverflow.com/questions/3634984/insert-if-not-exists-else-update

    The term UPSERT in SQLite has a specific meaning as-of 3.24.0 (2018-06-04): https://www.sqlite.org/lang_UPSERT.html

    It means "behave as an UPDATE or a no-op if the INSERT would violate a uniqueness constraint". The syntax in 3.24.0+ looks like this (confusingly it does not use the term "upsert"):

    INSERT INTO phonebook(name,phonenumber) VALUES('Alice','704-555-1212')
      ON CONFLICT(name) DO UPDATE SET phonenumber=excluded.phonenumber
    

    Here's the problem: the sqlite-utils .upsert() and .upsert_all() methods don't do this. They use the following SQL:

    INSERT OR REPLACE INTO [{table}] ({columns}) VALUES {rows};
    

    If the record already exists, it will be entirely replaced by a new record - as opposed to updating any specified fields but leaving existing fields as they are (the behaviour of "upsert" in SQLite itself).
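
    The difference is easy to demonstrate with the sqlite3 standard library (illustrative schema; the ON CONFLICT syntax needs SQLite 3.24+):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE phonebook (name TEXT PRIMARY KEY, phonenumber TEXT, notes TEXT)"
    )
    conn.execute(
        "INSERT INTO phonebook VALUES ('Alice', '704-555-1212', 'friend from work')"
    )

    # INSERT OR REPLACE deletes the old row and inserts a new one,
    # so the unspecified notes column is reset to NULL:
    conn.execute(
        "INSERT OR REPLACE INTO phonebook (name, phonenumber) "
        "VALUES ('Alice', '704-555-9999')"
    )
    print(conn.execute("SELECT * FROM phonebook").fetchone())
    # ('Alice', '704-555-9999', None)

    # A true SQLite upsert only updates the columns named in DO UPDATE:
    conn.execute("UPDATE phonebook SET notes = 'friend from work' WHERE name = 'Alice'")
    conn.execute(
        "INSERT INTO phonebook (name, phonenumber) VALUES ('Alice', '704-555-0000') "
        "ON CONFLICT(name) DO UPDATE SET phonenumber = excluded.phonenumber"
    )
    print(conn.execute("SELECT * FROM phonebook").fetchone())
    # ('Alice', '704-555-0000', 'friend from work')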

    bug question 
    opened by simonw 15
  • Fixes breaking DEFAULT values

    Fixes #509, Fixes #336

    Thanks for the great library! I fixed a bug where sqlite-utils transform breaks DEFAULT values. All existing tests passed with no changes, and I added some tests for this PR.

    The #509 case is fixed, as shown here:

    $ sqlite3 test.db << EOF
    CREATE TABLE mytable (
        col1 TEXT DEFAULT 'foo',
        col2 TEXT DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW'))
    )
    EOF
    
    $ sqlite3 test.db "SELECT sql FROM sqlite_master WHERE name = 'mytable';"
    CREATE TABLE mytable (
        col1 TEXT DEFAULT 'foo',
        col2 TEXT DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW'))
    )
    
    $ sqlite3 test.db "INSERT INTO mytable DEFAULT VALUES; SELECT * FROM mytable;"
    foo|2022-12-21 01:15:39.669
    
    $ sqlite-utils transform test.db mytable --rename col1 renamedcol1
    $ sqlite3 test.db "SELECT sql FROM sqlite_master WHERE name = 'mytable';"
    CREATE TABLE "mytable" (
       [renamedcol1] TEXT DEFAULT 'foo',
       [col2] TEXT DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW'))  # ← Non-String Value
    )
    
    $ sqlite3 test.db "INSERT INTO mytable DEFAULT VALUES; SELECT * FROM mytable;"
    foo|2022-12-21 01:15:39.669
    foo|2022-12-21 01:15:56.432
    

    The #336 case is also fixed. The special values are described here:

    3.2. The DEFAULT clause ... A default value may also be one of the special case-independent keywords CURRENT_TIME, CURRENT_DATE or CURRENT_TIMESTAMP.

    $ echo 'create table bar (baz text, created_at timestamp default CURRENT_TIMESTAMP)' | sqlite3 foo.db
    $ sqlite3 foo.db
    SQLite version 3.39.5 2022-10-14 20:58:05
    Enter ".help" for usage hints.
    sqlite> .schema bar
    CREATE TABLE bar (baz text, created_at timestamp default CURRENT_TIMESTAMP);
    sqlite> .exit
    
    $ sqlite-utils transform foo.db bar --column-order baz
    $ sqlite3 foo.db
    SQLite version 3.39.5 2022-10-14 20:58:05
    Enter ".help" for usage hints.
    sqlite> .schema bar
    CREATE TABLE IF NOT EXISTS "bar" (
       [baz] TEXT,
       [created_at] FLOAT DEFAULT CURRENT_TIMESTAMP
    );
    sqlite> .exit
    
    $ sqlite-utils transform foo.db bar --column-order baz
    $ sqlite3 foo.db
    SQLite version 3.39.5 2022-10-14 20:58:05
    Enter ".help" for usage hints.
    sqlite> .schema bar
    CREATE TABLE IF NOT EXISTS "bar" (
       [baz] TEXT,
       [created_at] FLOAT DEFAULT CURRENT_TIMESTAMP  # ← Non-String Value
    );
    

    Documentation preview: https://sqlite-utils--519.org.readthedocs.build/en/519/

    opened by rhoboro 0
  • Feature request: output number of ignored/replaced rows for insert command

    https://hachyderm.io/@briandorsey/109468185742876820

    I'm fiddling with piping JSON to insert --ignore. I'd love to see the count of records inserted and ignored, but didn't see a way to do that in the help/docs.

    Example: xh "https://hachyderm.io/api/v1/timelines/tag/rust?max_id=109443380308326328" | sqlite-utils insert aoc.db aoc - --pk=id --ignore

    enhancement 
    opened by simonw 4
  • upsert new rows with constraints, fixes #514

    This fixes #514 by making the initial insert for upserts include all columns, so that new rows can be added to tables with non-pkey columns that have constraints.

    (aside: I'm not a python programmer. pip? pipenv? venv? These are mystical incantations to me. The process to set up this repo for local development and testing was so easy. Thank you for the excellent contributing documentation!)


    Documentation preview: https://sqlite-utils--515.org.readthedocs.build/en/515/

    opened by cldellow 0
  • upsert of new row with check constraints fails

    (I originally opened this in https://github.com/simonw/datasette-insert/issues/20, but I see that that library depends on sqlite-utils)

    In the case of a new row, upsert first adds the row, specifying only its pkeys: https://github.com/simonw/sqlite-utils/blob/965ca0d5f5bffe06cc02cd7741344d1ddddf9d56/sqlite_utils/db.py#L2783-L2787

    This means that a table with NOT NULL (or other constraint) columns that aren't part of the pkey can't have new rows upserted.
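
    A minimal sketch reproducing the behaviour described, with illustrative names:

    import sqlite_utils

    db = sqlite_utils.Database(memory=True)
    db["tasks"].create({"id": int, "name": str}, pk="id", not_null={"name"})
    # Before the fix, upsert first inserted a row containing only the
    # primary key, violating the NOT NULL constraint on name:
    db["tasks"].upsert({"id": 1, "name": "write docs"}, pk="id")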

    opened by cldellow 0
  • Add or document streamlined workflow for importing Datasette csv / json exports

    I'm working on some small front-end enhancements to the laion-aesthetic-datasette project, and I wanted to partially populate a database directly using exports from the existing Datasette instance instead of downloading the parquet files and creating my own multi-GB database.

    There have been a number of small issues that are certainly related to my relative lack of familiarity with the toolkit, but that are still surprising.

    For example: a CSV export of the images table (http://laion-aesthetic.datasette.io/laion-aesthetic-6pls.csv?sql=select+rowid%2C+url%2C+text%2C+domain_id%2C+width%2C+height%2C+similarity%2C+punsafe%2C+pwatermark%2C+aesthetic%2C+hash%2C+index_level_0+from+images+order+by+random%28%29+limit+100) has nested single quotes, double quotes, and commas that aren't handled by rows_from_file. Similarly, the json output has to be manually transformed to add the column names and remove extraneous information before sqlite_utils can import it.

    I was able to work through these issues, but as an enhancement it would be really helpful to create or document a clear workflow that avoids the friction of this data transformation.

    opened by henry501 0
  • `sqlite-utils transform` breaks DEFAULT string values and STRFTIME()

    Very nice library! Our team found sqlite-utils through @simonw's comment on the "Simple declarative schema migration for SQLite" article, and we were excited to use it, but unfortunately sqlite-utils transform seems to break our DB.

    Running sqlite-utils transform to modify a column mangles the columns' DEFAULT values:

    • Default string values are wrapped in extra single quotes
    • Function expressions such as STRFTIME() are turned into strings!

    Here are steps to reproduce:

    Original database

    $ sqlite3 test.db << EOF
    CREATE TABLE mytable (
        col1 TEXT DEFAULT 'foo',
        col2 TEXT DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW'))
    )
    EOF
    
    $ sqlite3 test.db "SELECT sql FROM sqlite_master WHERE name = 'mytable';"
    CREATE TABLE mytable (
        col1 TEXT DEFAULT 'foo',
        col2 TEXT DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW'))
    )
    

    Modified database after sqlite-utils

    $ sqlite3 test.db "INSERT INTO mytable DEFAULT VALUES; SELECT * FROM mytable;"
    foo|2022-11-02 02:26:58.038
    
    $ sqlite-utils transform test.db mytable --rename col1 renamedcol1
    
    $ sqlite3 test.db "SELECT sql FROM sqlite_master WHERE name = 'mytable';"
    CREATE TABLE "mytable" (
       [renamedcol1] TEXT DEFAULT '''foo''',
       [col2] TEXT DEFAULT 'STRFTIME(''%Y-%m-%d %H:%M:%f'', ''NOW'')'
    )
    
    $ sqlite3 test.db "INSERT INTO mytable DEFAULT VALUES; SELECT * FROM mytable;"
    foo|2022-11-02 02:26:58.038
    'foo'|STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')
    

    (Related: #336)

    opened by kennysong 0
Releases (latest: 3.30)
  • 3.30(Oct 25, 2022)

    • Now tested against Python 3.11. (#502)
    • New table.search_sql(include_rank=True) option, which adds a rank column to the generated SQL. Thanks, Jacob Chapman. (#480)
    • Progress bars now display for newline-delimited JSON files using the --nl option. Thanks, Mischa Untaga. (#485)
    • New db.close() method. (#504)
    • Conversion functions passed to table.convert(...) can now return lists or dictionaries, which will be inserted into the database as JSON strings - see the sketch after this list. (#495)
    • sqlite-utils install and sqlite-utils uninstall commands for installing packages into the same virtual environment as sqlite-utils, described here. (#483)
    • New sqlite_utils.utils.flatten() utility function. (#500)
    • Documentation on using Just to run tests, linters and build documentation.
    • Documentation now covers the Release process for this package.
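
    A sketch of the list/dictionary conversion mentioned above, with illustrative table and column names:

    import sqlite_utils

    db = sqlite_utils.Database("data.db")
    # The returned list is stored in the tags column as a JSON string:
    db["articles"].convert("tags", lambda value: value.split(","))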
  • 3.29(Aug 28, 2022)

    • The sqlite-utils query, memory and bulk commands now all accept a new --functions option. This can be passed a string of Python code, and any callable objects defined in that code will be made available to SQL queries as custom SQL functions. See Defining custom SQL functions for details. (#471)
    • db[table].create(...) method now accepts a new transform=True parameter. If the table already exists it will be transformed to match the schema configuration options passed to the function. This may result in columns being added or dropped, column types being changed, column order being updated, or NOT NULL and default values for columns being set. (#467)
    • Related to the above, the sqlite-utils create-table command now accepts a --transform option.
    • New introspection property: table.default_values returns a dictionary mapping each column name with a default value to the configured default value. (#475)
    • The --load-extension option can now be provided a path to a compiled SQLite extension module accompanied by the name of an entrypoint, separated by a colon - for example --load-extension ./lines0:sqlite3_lines0_noread_init. This feature is modelled on code first contributed to Datasette by Alex Garcia. (#470)
    • Functions registered using the db.register_function() method can now have a custom name specified using the new db.register_function(fn, name=...) parameter. (#458)
    • sqlite-utils rows has a new --order option for specifying the sort order for the returned rows. (#469)
    • All of the CLI options that accept Python code blocks can now be used to define functions that can access modules imported in that same block of code without needing to use the global keyword. (#472)
    • Fixed bug where table.extract() would not behave correctly for columns containing null values. Thanks, Forest Gregg. (#423)
    • New tutorial: Cleaning data with sqlite-utils and Datasette shows how to use sqlite-utils to import and clean an example CSV file.
    • Datasette and sqlite-utils now have a Discord community. Join the Discord here.
  • 3.28(Jul 15, 2022)

    • New table.duplicate(new_name) method for creating a copy of a table with a matching schema and row contents. Thanks, David. (#449)
    • New sqlite-utils duplicate data.db table_name new_name CLI command for Duplicating tables. (#454)
    • sqlite_utils.utils.rows_from_file() is now a documented API. It can be used to read a sequence of dictionaries from a file-like object containing CSV, TSV, JSON or newline-delimited JSON. It can be passed an explicit format or can attempt to detect the format automatically. (#443)
    • sqlite_utils.utils.TypeTracker is now a documented API for detecting the likely column types for a sequence of string rows, see Detecting column types using TypeTracker. (#445)
    • sqlite_utils.utils.chunks() is now a documented API for splitting an iterator into chunks. (#451)
    • sqlite-utils enable-fts now has a --replace option for replacing the existing FTS configuration for a table. (#450)
    • The create-index, add-column and duplicate commands all now take a --ignore option for ignoring errors should the database not be in the right state for them to operate. (#450)
  • 3.27(Jun 15, 2022)

    See also the annotated release notes for this release.

    • Documentation now uses the Furo Sphinx theme. (#435)
    • Code examples in documentation now have a "copy to clipboard" button. (#436)
    • sqlite_utils.utils.rows_from_file() is now a documented API, see Reading rows from a file. (#443)
    • rows_from_file() has two new parameters to help handle CSV files with rows that contain more values than are listed in that CSV file's headings: ignore_extras=True and extras_key="name-of-key". (#440)
    • sqlite_utils.utils.maximize_csv_field_size_limit() helper function for increasing the field size limit for reading CSV files to its maximum, see Setting the maximum CSV field size limit. (#442)
    • table.search(where=, where_args=) parameters for adding additional WHERE clauses to a search query. The where= parameter is available on table.search_sql(...) as well. See Searching with table.search(). (#441)
    • Fixed bug where table.detect_fts() and other search-related functions could fail if two FTS-enabled tables had names that were prefixes of each other. (#434)
  • 3.26.1(May 2, 2022)

    • Now depends on click-default-group-wheel, a pure Python wheel package. This means you can install and use this package with Pyodide, which can run Python entirely in your browser using WebAssembly. (#429)

      Try that out using the Pyodide REPL:

      >>> import micropip
      >>> await micropip.install("sqlite-utils")
      >>> import sqlite_utils
      >>> db = sqlite_utils.Database(memory=True)
      >>> list(db.query("select 3 * 5"))
      [{'3 * 5': 15}]
      
  • 3.26(Apr 13, 2022)

  • 3.25.1(Mar 11, 2022)

  • 3.25(Mar 2, 2022)

  • 3.24(Feb 16, 2022)

  • 3.23(Feb 4, 2022)

    This release introduces four new utility methods for working with SpatiaLite. Thanks, Chris Amico. (#330)

  • 3.22.1(Jan 26, 2022)

  • 3.22(Jan 11, 2022)

  • 3.21(Jan 11, 2022)

    CLI and Python library improvements to help run ANALYZE after creating indexes or inserting rows, to gain better performance from the SQLite query planner when it runs against indexes.

    Three new CLI commands: create-database, analyze and bulk.

    More details and examples can be found in the annotated release notes.
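
    For example, the new commands can be combined like this (file and table names illustrative; running sqlite-utils analyze fresh.db afterwards would refresh the statistics separately):

    $ sqlite-utils create-database fresh.db
    $ sqlite-utils insert fresh.db dogs dogs.csv --csv --analyze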

    • New sqlite-utils create-database command for creating new empty database files. (#348)
    • New Python methods for running ANALYZE against a database, table or index: db.analyze() and table.analyze(), see Optimizing index usage with ANALYZE. (#366)
    • New sqlite-utils analyze command for running ANALYZE using the CLI. (#379)
    • The create-index, insert and upsert commands now have a new --analyze option for running ANALYZE after the command has completed. (#379)
    • New sqlite-utils bulk command which can import records in the same way as sqlite-utils insert (from JSON, CSV or TSV) and use them to bulk execute a parametrized SQL query. (#375)
    • The CLI tool can now also be run using python -m sqlite_utils. (#368)
    • Using --fmt now implies --table, so you don't need to pass both options. (#374)
    • The --convert function applied to rows can now modify the row in place. (#371)
    • The insert-files command supports two new columns: stem and suffix. (#372)
    • The --nl import option now ignores blank lines in the input. (#376)
    • Fixed bug where streaming input to the insert command with --batch-size 1 would appear to only commit after several rows had been ingested, due to unnecessary input buffering. (#364)
  • 3.20(Jan 6, 2022)

    • sqlite-utils insert ... --lines to insert the lines from a file into a table with a single line column, see Inserting unstructured data with --lines and --text.
    • sqlite-utils insert ... --text to insert the contents of the file into a table with a single text column and a single row.
    • sqlite-utils insert ... --convert allows a Python function to be provided that will be used to convert each row that is being inserted into the database. See Applying conversions while inserting data, including details on special behavior when combined with --lines and --text. (#356)
    • sqlite-utils convert now accepts a code value of - to read code from standard input. (#353)
    • sqlite-utils convert also now accepts code that defines a named convert(value) function, see Converting data in columns.
    • db.supports_strict property showing if the database connection supports SQLite strict tables.
    • table.strict property (see .strict) indicating if the table uses strict mode. (#344)
    • Fixed bug where sqlite-utils upsert ... --detect-types ignored the --detect-types option. (#362)
  • 3.19(Nov 21, 2021)

    • The table.lookup() method now accepts keyword arguments that match those on the underlying table.insert() method: foreign_keys=, column_order=, not_null=, defaults=, extracts=, conversions= and columns=. You can also now pass pk= to specify a different column name to use for the primary key. (#342)
  • 3.19a0(Nov 19, 2021)

  • 3.18(Nov 15, 2021)

    • The table.lookup() method now has an optional second argument which can be used to populate columns only the first time the record is created, see Working with lookup tables. (#339)
    • sqlite-utils memory now has a --flatten option for flattening nested JSON objects into separate columns, consistent with sqlite-utils insert. (#332)
    • table.create_index(..., find_unique_name=True) parameter, which finds an available name for the created index even if the default name has already been taken. This means that index-foreign-keys will work even if one of the indexes it tries to create clashes with an existing index name. (#335)
    • Added py.typed to the module, so mypy should now correctly pick up the type annotations. Thanks, Andreas Longo. (#331)
    • Now depends on python-dateutil instead of depending on dateutils. Thanks, Denys Pavlov. (#324)
    • table.create() (see Explicitly creating a table) now handles dict, list and tuple types, mapping them to TEXT columns in SQLite so that they can be stored encoded as JSON. (#338)
    • Inserted data with square braces in the column names (for example a CSV file containing an item[price] column) now has the braces converted to underscores: item_price_. Previously such columns would be rejected with an error. (#329)
    • Now also tested against Python 3.10. (#330)
  • 3.17.1(Sep 22, 2021)

  • 3.17(Aug 24, 2021)

  • 3.16(Aug 18, 2021)

    • Type signatures added to more methods, including table.resolve_foreign_keys(), db.create_table_sql(), db.create_table() and table.create(). (#314)
    • New db.quote_fts(value) method, see Quoting characters for use in search - thanks, Mark Neumann. (#246)
    • table.search() now accepts an optional quote=True parameter. (#296)
    • CLI command sqlite-utils search now accepts a --quote option. (#296)
    • Fixed bug where --no-headers and --tsv options to sqlite-utils insert could not be used together. (#295)
    • Various small improvements to API reference documentation.
  • 3.15.1(Aug 10, 2021)

    • Python library now includes type annotations on almost all of the methods, plus detailed docstrings describing each one. (#311)
    • New API Reference documentation page, powered by those docstrings.
    • Fixed bug where .add_foreign_keys() failed to raise an error if called against a View. (#313)
    • Fixed bug where .delete_where() returned [] instead of returning self if called against a non-existent table. (#315)
  • 3.15(Aug 9, 2021)

  • 3.14(Aug 2, 2021)

    This release introduces the new sqlite-utils convert command (#251) and corresponding table.convert(...) Python method (#302). These tools can be used to apply a Python conversion function to one or more columns of a table, either updating the column in place or using transformed data from that column to populate one or more other columns.

    This command-line example uses the Python standard library textwrap module to wrap the content of the content column in the articles table to 100 characters:

    $ sqlite-utils convert content.db articles content \
        '"\n".join(textwrap.wrap(value, 100))' \
        --import=textwrap
    

    The same operation in Python code looks like this:

    import sqlite_utils, textwrap
    
    db = sqlite_utils.Database("content.db")
    db["articles"].convert("content", lambda v: "\n".join(textwrap.wrap(v, 100)))
    

    See the full documentation for the sqlite-utils convert command and the table.convert(...) Python method for more details.

    Also in this release:

    • The new table.count_where(...) method, for counting rows in a table that match a specific SQL WHERE clause. (#305)
    • New --silent option for the sqlite-utils insert-files command to hide the terminal progress bar, consistent with the --silent option for sqlite-utils convert. (#301)
  • 3.13(Jul 24, 2021)

  • 3.12(Jun 25, 2021)

  • 3.11(Jun 20, 2021)

  • 3.10(Jun 19, 2021)

    This release introduces the sqlite-utils memory command, which can be used to load CSV or JSON data into a temporary in-memory database and run SQL queries (including joins across multiple files) directly against that data.

    Also new: sqlite-utils insert --detect-types, sqlite-utils dump, table.use_rowid plus some smaller fixes.

    sqlite-utils memory

    This example of sqlite-utils memory retrieves information about all of the repositories in the Dogsheep organization on GitHub using this JSON API, sorts them by their number of stars and outputs a table of the top five (using -t):

    $ curl -s 'https://api.github.com/users/dogsheep/repos'\
      | sqlite-utils memory - '
          select full_name, forks_count, stargazers_count
          from stdin order by stargazers_count desc limit 5
        ' -t
    full_name                            forks_count    stargazers_count
    ---------------------------------  -------------  ------------------
    dogsheep/twitter-to-sqlite                    12                 225
    dogsheep/github-to-sqlite                     14                 139
    dogsheep/dogsheep-photos                       5                 116
    dogsheep/dogsheep.github.io                    7                  90
    dogsheep/healthkit-to-sqlite                   4                  85
    

    The tool works against files on disk as well. This example joins data from two CSV files:

    $ cat creatures.csv
    species_id,name
    1,Cleo
    2,Bants
    2,Dori
    2,Azi
    $ cat species.csv
    id,species_name
    1,Dog
    2,Chicken
    $ sqlite-utils memory species.csv creatures.csv '
      select * from creatures join species on creatures.species_id = species.id
    '
    [{"species_id": 1, "name": "Cleo", "id": 1, "species_name": "Dog"},
     {"species_id": 2, "name": "Bants", "id": 2, "species_name": "Chicken"},
     {"species_id": 2, "name": "Dori", "id": 2, "species_name": "Chicken"},
     {"species_id": 2, "name": "Azi", "id": 2, "species_name": "Chicken"}]
    

    Here the species.csv file becomes the species table, the creatures.csv file becomes the creatures table, and the output is JSON, the default output format.

    You can also use the --attach option to attach existing SQLite database files to the in-memory database, in order to join data from CSV or JSON directly against your existing tables.

    Full documentation of this new feature is available in Querying data directly using an in-memory database. (#272)

    sqlite-utils insert --detect-types

    The sqlite-utils insert command can be used to insert data from JSON, CSV or TSV files into a SQLite database file. The new --detect-types option (shortcut -d), when used with a CSV or TSV import, will automatically detect whether columns in the file are integers or floating point numbers, rather than treating everything as text, and will create the new table with the corresponding schema. See Inserting CSV or TSV data for details. (#282)
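
    For example, reusing the dogs.csv file from the start of this page:

    $ sqlite-utils insert dogs.db dogs dogs.csv --csv --detect-types

    The id and age columns would then be created as INTEGER rather than TEXT.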

    Other changes

    • Bug fix: table.transform(), when run against a table without explicit primary keys, would incorrectly create a new version of the table with an explicit primary key column called rowid. (#284)
    • New table.use_rowid introspection property, see .use_rowid. (#285)
    • The new sqlite-utils dump file.db command outputs a SQL dump that can be used to recreate a database. (#274)
    • -h now works as a shortcut for --help, thanks Loren McIntyre. (#276)
    • Now using pytest-cov and Codecov to track test coverage - currently at 96%. (#275)
    • SQL errors that occur when using sqlite-utils query are now displayed as CLI errors.
  • 3.9.1(Jun 13, 2021)

  • 3.9(Jun 12, 2021)

  • 3.8(Jun 3, 2021)
