Records: SQL for Humans™

Records is a very simple, but powerful, library for making raw SQL queries to most relational databases.

Just write SQL. No bells, no whistles. This common task can be surprisingly difficult with the standard tools available. This library strives to make this workflow as simple as possible, while providing an elegant interface to work with your query results.

Database support includes RedShift, Postgres, MySQL, SQLite, Oracle, and MS-SQL (drivers not included).


☤ The Basics

We know how to write SQL, so let's send some to our database:

import records

db = records.Database('postgres://...')
rows = db.query('select * from active_users')    # or db.query_file('sqls/active-users.sql')

Grab one row at a time:

>>> rows[0]
<Record {"username": "model-t", "active": true, "name": "Henry Ford", "user_email": "model-t@gmail.com", "timezone": "2016-02-06 22:28:23.894202"}>

Or iterate over them:

for r in rows:
    print(r.name, r.user_email)

Values can be accessed many ways: row.user_email, row['user_email'], or row[3].

Fields with non-alphanumeric characters (like spaces) are also fully supported.
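
For instance, a quick sketch of those access styles ('full name' here is a hypothetical column, included only to show bracket access for a field with a space):

row = rows[0]
row.user_email       # attribute access
row['user_email']    # key access
row[3]               # index access, by column position
row['full name']     # bracket access for fields with spaces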

Or store a copy of your record collection for later reference:

>>> rows.all()
[<Record {"username": ...}>, <Record {"username": ...}>, <Record {"username": ...}>, ...]

If you're only expecting one result:

>>> rows.first()
<Record {"username": ...}>

Other options include rows.as_dict() and rows.as_dict(ordered=True).
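
A rough sketch of what those return (values elided; shapes assumed from the method names):

>>> rows.as_dict()
[{'username': 'model-t', 'active': True, ...}, ...]

>>> rows.as_dict(ordered=True)
[OrderedDict([('username', 'model-t'), ('active', True), ...]), ...]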

☤ Features

  • Iterated rows are cached for future reference.
  • $DATABASE_URL environment variable support.
  • Convenience Database.get_table_names method.
  • Command-line records tool for exporting queries.
  • Safe parameterization: Database.query('life=:everything', everything=42).
  • Queries can be passed as strings or filenames, parameters supported.
  • Transactions: t = Database.transaction(); t.commit().
  • Bulk actions: Database.bulk_query() & Database.bulk_query_file() (a combined sketch of parameterization, transactions, and bulk actions follows this list).
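
A combined sketch of the parameterization, transaction, and bulk-action bullets (the users table is hypothetical, and depending on the records version transaction() may behave as a context manager rather than returning a transaction object):

import records

db = records.Database()  # with no URL, falls back to $DATABASE_URL

# Safe parameterization: values are bound, never string-interpolated.
grown_ups = db.query('select * from users where age >= :min_age', min_age=18)

# Transactions, in the form listed above.
t = db.transaction()
try:
    db.query('update users set active = :a where age >= :min_age', a=True, min_age=18)
    t.commit()
except Exception:
    t.rollback()
    raise

# Bulk actions: one statement, a list of parameter dicts.
db.bulk_query('insert into users (name) values (:name)',
              [{'name': 'Ada'}, {'name': 'Grace'}])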

Records is proudly powered by SQLAlchemy and Tablib.

☤ Data Export Functionality

Records also features full Tablib integration, and allows you to export your results to CSV, XLS, JSON, HTML Tables, YAML, or Pandas DataFrames with a single line of code. Excellent for sharing data with friends, or generating reports.

>>> print(rows.dataset)
username|active|name      |user_email       |timezone
--------|------|----------|-----------------|--------------------------
model-t |True  |Henry Ford|model-t@gmail.com|2016-02-06 22:28:23.894202
...

Comma Separated Values (CSV)

>>> print(rows.export('csv'))
username,active,name,user_email,timezone
model-t,True,Henry Ford,model-t@gmail.com,2016-02-06 22:28:23.894202
...

YAML Ain't Markup Language (YAML)

>>> print(rows.export('yaml'))
- {active: true, name: Henry Ford, timezone: '2016-02-06 22:28:23.894202', user_email: model-t@gmail.com, username: model-t}
...

JavaScript Object Notation (JSON)

>>> print(rows.export('json'))
[{"username": "model-t", "active": true, "name": "Henry Ford", "user_email": "[email protected]", "timezone": "2016-02-06 22:28:23.894202"}, ...]

Microsoft Excel (xls, xlsx)

with open('report.xls', 'wb') as f:
    f.write(rows.export('xls'))

Pandas DataFrame

>>> rows.export('df')
    username  active       name        user_email                   timezone
0    model-t    True Henry Ford model-t@gmail.com 2016-02-06 22:28:23.894202

You get the point. All other features of Tablib are also available, so you can sort results, add/remove columns/rows, remove duplicates, transpose the table, add separators, slice data by column, and more.
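
For instance, a brief sketch of those manipulations through the underlying Tablib Dataset (standard Tablib API assumed):

ds = rows.dataset
ds.headers                  # column names
ds['username']              # slice out a single column
by_name = ds.sort('name')   # returns a new Dataset, sorted by the 'name' column
ds.remove_duplicates()      # de-duplicate rows in place
flipped = ds.transpose()    # returns a transposed copy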

See the Tablib Documentation for more details.

☤ Installation

Of course, the recommended installation method is pipenv:

$ pipenv install records[pandas]
✨🍰✨

☤ Command-Line Tool

As an added bonus, a records command-line tool is automatically included. Here's a screenshot of the usage information:

[Screenshot: usage information of the Records command-line interface.]
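
Usage follows the pattern visible in the issue reports below; a hedged example, with the connection string taken from the $DATABASE_URL environment variable:

$ export DATABASE_URL=postgres://user:pass@localhost/mydb
$ records "select * from active_users" csv > active_users.csv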

☤ Thank You

Thanks for checking this library out! I hope you find it useful.

Of course, there's always room for improvement. Feel free to open an issue so we can make Records better, stronger, faster.

Comments
  • use Postgres.py under the hood

    Over in Gratipay-land we've got a library called Postgres.py that has a very similar scope to Records-the-library (it has no CLI). It's been in production for three or four years, and as you say, it's "much more robust" than Records from an API point of view. I know you're opinionated about API design. I believe Postgres.py's API is worthy (or nearly so) of the "for Humans" appellation (and I'm not sure I didn't actually run that by you at some point in the past ;-).

    I like the Tablib integration and CLI that I see here on Records, and our APIs are pretty close.

    @kennethreitz Would you be open to a PR that switches Records to use Postgres.py under the hood? I'd envision keeping the CLI exactly the same. I think we'd need to talk through API—what gets wrapped, what gets exposed, what might be upstreamed.

    Waddya say? Up for collaborating? :-)

    question wontfix 
    opened by chadwhitacre 17
  • how to bulk_query?

    I saw:

    https://docs.sqlalchemy.org/en/latest/core/connections.html?highlight=multiparams#sqlalchemy.engine.Connection.execute.params.object

    but it does not work. Please tell me how to use bulk_query?

    opened by mouday 12
  • Add title and extension to license

    The title is not strictly required, but it's useful metadata, and part of the recommended license template text (see http://choosealicense.com/licenses/isc/ and https://opensource.org/licenses/isc-license)

    The extension helps with the display of the license on github (it activates text wrapping)

    opened by waldyrious 12
  • Maybe it's worth to make a version for the terminal?

    Sometimes it is convenient to view a table, or to take the data out as .json, .html, .csv, .yaml, or .xls, without touching the database directly. Writing a script for this takes a long time, and Unix does not ship with such programs, so users could be given, along with the library, a quicker way to import and export tables.

    enhancement 
    opened by dorosch 10
  • Failure on json export

    I have the following table in a postgresql database:

    elzilncu=> \d donor
                Table "public.donor"
     Column  |         Type          | Modifiers
    ---------+-----------------------+-----------
     donorno | integer               | not null
     dlname  | character varying(15) |
     dfname  | character varying(15) |
     dphone  | numeric(4,0)          |
     dstate  | character(2)          |
     dcity   | character varying(15) |
    Indexes:
        "pk_donor" PRIMARY KEY, btree (donorno)
    Referenced by:
        TABLE "gift" CONSTRAINT "fk_donatedby" FOREIGN KEY (donorno) REFERENCES donor(donorno)
    

    That I can dump pretty easily in CSV:

    ❯❯❯ records "select * from donor" csv
    donorno,dlname,dfname,dphone,dstate,dcity
    101,Abrams,Louis,9018,GA,London
    102,Aldinger,Dmitry,1521,GA,Paris
    103,Beckman,Gulsen,8247,WA,Sao Paulo
    104,Berdahl,Samuel,8149,WI,Sydney
    105,Borneman,Joanna,1888,MD,Bombay
    106,Brock,Scott,2142,AL,London
    107,Buyert,Aylin,9355,AK,New York
    108,Cetinsoy,Girwan,6346,AZ,Rome
    109,Chisholm,John,4482,MA,Oslo
    110,Crowder,Anthony,6513,NC,Stockholm
    111,Dishman,Michelle,3903,NC,Helsinki
    112,Duke,Peter,4939,FL,Tokyo
    113,Evans,Ann,4336,GA,Singapore
    114,Frawley,Todd,4785,MN,Perth
    115,Guo,John,6247,MN,Moscow
    116,Hammann,John,5369,ND,Kabaul
    117,Hays,Cami,1352,SD,Lima
    118,Herskowitz,Thomas,6872,MT,London
    119,Jefts,Robert,8103,ME,Oslo
    

    But the same operation fails if I try to export to JSON or YAML:

    ❯❯❯ records "select * from donor" json
    Traceback (most recent call last):
      File "/usr/local/lib/python3.5/site-packages/tablib/packages/omnijson/core.py", line 63, in dumps
        return _engine[1](o)
      File "/usr/local/Cellar/python3/3.5.0/Frameworks/Python.framework/Versions/3.5/lib/python3.5/json/__init__.py", line 230, in dumps
        return _default_encoder.encode(obj)
      File "/usr/local/Cellar/python3/3.5.0/Frameworks/Python.framework/Versions/3.5/lib/python3.5/json/encoder.py", line 199, in encode
        chunks = self.iterencode(o, _one_shot=True)
      File "/usr/local/Cellar/python3/3.5.0/Frameworks/Python.framework/Versions/3.5/lib/python3.5/json/encoder.py", line 257, in iterencode
        return _iterencode(o, 0)
      File "/usr/local/Cellar/python3/3.5.0/Frameworks/Python.framework/Versions/3.5/lib/python3.5/json/encoder.py", line 180, in default
        raise TypeError(repr(o) + " is not JSON serializable")
    TypeError: Decimal('9018') is not JSON serializable
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "/usr/local/bin/records", line 11, in <module>
        sys.exit(cli())
      File "/usr/local/lib/python3.5/site-packages/records.py", line 345, in cli
        print(rows.export(arguments['<format>']))
      File "/usr/local/lib/python3.5/site-packages/records.py", line 160, in export
        return self.dataset.export(format, **kwargs)
      File "/usr/local/lib/python3.5/site-packages/tablib/core.py", line 464, in export
        return export_set(self, **kwargs)
      File "/usr/local/lib/python3.5/site-packages/tablib/formats/_json.py", line 22, in export_set
        return json.dumps(dataset.dict, default=date_handler)
      File "/usr/local/lib/python3.5/site-packages/tablib/packages/omnijson/core.py", line 69, in dumps
        raise JSONError(why)
    tablib.packages.omnijson.core.JSONError: Decimal('9018') is not JSON serializable
    

    Here is the full stack trace from Python:

    In [3]: rows.export('json')
    ---------------------------------------------------------------------------
    NameError                                 Traceback (most recent call last)
    <ipython-input-3-5693f926b1b0> in <module>()
    ----> 1 rows.export('json')
    
    NameError: name 'rows' is not defined
    
    In [4]: rows = db.query('select * from donor')
    
    In [5]: rows.export('json')
    ---------------------------------------------------------------------------
    TypeError                                 Traceback (most recent call last)
    /usr/local/lib/python3.5/site-packages/tablib/packages/omnijson/core.py in dumps(o, **kwargs)
         62     try:
    ---> 63         return _engine[1](o)
         64
    
    /usr/local/Cellar/python3/3.5.0/Frameworks/Python.framework/Versions/3.5/lib/python3.5/json/__init__.py in dumps(obj, skipkeys, ensure_ascii, check_circular, allow_nan, cls, indent, separators, default, sort_keys, **kw)
        229         default is None and not sort_keys and not kw):
    --> 230         return _default_encoder.encode(obj)
        231     if cls is None:
    
    /usr/local/Cellar/python3/3.5.0/Frameworks/Python.framework/Versions/3.5/lib/python3.5/json/encoder.py in encode(self, o)
        198         # equivalent to the PySequence_Fast that ''.join() would do.
    --> 199         chunks = self.iterencode(o, _one_shot=True)
        200         if not isinstance(chunks, (list, tuple)):
    
    /usr/local/Cellar/python3/3.5.0/Frameworks/Python.framework/Versions/3.5/lib/python3.5/json/encoder.py in iterencode(self, o, _one_shot)
        256                 self.skipkeys, _one_shot)
    --> 257         return _iterencode(o, 0)
        258
    
    /usr/local/Cellar/python3/3.5.0/Frameworks/Python.framework/Versions/3.5/lib/python3.5/json/encoder.py in default(self, o)
        179         """
    --> 180         raise TypeError(repr(o) + " is not JSON serializable")
        181
    
    TypeError: Decimal('9018') is not JSON serializable
    
    During handling of the above exception, another exception occurred:
    
    JSONError                                 Traceback (most recent call last)
    <ipython-input-5-5693f926b1b0> in <module>()
    ----> 1 rows.export('json')
    
    /usr/local/lib/python3.5/site-packages/records.py in export(self, format, **kwargs)
        158     def export(self, format, **kwargs):
        159         """Export the RecordCollection to a given format (courtesy of Tablib)."""
    --> 160         return self.dataset.export(format, **kwargs)
        161
        162     @property
    
    /usr/local/lib/python3.5/site-packages/tablib/core.py in export(self, format, **kwargs)
        462             raise UnsupportedFormat('Format {0} cannot be exported.'.format(format))
        463
    --> 464         return export_set(self, **kwargs)
        465
        466     # -------
    
    /usr/local/lib/python3.5/site-packages/tablib/formats/_json.py in export_set(dataset)
         20 def export_set(dataset):
         21     """Returns JSON representation of Dataset."""
    ---> 22     return json.dumps(dataset.dict, default=date_handler)
         23
         24
    
    /usr/local/lib/python3.5/site-packages/tablib/packages/omnijson/core.py in dumps(o, **kwargs)
         67
         68         if any([(issubclass(ExceptionClass, e)) for e in _engine[2]]):
    ---> 69             raise JSONError(why)
         70         else:
         71             raise why
    
    JSONError: Decimal('9018') is not JSON serializable
    

    Additional info:

    In [18]: r = rows.next()
    
    In [19]: r.keys()
    Out[19]: ['donorno', 'dlname', 'dfname', 'dphone', 'dstate', 'dcity']
    
    In [20]: r.get('dfname')
    Out[20]: 'Dmitry'
    
    In [21]: r.get('dphone')
    Out[21]: Decimal('1521')
    
    In [22]: r.get('donorno')
    Out[22]: 102
    

    Even more info:

    In [25]: from sqlalchemy import *
    
    In [27]: db = create_engine(dbUrl)
    
    In [29]: cn = db.connect()
    
    In [30]: db.name()
    
    In [33]: res = db.execute("select * from donor")
    
    In [38]: res.first()
    Out[38]: (101, 'Abrams', 'Louis', Decimal('9018'), 'GA', 'London')
    

    The error may actually originate in Tablib or SQLAlchemy. If so, please let me know and I will move the issue.
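
    A possible workaround while this stands, assuming stringified Decimals are acceptable for the output: serialize the dicts yourself with a default handler (a sketch only; the default function below is hypothetical):

    import json
    from decimal import Decimal

    def default(o):
        # Hypothetical handler: render Decimals as strings instead of failing.
        if isinstance(o, Decimal):
            return str(o)
        raise TypeError(repr(o) + ' is not JSON serializable')

    print(json.dumps(rows.as_dict(), default=default))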

    bug 
    opened by retrography 8
  • Column mismatch on export

    records==0.2.0, Postgres 9.4.6

    Table schema:

    postgres@production=# \d cell_per
                              Table "public.cell_per"
         Column     |              Type              |        Modifiers         
    ----------------+--------------------------------+--------------------------
     line_id        | integer                        | not null
     category       | character varying(10)          | not null
     cell_per       | integer                        | not null
     ts_insert      | timestamp(0) without time zone | default now()
     ts_update      | timestamp(0) without time zone | 
     user_insert    | character varying(20)          | default "session_user"()
     user_update    | character varying(20)          | 
     plant_type     | character varying(6)           | not null
     season         | character varying(9)           | not null
     short_category | text                           | not null
    Indexes:
        "cell_per_pkey" PRIMARY KEY, btree (line_id)
    Foreign-key constraints:
        "fk_category" FOREIGN KEY (category) REFERENCES category(category) ON UPDATE CASCADE
    Triggers:
        cell_per_ts_update BEFORE UPDATE ON cell_per FOR EACH ROW EXECUTE PROCEDURE ts_update()
        cell_per_user_update BEFORE UPDATE ON cell_per FOR EACH ROW EXECUTE PROCEDURE user_update()
    

    Query:

    db = records.Database('postgresql://aklaver:@localhost/production')
    rs = db.query('select * from cell_per')
    
    >>> rs[0].line_id 
    4
    
    >>> print rs.export('csv')
    line_id,category,cell_per,ts_insert,ts_update,user_insert,user_update,plant_type,season,short_category
    2004-06-02T15:11:26,HERB 3.5,18,,2004-06-02 15:11:26,,postgres,herb,none,HR3
    

    When exporting, line_id takes on the value of ts_update. This happens across multiple export formats.

    invalid 
    opened by aklaver 8
  • support namedtuple rows as an alternative to dicts

    I'm a big fan of namedtuples for their safety, syntax, and performance benefits. This pull request adds a cursor_factory kwarg to Database.__init__ so users can pass in psycopg2.extras.NamedTupleCursor to opt in to getting namedtuples instead of dicts.

    ResultSet.dataset is updated to handle both dict and namedtuple row types.
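
    A sketch of the opt-in as described (the cursor_factory kwarg is this PR's proposal, not part of released records):

    import records
    from psycopg2.extras import NamedTupleCursor

    db = records.Database('postgres://...', cursor_factory=NamedTupleCursor)
    row = db.query('select * from active_users').first()
    row.username  # namedtuple attribute access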

    opened by btubbs 8
  • Would it be possible to issue a new release?

    The lazy connection feature added in fe0ed3199dd952d57bfa12ecdb6c69acd1c98ece is critical for many use cases (e.g. an API). Would you be so kind as to release a new version of records that contains it?

    Thanks, Kenneth.

    Charles

    question 
    opened by charlax 7
  • LICENSE: add title

    The title is not legally mandated, but for humans it's quite useful; it also works as additional metadata, and is part of the recommended license template text (see http://choosealicense.com/licenses/isc/ and https://opensource.org/licenses/isc-license)

    This PR is a re-submission of #71, following the conversation at https://github.com/kennethreitz/clint/pull/161.

    opened by waldyrious 7
  • Disable SQLAlchemy connection pooling

    The Database class provides no way to use it, and the current implementation means that a connection is held by the pool even after db.close().

    See also #67.

    opened by RazerM 7
  • Retrieve attributes using super instead of hard-coding

    You left a comment wanting to do it programmatically, and I found a way!

    >>> from records import Record
    >>> record = Record(['id','name'],[2,'Tobin'])
    >>> dir(record)
    ['__class__', '__delattr__', '__dir__', '__doc__', '__eq__', '__format__',
     '__ge__', '__getattr__', '__getattribute__', '__getitem__', '__gt__',
     '__hash__', '__init__', '__le__', '__lt__', '__module__', '__ne__',
     '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__',
     '__sizeof__', '__slots__', '__str__', '__subclasshook__', '_keys', '_values',
     'as_dict', 'dataset', 'export', 'get', 'id', 'keys', 'name', 'values']
    
    enhancement 
    opened by Brobin 7
  • sqlalchemy 2.0 support.

    sqlalchemy v2.0 is almost done, and a lot of features like async/await are involved.

    If this Python library could support sqlalchemy v2.0, it would be wonderful.

    opened by liudonghua123 0
  • removed a manual indexer iterator pitfall

    The problem: the code was iterating over the data manually, using an index:

    for i in range(len(row))
    

    and accessing the data manually as:

    row[i]
    

    but Python has a built-in way to handle this, the enumerate function:

    for index, element in enumerate(row):
    

    which makes access to the data easier.

    The solution: changed the manual indexing to the enumerated iterator.

    opened by NaelsonDouglas 0
  • Records is not compatible with Python 3.8+

    records on Python 3 is not compatible with the new SQLAlchemy release 1.4.0 (#208: https://github.com/kennethreitz/records/issues/208), but versions of SQLAlchemy older than 1.4.0 are not compatible with Python 3.8+ ("module 'time' has no attribute 'clock' in python3.8b1", sqlalchemy#4731: https://github.com/sqlalchemy/sqlalchemy/issues/4731).

    opened by robstel 0
  • db.query('SELECT * FROM persons') can't execute

    When executing examples/randomuser-sqlite.py, Python reports 'Cannot operate on a closed database'.

    When rows = db.query(...) is executed, the internal connection is perhaps already disposed. But the result (rows) is a generator, not data; when the result (rows) is used to do something, it uses the internal connection to read data, and then the error occurs.
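
    If that diagnosis is right, a possible workaround is to materialize the results while the connection is still usable, using .all() as described in the README above:

    rows = db.query('SELECT * FROM persons').all()  # force evaluation up front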

    opened by schlsj 0
  • records (v0.5.3) is incompatible with the latest Pandas (v1.2.4) because they require different openpyxl versions. Is there any way to fix this issue?

    Incompatible Issues

    • ImportError: Pandas requires version '2.6.0' or newer of 'openpyxl' (version '2.4.11' currently installed).
    • records 0.5.3 requires openpyxl<2.5.0, but you have openpyxl 3.0.7 which is incompatible

    Dependency Requirements:

    • records 0.5.3 requires openpyxl<2.5.0
    • Pandas 1.2.4 requires version '2.6.0' or newer of 'openpyxl'
    opened by baiyyee 2