Logica is a logic programming language that compiles to StandardSQL and runs on Google BigQuery.

Overview

Logica: language of Big Data

Logica is an open source declarative logic programming language for data manipulation. Logica is a successor to Yedalog, a language created earlier at Google.

Why?

Logica is for engineers, data scientists and other specialists who want to use logic programming syntax when writing queries and pipelines to run on BigQuery.

Logica compiles to StandardSQL and gives you access to the power of the BigQuery engine with the convenience of logic programming syntax. This is useful because BigQuery is orders of magnitude more powerful than state-of-the-art native logic programming engines.

We encourage you to try Logica, especially if

  • you already use logic programming and need more computational power, or
  • you use SQL, but feel unsatisfied about its readability, or
  • you want to learn logic programming and apply it to Big Data processing.

In the future we plan to support more SQL dialects and engines.

I have not heard of logic programming. What is it?

Logic programming is a declarative programming paradigm where the program is written as a set of logical statements.

Logic programming has been developed in academia since the late 1960s. Prolog and Datalog are the most prominent examples of logic programming languages. Logica is a language of the Datalog family.

Datalog and relational databases start from the same idea: think of data as relations and of data manipulation as a sequence of operations over those relations. But Datalog and SQL differ in how these operations are described: Datalog is inspired by the mathematical syntax of first-order logic, while SQL follows the syntax of natural language.

SQL was modeled on natural language to give people without formal training in computer programming or mathematics access to databases. This convenience can become costly when the logic that you want to express is nontrivial. There are many examples of hard-to-read SQL queries that correspond to simple logic programs, as the sketch below illustrates.
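
For illustration, consider a hypothetical Employee relation with name, salary and manager_name columns. Finding employees who earn more than their own manager takes a self-join in SQL, but in Logica it is a single rule. A sketch (the predicate and column names here are invented for the example):

# Employees who earn more than their own manager.
OverpaidEmployee(name:) :-
  Employee(name:, salary:, manager_name:),
  Employee(name: manager_name, salary: manager_salary),
  salary > manager_salary;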

How does Logica work?

Logica compiles the logic program into a SQL expression, so it can be executed on BigQuery, a state-of-the-art SQL engine.

Among database theoreticians, Datalog and SQL are known to be equivalent, and indeed the conversion from Datalog to SQL and back is often straightforward. However, there are a few nuances, for example how to treat disjunction and negation. In Logica we tried to make choices that keep the structure of the resulting SQL as easy to understand as possible, thus empowering users to write programs that execute efficiently.
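
For example, disjunction in Logica is written by giving a predicate several rules. A minimal sketch (the Author and Speaker predicates are invented for the example):

# A person is notable if they authored a paper or gave a talk.
Notable(person:) :- Author(person:);
Notable(person:) :- Speaker(person:);

Roughly speaking, the two rules compile to two SELECT statements combined with UNION ALL; deduplication happens only when the predicate is aggregating, i.e. declared distinct.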

Why is it called Logica?

Logica stands for Logic with aggregation.
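
Aggregation is part of the rule syntax: an aggregated column is marked with ? and an aggregating operator, and the rule is marked distinct, as in the news mentions example below. A minimal sketch with an invented Employee predicate:

# Count employees in each role.
RoleCount(role:, headcount? += 1) distinct :- Employee(role:);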

How to learn?

Learn the basics of Logica with the CoLab tutorial located in the tutorial folder. See examples of using Logica in the examples folder.

The tutorial and examples show how to access Logica from CoLab. You can also install the Logica command-line tool.

Prerequisites

To run Logica programs on BigQuery you will need a Google Cloud Project. Once you have a project, you can run Logica programs in CoLab by providing your project id, as sketched below.
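
A minimal sketch of a CoLab setup cell, assuming the colab_logica helper used throughout the tutorial and examples (treat the exact call names as assumptions and consult the tutorial):

# Import the CoLab integration shipped with Logica.
from logica import colab_logica
# Point Logica at your Google Cloud Project; the id is a placeholder.
colab_logica.SetProject('my-project-id')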

To run Logica locally you need Python 3.

To run Logica predicates from the command line you will need bq, the BigQuery command-line tool. For that, install the Google Cloud SDK; a possible setup is sketched below.
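
One possible setup sequence, assuming the Google Cloud SDK is already installed (the project id is a placeholder):

# Authenticate and select the project that bq should use.
gcloud auth login
gcloud config set project my-project-id
# Sanity check: list datasets to confirm bq can reach the project.
bq ls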

Installation

A Google Cloud Project is the only thing you need to run Logica in CoLab; see the Hello World example.

You can install the Logica command with pip as follows.

# Install.
python3 -m pip install logica
# Run to see the usage message.
python3 -m logica
# Print SQL for the Hello World program.
python3 -m logica - print Greet <<<'Greet(greeting: "Hello world!")'

If your PATH includes Python's bin folder then you will also be able to run it simply as

logica - print Greet <<<'Greet(greeting: "Hello world!")'

Alternatively, you can clone the GitHub repository:

git clone https://github.com/evgskv/logica
cd logica
./logica - print Greet <<<'Greet(greeting: "Hello world!")'

Code samples

Here are a couple of examples of what Logica code looks like.

Prime numbers

Find prime numbers less than 30.

Program primes.l:

# Define numbers from 0 to 29.
N(x) :- x in Range(30);
# Define primes: x is prime if no y with 1 < y != x divides it.
Prime(prime: x) :-
  N(x),
  x > 1,
  # Negation: no proper divisor of x exists in N.
  ~(
    N(y),
    y > 1,
    y != x,
    x % y == 0
  );

Running primes.l

$ logica primes.l run Prime
+-------+
| prime |
+-------+
|     2 |
|     3 |
|     5 |
|     7 |
|    11 |
|    13 |
|    17 |
|    19 |
|    23 |
|    29 |
+-------+

News mentions

Who was mentioned in the news the most in 2020? Let's query the GDELT Project dataset.

Program mentions.l:

# Keep only the top 10 rows, ordered by number of mentions.
@OrderBy(Mentions, "mentions desc");
@Limit(Mentions, 10);
# Count mentions per person over GDELT records from 2020.
Mentions(person:, mentions? += 1) distinct :-
  gdelt-bq.gdeltv2.gkg(persons:, date:),
  Substr(ToString(date), 0, 4) == "2020",
  # The persons column is a ";"-separated list; split and unnest it.
  the_persons == Split(persons, ";"),
  person in the_persons;

Running mentions.l

$ logica mentions.l run Mentions
+----------------+----------+
|     person     | mentions |
+----------------+----------+
| donald trump   |  3624228 |
| joe biden      |  1591320 |
| los angeles    |  1221998 |
| george floyd   |   923472 |
| boris johnson  |   845955 |
| barack obama   |   541672 |
| vladimir putin |   486428 |
| bernie sanders |   409224 |
| andrew cuomo   |   375594 |
| nancy pelosi   |   375373 |
+----------------+----------+

Note that the city of Los Angeles appears in this table due to a known misclassification issue in the GDELT data analysis.

Feedback

Feel free to create GitHub issues for bugs and feature requests.

Your questions and comments are welcome in our GitHub discussions!


Unless otherwise noted, the Logica source files are distributed under the Apache 2.0 license found in the LICENSE file.

This is not an officially supported Google product.

Comments
  • Initialize VS Code highlighting extension.

    This is the initial version of the VS Code highlighting extension. Here I try to structure the TextMate grammar file in the BNF way, but I've made simplifications here and there to produce a basic version for a start. Instructions on quick running, development and installation can be found in README.md.

    opened by EricWay1024 16
  • CamelCase weirdness

    👋 Hello! I'm having great fun playing around with Logica. Thanks for the project.

    I ran into something a bit strange and was wondering if it is a bug.

    I have a fact and rule like so:

    Tuple(namespace: "document", id: "1", rel: "writer", user: "id", userId: "badger", user_namespace: "", user_object_id: "");
    
    List(namespace:, id:, rel:, user:, userId:, user_namespace:, user_object_id:) :-
        Tuple(namespace:, id:, rel:, user:, userId:, user_namespace:, user_object_id:);
    

    and if I try to run this as is, it returns a parsing error:

    Parsing:
    List(namespace:, id:, rel:, user:, userId:, user_namespace:, user_object_id:) :-
        Tuple(namespace:, id:, rel:, user:, userId:, user_namespace:, user_object_id:)
    
    [ Error ] Could not parse expression of a value.
    

    If I parametrize userId (with an x) as in the following:

    Tuple(namespace: "document", id: "1", rel: "writer", user: "id", userId: "badger", user_namespace: "", user_object_id: "");
    
    List(namespace:, id:, rel:, user:, userId: x, user_namespace:, user_object_id:) :-
        Tuple(namespace:, id:, rel:, user:, userId: x, user_namespace:, user_object_id:);
    

    then it works fine:

    (env) $ logica zanzibar.l run List
    +-----------+----+--------+--------+----------+----------------+----------------+
    | namespace | id | rel    | user   | userId   | user_namespace | user_object_id |
    +-----------+----+--------+--------+----------+----------------+----------------+
    | document  | 1  | writer | id     | badger   |                |                |
    +-----------+----+--------+--------+----------+----------------+----------------+
    

    Similarly, if I don't parametrize, but use snake case user_id then it also works fine.

    So, should I always just stick with snake_case?

    Thanks for any insights 🙌

    opened by craigpastro 6
  • How to translate case when?

    In BigQuery with Firebase data.

    
    SELECT
      user_pseudo_id,
      event_timestamp,
      CASE event_name
        WHEN 'screen_view' THEN
          event_name || (SELECT param.value.string_value
                         FROM UNNEST(event_params) AS param
                         WHERE param.key = "firebase_screen_class")
        ELSE event_name
      END AS name
    FROM `bq.x.x`
    ORDER BY event_timestamp
    

    I can't find an example of this in the Logica tutorial.ipynb documentation.

    Can we support raw SQL in Logica? Then we could write complicated SQL in Logica.

    opened by jiamo 5
  • Integrating DuckDB

    Hi, I'm interested in using DuckDB (https://duckdb.org/) as a data source for Logica and I was wondering what the steps to integrate it would be. It's like SQLite for analytics, and its dialect is a subset of the PostgreSQL syntax.

    We could use sqlalchemy, but 1) I don't know how to proceed after that, and 2) I think the ORM design of sqlalchemy may make it slow if we are returning a lot of data (just a hypothesis based on https://github.com/duckdb/duckdb/issues/305).

    So the question is twofold.

    1. If we use the sqlalchemy interface, the setup code would be something like this.
    # Install Logica.
    !pip install logica
    !pip install duckdb
    !pip install duckdb-engine

    # Connect to the database.
    from logica import colab_logica
    from sqlalchemy import create_engine

    engine = create_engine('duckdb:///:memory:')
    connection = engine.connect()
    colab_logica.SetDbConnection(connection)
    

    but how do I use it after that?

    2. If I don't want to use the sqlalchemy interface, how would I go about doing that? The general usage is:
    import duckdb
    con = duckdb.connect(database=':memory:', read_only=False) #can also use a file like sqlite
    # create a table
    con.execute("CREATE TABLE items(item VARCHAR, value DECIMAL(10,2), count INTEGER)")
    # insert two items into the table
    con.execute("INSERT INTO items VALUES ('jeans', 20.0, 1), ('hammer', 42.2, 2)")
    
    # retrieve the items again
    con.execute("SELECT * FROM items")
    print(con.fetchall())
    # [('jeans', 20.0, 1), ('hammer', 42.2, 2)]
    
    opened by RAbraham 5
  • Import search path

    In version 1.3.14, the import resolves relative to the current directory. We are trying to deploy Logica files embedded with our Python package, so they can be anywhere on sys.path. Would it be possible to change the parser so that a list of directories (like sys.path) could be provided for resolving import?

    opened by mhfrantz 4
  • Blog post with examples

    Whoever wrote the article here does not seem to understand databases much. https://opensource.googleblog.com/2021/04/logica-organizing-your-data-queries.html

    SELECT 2 AS x
    UNION ALL
    SELECT 3 AS x
    UNION ALL
    SELECT 5 AS x;

    Would never happen. You'd have a table of ints.

    opened by kpconnell 4
  • Imports in an imported file cannot be found

    Version 1.3.14.15.92

    Imports in an imported file cannot be found if they are relative to a path in import_root. They can only be found if they are relative to the directory Logica is initially called from.

    To reproduce using integration_tests/import_root_test.l change the imports in integration_tests/import_tests/canada.l to be relative to the import_root used for the test:

    import usa.ProductUSA;
    import usa.SoftwareEngineer;
    

    The test fails to find usa.l even though usa.l is on the import_root given to the test.

    opened by jesseagreene 3
  • Update VS Code syntax highlighting extension.

    I've now fixed some problems left in the last PR and the highlighting is now error-free in terms of user experience (at least from my side). (I still didn't strictly follow the BNF in constructing the syntax because I realized it's not necessary for now as long as the highlighting gets its job done.) I haven't packaged and published this new version yet in case I need to make further modifications. If you notice anything wrong with the highlighting, please tell me. Thanks!

    cla: yes 
    opened by EricWay1024 3
  • Number of arguments of `TimestampSub()`

    Since BigQuery's TIMESTAMP_SUB() has two arguments, I assume that TimestampSub() also has two arguments. However, the following predicate will result in an error.

    A(t: TimestampSub(CurrentDate("Asia/Tokyo"), SqlExpr("INTERVAL 9 HOUR",{})));
    
    Compiling:
    A(t: TimestampSub(CurrentDate("Asia/Tokyo"), SqlExpr("INTERVAL 9 HOUR",{})))
    
    [ Error ] Built-in function TimestampSub takes (3, 3) arguments, but 2 arguments were given.
    
    opened by tkawachi 3
  • Does it work on BigQuery?

    Hi, I have a question: does it work on BigQuery, and will it work in the future? I have not figured out how to test it yet, but if it really works in the Google Cloud console, I will definitely start testing it in the near future.

    opened by kingtelepuz5 3
  • Aggregating an Ordered Set in BigQuery — ARRAY_AGG(DISTINCT ... ORDER BY ...) is not supported! 😭

    I'm trying to generate a set ordered by another value, but I'm really struggling to pull it off. I'm not sure how to work around BigQuery's limitations, after trying quite a few attempts. Without some clever workaround for BigQuery not supporting ARRAY_AGG(DISTINCT ... ORDER BY ...), the only other way I see is JOINing another select after the GROUP BY is performed, but I'm also struggling to figure out how to pull that off in Logica as well. Anyways, love the project! Overall it's been really cool! 🚀

    Code

    Here's the most straightforward attempt

    %%logica Test
    
    @Engine("bigquery");
    
    OrderedSet(a) = SqlExpr("ARRAY_AGG(DISTINCT {value} ORDER BY {arg})", {value: a.value, arg: a.arg});
    
    TestOrderedSet(g:, ids_by_t? Array= t -> id) distinct :-
      Data(g:, id:, t:);
    
    Data(g: 1, id: 1, t: 5);
    Data(g: 1, id: 2, t: 4);
    Data(g: 1, id: 2, t: 4);
    Data(g: 1, id: 3, t: 3);
    Data(g: 1, id: 3, t: 3);
    Data(g: 1, id: 4, t: 2);
    Data(g: 1, id: 5, t: 1);
    
    Data(g: 2, id: 10, t: 1);
    Data(g: 2, id: 10, t: 1);
    Data(g: 2, id: 20, t: 2);
    Data(g: 2, id: 20, t: 0);
    Data(g: 2, id: 30, t: 3);
    
    Expected(g: 1, ids_by_t: [5, 4, 3, 2, 1]);
    Expected(g: 2, ids_by_t: [20, 10, 30]);
    Expected(g: 3, ids_by_t: [20, 10, 30]);
    
    Test(g:, actual:, expected:) :-
      TestOrderedSet(g:, ids_by_t: actual),
      Expected(g:, ids_by_t: expected);
    

    Output

    And the error 😢

    BadRequest: 400 An aggregate function that has both DISTINCT and ORDER BY arguments can only ORDER BY expressions that are arguments to the function at [77:39]
    
                       -----Query Job SQL Follows-----                   
    
        |    .    |    .    |    .    |    .    |    .    |    .    |
       1:WITH t_2_Data AS (SELECT * FROM (
       2:  
       3:    SELECT
       4:      1 AS g,
       5:      1 AS id,
       6:      5 AS t
       7:   UNION ALL
       8:  
       9:    SELECT
      10:      1 AS g,
      11:      2 AS id,
      12:      4 AS t
      13:   UNION ALL
      14:  
      15:    SELECT
      16:      1 AS g,
      17:      2 AS id,
      18:      4 AS t
      19:   UNION ALL
      20:  
      21:    SELECT
      22:      1 AS g,
      23:      3 AS id,
      24:      3 AS t
      25:   UNION ALL
      26:  
      27:    SELECT
      28:      1 AS g,
      29:      3 AS id,
      30:      3 AS t
      31:   UNION ALL
      32:  
      33:    SELECT
      34:      1 AS g,
      35:      4 AS id,
      36:      2 AS t
      37:   UNION ALL
      38:  
      39:    SELECT
      40:      1 AS g,
      41:      5 AS id,
      42:      1 AS t
      43:   UNION ALL
      44:  
      45:    SELECT
      46:      2 AS g,
      47:      10 AS id,
      48:      1 AS t
      49:   UNION ALL
      50:  
      51:    SELECT
      52:      2 AS g,
      53:      10 AS id,
      54:      1 AS t
      55:   UNION ALL
      56:  
      57:    SELECT
      58:      2 AS g,
      59:      20 AS id,
      60:      2 AS t
      61:   UNION ALL
      62:  
      63:    SELECT
      64:      2 AS g,
      65:      20 AS id,
      66:      0 AS t
      67:   UNION ALL
      68:  
      69:    SELECT
      70:      2 AS g,
      71:      30 AS id,
      72:      3 AS t
      73:  
      74:) AS UNUSED_TABLE_NAME  ),
      75:t_0_TestOrderedSet AS (SELECT
      76:  Data.g AS g,
      77:  ARRAY_AGG(DISTINCT Data.id ORDER BY Data.t) AS ids_by_t
      78:FROM
      79:  t_2_Data AS Data
      80:GROUP BY g),
      81:t_3_Expected AS (SELECT * FROM (
      82:  
      83:    SELECT
      84:      1 AS g,
      85:      ARRAY[5, 4, 3, 2, 1] AS ids_by_t
      86:   UNION ALL
      87:  
      88:    SELECT
      89:      2 AS g,
      90:      ARRAY[20, 10, 30] AS ids_by_t
      91:   UNION ALL
      92:  
      93:    SELECT
      94:      3 AS g,
      95:      ARRAY[20, 10, 30] AS ids_by_t
      96:  
      97:) AS UNUSED_TABLE_NAME  )
      98:SELECT
      99:  TestOrderedSet.g AS g,
     100:  TestOrderedSet.ids_by_t AS actual,
     101:  Expected.ids_by_t AS expected
     102:FROM
     103:  t_0_TestOrderedSet AS TestOrderedSet, t_3_Expected AS Expected
     104:WHERE
     105:  (Expected.g = TestOrderedSet.g)
        |    .    |    .    |    .    |    .    |    .    |    .    |
    

    Generated SQL

    And the generated SQL for the lazy

    WITH t_2_Data AS (SELECT * FROM (
      
        SELECT
          1 AS g,
          1 AS id,
          5 AS t
       UNION ALL
      
        SELECT
          1 AS g,
          2 AS id,
          4 AS t
       UNION ALL
      
        SELECT
          1 AS g,
          2 AS id,
          4 AS t
       UNION ALL
      
        SELECT
          1 AS g,
          3 AS id,
          3 AS t
       UNION ALL
      
        SELECT
          1 AS g,
          3 AS id,
          3 AS t
       UNION ALL
      
        SELECT
          1 AS g,
          4 AS id,
          2 AS t
       UNION ALL
      
        SELECT
          1 AS g,
          5 AS id,
          1 AS t
       UNION ALL
      
        SELECT
          2 AS g,
          10 AS id,
          1 AS t
       UNION ALL
      
        SELECT
          2 AS g,
          10 AS id,
          1 AS t
       UNION ALL
      
        SELECT
          2 AS g,
          20 AS id,
          2 AS t
       UNION ALL
      
        SELECT
          2 AS g,
          20 AS id,
          0 AS t
       UNION ALL
      
        SELECT
          2 AS g,
          30 AS id,
          3 AS t
      
    ) AS UNUSED_TABLE_NAME  ),
    t_0_TestOrderedSet AS (SELECT
      Data.g AS g,
      ARRAY_AGG(DISTINCT Data.id ORDER BY Data.t) AS ids_by_t
    FROM
      t_2_Data AS Data
    GROUP BY g),
    t_3_Expected AS (SELECT * FROM (
      
        SELECT
          1 AS g,
          ARRAY[5, 4, 3, 2, 1] AS ids_by_t
       UNION ALL
      
        SELECT
          2 AS g,
          ARRAY[20, 10, 30] AS ids_by_t
       UNION ALL
      
        SELECT
          3 AS g,
          ARRAY[20, 10, 30] AS ids_by_t
      
    ) AS UNUSED_TABLE_NAME  )
    SELECT
      TestOrderedSet.g AS g,
      TestOrderedSet.ids_by_t AS actual,
      Expected.ids_by_t AS expected
    FROM
      t_0_TestOrderedSet AS TestOrderedSet, t_3_Expected AS Expected
    WHERE
      (Expected.g = TestOrderedSet.g);
    
    opened by munro 2
  • Engineers example broken under sqlite

    Code:

    %%logica Engineers
    @Engine("sqlite");
    Employee(name: "Alice", role: "Product Manager");
    Employee(name: "Bob", role: "Engineer");
    Employee(name: "Caroline", role: "Engineer");
    Employee(name: "David", role: "Data Scientist");
    Employee(name: "Eve", role: "Data Scientist");
    Engineers(..r) :- Employee(..r), r.role == "Engineer";
    

    Error:

    OperationalError: no such column: Employee
    

    I know that at this stage meaningful error messages may be a tall order but if it could perhaps print out the SQL statement it should be fairly easy to see what went wrong.

    Also, sincerely thank you for this project: it is inspired.

    opened by emiruz 1
  • Add checks of predicate arguments

    It will take time to implement type inference. In the meantime we could have a simpler check: when a predicate is called (including database tables) and arguments are used, verify that the predicate has the corresponding columns.

    opened by EvgSkv 0
  • Add type inference

    We would like to infer the types of predicate arguments, including reading the types of database columns. Then we can display a native error to the user, rather than compiling to SQL and having the user debug an SQL error.

    opened by EvgSkv 0
  • how to launch a subset query?

    hi, does Logica have a subset function? I want to compare one set to another, and return a boolean value showing whether the two sets are equal or one is a subset of the other, e.g.:

    SUBSET({ }, { }) = TRUE
    SUBSET({ 1, 2, 3 }, { }) = TRUE
    SUBSET({ 1, 2 }, { 1, 2 }) = TRUE
    SUBSET({ 1, 2, 3 }, { 1, 2 }) = TRUE
    SUBSET({ 1, 3, 5 }, { 1, 2 }) = FALSE

    many thanks!

    opened by sunriseXu 1