An open source multi-tool for exploring and publishing data

Overview

Datasette

Datasette is a tool for exploring and publishing data. It helps people take data of any shape or size and publish that as an interactive, explorable website and accompanying API.

Datasette is aimed at data journalists, museum curators, archivists, local governments and anyone else who has data that they wish to share with the world.

Explore a demo, watch a video about the project or try it out by uploading and publishing your own CSV data.

Want to stay up-to-date with the project? Subscribe to the Datasette Weekly newsletter for tips, tricks and news on what's new in the Datasette ecosystem.

Installation

If you are on a Mac, Homebrew is the easiest way to install Datasette:

brew install datasette

You can also install it using pip or pipx:

pip install datasette

Or:

pipx install datasette

Datasette requires Python 3.7 or higher. We also have detailed installation instructions covering other options such as Docker.

Basic usage

datasette serve path/to/database.db

This will start a web server on port 8001 - visit http://localhost:8001/ to access the web interface.

serve is the default subcommand, so you can omit it if you like.
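
For example, these two commands are equivalent:

datasette serve path/to/database.db
datasette path/to/database.db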

Use Chrome on OS X? You can run datasette against your browser history like so:

datasette ~/Library/Application\ Support/Google/Chrome/Default/History

Now visiting http://localhost:8001/History/downloads will show you a web interface to browse your downloads data:

Downloads table rendered by datasette

datasette serve options

Usage: datasette serve [OPTIONS] [FILES]...

  Serve up specified SQLite database files with a web UI

Options:
  -i, --immutable PATH      Database files to open in immutable mode
  -h, --host TEXT           Host for server. Defaults to 127.0.0.1 which means
                            only connections from the local machine will be
                            allowed. Use 0.0.0.0 to listen to all IPs and
                            allow access from other machines.
  -p, --port INTEGER        Port for server, defaults to 8001
  --reload                  Automatically reload if database or code change
                            detected - useful for development
  --cors                    Enable CORS by serving Access-Control-Allow-
                            Origin: *
  --load-extension PATH     Path to a SQLite extension to load
  --inspect-file TEXT       Path to JSON file created using "datasette
                            inspect"
  -m, --metadata FILENAME   Path to JSON file containing license/source
                            metadata
  --template-dir DIRECTORY  Path to directory containing custom templates
  --plugins-dir DIRECTORY   Path to directory containing custom plugins
  --static STATIC MOUNT     mountpoint:path-to-directory for serving static
                            files
  --memory                  Make /_memory database available
  --config CONFIG           Set config option using configname:value
                            docs.datasette.io/en/stable/config.html
  --version-note TEXT       Additional note to show on /-/versions
  --help-config             Show available config options
  --help                    Show this message and exit.

metadata.json

If you want to include licensing and source information in the generated Datasette website, you can do so using a JSON file that looks something like this:

{
    "title": "Five Thirty Eight",
    "license": "CC Attribution 4.0 License",
    "license_url": "http://creativecommons.org/licenses/by/4.0/",
    "source": "fivethirtyeight/data on GitHub",
    "source_url": "https://github.com/fivethirtyeight/data"
}

Save this in metadata.json and run Datasette like so:

datasette serve fivethirtyeight.db -m metadata.json

The license and source information will be displayed on the index page and in the footer. They will also be included in the JSON produced by the API.
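
Metadata can also be nested to cover individual databases and tables in the same file. A sketch of that shape (the database and table names here are illustrative):

{
    "title": "Five Thirty Eight",
    "databases": {
        "fivethirtyeight": {
            "source": "fivethirtyeight/data on GitHub",
            "tables": {
                "polls": {
                    "description": "One row per poll"
                }
            }
        }
    }
}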

datasette publish

If you have Heroku or Google Cloud Run configured, Datasette can deploy one or more SQLite databases to the internet with a single command:

datasette publish heroku database.db

Or:

datasette publish cloudrun database.db

This will create a Docker image containing both the Datasette application and the specified SQLite database files. It will then deploy that image to Heroku or Cloud Run and give you a URL to access the resulting website and API.

See Publishing data in the documentation for more details.
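
The publish commands accept further options - for example the -m metadata.json flag shown earlier, and --install for baking plugins into the deployment. A sketch (the plugin name is just an example):

datasette publish cloudrun database.db -m metadata.json --install=datasette-vega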

Comments
  • Upgrade to CodeMirror 6, add SQL autocomplete

    In an effort to get closer to table / column autocomplete I took a shot at https://github.com/simonw/datasette/issues/1796. I haven't done a lot of testing but would be curious if this fixes some of the concerns raised in https://github.com/simonw/datasette/issues/1796#issue-1355148385 for example.

    Done:

    • Changed to bundling using rollup as per https://codemirror.net/examples/bundle/
    • Restored a fromTextArea-like function from https://codemirror.net/docs/migration/
    • Removed old JS and CSS files (no external CSS needed anymore as per https://codemirror.net/examples/styling/)
    • Updated instructions for building the bundle

    Not done:

    • cmResize had an error, so commented out the resize handle
    • Add extraKeys option for shift+enter and tab

    Documentation preview: https://datasette--1893.org.readthedocs.build/en/1893/

    opened by bgrins 48
  • Rethink how .ext formats (v.s. ?_format=) works before 1.0

    Datasette currently has surprising special behaviour if a table name ends in .csv - which can happen when a tool like csvs-to-sqlite creates tables that match the filenames they were imported from.

    https://latest.datasette.io/fixtures/table%2Fwith%2Fslashes.csv illustrates this behaviour: it links to .csv and .json that look like this:

    • https://latest.datasette.io/fixtures/table%2Fwith%2Fslashes.csv?_format=json
    • https://latest.datasette.io/fixtures/table%2Fwith%2Fslashes.csv?_format=csv&_size=max

    Where normally Datasette would add the .csv or .json extension to the path component of the URL (as seen on other pages such as https://latest.datasette.io/fixtures/facet_cities), here the path_with_format() function notices that there is already a . in the path and adds ?_format=csv to the query string instead.

    The problem with this mechanism is that it's pretty surprising. Anyone writing external code against Datasette who wants to get back the .csv or .json version given the URL to a table page will need to know about and implement this behaviour themselves. That's likely to cause all kinds of bugs in the future.

    research 
    opened by simonw 48
  • Updated Dockerfile with SpatiaLite version 5.0

    The version bundled in Datasette's Docker image right now is 4.4.0-RC0

    https://github.com/simonw/datasette/blob/d0fd833b8cdd97e1b91d0f97a69b494895d82bee/Dockerfile#L16-L17

    5 has been out for a couple of months and has a bunch of big improvements, most notably stable KNN support.

    datasette-publish spatialite 
    opened by simonw 45
  • Port Datasette to ASGI

    Datasette doesn't take much advantage of Sanic, and I'm increasingly having to work around parts of it because of idiosyncrasies that are specific to Datasette - caring about the exact order of query string arguments, for example.

    Since Datasette is GET-only, our needs from a web framework are actually pretty slim.

    This becomes more important as I expand the plugins #14 framework. Am I sure I want the plugin ecosystem to depend on Sanic if I might move away from it in the future?

    If Datasette wasn't all about async/await I would use WSGI, but today it makes more sense to use ASGI. I'd like to be confident that switching to ASGI would still give me the excellent performance that Sanic provides.

    https://github.com/django/asgiref/blob/master/specs/asgi.rst

    large feature 
    opened by simonw 42
  • Authentication (and permissions) as a core concept

    Right now Datasette authentication is provided exclusively by plugins:

    • https://github.com/simonw/datasette-auth-github
    • https://github.com/simonw/datasette-auth-existing-cookies

    This is an all-or-nothing approach: either your Datasette instance requires authentication at the top level or it does not.

    But... as I build new plugins like https://github.com/simonw/datasette-configure-fts and https://github.com/simonw/datasette-edit-tables I increasingly have individual features which should be reserved for logged-in users while still wanting other parts of Datasette to be open to all.

    This is too much for plugins to own independently of Datasette core. Datasette needs to ship a single "user is authenticated" concept (independent of how users actually sign in) so that different plugins can integrate with it.

    large documentation plugins feature authentication-and-permissions 
    opened by simonw 40
  • invoke_startup() is not run in some conditions, e.g. gunicorn/uvicorn workers, breaking lots of things

    In the past (pre-September 14, #1809) I had a running deployment of Datasette on Azure WebApps by emulating the call in cli.py to Gunicorn: gunicorn -w 2 -k uvicorn.workers.UvicornWorker app:app.

    My most recent deployment, however, fails loudly by shouting that Datasette.invoke_startup() was not called. It does not seem to be possible to call invoke_startup when running directly under a uvicorn command like this (I've reproduced this locally using uvicorn). Two candidates that I have tried:

    • Uvicorn has a --factory option, but the app factory has to be synchronous, so no await invoke_startup there
    • asyncio.get_event_loop().run_until_complete is also not an option because uvicorn already has the event loop running.

    One additional option is:

    • Use Gunicorn's server hooks to call invoke_startup. These are also synchronous, but I might be able to get ahead of the event loop starting here.

    In my current deployment setup, it does not appear to be possible to use datasette serve directly, so I'm stuck either

    • Trying to rework my complete deployment setup, for instance, using Azure Functions as described here
    • Or digging into the ASGI spec and writing a wrapper for the sole purpose of launching Datasette using a direct Uvicorn invocation (see the sketch below)
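
    A minimal sketch of that second idea - untested, and assuming only the documented Datasette.invoke_startup() coroutine - would defer startup until the first request arrives inside the running event loop:

    from datasette import Datasette

    ds = Datasette(files=['./global-power-plants.db'])
    inner = ds.app()
    started = False

    async def app(scope, receive, send):
        global started
        if not started:
            # invoke_startup() must be awaited, so run it once from
            # inside the already-running event loop on the first request
            await ds.invoke_startup()
            started = True
        await inner(scope, receive, send)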

    Questions for the maintainers:

    • Is this intended behaviour / a won't-fix / etc.? If so, I'd be happy to add a PR with a couple of lines in the documentation.
    • If this is not intended behaviour, what is a good way to fix it? I could have a go at the ASGI spec thing (I think the Azure Functions thing is related) and provide a PR with the wrapper here, but I'm all ears!

    Almost forgot, minimal reproducer:

    from datasette import Datasette
    
    ds = Datasette(files=['./global-power-plants.db'])
    app = ds.app()
    

    Save as app.py in the same folder as global-power-plants.db, and then try running uvicorn app:app.

    Opening the resulting Datasette instance in the browser will show the error message.

    bug 
    opened by Rik-de-Kort 36
  • Deploy a live instance of demos/apache-proxy

    I'll get this working on my laptop first, but then I want to get it up and running on Cloud Run - maybe with a GitHub Actions workflow in this repo that re-deploys it on manual execution.

    Originally posted by @simonw in https://github.com/simonw/datasette/issues/1521#issuecomment-974322178

    I started by following https://ahmet.im/blog/cloud-run-multiple-processes-easy-way/ - see example in https://github.com/ahmetb/multi-process-container-lazy-solution

    help wanted ci ops docker 
    opened by simonw 34
  • await datasette.client.get(path) mechanism for executing internal requests

    datasette-graphql works by making internal requests to the TableView class (in order to take advantage of existing pagination logic, plus options like ?_search= and ?_where=) - see #915

    I want to support a mod_rewrite style mechanism for putting nicer URLs on top of Datasette pages - I botched that together for a project here using an internal ASGI proxying trick: https://github.com/natbat/tidepools_near_me/commit/ec102c6da5a5d86f17628740d90b6365b671b5e1

    If the datasette object provided a documented method for executing internal requests (in a way that makes sense with logging etc - i.e. doesn't get logged as a separate request) both of these use-cases would be much neater.
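
    Here's a sketch of how a plugin might use the proposed mechanism - the route, database and table names are hypothetical:

    from datasette import hookimpl, Response

    @hookimpl
    def register_routes():
        async def tidepools_near(datasette, request):
            # Proxy internally to an existing table page, reusing its
            # pagination and ?_search= handling
            response = await datasette.client.get(
                "/tidepools/pools.json?_search=" + request.url_vars["place"]
            )
            return Response.json(response.json())
        return [(r"^/near/(?P<place>.*)$", tidepools_near)]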

    plugins feature 
    opened by simonw 33
  • Maintain an in-memory SQLite table of connected databases and their tables

    I want Datasette to have its own internal metadata about connected tables, to power features like a paginated searchable homepage in #461. I want this to be a SQLite table.

    This could also be part of the directory scanning mechanism prototyped in #672 - where Datasette can be set to continually scan a directory for new database files that it can serve.

    Also relevant to the Datasette Library concept in #417.
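
    A sketch of the general idea (the table shapes are purely illustrative, not a settled design):

    import sqlite3

    # In-memory catalog of connected databases and their tables
    catalog = sqlite3.connect(":memory:")
    catalog.execute("CREATE TABLE databases (name TEXT PRIMARY KEY, path TEXT)")
    catalog.execute(
        "CREATE TABLE tables (database_name TEXT, table_name TEXT, "
        "PRIMARY KEY (database_name, table_name))"
    )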

    large feature 
    opened by simonw 32
  • Ability to sort (and paginate) by column

    As requested in https://github.com/simonw/datasette/issues/185#issuecomment-376614973

    I've previously avoided this for performance reasons: sort-by-column on a column without an index is likely to perform badly for hundreds of thousands of rows.

    That's not a good enough reason to avoid the feature entirely though. A few options:

    • Allow sort-by-column by default, give users the option to disable it for specific tables/columns
    • Disallow sort-by-column by default, give users option (probably in metadata.json) to enable it for specific tables/columns
    • Automatically detect if a column either has an index on it OR the table has fewer than X rows in it

    We already have the mechanism in place to cut off SQL queries that take more than X seconds, so if someone DOES try to sort by a column that's too expensive it won't actually hurt anything - but it would be nice to not show people a "sort" option which is guaranteed to throw a timeout error.

    The vast majority of datasette usage that I've seen so far is on smaller datasets where the performance penalties of sort-by-column are extremely unlikely to show up.
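
    For the third option, here's a sketch of how an index check could work using SQLite pragmas (illustrative only, not Datasette code):

    import sqlite3

    def column_is_indexed(conn, table, column):
        # True if column is the leading column of any index on the table
        for index in conn.execute("PRAGMA index_list([{}])".format(table)):
            first = conn.execute(
                "PRAGMA index_info([{}])".format(index[1])
            ).fetchone()
            if first is not None and first[2] == column:
                return True
        return False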


    Still left to do:

    • [x] UI that shows which sort order is currently being applied (in HTML and in JSON)
    • [x] UI for applying a sort order (with rel=nofollow to avoid Google crawling it)
    • [x] Sort column names should be escaped correctly in generated SQL
    • [x] Validation that the selected sort order is a valid column
    • [x] Throw error if user attempts to apply _sort AND _sort_desc at the same time
    • [x] Ability to disable sorting (or sort only for specific columns) in metadata.json
    • [x] Fix "201 rows where sorted by sortable_with_nulls " bug
    medium 
    opened by simonw 31
  • Default API token authentication mechanism

    API authentication will be via Authorization: Bearer XXX request headers.

    I'm inclined to add a default token mechanism to Datasette based on tokens that are signed with the DATASETTE_SECRET. Maybe the root user can access /-/create-token which provides a UI for generating a time-limited signed token? Could also have a datasette token command for creating such tokens at the command-line.
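
    A rough sketch of what such a token could look like, built on the existing datasette.sign() mechanism (the field names and dstok_ prefix are just suggestions):

    import time

    def create_api_token(datasette, actor_id, ttl_seconds=3600):
        # Sign the actor ID plus creation time and duration using the
        # DATASETTE_SECRET; verification would unsign and check expiry
        return "dstok_" + datasette.sign(
            {"a": actor_id, "t": int(time.time()), "d": ttl_seconds},
            namespace="token",
        )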

    Plugins can then define alternative ways of creating tokens, such as the existing https://datasette.io/plugins/datasette-auth-tokens plugin.

    Originally posted by @simonw in https://github.com/simonw/datasette/issues/1850#issuecomment-1289706439

    json-api authentication-and-permissions 
    opened by simonw 30
  • _col=id can cause id column to export twice in CSV export

    https://datasette.simonwillison.net/simonwillisonblog/blog_entry.csv?_col=id&_col=title&_col=body&_labels=on&_size=1

    id,id,title,body
    1,1,WaSP Phase II,"<p>The <a href=""http://www.webstandards.org/"">Web Standards</a> project has launched Phase II.</p>"
    

    That should not have two id columns.

    bug csv 
    opened by simonw 0
  • Bump sphinx from 5.3.0 to 6.0.0

    Bumps sphinx from 5.3.0 to 6.0.0.

    Release notes

    Sourced from sphinx's releases.

    v6.0.0

    Changelog: https://www.sphinx-doc.org/en/master/changes.html

    v6.0.0b2

    Changelog: https://www.sphinx-doc.org/en/master/changes.html

    v6.0.0b1

    Changelog: https://www.sphinx-doc.org/en/master/changes.html

    Changelog

    Sourced from sphinx's changelog.

    Release 6.0.0 (released Dec 29, 2022)

    Dependencies

    • #10468: Drop Python 3.6 support
    • #10470: Drop Python 3.7, Docutils 0.14, Docutils 0.15, Docutils 0.16, and Docutils 0.17 support. Patch by Adam Turner

    Incompatible changes

    • #7405: Removed the jQuery and underscore.js JavaScript frameworks.

      These frameworks are no longer automatically injected into themes from Sphinx 6.0. If you develop a theme or extension that uses the jQuery, $, or $u global objects, you need to update your JavaScript to modern standards, or use the mitigation below.

      The first option is to use the sphinxcontrib.jquery extension (https://github.com/sphinx-contrib/jquery/), which has been developed by the Sphinx team and contributors. To use this, add sphinxcontrib.jquery to the extensions list in conf.py, or call app.setup_extension("sphinxcontrib.jquery") if you develop a Sphinx theme or extension.

      The second option is to manually ensure that the frameworks are present. To re-add jQuery and underscore.js, you will need to copy jquery.js and underscore.js from the Sphinx repository to your static directory, and add the following to your layout.html:

      {%- block scripts %}
          {{ super() }}
      {%- endblock %}

      Patch by Adam Turner.

    • #10471, #10565: Removed deprecated APIs scheduled for removal in Sphinx 6.0. See the deprecated APIs section of the Sphinx documentation for details. Patch by Adam Turner.

    • #10901: C Domain: Remove support for parsing pre-v3 style type directives and roles. Also remove associated configuration variables c_allow_pre_v3 and c_warn_on_allowed_pre_v3. Patch by Adam Turner.

    Features added

    ... (truncated)

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)

    Documentation preview: https://datasette--1974.org.readthedocs.build/en/1974/

    dependencies 
    opened by dependabot[bot] 1
  • render_cell plugin hook's row object is not a sqlite.Row

    From https://docs.datasette.io/en/stable/plugin_hooks.html#render-cell-row-value-column-table-database-datasette:

    row - sqlite.Row: The SQLite row object that the value being rendered is part of

    This appears to actually be a CustomRow, but I think that's unrelated to my issue.

    I have a table:

    CREATE TABLE IF NOT EXISTS "dss_job_stats"(
      job_id integer not null references dss_job(id) on delete cascade,
      host text not null,
      -- other columns elided as irrelevant
      primary key (job_id, host)
    );
    

    On datasette 0.63.2, the render_cell hook receives a row value that looks like:

    CustomRow([('job_id', {'value': 2, 'label': '2'}), ('host', 'cldellow.com')])
    

    I expected the job_id value to be 2, but it's actually {'value': 2, 'label': '2'}.

    I can work around this, but was wondering if this was intended behaviour?
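
    In case it's useful to others, here's a small workaround sketch (assuming the label/value dicts only ever appear in the shape shown above):

    def cell_value(row, column):
        # Unwrap {'value': ..., 'label': ...} dicts passed for foreign keys
        raw = row[column]
        if isinstance(raw, dict) and set(raw) == {"value", "label"}:
            return raw["value"]
        return raw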

    bug documentation plugins 
    opened by cldellow 3
  • Upgrade for Sphinx 6.0 (once Furo has support for it)

    A deployment of #1967 to ReadTheDocs just failed like this: https://readthedocs.org/projects/datasette/builds/19045460/

    Running Sphinx v6.0.0
    making output directory... done
    building [mo]: targets for 0 po files that are out of date
    building [html]: targets for 28 source files that are out of date
    updating environment: [new config] 28 added, 0 changed, 0 removed
    reading sources... [  3%] authentication
    reading sources... [  7%] binary_data
    reading sources... [ 10%] changelog
    
    Traceback (most recent call last):
      File "/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/docutils/statemachine.py", line 299, in next_line
        self.line = self.input_lines[self.line_offset]
      File "/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/docutils/statemachine.py", line 1136, in __getitem__
        return self.data[i]
    IndexError: list index out of range
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/docutils/statemachine.py", line 226, in run
        self.next_line()
      File "/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/docutils/statemachine.py", line 302, in next_line
        raise EOFError
    EOFError
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/sphinx/cmd/build.py", line 281, in build_main
        app.build(args.force_all, args.filenames)
      File "/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/sphinx/application.py", line 344, in build
        self.builder.build_update()
      File "/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/sphinx/builders/__init__.py", line 310, in build_update
        self.build(to_build,
      File "/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/sphinx/builders/__init__.py", line 326, in build
        updated_docnames = set(self.read())
      File "/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/sphinx/builders/__init__.py", line 433, in read
        self._read_serial(docnames)
      File "/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/sphinx/builders/__init__.py", line 454, in _read_serial
        self.read_doc(docname)
      File "/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/sphinx/builders/__init__.py", line 510, in read_doc
        publisher.publish()
      File "/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/docutils/core.py", line 224, in publish
        self.document = self.reader.read(self.source, self.parser,
      File "/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/sphinx/io.py", line 103, in read
        self.parse()
      File "/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/docutils/readers/__init__.py", line 76, in parse
        self.parser.parse(self.input, document)
      File "/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/sphinx/parsers.py", line 78, in parse
        self.statemachine.run(inputlines, document, inliner=self.inliner)
      File "/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/docutils/parsers/rst/states.py", line 169, in run
        results = StateMachineWS.run(self, input_lines, input_offset,
      File "/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/docutils/statemachine.py", line 233, in run
        context, next_state, result = self.check_line(
      File "/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/docutils/statemachine.py", line 445, in check_line
        return method(match, context, next_state)
      File "/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/docutils/parsers/rst/states.py", line 3024, in text
        self.section(title.lstrip(), source, style, lineno + 1, messages)
      File "/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/docutils/parsers/rst/states.py", line 325, in section
        self.new_subsection(title, lineno, messages)
      File "/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/docutils/parsers/rst/states.py", line 391, in new_subsection
        newabsoffset = self.nested_parse(
      File "/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/docutils/parsers/rst/states.py", line 279, in nested_parse
        state_machine.run(block, input_offset, memo=self.memo,
      File "/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/docutils/parsers/rst/states.py", line 195, in run
        results = StateMachineWS.run(self, input_lines, input_offset)
      File "/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/docutils/statemachine.py", line 233, in run
        context, next_state, result = self.check_line(
      File "/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/docutils/statemachine.py", line 445, in check_line
        return method(match, context, next_state)
      File "/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/docutils/parsers/rst/states.py", line 2785, in underline
        self.section(title, source, style, lineno - 1, messages)
      File "/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/docutils/parsers/rst/states.py", line 325, in section
        self.new_subsection(title, lineno, messages)
      File "/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/docutils/parsers/rst/states.py", line 391, in new_subsection
        newabsoffset = self.nested_parse(
      File "/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/docutils/parsers/rst/states.py", line 279, in nested_parse
        state_machine.run(block, input_offset, memo=self.memo,
      File "/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/docutils/parsers/rst/states.py", line 195, in run
        results = StateMachineWS.run(self, input_lines, input_offset)
      File "/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/docutils/statemachine.py", line 233, in run
        context, next_state, result = self.check_line(
      File "/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/docutils/statemachine.py", line 445, in check_line
        return method(match, context, next_state)
      File "/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/docutils/parsers/rst/states.py", line 1273, in bullet
        i, blank_finish = self.list_item(match.end())
      File "/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/docutils/parsers/rst/states.py", line 1295, in list_item
        self.nested_parse(indented, input_offset=line_offset,
      File "/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/docutils/parsers/rst/states.py", line 279, in nested_parse
        state_machine.run(block, input_offset, memo=self.memo,
      File "/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/docutils/parsers/rst/states.py", line 195, in run
        results = StateMachineWS.run(self, input_lines, input_offset)
      File "/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/docutils/statemachine.py", line 239, in run
        result = state.eof(context)
      File "/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/docutils/parsers/rst/states.py", line 2725, in eof
        self.blank(None, context, None)
      File "/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/docutils/parsers/rst/states.py", line 2716, in blank
        paragraph, literalnext = self.paragraph(
      File "/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/docutils/parsers/rst/states.py", line 416, in paragraph
        textnodes, messages = self.inline_text(text, lineno)
      File "/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/docutils/parsers/rst/states.py", line 425, in inline_text
        nodes, messages = self.inliner.parse(text, lineno,
      File "/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/docutils/parsers/rst/states.py", line 649, in parse
        before, inlines, remaining, sysmessages = method(self, match,
      File "/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/docutils/parsers/rst/states.py", line 792, in interpreted_or_phrase_ref
        nodelist, messages = self.interpreted(rawsource, escaped, role,
      File "/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/docutils/parsers/rst/states.py", line 889, in interpreted
        nodes, messages2 = role_fn(role, rawsource, text, lineno, self)
      File "/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/sphinx/ext/extlinks.py", line 101, in role
        title = caption % part
    TypeError: not all arguments converted during string formatting
    
    Exception occurred:
      File "/home/docs/checkouts/readthedocs.org/user_builds/datasette/envs/latest/lib/python3.9/site-packages/sphinx/ext/extlinks.py", line 101, in role
        title = caption % part
    TypeError: not all arguments converted during string formatting
    The full traceback has been saved in /tmp/sphinx-err-kq7ylgqo.log, if you want to report the issue to the developers.
    Please also report this if it was a user error, so that a better error message can be provided next time.
    A bug report can be filed in the tracker at <https://github.com/sphinx-doc/sphinx/issues>. Thanks! 
    
    bug documentation blocked 
    opened by simonw 3
  • sql-formatter javascript is now not working with CloudFlare Rocket Loader

    This is probably not a bug with datasette, but I thought you might want to know, @simonw.

    I noticed today that my CloudFlare proxied datasette instance lost the "Format SQL" option. I'm pretty sure it was there last week.

    In the CloudFlare settings, if I turn off Rocket Loader, I get the "Format SQL" option back.

    Rocket Loader works by asynchronously loading the javascript, so maybe there was a recent change that doesn't play well with the async loading?

    I'm up to date with https://github.com/simonw/datasette/commit/e03aed00026cc2e59c09ca41f69a247e1a85cc89

    opened by fgregg 0
  • Allow to hide some queries in metadata.yml

    By default all queries are displayed.

    But there are many cases where it would be interesting to hide the queries by default:

    • the website is targeting non-tech people
    • the query is veeeeeery long (eg.)
    • reading the query is not important for the users, they only want to see the result

    Of course, the user could still have the option to see the query.

    It could be an option in the metadata file:

    databases:
      awesome_db:
        tables:
          products:
            hide_sql: true
        queries:
          great_query:
            hide_sql: true
            sql: select * from products where code = :barcode
    

    The priority could be:

    • no option in the metadata and nothing in the URL: query displayed
    • hide_sql in the metadata and nothing in the URL: query shown or hidden as specified in the metadata
    • hide_sql in the metadata and &_hide_sql= in the URL: query shown or hidden as specified in the URL

    See also: #1824

    opened by CharlesNepote 0
Releases (latest: 0.63.3)
  • 0.63.3(Dec 18, 2022)

    • Fixed a bug where datasette --root, when running in Docker, would only output the URL to sign in as root when the server shut down, not when it started up. (#1958)
    • You no longer need to ensure await datasette.invoke_startup() has been called in order for Datasette to start correctly serving requests - this is now handled automatically the first time the server receives a request. This fixes a bug experienced when Datasette is served directly by an ASGI application server such as Uvicorn or Gunicorn. It also fixes a bug with the datasette-gunicorn plugin. (#1955)
  • 1.0a2(Dec 15, 2022)

    The third Datasette 1.0 alpha release adds upsert support to the JSON API, plus the ability to specify finely grained permissions when creating an API token.

    See Datasette 1.0a2: Upserts and finely grained permissions for an extended, annotated version of these release notes.

    • New /db/table/-/upsert API, documented here. upsert is an update-or-insert: existing rows will have specified keys updated, but if no row matches the incoming primary key a brand new row will be inserted instead. (#1878)
    • New register_permissions(datasette) plugin hook. Plugins can now register named permissions, which will then be listed in various interfaces that show available permissions. (#1940)
    • The /db/-/create API for creating a table now accepts "ignore": true and "replace": true options when called with the "rows" property that creates a new table based on an example set of rows. This means the API can be called multiple times with different rows, setting rules for what should happen if a primary key collides with an existing row. (#1927)
    • Arbitrary permissions can now be configured at the instance, database and resource (table, SQL view or canned query) level in Datasette's Metadata JSON and YAML files. The new "permissions" key can be used to specify which actors should have which permissions. See Other permissions in metadata for details. (#1636)
    • The /-/create-token page can now be used to create API tokens which are restricted to just a subset of actions, including against specific databases or resources. See API Tokens for details. (#1947)
    • Likewise, the datasette create-token CLI command can now create tokens with a subset of permissions. (#1855)
    • New datasette.create_token() API method for programmatically creating signed API tokens. (#1951)
    • /db/-/create API now requires actor to have insert-row permission in order to use the "row" or "rows" properties. (#1937)
  • 1.0a1(Dec 1, 2022)

    • Write APIs now serve correct CORS headers if Datasette is started in --cors mode. See the full list of CORS headers in the documentation. (#1922)
    • Fixed a bug where the _memory database could be written to even though writes were not persisted. (#1917)
    • The https://latest.datasette.io/ demo instance now includes an ephemeral database which can be used to test Datasette's write APIs, using the new datasette-ephemeral-tables plugin to drop any created tables after five minutes. This database is only available if you sign in as the root user using the link on the homepage. (#1915)
    • Fixed a bug where hitting the write endpoints with a GET request returned a 500 error. It now returns a 405 (method not allowed) error instead. (#1916)
    • The list of endpoints in the API explorer now lists mutable databases first. (#1918)
    • The "ignore": true and "replace": true options for the insert API are now documented. (#1924)
  • 1.0a0(Nov 29, 2022)

    This first alpha release of Datasette 1.0 introduces a brand new collection of APIs for writing to the database (#1850), as well as a new API token mechanism baked into Datasette core. Previously, API tokens were only supported by installing additional plugins.

    This is very much a preview: expect many more backwards incompatible API changes prior to the full 1.0 release.

    Feedback enthusiastically welcomed, either through issue comments or via the Datasette Discord community.

    Signed API tokens

    • New /-/create-token page allowing authenticated users to create signed API tokens that can act on their behalf, see API Tokens. (#1852)
    • New datasette create-token command for creating tokens from the command line: datasette create-token.
    • New allow_signed_tokens setting which can be used to turn off signed token support. (#1856)
    • New max_signed_tokens_ttl setting for restricting the maximum allowed duration of a signed token. (#1858)

    Write API

  • 0.63.2(Nov 19, 2022)

    • Fixed a bug in datasette publish heroku where deployments failed due to an older version of Python being requested. (#1905)
    • New datasette publish heroku --generate-dir <dir> option for generating a Heroku deployment directory without deploying it.
  • 0.63.1(Nov 11, 2022)

    • Fixed a bug where Datasette's table filter form would not redirect correctly when run behind a proxy using the base_url setting. (#1883)
    • SQL query is now shown wrapped in a <textarea> if a query exceeds a time limit. (#1876)
    • Fixed an intermittent "Too many open files" error while running the test suite. (#1843)
    • New db.close() internal method.
  • 0.63(Oct 27, 2022)

    See Datasette 0.63: The annotated release notes for more background on the changes in this release.

    Features

    • Now tested against Python 3.11. Docker containers used by datasette publish and datasette package both now use that version of Python. (#1853)
    • --load-extension option now supports entrypoints. Thanks, Alex Garcia. (#1789)
    • Facet size can now be set per-table with the new facet_size table metadata option. (#1804)
    • The truncate_cells_html setting now also affects long URLs in columns. (#1805)
    • The non-JavaScript SQL editor textarea now increases height to fit the SQL query. (#1786)
    • Facets are now displayed with better line-breaks in long values. Thanks, Daniel Rech. (#1794)
    • The settings.json file used in Configuration directory mode is now validated on startup. (#1816)
    • SQL queries can now include leading SQL comments, using /* ... */ or -- ... syntax. Thanks, Charles Nepote. (#1860)
    • SQL query is now re-displayed when terminated with a time limit error. (#1819)
    • The inspect data mechanism is now used to speed up server startup - thanks, Forest Gregg. (#1834)
    • In Configuration directory mode databases with filenames ending in .sqlite or .sqlite3 are now automatically added to the Datasette instance. (#1646)
    • Breadcrumb navigation display now respects the current user's permissions. (#1831)

    Plugin hooks and internals

    • The prepare_jinja2_environment(env, datasette) plugin hook now accepts an optional datasette argument. Hook implementations can also now return an async function which will be awaited automatically. (#1809)
    • Database(is_mutable=) now defaults to True. (#1808)
    • The datasette.check_visibility() method now accepts an optional permissions= list, allowing it to take multiple permissions into account at once when deciding if something should be shown as public or private. This has been used to correctly display padlock icons in more places in the Datasette interface. (#1829)
    • Datasette no longer enforces upper bounds on its dependencies. (#1800)

    Documentation

  • 0.63a1(Oct 24, 2022)

  • 0.63a0(Sep 26, 2022)

    • The prepare_jinja2_environment(env, datasette) plugin hook now accepts an optional datasette argument. Hook implementations can also now return an async function which will be awaited automatically. (#1809)
    • --load-extension option now supports entrypoints. Thanks, Alex Garcia. (#1789)
    • New tutorial: Cleaning data with sqlite-utils and Datasette.
    • Facet size can now be set per-table with the new facet_size table metadata option. (#1804)
    • truncate_cells_html setting now also affects long URLs in columns. (#1805)
    • Database(is_mutable=) now defaults to True. (#1808)
    • Non-JavaScript textarea now increases height to fit the SQL query. (#1786)
    • More detailed command descriptions on the CLI reference page. (#1787)
    • Datasette no longer enforces upper bounds on its dependencies. (#1800)
    • Facets are now displayed with better line-breaks in long values. Thanks, Daniel Rech. (#1794)
    • The settings.json file used in Configuration directory mode is now validated on startup. (#1816)
  • 0.62(Aug 14, 2022)

    Datasette can now run entirely in your browser using WebAssembly. Try out Datasette Lite, take a look at the code or read more about it in Datasette Lite: a server-side Python web application running in a browser.

    Datasette now has a Discord community for questions and discussions about Datasette and its ecosystem of projects.

    Features

    • Datasette is now compatible with Pyodide. This is the enabling technology behind Datasette Lite. (#1733)
    • Database file downloads now implement conditional GET using ETags. (#1739)
    • HTML for facet results and suggested results has been extracted out into new templates _facet_results.html and _suggested_facets.html. Thanks, M. Nasimul Haque. (#1759)
    • Datasette now runs some SQL queries in parallel. This has limited impact on performance, see this research issue for details.
    • New --nolock option for ignoring file locks when opening read-only databases. (#1744)
    • Spaces in the database names in URLs are now encoded as + rather than ~20. (#1701)
    • <Binary: 2427344 bytes> is now displayed as <Binary: 2,427,344 bytes> and is accompanied by a tooltip showing "2.3MB". (#1712)
    • The base Docker image used by datasette publish cloudrun, datasette package and the official Datasette image has been upgraded to 3.10.6-slim-bullseye. (#1768)
    • Canned writable queries against immutable databases now show a warning message. (#1728)
    • datasette publish cloudrun has a new --timeout option which can be used to increase the time limit applied by the Google Cloud build environment. Thanks, Tim Sherratt. (#1717)
    • datasette publish cloudrun has new --min-instances and --max-instances options. (#1779)

    Plugin hooks

    • New plugin hook: handle_exception(), for custom handling of exceptions caught by Datasette. (#1770)
    • The render_cell() plugin hook is now also passed a row argument, representing the sqlite3.Row object that is being rendered. (#1300)
    • The configuration directory is now stored in datasette.config_dir, making it available to plugins. Thanks, Chris Amico. (#1766)

    Bug fixes

    • Don't show the facet option in the cog menu if faceting is not allowed. (#1683)
    • ?_sort and ?_sort_desc now work if the column that is being sorted has been excluded from the query using ?_col= or ?_nocol=. (#1773)
    • Fixed bug where ?_sort_desc was duplicated in the URL every time the Apply button was clicked. (#1738)

    Documentation

  • 0.62a1(Jul 18, 2022)

    • New plugin hook: handle_exception(), for custom handling of exceptions caught by Datasette. (#1770)
    • The render_cell() plugin hook is now also passed a row argument, representing the sqlite3.Row object that is being rendered. (#1300)
    • New --nolock option for ignoring file locks when opening read-only databases. (#1744)
    • Documentation now uses the Furo Sphinx theme. (#1746)
    • Datasette now has a Discord community.
    • Database file downloads now implement conditional GET using ETags. (#1739)
    • Examples in the documentation now include a copy-to-clipboard button. (#1748)
    • HTML for facet results and suggested results has been extracted out into new templates _facet_results.html and _suggested_facets.html. Thanks, M. Nasimul Haque. (#1759)
  • 0.62a0(May 2, 2022)

    • Datasette now runs some SQL queries in parallel. This has limited impact on performance, see this research issue for details.
    • Datasette should now be compatible with Pyodide. (#1733)
    • datasette publish cloudrun has a new --timeout option which can be used to increase the time limit applied by the Google Cloud build environment. Thanks, Tim Sherratt. (#1717)
    • Spaces in database names are now encoded as + rather than ~20. (#1701)
    • <Binary: 2427344 bytes> is now displayed as <Binary: 2,427,344 bytes> and is accompanied by a tooltip showing "2.3MB". (#1712)
    • Don't show the facet option in the cog menu if faceting is not allowed. (#1683)
    • Code examples in the documentation are now all formatted using Black. (#1718)
    • Request.fake() method is now documented, see Request object.
  • 0.61.1(Mar 23, 2022)

  • 0.61(Mar 23, 2022)

    In preparation for Datasette 1.0, this release includes two potentially backwards-incompatible changes. Hashed URL mode has been moved to a separate plugin, and the way Datasette generates URLs to databases and tables with special characters in their name such as / and . has changed.

    Datasette also now requires Python 3.7 or higher.

    See also the annotated release notes.

    • URLs within Datasette now use a different encoding scheme for tables or databases that include "special" characters outside of the range of a-zA-Z0-9_-. This scheme is explained here: Tilde encoding. (#1657)
    • Removed hashed URL mode from Datasette. The new datasette-hashed-urls plugin can be used to achieve the same result, see datasette-hashed-urls for details. (#1661)
    • Databases can now have a custom path within the Datasette instance that is independent of the database name, using the db.route property. (#1668)
    • Datasette is now covered by a Code of Conduct. (#1654)
    • Python 3.6 is no longer supported. (#1577)
    • Tests now run against Python 3.11-dev. (#1621)
    • New datasette.ensure_permissions(actor, permissions) internal method for checking multiple permissions at once. (#1675)
    • New datasette.check_visibility(actor, action, resource=None) internal method for checking if a user can see a resource that would otherwise be invisible to unauthenticated users. (#1678)
    • Table and row HTML pages now include a <link rel="alternate" type="application/json+datasette" href="..."> element and return a Link: URL; rel="alternate"; type="application/json+datasette" HTTP header pointing to the JSON version of those pages. (#1533)
    • Access-Control-Expose-Headers: Link is now added to the CORS headers, allowing remote JavaScript to access that header.
    • Canned queries are now shown at the top of the database page, directly below the SQL editor. Previously they were shown at the bottom, below the list of tables. (#1612)
    • Datasette now has a default favicon. (#1603)
    • sqlite_stat tables are now hidden by default. (#1587)
    • SpatiaLite tables data_licenses, KNN and KNN2 are now hidden by default. (#1601)
    • SQL query tracing mechanism now works for queries executed in asyncio sub-tasks, such as those created by asyncio.gather(). (#1576)
    • datasette.tracer mechanism is now documented.
    • Common Datasette symbols can now be imported directly from the top-level datasette package, see Import shortcuts. Those symbols are Response, Forbidden, NotFound, hookimpl, actor_matches_allow. (#957)
    • /-/versions page now returns additional details for libraries used by SpatiaLite. (#1607)
    • Documentation now links to the Datasette Tutorials.
    • Datasette will now also look for SpatiaLite in /opt/homebrew - thanks, Dan Peterson. (#1649)
    • Fixed bug where custom pages did not work on Windows. Thanks, Robert Christie. (#1545)
    • Fixed error caused when a table had a column named n. (#1228)
  • 0.61a0(Mar 20, 2022)

    • Removed hashed URL mode from Datasette. The new datasette-hashed-urls plugin can be used to achieve the same result, see datasette-hashed-urls for details. (#1661)
    • Databases can now have a custom path within the Datasette instance that is independent of the database name, using the db.route property. (#1668)
    • URLs within Datasette now use a different encoding scheme for tables or databases that include "special" characters outside of the range of a-zA-Z0-9_-. This scheme is explained here: Tilde encoding. (#1657)
    • Table and row HTML pages now include a <link rel="alternate" type="application/json+datasette" href="..."> element and return a Link: URL; rel="alternate"; type="application/json+datasette" HTTP header pointing to the JSON version of those pages. (#1533)
    • Access-Control-Expose-Headers: Link is now added to the CORS headers, allowing remote JavaScript to access that header.
    • Canned queries are now shown at the top of the database page, directly below the SQL editor. Previously they were shown at the bottom, below the list of tables. (#1612)
    • Datasette now has a default favicon. (#1603)
    • sqlite_stat tables are now hidden by default. (#1587)
    • SpatiaLite tables data_licenses, KNN and KNN2 are now hidden by default. (#1601)
    • Python 3.6 is no longer supported. (#1577)
    • Tests now run against Python 3.11-dev. (#1621)
    • Fixed bug where custom pages did not work on Windows. Thanks, Robert Christie. (#1545)
    • SQL query tracing mechanism now works for queries executed in asyncio sub-tasks, such as those created by asyncio.gather(). (#1576)
    • datasette.tracer mechanism is now documented.
    • Common Datasette symbols can now be imported directly from the top-level datasette package, see Import shortcuts. Those symbols are Response, Forbidden, NotFound, hookimpl, actor_matches_allow. (#957)
    • /-/versions page now returns additional details for libraries used by SpatiaLite. (#1607)
    • Documentation now links to the Datasette Tutorials.
    • Datasette will now also look for SpatiaLite in /opt/homebrew - thanks, Dan Peterson. (#1649)
    • Datasette is now covered by a Code of Conduct. (#1654)
  • 0.60.2(Feb 7, 2022)

  • 0.60.1(Jan 21, 2022)

    • Fixed a bug where installation on Python 3.6 stopped working due to a change to an underlying dependency. This release can now be installed on Python 3.6, but is the last release of Datasette that will support anything less than Python 3.7. (#1609)
  • 0.60(Jan 14, 2022)

    Plugins and internals

    Faceting

    • The number of unique values in a facet is now always displayed. Previously it was only displayed if the user specified ?_facet_size=max. (#1556)
    • Facets of type date or array can now be configured in metadata.json, see Facets in metadata.json. Thanks, David Larlet. (#1552)
    • New ?_nosuggest=1 parameter for table views, which disables facet suggestion. (#1557)
    • Fixed bug where ?_facet_array=tags&_facet=tags would only display one of the two selected facets. (#625)

    Other small fixes

    • Made several performance improvements to the database schema introspection code that runs when Datasette first starts up. (#1555)
    • Label columns detected for foreign keys are now case-insensitive, so Name or TITLE will be detected in the same way as name or title. (#1544)
    • Upgraded Pluggy dependency to 1.0. (#1575)
    • Now using Plausible analytics for the Datasette documentation.
    • explain query plan is now allowed with varying amounts of whitespace in the query. (#1588)
    • New CLI reference page showing the output of --help for each of the datasette sub-commands. This led to several small improvements to the help copy. (#1594)
    • Fixed bug where writable canned queries could not be used with custom templates. (#1547)
    • Improved fix for a bug where columns with an underscore prefix could result in unnecessary hidden form fields. (#1527)
  • 0.60a1(Dec 19, 2021)

  • 0.60a0(Dec 17, 2021)

    • New plugin hook: filters_from_request(request, database, table, datasette), which runs on the table page and can be used to support new custom query string parameters that modify the SQL query. (#473)
    • The number of unique values in a facet is now always displayed. Previously it was only displayed if the user specified ?_facet_size=max. (#1556)
    • Fixed bug where ?_facet_array=tags&_facet=tags would only display one of the two selected facets. (#625)
    • Facets of type date or array can now be configured in metadata.json, see Facets in metadata.json. Thanks, David Larlet. (#1552)
    • New ?_nosuggest=1 parameter for table views, which disables facet suggestion. (#1557)
    • Label columns detected for foreign keys are now case-insensitive, so Name or TITLE will be detected in the same way as name or title. (#1544)
    • The query string variables exposed by request.args will now include blank strings for arguments such as foo in ?foo=&bar=1 rather than ignoring those parameters entirely. (#1551)
  • 0.59.4(Nov 30, 2021)

    • Fixed bug where columns with a leading underscore could not be removed from the interactive filters list. (#1527)
    • Fixed bug where columns with a leading underscore were not correctly linked to by the "Links from other tables" interface on the row page. (#1525)
    • Upgraded dependencies aiofiles, black and janus.
  • 0.59.3(Nov 20, 2021)

  • 0.59.2(Nov 14, 2021)

    • Column names with a leading underscore now work correctly when used as a facet. (#1506)
    • Applying ?_nocol= to a column no longer removes that column from the filtering interface. (#1503)
    • Official Datasette Docker container now uses Debian Bullseye as the base image. (#1497)
    • Datasette is four years old today! Here's the original release announcement from 2017.
  • 0.59.1(Oct 24, 2021)

  • 0.59(Oct 14, 2021)

    • Columns can now have associated metadata descriptions in metadata.json, see Column descriptions; an example is sketched after this list. (#942)
    • New register_commands() plugin hook allows plugins to register additional Datasette CLI commands, e.g. datasette mycommand file.db. (#1449)
    • Adding ?_facet_size=max to a table page now shows the number of unique values in each facet. (#1423)
    • Upgraded the httpx dependency to 0.20 - the undocumented allow_redirects= parameter to datasette.client is now follow_redirects=, and defaults to False where it previously defaulted to True. (#1488)
    • The --cors option now causes Datasette to return the Access-Control-Allow-Headers: Authorization header, in addition to Access-Control-Allow-Origin: *. (#1467)
    • Code that figures out which named parameters a SQL query takes in order to display form fields for them is no longer confused by strings that contain colon characters. (#1421)
    • Renamed --help-config option to --help-settings. (#1431)
    • datasette.databases property is now a documented API. (#1443)
    • The base.html template now wraps everything other than the <footer> in a <div class="not-footer"> element, to help with advanced CSS customization. (#1446)
    • The render_cell() plugin hook can now return an awaitable function, which means the hook can execute SQL queries; a sketch follows this list. (#1425)
    • register_routes(datasette) plugin hook now accepts an optional datasette argument. (#1404)
    • New hide_sql canned query option for defaulting to hiding the SQL query used by a canned query, see Additional canned query options. (#1422)
    • New --cpu option for datasette publish cloudrun. (#1420)
    • If Rich is installed in the same virtual environment as Datasette, it will be used to provide enhanced display of error tracebacks on the console. (#1416)
    • The datasette.utils.parse_metadata(content) function, used by the new datasette-remote-metadata plugin, is now a documented API. (#1405)
    • Fixed bug where ?_next=x&_sort=rowid could throw an error. (#1470)
    • Column cog menu no longer shows the option to facet by a column that is already selected by the default facets in metadata. (#1469)
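
    A minimal sketch of a render_cell() implementation that returns an awaitable function, assuming a hypothetical status column; the awaitable form is what makes running SQL from the hook possible:

    from datasette import hookimpl

    @hookimpl
    def render_cell(value, column, table, database, datasette):
        if column != "status":  # "status" is a hypothetical column name
            return None

        async def inner():
            # Returning an awaitable function lets the hook run SQL
            result = await datasette.get_database(database).execute(
                "select upper(:value)", {"value": str(value)}
            )
            return result.single_value()

        return inner

    And a metadata.json sketch of the new column descriptions, using hypothetical database, table and column names:

    {
        "databases": {
            "mydatabase": {
                "tables": {
                    "articles": {
                        "columns": {
                            "title": "The headline of the article",
                            "published": "Publication date in YYYY-MM-DD format"
                        }
                    }
                }
            }
        }
    }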
  • 0.59a2(Aug 28, 2021)

    • Columns can now have associated metadata descriptions in metadata.json, see Column descriptions. (#942)
    • New register_commands() plugin hook allows plugins to register additional Datasette CLI commands, e.g. datasette mycommand file.db; a sketch follows this list. (#1449)
    • Adding ?_facet_size=max to a table page now shows the number of unique values in each facet. (#1423)
    • Code that figures out which named parameters a SQL query takes in order to display form fields for them is no longer confused by strings that contain colon characters. (#1421)
    • Renamed --help-config option to --help-settings. (#1431)
    • datasette.databases property is now a documented API. (#1443)
    • Datasette base template now wraps everything other than the <footer> in a <div class="not-footer"> element, to help with advanced CSS customization. (#1446)
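
    A minimal sketch of the register_commands() hook, registering a hypothetical hello sub-command:

    from datasette import hookimpl
    import click

    @hookimpl
    def register_commands(cli):
        @cli.command()
        def hello():
            "Say hello from a plugin"
            # click.echo prints to stdout
            click.echo("Hello from a plugin")

    Once a plugin containing this hook is installed, datasette hello becomes available alongside the built-in sub-commands.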
  • 0.59a1(Aug 9, 2021)

  • 0.59a0(Aug 7, 2021)

  • 0.58.1(Jul 16, 2021)

  • 0.58(Jul 15, 2021)
