✨️🐍 SPARQL endpoint built with RDFLib to serve machine learning models, or any other logic implemented in Python

Overview

SPARQL endpoint for RDFLib

rdflib-endpoint is a SPARQL endpoint based on an RDFLib Graph that makes it easy to serve machine learning models, or any other logic implemented in Python, via custom SPARQL functions.

It aims to enable Python developers to easily deploy functions that can be queried in a federated fashion using SPARQL. For example: using a Python function to resolve labels for specific identifiers, or to run a classifier on entities retrieved with a SERVICE query to another SPARQL endpoint.

🧑‍🏫 How it works

The user defines and registers custom SPARQL functions in Python, and/or populates the RDFLib Graph; the endpoint is then deployed with the FastAPI framework.

The deployed SPARQL endpoint can be used as a SERVICE in a federated SPARQL query from regular triplestore SPARQL endpoints; this has been tested with OpenLink Virtuoso and Ontotext GraphDB (RDF4J-based). The endpoint is CORS-enabled.
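
For example, another triplestore can call the custom function defined further below through a federated query. A minimal sketch, assuming the endpoint is deployed at the public_url used in the example (https://your-endpoint-url/sparql):

PREFIX myfunctions: <https://w3id.org/um/sparql-functions/>
SELECT ?concat ?concatLength WHERE {
    SERVICE <https://your-endpoint-url/sparql> {
        SELECT ?concat ?concatLength WHERE {
            BIND("First" AS ?first)
            BIND(myfunctions:custom_concat(?first, "last") AS ?concat)
        }
    }
}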

Built with RDFLib and FastAPI. Tested with Python 3.7, 3.8 and 3.9.

Please create an issue or send a pull request if you are facing problems or would like to see a feature implemented.

📥 Install the package

Install the package from PyPI:

pip install rdflib-endpoint

🐍 Define custom SPARQL functions

Check out the example folder for a complete working app to get started, including a Docker deployment. The easiest way to create a new SPARQL endpoint is to copy this example folder and build from it.

Create an app/main.py file in your project folder with your functions and endpoint parameters:

from rdflib_endpoint import SparqlEndpoint
import rdflib
from rdflib.plugins.sparql.evalutils import _eval

def custom_concat(query_results, ctx, part, eval_part):
    """Concat 2 strings in the 2 senses and return the length as additional Length variable
    """
    argument1 = str(_eval(part.expr.expr[0], eval_part.forget(ctx, _except=part.expr._vars)))
    argument2 = str(_eval(part.expr.expr[1], eval_part.forget(ctx, _except=part.expr._vars)))
    evaluation = []
    scores = []
    evaluation.append(argument1 + argument2)
    evaluation.append(argument2 + argument1)
    scores.append(len(argument1 + argument2))
    scores.append(len(argument2 + argument1))
    # Append the results for our custom function
    for i, result in enumerate(evaluation):
        query_results.append(eval_part.merge({
            part.var: rdflib.Literal(result), 
            rdflib.term.Variable(part.var + 'Length'): rdflib.Literal(scores[i])
        }))
    return query_results, ctx, part, eval_part

# Start the SPARQL endpoint based on a RDFLib Graph and register your custom functions
g = rdflib.graph.ConjunctiveGraph()
app = SparqlEndpoint(
    graph=g,
    # Register the functions:
    functions={
        'https://w3id.org/um/sparql-functions/custom_concat': custom_concat
    },
    # CORS enabled by default
    cors_enabled=True,
    # Metadata used for the service description and Swagger UI:
    title="SPARQL endpoint for RDFLib graph", 
    description="A SPARQL endpoint to serve machine learning models, or any other logic implemented in Python. \n[Source code](https://github.com/vemonet/rdflib-endpoint)",
    version="0.1.0",
    public_url='https://your-endpoint-url/sparql',
    # Example queries displayed in the Swagger UI to help users try your function
    example_query="""Example query:\n
```
PREFIX myfunctions: <https://w3id.org/um/sparql-functions/>
SELECT ?concat ?concatLength WHERE {
    BIND("First" AS ?first)
    BIND(myfunctions:custom_concat(?first, "last") AS ?concat)
}
```"""
)

🦄 Run the SPARQL endpoint

To quickly get started, you can run the FastAPI server from the example folder with uvicorn on http://localhost:8000:

cd example
uvicorn main:app --reload --app-dir app
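
Once the server is running, you can query it over HTTP. A minimal sketch using the requests package, assuming the default example app running locally and the query and format parameters accepted on GET /sparql:

import requests

# The example query from the Swagger UI, calling the custom_concat function
query = """PREFIX myfunctions: <https://w3id.org/um/sparql-functions/>
SELECT ?concat ?concatLength WHERE {
    BIND("First" AS ?first)
    BIND(myfunctions:custom_concat(?first, "last") AS ?concat)
}"""

# Request JSON results from the local endpoint
response = requests.get(
    "http://localhost:8000/sparql",
    params={"query": query, "format": "json"},
)
print(response.json())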

Check the example/README.md for more details, such as deploying it with Docker.

🧑‍💻 Development

📥 Install for development

Install from the latest GitHub commit to make sure you have the latest updates:

pip install rdflib-endpoint@git+https://github.com/vemonet/rdflib-endpoint@main

Or clone and install locally for development:

git clone https://github.com/vemonet/rdflib-endpoint
cd rdflib-endpoint
pip install -e .

You can use a virtual environment to avoid conflicts:

# Create the virtual environment folder in your workspace
python3 -m venv .venv
# Activate it using a script in the created folder
source .venv/bin/activate

✅️ Run the tests

Install additional dependencies:

pip install pytest requests

Run the tests locally (from the root folder):

pytest -s
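
If you add tests for your own endpoint, a minimal sketch using FastAPI's TestClient is shown below; the test module and the app.main import path are assumptions to adapt to your project layout:

from fastapi.testclient import TestClient

from app.main import app  # hypothetical import path, adjust to your layout

client = TestClient(app)

def test_select_query():
    # Send a simple SELECT query and check that SPARQL JSON results come back
    response = client.get(
        "/sparql",
        params={"query": "SELECT * WHERE { ?s ?p ?o }", "format": "json"},
    )
    assert response.status_code == 200
    assert "results" in response.json()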

📂 Projects using rdflib-endpoint

Here are some projects using rdflib-endpoint to deploy custom SPARQL endpoints with Python:

Comments
  • rdflib.plugins.sparql.CUSTOM_EVALS["exampleEval"] = customEval

    I would like to know if it is possible to implement this kind of custom evaluation function, such as in this example: Source code for examples.custom_eval (a minimal sketch of this registration pattern follows the comments list below).

    Also, do you know if this would work in a federated query from Wikidata or Wikibase? Thanks.

    enhancement 
    opened by rchateauneu 3
  • Fix error when dealing with AND operator (&&)

    Hello, there is an error when dealing with the AND operator (&&): parse_qsl incorrectly parses the request body, misinterpreting && as new parameters. I suggest unquoting the data later on. Best regards, Ba Huy

    opened by tbhuy 2
  • Should be able to use prefixes bound in graph's NamespaceManager

    Prefixes can be bound to a graph using the NamespaceManager. This makes them globally available, when loading and serializing as well as when querying.

    rdflib-endpoint, however, requires prefixes to be declared in the query input even for those bound to the graph, because prepareQuery is not aware of the graph.

    It would be convenient if the SPARQL endpoint could use the globally bound prefixes. My suggestion would be to drop the prepareQuery completely. I actually don't see why it is there in the first place, given that the prepared query is never used later on.

    opened by Natureshadow 0
  • SPARQL query containing 'coalesce' returns no result on rdflib-endpoint.SparqlEndpoint()

    Hello!

    My setup:

    I define a graph, g = Graph(), then I g.parse() a number of .ttl files and create the endpoint with SparqlEndpoint(graph=g). Then I use uvicorn.run(app, ...) to expose the endpoint on my local machine.

    I can successfully run this simple sparql statement to query for keywords:

    PREFIX dcat: <http://www.w3.org/ns/dcat#>
    SELECT 
    ?keyword
    WHERE { ?subj dcat:keyword ?keyword }
    

    However, as soon as I add a coalesce statement, the query does not return results any more:

    PREFIX dcat: <http://www.w3.org/ns/dcat#>
    SELECT 
    ?keyword
    (coalesce(?keyword, "xyz") as ?foo) 
    WHERE { ?subj dcat:keyword ?keyword }
    

    I tried something similar on wikidata which has no problems:

    SELECT ?item ?itemLabel 
    (coalesce(?itemLabel, 2) as ?foo)
    WHERE 
    {
      ?item wdt:P31 wd:Q146. # Must be of a cat
      SERVICE wikibase:label { bd:serviceParam wikibase:language "[AUTO_LANGUAGE],en". } 
    }
    

    The server debug output for the keywords query looks ok to me.

    INFO:     127.0.0.1:43942 - "GET /sparql?format=json&query=%0APREFIX+dcat%3A+%3Chttp%3A%2F%2Fwww.w3.org%2Fns%2Fdcat%23%3E%0ASELECT+%0A%3Fkeyword%0A%28coalesce%28%3Fkeyword%2C+%22xyz%22%29+as+%3Ffoo%29+%0AWHERE+%7B+%3Fsubj+dcat%3Akeyword+%3Fkeyword+%7D%0ALIMIT+10%0A HTTP/1.1" 200 OK
    

    Putting COALESCE in the WHERE clause does not solve the problem:

    PREFIX dcat: <http://www.w3.org/ns/dcat#>
    SELECT
    ?keyword
    ?foo
    WHERE {
    ?subj dcat:keyword ?keyword
    BIND(coalesce(?keyword, 2) as ?foo)
    }
    

    Am I missing something?

    Thanks in advance

    opened by byte-for-byte 4
  • Construct query

    Using rdflib-endpoint more frequently, I ran into an issue with a SPARQL CONSTRUCT query.

    The SPARQL query, without prefixes:

    CONSTRUCT {
      ?work dcterms:date ?jaar .
    }
    WHERE {
    {  select ?work (min(?year) as ?jaar)
        where {
              ?work dc:date ?year .
        }
        group by ?work }
    }
    

    The correct tuples are selected but no triples are created. Using rdflib directly does give the correct triples.

    opened by RichDijk 1
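
For reference on the CUSTOM_EVALS question in the first comment above, here is a minimal sketch of the registration pattern from rdflib's examples.custom_eval. This is plain rdflib, not a feature of rdflib-endpoint, and it does not address whether such a function is reachable from a federated Wikidata/Wikibase query:

import rdflib
from rdflib.plugins.sparql.evaluate import evalBGP

def customEval(ctx, part):
    """Handle basic graph patterns only; defer every other algebra part to rdflib."""
    if part.name == "BGP":
        # Custom logic could rewrite part.triples here before evaluating them
        return evalBGP(ctx, part.triples)
    raise NotImplementedError()

# Register the custom evaluation function globally in rdflib
rdflib.plugins.sparql.CUSTOM_EVALS["exampleEval"] = customEval
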
Releases(0.2.6)
  • 0.2.6(Dec 19, 2022)

    Changelog

    • YASGUI now uses the provided example query as the default query

    Full Changelog: https://github.com/vemonet/rdflib-endpoint/compare/0.2.5...0.2.6

  • 0.2.5(Dec 19, 2022)

  • 0.2.4(Dec 19, 2022)

    Changelog

    • Add support for rdflib.Graph using store="Oxigraph" (oxrdflib package https://github.com/oxigraph/oxrdflib), related to https://github.com/vemonet/rdflib-endpoint/issues/4
    • CLI option for defining the store has been added

    Full Changelog: https://github.com/vemonet/rdflib-endpoint/compare/0.2.3...0.2.4

  • 0.2.3(Dec 19, 2022)

  • 0.2.2(Dec 19, 2022)

  • 0.2.1(Dec 19, 2022)

  • 0.2.0(Dec 19, 2022)

    Changelog

    • Migrate from setup.py to pyproject.toml with a hatch build backend and src/ layout
    • Added types
    • Check for strict type compliance using mypy
    • Added tests for python 3.10 and 3.11
    • Added CITATION.cff file and pre-commit hooks
    • Merged pull request https://github.com/vemonet/rdflib-endpoint/pull/2 "Fix error when dealing with AND operator (&&)"

    Full Changelog: https://github.com/vemonet/rdflib-endpoint/compare/0.1.6...0.2.0

  • 0.1.6(Feb 14, 2022)

  • 0.1.5(Jan 10, 2022)

  • 0.1.4(Dec 15, 2021)

  • 0.1.3(Dec 14, 2021)

  • 0.1.2(Dec 14, 2021)

    rdflib-endpoint, a library to quickly deploy a SPARQL endpoint based on an RDFLib Graph, optionally with custom functions defined in Python. Supports SELECT, CONSTRUCT, DESCRIBE, and ASK queries. No support for INSERT in RDFLib currently.

    This SPARQL endpoint can perform federated SERVICE queries, and can be queried through a SERVICE query from another SPARQL endpoint (tested with OpenLink Virtuoso and RDF4J-based triplestores such as Ontotext GraphDB).

    Changelog:

    • Added a CLI feature to quickly start a SPARQL endpoint based on a local RDF file: rdflib-endpoint server your-file.nt

    Full Changelog: https://github.com/vemonet/rdflib-endpoint/compare/0.1.1...0.1.2

  • 0.1.1(Dec 8, 2021)

    rdflib-endpoint, a library to quickly deploy a SPARQL endpoint based on an RDFLib Graph, optionally with custom functions defined in Python. Supports SELECT, CONSTRUCT, DESCRIBE, and ASK queries. No support for INSERT in RDFLib currently.

    This SPARQL endpoint can perform federated SERVICE queries, and can be queried through a SERVICE query from another SPARQL endpoint (tested with OpenLink Virtuoso and RDF4J-based triplestores such as Ontotext GraphDB).

    Changelog:

    • Minor improvements to tests and workflows
  • 0.1.0(Dec 7, 2021)

    First release of rdflib-endpoint, a library to quickly deploy a SPARQL endpoint based on an RDFLib Graph, optionally with custom functions defined in Python.

    Supports SELECT, CONSTRUCT, DESCRIBE, and ASK queries. No support for INSERT in RDFLib currently.

    This SPARQL endpoint can perform federated SERVICE queries, and can be queried through a SERVICE query from another SPARQL endpoint (tested with OpenLink Virtuoso and RDF4J-based triplestores such as Ontotext GraphDB).
