Use Jupyter Notebooks to demonstrate how to build a Recommender with Apache Spark & Elasticsearch

Overview

Read this in other languages: 中文.

Building a Recommender with Apache Spark & Elasticsearch

Recommendation engines are one of the best-known, most widely used, and highest-value use cases for applying machine learning. Despite this, while there are many resources available covering the basics of training a recommendation model, there are relatively few that explain how to actually deploy these models to create a large-scale recommender system.

This Code Pattern demonstrates the key elements of creating such a system, using Apache Spark and Elasticsearch.

This repo contains a Jupyter notebook illustrating how to use Spark to train a collaborative filtering recommendation model from ratings data stored in Elasticsearch, save the model factors to Elasticsearch, and then use Elasticsearch to serve real-time recommendations based on the model. The data you will use comes from MovieLens and is a common benchmark dataset in the recommendations community. It consists of a set of ratings given by users of the MovieLens movie rating system to various movies. It also contains metadata (title and genres) for each movie.

When you have completed this Code Pattern, you will understand how to:

  • Ingest and index user event data into Elasticsearch using the Elasticsearch Spark connector
  • Load event data into Spark DataFrames and use Spark's machine learning library (MLlib) to train a collaborative filtering recommender model
  • Export the trained model into Elasticsearch
  • Use a script score query in Elasticsearch to compute similar-item and personalized user recommendations, and combine recommendations with search and content filtering

Architecture diagram

Flow

  1. Load the movie dataset into Spark.
  2. Use Spark DataFrame operations to clean up the dataset and load it into Elasticsearch.
  3. Using Spark MLlib, train a collaborative filtering recommendation model from the ratings data in Elasticsearch (a short ALS sketch follows this list).
  4. Save the resulting model into Elasticsearch.
  5. Using Elasticsearch queries, generate some example recommendations. The Movie Database API is used to display movie poster images for the recommended movies.
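
As a concrete sketch of the training step (3), the notebook uses the ALS algorithm from Spark MLlib along these lines. This is a minimal illustration rather than the notebook's exact code: the DataFrame name and hyperparameter values are assumptions, while the column names match the MovieLens ratings data.

from pyspark.ml.recommendation import ALS

# ratings_df is assumed to be a Spark DataFrame with userId, movieId and rating columns
als = ALS(userCol="userId", itemCol="movieId", ratingCol="rating",
          rank=20, regParam=0.01, maxIter=10)
model = als.fit(ratings_df)

# the learned factor vectors live in model.userFactors and model.itemFactors;
# these are what get saved back to Elasticsearch in step 4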

Included components

  • Apache Spark: An open-source, fast and general-purpose cluster computing system
  • Elasticsearch: Open-source search and analytics engine
  • Jupyter Notebooks: An open-source web application that allows you to create and share documents that contain live code, equations, visualizations and explanatory text.

Featured technologies

  • Data Science: Systems and scientific methods to analyze structured and unstructured data in order to extract knowledge and insights.
  • Artificial Intelligence: Artificial intelligence can be applied to disparate solution spaces to deliver disruptive technologies.
  • Python: Python is a programming language that lets you work more quickly and integrate your systems more effectively.

Watch the Video

Steps

Follow these steps to create the required services and run the notebook locally.

  1. Clone the repo
  2. Set up Elasticsearch
  3. Download the Elasticsearch Spark connector
  4. Download Apache Spark
  5. Download the data
  6. Launch the notebook
  7. Run the notebook

1. Clone the repo

Clone the elasticsearch-spark-recommender repository locally. In a terminal, run the following command:

$ git clone https://github.com/IBM/elasticsearch-spark-recommender

2. Set up Elasticsearch

This Code Pattern currently depends on Elasticsearch 7.6.x. Go to the downloads page and download the appropriate package for your system. If you do not see a valid release version there, go to the previous release page.

In this Code Pattern readme we will base instructions on Elasticsearch 7.6.2.

For example, on Mac you can download the TAR archive and extract it using the commands:

$ wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.6.2-darwin-x86_64.tar.gz
$ tar xfz elasticsearch-7.6.2-darwin-x86_64.tar.gz

Change directory to the newly unzipped folder using:

$ cd elasticsearch-7.6.2

Next, start Elasticsearch (do this in a separate terminal window in order to keep it up and running):

$ ./bin/elasticsearch

Note: the first time you try to run this command, you may see an error like the following:

ElasticsearchException[Failure running machine learning native code. This could be due to running on an unsupported OS or distribution, missing OS libraries, or a problem with the temp directory. To bypass this problem by running Elasticsearch without machine learning functionality set [xpack.ml.enabled: false].]

In this case, re-running the command should successfully start up Elasticsearch. Alternatively, as the error message suggests, you can disable the machine learning feature by setting xpack.ml.enabled: false in the config/elasticsearch.yml file.

Finally, you will need to install the Elasticsearch Python client. You can do this by running the following command (do this in a separate terminal window from the one running Elasticsearch):

$ pip install elasticsearch
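
To verify that the client can reach your running Elasticsearch instance, you can run a quick check from a Python shell. This is a minimal sketch assuming Elasticsearch is listening on the default localhost:9200:

from elasticsearch import Elasticsearch

es = Elasticsearch()  # connects to localhost:9200 by default
print(es.info())      # prints cluster name and version details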

3. Download the Elasticsearch Spark connector

The Elasticsearch Hadoop project provides connectors between Elasticsearch and various Hadoop-compatible systems, including Spark. The project provides a ZIP file to download that contains all these connectors. You will need to run your PySpark notebook with the Spark-specific connector JAR file on the classpath. Follow these steps to set up the connector:

  1. Download the elasticsearch-hadoop-7.6.2.zip file, which contains all the connectors. You can do this by running:
$ wget https://artifacts.elastic.co/downloads/elasticsearch-hadoop/elasticsearch-hadoop-7.6.2.zip
  2. Unzip the file by running:
$ unzip elasticsearch-hadoop-7.6.2.zip
  3. The JAR for the Spark connector is called elasticsearch-spark-20_2.11-7.6.2.jar and will be located in the dist subfolder of the directory in which you unzipped the file above.
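
Once the connector JAR is on the Spark classpath (see step 6), Spark can read and write Elasticsearch indices through the "es" data source format. As a rough sketch (the ratings index follows the notebook; ratings and spark refer to a DataFrame and the active SparkSession):

# write a DataFrame to the "ratings" index in Elasticsearch
ratings.write.format("es").save("ratings")

# read the index back into a Spark DataFrame
ratings_from_es = spark.read.format("es").load("ratings")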

4. Download Apache Spark

This Code Pattern should work with any Spark 2.x version; however, this readme uses version 2.4.5.

Download Spark from the downloads page. Once you have downloaded the file, extract it by running:

$ tar xfz spark-2.4.5-bin-hadoop2.7.tgz

Note: if you download a different version, adjust the relevant commands used above and elsewhere in this Code Pattern accordingly.

You will also need to install NumPy in order to use Spark's machine learning library, MLlib. If you don't have NumPy installed, run:

$ pip install numpy

5. Download the data

You will be using the MovieLens dataset of ratings given by a set of users to movies, together with movie metadata. There are a few versions of the dataset. You should download the "latest small" version.

Run the following commands from the base directory of the cloned Code Pattern repository:

$ cd data
$ wget http://files.grouplens.org/datasets/movielens/ml-latest-small.zip
$ unzip ml-latest-small.zip
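
After unzipping, the ratings are in ml-latest-small/ratings.csv. As a quick sanity check of the download, you can peek at the file using only the Python standard library (a minimal sketch):

import csv

with open("ml-latest-small/ratings.csv") as f:
    reader = csv.reader(f)
    print(next(reader))  # header row: ['userId', 'movieId', 'rating', 'timestamp']
    print(next(reader))  # first rating row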

6. Launch the notebook

The notebook should work with Python 2.7+ or 3.x (but has only been tested on 3.6).

To run the notebook you will need to launch a PySpark session within a Jupyter notebook. If you don't have Jupyter installed, you can install it by running the command:

$ pip install notebook

Remember to include the Elasticsearch Spark connector JAR from step 3 on the Spark classpath when launching your notebook.

Run the following command to launch your PySpark notebook server locally. For this command to work correctly, you will need to launch the notebook from the base directory of the Code Pattern repository that you cloned in step 1. If you are not in that directory, first cd into it.

PYSPARK_DRIVER_PYTHON="jupyter" PYSPARK_DRIVER_PYTHON_OPTS="notebook" ../spark-2.4.5-bin-hadoop2.7/bin/pyspark --driver-memory 4g --driver-class-path ../../elasticsearch-hadoop-7.6.2/dist/elasticsearch-spark-20_2.11-7.6.2.jar

This should open a browser window with the Code Pattern folder contents displayed. Click on the notebooks subfolder and then click on the elasticsearch-spark-recommender.ipynb file to launch the notebook.

Launch notebook

Optional:

In order to display the images in the recommendation demo, you will need to access The Movie Database API. Please follow the instructions to get an API key. You will also need to install the Python client using the command:

$ pip install tmdbsimple

The demo will still work without this API access, but no images will be displayed (so it won't look as good!).
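
For reference, a minimal sketch of how the tmdbsimple client is typically used (the movie ID below is an arbitrary example, and YOUR_API_KEY is the key you obtained from TMDb):

import tmdbsimple as tmdb

tmdb.API_KEY = 'YOUR_API_KEY'

movie = tmdb.Movies(603)  # look up a movie by its TMDb ID
response = movie.info()   # populates attributes such as title and poster_path
print(movie.title, movie.poster_path)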

7. Run the notebook

When a notebook is executed, each code cell in the notebook is run, in order, from top to bottom.

Each code cell is selectable and is preceded by a tag in the left margin. The tag format is In [x]:. Depending on the state of the notebook, the x can be:

  • A blank, indicating that the cell has never been executed.
  • A number, representing the relative order in which the cell was executed.
  • A *, indicating that the cell is currently executing.

There are several ways to execute the code cells in your notebook:

  • One cell at a time.
    • Select the cell, and then press the Play button in the toolbar. You can also hit Shift+Enter to execute the cell and move to the next cell.
  • Batch mode, in sequential order.
    • From the Cell menu bar, there are several options available. For example, you can Run All cells in your notebook, or you can Run All Below, which will start executing from the first cell under the currently selected cell and then continue executing all cells that follow.

Sample output

The example output in the data/examples folder shows the output of the notebook after running it in full. View it here.

Note: To see the code and markdown cells without output, you can view the raw notebook.

Troubleshooting

  • Error: java.lang.ClassNotFoundException: Failed to find data source: es.

If you see this error when trying to write data from Spark to Elasticsearch in the notebook, it means that the Elasticsearch Spark connector (elasticsearch-spark-20_2.11-7.6.2.jar) was not found on the class path by Spark when launching the notebook.

Solution: First try the launch command from step 6, ensuring you run it from the base directory of the Code Pattern repo.

If that does not work, try to use the fully-qualified path to the JAR file when launching the notebook, e.g.:

PYSPARK_DRIVER_PYTHON="jupyter" PYSPARK_DRIVER_PYTHON_OPTS="notebook" ../spark-2.4.5-bin-hadoop2.7/bin/pyspark --driver-memory 4g --driver-class-path /FULL_PATH/elasticsearch-hadoop-7.6.2/dist/elasticsearch-spark-20_2.11-7.6.2.jar

where FULL_PATH is the fully-qualified (not relative) path to the directory in which you unzipped the elasticsearch-hadoop ZIP file.

  • Error: org.elasticsearch.hadoop.EsHadoopIllegalStateException: SaveMode is set to ErrorIfExists and index ratings exists and contains data. Consider changing the SaveMode

If you see this error when trying to write data from Spark to Elasticsearch in the notebook, it means that you have already written data to the relevant index (for example the ratings data into the ratings index).

Solution: Try to continue working through the notebook from the next cell down. Alternatively, you can first delete all your indices and re-run the Elasticsearch command to create the index mappings (see the section Step 2: Load data into Elasticsearch in the notebook).
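
For example, you can delete an index with the Python client. This is a sketch: the ratings index name comes from the notebook, and you would repeat the call for any other indices you created:

from elasticsearch import Elasticsearch

es = Elasticsearch()
es.indices.delete(index="ratings", ignore=[404])  # ignore the error if the index does not exist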

  • Error: ConnectionRefusedError: [Errno 61] Connection refused

You may see this error when trying to connect to Elasticsearch in the notebook. This likely means your Elasticsearch instance is not running.

Solution: In a new terminal window, cd to the directory in which Elasticsearch is installed and run ./bin/elasticsearch to start up Elasticsearch.

  • Error: Py4JJavaError: An error occurred while calling o130.save. : org.elasticsearch.hadoop.rest.EsHadoopNoNodesLeftException: Connection error (check network and/or proxy settings)- all nodes failed; tried [[127.0.0.1:9200]]

You may see this error when trying to read data from Elasticsearch into Spark (or writing data from Spark to Elasticsearch) in the notebook. This likely means your Elasticsearch instance is not running.

Solution: In a new terminal window, cd to the directory in which Elasticsearch is installed and run ./bin/elasticsearch to start up Elasticsearch.

  • Error: ImportError: No module named elasticsearch

If you encounter this error, it either means the Elasticsearch Python client is not installed, or cannot be found on the PYTHONPATH.

Solution: First try to install the client using $ pip install elasticsearch (if running in a Python virtual environment, e.g. Conda or Virtualenv) or $ sudo pip install elasticsearch otherwise. If that doesn't work, add your site-packages folder to your Python path (e.g. on Mac: export PYTHONPATH=/Library/Python/2.7/site-packages for Python 2.7). See this Stack Overflow issue for another example on Linux. Note: the same general solution applies to any other module import error that you may encounter.
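
Alternatively, you can append the folder to the module search path at runtime before importing. A sketch, using the example Mac path from above:

import sys
sys.path.append("/Library/Python/2.7/site-packages")  # your site-packages folder

import elasticsearch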

  • Error: HTTPError: 401 Client Error: Unauthorized for url: https://api.themoviedb.org/3/movie/1893?api_key=...

If you see this error in your notebook while testing your TMDb API access, or generating recommendations, it means you have installed the tmdbsimple Python package but have not set up your API key.

Solution: Follow the instructions at the end of step 6 to set up your TMDb account and get your API key. Then copy the key into the tmdb.API_KEY = 'YOUR_API_KEY' line in the notebook cell at the end of Step 1: Prepare the data (i.e. replacing YOUR_API_KEY with the correct key). Once you have done that, execute that cell to test your access to the TMDb API.

Links

Note: the slide and video links below refer to an older version of this Code Pattern, which utilized the Elasticsearch Vector Scoring Plugin. Since Elasticsearch added native support for dense vector scoring, the plugin is no longer required. However, the details about the way in which the models and scoring functions work are still valid.
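
For reference, with native dense vector scoring a recommendation query looks roughly like the following. This is a sketch rather than the notebook's exact query: the movies index and model.factor field names are illustrative assumptions, query_vec is the factor vector of the query user or item, and es is an Elasticsearch Python client instance:

q = {
    "query": {
        "script_score": {
            "query": {"match_all": {}},
            "script": {
                # cosineSimilarity is built into Elasticsearch 7.3+;
                # adding 1.0 keeps the score non-negative, as script_score requires
                "source": "cosineSimilarity(params.query_vector, 'model.factor') + 1.0",
                "params": {"query_vector": query_vec}
            }
        }
    }
}
result = es.search(index="movies", body=q)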

Learn more

  • Data Analytics Code Patterns: Enjoyed this Code Pattern? Check out our other Data Analytics Code Patterns
  • AI and Data Code Pattern Playlist: Bookmark our playlist with all of our Code Pattern videos
  • Watson Studio: Master the art of data science with IBM's Watson Studio
  • Spark on IBM Cloud: Need a Spark cluster? Create up to 30 Spark executors on IBM Cloud with our Spark service

License

This code pattern is licensed under the Apache Software License, Version 2. Separate third party code objects invoked within this code pattern are licensed by their respective providers pursuant to their own separate licenses. Contributions are subject to the Developer Certificate of Origin, Version 1.1 (DCO) and the Apache Software License, Version 2.

Apache Software License (ASL) FAQ

Comments
  • Support for ES 6.6?

    I'm extremely new to ES but I've been going through the notebook changing code where required to make it work (e.g. multiple types per index are no longer supported). I'm now stuck at the retrieving/calculating recommendations part of the example (after calculating the embeddings and bringing them back into ES).

    Specifically in the fn_query this part of the ES query is out of date:

    "script": {
    "inline": "payload_vector_score",
    "lang": "native",
    "params": {
    "field": "@model.factor",
    "vector": query_vec,
    "cosine" : cosine
    }
    }
    

    fails and I get the error: RequestError: RequestError(400, 'search_phase_execution_exception', 'script_score: the script could not be loaded')

    And I'm not sure how to get this up to speed for ES 6.6.

    Cheers!

    EDIT: Ah, I'm assuming the script is a plugin written by MLnick in his other repository? Which I notice has a TODO for porting to latest ES.

    question 
    opened by CDBridger 6
  • PYSPARK_DRIVER_PYTHON is not defined

    PYSPARK_DRIVER_PYTHON="jupyter"`` PYSPARK_DRIVER_PYTHON_OPTS="notebook" ../spark-2.2.0-bin-hadoop2.7/bin/pyspark --driver-memory 4g --driver-class-path ../../elasticsearch-hadoop-5.3.0/dist/elasticsearch-spark-20_2.11-5.3.0.jar

    This gives an error in command prompt

    PYSPARK_DRIVER_PYTHON is not recognized as an internal or external command, operable command or a batch file

    question 
    opened by RENEEGAILP 6
  • 2018-05-15 18:38:15 ERROR NetworkClient:144 - Node [127.0.0.1:9200] failed (Connection refused: con ect); selected next node [17.13.50.21:9200] 2018-05-15 18:38:19 ERROR NetworkClient:144 - Node [127.0.0.1:9200] failed (Connection refused: con ect); no other nodes left - aborting...

    Hello there, I cannot connect ES-Spark locally to Elasticsearch on the server. I think the services need dependencies, so I connect Spark to Elasticsearch on the servers. This is my code:

    es_read_conf = {
        # specify the node that we are sending data to (this should be the master)
        "es.nodes": '17.13.50.21:9200',

        # specify the port in case it is not the default port
        "es.port": '9200',

        # specify the read resource in the format 'index/doc-type'
        "es.resource": "stream-test/sample"
    }

    es_rdd = sc.newAPIHadoopRDD(
        inputFormatClass="org.elasticsearch.hadoop.mr.EsInputFormat",
        keyClass="org.apache.hadoop.io.NullWritable",
        valueClass="org.elasticsearch.hadoop.mr.LinkedMapWritable",
        conf=es_read_conf)

    And I met error: 2018-05-15 18:38:15 ERROR NetworkClient:144 - Node [127.0.0.1:9200] failed (Connection refused: con ect); selected next node [17.13.50.21:9200] 2018-05-15 18:38:19 ERROR NetworkClient:144 - Node [127.0.0.1:9200] failed (Connection refused: con ect); no other nodes left - aborting...

    Can you help me? Thank you.

    opened by LunaLuan 4
  • Running into build issue

    @MLnick - Seeing the following error:

    $ PYSPARK_DRIVER_PYTHON="jupyter" PYSPARK_DRIVER_PYTHON_OPTS="notebook" ../../spark-2.2.0-bin-hadoop2.7/bin/pyspark --driver-memory 4g --driver-class-path ../../elasticsearch-hadoop-5.3.0//dist/elasticsearch-spark_2.11-5.3.0.jar env: jupyter: No such file or directory

    opened by rhagarty 4
  • Elasticsearch connector confusion

    @MLnick - step 3 is confusing. The title and text calls it "elasticsearch-spark", but the downloaded file and unzip command use "elasticsearch-hadoop". Can you clarify this step?

    opened by rhagarty 4
  • Support Elasticsearch 7.x

    I too would be interested in seeing this repository updated to a later version of elasticsearch. With the removal of types from indexes, the introduction of the dense_vector data type, and the vector scoring plugin being incompatible with ES 7.x this sample could benefit from an update. There is a lot of extremely valuable information available here that is hard to apply to a current version of Elasticsearch.

    opened by MattMinke 3
  • Getting Py4JJavaError error

    Getting this error in Cell "Load Ratings and Movies ..." Maybe a compatibility issue?

    /Users/rhagarty/spark-2.2.0-bin-hadoop2.7/python/lib/py4j-0.10.4-src.zip/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
        317             raise Py4JJavaError(
        318                 "An error occurred while calling {0}{1}{2}.\n".
    --> 319                 format(target_id, ".", name), value)
        320         else:
        321             raise Py4JError(

    Py4JJavaError: An error occurred while calling o127.save. : java.lang.ClassNotFoundException: Failed to find data source: es. Please find packages at http://spark.apache.org/third-party-projects.html
        at org.apache.spark.sql.execution.datasources.DataSource$.lookupDataSource(DataSource.scala:549)
        at org.apache.spark.sql.execution.datasources.DataSource.providingClass$lzycompute(DataSource.scala:86)
        at org.apache.spark.sql.execution.datasources.DataSource.providingClass(DataSource.scala:86)
        at org.apache.spark.sql.execution.datasources.DataSource.write(DataSource.scala:470)
        at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:48)
        at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
        at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
        at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:117)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:117)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:138)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
        at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:135)
        at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:116)
        at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:92)
        at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:92)
        at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:610)
        at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:233)
        at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:217)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
        at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
        at py4j.Gateway.invoke(Gateway.java:280)
        at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
        at py4j.commands.CallCommand.execute(CallCommand.java:79)
        at py4j.GatewayConnection.run(GatewayConnection.java:214)
        at java.lang.Thread.run(Thread.java:748)
    Caused by: java.lang.ClassNotFoundException: es.DefaultSource
        at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
        at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$21$$anonfun$apply$12.apply(DataSource.scala:533)
        at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$21$$anonfun$apply$12.apply(DataSource.scala:533)
        at scala.util.Try$.apply(Try.scala:192)
        at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$21.apply(DataSource.scala:533)
        at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$21.apply(DataSource.scala:533)
        at scala.util.Try.orElse(Try.scala:84)
        at org.apache.spark.sql.execution.datasources.DataSource$.lookupDataSource(DataSource.scala:533)
        ... 29 more

    FYI - Also seeing this in the console log:

    17/10/10 13:11:47 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    17/10/10 13:11:50 WARN ObjectStore: Failed to get database global_temp, returning NoSuchObjectException

    opened by rhagarty 3
  • Update Code Pattern to latest Elasticsearch version

    This PR updates the Code Pattern as follows:

    • Upgrade to use latest Elasticsearch version (7.6.x)
    • Use new built-in dense_vector field and scoring functionality from ES 7.6.x
      • as a result the custom ES plugin is no longer required
    • Upgrade to latest Apache Spark version (2.4.5)

    README and notebook updates for these changes.

    opened by MLnick 2
  • Thank you

    Thank you for this project, it is very easy to set up and use, yet it is scalable and built on nice software.

    I managed to feed it 45M+ ratings and get real-time recommendations on my macbook air. For now I have an issue when people receive recommendations of products with 2 to 5 ratings, so it is pretty weird, but I'll remove them from my sample and see how it goes (or could I tinker with ALS parameters?)

    Anyway thank you!

    opened by Xrampino 2
  • name 'spark' is not defined

    Downloaded and installed everything. When I'm running the # check PySpark is running spark command. I get this:

    NameError                                 Traceback (most recent call last)
    <ipython-input> in <module>()
          2 from IPython.display import Image, HTML, display
          3 # check PySpark is running
    ----> 4 spark

    NameError: name 'spark' is not defined

    My commands:

    SET PYSPARK_DRIVER_PYTHON=C:\Program Files (x86)\Python36-32\Scripts\jupyter.exe
    SET PYSPARK_DRIVER_PYTHON_OPTS=notebook --no-browser
    ..\spark-2.2.0-bin-hadoop2.7\bin\pyspark --driver-memory 4g --driver-class-path ..\elasticsearch-hadoop-5.3.0\dist\elasticsearch-hadoop-5.3.0.jar
    
    question 
    opened by aclowkey 2
  • "Recommendations" error

    All the recommendation steps at the end of the notebook had errors, all similar to the following:

    NameError                                 Traceback (most recent call last)
    <ipython-input> in <module>()
    ----> 1 display_user_recs(12, num=5, num_last=5)

    <ipython-input> in display_user_recs(the_id, q, num, num_last, index)
        103     i = 0
        104     for movie in user_movies:
    --> 105         movie_im_url = get_poster_url(movie['tmdbId'])
        106         movie_title = movie['title']
        107         user_html += "...%s..." % (movie_title, movie_im_url)

    <ipython-input> in get_poster_url(id)
          9         poster_url = IMAGE_URL + movie['poster_path'] if 'poster_path' in movie and movie['poster_path'] is not None else ""
         10         return poster_url
    ---> 11     except ModuleNotFoundError:
         12         return "NA"
         13

    NameError: global name 'ModuleNotFoundError' is not defined

    NOTE - this is probably related to the fact that I did NOT do the optional step of setting up the tmdb.API_KEY (I had errors). If so, the name of the recommended movie should still be listed, even if its poster can't be found.

    Otherwise, the optional step should probably be required.

    opened by rhagarty 2
  • RequestError: RequestError(400, 'search_phase_execution_exception', 'runtime error')

    I received this error when calling the function "get_similar()" at the line "result = es.search(index=index, body=q)". I don't know what the problem is, please help.

    opened by lazpuzzle 1
  • Py4JJavaError: An error occurred while calling o226.save. : java.lang.NoClassDefFoundError: scala/Product$class

    I am getting this error, Py4JJavaError: An error occurred while calling o226.save. : java.lang.NoClassDefFoundError: scala/Product$class, in this step:

    # write ratings data
    ratings.write.format("es").save("ratings")
    num_ratings_es = es.count(index="ratings")['count']
    num_ratings_df = ratings.count()

    # check write went ok
    print("Dataframe count: {}".format(num_ratings_df))
    print("ES index count: {}".format(num_ratings_es))

    Please help me.

    opened by mednourconsulting 1
  • Use ALS instead of SVD?

    This repo is very great! But I still don't understand why we should use ALS instead of SVD (Step 3 in the Jupyter notebook)? Could you explain more about this experiment?

    opened by dc-thanh 1
  • Getting error in running code

    When I run this code I got these errors in each of the situations below:

    1) Elasticsearch = 6.8.6, elasticsearch (python) = 7.5.1

    RequestError: RequestError(400, 'illegal_argument_exception', 'Rejecting mapping update to [demo] as the final mapping would have more than 1 type: [movies, ratings, users]')

    2) Elasticsearch = 5.3.0, elasticsearch (python) = 5.4.0

    RequestError: TransportError(400, 'search_phase_execution_exception', 'Failed to compile inline script [payload_vector_score] using lang [native]')

    3) Elasticsearch = 5.3.0, elasticsearch (python) = 7.5.1

    TypeError: search() got multiple values for argument 'body'

    opened by MaryamAkhavan 1
  • Add similar users example

    Hello,

    I've added functions to display users similar to a given one, for user-user recommendation. Since we only use user IDs and not usernames, it is a bit raw, but it is still functional.

    opened by Xrampino 0
Recommender System Papers

Included Conferences: SIGIR 2020, SIGKDD 2020, RecSys 2020, CIKM 2020, AAAI 2021, WSDM 2021, WWW 2021

RUCAIBox 704 Jan 6, 2023
Graph Neural Networks for Recommender Systems

This repository contains code to train and test GNN models for recommendation, mainly using the Deep Graph Library (DGL).

null 217 Jan 4, 2023
RecSim NG: Toward Principled Uncertainty Modeling for Recommender Ecosystems

RecSim NG, a probabilistic platform for multi-agent recommender systems simulation. RecSim NG is a scalable, modular, differentiable simulator implemented in Edward2 and TensorFlow. It offers: a powerful, general probabilistic programming language for agent-behavior specification;

Google Research 110 Dec 16, 2022
NVIDIA Merlin is an open source library designed to accelerate recommender systems on NVIDIA’s GPUs.

NVIDIA Merlin is an open source library providing end-to-end GPU-accelerated recommender systems, from feature engineering and preprocessing to training deep learning models and running inference in production.

null 420 Jan 4, 2023
Collaborative variational bandwidth auto-encoder (VBAE) for recommender systems.

Collaborative Variational Bandwidth Auto-encoder The codes are associated with the following paper: Collaborative Variational Bandwidth Auto-encoder f

Yaochen Zhu 14 Dec 11, 2022
QRec: A Python Framework for quick implementation of recommender systems (TensorFlow Based)

QRec is a Python framework for recommender systems (Supported by Python 3.7.4 and Tensorflow 1.14+) in which a number of influential and newly state-of-the-art recommendation models are implemented. QRec has a lightweight architecture and provides user-friendly interfaces. It can facilitate model implementation and evaluation.

Yu 1.4k Dec 27, 2022
E-Commerce recommender demo with real-time data and a graph database

E-Commerce recommender demo. This is a simple stream setup that uses Memgraph to ingest real-time data from a simulated online store. Data is str

g-despot 3 Feb 23, 2022
Movie Recommender System

Movie-Recommender-System Movie-Recommender-System is a web application using which a user can select his/her watched movie from list and system will r

null 1 Jul 14, 2022
Mutual Fund Recommender System. Tailor for fund transactions.

Explainable Mutual Fund Recommendation Data Please see 'DATA_DESCRIPTION.md' for more detail. Recommender System Methods Baseline Collabarative Fiilte

JHJu 2 May 19, 2022
Movies/TV Recommender

recommender Movies/TV Recommender. Recommends Movies, TV Shows, Actors, Directors, Writers. Setup Create file API_KEY and paste your TMDB API key in i

Aviem Zur 3 Apr 22, 2022
6002project-rl - An implemention of offline RL on recommender system

An implemention of offline RL on recommender system @author: misajie @update: 20

Tzay Lee 3 May 24, 2022
Plex-recommender - Get movie recommendations based on your current PleX library

plex-recommender Description: Get movie/tv recommendations based on your current

null 5 Jul 19, 2022
An open source movie recommendation WebApp build by movie buffs and mathematicians that uses cosine similarity on the backend.

Movie Pundit Find your next flick by asking the (almost) all-knowing Movie Pundit Jump to Project Source » View Demo · Report Bug · Request Feature Ta

Kapil Pramod Deshmukh 8 May 28, 2022
Run your jupyter notebooks as a REST API endpoint. This isn't a jupyter server but rather just a way to run your notebooks as a REST API Endpoint.

Jupter Notebook REST API Run your jupyter notebooks as a REST API endpoint. This isn't a jupyter server but rather just a way to run your notebooks as

Invictify 54 Nov 4, 2022
Unit testing AWS interactions with pytest and moto. These examples demonstrate how to structure, setup, teardown, mock, and conduct unit testing. The source code is only intended to demonstrate unit testing.

Unit Testing Interactions with Amazon Web Services (AWS) Unit testing AWS interactions with pytest and moto. These examples demonstrate how to structu

AWS Samples 21 Nov 17, 2022
Fully Automated YouTube Channel ▶️with Added Extra Features.

Fully Automated Youtube Channel ▒█▀▀█ █▀▀█ ▀▀█▀▀ ▀▀█▀▀ █░░█ █▀▀▄ █▀▀ █▀▀█ ▒█▀▀▄ █░░█ ░░█░░ ░▒█░░ █░░█ █▀▀▄ █▀▀ █▄▄▀ ▒█▄▄█ ▀▀▀▀ ░░▀░░ ░▒█░░ ░▀▀▀ ▀▀▀░

sam-sepiol 249 Jan 2, 2023
This mini project showcase how to build and debug Apache Spark application using Python

Spark app can't be debugged using normal procedure. This mini project showcase how to build and debug Apache Spark application using Python programming language. There are also options to run Spark application on Spark container

Denny Imanuel 1 Dec 29, 2021
Code for Private Recommender Systems: How Can Users Build Their Own Fair Recommender Systems without Log Data? (SDM 2022)

Private Recommender Systems: How Can Users Build Their Own Fair Recommender Systems without Log Data? (SDM 2022) We consider how a user of a web servi

joisino 20 Aug 21, 2022
Eland is a Python Elasticsearch client for exploring and analyzing data in Elasticsearch with a familiar Pandas-compatible API.

Python Client and Toolkit for DataFrames, Big Data, Machine Learning and ETL in Elasticsearch

elastic 463 Dec 30, 2022
Library extending Jupyter notebooks to integrate with Apache TinkerPop and RDF SPARQL.

Graph Notebook: easily query and visualize graphs The graph notebook provides an easy way to interact with graph databases using Jupyter notebooks. Us

Amazon Web Services 501 Dec 28, 2022