A Time Series Library for Apache Spark

Overview

Flint: A Time Series Library for Apache Spark

The ability to analyze time series data at scale is critical for the success of finance and IoT applications based on Spark. Flint is Two Sigma's implementation of highly optimized time series operations in Spark. It performs truly parallel and rich analyses on time series data by taking advantage of its natural ordering to provide locality-based optimizations.

Flint is an open source library for Spark based around the TimeSeriesRDD, a time series aware data structure, and a collection of time series utility and analysis functions that use TimeSeriesRDDs. Unlike DataFrame and Dataset, Flint's TimeSeriesRDDs can leverage the existing ordering properties of datasets at rest and the fact that almost all data manipulations and analysis over these datasets respect their temporal ordering properties. It differs from other time series efforts in Spark in its ability to efficiently compute across panel data or on large scale high frequency data.

Requirements

Dependency        Version
Spark Version     2.3 and 2.4
Scala Version     2.12
Python Version    3.5 and above

How to install

The Scala artifact is published in Maven Central:

https://mvnrepository.com/artifact/com.twosigma/flint

The Python artifact is published on PyPI:

https://pypi.org/project/ts-flint

Note that you will need both the Scala and the Python artifacts to use Flint with PySpark.
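
For example, a PySpark shell can be launched with both artifacts available (a sketch mirroring the command shown in the Python bindings README; the paths and version are placeholders):

pyspark --jars /path/to/flint-assembly-0.6.0-SNAPSHOT.jar --py-files /path/to/flint-assembly-0.6.0-SNAPSHOT.jar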

How to build

To build from source:

Scala (in top-level dir):

sbt assemblyNoTest

Python (in python subdir):

python setup.py install

or

pip install .

Python bindings

The Python bindings for Flint, including quickstart instructions, are documented in python/README.md. API documentation is available at http://ts-flint.readthedocs.io/en/latest/.

Getting Started

Starting Point: TimeSeriesRDD and TimeSeriesDataFrame

The entry point into all time series analysis functionality in Flint is TimeSeriesRDD (for Scala) and TimeSeriesDataFrame (for Python). At a high level, a TimeSeriesRDD contains an OrderedRDD, which represents a sequence of ordered key-value pairs. A TimeSeriesRDD uses Long keys to represent timestamps in nanoseconds since the epoch and InternalRows as the values of the OrderedRDD, together representing a time series data set.

Create TimeSeriesRDD

Applications can create a TimeSeriesRDD from an existing RDD, from an OrderedRDD, from a DataFrame, or from a single CSV file.

As an example, the following creates a TimeSeriesRDD from a gzipped CSV file with a header and a specific datetime format.

import com.twosigma.flint.timeseries.CSV
val tsRdd = CSV.from(
  sqlContext,
  "file://foo/bar/data.csv",
  header = true,
  dateFormat = "yyyyMMdd HH:mm:ss.SSS",
  codec = "gzip",
  sorted = true
)

To create a TimeSeriesRDD from a DataFrame, you have to make sure the DataFrame contains a column named "time" of type LongType.

import com.twosigma.flint.timeseries.TimeSeriesRDD
import scala.concurrent.duration._
val df = ... // A DataFrame whose rows have been sorted by their timestamps under the "time" column
val tsRdd = TimeSeriesRDD.fromDF(dataFrame = df)(isSorted = true, timeUnit = MILLISECONDS)
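
For a quick, self-contained check (a sketch assuming a SparkSession named spark with implicits imported; the column names and values are illustrative), one can build a small DataFrame in memory and convert it the same way:

import spark.implicits._
import com.twosigma.flint.timeseries.TimeSeriesRDD
import scala.concurrent.duration._

// "time" holds epoch milliseconds; rows are already sorted by time, hence isSorted = true.
val smallDf = Seq((1L, 1.0), (1L, 2.0), (2L, 3.0)).toDF("time", "price")
val smallTsRdd = TimeSeriesRDD.fromDF(dataFrame = smallDf)(isSorted = true, timeUnit = MILLISECONDS)
smallTsRdd.toDF.show()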

One could also create a TimeSeriesRDD from an RDD[Row] or an OrderedRDD[Long, Row] by providing a schema, e.g.

import com.twosigma.flint.timeseries._
import scala.concurrent.duration._
val rdd = ... // An RDD whose rows have been sorted by their timestamps
val tsRdd = TimeSeriesRDD.fromRDD(
  rdd,
  schema = Schema("time" -> LongType, "price" -> DoubleType)
)(isSorted = true,
  timeUnit = MILLISECONDS
)

It is also possible to create a TimeSeriesRDD from a dataset stored as Parquet file(s). The TimeSeriesRDD.fromParquet() function provides options to specify which columns and/or which time range you are interested in, e.g.

import com.twosigma.flint.timeseries._
import scala.concurrent.duration._
val tsRdd = TimeSeriesRDD.fromParquet(
  sqlContext,
  path = "hdfs://foo/bar/"
)(isSorted = true,
  timeUnit = MILLISECONDS,
  columns = Seq("time", "id", "price"),  // By default, null for all columns
  begin = "20100101",                    // By default, null for no boundary at begin
  end = "20150101"                       // By default, null for no boundary at end
)

Group functions

A group function groups rows with nearby (or exactly the same) timestamps.

  • groupByCycle A function to group rows within a cycle, i.e. rows with exactly the same timestamp. For example,
val priceTSRdd = ...
// A TimeSeriesRDD with columns "time" and "price"
// time  price
// -----------
// 1000L 1.0
// 1000L 2.0
// 2000L 3.0
// 2000L 4.0
// 2000L 5.0

val results = priceTSRdd.groupByCycle()
// time  rows
// ------------------------------------------------
// 1000L [[1000L, 1.0], [1000L, 2.0]]
// 2000L [[2000L, 3.0], [2000L, 4.0], [2000L, 5.0]]
  • groupByInterval A function to group rows whose timestamps fall into an interval. Intervals can be defined by another TimeSeriesRDD: its timestamps are used to define the intervals, i.e. each pair of sequential timestamps defines an interval. For example,
val priceTSRdd = ...
// A TimeSeriesRDD with columns "time" and "price"
// time  price
// -----------
// 1000L 1.0
// 1500L 2.0
// 2000L 3.0
// 2500L 4.0

val clockTSRdd = ...
// A TimeSeriesRDD with only column "time"
// time
// -----
// 1000L
// 2000L
// 3000L

val results = priceTSRdd.groupByInterval(clockTSRdd)
// time  rows
// ----------------------------------
// 1000L [[1000L, 1.0], [1500L, 2.0]]
// 2000L [[2000L, 3.0], [2500L, 4.0]]
  • addWindows For each row, this function adds a new column whose value is the list of rows within that row's window.
val priceTSRdd = ...
// A TimeSeriesRDD with columns "time" and "price"
// time  price
// -----------
// 1000L 1.0
// 1500L 2.0
// 2000L 3.0
// 2500L 4.0

val result = priceTSRdd.addWindows(Window.pastAbsoluteTime("1000ns"))
// time  price window_past_1000ns
// ------------------------------------------------------
// 1000L 1.0   [[1000L, 1.0]]
// 1500L 2.0   [[1000L, 1.0], [1500L, 2.0]]
// 2000L 3.0   [[1000L, 1.0], [1500L, 2.0], [2000L, 3.0]]
// 2500L 4.0   [[1500L, 2.0], [2000L, 3.0], [2500L, 4.0]]

Temporal Join Functions

A temporal join function is a join function defined by a matching criterion over time. A tolerance in the matching criterion specifies how far the join should look into the past or the future.

  • leftJoin A function that performs a temporal left join against the right TimeSeriesRDD, i.e. a left join using inexact timestamp matches. For each row in the left, it appends the most recent row from the right at or before the same time. An example of joining two TimeSeriesRDDs is as follows (see also the keyed-join sketch after this list).
val leftTSRdd = ...
val rightTSRdd = ...
val result = leftTSRdd.leftJoin(rightTSRdd, tolerance = "1day")
  • futureLeftJoin A function that performs a temporal future left join against the right TimeSeriesRDD, i.e. a left join using inexact timestamp matches. For each row in the left, it appends the closest future row from the right at or after the same time.
val result = leftTSRdd.futureLeftJoin(rightTSRdd, tolerance = "1day")
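
Both joins in the Scala API also take an optional key argument (assumed here to mirror the key parameter documented for the Python bindings), which restricts matches to rows with equal values in the listed columns. A minimal sketch joining per "id":

// Assumes both TimeSeriesRDDs carry an "id" column; rows are matched only within the same id,
// looking back at most one day from each left row.
val resultPerId = leftTSRdd.leftJoin(rightTSRdd, tolerance = "1day", key = Seq("id"))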

Summarize Functions

Summarize functions apply one or more summarizers to rows within a certain scope, such as a cycle, an interval, or a window.

  • summarizeCycles A function that computes aggregate statistics of rows within a cycle, i.e. rows that share a timestamp.
val volTSRdd = ...
// A TimeSeriesRDD with columns "time", "id", and "volume"
// time  id volume
// ------------
// 1000L 1  100
// 1000L 2  200
// 2000L 1  300
// 2000L 2  400

val result = volTSRdd.summarizeCycles(Summary.sum("volume"))
// time  volume_sum
// ----------------
// 1000L 300
// 2000L 700

Similarly, we could summarize over intervals, windows, or the whole time series data set (a summarizeWindows sketch follows the list below). See

  • summarizeIntervals
  • summarizeWindows
  • addSummaryColumns
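
As an illustration, here is a sketch of summarizeWindows (assuming the priceTSRdd from the examples above and reusing the Summary.sum summarizer and the Window factory already shown) that sums "price" over each row's trailing 1000ns window:

// For each row, aggregates "price" over the window [t - 1000ns, t].
// Following the naming convention above, the result gains a "price_sum" column.
val windowedSum = priceTSRdd.summarizeWindows(
  Window.pastAbsoluteTime("1000ns"),
  Summary.sum("price")
)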

One could check timeseries.summarize.summarizer for the different kinds of summarizers, like ZScoreSummarizer, CorrelationSummarizer, NthCentralMomentSummarizer, etc.

Contributing

In order to accept your code contributions, please fill out the appropriate Contributor License Agreement in the cla folder and submit it to [email protected].

Disclaimer

Apache Spark is a trademark of The Apache Software Foundation. The Apache Software Foundation is not affiliated, endorsed, connected, sponsored or otherwise associated in any way with Two Sigma, Flint, or this website.

© Two Sigma Open Source, LLC

Comments
  • NoSuchMethodError: internalCreateDataFrame

    Thank you for this amazing library! 🥇 I'm running Spark 2.2.0 and tried to initialize a clock:

    clock = clocks.uniform(sqlContext, frequency="1day")
    

    This threw an exception:

    py4j.protocol.Py4JJavaError: An error occurred while calling z:com.twosigma.flint.timeseries.Clocks.uniform.
    : java.lang.NoSuchMethodError: org.apache.spark.sql.SparkSession.internalCreateDataFrame$default$3()Z
    	at org.apache.spark.sql.DFConverter$.toDataFrame(DFConverter.scala:42)
    	at com.twosigma.flint.timeseries.clock.Clock.asTimeSeriesRDD(Clock.scala:148)
    	at com.twosigma.flint.timeseries.Clocks$.uniform(Clocks.scala:54)
    	at com.twosigma.flint.timeseries.Clocks.uniform(Clocks.scala)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    
    

    NoSuchMethodError: org.apache.spark.sql.SparkSession.internalCreateDataFrame$default$3()Z

    I found this method in the docs: https://spark.apache.org/docs/preview/api/java/org/apache/spark/sql/SparkSession.html#internalCreateDataFrame(org.apache.spark.rdd.RDD,%20org.apache.spark.sql.types.StructType)

    However it seems like it's not available in the version I'm running? Can you please provide me with a hint on how to resolve this issue? Thanks! 👍

    opened by emmanuelmillionaer 4
  • Test failure in ClockSpec tests

    When running sbt assembly, I get an error that looks like it's picking up an ambient timezone somewhere in my environment:

    [info] UniformClock
    [info] - should generate clock ticks correctly (4 milliseconds)
    [info] - should generate clock ticks in RDD correctly (90 milliseconds)
    [info] - should generate clock ticks in TimeSeriesRDD correctly (164 milliseconds)
    [info] - should generate clock ticks with offset in TimeSeriesRDD correctly (74 milliseconds)
    [info] - should generate clock ticks with offset & time zone in TimeSeriesRDD correctly (75 milliseconds)
    [info] - should generate clock ticks with default in TimeSeriesRDD correctly (172 milliseconds)
    [info] - should generate timestamp correctly *** FAILED *** (160 milliseconds)
    [info]   1989-12-31 18:00:00.0 did not equal 1990-01-01 00:00:00.0 (ClockSpec.scala:85)
    

    Is this a known issue, or a known problem in my setup maybe?

    opened by kenahoo 4
  • join not only by time but additionally also by column

    How can I join not only by time but also by a column?

    Currently, I get: Found duplicate columns, but I would like to perform the time series join per group.

    val left = Seq((1,1L, 0.1), (1, 2L,0.2), (3,1L,0.3), (3, 2L,0.4)).toDF("group", "time", "valueA")
      val right = Seq((1,1L, 11), (1, 2L,12), (3,1L,13), (3, 2L,14)).toDF("group", "time", "valueB")
      val leftTs = TimeSeriesRDD.fromDF(dataFrame = left)(isSorted = false, timeUnit = MILLISECONDS)
      val rightTS        = TimeSeriesRDD.fromDF(dataFrame = right)(isSorted = false, timeUnit = MILLISECONDS)
    
      val mergedPerGroup = leftTs.leftJoin(rightTS, tolerance = "1s")
    

    fails due to duplicate columns.

    When renaming the columns:

    val left = Seq((1,1L, 0.1), (1, 2L,0.2), (3,1L,0.3), (3, 2L,0.4)).toDF("groupA", "time", "valueA")
      val right = Seq((1,1L, 11), (1, 2L,12), (3,1L,13), (3, 2L,14)).toDF("groupB", "time", "valueB")
      val leftTs = TimeSeriesRDD.fromDF(dataFrame = left)(isSorted = false, timeUnit = MILLISECONDS)
      val rightTS        = TimeSeriesRDD.fromDF(dataFrame = right)(isSorted = false, timeUnit = MILLISECONDS)
    
      val mergedPerGroup = leftTs.leftJoin(rightTS, tolerance = "1s")
      mergedPerGroup.toDF.printSchema
      mergedPerGroup.toDF.show
    +-------+------+------+------+------+
    |   time|groupA|valueA|groupB|valueB|
    +-------+------+------+------+------+
    |1000000|     1|   0.1|     3|    13|
    |1000000|     3|   0.3|     3|    13|
    |2000000|     1|   0.2|     3|    14|
    |2000000|     3|   0.4|     3|    14|
    +-------+------+------+------+------+
    
    

    A cross join is performed between each group and time series, which then needs to be manually reduced.

    mergedPerGroup.toDF.filter(col("groupA") === col("groupB")).show
    +-------+------+------+------+------+
    |   time|groupA|valueA|groupB|valueB|
    +-------+------+------+------+------+
    |1000000|     3|   0.3|     3|    13|
    |2000000|     3|   0.4|     3|    14|
    
    

    Is there any functionality to perform this type of join more efficiently / built in?

    opened by geoHeil 3
  • Method not found when creating time series add

    val tsRdd = TimeSeriesRDD.fromDF(dataFrame = df)(isSorted = true, timeUnit = MILLISECONDS)

    throws

    scala> val tsRdd = TimeSeriesRDD.fromDF(dataFrame = cellFeed)(isSorted = true, timeUnit = MILLISECONDS)
    java.lang.NoSuchMethodError: org.apache.spark.sql.catalyst.plans.physical.ClusteredDistribution$.apply$default$2()Lscala/Option;
      at com.twosigma.flint.timeseries.TimeSeriesStore$.isClustered(TimeSeriesStore.scala:149)
      at com.twosigma.flint.timeseries.TimeSeriesStore$.apply(TimeSeriesStore.scala:64)
      at com.twosigma.flint.timeseries.TimeSeriesRDD$.fromDFWithPartInfo(TimeSeriesRDD.scala:509)
      at com.twosigma.flint.timeseries.TimeSeriesRDD$.fromDF(TimeSeriesRDD.scala:304)
      ... 52 elided
    
    

    on spark 2.2 when trying to create the initial RDD.

    Minimal reproducible sample:

    import spark.implicits._
      import com.twosigma.flint.timeseries.TimeSeriesRDD
      import scala.concurrent.duration._
      val df = Seq((1, 1, 1L), (2, 3, 1L), (1, 4, 2L), (2, 2, 2L)).toDF("id", "value", "time")
      val tsRdd = TimeSeriesRDD.fromDF(dataFrame = df)(isSorted = true, timeUnit = MILLISECONDS)
    

    on spark 2.2 via HDP 2.6.4

    opened by geoHeil 3
  • add more information in Summarizer fromV

    In this interface: https://github.com/twosigma/flint/blob/master/src/main/scala/com/twosigma/flint/timeseries/summarize/Summarizer.scala#L211

    Sometimes it is also good to know which row it is rendering. Is it possible to have something like: def fromV(v: V, t: T): InternalRow

    opened by soloman817 3
  • Narrow dependency

    I saw this code in overlapped RDD implementation: https://github.com/twosigma/flint/blob/master/src/main/scala/com/twosigma/flint/rdd/OverlappedOrderedRDD.scala#L50

    Then I read about what narrow dependency means: https://github.com/rohgar/scala-spark-4/wiki/Wide-vs-Narrow-Dependencies , where it says: "Narrow dependencies: Each partition of the parent RDD is used by at most one partition of the child RDD."

    But in the Flint implementation, the overlappedRDD is the child RDD and the orderedRDD is its parent. One partition in the parent will be used by multiple partitions in the child overlappedRDD, because of the overlapping.

    So, what am I missing to understand this?

    Thanks in advance, Xiang

    opened by soloman817 3
  • "Rankers" Import error

    I can successfully open the PySpark shell with the command provided on your python/README.md file

    pyspark --master=local --jars /path/to/assembly/flint-assembly-0.6.0-SNAPSHOT.jar --py-files /path/to/assembly/flint-assembly-0.2.0-SNAPSHOT.jar

    However, when I try to import the ts.flint module with the command

    import ts.flint

    I always get the error

    cannot import name 'rankers'

    Am I missing something? The file python/ts/flint/dataframe.py contains the statement

    from . import rankers

    but I cannot find any such file in the repository.

    Any help would be appreciated.

    opened by gbaettig 3
  • fix fromNormalizedSortedRDD range split array indexes

    This fixes a bug in the Conversion.fromNormalizedSortedRDD. Currently rangeSplits are generated from a map (partitionId -> (parentPartitionId, K)). The ordering of the keys in the map is not guaranteed so the array of RangeSplits does not contain sorted OrderedRDDPartitions. Therefore, using the partitionId to get the range split from the rangeSplits array will return an arbitrary range split causing errors during range validation.

    I modified the test for it so that the bug can be reproduced if the fix is not present.

    Here the solution is to sort the range split array by the id of the partition. This is a minor fix so I didn't send in a contributor agreement thingy.

    opened by alazareva 3
  • Example of sqlContext in Weather.ipynb and Flint Example.ipynb

    Example of SQLContext in Weather.ipynb and Flint Example.ipynb?

    I get TypeError: 'JavaPackage' object is not callable when trying some SQLContext (although I believe this is a configuration issue on my side).

    Thanks!

    opened by yezhengli-Mr9 2
  • How to get the last row of a TimeSeriesRDD?

    I have a time series RDD object, and I know internally it is sorted by the timestamps. What is the efficient way to get the start time and last time? There is a TimeSeriesRDD.first which returns the first row, so I can get the start time. But how to get the last row efficiently?

    opened by soloman817 2
  • usage of merge of a summarizer

    I created an experimental summarizer to try to understand how it works: https://github.com/soloman817/flint/blob/feature/experiment/src/test/scala/com/twosigma/flint/timeseries/experiment/AccumulateSummarizerSpec.scala

    There I print information when the methods of the summarizer are called, such as add, merge, etc.

    I found that if I call TimeSeriesRDD.summarize(), then subtract will not be called, and if I have multiple partitions, merge will be called. This means aggregation over all rows runs in parallel and the partial results are eventually merged into the final result.

    But if I run the summarizer with TimeSeriesRDD.summarizeWindow(), then for the window aggregation add and subtract will be called, but not merge, which means that within one window it is not parallel.

    Am I right? This knowledge will be very helpful for my implementation of our problem, which is an extension of that experimental code.

    Thanks, Xiang.

    opened by soloman817 2
  • java.lang.NoSuchMethodError: scala.Predef$.refArrayOps([Ljava/lang/Object;)Lscala/collection/mutable/ArrayOps;

    Hi,

    I am using Spark 3.0, perhaps this is the cause of the following error:

    val tsRDD = TimeSeriesRDD.fromDF(dataFrame = dffg_ts)(isSorted = true, timeUnit = MILLISECONDS)

    java.lang.NoSuchMethodError: scala.Predef$.refArrayOps([Ljava/lang/Object;)Lscala/collection/mutable/ArrayOps;
        at com.twosigma.flint.timeseries.TimeSeriesRDD$.canonizeDF(TimeSeriesRDD.scala:331)
        at com.twosigma.flint.timeseries.TimeSeriesRDD$.fromDF(TimeSeriesRDD.scala:303)
        ... 62 elided

    Be great to get it to work. Thanks

    opened by durranaik 1
  • How to get the latest two events on a specific window using Flint?

    How to get the latest two events using this library?

    Say I have the following

    time    | price
    ---------------
    1000L | 40
    2000L | 20
    3000L | 80
    4000L | 10
    5000L | 60
    6000L | 30
    

    I want to do operations like

    1. Get the last 2 events or last 4 events in the past 6 hours
    2. Difference in price for the last two events in 6 hours.

    How to do this using Flint? I can write my own udf and solve this but I am wondering if there is any inbuilt function that is already available?

    opened by kant777 1
  • How to make Flint work with Spark 3.0

    Hi n00b question: given how awesome and popular Flint has been, I'm really interested in making it work with Spark 3.0.

    So I went ahead and tried the changes in https://github.com/twosigma/flint/pull/82/files which made Flint build successfully with Spark 3.0-preview2, but now some tests are failing (see flint-spark-3.0.0-build.log ), and I'm not completely sure how to fix the test failures or how much work it might take to fix them.

    Any idea how to resolve the test failures? Thanks in advance!

    opened by yitao-li 3
  • UDF error

    I was testing the Flint example notebook with Spark 2.4.3 (Python 3.7) and got this error on paragraph 8:

    Py4JJavaError Traceback (most recent call last) in 10 sp500_decayed_return = sp500_joined_return.summarizeWindows( 11 window = windows.past_absolute_time('7day'), ---> 12 summarizer = {'previous_day_return_decayed_sum': decayed(sp500_joined_return[['previous_day_return']])} 13 ) 14

    /opt/conda/lib/python3.7/site-packages/ts/flint/dataframe.py in summarizeWindows(self, window, summarizer, key) 1269 1270 if isinstance(summarizer, collections.Mapping): -> 1271 return self._summarizeWindows_udf(window, summarizer, key) 1272 else: 1273 return self._summarizeWindows_builtin(window, summarizer, key)

    /opt/conda/lib/python3.7/site-packages/ts/flint/dataframe.py in _summarizeWindows_udf(self, window, columns, key) 1487 windowed = windowed.filter(windowed[base_rows_col_name].isNotNull()) 1488 -> 1489 result = windowed._concatArrowAndExplode(base_rows_col_name, schema_col_names, data_col_names) 1490 1491 return result

    /opt/conda/lib/python3.7/site-packages/ts/flint/dataframe.py in _concatArrowAndExplode(self, base_rows_col, schema_cols, data_cols) 928 base_rows_col, 929 utils.list_to_seq(self._sc, schema_cols), --> 930 utils.list_to_seq(self._sc, data_cols)) 931 return TimeSeriesDataFrame._from_tsrdd(tsrdd, self.sql_ctx) 932

    /vagrant/spark-2.4.3-bin-hadoop2.7/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py in call(self, *args) 1255 answer = self.gateway_client.send_command(command) 1256 return_value = get_return_value( -> 1257 answer, self.gateway_client, self.target_id, self.name) 1258 1259 for temp_arg in temp_args:

    /vagrant/spark-2.4.3-bin-hadoop2.7/python/pyspark/sql/utils.py in deco(*a, **kw) 61 def deco(*a, **kw): 62 try: ---> 63 return f(*a, **kw) 64 except py4j.protocol.Py4JJavaError as e: 65 s = e.java_exception.toString()

    /vagrant/spark-2.4.3-bin-hadoop2.7/python/lib/py4j-0.10.7-src.zip/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name) 326 raise Py4JJavaError( 327 "An error occurred while calling {0}{1}{2}.\n". --> 328 format(target_id, ".", name), value) 329 else: 330 raise Py4JError(

    Py4JJavaError: An error occurred while calling o459.concatArrowAndExplode. : java.lang.NoSuchMethodError: org.apache.spark.sql.catalyst.expressions.codegen.ExprCode.code()Ljava/lang/String; at org.apache.spark.sql.TimestampCast$class.doGenCode(TimestampCast.scala:76) at org.apache.spark.sql.NanosToTimestamp.doGenCode(TimestampCast.scala:31) at org.apache.spark.sql.catalyst.expressions.Expression$$anonfun$genCode$2.apply(Expression.scala:108) at org.apache.spark.sql.catalyst.expressions.Expression$$anonfun$genCode$2.apply(Expression.scala:105) at scala.Option.getOrElse(Option.scala:121) at org.apache.spark.sql.catalyst.expressions.Expression.genCode(Expression.scala:105) at org.apache.spark.sql.catalyst.expressions.Alias.genCode(namedExpressions.scala:155) at org.apache.spark.sql.execution.ProjectExec$$anonfun$6.apply(basicPhysicalOperators.scala:60) at org.apache.spark.sql.execution.ProjectExec$$anonfun$6.apply(basicPhysicalOperators.scala:60) at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234) at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234) at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59) at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48) at scala.collection.TraversableLike$class.map(TraversableLike.scala:234) at scala.collection.AbstractTraversable.map(Traversable.scala:104) at org.apache.spark.sql.execution.ProjectExec.doConsume(basicPhysicalOperators.scala:60) at org.apache.spark.sql.execution.CodegenSupport$class.consume(WholeStageCodegenExec.scala:189) at org.apache.spark.sql.execution.InputAdapter.consume(WholeStageCodegenExec.scala:374) at org.apache.spark.sql.execution.InputAdapter.doProduce(WholeStageCodegenExec.scala:403) at org.apache.spark.sql.execution.CodegenSupport$$anonfun$produce$1.apply(WholeStageCodegenExec.scala:90) at org.apache.spark.sql.execution.CodegenSupport$$anonfun$produce$1.apply(WholeStageCodegenExec.scala:85) at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151) at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152) at org.apache.spark.sql.execution.CodegenSupport$class.produce(WholeStageCodegenExec.scala:85) at org.apache.spark.sql.execution.InputAdapter.produce(WholeStageCodegenExec.scala:374) at org.apache.spark.sql.execution.ProjectExec.doProduce(basicPhysicalOperators.scala:45) at org.apache.spark.sql.execution.CodegenSupport$$anonfun$produce$1.apply(WholeStageCodegenExec.scala:90) at org.apache.spark.sql.execution.CodegenSupport$$anonfun$produce$1.apply(WholeStageCodegenExec.scala:85) at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151) at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152) at org.apache.spark.sql.execution.CodegenSupport$class.produce(WholeStageCodegenExec.scala:85) at org.apache.spark.sql.execution.ProjectExec.produce(basicPhysicalOperators.scala:35) at org.apache.spark.sql.execution.WholeStageCodegenExec.doCodeGen(WholeStageCodegenExec.scala:544) at org.apache.spark.sql.execution.WholeStageCodegenExec.doExecute(WholeStageCodegenExec.scala:598) at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131) at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127) at 
org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151) at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152) at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127) at org.apache.spark.sql.execution.GenerateExec.doExecute(GenerateExec.scala:80) at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131) at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127) at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151) at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152) at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127) at org.apache.spark.sql.execution.InputAdapter.inputRDDs(WholeStageCodegenExec.scala:391) at org.apache.spark.sql.execution.FilterExec.inputRDDs(basicPhysicalOperators.scala:121) at org.apache.spark.sql.execution.WholeStageCodegenExec.doExecute(WholeStageCodegenExec.scala:627) at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131) at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127) at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151) at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152) at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127) at org.apache.spark.sql.execution.python.EvalPythonExec.doExecute(EvalPythonExec.scala:87) at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131) at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127) at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151) at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152) at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127) at org.apache.spark.sql.execution.InputAdapter.inputRDDs(WholeStageCodegenExec.scala:391) at org.apache.spark.sql.execution.ProjectExec.inputRDDs(basicPhysicalOperators.scala:41) at org.apache.spark.sql.execution.WholeStageCodegenExec.doExecute(WholeStageCodegenExec.scala:627) at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131) at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127) at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151) at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152) at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127) at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:80) at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:80) at com.twosigma.flint.timeseries.NormalizedDataFrameStore.toOrderedRdd(TimeSeriesStore.scala:252) at com.twosigma.flint.timeseries.NormalizedDataFrameStore.orderedRdd(TimeSeriesStore.scala:237) at com.twosigma.flint.timeseries.TimeSeriesRDDImpl.orderedRdd(TimeSeriesRDD.scala:1346) at 
com.twosigma.flint.timeseries.TimeSeriesRDDImpl.concatArrowAndExplode(TimeSeriesRDD.scala:1946) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244) at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357) at py4j.Gateway.invoke(Gateway.java:282) at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132) at py4j.commands.CallCommand.execute(CallCommand.java:79) at py4j.GatewayConnection.run(GatewayConnection.java:238) at java.lang.Thread.run(Thread.java:748)

    opened by thbeh 0
  • Flint Could not initialize class

    Hi, I am trying to run sample code:

    from ts.flint import FlintContext
    from ts.flint import summarizers
    from ts.flint import TimeSeriesDataFrame
    from pyspark.sql.functions import from_utc_timestamp, col
    
    flintContext = FlintContext(sqlContext)
    
    df = spark.createDataFrame(
      [('2018-08-20', 1.0), ('2018-08-21', 2.0), ('2018-08-24', 3.0)], 
      ['time', 'v']
    ).withColumn('time', from_utc_timestamp(col('time'), 'UTC'))
    
    # Convert to Flint DataFrame
    flint_df = flintContext.read.dataframe(df)
    
    # Use Spark DataFrame functionality
    flint_df = flint_df.withColumn('v', flint_df['v'] + 1)
    
    # Use Flint functionality
    flint_df = flint_df.summarizeCycles(summarizers.count())
    

    and the last command returns an error:

    SparkException: Job aborted due to stage failure: Task 0 in stage 10.0 failed 4 times, most recent failure: Lost task 0.3 in stage 10.0 (TID 35, 10.9.37.18, executor 0): java.lang.NoClassDefFoundError: Could not initialize class com.twosigma.flint.rdd.PartitionsIterator$
    	at com.twosigma.flint.rdd.Conversion$$anonfun$fromRDD$4.apply(Conversion.scala:309)
    	at com.twosigma.flint.rdd.Conversion$$anonfun$fromRDD$4.apply(Conversion.scala:303)
    	at com.twosigma.flint.rdd.OrderedRDD.compute(OrderedRDD.scala:169)
    	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:340)
    	at org.apache.spark.rdd.RDD.iterator(RDD.scala:304)
    	at com.twosigma.flint.rdd.OrderedRDD$$anonfun$summarizeByKey$1.apply(OrderedRDD.scala:545)
    	at com.twosigma.flint.rdd.OrderedRDD$$anonfun$summarizeByKey$1.apply(OrderedRDD.scala:544)
    	at com.twosigma.flint.rdd.OrderedRDD.compute(OrderedRDD.scala:169)
    	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:340)
    	at org.apache.spark.rdd.RDD.iterator(RDD.scala:304)
    	at com.twosigma.flint.rdd.OrderedRDD$$anonfun$mapValues$1.apply(OrderedRDD.scala:472)
    	at com.twosigma.flint.rdd.OrderedRDD$$anonfun$mapValues$1.apply(OrderedRDD.scala:472)
    	at com.twosigma.flint.rdd.OrderedRDD.compute(OrderedRDD.scala:169)
    	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:340)
    	at org.apache.spark.rdd.RDD.iterator(RDD.scala:304)
    	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:60)
    	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:340)
    	at org.apache.spark.rdd.RDD.iterator(RDD.scala:304)
    	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:60)
    	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:340)
    	at org.apache.spark.rdd.RDD.iterator(RDD.scala:304)
    	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
    	at org.apache.spark.scheduler.Task.doRunTask(Task.scala:139)
    	at org.apache.spark.scheduler.Task.run(Task.scala:112)
    	at org.apache.spark.executor.Executor$TaskRunner$$anonfun$13.apply(Executor.scala:497)
    	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1526)
    	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:503)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    
    Driver stacktrace:
    org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 10.0 failed 4 times, most recent failure: Lost task 0.3 in stage 10.0 (TID 35, 10.9.37.18, executor 0): java.lang.NoClassDefFoundError: Could not initialize class com.twosigma.flint.rdd.PartitionsIterator$
    	at com.twosigma.flint.rdd.Conversion$$anonfun$fromRDD$4.apply(Conversion.scala:309)
    	at com.twosigma.flint.rdd.Conversion$$anonfun$fromRDD$4.apply(Conversion.scala:303)
    	at com.twosigma.flint.rdd.OrderedRDD.compute(OrderedRDD.scala:169)
    	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:340)
    	at org.apache.spark.rdd.RDD.iterator(RDD.scala:304)
    	at com.twosigma.flint.rdd.OrderedRDD$$anonfun$summarizeByKey$1.apply(OrderedRDD.scala:545)
    	at com.twosigma.flint.rdd.OrderedRDD$$anonfun$summarizeByKey$1.apply(OrderedRDD.scala:544)
    	at com.twosigma.flint.rdd.OrderedRDD.compute(OrderedRDD.scala:169)
    	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:340)
    	at org.apache.spark.rdd.RDD.iterator(RDD.scala:304)
    	at com.twosigma.flint.rdd.OrderedRDD$$anonfun$mapValues$1.apply(OrderedRDD.scala:472)
    	at com.twosigma.flint.rdd.OrderedRDD$$anonfun$mapValues$1.apply(OrderedRDD.scala:472)
    	at com.twosigma.flint.rdd.OrderedRDD.compute(OrderedRDD.scala:169)
    	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:340)
    	at org.apache.spark.rdd.RDD.iterator(RDD.scala:304)
    	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:60)
    	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:340)
    	at org.apache.spark.rdd.RDD.iterator(RDD.scala:304)
    	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:60)
    	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:340)
    	at org.apache.spark.rdd.RDD.iterator(RDD.scala:304)
    	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
    	at org.apache.spark.scheduler.Task.doRunTask(Task.scala:139)
    	at org.apache.spark.scheduler.Task.run(Task.scala:112)
    	at org.apache.spark.executor.Executor$TaskRunner$$anonfun$13.apply(Executor.scala:497)
    	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1526)
    	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:503)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    
    Driver stacktrace:
    	at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:2355)
    	at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:2343)
    	at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:2342)
    	at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
    	at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2342)
    	at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:1096)
    	at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:1096)
    	at scala.Option.foreach(Option.scala:257)
    	at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1096)
    	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2574)
    	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2522)
    	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2510)
    	at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
    	at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:893)
    	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2243)
    	at org.apache.spark.sql.execution.collect.Collector.runSparkJobs(Collector.scala:270)
    	at org.apache.spark.sql.execution.collect.Collector.collect(Collector.scala:280)
    	at org.apache.spark.sql.execution.collect.Collector$.collect(Collector.scala:80)
    	at org.apache.spark.sql.execution.collect.Collector$.collect(Collector.scala:86)
    	at org.apache.spark.sql.execution.ResultCacheManager.getOrComputeResult(ResultCacheManager.scala:508)
    	at org.apache.spark.sql.execution.CollectLimitExec.executeCollectResult(limit.scala:55)
    	at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collectResult(Dataset.scala:2842)
    	at org.apache.spark.sql.Dataset$$anonfun$collectResult$1.apply(Dataset.scala:2833)
    	at org.apache.spark.sql.Dataset$$anonfun$collectResult$1.apply(Dataset.scala:2832)
    	at org.apache.spark.sql.Dataset$$anonfun$56.apply(Dataset.scala:3446)
    	at org.apache.spark.sql.Dataset$$anonfun$56.apply(Dataset.scala:3441)
    	at org.apache.spark.sql.execution.SQLExecution$$anonfun$withCustomExecutionEnv$1.apply(SQLExecution.scala:111)
    	at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:240)
    	at org.apache.spark.sql.execution.SQLExecution$.withCustomExecutionEnv(SQLExecution.scala:97)
    	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:170)
    	at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$withAction(Dataset.scala:3441)
    	at org.apache.spark.sql.Dataset.collectResult(Dataset.scala:2832)
    	at com.databricks.backend.daemon.driver.OutputAggregator$.withOutputAggregation0(OutputAggregator.scala:149)
    	at com.databricks.backend.daemon.driver.OutputAggregator$.withOutputAggregation(OutputAggregator.scala:54)
    	at com.databricks.backend.daemon.driver.PythonDriverLocal$$anonfun$getResultBufferInternal$1.apply(PythonDriverLocal.scala:828)
    	at com.databricks.backend.daemon.driver.PythonDriverLocal$$anonfun$getResultBufferInternal$1.apply(PythonDriverLocal.scala:783)
    	at com.databricks.backend.daemon.driver.PythonDriverLocal.withInterpLock(PythonDriverLocal.scala:728)
    	at com.databricks.backend.daemon.driver.PythonDriverLocal.getResultBufferInternal(PythonDriverLocal.scala:783)
    	at com.databricks.backend.daemon.driver.DriverLocal.getResultBuffer(DriverLocal.scala:463)
    	at com.databricks.backend.daemon.driver.PythonDriverLocal.com$databricks$backend$daemon$driver$PythonDriverLocal$$outputSuccess(PythonDriverLocal.scala:770)
    	at com.databricks.backend.daemon.driver.PythonDriverLocal$$anonfun$repl$6.apply(PythonDriverLocal.scala:322)
    	at com.databricks.backend.daemon.driver.PythonDriverLocal$$anonfun$repl$6.apply(PythonDriverLocal.scala:309)
    	at com.databricks.backend.daemon.driver.PythonDriverLocal.withInterpLock(PythonDriverLocal.scala:728)
    	at com.databricks.backend.daemon.driver.PythonDriverLocal.repl(PythonDriverLocal.scala:309)
    	at com.databricks.backend.daemon.driver.DriverLocal$$anonfun$execute$8.apply(DriverLocal.scala:373)
    	at com.databricks.backend.daemon.driver.DriverLocal$$anonfun$execute$8.apply(DriverLocal.scala:350)
    	at com.databricks.logging.UsageLogging$$anonfun$withAttributionContext$1.apply(UsageLogging.scala:238)
    	at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
    	at com.databricks.logging.UsageLogging$class.withAttributionContext(UsageLogging.scala:233)
    	at com.databricks.backend.daemon.driver.DriverLocal.withAttributionContext(DriverLocal.scala:48)
    	at com.databricks.logging.UsageLogging$class.withAttributionTags(UsageLogging.scala:271)
    	at com.databricks.backend.daemon.driver.DriverLocal.withAttributionTags(DriverLocal.scala:48)
    	at com.databricks.backend.daemon.driver.DriverLocal.execute(DriverLocal.scala:350)
    	at com.databricks.backend.daemon.driver.DriverWrapper$$anonfun$tryExecutingCommand$2.apply(DriverWrapper.scala:644)
    	at com.databricks.backend.daemon.driver.DriverWrapper$$anonfun$tryExecutingCommand$2.apply(DriverWrapper.scala:644)
    	at scala.util.Try$.apply(Try.scala:192)
    	at com.databricks.backend.daemon.driver.DriverWrapper.tryExecutingCommand(DriverWrapper.scala:639)
    	at com.databricks.backend.daemon.driver.DriverWrapper.getCommandOutputAndError(DriverWrapper.scala:485)
    	at com.databricks.backend.daemon.driver.DriverWrapper.executeCommand(DriverWrapper.scala:597)
    	at com.databricks.backend.daemon.driver.DriverWrapper.runInnerLoop(DriverWrapper.scala:390)
    	at com.databricks.backend.daemon.driver.DriverWrapper.runInner(DriverWrapper.scala:337)
    	at com.databricks.backend.daemon.driver.DriverWrapper.run(DriverWrapper.scala:219)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: java.lang.NoClassDefFoundError: Could not initialize class com.twosigma.flint.rdd.PartitionsIterator$
    	at com.twosigma.flint.rdd.Conversion$$anonfun$fromRDD$4.apply(Conversion.scala:309)
    	at com.twosigma.flint.rdd.Conversion$$anonfun$fromRDD$4.apply(Conversion.scala:303)
    	at com.twosigma.flint.rdd.OrderedRDD.compute(OrderedRDD.scala:169)
    	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:340)
    	at org.apache.spark.rdd.RDD.iterator(RDD.scala:304)
    	at com.twosigma.flint.rdd.OrderedRDD$$anonfun$summarizeByKey$1.apply(OrderedRDD.scala:545)
    	at com.twosigma.flint.rdd.OrderedRDD$$anonfun$summarizeByKey$1.apply(OrderedRDD.scala:544)
    	at com.twosigma.flint.rdd.OrderedRDD.compute(OrderedRDD.scala:169)
    	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:340)
    	at org.apache.spark.rdd.RDD.iterator(RDD.scala:304)
    	at com.twosigma.flint.rdd.OrderedRDD$$anonfun$mapValues$1.apply(OrderedRDD.scala:472)
    	at com.twosigma.flint.rdd.OrderedRDD$$anonfun$mapValues$1.apply(OrderedRDD.scala:472)
    	at com.twosigma.flint.rdd.OrderedRDD.compute(OrderedRDD.scala:169)
    	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:340)
    	at org.apache.spark.rdd.RDD.iterator(RDD.scala:304)
    	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:60)
    	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:340)
    	at org.apache.spark.rdd.RDD.iterator(RDD.scala:304)
    	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:60)
    	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:340)
    	at org.apache.spark.rdd.RDD.iterator(RDD.scala:304)
    	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
    	at org.apache.spark.scheduler.Task.doRunTask(Task.scala:139)
    	at org.apache.spark.scheduler.Task.run(Task.scala:112)
    	at org.apache.spark.executor.Executor$TaskRunner$$anonfun$13.apply(Executor.scala:497)
    	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1526)
    	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:503)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	... 1 more
    

    I am using Spark 2.4.3, Scala 2.11 and Python 3.6.

    Do you know what the reason for this error is? Is this due to the Scala version? Thanks in advance!

    opened by annabednarska 3