:truck: Agile Data Preparation Workflows made easy with dask, cudf, dask_cudf and pyspark

Overview




To launch a live notebook server to test Optimus using Binder or Colab, click on one of the following badges:

Binder Colab

Optimus is the missing framework to profile, clean, process, and do ML in a distributed fashion using Apache Spark (PySpark).

Installation (pip):

In your terminal just type pip install optimuspyspark

Requirements

  • Apache Spark >= 2.4.0
  • Python >= 3.6

Examples

You can go to the 10 minutes to Optimus notebook, where you can find the basics to start working.

You can also go to the examples folder to find specific notebooks about data cleaning, data munging, profiling, data enrichment, and how to create ML and DL models.

Also, check out the Cheat Sheet.

Documentation

Documentation

Feedback

Feedback is what drives Optimus' future, so please take a couple of minutes to help shape the Optimus Roadmap: http://bit.ly/optimus_survey

Also, if you have a suggestion or feature request, use https://github.com/ironmussa/optimus/issues

Start Optimus

from optimus import Optimus
op = Optimus(verbose=True)

You can also use an already created Spark session:

from pyspark.sql import SparkSession
from optimus import Optimus

spark = SparkSession.builder.appName('optimus').getOrCreate()
op = Optimus(spark)

Loading data

Optimus can load data in CSV, JSON, Parquet, Avro, and Excel formats, from a local file or a URL.

#csv
df = op.load.csv("../examples/data/foo.csv")

#json
# Use a local file
df = op.load.json("../examples/data/foo.json")

# Use a url
df = op.load.json("https://raw.githubusercontent.com/ironmussa/Optimus/master/examples/data/foo.json")

# parquet
df = op.load.parquet("../examples/data/foo.parquet")

# avro
# df = op.load.avro("../examples/data/foo.avro").table(5)

# excel 
df = op.load.excel("../examples/data/titanic3.xls")

You can also load data from Oracle, Redshift, MySQL, and Postgres. See Database connection.

Saving Data

#csv
df.save.csv("data/foo.csv")

# json
df.save.json("data/foo.json")

# parquet
df.save.parquet("data/foo.parquet")

# avro
#df.save.avro("examples/data/foo.avro")

You can also save data to Oracle, Redshift, MySQL, and Postgres. See Database connection.

Handling Spark jars, packages and repositories

With Optimus it is easy to load jars, packages, and repos. You can init Optimus/Spark like this:

op = Optimus(repositories="myrepo", packages="org.apache.spark:spark-avro_2.12:2.4.3", jars="my.jar", driver_class_path="this_is_a_jar_class_path.jar", verbose=True)

Create dataframes

You can also create a dataframe from scratch:

from pyspark.sql.types import *
from datetime import date, datetime

df = op.create.df(
    [
        ("names", "str", True), 
        ("height(ft)","int", True), 
        ("function", "str", True), 
        ("rank", "int", True), 
        ("age","int",True),
        ("weight(t)","float",True),
        ("japanese name", ArrayType(StringType()), True),
        ("last position seen", "str", True),
        ("date arrival", "str", True),
        ("last date seen", "str", True),
        ("attributes", ArrayType(FloatType()), True),
        ("DateType"),
        ("Tiemstamp"),
        ("Cybertronian", "bool", True), 
        ("NullType", "null", True),
    ],
    [
        ("Optim'us", 28, "Leader", 10, 5000000, 4.3, ["Inochi", "Convoy"], "19.442735,-99.201111", "1980/04/10",
         "2016/09/10", [8.5344, 4300.0], date(2016, 9, 10), datetime(2014, 6, 24), True,
         None),
        ("bumbl#ebéé  ", 17, "Espionage", 7, 5000000, 2.0, ["Bumble", "Goldback"], "10.642707,-71.612534", "1980/04/10",
         "2015/08/10", [5.334, 2000.0], date(2015, 8, 10), datetime(2014, 6, 24), True,
         None),
        ("ironhide&", 26, "Security", 7, 5000000, 4.0, ["Roadbuster"], "37.789563,-122.400356", "1980/04/10",
         "2014/07/10", [7.9248, 4000.0], date(2014, 6, 24), datetime(2014, 6, 24), True,
         None),
        ("Jazz", 13, "First Lieutenant", 8, 5000000, 1.80, ["Meister"], "33.670666,-117.841553", "1980/04/10",
         "2013/06/10", [3.9624, 1800.0], date(2013, 6, 24), datetime(2014, 6, 24), True, None),
        ("Megatron", None, "None", 10, 5000000, 5.70, ["Megatron"], None, "1980/04/10", "2012/05/10", [None, 5700.0],
         date(2012, 5, 10), datetime(2014, 6, 24), True, None),
        ("Metroplex_)^$", 300, "Battle Station", 8, 5000000, None, ["Metroflex"], None, "1980/04/10", "2011/04/10",
         [91.44, None], date(2011, 4, 10), datetime(2014, 6, 24), True, None),

    ], infer_schema = True).ext.h_repartition(1)

With .table() you have a beautiful way to show your data, with extra information like the column number, the column data type, and marked white spaces.

df.table()

You can also create a dataframe from a pandas dataframe:

import pandas as pd

pdf = pd.DataFrame({'A': {0: 'a', 1: 'b', 2: 'c', 3: 'd'},
                    'B': {0: 1, 1: 3, 2: 5, 3: 7},
                    'C': {0: 2, 1: 4, 2: 6, 3: None},
                    'D': {0: '1980/04/10', 1: '1980/04/10', 2: '1980/04/10', 3: '1980/04/10'},
                    })

s_pdf = op.create.df(pdf=pdf)
s_pdf.table()

Cleaning and Processing

Optimus V2 was created to make data cleaning a breeze. The API was designed to be super easy for newcomers and very familiar for people who come from pandas. Optimus expands the Spark DataFrame functionality, adding .rows and .cols attributes.

For example, you can load data from a URL, transform it, and apply some predefined cleaning functions:

# This is a custom function
def func(value, arg):
    return "this was a number"
    
new_df = df\
    .rows.sort("rank","desc")\
    .withColumn('new_age', df.age)\
    .cols.lower(["names","function"])\
    .cols.date_transform("date arrival", "yyyy/MM/dd", "dd-MM-YYYY")\
    .cols.years_between("date arrival", "dd-MM-YYYY", output_cols = "from arrival")\
    .cols.remove_accents("names")\
    .cols.remove_special_chars("names")\
    .rows.drop(df["rank"]>8)\
    .cols.rename(str.lower)\
    .cols.trim("*")\
    .cols.unnest("japanese name", output_cols="other names")\
    .cols.unnest("last position seen",separator=",", output_cols="pos")\
    .cols.drop(["last position seen", "japanese name","date arrival", "cybertronian", "nulltype"])

You transform this

df.table()

Into this

new_df.table()

Note that you can use Optimus functions and Spark functions (.withColumn()) and all the functions available on a Spark DataFrame at the same time, as shown in the sketch below. To learn about all the Optimus functionality, please go to these notebooks.
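A minimal sketch (using the df created above) chaining Optimus .cols/.rows methods with native Spark DataFrame methods in one pipeline:

from pyspark.sql import functions as F

# Optimus .cols/.rows calls and native Spark calls compose freely
mixed_df = df\
    .cols.lower("function")\
    .withColumn("rank_plus_one", F.col("rank") + 1)\
    .rows.sort("rank", "desc")
mixed_df.table()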

Handling column output

With Optimus you can control how the output column from a transformation is going to be handled.

from pyspark.sql import functions as F

def func(col_name, attr):
    return F.upper(F.col(col_name))

If a string is passed to input_cols and output_cols is not defined, the result of the operation is saved in the same input column:

output_df = df.cols.apply(input_cols="names", output_cols=None,func=func)
output_df.table()

If a string is passed to input_cols and a string is passed to output_cols, the output is saved in the output column:

output_df = df.cols.apply(input_cols="names", output_cols="names_up",func=func)
output_df.table()

If a list is passed to input_cols and a string is passed to output_cols, Optimus concatenates the string with every element of the list to create the new output column names:

output_df = df.cols.apply(input_cols=["names","function"], output_cols="_up",func=func)
output_df.table()

If a list is passed to input_cols and a list is passed to output_cols, Optimus outputs every input column to the respective output column:

output_df = df.cols.apply(input_cols=["names","function"], output_cols=["names_up","function_up"],func=func)
output_df.table()

Custom functions

Spark has multiple ways to transform your data: RDDs, column expressions, UDFs, and pandas UDFs. In Optimus we created apply() and apply_expr(), which handle all the implementation complexity.

Here you apply a function to the "height(ft)" column, adding 1 and 2 to the current column value. All powered by pandas UDFs.

def func(value, args):
    return value + args[0] + args[1]

df.cols.apply("height(ft)", func, "int", [1, 2]).table()

If you want to apply a column expression, use apply_expr() like this. In this case we pass an argument of 20 to divide the actual column value:

from pyspark.sql import functions as F

def func(col_name, args):
    return F.col(col_name) / args

df.cols.apply_expr("height(ft)", func=func, args=20).table()

You can change the table output back to ASCII if you wish:

op.output("ascii")

To return to HTML just:

op.output("html")

Data profiling

Optimus comes with a powerful and unique data profiler. Besides basic and advanced stats like min, max, kurtosis, and mad, it also lets you know what type of data every column has. For example, if a string column contains strings, integers, floats, bools, and dates, Optimus can give you a unique overview of your data. Just run df.profile("*") to profile all the columns. For more info about the profiler, please go to this notebook.
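For example, after loading a dataframe, profiling every column is the one-liner mentioned above:

# Profile all the columns
df.profile("*")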

Let's load a "big" dataset

df = op.load.csv("https://raw.githubusercontent.com/ironmussa/Optimus/master/examples/data/Meteorite_Landings.csv").ext.h_repartition()

Numeric

op.profiler.run(df, "mass (g)", infer=False)

op.profiler.run(df, "name", infer=False)

Processing Dates

For date data types, Optimus can give you extra information:

op.profiler.run(df, "year", infer=True)

Profiler Speed

With the relative_error and approx_count params you can control how some operations are calculated, so you can speed up the profiling if needed.

relative_error: relative error for the quantile discretizer calculation. 1 is faster, 0 is slower.

approx_count: use approx_count_distinct or countDistinct. approx_count_distinct is faster.

op.profiler.run(df, "mass (g)", infer=False, relative_error=1, approx_count=True)

Plots

Besides histograms and frequency plots, you also have scatter plots and box plots. All powered by PySpark.

df = op.load.excel("../examples/data/titanic3.xls")
df = df.rows.drop_na(["age","fare"])

You can output to the notebook or as an image

# Output to the notebook or as an image

df.plot.frequency("age")

df.plot.scatter(["fare", "age"], buckets=30)

df.plot.box("age")

df.plot.correlation("*")

Using other plotting libraries

Optimus has a tiny API so you can use any plotting library. For example, you can use df.cols.scatter(), df.cols.frequency(), df.cols.boxplot() or df.cols.hist() to output a JSON that you can process to adapt the data to any plotting library.
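As a sketch, here is one way to feed that JSON into matplotlib. The exact shape of the dictionary returned by df.cols.frequency() is an assumption here, so adapt the parsing to the actual output:

import matplotlib.pyplot as plt

freq = df.cols.frequency("age")

# Assumed shape: {"age": [{"value": ..., "count": ...}, ...]}
values = [item["value"] for item in freq["age"]]
counts = [item["count"] for item in freq["age"]]

plt.bar(range(len(values)), counts, tick_label=values)
plt.xlabel("age")
plt.ylabel("count")
plt.show()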

Outliers

Get the outliers using Tukey's method:

df.outliers.tukey("age").select().table()

Remove the outliers using Tukey's method:

df.outliers.tukey("age").drop().table()

df.outliers.tukey("age").info()

You can also use z_score, modified_z_score, or mad:

df.outliers.z_score("age", threshold=2).drop()
df.outliers.modified_z_score("age", threshold=2).drop()
df.outliers.mad("age", threshold=2).drop()

Database connection

Optimus has handy tools to connect to databases and extract information. Optimus can handle Redshift, Postgres, Oracle, and MySQL.

from optimus import Optimus
op = Optimus(verbose=True)
# This import is only to hide the credentials
from credentials import *

# db_type accepts 'oracle', 'mysql', 'redshift', or 'postgres'

db = op.connect(
    db_type=DB_TYPE,
    host=HOST,
    database=DATABASE,
    user=USER,
    password=PASSWORD,
    port=PORT)

# Show all table names
db.tables(limit="all")
# Show a summary of every table
db.table.show("*", 20)

df_ = db.table_to_df("places_interest").table()
# Create a new table in the database
db.df_to_table(df, "new_table")

Data enrichment

You can connect to any external API to enrich your data using Optimus. Optimus uses MongoDB to download the data and then merges it with the Spark DataFrame. You need to install MongoDB first.

Let's load a tiny dataset we can enrich:

df = op.load.json("https://raw.githubusercontent.com/ironmussa/Optimus/master/examples/data/foo.json")
import requests

def func_request(params):
    # You can use whatever header or auth info you need to send here.
    # For more information see the requests library

    url = "https://jsonplaceholder.typicode.com/todos/" + str(params["id"])
    return requests.get(url)

def func_response(response):
    # Here you can parse the response
    return response["title"]


e = op.enrich(host="localhost", port=27017, db_name="jazz")

df_result = e.run(df, func_request, func_response, calls=60, period=60, max_tries=8)
df_result.table("all")
df_result.table()

Clustering Strings

Optimus implements some functions to cluster strings. We took great inspiration from OpenRefine.

Here is a quote from its site:

"In OpenRefine, clustering refers to the operation of "finding groups of different values that might be alternative representations of the same thing". For example, the two strings "New York" and "new york" are very likely to refer to the same concept and just have capitalization differences. Likewise, "Gödel" and "Godel" probably refer to the same person."

For more information see: https://github.com/OpenRefine/OpenRefine/wiki/Clustering-In-Depth
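To make the fingerprint idea concrete, here is a minimal standalone sketch of a fingerprint key function (illustrative only, not Optimus' internal implementation):

import string
import unicodedata

def fingerprint(value):
    # Lowercase, trim, strip accents and punctuation, then sort the unique tokens
    value = value.strip().lower()
    value = unicodedata.normalize("NFKD", value).encode("ascii", "ignore").decode("ascii")
    value = value.translate(str.maketrans("", "", string.punctuation))
    return " ".join(sorted(set(value.split())))

fingerprint("New York") == fingerprint("  new york")  # True: both map to the same cluster key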

Key collision

df = op.read.csv("../examples/data/random.csv", header=True, sep=";")
from optimus.ml import keycollision as keyCol
df_kc = keyCol.fingerprint_cluster(df, 'STATE')
df_kc.table()

keyCol.fingerprint_cluster(df, "STATE").to_json()
df_kc = keyCol.n_gram_fingerprint_cluster(df, "STATE" , 2)
df_kc.table()

keyCol.n_gram_fingerprint_cluster(df, "STATE" , 2).to_json()

Nearest Neighbor Methods

from optimus.ml import distancecluster as dc
df_dc = dc.levenshtein_matrix(df, "STATE")
df_dc.table()

df_dc = dc.levenshtein_filter(df, "STATE")
df_dc.table()

df_dc = dc.levenshtein_cluster(df, "STATE")
df_dc.table()

dc.to_json(df, "STATE")

Machine Learning

Machine Learning is one of the last steps, and the goal for most Data Science WorkFlows.

Apache Spark created a library called MLlib where they coded great algorithms for Machine Learning. Now with the ML library we can take advantage of the Dataframe API and its optimization to create Machine Learning Pipelines easily.

Even though this task is not extremely hard, it is not easy. The way most Machine Learning models work on Spark is not straightforward, and they need lots of feature engineering to work. That's why we created the feature engineering section inside Optimus.

One of the best "tree" models for machine learning is Random Forest. What about creating an RF model with just one line? With Optimus it's really easy.

df_cancer = op.load.csv("https://raw.githubusercontent.com/ironmussa/Optimus/master/tests/data_cancer.csv")
columns = ['diagnosis', 'radius_mean', 'texture_mean', 'perimeter_mean', 'area_mean', 'smoothness_mean',
           'compactness_mean', 'concavity_mean', 'concave points_mean', 'symmetry_mean',
           'fractal_dimension_mean']

df_predict, rf_model = op.ml.random_forest(df_cancer, columns, "diagnosis")

This will create a DataFrame with the predictions of the Random Forest model.

So let's see the predictions compared with the actual labels:

df_predict.cols.select(["label","prediction"]).table()

The rf_model variable contains the Random Forest model for analysis.
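Assuming rf_model is a standard Spark ML RandomForestClassificationModel, you can inspect it with the usual Spark ML attributes, for example:

# Assumption: rf_model is a Spark ML RandomForestClassificationModel
print(rf_model.featureImportances)  # relative importance of each input feature
print(rf_model.getNumTrees)         # number of trees in the forest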

Troubleshooting

ImportError: failed to find libmagic. Check your installation

Install libmagic: https://anaconda.org/conda-forge/libmagic
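If you use conda, a typical way to get it (assuming a conda environment) is:

conda install -c conda-forge libmagic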

Contributing to Optimus

Contributions go far beyond pull requests and commits. We are very happy to receive any kind of contribution.

Backers

[Become a backer] and get your image on our README on Github with a link to your site.

Sponsors

[Become a sponsor] and get your image on our README on Github with a link to your site.

Core Team

Argenis Leon and Luis Aguirre

Contributors:

Here are the amazing people who make Optimus possible:


License:

Apache 2.0 © Iron



Comments
  • ImportError

    ImportError

    This issue got closed by mistake. I'm getting the following import error immediately upon import, using optimuspyspark==1.2.4.

    ImportError Traceback (most recent call last)
    <ipython-input> in <module>()
    ----> 1 import optimus

    /databricks/python/lib/python3.5/site-packages/optimus/__init__.py in <module>()
          2 from optimus.df_transformer import DataFrameTransformer
          3 # Importing DataFrameAnalyzer library
    ----> 4 from optimus.df_analyzer import DataFrameAnalyzer
          5 # Importing DfProfiler library
          6 from optimus.df_analyzer import DataFrameProfiler

    /databricks/python/lib/python3.5/site-packages/optimus/df_analyzer.py in <module>()
          6 from pyspark.ml.stat import Correlation
          7 # Importing plotting libraries
    ----> 8 import matplotlib.pyplot as plt
          9 import seaborn as sns
         10 # Importing numeric module

    /databricks/python/lib/python3.5/site-packages/matplotlib/pyplot.py in <module>()
         30 from cycler import cycler
         31 import matplotlib
    ---> 32 import matplotlib.colorbar
         33 from matplotlib import style
         34 from matplotlib import _pylab_helpers, interactive

    /databricks/python/lib/python3.5/site-packages/matplotlib/colorbar.py in <module>()
         30
         31 import matplotlib as mpl
    ---> 32 import matplotlib.artist as martist
         33 import matplotlib.cbook as cbook
         34 import matplotlib.collections as collections

    /databricks/python/lib/python3.5/site-packages/matplotlib/artist.py in <module>()
         14 import matplotlib
         15 from . import cbook, docstring, rcParams
    ---> 16 from .path import Path
         17 from .transforms import (Bbox, IdentityTransform, Transform, TransformedBbox,
         18     TransformedPatchPath, TransformedPath)

    /databricks/python/lib/python3.5/site-packages/matplotlib/path.py in <module>()
         24
         25 from . import _path, rcParams
    ---> 26 from .cbook import (_to_unmasked_float_array, simple_linear_interpolation,
         27     maxdict)
         28

    ImportError: cannot import name '_to_unmasked_float_array'

    opened by maresk 25
  • Explore options for a different DataFrameTransformer interface

    Explore options for a different DataFrameTransformer interface

    I'm not sure how much we'll want to explore this option. Just want to introduce a design pattern that works well with the Scala API of Spark.

    The Spark Scala API has a nifty transform method that lets users chain user defined transformations and methods defined in the Dataset class. See this blog post for more information.

    I like the DataFrameTransformer class, but it doesn't let users easily access the native PySpark DataFrame methods.

    We might want to take these methods out of the DataFrameTransformer class, so the user can mix and match the Optimus API and the PySpark API.

    source_df\
        .transform(lambda df: lower_case(df, "*"))\
        .withColumn("funny", lit("spongebob"))\
        .transform(lambda df: trim_col(df, "address"))
    

    The transform method is defined in quinn. I'd love to make an interface like this, but not sure how to implement it with Python.

    source_df\
        .transform(lower_case("*"))\
        .withColumn("funny", lit("spongebob"))\
        .transform(trim_col("address"))
    

    Let me know what you think!
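    For what it's worth, here is a minimal sketch of the curried style in Python. It assumes a .transform method is available (PySpark 3.0+, or patched in via quinn), and lower_case here is illustrative, not Optimus code:

    from pyspark.sql import functions as F

    def lower_case(columns):
        # Returns a DataFrame -> DataFrame function, so it composes with .transform()
        def inner(df):
            cols = [c for c, t in df.dtypes if t == "string"] if columns == "*" \
                else ([columns] if isinstance(columns, str) else columns)
            for c in cols:
                df = df.withColumn(c, F.lower(F.col(c)))
            return df
        return inner

    # source_df.transform(lower_case("*")).withColumn("funny", F.lit("spongebob"))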

    enhancement help wanted 
    opened by MrPowers 24
  • Unable to install Optimus

    Unable to install Optimus

    Hello all,

    I'm really excited about what I've read about Optimus. Unfortunately, I'm coming across a number of issues getting it up and running.

    I have followed the instructions here:

    https://medium.com/hi-optimus/how-to-install-jupyter-notebook-4-4-0-and-optimus-on-ubuntu-18-04-92ff5ef30ea4

    However, whenever I try to run the following script in my notebook I get the following error:

    import optimus as op

    ModuleNotFoundError Traceback (most recent call last)
    <ipython-input> in <module>()
    ----> 1 import optimus as op

    ModuleNotFoundError: No module named 'optimus'

    I'm running Python 3.6.

    I feel that this is a simple error that I have made, but I'm at a loss.

    Any help will be greatly appreciated.

    Cheers

    opened by cpatte7372 17
  • Profiler can give the user extra info

    Profiler can give the user extra info

    At the moment, the profiler can detect:

    • String
    • Integer
    • Decimal
    • Boolean
    • Array
    • Date/Time

    If a column is a string, it can give the user extra info like:

    • IP Address
    • URL
    • Phone Number
    • Email Address
    • Credit Card
    • Zip Code

    Also a dict/json could be detected.
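    As a sketch of what such inference could look like, here is a minimal regex-based detector (the patterns are illustrative assumptions, not the profiler's actual rules):

    import re

    # Illustrative patterns only; real detection would need stricter rules
    PATTERNS = {
        "email": re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$"),
        "url": re.compile(r"^https?://\S+$"),
        "zip_code": re.compile(r"^\d{5}(-\d{4})?$"),
    }

    def infer_subtype(value):
        for name, pattern in PATTERNS.items():
            if pattern.match(value):
                return name
        return "string"

    infer_subtype("user@example.com")  # returns 'email'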

    feature_request 
    opened by argenisleon 14
  • Support for Avro

    Support for Avro

    Is there support for reading Avro files?

    Can you also provide an example of loading data which is not a CSV file from a URL? That's the only one I could see in the docs.

    Thanks!

    opened by darrenhaken 14
  • Create docker compose file with db containers

    Create docker compose file with db containers

    For testing purposes, we need to create a docker compose file to test the JDBC class against multiple databases.

    We need:

    • Postgres
    • Redshift
    • MariaDB
    • Mysql
    • Oracle
    • Microsoft SQL Server

    At the moment we have some docker commands here: https://github.com/ironmussa/Optimus/blob/master/tests/sql/readme.md

    After the initialization, the databases need to be filled with dummy data. We have some work here: https://github.com/ironmussa/Optimus/blob/master/tests/sql/sql.ipynb

    testing 
    opened by argenisleon 12
  • Add ability to create Optimus DF from Spark DF

    Add ability to create Optimus DF from Spark DF

    As far as I can tell, functionality to create an Optimus DataFrame from an existing Spark DataFrame is not supported. This would be useful when working with data from a source that Optimus doesn't currently support, but for which there is a custom Spark data source API connector.

    question Answered 
    opened by westonsankey 9
  • I am getting the exception below while reading a CSV file and writing it to a Parquet file

    I am getting the exception below while reading a CSV file and writing it to a Parquet file

    Today, after an upgrade, when I ran my program I got the exception below.

    
    {Py4JJavaError}An error occurred while calling o125.parquet.
    : java.lang.NoSuchMethodError: org.apache.spark.sql.internal.SQLConf$.LEGACY_PASS_PARTITION_BY_AS_OPTIONS()Lorg/apache/spark/internal/config/ConfigEntry;
    	at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:277)
    	at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:271)
    	at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:229)
    	at org.apache.spark.sql.DataFrameWriter.parquet(DataFrameWriter.scala:566)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    	at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    	at py4j.Gateway.invoke(Gateway.java:282)
    	at py4j.comm...
    
    
    

    Can you tell me why this exception is occurring?

    bug 
    opened by pallav1991 9
  • pip install not working

    pip install not working

    Describe the bug: I am unable to install optimuspyspark using pip for version 2.2.29.

    To Reproduce Steps to reproduce the behavior: pip install fails with the message "No such file or directory: requirement.txt"

    Expected behavior: pip install should not fail.

    bug 
    opened by brajesheee 8
  • Slow speed

    Slow speed

    Hello @argenisleon / @FavioVazquez -

    I am running a couple of operations like the ones mentioned below:

    df.rows.select(fbdt("id", "integer")).table()
    df.rows.select(fbdt("id", "float")).table()

    I am using a notebook for performing these operations. As per the console, the operation is in progress but it is slow in spitting out results. The dataset on which I am running these operations has close to 90 million records. My question: is there any configuration I can use to speed up the computation of the Spark job? FYI, I am using an Ubuntu machine with 4 cores and 32 GB RAM. Could you please advise.

    Thank you for your support.

    Best- Ishan

    help wanted question 
    opened by IshanDindorkar 8
  • Handling categorical variables

    Handling categorical variables

    Hi, I would like to know if, with the method used to process categorical variables, it can happen that the algorithm interprets that, for example, a category with index 10 is better than a category with index 2.

    Thanks.

    help wanted 
    opened by EdgarSanchez1796 8
  • Bump setuptools from 41.6.0 to 65.5.1 in /requirements

    Bump setuptools from 41.6.0 to 65.5.1 in /requirements

    Bumps setuptools from 41.6.0 to 65.5.1.

    Release notes

    Sourced from setuptools's releases.

    v65.5.1

    No release notes provided.

    v65.5.0

    No release notes provided.

    v65.4.1

    No release notes provided.

    v65.4.0

    No release notes provided.

    v65.3.0

    No release notes provided.

    v65.2.0

    No release notes provided.

    v65.1.1

    No release notes provided.

    v65.1.0

    No release notes provided.

    v65.0.2

    No release notes provided.

    v65.0.1

    No release notes provided.

    v65.0.0

    No release notes provided.

    v64.0.3

    No release notes provided.

    v64.0.2

    No release notes provided.

    v64.0.1

    No release notes provided.

    v64.0.0

    No release notes provided.

    v63.4.3

    No release notes provided.

    v63.4.2

    No release notes provided.

    ... (truncated)

    Changelog

    Sourced from setuptools's changelog.

    v65.5.1

    Misc

    • #3638: Drop a test dependency on the mock package, always use :external+python:py:mod:unittest.mock -- by :user:hroncok
    • #3659: Fixed REDoS vector in package_index.

    v65.5.0

    Changes

    • #3624: Fixed editable install for multi-module/no-package src-layout projects.
    • #3626: Minor refactorings to support distutils using stdlib logging module.

    Documentation changes

    • #3419: Updated the example version numbers to be compliant with PEP-440 on the "Specifying Your Project’s Version" page of the user guide.

    Misc

    • #3569: Improved information about conflicting entries in the current working directory and editable install (in documentation and as an informational warning).
    • #3576: Updated version of validate_pyproject.

    v65.4.1

    Misc

    • #3613: Fixed encoding errors in expand.StaticModule when system default encoding doesn't match expectations for source files.
    • #3617: Merge with pypa/distutils@6852b20 including fix for pypa/distutils#181.

    v65.4.0

    Changes

    v65.3.0

    ... (truncated)

    Commits

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

    dependencies python 
    opened by dependabot[bot] 0
  • Scheduled biweekly dependency update for week 51

    Scheduled biweekly dependency update for week 51

    Update tensorflow from 2.9.1 to 2.11.0.

    Changelog

    2.11

    which means `tf.keras.optimizers.Optimizer` will be an alias of
     `tf.keras.optimizers.experimental.Optimizer`. The current
     `tf.keras.optimizers.Optimizer` will continue to be supported as
     `tf.keras.optimizers.legacy.Optimizer`,
     e.g.,`tf.keras.optimizers.legacy.Adam`. Most users won't be affected by this
     change, but please check the
     [API doc](https://www.tensorflow.org/api_docs/python/tf/keras/optimizers/experimental)
     if any API used in your workflow is changed or deprecated, and make
     adaptations. If you decide to keep using the old optimizer, please
     explicitly change your optimizer to `tf.keras.optimizers.legacy.Optimizer`.
    *   RNG behavior change for `tf.keras.initializers`. Keras initializers will now
     use stateless random ops to generate random numbers.
     *   Both seeded and unseeded initializers will always generate the same
         values every time they are called (for a given variable shape). For
         unseeded initializers (`seed=None`), a random seed will be created and
         assigned at initializer creation (different initializer instances get
         different seeds).
     *   An unseeded initializer will raise a warning if it is reused (called)
         multiple times. This is because it would produce the same values each
         time, which may not be intended.
    *   API changes under `tf.experimental.dtensor`:
     *   New API for initialization of CPU/GPU/TPU in dtensor.
         `dtensor.initialize_accelerator_system` and
         `dtensor.shutdown_accelerator_system`.
     *   The following existing API will be removed:
         `dtensor.initialize_multi_client`, `dtensor.initialize_tpu_system`, and
         `dtensor.shutdown_tpu_system`.
    
    Deprecations
    
    *   The C++ `tensorflow::Code` and `tensorflow::Status` will become aliases of
     respectively `absl::StatusCode` and `absl::Status` in some future release.
     *   Use `tensorflow::OkStatus()` instead of `tensorflow::Status::OK()`.
     *   Stop constructing `Status` objects from `tensorflow::error::Code`.
     *   One MUST NOT access `tensorflow::errors::Code` fields. Accessing
         `tensorflow::error::Code` fields is fine.
         *   Use the constructors such as `tensorflow::errors:InvalidArgument` to
             create status using an error code without accessing it.
         *   Use the free functions such as
             `tensorflow::errors::IsInvalidArgument` if needed.
         *   In the last resort, use
             e.g.`static_cast<tensorflow::errors::Code>(error::Code::INVALID_ARGUMENT)`
             or `static_cast<int>(code)` for comparisons.
    *   `tensorflow::StatusOr` will also become in the future an alias to
     `absl::StatusOr`, so use `StatusOr::value` instead of
     `StatusOr::ConsumeValueOrDie`.
    
    Major Features and Improvements
    
    *   `tf.lite`:
    
     *   New operations supported:
         *   tflite SelectV2 now supports 5D.
         *   `tf.einsum` is supported with multiple unknown shapes.
         *   `tf.unsortedsegmentprod` op is supported.
         *   `tf.unsortedsegmentmax` op is supported.
         *   `tf.unsortedsegmentsum` op is supported.
     *   Updates to existing operations:
         *   `tfl.scatter_nd` now supports I1 for the `update` arg.
     *   Upgrade Flatbuffers v2.0.5 from v1.12.0
    
    *   `tf.keras`:
    
     *   `EinsumDense` layer is moved from experimental to core. Its import path
         is moved from `tf.keras.layers.experimental.EinsumDense` to
         `tf.keras.layers.EinsumDense`.
     *   Added `tf.keras.utils.audio_dataset_from_directory` utility to easily
         generate audio classification datasets from directories of `.wav` files.
     *   Added `subset="both"` support in
         `tf.keras.utils.image_dataset_from_directory`,`tf.keras.utils.text_dataset_from_directory`,
         and `audio_dataset_from_directory`, to be used with the
         `validation_split` argument, for returning both dataset splits at once,
         as a tuple.
     *   Added `tf.keras.utils.split_dataset` utility to split a `Dataset` object
         or a list/tuple of arrays into two `Dataset` objects (e.g. train/test).
     *   Added step granularity to `BackupAndRestore` callback for handling
         distributed training failures & restarts. The training state can now be
         restored at the exact epoch and step at which it was previously saved
         before failing.
     *   Added
         [`tf.keras.dtensor.experimental.optimizers.AdamW`](https://www.tensorflow.org/api_docs/python/tf/keras/dtensor/experimental/optimizers/AdamW).
         This optimizer is similar to the existing
         [`keras.optimizers.experimental.AdamW`](https://www.tensorflow.org/api_docs/python/tf/keras/optimizers/experimental/AdamW),
         and works in the DTensor training use case.
     *   Improved masking support for
         [`tf.keras.layers.MultiHeadAttention`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/MultiHeadAttention).
         *   Implicit masks for `query`, `key` and `value` inputs will
             automatically be used to compute a correct attention mask for the
             layer. These padding masks will be combined with any
             `attention_mask` passed in directly when calling the layer. This can
             be used with
             [`tf.keras.layers.Embedding`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Embedding)
             with `mask_zero=True` to automatically infer a correct padding mask.
         *   Added a `use_causal_mask` call time argument to the layer. Passing
             `use_causal_mask=True` will compute a causal attention mask, and
             optionally combine it with any `attention_mask` passed in directly
             when calling the layer.
     *   Added `ignore_class` argument in the loss
         `SparseCategoricalCrossentropy` and metrics `IoU` and `MeanIoU`, to
         specify a class index to be ignored during loss/metric computation (e.g.
         a background/void class).
     *   Added
         [`tf.keras.models.experimental.SharpnessAwareMinimization`](https://www.tensorflow.org/api_docs/python/tf/keras/models/experimental/SharpnessAwareMinimization).
         This class implements the sharpness-aware minimization technique, which
         boosts model performance on various tasks, e.g., ResNet on image
         classification.
    
    *   `tf.data`:
    
     *   Added support for cross-trainer data caching in tf.data service. This
         saves computation resources when concurrent training jobs train from the
         same dataset. See
         (https://www.tensorflow.org/api_docs/python/tf/data/experimental/service#sharing_tfdata_service_with_concurrent_trainers)
         for more details.
     *   Added `dataset_id` to `tf.data.experimental.service.register_dataset`.
         If provided, `tf.data` service will use the provided ID for the dataset.
         If the dataset ID already exists, no new dataset will be registered.
         This is useful if multiple training jobs need to use the same dataset
         for training. In this case, users should call `register_dataset` with
         the same `dataset_id`.
     *   Added a new field, `inject_prefetch`, to
         `tf.data.experimental.OptimizationOptions`. If it is set to
         `True`,`tf.data` will now automatically add a `prefetch` transformation
         to datasets that end in synchronous transformations. This enables data
         generation to be overlapped with data consumption. This may cause a
         small increase in memory usage due to buffering. To enable this
         behavior, set `inject_prefetch=True` in
         `tf.data.experimental.OptimizationOptions`.
     *   Added a new value to `tf.data.Options.autotune.autotune_algorithm`:
         `STAGE_BASED`. If the autotune algorithm is set to `STAGE_BASED`, then
         it runs a new algorithm that can get the same performance with lower
         CPU/memory usage.
     *   Added
         [`tf.data.experimental.from_list`](https://www.tensorflow.org/api_docs/python/tf/data/experimental/from_list),
         a new API for creating `Dataset`s from lists of elements.
     *   Graduated experimental APIs:
         *   [`tf.data.Dataset.counter`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset/#counter),
             which creates `Dataset`s of indefinite sequences of numbers.
         *   [`tf.data.Dataset.ignore_errors`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset/#ignore_errors),
             which drops erroneous elements from `Dataset`s.
     *   Added
         [`tf.data.Dataset.rebatch`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#rebatch),
         a new API for rebatching the elements of a dataset.
    
    *   `tf.distribute`:
    
     *   Added
         [`tf.distribute.experimental.PreemptionCheckpointHandler`](https://www.tensorflow.org/api_docs/python/tf/distribute/experimental/PreemptionCheckpointHandler)
         to handle worker preemption/maintenance and cluster-wise consistent
         error reporting for `tf.distribute.MultiWorkerMirroredStrategy`.
         Specifically, for the type of interruption with advance notice, it
         automatically saves a checkpoint, exits the program without raising an
         unrecoverable error, and restores the progress when training restarts.
    
    *   `tf.math`:
    
     *   Added `tf.math.approx_max_k` and `tf.math.approx_min_k` which are the
         optimized alternatives to `tf.math.top_k` on TPU. The performance
         difference ranges from 8 to 100 times depending on the size of k. When
         running on CPU and GPU, a non-optimized XLA kernel is used.
    
    *   `tf.train`:
    
     *   Added `tf.train.TrackableView` which allows users to inspect the
         TensorFlow Trackable object (e.g. `tf.Module`, Keras Layers and models).
    
    *   `tf.vectorized_map`:
    
     *   Added an optional parameter: `warn`. This parameter controls whether or
         not warnings will be printed when operations in the provided `fn` fall
         back to a while loop.
    
    *   XLA:
    
     *   `tf.distribute.MultiWorkerMirroredStrategy` is now compilable with XLA.
     *   [Compute Library for the Arm® Architecture (ACL)](https://github.com/ARM-software/ComputeLibrary)
         is supported for aarch64 CPU XLA runtime
    
    *   CPU performance optimizations:
    
     *   **x86 CPUs**:
         [oneDNN](https://github.com/tensorflow/community/blob/master/rfcs/20210930-enable-onednn-ops.md)
         bfloat16 auto-mixed precision grappler graph optimization pass has been
         renamed from `auto_mixed_precision_mkl` to
         `auto_mixed_precision_onednn_bfloat16`. See example usage
         [here](https://www.intel.com/content/www/us/en/developer/articles/guide/getting-started-with-automixedprecisionmkl.html).
     *   **aarch64 CPUs:** Experimental performance optimizations from
         [Compute Library for the Arm® Architecture (ACL)](https://github.com/ARM-software/ComputeLibrary)
         are available through oneDNN in the default Linux aarch64 package (`pip
         install tensorflow`).
         *   The optimizations are disabled by default.
         *   Set the environment variable `TF_ENABLE_ONEDNN_OPTS=1` to enable the
             optimizations. Setting the variable to 0 or unsetting it will
             disable the optimizations.
         *   These optimizations can yield slightly different numerical results
             from when they are off due to floating-point round-off errors from
             different computation approaches and orders.
         *   To verify that the optimizations are on, look for a message with
             "*oneDNN custom operations are on*" in the log. If the exact phrase
             is not there, it means they are off.
    
    Bug Fixes and Other Changes
    
    *   New argument `experimental_device_ordinal` in `LogicalDeviceConfiguration`
     to control the order of logical devices (GPU only).
    
    *   `tf.keras`:
    
     *   Changed the TensorBoard tag names produced by the
         `tf.keras.callbacks.TensorBoard` callback, so that summaries logged
         automatically for model weights now include either a `/histogram` or
         `/image` suffix in their tag names, in order to prevent tag name
         collisions across summary types.
    
    *   When running on GPU (with cuDNN version 7.6.3 or
     later),`tf.nn.depthwise_conv2d` backprop to `filter` (and therefore also
     `tf.keras.layers.DepthwiseConv2D`) now operate deterministically (and
     `tf.errors.UnimplementedError` is no longer thrown) when op-determinism has
     been enabled via `tf.config.experimental.enable_op_determinism`. This closes
     issue [47174](https://github.com/tensorflow/tensorflow/issues/47174).
    
    *   `tf.random`
    
     *   Added `tf.random.experimental.stateless_shuffle`, a stateless version of
         `tf.random.shuffle`.
    
    Security
    
    *   Fixes a `CHECK` failure in tf.reshape caused by overflows ([CVE-2022-35934](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35934))
    *   Fixes a `CHECK` failure in `SobolSample` caused by missing validation ([CVE-2022-35935](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35935))
    *   Fixes an OOB read in `Gather_nd` op in TF Lite ([CVE-2022-35937](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35937))
    *   Fixes a `CHECK` failure in `TensorListReserve` caused by missing validation ([CVE-2022-35960](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35960))
    *   Fixes an OOB write in `Scatter_nd` op in TF Lite ([CVE-2022-35939](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35939))
    *   Fixes an integer overflow in `RaggedRangeOp` ([CVE-2022-35940](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35940))
    *   Fixes a `CHECK` failure in `AvgPoolOp` ([CVE-2022-35941](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35941))
    *   Fixes a `CHECK` failures in `UnbatchGradOp` ([CVE-2022-35952](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35952))
    *   Fixes a segfault TFLite converter on per-channel quantized transposed convolutions ([CVE-2022-36027](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-36027))
    *   Fixes a `CHECK` failures in `AvgPool3DGrad` ([CVE-2022-35959](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35959))
    *   Fixes a `CHECK` failures in `FractionalAvgPoolGrad` ([CVE-2022-35963](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35963))
    *   Fixes a segfault in `BlockLSTMGradV2` ([CVE-2022-35964](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35964))
    *   Fixes a segfault in `LowerBound` and `UpperBound` ([CVE-2022-35965](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35965))
    *   Fixes a segfault in `QuantizedAvgPool` ([CVE-2022-35966](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35966))
    *   Fixes a segfault in `QuantizedAdd` ([CVE-2022-35967](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35967))
    *   Fixes a `CHECK` fail in `AvgPoolGrad` ([CVE-2022-35968](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35968))
    *   Fixes a `CHECK` fail in `Conv2DBackpropInput` ([CVE-2022-35969](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35969))
    *   Fixes a segfault in `QuantizedInstanceNorm` ([CVE-2022-35970](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35970))
    *   Fixes a `CHECK` fail in `FakeQuantWithMinMaxVars` ([CVE-2022-35971](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35971))
    *   Fixes a segfault in `Requantize` ([CVE-2022-36017](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-36017))
    *   Fixes a segfault in `QuantizedBiasAdd` ([CVE-2022-35972](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35972))
    *   Fixes a `CHECK` fail in `FakeQuantWithMinMaxVarsPerChannel` ([CVE-2022-36019](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-36019))
    *   Fixes a segfault in `QuantizedMatMul` ([CVE-2022-35973](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35973))
    *   Fixes a segfault in `QuantizeDownAndShrinkRange` ([CVE-2022-35974](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35974))
    *   Fixes segfaults in `QuantizedRelu` and `QuantizedRelu6` ([CVE-2022-35979](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35979))
    *   Fixes a `CHECK` fail in `FractionalMaxPoolGrad` ([CVE-2022-35981](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35981))
    *   Fixes a `CHECK` fail in `RaggedTensorToVariant` ([CVE-2022-36018](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-36018))
    *   Fixes a `CHECK` fail in `QuantizeAndDequantizeV3` ([CVE-2022-36026](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-36026))
    *   Fixes a segfault in `SparseBincount` ([CVE-2022-35982](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35982))
    *   Fixes a `CHECK` fail in `Save` and `SaveSlices` ([CVE-2022-35983](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35983))
    *   Fixes a `CHECK` fail in `ParameterizedTruncatedNormal` ([CVE-2022-35984](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35984))
    *   Fixes a `CHECK` fail in `LRNGrad` ([CVE-2022-35985](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35985))
    *   Fixes a segfault in `RaggedBincount` ([CVE-2022-35986](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35986))
    *   Fixes a `CHECK` fail in `DenseBincount` ([CVE-2022-35987](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35987))
    *   Fixes a `CHECK` fail in `tf.linalg.matrix_rank` ([CVE-2022-35988](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35988))
    *   Fixes a `CHECK` fail in `MaxPool` ([CVE-2022-35989](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35989))
    *   Fixes a `CHECK` fail in `Conv2DBackpropInput` ([CVE-2022-35999](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35999))
    *   Fixes a `CHECK` fail in `EmptyTensorList` ([CVE-2022-35998](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35998))
    *   Fixes a `CHECK` fail in `tf.sparse.cross` ([CVE-2022-35997](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35997))
    *   Fixes a floating point exception in `Conv2D` ([CVE-2022-35996](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35996))
    *   Fixes a `CHECK` fail in `AudioSummaryV2` ([CVE-2022-35995](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35995))
    *   Fixes a `CHECK` fail in `CollectiveGather` ([CVE-2022-35994](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35994))
    *   Fixes a `CHECK` fail in `SetSize` ([CVE-2022-35993](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35993))
    *   Fixes a `CHECK` fail in `TensorListFromTensor` ([CVE-2022-35992](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35992))
    *   Fixes a `CHECK` fail in `TensorListScatter` and `TensorListScatterV2` ([CVE-2022-35991](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35991))
    *   Fixes a `CHECK` fail in `FakeQuantWithMinMaxVarsPerChannelGradient` ([CVE-2022-35990](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35990))
    *   Fixes a `CHECK` fail in `FakeQuantWithMinMaxVarsGradient` ([CVE-2022-36005](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-36005))
    *   Fixes a `CHECK` fail in `tf.random.gamma` ([CVE-2022-36004](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-36004))
    *   Fixes a `CHECK` fail in `RandomPoissonV2` ([CVE-2022-36003](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-36003))
    *   Fixes a `CHECK` fail in `Unbatch` ([CVE-2022-36002](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-36002))
    *   Fixes a `CHECK` fail in `DrawBoundingBoxes` ([CVE-2022-36001](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-36001))
    *   Fixes a `CHECK` fail in `Eig` ([CVE-2022-36000](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-36000))
    *   Fixes a null dereference on MLIR on empty function attributes ([CVE-2022-36011](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-36011))
    *   Fixes an assertion failure on MLIR empty edge names ([CVE-2022-36012](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-36012))
    *   Fixes a null-dereference in `mlir::tfg::GraphDefImporter::ConvertNodeDef` ([CVE-2022-36013](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-36013))
    *   Fixes a null-dereference in `mlir::tfg::TFOp::nameAttr` ([CVE-2022-36014](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-36014))
    *   Fixes an integer overflow in math ops ([CVE-2022-36015](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-36015))
    *   Fixes a `CHECK`-fail in `tensorflow::full_type::SubstituteFromAttrs` ([CVE-2022-36016](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-36016))
    *   Fixes an OOB read in `Gather_nd` op in TF Lite Micro ([CVE-2022-35938](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35938))
    
    Thanks to our Contributors
    
    This release contains contributions from many people at Google, as well as:
    
    Abolfazl Shahbazi, Adam Lanicek, Amin Benarieb, andreii, Andrew Fitzgibbon, Andrew Goodbody, angerson, Ashiq Imran, Aurélien Geron, Banikumar Maiti (Intel Aipg), Ben Barsdell, Ben Mares, bhack, Bhavani Subramanian, Bill Schnurr, Byungsoo Oh, Chandra Sr Potula, Chengji Yao, Chris Carpita, Christopher Bate, chunduriv, Cliff Woolley, Cliffs Dover, Cloud Han, Code-Review-Doctor, DEKHTIARJonathan, Deven Desai, Djacon, Duncan Riach, fedotoff, fo40225, Frederic Bastien, gadagashwini, Gauri1 Deshpande, guozhong.zhuang, Hui Peng, James Gerity, Jason Furmanek, Jonathan Dekhtiar, Jueon Park, Kaixi Hou, Kanvi Khanna, Keith Smiley, Koan-Sin Tan, Kulin Seth, kushanam, Learning-To-Play, Li-Wen Chang, lipracer, liuyuanqiang, Louis Sugy, Lucas David, Lukas Geiger, Mahmoud Abuzaina, Marius Brehler, Maxiwell S. Garcia, mdfaijul, Meenakshi Venkataraman, Michal Szutenberg, Michele Di Giorgio, Mickaël Salamin, Nathan John Sircombe, Nathan Luehr, Neil Girdhar, Nils Reichardt, Nishidha Panpaliya, Nobuo Tsukamoto, Om Thakkar, Patrice Vignola, Philipp Hack, Pooya Jannaty, Prianka Liz Kariat, pshiko, Rajeshwar Reddy T, rdl4199, Rohit Santhanam, Rsanthanam-Amd, Sachin Muradi, Saoirse Stewart, Serge Panev, Shu Wang, Srinivasan Narayanamoorthy, Stella Stamenova, Stephan Hartmann, Sunita Nadampalli, synandi, Tamas Bela Feher, Tao Xu, Thibaut Goetghebuer-Planchon, Trevor Morris, Xiaoming (Jason) Cui, Yimei Sun, Yong Tang, Yuanqiang Liu, Yulv-Git, Zhoulong Jiang, ZihengJiang
    

    2.11.0

    Breaking Changes
    *   `tf.keras.optimizers.Optimizer` now points to the new Keras optimizer, and
     old optimizers have moved to the `tf.keras.optimizers.legacy` namespace.
     If you find your workflow failing due to this change,
     you may be facing one of the following issues:
    
     *   **Checkpoint loading failure.** The new optimizer handles optimizer
     state differently from the old optimizer, which simplifies the logic of
         checkpoint saving/loading, but at the cost of breaking checkpoint
         backward compatibility in some cases. If you want to keep using an old
         checkpoint, please change your optimizer to
         `tf.keras.optimizers.legacy.XXX` (e.g.
         `tf.keras.optimizers.legacy.Adam`).
     *   **TF1 compatibility.** The new optimizer does not support TF1 any more,
         so please use the legacy optimizer `tf.keras.optimizer.legacy.XXX`.
         We highly recommend to migrate your workflow to TF2 for stable
         support and new features.
     *   **API not found.** The new optimizer has a different set of public APIs
         from the old optimizer. These API changes are mostly related to
         getting rid of slot variables and TF1 support. Please check the API
         documentation to find alternatives to the missing API. If you must
         call the deprecated API, please change your optimizer to the legacy
         optimizer.
     *   **Learning rate schedule access.** When using a `LearningRateSchedule`,
         The new optimizer's `learning_rate` property returns the
         current learning rate value instead of a `LearningRateSchedule` object
         as before. If you need to access the `LearningRateSchedule` object,
         please use `optimizer._learning_rate`.
     *   **You implemented a custom optimizer based on the old optimizer.**
         Please set your optimizer to subclass
         `tf.keras.optimizer.legacy.XXX`. If you want to migrate to the new
         optimizer and find it does not support your optimizer, please file
         an issue in the Keras GitHub repo.
     *   **Error such as `Cannot recognize variable...`.** The new optimizer
         requires all optimizer variables to be created at the first
         `apply_gradients()` or `minimize()` call. If your workflow calls
         optimizer to update different parts of model in multiple stages,
         please call `optimizer.build(model.trainable_variables)` before the
         training loop.
     *   **Performance regression on `ParameterServerStrategy`.** This could be
         significant if you have many PS servers. We are aware of this issue and
         working on fixes, for now we suggest using the legacy optimizers when
         using `ParameterServerStrategy`.
     *   **Timeout or performance loss.** We don't anticipate this to happen, but
         if you see such issues, please use the legacy optimizer, and file
         an issue in the Keras GitHub repo.
    
     The old Keras optimizer will never be deleted, but will not see any
     new feature additions.
     New optimizers (e.g., `Adafactor`) will
     only be implemented based on `tf.keras.optimizers.Optimizer`, the new
     base class.
    
    Major Features and Improvements
    
    *   `tf.lite`:
    
     *   New operations supported:
           * tf.unsortedsegmentmin op is supported.
           * tf.atan2 op is supported.
           * tf.sign op is supported.
     *   Updates to existing operations:
           * tfl.mul now supports complex32 inputs.
    
    *   `tf.experimental.StructuredTensor`
    
     *   Introduced `tf.experimental.StructuredTensor`, which provides a flexible
         and Tensorflow-native way to encode structured data such as protocol
         buffers or pandas dataframes.
    
    *   `tf.keras`:
    
     *   Added method `get_metrics_result()` to `tf.keras.models.Model`.
         *   Returns the current metrics values of the model as a dict.
     *   Added group normalization layer `tf.keras.layers.GroupNormalization`.
     *   Added weight decay support for all Keras optimizers.
     *   Added Adafactor optimizer `tf.keras.optimizers.Adafactor`.
     *   Added `warmstart_embedding_matrix` to `tf.keras.utils`.
         This utility can be used to warmstart an embeddings matrix so you
         reuse previously-learned word embeddings when working with a new set
         of words which may include previously unseen words (the embedding
         vectors for unseen words will be randomly initialized).
    
    *   `tf.Variable`:
    
      *   Added `CompositeTensor` as a base class to `ResourceVariable`. This
          allows `tf.Variable`s to be nested in `tf.experimental.ExtensionType`s.
      *   Added a new constructor argument `experimental_enable_variable_lifting`
          to `tf.Variable`, defaulting to `True`. When it's `False`, the variable
          won't be lifted out of `tf.function`, so it can be used as a
          `tf.function`-local variable: during each execution of the
          `tf.function`, the variable is created and then disposed, similar
          to a local (i.e. stack-allocated) variable in C/C++. Currently
          `experimental_enable_variable_lifting=False` only works on non-XLA
          devices (e.g. under `tf.function(jit_compile=False)`); see the sketch
          after this list.
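
     A minimal sketch of the documented behavior (assuming TF 2.11; the
     function name `step` is illustrative):

     import tensorflow as tf

     @tf.function(jit_compile=False)  # lifting=False currently needs non-XLA
     def step():
         # With lifting disabled, `v` is created on each call and disposed
         # afterwards, like a stack-allocated local in C/C++.
         v = tf.Variable(1.0, experimental_enable_variable_lifting=False)
         v.assign_add(1.0)
         return v.read_value()

     print(step())  # 2.0 on every call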
    
    *   TF SavedModel:
     *   Added `fingerprint.pb` to the SavedModel directory. The `fingerprint.pb`
         file is a protobuf containing the "fingerprint" of the SavedModel. See
         the [RFC](https://github.com/tensorflow/community/pull/415) for more
         details regarding its design and properties.
    
    *   `tf.data`:
      *   Graduated experimental APIs (see the sketch after this list):
         * [`tf.data.Dataset.ragged_batch`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset/#ragged_batch), which batches elements of `tf.data.Dataset`s into `tf.RaggedTensor`s.
         * [`tf.data.Dataset.sparse_batch`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset/#sparse_batch), which batches elements of `tf.data.Dataset`s into `tf.sparse.SparseTensor`s.
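
     A short usage sketch for the newly graduated `ragged_batch` (assuming
     TF 2.11; the toy dataset is illustrative):

     import tensorflow as tf

     # Elements have different shapes, so a plain `batch()` would fail;
     # `ragged_batch()` packs them into tf.RaggedTensor batches instead.
     ds = tf.data.Dataset.range(4).map(lambda x: tf.range(x))
     for batch in ds.ragged_batch(2):
         print(batch)  # <tf.RaggedTensor [[], [0]]>, then [[0, 1], [0, 1, 2]]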
    
    Bug Fixes and Other Changes
    
    *   `tf.image`
      *   Added an optional parameter `return_index_map` to `tf.image.ssim` which
          causes the returned value to be the local SSIM map instead of the global
          mean (see the sketch below).
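
     A minimal sketch of the new parameter (assuming TF 2.11; the random
     images are illustrative):

     import tensorflow as tf

     a = tf.random.uniform((1, 64, 64, 3))
     b = tf.random.uniform((1, 64, 64, 3))

     mean_ssim = tf.image.ssim(a, b, max_val=1.0)  # global mean, the default
     ssim_map = tf.image.ssim(a, b, max_val=1.0, return_index_map=True)  # local map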
    
    *   TF Core:
    
     *   `tf.custom_gradient` can now be applied to functions that accept
         "composite" tensors, such as `tf.RaggedTensor`, as inputs.
     *   Fix device placement issues related to datasets with ragged tensors of
         strings (i.e. variant encoded data with types not supported on GPU).
      *   `experimental_follow_type_hints` for `tf.function` has been deprecated.
          Please use `input_signature` or `reduce_retracing` to minimize retracing.
    
    *   `tf.SparseTensor`:
      *   Introduced `set_shape`, which sets the static dense shape of the sparse tensor and has the same semantics as `tf.Tensor.set_shape` (see the sketch below).
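
     A minimal sketch (assuming TF 2.11):

     import tensorflow as tf

     st = tf.sparse.SparseTensor(indices=[[0, 0], [1, 2]],
                                 values=[1.0, 2.0],
                                 dense_shape=[2, 3])
     st.set_shape([2, 3])  # pins the static dense shape, like tf.Tensor.set_shape
     print(st.shape)       # (2, 3)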
    
    Security
    
    * TF is currently using giflib 5.2.1 which has [CVE-2022-28506](https://nvd.nist.gov/vuln/detail/CVE-2022-28506). TF is not affected by the CVE as it does not use `DumpScreen2RGB` at all.
    *   Fixes an OOB seg fault in `DynamicStitch` due to missing validation ([CVE-2022-41883](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41883))
    *   Fixes an overflow in `tf.keras.losses.poisson` ([CVE-2022-41887](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41887))
    *   Fixes a heap OOB failure in `ThreadUnsafeUnigramCandidateSampler` caused by missing validation ([CVE-2022-41880](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41880))
    *   Fixes a segfault in `ndarray_tensor_bridge` ([CVE-2022-41884](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41884))
    *   Fixes an overflow in `FusedResizeAndPadConv2D` ([CVE-2022-41885](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41885))
     *   Fixes an overflow in `ImageProjectiveTransformV2` ([CVE-2022-41886](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41886))
    *   Fixes an FPE in `tf.image.generate_bounding_box_proposals` on GPU ([CVE-2022-41888](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41888))
    *   Fixes a segfault in `pywrap_tfe_src` caused by invalid attributes ([CVE-2022-41889](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41889))
    *   Fixes a `CHECK` fail in `BCast` ([CVE-2022-41890](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41890))
    *   Fixes a segfault in `TensorListConcat` ([CVE-2022-41891](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41891))
    *   Fixes a `CHECK_EQ` fail in `TensorListResize` ([CVE-2022-41893](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41893))
    *   Fixes an overflow in `CONV_3D_TRANSPOSE` on TFLite ([CVE-2022-41894](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41894))
    *   Fixes a heap OOB in `MirrorPadGrad` ([CVE-2022-41895](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41895))
    *   Fixes a crash in `Mfcc` ([CVE-2022-41896](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41896))
    *   Fixes a heap OOB in `FractionalMaxPoolGrad` ([CVE-2022-41897](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41897))
    *   Fixes a `CHECK` fail in `SparseFillEmptyRowsGrad` ([CVE-2022-41898](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41898))
    *   Fixes a `CHECK` fail in `SdcaOptimizer` ([CVE-2022-41899](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41899))
     *   Fixes a heap OOB in `FractionalAvgPool` and `FractionalMaxPool` ([CVE-2022-41900](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41900))
    *   Fixes a `CHECK_EQ` in `SparseMatrixNNZ` ([CVE-2022-41901](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41901))
    *   Fixes an OOB write in grappler ([CVE-2022-41902](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41902))
     *   Fixes an overflow in `ResizeNearestNeighborGrad` ([CVE-2022-41907](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41907))
    *   Fixes a `CHECK` fail in `PyFunc` ([CVE-2022-41908](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41908))
    *   Fixes a segfault in `CompositeTensorVariantToComponents` ([CVE-2022-41909](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41909))
     *   Fixes an invalid char-to-bool conversion when printing a tensor ([CVE-2022-41911](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41911))
    *   Fixes a heap overflow in `QuantizeAndDequantizeV2` ([CVE-2022-41910](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41910))
    *   Fixes a `CHECK` failure in `SobolSample` via missing validation ([CVE-2022-35935](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35935))
    *   Fixes a `CHECK` fail in `TensorListScatter` and `TensorListScatterV2` in eager mode ([CVE-2022-35935](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35935))
    
    Thanks to our Contributors
    
    This release contains contributions from many people at Google, as well as:
    
    103yiran, 8bitmp3, Aakar Dwivedi, Alexander Grund, alif_elham, Aman Agarwal,
    amoitra, Andrei Ivanov, andreii, Andrew Goodbody, angerson, Ashay Rane,
    Azeem Shaikh, Ben Barsdell, bhack, Bhavani Subramanian, Cedric Nugteren,
    Chandra Kumar Ramasamy, Christopher Bate, CohenAriel, Cotarou, cramasam,
    Enrico Minack, Francisco Unda, Frederic Bastien, gadagashwini, Gauri1 Deshpande,
    george, Jake, Jeff, Jerry Ge, Jingxuan He, Jojimon Varghese, Jonathan Dekhtiar,
    Kaixi Hou, Kanvi Khanna, kcoul, Keith Smiley, Kevin Hu, Kun Lu, kushanam,
    Lianmin Zheng, liuyuanqiang, Louis Sugy, Mahmoud Abuzaina, Marius Brehler,
    mdfaijul, Meenakshi Venkataraman, Milos Puzovic, mohantym, Namrata-Ibm,
    Nathan John Sircombe, Nathan Luehr, Olaf Lipinski, Om Thakkar, Osman F Bayram,
    Patrice Vignola, Pavani Majety, Philipp Hack, Prianka Liz Kariat, Rahul Batra,
    RajeshT, Renato Golin, riestere, Roger Iyengar, Rohit Santhanam, Rsanthanam-Amd,
    Sadeed Pv, Samuel Marks, Shimokawa, Naoaki, Siddhesh Kothadi, Simengliu-Nv,
    Sindre Seppola, snadampal, Srinivasan Narayanamoorthy, sushreebarsa,
    syedshahbaaz, Tamas Bela Feher, Tatwai Chong, Thibaut Goetghebuer-Planchon,
    tilakrayal, Tom Anderson, Tomohiro Endo, Trevor Morris, vibhutisawant,
    Victor Zhang, Vremold, Xavier Bonaventura, Yanming Wang, Yasir Modak,
    Yimei Sun, Yong Tang, Yulv-Git, zhuoran.liu, zotanika
    

    2.10.1

    This release introduces several vulnerability fixes:
    
    *   Fixes an OOB seg fault in `DynamicStitch` due to missing validation ([CVE-2022-41883](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41883))
    *   Fixes an overflow in `tf.keras.losses.poisson` ([CVE-2022-41887](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41887))
    *   Fixes a heap OOB failure in `ThreadUnsafeUnigramCandidateSampler` caused by missing validation ([CVE-2022-41880](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41880))
    *   Fixes a segfault in `ndarray_tensor_bridge` ([CVE-2022-41884](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41884))
    *   Fixes an overflow in `FusedResizeAndPadConv2D` ([CVE-2022-41885](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41885))
     *   Fixes an overflow in `ImageProjectiveTransformV2` ([CVE-2022-41886](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41886))
    *   Fixes an FPE in `tf.image.generate_bounding_box_proposals` on GPU ([CVE-2022-41888](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41888))
    *   Fixes a segfault in `pywrap_tfe_src` caused by invalid attributes ([CVE-2022-41889](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41889))
    *   Fixes a `CHECK` fail in `BCast` ([CVE-2022-41890](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41890))
    *   Fixes a segfault in `TensorListConcat` ([CVE-2022-41891](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41891))
    *   Fixes a `CHECK_EQ` fail in `TensorListResize` ([CVE-2022-41893](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41893))
    *   Fixes an overflow in `CONV_3D_TRANSPOSE` on TFLite ([CVE-2022-41894](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41894))
    *   Fixes a heap OOB in `MirrorPadGrad` ([CVE-2022-41895](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41895))
    *   Fixes a crash in `Mfcc` ([CVE-2022-41896](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41896))
    *   Fixes a heap OOB in `FractionalMaxPoolGrad` ([CVE-2022-41897](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41897))
    *   Fixes a `CHECK` fail in `SparseFillEmptyRowsGrad` ([CVE-2022-41898](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41898))
    *   Fixes a `CHECK` fail in `SdcaOptimizer` ([CVE-2022-41899](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41899))
     *   Fixes a heap OOB in `FractionalAvgPool` and `FractionalMaxPool` ([CVE-2022-41900](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41900))
    *   Fixes a `CHECK_EQ` in `SparseMatrixNNZ` ([CVE-2022-41901](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41901))
    *   Fixes an OOB write in grappler ([CVE-2022-41902](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41902))
     *   Fixes an overflow in `ResizeNearestNeighborGrad` ([CVE-2022-41907](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41907))
    *   Fixes a `CHECK` fail in `PyFunc` ([CVE-2022-41908](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41908))
    *   Fixes a segfault in `CompositeTensorVariantToComponents` ([CVE-2022-41909](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41909))
     *   Fixes an invalid char-to-bool conversion when printing a tensor ([CVE-2022-41911](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41911))
    *   Fixes a heap overflow in `QuantizeAndDequantizeV2` ([CVE-2022-41910](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41910))
    *   Fixes a `CHECK` failure in `SobolSample` via missing validation ([CVE-2022-35935](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35935))
    *   Fixes a `CHECK` fail in `TensorListScatter` and `TensorListScatterV2` in eager mode ([CVE-2022-35935](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35935))
    

    2.10.0

    Breaking Changes
    
    *   Causal attention in `keras.layers.Attention` and
     `keras.layers.AdditiveAttention` is now specified in the `call()` method via
     the `use_causal_mask` argument (rather than in the constructor), for
     consistency with other layers.
    *   Some files in `tensorflow/python/training` have been moved to
     `tensorflow/python/tracking` and `tensorflow/python/checkpoint`. Please
     update your imports accordingly; the old files will be removed in Release
     2.11.
    

    2.9.3

    This release introduces several vulnerability fixes:
    
    *   Fixes an overflow in `tf.keras.losses.poisson` ([CVE-2022-41887](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41887))
    *   Fixes a heap OOB failure in `ThreadUnsafeUnigramCandidateSampler` caused by missing validation ([CVE-2022-41880](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41880))
    *   Fixes a segfault in `ndarray_tensor_bridge` ([CVE-2022-41884](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41884))
    *   Fixes an overflow in `FusedResizeAndPadConv2D` ([CVE-2022-41885](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41885))
     *   Fixes an overflow in `ImageProjectiveTransformV2` ([CVE-2022-41886](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41886))
    *   Fixes an FPE in `tf.image.generate_bounding_box_proposals` on GPU ([CVE-2022-41888](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41888))
    *   Fixes a segfault in `pywrap_tfe_src` caused by invalid attributes ([CVE-2022-41889](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41889))
    *   Fixes a `CHECK` fail in `BCast` ([CVE-2022-41890](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41890))
    *   Fixes a segfault in `TensorListConcat` ([CVE-2022-41891](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41891))
    *   Fixes a `CHECK_EQ` fail in `TensorListResize` ([CVE-2022-41893](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41893))
    *   Fixes an overflow in `CONV_3D_TRANSPOSE` on TFLite ([CVE-2022-41894](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41894))
    *   Fixes a heap OOB in `MirrorPadGrad` ([CVE-2022-41895](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41895))
    *   Fixes a crash in `Mfcc` ([CVE-2022-41896](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41896))
    *   Fixes a heap OOB in `FractionalMaxPoolGrad` ([CVE-2022-41897](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41897))
    *   Fixes a `CHECK` fail in `SparseFillEmptyRowsGrad` ([CVE-2022-41898](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41898))
    *   Fixes a `CHECK` fail in `SdcaOptimizer` ([CVE-2022-41899](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41899))
     *   Fixes a heap OOB in `FractionalAvgPool` and `FractionalMaxPool` ([CVE-2022-41900](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41900))
    *   Fixes a `CHECK_EQ` in `SparseMatrixNNZ` ([CVE-2022-41901](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41901))
    *   Fixes an OOB write in grappler ([CVE-2022-41902](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41902))
     *   Fixes an overflow in `ResizeNearestNeighborGrad` ([CVE-2022-41907](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41907))
    *   Fixes a `CHECK` fail in `PyFunc` ([CVE-2022-41908](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41908))
    *   Fixes a segfault in `CompositeTensorVariantToComponents` ([CVE-2022-41909](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41909))
     *   Fixes an invalid char-to-bool conversion when printing a tensor ([CVE-2022-41911](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41911))
    *   Fixes a heap overflow in `QuantizeAndDequantizeV2` ([CVE-2022-41910](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41910))
    *   Fixes a `CHECK` failure in `SobolSample` via missing validation ([CVE-2022-35935](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35935))
    *   Fixes a `CHECK` fail in `TensorListScatter` and `TensorListScatterV2` in eager mode ([CVE-2022-35935](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35935))
    

    2.9.2

     This release introduces several vulnerability fixes:
    
     *   Fixes a `CHECK` failure in `tf.reshape` caused by overflows ([CVE-2022-35934](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35934))
    *   Fixes a `CHECK` failure in `SobolSample` caused by missing validation ([CVE-2022-35935](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35935))
    *   Fixes an OOB read in `Gather_nd` op in TF Lite ([CVE-2022-35937](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35937))
    *   Fixes a `CHECK` failure in `TensorListReserve` caused by missing validation ([CVE-2022-35960](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35960))
    *   Fixes an OOB write in `Scatter_nd` op in TF Lite ([CVE-2022-35939](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35939))
    *   Fixes an integer overflow in `RaggedRangeOp` ([CVE-2022-35940](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35940))
    *   Fixes a `CHECK` failure in `AvgPoolOp` ([CVE-2022-35941](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35941))
     *   Fixes a `CHECK` failure in `UnbatchGradOp` ([CVE-2022-35952](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35952))
     *   Fixes a segfault in the TFLite converter on per-channel quantized transposed convolutions ([CVE-2022-36027](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-36027))
     *   Fixes a `CHECK` failure in `AvgPool3DGrad` ([CVE-2022-35959](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35959))
     *   Fixes a `CHECK` failure in `FractionalAvgPoolGrad` ([CVE-2022-35963](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35963))
    *   Fixes a segfault in `BlockLSTMGradV2` ([CVE-2022-35964](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35964))
    *   Fixes a segfault in `LowerBound` and `UpperBound` ([CVE-2022-35965](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35965))
    *   Fixes a segfault in `QuantizedAvgPool` ([CVE-2022-35966](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35966))
    *   Fixes a segfault in `QuantizedAdd` ([CVE-2022-35967](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35967))
    *   Fixes a `CHECK` fail in `AvgPoolGrad` ([CVE-2022-35968](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35968))
    *   Fixes a `CHECK` fail in `Conv2DBackpropInput` ([CVE-2022-35969](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35969))
    *   Fixes a segfault in `QuantizedInstanceNorm` ([CVE-2022-35970](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35970))
    *   Fixes a `CHECK` fail in `FakeQuantWithMinMaxVars` ([CVE-2022-35971](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35971))
    *   Fixes a segfault in `Requantize` ([CVE-2022-36017](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-36017))
    *   Fixes a segfault in `QuantizedBiasAdd` ([CVE-2022-35972](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35972))
    *   Fixes a `CHECK` fail in `FakeQuantWithMinMaxVarsPerChannel` ([CVE-2022-36019](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-36019))
    *   Fixes a segfault in `QuantizedMatMul` ([CVE-2022-35973](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35973))
    *   Fixes a segfault in `QuantizeDownAndShrinkRange` ([CVE-2022-35974](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35974))
    *   Fixes segfaults in `QuantizedRelu` and `QuantizedRelu6` ([CVE-2022-35979](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35979))
    *   Fixes a `CHECK` fail in `FractionalMaxPoolGrad` ([CVE-2022-35981](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35981))
    *   Fixes a `CHECK` fail in `RaggedTensorToVariant` ([CVE-2022-36018](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-36018))
    *   Fixes a `CHECK` fail in `QuantizeAndDequantizeV3` ([CVE-2022-36026](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-36026))
    *   Fixes a segfault in `SparseBincount` ([CVE-2022-35982](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35982))
    *   Fixes a `CHECK` fail in `Save` and `SaveSlices` ([CVE-2022-35983](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35983))
    *   Fixes a `CHECK` fail in `ParameterizedTruncatedNormal` ([CVE-2022-35984](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35984))
    *   Fixes a `CHECK` fail in `LRNGrad` ([CVE-2022-35985](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35985))
    *   Fixes a segfault in `RaggedBincount` ([CVE-2022-35986](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35986))
    *   Fixes a `CHECK` fail in `DenseBincount` ([CVE-2022-35987](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35987))
    *   Fixes a `CHECK` fail in `tf.linalg.matrix_rank` ([CVE-2022-35988](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35988))
    *   Fixes a `CHECK` fail in `MaxPool` ([CVE-2022-35989](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35989))
    *   Fixes a `CHECK` fail in `Conv2DBackpropInput` ([CVE-2022-35999](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35999))
    *   Fixes a `CHECK` fail in `EmptyTensorList` ([CVE-2022-35998](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35998))
    *   Fixes a `CHECK` fail in `tf.sparse.cross` ([CVE-2022-35997](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35997))
    *   Fixes a floating point exception in `Conv2D` ([CVE-2022-35996](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35996))
    *   Fixes a `CHECK` fail in `AudioSummaryV2` ([CVE-2022-35995](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35995))
    *   Fixes a `CHECK` fail in `CollectiveGather` ([CVE-2022-35994](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35994))
    *   Fixes a `CHECK` fail in `SetSize` ([CVE-2022-35993](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35993))
    *   Fixes a `CHECK` fail in `TensorListFromTensor` ([CVE-2022-35992](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35992))
    *   Fixes a `CHECK` fail in `TensorListScatter` and `TensorListScatterV2` ([CVE-2022-35991](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35991))
    *   Fixes a `CHECK` fail in `FakeQuantWithMinMaxVarsPerChannelGradient` ([CVE-2022-35990](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35990))
    *   Fixes a `CHECK` fail in `FakeQuantWithMinMaxVarsGradient` ([CVE-2022-36005](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-36005))
    *   Fixes a `CHECK` fail in `tf.random.gamma` ([CVE-2022-36004](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-36004))
    *   Fixes a `CHECK` fail in `RandomPoissonV2` ([CVE-2022-36003](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-36003))
    *   Fixes a `CHECK` fail in `Unbatch` ([CVE-2022-36002](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-36002))
    *   Fixes a `CHECK` fail in `DrawBoundingBoxes` ([CVE-2022-36001](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-36001))
    *   Fixes a `CHECK` fail in `Eig` ([CVE-2022-36000](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-36000))
    *   Fixes a null dereference on MLIR on empty function attributes ([CVE-2022-36011](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-36011))
    *   Fixes an assertion failure on MLIR empty edge names ([CVE-2022-36012](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-36012))
    *   Fixes a null-dereference in `mlir::tfg::GraphDefImporter::ConvertNodeDef` ([CVE-2022-36013](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-36013))
    *   Fixes a null-dereference in `mlir::tfg::TFOp::nameAttr` ([CVE-2022-36014](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-36014))
    *   Fixes an integer overflow in math ops ([CVE-2022-36015](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-36015))
    *   Fixes a `CHECK`-fail in `tensorflow::full_type::SubstituteFromAttrs` ([CVE-2022-36016](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-36016))
    *   Fixes an OOB read in `Gather_nd` op in TF Lite Micro ([CVE-2022-35938](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35938))
    
    Links
    • PyPI: https://pypi.org/project/tensorflow
    • Changelog: https://pyup.io/changelogs/tensorflow/
    • Repo: https://github.com/tensorflow/tensorflow/tags
    • Homepage: https://www.tensorflow.org/

    Update keras from 2.6.0 to 2.11.0.

    Changelog

    2.11.0

    Please see the release history at https://github.com/tensorflow/tensorflow/releases/tag/v2.11.0 for more details.
    
    **Full Changelog**: https://github.com/keras-team/keras/compare/v2.10.0...v2.11.0
    

    2.11.0rc3

    What's Changed
    * Cherrypick pull request 17225 from lgeiger:fix-mixed-precision-ema by qlzh727 in https://github.com/keras-team/keras/pull/17226
    
    
    **Full Changelog**: https://github.com/keras-team/keras/compare/v2.11.0-rc2...v2.11.0-rc3
    

    2.11.0rc2

    What's Changed
    * Cherrypick for cl/482011499: Throw error on deprecated fields. by qlzh727 in https://github.com/keras-team/keras/pull/17179
    
    
    **Full Changelog**: https://github.com/keras-team/keras/compare/v2.11.0-rc1...v2.11.0-rc2
    

    2.11.0rc1

    Please see the release history at https://github.com/tensorflow/tensorflow/releases/tag/v2.11.0-rc1 for more details.
    
    What's Changed
    * Fix TypeError positional argument when LossScalerOptimizer is used conjointly with tfa wrappers by lucasdavid in https://github.com/keras-team/keras/pull/16332
    * Add type check to axis by sachinprasadhs in https://github.com/keras-team/keras/pull/16208
    * minor documention fix by bmatschke in https://github.com/keras-team/keras/pull/16331
    * Fix typos in data_adapter.py by taegeonum in https://github.com/keras-team/keras/pull/16326
    * Add `exclude_from_weight_decay` to AdamW by markub3327 in https://github.com/keras-team/keras/pull/16274
    * Switching learning/brain dependency to OSS compatible test_util by copybara-service in https://github.com/keras-team/keras/pull/16362
    * Typo fix in LSTM docstring by peskaf in https://github.com/keras-team/keras/pull/16364
    * Copy loss and metric to prevent side effect by drauh in https://github.com/keras-team/keras/pull/16360
    * Denormalization layer by markub3327 in https://github.com/keras-team/keras/pull/16350
    * Fix `reset_states` not working when invoked within a `tf.function` in graph mode. by copybara-service in https://github.com/keras-team/keras/pull/16400
    * Reduce the complexity of the base layer by pulling out the logic related to handling call function args to a separate class. by copybara-service in https://github.com/keras-team/keras/pull/16375
    * Add subset="both" functionality to {image|text}_dataset_from_directory() by Haaris-Rahman in https://github.com/keras-team/keras/pull/16413
    * Fix non-float32 efficientnet calls by hctomkins in https://github.com/keras-team/keras/pull/16402
    * Fix prediction with structured output by itmo153277 in https://github.com/keras-team/keras/pull/16408
    * Add reference to resource variables. by sachinprasadhs in https://github.com/keras-team/keras/pull/16409
    * added audio_dataset.py by hazemessamm in https://github.com/keras-team/keras/pull/16388
    * Fix Syntax error for combined_model.compile of WideDeepModel by gadagashwini in https://github.com/keras-team/keras/pull/16447
    * Missing `f` prefix on f-strings fix by code-review-doctor in https://github.com/keras-team/keras/pull/16459
    * Update CONTRIBUTING.md by rthadur in https://github.com/keras-team/keras/pull/15998
    * adds split_dataset utility  by prakashsellathurai in https://github.com/keras-team/keras/pull/16398
    * Support increasing batch size by markus-hinsche in https://github.com/keras-team/keras/pull/16337
    * Add ConvNeXt models by sayakpaul in https://github.com/keras-team/keras/pull/16421
    * Fix OrthogonalRegularizer to implement the (1,1) matrix norm by Kiwiakos in https://github.com/keras-team/keras/pull/16521
    * fix: weight keys so that imagenet init works by sayakpaul in https://github.com/keras-team/keras/pull/16528
    * Preprocess input correction by AdityaKane2001 in https://github.com/keras-team/keras/pull/16527
    * Fix typo in documentation by sushreebarsa in https://github.com/keras-team/keras/pull/16534
    * Update index_lookup.py by tilakrayal in https://github.com/keras-team/keras/pull/16460
    * update codespaces bazel install by haifeng-jin in https://github.com/keras-team/keras/pull/16575
    * reduce too long lines in engine/ by haifeng-jin in https://github.com/keras-team/keras/pull/16579
    * Fix typos by eltociear in https://github.com/keras-team/keras/pull/16568
    * Fix mixed precision serialization of group convs by lgeiger in https://github.com/keras-team/keras/pull/16571
    * reduce layers line-too-long by haifeng-jin in https://github.com/keras-team/keras/pull/16580
    * resolve line-too-long in root directory by haifeng-jin in https://github.com/keras-team/keras/pull/16584
    * resolve line-too-long in metrics by haifeng-jin in https://github.com/keras-team/keras/pull/16586
    * resolve line-too-long in optimizers by haifeng-jin in https://github.com/keras-team/keras/pull/16587
    * resolve line-too-long in distribute by haifeng-jin in https://github.com/keras-team/keras/pull/16594
    * resolve line-too-long in integration_test by haifeng-jin in https://github.com/keras-team/keras/pull/16599
    * resovle line-too-long in legacy-tf-layers by haifeng-jin in https://github.com/keras-team/keras/pull/16600
    * resolve line-too-long in initializers by haifeng-jin in https://github.com/keras-team/keras/pull/16598
    * resolve line-too-long in api by haifeng-jin in https://github.com/keras-team/keras/pull/16592
    * resolve line-too-long in benchmarks by haifeng-jin in https://github.com/keras-team/keras/pull/16593
    * resolve line-too-long in feature_column by haifeng-jin in https://github.com/keras-team/keras/pull/16597
    * resolve line-too-long in datasets by haifeng-jin in https://github.com/keras-team/keras/pull/16591
    * resolve line-too-long in dtensor by haifeng-jin in https://github.com/keras-team/keras/pull/16595
    * resolve line-too-long in estimator by haifeng-jin in https://github.com/keras-team/keras/pull/16596
    * resolve line-too-long in applications by haifeng-jin in https://github.com/keras-team/keras/pull/16590
    * resolve line-too-long in mixed_precision by haifeng-jin in https://github.com/keras-team/keras/pull/16605
    * resolve line-too-long in models by haifeng-jin in https://github.com/keras-team/keras/pull/16606
    * resolve line-too-long in premade_models by haifeng-jin in https://github.com/keras-team/keras/pull/16608
    * resolve line-too-long in tests by haifeng-jin in https://github.com/keras-team/keras/pull/16613
    * resolve line-too-long in testing_infra by haifeng-jin in https://github.com/keras-team/keras/pull/16612
    * resolve line-too-long in saving by haifeng-jin in https://github.com/keras-team/keras/pull/16611
    * resolve line-too-long in preprocessing by haifeng-jin in https://github.com/keras-team/keras/pull/16609
    * resolve line-too-long in utils by haifeng-jin in https://github.com/keras-team/keras/pull/16614
    * Optimize L2 Regularizer (use tf.nn.l2_loss) by szutenberg in https://github.com/keras-team/keras/pull/16537
    * let the linter ignore certain lines, prepare to enforce line length by haifeng-jin in https://github.com/keras-team/keras/pull/16617
    * Fix typo by m-ahmadi in https://github.com/keras-team/keras/pull/16607
    * Explicitely set `AutoShardPolicy.DATA` for `TensorLike` datasets by lgeiger in https://github.com/keras-team/keras/pull/16604
    * Fix all flake8 errors by haifeng-jin in https://github.com/keras-team/keras/pull/16621
    * Update lint.yml by haifeng-jin in https://github.com/keras-team/keras/pull/16648
    * Fix typo error of tf.compat.v1.keras.experimental for export and load model by gadagashwini in https://github.com/keras-team/keras/pull/16636
    * Fix documentation in keras.datasets.imdb by luckynozomi in https://github.com/keras-team/keras/pull/16673
    * Update __init__.py by Wehzie in https://github.com/keras-team/keras/pull/16557
    * Fix documentation in keras.layers.attention.multi_head_attention by balvisio in https://github.com/keras-team/keras/pull/16683
    * Fix missed parameter from AUC config by weipeilun in https://github.com/keras-team/keras/pull/16499
    * Fix bug for KerasTensor._keras_mask should be None by haifeng-jin in https://github.com/keras-team/keras/pull/16689
    * Fixed some spellings by synandi in https://github.com/keras-team/keras/pull/16693
    * Fix batchnorm momentum in ResNetRS by shkarupa-alex in https://github.com/keras-team/keras/pull/16726
    * Add variable definitions in optimizer usage example by miker2241 in https://github.com/keras-team/keras/pull/16731
    * Fixed issue 16749 by asukakenji in https://github.com/keras-team/keras/pull/16751
    * Fix usage of deprecated Pillow interpolation methods by neoaggelos in https://github.com/keras-team/keras/pull/16746
    * :memo: Add typing to some callback classes by gabrieldemarmiesse in https://github.com/keras-team/keras/pull/16692
    * Add support for Keras mask & causal mask to MultiHeadAttention by ageron in https://github.com/keras-team/keras/pull/16619
    * Update standard name by chunduriv in https://github.com/keras-team/keras/pull/16772
    * Fix error when labels contains brackets when plotting model by cBournhonesque in https://github.com/keras-team/keras/pull/16739
    * Fixing the incorrect link in input_layer.py by tilakrayal in https://github.com/keras-team/keras/pull/16767
    * Formatted callback.py to render correctly by jvishnuvardhan in https://github.com/keras-team/keras/pull/16765
    * Fixed typo in docs by ltiao in https://github.com/keras-team/keras/pull/16778
    * docs: Fix a few typos by timgates42 in https://github.com/keras-team/keras/pull/16789
    * Add ignore_class to sparse crossentropy and IoU by lucasdavid in https://github.com/keras-team/keras/pull/16712
    * Updated f-string method by cyai in https://github.com/keras-team/keras/pull/16799
    * Fix NASNet input shape computation by ianstenbit in https://github.com/keras-team/keras/pull/16818
    * Fix incorrect ref. to learning_rate_schedule during module import by lucasdavid in https://github.com/keras-team/keras/pull/16813
    * Fixing the incorrect link in backend.py by tilakrayal in https://github.com/keras-team/keras/pull/16806
    * Corrected DepthwiseConv1D docstring by AdityaKane2001 in https://github.com/keras-team/keras/pull/16807
    * Typo and grammar: "recieved" by ehrencrona in https://github.com/keras-team/keras/pull/16814
    * Fix typo in doc by DyeKuu in https://github.com/keras-team/keras/pull/16821
    * Update README.md by freddy1020 in https://github.com/keras-team/keras/pull/16823
    * Updated f-string method by cyai in https://github.com/keras-team/keras/pull/16775
    * Add `is_legacy_optimizer` to optimizer config to keep saving/loading consistent. by copybara-service in https://github.com/keras-team/keras/pull/16842
    * Used Flynt to update f-string method by cyai in https://github.com/keras-team/keras/pull/16774
    * CONTRIBUTING.md file updated by nivasgopi30 in https://github.com/keras-team/keras/pull/16084
    * Updated f-string method by cyai in https://github.com/keras-team/keras/pull/16777
    * added an encoding parameter to TextVectorization layer by tonyrubanraj in https://github.com/keras-team/keras/pull/16805
    * Incorrectly rendered table by chunduriv in https://github.com/keras-team/keras/pull/16839
    * fix(v1):  avoid calling training_v1.Model.metrics during PREDICT by s22chan in https://github.com/keras-team/keras/pull/16603
    * Update `tf.keras.preprocessing.image*` to `tf.keras.utils*` by chunduriv in https://github.com/keras-team/keras/pull/16864
    * Updating get_file() to respect KERAS_HOME environment variable by adrianjhpc in https://github.com/keras-team/keras/pull/16877
    * Add f-string format and check with flynt for the whole codebase by edumucelli in https://github.com/keras-team/keras/pull/16872
    * configurable `distribute_reduction_method` in Model. by kretes in https://github.com/keras-team/keras/pull/16664
    * Fix docs of `metrics` parameter in `compile` by matangover in https://github.com/keras-team/keras/pull/16893
    * [Refactoring] making the code more Succinct  and Pythonic by maldil in https://github.com/keras-team/keras/pull/16874
    * Fix Value Error for Units of tf.keras.layers.LSTM by gadagashwini in https://github.com/keras-team/keras/pull/16929
    * Fix Value error for Units of tf.keras.layers.SimpleRNN by gadagashwini in https://github.com/keras-team/keras/pull/16926
    * Fix value error for Units of tf.keras.layers.Dense by gadagashwini in https://github.com/keras-team/keras/pull/16921
    * Fixed: 16936 broken hyperlink by Anselmoo in https://github.com/keras-team/keras/pull/16937
    * Fix Value error of tf.keras.layers.GRU by gadagashwini in https://github.com/keras-team/keras/pull/16963
    * Update `Returns` section in `compute_output_shape` by chunduriv in https://github.com/keras-team/keras/pull/16955
    * Implement compute_output_shape() method for MultiHeadAttention 16951 by Pouyanpi in https://github.com/keras-team/keras/pull/16989
    * Typo fixed by edwardyehuang in https://github.com/keras-team/keras/pull/17000
    * PR for solving issue 16797 by JaimeArboleda in https://github.com/keras-team/keras/pull/16870
    * Add imports to base_rnn example by beyarkay in https://github.com/keras-team/keras/pull/17025
    * Update conv layer docs to reflect lack of CPU support for channels_first by ianstenbit in https://github.com/keras-team/keras/pull/17034
    * Fixed Broken link of paper jozefowicz15  et al by mohantym in https://github.com/keras-team/keras/pull/17038
    * GitHub Workflows security hardening by sashashura in https://github.com/keras-team/keras/pull/17050
    * Update normalization.py to fix a bug when "denormalizing" by Vincent-SV in https://github.com/keras-team/keras/pull/17054
    * Fix IndexError when outs is empty by rhelmeczi in https://github.com/keras-team/keras/pull/17081
    * Fix typos in docstrings by pitmonticone in https://github.com/keras-team/keras/pull/17096
    * EarlyStopping add initial warm-up 16793 by inonbe in https://github.com/keras-team/keras/pull/17022
    * More Tests for customizable reduction strategy in model by lucasdavid in https://github.com/keras-team/keras/pull/16922
    * Fix Batch Normalization inference behavior when virtual_batch_size is set by myaaaaaaaaa in https://github.com/keras-team/keras/pull/17065
    * Include dictionary comprehension by boneyag in https://github.com/keras-team/keras/pull/17119
    * Fixes ConvNeXt and RegNet when input_tensor is given by shanejohnpaul in https://github.com/keras-team/keras/pull/17068
    * Cherrypick the fix for zlib by qlzh727 in https://github.com/keras-team/keras/pull/17153
    * Cherrypick all the fixes since last branch cut to r2.11. by qlzh727 in https://github.com/keras-team/keras/pull/17160
    * Cherrypick 2 more changes for the optimizer docstring fix. by qlzh727 in https://github.com/keras-team/keras/pull/171
    opened by pyup-bot 0
  • Bump qs from 6.5.2 to 6.5.3

    Bump qs from 6.5.2 to 6.5.3

    Bumps qs from 6.5.2 to 6.5.3.

    Changelog

    Sourced from qs's changelog.

    6.5.3

    • [Fix] parse: ignore __proto__ keys (#428)
    • [Fix] utils.merge: avoid a crash with a null target and a truthy non-array source
    • [Fix] correctly parse nested arrays
    • [Fix] stringify: fix a crash with strictNullHandling and a custom filter/serializeDate (#279)
    • [Fix] utils: merge: fix crash when source is a truthy primitive & no options are provided
    • [Fix] when parseArrays is false, properly handle keys ending in []
    • [Fix] fix for an impossible situation: when the formatter is called with a non-string value
    • [Fix] utils.merge: avoid a crash with a null target and an array source
    • [Refactor] utils: reduce observable [[Get]]s
    • [Refactor] use cached Array.isArray
    • [Refactor] stringify: Avoid arr = arr.concat(...), push to the existing instance (#269)
    • [Refactor] parse: only need to reassign the var once
    • [Robustness] stringify: avoid relying on a global undefined (#427)
    • [readme] remove travis badge; add github actions/codecov badges; update URLs
    • [Docs] Clean up license text so it’s properly detected as BSD-3-Clause
    • [Docs] Clarify the need for "arrayLimit" option
    • [meta] fix README.md (#399)
    • [meta] add FUNDING.yml
    • [actions] backport actions from main
    • [Tests] always use String(x) over x.toString()
    • [Tests] remove nonexistent tape option
    • [Dev Deps] backport from main
    Commits
    • 298bfa5 v6.5.3
    • ed0f5dc [Fix] parse: ignore __proto__ keys (#428)
    • 691e739 [Robustness] stringify: avoid relying on a global undefined (#427)
    • 1072d57 [readme] remove travis badge; add github actions/codecov badges; update URLs
    • 12ac1c4 [meta] fix README.md (#399)
    • 0338716 [actions] backport actions from main
    • 5639c20 Clean up license text so it’s properly detected as BSD-3-Clause
    • 51b8a0b add FUNDING.yml
    • 45f6759 [Fix] fix for an impossible situation: when the formatter is called with a no...
    • f814a7f [Dev Deps] backport from main
    • Additional commits viewable in compare view

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

    dependencies javascript 
    opened by dependabot[bot] 0
  • Bump tensorflow from 2.9.1 to 2.9.3 in /requirements

    Bump tensorflow from 2.9.1 to 2.9.3 in /requirements

    Bumps tensorflow from 2.9.1 to 2.9.3.

    Release notes

    Sourced from tensorflow's releases.

    TensorFlow 2.9.3

    Release 2.9.3

    This release introduces several vulnerability fixes:

    TensorFlow 2.9.2

    Release 2.9.2

    This release introduces several vulnerability fixes:

    ... (truncated)

    Changelog

    Sourced from tensorflow's changelog.

    Release 2.9.3

    This release introduces several vulnerability fixes:

    Release 2.8.4

    This release introduces several vulnerability fixes:

    ... (truncated)

    Commits
    • a5ed5f3 Merge pull request #58584 from tensorflow/vinila21-patch-2
    • 258f9a1 Update py_func.cc
    • cd27cfb Merge pull request #58580 from tensorflow-jenkins/version-numbers-2.9.3-24474
    • 3e75385 Update version numbers to 2.9.3
    • bc72c39 Merge pull request #58482 from tensorflow-jenkins/relnotes-2.9.3-25695
    • 3506c90 Update RELEASE.md
    • 8dcb48e Update RELEASE.md
    • 4f34ec8 Merge pull request #58576 from pak-laura/c2.99f03a9d3bafe902c1e6beb105b2f2417...
    • 6fc67e4 Replace CHECK with returning an InternalError on failing to create python tuple
    • 5dbe90a Merge pull request #58570 from tensorflow/r2.9-7b174a0f2e4
    • Additional commits viewable in compare view

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

    dependencies python 
    opened by dependabot[bot] 0
  • Dockerfile

    Dockerfile

    FROM ubuntu:20.04

    RUN apt-get update && apt-get upgrade -y && \
        apt-get install -y git curl wget nano bzip2 sudo net-tools && \
        apt-get install -y --no-install-recommends apt-utils

    RUN wget https://repo.anaconda.com/archive/Anaconda3-2020.07-Linux-x86_64.sh && \
        bash Anaconda3-2020.07-Linux-x86_64.sh -b && \
        rm Anaconda3-2020.07-Linux-x86_64.sh

    ENV PATH="/root/anaconda3/bin:${PATH}"

    RUN sudo apt-get update --fix-missing && \
        sudo apt-get install -y gcc g++ && \
        sudo apt-get clean

    RUN sudo rm -rf /var/lib/apt/lists/*

    RUN sudo chown -R root ~/anaconda3/bin && \
        sudo chmod -R +x ~/anaconda3/bin && \
        conda install -c conda-forge jupyterlab && \
        conda install -c conda-forge nodejs && \
        jupyter serverextension enable dask_labextension && \
        conda install -c conda-forge jupyter_kernel_gateway && \
        conda clean -afy

    RUN echo "Version 22.10.0-beta"

    RUN pip install cytoolz && \
        pip install dask-labextension && \
        pip install llvmlite --ignore-installed && \
        pip install git+https://github.com/hi-primus/[email protected]#egg=pyoptimus[pandas] && \
        pip install git+https://github.com/hi-primus/[email protected]#egg=pyoptimus[dask]

    CMD jupyter notebook --port=8888 --no-browser --ip=0.0.0.0 --allow-root

    EXPOSE 8888 8889

    opened by Jandersolutions 0
  • Creating new instances forces to download nltk data

    Creating new instances forces to download nltk data

    Description of the bug When creating instances of the Optimus engine, the main function optimus.Optimus uses nltk to download stopwords, a lemmatizer, and a POS tagger. The download is made using nltk.downloader with the default parameter halt_on_error=True. When running the package in closed environments (e.g., on private subnets) the download is blocked, which causes the whole process to get stuck.

    To Reproduce Steps to reproduce the behavior:

    from optimus import Optimus
    op = Optimus("pandas")
    

    Expected behavior Add an optional boolean parameter that tells Optimus whether to load nltk data when creating an engine instance (default False); a hypothetical sketch appears at the end of this report. For example:

    op = Optimus("pandas", load_nltk_data=True)
    

    Desktop OS:

    • OS: Windows 10 & Ubuntu 20.04 LTS

    Additional context As a workaround I am using the preferred engine directly by importing it from optimus. For example:

    from optimus.optimus import start_pandas
    op = start_pandas()
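
    A hypothetical sketch of the requested opt-in behavior (the names
    `load_nltk_data` and `maybe_load_nltk_data` are illustrative and not part
    of Optimus):

    import nltk

    def maybe_load_nltk_data(load_nltk_data: bool = False):
        # Only fetch NLTK data when explicitly requested, and don't let a
        # blocked download hang startup (halt_on_error=False).
        if load_nltk_data:
            for pkg in ("stopwords", "wordnet", "averaged_perceptron_tagger"):
                nltk.download(pkg, halt_on_error=False, quiet=True)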
    
    opened by israelvainberg 0
Releases: v22.10.0-beta