TensorFlow 2.0 🍎🍊 is delicious, just eat it! 😋😋

Overview

How to eat TensorFlow2 in 30 days? 🔥🔥

Click here for the Chinese Version (中文版)

《10天吃掉那只pyspark》

《20天吃掉那只Pytorch》

《30天吃掉那只TensorFlow2》

Fast Track (极速通道)

1. TensorFlow2 🍎 or Pytorch 🔥

Conclusion first:

For engineers, TensorFlow2 should be the first choice.

For students and researchers, the first choice should be PyTorch.

The best approach is to master both, given sufficient time.

Reasons:

    1. Model deployment matters most in industry. At present, most Internet enterprises in China only support online deployment of TensorFlow models, not PyTorch. Moreover, industry values highly available models; in most cases, well-validated model architectures are used with minimal need for adjustment.
    2. Rapid iteration and publication matter most to researchers, who need to try many new model architectures. PyTorch has an edge over TensorFlow2 in ease of use and debugging, and it has dominated academia since 2019, so far more cutting-edge results are available for it.
    3. Overall, TensorFlow2 and PyTorch are now quite similar in programming style, so mastering one makes learning the other easy. Mastering both gives you many more open-source models to draw on and lets you switch between the frameworks easily.
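To make the similarity concrete, here is a minimal sketch (the data and model shapes are illustrative assumptions, not from the book) of a TensorFlow2 training loop written in the same imperative, step-by-step style a PyTorch user would expect, using tf.GradientTape:

```python
import tensorflow as tf

# Toy linear-regression data (illustrative only)
X = tf.random.uniform([64, 2])
y = X @ tf.constant([[2.0], [-1.0]]) + 0.5

w = tf.Variable(tf.zeros([2, 1]))
b = tf.Variable(0.0)
lr = 0.1

for step in range(200):
    with tf.GradientTape() as tape:            # records ops, like autograd
        pred = X @ w + b
        loss = tf.reduce_mean(tf.square(pred - y))
    grads = tape.gradient(loss, [w, b])        # ~ loss.backward()
    w.assign_sub(lr * grads[0])                # ~ optimizer.step()
    b.assign_sub(lr * grads[1])

print(float(loss))  # decreases toward 0 as w, b approach [2, -1] and 0.5
```

The whole loop is ordinary eager Python: you can set a breakpoint inside it and inspect tensors directly, which is exactly the debugging workflow PyTorch users are accustomed to.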

2. Keras 🍏 and tf.keras 🍎

Conclusion first:

The standalone Keras library will no longer be updated after version 2.3.0, so use tf.keras.

Keras is a high-level API specification for deep learning frameworks. It helps users define and train deep learning networks in a more intuitive way.

The Keras library installed by pip implements this high-level API on top of multiple backends: TensorFlow, Theano, CNTK, etc.

tf.keras is the high-level API just for TensorFlow, implemented on top of TensorFlow's low-level APIs; it is a submodule of TensorFlow.

Most, but not all, functions in tf.keras behave exactly like their counterparts in the multi-backend Keras library. tf.keras is more tightly integrated with TensorFlow than standalone Keras is.

With Google having taken over Keras, the standalone Keras library will not be updated after version 2.3.0; users should therefore use tf.keras from now on instead of the Keras installed by pip.
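As a quick taste of that API, here is a minimal, self-contained sketch (the layer sizes and random data are illustrative assumptions, not from the book) showing that tf.keras is used exactly like standalone Keras, only imported from tensorflow:

```python
import numpy as np
import tensorflow as tf

# Toy data: 100 samples, 4 features, binary labels (illustrative only)
X = np.random.rand(100, 4).astype("float32")
y = (X.sum(axis=1) > 2.0).astype("float32")

# The familiar Keras Sequential API, accessed through tf.keras
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=2, batch_size=16, verbose=0)

print(model.predict(X[:3], verbose=0).shape)  # (3, 1)
```

Code written against multi-backend Keras usually ports to tf.keras by changing only the import line.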


3. What Should You Know Before Reading This Book 📖?

It is suggested that readers have a basic knowledge of machine learning and deep learning, and some experience building and training models with Keras, TensorFlow 1.0, or PyTorch.

For those with no machine learning or deep learning experience at all, it is strongly suggested to read "Deep Learning with Python" alongside this book.

"Deep Learning with Python" is written by François Chollet, the creator of Keras. It is based on Keras and assumes no prior machine learning knowledge.

"Deep Learning with Python" is easy to understand, demonstrating deep learning best practices through rich examples. It contains not a single mathematical equation, focusing instead on cultivating the reader's intuition for deep learning.


4. Writing Style 🍉 of This Book

This is an introductory TensorFlow2 reference book that is extremely friendly to its human readers. The authors' minimum requirement is that readers should not give up because of the difficulty, while "Don't let the readers think" is the highest goal.

This book is mainly based on the official TensorFlow documentation and its function docstrings.

However, the authors thoroughly restructured the material and heavily optimized the demonstrations.

Unlike the official documentation, which mixes tutorials and guides without a systematic overall logic, this book redesigns the content according to difficulty, readers' search habits, and TensorFlow's own architecture. Learning TensorFlow thus becomes progressive, the path is clear, and the corresponding examples are easy to find.

In contrast to the verbose demonstration code in the official documentation, this book keeps its examples as short and well-structured as possible to make them easy to read and reuse. Most of the code cells can be dropped into your own projects immediately.

If the difficulty of mastering TensorFlow through the official documentation is rated 9, learning it through this book would reduce that to about 3.

This difference in difficulty is illustrated in the following figure:


5. How to Learn With This Book

(1) Study Plan

The authors wrote this book in their spare time, in particular during the unexpected two-month COVID-19 "holiday". Most readers should be able to fully master the content within 30 days.

The required study time per day is between 30 minutes and 2 hours.

This book can also be used as a library of examples to consult when implementing machine learning projects with TensorFlow2.

Click the blue captions to enter the corresponding chapter.

| Date | Contents | Difficulty | Est. Time | Update Status |
| --- | --- | --- | --- | --- |
|  | Chapter 1: Modeling Procedure of TensorFlow | ⭐️ | 0 hour |  |
| Day 1 | 1-1 Example: Modeling Procedure for Structured Data | ⭐️⭐️⭐️ | 1 hour |  |
| Day 2 | 1-2 Example: Modeling Procedure for Images | ⭐️⭐️⭐️⭐️ | 2 hours |  |
| Day 3 | 1-3 Example: Modeling Procedure for Texts | ⭐️⭐️⭐️⭐️⭐️ | 2 hours |  |
| Day 4 | 1-4 Example: Modeling Procedure for Temporal Sequences | ⭐️⭐️⭐️⭐️⭐️ | 2 hours |  |
|  | Chapter 2: Key Concepts of TensorFlow | ⭐️ | 0 hour |  |
| Day 5 | 2-1 Data Structure of Tensors | ⭐️⭐️⭐️⭐️ | 1 hour |  |
| Day 6 | 2-2 Three Types of Graphs | ⭐️⭐️⭐️⭐️⭐️ | 2 hours |  |
| Day 7 | 2-3 Automatic Differentiation | ⭐️⭐️⭐️ | 1 hour |  |
|  | Chapter 3: Hierarchy of TensorFlow | ⭐️ | 0 hour |  |
| Day 8 | 3-1 Low-level API: Demonstration | ⭐️⭐️⭐️⭐️ | 1 hour |  |
| Day 9 | 3-2 Mid-level API: Demonstration | ⭐️⭐️⭐️ | 1 hour |  |
| Day 10 | 3-3 High-level API: Demonstration | ⭐️⭐️⭐️ | 1 hour |  |
|  | Chapter 4: Low-level API in TensorFlow | ⭐️ | 0 hour |  |
| Day 11 | 4-1 Structural Operations of the Tensor | ⭐️⭐️⭐️⭐️⭐️ | 2 hours |  |
| Day 12 | 4-2 Mathematical Operations of the Tensor | ⭐️⭐️⭐️⭐️ | 1 hour |  |
| Day 13 | 4-3 Rules of Using the AutoGraph | ⭐️⭐️⭐️ | 0.5 hour |  |
| Day 14 | 4-4 Mechanisms of the AutoGraph | ⭐️⭐️⭐️⭐️⭐️ | 2 hours |  |
| Day 15 | 4-5 AutoGraph and tf.Module | ⭐️⭐️⭐️⭐️ | 1 hour |  |
|  | Chapter 5: Mid-level API in TensorFlow | ⭐️ | 0 hour |  |
| Day 16 | 5-1 Dataset | ⭐️⭐️⭐️⭐️⭐️ | 2 hours |  |
| Day 17 | 5-2 feature_column | ⭐️⭐️⭐️⭐️ | 1 hour |  |
| Day 18 | 5-3 activation | ⭐️⭐️⭐️ | 0.5 hour |  |
| Day 19 | 5-4 layers | ⭐️⭐️⭐️ | 1 hour |  |
| Day 20 | 5-5 losses | ⭐️⭐️⭐️ | 1 hour |  |
| Day 21 | 5-6 metrics | ⭐️⭐️⭐️ | 1 hour |  |
| Day 22 | 5-7 optimizers | ⭐️⭐️⭐️ | 0.5 hour |  |
| Day 23 | 5-8 callbacks | ⭐️⭐️⭐️⭐️ | 1 hour |  |
|  | Chapter 6: High-level API in TensorFlow | ⭐️ | 0 hour |  |
| Day 24 | 6-1 Three Ways of Modeling | ⭐️⭐️⭐️ | 1 hour |  |
| Day 25 | 6-2 Three Ways of Training | ⭐️⭐️⭐️⭐️ | 1 hour |  |
| Day 26 | 6-3 Model Training Using a Single GPU | ⭐️⭐️ | 0.5 hour |  |
| Day 27 | 6-4 Model Training Using Multiple GPUs | ⭐️⭐️ | 0.5 hour |  |
| Day 28 | 6-5 Model Training Using a TPU | ⭐️⭐️ | 0.5 hour |  |
| Day 29 | 6-6 Model Deployment Using tensorflow-serving | ⭐️⭐️⭐️⭐️ | 1 hour |  |
| Day 30 | 6-7 Calling the TensorFlow Model Using spark-scala | ⭐️⭐️⭐️⭐️⭐️ | 2 hours |  |
|  | Epilogue: A Story Between a Foodie and Cuisine | ⭐️ | 0 hour |  |

(2) Software Environment for Studying

All the source code has been tested in Jupyter. It is suggested that you clone the repository to your local machine and run it interactively in Jupyter.

The authors suggest installing jupytext, which converts markdown files into ipynb files, so that the chapters can be opened directly in Jupyter. In addition, the project cooperates with the Heywhale (和鲸) community: you can fork the project in the kesci column (https://www.kesci.com/home/column/5d8ef3c3037db3002d3aa3a0) and run the code in a cloud notebook without any environment setup.

    #For readers in mainland China, cloning from the gitee mirror is faster
    #!git clone https://gitee.com/Python_Ai_Road/eat_tensorflow2_in_30_days

    #It is suggested to install jupytext so that the markdown files of each chapter can be run as ipynb files
    #!pip install -i https://pypi.tuna.tsinghua.edu.cn/simple -U jupytext

    #It is also suggested to install the latest version of TensorFlow to test the demonstration code in this book
    #!pip install -i https://pypi.tuna.tsinghua.edu.cn/simple -U tensorflow

    import tensorflow as tf

    #Note: all the code has been tested under TensorFlow 2.1
    tf.print("tensorflow version:", tf.__version__)

    a = tf.constant("hello")
    b = tf.constant("tensorflow2")
    c = tf.strings.join([a, b], " ")
    tf.print(c)

Output:

    tensorflow version: 2.1.0
    hello tensorflow2

6. Contact and Support the Author 🎈 🎈

If you find this book helpful and want to support the author, please give this repository a star ⭐️ and don't forget to share it with your friends 😊

Please leave comments in the WeChat official account "算法美食屋" (Machine Learning Cookhouse) if you want to discuss the content with the author, who will try their best to reply given the limited time available.



Comments
  • Calling a TensorFlow 2.0 model from spark-scala raises a serialization error

    A question: the native SavedModelBundle and Session classes do not implement the Serializable interface, so calling

        val broads = sc.broadcast(bundle)

    directly raises the exception:

        Serialization stack: - object not serializable (class: org.tensorflow.SavedModelBundle, value: org.tensorflow.SavedModelBundle@6a1ebcff)

    Modifying the library source to add the Serializable interface would require changing quite a lot of code. How does the book handle this?

    opened by boluoyu 11
  • What does "adding normal perturbation" with the @ symbol mean?

    In the data-preparation step of 3-1 (Low-level API demonstration) there is a comment:

        @ means matrix multiplication; add normal perturbation

    It is the last line of the first code cell under "1. Prepare data" in "I. Linear regression model" of 3-1.

    Meanwhile, the TensorFlow API docs for matmul say: since Python >= 3.5 the @ operator is supported (see PEP 465). In TensorFlow, it simply calls the tf.matmul() function, so the following lines are equivalent:

        d = a @ b @ [[10], [11]]
        d = tf.matmul(tf.matmul(a, b), [[10], [11]])

    I searched online and found nothing about "matrix multiplication adding normal perturbation". What does the normal perturbation mean, or where is it used? Is it related to the random in X = tf.random.uniform([n,2], minval=-10, maxval=10), or something else? Thank you!

    opened by songyp0505 4
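For what it is worth, a small sketch (not the book's exact code; the weight values and noise scale here are made-up assumptions) suggests a reading of that comment: @ performs only the matrix multiplication, and the "normal perturbation" is a separately added tf.random.normal noise term that makes the synthetic linear-regression targets realistic; it is unrelated to the uniform random used to generate X:

```python
import tensorflow as tf

n = 400
X = tf.random.uniform([n, 2], minval=-10, maxval=10)
w0 = tf.constant([[2.0], [-3.0]])   # hypothetical true weights
b0 = tf.constant([[3.0]])

Y_mean = X @ w0 + b0                # '@' is exactly tf.matmul (PEP 465)
# The "normal perturbation": Gaussian noise added on top of the exact linear response
Y = Y_mean + tf.random.normal([n, 1], mean=0.0, stddev=2.0)

# Equivalent formulation without the operator:
Y_mean2 = tf.matmul(X, w0) + b0
print(Y.shape)  # (400, 1)
```

Without the noise term, the subsequent regression example would fit the data perfectly in one step, which would make the training demonstration pointless.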
  • A question about the 1-1 structured-data modeling example

    In chapter 1-1, the author uses y_test = dftest_raw['Survived'].values, but dftest_raw has no 'Survived' column, so this raises an error.

    Is the test data the official test data, or a split taken from the training data? Thanks!

    opened by Tokkiu 4
  • Loading a custom model built by subclassing the Model base class

    Saving the model:

        model.save('./data/tf_model_savedmodel', save_format="tf")

    In my tests the model can only be saved this way; it cannot be saved in the Keras h5 format.

    Loading the model:

        model_loaded = tf.keras.models.load_model('./data/tf_model_savedmodel')

    raises the error:

    ValueError: Could not find matching function to call loaded from the SavedModel. Got:
      Positional arguments (2 total):
        * Tensor("x:0", shape=(None, 200), dtype=int32)
        * Tensor("training:0", shape=(), dtype=bool)
      Keyword arguments: {}
    
    Expected these arguments to match one of the following 4 option(s):
    
    Option 1:
      Positional arguments (2 total):
        * TensorSpec(shape=(None, 200), dtype=tf.int32, name='input_1')
        * True
      Keyword arguments: {}
    
    Option 2:
      Positional arguments (2 total):
        * TensorSpec(shape=(None, 200), dtype=tf.int32, name='x')
        * False
      Keyword arguments: {}
    
    Option 3:
      Positional arguments (2 total):
        * TensorSpec(shape=(None, 200), dtype=tf.int32, name='x')
        * True
      Keyword arguments: {}
    
    Option 4:
      Positional arguments (2 total):
        * TensorSpec(shape=(None, 200), dtype=tf.int32, name='input_1')
        * False
      Keyword arguments: {}
    

    This loads successfully:

        load_model = tf.saved_model.load('./data/saved_model')

    but a model loaded this way is not compiled, so the model.xxx methods cannot be called directly.

    Current workaround: deploy the saved_model with TensorFlow Serving in Docker.

    opened by oohx 3
  • Why does the Valid Loss in 1-3 keep rising?

    In the source document:

        Epoch=1,Loss:0.442317516,Accuracy:0.7695,Valid Loss:0.323672801,Valid Accuracy:0.8614
        Epoch=2,Loss:0.245737702,Accuracy:0.90215,Valid Loss:0.356488883,Valid Accuracy:0.8554
        Epoch=3,Loss:0.17360799,Accuracy:0.93455,Valid Loss:0.361132562,Valid Accuracy:0.8674
        Epoch=4,Loss:0.113476314,Accuracy:0.95975,Valid Loss:0.483677238,Valid Accuracy:0.856
        Epoch=5,Loss:0.0698405355,Accuracy:0.9768,Valid Loss:0.607856631,Valid Accuracy:0.857
        Epoch=6,Loss:0.0366807655,Accuracy:0.98825,Valid Loss:0.745884955,Valid Accuracy:0.854

    After my reproduction:

        Epoch=1,Loss:0.679053724,Accuracy:0.55235,Valid Loss:0.572207093,Valid Accuracy:0.717
        Epoch=2,Loss:0.467248648,Accuracy:0.7762,Valid Loss:0.491477,Valid Accuracy:0.7588
        Epoch=3,Loss:0.349681437,Accuracy:0.8475,Valid Loss:0.514342368,Valid Accuracy:0.7628
        Epoch=4,Loss:0.278649092,Accuracy:0.8863,Valid Loss:0.564446032,Valid Accuracy:0.763
        Epoch=5,Loss:0.2197005,Accuracy:0.9159,Valid Loss:0.643948495,Valid Accuracy:0.7548
        Epoch=6,Loss:0.163983703,Accuracy:0.94135,Valid Loss:0.770707726,Valid Accuracy:0.7524

    The Valid Loss rises steadily.

    opened by yzho0907 2
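A validation loss that rises while training loss and accuracy keep improving is the classic sign of overfitting rather than a bug. One standard remedy, sketched here with illustrative random stand-in data instead of the chapter's text dataset, is the tf.keras.callbacks.EarlyStopping callback, which stops training once val_loss stops improving and restores the best weights:

```python
import numpy as np
import tensorflow as tf

# Illustrative stand-in data; in 1-3 this would be the text dataset
X = np.random.rand(200, 10).astype("float32")
y = np.random.randint(0, 2, size=(200, 1)).astype("float32")

model = tf.keras.Sequential([
    tf.keras.Input(shape=(10,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Stop once val_loss has not improved for 2 epochs and keep the best weights
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=2, restore_best_weights=True)

history = model.fit(X, y, validation_split=0.2, epochs=50,
                    callbacks=[early_stop], verbose=0)
print(len(history.history["val_loss"]))  # usually far fewer than 50 epochs
```

Reducing model capacity or adding dropout/regularization are the other usual levers when the validation loss curve bends upward this early.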
  • Some Suggestions for '1-3' Maybe

    Excuse me. QAQ But I hope to get suggestions!

    Where the issue happens

    Chapter 1-3, the text-data modeling example:

    # Build the vocabulary
    def clean_text(text):
        ...
        tf.strings.regex_replace(stripped_html,
             '[%s]' % re.escape(string.punctuation),'')
    

    Issue Detail

    In re.escape(string.punctuation), '', should '' be ' '? Otherwise, we'll get "himbut" from "him,but". Additionally, I'm considering that we should remove "'" from string.punctuation; otherwise, we'll get "it s a good" from "It's a good".

    My Version of the Code

    def clean_text(text):
        # A string include all punctuations which has been escaped by re.
        # Use '\\' for escape of metacharacters.
        escaped_punctuation = re.escape(string.punctuation.replace("'", ""))
        lowercase = tf.strings.lower(text)
        stripped_html = tf.strings.regex_replace(lowercase, '<br />', ' ')
        cleaned_punctuation = tf.strings.regex_replace(stripped_html,
                                                       '[%s]' % escaped_punctuation, ' ')
    
        return cleaned_punctuation
    
    opened by LEON-REIN 2
  • train-split

    The Titanic training set should be split into two parts; this step seems to be missing from the source code. I added the following code, please confirm:

        from sklearn.model_selection import train_test_split
        dftrain_raw, dftest_raw = train_test_split(dftrain_raw111, test_size=0.2)

    opened by jackzhenguo 2
  • 3-2 raises an error on GPU; some people report it runs fine on CPU

    The error is:

        (0) Internal: No unary variant device copy function found for direction: 1 and Variant type_index: class tensorflow::data::`anonymous namespace'::DatasetVariantWrapper
            [[{{node while_input_4/_12}}]]
        (1) Internal: No unary variant device copy function found for direction: 1 and Variant type_index: class tensorflow::data::`anonymous namespace'::DatasetVariantWrapper
            [[{{node while_input_4/_12}}]]
            [[Func/while/body/_1/input/_60/_20]]

    opened by since2016 2
  • 3-1 Low-level API demonstration: building the data pipeline iterator

    In the data_iter(features, labels, batch_size=8) function of 3-1, shouldn't

        yield tf.gather(X, indexs), tf.gather(Y, indexs)

    be written as

        yield tf.gather(features, indexs), tf.gather(labels, indexs)

    opened by mc0514 2
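The commenter appears to be right: if data_iter gathers from the outer-scope X and Y instead of its own arguments, it silently ignores whatever is passed in. A corrected, self-contained sketch of the generator (the tensor shapes here are illustrative assumptions):

```python
import numpy as np
import tensorflow as tf

def data_iter(features, labels, batch_size=8):
    num_examples = len(features)
    indices = list(range(num_examples))
    np.random.shuffle(indices)  # shuffle the read order each epoch
    for i in range(0, num_examples, batch_size):
        indexs = indices[i: min(i + batch_size, num_examples)]
        # Gather from the function's own arguments, not outer-scope X/Y
        yield tf.gather(features, indexs), tf.gather(labels, indexs)

features = tf.random.uniform([20, 2])
labels = tf.random.uniform([20, 1])
batch_x, batch_y = next(data_iter(features, labels, batch_size=8))
print(batch_x.shape, batch_y.shape)  # (8, 2) (8, 1)
```

In the chapter's example the bug is masked because the outer variables happen to be the same tensors, but any reuse of the function with different data would break.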
  • 1-2: the load_image function assigns labels incorrectly

    (screenshot: https://user-images.githubusercontent.com/55381998/79407314-c5b0ac00-7fcb-11ea-8546-54e90495fbf1.png)

    All labels are 0, so the subsequent training accuracy is always 1:

        Train for 100 steps, validate for 20 steps
        Epoch 1/10
        100/100 [==============================] - 16s 162ms/step - loss: 0.0116 - accuracy: 0.9904 - val_loss: 1.2626e-09 - val_accuracy: 1.0000
        Epoch 2/10
        100/100 [==============================] - 11s 106ms/step - loss: 5.7853e-09 - accuracy: 1.0000 - val_loss: 1.2602e-09 - val_accuracy: 1.0000
        Epoch 3/10
        100/100 [==============================] - 11s 105ms/step - loss: 5.7422e-09 - accuracy: 1.0000 - val_loss: 1.2595e-09 - val_accuracy: 1.0000
        ...

    opened by Josephine621 2
  • 1-2, the image-data modeling example

    Using parallel preprocessing (num_parallel_calls) and prefetching (prefetch) to improve performance:

        ds_train = tf.data.Dataset.list_files("./data/cifar2/train/*/*.jpg") \
            .map(load_image, num_parallel_calls=tf.data.experimental.AUTOTUNE) \
            .shuffle(buffer_size=1000).batch(BATCH_SIZE) \
            .prefetch(tf.data.experimental.AUTOTUNE)

        ds_test = tf.data.Dataset.list_files("./data/cifar2/test/*/*.jpg") \
            .map(load_image, num_parallel_calls=tf.data.experimental.AUTOTUNE) \
            .batch(BATCH_SIZE) \
            .prefetch(tf.data.experimental.AUTOTUNE)

    After this processing, the labels I print are all identical. Where is the problem?

    opened by stringAlice 2
  • Calling a TF model from spark-scala on a cluster — anyone who has verified this successfully, please discuss
    Stack trace: ExitCodeException exitCode=134: /bin/bash: line 1: 16828 Aborted LD_LIBRARY_PATH="/usr/lib/hadoop-current/lib/native::/usr/lib/hadoop-current/lib/native:/usr/lib/bigboot-current/lib::/opt/apps/ecm/service/hadoop/2.8.5-1.4.2/package/hadoop-2.8.5-1.4.2/lib/native:/usr/lib/hadoop-current/lib/native:/usr/lib/bigboot-current/lib::/opt/apps/ecm/service/hadoop/2.8.5-1.4.2/package/hadoop-2.8.5-1.4.2/lib/native:/opt/apps/ecm/service/hadoop/2.8.5-1.4.2/package/hadoop-2.8.5-1.4.2/lib/native" /usr/lib/jvm/java-1.8.0/bin/java -server -Xmx1024m '-Dlog4j.ignoreTCL=true' -Djava.io.tmpdir=/mnt/disk1/yarn/usercache/hadoop/appcache/application_1625746977028_0206/container_1625746977028_0206_01_000002/tmp '-Dspark.history.ui.port=18080' '-Dspark.driver.port=41638' '-Dspark.shuffle.service.port=7337' -Dspark.yarn.app.container.log.dir=/mnt/disk2/log/hadoop-yarn/containers/application_1625746977028_0206/container_1625746977028_0206_01_000002 -Dspark.logger.appender=rolling -XX:OnOutOfMemoryError='kill %p' org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url spark://[email protected]:41638 --executor-id 1 --hostname emr-worker-3.cluster-230630 --cores 1 --app-id application_1625746977028_0206 --user-class-path file:/mnt/disk1/yarn/usercache/hadoop/appcache/application_1625746977028_0206/container_1625746977028_0206_01_000002/app.jar > /mnt/disk2/log/hadoop-yarn/containers/application_1625746977028_0206/container_1625746977028_0206_01_000002/stdout 2> /mnt/disk2/log/hadoop-yarn/containers/application_1625746977028_0206/container_1625746977028_0206_01_000002/stderr

        at org.apache.hadoop.util.Shell.runCommand(Shell.java:972)
        at org.apache.hadoop.util.Shell.run(Shell.java:869)
        at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1170)
        at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:235)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:299)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:83)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
    
    opened by shuaidan0412 0
  • Calling a TensorFlow model with spark-scala

    I got this working on a single machine, but when loading the model on the Spark cluster with tf.SavedModelBundle.load, it reports: Could not find SavedModel .pb or .pbtxt at supplied export directory path. How should the model file be read on a cluster?

    opened by feifeiontheway 1
  • 6-2: the validation accuracy and loss computation seems wrong

    Hello, and thank you for the training workflows in 6-2. In "III. Custom training loop", the final validation accuracy and loss computation seem wrong. I am new to TF2 and could not locate the problem. The validation loss in the example is clearly too large and the accuracy too low. To verify my suspicion, I used the training set as the validation set as well, train_model(model, train_dataset, train_dataset, EPOCH), and got Epoch=5, Loss:0.30140844, Accuracy:0.917645752, Valid Loss:3.73952508, Valid Accuracy:0.430102259; the gap is far too large. Please debug it when convenient, many thanks!

    opened by myhrbeu 0
Owner
lyhue1991
Don't let me think, let me eat!😋😋