Full Stack Deep Learning Labs

Overview

Welcome!

Project developed during lab sessions of the Full Stack Deep Learning Bootcamp.

  • We will build a handwriting recognition system from scratch and deploy it as a web service.
  • Uses Keras, but is designed to be modular, hackable, and scalable.
  • Provides code for training models in parallel and storing evaluation results in Weights & Biases.
  • We will set up a continuous integration system for our codebase, which will check the functionality of the code and evaluate the model about to be deployed.
  • We will package up the prediction system as a REST API, deployable as a Docker container.
  • We will deploy the prediction system as a serverless function to AWS Lambda.
  • Lastly, we will set up monitoring that alerts us when the incoming data distribution changes.
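The "prediction system as a REST API" bullet above can be sketched with a stdlib-only stand-in. The real labs use a trained model, a proper web framework, Docker, and AWS Lambda; the `predict` stub and the route below are hypothetical illustrations, not the repo's code:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

def predict(image_b64: str) -> dict:
    # Hypothetical stub standing in for the trained text recognizer.
    return {"pred": "hello", "conf": 0.99}

class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON request body and answer with a JSON prediction.
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        body = json.dumps(predict(payload.get("image", ""))).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging

server = HTTPServer(("127.0.0.1", 0), PredictHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()
```

Wrapping the same `predict` function behind a handler like this is what makes it deployable both as a Docker container and as a serverless function.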

Schedule for the November 2019 Bootcamp

  • First session (90 min)
    • Setup (10 min): Get set up with jupyterhub.
    • Introduction to problem and project structure (20 min).
    • Gather handwriting data (10 min).
    • Lab 1 (20 min): Introduce EMNIST. Training code details. Train & evaluate character prediction baselines.
    • Lab 2 (30 min): Introduce EMNIST Lines. Overview of CTC loss and model architecture. Train our model on EMNIST Lines.
  • Second session (60 min)
    • Lab 3 (40 min): Weights & Biases + parallel experiments
    • Lab 4 (20 min): IAM Lines and experimentation time (hyperparameter sweeps, leave running overnight).
  • Third session (90 min)
    • Review results from the class on W&B.
    • Lab 5 (45 min): Train & evaluate line detection model.
    • Lab 6 (45 min): Label handwriting data generated by the class; download and version results.
  • Fourth session (75 min)
    • Lab 7 (15 min): Add continuous integration that runs linting and tests on our codebase.
    • Lab 8 (60 min): Deploy the trained model to the web using AWS Lambda.
Comments
  • Error: the command training/run_experiment.py could not be found within PATH or Pipfile's [scripts]

    I'm following this FSDL bootcamp on Google Colab and ran into the error above. I know similar issues have been opened with suggestions, but those fixes did not work in my case. Could you advise what to do on Colab? Thanks.

    Issue description

    • pwd → '/content/gdrive/My Drive/fsdl-text-recognizer-project'

    • !pip install pipenv → ... Successfully installed pipenv-2018.11.26 virtualenv-16.7.2 virtualenv-clone-0.5.3

    • !pipenv install --dev → Creating a virtualenv for this project… Pipfile: /content/gdrive/My Drive/fsdl-text-recognizer-project/Pipfile ...

    • cd lab2_sln/ → /content/gdrive/My Drive/fsdl-text-recognizer-project/lab2_sln

    • ls → notebooks/ readme.md tasks/ text_recognizer/ training/

    • !pipenv run training/run_experiment.py --save '{"dataset": "EmnistDataset", "model": "CharacterModel", "network": "mlp", "train_args": {"batch_size": 256}}' → Error: the command training/run_experiment.py could not be found within PATH or Pipfile's [scripts].

    opened by gitfourteen 4
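For context on the error above: the message itself says pipenv resolves a bare command against PATH and Pipfile's [scripts], so a relative script path is not found unless it is an installed command. Handing the script to the interpreter (pipenv run python training/run_experiment.py ...) usually sidesteps that lookup. A tiny sketch of the interpreter-invocation idea; the demo/ tree is hypothetical, standing in for the repo:

```shell
# Sketch only: the interpreter opens the file directly, so no PATH or
# [scripts] lookup is involved. The demo/ path below is made up.
mkdir -p demo/training
printf 'print("ok")\n' > demo/training/run_experiment.py
python3 demo/training/run_experiment.py
# In the repo, the equivalent would be:
#   pipenv run python training/run_experiment.py --save '<experiment json>'
```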
  • samples = samples_by_char[char], how to access samples_by_char

    Hi,

    In emnist_lines_dataset.py, we have samples_by_char, a defaultdict(list) used by the select_letter_samples_for_string() method, inside of which samples = samples_by_char[char] returns []. If I understand correctly, the keys of samples_by_char are actually tuples that contain char rather than char alone. I am stuck on how to access the defaultdict(list) with only one element of the tuple, i.e. char. Any help?

    Many thanks, Varsha

    opened by VarshaSLalapura 1
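The behavior described above is how defaultdict works: looking up a missing key silently creates and returns an empty list instead of raising KeyError. So if the mapping was built with tuple keys, indexing with only the character yields []. A small sketch; the (split, char) key layout is assumed for illustration, not taken from emnist_lines_dataset.py:

```python
from collections import defaultdict

samples_by_char = defaultdict(list)
# Suppose entries were stored under tuple keys:
samples_by_char[("train", "a")].append("sample-1")

# Indexing with only the character is a *new* key, so defaultdict
# silently creates and returns an empty list:
missing = samples_by_char["a"]

# To query by character alone, normalize the keys into a new mapping:
by_char = defaultdict(list)
for key, samples in samples_by_char.items():
    if isinstance(key, tuple):
        by_char[key[1]].extend(samples)
```

If `[]` keeps coming back, printing a few of `samples_by_char.keys()` is the quickest way to see what shape the keys really have.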
  • ImportError: libcublas.so.9.0: cannot open shared object file: No such file or directory

    In Colab, the virtual environment failed to import tensorflow.

    The commands run are as follows:

    1. !pwd → /content/gdrive/My Drive/fsdl-text-recognizer-project/lab2_sln

    2. !nvcc --version (for now I have no idea how this info will help debug; just listed here FYI)

    nvcc: NVIDIA (R) Cuda compiler driver
    Copyright (c) 2005-2018 NVIDIA Corporation
    Built on Sat_Aug_25_21:08:01_CDT_2018
    Cuda compilation tools, release 10.0, V10.0.130

    3. !pipenv run python (failed to import tensorflow!)

    Python 3.6.8 (default, Jan 14 2019, 11:02:34) [GCC 8.0.1 20180414 (experimental) [trunk revision 259383]] on linux
    Type "help", "copyright", "credits" or "license" for more information.

    >>> import tensorflow as tf

    Traceback (most recent call last):
      File "/root/.local/share/virtualenvs/fsdl-text-recognizer-project-iJwlu5A8/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow.py", line 58, in
        from tensorflow.python.pywrap_tensorflow_internal import *
      File "/root/.local/share/virtualenvs/fsdl-text-recognizer-project-iJwlu5A8/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 28, in
        _pywrap_tensorflow_internal = swig_import_helper()
      File "/root/.local/share/virtualenvs/fsdl-text-recognizer-project-iJwlu5A8/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 24, in swig_import_helper
        _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
      File "/root/.local/share/virtualenvs/fsdl-text-recognizer-project-iJwlu5A8/lib/python3.6/imp.py", line 243, in load_module
        return load_dynamic(name, filename, file)
      File "/root/.local/share/virtualenvs/fsdl-text-recognizer-project-iJwlu5A8/lib/python3.6/imp.py", line 343, in load_dynamic
        return _load(spec)
    ImportError: libcublas.so.9.0: cannot open shared object file: No such file or directory

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "", line 1, in
      File "/root/.local/share/virtualenvs/fsdl-text-recognizer-project-iJwlu5A8/lib/python3.6/site-packages/tensorflow/__init__.py", line 24, in
        from tensorflow.python import pywrap_tensorflow  # pylint: disable=unused-import
      File "/root/.local/share/virtualenvs/fsdl-text-recognizer-project-iJwlu5A8/lib/python3.6/site-packages/tensorflow/python/__init__.py", line 49, in
        from tensorflow.python import pywrap_tensorflow
      File "/root/.local/share/virtualenvs/fsdl-text-recognizer-project-iJwlu5A8/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow.py", line 74, in
        raise ImportError(msg)
    ImportError: Traceback (most recent call last):
      File "/root/.local/share/virtualenvs/fsdl-text-recognizer-project-iJwlu5A8/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow.py", line 58, in
        from tensorflow.python.pywrap_tensorflow_internal import *
      File "/root/.local/share/virtualenvs/fsdl-text-recognizer-project-iJwlu5A8/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 28, in
        _pywrap_tensorflow_internal = swig_import_helper()
      File "/root/.local/share/virtualenvs/fsdl-text-recognizer-project-iJwlu5A8/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 24, in swig_import_helper
        _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
      File "/root/.local/share/virtualenvs/fsdl-text-recognizer-project-iJwlu5A8/lib/python3.6/imp.py", line 243, in load_module
        return load_dynamic(name, filename, file)
      File "/root/.local/share/virtualenvs/fsdl-text-recognizer-project-iJwlu5A8/lib/python3.6/imp.py", line 343, in load_dynamic
        return _load(spec)
    ImportError: libcublas.so.9.0: cannot open shared object file: No such file or directory

    Failed to load the native TensorFlow runtime.

    See https://www.tensorflow.org/install/errors

    for some common reasons and solutions. Include the entire stack trace above this error message when asking for help.

    KeyboardInterrupt

    ^C

    4. !python (this is OK)

    Python 3.6.8 (default, Jan 14 2019, 11:02:34) [GCC 8.0.1 20180414 (experimental) [trunk revision 259383]] on linux
    Type "help", "copyright", "credits" or "license" for more information.

    >>> import tensorflow as tf
    >>> tf.__version__
    '1.14.0'

    KeyboardInterrupt

    ^C

    opened by gitfourteen 1
  • [1-Typos] Fix typos.

    Hi!

    In this PR I am just fixing some typos: in some readme.md files, the paths to the solutions were lab[lab number]_soln instead of lab[lab number]_sln.

    Thanks,

    opened by andretadeu 1
  • Suggest to loosen the dependency on boltons

    Hi, your project fsdl-text-recognizer-project requires "boltons==20.0.0" in its dependencies. After analyzing the source code, we found that the following versions of boltons are also suitable without affecting your project: boltons 19.0.0, 19.0.1, 19.1.0, 19.2.0, 19.3.0, 20.1.0, 20.2.0, 20.2.1, 21.0.0. We therefore suggest loosening the dependency on boltons from "boltons==20.0.0" to "boltons>=19.0.0,<=21.0.0" to avoid possible conflicts when importing more packages, or for downstream projects that may use fsdl-text-recognizer-project.

    May I open a pull request to loosen the dependency on boltons?

    By the way, could you please tell us whether such dependency analysis would be helpful for maintaining dependencies during your development?



    We also give our detailed analysis as follows for your reference:

    Your project fsdl-text-recognizer-project directly uses 1 API from package boltons.

    boltons.cacheutils.cachedproperty.__init__
    
    

    Beginning from the 1 API above, 0 functions are then indirectly called, including 0 of boltons's internal APIs and 0 outside APIs. The specific call graph is listed as follows (neglecting some repeated function occurrences).

    [/full-stack-deep-learning/fsdl-text-recognizer-project]
    +--boltons.cacheutils.cachedproperty.__init__
    

    We scanned boltons's versions and observed that, during its evolution between version 20.0.0 and any version in [19.0.0, 19.0.1, 19.1.0, 19.2.0, 19.3.0, 20.1.0, 20.2.0, 20.2.1, 21.0.0], the changed functions (diffs listed below) have no intersection with any function or API mentioned above (either directly or indirectly called by this project).

    diff: 20.0.0(original) 19.0.0
    ['boltons.funcutils.wraps', 'boltons.dictutils.OneToOne.__init__', 'boltons.pathutils.shrinkuser', 'boltons.funcutils.inspect_formatargspec', 'boltons.funcutils.FunctionBuilder.get_func', 'boltons.funcutils.FunctionBuilder', 'boltons.setutils._ComplementSet.__rsub__', 'boltons.cacheutils.MinIDMap.iteritems', 'boltons.tbutils.TracebackInfo.from_frame', 'boltons.cacheutils.ThresholdCounter.most_common', 'boltons.pathutils.expandpath', 'boltons.strutils.StringBuffer.__init__', 'boltons.strutils.StringBuffer.getvalue', 'boltons.setutils._ComplementSet', 'boltons.strutils.StringBuffer', 'boltons.strutils.StringBuffer.truncate', 'boltons.strutils.StringBuffer.write', 'boltons.dictutils.OrderedMultiDict', 'boltons.fileutils.iter_find_files', 'boltons.setutils.complement', 'boltons.funcutils.format_exp_repr', 'boltons.cacheutils.MinIDMap', 'boltons.mathutils.Bits', 'boltons.funcutils.FunctionBuilder.get_invocation_str', 'boltons.funcutils.FunctionBuilder.add_arg', 'boltons.dictutils.OneToOne', 'boltons.pathutils.augpath', 'boltons.strutils.unwrap_text', 'boltons.iterutils.chunked_iter', 'boltons.statsutils.Stats', 'boltons.cacheutils.ThresholdCounter', 'boltons.urlutils.OrderedMultiDict.sorted', 'boltons.funcutils.format_nonexp_repr', 'boltons.strutils.gzip_bytes', 'boltons.setutils._ComplementSet.__len__', 'boltons.setutils._ComplementSet.__bool__', 'boltons.dictutils.OrderedMultiDict.sorted', 'boltons.urlutils.OrderedMultiDict', 'boltons.funcutils.FunctionBuilder.get_defaults_dict', 'boltons.setutils._ComplementSet.__iter__', 'boltons.iterutils.flatten', 'boltons.queueutils.BasePriorityQueue', 'boltons.funcutils.FunctionBuilder.get_sig_str', 'boltons.cacheutils.ThresholdCounter.get_common_count', 'boltons.funcutils.format_invocation', 'boltons.statsutils.Stats.format_histogram', 'boltons.tbutils.TracebackInfo', 'boltons.funcutils.FunctionBuilder.get_arg_names', 'boltons.iterutils.partition', 'boltons.fileutils.FilePerms', 
'boltons.funcutils.FunctionBuilder.from_func', 'boltons.cacheutils.MinIDMap.__iter__', 'boltons.iterutils.bucketize', 'boltons.dictutils.OneToOne.unique', 'boltons.fileutils.FilePerms.from_path']
    
    diff: 20.0.0(original) 19.0.1
    ['boltons.dictutils.OneToOne.__init__', 'boltons.pathutils.shrinkuser', 'boltons.funcutils.inspect_formatargspec', 'boltons.funcutils.FunctionBuilder', 'boltons.setutils._ComplementSet.__rsub__', 'boltons.cacheutils.MinIDMap.iteritems', 'boltons.tbutils.TracebackInfo.from_frame', 'boltons.cacheutils.ThresholdCounter.most_common', 'boltons.pathutils.expandpath', 'boltons.strutils.StringBuffer.__init__', 'boltons.strutils.StringBuffer.getvalue', 'boltons.setutils._ComplementSet', 'boltons.strutils.StringBuffer', 'boltons.strutils.StringBuffer.truncate', 'boltons.strutils.StringBuffer.write', 'boltons.dictutils.OrderedMultiDict', 'boltons.fileutils.iter_find_files', 'boltons.setutils.complement', 'boltons.funcutils.format_exp_repr', 'boltons.cacheutils.MinIDMap', 'boltons.funcutils.FunctionBuilder.get_invocation_str', 'boltons.dictutils.OneToOne', 'boltons.pathutils.augpath', 'boltons.strutils.unwrap_text', 'boltons.iterutils.chunked_iter', 'boltons.statsutils.Stats', 'boltons.cacheutils.ThresholdCounter', 'boltons.urlutils.OrderedMultiDict.sorted', 'boltons.funcutils.format_nonexp_repr', 'boltons.strutils.gzip_bytes', 'boltons.setutils._ComplementSet.__len__', 'boltons.setutils._ComplementSet.__bool__', 'boltons.dictutils.OrderedMultiDict.sorted', 'boltons.urlutils.OrderedMultiDict', 'boltons.funcutils.FunctionBuilder.get_defaults_dict', 'boltons.setutils._ComplementSet.__iter__', 'boltons.iterutils.flatten', 'boltons.queueutils.BasePriorityQueue', 'boltons.funcutils.FunctionBuilder.get_sig_str', 'boltons.cacheutils.ThresholdCounter.get_common_count', 'boltons.funcutils.format_invocation', 'boltons.statsutils.Stats.format_histogram', 'boltons.tbutils.TracebackInfo', 'boltons.funcutils.FunctionBuilder.get_arg_names', 'boltons.iterutils.partition', 'boltons.fileutils.FilePerms', 'boltons.cacheutils.MinIDMap.__iter__', 'boltons.iterutils.bucketize', 'boltons.dictutils.OneToOne.unique', 'boltons.fileutils.FilePerms.from_path']
    
    diff: 20.0.0(original) 19.1.0
    ['boltons.dictutils.OneToOne.__init__', 'boltons.pathutils.shrinkuser', 'boltons.funcutils.inspect_formatargspec', 'boltons.funcutils.FunctionBuilder', 'boltons.setutils._ComplementSet.__rsub__', 'boltons.cacheutils.MinIDMap.iteritems', 'boltons.tbutils.TracebackInfo.from_frame', 'boltons.cacheutils.ThresholdCounter.most_common', 'boltons.pathutils.expandpath', 'boltons.setutils._ComplementSet', 'boltons.dictutils.OrderedMultiDict', 'boltons.fileutils.iter_find_files', 'boltons.setutils.complement', 'boltons.funcutils.format_exp_repr', 'boltons.cacheutils.MinIDMap', 'boltons.funcutils.FunctionBuilder.get_invocation_str', 'boltons.dictutils.OneToOne', 'boltons.pathutils.augpath', 'boltons.strutils.unwrap_text', 'boltons.iterutils.chunked_iter', 'boltons.statsutils.Stats', 'boltons.cacheutils.ThresholdCounter', 'boltons.urlutils.OrderedMultiDict.sorted', 'boltons.funcutils.format_nonexp_repr', 'boltons.setutils._ComplementSet.__len__', 'boltons.setutils._ComplementSet.__bool__', 'boltons.dictutils.OrderedMultiDict.sorted', 'boltons.urlutils.OrderedMultiDict', 'boltons.setutils._ComplementSet.__iter__', 'boltons.iterutils.flatten', 'boltons.funcutils.FunctionBuilder.get_sig_str', 'boltons.cacheutils.ThresholdCounter.get_common_count', 'boltons.funcutils.format_invocation', 'boltons.statsutils.Stats.format_histogram', 'boltons.tbutils.TracebackInfo', 'boltons.iterutils.partition', 'boltons.fileutils.FilePerms', 'boltons.cacheutils.MinIDMap.__iter__', 'boltons.iterutils.bucketize', 'boltons.dictutils.OneToOne.unique', 'boltons.fileutils.FilePerms.from_path']
    
    diff: 20.0.0(original) 19.2.0
    ['boltons.strutils.unwrap_text', 'boltons.fileutils.iter_find_files', 'boltons.pathutils.shrinkuser', 'boltons.iterutils.chunked_iter', 'boltons.cacheutils.ThresholdCounter.get_common_count', 'boltons.funcutils.format_invocation', 'boltons.funcutils.format_exp_repr', 'boltons.cacheutils.ThresholdCounter', 'boltons.funcutils.format_nonexp_repr', 'boltons.cacheutils.MinIDMap', 'boltons.cacheutils.MinIDMap.__iter__', 'boltons.cacheutils.MinIDMap.iteritems', 'boltons.cacheutils.ThresholdCounter.most_common', 'boltons.pathutils.expandpath', 'boltons.pathutils.augpath']
    
    diff: 20.0.0(original) 19.3.0
    ['boltons.strutils.unwrap_text', 'boltons.fileutils.iter_find_files', 'boltons.pathutils.shrinkuser', 'boltons.iterutils.chunked_iter', 'boltons.cacheutils.ThresholdCounter.get_common_count', 'boltons.funcutils.format_invocation', 'boltons.cacheutils.ThresholdCounter', 'boltons.cacheutils.MinIDMap', 'boltons.cacheutils.MinIDMap.__iter__', 'boltons.cacheutils.MinIDMap.iteritems', 'boltons.cacheutils.ThresholdCounter.most_common', 'boltons.pathutils.expandpath', 'boltons.pathutils.augpath']
    
    diff: 20.0.0(original) 20.1.0
    ['boltons.funcutils.wraps', 'boltons.funcutils.FunctionBuilder', 'boltons.ioutils.SpooledIOBase', 'boltons.socketutils.NetstringSocket.setmaxsize', 'boltons.ioutils.SpooledIOBase.readable', 'boltons.setutils.IndexedSet.index', 'boltons.funcutils.InstancePartial._partialmethod', 'boltons.funcutils.InstancePartial', 'boltons.funcutils.CachedInstancePartial', 'boltons.funcutils.CachedInstancePartial._partialmethod', 'boltons.urlutils.URL', 'boltons.socketutils.NetstringSocket._calc_msgsize_maxsize', 'boltons.funcutils.update_wrapper', 'boltons.setutils.IndexedSet._get_apparent_index', 'boltons.socketutils.NetstringSocket', 'boltons.ioutils.SpooledIOBase.writable', 'boltons.ioutils.SpooledIOBase.seekable', 'boltons.socketutils.NetstringSocket.read_ns', 'boltons.setutils.IndexedSet', 'boltons.funcutils.FunctionBuilder.from_func', 'boltons.ecoutils.get_profile_json']
    
    diff: 20.0.0(original) 20.2.0
    ['boltons.funcutils.wraps', 'boltons.strutils.strip_ansi', 'boltons.listutils.BarrelList.__iter__', 'boltons.funcutils.FunctionBuilder', 'boltons.ioutils.SpooledIOBase', 'boltons.iterutils.lstrip_iter', 'boltons.iterutils.rstrip', 'boltons.socketutils.NetstringSocket.setmaxsize', 'boltons.iterutils.lstrip', 'boltons.ioutils.SpooledIOBase.readable', 'boltons.listutils.BarrelList', 'boltons.setutils.IndexedSet.index', 'boltons.funcutils.InstancePartial._partialmethod', 'boltons.iterutils.untyped_sorted', 'boltons.funcutils.InstancePartial', 'boltons.funcutils.CachedInstancePartial', 'boltons.funcutils.CachedInstancePartial._partialmethod', 'boltons.setutils.IndexedSet.__rsub__', 'boltons.urlutils.URL', 'boltons.socketutils.NetstringSocket._calc_msgsize_maxsize', 'boltons.strutils.int_ranges_from_int_list', 'boltons.funcutils.update_wrapper', 'boltons.setutils.IndexedSet._get_apparent_index', 'boltons.socketutils.NetstringSocket', 'boltons.ioutils.SpooledIOBase.writable', 'boltons.fileutils.AtomicSaver.__init__', 'boltons.iterutils.strip_iter', 'boltons.ioutils.SpooledIOBase.seekable', 'boltons.strutils.complement_int_list', 'boltons.iterutils.rstrip_iter', 'boltons.socketutils.NetstringSocket.read_ns', 'boltons.iterutils.strip', 'boltons.fileutils.AtomicSaver', 'boltons.setutils.IndexedSet', 'boltons.funcutils.FunctionBuilder.from_func', 'boltons.ecoutils.get_profile_json']
    
    diff: 20.0.0(original) 20.2.1
    ['boltons.funcutils.wraps', 'boltons.strutils.strip_ansi', 'boltons.listutils.BarrelList.__iter__', 'boltons.iterutils.SequentialGUIDerator.reseed', 'boltons.funcutils.FunctionBuilder', 'boltons.ioutils.SpooledIOBase', 'boltons.iterutils.lstrip_iter', 'boltons.iterutils.GUIDerator.__init__', 'boltons.iterutils.rstrip', 'boltons.socketutils.NetstringSocket.setmaxsize', 'boltons.iterutils.lstrip', 'boltons.ioutils.SpooledIOBase.readable', 'boltons.listutils.BarrelList', 'boltons.setutils.IndexedSet.index', 'boltons.funcutils.InstancePartial._partialmethod', 'boltons.iterutils.untyped_sorted', 'boltons.funcutils.InstancePartial', 'boltons.funcutils.CachedInstancePartial', 'boltons.funcutils.CachedInstancePartial._partialmethod', 'boltons.setutils.IndexedSet.__rsub__', 'boltons.urlutils.URL', 'boltons.socketutils.NetstringSocket._calc_msgsize_maxsize', 'boltons.iterutils.GUIDerator.__next__', 'boltons.strutils.int_ranges_from_int_list', 'boltons.funcutils.update_wrapper', 'boltons.iterutils.GUIDerator.reseed', 'boltons.setutils.IndexedSet._get_apparent_index', 'boltons.socketutils.NetstringSocket', 'boltons.ioutils.SpooledIOBase.writable', 'boltons.fileutils.AtomicSaver.__init__', 'boltons.iterutils.GUIDerator', 'boltons.iterutils.redundant', 'boltons.iterutils.strip_iter', 'boltons.ioutils.SpooledIOBase.seekable', 'boltons.strutils.complement_int_list', 'boltons.iterutils.rstrip_iter', 'boltons.funcutils.format_invocation', 'boltons.socketutils.NetstringSocket.read_ns', 'boltons.iterutils.strip', 'boltons.fileutils.AtomicSaver', 'boltons.setutils.IndexedSet', 'boltons.funcutils.FunctionBuilder.from_func', 'boltons.ecoutils.get_profile_json', 'boltons.iterutils.SequentialGUIDerator']
    
    diff: 20.0.0(original) 21.0.0
    ['boltons.funcutils.wraps', 'boltons.jsonutils.reverse_iter_lines', 'boltons.strutils.strip_ansi', 'boltons.listutils.BarrelList.__iter__', 'boltons.iterutils.SequentialGUIDerator.reseed', 'boltons.funcutils.FunctionBuilder', 'boltons.typeutils.make_sentinel', 'boltons.iterutils.lstrip_iter', 'boltons.ioutils.SpooledIOBase', 'boltons.iterutils.GUIDerator.__init__', 'boltons.iterutils.rstrip', 'boltons.socketutils.NetstringSocket.setmaxsize', 'boltons.formatutils.DeferredValue', 'boltons.iterutils.lstrip', 'boltons.dictutils.OrderedMultiDict', 'boltons.ioutils.SpooledIOBase.readable', 'boltons.funcutils.format_exp_repr', 'boltons.listutils.BarrelList', 'boltons.setutils.IndexedSet.index', 'boltons.funcutils.InstancePartial._partialmethod', 'boltons.iterutils.untyped_sorted', 'boltons.fileutils.copy_tree', 'boltons.funcutils.InstancePartial', 'boltons.funcutils.CachedInstancePartial.__set_name__', 'boltons.dictutils.OrderedMultiDict.addlist', 'boltons.funcutils.CachedInstancePartial', 'boltons.funcutils.CachedInstancePartial._partialmethod', 'boltons.fileutils.atomic_save', 'boltons.dictutils.FrozenDict', 'boltons.setutils.IndexedSet.__rsub__', 'boltons.urlutils.URL', 'boltons.dictutils.OrderedMultiDict.poplast', 'boltons.socketutils.NetstringSocket._calc_msgsize_maxsize', 'boltons.funcutils.noop', 'boltons.funcutils.format_nonexp_repr', 'boltons.iterutils.GUIDerator.__next__', 'boltons.strutils.int_ranges_from_int_list', 'boltons.funcutils.update_wrapper', 'boltons.iterutils.GUIDerator.reseed', 'boltons.setutils.IndexedSet._get_apparent_index', 'boltons.socketutils.NetstringSocket', 'boltons.formatutils.DeferredValue.__init__', 'boltons.funcutils.CachedInstancePartial.__get__', 'boltons.ioutils.SpooledIOBase.writable', 'boltons.fileutils.AtomicSaver.__init__', 'boltons.iterutils.GUIDerator', 'boltons.iterutils.redundant', 'boltons.strutils.multi_replace', 'boltons.iterutils.strip_iter', 'boltons.ioutils.SpooledIOBase.seekable', 
'boltons.strutils.complement_int_list', 'boltons.iterutils.rstrip_iter', 'boltons.funcutils.format_invocation', 'boltons.socketutils.NetstringSocket.read_ns', 'boltons.iterutils.strip', 'boltons.fileutils.AtomicSaver', 'boltons.setutils.IndexedSet', 'boltons.funcutils.FunctionBuilder.from_func', 'boltons.iterutils.bucketize', 'boltons.ecoutils.get_profile_json', 'boltons.iterutils.SequentialGUIDerator']
    
    

    Therefore, we believe that it is quite safe to loosen your dependency on boltons from "boltons==20.0.0" to "boltons>=19.0.0,<=21.0.0". This will improve the applicability of fsdl-text-recognizer-project and reduce the possibility of further dependency conflicts with other projects.

    opened by Agnes-U 0
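The constraint suggested above, expressed as a Pipfile entry (a config fragment; assuming the project pins boltons under [packages]):

```
[packages]
boltons = ">=19.0.0,<=21.0.0"
```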
  • Testing issue "SyntaxError: invalid syntax"

    I'm trying to run the tests in lab1 using the following command in Google Colab, but it gives me the error below:

    pytest -s text_recognizer/tests/test_character_predictor.py

    File "", line 1
      pytest -s text_recognizer/tests/test_character_predictor.py
      ^
    SyntaxError: invalid syntax

    opened by MarMarhoun 0
  • Piplock error: getting an error of incompatible versions even though I have the needed version in my project

    Locking Failed!
    [pipenv.exceptions.ResolutionFailure]: req_dir=requirements_dir
    [pipenv.exceptions.ResolutionFailure]: File "c:\users\ved\anaconda3\envs\fsdl-text-recognizer\lib\site-packages\pipenv\utils.py", line 726, in resolve_deps
    [pipenv.exceptions.ResolutionFailure]: req_dir=req_dir,
    [pipenv.exceptions.ResolutionFailure]: File "c:\users\ved\anaconda3\envs\fsdl-text-recognizer\lib\site-packages\pipenv\utils.py", line 480, in actually_resolve_deps
    [pipenv.exceptions.ResolutionFailure]: resolved_tree = resolver.resolve()
    [pipenv.exceptions.ResolutionFailure]: File "c:\users\ved\anaconda3\envs\fsdl-text-recognizer\lib\site-packages\pipenv\utils.py", line 395, in resolve
    [pipenv.exceptions.ResolutionFailure]: raise ResolutionFailure(message=str(e))
    [pipenv.exceptions.ResolutionFailure]: pipenv.exceptions.ResolutionFailure: ERROR: ERROR: Could not find a version that matches tensorflow-estimator<2.3.0,==2.2.0rc0,>=2.2.0rc0
    [pipenv.exceptions.ResolutionFailure]: Tried: 1.10.6, 1.10.7, 1.10.8, 1.10.9, 1.10.10, 1.10.11, 1.10.12, 1.13.0, 1.14.0, 1.15.0, 1.15.1, 1.15.2, 2.0.0, 2.0.1, 2.1.0, 2.2.0, 2.3.0
    [pipenv.exceptions.ResolutionFailure]: Skipped pre-versions: 1.13.0rc0, 1.14.0rc0, 1.14.0rc1, 2.1.0rc0, 2.2.0rc0, 2.3.0rc0, 2.4.0rc0
    [pipenv.exceptions.ResolutionFailure]: Warning: Your dependencies could not be resolved. You likely have a mismatch in your sub-dependencies.
    First try clearing your dependency cache with $ pipenv lock --clear, then try the original command again.
    Alternatively, you can use $ pipenv install --skip-lock to bypass this mechanism, then run $ pipenv graph to inspect the situation.
    Hint: try $ pipenv lock --pre if it is a pre-release dependency.
    ERROR: ERROR: Could not find a version that matches tensorflow-estimator<2.3.0,==2.2.0rc0,>=2.2.0rc0
    Tried: 1.10.6, 1.10.7, 1.10.8, 1.10.9, 1.10.10, 1.10.11, 1.10.12, 1.13.0, 1.14.0, 1.15.0, 1.15.1, 1.15.2, 2.0.0, 2.0.1, 2.1.0, 2.2.0, 2.3.0
    Skipped pre-versions: 1.13.0rc0, 1.14.0rc0, 1.14.0rc1, 2.1.0rc0, 2.2.0rc0, 2.3.0rc0, 2.4.0rc0
    There are incompatible versions in the resolved dependencies.

    opened by Haaarrrssshhh 0
  • Training error

    When I run this: pipenv run python training/run_experiment.py --save '{"dataset": "EmnistDataset", "model": "CharacterModel", "network": "mlp", "train_args": {"batch_size": 256}}', I get the following error. Please help:

    2020-12-10 14:38:10.508059: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll
    usage: run_experiment.py [-h] [--gpu GPU] [--save] [--nowandb] experiment_config
    run_experiment.py: error: unrecognized arguments: EmnistDataset, model: CharacterModel, network: mlp, train_args: {batch_size: 256}}'

    Do I need to manually install datasets or something?

    opened by Haaarrrssshhh 1
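The "unrecognized arguments" message above typically means the shell split the JSON config into several argv entries; on Windows, cmd.exe does not treat single quotes as grouping, so the config should be wrapped in double quotes with inner quotes escaped. The script expects the whole config as one positional argument, roughly like this sketch (the flag names mirror the usage string in the error; the parsing code is an assumption, not the repo's exact implementation):

```python
import argparse
import json

# Hypothetical sketch of how a run_experiment.py-style CLI consumes
# a single JSON string as its experiment config.
parser = argparse.ArgumentParser(prog="run_experiment.py")
parser.add_argument("--save", action="store_true")
parser.add_argument("experiment_config", type=str)

# The whole JSON must arrive as ONE argv entry; if the shell splits it,
# argparse reports "unrecognized arguments" for the extra pieces.
args = parser.parse_args(
    ["--save", '{"dataset": "EmnistDataset", "train_args": {"batch_size": 256}}']
)
config = json.loads(args.experiment_config)
```

On cmd.exe the equivalent quoting would look like "{\"dataset\": \"EmnistDataset\", ...}" rather than the single-quoted form shown in the labs' POSIX examples.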
  • Pip-sync error: when I run pip-sync requirements.txt requirements-dev.txt, I get the following error, and instead of installing packages it uninstalls packages.

    Found existing installation: wincertstore 0.2
    ERROR: Cannot uninstall 'wincertstore'. It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall.
    Traceback (most recent call last):
      File "c:\users\harsh zota\anaconda3\envs\fsdl-text-recognizer\lib\runpy.py", line 193, in _run_module_as_main
        "__main__", mod_spec)
      File "c:\users\harsh zota\anaconda3\envs\fsdl-text-recognizer\lib\runpy.py", line 85, in _run_code
        exec(code, run_globals)
      File "C:\Users\Harsh Zota\anaconda3\envs\fsdl-text-recognizer\Scripts\pip-sync.exe\__main__.py", line 7, in
      File "C:\Users\Harsh Zota\AppData\Roaming\Python\Python37\site-packages\click\core.py", line 829, in __call__
        return self.main(*args, **kwargs)
      File "C:\Users\Harsh Zota\AppData\Roaming\Python\Python37\site-packages\click\core.py", line 782, in main
        rv = self.invoke(ctx)
      File "C:\Users\Harsh Zota\AppData\Roaming\Python\Python37\site-packages\click\core.py", line 1066, in invoke
        return ctx.invoke(self.callback, **ctx.params)
      File "C:\Users\Harsh Zota\AppData\Roaming\Python\Python37\site-packages\click\core.py", line 610, in invoke
        return callback(*args, **kwargs)
      File "c:\users\harsh zota\anaconda3\envs\fsdl-text-recognizer\lib\site-packages\piptools\scripts\sync.py", line 153, in cli
        ask=ask,
      File "c:\users\harsh zota\anaconda3\envs\fsdl-text-recognizer\lib\site-packages\piptools\sync.py", line 197, in sync
        + sorted(to_uninstall)
      File "c:\users\harsh zota\anaconda3\envs\fsdl-text-recognizer\lib\subprocess.py", line 363, in check_call
        raise CalledProcessError(retcode, cmd)
    subprocess.CalledProcessError: Command '['c:\\users\\harsh zota\\anaconda3\\envs\\fsdl-text-recognizer\\python.exe', '-m', 'pip', 'uninstall', '-y', 'wincertstore']' returned non-zero exit status 1.

    opened by Haaarrrssshhh 1
  • Not able to set up python environment

    When I create the environment on my machine, I get the error below:

    Collecting package metadata (repodata.json): done
    Solving environment: failed

    ResolvePackageNotFound:

    • cudatoolkit=10.1
    • cudnn=7.6
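
    `ResolvePackageNotFound` for `cudatoolkit`/`cudnn` typically means conda cannot find builds of those GPU packages for the current platform or configured channels. On a machine without a matching NVIDIA toolchain, one hedged workaround is to drop the GPU-only pins from `environment.yml` before running `conda env create`. The fragment below is a sketch: the environment name matches the traceback above, but the rest of the dependency list is an assumption, not taken from the repo.

    ```yaml
    # environment.yml (sketch): GPU-only pins commented out for CPU-only machines
    name: fsdl-text-recognizer
    dependencies:
      - python=3.7
      # - cudatoolkit=10.1  # requires an NVIDIA GPU and matching driver
      # - cudnn=7.6
    ```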
    opened by hks1 1
Full body anonymization - Realistic Full-Body Anonymization with Surface-Guided GANs

Realistic Full-Body Anonymization with Surface-Guided GANs This is the official

Håkon Hukkelås 30 Nov 18, 2022
A Low Complexity Speech Enhancement Framework for Full-Band Audio (48kHz) based on Deep Filtering.

DeepFilterNet A Low Complexity Speech Enhancement Framework for Full-Band Audio (48kHz) based on Deep Filtering. libDF contains Rust code used for dat

Hendrik Schröter 292 Dec 25, 2022
Text-to-SQL in the Wild: A Naturally-Occurring Dataset Based on Stack Exchange Data

SEDE SEDE (Stack Exchange Data Explorer) is new dataset for Text-to-SQL tasks with more than 12,000 SQL queries and their natural language description

Rupert. 83 Nov 11, 2022
This repository contains the PyTorch implementation of the paper STaCK: Sentence Ordering with Temporal Commonsense Knowledge appearing at EMNLP 2021.

STaCK: Sentence Ordering with Temporal Commonsense Knowledge This repository contains the pytorch implementation of the paper STaCK: Sentence Ordering

Deep Cognition and Language Research (DeCLaRe) Lab 23 Dec 16, 2022
The code for the NSDI'21 paper "BMC: Accelerating Memcached using Safe In-kernel Caching and Pre-stack Processing".

BMC The code for the NSDI'21 paper "BMC: Accelerating Memcached using Safe In-kernel Caching and Pre-stack Processing". BibTex entry available here. B

Orange 383 Dec 16, 2022
JstDoS - HTTP Protocol Stack Remote Code Execution Vulnerability

jstDoS If you are going to skid that, please give credits ! ^^ How does it work? This

apolo 4 Feb 11, 2022
I3-master-layout - Simple master and stack layout script

Simple master and stack layout script

Tobias S 18 Dec 5, 2022
(CVPR 2022) A minimalistic mapless end-to-end stack for joint perception, prediction, planning and control for self driving.

LAV Learning from All Vehicles Dian Chen, Philipp Krähenbühl CVPR 2022 (also arXiV 2203.11934) This repo contains code for paper Learning from all veh

Dian Chen 300 Dec 15, 2022
FastyAPI is a Stack boilerplate optimised for heavy loads.

FastyAPI A FastAPI based Stack boilerplate for heavy loads. Explore the docs » View Demo · Report Bug · Request Feature Table of Contents About The Pr

Ali Chaayb 47 Dec 27, 2022
UniMoCo: Unsupervised, Semi-Supervised and Full-Supervised Visual Representation Learning

UniMoCo: Unsupervised, Semi-Supervised and Full-Supervised Visual Representation Learning This is the official PyTorch implementation for UniMoCo pape

dddzg 49 Jan 2, 2023
Providing the solutions for high-frequency trading (HFT) strategies using data science approaches (Machine Learning) on Full Orderbook Tick Data.

Modeling High-Frequency Limit Order Book Dynamics Using Machine Learning Framework to capture the dynamics of high-frequency limit order books. Overvi

Chang-Shu Chung 1.3k Jan 7, 2023
CoCosNet v2: Full-Resolution Correspondence Learning for Image Translation

CoCosNet v2: Full-Resolution Correspondence Learning for Image Translation (CVPR 2021, oral presentation) CoCosNet v2: Full-Resolution Correspondence

Microsoft 308 Dec 7, 2022
CL-Gym: Full-Featured PyTorch Library for Continual Learning

CL-Gym: Full-Featured PyTorch Library for Continual Learning CL-Gym is a small yet very flexible library for continual learning research and developme

Iman Mirzadeh 36 Dec 25, 2022
FCA: Learning a 3D Full-coverage Vehicle Camouflage for Multi-view Physical Adversarial Attack

FCA: Learning a 3D Full-coverage Vehicle Camouflage for Multi-view Physical Adversarial Attack Case study of the FCA. The code can be find in FCA. Cas

IDRL 21 Dec 15, 2022
Ivy is a templated deep learning framework which maximizes the portability of deep learning codebases.

Ivy is a templated deep learning framework which maximizes the portability of deep learning codebases. Ivy wraps the functional APIs of existing frameworks. Framework-agnostic functions, libraries and layers can then be written using Ivy, with simultaneous support for all frameworks. Ivy currently supports Jax, TensorFlow, PyTorch, MXNet and Numpy. Check out the docs for more info!

Ivy 8.2k Jan 2, 2023
Deep learning (neural network) based remote photoplethysmography: how to extract pulse signal from video using deep learning tools

Deep-rPPG: Camera-based pulse estimation using deep learning tools Deep learning (neural network) based remote photoplethysmography: how to extract pu

Terbe Dániel 138 Dec 17, 2022
deep-table implements various state-of-the-art deep learning and self-supervised learning algorithms for tabular data using PyTorch.

deep-table implements various state-of-the-art deep learning and self-supervised learning algorithms for tabular data using PyTorch.

null 63 Oct 17, 2022
Time-series-deep-learning - Developing Deep learning LSTM, BiLSTM models, and NeuralProphet for multi-step time-series forecasting of stock price.

Stock Price Prediction Using Deep Learning Univariate Time Series Predicting stock price using historical data of a company using Neural networks for

Abdultawwab Safarji 7 Nov 27, 2022
FTIR-Deep Learning - FTIR Deep Learning With Python

CANDIY-spectrum Human analyis of chemical spectra such as Mass Spectra (MS), Inf

Wei Mei 1 Jan 3, 2022