A Python library for Deep Graph Networks

PyDGN

Wiki

Description

This is a Python library to easily experiment with Deep Graph Networks (DGNs). It provides automatic management of data splitting, loading, and the most common experimental settings. It also handles both model selection and risk assessment procedures, by trying many different configurations in parallel (on CPU or GPU). This repository is built upon the PyTorch Geometric library, which provides support for data management.

If you happen to use or modify this code, please remember to cite our tutorial paper:

Bacciu Davide, Errica Federico, Micheli Alessio, Podda Marco: A Gentle Introduction to Deep Learning for Graphs, Neural Networks, 2020. DOI: 10.1016/j.neunet.2020.06.006.

If you are interested in a rigorous evaluation of Deep Graph Networks, check this out:

Errica Federico, Podda Marco, Bacciu Davide, Micheli Alessio: A Fair Comparison of Graph Neural Networks for Graph Classification. Proceedings of the 8th International Conference on Learning Representations (ICLR 2020). Code

Installation:

(We assume git and Miniconda/Anaconda are installed)

First, make sure gcc 5.2.0 is installed: conda install -c anaconda libgcc=5.2.0. Also check that echo $LD_LIBRARY_PATH always contains :/home/[your user name]/miniconda3/lib. Then run the following commands from your terminal:

source setup/install.sh [<your_cuda_version>]
pip install pydgn

Where <your_cuda_version> is an optional argument that can be either cpu, cu102 or cu111 for PyTorch >= 1.8.0. If you do not provide a CUDA version, the script will default to cpu. The script will create a virtual environment named pydgn, with all the packages required to run our code. Important: do NOT run this command using bash instead of source!
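
For example, to create the pydgn environment with CUDA 10.2 support and then install the library:

source setup/install.sh cu102
pip install pydgn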

Remember that the PyTorch macOS binaries don't support CUDA; install PyTorch from source if CUDA is needed.

Usage:

Preprocess your dataset (see also Wiki)

python build_dataset.py --config-file [your data config file]

Example

python build_dataset.py --config-file DATA_CONFIGS/config_PROTEINS.yml 
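
For orientation only, here is a minimal data configuration sketch; the key names below are illustrative assumptions, and the files in DATA_CONFIGS (see also the Wiki) define the actual schema:

# Illustrative sketch only: key names are assumptions, not the official schema
dataset_name: PROTEINS                                 # name of the dataset
dataset_class: pydgn.data.dataset.TUDatasetInterface   # class that handles the dataset
data_root: DATA                                        # root folder where the processed data is stored
splits_folder: DATA_SPLITS/CHEMICAL/                   # folder containing the data split files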

Launch an experiment in debug mode (see also Wiki)

python launch_experiment.py --config-file [your exp. config file] --splits-folder [the splits MAIN folder] --data-splits [the splits file] --data-root [root folder of your data] --dataset-name [name of the dataset] --dataset-class [class that handles the dataset] --max-cpus [max cpu parallelism] --max-gpus [max gpu parallelism] --gpus-per-task [how many gpus to allocate for each job] --final-training-runs [how many final runs when evaluating on test. Results are averaged] --result-folder [folder where to store results]

Example (GPU required)

python launch_experiment.py --config-file MODEL_CONFIGS/config_SupToyDGN_RandomSearch.yml --splits-folder DATA_SPLITS/CHEMICAL/ --data-splits DATA_SPLITS/CHEMICAL/PROTEINS/PROTEINS_outer10_inner1.splits --data-root DATA --dataset-name PROTEINS --dataset-class pydgn.data.dataset.TUDatasetInterface --max-cpus 1 --max-gpus 1 --final-training-runs 1 --result-folder RESULTS/DEBUG

To debug your code, it is useful to add --debug to the command above. Note, however, that the CLI will not work as expected here, because the code is executed sequentially. After debugging, if you still need sequential execution, you can use --max-cpus 1 --max-gpus 1 --gpus-per-task [0/1] without the --debug option.
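
For instance, the debug version of the example above is obtained by simply appending --debug:

python launch_experiment.py --config-file MODEL_CONFIGS/config_SupToyDGN_RandomSearch.yml --splits-folder DATA_SPLITS/CHEMICAL/ --data-splits DATA_SPLITS/CHEMICAL/PROTEINS/PROTEINS_outer10_inner1.splits --data-root DATA --dataset-name PROTEINS --dataset-class pydgn.data.dataset.TUDatasetInterface --max-cpus 1 --max-gpus 1 --final-training-runs 1 --result-folder RESULTS/DEBUG --debug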

Grid Search 101

Have a look at one of the config files.
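
As a rough, illustrative sketch (the hyper-parameter names below are made up; the files in MODEL_CONFIGS are the reference), a grid section lists the values to try for each hyper-parameter, and every combination is evaluated:

grid:
  lr: [0.01, 0.001]          # illustrative hyper-parameter values
  hidden_units: [32, 64]     # all 2 x 2 = 4 combinations will be tried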

Random Search 101

Specify a num_samples in the config file with the number of random trials, replace grid with random, and specify a sampling method for each hyper-parameter. We provide different sampling methods:

  • choice --> pick at random from a list of arguments
  • uniform --> pick uniformly between min and max arguments
  • normal --> sample from a normal distribution with mean and std
  • randint --> pick an integer at random between min and max
  • loguniform --> pick following the reciprocal distribution between log_min and log_max, with a specified base

There is one config file, namely config_SupToyDGN_RandomSearch.yml, which you can check to see an example.
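As an illustrative sketch (the exact nesting of keys is an assumption; config_SupToyDGN_RandomSearch.yml is the authoritative example), a random search section combines num_samples with a sampling method per hyper-parameter:

num_samples: 20                  # number of random trials
random:                          # replaces the grid keyword
  lr:                            # illustrative hyper-parameter
    sample_method: loguniform    # key layout is an assumption; see config_SupToyDGN_RandomSearch.yml
    args: [0.0001, 0.01, 10]     # log_min, log_max, base
  hidden_units:
    sample_method: choice        # picks at random from the list below
    args: [32, 64, 128]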

Data Splits

We provide the data splits taken from

Errica Federico, Podda Marco, Bacciu Davide, Micheli Alessio: A Fair Comparison of Graph Neural Networks for Graph Classification. Proceedings of the 8th International Conference on Learning Representations (ICLR 2020). Code

in the DATA_SPLITS folder.

Credits:

This is a joint project with Marco Podda (Github/Homepage), whom I thank for his relentless dedication.

Many thanks to Antonio Carta (Github/Homepage) for incorporating the Ray library (see v0.4.0) into PyDGN! This will be of tremendous help.

Many thanks to Danilo Numeroso (Github/Homepage) for implementing a very flexible random search! This is a very convenient alternative to grid search.

Contributing

This research software is provided as-is. We are working on this library in our spare time.

If you find a bug, please open an issue to report it, and we will do our best to solve it. For generic/technical questions, please email us rather than opening an issue.

License:

PyDGN is GPL 3.0 licensed, as written in the LICENSE file.

Troubleshooting

As of August 15th, 2021, there is an issue with PyTorch 1.9.0 which impacts the CLI. This is why the setup script installs PyTorch 1.8.1 in the pydgn conda environment, until PyTorch 1.10 is released (which is known to solve the issue).

--

If you get errors like /lib64/libstdc++.so.6: version `GLIBCXX_3.4.21' not found:

  • make sure gcc 5.2.0 is installed: conda install -c anaconda libgcc=5.2.0
  • echo $LD_LIBRARY_PATH should contain :/home/[your user name]/[your anaconda or miniconda folder name]/lib (see the export command after this list)
  • after checking the above points, you can reinstall everything with pip using the --no-cache-dir option
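
If LD_LIBRARY_PATH does not contain the conda library folder, you can append it for the current shell session (adjust the path to match your own Anaconda/Miniconda installation and user name; add the line to your shell profile to make it persistent):

export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$HOME/miniconda3/lib"
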
Comments
  • Keep getting raylet error

    🔨 Describe the bug

    Hi, I keep getting a raylet error when trying to run the example. Is there a way to stop using Ray, since I am running the experiment on my local computer?

    Thank you!

    (raylet) /home/jwtxwd/anaconda3/envs/pydgn/lib/python3.8/site-packages/ray/autoscaler/_private/cli_logger.py:57: FutureWarning: Not all Ray CLI dependencies were found. In Ray 1.4+, the Ray CLI, autoscaler, and dashboard will only be usable via pip install 'ray[default]'. Please update your install command.
    (raylet) warnings.warn(
    2022-10-14 13:08:12,974 WARNING worker.py:1189 -- The agent on node EW22-05284 failed with the following error:
    Traceback (most recent call last):
      File "/home/jwtxwd/anaconda3/envs/pydgn/lib/python3.8/site-packages/ray/new_dashboard/agent.py", line 354, in <module>
        loop.run_until_complete(agent.run())
      File "/home/jwtxwd/anaconda3/envs/pydgn/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
        return future.result()
      File "/home/jwtxwd/anaconda3/envs/pydgn/lib/python3.8/site-packages/ray/new_dashboard/agent.py", line 144, in run
        modules = self._load_modules()
      File "/home/jwtxwd/anaconda3/envs/pydgn/lib/python3.8/site-packages/ray/new_dashboard/agent.py", line 98, in _load_modules
        c = cls(self)
      File "/home/jwtxwd/anaconda3/envs/pydgn/lib/python3.8/site-packages/ray/new_dashboard/modules/reporter/reporter_agent.py", line 148, in __init__
        self._metrics_agent = MetricsAgent(dashboard_agent.metrics_export_port)
      File "/home/jwtxwd/anaconda3/envs/pydgn/lib/python3.8/site-packages/ray/_private/metrics_agent.py", line 75, in __init__
        prometheus_exporter.new_stats_exporter(
      File "/home/jwtxwd/anaconda3/envs/pydgn/lib/python3.8/site-packages/ray/_private/prometheus_exporter.py", line 333, in new_stats_exporter
        exporter = PrometheusStatsExporter(
      File "/home/jwtxwd/anaconda3/envs/pydgn/lib/python3.8/site-packages/ray/_private/prometheus_exporter.py", line 266, in __init__
        self.serve_http()
      File "/home/jwtxwd/anaconda3/envs/pydgn/lib/python3.8/site-packages/ray/_private/prometheus_exporter.py", line 320, in serve_http
        start_http_server(
      File "/home/jwtxwd/anaconda3/envs/pydgn/lib/python3.8/site-packages/prometheus_client/exposition.py", line 168, in start_wsgi_server
        TmpServer.address_family, addr = _get_best_family(addr, port)
      File "/home/jwtxwd/anaconda3/envs/pydgn/lib/python3.8/site-packages/prometheus_client/exposition.py", line 157, in _get_best_family
        infos = socket.getaddrinfo(address, port)
      File "/home/jwtxwd/anaconda3/envs/pydgn/lib/python3.8/socket.py", line 918, in getaddrinfo
        for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
    socket.gaierror: [Errno -2] Name or service not known


    bug 
    opened by jwtxwd 4
  • Training engine and returned data list

    Feature description

    When shuffling the dataset, the training engine will return the data list shuffled according to the permutation of the data sampler. We should make sure that we reorder the data list.

    Ideas on how to do it

    No response

    Additional info

    No response

    opened by diningphil 2
  • [WIP] feat(plotter): Add W&B Logging

    This PR proposes to add Weights & Biases logging to the library, using helpful TensorBoard-based utilities. Instead of a separate logger, for now I propose we simply monkeypatch and upload the TensorBoard logs to W&B.

    Leaving it as a Draft for now to start a conversation.

    CC: @diningphil @gravins

    opened by SauravMaheshkar 2
  • Fix Ray not deallocating GPU memory

    🔨 Describe the bug

    Some idle workers do not release GPU memory. This problem is described (together with a potential fix) here: https://docs.ray.io/en/latest/ray-core/tasks/using-ray-with-gpus.html#workers-not-releasing-gpu-resources

    bug 
    opened by diningphil 1
  • Add option to specify subset of GPUs

    Feature description

    In case one wants to force specific GPU IDs to be used, it should be possible to do so when running pydgn-train.

    Ideas on how to do it

    No response

    Additional info

    No response

    opened by diningphil 1
  • Bump aiohttp from 3.7 to 3.7.4

    Bumps aiohttp from 3.7 to 3.7.4, a Dependabot security update that prevents open redirects in the aiohttp.web.normalize_path_middleware middleware (GHSA-v6wp-4m6f-gcjg).

    dependencies 
    opened by dependabot[bot] 1
  • Bump aiohttp from 3.7 to 3.7.4 in /.github

    Bumps aiohttp from 3.7 to 3.7.4 in /.github; this is the same Dependabot security update, addressing the open-redirect fix in aiohttp.web.normalize_path_middleware (GHSA-v6wp-4m6f-gcjg).

    dependencies 
    opened by dependabot[bot] 1
  • Weighted Additive Loss

    Feature description

    Improve AdditiveLoss by adding the possibility of weighting the different losses.

    Ideas on how to do it

    No response

    Additional info

    No response

    opened by diningphil 1
  • Telegram Bot support

    Feature description

    Add Telegram bot support to send a message whenever an experiment suddenly breaks or the entire set of experiments has finished (with the chance to have finer granularity).

    Ideas on how to do it

    Once you create a bot, it should be easy to send a message to a particular chat.

    Additional info

    No response

    opened by diningphil 1
  • Accumulate predictions for metrics that require global statistics

    Feature description

    Metrics like AP require that all the samples of the train/val/test dataset are taken into account when computing a score. We should add an option that allows, at the cost of greater memory usage, accumulating all predictions and target values until the end of an epoch, and subsequently computing the epoch score.

    Ideas on how to do it

    No response

    Additional info

    No response

    opened by diningphil 1
  • Metric improvement

    Feature description

    Metric should always use the result from the _handle_reduction function, even when accumulating the number of samples.

    This is because _handle_reduction may do something more complicated in some metrics that override the function.

    Be careful, however, to make sure that this works for both "mean" and "sum" reductions.

    Ideas on how to do it

    No response

    Additional info

    No response

    opened by diningphil 1
  • Support for Single-experiment/Multi-GPU

    Feature description

    Allow an experiment to run on multiple GPUs.

    Ideas on how to do it

    No response

    Additional info

    Remember to set the appropriate seed on all GPUs using torch.cuda.manual_seed_all

    opened by diningphil 1