BlobHunter

Find exposed data in Azure with this public blob scanner.

A tool for scanning Azure storage accounts for publicly accessible blobs.
BlobHunter is part of the "Hunting Azure Blobs Exposes Millions of Sensitive Files" research:
https://www.cyberark.com/resources/threat-research-blog/hunting-azure-blobs-exposes-millions-of-sensitive-files

Overview

BlobHunter helps you identify Azure blob storage containers that store files publicly accessible to anyone on the internet.
It can help you find poorly configured containers holding sensitive data.
This is especially useful in large Azure subscriptions with many storage accounts that are otherwise hard to track.
BlobHunter produces an informative CSV report with important details on each publicly accessible container in the scanned environment.

Requirements

  1. Python 3.5+

  2. Azure CLI

  3. requirements.txt packages

  4. An Azure user with one of the applicable built-in roles, or any Azure user with a role that allows performing the following Azure actions:

    Microsoft.Resources/subscriptions/read
    Microsoft.Resources/subscriptions/resourceGroups/read
    Microsoft.Storage/storageAccounts/read
    Microsoft.Storage/storageAccounts/listkeys/action
    Microsoft.Storage/storageAccounts/blobServices/containers/read
    Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read
    

Build

Example installation on Ubuntu:

curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash
pip3 install -r requirements.txt

Usage

Simply run

python3 BlobHunter.py

If you are not logged in to the Azure CLI, a browser window will open, prompting you for your Azure user credentials.
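
For orientation, the core check BlobHunter performs for each storage account is conceptually similar to the sketch below (illustrative only, using the azure-storage-blob SDK; the function name is ours, not the tool's actual code):

from azure.storage.blob import BlobServiceClient

def find_public_containers(account_name, account_key):
    # List containers and keep those whose public-access level is not private.
    client = BlobServiceClient(
        account_url="https://{}.blob.core.windows.net".format(account_name),
        credential=account_key,
    )
    public = []
    for container in client.list_containers():
        # public_access is None for private containers, "blob" or "container" otherwise
        if container.public_access is not None:
            public.append(container.name)
    return public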

Demo

[BlobHunter demo GIF]

References

For any questions or feedback, please contact Daniel Niv, Asaf Hecht, and CyberArk Labs. This project is not accepting contributions at this time.

License

Copyright (c) 2021 CyberArk Software Ltd. All rights reserved.
Licensed under the MIT License.
For the full license text see LICENSE.

Comments
  • This request is not authorized to perform this operation

    Hi,

    I am the owner of the subscription and I am still getting the error below:

    Traceback (most recent call last):
      File "/usr/local/lib/python3.7/dist-packages/azure/storage/blob/_models.py", line 399, in _get_next_cb
        use_location=self.location_mode)
      File "/usr/local/lib/python3.7/dist-packages/azure/storage/blob/_generated/operations/_service_operations.py", line 336, in list_containers_segment
        raise models.StorageErrorException(response, self._deserialize)
    azure.storage.blob._generated.models._models_py3.StorageErrorException: Operation returned an invalid status 'This request is not authorized to perform this operation.'

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "BlobHunter.py", line 222, in <module>
        main()
      File "BlobHunter.py", line 215, in main
        check_subscription(tenants_ids[i], tenants_names[i], subs_ids[i], subs_names[i], credentials)
      File "BlobHunter.py", line 103, in check_subscription
        public_containers = check_storage_account(account, key)
      File "BlobHunter.py", line 56, in check_storage_account
        for cont in containers:
      File "/usr/local/lib/python3.7/dist-packages/azure/core/paging.py", line 129, in __next__
        return next(self._page_iterator)
      File "/usr/local/lib/python3.7/dist-packages/azure/core/paging.py", line 76, in __next__
        self._response = self._get_next(self.continuation_token)
      File "/usr/local/lib/python3.7/dist-packages/azure/storage/blob/_models.py", line 401, in _get_next_cb
        process_storage_error(error)
      File "/usr/local/lib/python3.7/dist-packages/azure/storage/blob/_shared/response_handlers.py", line 147, in process_storage_error
        raise error
    azure.core.exceptions.HttpResponseError: This request is not authorized to perform this operation.
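
    This error usually comes from the storage data plane rather than from ARM; a common cause is the storage account's firewall (network rules) blocking the client's IP even when the caller is a subscription Owner. A defensive sketch of a possible workaround (our assumption, not the project's code) that skips such accounts instead of aborting the scan:

    from azure.core.exceptions import HttpResponseError

    def list_containers_safely(blob_service_client, account_name):
        # Skip storage accounts the caller cannot read instead of crashing.
        try:
            return list(blob_service_client.list_containers())
        except HttpResponseError as err:
            print("[!] Skipping {}: {}".format(account_name, err.message))
            return []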

    kind/bug 
    opened by mdaslamansari 4
  • Script doesn't work if there is no blob endpoint

    Summary

    The script breaks if a storage account has no blob endpoint.

    Steps to Reproduce

    Steps to reproduce the behavior:

    1. Create a subscription with storage accounts
    2. Leave some storage accounts without a blob endpoint (only a file endpoint, etc.)
    3. Run the script
    4. An error is thrown

    Actual Results (including error logs, if applicable)

    Traceback (most recent call last):
      File ".\BlobHunter.py", line 229, in <module>
        main()
      File ".\BlobHunter.py", line 222, in main
        check_subscription(tenants_ids[i], tenants_names[i], subs_ids[i], subs_names[i], credentials)
      File ".\BlobHunter.py", line 110, in check_subscription
        public_containers = check_storage_account(account, key)
      File ".\BlobHunter.py", line 60, in check_storage_account
        for cont in containers:
      File "C:\Users\xxxxxxx\Desktop\blobhunt\lib\site-packages\azure\core\paging.py", line 129, in __next__
        return next(self._page_iterator)
      File "C:\Users\xxxxxxx\Desktop\blobhunt\lib\site-packages\azure\core\paging.py", line 76, in __next__
        self._response = self._get_next(self.continuation_token)
      File "C:\Users\xxxxxxx\Desktop\blobhunt\lib\site-packages\azure\storage\blob\_models.py", line 402, in _get_next_cb
        use_location=self.location_mode)
      File "C:\Users\xxxxxxx\Desktop\blobhunt\lib\site-packages\azure\storage\blob\_generated\operations\_service_operations.py", line 356, in list_containers_segment
        pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
      File "C:\Users\xxxxxxx\Desktop\blobhunt\lib\site-packages\azure\core\pipeline\_base.py", line 211, in run
        return first_node.send(pipeline_request)  # type: ignore
      File "C:\Users\xxxxxxx\Desktop\blobhunt\lib\site-packages\azure\core\pipeline\_base.py", line 71, in send
        response = self.next.send(request)
      File "C:\Users\xxxxxxx\Desktop\blobhunt\lib\site-packages\azure\core\pipeline\_base.py", line 71, in send
        response = self.next.send(request)
      File "C:\Users\xxxxxxx\Desktop\blobhunt\lib\site-packages\azure\core\pipeline\_base.py", line 71, in send
        response = self.next.send(request)
      [Previous line repeated 1 more times]
      File "C:\Users\xxxxxxx\Desktop\blobhunt\lib\site-packages\azure\core\pipeline\policies\_redirect.py", line 158, in send
        response = self.next.send(request)
      File "C:\Users\xxxxxxx\Desktop\blobhunt\lib\site-packages\azure\core\pipeline\_base.py", line 71, in send
        response = self.next.send(request)
      File "C:\Users\xxxxxxx\Desktop\blobhunt\lib\site-packages\azure\storage\blob\_shared\policies.py", line 515, in send
        raise err
      File "C:\Users\xxxxxxx\Desktop\blobhunt\lib\site-packages\azure\storage\blob\_shared\policies.py", line 489, in send
        response = self.next.send(request)
      File "C:\Users\xxxxxxx\Desktop\blobhunt\lib\site-packages\azure\core\pipeline\_base.py", line 71, in send
        response = self.next.send(request)
      File "C:\Users\xxxxxxx\Desktop\blobhunt\lib\site-packages\azure\core\pipeline\_base.py", line 71, in send
        response = self.next.send(request)
      File "C:\Users\xxxxxxx\Desktop\blobhunt\lib\site-packages\azure\core\pipeline\_base.py", line 71, in send
        response = self.next.send(request)
      File "C:\Users\xxxxxxx\Desktop\blobhunt\lib\site-packages\azure\storage\blob\_shared\policies.py", line 290, in send
        response = self.next.send(request)
      File "C:\Users\xxxxxxx\Desktop\blobhunt\lib\site-packages\azure\core\pipeline\_base.py", line 71, in send
        response = self.next.send(request)
      File "C:\Users\xxxxxxx\Desktop\blobhunt\lib\site-packages\azure\core\pipeline\_base.py", line 71, in send
        response = self.next.send(request)
      File "C:\Users\xxxxxxx\Desktop\blobhunt\lib\site-packages\azure\core\pipeline\_base.py", line 103, in send
        self._sender.send(request.http_request, **request.context.options),
      File "C:\Users\xxxxxxx\Desktop\blobhunt\lib\site-packages\azure\core\pipeline\transport\_requests_basic.py", line 261, in send
        raise error
    azure.core.exceptions.ServiceRequestError: <urllib3.connection.HTTPSConnection object at 0x000002410ACAB198>: Failed to establish a new connection: [Errno 11001] getaddrinfo failed

    Environment setup

    Azure account with Storage Account Contributor permission
    Windows Server 2016

    Additional information

    I suspect the error is caused by the script requesting an invalid URI; the getaddrinfo failure indicates there is no DNS record for that URI. I double-checked by accessing the URI directly in a web browser.
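
    A possible guard (a sketch, assuming the management-plane StorageAccount object exposes primary_endpoints; not the project's actual fix) would be to skip accounts without a blob endpoint before listing containers:

    def scannable_accounts(accounts):
        # Yield only storage accounts that expose a blob endpoint, so
        # file-only or queue-only accounts are never queried.
        for account in accounts:
            endpoints = getattr(account, "primary_endpoints", None)
            if endpoints is not None and getattr(endpoints, "blob", None):
                yield account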

    kind/bug contributor 
    opened by fontvu 2
  • Script breaks if there are inactive subscriptions

    Summary

    When a subscription is deleted, it keeps showing up in a deleted/inactive state for the next 90 days. If such subscriptions are present in the tenant, the script breaks (a workaround sketch is included after the steps below).

    Steps to Reproduce

    Steps to reproduce the behavior:

    1. Create a test subscription in Azure
    2. Delete the subscription. Notice that it still shows up with status 'Disabled'
    3. Run the script
    4. We get this error:

    Traceback (most recent call last):
      File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/runpy.py", line 192, in _run_module_as_main
        return _run_code(code, main_globals, None,
      File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/runpy.py", line 85, in _run_code
        exec(code, run_globals)
      File "/Users/pravsing/.vscode/extensions/ms-python.python-2021.3.658691958/pythonFiles/lib/python/debugpy/__main__.py", line 45, in <module>
        cli.main()
      File "/Users/pravsing/.vscode/extensions/ms-python.python-2021.3.658691958/pythonFiles/lib/python/debugpy/../debugpy/server/cli.py", line 444, in main
        run()
      File "/Users/pravsing/.vscode/extensions/ms-python.python-2021.3.658691958/pythonFiles/lib/python/debugpy/../debugpy/server/cli.py", line 285, in run_file
        runpy.run_path(target_as_str, run_name=compat.force_str("__main__"))
      File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/runpy.py", line 262, in run_path
        return _run_module_code(code, init_globals, run_name,
      File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/runpy.py", line 95, in _run_module_code
        _run_code(code, mod_globals, init_globals,
      File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/runpy.py", line 85, in _run_code
        exec(code, run_globals)
      File "/Users/pravsing/Documents/Projects/Azure/BlobHunter/BlobHunter.py", line 227, in <module>
        main()
      File "/Users/pravsing/Documents/Projects/Azure/BlobHunter/BlobHunter.py", line 220, in main
        check_subscription(tenants_ids[i], tenants_names[i], subs_ids[i], subs_names[i], credentials)
      File "/Users/pravsing/Documents/Projects/Azure/BlobHunter/BlobHunter.py", line 78, in check_subscription
        resource_groups = [group.name for group in list(group_list)]
      File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/azure/core/paging.py", line 129, in __next__
        return next(self._page_iterator)
      File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/azure/core/paging.py", line 76, in __next__
        self._response = self._get_next(self.continuation_token)
      File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/azure/mgmt/resource/resources/v2020_06_01/operations/_resource_groups_operations.py", line 584, in get_next
        map_error(status_code=response.status_code, response=response, error_map=error_map)
      File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/azure/core/exceptions.py", line 102, in map_error
        raise error
    azure.core.exceptions.ResourceNotFoundError: (SubscriptionNotFound) The subscription 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx' could not be found.
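
    A hedged workaround sketch (illustrative, assuming the azure-mgmt-resource SubscriptionClient; not the project's code): filter out subscriptions that are not in the Enabled state before scanning them:

    def enabled_subscriptions(subscription_client):
        # Deleted subscriptions linger as 'Disabled' for up to 90 days and
        # raise SubscriptionNotFound when queried, so scan only enabled ones.
        for sub in subscription_client.subscriptions.list():
            if sub.state == "Enabled":  # SubscriptionState behaves like a str
                yield sub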

    Reproducible

    • [x] Always
    • [ ] Sometimes
    • [ ] Non-Reproducible
    kind/bug 
    opened by pravinsingh 2
  • Error when running on Ubuntu

    Summary

    Running this on WSL 2 using Ubuntu 20.04. The commands used were python3 BlobHunter.py and sudo python3 BlobHunter.py, after installing the requirements (a defensive sketch follows the tracebacks below).

    Traceback (most recent call last):
      File "BlobHunter.py", line 19, in get_credentials
        stderr=subprocess.DEVNULL).decode("utf-8")
      File "/usr/lib/python3.6/subprocess.py", line 356, in check_output
        **kwargs).stdout
      File "/usr/lib/python3.6/subprocess.py", line 438, in run
        output=stdout, stderr=stderr)
    subprocess.CalledProcessError: Command 'az account show --query user.name' returned non-zero exit status 127.

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "BlobHunter.py", line 222, in <module>
        main()
      File "BlobHunter.py", line 204, in main
        credentials = get_credentials()
      File "BlobHunter.py", line 22, in get_credentials
        subprocess.check_output("az login", shell=True, stderr=subprocess.DEVNULL)
      File "/usr/lib/python3.6/subprocess.py", line 356, in check_output
        **kwargs).stdout
      File "/usr/lib/python3.6/subprocess.py", line 438, in run
        output=stdout, stderr=stderr)
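
    Exit status 127 means the shell could not find the az binary at all, so the Azure CLI is likely not on the PATH seen by Python's subprocess. A more defensive credential lookup might look like the sketch below (our assumption of a fix, not the project's code); it surfaces the missing binary instead of a bare CalledProcessError:

    import shutil
    import subprocess

    def get_az_username():
        # Fail fast with a clear message if the Azure CLI is not on PATH.
        if shutil.which("az") is None:
            raise RuntimeError("Azure CLI ('az') was not found on PATH")
        try:
            out = subprocess.check_output(
                ["az", "account", "show", "--query", "user.name", "-o", "tsv"],
                stderr=subprocess.DEVNULL,
            )
            return out.decode("utf-8").strip()
        except subprocess.CalledProcessError:
            return None  # not logged in yet; the caller can run `az login`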

    kind/bug 
    opened by ryan-4824 2
  • Throttling Limits

    Summary

    After some testing, we can see that the tool is not well suited for users who have access to large subscriptions. In Azure subscriptions with many storage accounts, a throttling error occurs due to the number of requests made to the Azure API.

    It would be useful if the script controlled the number of requests per second in order to avoid the throttling error (see the sketch below). Another option could be the ability to choose between scanning a subscription or a resource group.
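
    One possible mitigation (a sketch, not the project's implementation): wrap the paged API calls in a retry helper that honors the Retry-After header Azure sends with HTTP 429 responses:

    import time
    from azure.core.exceptions import HttpResponseError

    def with_throttle_retry(operation, max_retries=5):
        # Re-run `operation` after the server-suggested pause on HTTP 429.
        for attempt in range(max_retries):
            try:
                return operation()
            except HttpResponseError as err:
                if err.status_code != 429 or attempt == max_retries - 1:
                    raise
                retry_after = int(err.response.headers.get("Retry-After", "5"))
                time.sleep(retry_after)

    It could be called, for example, as with_throttle_retry(lambda: list(client.list_containers())).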

    Steps to Reproduce

    A simple execution after az login.

    [screenshot: Error_Blobhunter]

    kind/bug 
    opened by windic 2
  • ModuleNotFoundError: No module named 'azure'

    Getting the error below. I have the Azure CLI and Python 3.8 64-bit (installed from Python.org) on Windows 10.

    What am I doing wrong? Thanks!

    C:\BlobHunter-main\BlobHunter-main>python Blobhunter.py
    Traceback (most recent call last):
      File "Blobhunter.py", line 1, in <module>
        import azure.core.exceptions
    ModuleNotFoundError: No module named 'azure'

    opened by seakish 2
  • Error on Kali Linux: get_credentials non-zero exit status

    Summary

    When running BlobHunter on Kali Linux 2022.3, the script execution fails after logging into Azure.

    Traceback (most recent call last):
      File "/home/kali/Desktop/scripts/azure/BlobHunter/BlobHunter.py", line 21, in get_credentials
        username = subprocess.check_output("az account show --query user.name", shell=True,
      File "/usr/lib/python3.10/subprocess.py", line 420, in check_output
        return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
      File "/usr/lib/python3.10/subprocess.py", line 524, in run
        raise CalledProcessError(retcode, process.args,
    subprocess.CalledProcessError: Command 'az account show --query user.name' returned non-zero exit status 1.
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "/home/kali/Desktop/scripts/azure/BlobHunter/BlobHunter.py", line 293, in <module>
        main()
      File "/home/kali/Desktop/scripts/azure/BlobHunter/BlobHunter.py", line 273, in main
        credentials = get_credentials()
      File "/home/kali/Desktop/scripts/azure/BlobHunter/BlobHunter.py", line 25, in get_credentials
        subprocess.check_output("az login", shell=True, stderr=subprocess.DEVNULL)
      File "/usr/lib/python3.10/subprocess.py", line 420, in check_output
        return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
      File "/usr/lib/python3.10/subprocess.py", line 524, in run
        raise CalledProcessError(retcode, process.args,
    subprocess.CalledProcessError: Command 'az login' returned non-zero exit status 1.
    

    azure-cli was installed via:

    1. sudo apt -y install azure-cli
    2. the official install how-to https://docs.microsoft.com/en-us/cli/azure/install-azure-cli-linux?pivots=apt#option-2-step-by-step-installation-instructions
    kind/bug contributor 
    opened by winklerrr 1
  • Run for specific subscription

    May I request a new feature to allow selecting specific subscriptions to scan, or excluding some subscriptions from the scan?

    We have one subscription where a storage account has lots of containers, and because of it the script times out. We want to exclude that subscription from the scans (see the sketch below).
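
    Until such a feature exists, a hypothetical command-line filter could look like the sketch below (flags and names are illustrative; BlobHunter does not currently support them):

    import argparse

    def parse_scope_args():
        # Hypothetical flags for scoping the scan.
        parser = argparse.ArgumentParser(description="BlobHunter scan scope")
        parser.add_argument("--subscription", action="append", default=[],
                            help="scan only these subscription IDs")
        parser.add_argument("--exclude-subscription", action="append", default=[],
                            help="skip these subscription IDs")
        return parser.parse_args()

    def in_scope(sub_id, args):
        # A subscription is scanned only if it passes both filters.
        if args.subscription and sub_id not in args.subscription:
            return False
        return sub_id not in args.exclude_subscription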

    kind/enhancement 
    opened by mdaslamansari 1
  • Problem getting credentials

    I can't get it to work. I got the following errors:

    Traceback (most recent call last):
      File "C:\BlobHunter-main\BlobHunter.py", line 18, in get_credentials
        username = subprocess.check_output("az account show --query user.name", shell=True,
      File "C:\Users\schni\AppData\Local\Programs\Python\Python39\lib\subprocess.py", line 420, in check_output
        return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
      File "C:\Users\schni\AppData\Local\Programs\Python\Python39\lib\subprocess.py", line 524, in run
        raise CalledProcessError(retcode, process.args,
    subprocess.CalledProcessError: Command 'az account show --query user.name' returned non-zero exit status 1.

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "C:\BlobHunter-main\BlobHunter.py", line 222, in <module>
        main()
      File "C:\BlobHunter-main\BlobHunter.py", line 204, in main
        credentials = get_credentials()
      File "C:\BlobHunter-main\BlobHunter.py", line 22, in get_credentials
        subprocess.check_output("az login", shell=True, stderr=subprocess.DEVNULL)
      File "C:\Users\schni\AppData\Local\Programs\Python\Python39\lib\subprocess.py", line 420, in check_output
        return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
      File "C:\Users\schni\AppData\Local\Programs\Python\Python39\lib\subprocess.py", line 524, in run
        raise CalledProcessError(retcode, process.args,
    subprocess.CalledProcessError: Command 'az login' returned non-zero exit status 1.

    opened by basziee 1
  • Handle Throttling limit error improvement part 2

    Changes:

    1. Improve the try/except handling logic
    2. When a throttling limit error occurs, set the wait time to the value of the "Retry-After" response header

    Changelog

    • [ ] The CHANGELOG has been updated, or
    • [ ] This PR does not include user-facing changes and doesn't require a CHANGELOG update

    Test coverage

    • [ ] This PR includes new unit and integration tests to go with the code changes, or
    • [x] The changes in this PR do not require tests

    Documentation

    • [ ] Docs (e.g. READMEs) were updated in this PR
    • [ ] A follow-up issue to update official docs has been filed here: insert issue ID
    • [x] This PR does not require updating any documentation

    Behavior

    • [ ] This PR changes product behavior and has been reviewed by a PO, or
    • [ ] These changes are part of a larger initiative that will be reviewed later, or
    • [ ] No behavior was changed with this PR

    Security

    • [ ] Security architect has reviewed the changes in this PR,
    • [ ] These changes are part of a larger initiative with a separate security review, or
    • [ ] There are no security aspects to these changes
    opened by yanivyakobovich 0
  • Handle Throttling limit error improvement

    Changes: improve handling of the throttling limit error.

    Changelog

    • [ ] The CHANGELOG has been updated, or
    • [ ] This PR does not include user-facing changes and doesn't require a CHANGELOG update

    Test coverage

    • [ ] This PR includes new unit and integration tests to go with the code changes, or
    • [x] The changes in this PR do not require tests

    Documentation

    • [ ] Docs (e.g. READMEs) were updated in this PR
    • [ ] A follow-up issue to update official docs has been filed here: insert issue ID
    • [x] This PR does not require updating any documentation

    Behavior

    • [ ] This PR changes product behavior and has been reviewed by a PO, or
    • [ ] These changes are part of a larger initiative that will be reviewed later, or
    • [ ] No behavior was changed with this PR

    Security

    • [ ] Security architect has reviewed the changes in this PR,
    • [ ] These changes are part of a larger initiative with a separate security review, or
    • [ ] There are no security aspects to these changes
    opened by yanivyakobovich 0