Like ThreeJS but for Python and based on wgpu

Overview

pygfx

A render engine, inspired by ThreeJS, but for Python and targeting Vulkan/Metal/DX12 (via wgpu).

Introduction

This is a Python render engine built on top of WGPU (instead of OpenGL).

We take a lot of inspiration from ThreeJS, e.g.:

  • Materials and Geometry are combined in world objects.
  • No event system, but controls that make it relatively easy to integrate with one.
  • Decoupled cameras and controls.
  • The code for the render engines is decoupled from the objects, allowing multiple render engines (e.g. wgpu and svg).

Further we aim for a few niceties:

  • Proper support for high-res screens.
  • Builtin anti-aliasing.
  • Custom post-processing steps.
  • Support for picking objects and parts within objects.
  • (approximate) order-independent transparency (OIT) (not implemented yet).

WGPU is awesome (but also very new)

Working with the WGPU API feels so much nicer than OpenGL. It's well defined, there is no global state, we can use compute shaders, use storage buffers (random access), etc.

Fair enough, the WGPU API is very new and still changing a lot, but eventually it will become stable. One of the biggest downsides right now is the lack of software rendering: no luck trying to run wgpu on a VM or in CI.

Because of how Vulkan et al. work, the WGPU API is aimed at predefining objects and pipelines and then executing them. Almost everything is "prepared". The main reasons for this are consistency and stable drivers, but it also has a big advantage for us Pythoneers: the amount of code per draw, per object, is very limited. This means we can have a lot of objects and still be fast.

As an example, see collections_line.py: drawing 1000 line objects with 30k points each at 57 FPS (on my laptop).

How to build a visualization

See also the examples; they all do something like this (a minimal sketch follows the list):

  • Instantiate a renderer and a canvas to render to.
  • Create a scene and populate it with world objects.
  • Create a camera (and maybe a control).
  • Define an animate function that calls: renderer.render(scene, camera)
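
For illustration, a minimal sketch along the lines of the cube example (a sketch only; exact names may differ between versions, so check the examples folder):

import pygfx as gfx
from wgpu.gui.auto import WgpuCanvas, run

canvas = WgpuCanvas()
renderer = gfx.renderers.WgpuRenderer(canvas)
scene = gfx.Scene()

cube = gfx.Mesh(
    gfx.box_geometry(1, 1, 1),
    gfx.MeshBasicMaterial(color=(0.2, 0.4, 0.6, 1.0)),
)
scene.add(cube)

camera = gfx.PerspectiveCamera(70, 16 / 9)
camera.position.z = 4

def animate():
    renderer.render(scene, camera)

canvas.request_draw(animate)
run()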

On world objects, materials, and geometry

There are a few different world object classes. The class defines (semantically) the kind of object being drawn, e.g. Line, Image, Mesh, Volume. World objects have a position and orientation in the scene, and can have children (other world objects), creating a tree. World objects can also have a geometry and/or a material.

The geometry of an object defines its base data, usually per-vertex attributes such as positions, normals, and texture coordinates. There are several pre-defined geometries, most of which simply define certain 3D shapes.

The material of an object defines how an object is rendered. Usually each WorldObject class has one or more materials associated with it. E.g. a line can be drawn solid, segmented or with arrows. A volume can be rendered as a slice, MIP, or something else.
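
For example, the same geometry can be paired with different materials (a sketch, assuming the LineMaterial and LineSegmentMaterial names used by the line examples):

import numpy as np
import pygfx as gfx

positions = np.array([[0, 0, 0], [1, 1, 0], [2, 0, 0], [3, 1, 0]], dtype=np.float32)
geometry = gfx.Geometry(positions=positions)

# One geometry, two renderings: a continuous line vs. separate segments
solid = gfx.Line(geometry, gfx.LineMaterial(thickness=4.0))
segments = gfx.Line(geometry, gfx.LineSegmentMaterial(thickness=4.0))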

Installation

pip install -U pygfx

Or, to get the latest from GitHub:

pip install -U https://github.com/pygfx/pygfx/archive/main.zip

Current status

Under development, many things can change. We don't even do releases yet ...

Comments
  • Add sphinx gallery

    Closes: https://github.com/pygfx/pygfx/issues/345 #145

    This PR contributes a sphinx gallery that converts examples into a nice RST file including output renders. It also reorganizes the example folder into introductory, feature_demo, and validation examples to track them better and to display examples by category in the gallery.

    Note that this is a large PR. Most of the changes are fairly light-weight, though, so a lot of the touched lines are things like adding a title in the docstring or renaming an example. The critical pieces are in conf.py (the sphinx config) and in utils/gallery_scraper.py.

    opened by FirefoxMetzger 48
  • Screenshot testing of examples on CI

    This PR adds the framework for running examples on CI, just as in https://github.com/pygfx/wgpu-py/pull/238

    Please help me identify which examples we want to run in the test suite!

    opened by Korijn 48
  • Lighting and Physically-Based Rendering system

    This PR is the combination and refactoring of #309 and #319. It implements a preliminary but relatively complete lighting, shadow, and PBR system.

    Main features:

    • Lights & Shadows (and some helpers)

      • AmbientLight
      • DirectionalLight & DirectionalLightShadow
      • PointLight & PointShadow
      • SpotLight & SpotLightShadow
    • New Phong Lighting model and Physically-Based Lighting model

    • IBL support

    • Two materials that are affected by light: MeshPhongMaterial and MeshStandardMaterial

      • The shader logic of MeshPhongMaterial has been greatly modified, so some old examples using MeshPhongMaterial may have some compatibility problems.
      • MeshStandardMaterial is a standard physically based material, using Metallic-Roughness workflow (Disney Principled)
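
    A rough sketch of how the pieces listed above might combine (constructor signatures here are assumptions; see the examples in this PR for the actual API):

    import pygfx as gfx

    scene = gfx.Scene()
    scene.add(gfx.AmbientLight())

    light = gfx.DirectionalLight()
    light.position.set(1, 2, 3)  # direction is assumed to point at the origin
    scene.add(light)

    # A PBR material using the metallic-roughness workflow
    mesh = gfx.Mesh(
        gfx.sphere_geometry(1),
        gfx.MeshStandardMaterial(roughness=0.4, metalness=0.9),
    )
    scene.add(mesh)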

    For me, the process of implementing these features was also a process of exploring some wgpu best practices. There are still some issues that need to be discussed, some of which are noted in the code comments. Maybe I need to sort these out later, and we can discuss some specific issues in detail.

    opened by panxinmiao 37
  • Try to add lighting system

    Try to add a simple lighting system.

    Now there are only PointLight and DirectionalLight, and only MeshPhongMaterial and MeshFlatMaterial are affected by light. When using these two materials, if no light is added to the scene, a default DirectionalLight perpendicular to the screen is used; otherwise, the lights added to the scene are used.

    Maybe some implementations are not good enough yet, but it's a beginning.

    Run the light_viewer.py example to see the specific effect.

    opened by panxinmiao 36
  • rgb Image colors washed out

    Howdie, and thanks for this very cool looking library... just getting my toes wet.

    forgive the basic question, and apologies if this is discussed elsewhere, but I'm seeing washed out colors in the astronaut image when I follow the examples:

    # using either:..
    
    image = gfx.Image(
        gfx.Geometry(grid=tex),
        gfx.ImageBasicMaterial(clim=(0, 255)),
    )
    
    image = gfx.Mesh(
        gfx.plane_geometry(512, 512),
        gfx.MeshBasicMaterial(map=tex.get_view(filter="linear")),
    )
    

    compared to vispy:

    (screenshot: the astronaut image rendered by vispy, for comparison)

    I'm on a mac (12.6), python 3.10, PyQt5 (though I also see it with jupyter-rfb in notebook).

    Curious if this would be a lower-level OS/driver-type issue, or something I can tackle within the pygfx code?

    opened by tlambert03 34
  • Per-point sizes and colors

    Trying my hand at a first contribution. I'm a bit confused by how buffers are bound and accessed.

    Right now, this doesn't work (sizes are not read at all, it seems).

    opened by brisvag 25
  • Text rendering

    An overview of our options. Latest update 16-11-2021.

    Goals

    I think our goals should be:

    • Aim for high quality text rendering (though not necessarily perfect).
    • Make sure that there is a default font that looks good and is complete and always available.
    • Provide support (one way or another) for all languages, including CJK.
    • Preferably support user-provided fonts, so graphs in papers can be publication-worthy quality with a uniform style.
    • Preferably an approach where we can start simple and improve in further steps.

    Methods to render text

    Leveraging a GUI toolkit

    We could leverage the text support of e.g. Qt to create text overlays. We have an example for this in pygfx.

    Bitmaps

    Historically, fonts are rendered using glyph bitmaps that match the screen resolution. All glyphs in use are put into an atlas: a large image/texture that has all glyphs packed together.

    A downside is that the bitmaps should match the screen pixels, so the bitmaps are font-size specific, and rendering glyphs at an angle is not possible without degrading quality.

    SDF

    In 2007, Chris Green from Valve published an approach that uses signed distance fields (SDF) to render glyphs. This approach makes it possible to create a special kind of bitmap (which encodes distances to the edge instead of a mask) that can then be used to render that glyph at any scale and rotation. This was a big step for games, but also for generic visualization.

    There are variations on the way to calculate an SDF. The current industry standard seems to be the anti-aliased Euclidean distance transform.

    MSDF

    Around 2015, Viktor Chlumsky published his Master's thesis, in which he proposes a method for a Multi-channel Signed Distance Field (MSDF). The key contribution is that directional information is encoded too, making it possible to produce sharp corners. It produces better results than normal SDF, at a smaller scale. The fragment shader changes only slightly, causing just a minor performance penalty.

    Glyphy

    The Glyphy C library converts Bezier curves to arcs and then calculates the signed distance to those arcs in the shader, producing better results than normal SDF. It's not well documented, but it is by the author of Harfbuzz ...

    Vector rendering

    It's also possible to render the Bezier curves from the glyphs directly on the GPU. See e.g. http://wdobbie.com/. This approach seems to require some preprocessing to divide the Bezier curves into cells. If this can be changed to create an atlas (i.e. each cell is one glyph), then this may be a feasible approach for a dynamic context like ours. Also see Slug, which looks pretty neat, but is not open source and is covered by a patent.

    Also see this fairly simple approach: https://medium.com/@evanwallace/easy-scalable-text-rendering-on-the-gpu-c3f4d782c5ac. It uses a prepass to calculate the winding number for the rough shape, then composes the final shape and applies anti-aliasing in a fragment shader. A disadvantage (for us) is that it needs multiple passes, making it harder to fit into our renderer, I think.

    How do other libs do text rendering?

    How visvis does text

    https://github.com/almarklein/visvis/tree/master/text

    Visvis produces glyphs on the fly using FreeType. FreeType is available by default on Linux; on Windows it must be installed. If FreeType is not available, visvis falls back on a system that uses pre-rendered fonts. Visvis also ships with FreeSans, so there is always a good sans font available. Further, it can make use of other fonts installed on the system.

    The rendered glyphs are at a fixed resolution and put in an atlas texture. When rendered, the glyph is sampled using a smoothing kernel, giving reasonable results at multiple scales.

    Further, visvis includes a mini-DSL to create bold, italic, and math symbols using escaping with backslashes.

    How vispy does text

    https://github.com/vispy/vispy/tree/main/vispy/util/fonts and https://github.com/vispy/vispy/tree/main/vispy/visuals/text

    Vispy produces SDF glyphs on the fly. It first gets a glyph bitmap using FreeType (via Rougier's FreeType wrapper) on Windows and Linux, and Quartz on MacOS. The bitmap is then converted to an SDF using either the GPU or Cython. It can use most fonts installed on the system. The code to list and load fonts is platform specific.

    The SDF glyphs are packed into an atlas texture, and rendered using regular SDF rendering.

    How Matplotlib does text

    Unless I'm mistaken, they use the GUI backend to render text.

    Discussion

    SDF rendering already provides pretty solid results and is widely used. MSDF seems like a very interesting extension that may make it possible to create better results with smaller SDF sizes. Whether this is true depends also on the complexity of the glyphs - it has been reported that it does not provide much advantage for e.g. Chinese glyphs. We'd need tests to visually judge what glyph sizes are needed in both approaches.

    In terms of dependencies, to create an SDF we can get away with just FreeType to create the bitmaps, plus steal some code from vispy to generate the SDF. For MSDF there is that one project, but it's not that actively maintained, so at this point it may be a liability. It may be worth investigating whether it's possible to generate an MSDF from a bitmap.

    Vector rendering seems to be popular lately. It produces better results, but is more complex and less performant than SDFs. Further, you may need a bitmap fallback for high scales. There is also the question of how to generate the vector data from the font files.

    If you want to render text perfectly, you should take into account kerning, ligatures, combinatory glyphs, right-to-left languages, etc. We can do kerning, but the rest is way more complex. Tools that handle these correctly are e.g. Slug and Harfbuzz. The latter might be feasible to use; there are multiple Python wrappers (see the sketch below).
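
    For example, shaping with the uharfbuzz wrapper looks roughly like this (a sketch, not a committed choice; the font path is illustrative):

    import uharfbuzz as hb

    with open("NotoSans-Regular.ttf", "rb") as f:
        face = hb.Face(hb.Blob(f.read()))
    font = hb.Font(face)

    buf = hb.Buffer()
    buf.add_str("Office")  # the "ffi" may come out as a single ligature glyph
    buf.guess_segment_properties()
    hb.shape(font, buf)

    # Glyph ids plus advances/offsets, with kerning and ligatures applied
    for info, pos in zip(buf.glyph_infos, buf.glyph_positions):
        print(info.codepoint, pos.x_advance)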

    More details on obtaining a glyph

    I imagine that somewhere in our code we have a method create_glyph(font, char) that returns an SDF/MSDF/bezier-curve-set and some metadata. We can place this info in an atlas texture together with other used glyphs. Now, how does this function work?

    The below more or less assumes SDF / MSDF, but similar questions apply to the Bezier-based rendering.

    Using prerendered glyph maps

    We could produce a prerendered glyph atlas, so that create_glyph() samples one glyph from that (offline) atlas and puts it in our GPU atlas. That way, the dependencies for rendering the glyphs are only needed by devs, and not by users. See e.g. https://github.com/Chlumsky/msdf-atlas-gen. A disadvantage is that the fonts and available characters are limited by what atlases we choose to ship. If we'd ship CJK fonts, that would grow the package size a lot, so maybe we'd need a pygfx-cjk-fonts package or something.

    Some measurements for rendering all chars from a font:

    • OpenSans -> 1011 glyphs -> 1.5MB @ 32px, 0.5MB @ 16px
    • FreeSans -> 4372 glyphs -> 7MB @ 32px, 2.3MB @ 16px
    • NotoSans -> 2300 glyphs -> 3.5MB @ 32px, 1.1MB @ 16px
    • NotoSansJP -> 16735 glyphs -> ?? @ 32px, 15.1MB @ 16px

    This shows that with 16x16 MSDF glyphs, the atlas is about 3x the size of the .ttf file, or 9x for 32x32. Not bad, but I don't know yet whether 16x16 is enough, especially for the more complex glyphs in CJK fonts.

    Using a tool to render glyph on the fly

    If we can produce the SDF glyphs from the font file on the user's machine, then we get support for all languages and for custom fonts. The cost is that we need a dependency for this.

    One option is FreeType. There is Rougier's https://github.com/rougier/freetype-py which provides binaries for multiple platforms (but not MacOS M1 or Linux aarch64 yet). IIUC, FreeType can render a glyph bitmap at any scale, but to create an SDF we need to do some extra work ourselves. I am not sure if an MSDF is feasible, because you'd miss the vectors; or can we do edge detection on a high-res glyph to find the orientation?
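
    A rough sketch of that route: render a bitmap with freetype-py, then derive an SDF with a distance transform (scipy is used here purely for illustration):

    import freetype  # pip install freetype-py
    import numpy as np
    from scipy.ndimage import distance_transform_edt

    face = freetype.Face("FreeSans.ttf")
    face.set_pixel_sizes(0, 48)
    face.load_char("A")
    bmp = face.glyph.bitmap
    mask = np.array(bmp.buffer, dtype=np.uint8).reshape(bmp.rows, bmp.width) > 127

    # Signed distance: positive outside the glyph, negative inside
    sdf = distance_transform_edt(~mask) - distance_transform_edt(mask)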

    Another option is https://github.com/Chlumsky/msdfgen. We'd need to build and distribute the code for all platforms that we support. Since this code is not that actively maintained, this feels a bit tricky.

    Typesetting

    Typesetting is the process of positioning the quads that contain the glyphs. This includes kerning (some combinations of glyphs have a custom distance). Also included are justification, alignment, text wrapping, etc.

    I think it's not too hard to do this in pure Python, though existing solutions may make things easier. Examples are Slug and Harfbuzz, but these are not feasible dependencies. I don't know of any Python libraries for typesetting.

    For the kerning we would need the kerning maps. FreeType can provide this info. I think msdfgen also provides it in its CSV/JSON metadata.
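
    For instance, with freetype-py the kerning lookup during layout could look like this (sketch):

    import freetype

    face = freetype.Face("FreeSans.ttf")
    face.set_char_size(48 * 64)  # size in 26.6 fixed-point

    pen_x = 0.0
    prev = None
    for ch in "AVATAR":
        if prev is not None:
            pen_x += face.get_kerning(prev, ch).x / 64  # kerning adjustment
        face.load_char(ch)
        # ... position the glyph quad at pen_x here ...
        pen_x += face.glyph.advance.x / 64
        prev = ch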

    Formatting

    We should consider a formatting approach so users can create bold/italic text etc. Visvis uses escaping, but maybe a micro-HTML subset would be nicer. Or something using braces. Or maybe it's out of scope. This is definitely something that we can do later.

    Fonts of interest

    • OpenSans is a pretty nice and open font.
    • FreeSans is a GNU project that aims to cover almost all Unicode, except CJK.
    • Noto is a family of over 100 fonts that aims to have full Unicode coverage. It basically has a font for each script. The plain NotoSans has over 3k glyphs and with support for Latin, Cyrillic and Greek scripts it covers 855 languages.
    discussion 
    opened by almarklein 25
  • Text rendering

    Closes #271 and closes #20

    Todo:

    • General
      • [x] High level version of the API.
      • [x] Document the text rendering process.
      • [x] Be specific about what font properties affect what, and where the user can set them.
      • [x] Text can be a child of another object.
      • [x] Make it such that some steps can be overloaded for custom behavior.
      • [x] Make it such that the builtin steps can (hopefully) be replaced without affecting the API.
      • [x] Flesh out the API.
      • [x] Make things work for RTL fonts.
      • [x] Make basics work for vertical scripts.
      • [x] Fix bug that when text is replaced, parts of the old text seem to linger sometimes.
      • [x] Implement geometry.bounding_box() (or gfx.show() won't work).
    • Discuss
      • [x] Place to put properties that affect positioning.
      • [x] Screen-space vs world-space, how to express in API.
      • [x] What builtin fonts to include (Noto sans!).
      • [x] What should happen with the text object's transform if it's set to render in screen space.
    • Itemization
      • [x] Plain text
      • [x] Very basic markdown.
      • [x] Tests.
    • Font selection
      • [x] Basic font selection (stub).
      • [x] Include builtin fonts, a selection of noto fonts.
      • [x] Decide what to do about font management. #379
      • [x] Clean up the font manager module.
      • [x] Include / delete builtin fonts.
      • [x] Tests.
    • Shaping
      • [x] Stub shaping that just about works.
      • [x] Basic shaping with Freetype.
      • [x] Advanced shaping with Harfbuzz.
      • [x] Tests.
    • Glyph generation and atlas
      • [x] Glyph generation with Freetype.
      • [x] Atlas that can pack variable sized glyphs.
      • [x] Dynamic atlas sizing (grow and shrink).
      • [x] Tests.
    • Layout (minimal stuff)
      • [x] Put chars in a straight line.
      • [x] Text size.
      • [x] Alignment: TextGeometry.anchor
      • [x] Support custom layout.
      • [x] Rotate text at an angle, also in screen space.
      • [x] Tests.
    • Rendering
      • [x] Render rectangles at the glyph positions.
      • [x] Render the texture content.
      • [x] Actual SDF rendering.
      • [x] Approximate weight.
      • [x] Slanting.
      • [x] Outlines.
      • [x] Picking.
      • [x] Validation examples.
    • Examples
      • [x] Minimal text example.
      • [x] Show white on black, and black on white and make sure they feel equally thick.
      • [x] A waterfall of glyphs, to strain the atlas.
      • [x] Use new gfx.show()
    • Final stuff
      • [x] Update new validation example screenshots.
      • [x] Docs.
      • [x] More tests?
      • [x] Fix todos & self-review.

    Tasks for later are listed in #391.


    Status: can render text, but each character is just a little rectangle. Note: it says "Hello world" :)

    opened by almarklein 24
  • Multiple canvases, subplots, and overlays

    Closes #112, closes #116. Depends on https://github.com/pygfx/wgpu-py/pull/181

    This touches on some of the core mechanics of the renderer and how it interacts with the canvas. This affects having multiple canvases, subplots, overlays, sub-scene rendering, and post-processing.

    • [x] Support for using multiple canvases in the same application, and rendering world objects to multiple canvases, because the device is shared/global.
    • [x] Support for overlays, because you can call renderer.render() multiple times and set a new clear param.
    • [x] Support for subplots, because of a new region param in renderer.render(), which is in logical pixels and is translated to a viewport in physical pixels.
    • [x] Remove post-processing system (in favor of a single submit-step).
    • [x] Fix issues with submit related to texture size and format.
    • [x] Allow multiple renderers per target?
    • [x] Rename any leftover "submits" to "flush to target" or something similar.
    • [ ] ~Maybe provide hooks for submit step in wgpu-py.~
    • [x] Maybe rethink offscreen rendering (if we allow WgpuRenderer(texture)).
    • [x] Resolve any todo's that I added along the way.
    • [x] Update docs in guide a bit. #119

    How the renderer now works (high level POV)

    The renderer's .render(scene, camera) function renders that scene to its internal render target. This target is an implementation detail (it may change, e.g. with OIT). You can call .render() multiple times. The color and depth are automatically cleared before the first .render() call.

    The final result is automatically "submitted" to the texture provided by the canvas. This submit-step includes any FSAA. One can also call .submit_to_texture() to submit the result to another texture instead.
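
    In code, the flow is roughly (a sketch; target_texture is hypothetical):

    renderer.render(scene_a, camera)  # color and depth are cleared before this first call
    renderer.render(scene_b, camera)  # blends into the same internal target

    # The result is flushed to the canvas automatically, or explicitly:
    renderer.submit_to_texture(target_texture)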

    Approach per use-case

    For each of these at least one example is added.

    Use multiple canvases

    You can now use multiple canvases. Each canvas can have exactly one renderer associated with it. You can render one scene to multiple canvases.

    Blending scenes

    By calling .render() multiple times, the results should just blend, as if they were in one scene. For opaque objects this is easy. Eventually this should work with semitransparent objects too (if/when we get OIT working). Not sure what specific use-cases would be, but anyway, this is how it works :)

    Subplots

    By rendering multiple times, but to different region rects, you can implement subplots.
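
    A sketch, assuming the region param is (x, y, w, h) in logical pixels, per the list above:

    w, h = canvas.get_logical_size()
    renderer.render(scene1, camera1, region=(0, 0, w / 2, h))      # left half
    renderer.render(scene2, camera2, region=(w / 2, 0, w / 2, h))  # right half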

    Overlays

    By rendering multiple scenes on top of each other and calling .clear(color=False, depth=True) in between, the last scene is always on top.
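
    Sketch:

    renderer.render(scene, camera)           # the main scene
    renderer.clear(color=False, depth=True)  # keep color, reset depth
    renderer.render(overlay_scene, camera)   # always drawn on top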

    A scene in a scene

    The typical example of having a surface in your scene (a TV screen if you like) that depicts another scene can now be implemented by rendering the subscene via an offscreen canvas, then submitting the result into a texture that is rendered via a mesh in the outer scene.

    Post-processing

    Similar to the above we can make the outer scene just a single quad that draws the inner scene and do something to it. But the canvas and renderer can be re-used.

    An alternative is to provide hooks for the submit step to allow for simple post-processing mechanics. Also e.g. fog. Maybe for later.

    Before this PR there was a system for post-processing steps, basically because the submit step had been generalized. I plan to remove that to simplify things.

    opened by almarklein 23
  • Update the example in the guide

    (I don't think this PR has an issue tracking the problem. If it does, please reference)

    The current example in the guide is broken; gfx.BoxGeometry changed to gfx.box_geometry and the example doesn't add light sources, so you will only see a black screen.

    This PR updates the guide to match what we are currently doing in the cube example. It also rewrites it a bit to reflect the addition of light sources and adds some (static) visualizations.

    I also found a typo in Contributing.md that I'm fixing here.

    opened by FirefoxMetzger 20
  • Default Canvas

    In a previous discussion (https://github.com/pygfx/pygfx/pull/371#discussion_r1014485572) I learned that you often don't need to interact with the canvas object directly when using pygfx. There are, of course, exceptions when this can be advantageous, but in general this is not the case. Because of this, we have introduced renderer.request_draw which wraps canvas.request_draw.

    Taking this thought to its conclusion, I think it would make sense to also see if we can avoid all calls to the canvas by default, so that we only have to instantiate it when needed. In particular, I was wondering if we could change

    from wgpu.gui.auto import WgpuCanvas, run
    import pygfx as gfx
    
    renderer = gfx.renderers.WgpuRenderer(WgpuCanvas())
    

    to

    import pygfx as gfx
    from pygfx import run  # assuming pygfx re-exports run (proposed)
    
    renderer = gfx.renderers.WgpuRenderer()
    

    and make wgpu.gui.auto.WgpuCanvas the default canvas. We would, of course, keep the parameter and allow users to set it manually if needed. My angle here is to see if we can reduce boilerplate code.

    By the same token, we could wrap run and avoid the import of wgpu for minimal examples. It's always nice to not have to import lower-level functionality when dealing with something at a higher level.

    opened by FirefoxMetzger 19
  • Weird line rendering with a long line where relatively few points have a high alpha value

    For example, in this video the length of each of these lines (left to right) is 8410, and only a few points along each line have high alpha values. The artifact is that arrows appear at the edges of the regions where the colors fade out (could it be an interpolation thing?).

    https://user-images.githubusercontent.com/9403332/210306005-19fa7d20-b237-416b-af0e-849ae57a6200.mp4

    The lines are instantiated like this:

    
    material = pygfx.LineMaterial(thickness=20, vertex_colors=True)

    pygfx.Line(
        geometry=pygfx.Geometry(positions=<[8410, 3]>, colors=<[8410, 4]>),
        material=material,
    )
    

    I tried adjusting the camera near and far planes but it doesn't make a difference. The GPU is an NVIDIA RTX 3090, the OS is Debian 11, and vulkaninfo seems fine.

    Anyway, the purpose is to use the color intensity of a very long line that represents timepoints to indicate the magnitude of a quantitative variable. I'm not sure if it's possible to do this with a simple Mesh instead, if that's better than a Line?

    opened by kushalkolar 2
  • Use view-space for lighting and any other rendering calculations

    At present, all rendering calculations in our shader are in world-space, but it is better to convert all objects to view-space before entering the shader.

    That is to say, we don't need to pass "model_matrix" and "view_matrix" separately to the shader; we just pass in their product, "model_view_matrix".

    This has many advantages:

    1. When using view space for calculations, the camera position is fixed at the coordinate origin, which helps simplify some algorithms.

    2. When the scene is large and calculations are done in world space, some numbers may be large (for objects that are far from the origin of the world coordinates), which leads to calculation accuracy problems. In view space, all numbers are relatively small.

    3. Each vertex in the shader needs to perform the operation projection_matrix * view_matrix * model_matrix * local_pos. When using view space, we only need projection_matrix * model_view_matrix * local_pos, which saves one matrix multiplication per vertex (see the sketch below). I think the same holds for other calculations: "constant calculations" (whose results are the same for every vertex or fragment) should be performed on the CPU as much as possible (computed only once), rather than on each vertex or fragment (such as the conversion of a constant color from sRGB space to linear space). This is a good way to improve rendering performance.
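
    A minimal sketch of the CPU side (placeholder matrices; in practice these come from the object and camera transforms):

    import numpy as np

    model_matrix = np.eye(4, dtype=np.float32)  # placeholder object transform
    view_matrix = np.eye(4, dtype=np.float32)   # placeholder camera transform

    # Once per object per frame, on the CPU:
    model_view_matrix = view_matrix @ model_matrix

    # Then per vertex the shader only computes:
    #   clip_pos = projection_matrix * model_view_matrix * vec4(local_pos, 1.0)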

    If we decide to make this change, it may have a large impact and may affect other features being developed at the same time. Therefore, we'd better make this change when the version is relatively stable.

    opened by panxinmiao 0
  • Remove "plot_" prefix from examples?

    The plot_ prefix is a mechanism from Sphinx-gallery to select examples for inclusion in the gallery. We now also have # sphinx_gallery_pygfx_render = True, which we can set to False, right? And the prefix can (I assume) be set to an empty string?

    cc @FirefoxMetzger

    enhancement docs 
    opened by almarklein 4
  • Rename WorldObjectShader.type to WorldObjectShader.kind

    I am looking into the way shaders are written in pygfx. According to the docs, this is done by subclassing WorldObjectShader, which has to define a class-level variable called type. Would it be useful to rename this variable to kind, in order to avoid aliasing the built-in type function?
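
    I.e., roughly (a sketch; the "render" value is illustrative):

    class MyShader(WorldObjectShader):
        kind = "render"  # was: type = "render", which aliases the builtin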

    opened by FirefoxMetzger 1
  • Picking improvements

    • Interactive selection (e.g. by rectangle or polygon)
    • Support retrieving picking info for a rectangular region in a single call, instead of just a single pixel
    • Disable objects from participating in picking
    enhancement 
    opened by Korijn 0