Main repository for VisPy

Overview

VisPy: interactive scientific visualization in Python

Main website: http://vispy.org



VisPy is a high-performance interactive 2D/3D data visualization library. VisPy leverages the computational power of modern Graphics Processing Units (GPUs) through the OpenGL library to display very large datasets. Applications of VisPy include:

  • High-quality interactive scientific plots with millions of points.
  • Direct visualization of real-time data.
  • Fast interactive visualization of 3D models (meshes, volume rendering).
  • OpenGL visualization demos.
  • Scientific GUIs with fast, scalable visualization widgets (Qt or IPython notebook with WebGL).

Announcements

  • Release! Version 0.6.5, September 24, 2020
  • Release! Version 0.6.4, December 13, 2019
  • Release! Version 0.6.3, November 27, 2019
  • Release! Version 0.6.2, November 4, 2019
  • Release! Version 0.6.1, July 28, 2019
  • Release! Version 0.6.0, July 11, 2019
  • Release! Version 0.5.3, March 28, 2018
  • Release! Version 0.5.2, December 11, 2017
  • Release! Version 0.5.1, November 4, 2017
  • Release! Version 0.5, October 24, 2017
  • Release! Version 0.4, May 22, 2015
  • VisPy tutorial in the IPython Cookbook
  • Release! Version 0.3, August 29, 2014
  • EuroSciPy 2014: talk on Saturday August 30 and sprint on Sunday August 31, 2014
  • Article in Linux Magazine, French Edition, July 2014
  • GSoC 2014: two GSoC students are currently working on VisPy under the PSF umbrella
  • Release! Version 0.2.1, November 4, 2013
  • Presentation at BI forum, Budapest, 6 November 2013
  • Presentation at Euroscipy, Belgium, August 2013
  • EuroSciPy Sprint, Belgium, August 2013
  • Release! Version 0.1.0, August 14, 2013

Using VisPy

VisPy is a young library under heavy development at this time. It targets two categories of users:

  1. Users knowing OpenGL, or willing to learn OpenGL, who want to create beautiful and fast interactive 2D/3D visualizations in Python as easily as possible.
  2. Scientists without any knowledge of OpenGL, who are seeking a high-level, high-performance plotting toolkit.

If you're in the first category, you can already start using VisPy. VisPy offers a Pythonic, NumPy-aware, user-friendly interface for OpenGL ES 2.0 called gloo. You can focus on writing your GLSL code instead of dealing with the complicated OpenGL API - VisPy takes care of that automatically for you.

If you're in the second category, we're starting to build experimental high-level plotting interfaces. Notably, VisPy now ships a very basic and experimental OpenGL backend for matplotlib.

Installation

Please follow the detailed installation instructions on the VisPy website.

Structure of VisPy

Currently, the main subpackages are:

  • app: integrates an event system and offers a unified interface on top of many window backends (Qt4, wx, glfw, IPython notebook with/without WebGL, and others). Relatively stable API.
  • gloo: a Pythonic, object-oriented interface to OpenGL. Relatively stable API.
  • scene: this is the system underlying our upcoming high level visualization interfaces. Under heavy development and still experimental, it contains several modules.
    • Visuals are graphical abstractions representing 2D shapes, 3D meshes, text, etc.
    • Transforms implement 2D/3D transformations on both CPU and GPU.
    • Shaders implements a shader composition system for plumbing together snippets of GLSL code.
    • The scene graph tracks all objects within a transformation graph.
  • plot: high-level plotting interfaces.

The APIs of all public interfaces are subject to change in the future, although app and gloo are relatively stable at this point.
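To illustrate the kind of computation the transform system performs, here is a plain-NumPy sketch of chaining 2D affine transforms in homogeneous coordinates (a conceptual illustration only — this is not VisPy's transform API):

```python
import numpy as np

def translate2d(tx, ty):
    """3x3 homogeneous translation matrix."""
    m = np.eye(3)
    m[0, 2], m[1, 2] = tx, ty
    return m

def scale2d(sx, sy):
    """3x3 homogeneous scale matrix."""
    return np.diag([sx, sy, 1.0])

# Chain: scale by 2, then translate by (10, 5).
transform = translate2d(10, 5) @ scale2d(2, 2)

point = np.array([1.0, 1.0, 1.0])   # (x, y, 1) in homogeneous coordinates
x, y, _ = transform @ point
print(x, y)  # 12.0 7.0
```

In the scene graph, transforms like these are composed automatically along the path from each visual to the canvas, and equivalent code runs on the GPU.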

Genesis

VisPy began when four developers, each with his own visualization library, decided to team up: Luke Campagnola with PyQtGraph, Almar Klein with Visvis, Cyrille Rossant with Galry, and Nicolas Rougier with Glumpy.

VisPy now builds on the expertise of these four developers and the broader open-source community to create a high-performance OpenGL library.


External links

Issues
  • [WIP] Layout grids using Cassowary

    • [ ] Get baseline implementation working (just layout, no padding / margins)
    • [ ] Implement margins
    • [ ] Implement padding
    • [ ] document code
    • [ ] Regular test cases
    • [ ] Pathological extreme test cases
    opened by bollu 135
  • [WIP] IPython extension mechanism for Vispy

    closes #977

    So, I tried to take a stab at this, and here's my first attempt. I foresee this needing a lot of iteration to get right.

    _Things to decide:_

    • location of the ipython extension file
    • features of the extension
    • different magic operators that this could have

    _TODO:_

    • [ ] I'm not sure how to write test cases for this. I'm guessing IPython provides some way to do this, but I'll have to look into it
    • [ ] Documentation for this, so that it's discoverable
    • [ ] List the extension on IPython's extension wiki
    • [ ] Publish the extension (push it to PyPI with the tag Framework :: IPython)
    component: jupyter-widget-backend 
    opened by bollu 118
  • ENH: Text

    Currently working:

    1. Calculation (in shaders) and utilization of SDF.
    2. Linux text display and system font listing.
    3. OSX text display and system font listing.
    4. Win32 text display (auto-download of compiled 32- and 64-bit freetype libs) and system font listing.
    5. Automatic downloading of a couple nice open-source fonts.
    6. Make it a Visual.
    7. Very basic testing, we can improve once the visual-testing PR is merged.
    type: enhancement 
    opened by larsoner 112
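    For background on item 1 above: a signed distance field (SDF) stores, for each pixel, the distance to the nearest glyph boundary, with the sign flipped inside the shape. The PR computes this in shaders; the idea itself can be sketched with brute-force NumPy (illustration only, far too slow for real font rendering):

```python
import numpy as np

def brute_force_sdf(mask):
    """Signed distance to the nearest opposite-valued pixel.

    mask: 2D boolean array (True = inside the glyph).
    Negative inside, positive outside. O(n^2) per pixel --
    fine for a tiny demo, not for production text rendering.
    """
    h, w = mask.shape
    inside = np.argwhere(mask)
    outside = np.argwhere(~mask)
    sdf = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            # Distance to the nearest pixel of the *other* class.
            other = outside if mask[y, x] else inside
            d = np.sqrt(((other - np.array([y, x])) ** 2).sum(axis=1)).min()
            sdf[y, x] = -d if mask[y, x] else d
    return sdf

mask = np.zeros((5, 5), dtype=bool)
mask[1:4, 1:4] = True            # a 3x3 square "glyph"
sdf = brute_force_sdf(mask)
```

    Sampling such a field in a fragment shader lets text stay sharp at any scale, which is why SDFs are attractive for GPU text.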
  • Scenegraph overhaul

    This PR is a continuation of #895 (and supersedes it). I am keeping both PRs open for now so that the prior changes may be reviewed in isolation. Major tasks for this PR are:

    • [x] Remove multi-parenting in scenegraph
    • [x] Nodes have a single transform (and related simplifications)
    • [x] SceneCanvas.draw_visual should be simple and efficient
    • [x] Implement picking (finally)
    • [ ] ViewNode and a way to automatically create a view on a scenegraph subtree
    • [x] Other simplifications:
      • [x] Remove all ViewBox clip methods except fragment.
      • [x] Remove render/framebuffer/canvas scenegraph nodes (might restore these later if there turns out to be a need)

    Visuals to migrate:

    • [x] Line
    • [x] Mesh
    • [x] GridLines
    • [x] Markers
    • [x] Volume
    • [x] Image
    • [x] Text
    • [x] Axis
    • [x] Polygon
    • [x] Box
    • [x] Cube
    • [x] Ellipse
    • [x] Isocurve
    • [x] Isosurface
    • [x] Plane
    • [x] Rectangle
    • [x] RegularPolygon
    • [x] Spectrogram
    • [x] Tube
    • [x] XYZAxis
    • [x] Histogram
    • [x] LinePlot
    • [x] Isoline
    • [x] SurfacePlot
    • [x] ColorBar

    EDIT BY Eric89GXL:

    Related issues:

    • [x] Toggling a node toggles children #986
    • [x] Text visual changing viewport #981
    • [x] Type of Canvas size #976
    • [x] Centering of text visuals #975: working now?
    • [x] Scene benchmarks broken #965: need to fix?
    • [x] Depth drawing precision #953: need to add?
    • [x] Get SceneCanvas from node #927: exists?
    • [x] Plane and box PR #901: need to update visuals
    • [x] Original PR #895
    • [x] Reuse FBO in viewbox #791
    • [x] Picking example #693
    • [x] Axes, ticks, grid #677
    • [x] Grid plot with individual zooming #434: does it work?
    • [x] Picking #140
    • [ ] Line editing example PR #926: modify to use picking?
    • [ ] Camera translate speed #914: need to add?
    • [ ] Performance of the SceneGraph #628

    closes #986 closes #981 closes #976 closes #975 closes #965 closes #953 closes #927 closes #791 closes #693 closes #677 closes #434 closes #140

    opened by campagnola 103
  • Reorganize visuals layer

    Reorganization to make visuals a top-level package. Fixes #448. Only one example currently works; I'll deal with the rest after feedback on a couple of issues:

    1. Any complaints about the current structure?
    2. Poll: should transforms be top-level as well?
    3. There are many modules for users to import, so it would be nice to pick one place where most of the common names are imported by default. Currently there is a bit of this going on in scene.__init__, but it's a mess. I'd like to make many of the examples look like:
    import vispy.scene
    
    canvas = vispy.scene.SceneCanvas()
    line = vispy.scene.Line(...)
    line.transform = vispy.scene.AffineTransform()
    

    vispy.plot should do something similar, but with a more extensive set of imports aimed toward scientific vis.

    component: visuals 
    opened by campagnola 102
  • FIX: change kernel texture internalformat and interpolation

    to fix precision issues, fixes #1068

    This fixes problems with ImageVisual's spatial filtering based on 1D kernel lookups.

    opened by kmuehlbauer 100
  • WIP: Ripping gloo in two via GLIR

    This PR refactors gloo to break it in two pieces, one is the high level gloo interface, which generates GLIR commands. The other is the GLIR interpreter.

    In the process I will also fix some outstanding issues with gloo, as described in #464, like getting rid of local storage and allowing uniforms/attributes to be specified before the source code is set.

    Closes #464, #510, #450, #338, #407, #351

    Overview of changes that this PR makes

    All gloo objects:

    • No more activate, deactivate methods. No more handle and target properties. Only an id property.
    • Gloo objects are context aware. They are associated with the context that is current at the moment of instantiation. If no context is active, they are associated with a "pending context", which becomes the active context for the first app.Canvas that requests one. This means all our examples still work.

    Program:

    • The API of Program changes with respect to attributes and uniforms. There is no more active_uniforms, just a function to get info on all variables in the shading code.
    • There are no variable objects, variable.py is gone. These objects served mainly for deferred setting of values. The GLIR command queue takes over this function.
    • There are no more VertexShader and FragmentShader classes; shaders.py is gone. It turns out that shaders have no function other than existing temporarily to compile the code, so VisPy programs should now have more GPU memory available. Just call program.set_shaders() to set source code.
    • There are now also warnings when attributes/uniforms are set that are not active. Due to this I found some unnecessary lines of code in some examples.

    Buffer:

    • No more local storage. This means that setting discontinuous data does not work anymore. Much simpler code though.
    • As a consequence data cannot be set on views.
    • No more automatic conversion. Passing float64 is not allowed. Or should we convert in this case?

    Texture:

    • There is no more local copy.
    • Passing float64 is not allowed.
    • You can use set_size with completely different size and format.
    • You can now use set_data using any type and shape.
    • Texture does not support views anymore.
    • Texture with no data (only shape) can be initialized with Texture((100,100))

    Framebuffer

    • ColorBuffer, DepthBuffer and StencilBuffer are no more. Just use RenderBuffer. You do not have to specify the format, as the FrameBuffer will do this when attaching (if format is None).
    • Deactivating a FrameBuffer will make the previous framebuffer active. This is the job of the GLIR implementation, because in gloo you may not always have access to this information.

    Outstanding questions for GLIR spec

    • Should GLIR pass enums by string or int? There seem to be more people in favor of strings.

    Work to do after this

    • There is a call to glGetViewport in text.py. This needs to be removed; we need to get viewport from the event instead.
    • GPU objects must be deleted when Gloo objects get cleaned up. I won't be surprised if that will cause problems, so let's do this after Vispy with the current changes seems stable.
    • Uniform arrays #345
    component: gloo 
    opened by almarklein 98
  • Marker Visual

    Remark: the purpose of this pull request is to prepare a much more interesting pull request in which I implemented a simple abstract renderer for markers (which explains why I became obsessive about marker and edge sizes)

    • corrected the GLSL function for markers where some key points were defined relative to the fragment size instead of the marker size $v_size: the defects were visible in the demo/gloo/how_markers.py example on the clobber marker, which was kind of coming apart when becoming too small; they could also be seen as deformations of the bar or cross markers when displayed at very different sizes.
    • changed frag rendering so that the size of the marker is the guaranteed size in pixels of the face. The edge is now outside the face and, if too small, will be mixed with the face antialiasing, or not displayed if below 1/10th of a pixel wide
    • added marker function rescaling so that the edge has the same width for the same edge_width parameter whatever the marker (and as far as I checked, the edge is exactly edge_width pixels wide) and whatever the part of the marker (before, tailed_arrow had a wider edge on the tail than on the arrow)
    • added the very common markers triangle up, triangle down and star (common at least in matplotlib) and changed the total size of the fragment in vert and frag so that the edge is rendered correctly
    • added a RescalingRelativeEdgeMarkerVisual that rescales as the scene is rescaled (see the added example below) and for which the edge is expressed as a fraction of the marker size. This new marker visual can be used with all markers and rescales the whole marker by the minimum of the x and y rescaling.
    • added a RescalingXYRelativeEdgeMarkerVisual that is similar to the previous one but rescales independently in the x and y directions. It works only with specific markers, called markersxy, which are defined for arbitrary x and y sizes ($v_size becomes a vec2). Some of these markers keep a 1:1 aspect ratio (called square markers), using only the min or the max of the x and y sizes. The others use both sizes (hbar, vbar, cross, x, rect and ellipse). Note that the distinction between hbar and vbar does not make much sense given the two sizes, but it is kept for the moment. The ellipse marker code may seem complicated, but the key problem is to rescale the function correctly so that the edge width is regular around the perimeter of the ellipse. The correct rescaling is given by the gradient of the level set defining the marker (the marker function in the code), taken at the point on the ellipse whose normal goes through the point currently considered in the fragment shader. I could not find a way to keep this computation simple, so I used an approximation: the point on the ellipse is taken at constant hyperbola coordinate in the elliptic coordinate system (http://en.wikipedia.org/wiki/Elliptic_coordinate_system). It's much simpler to compute and the result looks nice (you can see some defects by rescaling the ellipse so that it is large and almost flat).
    type: enhancement component: visuals 
    opened by bdurin 92
  • WIP: Volume rendering & camera changes

    This PR implements volume rendering. It's basically a port of the technique that I use in Visvis.

    • New Volume visual with two rendering styles (iso and mip).
    • Volume example in examples/scene/volume.py Try it, and use the keys to switch cameras etc.
    • Refactored cameras a bit in general
    • Refactored TurnTableCamera: no more mode and distance. You can just change the fov while getting the same zoom level. Setting fov to 0 means orthographic. Zooming is done by moving camera backward.
    • New FlyCamera to fly around your data.

    This is about ready. There are more things to do, but that can be picked up in another PR. (I added todo items for this in volume.py)

    I wish to do more with the cameras, but we should probably discuss this first. Will make a new issue for this.

    todo:

    • [x] avoid warning when switching render styles
    • [x] cameras

    Closes #725

    type: enhancement component: visuals 
    opened by almarklein 86
  • OOGL/GL namespaces

    There are currently two low-level GL namespaces: gl and oogl. The first one contains raw OpenGL ES commands, the second one implements an object-oriented layer on top of OpenGL. Even though we definitely need both, I fear it might be confusing for new-comers that even the simplest examples need two different namespaces that do basically the same thing.

    I think it would be worth writing extremely simple functions in oogl that implement the most basic things, e.g. something like oogl.clear(r, g, b, a), maybe oogl.draw_elements(...), etc. At the very least, we could just write dumb wrappers like:

    def clear(*args):
        gl.glClearColor(*args)
        gl.glClear(gl.GL_COLOR_BUFFER_BIT | gl.GL_DEPTH_BUFFER_BIT)
    

    It would simplify the simplest examples, making them cleaner and easier to understand. I can already see beginners asking: "But what is the difference between vispy.gl and vispy.oogl? Why do I need them both just to display a black screen?" and so on.

    Another advantage is that it would make the code conventions more uniform, i.e. we use names_with_underscores in OOGL but camelCase in GL, which is not really nice and might confuse beginners.

    One goal could be the following: the simplest things could be done with either vispy.gl alone (raw OpenGL) or vispy.oogl alone (no raw GL).

    type: enhancement 
    opened by rossant 82
  • MNT: unpin mesalib

    There have been some issues with mesalib, which might be fixed by https://github.com/conda-forge/mesalib-feedstock/pull/40.

    Just having a look to see whether this also fixes the issues here.

    opened by kmuehlbauer 1
  • How to get TextVisual width

    Hello,

    is it possible to get the width of a TextVisual? For example, I have created a TextVisual (scene.visuals.Text) with the text "ABCD" and I want to know the width from letter A to letter D in pixels. Height is easy, it's just font_size, but getting the width seems more difficult (at least for me). The number of letters, font type or font size can be arbitrary.

    opened by juraj6 1
  • 3D interpolation filters for VolumeVisual

    I was wondering whether it was possible to implement new interpolation methods for the VolumeVisual, and I ended up on some of @rossant's code from 2015 in build_spatial_filters.py. It automatically generates the GLSL code for 2D spatial interpolation filters (used by ImageVisual, for example).

    I understand very little of it, but I assume some of it might be generalizable to 3D. Is it possible? If yes, how hard is it and where should I look for some background on it?

    The original code seems to have been written by @rossant, with some small changes by @kmuehlbauer.

    cc @almarklein, @campagnola, @djhoese

    opened by brisvag 3
  • [WIP] Remove border from central widget

    This makes the border_width of SceneCanvas.central_widget 0. By default a Widget's border width is 1, which probably makes sense for things like Grid and ViewBox, but maybe less sense for Widget.central_widget (though feel free to correct my understanding here!). The other problem is that central_widget is a property, so there's no way to pass through a border_width like in the constructor of Widget or in something like Widget.add_view.

    The issue fixed by this PR originally came up in #3357 on napari, though it was easy enough to work around by overriding SceneCanvas.central_widget in napari's VispyCanvas.

    I marked this as WIP mostly because I'm new to vispy and don't have much confidence in my changes yet.

    opened by andy-sweet 7
  • unable download the demo data

    I want to run the volume.py demo, but the demo data it needs can't be downloaded. How can I download the demo data? Thanks!

    opened by littlebird-0904 7
  • Is there way to visualize two volume at the same time? I mean in a composite way.

    Hello everyone, I started to use vispy recently; previously I used the VTK package. I have four volumes and I want to visualize all four on the same canvas within the same scene. I couldn't find an example for this case. I found one example here: https://github.com/vispy/vispy/blob/main/examples/scene/volume.py but it's not what I am looking for.

    So far I tried below code:

    from vispy import scene
    import SimpleITK as sitk
    
    directory1 = "/home/yuvi/Desktop/DATASETS_&_OUTPUTS/Local-Dataset/CLSM/16-bit_TO_8-bit/Channel_-sub-6232_20x_MsGcg_RbCol4_SMACy3_islet1-2/"
    directory2 = "/home/yuvi/Desktop/DATASETS_&_OUTPUTS/Local-Dataset/CLSM/LSM-DATA/output/Channel_-01-1"
    
    itk_image = sitk.ImageSeriesReader()
    dicom_names1 = itk_image.GetGDCMSeriesFileNames(directory1)
    itk_image.SetFileNames(dicom_names1)
    image1 = itk_image.Execute()
    i1 = sitk.GetArrayFromImage(image1)
    
    itk_image1 = sitk.ImageSeriesReader()
    dicom_names2 = itk_image1.GetGDCMSeriesFileNames(directory2)
    itk_image1.SetFileNames(dicom_names2)
    image2 = itk_image1.Execute()
    i2 = sitk.GetArrayFromImage(image2)
    
    # Prepare the canvas
    canvas = scene.SceneCanvas(keys='interactive', bgcolor='w')
    canvas.measure_fps()
    
    # Set up a viewbox to display the image with interactive pan/zoom
    view1 = canvas.central_widget.add_view()
    view2 = canvas.central_widget.add_view()
    
    
    vol1 = scene.visuals.Volume(i1, cmap="Greens", method='mip', raycasting_mode='volume',
                                parent=view1.scene)  # this can be used to visualize volume image
    vol2 = scene.visuals.Volume(i2, cmap="Blues", method='mip', raycasting_mode='volume',
                                parent=view2.scene) 
    
    fov = 60
    cam1 = scene.cameras.TurntableCamera(elevation=0, azimuth=0, parent=view1.scene, fov=fov)
    cam2 = scene.cameras.TurntableCamera(elevation=0, azimuth=0, parent=view2.scene, fov=fov)
    
    view1.camera = cam1  # Select turntable at first
    view2.camera = cam2 
    
    canvas.show()
    canvas.app.run()
    

    As you can see the output in the picture below (screenshot from 2021-11-09). Can you tell me how to add two views and visualize them in the same canvas? Is there something like add actor in VTK that helps to combine two views? And one more thing: when I try to rotate the output image through 360 degrees, as I can in napari or VTK, it gets stuck when I rotate vertically; the generated image doesn't flip through a full 360 degrees.

    Any help will be appreciated. Thank you!

    opened by Yuvi-416 17
  • [WIP] Ignore all flat triangles in triangulation

    This partially fixes #2247. By ignoring flat/zero-area triangles (i.e. where all three points are collinear), we avoid the assertion failure associated with overwriting the entry in _edges_lookup. But there is still an unexpected triangle and vertex in the output.

    I added a few simple and similar tests to capture what I expect from the output, and where the behavior breaks down. Before the change both test_triangulate_collinear_path and test_triangulate_collinear_path_with_repeat fail with the assertion failure. After the change, only test_triangulate_collinear_path_with_repeat fails, not due to the assertion, but because a triangle is found, which also contains a vertex that is not in the input - instead I would expect no triangles to be returned.

    I think this PR represents a small improvement from main for two reasons.

    1. The original assertion failure makes me think that the triangulation algorithm assumed that it would never see duplicate edges based on the way it was implemented (i.e. the state is not just exceptional, it should be impossible). This PR prevents one way of reaching that state.
    2. Before this change there was an implementation comment # ignore flat tris, which suggested that the _add_tri was trying to skip zero-area triangles. But the implementation only ignored triangles with at least 2 points that have duplicate coordinates. This PR adds a proper check for that condition.
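    The "proper check" described in point 2 — treating a triangle as flat exactly when its three points are collinear — amounts to testing a 2D cross product. A sketch of that check, independent of the triangulation code (function name `is_flat` is mine, not the PR's):

```python
import numpy as np

def is_flat(a, b, c, eps=0.0):
    """True if triangle (a, b, c) has (near-)zero area.

    Uses the 2D cross product of the edge vectors, which catches
    collinear points, not just coincident ones.
    """
    a, b, c = (np.asarray(p, dtype=float) for p in (a, b, c))
    cross = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    return abs(cross) <= eps

# Collinear points (as in the repeated/collinear test cases) are flat:
print(is_flat([1, 4], [3, 4], [4, 4]))   # True
# A genuine triangle is not:
print(is_flat([1, 2], [3, 4], [4, 4]))   # False
```

    A check based only on duplicate coordinates misses the first case above, which is exactly the gap this PR closes.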

    However, I have some reservations about this PR in its current state.

    • I'm not exactly sure what to expect from the output of triangulation. It looks like we always get some vertices that weren't in the input, but in most cases they are not actually used in any of the output triangles, so that seems fine (as reflected by the tests).
    • It doesn't fix all the issues described in #2247 (i.e. there is still an unexpected triangle and vertex).
    • It does a little refactoring/simplification in the implementation, which I think is correct, but would be good for someone to check my geometry!

    Note that some of the test cases represent degenerate cases, so I think an alternative fix here is to raise an exception with a suitable message and document that behavior. Personally, I would expect a triangulation algorithm to expect some degenerate cases and handle them fairly gracefully (perhaps with a warning), but this is my first time thinking about triangulation, so I'm very much a beginner here.

    Any thoughts/comments/ideas welcome!

    opened by andy-sweet 0
  • Unexpected triangulation results

    I came here via a napari issue, where we run into the same assertion failure (e.g. assert (c, b) not in self._edges_lookup) mentioned here and in #1948.

    @sofroniewn found a small collection of 2D points that seems to reproduce the issue. That example is extra-degenerate for a few reasons.

    • There are three collinear points.
    • There is a repeated point in the middle.
    • Traversing the points in the given order creates a polygon with no area.

    Here's some code that should cause the assertion:

    import numpy as np
    from vispy.geometry import PolygonData
    data = np.array([
            [4, 4],
            [3, 4],
            [1, 4],
            [4, 4],
            [1, 2],
    ])
    vertices, triangles = PolygonData(vertices=data).triangulate()
    

    And here's a diagram of those points (and their order) just to clarify:

    [Diagram: "Triangulate path with repeat" input points]

    The napari issue also describes a variant of the example where one of the collinear points is slightly perturbed to avoid the assertion failure, but the output includes some extra vertices and triangles that we don't expect.

    I pushed a branch up to my fork of vispy that includes some test cases that cover above and some simpler examples. I'm unsure if my test assertions are correct because I'm not sure if the triangulated vertices are allowed to include extras.

    That branch also includes a fix that prevents the assertion failure mentioned here (but not the unexpected output vertices) by ignoring "flat" triangles that are formed from 3 collinear points (i.e. a zero area triangles). There is some existing code that seemed to attempt that based on an implementation comment, but only captured a specific case where at least two points have equal coordinates. Let me know if you think it's worth turning that into a PR - I'd be happy to do that.
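    The "polygon with no area" property of the example above can be verified with the shoelace formula; a small pure-NumPy check on the five points from the snippet (helper name `signed_area` is mine):

```python
import numpy as np

def signed_area(points):
    """Signed polygon area via the shoelace formula."""
    p = np.asarray(points, dtype=float)
    x, y = p[:, 0], p[:, 1]
    return 0.5 * np.sum(x * np.roll(y, -1) - np.roll(x, -1) * y)

pts = [[4, 4], [3, 4], [1, 4], [4, 4], [1, 2]]
print(signed_area(pts))  # 0.0 -- the traversal encloses no area
```

    A triangulator could use such a pre-check to reject degenerate input early instead of hitting the internal assertion.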

    In general, the Python triangulation implementation seems to have a few quirks and maybe even some missing parts. I also couldn't easily find access to the paper mentioned in the docstring. Therefore, it might be worth adopting a dependency to handle triangulation rather than maintaining this code: shapely could be a good option, as previously mentioned, and @nclack (also on napari) mentioned considering earcut-like algorithms (I think there are a few Python implementations of those).

    Also, let me know if I should create a separate issue to track this!

    Originally posted by @andy-sweet in https://github.com/vispy/vispy/issues/1847#issuecomment-958181091

    opened by djhoese 2
  • Selecting points with TurntableCamera and Markers Visual

    Hello,

    I've been wrestling with this for a bit now, I am trying to build a widget that will allow me to select points in 3d using the Turntable camera and the Markers visual. I have an example linked below of me trying to accomplish this.

    3dview example

    My approach is the simplest one I think. I just try to convert all the points to screen coordinates, and then use the mouse event to filter the points to only the ones within the selection rectangle.

    One thing I found I had to do was scale the points-in-screen-coordinates to the camera distance, which I thought would have been included in the scene transform.

    Even with that though, the selection does not seem to be correct. It looks like the transform from world coordinates to screen coordinates is still incorrect. Is there something I am doing with the transform that is incorrect?
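    The approach described — project points to screen space, then keep those inside the selection rectangle — can be sketched generically with a hypothetical 4x4 model-view-projection matrix (this is not VisPy's transform API; in a real scene the matrix would come from the camera/scene transforms):

```python
import numpy as np

def world_to_screen(points, mvp, width, height):
    """Project Nx3 world points to pixel coordinates.

    mvp: a 4x4 model-view-projection matrix (hypothetical here).
    """
    pts = np.column_stack([points, np.ones(len(points))])  # homogeneous
    clip = pts @ mvp.T
    ndc = clip[:, :3] / clip[:, 3:4]        # perspective divide
    sx = (ndc[:, 0] + 1) * 0.5 * width      # NDC [-1, 1] -> pixels
    sy = (1 - ndc[:, 1]) * 0.5 * height     # y flipped: screen y grows down
    return np.column_stack([sx, sy])

def in_rect(screen_pts, x0, y0, x1, y1):
    """Boolean mask of points inside the selection rectangle."""
    x, y = screen_pts[:, 0], screen_pts[:, 1]
    return (x >= x0) & (x <= x1) & (y >= y0) & (y <= y1)

# Identity MVP on an 800x600 canvas: world coordinates equal NDC.
pts = np.array([[0.0, 0.0, 0.0], [0.9, 0.9, 0.0]])
scr = world_to_screen(pts, np.eye(4), 800, 600)
mask = in_rect(scr, 300, 200, 500, 400)   # rectangle around the center
```

    Two common pitfalls in this pipeline are forgetting the perspective divide by w and the y-axis flip between NDC and window coordinates, either of which would produce exactly the kind of offset selection described above.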

    Would appreciate another set of eyes on this. Thanks.

    opened by ericgyounkin 12
  • Cannot initiate on glfw backend

    Hi,

    I recently started using p5py with vispy, but have encountered a problem. When I try to initiate the vispy app, the following exception occurs.

        app.Canvas.__init__(
      File "C:\Users\freed\.virtualenvs\p5py-muTHg_J2\lib\site-packages\vispy\app\canvas.py", line 211, in __init__
        self.create_native()
      File "C:\Users\freed\.virtualenvs\p5py-muTHg_J2\lib\site-packages\vispy\app\canvas.py", line 228, in create_native
        self._app.backend_module.CanvasBackend(self, **self._backend_kwargs)
      File "C:\Users\freed\.virtualenvs\p5py-muTHg_J2\lib\site-packages\vispy\app\backends\_glfw.py", line 298, in __init__
        self._on_resize(self._id, size[0], size[1])
      File "C:\Users\freed\.virtualenvs\p5py-muTHg_J2\lib\site-packages\vispy\app\backends\_glfw.py", line 394, in _on_resize
        self._vispy_canvas.events.resize(
      File "C:\Users\freed\.virtualenvs\p5py-muTHg_J2\lib\site-packages\vispy\util\event.py", line 453, in __call__
        self._invoke_callback(cb, event)
      File "C:\Users\freed\.virtualenvs\p5py-muTHg_J2\lib\site-packages\vispy\util\event.py", line 471, in _invoke_callback
        _handle_exception(self.ignore_callback_errors,
      << caught exception here: >>
      File "C:\Users\freed\.virtualenvs\p5py-muTHg_J2\lib\site-packages\vispy\util\event.py", line 469, in _invoke_callback
        cb(event)
      File "C:\Users\freed\.virtualenvs\p5py-muTHg_J2\lib\site-packages\p5\sketch\base.py", line 157, in on_resize
        p5.renderer.reset_view()
      File "C:\Users\freed\.virtualenvs\p5py-muTHg_J2\lib\site-packages\p5\sketch\renderer2d.py", line 82, in reset_view
        self.texture_prog['modelview'] = self.modelview_matrix.T.flatten()
    TypeError: 'NoneType' object does not support item assignment
    ERROR: Invoking <bound method Sketch.on_resize of <Sketch (Glfw) at 0x1727d255a60>> for ResizeEvent
    

    I've tried on macOS Big Sur (Intel) and Windows 64-bit (AMD), but neither works. I also tried Python 3.8 and 3.9, and verified that the OpenGL version is > 2.1. sys_info reports the following.

    >>> pprint.pprint(vispy.sys_info())
    ('Platform: Windows-10-10.0.19042-SP0\n'
     'Python:   3.8.10 (tags/v3.8.10:3d8993a, May  3 2021, 11:48:03) [MSC v.1928 '
     '64 bit (AMD64)]\n'
     'NumPy:    1.21.2\n'
     'Backend:  Glfw\n'
     'pyqt4:    None\n'
     'pyqt5:    None\n'
     'pyqt6:    None\n'
     'pyside:   None\n'
     'pyside2:  None\n'
     'pyside6:  None\n'
     'pyglet:   None\n'
     'glfw:     glfw 2.3.0\n'
     'sdl2:     None\n'
     'wx:       None\n'
     'egl:      None\n'
     'osmesa:   None\n'
     'tkinter:  None\n'
     'jupyter_rfb: None\n'
     '_test:    None\n'
     '\n'
     "GL version:  '4.6.0 NVIDIA 471.96'\n"
     'MAX_TEXTURE_SIZE: 32768\n'
     "Extensions: 'GL_AMD_multi_draw_indirect GL_AMD_seamless_cubemap_per_texture "
     'GL_AMD_vertex_shader_viewport_index GL_AMD_vertex_shader_layer '
     'GL_ARB_arrays_of_arrays GL_ARB_base_instance GL_ARB_bindless_texture '
     'GL_ARB_blend_func_extended GL_ARB_buffer_storage GL_ARB_clear_buffer_object '
     'GL_ARB_clear_texture GL_ARB_clip_control GL_ARB_color_buffer_float '
     'GL_ARB_compatibility GL_ARB_compressed_texture_pixel_storage '
     'GL_ARB_conservative_depth GL_ARB_compute_shader '
     'GL_ARB_compute_variable_group_size GL_ARB_conditional_render_inverted '
     'GL_ARB_copy_buffer GL_ARB_copy_image GL_ARB_cull_distance '
     'GL_ARB_debug_output GL_ARB_depth_buffer_float GL_ARB_depth_clamp '
     'GL_ARB_depth_texture GL_ARB_derivative_control GL_ARB_direct_state_access '
     'GL_ARB_draw_buffers GL_ARB_draw_buffers_blend GL_ARB_draw_indirect '
     'GL_ARB_draw_elements_base_vertex GL_ARB_draw_instanced '
     'GL_ARB_enhanced_layouts GL_ARB_ES2_compatibility GL_ARB_ES3_compatibility '
     'GL_ARB_ES3_1_compatibility GL_ARB_ES3_2_compatibility '
     'GL_ARB_explicit_attrib_location GL_ARB_explicit_uniform_location '
     'GL_ARB_fragment_coord_conventions GL_ARB_fragment_layer_viewport '
     'GL_ARB_fragment_program GL_ARB_fragment_program_shadow '
     'GL_ARB_fragment_shader GL_ARB_fragment_shader_interlock '
     'GL_ARB_framebuffer_no_attachments GL_ARB_framebuffer_object '
     'GL_ARB_framebuffer_sRGB GL_ARB_geometry_shader4 GL_ARB_get_program_binary '
     'GL_ARB_get_texture_sub_image GL_ARB_gl_spirv GL_ARB_gpu_shader5 '
     'GL_ARB_gpu_shader_fp64 GL_ARB_gpu_shader_int64 GL_ARB_half_float_pixel '
     'GL_ARB_half_float_vertex GL_ARB_imaging GL_ARB_indirect_parameters '
     'GL_ARB_instanced_arrays GL_ARB_internalformat_query '
     'GL_ARB_internalformat_query2 GL_ARB_invalidate_subdata '
     'GL_ARB_map_buffer_alignment GL_ARB_map_buffer_range GL_ARB_multi_bind '
     'GL_ARB_multi_draw_indirect GL_ARB_multisample GL_ARB_multitexture '
     'GL_ARB_occlusion_query GL_ARB_occlusion_query2 '
     'GL_ARB_parallel_shader_compile GL_ARB_pipeline_statistics_query '
     'GL_ARB_pixel_buffer_object GL_ARB_point_parameters GL_ARB_point_sprite '
     'GL_ARB_polygon_offset_clamp GL_ARB_post_depth_coverage '
     'GL_ARB_program_interface_query GL_ARB_provoking_vertex '
     'GL_ARB_query_buffer_object GL_ARB_robust_buffer_access_behavior '
     'GL_ARB_robustness GL_ARB_sample_locations GL_ARB_sample_shading '
     'GL_ARB_sampler_objects GL_ARB_seamless_cube_map '
     'GL_ARB_seamless_cubemap_per_texture GL_ARB_separate_shader_objects '
     'GL_ARB_shader_atomic_counter_ops GL_ARB_shader_atomic_counters '
     'GL_ARB_shader_ballot GL_ARB_shader_bit_encoding GL_ARB_shader_clock '
     'GL_ARB_shader_draw_parameters GL_ARB_shader_group_vote '
     'GL_ARB_shader_image_load_store GL_ARB_shader_image_size '
     'GL_ARB_shader_objects GL_ARB_shader_precision '
     'GL_ARB_shader_storage_buffer_object GL_ARB_shader_subroutine '
     'GL_ARB_shader_texture_image_samples GL_ARB_shader_texture_lod '
     'GL_ARB_shading_language_100 GL_ARB_shader_viewport_layer_array '
     'GL_ARB_shading_language_420pack GL_ARB_shading_language_include '
     'GL_ARB_shading_language_packing GL_ARB_shadow GL_ARB_sparse_buffer '
     'GL_ARB_sparse_texture GL_ARB_sparse_texture2 GL_ARB_sparse_texture_clamp '
     'GL_ARB_spirv_extensions GL_ARB_stencil_texturing GL_ARB_sync '
     'GL_ARB_tessellation_shader GL_ARB_texture_barrier '
     'GL_ARB_texture_border_clamp GL_ARB_texture_buffer_object '
     'GL_ARB_texture_buffer_object_rgb32 GL_ARB_texture_buffer_range '
     'GL_ARB_texture_compression GL_ARB_texture_compression_bptc '
     'GL_ARB_texture_compression_rgtc GL_ARB_texture_cube_map '
     'GL_ARB_texture_cube_map_array GL_ARB_texture_env_add '
     'GL_ARB_texture_env_combine GL_ARB_texture_env_crossbar '
     'GL_ARB_texture_env_dot3 GL_ARB_texture_filter_anisotropic '
     'GL_ARB_texture_filter_minmax GL_ARB_texture_float GL_ARB_texture_gather '
     'GL_ARB_texture_mirror_clamp_to_edge GL_ARB_texture_mirrored_repeat '
     'GL_ARB_texture_multisample GL_ARB_texture_non_power_of_two '
     'GL_ARB_texture_query_levels GL_ARB_texture_query_lod '
     'GL_ARB_texture_rectangle GL_ARB_texture_rg GL_ARB_texture_rgb10_a2ui '
     'GL_ARB_texture_stencil8 GL_ARB_texture_storage '
     'GL_ARB_texture_storage_multisample GL_ARB_texture_swizzle '
     'GL_ARB_texture_view GL_ARB_timer_query GL_ARB_transform_feedback2 '
     'GL_ARB_transform_feedback3 GL_ARB_transform_feedback_instanced '
     'GL_ARB_transform_feedback_overflow_query GL_ARB_transpose_matrix '
     'GL_ARB_uniform_buffer_object GL_ARB_vertex_array_bgra '
     'GL_ARB_vertex_array_object GL_ARB_vertex_attrib_64bit '
     'GL_ARB_vertex_attrib_binding GL_ARB_vertex_buffer_object '
     'GL_ARB_vertex_program GL_ARB_vertex_shader '
     'GL_ARB_vertex_type_10f_11f_11f_rev GL_ARB_vertex_type_2_10_10_10_rev '
     'GL_ARB_viewport_array GL_ARB_window_pos GL_ATI_draw_buffers '
     'GL_ATI_texture_float GL_ATI_texture_mirror_once GL_S3_s3tc '
     'GL_EXT_texture_env_add GL_EXT_abgr GL_EXT_bgra GL_EXT_bindable_uniform '
     'GL_EXT_blend_color GL_EXT_blend_equation_separate GL_EXT_blend_func_separate '
     'GL_EXT_blend_minmax GL_EXT_blend_subtract GL_EXT_compiled_vertex_array '
     'GL_EXT_Cg_shader GL_EXT_depth_bounds_test GL_EXT_direct_state_access '
     'GL_EXT_draw_buffers2 GL_EXT_draw_instanced GL_EXT_draw_range_elements '
     'GL_EXT_fog_coord GL_EXT_framebuffer_blit GL_EXT_framebuffer_multisample '
     'GL_EXTX_framebuffer_mixed_formats GL_EXT_framebuffer_multisample_blit_scaled '
     'GL_EXT_framebuffer_object GL_EXT_framebuffer_sRGB GL_EXT_geometry_shader4 '
     'GL_EXT_gpu_program_parameters GL_EXT_gpu_shader4 GL_EXT_multi_draw_arrays '
     'GL_EXT_multiview_texture_multisample GL_EXT_multiview_timer_query '
     'GL_EXT_packed_depth_stencil GL_EXT_packed_float GL_EXT_packed_pixels '
     'GL_EXT_pixel_buffer_object GL_EXT_point_parameters '
     'GL_EXT_polygon_offset_clamp GL_EXT_post_depth_coverage '
     'GL_EXT_provoking_vertex GL_EXT_raster_multisample GL_EXT_rescale_normal '
     'GL_EXT_secondary_color GL_EXT_separate_shader_objects '
     'GL_EXT_separate_specular_color GL_EXT_shader_image_load_formatted '
     'GL_EXT_shader_image_load_store GL_EXT_shader_integer_mix GL_EXT_shadow_funcs '
     'GL_EXT_sparse_texture2 GL_EXT_stencil_two_side GL_EXT_stencil_wrap '
     'GL_EXT_texture3D GL_EXT_texture_array GL_EXT_texture_buffer_object '
     'GL_EXT_texture_compression_dxt1 GL_EXT_texture_compression_latc '
     'GL_EXT_texture_compression_rgtc GL_EXT_texture_compression_s3tc '
     'GL_EXT_texture_cube_map GL_EXT_texture_edge_clamp GL_EXT_texture_env_combine '
     'GL_EXT_texture_env_dot3 GL_EXT_texture_filter_anisotropic '
     'GL_EXT_texture_filter_minmax GL_EXT_texture_integer GL_EXT_texture_lod '
     'GL_EXT_texture_lod_bias GL_EXT_texture_mirror_clamp GL_EXT_texture_object '
     'GL_EXT_texture_shadow_lod GL_EXT_texture_shared_exponent GL_EXT_texture_sRGB '
     'GL_EXT_texture_sRGB_R8 GL_EXT_texture_sRGB_decode GL_EXT_texture_storage '
     'GL_EXT_texture_swizzle GL_EXT_timer_query GL_EXT_transform_feedback2 '
     'GL_EXT_vertex_array GL_EXT_vertex_array_bgra GL_EXT_vertex_attrib_64bit '
     'GL_EXT_window_rectangles GL_EXT_import_sync_object GL_IBM_rasterpos_clip '
     'GL_IBM_texture_mirrored_repeat GL_KHR_context_flush_control GL_KHR_debug '
     'GL_EXT_memory_object GL_EXT_memory_object_win32 GL_NV_memory_object_sparse '
     'GL_EXT_win32_keyed_mutex GL_KHR_parallel_shader_compile GL_KHR_no_error '
     'GL_KHR_robust_buffer_access_behavior GL_KHR_robustness GL_EXT_semaphore '
     'GL_EXT_semaphore_win32 GL_NV_timeline_semaphore GL_KHR_shader_subgroup '
     'GL_KTX_buffer_region GL_NV_alpha_to_coverage_dither_control '
     'GL_NV_bindless_multi_draw_indirect GL_NV_bindless_multi_draw_indirect_count '
     'GL_NV_bindless_texture GL_NV_blend_equation_advanced '
     'GL_NV_blend_equation_advanced_coherent '
     'GL_NVX_blend_equation_advanced_multi_draw_buffers GL_NV_blend_minmax_factor '
     'GL_NV_blend_square GL_NV_clip_space_w_scaling GL_NV_command_list '
     'GL_NV_compute_program5 GL_NV_compute_shader_derivatives '
     'GL_NV_conditional_render GL_NV_conservative_raster '
     'GL_NV_conservative_raster_dilate GL_NV_conservative_raster_pre_snap '
     'GL_NV_conservative_raster_pre_snap_triangles '
     'GL_NV_conservative_raster_underestimation GL_NV_copy_depth_to_color '
     'GL_NV_copy_image GL_NV_depth_buffer_float GL_NV_depth_clamp '
     'GL_NV_draw_texture GL_NV_draw_vulkan_image GL_NV_ES1_1_compatibility '
     'GL_NV_ES3_1_compatibility GL_NV_explicit_multisample GL_NV_feature_query '
     'GL_NV_fence GL_NV_fill_rectangle GL_NV_float_buffer GL_NV_fog_distance '
     'GL_NV_fragment_coverage_to_color GL_NV_fragment_program '
     'GL_NV_fragment_program_option GL_NV_fragment_program2 '
     'GL_NV_fragment_shader_barycentric GL_NV_fragment_shader_interlock '
     'GL_NV_framebuffer_mixed_samples GL_NV_framebuffer_multisample_coverage '
     'GL_NV_geometry_shader4 GL_NV_geometry_shader_passthrough GL_NV_gpu_program4 '
     'GL_NV_internalformat_sample_query GL_NV_gpu_program4_1 GL_NV_gpu_program5 '
     'GL_NV_gpu_program5_mem_extended GL_NV_gpu_program_fp64 GL_NV_gpu_shader5 '
     'GL_NV_half_float GL_NV_light_max_exponent GL_NV_memory_attachment '
     'GL_NV_mesh_shader GL_NV_multisample_coverage GL_NV_multisample_filter_hint '
     'GL_NV_occlusion_query GL_NV_packed_depth_stencil '
     'GL_NV_parameter_buffer_object GL_NV_parameter_buffer_object2 '
     'GL_NV_path_rendering GL_NV_path_rendering_shared_edge GL_NV_pixel_data_range '
     'GL_NV_point_sprite GL_NV_primitive_restart GL_NV_primitive_shading_rate '
     'GL_NV_query_resource GL_NV_query_resource_tag GL_NV_register_combiners '
     'GL_NV_register_combiners2 GL_NV_representative_fragment_test '
     'GL_NV_sample_locations GL_NV_sample_mask_override_coverage '
     'GL_NV_scissor_exclusive GL_NV_shader_atomic_counters '
     'GL_NV_shader_atomic_float GL_NV_shader_atomic_float64 '
     'GL_NV_shader_atomic_fp16_vector GL_NV_shader_atomic_int64 '
     'GL_NV_shader_buffer_load GL_NV_shader_storage_buffer_object '
     'GL_NV_shader_subgroup_partitioned GL_NV_shader_texture_footprint '
     'GL_NV_shading_rate_image GL_NV_stereo_view_rendering GL_NV_texgen_reflection '
     'GL_NV_texture_barrier GL_NV_texture_compression_vtc '
     'GL_NV_texture_dirty_tile_map GL_NV_texture_env_combine4 '
     'GL_NV_texture_multisample GL_NV_texture_rectangle '
     'GL_NV_texture_rectangle_compressed GL_NV_texture_shader '
     'GL_NV_texture_shader2 GL_NV_texture_shader3 GL_NV_transform_feedback '
     'GL_NV_transform_feedback2 GL_NV_uniform_buffer_unified_memory '
     'GL_NV_vertex_array_range GL_NV_vertex_array_range2 '
     'GL_NV_vertex_attrib_integer_64bit GL_NV_vertex_buffer_unified_memory '
     'GL_NV_vertex_program GL_NV_vertex_program1_1 GL_NV_vertex_program2 '
     'GL_NV_vertex_program2_option GL_NV_vertex_program3 GL_NV_viewport_array2 '
     'GL_NV_viewport_swizzle GL_NVX_conditional_render GL_NVX_linked_gpu_multicast '
     'GL_NV_gpu_multicast GL_NVX_gpu_multicast2 GL_NVX_progress_fence '
     'GL_NVX_gpu_memory_info GL_NVX_multigpu_info GL_NVX_nvenc_interop '
     'GL_NV_shader_thread_group GL_NV_shader_thread_shuffle '
     'GL_KHR_blend_equation_advanced GL_KHR_blend_equation_advanced_coherent '
     'GL_OVR_multiview GL_OVR_multiview2 GL_SGIS_generate_mipmap '
     'GL_SGIS_texture_lod GL_SGIX_depth_texture GL_SGIX_shadow GL_SUN_slice_accum '
     "GL_WIN_swap_hint WGL_EXT_swap_control '\n")
    

    Any suggestions for fixing this? Thanks!

    opened by Indosaram 3
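The traceback above shows the likely mechanism: the glfw backend fires a resize event during `Canvas.__init__`, before p5py has created its renderer, so the callback touches a `None` object. A minimal sketch of that failure mode and a defensive guard is below; the `Renderer` and `Sketch` names here are illustrative stand-ins, not p5py's actual API.

```python
# Sketch of the failure mode: a resize callback running before
# initialization has finished, and a guard that tolerates it.

class Renderer:
    def reset_view(self):
        pass  # would rebuild projection/modelview matrices here


class Sketch:
    def __init__(self):
        # Not yet created when the backend emits its first resize event.
        self.renderer = None

    def on_resize(self, event):
        # Guard: ignore events that arrive before initialization completes,
        # instead of raising TypeError on a None renderer.
        if self.renderer is None:
            return
        self.renderer.reset_view()


sketch = Sketch()
sketch.on_resize(event=None)   # early resize during init: safely ignored
sketch.renderer = Renderer()
sketch.on_resize(event=None)   # normal path once the renderer exists
```

Whether the guard belongs in p5py's `on_resize` handler or the backend should defer the first resize event is a design question for the maintainers; the sketch only illustrates why the exception appears exactly at canvas creation.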
Latest release: v0.9.4